FAQs On Informatica
Pavan, 1/5/2011

Informatica Questions & Answers

1) How can you recognise whether or not the newly added rows in the source get inserted into the target? Ans) 1. By checking the target success rows in the Workflow Monitor. 2. Through the SCD Type 2 flag/version columns.

2) What is the difference between Informatica 7.0 and 8.x? Ans)

Architecture: Informatica 7.0 has a client-server architecture, whereas 8.x has a service-oriented architecture; PowerCenter 8 is service-oriented for modularity, scalability and flexibility.

Services: The Repository Service and Integration Service (replacements for the Repository Server and Informatica Server) can run on different computers in a network (so-called nodes), even redundantly.

Management: Management is centralized, meaning services can be started and stopped on nodes through a central web interface.

Tools: Client tools access the repository via that centralized machine, and resources are distributed dynamically.

Portability: Running all services on one machine is still possible, of course.

Supported data: 8.x adds support for unstructured data, including spreadsheets, email, Microsoft Word files, presentations and PDF documents. It provides high availability and seamless failover, eliminating single points of failure.

Performance: Grid processing and pushdown optimization are not available in 7.0 but are available in 8.x. To bump up system performance, Informatica added "pushdown optimization", which moves data transformation processing to the native relational database engine whenever that is most appropriate.

Capabilities: With 7.0 migration is difficult, whereas with 8.x migration is possible and easy. Informatica also added more tightly integrated data profiling, cleansing and matching capabilities.

Web: Informatica added a new web-based administrative console.

Additional transformations: Ability to write a Custom transformation in C++ or Java; the midstream SQL transformation was added in 8.1.1 (it is not in 8.1).

Encryption/decryption: Not possible in 7.0, possible with 8.x.

User-defined functions: Added in 8.x.

Caching: In 7.0 the lookup cache size cannot be changed; in 8.x it can. 8.x also allows dynamic configuration of caches and partitioning.

Flat-file targets: The PowerCenter 8 release adds the "Append to Target File" feature.

3) Performance tuning in Informatica? Ans)
Network connections: the performance of the Informatica server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster, so network connections often affect session performance; minimize them.
Flat files: if your flat files are stored on a machine other than the Informatica server, move them to the machine the Informatica server runs on.
Relational sources and targets: minimize the connections to sources, targets and the Informatica server. Moving the target database onto the server machine may improve session performance.
Staging areas: if you use staging areas you force the Informatica server to perform multiple data passes; removing staging areas may improve session performance.
Distributing load: distributing the session load across multiple Informatica servers may improve session performance.
Data movement mode: running the Informatica server in ASCII data movement mode improves session performance, because ASCII mode stores a character value in one byte while Unicode mode takes two bytes per character.
SQL: if a session joins multiple source tables in one Source Qualifier, optimizing the query may improve performance. Single-table SELECT statements with an ORDER BY or GROUP BY clause may also benefit from optimization such as adding indexes.
Network packet size: configuring a larger network packet size allows more data to cross the network at one time (server manager > server configuration > database connections).
Constraints and indexes: if your target has key constraints and indexes, they slow the loading of data. Drop the constraints and indexes before you run the session and rebuild them after it completes.
Parallelism: running parallel sessions using concurrent batches reduces load time, so concurrent batches may also increase session performance. Partitioning the session improves performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
Transformations: if a session contains an Aggregator transformation, incremental aggregation can improve performance. Avoid transformation errors. If the session contains a Lookup transformation, enable the lookup cache. If it contains a Filter transformation, place the filter as close to the sources as possible, or use a filter condition in the Source Qualifier. Aggregator, Rank and Joiner transformations often decrease session performance because they must group data before processing it; use the sorted input (sorted ports) option to improve performance in this case.
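For the "drop constraints and indexes before the load" point, a minimal sketch of what could go into the session's pre- and post-SQL (the table and index names here are only placeholders, not from any real project):

-- hypothetical pre-session SQL: remove the index so the bulk insert is not slowed down
DROP INDEX idx_sales_cust_id;
-- hypothetical post-session SQL: rebuild the index once the load has completed
CREATE INDEX idx_sales_cust_id ON t_sales (cust_id);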

4) Differences between the Normalizer transformation and normalization.

Page 4: FAQs On Informatica final

35

Informatica Questions & Answers

Ans) Normalizer transformation: a transformation mainly used for COBOL sources; it changes rows into columns and columns into rows, and it can be used to generate multiple rows from a single input row.

Normalization: a database design technique used to remove redundancy and inconsistency from the data.

5) How do we do unit testing in Informatica? How do we load data in Informatica? Ans) Unit testing is of two types: 1. Quantitative testing 2. Qualitative testing. Steps: 1. First validate the mapping. 2. Create a session on the mapping and run the workflow. Once the session has succeeded, right-click the session and go to the statistics tab; there you can see how many source rows were applied, how many rows were loaded into the targets and how many rows were rejected. This is Quantitative testing. Once rows are loaded successfully we go for Qualitative testing: take the DATM (the document where all business rules are mapped to the corresponding source columns) and check whether the data is loaded into the target table according to the DATM. If any data is not loaded according to the DATM, go back to the code and rectify it. This is Qualitative testing, and this is what a developer does in unit testing.

6) How do you handle decimal places while importing a flat file into Informatica? Ans) While importing the flat file definition, just specify the scale for the numeric data type. In the mapping, the flat-file source supports only the number data type (no decimal or integer); in the Source Qualifier associated with that source the corresponding port will have the decimal data type (source number port -> SQ decimal port), so the decimals are taken care of. Alternatively, import the field as a string and then use an expression to convert it, so that truncation of decimal places in the source itself is avoided.
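For the string-to-decimal conversion mentioned above, a minimal expression sketch (the port names are only illustrative):

-- Expression transformation output port: convert a string amount to a decimal with scale 2
out_AMOUNT = TO_DECIMAL(in_AMOUNT_STR, 2)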

7) Difference between static and dynamic cache? Please explain with an example. Ans) Difference between static and dynamic cache:

Static cache: 1) Once the data is cached it does not change during the session (for example, an unconnected lookup can only use a static cache). 2) While using a static cache you can use all operators (=, <, >, ...) in the lookup condition. 3) It is a read-only cache. 4) When the lookup condition is true the lookup returns a value; when it is false it returns the default value in a connected lookup and NULL in an unconnected lookup. 5) A static (read-only) cache can be configured for any lookup source; by default the Integration Service creates a static cache, caches the lookup table and returns a value when the lookup condition is true.

Dynamic cache: 1) The cache is updated as rows pass through, so it reflects inserts and updates against the table (or source) it refers to (for example, a connected lookup against the target). 2) With a dynamic cache only the = operator can be used in the lookup condition. 3) It is a read-and-write cache. 4) When the lookup condition is false (the row is not found in the cache), the Integration Service inserts the row into the cache. 5) It is used to cache a target table or flat-file source and either insert new rows or update existing rows in the cache.

8) What is power center repository? Ans) Standalone repository. A repository that functions individually, unrelated and unconnected to other repositories. Global repository. (PowerCenter only.) The centralized repository in a domain, a group of connected repositories. Each domain can contain one global repository. The global repository can contain common objects to be shared throughout the domain through global shortcuts. Local repository. (PowerCenter only.) A repository within a domain that is not the global repository. Each local repository in the domain can connect to the global repository and use objects in its shared folders.

The PowerCenter repository is used to store Informatica's metadata: information such as mapping names, locations, target definitions, source definitions, transformations and flow is stored as metadata in the repository.

9) How does the Informatica server sort string values in a Rank transformation? Ans) The Informatica server can run in either Unicode or ASCII data movement mode. Unicode mode: the server sorts the data according to the sort order configured in the session properties. ASCII mode: the server sorts the data according to the binary order.

10) Is Sorter an active or a passive transformation? What happens if we uncheck the distinct option in the Sorter; is it then active or passive? Ans) Sorter is an active transformation. If the distinct option is not checked it behaves like a passive transformation, because it is the distinct option that eliminates duplicate records and therefore changes the number of rows.

11) What is the difference between stop and abort?

Ans) Stop: if you want to stop part of a batch you must stop the whole batch; if the batch is part of a nested batch, stop the outermost batch. Abort: you can issue the abort command; it is similar to the stop command except that it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the DTM process and terminates the session.

12) Explain the Informatica server architecture. Ans) The main components of the Informatica server architecture are: 1. Node 2. Integration Service 3. Repository Service.


13) How can you improve session performance in aggregator transformation? Ans) There are 3 ways to improve session performance for an aggregator transformation :-

A) Size the caches correctly: 1) size of the data cache = bytes required for the variable columns + bytes required for the output columns; 2) size of the index cache = size of the ports used in the group-by clause.

B) If you provide sorted data for group by ports aggregation will be faster, so for ports which are used in group by of an aggregator sort those ports in a sorter.

C) We can use incremental aggregation if we think that there will be no change in data which is already aggregated.

14) How can we use the pmcmd command in a workflow or to run a session? Ans) pmcmd startworkflow -f <folder name> <workflow name> (together with the connection/login options).
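A fuller form of the command, as it might look on a PowerCenter 8.x installation (the service, domain, user, folder and workflow names below are placeholders):

pmcmd startworkflow -sv INT_SVC_DEV -d Domain_Dev -u admin -p admin -f SALES_FOLDER wf_daily_load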

15) In an update strategy, which gives more performance as a target: a table or a flat file? Why? Ans) A flat file. Pros: loading, sorting and merging operations are faster because there is no index overhead and the data is in ASCII mode. Cons: there is no concept of updating existing records in a flat file, and because there are no indexes, lookups against it are slower.

16) What is the difference between the Filter and Lookup transformations? Ans) 1) Filter is an active transformation and Lookup is a passive transformation. 2) Filter is used to filter rows based on a condition, while Lookup is used to look up data in a flat file or in a relational table, view or synonym.

17) What are the output files that the Informatica server creates while a session is running? Ans) Informatica server log: the Informatica server (on UNIX) creates a log for all status and error messages (default name: pm.server.log). It also creates an error log for error messages. These files are created in the Informatica home directory.

Session log file: the Informatica server creates a session log file for each session. It writes information about the session into it, such as the initialization process, creation of SQL commands for the reader and writer threads, errors encountered and the load summary. The amount of detail in the session log depends on the tracing level that you set.

Session detail file: this file contains load statistics for each target in the mapping, such as the table name and the number of rows written or rejected. You can view this file by double-clicking the session in the Monitor window.

Performance detail file: this file contains session performance details which help you see where performance can be improved. To generate this file, select the performance detail option in the session property sheet.

Reject file: this file contains the rows of data that the writer does not write to the targets.

Control file: the Informatica server creates a control file and a target file when you run a session that uses an external loader. The control file contains information about the target flat file, such as the data format and loading instructions for the external loader.


Post-session email: post-session email allows you to automatically communicate information about a session run to designated recipients. You can create two different messages: one if the session completed successfully, the other if the session fails.

Indicator file: if you use a flat file as a target, you can configure the Informatica server to create an indicator file. For each target row, the indicator file contains a number indicating whether the row was marked for insert, update, delete or reject.

Output file: if the session writes to a target file, the Informatica server creates the target file based on the file properties entered in the session property sheet.

Cache files: when the Informatica server creates a memory cache it also creates cache files. It creates index and data cache files for the following transformations: Aggregator, Joiner, Rank and Lookup.

18) How many types of dimensions are available in Informatica? Ans) The types of dimensions available are: 1. Junk dimension 2. Degenerate dimension 3. Conformed dimension.

19) Define the Informatica repository. Ans) The Informatica repository is at the center of the Informatica suite. You create a set of metadata tables within the repository database that the Informatica applications and tools access. The Informatica client and server access the repository to save and retrieve metadata.

20) How do you configure mapping in informatica Ans) You should configure the mapping with the least number of transformations and expressions to do the most amount of work possible. You should minimize the amount of data moved by deleting unnecessary links between transformations. For transformations that use data cache (such as Aggregator, Joiner, Rank, and Lookup transformations), limit connected input/output or output ports. Limiting the number of connected input/output or output ports reduces the amount of data the transformations store in the data cache. You can also perform the following tasks to optimize the mapping: Configure single-pass reading. Optimize datatype conversions. Eliminate transformation errors. Optimize transformations. Optimize expressions.

21) How can you create or import a flat file definition into the Warehouse Designer? Ans) You cannot create or import a flat file definition into the Warehouse Designer directly. Instead you must analyze the file in the Source Analyzer, then drag it into the Warehouse Designer. When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target definition, not a file definition. If you want to load to a file, configure the session to write to a flat file; when the Informatica server runs the session, it creates and loads the flat file.

22) When we create the target as a flat file and the source as Oracle, how can I specify the first row as the column names in the flat file?


Ans) One option is to use a pre-session SQL statement, but this is a hard-coded method: if you change the column names or add extra columns to the flat file, you have to change the insert statement. You can also achieve this by changing the setting in the Informatica Repository Manager to display the column headings; the disadvantage is that it applies to all files generated by that server. Note also that when importing a flat file into the Target Designer, the flat file import wizard has an option 'Import field names from first line'; checking it makes the Integration Service treat the first row's values as column names.

23) Discuss the advantages and disadvantages of the star and snowflake schemas. Ans) In a star schema there is no relation between any two dimension tables, whereas in a snowflake schema the dimension tables can be related to each other. In a star schema the dimensions are de-normalized, so queries need fewer joins and query performance is better, at the cost of more table space and some redundancy. In a snowflake schema the dimensions are normalized, which removes redundancy and saves space, but queries need more joins, maintenance is more involved and query performance generally degrades.

24) Difference between Rank and Dense Rank? Ans) Rank: the same rank is assigned to equal totals/numbers, and the following rank skips positions, e.g. 1, 2, 2, 4, 5 (the two equal values share 2nd position and the next value takes 4th). Golf usually ranks this way. Dense Rank: the same rank is assigned to equal totals/numbers/names, but the next rank is simply the next serial number, e.g. 1, 2, 2, 3, 4.
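The same behaviour can be seen with the standard SQL analytic functions, e.g. (a sketch against a hypothetical SALES table):

SELECT emp_name,
       sales_amt,
       RANK()       OVER (ORDER BY sales_amt DESC) AS rnk,       -- 1, 2, 2, 4, 5
       DENSE_RANK() OVER (ORDER BY sales_amt DESC) AS dense_rnk  -- 1, 2, 2, 3, 4
FROM   sales;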

25) Can anyone explain error handling in Informatica with examples, so that it is easy to explain in an interview? Ans) Go to the session log file; there we will find information about the session initiation process, the errors encountered and the load summary. By looking at the errors encountered during the session run, we can resolve them.

There is also the bad file, which generally has the format *.bad and contains the records rejected by the Informatica server. It carries two kinds of indicators: one for the row and one for each column. The row indicator signifies what operation was going to take place (insertion, deletion, update, etc.), while the column indicators contain information on why the column was rejected (such as violation of a not-null constraint, value error, overflow, etc.). If one rectifies the errors in the data present in the bad file and then reloads it into the target, the table will contain only valid data.

26) What is the difference between connected and unconnected stored procedures.


Ans) Unconnected:The unconnected Stored Procedure transformation is not connected directly to the flow of the mapping. It either runs before or after the session, or is called by an expression in another transformation in the mapping.

connected:The flow of data through a mapping in connected mode also passes through the Stored Procedure transformation. All data entering the transformation through the input ports affects the stored procedure. You should use a connected Stored Procedure transformation when you need data from an input port sent as an input parameter to the stored procedure, or the results of a stored procedure sent as an output parameter to another transformation.

An unconnected Stored Procedure transformation can be reused, since it can be called from expressions in several places, whereas a connected one is used at only one point in the data flow.
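As an illustration of the unconnected call, a minimal sketch of how such a transformation might be invoked from an Expression transformation (the procedure and port names are placeholders):

-- output port: call the unconnected Stored Procedure transformation; PROC_RESULT captures its return value
out_CREDIT_LIMIT = :SP.sp_get_credit_limit(in_CUST_ID, PROC_RESULT)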

27) Which tasks can be performed at port level (using one specific port)? Ans) An unconnected Lookup or an Expression transformation can be used to work on a single port of a row.

28) What are the main advantages and purpose of using the Normalizer transformation in Informatica? Ans) The Normalizer transformation is mainly used with COBOL sources, where most of the time the data is stored in de-normalized format. A Normalizer transformation can also be used to create multiple rows from a single row of data.

29) What is the difference between constraint-based load ordering and a target load plan?

Ans) Constraint-based load ordering example: Table 1 is the master and Table 2 the detail. If the data in Table 1 depends on the data in Table 2, then Table 2 should be loaded first; to control the load order of such tables we need conditional loading, which is exactly what constraint-based loading provides. In Informatica this feature is implemented by just one check box at the session level.

Constraint-based loading specifies the order in which data is loaded into the targets based on key constraints, whereas a target load plan defines the order in which data is extracted from the Source Qualifiers.

30) What is the difference between the IIF and DECODE functions?

Ans) You can use nested IIF statements to test multiple conditions. The following example tests for various conditions and returns 0 if sales is zero or negative:

IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES < 200, SALARY3, BONUS))), 0 )

You can use DECODE instead of IIF in many cases. DECODE may improve readability. The following shows how you can use DECODE instead of IIF :


DECODE( TRUE,
SALES > 0 and SALES < 50, SALARY1,
SALES > 49 AND SALES < 100, SALARY2,
SALES > 99 AND SALES < 200, SALARY3,
SALES > 199, BONUS)

The DECODE function can also be used in a SQL statement, whereas IIF cannot be used inside SQL.

31) How can you work with a remote database in Informatica? Did you work directly by using remote connections?

Ans) To work with a remote data source you need to connect to it with a remote connection. However, it is not preferable to work with that remote source directly; instead, bring the source onto the local machine where the Informatica server resides. If you work directly with the remote source, session performance decreases, because only a limited amount of data can be passed across the network in a given time.

32) How do you import an Oracle sequence into Informatica? Ans) Create a stored procedure and reference the sequence inside the procedure; finally, call the procedure from Informatica with the help of a Stored Procedure transformation.

33) Identifying bottlenecks in the various components of Informatica and resolving them. Ans) A simple way to find the bottleneck is to write to a flat-file target instead of the real target and see where the time goes: if the session is still slow, the bottleneck is in the source or the transformations rather than the target.

34) What is a parameter file? Ans) For UNIX shell users, enclose the parameter file name in single quotes: -paramfile '$PMRootDir/myfile.txt'. For Windows command prompt users, the parameter file name cannot have leading or trailing spaces; if the name includes spaces, enclose it in double quotes: -paramfile "$PMRootDir\my file.txt". Note: when you write a pmcmd command that includes a parameter file located on another machine, use the backslash (\) with the dollar sign ($); this ensures that the machine where the variable is defined expands the server variable. Example: pmcmd startworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg -paramfile '$PMRootDir/myfile.txt'

35) What is the difference between a summary filter and a detail filter? Ans) A summary filter is applied on a group of rows that share a common value, whereas a detail filter is applied to each and every record of the database.

36) What is the difference between normal load and bulk load? Ans) Normal load: writes information to the database log file, so if any recovery is needed the log can be used. When the source is a text file and you are loading data into a table, you should generally use normal load; otherwise the session may fail. Bulk load: does not write to the database log file, so if recovery is needed nothing can be done.


Comparatively, bulk load is considerably faster than normal load.

37) How will you create a header and footer in the target using Informatica? Ans) If the focus is on flat files, this can be set in the file properties while creating the mapping, or at the session level in the session properties.

38) What are the two types of processes that Informatica runs for a session? Ans) The Load Manager process: starts the session, creates the DTM process, and sends post-session email when the session completes. The DTM process: creates threads to initialize the session, read, write and transform data, and handle pre- and post-session operations.

39) What are the types of groups in the Router transformation? Ans) A Router transformation has input and output groups.

Input group: the Designer copies property information from the input ports of the input group to create a set of output ports for each output group.

Output groups: there are two types, user-defined groups and the default group. You cannot modify or delete output ports or their properties.

40) What are the real-time problems that generally come up while running mappings/transformations? Can anybody explain with examples? Ans) Here are a few real-time examples of problems while running Informatica mappings:

1) Informatica uses ODBC connections to connect to the databases. The database passwords (production) are changed periodically, and if the change is not made on the Informatica side as well, your mappings will fail with a database connectivity error. 2) If you are using an Update Strategy transformation in the mapping, in the session properties you have to select Treat Source Rows As: Data Driven; if you do not, the Informatica server ignores updates and only inserts rows. 3) If a mapping loads multiple target tables, you have to provide the Target Load Plan in the sequence in which you want them loaded. 4) "Snapshot too old" is a very common error when using Oracle tables; we get it when reading very large tables. Ideally, schedule these loads when the server is not very busy (when no other loads are running). 5) We might get poor performance when reading from large tables; all the source tables should be indexed and updated regularly.

41) What is the difference between a mapplet and a reusable transformation? Ans)

Mapplet: contains Input and Output transformations; designed in the Mapplet Designer; reusable; contains multiple transformations; we use it to reuse a group of transformations for a task.

Reusable transformation: no Input/Output transformations are needed; designed in the Transformation Developer (or promoted from the Mapping Designer); reusable; it is a single transformation; we create it to reuse a single transformation in the future.

42) How many types of facts are there and what are they? Ans) Factless facts: facts without any measures. Additive facts: fact data that can be added/aggregated. Non-additive facts: facts that cannot be added. Semi-additive facts: only some columns can be added. Periodic facts: store one row per transaction that happened over a period of time. Accumulating facts: store a row for the entire lifetime of an event.

43) What are the load types in Informatica, and what is a delta load? Ans) There are two types of load: i) normal load and ii) bulk load. Normal load: the Integration Service writes to the database log and then the data enters the target; loading performance is lower, but session recovery is possible and rollback/commit are available. Bulk load: the Integration Service bypasses the database log and loads directly into the target; performance increases, but session recovery does not occur and rollback/commit are not possible. In bulk loading we also need to consider: 1) not creating primary and foreign keys at the database level (keep them only in the target definition); 2) dropping indexes before loading into the target and recreating them after loading; 3) the disable/enable parallel mode option. A delta (incremental) load means loading only the records that are new or changed since the previous load, instead of reloading the entire source.

44) What are the session parameters? Ans) Session parameters are like mapping parameters: they represent values you might want to change between sessions, such as database connections or source files. The Workflow Manager also allows you to create user-defined session parameters, for example: database connections; source file name (use this parameter when you want to change the name or location of the session source file between runs); target file name (to change the name or location of the target file between runs); reject file name (to change the name or location of the reject file between runs). By using session parameters we can reuse a session as many times as we want; the main purpose of a session parameter is to represent the connection to a database system, so we can reuse the session against different databases. To use a session parameter: 1. Double-click the session, select the Mapping tab, select the target connection, and in the writer properties click the "Use Connection Variable" radio button and enter the connection variable, e.g. $DBConnection<Name>.


After that, create a parameter file with a .txt or .prm extension using the following syntax: a heading line of the form [FolderName.SessionName] followed by lines of the form $DBConnectionName=<database connection name>.
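A minimal sketch of such a parameter file (the folder, session, connection and parameter names below are placeholders):

[MyFolder.s_m_load_customers]
$DBConnection_Target=Oracle_DWH_Conn
$$LoadDate=2011-01-05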

45) What are the methods for creating reusable transformations? Ans) Two methods: 1. Design it in the Transformation Developer. 2. Promote a standard transformation from the Mapping Designer: after you add a transformation to a mapping, you can promote it to the status of a reusable transformation. Once you promote a standard transformation to reusable status, you can demote it to a standard transformation at any time. If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.

46) What do the Expression and Filter transformations do in the Informatica Slowly Growing Target wizard? Ans) The Filter transformation filters the rows that are not flagged and passes the flagged rows to the Update Strategy transformation.

The Expression (EXP) transformation is used to perform record-level operations and is passive: for example, op_col1 = ip_col1*10 + ip_col2 performs the same operation on the two input fields ip_col1 and ip_col2 for every record and passes the result through the output field op_col1. The Filter (FIL) transformation is used to filter records based on a condition (the way we write a condition in a WHERE clause, we can simply put the condition in the Filter): records not matching the condition are dropped (not rejected) from the mapping flow, and there is no way to capture dropped rows (unlike rows flagged for reject in an Update Strategy, which can end up in the reject file depending on the "forward rejected rows" option), so the Filter is an active transformation. (FIL = Filter transformation, EXP = Expression transformation, UPD = Update Strategy transformation.)

47) Where does Informatica store rejected data? How do we extract the rejected data? Ans) Rejected rows (for example, rows rejected due to a unique key constraint) are written by the session into $PMBadFileDir (the default relative path is <INFA_HOME>/PowerCenter/server/infa_shared/BadFiles), which is configured at the Integration Service level. Every target has a "Reject filename" property that gives the name of the file in which the rejected rows are stored.

48) How is the unconnected lookup used, i.e. from where does it take its input and where is the output linked? What condition is to be given? Ans) The unconnected lookup is used just like a function call. In an expression output/variable port, or any place where an expression is accepted (such as the condition in an Update Strategy), call the unconnected lookup as :LKP.lkp_abc(input_port), where lkp_abc is the name of the unconnected Lookup transformation. Pass the input value just as we pass parameters to functions, and it returns the output (the designated return port) after looking up the value against the lookup condition.
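For example, a minimal expression sketch of such a call (the lookup and port names are placeholders):

-- fetch the department name for a given id through an unconnected lookup
out_DEPT_NAME = :LKP.lkp_get_dept_name(in_DEPT_ID)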

49) What is the RANKINDEX in the Rank transformation? Ans) The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica server uses the Rank Index port to store the ranking position for each record in a group. For example, if you create a Rank transformation that ranks the top 5 salespersons for each quarter, the rank index numbers the salespeople from 1 to 5.


50) What is the difference between partitioning of relational targets and partitioning of file targets? Ans) Partitions can be used with both relational targets and flat files. Informatica supports the following partition types: 1. Database partitioning 2. Round-robin 3. Pass-through 4. Hash-key partitioning 5. Key-range partitioning. All of these are applicable to relational targets; for flat files, all of them except database partitioning are applicable. Informatica supports N-way partitioning: you just specify the name of the target file and create the partitions, and the rest is taken care of by the session.

51) Why did you use the Update Strategy in your application? Ans) The Update Strategy is used to drive the data to be inserted, updated or deleted depending on some condition. You can also do this at the session level, but there you cannot define any condition. For example, if you want to do both an update and an insert in one mapping, you create two flows and make one insert and the other update, depending on some condition. Refer to "Update Strategy" in the Transformation Guide for more information.

52) What is an IQD file? Ans) IQD stands for Impromptu Query Definition. This file is mainly used with the Cognos Impromptu tool: after creating an IMR (report), we save it as an IQD file, which is used while creating a cube in PowerPlay Transformer (with "Impromptu Query Definition" selected as the data source type).

53) What are the mappings that we use for slowly changing dimension tables?

Ans) We can use the following transformations in a mapping for a slowly changing dimension table: Expression, Lookup, Filter, Sequence Generator and Update Strategy.

54) How do I import VSAM files from source to target? Do I need a special plug-in? Ans) As far as I know, you use the PowerExchange tool to convert the VSAM file into Oracle tables, then do the mapping as usual to the target table.

55) What is meant by an aggregate fact table and where is it used? Ans) Basically, fact tables are of two kinds: aggregated fact tables and factless fact tables. An aggregated fact table has aggregated columns, e.g. Total_Sal, Dept_Sal, whereas a factless fact table has no aggregated (measure) columns and only holds foreign keys to the dimension tables.

56) What are Target Types on the Server? Ans) Target Types are File, Relational and ERP.

57) What are mapping parameters and variables, and in which situations can we use them? Ans) If we need to change certain attributes of a mapping every time the session runs, it would be very difficult to edit the mapping each time. So we use mapping parameters and variables and define the values in a parameter file; then we can simply edit the parameter file to change the attribute values. This makes the process simple.


Mapping parameter values remain constant; if we need to change a parameter value we must edit the parameter file. The value of a mapping variable, however, can be changed by using the variable functions. If we need to increment an attribute value by 1 after every session run we can use a mapping variable, whereas with a mapping parameter we would need to manually edit the value in the parameter file after every run.

58) How do you create a single Lookup transformation over multiple tables? Ans) Write an override SQL query in the lookup that joins the tables, and adjust the lookup ports to match the columns returned by the query.
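A minimal sketch of such a lookup SQL override (the table and column names are placeholders; the selected columns must line up with the lookup ports):

SELECT O.ORDER_ID   AS ORDER_ID,
       O.CUST_ID    AS CUST_ID,
       C.CUST_NAME  AS CUST_NAME
FROM   ORDERS O,
       CUSTOMERS C
WHERE  O.CUST_ID = C.CUST_ID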

59) What is meant by the direct and indirect loading options in sessions? Ans) When we use multiple source files, we create a file containing the names and directories of each source file we want the PowerCenter Server to use; this file is referred to as a file list.

When configuring the session properties, choose Indirect in the Source Filetype field, enter the name of the file list in the Source Filename field and enter the location of the file list in the Source File Directory field. When the session starts, the PowerCenter Server reads the file list, then locates and reads the first source file in the list; after it has read the first file, it locates and reads the next file in the list.
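A minimal sketch of such a file list (the paths and file names are placeholders):

/data/incoming/sales_region1.dat
/data/incoming/sales_region2.dat
/data/incoming/sales_region3.dat

With Source Filetype set to Indirect, the Source Filename points to this list file rather than to any one of the data files.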

60) What are the target options on the server? Ans) Target options for the File target type are FTP File, Loader and MQ; there are no target options for the ERP target type. Target options for Relational targets are Insert, Update (as Update), Update (as Insert), Update (else Insert), Delete, and Truncate Table.

61) What is the difference between a view and a materialized view? Ans) Materialized views are schema objects that can be used to summarize, precompute, replicate and distribute data, e.g. to construct a data warehouse.

A materialized view provides indirect access to table data by storing the results of a query in a separate schema object. Unlike an ordinary view, which does not take up any storage space or contain any data, a materialized view physically stores the result set of its query.

62) To achieve session partitioning, what are the necessary tasks you have to do? Ans) Configure the session to partition the source data, and install the Informatica server on a machine with multiple CPUs.

63) One day I load 10 rows into my target, and the next day I get 10 more rows to be added, of which 5 are updates of existing rows. How can I send them to the target, i.e. how can I both insert and update records? Ans) We can achieve this with SCD (slowly changing dimension) Type 1 logic:

1. Put a lookup on the target and check the primary key values: if the record is new, insert it; if the record has changed, update it. 2. For this you have to place an Update Strategy transformation in the mapping.
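A minimal sketch of the Update Strategy expression this approach could use (the lookup port name is a placeholder; DD_INSERT and DD_UPDATE are the standard row-flag constants):

-- flag the row for insert when the target lookup found nothing, otherwise for update
IIF( ISNULL(lkp_CUSTOMER_ID), DD_INSERT, DD_UPDATE )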

64) Can you generate reports in Informatica? Ans) Informatica is a tool for extracting, transforming and loading data rather than a reporting tool, but yes, the Metadata Reporter can be used to generate reports on the repository metadata.


65) Explain the use of the Update Strategy transformation. Ans) This is an important transformation; it is used to maintain history data, or just the most recent changes, in the target table. We can set or flag the records at two levels: 1) Within a session: when you configure the session, you can instruct the Informatica server to treat all records in the same way. 2) Within a mapping: we use the Update Strategy transformation to flag records for insert, update, delete or reject.

66) The Designer includes a "Find" search tool as part of the standard toolbar. What can it be used to find? Ans) It searches for two things:

1. Transformations 2. Ports within a transformation

67) If you have four lookup tables in the workflow, how do you troubleshoot to improve performance?

Ans) There are many ways to improve a mapping that has multiple lookups: 1) Create an index on the lookup table, if we have the permissions (staging area). 2) Split the mapping in two: (a) dedicate one mapping to inserts (source minus target, i.e. the new rows), so only new rows come into the mapping and the process is fast; (b) dedicate the second mapping to updates (source intersect target, i.e. rows that already exist). 3) Increase the cache size of the lookups.

68) How do we recover sessions in concurrent batches? Ans) If multiple sessions in a concurrent batch fail, you might want to truncate all targets and run the batch again. However, if one session in a concurrent batch fails and the rest of the sessions complete successfully, you can recover the failed session as a standalone session. To recover a session in a concurrent batch: 1. Copy the failed session using Operations > Copy Session. 2. Drag the copied session outside the batch so that it becomes a standalone session. 3. Follow the steps to recover a standalone session. 4. Delete the standalone copy.

69) Briefly explian the Versioning Concept in Power Center 7.1. Ans) When you create a version of a folder referenced by shortcuts, all shortcuts continue to reference their original object in the original version. They do not automatically update to the current folder version.

For example, if you have a shortcut to a source definition in the Marketing folder, version 1.0.0, then you create a new folder version, 1.5.0, the shortcut continues to point to the source definition in version 1.0.0.

Maintaining versions of shared folders can result in shortcuts pointing to different versions of the folder. Though shortcuts to different versions do not affect the server, they might prove more difficult to maintain. To avoid this, you can recreate shortcuts pointing to earlier versions, but this solution is not practical for much-used objects. Therefore, when possible, do not version folders referenced by shortcuts.

70) Why do we use Lookup transformations?


Ans) To get a related value (for example, get the employee name from the employee table based on the employee ID); to perform a calculation; and to update slowly changing dimension tables (we can use an unconnected Lookup transformation to determine whether a record already exists in the target or not).

71) What is Data Driven? Ans) The Informatica server follows the instructions coded into the Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject. If you do not choose the Data Driven option, the Informatica server ignores all Update Strategy transformations in the mapping. In other words, if Data Driven is selected in the session properties the server follows the instructions in the Update Strategy transformations; otherwise it follows the treatment specified in the session.

72) What is a batch, and what are the types of batches? Ans) A batch is a group of sessions. Different batches are different groups of sessions. There are two types of batches: 1. Concurrent 2. Sequential.

73) Can Informatica be used as a cleansing tool? If yes, give an example of transformations that can implement a data cleansing routine. Ans) Yes, we can use Informatica for cleansing data; sometimes we use dedicated stages for cleansing, and depending on performance we can simply use an Expression transformation to cleanse the data.

For example, if a field X has some rows with NULL values and it is assigned to a target field defined as NOT NULL, inside an Expression we can assign a space or some constant value to avoid session failure.

If the input data is in one format and the target needs another, we can change the format in an Expression.

We can also assign default values in the target to represent a complete set of data in the target.

74) Differences between connected and unconnected lookups? Ans) Connected lookup: 1) receives input values directly from the pipeline; 2) you can use a dynamic or static cache; 3) the cache includes all lookup columns used in the mapping; 4) supports user-defined default values.

Unconnected lookup: 1) receives input values from the result of a :LKP expression in another transformation; 2) you can use only a static cache; 3) the cache includes all lookup/output ports in the lookup condition and the lookup/return port; 4) does not support user-defined default values.

75) How do we read rejected (bad) data from the bad file and reload it to the target? Ans) Correct the rejected data and send it to the target relational tables using the reject loader utility. Identify the rejected data by examining the row indicator and column indicators in the bad file.


76) What are the test procedures used to check whether the data is loaded in the backend, the performance of the mapping, and the quality of the data loaded in Informatica? What are the common problems developers face during ETL development? Ans) If you want to know the performance of a mapping at the transformation level, select the "Collect performance data" option in the session properties. At run time you can see it in the performance tab of the Workflow Monitor, or you can get it from a file.

The PowerCenter Server names the file session_name.perf and stores it in the same directory as the session log. If there is no session-specific directory for the session log, the PowerCenter Server saves the file in the default log files directory.

The quality of the data loaded depends on the quality of the data in the source. If cleansing is required, you have to perform data cleansing operations in Informatica; the final data will then be clean.

77) What are the types of data that pass between the Informatica server and a stored procedure?

Ans) Three types of data: input/output parameters, return values and status codes.

78) What are the types of metadata stored in the repository? Ans) The repository stores the following types of metadata: database connections, global objects, mappings, mapplets, multidimensional metadata, reusable transformations, sessions and batches, shortcuts, source definitions, target definitions and transformations.

79) How do we move a mapping from one database (repository) to another? Ans) 1. Open the mapping you want to migrate, go to the File menu, select 'Export Objects' and give a name; an XML file will be generated. Connect to the repository where you want to migrate, then select File > 'Import Objects' and select the XML file.

2. Alternatively, connect to both repositories, go to the source folder, select the mapping name in the object navigator and select 'Copy' from the 'Edit' menu; then go to the target folder and select 'Paste' from the 'Edit' menu (be sure the target folder is open).

80) What is the target load order? Ans) The Integration Service reads sources in a target load order group concurrently, and it processes target load order groups sequentially.To specify the order in which the Integration Service sends data to targets, create one source qualifier for each target within a mapping. To set the target load order, you then determine in which order the Integration Service reads each source in the mapping.

To set the target load order:


1. Create a mapping that contains multiple target load order groups. 2. Click Mappings > Target Load Plan; the Target Load Plan dialog box lists all Source Qualifier transformations in the mapping and the targets that receive data from each source qualifier. 3. Select a source qualifier from the list. 4. Click the Up and Down buttons to move the source qualifier within the load order. 5. Repeat steps 3 and 4 for the other source qualifiers you want to reorder. 6. Click OK.

81) Can we eliminate duplicate rows by using the Filter and Router transformations? If so, explain in detail.

Ans) If the source is relational, you can use a SQL override (SELECT DISTINCT) for uniqueness; if the source is a flat file, you should use a Sorter transformation with the Distinct option or an Aggregator transformation.

82) What is a parameter file? Ans) A parameter file defines the values for the parameters and variables used in a session. It is a plain text file created with a text editor such as WordPad or Notepad. You can define the following values in a parameter file: mapping parameters, mapping variables and session parameters.

83) Can you use the mapping parameters or variables created in one mapping in another mapping? Ans) No.

84) How do you check the source for the latest records that are to be loaded into the target? I.e. I loaded some records yesterday and today the file has been populated with some more records; how do I find the records populated today? Ans) a) Create a lookup on the target table from the Source Qualifier, based on the primary key. b) Use an expression to evaluate the primary key returned from the target lookup (for a new source record, the lookup of the target's primary key port returns NULL); trap this with DECODE and proceed.
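A minimal sketch of the expression that traps the lookup result (the port names are placeholders):

-- 'NEW' when the target lookup returned no primary key, otherwise 'EXISTING'
v_ROW_TYPE = DECODE( TRUE, ISNULL(lkp_EMP_ID), 'NEW', 'EXISTING' )

This flag can then feed a Router/Filter or an Update Strategy downstream.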

85) What is the default join that the Source Qualifier provides? Ans) An inner equijoin, based on the primary key-foreign key relationship between the sources (without a proper condition the join would degenerate into a cross join).

86) Why did you use stored procedures in your ETL application? Ans) Usage of stored procedures has the following advantages: 1) check the status of the target database; 2) drop and recreate indexes; 3) determine whether enough space exists in the database; 4) perform a specialized calculation.

87) What is the difference between a stored procedure at the database level and the Stored Procedure transformation at the Informatica level? And why should we use the Stored Procedure transformation? Ans) In the database, we execute a stored procedure with a command such as EXECUTE <stored_procedure_name>. In Informatica, when we create a Stored Procedure transformation it contains a return port by default; this port is already bound to the stored procedure we selected while creating the transformation, so we just need to connect the input ports and connect the return/output port to the target.

Uses: 1) used to populate and maintain databases; 2) allow user-defined variables, conditional statements and other powerful programming features; 3) very useful because they are more flexible than plain SQL statements; 4) provide the error handling and logging necessary for critical tasks; 5) used for many other tasks.

88) What is the difference between the Stored Procedure transformation and the External Procedure transformation? Ans) In the case of a Stored Procedure transformation, the procedure is compiled and executed in a relational data source, and you need a database connection to import the stored procedure into your mapping. In an External Procedure transformation, the procedure or function is executed outside the data source, i.e. you need to build it as a DLL to access it in your mapping, and no database connection is needed.

89) What is the procedure to load the fact table? Give details. Ans) It depends on your business needs. Load the dimension tables first; for the fact table, you need a primary key, so use a Sequence Generator transformation to generate a unique key and pipe it to the target (fact) table together with the foreign keys from the source/dimension tables.

90) What is the status code? Ans) The status code provides error handling for the Informatica server during the session. The stored procedure issues a status code that notifies whether or not it completed successfully. This value cannot be seen by the user; it is only used by the Informatica server to determine whether to continue running the session or to stop it.

91) What are variable ports, and what are two situations where they can be used? Ans) We have mainly three kinds of ports: input, output and variable. An input port represents data flowing into the transformation; an output port is used when data is mapped to the next transformation; a variable port is used when intermediate or mathematical calculations are required. We can also use variable ports to store values from previous records, which is not otherwise possible in Informatica.
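A minimal sketch of the "remember the previous record" pattern in an Expression transformation (the port names are placeholders; variable ports are evaluated top to bottom and keep their values across rows):

v_IS_SAME_CUST (variable) = IIF( CUST_ID = v_PREV_CUST_ID, 1, 0 )  -- compares with the value kept from the previous row
v_PREV_CUST_ID (variable) = CUST_ID                                 -- then store the current row's value
o_IS_SAME_CUST (output)   = v_IS_SAME_CUST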

92) While importing a relational source definition from the database, what metadata of the source do you import? Ans) Source name, database location, column names, datatypes and key constraints.

93) What is a transaction? Ans) A transaction can be defined as a DML operation, meaning it can be an insertion, modification or deletion of data performed by users/analysts/applications. A transaction is a logical unit of work that comprises one or more SQL statements executed by a single user.


94) How can you access a remote source in your session? Ans) Relational source: to access a relational source situated in a remote place, you need to configure a database connection to the data source. File source: to access a remote source file, you must configure an FTP connection to the host machine before you create the session. Heterogeneous: when your mapping contains more than one source type, the server creates a heterogeneous session that displays source options for all types.

95) What are the basic requirements to join two sources in a Source Qualifier? Ans) 1) Both sources should be in the same database. 2) They should have at least one column in common with the same data type.

96) What is the difference between the Joiner transformation and the Source Qualifier transformation? Ans) A Joiner transformation can be used to join tables from heterogeneous (different) sources, but we still need a common key from both tables; if we join two tables without a common key we end up with a Cartesian join. A Source Qualifier can only join tables in the same database. In either case we need a common key to join two tables, no matter whether they are in the same database or in different databases.

97) Without using the Update Strategy and session options, how can we update our target table? Ans) In the target definition there is an option to write an update override query; there we can specify the UPDATE statement, and it will update the rows.
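A minimal sketch of such a target update override (the table and port names are placeholders; :TU. refers to the ports of the target definition):

UPDATE T_CUSTOMER
SET    CUST_NAME = :TU.CUST_NAME,
       CITY      = :TU.CITY
WHERE  CUST_ID   = :TU.CUST_ID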

98) What are the types of mapping in the Getting Started Wizard? Ans) Simple pass-through mapping: loads a static fact or dimension table by inserting all rows; use this mapping when you want to drop all existing data from the table before loading new data. Slowly growing target: loads a slowly growing fact or dimension table by inserting new rows; use this mapping to load new data when existing data does not require updates.

99) With mapping parameters and variables, the variable value is saved to the repository after the session completes, and the next run starts from the value following the saved one. For example, a run ended with the value 50 stored in the repository, but the next run should start with 70, not 51; how do we do this? Ans) After running the mapping, in the Workflow Manager right-click the session and go to the persistent values option; there you will find the last value stored in the repository for the mapping variable. Remove it and put in the desired value, then run the session.

100) What are the Joiner caches? Ans) The Joiner transformation builds index and data caches from the master rows. After building the caches, it reads records from the detail source and performs the joins.


101) What transformation can you use in place of a lookup? Ans) A lookup can serve in many situations, so the replacement depends on the scenario; for example, when the lookup is used purely to join to another table, a Joiner transformation or a join in the Source Qualifier's SQL override can be used instead. If you can be a bit more specific about the scenario, it is easier to answer.

102) How do you define the Informatica server? Ans) The Informatica server is the main server component in the Informatica product family. It is responsible for reading the data from the various source systems, transforming it according to the business rules and loading it into the target tables.

103) How can you complete unrecoverable sessions? Ans) Under certain circumstances, when a session does not complete, you need to truncate the target tables and run the session from the beginning. Run the session from the beginning when the Informatica server cannot run recovery or when running recovery might result in inconsistent data.

104) How do we look up data in multiple tables? Ans) If the two tables are relational, you can use the SQL lookup override option to join the two tables in the lookup properties. You cannot join a flat file and a relational table.

For example, the default lookup query is of the form "SELECT <lookup table columns> FROM <lookup table>"; you can continue this query by adding the column names of the second table with their qualifier and a WHERE clause. If you want to use your own ORDER BY, add it and then put "--" at the end so that the ORDER BY generated by default is commented out.

105) What is the default source option for the Update Strategy transformation? Ans) The default option for the Update Strategy transformation is DD_INSERT (or the numeric value '0'); at the session level the corresponding setting is Treat Source Rows As: Data Driven.

106) What is pushdown optimization in PowerCenter 8.x? Give an example. Ans) Use pushdown optimization to push transformation logic to the source or target database. The Integration Service analyzes the transformation logic, mapping, and session configuration to determine which transformation logic it can push to the database. At run time, the Integration Service executes any SQL statements generated against the source or target tables, and it processes any transformation logic that it cannot push to the database. Select one of the following values (a sketch of the generated SQL follows the list of options): - None. The Integration Service does not push any transformation logic to the database.

- To Source. The Integration Service pushes as much transformation logic as possible to the source database.

- To Target. The Integration Service pushes as much transformation logic as possible to the target database.

- Full. The Integration Service pushes as much transformation logic as possible to both the source database and target database.

- $$PushdownConfig. The $$PushdownConfig mapping parameter allows you to run the same session with different pushdown optimization configurations at different times. For more information about configuring the $$PushdownConfig mapping parameter and parameter file, see Using the $$PushdownConfig Mapping Parameter.
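As an illustration only (the table and column names below are hypothetical, not from the original answer), if a mapping applies a Filter and a simple Expression to an Oracle source and both can be pushed to the source, the Integration Service would generate a single SELECT roughly like this instead of processing the rows itself:

SELECT EMP_ID, UPPER(EMP_NAME) AS EMP_NAME, SAL * 1.10 AS REVISED_SAL
FROM EMP
WHERE DEPT_ID = 10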

107) In a scenario I have col1, col2, col3 with rows (1, x, y) and (2, a, b), and I want the output as col1, col2 with rows (1, x), (1, y), (2, a), (2, b). What is the procedure?

Ans) Use a Normalizer transformation: keep col1 as a single-occurrence port (occurs = 1) and set occurs = 2 for the repeating column, so that col2 and col3 feed the two occurrences; two output rows are generated per input row, and you connect the generated output ports to the target. An equivalent SQL alternative is sketched below.
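If the source is relational, the same column-to-row pivot could also be done in the Source Qualifier SQL override with a UNION ALL; a sketch only, with a hypothetical source table name SRC_TAB:

SELECT col1, col2 FROM SRC_TAB
UNION ALL
SELECT col1, col3 AS col2 FROM SRC_TAB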

108) If you had to split the source-level key into two separate tables, one as a surrogate key and the other as a primary key, and Informatica does not guarantee the order in which the keys are loaded into those tables, what are the different ways you could handle this situation? Ans) Enforce a foreign-key relationship between the two targets and rely on constraint-based loading (or load the parent table first and look up the generated key when loading the other table) so that the keys stay consistent.

109) Which transformations restrict the partitioning of sessions? Ans) Advanced External Procedure and External Procedure transformations (these contain a check box on the Properties tab to allow partitioning). Aggregator transformation: if you use sorted ports you cannot partition the associated source. Joiner transformation: you cannot partition the master source. Normalizer transformation. XML targets. PowerExchange sources and targets.

110) Can you explain one critical mapping? Also, from a performance point of view, which is better: a connected Lookup transformation or an unconnected one?

Ans) It depends on your data and the type of operation you are doing.

If you need to look up a value for all rows, or for most of the rows coming out of the source, then go for a connected lookup.

Otherwise, go for an unconnected lookup.

This applies especially in conditional cases like the following:

we have to get the value for the field 'customer' from the order table or from the customer_data table, based on this rule:

if customer_name is null then customer = customer_data.customer_id,

otherwise

customer = order.customer_name.

So in this case we would go for an unconnected lookup, calling it only when the condition is met.
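For reference, the same conditional pickup expressed in SQL terms would look roughly as follows; the column names come from the rule above, while the join column customer_id and the table name orders (standing in for the order table) are assumptions:

SELECT NVL(o.customer_name, c.customer_id) AS customer
FROM orders o
LEFT OUTER JOIN customer_data c ON o.customer_id = c.customer_id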


The types of dimensions are: 1. Slowly Changing Dimensions (SCD) 2. Rapidly Changing Dimensions 3. Junk Dimensions 4. Large Dimensions 5. Degenerate Dimensions 6. Conformed Dimensions.

111) What is a hash table in Informatica? Ans) In hash partitioning, the Informatica Server uses a hash function to group rows of data among partitions, based on a partition key. Use hash partitioning when you want the Informatica Server to distribute rows to the partitions by group; for example, when you need to sort items by item ID but you do not know how many items have a particular ID number.

112) In a Joiner transformation, you should specify the source with fewer rows as the master source. Why? Ans) The Joiner transformation compares each row of the master source against the detail source. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds up the join process. The Joiner also caches the master table's data, hence it is advised to define the table with the smaller number of rows as the master.

113) What is the difference between a cached and an uncached lookup? Can I run the mapping without starting the Informatica server? Ans) When you configure the Lookup transformation as a cached lookup, it stores all the lookup table data in the cache when the first input record enters the transformation; the lookup SELECT statement executes only once, and the values of each input record are compared with the values in the cache. In an uncached lookup, the SELECT statement executes for every input record entering the Lookup transformation, so it has to connect to the database each time a new record arrives. As for the second part: no, a mapping can only be executed by running a session, which requires the Informatica server to be up.

114) What are the tasks that the Load Manager process performs? Ans) Manages session and batch scheduling: when you start the Informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on that server. When you configure a session, the Load Manager maintains a list of sessions and session start times. When you start a session, the Load Manager fetches the session information from the repository to perform validations and verifications prior to starting the DTM process.

Locking and reading the session: when the Informatica server starts a session, the Load Manager locks the session in the repository. Locking prevents you from starting the same session again while it is running.

Reading the parameter file: if the session uses a parameter file, the Load Manager reads it and verifies that the session-level parameters are declared in the file. Verifying permissions and privileges: when the session starts, the Load Manager checks whether the user has the privileges to run the session.

Creating log files: the Load Manager creates the log file that contains the status of the session.

115) How can we join tables if they have no primary/foreign key relationship and no matching port to join on? Ans) Without a common column or a common data type we can still join two sources using dummy ports.

1. Add one dummy port to each of the two sources (for example, through an Expression transformation). 2. In the Expression transformation, assign the constant 1 to each dummy port. 3. Use a Joiner transformation to join the sources on the dummy ports (join condition dummy1 = dummy2). An equivalent SQL view of this join is sketched below.
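Joining on constant dummy ports is effectively a Cartesian (cross) join; in SQL terms the result is roughly the following (the table names are hypothetical):

SELECT a.*, b.*
FROM table_a a
CROSS JOIN table_b b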

116) In a sequential batch, how can we stop a single session? Ans) We can stop it using the pmcmd command, or in the monitor by right-clicking on that particular session and selecting Stop. This stops the current session and the sessions that follow it.

117) How do you create the staging area in your database? Ans) A staging area in a DW is used as a temporary space to hold all the records from the source system, so it should be more or less an exact replica of the source systems, except for the load strategy, where we use truncate-and-reload options.

So create the staging tables using the same layout as your source tables, or use the Generate SQL option in the Warehouse Designer tab.

118) What logic will you implement to load the data into one fact table from 'n' number of dimension tables? Ans) Normally one of the following is used:

1) slowly changing dimensions

2) slowly growing dimensions

119) What are the basic requirements to join two sources in a Source Qualifier? Ans) Both tables should have a common field with the same data type. It is not necessary for them to have a primary/foreign key relationship, although if such a relationship exists it helps from a performance point of view. The two sources should be relational and homogeneous (in the same database).

120) What are various types of Aggregation? Ans) Various types of aggregation are SUM, AVG, COUNT, MAX, MIN, FIRST, LAST, MEDIAN, PERCENTILE, STDDEV, and VARIANCE.

121) If you want to create indexes after the load process, which transformation do you choose? Ans) This is usually not done at the mapping (transformation) level but at the session/workflow level. Create a Command task that executes a shell script (on Unix) or any other script containing the CREATE INDEX commands, and place it in the workflow after the session; alternatively, you can run the script as a post-session command.
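The post-session script itself simply runs the index DDL; for illustration only (the index, table, and column names are hypothetical):

CREATE INDEX idx_sales_cust ON sales_fact (cust_id);
CREATE INDEX idx_sales_date ON sales_fact (sale_date);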

122) How does the Informatica server increase session performance through partitioning the source? Ans) For relational sources, the Informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data over each connection; it reads the multiple partitions of a single source concurrently. Similarly, for loading, it creates multiple connections to the target and loads the partitions of data concurrently. For XML and file sources, the server reads multiple files concurrently; for loading, it creates a separate file for each partition of a source file, and you can choose to merge the targets.

123) How can you improve the performance of Aggregate transformation?


Ans) We can improve Aggregator performance in the following ways:

1. Send sorted input. 2. Increase the aggregator cache sizes, i.e. the index cache and data cache. 3. Pass only the ports you actually need through the transformation, i.e. reduce the number of input and output ports.

Use a Sorter transformation to provide the sorted input (with the Sorted Input property enabled in the Aggregator), and filter the records before they reach the Aggregator.

124) What are the unsupported repository objects for a mapplet? Ans) COBOL source definitions, Joiner transformations, Normalizer transformations, non-reusable Sequence Generator transformations, pre- or post-session stored procedures, target definitions, PowerMart 3.5-style LOOKUP functions, XML source definitions, and IBM MQ source definitions.

125) What are the types of lookup caches? Ans) 1) Static cache 2) Dynamic cache 3) Persistent cache 4) Reusable cache 5) Shared cache.

126) What r the tasks that source qualifier performs? Ans) Join data originating from the same source database. You can join two or more tables with primary-foreign key relationships by linking the sources to one Source Qualifier. Filter records when the Informatica Server reads source data. If you include a filter condition, the Informatica Server adds a WHERE clause to the default query. Specify an outer join rather than the default inner join. If you include a user-defined join, the Informatica Server replaces the join information specified by the metadata in the SQL query.

Specify sorted ports. If you specify a number for sorted ports, the Informatica Server adds an ORDER BY clause to the default SQL query. Select only distinct values from the source. If you choose Select Distinct, the Informatica Server adds a SELECT DISTINCT statement to the default SQL query. Create a custom query to issue a special SELECT statement for the Informatica Server to read source data. For example, you might use a custom query to perform aggregate calculations or execute a stored procedure.

127) If a session fails after loading 10,000 records into the target, how can you load from the 10,001st record when you run the session next time in Informatica 6.1? Ans) Running the session in recovery mode will work, but the target load type should be Normal; if it is Bulk, recovery will not work as expected.


128) Why are dimension tables denormalized in nature? Ans) Because in data warehousing historical data must be maintained. For example, for one employee we may need to keep both where he worked previously and where he works now; if the table enforced a primary key on the employee id, it would not allow multiple records with the same employee id. To maintain history we use surrogate keys (for example an Oracle sequence for the key column), so a dimension can hold several versions of the same business key. Dimensions are therefore denormalized: the "duplicate" entry is not an exact duplicate record but another record for the same employee number kept in the table.

129) What is polling? Ans) It displays the updated information about the session in the monitor window. The monitor window displays the status of each session when you poll the informatica server.

130) In which conditions can we not use a Joiner transformation (limitations of the Joiner transformation)?

Ans) Both pipelines begin with the same original data source. Both input pipelines originate from the same Source Qualifier transformation. Both input pipelines originate from the same Normalizer transformation. Both input pipelines originate from the same Joiner transformation. Either input pipeline contains an Update Strategy transformation. Either input pipeline contains a connected or unconnected Sequence Generator transformation.

131) What r the active and passive transforamtions? Ans) Transformations can be active or passive. An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition.

A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on data and passes all rows through the transformation.

132) What is a mapplet?

Ans) A mapplet is a set of transformations that you build in the Mapplet Designer and can reuse in multiple mappings.

A mapplet is a reusable object defined with business logic using a set of transformations; it is created using the Mapplet Designer tool.

133) What is a surrogate key? In which situation did you use it in your project? Explain with an example.

Ans) A surrogate key is a system-generated (artificial) key or sequence number, used as a substitute for the natural primary key. It is just a unique identifier or number for each row that can be used as the primary key of the table; the only requirement for a surrogate primary key is that it is unique for each row in the table. It is useful because the natural primary key (for example, Customer Number in the Customer table) can change, which makes updates more difficult. In my project, the primary reason for the surrogate keys was to record the changing context of the dimension attributes (particularly for SCDs); another reason for keeping them as integers is that integer joins are faster.

134) Partitioning and bitmap indexing: when do you use bitmap indexes, and how do they affect performance? Ans) A bitmap index is an indexing technique used to tune the performance of SQL queries. The default index type is B-tree, which suits high-cardinality (e.g. normalized) data. Use bitmap indexes for denormalized data or low-cardinality columns; the usual condition is that the number of distinct values should be less than about 4% of the total rows. If that condition holds, bitmap indexes will improve query performance for such tables.
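For illustration (hypothetical table and column names), a bitmap index on a low-cardinality column such as region would be created in Oracle like this:

CREATE BITMAP INDEX idx_cust_region ON customer_dim (region);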

135) What is the difference between a dimension table and a fact table, and what are the different types of each? Ans) A fact table contains measurable data; it has fewer columns and many rows, and it carries a primary key plus the foreign keys to the dimensions. The different types of fact tables are additive, non-additive, and semi-additive. A dimension table contains textual descriptions of the data; it has many columns and fewer rows, and it also has a primary key. In short, both contain primary keys; fact tables hold the measures (fewer columns, more rows) while dimension tables hold descriptive attributes that are not measurable.

136) What are cost-based and rule-based approaches, and what is the difference? Ans) Cost-based and rule-based approaches are optimization techniques used by databases when a SQL query has to be optimized. Oracle basically provides two optimizers (strictly three, but only these two are commonly used because the third has disadvantages). Whenever you process a SQL query in Oracle, the Oracle engine reads the query and decides the best possible way to execute it, following one of these techniques. 1. Cost-Based Optimizer (CBO): if a SQL query can be executed in two different ways (say path 1 and path 2), the CBO calculates the cost of each path, analyses which path is cheaper to execute, and executes that path, thereby optimizing the query execution. 2. Rule-Based Optimizer (RBO): this follows a fixed set of rules for executing a query; depending on which rules apply, the optimizer chooses the execution plan.

Use: if the table you are querying has already been analyzed, Oracle will use the CBO. If the table has not been analyzed, Oracle follows the RBO, and for the first access of an unanalyzed table Oracle will do a full table scan.
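For the CBO to be used, the table statistics have to be gathered first; a minimal Oracle example with hypothetical schema and table names (SQL*Plus syntax for the last line):

ANALYZE TABLE sales_fact COMPUTE STATISTICS;
-- or, the generally preferred approach:
EXEC DBMS_STATS.GATHER_TABLE_STATS('DW_SCHEMA', 'SALES_FACT');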

137) What will happen if you use an Update Strategy transformation and your session is configured for "Insert"? What types of external loaders are available with Informatica? If you have a rank index for the top 10 but you pass only 5 records, what will be the output of such a Rank transformation?

Ans) 1) If you are using an Update Strategy in any of your mappings, then in the session properties you have to set Treat Source Rows As to Data Driven; if you select Insert, Update, or Delete instead, the server will not consider the Update Strategy transformation when performing the database operations. Alternatively, you can use the session-level options instead of an Update Strategy in the mapping: select Update in Treat Source Rows As together with the Update Else Insert option; this does the same job as the Update Strategy, but be sure the target table has a primary key. 2) For Oracle: SQL*Loader; for Teradata: TPump and MultiLoad. 3) If you pass only 5 rows to the Rank transformation, it will rank only those 5 records based on the rank port.

138) What is aggregate cache in aggregator transforamtion? Ans) When you run a workflow that uses an Aggregator transformation, the Informatica Server creates index and data caches in memory to process the transformation. If the Informatica Server requires more space, it stores overflow values in cache files.

139) Which transformation should we use to normalize the COBOL and relational sources? Ans) The Normalizer transformation normalizes records from COBOL and relational sources, allowing you to organize the data according to your own needs. A Normalizer transformation can appear anywhere in a data flow when you normalize a relational source. Use a Normalizer transformation instead of the Source Qualifier transformation when you normalize a COBOL source. When you drag a COBOL source into the Mapping Designer workspace, the Normalizer transformation automatically appears, creating input and output ports for every column in the source

140) What are measure objects? Ans) Aggregate calculations such as SUM, AVG, MAX, and MIN are the measure objects.

141) What is the DTM process? Ans) After the Load Manager performs the validations for the session, it creates the DTM (Data Transformation Manager) process. The DTM creates and manages the threads that carry out the session tasks: it creates the master thread, and the master thread creates and manages all the other threads. The DTM is the main background process in Informatica and runs after the Load Manager completes its work. In this process the Informatica server looks up the source and target connections in the repository and, if they are correct, fetches the data from the source and loads it into the target.

142) What are the options in the target session for the Update Strategy transformation? Ans) Insert, Delete, Update, Update as Update, Update as Insert, Update else Insert, and Truncate Table.

143) What are the Designer tools for creating transformations? Ans) Mapping Designer, Transformation Developer, and Mapplet Designer.

144) What is a code page used for? Ans) A code page is used to identify characters that might be in different languages. If you are importing Japanese data into a mapping, you must select the Japanese code page for the source data.

145) Can I start and stop a single session in a concurrent batch? Ans) Yes; right-click on the particular session and use the recovery option, or control it using Event Wait and Event Raise tasks.

146) What are the rank caches? Ans) During the session, the Informatica server compares an input row with rows in the data cache. If the input row out-ranks a stored row, the server replaces the stored row with the input row. The Informatica server stores group information in an index cache and row data in a data cache.

147) Why and where do we use a factless fact table? Ans) Factless fact tables are fact tables with no facts or measures (numerical data); they contain only the foreign keys of the corresponding dimensions. A factless fact table is used to track events by using the key values.

148) How can you delete duplicate rows without using a dynamic Lookup? Is there any other way, using a Lookup, to delete the duplicate rows? Ans) For example, you have a source table Emp_Name with two columns, Fname and Lname, that contains duplicate rows. In the mapping, create an Aggregator transformation. Edit the Aggregator, select the Ports tab, select Fname, tick the GroupBy check box, and untick the output (O) port; do the same for Lname. Then create two new output ports, untick their input (I) check boxes, and set the expression of the first new port to Fname and of the second to Lname. Finally, connect the Aggregator to the target table; only one row per Fname/Lname group is passed on.
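If the source is relational, the same de-duplication can alternatively be pushed into the Source Qualifier SQL override; a sketch using the table and columns from the example above:

SELECT DISTINCT Fname, Lname
FROM Emp_Name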

149) What are the different options used to configure sequential batches? Ans) Two options: run the session only if the previous session completes successfully, or always run the session.

150) How to Generate the Metadata Reports in Informatica? Ans) You can generate PowerCenter Metadata Reporter from a browser on any workstation, even a workstation that does not have PowerCenter tools installed.

151) How do we estimate the number of partitions that a mapping really requires? Is it dependent on the machine configuration?

Ans) It depends on the Informatica version being used; for example, Informatica 6 supports only 32 partitions whereas Informatica 7 supports 64 partitions. Within that limit, the useful number of partitions also depends on the machine configuration (CPUs, memory, and I/O).

152) How does the Informatica server sort string values in the Rank transformation? Ans) When the Informatica server runs in ASCII data movement mode, it sorts session data using a binary sort order. If you configure the session to use a binary sort order, the server calculates the binary value of each string and returns the specified number of rows with the highest binary values for the string.

153) How can you create or import a flat file definition into the Warehouse Designer? Ans) You can create a flat file definition in the Warehouse Designer: create a new target, select the type as flat file, save it, and then enter the various columns for the created target by editing its properties. Once the target is created and saved, you can use it from the Mapping Designer.

154) To provide support for mainframe source data, which files are used as source definitions? Ans) COBOL copybook files.

155) Can you copy a session to a different folder or repository? Ans) Yes, it is possible. To copy a session to another folder in the same repository, or to a different repository, use the Repository Manager (a client-side tool): simply drag the session to the target destination and it will be copied. In addition, you can copy the whole workflow from the Repository Manager; this automatically copies the mapping and the associated sources, targets, and session to the target folder.

156) How do you get two targets, T1 containing distinct values and T2 containing duplicate values, from one source S1? Ans) Use a Filter (driven by a duplicate-detection flag) to load the target that must contain no duplicates, and load the other target from the source. The requirement can be achieved using a Lookup transformation in dynamic cache mode: the dynamic lookup tells you whether a row has been seen before, so first occurrences can be routed to T1 and repeats to T2.
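Expressed in SQL terms (under one common interpretation of "duplicate values", and using S1 with a stand-in key column key_col), the two target loads correspond roughly to:

SELECT key_col FROM S1 GROUP BY key_col;                      -- T1: one row per distinct value
SELECT key_col FROM S1 GROUP BY key_col HAVING COUNT(*) > 1;  -- T2: values that occur more than once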

157) What is a worklet, what is it used for, and in which situations can we use it? Ans) A worklet is a set of workflow tasks (such as 1) Timer, 2) Decision, 3) Command, 4) Event Wait, 5) Event Raise, 6) Email, etc.) grouped as a reusable object; it is useful when the same group of tasks has to be reused in several workflows.

158) When using an Update Strategy transformation in a mapping, how can we know whether the insert, update, reject, or delete option was applied while the session was running? Ans) In the Designer, while creating the Update Strategy transformation, uncheck the "Forward Rejected Rows" option; any rejected rows will then automatically be written to the session log file. Updated or inserted rows can only be verified by checking the target file or table.

159) What are the different types of Type 2 dimension mappings? Ans) Type 2 Dimension/Version Data mapping: in this mapping an updated dimension row from the source is inserted into the target along with a new version number, and a newly added dimension row from the source is inserted into the target with a primary key.

Type 2 Dimension/Flag Current mapping: this mapping is also used for slowly changing dimensions; in addition it creates a flag value for changed or new dimension rows. The flag indicates whether the dimension row is new or newly updated: current rows are saved with the flag value 1, and updated (historical) rows are saved with the value 0.

Type 2 Dimension/Effective Date Range mapping: another flavour of the Type 2 mapping used for slowly changing dimensions. It also inserts both new and changed dimension rows into the target, and changes are tracked by an effective date range for each version of each dimension row.

160) Can you use the mapping parameters or variables created in one mapping inside a reusable transformation? Ans) Yes, because a reusable transformation is not contained within any particular mapplet or mapping.

161) What is the tracing level? Ans) It is the level of detail captured in the session log. The option appears on the Properties tab of transformations; by default it is Normal. The levels are Verbose Initialization, Verbose Data, Normal, and Terse.

162) What is meant by EDW? Ans) EDW stands for Enterprise Data Warehouse, a centralised data warehouse for the whole organization.

This is the Inmon approach, which relies on having a single centralised warehouse, whereas the Kimball approach says to have separate data marts for each vertical/department.

Advantages of having an EDW:

1. A global view of the data.

2. A single source of data for all users across the organization.

3. The ability to perform consistent analysis on a single data warehouse.

The drawbacks to overcome are the time it takes to develop an EDW and the management effort required to build a centralised database.

163) There are 1000 source tables containing the same data in different file formats; I want to load them into a single target table. How do I achieve this? Ans) First convert the different file formats into one format, then create a 1-to-1 mapping, run it, and check the output in Unix to verify whether the file has been posted.

164) Where is the cache stored in Informatica? Ans) The cache is stored in the Informatica server memory, and overflow data is stored on disk in cache files, which are automatically deleted after the successful completion of the session run. If you want to keep that data, you have to use a persistent cache.


165) Can you start a batch within a batch? Ans) You cannot. If you want to start a batch that resides inside another batch, create a new independent batch and copy the necessary sessions into it.

166) What command is used to run a batch? Ans) pmcmd is used to start a batch.

167) What are the unsupported repository objects for a mapplet? Ans) COBOL source definitions, Joiner transformations, Normalizer transformations, non-reusable Sequence Generator transformations, pre- or post-session stored procedures, target definitions, PowerMart 3.5-style LOOKUP functions, XML source definitions, and IBM MQ source definitions.

168) What are the types of metadata stored in the repository? Ans) Source definitions: definitions of database objects (tables, views, synonyms) or files that provide source data. Target definitions: definitions of database objects or files that contain the target data. Multi-dimensional metadata: target definitions that are configured as cubes and dimensions. Mappings: sets of source and target definitions along with the transformations containing the business logic you build into them; these are the instructions the Informatica Server uses to transform and move data. Reusable transformations: transformations that you can use in multiple mappings. Mapplets: sets of transformations that you can use in multiple mappings. Sessions and workflows: these store information about how and when the Informatica Server moves data; a workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data, and a session is a type of task within a workflow that corresponds to a single mapping.

169) How do we analyse the data at the database level? Ans) Data can be viewed using Informatica's Designer tool: you can preview the data of a source or target, although with some limitations. You can also use data profiling.

170) My source table has 1000 records; I want to load records 501 to 1000 into my target table. How can I do this?

Ans) You can override the SQL query in the Workflow Manager (session properties). For example, in Oracle:

select * from tab_name where rownum <= 1000
minus
select * from tab_name where rownum <= 500;

Alternatively, the query below fetches rows in a given range (use 501 and 1000 here):

select * from (select t.*, rownum r from test_table t) where r between 501 and 1000;

171) Can anyone explain real-time complex mappings or complex transformations in Informatica, especially in the sales domain? Ans) The most complex logic we use is denormalization. There is no Denormalizer transformation in Informatica, so we have to use an Aggregator followed by an Expression. Apart from this, most of the complexity sits in Expression transformations involving lots of nested IIFs and DECODE statements; the Union transformation and the Joiner are also commonly involved.

172) Could anyone please tell me the steps required for a Type 2 dimension/version data mapping, and how we can implement it? Ans) 1. Determine whether the incoming row is (1) a new record, (2) an updated record, or (3) a record that already exists in the table, using two lookup transformations, and split the mapping into three separate flows using a Router transformation. 2. For case (1), create a pipe that inserts the row into the table. 3. For case (2), create two pipes from the same source: one updating the old record and one inserting the new version.

173) If your workflow is running slowly in Informatica, where do you start troubleshooting and what steps do you follow? Ans) When the workflow is running slowly, you have to find the bottlenecks, checking in this order: target, source, mapping, session, system.

174) How do you identify bottlenecks in the various components of Informatica and resolve them? Ans) A simple way to find a bottleneck is to write the output to a flat file instead of the real target and compare the run times; that shows whether the bottleneck is in the target or further upstream.

175) Can we look up a table from a Source Qualifier transformation (like an unconnected lookup)? Ans) No, we cannot.

Here is why:

1) Unless you connect the output of the Source Qualifier to another transformation or to a target, the field will not even be included in the generated query.

2) The Source Qualifier has no variable ports or expression logic that could be used for a lookup call.

176) What are the tasks that the Load Manager process performs? Ans) Manages session and batch scheduling: when you start the Informatica server, the Load Manager launches and queries the repository for a list of sessions configured to run on that server. When you configure a session, the Load Manager maintains a list of sessions and session start times. When you start a session, the Load Manager fetches the session information from the repository to perform validations and verifications prior to starting the DTM process. Locking and reading the session: when the Informatica server starts a session, the Load Manager locks the session in the repository; locking prevents you from starting the same session again while it is running. Reading the parameter file: if the session uses a parameter file, the Load Manager reads it and verifies that the session-level parameters are declared in the file. Verifying permissions and privileges: when the session starts, the Load Manager checks whether the user has the privileges to run the session. Creating log files: the Load Manager creates the log file that contains the status of the session. The Load Manager also sends the failure mails in case of a failure in the execution of the subsequent DTM process.

177) How can you stop a batch? Ans) By using server manager or pmcmd.

178) What is the Metadata Reporter? Ans) It is a web-based application that enables you to run reports against repository metadata. With the Metadata Reporter, you can access information about your repository without needing knowledge of SQL, the transformation language, or the underlying tables in the repository.

179) Suppose a session is configured with a commit interval of 10,000 rows and the source has 50,000 rows. Explain the commit points for source-based commit and target-based commit (assume appropriate values wherever required). Ans) With source-based commit, the data is committed to the target based on the commit interval, so a commit is issued for every 10,000 source rows. With target-based commit, the data is committed based on the buffer size of the target, i.e. a commit is issued whenever the buffer fills; if we assume the buffer holds 6,000 rows, the data is committed every 6,000 rows.

180) What is the default source option for the Update Strategy transformation? Ans) Data Driven.

181) What is the difference between a summary filter and a detail filter? Ans) Summary filter: applied to groups of records that contain common values (i.e. after grouping). Detail filter: applied to each and every record in the database.

182) What are reusable transformations? Ans) Reusable transformations can be used in multiple mappings. When you need to incorporate such a transformation into a mapping, you add an instance of it to the mapping. Later, if you change the definition of the transformation, all instances of it inherit the changes. Since an instance of a reusable transformation is a pointer to that transformation, you change the transformation in the Transformation Developer and its instances automatically reflect the changes; this can save you a great deal of work. In short, a reusable transformation is a reusable metadata object defined with business logic using a single transformation.

183) What are the types of mapping wizards provided in Informatica? Ans) Getting Started Wizard: Simple Pass Through and Slowly Growing Target. Slowly Changing Dimensions Wizard: Type 1 (most recent values); Type 2 (full history, kept by version, flag, or effective date); Type 3 (current and one previous value).

184) After dragging the ports of three sources (SQL Server, Oracle, Informix) to a single Source Qualifier, can you map these three ports directly to the target? Ans) If you drag three heterogeneous sources and populate the target without any join, you end up with a Cartesian product; without a join, even homogeneous sources show the same problem. A Source Qualifier can only join tables from the same database, so for heterogeneous sources you have to add the joins separately (for example, with a Joiner transformation).

185) What is the difference between partitioning of relational targets and partitioning of file targets? Ans) If you partition a session with a relational target, the Informatica server creates multiple connections to the target database to write the target data concurrently. If you partition a session with a file target, the server creates one target file for each partition; you can configure session properties to merge these target files.

186) What is the aggregate cache in the Aggregator transformation? Ans) The Aggregator stores data in the aggregate cache until it completes the aggregate calculations. When you run a session that uses an Aggregator transformation, the Informatica server creates index and data caches in memory to process the transformation; if it requires more space, it stores overflow values in cache files.

187) What properties should be noted when we connect a flat file source definition to a relational database target definition? Ans) 1. Whether the file is fixed-width or delimited. 2. The size of the file: if it can be processed without performance issues, a normal load will work; if it is huge (GBs), N-way partitions can be specified at the source side and the target side. 3. The file reader, source file name, and related properties.

187) Why do we use the Stored Procedure transformation? Ans) A Stored Procedure transformation is an important tool for populating and maintaining databases. Database administrators create stored procedures to automate time-consuming tasks that are too complicated for standard SQL statements.

188) Which objects are required by the Debugger to create a valid debug session? Ans) Initially the session itself should be a valid session.

The source, target, lookups, and expressions should be available, and at least one breakpoint should be set for the Debugger to debug your session.

189) How do you decide whether to do aggregations at the database level or at the Informatica level? Ans) It depends on the requirement. If you have a powerful database, you can create an aggregation table or view at the database level; otherwise it is better to use Informatica. Informatica is a third-party tool, so it can take more time to process an aggregation compared with the database, but Informatica has an option called incremental aggregation, which updates the stored aggregate values with the new values instead of reprocessing the entire data set every time; this works as long as nobody deletes the cache files (if that happens, the whole aggregation has to be executed again in Informatica). Databases do not provide an incremental aggregation facility of this kind.

190) How is the Union transformation an active transformation? Ans) An active transformation can change the number of rows in the target: Source (100 rows) ---> Active transformation ---> Target (fewer or more than 100 rows). A passive transformation does not change the number of rows: Source (100 rows) ---> Passive transformation ---> Target (100 rows). The Union transformation acts like a UNION ALL in SQL, i.e. it includes duplicates while concatenating the inputs; because it merges rows from multiple input pipelines (so the row handling changes), it is classed as an active transformation.

191) How do you get the first 100 rows from a flat file into the target? Ans) 1. Use the test/preview download option if you only need it for testing. 2. Put a counter (Sequence Generator) in the mapping, then use a Filter: drag all ports from the Source Qualifier to the Filter, write the condition counter_column < 101, and connect the ports to the target.

192) What is meant by a complex mapping? Ans) A complex mapping is one that involves more logic and more business rules. In my bank project, for example, while building a data warehouse, many customers who had taken loans were relocated to other places, and it was difficult to maintain both the previous and the current addresses; for that I used SCD Type 2. This is a simple example of a complex mapping.

193) Can you start a session inside a batch individually? Ans) We can start only the required session in the case of a sequential batch; in the case of a concurrent batch we cannot do this.

194) Can we use an Aggregator (or another active transformation) after an Update Strategy transformation? Ans) You can use an Aggregator after an Update Strategy. The problem is that once you perform the update strategy, say you have flagged some rows for deletion, the Aggregator still processes all rows; if you are using the SUM function, for example, the rows flagged for delete will still affect the aggregated result.

195) Can you copy the batches? Ans) NO.

196) Explain the Informatica architecture in detail.

Ans) The Informatica server connects to the source and target data using native or ODBC drivers, and it connects to the repository for running sessions and retrieving metadata information: source ------> Informatica server ------> target.

Repository: the PowerCenter Server is a repository client application. It connects to the Repository Server and Repository Agent to retrieve workflow and mapping metadata from the repository database. When the PowerCenter Server requests a repository connection from the Repository Server, the Repository Server starts and manages the Repository Agent and then redirects the PowerCenter Server to connect directly to the Repository Agent.

197) What is the Load Manager? Ans) The Load Manager is the primary Informatica Server process. It performs the following tasks: manages session and batch scheduling; locks the session and reads session properties; reads the parameter file; expands the server and session variables and parameters; verifies permissions and privileges; validates source and target code pages; creates the session log file; and creates the Data Transformation Manager (DTM), which executes the session.

198) In which circumstances does the Informatica server create reject files? Ans) When it encounters DD_REJECT in an Update Strategy transformation, when a row violates a database constraint, or when a field in a row is truncated or overflows.

199) Describe the two levels at which the update strategy can be set. Ans) Within a session: when you configure a session, you can instruct the Informatica Server either to treat all records in the same way (for example, treat all records as inserts) or to use the instructions coded into the mapping to flag records for different database operations.

Within a mapping: you use the Update Strategy transformation to flag records for insert, delete, update, or reject.

200) Can you use the mapping parameters or variables created in one mapping in another mapping? Ans) No. If a value needs to be visible across mappings/sessions, you might want to use a workflow parameter/variable instead.

201) What is partitioning? Where can we use partitions, and what are the advantages? Is it necessary? Ans) Partitions are used to optimize session performance; they are not mandatory. You can select the partition type in the session properties. The types are pass-through (the default), key range, round-robin, and hash partitioning.

202) In real time, which is better, a star schema or a snowflake schema, and to which columns in the dimension table will the surrogate key be linked? Ans) In real time, mostly the star schema is implemented because it takes less time to query; there is a surrogate key in each and every dimension table in a star schema, and this surrogate key is referenced as a foreign key in the fact table.

203) What is the exact meaning of domain? Ans) In general usage, a domain gives complete information on a particular subject area, such as the sales domain or the telecom domain. In PowerCenter, the domain is the fundamental administrative unit: it supports the administration of the distributed services and is a collection of nodes and services that you can group in folders based on administration ownership.

204) How can you work with a remote database in Informatica? Did you work directly using remote connections? Ans) You can work with a remote database, but you have to configure the FTP connection details, the IP address, and the user authentication.

205) What are the Lookup transformation and the Update Strategy transformation? Explain each with an example. Ans) A Lookup transformation is used to look up data in a relational table, view, synonym, or flat file. The Informatica server queries the lookup table based on the lookup ports used in the transformation and compares the lookup port values with the lookup table column values according to the lookup condition. Using a lookup we can get a related value, perform a calculation, or update a slowly changing dimension. There are two types of lookups: connected and unconnected. An Update Strategy transformation is used to control how rows are flagged for insert, update, delete, or reject. At the session level, the flagging of rows can be set to Insert, Delete, Update, or Data Driven; for Update there are three options: Update as Update, Update as Insert, and Update else Insert.

206) What is a batch? Describe the types of batches. Ans) A grouping of sessions is known as a batch. Batches are of two types: sequential (runs sessions one after the other) and concurrent (runs sessions at the same time). If you have sessions with source-target dependencies, you have to go for a sequential batch to start the sessions one after another; if you have several independent sessions, you can use a concurrent batch, which runs all the sessions at the same time.

207) What is meant by lookup caches? Ans) The Informatica server builds a cache in memory when it processes the first row of data in a cached Lookup transformation. It allocates memory for the cache based on the amount you configure in the transformation or session properties. The server stores condition values in the index cache and output values in the data cache.

208) In how many ways can you update a relational source definition, and what are they? Ans) Two ways: 1. Edit the definition. 2. Reimport the definition.

209) What is the exact use of the 'Online' and 'Offline' server connect options in the Workflow Monitor? (The system hangs with the 'Online' option; Informatica is installed on a personal laptop.) Ans) When the repository is up and the PowerCenter server (pmserver) is also up, the Workflow Monitor always connects online. When pmserver is down but the repository is still up, you are prompted for an offline connection, with which you can only monitor the workflows.

210) Which is better between connected and unconnected Lookup transformations in Informatica (or any other ETL tool)? Ans) If the lookup source is well defined, you can use a connected lookup; if the source is not well defined, or comes from a different database, you can go for an unconnected lookup.

211) Why do you use repository connectivity? Ans) Each time you edit or schedule a session, the Informatica server communicates directly with the repository to check whether or not the session and the users are valid. All the metadata of sessions and mappings is stored in the repository.

212) What are sessions and batches? Ans) Session: a session is a set of instructions that tells the Informatica Server how and when to move data from sources to targets. After creating a session, we can use either the Server Manager or the command-line program pmcmd to start or stop it. Batches: a batch provides a way to group sessions for either serial or parallel execution by the Informatica Server. There are two types of batches: sequential (runs sessions one after the other) and concurrent (runs sessions at the same time).

213) Can you generate reports in Informatica? Ans) Informatica is an ETL tool, so you cannot build business reports with it, but you can generate metadata reports, which are not used for business analysis.

214) How do you recover a standalone session? Ans) A standalone session is a session that is not nested in a batch. If a standalone session fails, you can run recovery using a menu command or pmcmd; these options are not available for batched sessions. To recover a session using the menu: 1. In the Server Manager, highlight the session you want to recover. 2. Select Server Requests > Stop from the menu. 3. With the failed session highlighted, select Server Requests > Start Session in Recovery Mode from the menu. To recover a session using pmcmd: 1. From the command line, stop the session. 2. From the command line, start recovery.

215) How do you create a mapping using multiple Lookup transformations? Ans) Use an unconnected lookup if the same lookup is needed multiple times.

216) What is the procedure to write the query to list the highest salaries of three employees? Ans) In Oracle (using the emp table): select * from emp e where 3 > (select count(*) from emp where emp.sal > e.sal) order by sal desc; In SQL Server: select top 3 sal from emp order by sal desc; Another Oracle form: select * from (select ename, sal from emp order by sal desc) where rownum <= 3;

217) Is a fact table normalized or denormalized? Ans) A fact table is always a denormalized table. It consists of the primary keys of the dimension tables (held as foreign keys) together with the measures.

218) How do you unit-test a mapping in Informatica, are there other kinds of testing in Informatica, and how do we do them as ETL developers? How do the testing people test; are there any specific tools for testing? Ans) Informatica has no dedicated unit-testing feature, but there are two common methods to test a mapping: 1. Data sampling: set the data-sampling properties for the session in the Workflow Manager for a specified number of rows and test the mapping. 2. Use the Debugger and test the mapping against sample records.

219) Can batches be copied/stopped from the Server Manager? Ans) Yes, we can stop batches using the Server Manager or the pmcmd command.

220) If a session uses the bulk loading option, can I recover the session? Ans) If the session is configured to run in bulk mode, it does not write recovery information to the recovery tables, so bulk loading will not allow recovery to work as required.

221) Can we add different workflows into one batch and run them sequentially? If possible, how do we do that? Ans) To simulate a batch, we can create a Unix script and write pmcmd commands that run the different workflows one after the other, e.g. workflowrun.sh (example for Informatica 8):

pmcmd startworkflow -sv IS_INTEGRATION_SERVICE -d DOMAIN1 -u myuser -p mypass -f folder1 workflow1
pmcmd startworkflow -sv IS_INTEGRATION_SERVICE -d DOMAIN1 -u myuser -p mypass -f folder1 workflow2

Save this file and run it; this triggers workflow1 and then workflow2.

222) How do you retrieve the records from a rejected file? Explain with syntax or an example. Ans) During the execution of a workflow, all rejected rows are stored in bad files under the server installation directory (e.g. C:\Program Files\Informatica PowerCenter 7.1\Server). These bad files can be imported as flat file sources, and then, through a direct mapping, we can load the records in the desired format.

223) If a workflow has 5 sessions running sequentially and the 3rd session fails, how can we run again from the 3rd to the 5th session only? Ans) If multiple sessions in a concurrent batch fail, you might want to truncate all targets and run the batch again. However, if a session in a concurrent batch fails and the rest of the sessions complete successfully, you can recover the session as a standalone session. To recover a session in a concurrent batch: 1. Copy the failed session using Operations > Copy Session. 2. Drag the copied session outside the batch so it becomes a standalone session. 3. Follow the steps to recover a standalone session. 4. Delete the standalone copy. In this question, though, all the sessions are serial, so you can simply use "Start Workflow From Task" on the 3rd session; from there the workflow will continue to run the remaining tasks.

224) How can I do incremental aggregation in real time? Ans) For incremental aggregation we need to use an Aggregator plus a Lookup on the target plus an Expression to sum the count obtained from the new aggregation with the count from the lookup on the target. For a record already present in the aggregation table, the existing count is available through the lookup and the new count is available through the Aggregator; sum them up and update that record in the target.

225) If I make modifications to my table in the back end, do they reflect in the Informatica warehouse, Mapping Designer, or Source Analyzer? Ans) No. Informatica is not directly aware of back-end database changes; it displays only the information stored in its repository. If you want back-end changes to be reflected on the Informatica screens, you have to re-import the definitions from the back end through a valid connection and replace the existing definitions with the imported ones.

226) How does the Informatica server increase session performance through partitioning the source? Ans) For relational sources, the Informatica server creates multiple connections, one for each partition of a single source, and extracts a separate range of data over each connection; it reads the multiple partitions of a single source concurrently. Similarly, for loading, it creates multiple connections to the target and loads the partitions of data concurrently. For XML and file sources, the server reads multiple files concurrently; for loading, it creates a separate file for each partition of a source file, and you can choose to merge the targets.

227) How do you join two tables without using the Joiner transformation? Ans) It is possible to join two or more tables using the Source Qualifier, provided the tables have a relationship and are in the same database. When you drag and drop the tables you get a Source Qualifier for each table; delete all of them and add one common Source Qualifier for all the sources. Right-click the Source Qualifier, choose Edit, go to the Properties tab, and write your SQL in the SQL Query property. You can also do it at the session level: under Session > Mapping > Source there is an option called User Defined Join where you can write the join SQL.
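A minimal sketch of such a Source Qualifier override, assuming hypothetical EMP and DEPT tables related on DEPTNO:

SELECT e.empno, e.ename, d.dname
FROM emp e, dept d
WHERE e.deptno = d.deptno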

228) When the target is a flat file and the source is Oracle, how can I make the first row of the flat file contain the column names? Ans) One way is to use a pre-session command or SQL statement that writes the header row, but this is a hard-coded method: if you change the column names or add extra columns to the flat file, you will have to change that statement as well.

229) What happens if you try to create a shortcut to a non-shared folder? Ans) It only creates a copy of the object.

230) Explain recovering sessions. Ans) If you stop a session, or if an error causes a session to stop, refer to the session and error logs to determine the cause of failure, correct the errors, and then complete the session. The method you use to complete the session depends on the properties of the mapping, the session, and the Informatica Server configuration. Use one of the following methods: run the session again if the Informatica Server has not issued a commit; truncate the target tables and run the session again if the session is not recoverable; or consider performing recovery if the Informatica Server has issued at least one commit.

231) Can Informatica load heterogeneous targets from heterogeneous sources? Ans) Yes, it can. For example, flat file and relational sources can be joined in the mapping, and flat file and relational targets can be loaded from it.

232) While running multiple sessions in parallel that load data into the same table, the throughput of each session becomes very low and is almost the same for every session. How can we improve the throughput in such cases? Ans) This is largely handled by the database we use: while a load on the table is in progress, the table is locked, so the parallel sessions contend with each other. If we try to load the same table with different partitions on an Oracle 9i database, we can also run into ROWID errors, which can be resolved by applying a database patch.

233) What is data merging, data cleansing, sampling? Ans) Data merging: the process of combining data with similar structures into a single output. Data cleansing: the process of identifying and rectifying inconsistent and inaccurate data so that it becomes consistent and accurate. Data sampling: the process of taking a sample of the data by sending a subset of the data from source to target.

234) What is Code Page Compatibility? Ans) Compatibility between code pages is required for accurate data movement when the Informatica Server runs in Unicode data movement mode. If the code pages are identical, there is no data loss. One code page can be a subset or superset of another; for accurate data movement, the target code page must be a superset of the source code page. Superset: a code page is a superset of another code page when it contains all the characters encoded in the other code page plus additional characters not contained in the other code page. Subset: a code page is a subset of another code page when all of its characters are encoded in the other code page.

235) There are 3 departments in the dept table, one with 100 people, the 2nd with 5 and the 3rd with around 30. I want to display those deptno values where more than 10 people exist. Ans) If you want to do it through Informatica, fire the appropriate query in the SQL override of the Source Qualifier transformation and make a simple pass-through mapping, as in the sketch below. Otherwise, aggregate the employee count per deptno with an Aggregator transformation and then use a Filter or Router transformation with the condition that the count is greater than 10 (the condition should be on the employee count, not on the deptno value itself).
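A minimal sketch of the SQL override, assuming an emp table with a deptno column (illustrative names):

SELECT deptno
FROM emp
GROUP BY deptno
HAVING COUNT(*) > 10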

236) How to load data from PeopleSoft HRM to PeopleSoft ERM using Informatica? Ans) The following are necessary: 1. a PowerConnect license; 2. import the source and target from PeopleSoft using ODBC connections; 3. define a connection under the Application Connection Browser for the PeopleSoft source/target in the Workflow Manager, select the proper connection (PeopleSoft with Oracle, Sybase, DB2 or Informix) and execute it like a normal session.

237) Can somebody explain these points to me: 1) the differences between using native and ODBC server-side database connections; 2) why registering a server to the repository is necessary; 3) the rules associated with transferring and sharing objects between folders; 4) the rules associated with transferring and sharing objects between repositories. Ans) 1) A native connection is a driver provided by the same vendor for its own database; for example, Oracle Warehouse Builder has its own driver to connect to an Oracle database and does not go through ODBC, so the connection is faster and performance is better. ODBC is basically a third-party driver, like the Microsoft driver for Oracle, which any tool can use to connect to the database. 2) Registering a server to a repository is necessary because sessions use that server to run. If we have multiple servers registered, we can assign different servers to different sessions.

238) I have a requirement where the column names in a table (Table A) should appear as rows of a target table (Table B), i.e. converting columns to rows. Is it possible through Informatica? If so, how? Ans) Suppose the data in the tables is as follows.

Table A (key_1 char(3)), values:
1
2
3

Table B

(bkey_a char(3), bcode char(1)), values:
1 T
1 A
1 G
2 A
2 T
2 L
3 A

and the required output is:

1, T, A
2, A, T, L
3, A

then the SQL query in the Source Qualifier should be:

select key_1,
       max(decode(bcode, 'T', bcode, null)) t_code,
       max(decode(bcode, 'A', bcode, null)) a_code,
       max(decode(bcode, 'L', bcode, null)) l_code
from a, b
where a.key_1 = b.bkey_a
group by key_1

239) Explain about perform recovery? Ans) When the Informatica Server starts a recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. The Informatica Server then reads all sources again and starts processing from the next row ID. For example, if the Informatica Server committed 10,000 rows before the session failed, when you run recovery the Informatica Server bypasses the rows up to 10,000 and starts loading with row 10,001. By default, Perform Recovery is disabled in the Informatica Server setup. You must enable Recovery in the Informatica Server setup before you run a session so the Informatica Server can create and/or write entries in the OPB_SRVR_RECOVERY table.

240) Can anyone tell me how to run SCD Type 1? It creates two target instances in the mapping window but there is only one table in the Warehouse Designer (the target), so if we create a new table in the target it gives an error. Ans) Create the target with the name you gave in the wizard, and do not create the target again for the second instance; it is just a virtual copy of the same target. In the Warehouse Designer, create and execute the target definition once, then run the session containing the mapping. Define the source and target connections in the general properties of the session and set Treat Source Rows As to Data Driven.

241) I have a source flat file containing the value 1;2:3.4. Now I want it in my target table as four columns: 1, 2, 3 and 4. Can anyone explain the procedure to get output like that?
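Ans) One possible approach (a sketch): read the line as a single string port and split it in an Expression transformation with SUBSTR and INSTR, since the delimiters differ (';', ':' and '.'). Assuming the input port is named col1 (hypothetical), the four output ports could be:

COL1_OUT: SUBSTR(col1, 1, INSTR(col1, ';') - 1)
COL2_OUT: SUBSTR(col1, INSTR(col1, ';') + 1, INSTR(col1, ':') - INSTR(col1, ';') - 1)
COL3_OUT: SUBSTR(col1, INSTR(col1, ':') + 1, INSTR(col1, '.') - INSTR(col1, ':') - 1)
COL4_OUT: SUBSTR(col1, INSTR(col1, '.') + 1)

Alternatively, REPLACECHR can be used first to normalize all three delimiters to a single character.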


242) In a sequential batch, can you run a session if the previous session fails? Ans) Yes, by setting the option "Always runs the session" for that session.

243) In a filter expression we want to compare a date field with the DB2 system register CURRENT DATE. Our syntax: datefield = CURRENT DATE (we did not define it via ports, it is a system field), but this is not valid (PMParser: Missing Operator). Ans) CURRENT DATE is a DB2 register, not an Informatica port or variable, so the filter expression parser cannot resolve it. In addition, the DB2 date format ("yyyymmdd") differs from the local database date format (for example, sysdate in Oracle returns "dd-mm-yy"), so conversion of the DB2 date format to the local database date format is compulsory; otherwise you will get this type of error.
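One possible workaround (a sketch, not part of the original answer) is to push the comparison into the Source Qualifier's SQL override, where CURRENT DATE is valid DB2 syntax, instead of doing it in a Filter transformation:

SELECT col1, col2, date_field
FROM my_db2_table
WHERE date_field = CURRENT DATE

(my_db2_table and the column names are illustrative.) If the comparison must stay in a Filter transformation, compare the port against an Informatica date value instead, for example TRUNC(datefield) = TRUNC(SESSSTARTTIME).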

244) How do you transfer the data from a data warehouse to a flat file? Ans) You can write a mapping with the flat file as the target (using a dummy connection). A flat file target is built by pulling a source into the target space using the Warehouse Designer tool.

245) What is the rank index in a Rank transformation? Ans) The port on whose values you want to generate the rank is known as the rank port; the generated rank values are stored in the rank index port (RANKINDEX), which the Designer creates automatically.

246) Define Informatica repository? Ans) The Informatica repository is a relational database that stores information, or metadata, used by the Informatica Server and client tools. Metadata can include information such as mappings describing how to transform source data, sessions indicating when you want the Informatica Server to perform the transformations, and connect strings for sources and targets. The repository also stores administrative information such as user names and passwords, permissions and privileges, and product version. Use the Repository Manager to create the repository; it connects to the repository database and runs the code needed to create the repository tables, which store metadata in a specific format that the Informatica Server and client tools use.

247) What is change data capture? Ans) Change data capture (CDC) is a set of software design patterns used to determine the data that has changed in a database so that action can be taken using the changed data.

248) Can anybody write a session parameter file which will change the source and targets for every session, i.e. different sources and targets for each session run? Ans) You are supposed to define a parameter file, and in the parameter file you define two parameters, one for the source and one for the target. For example:

$Src_file = c:\program files\informatica\server\bin\abc_source.txt
$tgt_file = c:\targets\abc_targets.txt

Then go and define them in the parameter file under the workflow/session header:

[folder_name.WF:workflow_name.ST:s_session_name]
$Src_file = c:\program files\informatica\server\bin\abc_source.txt
$tgt_file = c:\targets\abc_targets.txt

If it is a relational database, you can even give an overridden SQL at the session level as a parameter. Make sure the SQL is in a single line.

249) What is meant by a junk attribute in Informatica? Ans) A dimension is called a junk dimension if it contains attributes that are rarely changed or modified. For example, in the banking domain we can take four attributes from the Overall_Transaction_master table that amount to a junk dimension: tput flag, tcmp flag, del flag and advance flag. Grouping such random flags and text attributes and moving them to a separate dimension is called a junk dimension.

250) What are partition points? Ans) Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages. The Informatica Server sets partition points at several transformations in a pipeline by default. If you use PowerCenter, you can define additional partition points. When you add partition points, you increase the number of transformation threads, which can improve session performance. The Informatica Server can also redistribute rows of data at partition points, which can further improve session performance.

251) Where should you place the flat file to import the flat file definition into the Designer? Ans) There is no restriction on where the source file is placed. From a performance point of view, it is better to place the file in the server's local source-file folder (if you need the path, check the server properties available in the Workflow Manager). This does not mean we cannot place it in any other folder; it just means that if we place it in the server source folder, it is selected by default at session creation time.

252) I have a flat file containing n records. I need to load half of the records into one target table and the other half into another target table. Can anyone explain the procedure? Ans) Use two pipelines. In the first pipeline, read from the source file, add an Expression transformation with a variable port that is incremented by 1 for each row (v = v + 1), and load into an intermediate target T0 that stores a generated sequence number for every row (see the sketch below). After the first pipeline runs, we have (a) the count of all the rows in the file and (b) the sequence (rank) of every record in T0's c1 column. In the second pipeline, take T0 as the source, T1 and T2 as targets, and put a Router transformation R1 in between with two groups: group 1 with condition c1 <= v/2 routed to T1, and group 2 with condition c1 > v/2 routed to T2.
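A minimal sketch of the Expression ports used to number the rows in the first pipeline (port names are illustrative):

v_count (variable port): v_count + 1
o_seq_no (output port): v_count

Because a variable port keeps its value across rows, v_count acts as a running row counter, and o_seq_no is written to T0 alongside the data.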

253) How can you recover a session in sequential batches? Ans) If you configure a session in a sequential batch to stop on failure, you can run recovery starting with the failed session. The Informatica Server completes the session and then runs the rest of the batch. Use the Perform Recovery session property. To recover sessions in sequential batches configured to stop on failure: 1. In the Server Manager, open the session property sheet. 2. On the Log Files tab, select Perform Recovery, and click OK. 3. Run the session. 4. After the batch completes, open the session property sheet. 5. Clear Perform Recovery, and click OK. If you do not clear Perform Recovery, the next time you run the session the Informatica Server attempts to recover the previous session. If you do not configure a session in a sequential batch to stop on failure, and the remaining sessions in the batch complete, recover the failed session as a standalone session.

254) How to use mapping parameters, and what is their use? Ans) In the Designer you will find the Mapping Parameters and Variables option, where you can declare them and assign values. As for their use: suppose you are doing incremental extractions daily and your source system contains a day column. Without parameters you would have to open the mapping every day and change the day value so that the right data is extracted, which is tedious manual work. This is where mapping parameters and variables come in: a parameter is assigned its value once (for example through a parameter file), and once you assign a value to a mapping variable it can change between session runs.
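A minimal sketch of how such a parameter could be used in a Source Qualifier SQL override, assuming a mapping parameter named $$EXTRACT_DATE and a source column day_col (both illustrative):

SELECT *
FROM src_table
WHERE day_col = '$$EXTRACT_DATE'

At run time the server substitutes the value of $$EXTRACT_DATE from the parameter file before issuing the query, so the mapping never has to be edited.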

255) Difference between the Informatica Repository Server and the Informatica Server? Ans) Informatica Repository Server: manages connections to the repository from client applications. Informatica Server: extracts the source data, performs the data transformations, and loads the transformed data into the target.

256) What are mapping parameters and mapping variables? Ans) Mapping variables have two identities: a start value and a current value. Start value = current value when the session starts executing the underlying mapping; start value <> current value while the session is in progress and the variable value changes on one or more occasions. The current value at the end of the session becomes the start value for the subsequent run of the same session.

257) In a certain mapping there are four targets: tg1, tg2, tg3 and tg4. tg1 has a primary key, tg2 has a foreign key referencing tg1's primary key, tg3 has a primary key that tg2 and tg4 reference as a foreign key, and tg2 has a foreign key referencing the primary key of tg4. In which order will Informatica load the targets? 2) How can I detect an Aggregator transformation causing low performance? Ans) To optimize the Aggregator transformation you can use the following options: use incremental aggregation; sort the ports before you perform aggregation; avoid using an Aggregator transformation after an Update Strategy, since it can be confusing. Answer for the second query: to get performance details for any Aggregator transformation, check the counters in the .perf file named Transformationname_writetodisk and Transformationname_readfromdisk. If these two counters show non-zero values, the Aggregator transformation has to be tuned. The ways in which the Aggregator transformation can be tuned: 1. use incremental aggregation; 2. increase the data cache and index cache sizes; 3. use a Sorter transformation before the Aggregator transformation.

258) How many sessions can you create in a batch? Ans) Any number of sessions, but it depends on the configuration settings of the Informatica server; the parameter for the maximum number of connections cannot be exceeded. It also depends on the overall number of sessions running on the server at a time. For example, if the number of connections right now is 300 and you have batches running with 290+ sessions at a time, adding 15 more sessions in the same time frame will cause the loads to fail.

259) Compare the data warehousing top-down approach with the bottom-up approach. Ans)
Top down: ODS --> ETL --> Data warehouse --> Data mart --> OLAP
Bottom up: ODS --> ETL --> Data mart --> Data warehouse --> OLAP

260) What are the methods for creating reusable transformations? Ans) You can design them using two methods: 1. create the transformation in the Transformation Developer; 2. create a normal transformation in a mapping and promote it to reusable.

261) How to export mappings to the production environment? Ans) In the Designer, go to the main menu, where you can see the Export/Import options. Export the mapping as XML using the export option, then import the exported XML into the production repository with the Replace option.

262) Where do we use the MQ Series Source Qualifier and the Application Multi-Group Source Qualifier? Please give an example for better understanding. Ans) We use an MQ Series Source Qualifier when we have an MQ messaging system (queue) as the source. When there is a need to extract data from a queue, which will basically carry messages in XML format, we use a JMS or an MQ Source Qualifier depending on the messaging system: if you have a TIBCO EMS queue, use a JMS source, a JMS Source Qualifier and an XML Parser; if you have an MQ Series queue, use an MQ Source Qualifier, which will be associated with a flat file or a COBOL file.

263) How do we estimate the depth of the session scheduling queue? Where do we set the maximum number of concurrent sessions that Informatica can run at a given time? Ans) You set the maximum number of concurrent sessions on the Informatica server. By default it is 10; you can set it to any number.

264) Discuss which is better among incremental load, normal load and bulk load. Ans) It depends on the requirement. Otherwise, incremental load can be better, as it takes only the data that is not already available in the target. In terms of performance, bulk load is better than normal load, but bulk load has some conditions on the source data: 1) the data must not be subject to any constraints; 2) do not use the double data type, or if it is necessary, use it as the last column of the table; 3) it does not support check constraints.

265) What is the best way to show metadata (number of rows at source, target and each transformation level, error-related data) in a report format?

Ans) You can select these details from the repository tables; for example, you can use the view REP_SESS_LOG to get this data.
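A minimal sketch of such a query, assuming REP_SESS_LOG exposes columns like SUBJECT_AREA, SESSION_NAME, SUCCESSFUL_ROWS and FAILED_ROWS (exact column names can vary by PowerCenter version):

SELECT subject_area, session_name, successful_rows, failed_rows
FROM rep_sess_log

Similar repository views exist for source/target row counts at the individual table level.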

266) When does the Informatica server mark a batch as failed? Ans) The server marks the batch as failed if one of its sessions is configured to "run if previous completes" and that previous session fails.

267) Which tool do you use to create and manage sessions and batches and to monitor and stop the Informatica server? Ans) The Informatica Server Manager; in 8.x this is handled by the Integration Service together with the Workflow Manager and Workflow Monitor.

268) What are the hierarchies in a DWH? Ans) Data sources ---> data acquisition ---> warehouse ---> front-end tools ---> metadata management ---> data warehouse operation management.

269) How can we store previous session logs? Ans) Run the session in timestamp mode; then each run writes its own timestamped session log, so the previous session log is not overwritten.

270) My source has 1000 rows and I have brought 300 records into my ODS. Next time I want to load only the remaining records, i.e. from the 301st record, but whenever I start the workflow again it loads from the beginning. How do we solve this problem? Ans) You can do it with a Sequence Generator transformation by changing the Reset option in its Properties tab. We can also use session recovery, so that if the load stops in the middle because of a problem, recovery resumes loading the records from where it previously stopped.

271) What is a dimension table exactly? Ans) Dimension tables give a description of something. For example, if we take Student as a dimension table, it has attributes like college name, age, gender, etc., which give descriptive information about a student.

272) What are the different threads in the DTM process? Ans) Master thread: creates and manages all other threads. Mapping thread: one mapping thread is created for each session; it fetches session and mapping information. Pre- and post-session threads: created to perform pre- and post-session operations. Reader thread: one thread is created for each partition of a source; it reads data from the source. Writer thread: created to load data into the target. Transformation thread: created to transform data.

273) What is a junk dimension Ans) A "junk" dimension is a collection of random transactional codes, flags and/or text attributes that are unrelated to any particular dimension. The junk dimension is simply a structure that provides a convenient place to store the junk attributes. A good example would be a trade fact in a company that brokers equity trades.

274) What are the circumstances under which the Informatica server results in an unrecoverable session?

Ans) The Source Qualifier transformation does not use sorted ports; you change the partition information after the initial session fails; Perform Recovery is disabled in the Informatica server configuration; the sources or targets change after the initial session fails; the mapping contains a Sequence Generator or Normalizer transformation; a concurrent batch contains multiple failed sessions.

275) How does the server recognise the source and target databases? Ans) Through the connections defined in the session properties for both sources and targets: an ODBC/relational connection if it is a relational database, and an FTP connection if it is a flat file.

276) What is the difference between the Informatica PowerCenter server, the repository server and the repository? Ans) The PowerCenter server (Informatica server) extracts the source data, performs the transformations and loads the data into the targets. The repository server manages connections to the repository from client applications. The repository itself is the relational database that stores the metadata (mappings, sessions, connections, users and so on) used by the server and client tools.

277) About Informatica PowerCenter 7: 1) I want to know which mapping properties can be overridden at the session task level. 2) What types of permissions are needed to run and schedule workflows? Ans) 1) You can override any properties other than the sources and targets themselves. Make sure the sources and targets exist in your database if it is a relational database. If it is a flat file, you can override its properties. You can override the SQL (for a relational database), the session log, the DTM buffer size, cache sizes, etc.

2) You need Execute permission on the folder to run or schedule a workflow. You may have Read and Write, but you need Execute permission as well.

278) Two relational tables are connected to a Source Qualifier transformation; what are the possible errors that can be thrown? Ans) The only two possibilities, as far as I know, are: the tables must have a primary key / foreign key relationship, and both tables must be available in the same schema or database; if either condition is not met, the Source Qualifier raises an error.

279) What are the options in the target session for the Update Strategy transformation? Ans) Insert, Update, Delete, Update as Update, Update as Insert, Update else Insert. Update as Insert: all the update records from the source are flagged as inserts in the target; in other words, instead of updating the records in the target, they are inserted as new records. Update else Insert: Informatica flags the records for update if they already exist in the target, or for insert if they are new records from the source.

280) Why do we use partitioning of the session in Informatica? Ans) Performance can be improved by processing data in parallel in a single session by creating multiple partitions of the pipeline. The Informatica server can achieve high performance by partitioning the pipeline and performing the extract, transformation and load for each partition in parallel.

281) I have a source column with names like "ravi kumar". I want to insert "ravi" into one column and "kumar" into another column of the target table. How do you implement this in Informatica? Ans) You can solve this in an Expression transformation using SUBSTR and INSTR; use them to locate the space in the string when the source value contains multiple words. The syntax is: SUBSTR(char, m [, n]) returns n characters of char, beginning at character m. INSTR(char1, char2 [, n [, m [, comparisonType]]]) searches char1, beginning with its nth character, for the mth occurrence of char2 and returns the position of the character in char1 that is the first character of this occurrence; linguistic comparison is done when comparisonType is 0 and binary comparison when comparisonType is any non-zero value (the default is 0, i.e. linguistic comparison).
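A minimal sketch of the two output ports, assuming the input port is named full_name (illustrative):

FIRST_NAME: SUBSTR(full_name, 1, INSTR(full_name, ' ') - 1)
LAST_NAME: SUBSTR(full_name, INSTR(full_name, ' ') + 1)

For "ravi kumar", INSTR returns 5, so FIRST_NAME becomes "ravi" and LAST_NAME becomes "kumar".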

282) Doubts regarding the Rank transformation: can we do ranking using two ports? Can we rank all the rows coming from the source, and how? Ans) The Rank port is used to designate the column on whose values you want to rank. You can designate only one Rank port in a Rank transformation; it is an input/output port and you must link it to another transformation. So you cannot use two ports for ranking in the Rank transformation. Note: you can achieve this using an Aggregator and an Expression transformation instead.

283) What is the difference between ROWID and a row? 2) What is the difference between ROWID and ROWNUM? Ans) Every row is identified by a ROWID, which is a pseudo column available in every table; the physical address of the row is used to form the ROWID. In hexadecimal representation, the ROWID is shown as an 18-character string of the format BBBBBBBB.RRRR.FFFF (block, row, file). A row is simply a record, i.e. a piece of data. ROWNUM, on the other hand, is a pseudo column that assigns a sequential number to each row returned by a query, so it depends on the query rather than on the physical storage of the row.
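A small illustrative query showing both pseudo columns on a hypothetical table emp:

SELECT ROWID, ROWNUM, e.*
FROM emp e
WHERE ROWNUM <= 5

ROWID stays the same for a given row regardless of the query, while ROWNUM changes with the order in which rows are returned.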