
6/5/2015 krishnareddyoracleapps.blogspot.in/search/label/INFORMATICA

http://krishnareddyoracleapps.blogspot.in/search/label/INFORMATICA 1/51

PUSH DOWN OPTIMISATION

You can push transformation logic to the source or target database using pushdown optimization. When you run a session configured for pushdown optimization, the Integration Service translates the transformation logic into SQL queries and sends the SQL queries to the database. The source or target database executes the SQL queries to process the transformations. The amount of transformation logic you can push to the database depends on the database, the transformation logic, and the mapping and session configuration. The Integration Service processes all transformation logic that it cannot push to a database. Use the Pushdown Optimization Viewer to preview the SQL statements and mapping logic that the Integration Service can push to the source or target database. You can also use the Pushdown Optimization Viewer to view the messages related to pushdown optimization. The following figure shows a mapping containing transformation logic that can be pushed to the source database:

This mapping contains an Expression transformation that creates an item ID based on the store number 5419 and the item ID from the source. To push the transformation logic to the database, the Integration Service generates the following SQL statement:

INSERT INTO T_ITEMS(ITEM_ID, ITEM_NAME, ITEM_DESC)
SELECT CAST((CASE WHEN 5419 IS NULL THEN '' ELSE 5419 END) + '_' + (CASE WHEN ITEMS.ITEM_ID IS NULL THEN '' ELSE ITEMS.ITEM_ID END) AS INTEGER), ITEMS.ITEM_NAME, ITEMS.ITEM_DESC
FROM ITEMS2 ITEMS

The Integration Service generates an INSERT ... SELECT statement to retrieve the ID, name, and description values from the source table, create new item IDs, and insert the values into the ITEM_ID, ITEM_NAME, and ITEM_DESC columns in the target table. It concatenates the store number 5419, an underscore, and the original item ID to get the new item ID.

Pushdown Optimization Types

You can configure the following types of pushdown optimization:

Source-side pushdown optimization. The Integration Service pushes as much transformation logic as possible to the source database.
Target-side pushdown optimization. The Integration Service pushes as much transformation logic as possible to the target database.
Full pushdown optimization. The Integration Service attempts to push all transformation logic to the target database. If the Integration Service cannot push all transformation logic to the database, it performs both source-side and target-side pushdown optimization.

Running Source-Side Pushdown Optimization Sessions

When you run a session configured for source-side pushdown optimization, the Integration Service analyzes the mapping from the source to the target, or until it reaches a downstream transformation it cannot push to the source database. The Integration Service generates and executes a SELECT statement based on the transformation logic for each transformation it can push to the database. Then, it reads the results of this SQL query and processes the remaining transformations.

Running Target-Side Pushdown Optimization Sessions

When you run a session configured for target-side pushdown optimization, the Integration Service analyzes the mapping from the target to the source, or until it reaches an upstream transformation it cannot push to the target database. It generates an INSERT, DELETE, or UPDATE statement based on the transformation logic for each transformation it can push to the target database. The Integration Service processes the transformation logic up to the point that it can push the transformation logic to the database. Then, it executes the generated SQL on the target database.

Running Full Pushdown Optimization Sessions

To use full pushdown optimization, the source and target databases must be in the same relational database management system. When you run a session configured for full pushdown optimization, the Integration Service analyzes the mapping from the source to the target, or until it reaches a downstream transformation it cannot push to the target database. It generates and executes SQL statements against the source or target based on the transformation logic it can push to the database.

When you run a session with large quantities of data and full pushdown optimization, the database server must run a long transaction. Consider the following database performance issues when you generate a long transaction:

A long transaction uses more database resources.
A long transaction locks the database for longer periods of time. This reduces database concurrency and increases the likelihood of deadlock.
A long transaction increases the likelihood of an unexpected event.

To minimize database performance issues for long transactions, consider using source-side or target-side pushdown optimization.
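The pushdown analysis described above walks the mapping from the source until it reaches a transformation that cannot be pushed; everything before that point runs in the database, everything after runs in the Integration Service. A minimal sketch of that split (the transformation names and the `pushable` set are hypothetical; the real service analyzes mapping metadata per database, not a flat list):

```python
def split_pushdown(pipeline, pushable):
    """Split a linear pipeline into the prefix the database can execute
    and the remainder the Integration Service must process itself."""
    pushed, remaining = [], []
    for i, transform in enumerate(pipeline):
        if transform not in pushable:
            # First non-pushable transformation stops the analysis;
            # it and everything downstream stay in the Integration Service.
            remaining = pipeline[i:]
            break
        pushed.append(transform)
    return pushed, remaining

pushed, remaining = split_pushdown(
    ["Filter", "Expression", "Java", "Aggregator"],
    pushable={"Filter", "Expression", "Aggregator"},
)
# pushed == ["Filter", "Expression"]; remaining == ["Java", "Aggregator"]
```

Note that the Aggregator, although pushable on its own, still runs in the Integration Service because a non-pushable transformation sits upstream of it.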

Rules and Guidelines for Functions in Pushdown Optimization

Use the following rules and guidelines when pushing functions to a database:

If you use ADD_TO_DATE in transformation logic to change days, hours, minutes, or seconds, you cannot push the function to a Teradata database.
When you push LAST_DAY() to Oracle, Oracle returns the date up to the second. If the input date contains subseconds, Oracle trims the date to the second.
When you push LTRIM, RTRIM, or SOUNDEX to a database, the database treats the argument (' ') as NULL, but the Integration Service treats the argument (' ') as spaces.
An IBM DB2 database and the Integration Service produce different results for STDDEV and VARIANCE. IBM DB2 uses a different algorithm than other databases to calculate STDDEV and VARIANCE.
When you push SYSDATE or SYSTIMESTAMP to the database, the database server returns the timestamp in the time zone of the database server, not the Integration Service.
If you push SYSTIMESTAMP to an IBM DB2 or a Sybase database, and you specify the format for SYSTIMESTAMP, the database ignores the format and returns the complete timestamp.
You can push SYSTIMESTAMP('SS') to a Netezza database, but not SYSTIMESTAMP('MS') or SYSTIMESTAMP('US').

When you push TO_CHAR(DATE) or TO_DATE() to Netezza, dates with subsecond precision must be in the YYYY-MM-DD HH24:MI:SS.US format. If the format is different, the Integration Service does not push the function to Netezza.

PERFORMANCE TUNING OF LOOKUP TRANSFORMATIONS

Lookup transformations are used to look up a set of values in another table. Lookups slow down performance.

1. To improve performance, cache the lookup tables. Informatica can cache all the lookup and reference tables; this makes operations run very fast. (The meaning of cache is given in point 2 of this section, and the procedure for determining the optimum cache size is given at the end of this document.)

2. Even after caching, performance can be further improved by minimizing the size of the lookup cache. Reduce the number of cached rows by using a SQL override with a restriction.

Cache: Cache stores data in memory so that Informatica does not have to read the table each time it is referenced. This reduces the time taken by the process to a large extent. The cache is generated automatically by Informatica depending on the marked lookup ports or by a user-defined SQL query.

Example of caching by a user-defined query: Suppose we need to look up records where employee_id = eno. 'employee_id' is from the lookup table, EMPLOYEE_TABLE, and 'eno' is the input that comes from the source table, SUPPORT_TABLE. We put the following SQL query override in the Lookup transformation:

select employee_id from EMPLOYEE_TABLE

If there are 50,000 employee_id values, then the size of the lookup cache will be 50,000. Instead of the above query, we put the following:

select e.employee_id from EMPLOYEE_TABLE e, SUPPORT_TABLE s where e.employee_id = s.eno

If there are 1,000 eno values, then the size of the lookup cache will be only 1,000. But here the performance gain will happen only if the number of records in SUPPORT_TABLE is not huge. Our concern is to make the size of the cache as small as possible.

3. In lookup tables, delete all unused columns and keep only the fields that are used in the mapping.

4. If possible, replace lookups with a Joiner transformation or a single Source Qualifier. A Joiner transformation takes more time than a Source Qualifier transformation.

5. If a lookup transformation specifies several conditions, then place the conditions that use the equality operator '=' first in the conditions tab.

6. In the SQL override query of the lookup table, there may be an ORDER BY clause. Remove it if not needed, or put fewer column names in the ORDER BY list.

7. Do not use caching in the following cases:
- The source is small and the lookup table is large.
- The lookup is done on the primary key of the lookup table.

8. Definitely cache the lookup table columns in the following case:
- The lookup table is small and the source is large.

9. If lookup data is static, use a persistent cache. Persistent caches help to save and reuse cache files. If several sessions in the same job use the same lookup table, then using a persistent cache will help the sessions reuse cache files. In the case of static lookups, cache files will be built from the memory cache instead of from the database, which improves performance.

10. If the source is huge and the lookup table is also huge, then also use a persistent cache.

11. If the target table is the lookup table, then use a dynamic cache. The Informatica server updates the lookup cache as it passes rows to the target.

12. Use only the lookups you need in the mapping. Too many lookups inside a mapping will slow down the session.

13. If the lookup table has a lot of data, then it will take too long to cache or fit in memory. So move those fields to the Source Qualifier and then join with the main table.

14. If there are several lookups with the same data set, then share the caches.

15. If we are going to return only one row, then use an unconnected lookup.

16. All data are read into the cache in the order the fields are listed in the lookup ports. If we have an index that is even partially in this order, the loading of these lookups can be sped up.

17. If the table that we use for lookup has an index (or if we have the privilege to add an index to the table in the database, do so), then performance increases for both cached and uncached lookups.
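Point 2's effect on cache size can be demonstrated with SQLite standing in for the lookup database. The table and column names are the ones from the example above; the row counts are shrunk for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EMPLOYEE_TABLE (employee_id INTEGER)")
conn.execute("CREATE TABLE SUPPORT_TABLE (eno INTEGER)")
# 50 employees in the lookup table, but only 5 of them appear in the source.
conn.executemany("INSERT INTO EMPLOYEE_TABLE VALUES (?)",
                 [(i,) for i in range(50)])
conn.executemany("INSERT INTO SUPPORT_TABLE VALUES (?)",
                 [(i,) for i in range(5)])

# Unrestricted override: the cache would hold every employee_id.
full = conn.execute("SELECT employee_id FROM EMPLOYEE_TABLE").fetchall()

# Restricted override: the cache holds only the ids the source actually needs.
restricted = conn.execute(
    "SELECT e.employee_id FROM EMPLOYEE_TABLE e, SUPPORT_TABLE s "
    "WHERE e.employee_id = s.eno"
).fetchall()

print(len(full), len(restricted))  # 50 5
```

The restricted override returns a tenth of the rows here, which is exactly the saving the lookup cache sees.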

Optimizing the Bottlenecks

1. If the source is a flat file, ensure that the flat file is local to the Informatica server. If the source is a relational table, then try not to use synonyms or aliases.

2. If the source is a flat file, reduce the number of bytes Informatica reads per line (by default, 1024 bytes per line). To do this, decrease the Line Sequential Buffer Length setting in the session properties.

3. If possible, give a conditional query in the Source Qualifier so that records are filtered off as early as possible in the process.
4. In the Source Qualifier, if the query has ORDER BY or GROUP BY, then create an index on the source table and order by the index field of the source table.

PERFORMANCE TUNING OF TARGETS

If the target is a flat file, ensure that the flat file is local to the Informatica server. If the target is a relational table, then try not to use synonyms or aliases.

1. Use bulk load whenever possible.
2. Increase the commit level.
3. Drop constraints and indexes of the table before loading.

PERFORMANCE TUNING OF MAPPINGS

A mapping channels the flow of data from source to target, with all the transformations in between. The mapping is the skeleton of the Informatica loading process.

1. Avoid executing major SQL queries from mapplets or mappings.

2. Use optimized queries when we are using them.
3. Reduce the number of transformations in the mapping. Active transformations like Rank, Joiner, Filter, and Aggregator should be used as little as possible.
4. Remove all the unnecessary links between the transformations in the mapping.
5. If a single mapping contains many targets, then dividing them into separate mappings can improve performance.
6. If we need to use a single source more than once in a mapping, then keep only one source and Source Qualifier in the mapping. Then create different data flows as required into different targets or the same target.
7. If a session joins many source tables in one Source Qualifier, then an optimized query will improve performance.
8. In the SQL query that Informatica generates, an ORDER BY clause will be present. Remove the ORDER BY clause if not needed, or at least reduce the number of column names in that list. For better performance it is best to order by the index field of that table.
9. Combine the mappings that use the same set of source data.
10. In a mapping, fields with the same information should be given the same type and length throughout the mapping. Otherwise time will be spent on field conversions.
11. Instead of doing complex calculations in the query, use an Expression transformation and do the calculation in the mapping.
12. If data is passing through multiple staging areas, removing a staging area will increase performance.
13. Stored procedures reduce performance. Try to keep the stored procedures simple in the mappings.
14. Unnecessary data type conversions should be avoided, since data type conversions impact performance.
15. Transformation errors result in performance degradation. Try running the mapping after removing all transformations. If it takes significantly less time than with the transformations, then we have to fine-tune the transformations.
16. Keep database interactions to a minimum.

PERFORMANCE TUNING OF SESSIONS

A session specifies the location from which the data is to be taken, where the transformations are done, and where the data is to be loaded. It has various properties that help us schedule and run the job in the way we want.

1. Partition the session: This creates multiple connections to the source and target, and loads data in parallel pipelines. Each pipeline will be independent of the others. But the performance of the session will not improve if the number of records is small. The performance will also not improve if the session mainly does updates and deletes. So session partitioning should be used only if the volume of data is huge and the job is mainly insertion of data.

2. Run the sessions in parallel rather than serially to save time, if they are independent of each other.
3. Drop constraints and indexes before we run the session. Rebuild them after the session run completes. Dropping can be done in a pre-session script and rebuilding in a post-session script. But if there is too much data, dropping indexes and then rebuilding them will not be possible. In such cases, stage all data, pre-create the index, use a transportable tablespace, and then load into the database.
4. Use bulk loading, external loading, etc. Bulk loading can be used only if the table does not have an index.
5. In a session we have options to 'Treat rows as' Data Driven, Insert, Update, or Delete. If update strategies are used, then we have to keep it as 'Data Driven'. But when the session does only insertion of rows into the target table, it has to be kept as 'Insert' to improve performance.
6. Increase the database commit level (the point at which the Informatica server is set to commit data to the target table; for example, the commit level can be set at every 50,000 records).
7. By avoiding built-in functions as much as possible, we can improve performance. For example, for concatenation, the operator '||' is faster than the function CONCAT(). So use operators instead of functions, where possible. Functions like IS_SPACES(), IS_NUMBER(), IIF(), and DECODE() reduce performance to a big extent, in this order. Preference should be in the opposite order.
8. String functions like SUBSTR, LTRIM, and RTRIM reduce performance. In the sources, use delimited strings in the source flat files or use the varchar data type.
9. Manipulating high-precision data types slows down the Informatica server, so disable 'high precision' if it is not needed.
10. Localize all source and target tables, stored procedures, views, sequences, etc. Try not to connect across synonyms. Synonyms and aliases slow down performance.
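Item 7's concatenation advice can be tried against any database that supports the standard operator; here SQLite stands in (SQLite only implements the '||' operator, so this shows the operator form, not a timing comparison):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The '||' operator concatenates directly in SQL, avoiding a function call
# such as CONCAT(); the result mirrors the item-ID example from the
# pushdown section (store number, underscore, item id).
row = conn.execute("SELECT '5419' || '_' || 'ITEM01'").fetchone()
print(row[0])  # 5419_ITEM01
```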

DATABASE OPTIMISATION

To gain the best Informatica performance, the database tables, stored procedures, and queries used in Informatica should be tuned well.

1. If the source and target are flat files, then they should be present on the system on which the Informatica server is present.
2. Increase the network packet size.
3. The performance of the Informatica server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster. Thus network connections often affect session performance, so avoid network connections where possible.

4. Optimize target databases.

IDENTIFICATION OF BOTTLENECKS

Performance of Informatica is dependent on the performance of its several components, like the database, network, transformations, mappings, sessions, etc. To tune the performance of Informatica, we have to identify the bottleneck first. A bottleneck may be present in the source, target, transformations, mapping, session, database, or network. It is best to identify performance issues in components in the order source, target, transformations, mapping, and session. After identifying the bottleneck, apply the tuning mechanisms in whichever way they are applicable to the project.

Identify bottleneck in Source

If the source is a relational table, put a Filter transformation in the mapping just after the Source Qualifier, and make the condition of the filter FALSE. So all records will be filtered off and none will proceed to other parts of the mapping. In the original case, without the test filter, the total time taken is as follows:

Total Time = time taken by (source + transformations + target load)

Now, because of the filter:

Total Time = time taken by source

So if the source was fine, then in the latter case the session should take less time. If the session still takes nearly the same time as in the former case, then there is a source bottleneck.

Identify bottleneck in Target

If the target is a relational table, then substitute it with a flat file and run the session. If the time taken now is much less than the time taken for the session to load to the table, then the target table is the bottleneck.

Identify bottleneck in Transformation

Remove the transformation from the mapping and run it. Note the time taken. Then put the transformation back and run the mapping again. If the time taken now is significantly more than the previous time, then the transformation is the bottleneck. But removal of a transformation for testing can be a pain for the developer, since that might require further changes for the session to get into 'working mode'. So we can put a filter with a FALSE condition just after the transformation and run the session. If the session run takes equal time with and without this test filter, then the transformation is the bottleneck.

Identify bottleneck in Sessions

We can use the session log to identify whether the source, target, or transformations are the performance bottleneck. Session logs contain thread summary records like the following:

MASTER> PETL_24018 Thread [READER_1_1_1] created for the read stage of partition point [SQ_test_all_text_data] has completed: Total Run Time = [11.703201] secs, Total Idle Time = [9.560945] secs, Busy Percentage = [18.304876].
MASTER> PETL_24019 Thread [TRANSF_1_1_1_1] created for the transformation stage of partition point [SQ_test_all_text_data] has completed: Total Run Time = [11.764368] secs, Total Idle Time = [0.000000] secs, Busy Percentage = [100.000000].

If the busy percentage is 100, then that part is the bottleneck. Basically we have to rely on thread statistics to identify the cause of performance issues. Once the 'Collect Performance Data' option (in the session 'Properties' tab) is enabled, all the performance-related information appears in the log created by the session.
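The busy percentages in the thread summary records can be recomputed from the run and idle times, which is handy when scanning long session logs. The formula below is inferred from the sample values, not taken from Informatica documentation:

```python
def busy_percentage(total_run_secs, total_idle_secs):
    """Busy % = time the thread actually worked / total run time, as a percentage."""
    return (total_run_secs - total_idle_secs) / total_run_secs * 100.0

# Values from the two thread summary records above:
reader_busy = busy_percentage(11.703201, 9.560945)   # ~18.3, the reader is mostly idle
transf_busy = busy_percentage(11.764368, 0.000000)   # 100.0, the transformation thread is the bottleneck
```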

Performance Tuning Overview

The goal of performance tuning is to optimize session performance by eliminating performance bottlenecks. To tune session performance, first identify a performance bottleneck, eliminate it, and then identify the next performance bottleneck until you are satisfied with the session performance. You can use the test load option to run sessions while you tune session performance. If you tune all the bottlenecks, you can further optimize session performance by increasing the number of pipeline partitions in the session. Adding partitions can improve performance by utilizing more of the system hardware while processing the session. Because determining the best way to improve performance can be complex, change one variable at a time, and time the session both before and after the change. If session performance does not improve, you might want to return to the original configuration. Complete the following tasks to improve session performance:

1. Optimize the target. Enables the Integration Service to write to the targets efficiently.
2. Optimize the source. Enables the Integration Service to read source data efficiently.
3. Optimize the mapping. Enables the Integration Service to transform and move data efficiently.

4. Optimize the transformations. Enables the Integration Service to process transformations in a mapping efficiently.
5. Optimize the session. Enables the Integration Service to run the session more quickly.
6. Optimize the grid deployments. Enables the Integration Service to run on a grid with optimal performance.
7. Optimize the PowerCenter components. Enables the Integration Service and Repository Service to function optimally.
8. Optimize the system. Enables PowerCenter service processes to run more quickly.

TRANSFORMATIONS PART-2 IN INFORMATICA

SQL TRANSFORMATION:

You can pass the database connection information to the SQL transformation as input data at run time. The transformation processes external SQL scripts or SQL queries that you create in a SQL editor. The SQL transformation processes the query and returns rows and database errors. When you create an SQL transformation, you configure the following options:

Mode. The SQL transformation runs in one of the following modes:

Script mode. The SQL transformation runs ANSI SQL scripts that are located externally. You pass a script name to the transformation with each input row. The SQL transformation outputs one row for each input row.
Query mode. The SQL transformation executes a query that you define in a query editor. You can pass strings or parameters to the query to define dynamic queries or change the selection parameters. You can output multiple rows when the query has a SELECT statement.
Passive or active transformation. The SQL transformation is an active transformation by default. You can configure it as a passive transformation when you create the transformation.
Database type. The type of database the SQL transformation connects to.
Connection type. Pass database connection information to the SQL transformation or use a connection object.

Script Mode

An SQL transformation running in script mode runs SQL scripts from text files. You pass each script file name from the source to the SQL transformation Script Name port. The script file name contains the complete path to the script file. When you configure the transformation to run in script mode, you create a passive transformation. The transformation returns one row for each input row. The output row contains the results of the query and any database error.

Rules and Guidelines for Script Mode

Use the following rules and guidelines for an SQL transformation that runs in script mode:

You can use a static or dynamic database connection with script mode.
To include multiple query statements in a script, you can separate them with a semicolon.
You can use mapping variables or parameters in the script file name.

The script code page defaults to the locale of the operating system. You can change the locale of the script.
The script file must be accessible by the Integration Service. The Integration Service must have read permissions on the directory that contains the script.
The Integration Service ignores the output of any SELECT statement you include in the SQL script. The SQL transformation in script mode does not output more than one row of data for each input row.
You cannot use scripting languages such as Oracle PL/SQL or Microsoft/Sybase T-SQL in the script.
You cannot use nested scripts where the SQL script calls another SQL script.
A script cannot accept run-time arguments.
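The script-mode behavior above (a script file per input row, statements separated by semicolons, SELECT output ignored) can be mimicked with SQLite's `executescript`, which likewise runs all statements in order and discards SELECT results. This is a rough analogy, not the actual Integration Service code; the file name and statements are made up:

```python
import os
import sqlite3
import tempfile

# Write a script file containing multiple statements separated by semicolons,
# as a script-mode input row would name one.
script = "CREATE TABLE t (id INTEGER); INSERT INTO t VALUES (1); SELECT * FROM t;"
path = os.path.join(tempfile.mkdtemp(), "load.sql")
with open(path, "w") as f:
    f.write(script)

conn = sqlite3.connect(":memory:")
with open(path) as f:
    # executescript runs every statement; the SELECT's result set is
    # discarded, mirroring how script mode ignores SELECT output.
    conn.executescript(f.read())

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1
```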

Query Mode

When you configure the SQL transformation to run in query mode, you create an active transformation. When an SQL transformation runs in query mode, it executes an SQL query that you define in the transformation. You pass strings or parameters to the query from the transformation input ports to change the query statement or the query data.

You can create the following types of SQL queries in the SQL transformation:

Static SQL query. The query statement does not change, but you can use query parameters to change the data. The Integration Service prepares the query once and runs the query for all input rows.
Dynamic SQL query. You can change the query statements and the data. The Integration Service prepares a query for each input row.
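The static-query case (prepare the statement once, bind new values for each input row) is the same pattern as parameterized queries in any SQL API. A sketch with SQLite's DB-API placeholders (the SQL transformation binds named input ports rather than '?' markers, and the table here is invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "bolt"), (2, "nut"), (3, "washer")])

# Static query: the statement text never changes; only the bound value does,
# so the database can prepare it once and reuse the plan for every input row.
query = "SELECT name FROM items WHERE id = ?"
names = [conn.execute(query, (row_id,)).fetchone()[0] for row_id in (1, 3)]
print(names)  # ['bolt', 'washer']
```

A dynamic query, by contrast, changes the statement text per row, which forces a fresh prepare each time; that is the cost difference the two types describe.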

Rules and Guidelines for Query Mode

Use the following rules and guidelines when you configure the SQL transformation to run in query mode:

The number and the order of the output ports must match the number and order of the fields in the query SELECT clause.
The native data type of an output port in the transformation must match the data type of the corresponding column in the database. The Integration Service generates a row error when the data types do not match.
When the SQL query contains an INSERT, UPDATE, or DELETE clause, the transformation returns data to the SQL Error port, the pass-through ports, and the Num Rows Affected port when it is enabled. If you add output ports, the ports receive NULL data values.
When the SQL query contains a SELECT statement and the transformation has a pass-through port, the transformation returns data to the pass-through port whether or not the query returns database data. The SQL transformation returns a row with NULL data in the output ports.
You cannot add the "_output" suffix to output port names that you create.
You cannot use the pass-through port to return data from a SELECT query.
When the number of output ports is more than the number of columns in the SELECT clause, the extra ports receive a NULL value.
When the number of output ports is less than the number of columns in the SELECT clause, the Integration Service generates a row error.
You can use string substitution instead of parameter binding in a query. However, the input ports must be string data types.

SQL Transformation Properties

After you create the SQL transformation, you can define ports and set attributes in the following transformation tabs:

Ports. Displays the transformation ports and attributes that you create on the SQL Ports tab.
Properties. SQL transformation general properties.
SQL Settings. Attributes unique to the SQL transformation.
SQL Ports. SQL transformation ports and attributes.

Note: You cannot update the columns on the Ports tab. When you define ports on the SQL Ports tab, they display on the Ports tab.

Properties Tab

Configure the SQL transformation general properties on the Properties tab. Some transformation properties do not apply to the SQL transformation or are not configurable. The following list describes the SQL transformation properties:

Run Time Location: Enter a path relative to the Integration Service node that runs the SQL transformation session. If this property is blank, the Integration Service uses the environment variable defined on the Integration Service node to locate the DLL or shared library. You must copy all DLLs or shared libraries to the run-time location or to the environment variable defined on the Integration Service node. The Integration Service fails to load the procedure when it cannot locate the DLL, shared library, or a referenced file.

Tracing Level Sets the amount of detail included in the session log when you run a sessioncontaining this transformation. When you configure the SQL transformationtracing level to Verbose Data, the Integration Service writes each SQLquery it prepares to the session log.

Is Partition able Multiple partitions in a pipeline can use this transformation. Use thefollowing options:­ No. The transformation cannot be partitioned. The transformation andother transformations in the same pipeline are limited to one partition. Youmight choose No if the transformation processes all the input data together,such as data cleansing.­ Locally. The transformation can be partitioned, but the Integration Servicemust run all partitions in the pipeline on the same node. Choose Locallywhen different partitions of the transformation must share objects in

6/5/2015 krishnareddyoracleapps.blogspot.in/search/label/INFORMATICA

http://krishnareddyoracleapps.blogspot.in/search/label/INFORMATICA 11/51

memory.­ Across Grid. The transformation can be partitioned, and the IntegrationService can distribute each partition to different nodes.Default is No.

Update Strategy The transformation defines the update strategy for output rows. You canenable this property for query mode SQL transformations.Default is disabled.

Transformation Scope The method in which the Integration Service applies the transformationlogic to incoming data. Use the following options:­ Row­ Transaction­ All InputSet transaction scope to transaction when you use transaction control instatic query mode.Default is Row for script mode transformations.Default is All Input forquery mode transformations.

Output is Repeatable Indicates if the order of the output data is consistent between session runs.­ Never. The order of the output data is inconsistent between session runs.­ Based On Input Order. The output order is consistent between session runswhen the input data order is consistent between session runs.­ Always. The order of the output data is consistent between session runseven if the order of the input data is inconsistent between session runs.Default is Never.

Generate Transaction The transformation generates transaction rows. Enable this property forquery mode SQL transformations that commit data in an SQL query.Default is disabled.

Requires SingleThread Per Partition

Indicates if the Integration Service processes each partition of a procedurewith one thread.

Output is Deterministic The transformation generate consistent output data between session runs.Enable this property to perform recovery on sessions that use thistransformation.Default is enabled.

Create Mapping:

Step 1: Creating a flat file and importing the source from the flat file.


Create a Notepad file and in it create a table named bikes with three columns and three records. Create one more Notepad file to hold the path for bikes: inside it just type C:\bikes.txt and save it. Import the source (the second Notepad file) using Sources -> Import from File, after which we get a wizard with three subsequent windows; follow the on-screen instructions to complete the process of importing the source.

Step 2: Importing the target and applying the transformation.

In the same way as specified above, go to Targets -> Import from File and select an empty Notepad file under the name targetforbikes (this is one more blank Notepad file which we should create and save under the above specified name in C:\).

Create two columns in the target table under the names report and error. We are all set here. Now apply the SQL transformation. In the first window when you apply the SQL transformation, select the script mode. Connect the SQ to the ScriptName port under inputs and connect the other two fields to the output ports correspondingly.

A snapshot of the steps discussed above is given below.


Step 3: Design the workflow and run it.

Create the task and the workflow using the naming conventions. Go to the Mappings tab and click on the Source in the left-hand pane to specify the path for the output file.

Step 4: Preview the output data on the target table.

================================================================ NORMALIZER TRANSFORMATION:

Active and Connected Transformation. The Normalizer transformation normalizes records from COBOL and relational sources, allowing us to organize the data. Use a Normalizer transformation instead of the Source Qualifier transformation when we normalize a COBOL source.


We can also use the Normalizer transformation with relational sources to create multiple rows from a single row of data.

Example 1: To create 4 records of every employee in EMP table.

EMP will be the source table. Create a target table Normalizer_Multiple_Records with the same structure as EMP and the datatype of HIREDATE as VARCHAR2. Create shortcuts as necessary.

Creating Mapping :

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give name. Ex: m_Normalizer_Multiple_Records
4. Drag EMP and the target table.
5. Transformation -> Create -> Select Expression -> Give name, click Create, then Done.
6. Pass all ports from SQ_EMP to the Expression transformation.
7. Transformation -> Create -> Select Normalizer -> Give name, Create and Done.
8. Try dragging ports from Expression to Normalizer. This is not possible.
9. Edit the Normalizer on the Normalizer tab. Add columns equal to the columns in the EMP table, with the same datatypes.
10. The Normalizer does not have a DATETIME datatype, so convert HIREDATE to char in the Expression transformation: create an output port out_hdate and do the conversion.
11. Connect the ports from Expression to Normalizer.
12. Edit the Normalizer on the Normalizer tab. As EMPNO identifies source records and we want 4 records of every employee, give Occurs for EMPNO as 4.
13. Click Apply and then OK.
14. Add links as shown in the mapping below.
15. Mapping -> Validate
16. Repository -> Save

Make the session and workflow. Give connection information for the source and target tables. Run the workflow and see the result.


Example 2: To break columns into rows.

Source:
Roll_Number  Name    ENG  HINDI  MATHS
100          Amit    78   76     90
101          Rahul   76   78     87
102          Jessie  65   98     79

Target:
Roll_Number  Name    Marks
100          Amit    78
100          Amit    76
100          Amit    90
101          Rahul   76
101          Rahul   78
101          Rahul   87
102          Jessie  65
102          Jessie  98
102          Jessie  79

Make the source a flat file. Import it and create the target table. Create the mapping as before. In the Normalizer tab, create only 3 ports (Roll_Number, Name and Marks) as there are 3 columns in the target table. Also, as we have 3 marks in the source, give Occurs as 3 for Marks in the Normalizer tab. Connect accordingly and connect to the target. Validate and save. Make the session and workflow and run it. Give the Source File Directory and Source File Name for the source flat file in the source properties on the Mapping tab of the session. See the result.
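What the Normalizer does with Occurs = 3 on Marks can be sketched in plain Python (illustrative only; the function and data layout are invented to model the behavior, not PowerCenter code):

```python
# Each source row carrying three mark columns becomes three target rows,
# one per occurrence of the Marks port.
def normalize(rows):
    out = []
    for roll, name, eng, hindi, maths in rows:
        for mark in (eng, hindi, maths):  # one output row per occurrence
            out.append((roll, name, mark))
    return out

source = [(100, "Amit", 78, 76, 90), (101, "Rahul", 76, 78, 87)]
for row in normalize(source):
    print(row)
```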

==================================================================== SEQUENCE GENERATOR TRANSFORMATION:

6/5/2015 krishnareddyoracleapps.blogspot.in/search/label/INFORMATICA

http://krishnareddyoracleapps.blogspot.in/search/label/INFORMATICA 16/51

Passive and Connected Transformation. The Sequence Generator transformation generates numeric values. Use the Sequence Generator to create unique primary key values, replace missing primary keys, or cycle through a sequential range of numbers.

We use it mostly to generate Surrogate Keys in a DWH environment. When we want to maintain history, we need a key other than the Primary Key to uniquely identify the record. So we create a sequence 1, 2, 3, 4 and so on, and use this sequence as the key. Example: If EMPNO is the key, we can keep only one record in the target and cannot maintain history. So we use the Surrogate Key as the Primary Key and not EMPNO.

Sequence Generator Ports:

The Sequence Generator transformation provides two output ports: NEXTVAL and CURRVAL.

We cannot edit or delete these ports. Likewise, we cannot add ports to the transformation.

NEXTVAL: Use the NEXTVAL port to generate sequence numbers by connecting it to a Transformation or target. For example, we might connect NEXTVAL to two target tables in a mapping to generate unique primary key values.

The sequence for table 1 will be generated first. Only when table 1 has been loaded will the sequence for table 2 be generated.

CURRVAL: CURRVAL is NEXTVAL plus the Increment By value.

We typically connect the CURRVAL port only when the NEXTVAL port is already connected to a downstream transformation. If we connect the CURRVAL port without connecting the NEXTVAL port, the Integration Service passes a constant value for each row. When we connect the CURRVAL port in a Sequence Generator transformation, the Integration Service processes one row in each block. We can optimize performance by connecting only the NEXTVAL port in a mapping.


Example: To use Sequence Generator transformation

EMP will be the source. Create a target EMP_SEQ_GEN_EXAMPLE in the shared folder with the same structure as EMP, and add two more ports NEXT_VALUE and CURR_VALUE to the target table. Create shortcuts as needed.

Creating Mapping:
1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give name. Ex: m_seq_gen_example
4. Drag EMP and the target table.
5. Connect all ports from SQ_EMP to the target table.
6. Transformation -> Create -> Select Sequence Generator from the list -> Create -> Done
7. Connect NEXT_VAL and CURR_VAL from the Sequence Generator to the target.
8. Validate Mapping
9. Repository -> Save

Create the session and then the workflow. Give connection information for all tables. Run the workflow and see the result in the table.


Sequence Generator Properties:

Start Value (Required): Start value of the generated sequence that we want the Integration Service to use if we use the Cycle option. Default is 0.
Increment By (Required): Difference between two consecutive values from the NEXTVAL port.
End Value (Optional): Maximum value the Integration Service generates.
Current Value (Optional): First value in the sequence. If the Cycle option is used, the value must be greater than or equal to the start value and less than the end value.
Cycle (Optional): If selected, the Integration Service cycles through the sequence range. Ex: Start Value 1, End Value 10 -> the sequence runs from 1 to 10 and again starts from 1.
Reset (Optional): By default, the last value of the sequence during the session is saved to the repository, and the next time the sequence starts from the saved value. If selected, the Integration Service generates values based on the original current value for each session.

Points to Ponder:

If Current Value is 1 and End Value is 10 with no Cycle option, and there are 17 records in the source, the session will fail.
If we connect just CURR_VAL only, the value will be the same for all records.
If Current Value is 1, End Value is 10, the Cycle option is set and Start Value is 0, then with 17 records in the source the sequence is 1, 2 - 10, 0, 1, 2, 3 -
To make the above sequence 1-10, 1-10, give Start Value as 1. Start Value is used along with the Cycle option only.
If Current Value is 1, End Value is 10, the Cycle option is set and Start Value is 1, then with 17 records in the source the session runs 1-10, 1-7. 7 will be saved in the repository. If we run the session again, the sequence will start from 8.
Use the Reset option if you want to start the sequence from CURR_VAL every time.
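The cycle behavior above can be sketched in Python. This is an illustrative model, not PowerCenter code; the function name and parameters are invented to mirror the Start Value, End Value, Increment By and Cycle settings:

```python
# Model NEXTVAL generation: Current Value 1, End Value 10, Increment By 1,
# Start Value 1, Cycle on, 17 source rows -> 1-10 then 1-7.
def nextval_sequence(current, end, start, increment, rows, cycle):
    values, val = [], current
    for _ in range(rows):
        if val > end:
            if not cycle:
                # Without Cycle, exhausting the range fails the session
                raise RuntimeError("session fails: sequence exhausted")
            val = start  # wrap around to the start value
        values.append(val)
        val += increment
    return values

print(nextval_sequence(1, 10, 1, 1, 17, cycle=True))
# 1..10 then 1..7; the last value (7) would be saved to the repository
```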

=====================================================================AGGREGATOR TRANSFORMATION:


Connected and Active Transformation. The Aggregator transformation allows us to perform aggregate calculations, such as averages and sums, and to perform calculations on groups.

Components of the Aggregator Transformation

1. Aggregate expression
2. Group by port
3. Sorted Input
4. Aggregate cache

1) Aggregate Expressions

Entered in an output port. Can include non-aggregate expressions and conditional clauses.

The transformation language includes the following aggregate functions:

AVG, COUNT, MAX, MIN, SUM, FIRST, LAST, MEDIAN, PERCENTILE, STDDEV, VARIANCE

Single Level Aggregate Function: MAX(SAL)
Nested Aggregate Function: MAX( COUNT( ITEM ))

Nested Aggregate Functions

In an Aggregator transformation, there can be multiple single level functions or multiple nested functions, but an Aggregator transformation cannot have both types of functions together. An aggregate expression can include one aggregate function nested within another aggregate function: MAX( COUNT( ITEM )) is correct, but MIN(MAX( COUNT( ITEM ))) is not correct.

Conditional Clauses

We can use conditional clauses in the aggregate expression to reduce the number of rows used in the aggregation. The conditional clause can be any clause that evaluates to TRUE or FALSE.

SUM( COMMISSION, COMMISSION > QUOTA )
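The effect of this conditional aggregate can be sketched in Python (an illustrative model with made-up sample rows, not PowerCenter code):

```python
# SUM(COMMISSION, COMMISSION > QUOTA): only rows where the condition
# evaluates to TRUE contribute to the aggregate.
rows = [
    {"commission": 500, "quota": 400},
    {"commission": 300, "quota": 400},  # fails the condition, excluded
    {"commission": 700, "quota": 600},
]
total = sum(r["commission"] for r in rows if r["commission"] > r["quota"])
print(total)  # 1200
```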


Non-Aggregate Functions

We can also use non-aggregate functions in the aggregate expression.

IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )

2) Group By Ports

Indicates how to create groups. When grouping data, the Aggregator transformation outputs the last row of each group unless otherwise specified.

The Aggregator transformation allows us to define groups for aggregations, rather than performing the aggregation across all input data. For example, we can find the maximum salary for every department.

In Aggregator Transformation, Open Ports tab and select Group By as needed.
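Grouping with DEPTNO as the group-by port and MAX(SAL) as the aggregate expression can be sketched as follows (illustrative Python with made-up salary rows, not PowerCenter code):

```python
# One output row per group: DEPTNO is the group-by port, MAX(SAL) the
# aggregate expression.
rows = [(10, 2450), (10, 5000), (20, 3000), (20, 2975), (30, 2850)]
max_sal = {}
for deptno, sal in rows:
    max_sal[deptno] = max(sal, max_sal.get(deptno, sal))
print(max_sal)  # {10: 5000, 20: 3000, 30: 2850}
```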

3) Using Sorted Input

Use to improve session performance. To use sorted input, we must pass data to the Aggregator transformation sorted by the group by ports, in ascending or descending order. When we use this option, we tell the Aggregator that the data coming to it is already sorted. We check the Sorted Input option in the Properties tab of the transformation. If the option is checked but we are not passing sorted data to the transformation, the session fails.

4) Aggregator Caches

The Power Center Server stores data in the aggregate cache until it completes the aggregate calculations. It stores group values in an index cache and row data in the data cache. If the Power Center Server requires more space, it stores overflow values in cache files.

Note: The Power Center Server uses memory to process an Aggregator transformation with sorted ports. It does not use cache memory, so we do not need to configure cache memory for Aggregator transformations that use sorted ports.

1) Aggregator Index Cache: The index cache holds group information from the group by ports. If we are using Group By on DEPTNO, then this cache stores values 10, 20, 30 etc.

All Group By Columns are in AGGREGATOR INDEX CACHE. Ex. DEPTNO

2) Aggregator Data Cache: DATA CACHE is generally larger than the AGGREGATOR INDEX CACHE.


Columns in Data Cache:

Variable ports, if any
Non group by input/output ports
Non group by input ports used in a non-aggregate output expression
Ports containing aggregate functions

1) Example: To calculate MAX, MIN, AVG and SUM of salary of EMP table.

EMP will be the source table. Create a target table EMP_AGG_EXAMPLE in the Target Designer. The table should contain DEPTNO, MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL. Create the shortcuts in your folder.

Creating Mapping:
1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_agg_example
4. Drag EMP from the source into the mapping.
5. Click Transformation -> Create -> Select AGGREGATOR from the list. Give a name and click Create. Now click Done.
6. Pass SAL and DEPTNO only from SQ_EMP to the AGGREGATOR transformation.
7. Edit the AGGREGATOR transformation. Go to the Ports tab.
8. Create 4 output ports: OUT_MAX_SAL, OUT_MIN_SAL, OUT_AVG_SAL, OUT_SUM_SAL
9. Open the Expression Editor one by one for all output ports and give the calculations. Ex: MAX(SAL), MIN(SAL), AVG(SAL), SUM(SAL)
10. Click Apply -> OK.
11. Drag the target table now.
12. Connect the output ports from the Aggregator to the target table.
13. Click Mapping -> Validate
14. Repository -> Save

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all tables.

==================================================================UNION TRANSFORMATION:

Active and Connected transformation.

The Union transformation is a multiple input group transformation that you can use to merge data from multiple pipelines or pipeline branches into one pipeline branch. It merges data from multiple sources similar to the UNION ALL SQL statement, which combines the results from two or more SQL statements.

Union Transformation Rules and Guidelines

We can create multiple input groups, but only one output group.
We can connect heterogeneous sources to a Union transformation.
All input groups and the output group must have matching ports. The precision, datatype, and scale must be identical across all groups.
The Union transformation does not remove duplicate rows. To remove duplicate rows, we must add another transformation such as a Router or Filter transformation.
We cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
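The UNION ALL behavior (duplicates kept) can be sketched in Python with made-up employee rows (illustrative only, not PowerCenter code):

```python
# Like SQL UNION ALL, the Union transformation concatenates rows from all
# input groups and does NOT remove duplicates.
emp_10 = [(7782, "CLARK"), (7839, "KING")]
emp_20 = [(7369, "SMITH")]
emp_rest = [(7839, "KING")]          # duplicate of a row in emp_10

merged = emp_10 + emp_20 + emp_rest  # all rows kept, duplicate included
print(len(merged))  # 4
```

A Router or Filter transformation downstream would be needed to drop the duplicate (7839, "KING") row.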

Union Transformation Components

When we configure a Union transformation, define the following components:

Transformation tab: We can rename the transformation and add a description.


Properties tab: We can specify the tracing level.
Groups tab: We can create and delete input groups. The Designer displays groups we create on the Ports tab.
Group Ports tab: We can create and delete ports for the input groups. The Designer displays ports we create on the Ports tab.

We cannot modify the Ports, Initialization Properties, Metadata Extensions, or Port Attribute Definitions tabs in a Union transformation. Create input groups on the Groups tab, and create ports on the Group Ports tab. We can create one or more input groups on the Groups tab. The Designer creates one output group by default. We cannot edit or delete the default output group.

Example: To combine data of tables EMP_10, EMP_20 and EMP_REST

Import tables EMP_10, EMP_20 and EMP_REST into the shared folder in Sources. Create a target table EMP_UNION_EXAMPLE in the Target Designer; the structure should be the same as the EMP table. Create the shortcuts in your folder.

Creating Mapping:


1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_union_example
4. Drag EMP_10, EMP_20 and EMP_REST from the source into the mapping.
5. Click Transformation -> Create -> Select Union from the list. Give a name and click Create. Now click Done.
6. Pass ports from SQ_EMP_10 to the Union transformation.
7. Edit the Union transformation. Go to the Groups tab.
8. One group will already be there as we dragged ports from SQ_EMP_10 to the Union transformation.
9. As we have 3 source tables, we need 3 input groups. Click the Add button to add 2 more groups. See the sample mapping.
10. We can also modify ports in the Ports tab.
11. Click Apply -> OK.
12. Drag the target table now.
13. Connect the output ports from the Union to the target table.
14. Click Mapping -> Validate
15. Repository -> Save

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all 3 source tables.


Sample mapping picture

=======================================================================JOINER TRANSFORMATION:

Connected and Active Transformation. Used to join source data from two related heterogeneous sources residing in different locations or file systems; or, we can join data from the same source. If we need to join 3 tables, we need 2 Joiner transformations. The Joiner transformation joins two sources with at least one matching port, using a condition that matches one or more pairs of ports between the two sources.

Example: To join EMP and DEPT tables.


EMP and DEPT will be the source tables. Create a target table JOINER_EXAMPLE in the Target Designer. The table should contain all ports of the EMP table plus DNAME and LOC as shown below. Create the shortcuts in your folder.

Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give mapping name. Ex: m_joiner_example
4. Drag EMP, DEPT and the target. Create the Joiner transformation. Link as shown below.
5. Specify the join condition in the Condition tab. See the steps on the next page.
6. Set the master in the Ports tab. See the steps on the next page.
7. Mapping -> Validate
8. Repository -> Save

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all tables.


JOIN CONDITION:

The join condition contains ports from both input sources that must match for the Power Center Server to join two rows. Example: DEPTNO=DEPTNO1 above.

1. Edit Joiner Transformation -> Condition Tab
2. Add condition

We can add as many conditions as needed. Only the = operator is allowed.

If we join Char and Varchar datatypes, the Power Center Server counts any spaces that pad Char values as part of the string. So if you try to join the following:

Char(40) = "abcd" and Varchar(40) = "abcd"

Then the Char value is "abcd" padded with 36 blank spaces, and the Power Center Server does not join the two fields because the Char field contains trailing spaces.

Note: The Joiner transformation does not match null values.

MASTER and DETAIL TABLES

In a Joiner, one table is called the MASTER and the other the DETAIL.

The MASTER table is always cached. We can make any table the MASTER. Edit Joiner Transformation -> Ports Tab -> Select M for the master table.

The table with fewer rows should be made the MASTER to improve performance.

Reason:

When the Power Center Server processes a Joiner transformation, it reads rows from both sources concurrently and builds the index and data cache based on the master rows. So the table with fewer rows will be read quickly, and the cache can be built while the table with more rows is still being read. The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process.

JOINER TRANSFORMATION PROPERTIES TAB

Case-Sensitive String Comparison: If selected, the Power Center Server uses case-sensitive string comparisons when performing joins on string columns.
Cache Directory: Specifies the directory used to cache master or detail rows and the index to these rows.
Join Type: Specifies the type of join: Normal, Master Outer, Detail Outer, or Full Outer.

Tracing Level
Joiner Data Cache Size
Joiner Index Cache Size
Sorted Input

JOIN TYPES

In SQL, a join is a relational operator that combines data from multiple tables into a single result set. The Joiner transformation acts in much the same manner, except that tables can originate from different databases or flat files. Types of Joins:

Normal
Master Outer
Detail Outer
Full Outer

Note: A normal or master outer join performs faster than a full outer or detail outer join.

Example: In EMP, we have employees with DEPTNO 10, 20, 30 and 50. In DEPT, we have DEPTNO 10, 20, 30 and 40. DEPT will be the MASTER table as it has fewer rows.

Normal Join: With a normal join, the Power Center Server discards all rows of data from the master and detail source that do not match, based on the condition.

All employees of 10, 20 and 30 will be there as only they are matching.

Master Outer Join: This join keeps all rows of data from the detail source and the matching rows from the master source. It discards the unmatched rows from the master source.

All data of employees of 10, 20 and 30 will be there. There will also be employees of DEPTNO 50, with the corresponding DNAME and LOC columns NULL.

Detail Outer Join: This join keeps all rows of data from the master source and the matching rows from the detail source. It discards the unmatched rows from the detail source.

All employees of 10, 20 and 30 will be there. There will be one record for DEPTNO 40, with the corresponding EMP columns NULL.

Full Outer Join: A full outer join keeps all rows of data from both the master and detail sources.

All data of employees of 10, 20 and 30 will be there.


There will be employees of DEPTNO 50, with the corresponding DNAME and LOC columns NULL. There will be one record for DEPTNO 40, with the corresponding EMP columns NULL.
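The four join results on this EMP/DEPT example can be sketched with just the DEPTNO keys (an illustrative Python model of which department numbers survive each join type, not PowerCenter code):

```python
# DEPT is the master (10, 20, 30, 40); EMP is the detail (10, 20, 30, 50).
master = {10, 20, 30, 40}   # DEPT deptnos (cached side)
detail = {10, 20, 30, 50}   # EMP deptnos

normal       = detail & master   # matching rows only
master_outer = detail            # all detail rows + matching master rows
detail_outer = master            # all master rows + matching detail rows
full_outer   = detail | master   # all rows from both sides

print(sorted(normal), sorted(full_outer))
```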

USING SORTED INPUT

Use to improve session performance. To use sorted input, we must pass data to the Joiner transformation sorted by the ports that are used in the join condition. We check the Sorted Input option in the Properties tab of the transformation. If the option is checked but we are not passing sorted data to the transformation, the session fails. We can use a Sorter to sort data, or the Source Qualifier in the case of relational tables.

JOINER CACHES

The Joiner always caches the MASTER table. We cannot disable caching. It builds the index cache and data cache based on the MASTER table.

1) Joiner Index Cache:

All Columns of MASTER table used in Join condition are in JOINER INDEX CACHE.

Example: DEPTNO in our mapping.

2) Joiner Data Cache:

Master columns not used in the join condition but used as output to another transformation or the target table are in the data cache.

Example: DNAME and LOC in our mapping example.

Performance Tuning:

Perform joins in a database when possible.
Join sorted data when possible.
For a sorted Joiner transformation, designate as the master source the source with fewer duplicate key values.

The Joiner can't be used in the following conditions:

1. Either input pipeline contains an Update Strategy transformation.
2. We connect a Sequence Generator transformation directly before the Joiner transformation.

=====================================================================UPDATE STRATEGY TRANSFORMATION:

Active and Connected Transformation


Till now, we have only inserted rows in our target tables. What if we want to update, delete or reject rows coming from the source based on some condition? Example: If the address of a CUSTOMER changes, we can update the old address or keep both old and new addresses, one row for the old and one for the new. This way we maintain the historical data.

Update Strategy is used with the Lookup transformation. In a DWH, we create a Lookup on the target table to determine whether a row already exists or not, then insert, update, delete or reject the source record as per the business need.

In Power Center, we set the update strategy at two different levels:

1. Within a session
2. Within a mapping

1. Update Strategy within a session:

When we configure a session, we can instruct the Integration Service to either treat all rows in the same way or use instructions coded into the session mapping to flag rows for different database operations.

Session Configuration: Edit Session -> Properties -> Treat Source Rows As: (Insert, Update, Delete, and Data Driven). Insert is the default.

Specifying Operations for Individual Target Tables:


You can set the following update strategy options:

Insert: Select this option to insert a row into a target table.
Delete: Select this option to delete a row from a table.
Update: We have the following options in this situation:

Update as Update. Update each row flagged for update if it exists in the target table.
Update as Insert. Insert each row flagged for update.
Update else Insert. Update the row if it exists. Otherwise, insert it.

Truncate table: Select this option to truncate the target table before loading data.

2. Flagging Rows within a Mapping

Within a mapping, we use the Update Strategy transformation to flag rows for insert, delete, update, or reject.

Operation  Constant   Numeric Value
INSERT     DD_INSERT  0
UPDATE     DD_UPDATE  1
DELETE     DD_DELETE  2
REJECT     DD_REJECT  3

Update Strategy Expressions:

Frequently, the update strategy expression uses the IIF or DECODE function from the transformation language to test each row to see if it meets a particular condition.

IIF( ( ENTRY_DATE > APPLY_DATE), DD_REJECT, DD_UPDATE )
Or
IIF( ( ENTRY_DATE > APPLY_DATE), 3, 1 )

The above expression is written in the Properties tab of the Update Strategy transformation. DD means DATA DRIVEN.
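The flagging logic of that expression can be sketched in Python (an illustrative model using the DD_* numeric constants from the table; the function and sample dates are invented, not PowerCenter code):

```python
# IIF(ENTRY_DATE > APPLY_DATE, DD_REJECT, DD_UPDATE) flags each row with a
# numeric operation code that downstream writers act on.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(entry_date, apply_date):
    return DD_REJECT if entry_date > apply_date else DD_UPDATE

# ISO-format date strings compare correctly as strings
print(flag_row("2015-06-05", "2015-06-01"))  # 3 (rejected)
print(flag_row("2015-06-01", "2015-06-05"))  # 1 (updated)
```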

Forwarding Rejected Rows: We can configure the Update Strategy transformation to either pass rejected rows to the next transformation or drop them.

Steps:

1. Create the Update Strategy transformation.
2. Pass all ports needed to it.
3. Set the expression in the Properties tab.
4. Connect to other transformations or the target.

Performance tuning:

1. Use the Update Strategy transformation as little as possible in the mapping.
2. Do not use the Update Strategy transformation if we just want to insert into the target table; instead use a direct mapping, direct filtering, etc.
3. For updating or deleting rows from the target table we can use the Update Strategy transformation itself.

==================================================================LOOKUP TRANSFORMATION:

Passive Transformation. Can be Connected or Unconnected; a dynamic lookup is connected. Use a Lookup transformation in a mapping to look up data in a flat file or a relational table, view, or synonym. We can import a lookup definition from any flat file or relational database to which both the PowerCenter Client and Server can connect. We can use multiple Lookup transformations in a mapping.


The Power Center Server queries the lookup source based on the lookup ports in the transformation. It compares Lookup transformation port values to lookup source column values based on the lookup condition. Pass the result of the lookup to other transformations and a target.

Get a related value: EMP has DEPTNO but DNAME is not there. We use a Lookup to get DNAME from the DEPT table based on the lookup condition.
Perform a calculation: We want only those employees whose SAL > average(SAL). We will write a lookup override query.
Update slowly changing dimension tables: the most important use. We can use a Lookup transformation to determine whether rows already exist in the target.

1. LOOKUP TYPES
We can configure the Lookup transformation to perform the following types of lookups:

Connected or Unconnected
Relational or Flat File
Cached or Uncached

Relational Lookup: When we create a Lookup transformation using a relational table as a lookup source, we can connect to the lookup source using ODBC and import the table definition as the structure for the Lookup transformation.

We can override the default SQL statement if we want to add a WHERE clause or query multiple tables.
We can use a dynamic lookup cache with relational lookups.

Flat File Lookup: When we use a flat file for a lookup source, we can use any flat file definition in the repository, or we can import it. When we import a flat file lookup source, the Designer invokes the Flat File Wizard.

Cached or Uncached Lookup: We can check the option in the Properties tab to cache the lookup or not. By default, the lookup is cached.

Connected and Unconnected Lookup:

Connected Lookup | Unconnected Lookup
Receives input values directly from the pipeline. | Receives input values from the result of a :LKP expression in another transformation.
We can use a dynamic or static cache. | We can use a static cache.
Cache includes all lookup columns used in the mapping. | Cache includes all lookup/output ports in the lookup condition and the lookup/return port.
If there is no match for the lookup condition, the PowerCenter Server returns the default value for all output ports. | If there is no match for the lookup condition, the PowerCenter Server returns NULL.
If there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup condition for all lookup/output ports. | If there is a match for the lookup condition, the PowerCenter Server returns the result of the lookup condition into the return port.
Passes multiple output values to another transformation. | Passes one output value to another transformation.
Supports user-defined default values. | Does not support user-defined default values.
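The match/no-match rows of the comparison above can be sketched in plain Python. This is a rough analogy, not PowerCenter code; the DEPT data and the default values are made up for illustration.

```python
# Analogy: a connected Lookup returns several output columns (with
# user-defined defaults on no match), while an unconnected Lookup
# returns a single value, or None (NULL) on no match.
DEPT = {10: {"DNAME": "ACCOUNTING", "LOC": "NEW YORK"},
        20: {"DNAME": "RESEARCH", "LOC": "DALLAS"}}

def connected_lookup(deptno):
    # match -> all lookup/output ports; no match -> user-defined defaults
    defaults = {"DNAME": "UNKNOWN", "LOC": "UNKNOWN"}
    return DEPT.get(deptno, defaults)

def unconnected_lookup(deptno):
    # match -> the single return port; no match -> NULL (None)
    row = DEPT.get(deptno)
    return row["DNAME"] if row else None

print(connected_lookup(10))    # {'DNAME': 'ACCOUNTING', 'LOC': 'NEW YORK'}
print(connected_lookup(99))    # {'DNAME': 'UNKNOWN', 'LOC': 'UNKNOWN'}
print(unconnected_lookup(99))  # None
```

The one-value-versus-many distinction is the main reason to prefer a connected Lookup when several columns are needed from the lookup table.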

2. LOOKUP T/F COMPONENTS
Define the following components when we configure a Lookup transformation in a mapping:

Lookup source
Ports
Properties
Condition


1. Lookup Source: We can use a flat file or a relational table for a lookup source. When we create a Lookup transformation, we can import the lookup source from the following locations:

Any relational source or target definition in the repository
Any flat file source or target definition in the repository
Any table or file that both the PowerCenter Server and Client machine can connect to

The lookup table can be a single table, or we can join multiple tables in the same database using a lookup SQL override in the Properties tab.

2. Ports:

Ports | Lookup Type | Number Needed | Description
I | Connected, Unconnected | Minimum 1 | Input port to the Lookup. Usually ports used for the join condition are input ports.
O | Connected, Unconnected | Minimum 1 | Ports going to another transformation from the Lookup.
L | Connected, Unconnected | Minimum 1 | Lookup port. The Designer automatically designates each column in the lookup source as a lookup (L) and output (O) port.
R | Unconnected | 1 only | Return port. Used only in an unconnected Lookup transformation.

3. Properties Tab

Options | Lookup Type | Description
Lookup SQL Override | Relational | Overrides the default SQL statement to query the lookup table.
Lookup Table Name | Relational | Specifies the name of the table from which the transformation looks up and caches values.
Lookup Caching Enabled | Flat File, Relational | Indicates whether the PowerCenter Server caches lookup values during the session.
Lookup Policy on Multiple Match | Flat File, Relational | Determines what happens when the Lookup transformation finds multiple rows that match the lookup condition. Options: Use First Value, Use Last Value, Use Any Value, or Report Error.
Lookup Condition | Flat File, Relational | Displays the lookup condition you set in the Condition tab.
Connection Information | Relational | Specifies the database containing the lookup table.
Source Type | Flat File, Relational | Whether the lookup is from a database or a flat file.
Lookup Cache Directory Name | Flat File, Relational | Location where the cache is built.
Lookup Cache Persistent | Flat File, Relational | Whether to use a persistent cache or not.
Dynamic Lookup Cache | Flat File, Relational | Whether to use a dynamic cache or not.
Recache From Lookup Source | Flat File, Relational | Rebuilds the cache if the cache source changes and we are using a persistent cache.
Insert Else Update | Relational | Use only with dynamic caching enabled. Applies to rows entering the Lookup transformation with the row type of insert.
Lookup Data Cache Size | Flat File, Relational | Data cache size.
Lookup Index Cache Size | Flat File, Relational | Index cache size.
Cache File Name Prefix | Flat File, Relational | Use only with persistent lookup cache. Specifies the file name prefix to use with persistent lookup cache files.

Some other properties for Flat Files are:

Datetime Format
Thousand Separator
Decimal Separator
Case-Sensitive String Comparison
Null Ordering
Sorted Input

4. Condition Tab
We enter the lookup condition. The PowerCenter Server uses the lookup condition to test incoming values. We compare transformation input values with values in the lookup source or cache, represented by lookup ports.

The data types in a condition must match.
When we enter multiple conditions, the PowerCenter Server evaluates each condition as an AND, not an OR.
The PowerCenter Server matches null values.
The input value must meet all conditions for the lookup to return a value.
The operators =, >, <, >=, <=, and != can be used.
Example: IN_DEPTNO = DEPTNO

In_DNAME = 'DELHI'

Tip: If we include more than one lookup condition, place the conditions with an equal sign first to optimize lookup performance.

Note:
1. We can use only the = operator in case of a dynamic cache.
2. The PowerCenter Server fails the session when it encounters multiple keys for a Lookup transformation configured to use a dynamic cache.
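The AND evaluation and null-matching rules above can be illustrated with a small Python sketch. This is an analogy only; the lookup rows and port names are invented for the example.

```python
# Analogy: every lookup condition must hold (AND, never OR), and a
# NULL input matches a NULL lookup value.
LOOKUP_ROWS = [
    {"DEPTNO": 10, "DNAME": "ACCOUNTING"},
    {"DEPTNO": 20, "DNAME": None},
]

def matches(lookup_val, input_val):
    # The server matches null values in the lookup condition
    if lookup_val is None and input_val is None:
        return True
    return lookup_val == input_val

def lookup(in_deptno, in_dname):
    # Conditions: DEPTNO = IN_DEPTNO AND DNAME = IN_DNAME (all must pass)
    for row in LOOKUP_ROWS:
        if matches(row["DEPTNO"], in_deptno) and matches(row["DNAME"], in_dname):
            return row
    return None  # no match -> NULL

print(lookup(10, "ACCOUNTING"))  # both conditions pass -> first row
print(lookup(10, "SALES"))       # one condition fails -> None
print(lookup(20, None))          # NULL matches NULL -> second row
```

A row is returned only when every condition passes, which is why a single mismatched port is enough to make the lookup return NULL.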

3. Connected Lookup Transformation
Example: To create a connected Lookup Transformation

EMP will be the source table. DEPT will be the lookup table.
Create a target table CONN_Lookup_EXAMPLE in the Target Designer. The table should contain all ports of the EMP table plus DNAME and LOC as shown below.
Create the shortcuts in your folder.

Creating Mapping:
1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give a name. Ex: m_CONN_LOOKUP_EXAMPLE
4. Drag EMP and the target table.
5. Connect all fields from SQ_EMP to the target except DNAME and LOC.
6. Transformation -> Create -> Select LOOKUP from the list. Give a name and click Create.
7. The following screen is displayed.
8. As DEPT is the source definition, click Source and then select DEPT.
9. Click OK.

10. Now pass DEPTNO from SQ_EMP to this Lookup. DEPTNO from SQ_EMP will be named DEPTNO1. Edit the Lookup and rename it to IN_DEPTNO in the Ports tab.
11. Now go to the CONDITION tab and add the condition DEPTNO = IN_DEPTNO. Click Apply and then OK. Link the mapping as shown below:
12. As we are not passing IN_DEPTNO and DEPTNO to any other transformation from the LOOKUP, we can edit the Lookup transformation and remove the OUTPUT check from them.
13. Mapping -> Validate
14. Repository -> Save

Create Session and Workflow as described earlier. Run the workflow and see the data in the target table.
Make sure to give connection information for all tables.
Make sure to give the connection for the LOOKUP table also.

We use a Connected Lookup when we need to return more than one column from the lookup table.
There is no use of the Return port in a Connected Lookup.


SEE PROPERTY TAB FOR ADVANCED SETTINGS

4. Unconnected Lookup Transformation
An unconnected Lookup transformation is separate from the pipeline in the mapping. We write an expression using the :LKP reference qualifier to call the lookup within another transformation.

Steps to configure an Unconnected Lookup:

1. Add input ports.
2. Add the lookup condition.
3. Designate a return value.
4. Call the lookup from another transformation.

Example: To create an unconnected Lookup Transformation

EMP will be the source table. DEPT will be the lookup table.
Create a target table UNCONN_Lookup_EXAMPLE in the Target Designer. The table should contain all ports of the EMP table plus DNAME as shown below.
Create the shortcuts in your folder.

Creating Mapping:
1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give a name. Ex: m_UNCONN_LOOKUP_EXAMPLE
4. Drag EMP and the target table.
5. Now Transformation -> Create -> Select EXPRESSION from the list. Give a name and click Create. Then click Done.
6. Pass all ports from SQ_EMP to the Expression transformation.
7. Connect all fields from the Expression to the target except DNAME.
8. Transformation -> Create -> Select LOOKUP from the list. Give a name and click Create.
9. Follow the steps as in the connected example above to create the Lookup on the DEPT table.
10. Click OK.
11. Now edit the Lookup transformation. Go to the Ports tab.
12. As DEPTNO is common in the source and the Lookup, create a port IN_DEPTNO in the Ports tab. Make it an input port only and give it the same datatype as DEPTNO.
13. Designate DNAME as the return port. Check R to make it the return port.


14. Now add a condition in the Condition tab: DEPTNO = IN_DEPTNO. Click Apply and then OK.
15. Now we need to call this Lookup from the Expression transformation.
16. Edit the Expression transformation and create a new output port out_DNAME with the same datatype as DNAME. Open the Expression Editor and call the Lookup as given below. We double-click Unconn at the bottom of the Functions tab and, as we need only DEPTNO, we pass only DEPTNO as input.
17. Validate the call in the Expression Editor and click OK.
18. Mapping -> Validate
19. Repository -> Save

Create Session and Workflow as described earlier. Run the workflow and see the data in the target table.
Make sure to give connection information for all tables.
Make sure to give the connection for the LOOKUP table also.

5. Lookup Caches
We can configure a Lookup transformation to cache the lookup table. The Integration Service (IS) builds a cache in memory when it processes the first row of data in a cached Lookup transformation. The Integration Service also creates cache files by default in $PMCacheDir. If the data does not fit in the memory cache, the IS stores the overflow values in the cache files. When the session completes, the IS releases cache memory and deletes the cache files.

If we use a flat file lookup, the IS always caches the lookup source.


We set the Cache type in Lookup Properties.

Lookup Cache Files
1. Lookup Index Cache:

Stores data for the columns used in the lookup condition.

2. Lookup Data Cache:

For a connected Lookup transformation, stores data for the connected output ports, not including ports used in the lookup condition.
For an unconnected Lookup transformation, stores data from the return port.

Types of Lookup Caches:

1. Static Cache
By default, the IS creates a static cache. It caches the lookup file or table and looks up values in the cache for each row that comes into the transformation. The IS does not update the cache while it processes the Lookup transformation.

2. Dynamic Cache
To cache a target table or flat file source and insert new rows or update existing rows in the cache, use a Lookup transformation with a dynamic cache. The IS dynamically inserts or updates data in the lookup cache and passes data to the target. The target table is also our lookup table. Not good for performance if the table is huge.

3. Persistent Cache
If the lookup table does not change between sessions, we can configure the Lookup transformation to use a persistent lookup cache. The IS saves and reuses cache files from session to session, eliminating the time required to read the lookup table.

4. Recache from Source
If the persistent cache is not synchronized with the lookup table, we can configure the Lookup transformation to rebuild the lookup cache. If the lookup table has changed, we can use this to rebuild the lookup cache.

5. Shared Cache

Unnamed cache: When Lookup transformations in a mapping have compatible caching structures, the IS shares the cache by default. You can only share static unnamed caches.
Named cache: Use a persistent named cache when we want to share a cache file across mappings or share a dynamic and a static cache. The caching structures must match or be compatible with a named cache. You can share static and dynamic named caches.
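The static-versus-dynamic distinction above can be sketched with two toy classes. This is ordinary Python, not Integration Service internals; the cache keys and values are invented for the example.

```python
# Analogy: a static cache is built once and never modified while rows
# flow through; a dynamic cache inserts or updates entries, so later
# rows see earlier changes.
class StaticCache:
    def __init__(self, table):
        self.cache = dict(table)    # built once from the lookup source

    def lookup(self, key):
        return self.cache.get(key)  # never modified during the session

class DynamicCache(StaticCache):
    def lookup_and_sync(self, key, value):
        if key in self.cache:
            self.cache[key] = value  # update existing row in the cache
            return "update"
        self.cache[key] = value      # insert new row into the cache
        return "insert"

dyn = DynamicCache({1: "old"})
print(dyn.lookup_and_sync(1, "new"))   # update
print(dyn.lookup_and_sync(2, "row2"))  # insert
print(dyn.lookup(2))                   # row2 - later rows see the insert
```

The mutation in `lookup_and_sync` is why a dynamic cache can keep the cache in step with the target, and also why it costs more when the table is huge.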


Building Connected Lookup Caches
We can configure the session to build caches sequentially or concurrently.

When we build sequential caches, the IS creates caches as the source rows enter the Lookup transformation.
When we configure the session to build concurrent caches, the IS does not wait for the first row to enter the Lookup transformation before it creates caches. Instead, it builds multiple caches concurrently.

1. Building Lookup Caches Sequentially:

2. Building Lookup Caches Concurrently:

To configure the session to create concurrent caches

Edit Session -> In the Config Object tab -> Additional Concurrent Pipelines for Lookup Cache Creation -> Give a value here (Auto by default).
Note: The IS builds caches for unconnected Lookups sequentially only.


====================================================================
Expression TRANSFORMATION:

Passive and connected transformation.

Use the Expression transformation to calculate values in a single row before we write to the target. For example, we might need to adjust employee salaries, concatenate first and last names, or convert strings to numbers. Use the Expression transformation to perform any non-aggregate calculations. Examples: addition, subtraction, multiplication, division, concat, uppercase conversion, lowercase conversion, etc.
We can also use the Expression transformation to test conditional statements before we output the results to target tables or other transformations. Examples: IF, THEN, DECODE.
There are 3 types of ports in an Expression transformation:

Input
Output
Variable: Used to store any temporary calculation.

Calculating Values: To use the Expression transformation to calculate values for a single row, we must include the following ports:

Input or input/output ports for each value used in the calculation. For example, to calculate Total Salary, we need salary and commission.
Output port for the expression: We enter one expression for each output port. The return value for the output port needs to match the return value of the expression.

We can enter multiple expressions in a single Expression transformation. We can create any number of output ports in the transformation.
Example: Calculating Total Salary of an Employee

Import the source table EMP in the shared folder. If it is already there, then don't import.
In the shared folder, create the target table Emp_Total_SAL. Keep all ports as in the EMP table except SAL and COMM in the target table. Add a Total_SAL port to store the calculation.
Create the necessary shortcuts in the folder.


Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give the mapping name. Ex: m_totalsal
4. Drag EMP from the source into the mapping.
5. Click Transformation -> Create -> Select Expression from the list. Give a name and click Create. Now click Done.
6. Link ports from SQ_EMP to the Expression transformation.
7. Edit the Expression transformation. As we do not want SAL and COMM in the target, remove the check from the output port for both columns.
8. Now create a new port out_Total_SAL. Make it an output port only.
9. Click the small button that appears in the Expression section of the dialog box and enter the expression in the Expression Editor.


10. Enter the expression SAL + COMM. You can select SAL and COMM from the Ports tab in the Expression Editor.
11. Check the expression syntax by clicking Validate.
12. Click OK -> Click Apply -> Click OK.
13. Now connect the ports from the Expression to the target table.
14. Click Mapping -> Validate
15. Repository -> Save

Create Session and Workflow as described earlier. Run the workflow and see the data in target table.

As COMM is null, Total_SAL will be null in most cases. Now open your mapping and the Expression transformation. Select the COMM port and give 0 in Default Value. Now apply the changes, validate the mapping and save. Refresh the session and validate the workflow again. Run the workflow and see the result again. Now use ERROR in the Default Value of COMM to skip rows where COMM is null.
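Why Total_SAL comes out NULL, and how a default value of 0 fixes it, can be sketched in Python. This is an analogy of the transformation-language behaviour, not PowerCenter code; None plays the role of NULL and the salary figures are invented.

```python
# Analogy: SAL + COMM is NULL whenever COMM is NULL, unless a default
# value replaces the NULL before the arithmetic.
def total_sal(sal, comm):
    if sal is None or comm is None:
        return None                 # NULL propagates through arithmetic
    return sal + comm

def total_sal_with_default(sal, comm, default_comm=0):
    comm = default_comm if comm is None else comm
    return None if sal is None else sal + comm

print(total_sal(5000, None))               # None - most EMP rows have no COMM
print(total_sal_with_default(5000, None))  # 5000 - default value 0 applied
print(total_sal_with_default(5000, 300))   # 5300
```

Setting Default Value to 0 on the COMM port is the PowerCenter equivalent of the substitution in `total_sal_with_default`.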


Syntax: ERROR('Any message here')
Similarly, we can use the ABORT function to abort the session if COMM is null.
Syntax: ABORT('Any message here')
Make sure to double-click the session after doing any changes in the mapping. It will prompt that the mapping has changed. Click OK to refresh the mapping. Run the workflow after validating and saving the workflow.

Performance tuning: The Expression transformation is used to perform simple calculations and also to do source lookups.

1. Use operators instead of functions.
2. Minimize the usage of string functions.
3. If we use a complex expression multiple times in the Expression transformation, make that expression a variable. Then we need to use only this variable for all computations.

===================================================================
Router Transformation:

Active and connected transformation.

A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. However, a Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group.

Example: If we want to keep employees of France, India, and the US in 3 different tables, then we can use 3 Filter transformations or 1 Router transformation.

Mapping A uses three Filter transformations while Mapping B produces the same result with one Router transformation.
A Router transformation consists of input and output groups, input and output ports, group filter conditions, and properties that we configure in the Designer.


Working with Groups A Router transformation has the following types of groups:

Input: The group that gets the input ports.
Output: User-defined groups and the default group. We cannot modify or delete output ports or their properties.

User-Defined Groups: We create a user-defined group to test a condition based on incoming data. A user-defined group consists of output ports and a group filter condition. We can create and edit user-defined groups on the Groups tab with the Designer. Create one user-defined group for each condition that we want to specify.

The Default Group: The Designer creates the default group after we create one new user-defined group. The Designer does not allow us to edit or delete the default group. This group does not have a group filter condition associated with it. If all of the conditions evaluate to FALSE, the IS passes the row to the default group.

Example: Filtering employees of Department 10 to EMP_10, Department 20 to EMP_20 and the rest to EMP_REST

Source is the EMP table.
Create 3 target tables EMP_10, EMP_20 and EMP_REST in the shared folder. The structure should be the same as the EMP table.
Create the shortcuts in your folder.

Creating Mapping:
1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> Give the mapping name. Ex: m_router_example
4. Drag EMP from the source into the mapping.
5. Click Transformation -> Create -> Select Router from the list. Give a name and click Create. Now click Done.
6. Pass ports from SQ_EMP to the Router transformation.
7. Edit the Router transformation. Go to the Groups tab.
8. Click the Groups tab, and then click the Add button to create a user-defined group. The default group is created automatically.
9. Click the Group Filter Condition field to open the Expression Editor.
10. Enter a group filter condition. Ex: DEPTNO=10
11. Click Validate to check the syntax of the conditions you entered.


12. Create another group for EMP_20. Condition: DEPTNO=20
13. The rest of the records not matching the above two conditions will be passed to the DEFAULT group. See the sample mapping.
14. Click OK -> Click Apply -> Click OK.
15. Now connect the ports from the Router to the target tables.
16. Click Mapping -> Validate
17. Repository -> Save

Create Session and Workflow as described earlier. Run the workflow and see the data in the target tables.
Make sure to give connection information for all 3 target tables.
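The routing in this example can be sketched in plain Python. This is an analogy, not PowerCenter code; a row is tested against every user-defined group condition, and rows matching none fall through to the default group.

```python
# Analogy of the Router groups built above: EMP_10 and EMP_20 are
# user-defined groups, EMP_REST is the default group.
GROUP_CONDITIONS = [
    ("EMP_10", lambda row: row["DEPTNO"] == 10),
    ("EMP_20", lambda row: row["DEPTNO"] == 20),
]

def route(row):
    # A row goes to every group whose condition it satisfies;
    # if all conditions are FALSE it goes to the default group.
    groups = [name for name, cond in GROUP_CONDITIONS if cond(row)]
    return groups or ["EMP_REST"]

rows = [{"DEPTNO": 10}, {"DEPTNO": 20}, {"DEPTNO": 30}]
print([route(r) for r in rows])  # [['EMP_10'], ['EMP_20'], ['EMP_REST']]
```

Unlike a Filter, no row is dropped: the DEPTNO=30 row is not rejected but delivered to the default group, which is exactly the difference summarized below.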


Sample Mapping:

Difference between Router and Filter: We cannot pass rejected data forward in a Filter, but we can pass it in a Router. Rejected data goes to the default group of the Router.
========================================================

