
Saturday, March 5, 2011

Informatica problems


SATURDAY, 4 APRIL 2009


Informatica Strange Problems

Sometimes we encounter strange problems while working with Informatica. I would like to highlight some problems that seem to defy all logic.

Error : Informatica Debugger: Invalid session can not be debugged. Select a valid session instance to use

Problem : You are trying to run your mapping in debug mode with a valid session selected, but you still get the above error even though the session is valid.
Solution :
1) Copy your mapping, say M1, as M1_Debug
2) Make your session reusable
3) Invalidate your initial mapping M1 by some means
4) Refresh your session; when it asks for a new mapping, select M1_Debug
5) Revalidate your session
6) Run your mapping (M1_Debug) in the debugger using the second option, "use an existing reusable session instance for this mapping"
Now you should be able to run your mapping in debugger mode.
Though this is not a 100% clean method, it worked in my case and is worth trying.

Error : CMN_1892 Error: Data for Lookup [] fetched from the file [] is not sorted on the condition ports
Problem : You suddenly start getting this error even though you are not using a lookup with sorted ports. You have also tried everything, such as using a sorted file.
Solution : In our case the file size suddenly increased almost 10 times and we started getting this error. We tried everything, but nothing worked.
In the end we increased the Lookup Data Cache size and the Lookup Index Cache size, and it started working fine.
Though this is not a 100% clean method either, it worked in my case and is worth trying.
-----------------------------------------------------------------------------

How to validate all mappings in a particular folder
In this post we will discuss how to validate all mappings in a particular folder.
Step 1: Open the Repository Manager client.
Step 2:
Go to Tools -> Queries.
Create a new query named ALL_MAP_QUERY.
Create the query with parameter names and conditions to retrieve the mappings in a folder,
as follows:
folder = folder-name
object type = mapping



Step 3:
a) Connect to the PowerCenter repository using pmrep:
pmrep connect -r RepositoryName -d DomainName -n UserId -x Passwd
b) Create a persistent output file using the command:
pmrep executequery -q ALL_MAP_QUERY -u d:\infa8\PERS_OP_FILE
Step 4: Use the persistent output file created in the last step to validate the mappings:
pmrep validate -i d:\infa8\PERS_OP_FILE -u PERS_OP_FILE_LOG
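
For convenience, the three commands can be chained in a small script and run unattended. A minimal sketch as a Windows batch file; the repository, domain, and user values are placeholders to replace with your own:

rem validate_all_mappings.bat -- hypothetical wrapper around the pmrep steps above
rem Connect to the repository
pmrep connect -r RepositoryName -d DomainName -n UserId -x Passwd
rem Run the saved query and write the qualifying mappings to a persistent file
pmrep executequery -q ALL_MAP_QUERY -u d:\infa8\PERS_OP_FILE
rem Validate every object listed in the persistent file and log the results
pmrep validate -i d:\infa8\PERS_OP_FILE -u PERS_OP_FILE_LOG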

---------------------------------------------------------------------
Informatica Useful Tips (Part1)
Tip 1 : Ignore the SQ SQL Override conditionally
It is possible by defining a mapping parameter for the WHERE clause of the SQL override. When you need all records from the source, set this parameter to 1=1 in the parameter file; when you need only selected data, set the parameter accordingly.
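
A minimal sketch, assuming a mapping parameter named $$WHERE_COND (the folder, workflow, session, and table names here are illustrative). The SQL override references the parameter:

SELECT CUST_ID, CUST_NAME
FROM CUSTOMERS
WHERE $$WHERE_COND

The parameter file then sets it per run:

[MyFolder.WF:wf_load_cust.ST:s_m_load_cust]
$$WHERE_COND=1=1

To pull only selected data, you would instead set something like $$WHERE_COND=LOAD_DATE > SYSDATE - 1. Remember to declare $$WHERE_COND in the mapping (Mappings -> Parameters and Variables) first.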


Tip 2 : Overcome the size limit for a SQL override in a PowerCenter mapping
The SQL editor for SQL query overrides has a limit of 32,767 characters.
To use a SQL override longer than 32,767 characters, do the following:
1. Create a database view using the SQL query override.
2. Create a source definition based on this database view.
3. Use this new source definition as the source in the mapping.
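
A minimal sketch of step 1, with a hypothetical view name; the SELECT body stands in for your full override SQL:

CREATE OR REPLACE VIEW V_LONG_OVERRIDE AS
SELECT t1.col_a, t2.col_b
FROM table_one t1, table_two t2
WHERE t1.id = t2.id
-- ...the rest of the long override SQL goes here...

You then import V_LONG_OVERRIDE in the Source Analyzer like any other table or view.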


Tip 3 : Export an entire Informatica folder to an XML file
We can do this in 8.1.1:
1) In the Designer, select Tools -> Queries and create a new query. Set the parameter Folder equal to the folder you want to export and then run the query.
2) In the Query Results window, choose Edit -> Select All, then select Tools -> Export to XML File and enter a file name and location. The full folder will be exported to an XML file.
We can also use the query tool in the Repository Manager to get everything in the folder (mappings, sessions, workflows, etc.).


Tip 4 : Validate all mappings in a folder


We can validate all mappings in a folder in the following way:
1. Open the Repository Manager client.
2. Create a query with parameter names and conditions to retrieve the mappings in a folder,
as follows:
folder = folder-name
object type = mapping
Use the following pmrep executequery command to get a persistent output file:


pmrep executequery -q query_name [-u persistent_output_file_name] [-a (append)]

We can write the result to a persistent output file. If the query is successful, it returns the total number of qualifying records. We can then use the newly created persistent output file as the input file in the following pmrep validate command:

pmrep validate -i persistent_input_file_name [-p output_option_types] [-u persistent_output_file_name] [-a (append)] [-b (verbose)]

Tip 5 : If you are getting the following error:
CMN_1022 [
[Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed[Microsoft][ODBC Excel Driver]Optional feature not implemented
Database driver error...
Function Name : SetConnectOption
Database driver error...
Function Name : AutoCommitOff]
Solution : Make an entry for the Excel ODBC driver in the powermart.ini file in the Informatica folder.

Informatica Metadata Queries (Part 2)

In this post we will continue with Informatica metadata queries.


Purpose : To find the tracing level of each session
Query :
select task_name,decode (attr_value,0,'None',1,'Terse',2,'Normal',3,'Verbose Initialisation',4,'Verbose Data','') Tracing_Level
from REP_SESS_CONFIG_PARM CFG,opb_task TSK
WHERE CFG.SESSION_ID=TSK.TASK_ID
and tsk.TASK_TYPE=68
and attr_id=204 and attr_type=6
Description : This query returns the tracing level along with each session name. It is helpful for identifying sessions that have a particular tracing level, such as Verbose.
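
As a small variation (an assumption based only on the decode mapping in the query above, not a verified attribute list), you could restrict the output to sessions in one of the verbose modes by appending:

and attr_value in (3,4)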


Purpose : To find the names of all stored procedures used in Stored Procedure transformations
Query :
select attr_value from
OPB_WIDGET_ATTR
where widget_type=6 and attr_id=1
Description : This query is helpful when you need the names of all stored procedures being used in Informatica.
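
If you also want to see which mapping uses each stored procedure, the query can be joined to the mapping tables. A sketch by analogy with the SQL-override search query in Part 1 below; unverified, so treat it as a starting point:

select SUB.subj_name, MAP.mapping_name, WID.attr_value stored_procedure_name
from opb_widget_attr WID, opb_widget_inst WIDINST, opb_mapping MAP, opb_subject SUB
where WID.widget_id = WIDINST.widget_id
and WID.widget_type = WIDINST.widget_type
and WID.widget_type = 6
and WID.attr_id = 1
and WIDINST.mapping_id = MAP.mapping_id
and MAP.subject_id = SUB.subj_id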


Purpose : To find who saved a mapping last
Query :
SELECT substr(rpl.event_time,7,4) || substr(rpl.event_time,6,1) || substr(rpl.event_time,1,5) || ' ' || substr(rpl.event_time,12,11) "EventTimestamp",
usr.user_name "Username",
DECODE(rpl.object_type_id,21,s21.subj_name,('(' || rpl.object_type_id || ')')) "Folder",
obt.object_type_name "Type",
DECODE(rpl.object_type_id,21,map.mapping_name,('(' || rpl.object_type_id || ')')) "Object"
FROM
opb_reposit_log rpl, opb_object_type obt,
opb_subject fld, opb_mapping map, opb_users usr, opb_subject s21
WHERE obt.object_type_name = 'Mapping'
AND rpl.object_type_id = obt.object_type_id
AND rpl.object_id = map.mapping_id(+)
AND rpl.object_id = fld.subj_id(+)
AND rpl.event_uid = usr.user_id
AND map.subject_id = s21.subj_id(+)
ORDER BY (substr(rpl.event_time,7,4) || substr(rpl.event_time,6,1) ||
substr(rpl.event_time,1,5) || ' ' || substr(rpl.event_time,12,11)) DESC
Description : This query is helpful when you want to know who saved a mapping last.

Informatica Metadata Queries (Part1)

Informatica Metadata Queries
In Informatica we often come across situations where we need to find all mappings that use a particular table as a source, or something similar. This is very difficult to do through the Informatica client tools. In such situations the repository tables are very useful.

Repository table names start with OPB, and the tables are continuously updated as we make changes. Along with the OPB tables, REP views are also available.
It is very dangerous to modify these tables, so take due care when dealing with the OPB tables: query them, do not update them.

In this article we will focus on some useful metadata queries.

Purpose : Search for a table in Source Qualifier SQL overrides
Query :
select distinct SUB.subj_name, MAP.mapping_name
from opb_widget_attr WID, opb_mapping MAP, opb_widget_inst WIDINST,
opb_subject SUB
where WID.widget_id = WIDINST.widget_id
and WID.widget_type = WIDINST.widget_type
and WID.widget_type = 3
and WID.attr_id = 1
and WIDINST.mapping_id = MAP.mapping_id
and MAP.subject_id = SUB.subj_id
and upper(WID.attr_value) like '%TNAME%' ;

Description : This query lists all mappings where a particular table is used in a SQL override. Replace TNAME with the table name you are searching for.

Purpose : Search for a table in Sources and Targets :
Query :
select SUB.subj_name, decode(WIDG.widget_type,1,'Source',2,'Target'),
MAP.mapping_name,
WIDG.instance_name
from opb_widget_inst WIDG, opb_mapping MAP, opb_subject SUB
where SUB.subj_id = MAP.subject_id
and WIDG.mapping_id = MAP.mapping_id
and WIDG.widget_type in (1,2)
and WIDG.instance_name like '%TNAME_%'
and SUB.subj_name = 'YOUR_FOLDER_NAME'

Description : This query lists all folders and mappings where a particular table is used as a source or target instance.

Purpose : Query to give lookup information
Query :
Select distinct wid.WIDGET_ID, all_map.mapping_name, wid.INSTANCE_NAME Lkp_name, Decode(widat.attr_id,2,widat.attr_value) Table_name,
decode (widat.attr_id,6,widat.attr_value) src_tgt
FROM rep_all_mappings ALL_MAP, rep_widget_inst wid, OPB_WIDGET_ATTR widat
where all_map.mapping_id=wid.mapping_id
and wid.WIDGET_ID=widat.WIDGET_ID
and all_map.subject_area='YOUR_FOLDER_NAME'
and wid.WIDGET_TYPE=11
and widat.WIDGET_TYPE=11
and widat.ATTR_ID in (2,6)

Description : This query gives information about Lookup transformations, such as the lookup name, table name, and mapping name.

Purpose : Query to list invalid workflows
Query :
select opb_subject.subj_name, opb_task.task_name
from opb_task, opb_subject
where task_type = 71
and is_valid = 0
and opb_subject.subj_id = opb_task.subject_id
and UPPER(opb_subject.SUBJ_NAME) like UPPER('YOUR_FOLDER_NAME')

Description : This query lists all invalid workflows in the given folder.
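
By analogy, since the tracing-level query in Part 2 above uses task_type = 68 for sessions, changing the task type should list invalid sessions instead. A sketch under that assumption (the type value is inferred from these two queries, not from a verified type list):

select opb_subject.subj_name, opb_task.task_name
from opb_task, opb_subject
where task_type = 68
and is_valid = 0
and opb_subject.subj_id = opb_task.subject_id
and UPPER(opb_subject.SUBJ_NAME) like UPPER('YOUR_FOLDER_NAME')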


Best Practices in Informatica (Part1)

We have tried to come up with some of the best practices in Informatica.


1) Always try to add an expression transformation after the source qualifier and before the target. If the source or target definition changes, it is easier to reconnect the ports.
2) Always use the COBOL file for the Normalizer in binary format; otherwise there are a lot of issues, especially with COMP-3 fields.
3) Remove unused ports. Though unused ports do not have any effect on performance, it is always better to remove them for better visibility.
4) If possible, do calculations in output ports instead of variable ports, as variable ports need to be assigned/reassigned each time, which can slow down performance.
5) Try to avoid complex filter expressions. Instead, evaluate the expression in an upstream expression transformation and pass the result to the filter transformation; too many complex calculations in the filter condition can slow down performance.
6) In the workflow Source/Target directory properties, take advantage of Unix links. Instead of hard-coding the path in the source/target directory, specify the path through a Unix link, as in the example and sketch below.


For example, suppose in the dev environment your source directory path is /devserver/team/source and on the prod server it is /prodserver/team/source. You can create a link named src_file_dir under $PMRootDir pointing to /devserver/team/source on the dev server and to /prodserver/team/source on the prod server, and in your source/target file directory property put the path $PMRootDir/src_file_dir. Then there is no need to change the Source/Target directory every time you move between dev, test, and production.
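
A minimal sketch of the link setup, using the directory names from the example above ($PMRootDir here stands for the actual root directory configured for your Integration Service; substitute the real path when creating the link):

# on the dev server
ln -s /devserver/team/source /path/to/PMRootDir/src_file_dir
# on the prod server
ln -s /prodserver/team/source /path/to/PMRootDir/src_file_dir

In the session, the source file directory then stays $PMRootDir/src_file_dir in every environment.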


7) In a Sequence Generator, do not connect the current value (CURRVAL) port downstream unless required, because when the CURRVAL port is connected the Informatica Server processes one row in each block. We can optimize performance by connecting only the NEXTVAL port in a mapping.
8) Improve lookup performance by putting all conditions that use the equality operator '=' first in the list of conditions under the Condition tab.
9) Always remember the rule of thumb not to cache lookup tables having more than 550,000 rows (assuming a row size of 1024 bytes). If your row size is less than or more than 1024, adjust the number of rows accordingly.
10) Avoid calculating the same value again and again. Instead, store it in a variable port and use it several times, as in the sketch below.
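
A minimal sketch of practice 10 in an expression transformation, with hypothetical port names; v_FULL_NAME is computed once and reused by both output ports:

v_FULL_NAME (variable port) = LTRIM(RTRIM(FIRST_NAME)) || ' ' || LTRIM(RTRIM(LAST_NAME))
OUT_DISPLAY_NAME (output port) = v_FULL_NAME
OUT_NAME_LENGTH (output port) = LENGTH(v_FULL_NAME)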


If you have further queries then please mail to support@itnirvanas.com

FRIDAY, 23 JANUARY 2009


Normalizer transformation (Working with VSAM source)

Normalizer transformation :
The Normalizer transformation is used with COBOL sources, which are often stored in denormalized format. The OCCURS statement in a COBOL file nests multiple records of information in a single record. We can use the Normalizer transformation to break out this repeated data into separate records. For each new record it creates, the Normalizer transformation generates a unique identifier.
Step 1: Create the copybook for the COBOL source
The first step is to get the copybook from the mainframe team and convert it into an Informatica-compliant format.
It will look like this:
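A minimal sketch of an Informatica-compliant .cbl file with a hypothetical record layout; only the 01/05-level record section in the middle would come from your actual copybook, and the surrounding lines are the wrapper described below:

identification division.
program-id. mead.
environment division.
input-output section.
file-control.
select FNAME assign to "FNAME".
data division.
file section.
fd FNAME.
01 CUSTOMER-REC.
    05 CUST-ID     pic 9(5).
    05 CUST-NAME   pic x(20).
    05 CUST-PHONE  occurs 3 times pic x(10).
working-storage section.
procedure division.
stop run.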




Normally the record-layout section is what the mainframe team provides; convert it into the required format by adding the lines above it (from "identification division" down to "fd FNAME") and the lines below it (from "working-storage section" onward). After the changes, save the file as a .cbl file.
Points to take care of while editing the .cbl file. You might get the following error:

identification division.
program-id. mead.
environment division.
select Error at line 6 : parse error
Things to be taken care of:

1. The line "select FNAME" should not start before column position 12.
2. The other lines added above and below the record layout should not start before column 9.
3. All the lines in the record structure should end with a dot.
Once the COBOL source is imported successfully, you can drag the Normalizer source into the mapping.
Step 2: Set the workflow properties properly for the VSAM source
Once you have successfully imported the COBOL copybook, you can create your mapping using the VSAM source. After creating the mapping, you can create your workflow.
Please take care of the following properties in the session containing the VSAM source.
In the source's advanced file properties, set the appropriate options.

Important: Always ask for the COBOL source file to be in binary format; otherwise you will face a lot of problems with COMP-3 fields.
Once you have set these properties, you can run your workflow.
COMP-3 FIELDS:
COBOL COMP-3 is a binary field type that packs two digits into each byte, using a notation called Binary Coded Decimal, or BCD. This halves the storage requirement compared to a character, or COBOL "display", field. COMP-3 is a common data type, even outside of COBOL.
A common issue faced while working with COMP-3 fields: you have created your COBOL source definition with COMP-3 (packed) fields, but the actual data in the source file is not packed. Make sure the data is in the same format in both the definition and the source file.
Check whether the COMP-3 fields are signed or unsigned.

FRIDAY, 16 JANUARY 2009


New features of Informatica 8

Enhancements in Informatica 8.6:



  1. Target from transformation : In Informatica 8 we can create a target from a transformation by dragging the transformation into the Target Designer.

  2. Pushdown optimization : Increases performance by pushing transformation logic to the database; the server analyzes the transformations, issues SQL statements to sources and targets, and itself processes only the transformation logic that it cannot push to the database.

  3. New functions in the expression editor : New functions such as REG_EXTRACT and REG_MATCH have been introduced in Informatica 8 (see the sketch after this list).

  4. Repository queries are available in both versioned and non-versioned repositories; previously they were available only for versioned repositories.

  5. UDFs (user-defined functions), similar to macros in Excel.

  6. FTP : We can have partitioned FTP targets and indirect FTP file sources (with a file list).

  7. Propagating port descriptions : In Informatica 8 we can edit a port description and propagate it to the other transformations in the mapping.

  8. Environment SQL enhancements : Environment SQL can still be used to execute an SQL statement at the start of each connection to the database. We can use SQL commands that depend on a transaction remaining open during the entire read or write process. For example, the following command modifies how the session handles dates: alter session set NLS_DATE_FORMAT='DD/MM/YYYY';

  9. Concurrently write to multiple files in a session with partitioned targets.

  10. Flat File Enhancements :
  • Reduced conversion of data types
  • Delimited file performance has improved
  • Flat file now can have integer and double data types
  • Data can be appended to existing flat files
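
A small sketch of the new regular-expression functions from item 3, using a hypothetical PHONE port (comments use the transformation language's -- syntax):

REG_MATCH(PHONE, '[0-9]{3}-[0-9]{3}-[0-9]{4}')
-- returns TRUE for a value like 123-456-7890

REG_EXTRACT(PHONE, '([0-9]{3})-([0-9]{3})-([0-9]{4})', 1)
-- returns the first captured group, 123 in the example above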

SATURDAY, 3 JANUARY 2009


How to transpose data in Informatica

Suppose we have input data coming in as


Firstname1
Ph1
Address1


Firstname2
Ph2
Address2


Firstname3
Ph3
Address3


You want the output to look like the following, i.e., you want to convert the 9 rows into 3 rows, with the fields of each group together:


Firstname1 Ph1 Address1
Firstname2 Ph2 Address2
Firstname3 Ph3 Address3


You can create the mapping by following these steps:
  1. Create a source qualifier (SQ) with one input field, FIELD1

  2. Add an expression transformation after the SQ

  3. Link FIELD1 from the SQ to the expression transformation

  4. Add a new output port named GRP = SUBSTR(FIELD1, LENGTH(FIELD1), 1)

  5. Add an aggregator transformation after the expression and drag the FIELD1 and GRP fields into it

  6. Set GRP as the group-by field and add three new output ports (the comparisons are case-sensitive, so match the case of your data):
  OP_FLD1 = LAST(FIELD1, SUBSTR(FIELD1,1,1)='F')
  OP_FLD2 = LAST(FIELD1, SUBSTR(FIELD1,1,1)='P')
  OP_FLD3 = LAST(FIELD1, SUBSTR(FIELD1,1,1)='A')
  7. Link OP_FLD1, OP_FLD2, and OP_FLD3 to the target
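
For illustration, the same pivot logic expressed as plain SQL over a hypothetical table SRC with the single column FIELD1; this mirrors what the expression and aggregator do together:

select max(case when substr(FIELD1,1,1) = 'F' then FIELD1 end) OP_FLD1,
       max(case when substr(FIELD1,1,1) = 'P' then FIELD1 end) OP_FLD2,
       max(case when substr(FIELD1,1,1) = 'A' then FIELD1 end) OP_FLD3
from SRC
group by substr(FIELD1, length(FIELD1), 1)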
