 

Target Load Order/ Target Load Plan in Informatica

Target Load Order:

Target load order (also called target load plan) specifies the order in which the Integration Service loads the targets. The order is defined on the Source Qualifier transformations in a mapping: when multiple Source Qualifier transformations are connected to multiple targets, you can specify the order in which the Integration Service loads data into those targets.

Target Load Order Group:

A target load order group is the collection of source qualifiers, transformations, and targets linked together in a mapping. The Integration Service reads the sources within a target load order group concurrently, but processes the target load order groups themselves sequentially.

Use of Target Load Order:

Target load order is useful when the data of one target depends on the data of another target. For example, the EMPLOYEES table depends on the DEPARTMENTS table because of their primary key-foreign key relationship, so the DEPARTMENTS table should be loaded first and then the EMPLOYEES table. Target load order lets you maintain referential integrity when inserting, deleting, or updating tables that have primary key and foreign key constraints.
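To see why the load order matters, here is a minimal sketch of the same dependency in SQLite; the table and column names are illustrative, not taken from any Informatica mapping:

```python
import sqlite3

# Hypothetical DEPARTMENTS/EMPLOYEES schema showing why the parent table
# must be loaded before the child when a foreign key is enforced.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
conn.execute("CREATE TABLE departments (dept_id INTEGER PRIMARY KEY, dept_name TEXT)")
conn.execute("""CREATE TABLE employees (
    emp_id  INTEGER PRIMARY KEY,
    name    TEXT,
    dept_id INTEGER REFERENCES departments(dept_id))""")

# Loading a child row before its parent violates the constraint.
try:
    conn.execute("INSERT INTO employees VALUES (1, 'Smith', 10)")
except sqlite3.IntegrityError as e:
    print("load failed:", e)

# Correct order: parent first, then child.
conn.execute("INSERT INTO departments VALUES (10, 'ACCOUNTING')")
conn.execute("INSERT INTO employees VALUES (1, 'Smith', 10)")
```

The first insert is rejected for exactly the reason a session without a proper target load order would fail or leave orphan rows.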
Sending records to target tables in cyclic order 

Get the top 5 records into the target without using the Rank transformation 


Sending alternate record to target

Sending the first half of the records to one target and the second half to another target 


Remove header and footer from your file 
Solution:


 We can remove the header by skipping the first row while importing the source, but for the footer we have to implement logic as follows:
  • Use an Expression transformation to assign a sequence number to each row.
  • Use a Sorter after the Expression and sort in descending order on the generated sequence number, so the footer rows come first.
  • Assign a fresh sequence number to the sorted rows in a second Expression; as we have five footer lines, use a Filter with the condition sequence no > 5.
  • Finally, connect to the target.
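The steps above amount to "drop row 1 and the last five rows". A minimal sketch of the equivalent logic in Python (the sample data is made up, and the footer is assumed to be exactly five lines as in the scenario):

```python
# Number every row like the Expression transformation does, then keep only
# the rows after the header (seq > 1) and before the 5-line footer block.
def strip_header_footer(lines, footer_rows=5):
    numbered = list(enumerate(lines, start=1))   # (sequence_no, row)
    total = len(numbered)
    return [row for seq, row in numbered
            if seq > 1 and seq <= total - footer_rows]

# 1 header line, 7 data records, 5 footer lines
data = ["HEADER"] + [f"rec{i}" for i in range(1, 8)] + [f"FOOT{i}" for i in range(1, 6)]
print(strip_header_footer(data))
```

Only the seven `rec…` rows survive, which is what the Filter condition achieves inside the mapping.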



   
Retrieving first and last record from a table/file 


  1. Separating duplicate and non-duplicate rows into separate tables.
    Solution: This can be achieved as follows:
     
    After the Source Qualifier, use an Aggregator transformation and add an output port o_COUNT
    with the expression COUNT(ID); select group-by on all the other ports. 
    Take a Router transformation, drag all ports from the Aggregator, and create two groups with the following conditions:
    Unique = o_COUNT = 1
    Duplicate = o_COUNT > 1
    Connect the respective groups to their targets.
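The Aggregator + Router logic boils down to counting occurrences per key and routing on the count. A small sketch with made-up IDs:

```python
from collections import Counter

# Count occurrences of each key (the Aggregator's o_COUNT), then route
# the keys into the Unique and Duplicate groups exactly as the Router does.
rows = [101, 102, 102, 103, 104, 104, 104]
counts = Counter(rows)                                       # o_COUNT per ID
unique    = sorted(k for k, c in counts.items() if c == 1)   # o_COUNT = 1
duplicate = sorted(k for k, c in counts.items() if c > 1)    # o_COUNT > 1
print(unique, duplicate)  # [101, 103] [102, 104]
```

Note that, like the Aggregator solution, this emits one row per duplicate key rather than every copy of it.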



Convert a single row from the source into three rows in the target.
Solution: Let's assume our source and target structures are as follows:
  


 



A) By using a Java Transformation
Use a Java transformation after the Source Qualifier and drag all ports to it, then create output ports o_ID, o_NAME, o_SALARY and write the required code in the Java Code tab.
Compile the code and connect the output ports to the target.
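The Java code and the source/target structures are not reproduced above, but the row-expansion logic can be sketched in Python, assuming the common variant of this scenario where a source row (ID, NAME, SAL1, SAL2, SAL3) becomes three target rows (o_ID, o_NAME, o_SALARY); the column layout is an assumption:

```python
# Expand one input row into one output row per salary column, the same
# effect generateRow() produces inside the Java transformation.
def expand_row(row):
    id_, name, *salaries = row
    return [(id_, name, sal) for sal in salaries]

print(expand_row((1, "Ray", 1000, 2000, 3000)))
# [(1, 'Ray', 1000), (1, 'Ray', 2000), (1, 'Ray', 3000)]
```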


Error Handling
Unit Testing
SCD Type3 


SCD Type2 









SCD TYPE1 


Constraint Based Loading

Column to Rows Conversion

 

Rows to column conversion

 

 


 

 

Concurrent execution of a workflow.

Separate even and odd records

 
Demonstrate Incremental Aggregation.

Demonstrate Incremental load.

How to load multiple files with different structures.

Add custom message to session log file.

  1. Count the number of vowels present in the emp_name column
Insert and reject records using update strategy
Minus operation on flat file

Sending data one after another to three tables in cyclic order 


Removing '$' symbol from salary column 

Using mapping parameter and variable in mapping
Produce files as target with dynamic names 

Lead and Lag in Informatica.
Concatenation of duplicate values with comma separation 

Extracting every nth row 

Many of us know the importance of a primary key. A primary key is a unique identifier for the rows in a table, so by using the primary key we can easily find a unique row. But when no single column identifies a row uniquely and we need a combination of columns to do so, we go for a composite primary key.
Syntax:
create table <table_name> (column1 datatype(size), column2 datatype(size), ..., column_n datatype(size), primary key (column1, column2));
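The syntax above can be tried out directly with Python's built-in sqlite3 module; the table below is a cut-down Book_Master where (ISBN, BOOK_ID) together form the composite primary key:

```python
import sqlite3

# Composite primary key: only the (isbn, book_id) PAIR must be unique,
# so the same ISBN may repeat as long as the BOOK_ID differs.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE book_master (
    isbn    INTEGER,
    book_id INTEGER,
    price   INTEGER,
    PRIMARY KEY (isbn, book_id))""")

conn.execute("INSERT INTO book_master VALUES (9780134685991, 1, 450)")
# Same ISBN, different BOOK_ID: allowed.
conn.execute("INSERT INTO book_master VALUES (9780134685991, 2, 450)")
try:
    # Same (isbn, book_id) pair: rejected by the composite key.
    conn.execute("INSERT INTO book_master VALUES (9780134685991, 1, 500)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```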

Let's take a scenario where we have to go for a composite key: designing a database for a library.
For a library database we will have the following tables:
  • Book_Master
  • Author_details
  • Publisher_details
  • Staff_details
  • Student_details
  • Book_details
  • Book_issue_details
Book_Master

COLUMN_NAME      DATA TYPE   CONSTRAINTS
ISBN             NUMBER      PRIMARY KEY (part of composite key)
BOOK_ID          NUMBER      PRIMARY KEY (part of composite key)
PRICE            NUMBER
PURCHASE_DATE    DATE
Author_details

COLUMN_NAME    DATA TYPE   CONSTRAINTS
AUTHOR_ID      NUMBER      PRIMARY KEY
AUTHOR_NAME    VARCHAR

Publisher_details

COLUMN_NAME    DATA TYPE   CONSTRAINTS
PUB_ID         NUMBER      PRIMARY KEY
PUB_NAME       VARCHAR


Staff_details

COLUMN_NAME    DATA TYPE   CONSTRAINTS
STAFF_ID       NUMBER      PRIMARY KEY
STAFF_NAME     VARCHAR
DESIGNATION    VARCHAR
DEPARTMENT




Book_details

COLUMN_NAME    DATA TYPE   CONSTRAINTS
ISBN           NUMBER      FOREIGN KEY
BOOK_NAME      VARCHAR
AUTHOR_ID      NUMBER      FOREIGN KEY
PUB_ID         NUMBER      FOREIGN KEY
COPIES         NUMBER
Book_issue_details

COLUMN_NAME    DATA TYPE   CONSTRAINTS
ISBN           NUMBER      FOREIGN KEY
BOOK_ID        NUMBER
STUDENT_ID     NUMBER      FOREIGN KEY
ISSUE_DATE     DATE
ISSUED_BY      VARCHAR     FOREIGN KEY