Sunday, 27 October 2013

Choosing Filters When You Define Contexts..

This question came to my mind when I hit a data issue in the production environment. Our BO tool was generating 3 queries - I am assuming different contexts were defined at the universe - and the filter the user had chosen at the query level was actually getting applied to only one query. The user expected some records to be filtered out; however, since the filter applied to only one query, the report returned all records, with null values.

Let's re-create the same scenario below.

In our earlier post http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design_9960.html we saw how you can create a context at the universe level.

We will use the same example now.

We defined 2 contexts in our earlier post:
1. To get the salary for each employee, joining the Employee fact to the Emp and Dep dimensions.
2. To get the salary for each department, joining the Department fact to the Dep dimension.

At the universe level, your joins will look as below.

As you can see, we have 2 contexts defined, each with its respective joins.

Report Requirement
We have to generate a report that gives Department_Name, Employee_Name, Salary_Per_Employee and Salary_Per_Department, restricted to departments having Salary_Per_Department less than 15000.

You will choose your objects in the BO Query Panel as below.


You have pulled the Salary_Per_Department condition into your BO query. If you refresh the report, you will expect it to return only the departments and employees having a department salary less than 15000.

However, the report returns all records and nothing gets filtered out.

Why is that so?
If we examine the 2 queries generated by BO, the filter condition we specified gets applied to only one of them, as below.

This is Query 1, generated from the context joining Departments, Employees and the Employee fact. It does not carry the filter condition which the user expects to be applied.

This is Query 2, from the context joining Departments and the Department fact, and it does carry the filter condition which the user specified.
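For illustration, the two generated statements conceptually look something like the sketch below (table, column and alias names are simplified for this example; the SQL that BO actually generates will differ in detail):

-- Query 1 (Employee context): the user's filter is nowhere to be seen
SELECT DEP.DEPARTMENT_NAME, EMP.EMPLOYEE_NAME, SUM(EMP_FACT.SALARY)
FROM DEPARTMENTS DEP, EMPLOYEES EMP, EMP_FACT
WHERE EMP.DEPARTMENT_ID = DEP.DEPARTMENT_ID
AND EMP_FACT.EMPLOYEE_ID = EMP.EMPLOYEE_ID
GROUP BY DEP.DEPARTMENT_NAME, EMP.EMPLOYEE_NAME

-- Query 2 (Department context): the filter lands only here
SELECT DEP.DEPARTMENT_NAME, SUM(DEP_FACT.SALARY)
FROM DEPARTMENTS DEP, DEP_FACT
WHERE DEP_FACT.DEPARTMENT_ID = DEP.DEPARTMENT_ID
GROUP BY DEP.DEPARTMENT_NAME
HAVING SUM(DEP_FACT.SALARY) < 15000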

The BO tool will then do a FULL OUTER JOIN of the result sets of the above 2 queries.

The second query keeps only the departments having a salary less than 15000; the first query does not apply the filter at all. So, in the final result, the user still sees all departments.

Is this because the user added a fact as the filter condition?
No, it is not because of that. Let's add one more table - LOCATIONS - to the universe using the same structure.

As you see here, we have added the LOCATIONS table and exposed all of its columns to the universe as dimensions.

Save the universe, and now generate the same query again, this time with the filter condition City = 'Bombay', as below.

In the earlier example, the filter we added was a fact; this time it is a pure dimension. As before, the BO tool will generate 3 queries depending upon the contexts.

Query 1 was generated using the first context, joining Employee_FACT and Employee_DIM.

 Query 2 got generated using context Departments_FACT and Departments
Query 3 got generated using context Departments_FACT and Departments

As you can see, the dimension filter which we added to the BO query got applied to only one query out of three. So in the final report you will get all records, not only those for the city Bombay.

This situation occurs only when you hand your universe over for ad-hoc reporting. In the case of canned reports you do not have to worry.

Remedy / Workaround --
To be honest, I am not yet sure of a workaround for this; I am still hoping to find one.

Generating Target Files Dynamically -- Part 2

In my last post [the link is below if you haven't gone through it] I explained how to generate a target file name dynamically -- that is, how to assign your target file name from the mapping.

http://gyaankatta.blogspot.in/2013/10/generating-target-file-name-dynamically.html

At the end of that post, I added a NOTE specifically stating that the arrangement works only for creating a single file.

Let's assume that you are loading 3 different source files through one mapping using indirect loading, and you also want to create the same number of output files, each with a different name.

How can you achieve that?
1. Indirect File Loading
As mentioned in the earlier post [link below], you need to set up indirect file loading and also add a port in the source for the currently processed file name. With this you can load any number of source files using a single mapping and still know which file each row came from.

http://gyaankatta.blogspot.in/2013/10/indirect-file-loading-and-saving.html

2. File Name column at Target
As per the requirement, you need to create the file names dynamically, meaning you need to pass the file name from the mapping; if you specify it only at the session level it will be a static name and you cannot create more than one file.

http://gyaankatta.blogspot.in/2013/10/generating-target-file-name-dynamically.html -- As per this post, make sure that your target has the additional port "FileName".


Now, if you connect your source and target as shown in the above image, you can assign a file name dynamically; however, you cannot create more than one file this way.

Why won't it create more than one file?
The mapping executes as part of a single transaction, and [as far as I understand] the first file name read from the source gets assigned as the target file name for the whole run.

To create more than one file, you need to let the session know that the mapping has to go through more than one transaction. The session will then create one target file for each transaction.

*You can find the same scenario described in the Informatica help files.

How can you create more than one transaction?
Using the Transaction Control transformation, you can create more than one transaction. So, whenever the source file name changes, you create a separate transaction [using the Transaction Control transformation], and the session will create a separate target file for each one.


You need to add the Transaction Control transformation just before the target definition. The logic is as below:

If the file name is the same, continue with the same transaction; otherwise start a new transaction.

Do we require a Sorter for sorting the file names?
No, we do not, as Informatica reads the listed files sequentially, so the file names will also arrive sequentially.

How will you know that the file name has changed?
You achieve that by creating a variable port and storing the previous value.
As you can see, we have defined a variable port to store the old value of the input file name, and we compare the old and current file names to identify a change.

Depending upon the value of ChangedFileFlag, we decide whether to start a new transaction or continue with the existing one, as sketched below.
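As a sketch, assuming the file name port coming from the source is called CurrentlyProcessedFileName (the port names here are illustrative, not a fixed convention), the Expression transformation and the Transaction Control condition could look like this:

-- Expression transformation (variable ports evaluate top to bottom,
-- so v_ChangedFileFlag still sees the previous row's file name)
v_ChangedFileFlag (variable) = IIF(CurrentlyProcessedFileName != v_OldFileName, 1, 0)
v_OldFileName     (variable) = CurrentlyProcessedFileName
o_ChangedFileFlag (output)   = v_ChangedFileFlag

-- Transaction Control transformation, transaction control condition
IIF(o_ChangedFileFlag = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

On the very first row the previous-value variable is still at its default, so the flag may mark a change; committing an empty transaction before the first row is harmless.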

Session Level Settings
These are similar to your indirect loading settings: you just have to set the source filetype to Indirect and give, as the source file name, the file which contains the actual list of files you are going to load.

You can give any target file name here; it will ultimately get overridden by the name you are passing from the mapping.

You are done with the mapping and session level configuration; just run the workflow and validate the results.





Saturday, 26 October 2013

Generating Target File Name Dynamically...

Generally we give our target file name in the Session task properties, where the file name is static [if it is not parameterized]. Suppose you have a situation in which you have to generate your target file name based on some conditions evaluated in the mapping.

How would you do that?
The Informatica Target Designer gives you a facility to assign the file name to your target from the mapping. This overrides the name you have defined at the session level.

As you see in the above image, when you edit your target on the Columns tab, there is one more button at the end. Its tooltip reads "Add FileName column to this table".

If you click on it, a new port appears - with the fixed name FileName - though you can change its precision and scale.

That is all; the remaining task is to pass the file name value to this port from your upstream transformations.

As stated, I have built a file name of my own using the function below [you can use any name you like], and connected that port to our newly created target port "FileName".

TO_CHAR(SYSTIMESTAMP(), 'SSSSS')
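For a more recognizable output name you could, for instance, concatenate a prefix and an extension (purely illustrative; use whatever naming convention you need):

'Output_' || TO_CHAR(SYSTIMESTAMP(), 'SSSSS') || '.csv'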

You are done with your mapping; just create the corresponding workflow and execute it. Note that with the arrangement above you can create only one file.

Note -- Suppose you have a situation in which you are doing indirect loading and, for each incoming file, you want to generate a corresponding target file, specifying the target file name dynamically; the above arrangement won't work for that. We will see that scenario in a later post.

Indirect File Loading and Saving Current Processing FileName..

I have asked this question in interviews to at least a dozen people; most of them could answer the first part, i.e. indirect file load; however, most were unaware of the second part, i.e. saving the current file name.

Part -1 Indirect File Load --

When do you need to use an indirect file load?
Let's assume you have files coming in from different countries, each file has the same structure, and you want to process all those files using a single mapping. You can achieve this with an indirect file load.

Import Source -- Let's assume you have 3 files, India.csv, UK.csv and USA.csv, coming from different countries and having the same file structure. You can import any one of them as the source.

Import Target -- Once you import your source, just drag it to the Target Designer so the same structure becomes your target.

Mapping -- Your mapping structure will be simple, i.e. Source -- Source Qualifier -- Target.


As you can see, the above mapping structure has only Source -- Source Qualifier -- Target.

The next task is to create a workflow --
Once you create a workflow for the above mapping, you need to change some settings on the Session task which you have created.

Specifying Indirect Load and Source File Name --
The first thing we need to change is the source filetype at the session level; by default it is Direct, and you need to change it to Indirect.

As shown in the above image, we have changed the source filetype to Indirect.
Also, the file name we have specified is FileList.csv -- this is not an actual data file you intend to load; it contains the names of all the files you plan to load.

In this case, the structure of FileList.csv will be as below.


As shown in this image, your file name [in case of an indirect load] points to a file containing the list of file names you are planning to load.
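For the three example files mentioned earlier, FileList.csv would simply contain one file name per line, something like:

India.csv
UK.csv
USA.csv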

Also, make sure that the file list and the data files are in the same directory, which you specify in the "Source file directory" option.

If the data files are at a different physical location, then you need to specify their full paths in FileList.csv.

Target Level Session Task Settings --
As in the image above, you just have to specify the target file directory and target file name.

Once you do that, you are done with the development and your workflow is ready for execution.

Part -2 Loading the Currently Processed File Name --
Let's assume your client asks you to also load the corresponding file name along with the data at your target, so that the customer can identify which region a particular record belongs to.

Change at Source -- As you need to populate the file name along with the data, you need to retrieve the currently processed file name along with the data from the source.

How will you get the currently processed file name?
As you can see in the image, the Informatica Source Analyzer gives you an option to populate the currently processed file name along with the other data. When you tick the check box at the source, one extra port automatically gets added at the end, as below.



As you can see, a new port gets added automatically when you check the above option.

Similarly, add one more port to your target to store the currently processed file name.

Note -- The new port added at the source will contain the whole source file path, not only the file name.

Connect the currently processed file name port from the source through to the target.

Save and validate your mapping, then refresh the workflow and execute it.

Now the output file which you specified above will get written with the new records [records won't get appended / updated if the file already exists].



Wednesday, 16 October 2013

Reading and Writing Multiple Files Using Single Source and Target at One Go..


Transpose Columns To Rows Using Union Transformation

This was a simple but confusing question asked in an interview.

The scenario was like this..

We have a source - flat file or relational - having a single record, as below

Source -- a,b,c,d,e,f,g,h
Target -- a,b,c,d
               e,f,g,h

Usually, after looking at the requirement, it looks like transposing columns to rows, and the first thing that comes to mind is using a Normalizer.

However, if we use a Normalizer [considering the occurrence as 4, which is what you are probably thinking], the result will be incorrect.

The solution is pretty simple, as follows.


As shown in the above image, we just need one Union transformation between the Source Qualifier and the Target.

We will create 2 input groups in the Union transformation as below -- having 4 ports each.


As shown in the above image, we have created 2 groups having 4 ports each, which converts the 8 columns into 2 rows of four columns each.
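For clarity, the port mapping would look roughly like this (column and port names are illustrative):

Group 1 (input):  a -> out1, b -> out2, c -> out3, d -> out4
Group 2 (input):  e -> out1, f -> out2, g -> out3, h -> out4
Output group:     out1, out2, out3, out4  -- each source row now produces 2 output rows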

How will you re-create Persistent Lookup Cache files on a particular date?

A few days back one of my friends was attending a client interview, and one of the questions was "How will you re-generate / refresh a persistent lookup cache on a particular schedule?" ... [another thing I realized is that even after cracking a company's interview, you again have to face a client round - even before your actual joining process starts].

Below are the approaches which we discussed.

Before thinking about how to refresh the persistent cache file, we will see how the file gets created.

Let's assume the scenario below..


We have 1. Departments [Oracle] as a Source,
2. Departments [as a Lookup]
3. Target as a Flat File.



As shown in the above image, you have to check the "Lookup cache persistent" box to enable the persistent cache [i.e. to create the persistent cache file].

To validate whether your session is creating the persistent cache file or not, run the workflow and check the session log, which is as below.


TT_11183 : Enabled using [1 (auto)] additional concurrent pipelines to build lookup caches. (Session likely will build or refresh [1] lookup caches; [0] on-demand only).

2013-10-17 23:59:58 : INFO : (7924 | MAPPING) : (IS | BodheeTree) : node01_mandar-ad09ce72 : TM_6660 : Total Buffer Pool size is 24000000 bytes and Block size is 65536 bytes.

2013-10-17 23:59:58 : INFO : (7924 | MAPPING) : (IS | BodheeTree) : node01_mandar-ad09ce72 : TT_11162 : INFO: Transformation [lkp_Dep]: Input Group Id=0: transforming up to 288 row(s) at a time.

2013-10-17 23:59:58 : INFO : (7924 | MAPPING) : (IS | BodheeTree) : node01_mandar-ad09ce72 : TT_11163 : INFO: Transformation [lkp_Dep]: Output Group Id=0: transforming up to 288 row(s) [288] at a time.

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21097 : Lookup Transformation [lkp_Dep]: Default sql to create lookup cache: SELECT DEPARTMENT_ID FROM DEPARTMENTS ORDER BY DEPARTMENT_ID

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21249 : Initializing Transform: lkp_Dep{{BLD}}

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21371 : Number of Input Transforms = [1]:

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21259 : Input Transform[0]: [lkp_Dep{{DSQ}}]

2013-10-17 23:59:58 : INFO : (7924 | DIRECTOR) : (IS | BodheeTree) : node01_mandar-ad09ce72 : TT_11184 : Starting additional concurrent pipeline to build the lookup cache needed by Lookup transformation [lkp_Dep].

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1:TRANSF_1_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21294 : DBG_21294 Lookup cache creation completed : (Thu Oct 17 23:59:58 2013)

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1:TRANSF_1_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : CMN_1671 : Created new cache files PMLKUP5_5_0W32.[dat/idx] in directory C:\Informatica\9.0.1\server\infa_shared\Cache for Lookup [lkp_Dep].

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1:TRANSF_1_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21641 : lkp_Dep: Index cache size = [71568000], Data cache size = [143138816]

2013-10-17 23:59:58 : INFO : (7924 | LKPDP_1:TRANSF_1_1) : (IS | BodheeTree) : node01_mandar-ad09ce72 : DBG_21216 : Finished transformations for Source Qualifier [lkp_Dep{{DSQ}}]. Total errors [0

As you can see in the session log, the lookup query got created and fired against the respective database to create the cache file.

However, when you run the same workflow a second time, the session log tells a somewhat different story, as below --

Severity Timestamp Node Thread Message Code Message

INFO 10/18/2013 12:10:58 AM node01_mandar-ad09ce72 LKPDP_1 TE_7212 Increasing [Index Cache] size for transformation [lkp_Dep] from [71567155] to [71568000].

INFO 10/18/2013 12:10:58 AM node01_mandar-ad09ce72 LKPDP_1 TE_7212 Increasing [Data Cache] size for transformation [lkp_Dep] from [143134310] to [143138816].

INFO 10/18/2013 12:10:58 AM node01_mandar-ad09ce72 LKPDP_1 CMN_1669 Using existing cache files PMLKUP5_5_0W32.[dat/idx] in directory C:\Informatica\9.0.1\server\infa_shared\Cache for Lookup [lkp_Dep].

INFO 10/18/2013 12:10:58 AM node01_mandar-ad09ce72 LKPDP_1 DBG_21641 lkp_Dep: Index cache size = [71568000], Data cache size = [143138816]

INFO 10/18/2013 12:10:58 AM node01_mandar-ad09ce72 READER_1_1_1 DBG_21438 Reader: Source is [ORCL], user [HR]
As the log says, the second time the workflow ran, it checked whether the cache file was present, and since it found the cache file at the respective location, it used the same file for the lookup cache.

Now, the file which got generated is a static file; it will not get updated automatically.

When will a persistent cache be useful?
When you are sure that the master table on which you are doing the lookup will not get updated or inserted into frequently, e.g. a fiscal year table which you populate once a year.

How will you update the cache file [with new records] or re-generate it?

1. If you delete / move the file from its location, Informatica will not find it and will automatically re-generate it.

So, you can create a command task [which runs at the end of every month] to delete / move the persistent cache file from its location.
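As a sketch, on the Windows installation used in this example the command task could simply run something like the below (the file and directory names are taken from the session log above; adjust them to your environment):

del "C:\Informatica\9.0.1\server\infa_shared\Cache\PMLKUP5_5_0W32.dat"
del "C:\Informatica\9.0.1\server\infa_shared\Cache\PMLKUP5_5_0W32.idx"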


As shown in the above image, we will create one more task to delete the persistent cache, and we will run this task only at month end. We will put the below condition on the link between task1 and task2.

DATE_DIFF (LAST_DAY(sysdate), sysdate,'DD') = 0
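For example, on 31-Oct-2013 LAST_DAY(SYSDATE) is also 31-Oct-2013, so the difference in days is 0 and the link evaluates to TRUE, letting the cleanup task run; on 15-Oct-2013 the difference is 16 and the task is skipped. (If the time portion of SYSDATE gives you fractional results in your environment, truncate both dates to the day before comparing.)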

2. Instead of deleting the persistent cache, we also have an option to re-build the cache at the session level itself.


As shown in the above image, tick the check box "Re-cache from lookup source", which re-generates the lookup cache files before the mapping actually runs.

Create one more session, as below, which will re-generate the cache files before the actual session runs.




As explained in point #1, s_RegenerateLookupCache will also run only at month end, with the below condition on its link.

DATE_DIFF (LAST_DAY(sysdate), sysdate,'DD') = 0


Sunday, 13 October 2013

Business Objects : Universe Design Scenario - Objects using Context

In the last 3 posts, we have seen how to use a Derived Table and Analytical Functions for universe objects at the universe level, and Merged Dimensions at the report level.

Below are the links for the same

Derived Table -- http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design.html
Analytical Function -- http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design_13.html
Merge Dimension -- http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design_6236.html

In this post, we will try to solve the same problem, but this time using contexts at the universe level.

Before actually creating a new universe, we will first analyze the scenario on the basis of which we will define the contexts.

We have the Employees and Departments tables [we do not have separate dimension and fact tables], which have a pure 3NF relationship. We are going to apply a dimensional modelling concept on top of this 3NF layer.

Logically we have 2 separate paths:
1. To get the employee information along with Salary as a fact
2. To get the department information along with Salary as a fact

Our logical model to implement this solution is as follows..

At the universe level, we will create 2 aliases of the Employees table, one for each fact, as described in the above image.

We will define 2 contexts as follows:
1. Employee Context -- We will get all employee information from this context, along with employee salary as a measure. It joins Emp Dim, Emp Fact and Dep Dim.
2. Department Context -- We will get all department information from this context, along with department-wise salary as a measure. It joins the Dep Dim and Dep FACT tables.

How will you create an Alias?

You just have to select the table of which you want to create an alias.

Click on the Insert menu, where you will get an option to create an alias of the selected table.

Create 2 aliases of the Employees table, named Emp FACT and Dep FACT respectively.




The final joins between the imported tables and the aliases will be as below.

The next task is to define contexts for the employee and department information separately.

How will you create a Context?

On the Insert menu, you will have an option to create a context in the universe.

Once you click on the Context option, a new window appears, prompting you to give the context a name and to select the joins which will participate in that context.

For our purpose, we will use the below joins for the 2 contexts.

1. Employee Context --

a. Join between Employees and Departments
b. Join between Employees and Employee Fact

2. Department Context --

a. Join between Departments and Department Fact






Once you define the contexts, the next job is to create classes and objects.

As we have defined 2 contexts, for Employees and Departments separately, we will create the objects accordingly.

1. The Employees class will have all employee-related dimensions, e.g. Employee Id, First Name, Last Name etc.
2. The Departments class will have department-related dimensions, e.g. Department Id, Department_Name.
3. The Measures class will have 2 measure objects coming from the 2 separate contexts (see the sketch after this list).
   a. Salary_Per_Employee -- will come from the Emp FACT table
   b. Salary_Per_Department -- will come from the Dep FACT table.
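For reference, the SELECT of the two measure objects would be defined roughly as below (a sketch using this post's alias names; the exact quoting of the alias depends on your connection, and sum() is the usual aggregate for a measure):

Salary_Per_Employee   : sum("Emp FACT".SALARY)
Salary_Per_Department : sum("Dep FACT".SALARY)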

Finally, our universe will appear something like the below.

We are done with the universe creation; the next and final part is creating a report.


As soon as you pull objects belonging to 2 different contexts into the Query Panel, the BO tool creates 2 separate queries, one from each context.

As you can see in the image, BO has created 2 separate queries.

1. Using Department Context, which will give Department Name and Salary_Per_Department
2. Using Employees Context, which will give Employee Information along with Salary_Per_Employee

Ultimately the tool will join the results of these queries internally on the basis of their common columns, in this case Department_ID.

Note -- If you define contexts, the BO tool creates the 2 separate queries for you on its own and merges their results internally on the basis of the common dimension. However, when you define 2 separate queries in BO yourself, you have to join their result sets by defining a merged dimension, which we have seen earlier.

Business Objects : Universe Design Scenario - Merged Dimensions

In the last 2 posts, we have seen a scenario of creating a sample Business Objects report using a Derived Table and Analytical Functions. The links for those posts are below, respectively.

http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design.html

http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design_13.html

In this post, we will see how we can use the Merged Dimensions facility to create a report.

Our universe will be very simple - we will not use any Derived Table or Analytical Functions. It will only have the source tables, with the respective classes / objects created as below.

As shown in the above image, we will have 3 classes - one for each source table's dimension objects, and one for the measure objects.

As per our reporting requirements, the below 7 columns are already available with the current universe structure.

1. Employee_id
2. First_Name,
3. Last_Name
4. Email
5. Phone_Number
6. Salary
7. Department_Name

The remaining task is to get a department-wise salary, which we will derive at the report level.

In the earlier 2 posts, we created only 1 query at the report level; however, this time we will create 2 queries [as we do not have the Salary_Per_Department value coming directly from the source].

Query 1 -- All employee related information such as Employee_Id, First_Name, Last_Name, Email, Salary, Department_Id


Query 2 -- All departments related information such as Department_Id, Department_Name, Salary

Here, each query gives us a separate result - the first with employee information along with salary, and the second with department information along with department-wise salary.

Our task is to merge these 2 query results at the report level.


How will you merge 2 query results?

To merge the results of 2 separate queries, you need to have some common dimension between the two. In this case the common dimension is Department_Id.

The Business Objects reporting tool automatically identifies common dimensions between the 2 queries [if you have selected the same object from the same universe] and shows them as Merged Dimensions in the report query panel.


As you can see in this image, the Business Objects tool automatically separated Department_ID out as a merged dimension from Query 1 and Query 2 respectively.

Our next task is to merge Employees and Departments reports together.

We will create 2 separate variables, as below, for the department data.

1. Department_Name -- This will be a detail object, associated with the Department_Id merged dimension from Query 4.

2. Salary_Per_Department -- This will be a simple measure object, based on the Salary object of Query 4.



Detail object -- Department_Name. If you are merging dimension objects from 2 different queries, you need to create a detail object and associate it with the parent dimension of the respective query.

Measure Object -- Salary_Per_Department.

Now all the columns for our report are ready. The remaining report preparation is the same as we have seen earlier.

Note -- Usually we merge 2 queries using merged dimensions to avoid universe-level changes.

Business Objects : Universe Design Scenario - Objects using Analytical Function

In the earlier post -- http://gyaankatta.blogspot.in/2013/10/business-objects-universe-design.html -- we saw how we can use a Derived Table while creating a Business Objects universe.

In this post, we will look at the same scenario and try to build the same report; however, this time instead of using a Derived Table, we will import the source tables as they are.

What does the universe structure look like?


So, this time instead of creating a Derived Table, we import the source tables - Departments and Employees - as they are.

Once you import these tables, you need to define the relationship / join between the 2 tables.



As we need all records from the Employees table, we have checked the Outer Join box for Table1, and the join between the 2 tables is based on the Department_Id column, which is common to both.

Once you define the relationship between the 2 tables, create the respective classes; define separate classes to identify each subject area and to separate measure and dimension objects.

So, we will create 3 separate classes:
1. Departments -- in which all the department dimension objects reside
2. Employees -- in which all the employee dimension objects reside
3. Measures -- in which the measure objects related to Departments and Employees reside.

Now, we have almost all objects [dimensions and measures] ready, except Salary_Per_Department.

To derive the Salary_Per_Department measure, we will use an analytical function while creating the measure object. Business Objects allows us to use analytical functions at the time of universe creation.
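The SELECT of the Salary_Per_Department object can reuse the analytical expression from the derived-table post, roughly as below (a sketch; the exact table qualification depends on how the tables were inserted into the universe):

CASE WHEN DEPARTMENTS.DEPARTMENT_ID IS NULL THEN 0 ELSE SUM(EMPLOYEES.SALARY) OVER (PARTITION BY DEPARTMENTS.DEPARTMENT_ID) END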

We are done with all the objects and the universe creation for our reporting requirement.

The subsequent process of creating a report is the same as we have seen earlier.

Note -- Creating a universe like this is a better approach than the earlier one. With this approach we can use all columns from both the Departments and Employees tables, and the universe structure stays simple.

Business Objects : Universe Design Scenario - Derived Table


Today we are going to look at a simple scenario using the Employees and Departments tables in the HR schema [you get this schema created automatically once you install Oracle's default ORCL or XE databases].


Requirement -- We have 2 normalized tables, Employees and Departments. The structure of the tables is as below.

We need a single report which will have columns as
1. Employee_ID,
2. First_Name,
3. Last_Name,
4. Email,
5. Phone_Number,
6. Salary,
7. Department_Name,
8. Salary_Per_Department,
9. % of Salary [Salary_per_Employee / Salary_Per_Department]




Universe Design -- Columns 1 to 6 are direct columns coming from the Employees table, and even Department_Name you can get directly by joining the Departments table to the Employees table; however, the pain area is getting the value for Salary_Per_Department.


Before actually creating a universe, we will try to derive this report using a SQL query..


SELECT EMPLOYEE_ID,
FIRST_NAME,
LAST_NAME,
EMAIL,
PHONE_NUMBER,
SALARY,
DEP.DEPARTMENT_ID,
DEPARTMENT_NAME
FROM EMPLOYEES EMP LEFT JOIN DEPARTMENTS DEP
ON EMP.department_ID = dep.department_id;


As said earlier, the first 7 columns come directly from the source tables; now, to derive Salary_Per_Department we will use an analytical function, so the final query will be as below.


SELECT EMPLOYEE_ID,
FIRST_NAME,
LAST_NAME,
EMAIL,
PHONE_NUMBER,
SALARY,
DEP.DEPARTMENT_ID,
DEPARTMENT_NAME,
CASE WHEN DEP.DEPARTMENT_ID IS NULL THEN 0 ELSE SUM(SALARY) OVER(PARTITION BY DEP.DEPARTMENT_ID) END AS SALARY_PER_DEPARTMENT
FROM EMPLOYEES EMP LEFT JOIN DEPARTMENTS DEP
ON EMP.department_ID = dep.department_id;
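For example, if a department has two employees earning 3000 and 4000 (illustrative figures), both of their rows will show SALARY_PER_DEPARTMENT = 7000, while an employee with no department gets 0 because of the CASE branch.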


As we are now ready with our report query, we will use it at the time of universe design as a Derived Table.


How will you create a universe?

If you have installed BO 3.0, then under Program Files you will have a menu item called "Universe Designer" [in the case of BO 4.0 it is called the Information Design Tool].
After you log in --
1. At the top-left corner, you will have a menu to create a new universe. Once you click on it, a new window appears.
2. You need to give the name of the universe you are going to create.
3. Create a new connection if you do not have one already.
4. Click OK.




The next step is to import the tables into the universe which you have created.




You will get options to import tables [using the connection which you specified at the time of universe creation].

Here, instead of importing tables into the universe, we will create a Derived Table.

As the query written above is sufficient for our reporting requirement, we will use the same query while creating the derived table.



How will you create classes and objects?

You need to create the respective classes and objects for your report generation. You create these in the universe itself.

Just drag your derived table to the left window panel, and a new class [named the same as your derived table] with objects [all columns of your derived table] gets created automatically.

All objects get created as dimensions by default; you can change their qualification to Measure or Detail as required. In this case we will change the qualification of Salary and 'Salary Per Department' to Measure.

As shown in the image, you can change the qualification of a specific object to Dimension, Measure or Detail depending on the requirement.


Now you have your universe with respective objects ready for the reporting.

How will you use the universe for reporting?

Normally you can use BO InfoView, BO Rich Client or BO Full Client for reporting purposes. For the sake of this explanation I am going to use BO Rich Client.

Under Program Files, you will have an installed menu item called "Web Intelligence Rich Client".


Once you log into it, a new window appears which prompts you [first option] to choose a universe for your reporting. Select the universe which you have just created.


Once you import a universe into the Rich Client, a new query panel appears, as above, to create a new report query.

You can see on the left side panel the same objects which you created in the universe. You need to drag your objects - as per your reporting requirement - to the Result Objects panel, and any filters to the Query Filters panel.


As per the requirement, we have selected the respective objects in the Result Objects and Query Filters panels and clicked Run Query.

A report gets created with all records having the department name "Finance". Now the next task is to add a new column, % of Salary [Salary_per_Employee / Salary_Per_Department], at the report level.


Right-click on the last column of the report and a popup will appear to add a new column; select "Insert column to the right" in this case.

Once the new column gets added, press "Ctrl + Enter" -- a new window appears wherein you can enter the required formula, i.e. Salary / Salary_Per_Department * 100.

 
As shown in the above image, write the formula in the Formula Editor and click OK.
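In Web Intelligence formula syntax that would look roughly like the below (object names as created in the universe above):

=[Salary] / [Salary Per Department] * 100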

All corresponding values will appear in the new derived column as per the specified formula.
You can find the sample universe and report explained in the above example at the below URL.

Note -- We have not added the original Employees and Departments tables to the universe, just for the sake of this example. By not adding these tables to the universe, we have actually restricted the report functionality, which is now limited to the objects available in the Derived Table. As a general practice, I have seen people avoid, or try to avoid, the use of Derived Tables in a universe.