Thursday, January 9, 2014

SAP HANA Videos

Hi All,
Please find the attached sample videos on SAP BI/BO/HANA (all together). I can guarantee that these are the best videos for getting started in real time, with explanations from very basic to advanced level.
===================================
Write an email to ->> mailtosapbiw@gmail.com
===================================

It is 15 GB of data in total that I can share with you. It includes:

- Complete set of videos
- Two sets of videos
- Materials
- Real-time scenarios
- Exercises
- All materials
- Extra videos
- Interview preparation

Video 1: [embedded sample video]


Let me know if you are interested; I can share the video links at a low price.
Once you make the payment, I will give you the rights to download the complete 15 GB of data.

All the best !!


Regards,
Venkat


Friday, December 6, 2013

SAP BI Performance Improvement Techniques

Performance can mean Reporting Performance, Load Performance, or General Performance.

For improving Reporting Performance:
-Create aggregates on InfoCubes
-Use the OLAP cache to buffer query results and reduce the burden on the database
-Pre-calculated web templates help shift the workload of running reports to off-peak hours and keep the report result set ready for very fast access to the data
-Start queries from a small result set and drill down from there
-Avoid reporting on ODS objects
-If you use exclusions (<>) in reporting, the indexes are not used; avoid exclusions and use inclusions instead
-Use the read mode "read when navigating and expanding the hierarchies"
-Compress the InfoCube, since the E table is optimized for queries
-Create additional indexes under Manage Data Target, Performance tab
-Run DB statistics often

For improving Load Performance:
-Check the ABAP code in transfer and update rules; inefficient code slows loads (see the start routine sketch after this list)
-Keep more dialog processes available and balance the load across different servers
-Create indexes on the source tables
-Use fixed-length files if you load data from flat files, and put the files on the application server
-Prefer SAP-delivered standard extractors
-Use the "PSA and data target in parallel" option in the InfoPackage load settings
-Start several InfoPackages in parallel with different selection options
-Load master data before loading transaction data
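To make the first point concrete, the cheapest load-time win is usually to shrink the data package once in a start routine before any per-record rule logic runs. A simplified BW 3.x-style sketch: the real generated form carries additional USING parameters, and the transfer structure name and the field DOC_CATEG are hypothetical.

FORM startroutine
  TABLES   DATA_PACKAGE STRUCTURE /BIC/CS2LIS_13_VDITM  " assumed structure
  CHANGING ABORT LIKE sy-subrc.

* Drop records that are never reported on before any per-record
* transfer/update logic runs; fewer records means a faster load.
  DELETE DATA_PACKAGE WHERE doc_categ <> 'C'.

* 0 = continue processing the (reduced) package
  ABORT = 0.
ENDFORM.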

For improving General Performance:
-Archive and delete old data
-Use line-item dimensions for large dimensions
-Use the BW statistics cube to monitor performance
-If you are not going to use an ODS for reporting, disable the BEx reporting flag

SAP BI INTERVIEW QUESTIONS

1. What are the extractor types?
· Application Specific →
o BW Content →
§ FI, HR, CO, SAP CRM, LO Cockpit
o Customer-Generated Extractors →
§ LIS, FI-SL, CO-PA
· Cross Application (Generic Extractors) →
o DB View, InfoSet, Function Module
2. What are the steps involved in LO Extraction?
· The steps are:
o RSA5 → Select the DataSources
o LBWE → Maintain DataSources and activate extract structures
o LBWG → Delete setup tables
o OLI*BW → Fill setup tables
o RSA3 → Check the extraction and the data in the setup tables
o LBWQ → Check the extraction queue
o LBWF → Log for LO extract structures
o RSA7 → BW delta queue monitor
3. How to create a connection with LIS InfoStructures?
· LBW0 → Connecting LIS InfoStructures to BW
4. What is the difference between ODS and InfoCube and MultiProvider?
· ODS: Provides granular data, allows overwrite and data is in transparent tables, ideal for drilldown and RRI.
· CUBE: Follows the star schema, we can only append data, ideal for primary reporting.
· MultiProvider: Does not have physical data. It allows to access data from different InfoProviders (Cube, ODS, InfoObject). It is also preferred for reporting.
5. What are Start routines, Transfer routines and Update routines?
· Start Routines: The start routine is run for each DataPackage after the data has been written to the PSA and before the transfer rules have been executed. It allows complex computations for a key figure or a characteristic. It has no return value. Its purpose is to execute preliminary calculations and to store them in global DataStructures. This structure or table can be accessed in the other routines. The entire DataPackage in the transfer structure format is used as a parameter for the routine.
· Transfer / Update Routines: They are defined at the InfoObject level. It is like the Start Routine. It is independent of the DataSource. We can use this to define Global Data and Global Checks.
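To make the "global DataStructures" point concrete, here is a hedged sketch: the start routine buffers lookup data once per package into a global table, and later routines read the buffer instead of hitting the database per record. The lookup table ZMAT_PRICE and all field names are assumptions.

* Global part of the routines (declared once, visible everywhere):
DATA: gt_price TYPE STANDARD TABLE OF zmat_price.  " assumed lookup table

* In the start routine: buffer everything this package will need.
* (Guard against an empty package before FOR ALL ENTRIES in real code.)
SELECT * FROM zmat_price INTO TABLE gt_price
  FOR ALL ENTRIES IN DATA_PACKAGE
  WHERE material = DATA_PACKAGE-material.
SORT gt_price BY material.

* In a later transfer/update routine: read the buffer, not the DB.
DATA: ls_price TYPE zmat_price.
READ TABLE gt_price INTO ls_price
  WITH KEY material = COMM_STRUCTURE-material
  BINARY SEARCH.
IF sy-subrc = 0.
  RESULT = ls_price-price.
ENDIF.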
6. What is the difference between start routine and update routine, when, how and why are they called?
· The start routine runs once per data package when the InfoPackage is executed, while update routines run while updating the data targets.
7. What is the table that is used in start routines?
· Always the table structure will be the structure of an ODS or InfoCube. For example if it is an ODS then active table structure will be the table.
8. Explain how you used Start routines in your project?
· Start routines are used for mass processing of records. In the start routine, all the records of the DataPackage are available for processing, so we can process them all together. In one scenario, we wanted to apply size percentages to the forecast data. For example, if material M1 is forecast at, say, 100 in May, then after applying the size percentages (Small 20%, Medium 40%, Large 20%, Extra Large 20%) we wanted to have 4 records against the single record coming in through the InfoPackage. This was achieved in the start routine.
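A hedged sketch of that size split as a BW 3.x start routine fragment; the fields SIZE_CODE and FORECAST_QTY on DATA_PACKAGE are assumptions taken from the scenario above.

* Expand each forecast record into four size-level records
* (Small 20%, Medium 40%, Large 20%, Extra Large 20%).
CONSTANTS: c_small TYPE p DECIMALS 2 VALUE '0.2',
           c_med   TYPE p DECIMALS 2 VALUE '0.4'.

DATA: lt_split LIKE DATA_PACKAGE OCCURS 0,
      ls_in    LIKE LINE OF DATA_PACKAGE,
      ls_out   LIKE LINE OF DATA_PACKAGE.

LOOP AT DATA_PACKAGE INTO ls_in.
  ls_out = ls_in.
  ls_out-size_code = 'S'.  ls_out-forecast_qty = ls_in-forecast_qty * c_small.
  APPEND ls_out TO lt_split.
  ls_out-size_code = 'M'.  ls_out-forecast_qty = ls_in-forecast_qty * c_med.
  APPEND ls_out TO lt_split.
  ls_out-size_code = 'L'.  ls_out-forecast_qty = ls_in-forecast_qty * c_small.
  APPEND ls_out TO lt_split.
  ls_out-size_code = 'XL'. ls_out-forecast_qty = ls_in-forecast_qty * c_small.
  APPEND ls_out TO lt_split.
ENDLOOP.

* Replace the incoming package with the expanded set.
DATA_PACKAGE[] = lt_split.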
9. What are Return Tables?
· When we want to return multiple records, instead of single value, we use the return table in the Update Routine. Example: If we have total telephone expense for a Cost Center, using a return table we can get expense per employee.
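A hedged sketch of a key-figure update routine that uses the return table for the cost-center example above; the lookup table ZEMPLOYEES, its fields, and the COMM_STRUCTURE fields are assumptions.

* Split the cost center's telephone expense evenly across its
* employees, returning one record per employee via RESULT_TABLE.
DATA: ls_result LIKE LINE OF RESULT_TABLE,
      lt_emp    TYPE STANDARD TABLE OF zemployees,  " assumed table
      ls_emp    TYPE zemployees,
      lv_count  TYPE i.

SELECT * FROM zemployees INTO TABLE lt_emp
  WHERE costcenter = COMM_STRUCTURE-costcenter.

DESCRIBE TABLE lt_emp LINES lv_count.
CHECK lv_count > 0.

LOOP AT lt_emp INTO ls_emp.
  MOVE-CORRESPONDING COMM_STRUCTURE TO ls_result.
  ls_result-employee = ls_emp-employee.
  ls_result-expense  = COMM_STRUCTURE-expense / lv_count.
  APPEND ls_result TO RESULT_TABLE.
ENDLOOP.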
10. How do start routine and return table synchronize with each other?
· The return table is used to return the values following the execution of the start routine.
11. What is the difference between V1, V2 and V3 updates?
· V1 Update: It is a Synchronous update. Here the Statistics update is carried out at the same time as the document update (in the application tables).
· V2 Update: It is an Asynchronous update. Statistics update and the Document update take place as different tasks.
o V1 & V2 don’t need scheduling.
· Serialized V3 Update: The V3 collective update must be scheduled as a job (via LBWE). Here, document data is collected in the order it was created and transferred into the BW as a batch job. The transfer sequence may not be the same as the order in which the data was created in all scenarios. V3 update only processes the update data that is successfully processed with the V2 update.
12. What is compression?
· It is a process that deletes the request IDs (moving the data from the F table into the E table), which saves space.
13. What is Rollup?
· This is used to load new DataPackages (requests) into the InfoCube aggregates. If we have not performed a rollup then the new InfoCube data will not be available while reporting on the aggregate.
14. What is table partitioning and what are the benefits of partitioning in an InfoCube?
· It is a method of dividing a table to enable quick access. SAP uses fact table partitioning to improve performance. We can partition only on 0CALMONTH or 0FISCPER. Table partitioning helps the report run faster, as data is read only from the relevant partitions, and table maintenance also becomes easier. Oracle, Informix and IBM DB2/390 support table partitioning, while SAP DB, Microsoft SQL Server and IBM DB2/400 do not.
15. How many extra partitions are created and why?
· Two extra partitions are created: one for dates before the begin date and one for dates after the end date.
16. What are the options available in transfer rule?
· InfoObject
· Constant
· Routine
· Formula
17. How would you optimize the dimensions?
· We should define as many dimensions as possible, and we have to take care that no single dimension exceeds 20% of the fact table size.
18. What are Conversion Routines for units and currencies in the update rule?
· Using this option we can write ABAP code for Units / Currencies conversion. If we enable this flag then unit of Key Figure appears in the ABAP code as an additional parameter. For example, we can convert units in Pounds to Kilos.
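A hedged fragment of such a conversion routine (pounds to kilograms). In the generated routine the key figure's unit arrives as an extra parameter, here called UNIT; productive code would rather call the standard function module UNIT_CONVERSION_SIMPLE than hard-code the factor.

* Convert the incoming quantity from pounds to kilograms.
* COMM_STRUCTURE-quantity, UNIT and RESULT follow the generated
* update-routine parameter names (they may differ per release).
IF UNIT = 'LB'.
  RESULT = COMM_STRUCTURE-quantity * '0.45359237'.
  UNIT   = 'KG'.
ELSE.
  RESULT = COMM_STRUCTURE-quantity.
ENDIF.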
19. Can an InfoObject be an InfoProvider, how and why?
· Yes, when we want to report on Characteristics or Master Data. We have to right click on the InfoArea and select “Insert characteristic as data target”. For example, we can make 0CUSTOMER as an InfoProvider and report on it.
20. What is Open Hub Service?
· The Open Hub Service enables us to distribute data from an SAP BW system into external Data Marts, analytical applications, and other applications. We can ensure controlled distribution using several systems. The central object for exporting data is the InfoSpoke. We can define the source and the target object for the data. BW becomes a hub of an enterprise data warehouse. The distribution of data becomes clear through central monitoring from the distribution status in the BW system.
21. How do you transform Open Hub Data?
· Using BADI we can transform Open Hub Data according to the destination requirement.
22. What is ODS?
· Operational DataSource is used for detailed storage of data. We can overwrite data in the ODS. The data is stored in transparent tables.
23. What are BW Statistics and what is its use?
· They are group of Business Content InfoCubes which are used to measure performance for Query and Load Monitoring. It also shows the usage of aggregates, OLAP and Warehouse management.
24. What are the steps to extract data from R/3?
· Replicate DataSources
· Assign InfoSources
· Maintain Communication Structure and Transfer rules
· Create and InfoPackage
· Load Data
25. What are the delta options available when you load from flat file?
· The 3 options for Delta Management with Flat Files:
o Full Upload
o New Status for Changed records (ODS Object only)
o Additive Delta (ODS Object & InfoCube)
26. What are the inputs for an InfoSet?
· The inputs for an InfoSet are ODS objects and InfoObjects (with master data or text).
27. What internally happens when BW objects like InfoObject, InfoCube or ODS are created and activated?
· When an InfoObject, InfoCube or ODS object is created, BW maintains a saved version of that object but does not make it available for use. Once the object is activated, BW creates an active version that is available for use.
28. What is the maximum number of key fields that you can have in an ODS object?
· 16
29. What is the importance of 0REQUID?
· It is the InfoObject for the Request ID. 0REQUID enables BW to distinguish between different data records.
30. Can you add programs in the scheduler?
· Yes. Through event handling.
31. What does a Data IDoc contain?
· A Data IDoc contains:
o Control record → contains administrator information such as receiver, sender and client
o Data record
o Status record → describes the status of the record, e.g., modified
32. What is the importance of the table ROIDOCPRMS?
· It holds the IDoc parameters of the source system. This table contains the details of the data transfer, such as the source system of the data, the data packet size, and the maximum number of lines in a data packet. The data packet size can be changed through the control parameters option in SBIW, i.e., the contents of this table can be changed.
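To see these parameters for a given target system, the table can be read directly. A small sketch, assuming the standard field names (SLOGSYS, MAXSIZE, MAXLINES, MAXPROCS; verify them in SE11) and a hypothetical logical system name.

* Display the data-transfer control parameters for one target system.
DATA: ls_prms TYPE roidocprms.

SELECT SINGLE * FROM roidocprms INTO ls_prms
  WHERE slogsys = 'BWCLNT100'.          " hypothetical logical system

IF sy-subrc = 0.
  WRITE: / 'Max packet size (kB):   ', ls_prms-maxsize,
         / 'Max lines per packet:   ', ls_prms-maxlines,
         / 'Max parallel processes: ', ls_prms-maxprocs.
ENDIF.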
33. When is IDOC data transfer used?
· IDocs are used for communication between logical systems like SAP R/3, R/2 and non-SAP systems using ALE, and for communication between an SAP R/3 system and a non-SAP system. In BW, an IDoc is a data container for data exchange between SAP systems, or between SAP systems and external systems based on an EDI interface. IDocs support a limited record size of 1000 bytes, so IDoc transfer is not used when loading data into the PSA, since the data there is more detailed; it is used when the record size is less than 1000 bytes.
34. When an ODS is in 'overwrite' mode, does uploading the same data again and again create new entries in the change log each time data is uploaded?
· No.
35. What is the function of 'selective deletion' tab in the manage contents of an InfoCube?
· It allows us to select a particular value of a particular field and delete its contents.
36. When we collapse an InfoCube, is the consolidated data stored in the same InfoCube or is it stored in the new InfoCube?
· When the cube is collapsed, the consolidated data is stored in the same cube: the data sits in the F table before compression and in the E table after compression, and these two tables belong to the same cube.
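The two fact tables follow the standard naming /BIC/F<cube> and /BIC/E<cube> (or /BI0/... for content cubes). A quick sketch comparing their record counts, with a hypothetical cube ZSALES01:

* Compare uncompressed (F) and compressed (E) fact table sizes.
DATA: lv_f TYPE i,
      lv_e TYPE i.

SELECT COUNT(*) FROM /bic/fzsales01 INTO lv_f.   " F fact table
SELECT COUNT(*) FROM /bic/ezsales01 INTO lv_e.   " E fact table

WRITE: / 'F table records:', lv_f,
       / 'E table records:', lv_e.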
37. What happens when you load transaction data without loading master data?
· The transaction data gets loaded and the master data fields remain blank.
38. When given a choice between using an InfoCube and a MultiProvider, what factors to consider before making a decision?
· One would have to see if the InfoCubes are used individually. If these InfoCubes are often used individually, then it is better to go for a MultiProvider with many InfoCubes since the reporting would be faster for an individual InfoCube query rather than for a big InfoCube with lot of data.
39. How many hierarchy levels can be created for a characteristic InfoObject?
· Maximum of 98 levels.
40. What is the function of 'reconstruction' tab in an InfoCube?
· It reconstructs the deleted requests from the InfoCube. If a request has been deleted and we want the data records of that request to be added to the InfoCube, we can use the reconstruction tab to add those records. It goes to the PSA and brings the data to the InfoCube.
41. What are secondary indexes with respect to InfoCubes?
· It is an Index created in addition to the primary index of the InfoCube. When you activate a table in the ABAP Dictionary, an index is created on the primary key fields of the table. Further indexes created for the table are called secondary indexes.
42. What is DB Connect and where is it used?
· DB connect is a database connecting program. It is used in connecting third party tools with BW for reporting purpose.
43. What is the common method of finding the tables used in any R/3 extraction?
· By using the transaction LISTSCHEMA we can navigate the tables.
44. What is the difference between table view and InfoSet query?
· An InfoSet Query is a query using flat tables while a view table is a view of one or more existing tables. Parts of these tables are hidden, and others remain visible.
45. How to load data from one InfoCube to another InfoCube?
· Through DataMarts data can be loaded from one InfoCube to another InfoCube.
46. What is the difference between extract structure and DataSource?
· A DataSource defines the data coming from the source system, whereas the extract structure holds the data staged for the DataSource; extraction and transfer rules are defined on top of it.
· The extract structure is a record layout of InfoObjects.
· The extract structure is created in the source system (for LO DataSources it is maintained via LBWE).
47. What is entity relationship model in data modeling?
· An ERD (Entity Relation Diagram) can be used to generate a physical database.
· It is a high level data model.
· It is a schematic that shows all the entities within the scope of integration and the direct relationship between the entities.
48. What is DataMining concept?
· Process of finding hidden patterns and relationships in the data.
· With typical data analysis requirements fulfilled by data warehouses, business users have an idea of what information they want to see.
· Some opportunities embody data discovery requirements, where the business user wants to correlate sets of data to determine anomalies or patterns in the data.
49. How does the time dependency work for BW objects?
· Time Dependent attributes have values that are valid for a specific range of dates (i.e., valid period).
50. What is I_ISOURCE?
· Name of the InfoSource
51. What is I_T_FIELDS?
· List of the transfer structure fields. Only these fields are actually filled in the data table and can be sensibly addressed in the program.
52. What is C_T_DATA?
· Table with the data received from the API in the format of source structure entered in table ROIS (field ROIS-STRUCTURE).
53. What is I_UPDMODE?
· Transfer mode as requested in the Scheduler of the BW. Not normally required.
54. What is I_T_SELECT?
· Table with the selection criteria stored in the Scheduler of the SAP BW. This is not normally required.
55. What are the different Update Modes?
· Direct Delta: In this method, extraction data from document postings is transferred directly to BW delta queue.
· Queued Delta: In this method, extraction data from document postings is collected in an extraction queue, from which a periodic collective run is used to transfer the data to BW delta queue.
o The transfer sequence and the order in which the data was created are the same in both Direct and Queued Delta.
· Unserialized V3 Update: In this method, the extraction data is written to the update tables and then is transferred to the BW delta queues without taking the sequence into account.
56. What are the different ways Data Transfer?
· Full Update: All the data from the InfoStructure is transferred according to the selection criteria defined in the scheduler in the SAP BW.
· Delta Update: Only the data that has been changed or is new since the last update is transferred.
57. Which Object connects Aggregates and InfoCube?
· The read pointer connects aggregates and the InfoCube. It can be viewed in table RSDDAGGRDIR, in the field RN_SID. Whenever we roll up data, it contains the request number, which is checked against the next request at the second roll-up. Follow the table for a particular InfoCube while rolling up the data.
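As a quick way of following the table named above, the aggregate directory can be read for one InfoCube. Field names other than RN_SID are assumptions here, so check the table definition in SE11 first; the cube name is hypothetical.

* Inspect the aggregate directory entries (including the RN_SID
* read pointer) for one InfoCube.
DATA: lt_aggr TYPE STANDARD TABLE OF rsddaggrdir.

SELECT * FROM rsddaggrdir INTO TABLE lt_aggr
  WHERE infocube = 'ZSALES01'.   " assumed field and cube name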
58. What is switching ON and OFF of aggregates? How do we do that?
· When we switch off an aggregate, it is not available to supply data to queries, but the data remains in the aggregate, so if required, we can turn it on and update the data, instead of re-aggregating all the data. However if we deactivate an aggregate, it is not available for reporting and also we lose the aggregated data. So when you activate it, it starts the aggregation anew. To do this select the relevant aggregate and choose the Switch On/Off (red and green button). An aggregate that is switched off is marked in column Filled/Switched off with Grey Button.
59. While creating aggregates system gives manual or automatic option. What are these?
· If we select the automatic option, system will propose aggregates based on the BW statistics. i.e., how many times the InfoCube is used to fetch data, etc. Else we can manually select the dataset which should form the aggregate.
60. What are the options when defining aggregates?
· Manual
· Automatic
61. What are Aggregates and when are they used?
· An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently in a consolidated form. Aggregates make it possible to access InfoCube data quickly in Reporting. Aggregates can be used in following cases:
o To speed up the execution and navigation of a specific query.
o Use attributes often in queries.
o To speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels.
62. When I run Initial load it failed then what should I do?
· Deletion of an initial load can be done in the InfoPackage. First set the QM status of the request to red, if not yet done, then delete it from all data targets. After that, go to the InfoPackage and choose, from the menu, Scheduler → “Initialization options for the source system”. There you should see your red request; mark it and delete it, accept the deletion prompt, and acknowledge the post-information message. The request should now be gone from the initialization options, and you can run a new init.
You can also run a repair request, which is a full request; with it you can correct the data in the data target after failed deltas or wrong inits. You do this in the InfoPackage too, by choosing menu Scheduler → Repair full request. But if you want to use init/delta loads, you have to make a successful init first.
63. What are the inverted fields in DataSource?
· They allow reverse posting: the field is effectively multiplied by -1.
64. What are setup tables and why should we delete the setup tables first before extraction?
· Setup tables are filled with data from the application tables (the OLTP tables storing the transaction records). They form the interface between the application tables and the extractor: the LO extractor takes data from the setup tables during initialization and full upload, so it does not need to access the application tables for data selection. As setup tables are required only for full and init loads, we can delete their data after loading in order to avoid duplicate data.
65. What are the setup tables? Why use setup tables?
· In the LO extraction mechanism, filling the setup tables fills the extract structure with data. When we schedule an InfoPackage with Full or Init Delta from BW, the data is picked from the setup tables.
66. When filling the setup tables, is there any need to delete them first?
· Yes. By deleting the setup tables, we delete the data left over from the previous run, which avoids updating the same records twice into BW.
67. Why we need to delete the setup table first then filling?
· During the setup run, these setup tables are filled. It is normally good practice to delete the existing setup tables before executing the setup runs, to avoid duplicate records for the same selections.
68. With what data the setup table is filling (Is it R3 data)?
· The init loads in BW pull data from the setup tables. The setup tables are used only for the first init/full loads.
69. Will there be any data in the application tables after sending data to Setup tables?
· There will be data in the application tables even after the setup tables are filled. Setup tables are just temporary tables filled from the application tables to set up init/full loads for BW.
70. How does delta work for master data?
· We always do a full load for master data; it always overwrites the previous entries.
71. Master data is stored in Master Data tables. Then what is the importance of dimensions?
· Dimension tables link the master data tables with the fact table through SIDs.
72. I replicated the DataSource to BW system. I want to add one more field to DataSource. How do I do it?
· Add the field to extract structure and replicate the DataSource again into BW and this field will appear in BW also.
73. Suppose one million records have been uploaded to an InfoCube and I want to delete 20 of them. How can those 20 records be deleted?
· This can be done with selective deletion.
74. What is the InfoCube for inventory?
· InfoCube: 0IC_C03
75. What is the maintenance of DataSource?
· It is the maintenance of the required fields in a particular DataSource for which there are reporting requirements in BW and for which data needs to be extracted.
76. What is the maintenance of extract structures?
· Extract structures are maintained in case of LO DataSources. There are multiple extract structures for each DataSource in the LO for different applications. Any enhancements to DataSource in case of LO are done using maintenance of extract structures.
77. What are MCEKKO and MCEKPO in the maintenance of DataSources?
· These are purchasing-related communication structures.
78. How is the Delta Load different for an InfoCube and ODS?
· An InfoCube has an additive delta, but you will still be able to see all individual records in the InfoCube contents. This is because, if you choose to delete the current request, the records have to be rolled back to the prior status. If you build a query on the InfoCube, you will find that the data is actually summed up. The ODS will not have duplicate records; you will have only one record.
79. What is the difference between the transactions LBWF and RSA7?
· RSA7 is to view BW delta queue. This gets overwritten each time.
· LBWF is the Log for LO Extract Structures. This is populated only when the User parameter MCL is set, and is recommended only for testing purposes.
80. What exactly happens (background) when we are inactivating/activating the extract structure for LO Cockpit?
· If the extract structure is activated, then for any online transaction, or when the setup tables are filled, the data is posted to the extract structures depending on the update method selected. Activation marks the DataSource green; otherwise it is yellow. Activation/deactivation makes entries in the TMCEXACT table.
81. What is content extraction?
· These are extractors supplied by SAP for specific business modules, e.g., 2FI_AR_4: Customers: Line Items with Delta Extraction, or 2FI_GL_6: General Ledger Sales Figures via Delta Extraction.
82. What is direct Update of InfoObject?
· This is updating of InfoObject without using Update Rules but only the Transfer Rules.
83. The DataSource delivers New Status or Additive Delta. If this is set on the R/3 side, what is the need for a setting in BW?
· In R/3, the record mode determines this, as seen in the RODELTAM table, i.e., whether the respective DataSource delivers a new status or an additive delta. Based on this, you need to select the appropriate update type for the data target in BW. For example, an ODS supports additive as well as overwrite updates; depending on which DataSource is updating the ODS, and the record mode that DataSource supports, you need to make the right selection in BW.
84. Where does BW extract data from during Generic Extraction and LO Extraction?
· All deltas are taken from the delta queue. The way of populating the delta queue differs for LO and other DataSources.
85. What is the importance of ODS Object?
· ODS is mainly used as a staging area.
86. Differences between star and extended star schema?
· Star schema: only the characteristics of the dimension tables can be used to access the facts; no structured drill-downs can be created; support for many languages is difficult.
· Extended star schema: master data tables and their associated fields (attributes), external hierarchy tables for structured access to data, and text tables with extensive multilingual descriptions are supported using SIDs.
87. What are the major errors in BW and R3 pertaining to BW?
· Errors in loading data (ODS loading, InfoCube loading, delta loading etc)
· Errors in activating BW or other objects.
88. When are tables created in BW?
· When the objects are activated, the tables are created. The location depends on the Basis installation.
89. What is M table?
· Master Data table.
90. What is F table?
· Fact table
91. What is data warehousing?
· Data Warehousing is a concept in which the data is stored and analysis is performed over it.
92. What is a RemoteCube and how is it accessed and used?
· A RemoteCube is an InfoCube whose data is not managed in the BW but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.
· With a RemoteCube, we can report using data in external systems without having to physically store transaction data in BW. We can, for example, include an external system from market data providers using a RemoteCube.
· This is best used only for small volume of data and when less users access the query.
93. Tell about a situation when you implemented a RemoteCube.
· RemoteCube is used when we like to report on transactional data. In a RemoteCube data is not stored on BW side. Ideally used when detailed data is required and we want to bypass loading of data into BW.
94. Differences between MultiCube and RemoteCube.
· A MultiCube is a type of InfoProvider that combines data from a number of InfoCubes and makes them available as a whole to reporting.
· A RemoteCube is an InfoCube whose transaction data is not managed in the BW but externally. Only the structure of the RemoteCube is defined in BW. The data is read for reporting using a BAPI from another system.
95. How you did Data modeling in your project? Explain
· We had collected data from the user and created HLD (High level Design document) and we analyzed to find the source for the data. Then data models were done indicating dataflow, lookups. While designing the data model considerations were given to use existing objects (like ODS and InfoCube) not storing redundant data, volume of data, Batch dependency.
96. There is an InfoObject called 0PLANT that I activated and have been using. After some days, another person activated it again. What will happen: will there be any effect, a merge, or no effect?
· Reactivating the InfoObject shouldn't have any effect unless changes were made to it before it was reactivated.
97. I have two processes, and the first contains an ABAP program. After the successful completion of the first process, it should trigger the second one. How do we know whether the first was successful?
98. I want to create an InfoObject that is a dependent InfoObject. How to do it?
· Go to the InfoObject screen in the Administrator Workbench, go to the Compounding tab, specify the superior InfoObject on which this InfoObject depends, and activate.
99. A delta has been running successfully in LO. Later, some fields were added to that particular DataSource. Will there be any effect on the previous data records?
· No. If there is data in the DataSource, we can only append the fields; no data will be lost. But you need a separate mechanism to fill in the historical data for the newly added fields.
100. There are 5 characteristics in an InfoCube. We have to assign these characteristics to a dimension based on what we assign characteristics to dimension?
· Depends on the characteristic and cardinality.
· The characteristics that logically belong together can be grouped together in a Dimension.
· First we will decide the dimensions of the InfoCube. After that we will assign necessary InfoObjects to the corresponding dimensions.
101. What are the places we use ABAP code in BW?
· Start routine
· Update routine
· InfoPackages (to populate selection parameters)
· Transfer Rules
· Variable exits (see the sketch after this list)
· To create any generic DataSources
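For the variable exit entry, here is a hedged sketch of the classic BEx variable exit: enhancement RSR00001, function module EXIT_SAPLRRS0_001, include ZXRSRU01. The variable name ZCURMONTH is hypothetical, and the exact range row type can vary by release, so it is declared from E_T_RANGE itself.

* Fill a hypothetical 0CALMONTH variable with the current month.
DATA: ls_range LIKE LINE OF e_t_range.   " SIGN/OPT/LOW/HIGH row

CASE i_vnam.
  WHEN 'ZCURMONTH'.                 " hypothetical exit variable
    IF i_step = 1.                  " step 1: before variable entry
      ls_range-sign = 'I'.
      ls_range-opt  = 'EQ'.
      ls_range-low  = sy-datum(6).  " current year/month, YYYYMM
      APPEND ls_range TO e_t_range.
    ENDIF.
ENDCASE.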
102. Sales flow
· Inquiry → Quotation → Sales order → Delivery → Post goods issue → Invoice → Accounting document
103. What is delta queue (RSA7)? When will the data queue here and from where?
· The delta queue stores records that have been generated since the last delta upload and are yet to be sent to BW. The queued data is sent to BW from here. Depending on the update method selected, newly generated records either come directly to this queue or arrive via the extraction queue.
104. What is Extraction Queue? What does it contain?
· Newly generated records are stored in the extraction queue, from where a scheduled collective job pushes them to the delta queue.
105. What are Serialized and Unserialized V3 updates?
· In serialized V3 Update data is transferred from the LIS communication structure, using extract structures (e.g. MC02M_0HDR for the header purchase documents), into a central delta management area.
· With Unserialized V3 Update mode, the extraction data continues to be written to the update tables using a V3 update module and then is read and processed by a collective update run (through LBWE).
106. 5 different types of Source Systems are:
· SAP Systems – SAP Basis Release 3.11 and above – SAPI
· DataBases – Use SAP DB Connect
· External Systems – BAPI
· File Systems
· SAP BW
107. 4 different types of DataSources:
· Transaction Data
· Attributes
· Texts
· Hierarchy
108. 2 types of InfoSources:
· Direct InfoSource
· Flexible InfoSource
109. 3 types of Transfer Rules:
· The fields copied from the Transfer Structure and are not modified
· Fixed Value can be assigned to an InfoObject
· An ABAP routine or a form field determines the value of the InfoObject
110. 6 types of connections between the Source Systems and the BW:
· RFC Connections
· ALE Settings
· Partner Agreements
· Ports
· IDoc Types
· IDoc Segments
111. What is Transfer Method and what are the types of Transfer Methods?
· The transfer method only determines how the data is transferred.
o IDoc transfer method: A data IDoc consists of a control record, a data record, and a status record. The control record contains administration information such as receiver, sender, and client. The status record describes the status of the IDoc, for example "modified". The data stores in the ALE inbox and outbox have to be emptied or reorganized.
o PSA (tRFC) transfer method: with this transfer method, a transactional Remote Function Call is used to transfer the data directly from the source system to SAP BW. Here there is the option of storing the data in the PSA (the tables have the same structure as the transfer structure). This is the preferred transfer method because it performs better than the IDoc method. When you use tRFCs to transfer data, the maximum number of fields is restricted to 255, and the length of a data record is restricted to 1962 bytes (IDoc: 1000 bytes).
112. 6 control parameters for transferring data (SBIW → General settings):
· Source System
· Maximum size of the DataPackage
· Maximum number of rows in a DataPackage
· Frequency
· Maximum number of parallel processes during the transfer of data
· Target system for batch job
113. 3 uses of SAPI technology:
· Transfer data and Metadata from SAP Systems
· Transfer data from XML files
· Transfer data between BW data targets or from one BW system to another (Data Marts)
114. 4 functions of LO Cockpit:
· Maintaining Extract Structures
· Maintaining DataSources
· Activating Updates
· Controlling Updates
115. 3 update Methods for InfoPackage:
· Full Update
· Initialize Delta
· Delta Update
116. 3 update Methods in Logistics Extraction:
· Direct Delta
· Queued Delta
· Unserialized V3 Update
117. 4 advantages of LO Extraction:
· Improved performance and reduced volumes of data
· Simple handling
· Standardized solution for all Logistics applications
· No use of LIS functions
118. DataSource: Customer version edit:
· Field Name
· Short Text
· Selection
· Hide Field
· Inversion
· Field only known in Customer Exit
119. 8 Delta Terms:
a. Service API: the layer in the source system that sends requests and starts extractors.
b. Delta Management: a group of programs for the delta queue that control the transaction data delta.
c. Update Mode (init, full, delta, and so on): a term that describes which data is requested.
d. Delta Queue: a holding area for new and modified (delta) records in the SAP system.
e. Delta Types: a term that describes how the data gets into the delta queue.
f. Serialization: the sequence in which the data records arrive in BW.
g. Record Mode: a description of the contents of a record.
h. Delta Method: a term that classifies a DataSource according to record mode, serialization, and delta type.
120. 5 update modes:
· Update mode “F” = Full Update: Available for all DataSources
· Update mode “C” = Initialization Delta: If the extractor supports deltas, it must be initialized prior to a delta run. Selection conditions are saved. Finally, a full upload is started for the selected range of data. Additional settings are saved to allow for future delta uploads.
· Update mode “D” = Delta: If the extractor supports deltas, only new or changed records are sent.
· Update mode “R” = Repeat: If a delta is showing a RED traffic light status then a dialog box prompts to decide whether or not the last group of delta records needs to be reloaded.
· Update Mode “A” = Master Data: master data does not, however, use the delta queue functions.
121. 3 ways to load a delta queue:
· At the time of the transaction – Direct Delta
· At a later date after the transaction (V3 job) – Queued Delta
· At the time the extractor job is called by BW – Unserialized V3 Update
122. 6 types of Record Mode: 0RECORDMODE is an InfoObject that specifies the method in which delta information is supplied.
· “After Image” = Record Mode “ ” - The way a record looks after the change
· “Before Image” = Record Mode “X” - The way a record looked before the change
· “Additive Image” = Record Mode “A” - Shows only the difference for all numeric values
· “New Image” = Record Mode “N” - For each change, a new, unique record is generated
· “Delete” = Record Mode “D” - Only provides the key information required to make a deletion
· “Reverse” = Record Mode “R” - Sends information to numerically “cancel” a deleted record.
123. 2 points where Currency Translation can take place:
· When data is updated in the InfoProvider, currency can be determined for the Key Figure
· When analyzing data in the BEx, we can determine the currency conversion key and target currency for each structure part separately.
124. There is one product in black and one in pink. The color property should be displayed when the query is run, but it is not displayed. What might be the problem?
· Check whether color has been flagged as a navigational attribute of material.
· Check whether the master data for it has been maintained and extracted into BW.
125. What is the importance of Compounding of InfoObjects?
· This defines a superior InfoObject which must be combined to define another InfoObject and it makes the superior InfoObject uniquely identifiable. For example, in a Plant, there can be some similar products manufactured. (Plant A-- Soap, Paste, Lotion; plant B--Soap, paste, Lotion) In this case Plant A and Plant B should be made unique. So the characteristics can be compounded to make them unique.
126. What is delta upload? What is the use of delta upload?
· When transactional data is pulled from the R/3 system, instead of pulling all the data every day (a full load), we pull only the changed or newly added records; the load on the system is then much smaller.
127. What is SID? What is the impact in using SID?
· SIDs are surrogate IDs: system-generated numbers assigned to each characteristic value when it is uploaded. Searching on numeric keys is faster than searching on alphanumeric values, hence SIDs.
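The SID for a characteristic value lives in that InfoObject's S table, named /BI0/S<object> for content InfoObjects and /BIC/S<object> for custom ones. A sketch of the lookup, assuming 0MATERIAL and a value 'M1':

* Look up the surrogate ID assigned to one 0MATERIAL value.
DATA: lv_sid TYPE rssid.            " 4-byte integer SID

SELECT SINGLE sid FROM /bi0/smaterial INTO lv_sid
  WHERE material = 'M1'.

IF sy-subrc = 0.
  WRITE: / 'SID for material M1:', lv_sid.
ENDIF.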
128. What are the three tables of ODS Objects? Explain?
· An ODS has three database tables: the new data table (activation queue), the active table, and the change log table. New data is first loaded into the new data table. On activation it is compared with the change log, the delta is transferred into the active table, and the change is recorded in the change log. Each time, the tables are compared and the delta data is written on to the targets.
129. Other than BW, what are the other ETL tools used for SAP R/3 in industry?
· Informatica, ACTA, COGNOS, Business Objects are other ETL tools.
130. Does any other ERP software use BW for data warehousing?
· No.
131. What is the importance of hierarchies?
· One can display the elements of characteristics in hierarchy form and evaluate query data for the individual hierarchy levels in the BEx (in Web applications or in the BEx Analyzer).
132. What are hierarchies? Explain how you used in your project?
· Hierarchies are organizing data in a structured way. For example BOM (Bill of material) can be configured as hierarchies.
133. Where is 0RECORDMODE InfoObject used?
· It is an InfoObject which specifies the method in which the delta information is supplied. ODS uses 0RECORDMODE for delta load. 0RECORDMODE can have any of the 6 values as “ ”, “X”, “A”, “N”, “D” & “R”.
134. Are all the characteristics - key fields in an ODS?
· No. An ODS object contains key fields (for example, document number/item) and data fields that can also contain characteristic fields (for example, order status, customer).
135. What is the use BAPI, ALE?
· BAPI & ALE are programs to extract data from DataSources. BW connects SAP systems (R/3 or BW) and flat files via ALE. BW connects with non SAP systems via BAPI.
136. Where to check the log for warning messages appearing in activation of transfer rules?
· If transfer rules are not defined for InfoObjects, then traffic lights will not be green.
137. Can we load transaction data into InfoCube without loading the master data first?
· Yes.
138. What is difference between saving and activating?
· In BW, saving actually saves the defined structure so it can be retrieved whenever required.
· Activating saves it and also generates the required tables and structures.
139. What is time dependent master data?
· Time-dependent master data is master data that changes over time. For example, salesperson A works in the East zone until Jan 30th, 2004, and then moves to the North zone from Jan 31st, 2004. The master data for salesperson A must therefore be assigned to a different zone depending on the time.
140. What does delta initialization do?
· It initializes the delta Update mechanism for that DataSource.
141. What is difference between delta and pseudo delta?
· Some data targets and modules have a delta update feature that can be used for delta updates of data; for example, ODS objects, InfoCubes and CO-PA are delta capable, and after the first accumulation of data, BW expects the data as deltas for these data targets. When a source does not have this feature, it can be made delta capable by using an ODS as the data target; this is a pseudo delta.
142. What is Third Normal Form and its comparison with Star Schema?
· Third normal form is a normalized form of storing data in a relational database. It eliminates functional dependencies on non-key fields by putting them in separate tables; at this stage, all non-key fields depend only on the key.
· A star schema is a denormalized form of storing data, which paves the way for storing data in a multidimensional model.
143. What is Life period of data in Change Log of an ODS?
· The data of Change Log can be scheduled to be deleted periodically. Usually the Data is removed after it has been updated into the data targets.
144. What are Inbound and Consistent ODSs?
· In an Inbound ODS object, the data is saved in the same form as it was when delivered from the source system. This ODS type can be used to report the original data as it comes from the source system.
· In a Consistent ODS object, data is stored in granular form and consolidated. This consolidated data on a document level creates the basis for further processing in BW.
145. What is Life period of data in PSA?
· Data in the PSA is deleted when it is judged to have no further use. There is a trade-off between wasted space and its use as a backup.
146. How to load data from one InfoCube to another?
· A DataSource is created from the InfoCube which is supposed to feed. This can be done by right-clicking on the InfoCube and selecting export DataSource. Then a suitable InfoSource can be created for this DataSource. And the intended data target InfoCube can be fed.
147. What is activation of objects?
· Activation of objects enables them to be executed, in other words used elsewhere for different purposes. Unless an object is activated it cannot be used.
148. What is transactional ODS?
· A transactional ODS object differs from a standard ODS object in the way it prepares data. In a standard ODS object, data is stored in different versions (active, delta, modified); whereas a transactional ODS object contains the data in a single version. Therefore, data is stored in precisely the same form in which it was written to the transactional ODS object by the application.
149. Are SIDs static or dynamic?
· SIDs are static.
150. Is data in InfoCube editable?
· No.
151. What are data-marts?
· Data Marts are used to exchange data between different BW systems or to update data within the same BW system (Myself Data Mart). Here, the InfoProviders that are used to provide data are called Data Marts.
152. Which one is more normalized; ODS or InfoCube?
· InfoCube is more normalized than ODS.
153. What is replication of DataSource?
· Replication of DataSource enables the extract structure from the source system to be replicated in the BW.
154. What are the quality checks for inefficient InfoCube designs?
· Huge Dimension tables make an InfoCube inefficient.
· The query takes a long time.
155. Why is star schema not implemented for ODS as well?
· Because an ODS is meant to store detailed documents for quick use and to help make short-term decisions.
156. Why do we need separate update rules for characteristics on each key figure?
· If the requirement specifies a different need for each characteristic then we have separate update rules for each of the characteristics.
157. What is the use of Hierarchies?
· Efficient reporting is one of the targets of using hierarchies. Easy drilldown paths can be built using hierarchies.
158. What is "Referential Integrity"?
· A feature provided by relational database management systems (RDBMS) that prevents users or applications from entering inconsistent data. For example, suppose Table B has a foreign key that points to a field in Table A.
o Referential integrity would prevent from adding a record to Table B that cannot be linked to Table A.
o Referential integrity rules might also specify that whenever you delete a record from Table A, any records in Table B that are linked to the deleted record will also be deleted. This is called cascading delete.
o Referential integrity rules could specify that whenever you modify the value of a linked field in Table A, all records in Table B that are linked to it will also be modified accordingly. This is called cascading update.
159. What is a Transactional InfoCube and when is it preferred?
· Transactional InfoCubes differ from Basic InfoCubes in their ability to support parallel write accesses. Basic InfoCubes are technically optimized for read accesses to the detriment of write accesses. Transactional InfoCubes are designed to meet the demands of SEM, where multiple users write simultaneously into an InfoCube.
160. When is data in Change Log table of ODS deleted?
· When requests loaded into ODS object are neither required for delta update nor for initialization, they can be deleted. If delta initialization for update exists in connected data targets, the requests have to be updated first before the data can be deleted.
161. How is the data of different modules stored in R/3?
· Data is stored in multiple tables in R/3 based on ERM (Entity Relationship model) to prevent the redundant storage of data.
162. In what cases do we transfer data from one InfoCube to another?
· Modifications can't be made to an InfoCube if there is data present in it. If we want to modify an InfoCube and no backup of the data exists, we can design another InfoCube with the specified parameters and load the data from the old InfoCube.
163. How often do we have a multi-layered structure in the ODS stage, and in what cases?
· Multi-layered structure in ODS stage is used to consolidate data from different DataSources.
164. How is data extracted from systems other than R/3 and Flat files?
· Data is extracted from systems other than R/3 and flat files using staging BAPIs.
165. When do tRFC and IDoc errors occur?
· tRFC and IDoc errors occur when you load data. They are connection-specific: if the source system is not set up properly, or the connection is interrupted, you get these errors.
· An Intermediate Document (IDoc) is a container for exchanging data between R/3, R/2 and non-SAP systems. IDocs are sent in the communication layer by transactional Remote Function Call (tRFC) or by other file interfaces (for example, EDI). tRFC guarantees that the data is transferred exactly once.
166. On what factors does the loading time depend?
· Loading time depends on the workload on both the BW side and the source-system side. It may also depend on network connectivity.
167. How long does it take to load a million records into an InfoCube from an R/3 system?
· Loading time varies with the workload on the BW and source-system sides; typically it takes about half an hour to load a million records.
168. Will the loading time be same for the same amount of data for non-SAP systems like Flat files?
· It might not be the same. It depends on the extraction programs used on the source system side.
169. What is mySAP.com?
· SAP's solution to integrate all relevant business processes on the Internet. mySAP.com integrates business processes in SAP and non-SAP systems seamlessly and provides a complete business environment for electronic commerce.
170. How was Data modeling done in your project? Explain
· Initially we study the client's business process: what data flows through the system, its volume, the changes taking place in it, the analysis users perform on the data, what they expect in the future, and how we can use BW functionality. Later we meet with the business analysts and propose a data model based on the client's needs. Then we give a proof-of-concept demo showing how we would build a BW data warehouse for their system. Once we get approval, we start requirements gathering, build the model, and testing follows in QA.
171. As you said you have worked on InfoCubes and ODS, Which one is better suited for reporting? Explain and what are the drawbacks and benefits of each one?
· Depending on the type of report, the data is stored in an InfoCube or an ODS. BW is used to store high volumes of data and for fast reporting. In an InfoCube, master data and transaction data are stored as per the extended star schema using SIDs, and reporting on it is fast.
· An ODS stores data in more detail, using its structure of transparent tables. Reporting on it will be slower; an ODS is better used for RRI.
172. How do you measure the size of InfoCube?
· In number of records
173. What is the difference between InfoCube and ODS?
· An InfoCube is structured as per the extended star schema, with the fact table surrounded by dimension tables which connect to SIDs, and the data in it can be aggregated. An ODS is a flat structure that does not use the star schema concept and holds detailed data in transparent tables.
174. What is the difference between display attributes and navigational attributes?
· A display attribute is used only for display purposes in a report, whereas a navigational attribute is used for drilling down in a report. We do not need to maintain a navigational attribute in the InfoCube as a characteristic (that is the advantage) in order to drill down on it.
175. Data is uploaded twice into InfoCube. How to correct it?
· You can delete it by the Request ID.
176. Can you add a new field at the ODS level?
· Yes.
177. Can many DataSources have one InfoSource?
· Yes. For example, for loading text and hierarchies we use different DataSources but the same InfoSource.
178. Apart from R/3, which legacy db you used for extraction?
· Access, Informatica
179. There were problems with delta loads. The DataSources and transfer rules were reactivated and transported from the DEV systems to the Production systems. When the jobs are scheduled, there is an error that the delta update to the InfoCube has been invalidated because a previous delta load has been deleted from the InfoCube. All data from the ODS objects and the InfoCube had already been deleted, and the initialization-of-delta job was then run to restart the process. The load to the ODS objects completes successfully, but the subsequent load to the InfoCube from the two ODS objects fails with this error.
· Prior to deleting all the data, did you run the last deltas from the delta collective runs into the InfoCube? It sounds as if you may have left the delta data in the queue and re-initialized the delta process. Thus, when you go to load the delta data, it doesn't want to load because it is looking for the predecessor delta that you deleted.
180. I had initialized an LIS structure for billing, and when I ran the delta request it gave an error stating that this is a duplicate document; my user wants the report urgently.
· What I did was:
o Deleted the setup data (LBWG in R3)
o Deleted the delta queue (RSA7 in R3)
o Regenerated the queue (for 2LIS_13_VDITM the transaction is OLI9BW)
181. I have a requirement on an existing InfoCube to change an attribute to navigational. I know I have to check the Navigational option in the Attributes tab of the respective characteristic, and check the option in the InfoCube as well. My concern is that this InfoCube already has millions of records in production; is there any way to realign it without reloading the InfoCube? What about realigning the master data after making those changes? Is it mandatory to reload the InfoCube, or is a workaround available?
· All you need to do after selecting and activating the object and the InfoCube is run a hierarchy/attribute change run. That should make the navigational attribute visible for reporting. (In my first project I created an FI InfoCube with the company info, customized the FI-AR InfoCube, and created a MultiProvider for the users to drill down on customer info and ledger accounts. I also customized the sales overview for transactional data.)
182. When extracting sales data using the V3 collective run via LBWE job control, no data is extracted (nothing is shown in RSA3). When filling the setup table for old documents, we can see the extracted data records in RSA3. What may be wrong, and what is the correct procedure to see data consistently in RSA3 so that BW can pull records from R/3?
· In RSA7 on the source system, do you see your DataSources under queue maintenance, and are they green? Also, did you run OLI7BW to set up your data in the statistical queue for initialization? And finally, did you run your init from BW to initialize the delta process and get the initial load into BW? If you have done all these things, you should be collecting deltas in the delta queue under RSA7.
183. Where the PSA data is stored?
· In PSA table.
184. What is data size?
· The volume of data one data target holds (in number of records).
185. Different types of InfoCubes.
· BasicCube,
· Virtual Cube
o RemoteCube
o SAP RemoteCube
o Virtual InfoCube with services
186. What is an InfoSet?
· InfoSet is an intersection of multiple InfoProviders. They can be made of ODS and InfoObjects only.
187. If there are two DataSources how many transfer structures are there?
· Two in R/3 and two in BW.
188. What are indexes?
· Indexes are database indexes, which help in retrieving data fast.
189. Is it necessary to initialize each time the delta update is used?
· No. Initialization is done once; after that, delta loads run on their own.
190. After data extraction, what is the image type delivered?
· After image (this depends on the DataSource's delta process; many standard DataSources deliver after images).
191. What are Authorizations?
· Authorizations control what a user is allowed to see and execute; in SAP they are maintained with the Profile Generator (transaction PFCG).
192. Can a characteristic InfoObject be an InfoProvider?
· Yes, if it carries master data (attributes/texts) and is flagged as an InfoProvider.
193. What is data Integrity and how can we achieve this?
· Data integrity is about eliminating duplicate entries in the database and achieving normalization.
194. What is index maintenance and what is its purpose?
· Indexing is a process by which the address of data is stored; it makes access to the data faster.
195. When and why use InfoCube compression?
· When the data in the InfoCube will no longer change and it occupies a lot of space, we compress the InfoCube. Compression removes the request IDs, so compressed data cannot be altered or deleted except through selective deletion; hence we have to make an informed decision before compressing. Compression can be done through a process chain and also manually.
196. How can Business Content be enhanced? And why do we need to enhance the Business Content?
· We can enhance Business Content by adding fields to the extract structures delivered by SAP BC. We may need to enhance BC to provide fields that are not already in it, as per the customer's needs. E.g., you have a customer InfoCube (in BC), but your company uses an additional attribute, say apartment number. Instead of constructing a whole new InfoCube, you can append that field to the existing BC extract structure and fill it in the extractor user exit, as sketched below.
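A minimal sketch of filling such an appended field in the extractor user exit (enhancement RSAP0001, include ZXRSAU01 for transaction data). The appended field ZZAPTNUM, the lookup table ZCUST_ADDR, and the use of 2LIS_13_VDITM with its KUNAG field are illustrative assumptions, not a definitive implementation:

  *  Include ZXRSAU01 - user exit EXIT_SAPLRSAP_001 (transaction data)
  DATA: ls_vditm TYPE mc13vd0itm.   " extract structure of 2LIS_13_VDITM

  CASE i_datasource.
    WHEN '2LIS_13_VDITM'.
      LOOP AT c_t_data INTO ls_vditm.
        " Fill the appended field from a custom address table (hypothetical)
        SELECT SINGLE zzaptnum FROM zcust_addr
          INTO ls_vditm-zzaptnum
          WHERE kunnr = ls_vditm-kunag.
        MODIFY c_t_data FROM ls_vditm.
      ENDLOOP.
  ENDCASE.

For high volumes, buffer the lookup table once per data package instead of issuing one SELECT per record (see the tuning sketch under question 200).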
197. What is Tuning and why do we do it?
· Tuning is done to increase efficiency, i.e. to reduce the time needed for:
o Loading data into Data Target
o Accessing a query
o For drilling down in a query, etc
198. What is a MultiProvider and how do we use MultiProvider?
· A MultiProvider combines several InfoProviders for reporting purposes. It can bring together data from ODS objects, InfoCubes, InfoSets, InfoObjects, etc. in any combination; the data stays in the underlying providers and is combined (unioned) at query runtime.
199. What are scheduled and monitored data loads?
· Scheduling a data load means setting the load to run at a particular date and time; it is done from the Schedule tab of the InfoPackage. Data loads are monitored using the Data Load Monitor (transaction RSMO).
PERFORMANCE TUNING
200. What data-load tuning can one do?
· Load balance on different servers
· Indexes on source tables
· Use fixed length files if data is loaded from flat files and put the file on the application server
· Use content (SAP delivered) extractor as much as possible
· Use “PSA and Data target in parallel” option in the InfoPackage
· Start several InfoPackages parallel with different selection options
· Buffer the SID number ranges when a lot of data is loaded at once
· Load Master Data before loading Transaction Data
· Watch out for ABAP code in transfer and update rules – it can slow loads considerably (see the sketch below)
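A minimal sketch of the usual fix for slow routine ABAP: read the lookup table once per data package into a hashed internal table instead of issuing one SELECT per record. DATA_PACKAGE is the standard work table in BW 3.x update rules; the table ZMAT_PRICE and the fields MATERIAL/ZZPRICE are hypothetical:

  * Start routine: one database round trip per data package
  TYPES: BEGIN OF ty_price,
           matnr TYPE matnr,
           price TYPE p DECIMALS 2,
         END OF ty_price.
  DATA: gt_price TYPE HASHED TABLE OF ty_price WITH UNIQUE KEY matnr,
        gs_price TYPE ty_price.

  SELECT matnr price FROM zmat_price
    INTO CORRESPONDING FIELDS OF TABLE gt_price.

  * Loop over the package: cheap hashed read per record
  FIELD-SYMBOLS: <ls_rec> LIKE LINE OF data_package.
  LOOP AT data_package ASSIGNING <ls_rec>.
    READ TABLE gt_price INTO gs_price
         WITH TABLE KEY matnr = <ls_rec>-material.
    IF sy-subrc = 0.
      <ls_rec>-zzprice = gs_price-price.
    ENDIF.
  ENDLOOP.

The hashed table turns each per-record lookup into a constant-time read, which is where most of the load-time saving comes from.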
201. What are the general tuning guidelines?
· Archive and delete old data
· Use line item dimensions for large dimensions
· Use MultiProviders to parallel query on the basic InfoCubes
· Use the BW statistics InfoCube to monitor performance
· Reporting authorizations slow the performance
· Web reporting is faster than BEx reporting
· Use the aggregate hierarchies to minimize the roll-up time
· Use parallel upload and activation of ODS objects
· Disable the BEx reporting flag if ODS is not used for reporting
202. Ways to improve the performance
· Spread characteristics over as many dimensions as possible, so that each dimension table stays small.
· Create aggregates
· Check and define Line Item Dimension
LO Extraction
203. What is the specific advantage of LO extraction over LIS extraction?
a. The load performance of LO extraction is better than that of LIS. In LIS, two tables are used for delta management, which is cumbersome; in LO, only one delta queue is used.
204. What is Logistic Cockpit (LC)?
a. It is a technique to extract logistics information (transaction LBWE) and consists of a series of standard extract structures delivered in the Business Content.
205. What is the significance of setup tables in LO extractions?
· Setup tables hold the historical documents for the initialization/full load: LO extractors cannot read the application tables directly, so the setup tables are filled first (via the OLI*BW transactions) and the init/full load reads from them; deltas then come from the delta queue.
206. What is Delta Management in LO?
a. It is the mechanism by which document changes in LO are captured and passed on: changes are written via the update process (e.g. V3 or queued delta) into the delta queue (RSA7), from which BW pulls the deltas.
207. Suppose we perform an LO extraction using the V3 update and this update method causes problems. Can we change to an alternative update method?
a. Yes – in LBWE the update mode can be switched to one of the alternatives (Direct Delta, Queued Delta, Unserialized V3 Update). LO is the recommended extraction mechanism for logistics; SAP no longer recommends LIS extraction.
CO-PA
208. What is partitioning characteristic in CO-PA used for?
· For easier parallel search and load of data.
209. What is the advantage of BW reporting on CO-PA data compared with directly running the queries on CO-PA?
· BW has a clear performance advantage over reporting in R/3. For large data volumes, the R/3 reporting tools are at a serious disadvantage, because R/3 is modeled as an OLTP system and is good for transaction processing rather than analytical processing.
210. Can we extract hierarchies from R/3 for CO-PA?
· No, we cannot; there are no hierarchies in CO-PA.
211. Explain the field name for partitioning in CO-PA.
· CO-PA partitioning is used to decrease the package size (e.g. by company code).
212. What is t-code for CO-PA?
· KEB0
213. What is operating concern in CO-PA?
· An organizational structure that combines controlling areas together in the same way as controlling areas group companies together.
214. What is field partitioning in CO-PA?
· It internally allocates space in the database. If the needed data resides in one or a few partitions, only those partitions are selected and examined by the SQL statement, significantly reducing I/O volume.
215. Is CO-PA delta capable?
· Yes, CO-PA is delta capable.
216. What are operating concern and partitioning in CO-PA?
· An operating concern is the set of characteristics based on which we want to analyze the performance of the company. Partitioning divides the data into different datasets based on certain characteristics and enables parallel access to the data.
217. What is the difference between value fields and key figures in CO-PA?
· Value fields comprise the data that CO-PA receives from the various modules in R/3, whereas key figures are derived from these value fields.
GENERIC EXTRACTIONS
218. What are the steps in Generic extraction?
· Transaction RSO2:
o Select the DataSource type (transaction data, master data attributes or texts) and choose Create
o On the Create DataSource screen:
§ Choose an application component to which the DataSource is to be assigned.
§ Enter the descriptive texts. You can choose these freely.
§ Choose the extraction method (view/table, InfoSet query or function module – see the sketch below) and, for delta capability, choose Generic Delta.
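If a view or InfoSet query is not flexible enough, RSO2 also accepts a function module. A minimal sketch modeled on SAP's delivered template RSAX_BIW_GET_DATA_SIMPLE; the function name Z_BIW_GET_DATA_ORDERS, the source table ZTAB_ORDERS and the extract structure ZOE_ORDERS are hypothetical:

  FUNCTION z_biw_get_data_orders.
  *  Interface as in template RSAX_BIW_GET_DATA_SIMPLE:
  *  IMPORTING  i_requnr, i_dsource, i_maxsize, i_initflag
  *  TABLES     i_t_select, i_t_fields, e_t_data STRUCTURE zoe_orders
  *  EXCEPTIONS no_more_data, error_passed_to_mess_handler

    STATICS: s_cursor  TYPE cursor,
             s_counter TYPE sy-tabix.

    IF i_initflag = 'X'.                  " first (init) call
      " nothing to do in this sketch; selections could be saved here
    ELSE.
      IF s_counter = 0.                   " first data package
        OPEN CURSOR WITH HOLD s_cursor FOR
          SELECT * FROM ztab_orders.      " hypothetical source table
      ENDIF.
      FETCH NEXT CURSOR s_cursor
        APPENDING CORRESPONDING FIELDS OF TABLE e_t_data
        PACKAGE SIZE i_maxsize.
      IF sy-subrc <> 0.                   " no more rows: end extraction
        CLOSE CURSOR s_cursor.
        RAISE no_more_data.
      ENDIF.
      s_counter = s_counter + 1.
    ENDIF.
  ENDFUNCTION.

The STATICS-held cursor is what lets BW call the module repeatedly, one data package of i_maxsize rows per call, until NO_MORE_DATA is raised.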
Process Chains
219. What process types fall under "load process and post-processing"?
a. InfoPackage
b. Read PSA and update data target
c. Save hierarchy
d. Update ODS data object
e. Data Export (Open Hub)
f. Delete overlapping requests
220. What are the data target administration tasks?
a. Delete Index (a sketch of dropping/rebuilding indexes programmatically follows this list)
b. Generate Index
c. Construct database statistics
d. Initial fill of new aggregates
e. Roll up of filled aggregates
f. Compression of InfoCube
g. Activate ODS
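The delete-index / generate-index steps around a large load can also be triggered from ABAP. A minimal sketch, assuming the standard BW function modules RSDU_INFOCUBE_INDEXES_DROP and RSDU_INFOCUBE_INDEXES_CREATE with an I_INFOCUBE parameter (the cube name ZSALESCUBE is hypothetical):

  REPORT z_reindex_cube.

  * Drop the secondary indexes of the fact table before a large load ...
  CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
    EXPORTING
      i_infocube = 'ZSALESCUBE'.

  * ... run the data load here (normally an InfoPackage in the chain) ...

  * ... and rebuild the indexes afterwards so queries stay fast.
  CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_CREATE'
    EXPORTING
      i_infocube = 'ZSALESCUBE'.

Loading without indexes and rebuilding them once afterwards is usually much cheaper than maintaining the indexes row by row during the load.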
221. What are the parallel processes that could have locking problems?
a. Hierarchy attribute change run
b. Loading master data for same InfoObject
c. Rolling up for same InfoCube
d. Selective deletion from an InfoCube / ODS while loading in parallel
e. Activation or deletion of ODS object when loading parallel
222. How would you convert an InfoPackage group into a process chain?
a. Double click on the InfoPackage group
b. Click on the “Process Chain Maintenance” button
c. Type in the name and description. Individual InfoPackages are inserted automatically
223. What is a process chain and how you use it?
a. A process chain is a sequence of processes that are scheduled to wait in the background for an event. Some of these processes trigger a separate event that can, in turn, start other processes.
b. In one of our scenarios we wanted to upload the wholesale-price InfoObject, which holds the wholesale price for all materials, and then load the transaction data. While loading the transaction data, the update rule populated the wholesale price via a lookup on this InfoObject's master data table. This dependency – first upload master data, then transaction data – was enforced through the process chain. A chain can also be started programmatically, as sketched below.
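A minimal sketch of starting a chain from ABAP, assuming the standard BW API function module RSPC_API_CHAIN_START with I_CHAIN/E_LOGID parameters (the chain name ZPC_MD_THEN_TD is hypothetical):

  DATA: l_logid TYPE rspc_logid.

  * Kick off the process chain and remember its log ID for monitoring
  CALL FUNCTION 'RSPC_API_CHAIN_START'
    EXPORTING
      i_chain = 'ZPC_MD_THEN_TD'    " hypothetical process chain
    IMPORTING
      e_logid = l_logid.

  WRITE: / 'Process chain started, log ID:', l_logid.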
224. What is a Meta Chain?
a. Process chains which are clubbed together are called a Meta Chain. Each sub chain is triggered only when the previous Process Chain is successful.
225. What is a process chain? How many types are there, and how are they used in real-time scenarios?
· Process chains define interdependent processes – data loading, InfoCube compression, index maintenance, master data & ODS activation – with the best possible performance and data integrity. Process chains live in the Administrator Workbench; using them we can automate the complete ETL process and schedule and monitor all activities (transaction RSPC).
226. Process chains: how do you schedule a chain to run daily?
· In the chain's start process, choose Direct Scheduling, set the start date and time under Change Selections, and flag the job as a periodic job with period "Daily".
ASAP
227. What are the project phases in ASAP?
a. Project Preparation: We do a conceptual review at this initial phase.
b. Business Blueprint: We collect functional specs and conduct a design review.
c. Realization: All the Developmental activities are done during this phase and we have a configuration review.
d. Final Preparation: All the QA and other final activities are done before moving to production. We do a performance review.
e. Go-Live and Support: Move to production and provide support.
228. What is the ASAP methodology?
a. ASAP is a standard methodology for efficiently implementing and continually optimizing SAP software. ASAP supports the implementation of the R/3 System and of mySAP.com components, and can also be used for upgrade projects. It provides a wide range of tools that help at all stages of an implementation project, from project planning to the continual improvement of the SAP system. The two key tools in ASAP are: the Implementation Assistant, which contains the ASAP Roadmap and provides a structured framework for your implementation, optimization, or upgrade project; and the Question & Answer database (Q&Adb), which allows you to set your project scope and generate your Business Blueprint using the SAP Reference Structure as a basis.
FI-SL
229. 4 Functions to Update data in the FI-SL Special Purpose Ledger:
a. Validation
b. Substitution
c. Ledger Selection
d. Transfer of Fields
230. 4 Operations for FI-SL Data:
a. Currency Translation
b. Balance Carryforward
c. Allocation
d. Roll Up Ledgers
231. 5 tables that are created when an FI-SL table group is created:
a. Summary Table (...T)
b. Actual Line Item Tables (…A)
c. Plan Line Item Tables (…P)
d. ObjectTable_1 (Object / Partner) (…O)
e. Optional ObjectTable_2 (Movement Attribute) (…C)
BW V3 update – which collective-run report serves which application:
RMBWV302 Purchasing
RMBWV303 Inventory
RMBWV304 Shopfloor controlling
RMBWV305 Quality
RMBWV308 Shipment
RMBWV311 Sales
RMBWV312 Shipping
RMBWV313 Billing
RMBWV317 Notifications
RMBWV318 Notifications
RMBWV340 Retailing
RMBWV343 POS - cashier
RMBWV344 POS - Sales Receipt
RMBWV345 Agency business
RMBWV346 GTM
BEx Reporting
232. What are Structures?
a. They are a combination of characteristics and key figures (basic key figures, calculated key figures (CKF) and restricted key figures (RKF)).
233. What does the term CELL mean?
a. In the Defining Exception Cells function, a CELL is the intersection of two structure elements.
234. 4 Different types of Variables:
a. Text
b. Formula
c. Hierarchy
d. Hierarchy Node Variables
o Variable Hierarchy Node with a fixed Hierarchy
o Variable Hierarchy Node with a variable Hierarchy
235. 5 Variable processing types:
a. User Entry / Default Value
b. Replacement Path
c. Authorization
d. Customer Exit
e. SAP Exit
236. When we run the query, wrong data comes up for one key figure at query runtime. What might be the problem?
a. If the key figure is a calculated key figure (CKF), check the formula. If it is a regular key figure, check the respective values in the InfoProvider. If the values there are incorrect, check the update rules and transfer rules. If they are correct in the InfoProvider, check the source system tables; the values may be captured incorrectly at the source system level itself.
237. When we run the query, one key figure's values are not displayed at runtime. What could be the problem?
· Check whether the InfoProvider has values for it. If yes, check the formula in the case of a CKF, or check the selection parameters/filters etc. in the query.
238. How do you suppress the unit from being displayed on a BEx report?
· Go into the query where your key figure is.
· Right click on key figures in the column section.
· Select New Formula
· Name the new formula, e.g. Actual Qty.
· In the function column expand Data Functions and select "value without dimension".
· NODIM() will appear in the formula; click your key figure Actual Qty in the operand list to place it inside the brackets (see the example after these steps).
· Hide the original key figure in the query and display this new, unit-free one.
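For illustration (the key figure name is hypothetical), the finished formula body reads simply NODIM('Actual Qty'); NODIM strips the unit/currency from the value, so the new column displays the quantity without its unit.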
239. I have a requirement where, in the BEx report, I have to display the date when the report was run, i.e. the current date. Do I have to use VB code for that, or is there a simpler solution? Any suggestions?
· You can use a variable (processing type: SAP exit) for this, i.e. 0DAT (current calendar day, which is picked up from the system date).
· I'm not sure if you have to do anything in BW for that, but on my reports, after they have been run, I go to the File menu, choose Page Setup, then Header/Footer, click the Custom Footer button, and select the date and time to appear in the right section at the bottom of the page. I've also saved the workbook as such, and the date remained there.
· Have you tried using text elements in the report? If you click the "Layout" button, you can select "Display text elements" and then "General". This puts a number of elements into the report (workbook); the one you're looking for is a text element called "Last refreshed". It records the last time the report was refreshed (i.e. the last time it was run).
240. Assume there are 5 years of data to be loaded, and as per the client requirement we need a key figure in the report. How do we get historical data for that key figure? Is it possible to load only the current data instead of the historical data for a key figure?
· When you load, just load all 5 years. If you want to see only current data in the key figure, create a restricted key figure (RKF) restricted to the current fiscal year or posting period.
241. When to report from R/3 and when from BW?
· Report from R/3 for operational needs based on current transaction data. The optimum use of BW is to analyze the data from a tactical and strategic point of view.