Q1: What is the transaction name for the SAP NetWeaver Performance Trace? The transaction name is /IWFND/TRACES.
Q2: How many days does a trace remain in the system? Once written, a trace expires after two days and is deleted by a cleanup batch job of SAP NetWeaver Gateway. You can select trace entries and change their expiry dates to suit your own requirements.
Q3: Will a trace be written even if the OData request is terminated with an error? Once the OData request has been passed to the SAP NetWeaver Gateway framework (after a successful logon), a trace is written; it contains only the entries and times up to the location of the error. Therefore, you can also use the trace to check how far your request was processed within the framework. In this case, you can click Error Log to navigate directly to the error log.
Q4: How can I display the traces of different OData requests? Select the relevant trace entries in the trace overview and click Display selected entries.
Q5: What do Duration (ms) and Net Time (ms) mean? Duration (ms) is the duration in milliseconds of the current call, including its sub-calls. Net Time (ms) is the real processing time of the current call itself. The difference between these times is the processing time of the sub-calls; the sub-calls have a higher level than the current call. For example, if a call has a Duration of 100 ms and its sub-calls take 60 ms in total, its Net Time is 40 ms.
Q6: How can I display the source code of a call? Select a trace entry and click Source Code. A new SAP GUI window opens and displays the source code located in the SAP NetWeaver Gateway hub system or in the SAP Business Suite backend system. The field Location identifies the system where the current call is processed.
Q7: I can see the RFC and network overhead in the trace summary. How can I determine this overhead time in the trace details? Look for the entry “Call Backend - <RFC Destination>”. The net time of this entry is the RFC and network overhead of that RFC call.
Q8: How is the duration computed when Multi-Origin with parallel processing is used? In the trace summary you see an entry with the operation name “Multi-Origin – Parallel”. This entry contains the times of the READ_ENTITYSET entry with the largest sum of RFC and Network Overhead, GW Backend Framework Overhead, and Application Time.
Q9: How are the durations computed when $batch is used with at least SP 08 in both the SAP NetWeaver Gateway hub and the backend system? In the trace summary you see an entry with the operation name “PARALLELIZE_QUERIES”. This entry contains the times of the READ_ENTITYSET entry with the largest sum of RFC and Network Overhead, GW Backend Framework Overhead, and Application Time. The SAP NetWeaver Gateway backend framework time in this entry is the sum of the most expensive READ_ENTITYSET and a small amount of internal parallelization overhead.
Q10: Can I replay a request from within a performance trace? An OData request can be a retrieving or modifying operation with or without a request payload (application data), but the performance trace has to be fast and therefore does not store the request payload. Consequently, a direct replay from within a performance trace is not available.
What you can do, however, is activate the Payload Trace (available from SP 07 onwards) to store all OData requests for which you want to measure the performance. After that, deactivate the payload trace, activate the performance trace, and perform the replays from within the payload traces.
Q11: Is the performance trace integrated in Solution Manager? Yes, the performance trace can also be activated by the HTTP Plugin (SAT Trace flag) and the performance trace summary will be displayed in Solution Manager.
Q12: Can I somehow see the processing times in the gateway framework and provider application without having activated the performance trace?
Yes, this is available from SAP NW Gateway 2.0 SP08 onwards: the so-called Performance Statistics can be requested by an OData consumer. See the chapter about Performance Statistics in Some new features in SAP NW Gateway 2.0 SP08 for more details.
In this document I will explain how to trigger a BW process chain from ECC, for example to update a particular report. I hope it helps anyone who has to perform this process in the future.
According to the requirements, the user, after executing certain transactions in ECC, should be able to trigger a load in BW and see the reports updated with this new information.
To do this, we will create an ad-hoc transaction in ECC (ZBW_UPDATE_REPORTS) with an associated program (ZBW_TRIGGER_EVENT). This transaction displays a dynpro where the user can select the business area he wants to update. After selecting the area, the user presses the "Execute" button and the program is launched. The program calls a BW function module (ZFM_TRIGGER_EVENT), passing as a parameter the event associated with the selected area. Finally, this function module triggers the event (ZECC_EVE01), which starts the appropriate load chain (TR_DT).
Step 1. Create the transaction in SE93. This transaction will be associated with the program that we will create in the next step.
Create Dialog Transaction:
Step 2. Create an executable program in transaction SE38. In this program the user will select the area of reports that he wants to update.
Note: SLBWD200 is the name of our SAP BW system.
Fill the Text Symbols and Selection Texts:
Step 3. The dynpro looks like this:
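The report behind this selection screen could be sketched roughly as follows. This is only an illustrative sketch: the parameter names, text symbol, and the importing parameter name IV_EVENTID of ZFM_TRIGGER_EVENT are assumptions; the event names and the RFC destination SLBWD200 are taken from our example and will differ in your system.

REPORT zbw_trigger_event.

* Selection screen: one radio button per business area
SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
PARAMETERS: p_treas RADIOBUTTON GROUP ar DEFAULT 'X',  " Treasury
            p_conso RADIOBUTTON GROUP ar.              " Consolidation
SELECTION-SCREEN END OF BLOCK b1.

DATA: lv_event TYPE btceventid.

START-OF-SELECTION.
* Map the selected area to the BW event to be raised
  IF p_treas = 'X'.
    lv_event = 'ZECC_EVE01'.
  ELSE.
    lv_event = 'ZECC_EVE02'.
  ENDIF.

* Call the remote-enabled FM in the BW system via the RFC destination
  CALL FUNCTION 'ZFM_TRIGGER_EVENT'
    DESTINATION 'SLBWD200'
    EXPORTING
      iv_eventid = lv_event.

  WRITE: / 'Event', lv_event, 'raised in BW'.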
Step 4. Create an event in transaction SM64 for each of the different loads we want to trigger. In our case: ZECC_EVE01 (for the Treasury area) and ZECC_EVE02 (for the Consolidation area).
Step 5. Associate the corresponding event with each chain. In our example we schedule the Treasury load chain (TR_DT) to run after the event ZECC_EVE01.
Note: it is important to schedule and activate the chain and to check “Periodic job”.
Step 6. Create the function module (in our case ZFM_TRIGGER_EVENT). This FM must be flagged as remote-enabled, and it calls the standard function BP_EVENT_RAISE, which raises the event received as a parameter.
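A minimal sketch of the function module could look like this. BP_EVENT_RAISE and its EVENTID parameter are standard; the importing parameter name IV_EVENTID and the error message are illustrative assumptions.

FUNCTION zfm_trigger_event.
*"----------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(IV_EVENTID) TYPE  BTCEVENTID
*"----------------------------------------------------------------------
* Raise the background processing event; any process chain scheduled
* to start after this event will then begin to run.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid       = iv_eventid
    EXCEPTIONS
      bad_eventid   = 1
      eventid_missing = 2
      raise_failed  = 3
      OTHERS        = 4.
  IF sy-subrc <> 0.
*   Illustrative error handling; adapt to your own message class
    MESSAGE e001(00) WITH 'Could not raise event' iv_eventid.
  ENDIF.
ENDFUNCTION.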
Step 7. Finally, right after executing the transaction on the ECC side, the BW process chain is triggered:
This document will help you to understand the various options available under Export and Import in HANA Studio. This is based on the current design and may be subject to change in future releases of HANA Studio.
Here it goes...
Why do we have Export and Import?
When the user creates information models, tables, or landscapes and wants to move them to a different system (new or existing), instead of recreating everything in the target he can simply export them and import them into the target system, reducing the effort. This is similar to the transport of objects in BW terms. This mechanism also supports importing metadata from other systems.
Where to access this?
Export and Import can be accessed from Quick Launch->Content, through the File menu, or through the right-click context menu of tables.
What are the options available in Export & Import?
Export options under SAP HANA Content:
Delivery Unit: a single unit to which multiple packages can be mapped and which can be exported as one entity, so that all the packages assigned to the Delivery Unit are treated as a single unit. The user can use this option to export all the packages that make up a delivery unit, and the relevant objects contained in them, to a HANA server or to a local client location. This is a kind of “Transport Request” in BW terms.
Developer Mode: This option can be used to export individual objects (Views) to a location in the local system. This can be used only in rare cases.
SAP Support Mode: this can be used to export objects along with their data for SAP support purposes, when requested. For example, if a user creates a view that throws an error which he is unable to resolve, he can use this option to export the view along with its data and share it with SAP for debugging purposes.
Tables: this option can be used to export tables along with their content.
Import options under SAP HANA Content:
Delivery Unit: an exported Delivery Unit can be imported either from the HANA server or from a local client location.
Developer Mode: to import previously exported views from the local client location.
Mass Import of Metadata: this option is used to import metadata (table definitions) from SAP ERP source systems into HANA using the SAP Load Controller, if the user uses Sybase Replication Server for data provisioning.
Selective Import of Metadata: this is similar to the above, but in this case SAP BO Data Services is used for data provisioning.
Import options under SAP HANA Studio:
Landscape: To import the exported landscape in the target system
Tables: To import the exported tables into target system
How to make use of these options?
Delivery Unit:
A Delivery Unit should have been created by the user before it can be used. A Delivery Unit can be created through Quick Launch->Setup->Delivery Units…
Once the Delivery Unit has been created and the packages are assigned to it, go to Quick Launch->Content->Export->Delivery Unit and select the Delivery Unit. The user can see the list of packages assigned to it.
The Delivery Unit can be exported either to a HANA server location or to a local client location by selecting the corresponding radio button.
The user can restrict the export through “Filter by time”, which means that only views modified within the specified time interval will be exported.
Select the Delivery Unit and Export Location and then Click Next->Finish. This will export the selected Delivery Unit to the specified location
Developer Mode:
Go to Quick Launch->Content->Export->Developer Mode->Next
Select the views to be exported (the user can select individual views or groups of views and packages), select the local client location for the export, and click Finish.
SAP Support Mode:
Go to Quick Launch->Content->Export->SAP Support Mode->Next
Select the view that needs to be debugged by SAP Support. This will export the view along with the table data it refers to. It is exported directly to the HANA server “backup” location.
Landscape: this is already discussed in another document, as mentioned earlier.
Delivery Unit:
Go to Quick Launch->Content->Import->Delivery Unit->Next and select the Delivery Unit (from the HANA server or from the local client).
Here, the user can select “Overwrite inactive versions”, which means that any inactive versions of objects (from a previous import) will be overwritten. If the user selects “Activate objects”, all imported objects are activated automatically after the import; the user does not need to trigger the activation manually for the imported views.
Developer Mode:
Go to Quick Launch->Content->Import->Developer Mode->Next
Browse to the local client location where the views were exported and select the views to be imported (the user can select individual views or groups of views and packages), then click Finish.
The user can select “Overwrite Existing Objects” to overwrite already imported objects, if any.
Mass Import of Metadata:
Go to Quick Launch->Content->Import->Mass Import of Metadata ->Next and select the target system
Configure the System for Mass Import and click Finish
Selective Import of Metadata:
Go to Quick Launch->Content->Import->Selective Import of Metadata ->Next and select the target system
Select the source connection of type “SAP Applications”. Remember that a DataStore of type SAP Applications should already have been created. Click Next.
Now select the tables (the user can also search for tables here) and the target schema into which the metadata will be imported, and click Next.
Validate if needed and click Finish
Importing landscapes and tables works in a similar way, so it is not discussed here.
You "may" expect a provision to import data directly from a flat file (Excel or CSV) through this import option into HANA in future releases of HANA Studio.
I hope this document helps you to understand the Import and Export features of HANA Studio.