The SAP HANA data model is still evolving. What’s in store for InfoCubes, and what’s the status of the LSA architecture? In the future, will we see more reporting directly from SAP ERP on SAP HANA?
Comerit’s Dr. Bjarne Berg took questions from our audience to discuss the emerging best practices for SAP BW on HANA data models and reporting.
Dr. Berg is an author, consultant, professor, and long-time speaker at our BI conferences. He is presenting SAPinsider's upcoming SAP BI seminar on dashboards and reporting and is co-author, with IBM’s Penny Silvia, of SAP HANA: An Introduction (2nd edition) from SAP PRESS.
Molly Folan, SAPinsider Events: Welcome, Dr. Berg!
Dr. Berg: Before we start today's session, let me give some reflections on the need for BW InfoCubes.
Do We Need InfoCubes?
Currently, there is significant debate on Internet blogs and forums concerning whether InfoCubes are needed with an SAP HANA system. However, for the interim period, InfoCubes are needed for several reasons.
First, transactional InfoCubes are needed for Integrated Planning (IP) and write-back options. InfoCubes are also needed to store and manage noncumulative key figures, and the direct write interface (RSDRI) only works for InfoCubes. In addition, the transition from SAP NetWeaver BW to SAP HANA is simplified by allowing customers to move to the new platform without having to rewrite application logic, queries, MultiProviders, and data transformations from DSOs to InfoCubes.
However, the continued use of InfoCubes has to be questioned. The introduction of the star schema, snowflakes, and other dimensional data modeling (DDM) techniques in the 1990s reduced costly table joins in relational databases, while avoiding the data redundancy of data stored in first normal form (1NF) in operational data stores (ODSs).
Because SAP HANA’s in-memory processing removes the disk-based relational database, most of the benefits of DDM are moot, and continued use of these structures is questionable. In the future, we may see multilayered DSOs with different data retention and granularity instead. But, for now, InfoCubes will serve a transitional data storage role for most companies.
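As a rough illustration (the tables and columns here are invented, not from any real system), compare the join work a star schema requires with a scan of a single flat table in a columnar, in-memory store:

```sql
-- Star schema: the fact table carries surrogate dimension keys, so a
-- business question needs runtime joins to the dimension tables.
SELECT d.material_group, SUM(f.revenue) AS total_revenue
FROM   fact_sales   f
JOIN   dim_material d ON d.dim_id = f.material_dim_id
GROUP  BY d.material_group;

-- In-memory columnar store: only the referenced columns of one wide
-- table are scanned, so the join overhead the star schema was designed
-- to avoid largely disappears.
SELECT material_group, SUM(revenue) AS total_revenue
FROM   sales_flat
GROUP  BY material_group;
```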
Thanks,
Dr. Berg
Mariano Filiberto: Dear Dr. Berg,
Thanks for the opportunity to discuss this topic! It is a pity that it was not included in the SAP BI conference. It is a major topic that deserves more than one hour, but let's start somewhere. I am looking forward to your view/update on the evolution of the LSA model/framework.
Some generic questions:
- LSA++?
- Is the distinction between the data warehouse layer and the reporting layer still valid in BW on HANA?
- How do we deal with the impact of BW on HANA on an LSA project that is already running?
- Reporting on BW, or directly on ECC/CRM on HANA?
More detailed questions to come.
Dr. Berg: Hi Mariano,
Many organizations have created a Layered Scalable Architecture (LSA) in their SAP BW systems. This is mainly done to partition and isolate the data: to preserve the original data, isolate transformations, scale the system through partitioning, provide query performance, and allow companies to create 'corporate memory' areas of old data that can be offloaded to technologies like NLS.
While many of these concepts are still valid, there is significant overhead to this approach. First, when creating the system as part of a project, it involves many layers and takes a very long time to develop.
I recently worked on moving a 40 TB BW system to HANA that was using LSA. Almost all data flows (85) had all LSA layers and were also partitioned into 7 geographical areas. This resulted in over 40 InfoProviders for most data flows (3,000+ overall). It took forever to develop any new content using this approach.
Second, any change, such as adding a field to an existing data flow, required 40 changes to the InfoProviders and the same number of changes to all the data loads between the layers. It was simply not sustainable from a TCO standpoint.
With LSA++, we are removing layers and simplifying the architecture to take advantage of the inherent performance of HANA. For example, in the BW-to-HANA migration above, we could remove 31 of the 40 InfoProviders in a typical data flow and almost 2,500 InfoProviders in the whole system. So there are significant benefits, not only for new development but also for the total cost of ownership of the existing architecture.
While not required, I recommend that almost all companies plan for a gradual changeover to the simplified LSA++ model with 3 layers instead of the traditional 7, and that InfoCubes be partitioned only when they are extremely large, have very high load frequency, or can be offloaded to NLS.
This is not easy, requires a lot of work, and should probably be done post-migration to HANA for most companies. But it allows for much simpler systems that take full advantage of HANA.
You can read more on traditional LSA at Juergen Haupt’s great blog and see a great presentation by Juergen on BW on HANA LSA++.
Thanks,
Dr. Berg
Mariano Filiberto: Dear Dr. Berg,
Thanks for your answer.
If I understood you correctly (and I also read this somewhere on your site), first you need a technical migration, and then, in order to take full advantage of HANA, a functional migration.
In the example that you mentioned (a big company with a full LSA, 3,000 cubes and more), it means that first you need to put a lot of money into the technical part (on your current database), and when the functional part is completely done (after the conversion from LSA to LSA++), I assume a lot of the in-memory footprint is reduced...
Then how do you justify all the money that you put into HANA for BW when, after the functional project, part of it is no longer used? (Answer from SAP -> for future growth / answer from senior management -> how much money, time, and ROI?)
Is there any approach for dealing with this chicken-and-egg situation, given how expensive HANA is? Any methodology or best practice to follow? Or your advice?
Kind regards,
Mariano
Dr. Berg: Hi Mariano,
Great question... What we do first is a BW clean-up. This includes a 12-step program with lots of removal of unneeded data and temp tables in BW (see chapter 5 in my HANA book from SAP PRESS for all the transactions and the step-by-step approach).
After we have a smaller system (normally 20-30% smaller), we then implement NLS to keep the volume down and save thousands on hardware and licensing costs. For example, the 40 TB HANA migration I mentioned above has another 72 TB of data in NLS.
Then only the 'smaller' system is moved. However, as you correctly observed, if the LSA++ remodeling were done prior to the move of BW to HANA, you could probably save even more...
And yes, getting a BW HANA project off the ground in large companies can be challenging. The planning steps can be many and you often have to convince the organization that the system can actually be migrated.
We created a short movie that shows the five steps for moving BW to HANA and published it this spring (14,000+ people have seen it so far). It shows the most critical pre-migration steps that you should take before you start your project and also demonstrates some of the key tools available from SAP to find out what else needs to be done.
It is very useful for those in the planning stages of a BW to HANA migration. You can read the blog and view the demo here.
Mariano Filiberto: But in order to convert to LSA++ I need to be on HANA first.
LSA++ is the revamp of LSA for BW on HANA -> meaning I need to be on HANA first to be able to reduce from 7 to 3 layers -> again, chicken and egg.
If I understood your answer correctly the steps are:
- BW cleanup (12 steps + chapter 5 of your HANA book)
- NLS (if required)
- Technical migration to HANA.
- Functional migration -> conversion from LSA to LSA++ (usage of HANA-optimized DSOs and, if required, HANA-optimized cubes)
...and after you see what you really need, pay SAP the license cost of BW on HANA :)
(last step is also optional)
Kind regards
Mariano
Dr. Berg: Hi Mariano,
That is funny :-) ... But you can actually start the LSA simplification in the Oracle DB for BW before the migration starts.
Some are actually copying the BW system, keeping a cloned set of the delta queues (PCA tool), then modifying the LSA in the copy, and finally migrating the copy.
This is very unusual, but technically possible if you need to save some HANA bucks :-)
Thanks,
Dr. Berg
Jeyakumar Gopalan: Hello Dr. Berg,
First of all, I am a huge fan of yours. I always follow your lr.edu homepage whenever I get a chance.
I have a couple of questions for you:
1. We are using snapshot scenario for our materials movement inventory reports in BW. What are the possibilities to implement snapshot scenario ideally in HANA?
2. It would be great if we have RDS on Inventory Management for BW. Any idea when it will be GA? We have non-cumulative key figures sitting around the inventory InfoCube and so I would like to see how HANA is addressing this.
Thanks,
Jeyakumar Gopalan
Dr. Berg: Hi Jeyakumar,
Yes, you have to plan a little around noncumulative key figures in cubes like inventory.
Because SAP HANA loads the initial noncumulative, delta, and historical transactions separately, two DTPs are required for InfoCubes with noncumulative key figures (i.e., inventory cubes).
In this case, one DTP is required to initialize the noncumulative data, and one is required to load data and historical transactions (for more details, see SAP Notes 1548125 and 1558791).
Also, traditional InfoCubes with noncumulative key figures can only be converted to SAP HANA-optimized InfoCubes if they aren’t included in a 3.x data flow. Because manual intervention and DTP changes are needed, inventory cubes and cubes with noncumulative key figures should always be tested in a sandbox, or development box, before being converted in production systems. Alternatively, these InfoCubes can be left in a non-converted status.
You can also see SAP Guidance on how to handle InfoCubes for Inventory.
Thanks,
Dr. Berg
Margie Evitts: Hello Dr. Berg,
Thanks for the opportunity for discussion and information. A few initial questions:
1) Is there an opportunity to 'sidecar' BW at a database level or is BW an all or nothing on HANA?
In other words, can we move one solution area from our current BW to a HANA DB (essentially splitting our current database so that some tables were on the current DB and some were on HANA) or would we have to build an entire BW on that HANA DB and repoint everything for that one solution area to that entire BW on HANA?
2) Is HANA modeling in BW really only necessary as an additional tool in the box for specific solutions?
By that I mean, do you reap the performance benefit of HANA on data loads and conventional reporting for previously deployed solutions without any modeling or changes? Or do you really not see a vast improvement unless you re-model within HANA - moving your logic closer to the data, for example?
Warm regards,
Margie
Dr. Berg: Hi Margie,
I have two clients who are using this "brown-field" approach. They have installed a clean BW on HANA box and have then moved only those areas they want into the new HANA system.
My two clients doing this simply decided that they had too much 'junk', developed over the last 10-12 years, that they did not want to bring forward into the new box.
This also meant that the BI (BOBJ) 4.0 environment was really the integrator, since it points both to the 'old' BW on Oracle and to the new BW on HANA. What they have in the interim period, before they redevelop all the content in the HANA system, is basically a Federated Data Warehouse (FDW).
This is not usual and may cost a bit more in the transition phase. All of the other six HANA projects I am involved in are simply doing a technical migration of BW to HANA and then doing the fixes and remodeling afterwards.
Another idea: MCOS and MCOD
Also, you could move the whole BW and additionally model data in a non-BW system on the same hardware. These options are called Multiple Components One Database (MCOD) and Multiple Components One System (MCOS). Let me explain.
MCOD
When you run multiple software applications on a single database, this is known as a Multiple Components One Database (MCOD) configuration. Note that this is not the same as having multiple databases on one hardware appliance. An MCOD system simply refers to having multiple software applications on one database. Naturally, in MCOD mode, SAP supports custom-developed data marts working alongside other SAP HANA components. However, you can also run any of the following components together on a single database:
- SAP NetWeaver BW on SAP HANA
- SAP Finance and Controlling Accelerator for the material ledger
- SAP ERP Operational Reporting with SAP HANA
- SAP Finance and Controlling Accelerator: Production Cost Planning
- SAP Rapid Marts
- SAP CO-PA Accelerator
- SAP Operational Process Intelligence
- SAP Cash Forecasting
- SAP HANA Application Accelerator/Suite Accelerator
- Smart Meter Analytics
If you’re running your SAP HANA system as MCOD, you have to consider that all backup, recovery, and failovers now pertain to all applications. You can’t back up a single application. So any restarts, failovers, and restores now impact all components on the database.
You’re also sharing the system resources and will have to size your SAP HANA system accordingly. From an administration standpoint, the software applications are managed individually in their respective interfaces, while the database is managed as a central unit through standard SAP HANA database administration functions.
Finally, if you’re planning to use MCOD with SAP NetWeaver BW as one of the software components, you should also study special considerations for this MCOD scenario in SAP Note 1666670.
MCOS
With Multiple Components One System (MCOS), you can also install multiple SAP HANA databases on a single SAP HANA hardware appliance. This is sometimes referred to as a multi-SID (system ID) configuration.
This is an evolving capability, and there are some limitations in what SAP supports from a nontechnical standpoint. For example, if you buy a single-node SAP HANA box, you’ll have a single install of the SAP HANA database on the system. However, you can also install an additional database on this node, and SAP will support you as long as it is in a nonproduction system.
Currently, SAP will not support MCOS if you move this configuration to a production system, but as long as you keep it in development, sandbox, testing, and training environments, you have support. This is likely to change over time, so consult with SAP before attempting this in the future.
If you choose to run MCOS on a single node, nonproduction box, each database is managed individually in Studio, while the hardware is shared. This may be a cost-effective solution for smaller organizations who simply want a small sandbox or training environment, without having to buy another SAP HANA appliance. Just make sure you have the system sized accordingly because the databases may be competing for the same system resources.
So, Margie, lots of details, but this should give you an idea of the basic concepts of FDW (sidecar BW), MCOD, and MCOS solutions.
Hope it helps…
Dr. Berg
DennisGaule: Dr. Berg,
Thank you for this opportunity to speak with you.
From a strictly business standpoint, the Rapid Deployment Solution 'content areas' appear to be somewhat similar to the current BW extractors. Do you anticipate that SAP might gradually transition the current BW extractors to the Rapid Deployment Solution 'content areas'?
Thank You,
Dennis Gaule
Dr. Berg: The Rapid Deployment Solutions deliver 'content areas' which are, in a way, similar to the current BW extractors: they allow for a functional view of the information, rather than a technical table view. So although the triggers work purely table-based, you don't need an understanding of the actual tables to get data from ECC into HANA with LT RepServer. That is, as long as there is a 'content area' available for the content you are interested in.
Assets are not yet in scope... So let's hope SAP will add this area soon in a standard delivery. See more here.
The RDS is more of a point-to-point solution for those who have a specific need, those who do not have BW, or those who want to get their HANA projects off the ground quickly with minimal work.
I think the RDS will live side by side with BW for quite some time, but the question, as you correctly observed, is: "Will BW become a collection of RDSs in the future?" I don't believe so; it would be like having a collection of data marts instead of a real EDW with centralized master data and hierarchies. So I guess the discussion is like the DDM vs. EDW debate we had in the past.
Thanks,
Dr. Berg
Ram: Dr. Berg,
Thank you for this opportunity to talk to you.
What is the best combination (Version, SP, etc) we need to use for the BW on HANA and front end with Business Objects?
Regards,
Ram
Dr. Berg: Hi Ram,
Technically, you can migrate a BW 7.3 SP5 system, but a higher SPS is strongly recommended.
For BOBJ 4.0, you get access to HANA data using a variety of interfaces, depending on the tool (e.g., DB SQL for BOE, BICS for Analysis, MDX for Excel, and ODBC/JDBC for any tool). So, all tools in BOBJ can access HANA in BI 4.0.
Thanks,
Dr. Berg
Dr. Berg: Hi Andrea,
There are many tasks and technical requirements to migrate a BW system to HANA. This includes BW technical settings and software levels required to start the process, basis checks for support packs, ABAP/JAVA stacks, Unicode, BW releases, and add-ons to your system.
An automated BW pre-migration check program is found in SAP Note 1729988, and the tool provides automatic check programs for both the 3.5 and the 7.x versions of BW.
You can read more about this critical tool that everyone should run before starting their HANA project (even in the planning phase) at this link and also see a few screenshots on how it works.
Thanks,
Dr. Berg
KevinJoyner: Dr Berg,
Any thoughts on the recently announced Diablo MCS technology that exposes flash as main memory via the system bus? Wondering if we might see this included on future HANA platforms as a replacement for the SSD hardware layer.
Regards,
Kevin
Dr. Berg: Hi Kevin,
It is a bit faster, but frankly way too costly right now to replace the existing hardware layer. Also, I have not seen any reliability studies for extremely large warehouses where hundreds of memory banks are needed.
So, not in the near future, but a possibility perhaps....
Dr. Berg
Rahul: Hi Dr Berg,
Quick question on the effective usage of SPOs when we are talking about migrating our existing SAP BW environment to BW on HANA.
As of now, we do not have SPOs in our BW (7.30 SP05) environment for any of the functional areas, and data is distributed across 3 main geos. Certain cubes in the FI, SD & APO areas are huge, so we need to know how SPOs will help in our endeavor to move to HANA. As seen in some previous forums/discussions, SPOs in HANA might be as good as creating separate cubes for historical data based on fiscal year.
We also need to consider the maintainability of so many objects under an SPO, as we already have a layered architecture in place with cubes distributed across 3 geos.
So, please suggest whether the use of SPOs will help, or should we think of moving ahead with physical partitioning of the cubes on time characteristics?
Thanks!
Rahul
Dr. Berg: Hi Rahul,
Back when BW was on Oracle, I strongly recommended SPOs and reference structures as the core partitioning option. However, having seen the new BW DSO and InfoCube partitioning tool that SAP is developing for HANA, I am not so sure.
It is important to note that this tool is not yet released, but from early tests at a couple of my clients, we are very impressed. So, for now, use SPOs instead of physical partitions, with hints on MultiProviders, but that may change very soon.
Take a look at the tool in this blog (towards the end of the blog).
Thanks,
Dr. Berg
Oscar Romero: Hi Dr. Berg.
Is it possible to combine common BW models with ERP-HANA new models?
For example, will it be possible to unify, via a MultiProvider, a BW InfoCube with a star-schema model available in ERP on SAP HANA?
Greetings,
Oscar
Dr. Berg: Hi Oscar,
There are many ways to do that. You could create a remote cube in the BW HANA system and then join it into a MultiProvider. You could create a universe in BOBJ and then join the data there.
You could even use a BEx query (available on ERP 6.0 SP5) and use it as a source.
So lots of options, but none that are considered 'best practice' yet...
Thanks,
Dr. Berg
DennisGaule: Hi Dr. Berg,
How would you approach new BW development at a client who does not currently plan on using HANA? What changes in approach or architecture do you think would help position that client for the future?
Thank You,
Dennis Gaule
Dr. Berg: Hi Dennis,
I would avoid implementing the LSA and also avoid excessive partitioning of DSOs and InfoCubes unless absolutely needed. I would plan and implement NLS for old data to keep the active system small.
I would look for opportunities where reportable DSOs can replace InfoCubes and use those instead (there will be some performance issues with large data volumes in a BW-on-Oracle EDW).
Those are some quick ideas that come to mind...
Thanks,
Dr. Berg
Ken Murphy: Dr. Berg, thank you for taking these questions. I’m wondering, could you highlight a few of the key changes that we’ll see when using the HANA Modeler? Are there new skills that will be required for a BI team when making the switch to HANA?
Dr. Berg: Hi Ken,
For all practical purposes, the modeling tools in BW are relatively unchanged. However, in all projects I am involved with, I always include at least 1-2 days of workshops for the developers to get them up to speed on the new HANA capabilities.
I also strongly recommend the HA100 and BW on HANA training classes for all developers. This helps ensure that they do not carry 'old' modeling techniques into HANA.
Thanks,
Dr. Berg
Molly Folan: The "basics" question I have is on the differences between the Information Modeler tool and the more technical HANA Modeler. And does the tool being used have any impact on modeling decisions and/or report design?
Dr. Berg: Hi Molly,
The Information Modeler is more for non-BW HANA systems, while regular modeling in BW on HANA is still done in the traditional BW interfaces. The Information Composer modeling tool is for power users.
We show each of these tools in detail, with step-by-step scenarios, in chapters 7-9 of our SAP HANA book from SAP PRESS, if you want to learn how to use them.
Thanks,
Dr. Berg
Oscar Romero: With BW on HANA, would it now be "good" practice to model a navigational attribute on a line-item InfoObject, given that performance will not be compromised?
Dr. Berg: Hi Oscar,
Not really best practice, but you are right in your observation that the negative performance impact of this type of design is much smaller than in a traditional database.
Thanks,
Dr. Berg
Molly Folan: Dr. Berg, thanks again for joining us today!
It looks like you have had some questions come in to you directly as well as through today's Q&A thread.
Thanks again for all your answers to all of the questions...
Dr. Berg: Hi Suresh,
You can continue using existing standard InfoCubes that don’t have the SAP HANA-optimized property, or you can convert them.
The core of the new SAP HANA-optimized InfoCube is that when you assign characteristics and/or key figures to dimensions, the system doesn’t create any dimension tables except for the package dimension.
Instead, the master data identifiers (SIDs) are simply written in the fact table, and the dimensional keys (DIM IDs) are no longer used, resulting in faster data read execution and data loads. In short, dimensions become logical units instead of physical data tables. The logical concept of “dimensions” is used only to simplify the query development in BEx Query Designer. The InfoCubes can be optimized from the standard SAP NetWeaver BW administration interface or from a program delivered by SAP.
Because the physical star-schema table changes during the SAP HANA optimization, any custom-developed program that accesses InfoCubes directly instead of going through standard interfaces must be rewritten. However, because very few companies have ventured into this area, the optional conversion will have little impact on most organizations except for providing faster InfoCube performance.
Converting Existing InfoCubes
To convert existing InfoCubes, simply go to the program RSDRI_CONVERT_CUBE_TO_INMEMORY and select the InfoCubes you want to convert. The job is executed in the background as a stored procedure and is extremely fast. Typically, you can expect 10–20 minutes even for very large InfoCubes with hundreds of millions of rows. During the conversion, users can even query the InfoCubes. However, data loads must be suspended. Currently, traditional InfoCubes with a maximum of 233 key figures and 248 characteristics can be converted to SAP HANA-optimized InfoCubes.
After the conversion, SAP HANA-optimized InfoCubes are maintained in the column-based store of the SAP HANA database and are assigned a logical index (CalculationScenario). However, if the InfoCubes were stored only in SAP NetWeaver BW Accelerator (BWA) before the conversion, the InfoCubes are set to inactive during the conversion, and you’ll need to reactivate them and reload the data if you want to use them.
Although SAP HANA-optimized InfoCubes can’t be remodeled, you can still delete and add InfoObjects using the InfoCube maintenance option, even if you’ve already loaded data into the InfoCube.
Because SAP HANA-optimized InfoCubes have only one fact table, instead of the two fact tables of traditional InfoCubes (E-tables with read-optimized partitioning and an F-table with write/delete-optimized partitioning), the star schema is significantly simpler and also more in line with classical logical data warehouse designs based on Ralph Kimball’s dimensional modeling principles. This fact table simplification, combined with the removal of physical dimension tables, also results in two to three times faster data loads.
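To make the simplification concrete, here is a rough sketch (all table and column names are invented for illustration, not the actual BW-generated objects) of what query access looks like before and after:

```sql
-- Traditional InfoCube: the fact table stores DIM IDs, so reaching
-- the master data SID requires an extra hop through a dimension table.
SELECT SUM(f.amount)
FROM   f_sales      f
JOIN   d_material   d ON d.dimid = f.key_materialdim
JOIN   s_material   s ON s.sid   = d.sid_material;

-- SAP HANA-optimized InfoCube: the SID is written directly into the
-- single fact table, so the dimension-table hop disappears.
SELECT SUM(f.amount)
FROM   f_sales_hana f
JOIN   s_material   s ON s.sid = f.sid_material;
```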
Dr. Berg: Hi Vamu,
You do not need to optimize DSOs (it is optional). For existing DSOs, you can either convert them automatically using Transaction RSMIGRHANADB, or you can convert them manually in the Data Warehousing Workbench. This migration doesn’t require any changes to process chains, MultiProviders, queries, or data transformations.
The new SAP HANA-optimized DSOs execute all data activations at the database layer, instead of the application layer. This saves significant time in data loads and process chains, making data available to users much faster.
Behind the scenes, SAP HANA maintains a future image of the recently uploaded data stored in a column table called the activation queue. The current image of the current data is stored in a temporal table that contains the history, main index, and delta index. Finally, to avoid data replication, the change log is now kept in the calculation view instead of a physical table. Because log data doesn’t have to be written to disk at this stage (in traditional SAP NetWeaver BW, this data is written to a log table in a relational database), this new SAP HANA approach is much faster and also consumes less storage space. So, while logically the activation process is very similar to the current relational tables in SAP NetWeaver BW, the technical approach is quite different.
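To make that concrete, here is a conceptual sketch in SQL (table and column names are hypothetical): activation becomes a set-based overwrite by key executed at the database layer.

```sql
-- Conceptual sketch with hypothetical names. The "future image" in the
-- activation queue overwrites the active data by primary key, which
-- maps naturally to a set-based UPSERT inside the database:
UPSERT dso_active
SELECT doc_number, calday, amount
FROM   dso_activation_queue
WITH PRIMARY KEY;

-- Once activated, the queue can be cleared.
DELETE FROM dso_activation_queue;
```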
The data loads are also positively impacted. By not being constrained by I/O writes and reads to and from disks (data is loaded in-memory instead) and by using the new optimized approach to internally generated keys (SIDs) to take advantage of the storage methods in SAP HANA, the migrated SAP NetWeaver BW system on SAP HANA typically sees two to three times faster data loads overall. For many companies, this will be reason enough to make the transition to SAP HANA. For more information see SAP Note 1646723.
It is also important to note that as of BW 7.3 SP10, the optimization of DSOs is no longer needed, so if you apply the latest SPS, you can save yourself some work.
Thanks,
Dr. Berg
Dr. Berg: Hi Suresh,
No, you no longer have to compress fact tables in HANA.
Let me explain: After the optimization of SAP HANA InfoCubes, you can no longer partition the fact tables semantically, and you don’t need to. However, there are still four partitions behind the scenes. The first partition is used for non-compressed requests, while the second contains the compressed requests. Another partition contains the reference points of the inventory data, and yet another is used for the historical inventory data movements. The last two partitions are empty if their noncumulative key figures aren’t used.
However, the first two partitions still require periodic compression to reduce the physical space used and improve load performance during merge processing (very much like traditional SAP NetWeaver BW maintenance).
This has only a minor impact on small InfoCubes (less than 100 million records) and InfoCubes without significant data reloads or many requests. Because the compression is also executed as a stored procedure inside SAP HANA, the compression is very fast and should take no more than a few minutes even for very large InfoCubes.
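For context, the merge processing mentioned above is HANA's delta merge, which can also be triggered manually with a single SQL statement (the fact-table name below is a made-up example, not a real BW object):

```sql
-- Illustrative only (hypothetical fact-table name). A delta merge
-- folds the write-optimized delta store into the read-optimized
-- main store of a column table; BW triggers this internally.
MERGE DELTA OF "SAPSR3"."/BIC/FSALES";
```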
Thanks,
Dr. Berg
Dr. Berg: Hi Sebastian,
To check and make sure that your BW models are performing well, you can keep monitoring the Database Administration Cockpit in HANA. This is used to manage and monitor the underlying relational or HANA-based database and is available for organizations on SAP NetWeaver 7.3 SP5 and higher.
The cockpit is available under the transaction code DBACOCKPIT, but requires that the security authorizations S_TCODE and S_RZL_ADM are assigned to your role. You can read more about this and see some screenshots at a blog I wrote a few weeks ago at this link.
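If you prefer SQL, much of the same information is exposed through HANA's monitoring views. A quick sketch (the schema name 'SAPSR3' is an assumption; substitute your own BW schema):

```sql
-- List the 20 largest column-store tables by memory footprint
-- (schema name is an assumption; use your BW schema).
SELECT   table_name,
         record_count,
         ROUND(memory_size_in_total / 1024 / 1024) AS memory_mb
FROM     m_cs_tables
WHERE    schema_name = 'SAPSR3'
ORDER BY memory_size_in_total DESC
LIMIT    20;
```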
Thanks,
Dr. Berg
Dr. Berg: Hi Raja,
Yes, many developers are not aware that after migrating existing DSOs and InfoCubes to HANA, the ETL may, in some specific data transformation cases, actually run slower in HANA.
Depending on your developers, some of the custom transformations may contain sub-optimal ABAP code that impacts how SAP BW ETL performs after the migration to HANA.
To find these slow ETL areas that can cause problems, Marc Bernard at SAP has created a tool called ZBW_ABAP_ANALYZER. It is attached to SAP Note 184743, and you can see it in action here.
I suggest you run this before you move your InfoCubes or DSO models into the production environment, but I would focus on those that are already exhibiting very slow run times (most ETL will run fine in HANA; it is only an issue for large process chains with lots of sub-optimal lookups and data transformations).
Thanks,
Dr. Berg
Dr. Berg: Hi David, Hi Joe,
Yes, load balancing your BW InfoCubes and DSOs in a scale-out HANA system is a bit of a new area for SAP. Let me walk you through the new options...
When working with very (very) large BW models in HANA, there can be sub-optimal performance if they are not managed correctly. For example, in a 'scale-out' SAP HANA environment, there are several servers hosting InfoCubes and DSOs. Some of these can be so large and accessed so frequently (by thousands of users) that bottlenecks occur, even in HANA. To fix this, you can partition large tables and move the partitions across servers to provide load balancing for your DSOs, tables, and InfoCubes.
While not available to everyone yet, SAP has a new tool under development to help you do this for SAP BW. I wrote a blog on this a couple of weeks ago, and you can read more and see screenshots of the tool in action at the link below (scroll down to the end of the blog).
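Under the hood, this corresponds to standard HANA SQL partitioning operations. As a rough sketch (table name, column, and host:port are placeholders I made up, not output from the tool):

```sql
-- Placeholders only: hash-partition a large fact table so its rows
-- can be distributed across the scale-out landscape.
ALTER TABLE "SAPSR3"."/BIC/FSALES"
  PARTITION BY HASH ("KEY_SALESP") PARTITIONS 4;

-- Move one partition to another node to balance the load.
ALTER TABLE "SAPSR3"."/BIC/FSALES"
  MOVE PARTITION 2 TO 'hanahost02:30003';
```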
Thanks,
Dr. Berg
Dr. Berg: Hi Joe,
I would not use T-shirt models, rules of thumb, or the QuickSizer to size a BW on HANA system for those who already have a BW system.
A few months ago, SAP released an ABAP-based tool that generates a significantly better sizing report. This program is attached to SAP Note 173697, and you can read more about it in a blog I wrote back in March.
Thanks,
Dr. Berg
Dr. Berg: Hi David,
Frankly, that is a great question: Is BW dead or will all reporting be moved back to ERP on HANA?
I have a client in Houston who just went live with ERP on HANA, and they do not want to implement BW at all, nor any EDW. Instead, they are using operational reporting in ERP on HANA and creating reporting tables in ERP to maintain historical reporting for areas where they have time-dependent master data.
This is a very unusual approach, and the jury is out on how it will work in practice. Anyway, I posted some thoughts on this earlier this year, and you can read a longer response here.
Thanks,
Dr. Berg
Dr. Berg: Hi Ramesh,
Yes, writing a BW to HANA migration workplan can be a lot of work, especially if you are new to this technology and while 'best practices' are still emerging.
I get this question so often that I wrote a list of the 76 steps that should be included in your BW to HANA migration plan and published it to help you get started. Here is a link with most of the core steps already included.
Thanks,
Dr. Berg
Molly Folan: We'll wrap up here.
Thanks to all who posted questions and followed the discussion!
Be sure to look for a transcript of all the discussion here in our BI-BW Forum and in the blog roll on Insider Learning Network. If you have registered for this Q&A, you’ll receive an email alerting you when the transcript is posted.
You’ll find plenty of details on Dr. Berg’s many activities – his blogs, the new HANA book – as well as his BI seminar, “Tips, techniques and tools for building reports and dashboards,” this fall in Chicago, Orlando, or Las Vegas.
If you're attending SAPinsider's Reporting & Analytics conference in Orlando this November, you'll also find a track on BI optimization, with a session dedicated to this topic.
And a big thank you, again, to Comerit’s Dr. Bjarne Berg for joining us.
Thanks, Dr. Berg, for taking the time to answer these questions today!
About Dr. Berg:
Dr. Bjarne Berg has extensive experience in implementing in-memory solutions for SAP analytics in Europe and the USA. His SAP PRESS book SAP HANA: An Introduction, coauthored with Penny Silvia, is now in its second edition. He is a frequent speaker at SAPinsider's conferences and is presenting the “Tips, techniques & tools for building reports and dashboards” seminar in Chicago, Orlando and Las Vegas this fall.