Channel: SCN : Discussion List - SAP HANA Developer Center
Viewing all 6412 articles

SLT vs SDA


Hi all,

 

I would like to ask about your experience with the performance of SAP HANA Smart Data Access (SDA).

The use case is the following: SAP ERP on HANA, and we need to access the SAP HANA Live views via SDA from another HANA instance.

That HANA instance will also receive data from other systems, which we need to join with the ERP data.

From a performance point of view, I would like to know whether SDA can be used for real-time reporting from another instance, or whether we should replicate the data from the ERP instance to that instance with SLT.

I have used SDA on a small scale and it seems to work fine, but I want to be sure about the performance on large data volumes (e.g. reporting on line items from the remote instance), and whether there is a great difference in performance compared to data replication with SLT.

 

Thank you in advance


Migrate ECC6.0 EHP7 on MSSQL to SAP S/4 HANA


Hi All,

I have SAP ECC 6.0 EHP7 on Windows 2012 with MSSQL 2012, and I need to migrate this system to SAP S/4HANA.

I have been going through the options for how to execute this and have a few queries.

What options do I have for the above activity?

1. The DMO option in SUM, as described in this link: http://scn.sap.com/docs/DOC-49580

2. Do I need a BODS or SLT server for this? (I don't have either available in my environment.)

I am still going through the documentation. I need some more clarity on the overall high-level steps, as the source system is on Windows and the target needs to be SLES 11 SP3.

Need some expert advice here.

Regards

Shontu

Populating a column with the result of a query from the same table


Hello good people,

 

I want to populate a column ("APR_REV") in my table using an SQL query on the same table. The query is:

 

select "REVENUE" FROM "MySCHEMA"."MYTABLE"

WHERE "Month" = 'APRIL' group by "ID", "REVENUE";

 

Note that "Month" stores strings.

 

I tried many different queries; the one below showed the most promise, as it reported rows affected in the result, but nothing is written to the column.

 

insert into "MySCHEMA"."MYTABLE" ("APR_REV")

select "REVENUE" FROM "MySCHEMA"."MYTABLE"

WHERE "Month" = 'APRIL' group by "ID", "REVENUE";

 

 

My question is: is it possible to do this? If yes, any guidance on how would be much appreciated, as I'm a bit of a novice with HANA and SQL.
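For what it's worth, a sketch of how this is usually done: an INSERT creates new rows, whereas filling a column on existing rows needs an UPDATE with a correlated subquery. This assumes "ID" identifies matching rows and that each ID has at most one APRIL row; adjust names to your actual table.

```sql
-- Sketch only: assumes "ID" uniquely matches the April row to the row
-- being updated, and that column names match your table.
UPDATE "MySCHEMA"."MYTABLE" t
SET "APR_REV" = (
    SELECT s."REVENUE"
    FROM "MySCHEMA"."MYTABLE" s
    WHERE s."ID" = t."ID"
      AND s."Month" = 'APRIL'
);
```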

How can I connect HANA with MS SSIS?


I want to insert data into HANA using SSIS. How can I connect HANA with SSIS?

SAP HANA exception handling in procedure with DDL


Hi all,

 

Please consider the following simple HANA stored procedure:

 

CREATE PROCEDURE SP_TEST ( )
    LANGUAGE SQLSCRIPT
    SQL SECURITY DEFINER
AS
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
        SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
    CREATE TABLE TEMP (COL1 INT);
    DROP TABLE TEMP;
    SELECT 9/0 FROM DUMMY;
END;

When this procedure is executed using CALL SP_TEST, the exception raised by the divide by zero operation is not handled, as per:

 

Started: 2015-11-13 13:13:57

 

Could not execute 'CALL SP_TEST' in 44 ms 429 µs .

 

[129]: transaction rolled back by an internal error: [129] "SP_TEST": line 12 col 2 (at pos 230): [129] (range 3): transaction rolled back by an internal error: division by zero undefined: at function /()

 

Now consider the following modified code:

 

CREATE PROCEDURE SP_TEST ( )
    LANGUAGE SQLSCRIPT
    SQL SECURITY DEFINER
AS
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
        SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
    --CREATE TABLE TEMP (COL1 INT);
    --DROP TABLE TEMP;
    SELECT 9/0 FROM DUMMY;
END;

When this procedure is executed in the same fashion, the exception is handled correctly, as per:

(screenshot of the handled exception output omitted)

 

From my testing and observation, it appears that once a DDL statement has been executed within a stored procedure, any subsequent exception will not be handled. If the exception is raised before the DDL statement executes, the exception is handled.

Local temporary tables do not appear to cause this issue, but global temporary tables do. And of course, DML statements also do not cause the issue.
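Based on that observation, here is a sketch of the same test rewritten with a local temporary table, which should leave the handler intact (untested beyond what is described above; behavior may differ by revision):

```sql
-- Same test as above, but the DDL operates on a local temporary table
-- (#TEMP), which per the observation does not disable the exit handler.
CREATE PROCEDURE SP_TEST ( )
    LANGUAGE SQLSCRIPT
    SQL SECURITY DEFINER
AS
BEGIN
    DECLARE EXIT HANDLER FOR SQLEXCEPTION
        SELECT ::SQL_ERROR_CODE, ::SQL_ERROR_MESSAGE FROM DUMMY;
    CREATE LOCAL TEMPORARY TABLE #TEMP (COL1 INT);
    DROP TABLE #TEMP;
    SELECT 9/0 FROM DUMMY;  -- should now be caught by the handler
END;
```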

 

This behaviour has been observed on SPS 8 Rev. 82.

 

Can anyone tell me if a) this behaviour is a product bug, and if so in what revision is it resolved, or b) this behaviour is by design, and if so why?

 

Thanks,

 

Chris.

Testing HANA-SQL outside HANA-studio


Hello,


At this moment we have converted our intake system to HANA as the primary database.

As long as HANA was our secondary database, we had HANA Studio at our disposal to try out the specific features of HANA SQL (e.g. CONTAINS, substring and concatenation in WHERE expressions, ...). We use the ADBC framework to execute the created statements; the studio only helps us develop them decently.

Apparently our database team does not want to give us access to HANA Studio once it is our primary database. The main reasons are the need to set up thorough authorizations and to set up the transport system.

Is there another way to try out and test these specific features of HANA SQL without HANA Studio?

 

Thanks for any input.


Kris

Database Triggers VS Procedures performance for specific column updates in HANA.


Hi all,

I wanted to know which method, triggers or procedures, would be more suitable for the requirement below. Please share your views.

Requirement: capture updates or inserts to specific columns of one table (the main table) in another table (a log table).

The information to be captured is: old value, new value, username, and date and time of the update or insert.

Important note: the main table is going to be huge; it may have approximately 10 to 20 million records after one year.
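For the trigger approach, a rough sketch of what such an audit trigger could look like. The table and column names (MAIN_TABLE, LOG_TABLE, STATUS, etc.) are hypothetical placeholders, and the exact trigger capabilities may vary by HANA revision:

```sql
-- Hypothetical schema: MAIN_TABLE(ID, STATUS),
-- LOG_TABLE(ID, OLD_VALUE, NEW_VALUE, USERNAME, CHANGED_AT).
-- Fires per row after an update and logs the change only when the
-- watched column actually changed.
CREATE TRIGGER TRG_LOG_MAIN_STATUS
AFTER UPDATE ON MAIN_TABLE
REFERENCING NEW ROW newrow, OLD ROW oldrow
FOR EACH ROW
BEGIN
    IF :oldrow.STATUS <> :newrow.STATUS THEN
        INSERT INTO LOG_TABLE (ID, OLD_VALUE, NEW_VALUE, USERNAME, CHANGED_AT)
        VALUES (:newrow.ID, :oldrow.STATUS, :newrow.STATUS,
                CURRENT_USER, CURRENT_TIMESTAMP);
    END IF;
END;
```

On very large, frequently updated tables, the per-row trigger cost applies to every write, which is worth measuring against a batch procedure before committing to either approach.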

HANA Live Content for QM


Hi Gurus,

 

I'm looking for HANA Live content related to ERP QM (Quality Management).

However, I can't find anything on SAP Help. I have searched in the following location:

SAP HANA Live for SAP ERP - SAP Library

If anyone knows where to find the content, please let me know.

 

 

 

Thanks,

Srikanth


UI Integration Services for HCP Trial account??


Hi Everyone,

 

I am currently using an HCP trial account and trying to configure the launchpad site, and I am getting the error below:

Error in OData response for GET "/sap/opu/odata/UI2/PAGE_BUILDER_CUST/Pages?$expand=PageChipInstances/Chip&$filter=catalogId%20eq%20'/UI2/FLPD_CATALOG'&$orderby=title": HTTP request failed

What I understood from searching the internet is that I need to grant privileges for SITE_DESIGNER via _SYS_REPO (which I think is not possible in trial accounts).

So my question is whether the UI integration services (launchpad site and tiles) are available for HCP trial accounts or not.

Can someone please let me know?

 

Best Regards,
Mahesh

Will problems with CDS be solved with SPS10?


Hello, I'm trying to build an hdbdd file on HANA developer edition SPS9, based on a previous database schema made with PostgreSQL.

I'm facing some problems, like the ones I show below, in getting exactly the same model I previously had:

 

1.- Default values for date fields:

 

date: UTCDateTime default CURRENT_UTCTIMESTAMP;

 

Error while activating /pruebas/modelo.hdbdd:

"CURRENT_UTCTIMESTAMP" is a reserved SQL KEYWORD, and cannot be used as a name


 

2.- No boolean type.


 

3.- Association with default values


language_code: Association to modelo.language { code } not null default 'en'

 

Association type cannot be used as constants or default value


 

4.- Null default values


reference: String(50) default null;

 

Association type cannot be used as constants or default value

 

 

So my question is whether these issues will be solved in the new SPS10. I've read that the default values for dates will be... but what about the others?

If not, could someone suggest an alternative for these problems?

 

Thank you very much.

 

Best regards,

 

Luis

 

 



 





SAP DBTech JDBC: [2048]: column store error: search table error: [16] An operation has failed on a file (create, delete, copy, move, ...)


Hi,

I am experiencing the following error message:

 

SAP DBTech JDBC: [2048]: column store error: search table error:  [16] An operation has failed on a file (create, delete, copy, move, ...)

 

Unfortunately I am not able to find anything on this error message except an entry about a TREX error from 2005:

 

0016 an operation has failed on a file (create, delete, copy, move...) This error occurs when the system tries to delete an index that is locked by another process. Solution: 1. Stop the TREX Services. 2. Delete the index directory and the entry in bartho.ini,TREXIndexServer.ini and in TREXTcpipClient.ini manually.

 

The error occurs during the following query:

 

(screenshot of the failing query omitted)

 

In contrast, the same query executes without a problem when BKPF is replaced with BSEG (which is about 4 times the size of BKPF).

It is a multitenant system, and BKPF and BSEG are views referring to tables in another tenant. Querying the single tables/views works without a problem, so I assume it is not a permission issue. The problem seems to occur especially when BKPF is joined with the third table.

Other joins, e.g. BKPF with /COCKPIT/THDR, or the third table with BSEG, work without a problem.

SplitApp - fakeOS mobile mode, master view not hiding


I have created a SplitApp and am testing it on Android and in Chrome on Windows, with the URL containing sap-ui-xx-fakeOS=android.

The master view is shown, and when I select something from the master, the detail page loads in the background, but I am not able to see it.

The master view is not hiding.

I tried setting the SplitApp mode to HideMode, ShowHideMode and PopoverMode, but no luck.

Could you please suggest something?

How can I connect SQL Server BI tools to HANA? I'm getting ODBC error with v38


Is there a trick to getting the SQL Server BI tools to connect to HANA? I was trying to use SQL Server 2012 Reporting Services and SQL Server Data Tools to build reports, and I'm getting a consistent failure when setting up a data link to the ODBC driver. I'm using v38 of the client tool, x64 version.

I first set up a User (or System or File) DSN using the ODBC Data Source Administrator; no problems connecting via that tool using server:port imdbhdb:30015. Clicking Connect prompted me for user and password, connected without SSL, and showed a "Connect successful!" message.

When I go into SQL Server Data Tools or use the SQL Server 2012 Report Builder to create a data link, this is where the problems start. The example I'll walk through here is creating a tabular data model for Analysis Services using SQL Server Data Tools:

  1. Create a Tabular model project.
  2. Choose the Model menu and select the Import from Data Source... command, which launches the Table Import Wizard.
  3. Choose the Others (OLEDB/ODBC) source, which uses the OLE DB provider for ODBC to make the connection.
  4. Click the Build... command to build the connection string.
  5. Select the HANA DSN, enter the user name and password, and click the Test Connection button.

This results in the very unexpected error:

Test connection failed because of an error in initializing provider. [Microsoft][ODBC Driver Manager] The specified DSN contains an architecture mismatch between the Driver and Application.

Any help would be greatly appreciated.

Regards,

Bill

Attribute View Filters by using IP


Team

 

 

My scenario is as below; please look into it and suggest an alternative approach.

I have a few dimension tables along with a fact table, so I can create an analytical view using the fact table, with attribute views (created from the available dimension tables) as dimensions. I need an input parameter in the analytical view, and I need to restrict the data by passing that input parameter value to one of the dimension tables.

Here is the issue: I am not able to pass an input parameter to an attribute view from the analytical view.

So right now I have created the analytical view using all the dimension tables directly in the data foundation, so that I can pass the input parameter value directly to one of the dimension tables to restrict the data.

Now I want to know whether there is any other way of doing this: instead of using all the tables directly in the data foundation of the analytical view, is there any alternative approach available? With the current approach I sometimes run into DB memory issues, which I would like to avoid.

 

 

Regards

Nr

Calling xsjs service from xsodata


Hi,

I have a requirement that when I populate the data in an SAPUI5 table using an OData service, I also need to write a log entry in the backend.

How can I achieve this, since the data population is being done by the OData service?

 

 

Regards,

Sid


SAP DBTech JDBC: [2]: general error: Shrink canceled, probably because of snapshot pages


Monthly we execute the following command in order to shrink the data volume, and normally it works: usage goes from 800 GB down to 350 GB.

ALTER SYSTEM RECLAIM DATAVOLUME 120 DEFRAGMENT;

Error message: Could not execute 'ALTER SYSTEM RECLAIM DATAVOLUME 120 DEFRAGMENT' in 8 ms 73 µs.

SAP DBTech JDBC: [2]: general error: Shrink canceled, probably because of snapshot pages

Once before I got the same message, and after a complete stop/start I could run the command and shrink the volume.

This time it did not work at all.

Does anyone have a comment or already know the solution?

 

Marcelo Ramos

BasisGBS@br.ibm.com


 

How to create a foreign key that references only one column of a composite primary key?


The last line of the script below causes: "* 365: no matching unique or primary key for this column list: line 1 col 13 (at pos 12)". (The same error occurs when I omit the ADD UNIQUE statement.)


The primary key is being created fine (confirmed in the trace log and PUBLIC.CONSTRAINTS; see below).

 

Please advise -- thanks in advance!

 

 

CREATE TABLE "PLN_CRASH"

(

   CRASH_ID varchar(35) NOT NULL,

   PROCESS_ID varchar(100) NOT NULL,

   OWNER varchar(100) NOT NULL,

   -- remaining columns redacted


);

ALTER TABLE "PLN_CRASH" ADD UNIQUE ("CRASH_ID");

ALTER TABLE "PLN_CRASH" ADD PRIMARY KEY ("CRASH_ID", "PROCESS_ID", "OWNER");


CREATE TABLE "PLN_CRASHED_URA"

(

   CRASH_ID varchar(35) NOT NULL,

   -- remaining columns redacted

);

ALTER TABLE "PLN_CRASHED_URA" ADD FOREIGN KEY ("CRASH_ID") REFERENCES "PLN_CRASH" ("CRASH_ID") ON DELETE CASCADE;


------------------

SELECT * FROM "PUBLIC"."CONSTRAINTS" WHERE TABLE_NAME = 'PLN_CRASH' returns:

 

 

 

SCHEMA_NAME,TABLE_NAME,COLUMN_NAME,POSITION,CONSTRAINT_NAME,IS_PRIMARY_KEY,IS_UNIQUE_KEY

<schema_name>,"PLN_CRASH","CRASH_ID",1,"<sys_tree_rs_id>_#2_#0","FALSE","TRUE"

<schema_name>,"PLN_CRASH","CRASH_ID",1,"<sys_tree_rs_id>_#0_#P0","TRUE","TRUE"

<schema_name>,"PLN_CRASH","OWNER",3,"<sys_tree_rs_id>_#0_#P0","TRUE","TRUE"

<schema_name>,"PLN_CRASH","PROCESS_ID",2,"<sys_tree_rs_id>_#0_#P0","TRUE","TRUE"
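For comparison, a sketch of the pattern that is usually expected to work: give CRASH_ID a named UNIQUE constraint inline at CREATE TABLE time, so the foreign key has a matching candidate key to reference. Whether this resolves the error may depend on the HANA revision; the constraint name is an arbitrary choice.

```sql
CREATE TABLE "PLN_CRASH"
(
   CRASH_ID   VARCHAR(35)  NOT NULL,
   PROCESS_ID VARCHAR(100) NOT NULL,
   OWNER      VARCHAR(100) NOT NULL,
   -- remaining columns redacted
   PRIMARY KEY ("CRASH_ID", "PROCESS_ID", "OWNER"),
   -- named unique constraint on the single column the FK will target
   CONSTRAINT "UQ_PLN_CRASH_ID" UNIQUE ("CRASH_ID")
);

CREATE TABLE "PLN_CRASHED_URA"
(
   CRASH_ID VARCHAR(35) NOT NULL,
   -- remaining columns redacted
   FOREIGN KEY ("CRASH_ID") REFERENCES "PLN_CRASH" ("CRASH_ID")
       ON DELETE CASCADE
);
```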

What are the ways to improve the Performance of View Output


Hi,

I would like some valuable suggestions from you experts. This is the first time I am working with millions of records.

I have 3 tables with 7 million, 1 million and 1000 records respectively.

I created a SQL script calculation view joining (INNER JOIN) these 3 tables, and on top of it I do some complex calculations (LEAD and LAG). I have confirmed myself that there are no unnecessary computations.

When I run this view (select * from myview) in hdbsql (PuTTY console) it takes 8.5 seconds, and in HANA Studio it takes 8.7 seconds.

When I expose it to the front-end tool (Yellowfin) over JDBC connectivity, loading the dashboards takes 30 to 35 seconds.

As per the customer's requirement, the time taken to load in the front end as well as in the backend is not acceptable.

How do we reduce the view output time, and moreover the dashboard loading time in the front end?

 

 

 

 

Thanks & Regards

Nagarjuna

Creating instance via XSDS


Hello,

 

I have these tables declared in my hdbdd file:

 

 

 

So, i want to create a new user:

 

 

but I get this error, and I have no idea why it throws that error for user_id.id but not for language_code.code:

 

 

Any idea? I checked that all the variables had the correct values...

 

thanks in advance.

 

Luis.

Data preview on intermediate nodes in calculation view


Guys,

 

As a developer, I've created a calculation view to join some _SYS_REPO tables. However, a 'normal' user cannot preview the data of the intermediate nodes. When I grant the user the system privilege DATA ADMIN, he can. Any suggestions for giving the user data preview rights without the ability to drop tables, execute DDL, etc. (as DATA ADMIN allows)?

 

Best regards.
