Channel: SCN : Discussion List - SAP HANA Developer Center

Linux_ODBC to HANA: Connection reset => TimeOut?


Hi there,


I am looking for experts, because I can't find anything about my problem on the internet.


I have a sequence of UPDATE commands which I send to the HANA database via linux_ODBC.

 

In most cases (not always) I get the following error after the second UPDATE command:

DBD::ODBC::st execute failed: [SAP AG][LIBODBCHDB SO][HDBODBC] Connection not open;-10807 System call 'recv' failed, rc=104:Connection reset by peer (SQL-08003)

 

If I execute the UPDATE in HANA Studio there is no problem, but it takes more than 5 minutes, which is OK for me but maybe not for the ODBC driver?

 

Is it possible that there is a timeout problem with linux_odbc?

If yes, do you know where I can change the timeout parameter?

If it's not a timeout problem, what else could be the reason that the connection gets reset?

 

Here is one of the problematic commands:

UPDATE table1 SET tparam=substr_regexpr('.*?\s(.+)\s' IN entry_id GROUP 1)

WHERE entry_id NOT LIKE_REGEXPR '^<.+>'


I know the regexp will slow it down, but faster string functions are not an alternative, because the table is still growing fast and I would face the timeout again eventually. Besides that, these UPDATE commands can be executed at night and run much longer than 5 minutes.
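For reference, the extraction the UPDATE is trying to do can be sketched outside the database. This is a minimal Python sketch, assuming the intended pattern is `.*?\s(.+)\s` (the portion between the first and last whitespace); the sample values are hypothetical:

```python
import re

# Pattern mirroring the substr_regexpr call, assuming the intended
# expression is '.*?\s(.+)\s' (text between first and last whitespace).
PATTERN = re.compile(r'.*?\s(.+)\s')

def extract_tparam(entry_id):
    """Return group 1 of the pattern, or None if it does not match."""
    m = PATTERN.match(entry_id)
    return m.group(1) if m else None

# Hypothetical entry_id value, for illustration only.
print(extract_tparam("abc some value here "))  # some value here
```

This only illustrates the pattern semantics; the in-database performance question is separate.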

 

Thank you and best regards,

Tobias


Regarding SAP HANA Cloud Platform with Eclipse (Kepler) 64-bit


Hi everyone

 

I followed every step given in this link (http://hcp.sap.com/developers/TutorialCatalog/nat201_1_native_hana_setup_eclipse.html) to establish the connection.

But when I tried adding the SAP Cloud Platform system from the Systems perspective, I got an error as shown in the attached Capture.jpg.

 

The error log shows the message: Connection to host 'hanatrial.ondemand.com' failed.

 

java.util.concurrent.ExecutionException: com.sap.jpaas.infrastructure.console.exception.CommandException: Failed to connect the tunnel
  at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
  at java.util.concurrent.FutureTask.get(Unknown Source)
  at com.sap.ndb.studio.common.CallableUtil.executeCallable(CallableUtil.java:62)
  at com.sap.cloud.tools.eclipse.hana.tunnel.ui.CloudSystemConnectionWizard$1.run(CloudSystemConnectionWizard.java:101)
  at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)
Caused by: com.sap.jpaas.infrastructure.console.exception.CommandException: Failed to connect the tunnel
  at com.sap.core.persistence.commands.tunnel.connection.DbTunnelManager.startTunnelClient(DbTunnelManager.java:75)
  at com.sap.core.persistence.commands.tunnel.api.CommandTunnelHandler.openTunnel(CommandTunnelHandler.java:148)
  at com.sap.cloud.tools.eclipse.hana.tunnel.ui.CloudSystemHelper.openTunnel(CloudSystemHelper.java:289)
  at com.sap.cloud.tools.eclipse.hana.tunnel.ui.CloudSystemHelper.addCloudSystem(CloudSystemHelper.java:343)
  at com.sap.cloud.tools.eclipse.hana.tunnel.ui.CloudSystemConnectionWizard$1$1.call(CloudSystemConnectionWizard.java:92)
  at com.sap.cloud.tools.eclipse.hana.tunnel.ui.CloudSystemConnectionWizard$1$1.call(CloudSystemConnectionWizard.java:1)
  at java.util.concurrent.FutureTask$Sync.innerRun(Unknown Source)
  at java.util.concurrent.FutureTask.run(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
  at java.lang.Thread.run(Unknown Source)
Caused by: com.sap.core.connectivity.tunnel.api.management.ConnectionFailedException: Unable to establish tunnel connection
  at com.sap.core.connectivity.tunnel.client.management.DirectTunnelOperatorImpl.connect(DirectTunnelOperatorImpl.java:36)
  at com.sap.core.persistence.commands.tunnel.connection.DbTunnelManager.startTunnelClient(DbTunnelManager.java:72)
  ... 10 more
Caused by: com.sap.core.connectivity.tunnel.core.handshake.TunnelHandshakeException: Invalid proxy response status: 407 Proxy Authentication Required
  at com.sap.core.connectivity.tunnel.client.TunnelClientHandshaker.messageReceived(TunnelClientHandshaker.java:126)
  at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
  at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
  at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
  at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:485)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
  at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
  at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
  at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
  at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
  at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
  at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
  at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
  at org.jboss.netty.handler.execution.MemoryAwareThreadPoolExecutor$MemoryAwareRunnable.run(MemoryAwareThreadPoolExecutor.java:622)
  ... 3 more

 

 

Kindly help me out.

How to enable and test the virus scanning feature for file upload to an ECM repository


We are building a feature in a HANA web application (on HCP) where it is required to prevent the upload of infected files to an ECM repository.

 

Regarding this I have the following Questions.

 

1. Is the configuration below sufficient to enable ECM virus scanning on a repository created at runtime?

        RepositoryOptions.setVirusScannerEnabled(true)

        Suggested by: https://help.hana.ondemand.com/javadoc/com/sap/ecm/api/RepositoryOptions.html

   Or do we need any other configuration or settings to enable virus scanning in my web application?

 

2. What is the actual Java class name of the virus scan exception? Which JAR file contains this class, and how do I obtain that JAR?

     Basically I need to catch this particular exception in a try-catch block and take the necessary action.

     The link below only says that if virus scanning detects a malicious file, the repository file upload will fail and a virus scan exception will be thrown, but it does not state the name of its Java class and package.

     https://help.hana.ondemand.com/help/frameset.htm?279edd108d4247d997bd932759f72b8d.html

   

3. Where do I find a simple test file with which I can test the virus scanning feature?

    How is a file classified as infected in terms of the HANA virus scanner infrastructure?

   Any sample web application and related spec would be of great help!

In SAP HANA SPS 09, does HANA XS have a built-in XML parser? If not, what's the approach to process XML in XS?


In HANA SPS 09, does HANA XS have a built-in XML parser? How can I pass an XML file to the HANA database?
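As background, the kind of processing being asked about can be sketched outside XS; this minimal Python sketch (the document and field names are hypothetical, for illustration only) turns an XML payload into rows that could then be inserted into the database through a client driver:

```python
import xml.etree.ElementTree as ET

# Hypothetical XML payload, for illustration only.
doc = "<rows><row id='1'>alpha</row><row id='2'>beta</row></rows>"

# Flatten the document into (id, text) tuples, i.e. rows ready for insertion.
rows = [(int(r.get("id")), r.text) for r in ET.fromstring(doc).iter("row")]
print(rows)  # [(1, 'alpha'), (2, 'beta')]
```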

How to create a folder in an XS project in HANA Eclipse?

Problem create a debug configuration for AMDP Procedures


Hello,


I have a problem with debugging an AMDP procedure.

Before I debug my procedure, I would like to create a new debug configuration --> SAP HANA Stored Procedure --> New.


When I do this, I get the following error message:


ErrorMessage.PNG


Can anyone help me understand why I cannot create a new debug configuration?



Thank you.


Julia

Bitwise operators in SQL!!?

Error in calculated column based on restricted measure in calculation view


Hello Experts,

 

I have one calculation view CV1. Here I have below restricted and calculated measures.

 

RM1 = 0AMOUNT (Restriction on Account Type)

RM2 = 0AMOUNT (Restriction on Employee Type)

 

CM1 = RM1 - RM2

 

Now when I select CM1 in a select statement like the one below, I get an error:

 

select dim1, sum(CM1) from CV1 group by dim1

invalid KF error.PNG

 

However, when I select the restricted measures as well, I get the result:

 

select dim1, sum(CM1), sum(RM1), sum(RM2)  from CV1 group by dim1

 

Does anybody know the fix for this error? Please post it here.

 

Thanks

 

Rakesh


Complex logistic regression models in HANA PAL


Hi,

 

I am trying to perform a logistic regression with SAP HANA PAL (without using an R server!) and need your help with two questions:

 

1.) Is there any possibility to add interaction terms to the model? I understand how to formulate models such as

 

glm(TYPE ~ X1+X2+X3, family=binomial(logit),data=dataset)

or

glm(TYPE ~ X1+X2+as.factor(X3), family=binomial(logit),data=dataset)

in HANA PAL but I am not able to calculate a model like:

glm(TYPE ~ X1*X2*X3, family=binomial(logit),data=dataset)


Is there an option that I did not find, or do I have to construct the independent variables myself (i.e. in this case additional columns with the independent variables multiplied)?
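If constructing the columns yourself turns out to be necessary, the preprocessing can be sketched like this in Python (column names hypothetical). Each pairwise product becomes an extra input column before the data is handed to PAL; note that a full X1*X2*X3 expansion in the R sense would additionally need the triple product:

```python
from itertools import combinations

def add_interactions(rows, cols):
    """Append pairwise products of the named numeric columns to each row."""
    out = []
    for row in rows:
        row = dict(row)  # copy, keep the original columns
        for a, b in combinations(cols, 2):
            row[f"{a}_X_{b}"] = row[a] * row[b]
        out.append(row)
    return out

# Hypothetical data; X1*X2, X1*X3 and X2*X3 become extra columns.
data = [{"X1": 2, "X2": 3, "X3": 5}]
print(add_interactions(data, ["X1", "X2", "X3"])[0])
```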

 

2.) For the "multiple linear regression" there is the parameter ADJUSTED_R2. There does not seem to be an option to return AIC or anything similar to evaluate the goodness of fit for the logistic regression. How can I get these statistics using only HANA PAL?

 

Thank you very much

SAP HANA Smart Data Access - realtime replication


Hello,

 

I tried to set up realtime data replication as shown in the video: SAP HANA Academy - Smart Data Integration/Quality : SAP ECC Replication [SPS09] - YouTube

I have connected a Microsoft SQL Server as a remote source via Smart Data Access and added a virtual table for replication.

Now I would like to create a flowgraph model for realtime data replication.

I have selected Flowgraph for Activation as the task plan and selected the virtual table as the data source. The target is a data sink (template table).

I have selected realtime behaviour in the container node as well as in the data source. The activation of the flowgraph model was successful.

If I try to call the created procedure to start the task plan, I get the error:

Could not execute 'call "MSSQL"."MSSQL::realtime_SP"' in 262 ms 426 µs .

[129]: transaction rolled back by an internal error:  [129] "MSSQL"."MSSQL::realtime_SP": line 5 col 1 (at pos 98): [129] (range 3): transaction rolled back by an internal error: sql processing error: QUEUE: MSSQL::realtime_RS: Failed to add subscription for remote subscription MSSQL::realtime_RS.Error: exception 151050: CDC add subscription failed: Unable to obtain agent name where remote source id = 153481

 

Is it possible to solve this issue?

Or is a running SAP HANA Data Provisioning Agent necessary for realtime replication?

 

Best regards,

Marc

HANA SQL mystery: UPDATE statement INSERTS new row


I am new to HANA and found a strange phenomenon, which I have simplified in the following code:

1.) HANA code that inserts a new row when an UPDATE statement is executed.

2.) HANA code that UPDATEs but does not INSERT.

3.) MySQL code that is close to 1.) but gives the result of 2.).


How can this be?

 

 

1.)

CREATE TABLE TEST1 ("ID" int);

INSERT INTO TEST1 VALUES (1);

INSERT INTO TEST1 VALUES (2);

 

ALTER TABLE TEST1 ADD  ("COLUMN_A" DOUBLE );

ALTER TABLE TEST1 ADD  ("COLUMN_B" DOUBLE );

 

CREATE TABLE TEST2 ("ID" INT,"COLUMN_A" INT);

INSERT INTO TEST2  VALUES (1, 99);

 

CREATE TABLE TEST3  ("ID" INT,"COLUMN_B" INT);

INSERT INTO TEST3  VALUES (2, -99);

 

SELECT * FROM TEST1;

SELECT * FROM TEST2;

SELECT * FROM TEST3;

 

UPDATE A  SET A.COLUMN_A  = B.COLUMN_A

FROM TEST1 AS A, TEST2 AS B

WHERE A.ID =  B.ID ;

 

UPDATE A  SET A.COLUMN_B  = B.COLUMN_B

FROM TEST1 AS A, TEST3 AS B

WHERE A.ID =  B.ID ;

 

UPDATE A  SET A.COLUMN_A  = B.COLUMN_A

FROM TEST1 AS A, TEST2 AS B

WHERE A.ID =  B.ID ;

 

SELECT * FROM TEST1;

 

The result of this Code is the following:

 

Unbenannt47.png

 

2.) Interestingly the following HANA Code does not produce this INSERT:

 

CREATE TABLE TEST1 ("ID" INT,"COLUMN_A" INT,"COLUMN_B" INT);

INSERT INTO TEST1 VALUES (1, 0, 0);

INSERT INTO TEST1 VALUES (2, 0, 0);

 

CREATE TABLE TEST2 ("ID" INT,"COLUMN_A" INT);

INSERT INTO TEST2  VALUES (1, 99);

 

CREATE TABLE TEST3  ("ID" INT,"COLUMN_B" INT);

INSERT INTO TEST3  VALUES (2, -99);

 

SELECT * FROM TEST1;

SELECT * FROM TEST2;

SELECT * FROM TEST3;

 

UPDATE A  SET A.COLUMN_A  = B.COLUMN_A

FROM TEST1 AS A, TEST2 AS B

WHERE A.ID =  B.ID ;

 

UPDATE A  SET A.COLUMN_B  = B.COLUMN_B

FROM TEST1 AS A, TEST3 AS B

WHERE A.ID =  B.ID ;

 

SELECT * FROM TEST1;

 

Unbenannt48.png

3.) I am used to this from MySQL. This code works fine and produces the desired result, although the steps are the same as in the first HANA code above that generates an INSERT:

 

CREATE TABLE TEST1 (ID int);

    INSERT INTO TEST1 VALUES (1);

    INSERT INTO TEST1 VALUES (2);

 

    ALTER TABLE TEST1 ADD  (COLUMN_A DOUBLE );

    ALTER TABLE TEST1 ADD  (COLUMN_B DOUBLE );

 

    CREATE TABLE TEST2 (ID INT,COLUMN_A INT);

    INSERT INTO TEST2  VALUES (1, 99);

 

    CREATE TABLE TEST3  (ID INT,COLUMN_B INT);

    INSERT INTO TEST3  VALUES (2, -99);

 

    UPDATE TEST1 ,TEST2 SET TEST1.COLUMN_A  = TEST2.COLUMN_A WHERE TEST1.ID =  TEST2.ID ;

 

    UPDATE TEST1 ,TEST3 SET TEST1.COLUMN_B  = TEST3.COLUMN_B WHERE TEST1.ID =  TEST3.ID ;

   

    UPDATE TEST1 ,TEST2 SET TEST1.COLUMN_A  = TEST2.COLUMN_A WHERE TEST1.ID =  TEST2.ID ;

 

    SELECT * FROM TEST1;

 

Unbenannt48.png

 

 

Does anybody have an idea why the first HANA code above does not work? It can produce very harmful results.

Error: [129]: transaction rolled back by an internal error: [129] transaction rolled back by an internal error: exception 141000274: exception 141000274: TRexUtils/ParallelDispatcher.cpp:275 message not found; $message$=


Hello Gurus,

 

I have a couple of calculation views in HANA and each of them has text fields (like Employee Name, Country Name etc.).

 

As part of my project requirement, I have to join those two views and get all the fields that exist in them.

 

When I do so, I get a weird error as shown below.

 

Error: [129]: transaction rolled back by an internal error:  [129] transaction rolled back by an internal error: exception 141000274: exception 141000274:

TRexUtils/ParallelDispatcher.cpp:275

message not found; $message$='TSR HTKD JFSDFM'

Please check lines: 59,

 

Upon researching further, I could see that the value 'TSR HTKD JFSDFM' is a value in the Employee Name text field.

 

I tried increasing the length of the field and changing the data types, but nothing has worked.

 

Could you please help me get this one resolved.

 

Thanks,

Raviteja

Does NGDBC driver support DatabaseMetaData.getImportedKeys()?


If I invoke DatabaseMetaData.getImportedKeys(), the result set is empty. Why?

 

If I run the JDBC driver query "SELECT PKTABLE_CAT, PKTABLE_SCHEM, PKTABLE_NAME, PKCOLUMN_NAME, FKTABLE_CAT, FKTABLE_SCHEM, FKTABLE_NAME, FKCOLUMN_NAME, KEY_SEQ, UPDATE_RULE, DELETE_RULE, FK_NAME, PK_NAME, DEFERRABILITY FROM SYS.P_IMPORTEDKEYS WHERE SYS.P_IMPORTEDKEYS.PKTABLE_SCHEM = 'mydb' ORDER BY PKTABLE_CAT, PKTABLE_SCHEM, UPPER(PKTABLE_NAME), KEY_SEQ", I receive an error message (SAP DBTech JDBC: [259] (at 202): invalid table name: Could not find table/view P_IMPORTEDKEYS in schema SYS). Can you help me, please?

 

Thank you!

Issue with multilevel partitioning HASH-Range


Hi Guys,

 

In our current architecture we have 15 nodes in our cluster. For big tables (14 billion rows) we have implemented partitioning.

 

Approach 1:

We created a hash partition on a key column with 60 partitions, so every HANA node has 4 partitions, and query execution time is 800 ms. With this methodology, whenever we execute queries, the entire partition gets loaded into main memory, using significant space.



Table create statement:



CREATE COLUMN TABLE "RAM"."Z_BIG_TRN_HASH" (
  "CAL_DATE" DAYDATE CS_DAYDATE,
  "DIM" VARCHAR(500),
  "BATCH_ID" INTEGER CS_INT,
  "MATRIX_ID" INTEGER CS_INT,
  "MATRIX_ACTUAL" DECIMAL(18,6) CS_FIXED,
  "HISTORY_ACTUAL" DECIMAL(18,6) CS_FIXED,
  "M1_PRED" DECIMAL(18,6) CS_FIXED,
  "M2_PRED" DECIMAL(18,6) CS_FIXED,
  "M3_PRED" DECIMAL(18,6) CS_FIXED,
  "M4_PRED" DECIMAL(18,6) CS_FIXED,
  "M1_IQRD" DECIMAL(18,6) CS_FIXED,
  "M2_IQRD" DECIMAL(18,6) CS_FIXED,
  "M3_IQRD" DECIMAL(18,6) CS_FIXED,
  "M4_IQRD" DECIMAL(18,6) CS_FIXED,
  "SEQ_ID" INTEGER CS_INT,
  "INSERT_TIMESTAMP" LONGDATE CS_LONGDATE,
  "CREATED_USER" VARCHAR(100),
  "GLBL_GRP" INTEGER CS_INT
) UNLOAD PRIORITY 5 NO AUTO MERGE WITH PARAMETERS ('PARTITION_SPEC' = 'HASH 60 DIM')

 

Approach 2:

So we came up with a HASH-Range partition.
We implemented a new approach (HASH-Range), creating multilevel partitioning (hash-range) on the main table.
We created 15 hash partitions on the key column and a range partition on month (calendar date), i.e. 60 monthly partitions for 5 years (2011-2015). With this new approach, each node has one partition on the key column and 60 monthly sub-partitions on the calendar date column, so in total we have 900 partitions of one table across 15 HANA nodes, and query time is 3.8 sec.

 

Table create statement:

 

CREATE COLUMN TABLE "RAM_AD"."Y_BIG_TRN_HASH_RANGE" (
  "CAL_DATE" DAYDATE CS_DAYDATE,
  "DIM_GRP_ID" VARCHAR(500),
  "BATCH_ID" INTEGER CS_INT,
  "METRIX_ID" INTEGER CS_INT,
  "METRIX_ACTUAL" DECIMAL(18,6) CS_FIXED,
  "HISTORY_ACTUAL" DECIMAL(18,6) CS_FIXED,
  "M1_PRED" DECIMAL(18,6) CS_FIXED,
  "M2_PRED" DECIMAL(18,6) CS_FIXED,
  "M3_PRED" DECIMAL(18,6) CS_FIXED,
  "M4_PRED" DECIMAL(18,6) CS_FIXED,
  "M1_IQRD" DECIMAL(18,6) CS_FIXED,
  "M2_IQRD" DECIMAL(18,6) CS_FIXED,
  "M3_IQRD" DECIMAL(18,6) CS_FIXED,
  "M4_IQRD" DECIMAL(18,6) CS_FIXED,
  "SEQ_ID" INTEGER CS_INT,
  "INSERT_TIMESTAMP" LONGDATE CS_LONGDATE,
  "CREATED_USER" VARCHAR(100),
  "GLBL_GRP" INTEGER CS_INT
) UNLOAD PRIORITY 5 AUTO MERGE WITH PARAMETERS ('PARTITION_SPEC' = 'HASH 15 DIM_GRP_ID; RANGE CAL_DATE 20110101-20110201,20110201-20110301,20110301-20110401,20110401-20110501,20110501-20110601,20110601-20110701,20110701-20110801,20110801-20110901,20110901-20111001,20111001-20111101,20111101-20111201,20111201-20120101,20120101-20120201,20120201-20120301,20120301-20120401,20120401-20120501,20120501-20120601,20120601-20120701,20120701-20120801,20120801-20120901,20120901-20121001,20121001-20121101,20121101-20121201,20121201-20130101,20130101-20130201,20130201-20130301,20130301-20130401,20130401-20130501,20130501-20130601,20130601-20130701,20130701-20130801,20130801-20130901,20130901-20131001,20131001-20131101,20131101-20131201,20131201-20140101,20140101-20140201,20140201-20140301,20140301-20140401,20140401-20140501,20140501-20140601,20140601-20140701,20140701-20140801,20140801-20140901,20140901-20141001,20141001-20141101,20141101-20141201,20141201-20150101,20150101-20150201,20150201-20150301,20150301-20150401,20150401-20150501,20150501-20150601,20150601-20150701,20150701-20150801,20150801-20150901,20150901-20151001,20151001-20151101,20151101-20151201,20151201-20160101,*')
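Incidentally, the 60 monthly RANGE boundaries in a PARTITION_SPEC string like the one above need not be typed by hand; a small Python sketch that generates them:

```python
def month_starts(first_year, last_year):
    """YYYYMMDD strings for the first day of every month in the span."""
    return [f"{y}{m:02d}01" for y in range(first_year, last_year + 1)
            for m in range(1, 13)]

# 2011-2015 monthly ranges plus the closing boundary and the rest bucket '*'.
starts = month_starts(2011, 2015) + ["20160101"]
ranges = [f"{a}-{b}" for a, b in zip(starts, starts[1:])]
spec = "RANGE CAL_DATE " + ",".join(ranges) + ",*"
print(len(ranges))  # 60
```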


Issue 1:
While reading results from the HASH-Range table, the execution time has increased threefold with HASH-Range partitioning.

Issue 2:

While executing queries on the HASH-Range partitioned table, we are getting the errors below.

 

Error Type 1: "SAP DBTech JDBC: [2048]: column store error: search table error: [2613] executor: communication problem"

Error Type 2: "SAP DBTech JDBC: [139]: current operation cancelled by request and transaction rolled back: search table error: [2625] execution plan"

Error Type 3: "SAP DBTech JDBC: [2048]: column store error: search table error: [23017] Index syncpoint mismatch"

 


Can anyone help us with HASH-Range multilevel partitioning?

 

Regards,

Satya

Outbound API Not working


Hi all

I am new to HANA and I am currently learning and working with the outbound API. I followed this tutorial on SCN:

 

Tutorial: Use the XSJS Outbound API - SAP HANA Developer Guide - SAP Library

 

but when I tried following it, I encountered this error:

 

Error: HttpClient.request: request failed. The following error occured: unable to establish connection to download.finance.yahoo.com:80 - internal error code: resolving DNS host name failed


I've tried resolving the issue on my own but without any luck; can somebody help me with this? I've followed the exact code, only changing my package, so is it a settings-related issue, or anything else? Any input will be helpful, and thank you in advance for your time.


Can HANA views execute successfully when the source schema is deleted?


Dear Experts,

 

Could you please suggest on below situation.

 

We created HANA views based on ECC tables from the ABC schema.

If the ABC schema is completely removed from the HANA system for some reason, can the analytic/attribute views still execute successfully?

What I am trying to understand: as the source schema for the views is no longer there, how can the system identify the source tables?

 

 

 

Thanks,

Khader

Joining Calculation Views and Input Parameters - ISSUE


Hi All,

 

I am trying to build row-level security without using analytic privileges. The following is what I am trying to do.

 

I created a calculation view which has the user ID, company codes, cost centers, etc. that the user has access to.

 

I have another calculation view, which is where the security needs to be implemented. So I joined both calculation views (there are matching fields between the views) and built the combined calculation view. I created input parameters, but when I pass the input parameters, the result does not reflect the filters given. In the data preview, the generated SQL looks correct, but the results show no sign of applying ALL the filters.

 

Did anyone try this approach, or join two calculation views and then apply input parameters on top of them?

 

Any suggestions, pointers?

 

Thanks,

Arthur.

Top N rows using views


Hi,

Can we show the TOP N rows using any of the views (attribute, analytic, calculation)?

I have created a calc. view with a group by on username. The output is something like this:

        USNAM     COUNT
        user123      10
        user121       3
        UserX12       6
        ...

 

I want only the top 5 users. The SQL query for this ('... ORDER BY "COUNT" DESC LIMIT 5') works fine, but I want the output of the view itself to be exactly the same. Is that possible?
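For reference, the same top-5 logic the SQL expresses (ORDER BY "COUNT" DESC LIMIT 5), mirrored in plain Python on the sample output above:

```python
# Sample counts from the view output above.
counts = {"user123": 10, "user121": 3, "UserX12": 6}

# ORDER BY count DESC LIMIT 5, expressed as a sort plus a slice.
top5 = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:5]
print(top5)  # [('user123', 10), ('UserX12', 6), ('user121', 3)]
```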

 

 

Thanks,

Mukund

Call a stored procedure from another stored procedure with a select statement


Hi Guys

I want to call a stored procedure from another stored procedure and insert the result into a temp table, like this:

insert into #temptable
select * from call spGetCustomers(currID.customerId);

I have a stored procedure that works on SQL Anywhere, but the same approach is not working on SAP HANA.

 

 

CREATE PROCEDURE spGetCutDetails (IN countryId VARCHAR(50),)
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN
declare customerId INTEGER;
create local temporary table #temptable
(
    customerId varchar(10) null,
    countryId varchar(50) null,
    firstName varchar(50) null,
    LastName varchar(50) null
);
BEGIN
declare CURSOR IDs FOR
    select ASA.customerId
    from customers
    FOR currID as IDs DO
        insert into #temptable
        select * from call spGetCustomers(currID.customerId);
    END FOR;
    select customerId,countryId,firstName,LastName from #temptable;
drop table tempTemprature;
end;
END;

 

 

Thank you

Bassam

HANA XSJS allocation overflow error


Hi

 

I am receiving the following error when parsing a JSON of a record set with more than 1K records in a HANA XSJS file.

Can anybody tell me whether there is any limitation on size when parsing a JSON array of records?

 

InternalError: allocation size overflow



I have a HANA trial account and created a HANA XS application.
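One general mitigation for oversized serializations, sketched here in plain Python rather than XSJS (the chunk size is a hypothetical choice), is to split the record set and serialize it in smaller pieces instead of one huge string:

```python
import json

def to_json_chunks(records, chunk_size=500):
    """Yield the record set as a sequence of smaller JSON arrays, so no
    single serialization has to hold the whole payload at once."""
    for i in range(0, len(records), chunk_size):
        yield json.dumps(records[i:i + chunk_size])

rows = [{"id": n} for n in range(1500)]
chunks = list(to_json_chunks(rows))
print(len(chunks))  # 3
```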
