Channel: SCN : Discussion List - SAP HANA Developer Center
Viewing all 6412 articles

Custom data types to mask fields


Hi colleagues,

 

I am developing a web application that shows KPIs around dates, months, and other time periods extracted from generated time data, plus fields that need a custom display format.

 

Now we are facing some issues when we try to format those data types.

 

For example:

We have in our tables the field CALMONTH with the format YYYYMM.

And we want to format it as MM.YYYY for the web application, like BW does.

 

We successfully formatted regular dates because they use the DATE data type, but CALMONTH is NVARCHAR(6).

 

Is there a way to create custom data types to format certain fields? It would be very useful.

Or is there any workaround?
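Not a custom data type, but one common workaround (assuming CALMONTH always holds six digits in YYYYMM form) is to rebuild the string with SUBSTRING, e.g. in a view or calculated column. The table name below is a placeholder:

```sql
-- Sketch: reformat a YYYYMM string as MM.YYYY ("MYTABLE" is a placeholder name)
SELECT SUBSTRING("CALMONTH", 5, 2) || '.' || SUBSTRING("CALMONTH", 1, 4) AS "CALMONTH_FMT"
FROM "MYTABLE";
-- '201501' would come back as '01.2015'
```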

 

Thanks.

 

Iván Cernigoj


Group by clause


Hi,

 

First of all pardon me for this silly query.

I am getting some strange output.

 

Input:

 

COL1    | COl2   | MATNR
TECH111 | A11111 | MATERAL1
TECH111 | A11112 | MATERAL2
TECH111 | A11112 | MATERAL3
TECH111 | A11113 | MATERAL4
TECH111 | A11113 | MATERAL5
TECH112 | A11114 | MATERAL6
TECH112 | A11115 | MATERAL7
TECH112 | A11116 | MATERAL8

 

Query:

 

select "COL1",max("COl2") from "BEST"."matnr_count"

group by "COL1";

 

 

Expected Output:

 

COL1               COL2    

TECH111         A11113

TECH112         A11116

 

 

Output Coming:

 

 

COL1               COL2

TECH111          A11113

TECH111          A11112

TECH112          A11116
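For what it's worth, output like this usually means the COL1 values in the two "TECH111" groups are not byte-identical, e.g. one row carries a leading or trailing space. A sketch to confirm and work around that, assuming whitespace really is the culprit:

```sql
-- Compare the raw length of the grouping column to spot hidden whitespace
SELECT "COL1", LENGTH("COL1") FROM "BEST"."matnr_count";

-- Group on the trimmed value instead
SELECT TRIM("COL1") AS "COL1", MAX("COl2")
FROM "BEST"."matnr_count"
GROUP BY TRIM("COL1");
```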



BR

Sumeet

Deletion of tables/fields/views/synonyms


Hi Experts,

 

I am planning to clean up our HANA DB. I have identified some tables, fields within tables, synonyms, and analytic views for deletion.

 

Table deletion - I have written a stored procedure with DROP TABLE statements for the identified tables and collected it in a CR for transport (using HALM for transport).

 

Field deletion - A stored procedure with EXECUTE IMMEDIATE ALTER statements for the identified fields, also collected in a CR for transport.

 

Regarding synonyms, I can see public synonyms for the tables identified for deletion. If the base tables are deleted, there is no use keeping the public synonyms. Please let me know whether deleting the public synonyms would have any impact.

 

Analytic view deletion - Only one approach comes to mind: using the SYSTEM or an admin user, right-click the view and choose Delete. Since that is a manual deletion, it cannot be captured in a CR for transport. Is there another way I can delete the view, capture it in a CR, and transport it across all landscapes?

 

Thanks in advance.

 

Regards,

SR.K

Can only create read-only procedures in SAP HANA on AWS


When I was done building my procedure, I made sure I could perform modifications by deleting the 'READS SQL DATA' clause; I saved, committed, and was able to activate and execute. However, after restarting my HANA Studio and AWS instance (HANA Developer Edition, SPS6), I get an error whenever I save or activate the procedure.

 

Assumption.

The SQL statements themselves (just the DROP TABLE, INSERT, etc.) run fine when I execute them in the SQL console. This led me to believe that the executing user is different; I assume the procedure is created and activated under user SYSTEM, whilst the console statements run as _SYS_BIC. That still doesn't explain why this worked perfectly the day before, and I wouldn't know how to create this procedure under a different user. I cannot grant more authorizations to another user, as I am already logged in as the SYSTEM user...

 

Error message.

"You can only create a read-only procedure on this server." (appears next to the line where I declare the AS statement)

 

The detailed message when I try to activate:

ERROR      paddle/00/procedures/TblCreate.procedure

           Repository: Internal error during statement execution, please see the database error traces for additional details;error executing statement; feature not supported: DDL is not supported in the READ ONLY procedure/function at ptime/query/checker/proc_check_stmt.cc:577

 

 

ERROR      paddle/00/procedures/TblCreate.procedure

           Repository: Encountered an error in repository runtime extension;object was not activated due to previous activation errors

 

 

Code.

CREATE PROCEDURE TblCreate ( )
    LANGUAGE SQLSCRIPT
    SQL SECURITY INVOKER
    AS
BEGIN

CREATE LOCAL TEMPORARY TABLE #tmp_sobl (v_key BIGINT, v_dte DAYDATE, v_tme TIME, v_ids TINYINT, v_akt TINYINT, v_stt TINYINT, v_lsr TINYINT);
INSERT INTO #tmp_sobl VALUES('2013080706000510001','2013-08-07','06:00:05',1,1,11,7);
SELECT * FROM #tmp_sobl;

END;

 

Does anyone know how I can create a procedure that will allow me to INSERT, CREATE, etc.?
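If the server enforces read-only repository procedures, one sketch that sidesteps the DDL restriction entirely is to use a SQLScript table variable instead of a local temporary table; this version can even keep READS SQL DATA. The values are taken from the original procedure; whether this fits depends on whether you really need a persistent temp table:

```sql
CREATE PROCEDURE TblCreate ( )
    LANGUAGE SQLSCRIPT
    SQL SECURITY INVOKER
    READS SQL DATA
    AS
BEGIN
    -- Table variable instead of CREATE LOCAL TEMPORARY TABLE: no DDL involved
    tmp_sobl = SELECT CAST(2013080706000510001 AS BIGINT) AS v_key,
                      TO_DATE('2013-08-07')               AS v_dte,
                      TO_TIME('06:00:05')                 AS v_tme,
                      CAST(1 AS TINYINT)  AS v_ids,
                      CAST(1 AS TINYINT)  AS v_akt,
                      CAST(11 AS TINYINT) AS v_stt,
                      CAST(7 AS TINYINT)  AS v_lsr
               FROM dummy;
    SELECT * FROM :tmp_sobl;
END;
```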

Need help to figure out simple date filter in calculation view


Guys and Gals,

 

I have a simple issue that I can't wrap my little brain around.

 

I need a simple date filter where I get data for only yesterday. For example ERDAT = yesterday.

 

This has to be done in the expression editor of the first projection node; I want to push this filter down onto the attribute view so performance stays good. Putting a filter in the first projection node where you consume the attribute view is equivalent to putting that filter directly in the attribute view (please correct me if I'm wrong, but I'm 99% sure and have tested it).

 

So my basic filter in the expression editor will be:

 

ERDAT = now() -1

 

Sounds simple enough, doesn't it? Can someone please tell me the syntax that gives me only yesterday's date?

 

Note: If the answer is create a calculated column, etc., then I am not enforcing the filter on the attribute view right away.
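For reference, in plain SQL the intended filter would look like the sketch below; in the column-engine expression editor the equivalent is usually built from adddays()/format(). This assumes ERDAT is stored as an 8-character YYYYMMDD string (adjust if it is a real DATE), and the table name is a placeholder:

```sql
-- Plain SQL equivalent of the intended filter ("MYTABLE" is a placeholder)
SELECT * FROM "MYTABLE"
WHERE "ERDAT" = TO_VARCHAR(ADD_DAYS(CURRENT_DATE, -1), 'YYYYMMDD');

-- Column-engine expression editor (approximate syntax):
--   "ERDAT" = format(adddays(now(), -1), 'YYYYMMDD')
```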

 

Thanks a lot, experts.

Any documentation on base64_encode/base64_decode?


Hi benevolent HANA experts,

 

I am looking for examples for the SQL functions base64_encode and base64_decode.  Specifically, I am wondering if they can be used to insert a blob field from a SQL script, where the blob value is a base64-encoded string. Can someone point me in the right direction?
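I haven't seen much official documentation either, but assuming the functions behave as their names suggest (base64 string in, binary out, and vice versa), a quick experiment in the SQL console might look like this. Table and column names are placeholders:

```sql
-- Round-trip check: encode then decode a small value
SELECT BASE64_ENCODE(TO_BINARY('Hello')) FROM dummy;  -- should yield 'SGVsbG8='
SELECT BASE64_DECODE('SGVsbG8=') FROM dummy;

-- Hypothetical insert of a BLOB column from a base64-encoded string
-- ("MYTABLE"/"DOC" are placeholder names)
INSERT INTO "MYTABLE" ("ID", "DOC")
VALUES (1, BASE64_DECODE('SGVsbG8='));
```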

 

thanks!

POST oData Problem


I'm trying to use Postman to create a record in a table using an OData service. I've created a small test table to experiment with.

 

The table was created using CDS, and I am able to import data into it:

Region Capture2.png

 

The OData service:

service {
    "GBI_002.gbi.data::GBI_002.MASTERDATA.TEST" as "Test";
}

 

Postman:

Region Capture1.png

The error I get is a 403 error and the trace is:

 

[16322]{16322}[-1/-1] 2015-05-31 06:09:47.222759 i TraceContext     TraceContext.cpp(00823) : UserName=, ApplicationName=sap.hana.xs.selfService.user, ApplicationSource=/sap/hana/xs/selfService/user/db/iniParams.xsodata/parameters('login_screen_background_image')

[16322]{16322}[-1/-1] 2015-05-31 06:09:47.222744 e XSRequestHandler RequestHandler.cpp(00753) : exception  1: no.2  (XSEngine/Util/ConnectionPool.cpp:135)

    could not create db connection from sql connection configuration sap.hana.xs.selfService.user::selfService, username not set

exception throw location:

1: 0x00007fda65cc80b8 in odata::DbConnection::init()+0x2e4 at DB.cpp:43 (libxsengine.so)

2: 0x00007fda65ccacd7 in odata::BaseSerializer::buildResult()+0x13 at BaseSerializer.cpp:18 (libxsengine.so)

3: 0x00007fda65c6f5a3 in odata::JsonSerializer::serialize(odata::StringOutStream&)+0x60 at JsonSerializer.cpp:52 (libxsengine.so)

4: 0x00007fda65c6bea0 in odata::SerializationFacade::serialize(odata::RequestContext&, xsengine::WebResponse&)+0x160 at SerializationFacade.cpp:150 (libxsengine.so)

5: 0x00007fda65c0c519 in xsengine::ODataSuffixHandler::execute(xsengine::WebRequestInternal const&, xsengine::WebResponseInternal&, xsengine::ResourceHandler::Context const&, ltt::smartptr_handle<xsengine::RepositoryCache::CachedBinary> const&, odata::tracking::ExecutionTracker&, xsengine::ODataSuffixHandler::DBCleanupCallback*&)+0x645 at ODataSuffixHandler.cpp:218 (libxsengine.so)

6: 0x00007fda65c0e509 in xsengine::ODataSuffixHandler::handleRequest(xsengine::WebRequestInternal const&, xsengine::WebResponseInternal&, xsengine::ResourceHandler::Context const&)+0x2d5 at ODataSuffixHandler.cpp:150 (libxsengine.so)

7: 0x00007fda65bcc220 in xsengine::RequestHandler::handleRequest(ltt::smartptr_handle<xsengine::HttpRequestContext>&, int)+0x39e0 at RequestHandler.cpp:652 (libxsengine.so)

8: 0x00007fda4d59a95e in Execution::JobObjectImpl::run(Execution::JobWorker*)+0x6aa at JobExecutorImpl.cpp:822 (libhdbbasis.so)

9: 0x00007fda4d5a8010 in Execution::JobWorker::runJob(ltt::smartptr_handle<Execution::JobObjectForHandle>&)+0x2f0 at JobExecutorImpl.hpp:459 (libhdbbasis.so)

10: 0x00007fda4d5a9044 in Execution::JobWorker::run(void*&)+0x1a0 at JobExecutorThreads.cpp:376 (libhdbbasis.so)

11: 0x00007fda4d5d7439 in Execution::Thread::staticMainImp(void**)+0x875 at Thread.cpp:488 (libhdbbasis.so)

12: 0x00007fda4d5d7ffd in Execution::Thread::staticMain(void*)+0x39 at ThreadMain.cpp:26 (libhdbbasis.so)

 

I'm able to perform a GET, so I don't think it's an authentication problem, although the part of the trace that says "username not set" is puzzling. I've been able to do this before.

 

Any suggestions?

 

Thanks,

 

Ross

How does HANA resolve input parameters in calculation views executed in the SQL Engine?


Hello,

 

In our project, we use HANA stored procedures called from ABAP via a DB proxy (this dates from a time when AMDP was not yet available).

However, we have since refactored the procedures so that all (mass) data-centric operations (reading, converting, and further processing data) are done inside graphical calculation views executed in the SQL engine.

 

Now, with the same version of the HANA server, we observe a difference between executing the procedure from ABAP and executing it on a standalone HANA server.

In detail, executing the same code on the standalone HANA server succeeds, whereas executing it from ABAP runs into an error:

[6608]{236093}[39/391650361] 2015-07-02 11:13:08.217560 e cePlanExec       cePlanExecutor.cpp(07222) : Error during Plan execution of model _SYS_BIC:_SYS_SS_CE_232022928_139889446946000_2_INS (-1), reason: "_SYS_BIC"."sap.is.ddf.udf/SP_SWP_MODEL": line 51 col 5 (at pos 1692): [2048] (range 3): column store error:  [2048] "_SYS_BIC"."sap.is.ddf.udf/SP_SW_MODEL": line 36 col 5 (at pos 1009): [2048] (range 3): column store error:  [2048] "_SYS_BIC"."sap.is.ddf.udf/SP_SW_MODEL_POS": line 43 col 5 (at pos 1961): [2048] (range 3): column store error: search table error:  [6968] Evaluator: syntax error in expression string;expected TK_RPAREN,parsing 'longdate(2013-07-31 [here]23:59:59.0000000)'Please check lines: 59

 

It does not look like it, but the error actually occurs in a graphical calculation view executed from a procedure.

The root cause is that only the standalone server accepts a calculated column of type TIMESTAMP containing the input parameter placeholder of type timestamp: $$P_TimestampFrom$$. The fix was to change $$P_TimestampFrom$$ to '$$P_TimestampFrom$$'.
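To illustrate the failure mode: placeholder substitution is textual, so the quotes end up around the substituted value, which is what the expression parser needs. The expressions below are illustrative reconstructions of the failing and working calculated-column variants, not copies from the actual view:

```sql
-- Fails from ABAP: placeholder replaced unquoted, yielding an unparsable literal
--   longdate($$P_TimestampFrom$$)    -->  longdate(2013-07-31 23:59:59.0000000)

-- Works: quoted placeholder yields a string literal the engine can convert
--   longdate('$$P_TimestampFrom$$')  -->  longdate('2013-07-31 23:59:59.0000000')
```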

 

 

Now the questions:

1) Why does it sometimes work without single quotes and sometimes not?

    In Detail: What is different when we call procedures via DB proxy from ABAP versus standalone HANA server?

 

 

2) How do input parameters - where their placeholders are used - get resolved in graphical calculation views with enforced execution in the SQL engine?

-> We were concerned about too many conversions, which would badly affect performance with mass data.

We would like to avoid the timestamp input parameter being converted to a string because of the quotes ('$$P_TimestampFrom$$') and then converted back to a timestamp.

We previously assumed this would not be the case without the single quotes. However, that assumption seems wrong anyway, as there appear to be implicit conversions when the graphical calculation view is translated into SQL.

Are input parameters always converted into strings when their placeholders ($$P_TimestampFrom$$) are used, independent of their defined type?

Is there any detailed explanation of how these input parameters are resolved in filters or calculated columns (SQL execution!)?

 

 

Thanks,

Daniel


Modification of a CDS artifact failed because corresponding data cannot be migrated


Hi,

 

I created an initial hdbdd file and later made some changes to the table type definitions. But when I try to save and activate it, I get this error:

 

[15:27:21] Error while activating /aip/data/Assets.hdbdd:
Modification of a CDS artifact failed because corresponding data cannot be migrated. Reason: column store error.

 

I have also tried creating a new hdbdd file, changing only the names of the table, table type, etc., but I still get the same error even with the new names. Can someone help me resolve this?
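One workaround, assuming the data in the generated runtime table can be discarded (a dev-system assumption, not something from SAP documentation): empty or drop the generated catalog table so the migration has nothing to move, then re-activate the .hdbdd file. The schema and entity names below are placeholders:

```sql
-- Remove the rows that block the migration, then re-activate the hdbdd artifact
TRUNCATE TABLE "MYSCHEMA"."aip.data::Assets.MyEntity";

-- Or, more drastically, drop the runtime object entirely before re-activating
-- DROP TABLE "MYSCHEMA"."aip.data::Assets.MyEntity";
```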

 

Thanks!

HANA XS AppServer native development: big step back?


Dear All,

Recently I attended a course covering the basics of UI5 web interface programming together with business logic implemented in JavaScript on the HANA XS application server. The UI5 web interface looks cool in my opinion. But the native HANA development languages currently offered, JavaScript + SQLScript, are a poor alternative compared to the current version of ABAP. Nowadays ABAP is a fully object-oriented language with all the nice features like inheritance, encapsulation, and polymorphism, comparable to Java or C#. It allows us to implement design patterns like Singleton, Factory, MVC, and many others. JavaScript seems a very primitive language compared to today's ABAP.

Am I missing anything? I look forward to your opinions on JavaScript as a language for implementing very complex business logic and large applications.

 

Best regards

Bogdan

Introspection of Stored Procedures used in Analytic Privileges


I am trying to define a security model with a single Access Control List table and a single stored procedure we could use across all Analytic Privileges. The only way I have found to support a single multi-purpose ACL table is to create a new stored procedure for every instance where it is needed in an AP. In the stored procedure I can then query the sys.structured_privileges table to get introspection on the AP that is calling it. Unfortunately, this 'solution' would have our developers creating a new, identical stored procedure object every time they needed to apply row-level trimming.

 

The first way around this that I can see would be the ability to set stored procedure parameters in the AP, which I don't believe is currently supported.

 

My second thought was to use introspection, but I can't find any mention of whether that is supported.

 

Is there any way for a stored procedure to do introspection on the AP that is calling it?

Or are there any other recommended approaches?

HCP console client: parameter to confirm EU location needed


To automate console client commands, such as deploying with neo, one has to confirm that access is made from inside the EU.

There are several parameters for passing values directly on the command line, but I couldn't find a confirmation parameter.

 

Is there anything like this, and if not, are there any plans to implement it?

 

regards

HANA backup log files


I had a few questions about the HANA backup log files:

 

We have backup log files that are fairly old, and I need to understand the following:

 

1) Does the HANA administrator need to clear the log and data files regularly? What checks and measures need to be performed to make sure?

2) If so, how is it done, and how do I identify the older backups that would be safe to delete?

3) Is there a script that can be scheduled in crontab to take care of this?

4) How do other companies handle this scenario? Any examples?
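On the scripting point, HANA provides SQL for trimming the backup catalog (and optionally deleting the associated files), which could be wrapped in a cron job via hdbsql. A hedged sketch, where the backup ID must be one you have verified you no longer need:

```sql
-- Inspect the backup catalog to find old backups and their IDs
SELECT BACKUP_ID, SYS_START_TIME, ENTRY_TYPE_NAME
FROM "M_BACKUP_CATALOG"
ORDER BY SYS_START_TIME;

-- Delete catalog entries AND backup files older than a given (placeholder) backup ID
BACKUP CATALOG DELETE ALL BEFORE BACKUP_ID 1234567890 WITH FILE;
```

Without WITH FILE, only the catalog entries are removed and the files on disk stay untouched.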

 

 

Thank you

Execution of a script/procedure during a DU activation


Hi all,

 

Since the quick search I made didn't help with this question, I decided to ask here (sorry if it's a frequent question).

 

Is it possible to configure a script or procedure to be executed while activating a Delivery Unit during the transport process in different environments?

 

This question comes from some changes and configurations I execute manually before and after activating certain artifacts, creating a kind of dependency between them, which makes the transport process more complex (and impossible to automate on HANA) than it should be.

These changes/configurations are necessary because of several limitations in defining database objects through artifacts that we still face today.

 

Thanks in advance.

Simple TRIGGER... not working !?


Hello,

 

I'm new to SQL, so I'm trying some basic stuff.

 

I have created a sequence, and I want to increment the sequence in a trigger. Here is my code:

 

create trigger <my_schema>."myTrigger" before insert
on <my_schema>."COMMENT"
referencing new row newrow
for each row
begin
  select <my_schema>."my_sequence".NEXTVAL into :NEWROW.ID from dummy;
end

Can someone explain why it doesn't work?

 

Here is the error :

 

Could not execute 'create trigger <my_schema>."myTrigger" before insert on ...'

SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near ":NEWROW": line 5 col 85 (at pos 255)
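For comparison, here is a version of the trigger that should compile, based on my understanding of the syntax: the colon prefix is for *reading* a variable, so the INTO target is written without it. The schema name is a placeholder, as in the question:

```sql
CREATE TRIGGER "MY_SCHEMA"."myTrigger"
BEFORE INSERT ON "MY_SCHEMA"."COMMENT"
REFERENCING NEW ROW newrow
FOR EACH ROW
BEGIN
    -- no colon before newrow when it is the assignment target of INTO
    SELECT "MY_SCHEMA"."my_sequence".NEXTVAL INTO newrow.ID FROM dummy;
END;
```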

Kind regards,


Multiple rows using JSON input from Postman


Hi,

 

I'm new to SAP HANA OData services. I am trying to insert multiple rows using the POST method from Postman in the formats below:

 

1. Json

2. XML

 

When I insert multiple rows using a JSON object, I get the error below:

 

{
    "Odata": [
        {
            "MY_COL1": "Sadsad124511",
            "MY_COL2": 124511
        },
        {
            "MY_COL1": "Sadsad124512",
            "MY_COL2": 124512
        }
    ]
}

 

Error:  <message xml:lang="en-US">Resource not found for the segment &#x0027;Odata$batch&#x0027; at position 0.</message>

 

I can insert single entries using JSON:

{
    "MY_COL1": "Sadsad124511",
    "MY_COL2": 124511
}

 

 

I have exposed the table ZTRANS, with two columns MY_COL1 (String) and MY_COL2 (Int), as an OData service; my service definition is below:

 

 

service {
    "P0_MAPPING"."ZTRANS" as "Ztranslation";
}

 

Unable to provide user with sap.hana.xs.admin.roles::SQLCC* roles

$
0
0

Hello Experts,

 

I need some advice: I am unable to grant a user the roles mentioned below.

 

sap.hana.xs.admin.roles::SQLCCAdministrator

sap.hana.xs.admin.roles::SQLCCViewer


The user only has the Modeling role granted to him.

I am getting this error -


Could not modify user '*****'. Several issues have occurred

SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near ".": line 1 col 11 (at pos 11)

SAP DBTech JDBC: [257]: sql syntax error: incorrect syntax near ".": line 1 col 11 (at pos 11)
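Repository roles (the ones with :: in the name) generally cannot be granted with a plain GRANT statement or via the user editor; the usual route is the repository grant procedure, which your own user needs EXECUTE rights on. A sketch, with the target user name as a placeholder:

```sql
-- Grant activated repository roles via _SYS_REPO ('TARGET_USER' is a placeholder)
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"(
    'sap.hana.xs.admin.roles::SQLCCAdministrator', 'TARGET_USER');
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"(
    'sap.hana.xs.admin.roles::SQLCCViewer', 'TARGET_USER');
```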

 

Requesting assistance with this.

 

Thanks

Mayank

List Partitioning of table in HANA

$
0
0

Hi,

We are building multi-customer enterprise software on a single platform. Queries may request data for two or three customers at a time, but in most cases only one. I was trying to partition the tables by customer_id (or customer_fk), which is present in all tables but is not necessarily a primary key.

 

Could anyone please let me know whether HANA offers anything like the list partitioning provided by Oracle?
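HANA does not offer LIST partitioning under that name, but single-value RANGE partitions come close. A sketch with a hypothetical table; note that if the table has a primary key, the partitioning column must be part of it:

```sql
-- Hypothetical table; single-value range partitions mimic Oracle LIST partitioning
CREATE COLUMN TABLE "ORDERS" (
    "ORDER_ID"    BIGINT,
    "CUSTOMER_ID" INTEGER,
    PRIMARY KEY ("ORDER_ID", "CUSTOMER_ID")
)
PARTITION BY RANGE ("CUSTOMER_ID")
(
    PARTITION VALUE = 100,   -- one partition per customer id value
    PARTITION VALUE = 200,
    PARTITION OTHERS         -- catch-all for remaining customers
);
```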

Thanks,
Athrey

Add sales org and sales org description together in input variable


Hi,

 

How can I show the sales org and its description together in an input variable? That is, in an analytic view, when I view raw data and select sales org, its description should also be shown.
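One way to show both, assuming the view exposes columns named something like SALES_ORG and SALES_ORG_DESC (hypothetical names): add a calculated column that concatenates them and base the variable or label on that column. In plain SQL the expression would be:

```sql
-- Calculated-column style expression (hypothetical view and column names)
SELECT "SALES_ORG" || ' - ' || "SALES_ORG_DESC" AS "SALES_ORG_TEXT"
FROM "MY_ANALYTIC_VIEW";
```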

