SnowflakeRESTSQL marketplace module

0
The issue is the following:

  1. My main microflow reads the file using String from file to generate the first token. This empties the file stream.
  2. I pass the query to Snowflake, and Snowflake returns the first page of data.
  3. The Snowflake module enters a loop to fetch the second page.
  4. Inside that loop, the module tries to generate a new token: it looks at the Private Key object in memory and tries to read it again.
  5. Because my main microflow already "drank the glass empty" in step 1, the module reads 0 bytes. It passes an empty string to the decryptor, resulting in the Unable to decode key error.

So if the request returns fewer than 200 rows, the response is correct, since only one page of data is generated. However, if I request more than 200 rows, it crashes with: "DecryptPrivateKey: Error in Java Action DecryptPrivateKey: Cannot invoke "java.security.PrivateKey.getEncoded()" because "<local6>" is null"
asked
2 answers
0

In my view, you do not need to upload a second dummy .p8 file.


At this point, if the connector requires both:

  • an initial token generated in your own flow, and
  • later re-reading the same private key again during pagination,

then the real issue is the current connector design, not your implementation.


From what you described, you already tested the main safe workarounds:

reading the file again, duplicating it, and refilling it with StringToFile. If all of those still fail because the stream is consumed or the key format is altered, then there is no clean workaround left in pure microflow logic that leaves the module untouched.


So architecturally, I would say there are only two proper options:

  1. Generate and reuse the key/token entirely in custom logic so the same consumed FileDocument is no longer part of the flow, or
  2. Adjust the connector implementation so it does not depend on re-reading the same file stream during pagination.
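To illustrate option 2, here is a minimal sketch of what "caching the parsed key" could look like. All names here are mine, not from the Marketplace module: the idea is simply that the PEM text is decoded once and every later token generation reuses the parsed key object instead of re-reading the FileDocument stream.

```java
import java.security.KeyFactory;
import java.security.PrivateKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.util.Base64;

// Hypothetical helper (not part of the connector): parse the PKCS#8 key once
// and keep the result in memory for all subsequent pagination calls.
public final class KeyCache {
    private static volatile PrivateKey cached;

    // Strip the PEM armor and Base64-decode the DER body of a .p8 key.
    static PrivateKey parsePem(String pem) throws Exception {
        String body = pem
                .replace("-----BEGIN PRIVATE KEY-----", "")
                .replace("-----END PRIVATE KEY-----", "")
                .replaceAll("\\s", "");
        byte[] der = Base64.getDecoder().decode(body);
        return KeyFactory.getInstance("RSA")
                .generatePrivate(new PKCS8EncodedKeySpec(der));
    }

    // The PEM text is parsed exactly once; every later call returns the cached key.
    public static PrivateKey get(String pemText) throws Exception {
        if (cached == null) {
            cached = parsePem(pemText);
        }
        return cached;
    }
}
```

With a cache like this, it no longer matters how many pages Snowflake returns: the file is only consumed on the very first token generation.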


Using a second dummy file would only be a workaround, and a fragile one. I would not consider that a good long-term design.


So my view is: this is no longer a usage issue, but a connector limitation. If you want to keep the Marketplace module untouched, the safer path is to move the token/key handling into your own custom logic. Otherwise, the connector itself needs to be fixed so it caches the parsed key instead of re-consuming the FileDocument.

answered
0

The most likely root cause is that the private key file is being consumed too early.


In your main microflow you use String from file to generate the first token. This reads the entire file and effectively empties the file stream. When Snowflake returns the first page and the connector tries to fetch the next page, it needs to generate a new token again. At that point the module attempts to read the same Private Key object, but since the stream was already consumed, it reads 0 bytes. This results in errors like Unable to decode key or the DecryptPrivateKey null exception.
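The "empty stream" behaviour described above is standard Java `InputStream` semantics, which a small standalone sketch can demonstrate (this is generic Java, not connector code): the first full read consumes the stream, and any later read returns nothing.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class StreamDemo {
    // Reads everything remaining in the stream, much like "String from file" does.
    static int drain(InputStream in) throws Exception {
        int total = 0;
        byte[] buf = new byte[1024];
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        InputStream keyStream =
                new ByteArrayInputStream("-----BEGIN PRIVATE KEY-----".getBytes());
        System.out.println(drain(keyStream)); // first read: the full content
        System.out.println(drain(keyStream)); // second read: 0 bytes, stream already consumed
    }
}
```

That second zero-length read is exactly what the module sees when it tries to generate the token for page two, which is why an empty string ends up at the decryptor.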


This also explains why the issue only happens when the result set is larger than one page (>200 rows). If the query returns fewer rows, only one token is generated and everything works.


Recommended fix:

Do not read the private key file in the main microflow before calling the Snowflake module. Either let the connector read the key itself, or read it once and store the value in a reusable variable/object instead of trying to read the same file document again.
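If you do need the key contents in your own flow before the call, the "read it once and store the value" idea can be sketched like this (a generic Java pattern, with names I made up for illustration): drain the file stream a single time into a byte array, then give each consumer its own fresh stream over those bytes.

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;

// Hypothetical wrapper: consume the file stream exactly once, then hand out
// independent, re-readable views of the same bytes.
public final class ReusableKeyBytes {
    private final byte[] bytes;

    public ReusableKeyBytes(InputStream fileStream) throws Exception {
        this.bytes = fileStream.readAllBytes(); // the only read of the real file
    }

    // Each caller gets a fresh stream; reading it does not affect other callers.
    public InputStream openStream() {
        return new ByteArrayInputStream(bytes);
    }
}
```

With this pattern, generating the first token and generating the pagination tokens each open their own stream, so no step can "drink the glass empty" for the next one.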


In short: avoid reusing an already-consumed file stream.


If this resolves your issue, please mark this answer as accepted so it can help others facing the same problem.


answered