Base64Decode not producing proper UTF-8 result

0
I've been having some issues converting Base64-encoded data to both files and usable data in Mendix. Specifically, the Base64Decode actions available in the CommunityCommons module do not seem to produce properly UTF-8 encoded data consistently. I've managed to work around this when decoding to files, but now I would like to commit the Base64-decoded string to a Postgres database and get the following error:

ERROR: invalid byte sequence for encoding "UTF8": 0x00

This happens both locally on Postgres and in the Mendix cloud, but it does not happen with the default built-in database. Interestingly, it is possible to see the decoded string in the Modeler debugger variables, but when you double-click it to open in a modal, it only shows '['.

Has anyone encountered this issue before, and what could be a way to solve it? Thanks!

UPDATE: We've solved this issue by adding a replaceAll function in the Java code which filters the binary data for null characters and replaces them with ''. Not the prettiest thing that was ever built, but the only adequate solution we could find. The problem seems to be linked to Postgres databases not accepting null characters - the next question would be why there were null characters in the encoded string in the first place, but I digress.
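For reference, a minimal sketch of the workaround described in the update, using plain `java.util.Base64` rather than the actual CommunityCommons code (the class and method names here are hypothetical): decode the Base64 input as UTF-8 and strip any embedded NUL characters before committing the string, since PostgreSQL rejects 0x00 in text columns.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class NullSafeDecode {
    // Decode Base64 input as UTF-8, then remove any embedded NUL
    // characters (U+0000), which PostgreSQL text columns reject.
    public static String decodeWithoutNulls(String base64) {
        byte[] raw = Base64.getDecoder().decode(base64);
        String decoded = new String(raw, StandardCharsets.UTF_8);
        return decoded.replace("\u0000", "");
    }

    public static void main(String[] args) {
        // "QQBC" is the Base64 encoding of the bytes 0x41 0x00 0x42 ("A\0B")
        System.out.println(decodeWithoutNulls("QQBC")); // prints AB
    }
}
```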
asked
2 answers
0

Both the encode and decode functions do a getBytes() call (in the CommunityCommons.StringUtils.java file). You can optionally specify a charset for getBytes(), e.g. getBytes("UTF-8"). Without it, getBytes() falls back to the platform default charset, which can differ between environments. I think it'd be best to rewrite the code with an optional charset parameter.
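To illustrate the suggestion above, here is a sketch (not the actual CommunityCommons code) of charset-explicit encoding and decoding using `java.util.Base64`, so the result no longer depends on the platform default charset:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class CharsetSafeBase64 {
    // Encode using an explicit UTF-8 charset instead of the
    // platform default that a bare getBytes() would use.
    public static String encode(String s) {
        return Base64.getEncoder().encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    // Decode back to a string, again with an explicit charset.
    public static String decode(String base64) {
        return new String(Base64.getDecoder().decode(base64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String roundTrip = decode(encode("héllo"));
        System.out.println(roundTrip); // prints héllo on any platform
    }
}
```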

answered
0

My guess is that you are using UTF-8 with a byte order mark (BOM). This will include some extra bytes at the start of your file, which could cause the errors you are seeing.
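If a BOM is indeed the cause, it can be checked for and removed after decoding. A minimal sketch (the method name is hypothetical): the UTF-8 BOM bytes 0xEF 0xBB 0xBF decode to the single character U+FEFF at the start of the string.

```java
import java.nio.charset.StandardCharsets;

public class BomStrip {
    // A UTF-8 BOM decodes to a leading U+FEFF character; strip it if present.
    public static String stripBom(String s) {
        return s.startsWith("\uFEFF") ? s.substring(1) : s;
    }

    public static void main(String[] args) {
        // Bytes of "hi" prefixed with the UTF-8 BOM (0xEF 0xBB 0xBF)
        byte[] withBom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF, 'h', 'i'};
        String decoded = new String(withBom, StandardCharsets.UTF_8);
        System.out.println(stripBom(decoded)); // prints hi
    }
}
```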

answered