FTP vs Webservices

We are building a web service interface which returns a file. In theory the size of the file can exceed 100 MB. We have already implemented an asynchronous interface: a client requests a file, the server returns an estimated ready time and starts building the file in a separate action, and the client then polls another web service function until the server is ready and returns the zipped file. So far, so good. But now we are discussing whether this solution will work for larger files. Some developers think that large files should go over FTP. But is that still true today? What are the (dis)advantages of using web services? Of course we prefer the web service interface, but I need some (technical) arguments for our client.
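For illustration, a minimal Python sketch of that polling flow, assuming hypothetical endpoint paths (/request, /status, /download) and JSON response fields; the real interface will of course differ:

    import time
    import requests

    BASE = "https://example.com/api"  # hypothetical base URL

    # Step 1: ask the server to start building the file.
    job = requests.post(BASE + "/request", json={"report": "large-export"}).json()
    job_id = job["id"]                           # hypothetical response fields
    time.sleep(job["estimated_ready_seconds"])

    # Step 2: poll until the server reports the file is ready.
    while not requests.get(BASE + "/status", params={"id": job_id}).json()["ready"]:
        time.sleep(10)

    # Step 3: download the finished zip, streaming it to disk.
    with requests.get(BASE + "/download", params={"id": job_id}, stream=True) as resp:
        resp.raise_for_status()
        with open("export.zip", "wb") as f:
            for chunk in resp.iter_content(chunk_size=1 << 20):  # 1 MB chunks
                f.write(chunk)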
asked
1 answer

'Traditional' FTP is an obsolete protocol, especially when you're dealing with sensitive data. It transmits all commands and data without encryption, and it is difficult to support properly in firewall configurations (it needs 'helpers' at the firewall which sniff the traffic and dynamically open, close, and redirect extra TCP ports).

Another option is sFTP over SSH, which uses the same underlying transport channel as SSH console logins do, so it also supports public key authentication instead of passwords. Typically, sFTP is used when the other system that needs to be integrated really has no other way to communicate than via FTP-like protocols and cannot be adjusted to do anything else.
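As a rough sketch, an sFTP download with public key authentication could look like this in Python with the paramiko library (host, user, and paths are placeholders):

    import paramiko

    # Connect over SSH, authenticating with a private key instead of a password.
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect("files.example.com", username="integration",
                   key_filename="/home/integration/.ssh/id_rsa")

    # Open an SFTP session on the same SSH transport and fetch the file.
    sftp = client.open_sftp()
    sftp.get("/outgoing/export.zip", "export.zip")
    sftp.close()
    client.close()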

Using FTP or sFTP also means more setup work for you. You need to provide access to a system besides the normal HTTPS route, maintain a separate user/access administration, create file/directory structures, manage queueing and cleanup of the files you copy into those locations, poll for new files, and so on.
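To give an impression of that housekeeping, here is a sketch of the kind of cleanup job an FTP drop directory forces on you (the directory and retention period are made up):

    import os
    import time

    DROP_DIR = "/srv/ftp/outgoing"   # hypothetical drop directory
    MAX_AGE = 7 * 24 * 3600          # keep delivered files for one week

    # Periodic job: delete files older than the retention period.
    now = time.time()
    for name in os.listdir(DROP_DIR):
        path = os.path.join(DROP_DIR, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE:
            os.remove(path)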

Integrating natively over HTTP, directly using the functionality the platform provides, lets you use the standard access rights, user roles, and file document objects that are already available.

About differences in speed and reliability: we're not in the '90s anymore, with slow dial-up modems, fragile connections, and no HTTP/1.1. Nowadays, even large file mirrors have switched from FTP to HTTP. Downloading a file over FTP requires setting up a connection, logging in, issuing commands such as cd to change directory, pointing at the file you want, opening a second data connection, doing the handshake again, and so on. An HTTP request is much simpler from a technical perspective. HTTP itself also imposes no limit on file size; whether the web server and client program you're using support that may be another question, but as long as no one kills your connection on purpose, there is no protocol-level limit.
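You can see the difference in code. Fetching the same file with Python's standard ftplib versus a single HTTP request (server names are placeholders):

    from ftplib import FTP
    import requests

    # FTP: connect, log in, change directory, open a data connection, retrieve.
    ftp = FTP("ftp.example.com")
    ftp.login("user", "secret")
    ftp.cwd("exports")
    with open("export.zip", "wb") as f:
        ftp.retrbinary("RETR export.zip", f.write)
    ftp.quit()

    # HTTP: one request does the same job, streamed straight to disk.
    resp = requests.get("https://example.com/exports/export.zip", stream=True)
    resp.raise_for_status()
    with open("export.zip", "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)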

One question is left: you say you're using web services. Using XML to transfer large binary files is a no-go, because they will be re-encoded as base64, which adds a third of overhead (a 100 MB file becomes roughly 133 MB on the wire) and usually requires building the whole message in memory (or can Mendix already do this in another way? Correct me if I'm wrong here, please). For the system you're describing, you should use the web service only as a signalling channel: return a file id when the file is ready, and fetch the file over the normal /file interface within the same login session. There's no processing or memory overhead that way; the file is streamed directly from the stored 'uploaded file' contents to the client. I don't know exactly how to do that, but perhaps others can give an example. It also sounds like an idea for an App Store module.
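A hedged sketch of that split, assuming a hypothetical login page, a status web service that returns a file id, and a /file download URL that accepts the same session cookie:

    import shutil
    import requests

    session = requests.Session()

    # Log in once; the session cookie is reused for both calls below.
    session.post("https://example.com/login",
                 data={"user": "u", "password": "p"})

    # Signalling channel: a small web service call that only returns the file id.
    file_id = session.get("https://example.com/ws/filestatus").json()["file_id"]

    # Data channel: fetch the binary directly, streamed, without base64 re-encoding.
    with session.get("https://example.com/file/" + file_id, stream=True) as resp:
        resp.raise_for_status()
        with open("export.zip", "wb") as f:
            shutil.copyfileobj(resp.raw, f)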

answered