Interlok as a reverse proxy for other Interlok instances; not because we can, but because we might have to
This is one of those times where having a generic framework is both a blessing and a bit of a curse. One of our customers has a very strict firewall policy: only certain ports are open (even internally), and the Jetty management port (8080) had already been opened. We had multiple Interlok instances deployed on the same machine (mainly to partition by logical work units), and some of those instances were going to expose API endpoints to various other systems. We could have asked for more ports to be opened, but there is an overhead and maintenance cost to that.
In this particular case, the workflow in the additional instance was performing a long-running task (5-10 minutes), which meant that using the standard Jetty proxy servlet wasn't an option. If we didn't respond within 60-120 seconds, then the internal gateway (not directly under the customer's control) would respond with a 504 Gateway Timeout. If the HTTP client sends 102-Processing as an Expect header, then we send back a 102 response every 20 seconds or so (as suggested by RFC 2518). However, those interim responses caused the Jetty proxy servlet to terminate the socket connection after the first 102, so the client never got the final 200 OK response.
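For context, the 102 keep-alive behaviour only kicks in when the caller asks for it. A minimal sketch of such a client (using Apache HttpClient 4.x; the endpoint URL is purely illustrative) might look like this:

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class LongRunningLookupClient {
  public static void main(String[] args) throws Exception {
    try (CloseableHttpClient client = HttpClients.createDefault()) {
      // Hypothetical endpoint; the real path sits behind the customer's gateway.
      HttpGet get = new HttpGet("http://gateway.example.com/lookups/some-lookup");
      // Ask the server to emit interim 102 responses while it works (RFC 2518).
      get.addHeader("Expect", "102-Processing");
      try (CloseableHttpResponse response = client.execute(get)) {
        // Interim 1xx responses are consumed by the transport; only the final status is visible here.
        System.out.println(response.getStatusLine());
        System.out.println(EntityUtils.toString(response.getEntity()));
      }
    }
  }
}
```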
We raised a support ticket; that's the right thing to do. We also configured the Interlok instance that was listening on 8080 as a reverse proxy, which allowed us to get some testing done…
The processing sequence is:
Receive the request on /lookups/*
Save all the request headers as metadata, prefixed with InboundRequest_ (the prefix handling is sketched after this list).
The metadata keys httpmethod, jettyURI and jettyQueryString are all auto-populated by the Jetty consumer.
Figure out whether or not we have a query string via metadata-exists-branching-service and generate the correct target URL (see the URL sketch after this list).
Issue the call to the proxied Interlok instance listening on 8084, passing in most of the headers marked as InboundRequest_* (see the forwarding sketch after this list).
We used our apache-http optional package because some of the HTTP methods will be PATCH, which the JDK's HttpURLConnection does not support.
We filter out some of the headers because they will be determined by other parts of the configuration, or aren't needed as they would be defaulted anyway.
We use MappedMetadataFilter to strip off the InboundRequest_ prefix before sending each item as an HTTP header.
We save all the HTTP response headers as metadata, prefixed with ProxyResponseHdr_.
As we ignore any HTTP error responses from the call, the response code itself is automatically stored against the metadata key adphttpresponse.
Send the stored status code and the headers from the HTTP response back to the original client.
Again we use MappedMetadataFilter to strip the ProxyResponseHdr_ prefix.
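In the actual configuration the prefixing and un-prefixing is done declaratively with metadata filters such as MappedMetadataFilter. Purely as an illustration of the transformation (the method names here are hypothetical, not Interlok classes), it boils down to something like:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderPrefixMapping {

  // Copy inbound HTTP headers into "metadata", namespaced with a prefix such as
  // "InboundRequest_" so they cannot collide with anything else in the workflow.
  static Map<String, String> addPrefix(Map<String, String> headers, String prefix) {
    Map<String, String> metadata = new LinkedHashMap<>();
    headers.forEach((key, value) -> metadata.put(prefix + key, value));
    return metadata;
  }

  // The reverse operation (what MappedMetadataFilter gives us in configuration):
  // keep only the keys with the prefix, and strip it off again so the values can be
  // sent as plain HTTP headers. The same idea applies to the ProxyResponseHdr_ prefix.
  static Map<String, String> stripPrefix(Map<String, String> metadata, String prefix) {
    Map<String, String> headers = new LinkedHashMap<>();
    metadata.forEach((key, value) -> {
      if (key.startsWith(prefix)) {
        headers.put(key.substring(prefix.length()), value);
      }
    });
    return headers;
  }

  public static void main(String[] args) {
    Map<String, String> inbound = Map.of("Accept", "application/json", "X-Correlation-Id", "abc-123");
    Map<String, String> metadata = addPrefix(inbound, "InboundRequest_");
    System.out.println(metadata);                                  // {InboundRequest_Accept=..., ...}
    System.out.println(stripPrefix(metadata, "InboundRequest_"));  // back to plain header names
  }
}
```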
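The query-string branch is equally simple: metadata-exists-branching-service just picks between a URL with and without the jettyQueryString value appended. A rough Java equivalent (the buildTargetUrl method and the hard-coded host are illustrative assumptions) would be:

```java
public class TargetUrlBuilder {

  // Equivalent of the metadata-exists-branching-service decision: if the Jetty consumer
  // populated a "jettyQueryString" value, append it to the proxied URL; otherwise don't.
  static String buildTargetUrl(String jettyURI, String jettyQueryString) {
    String base = "http://localhost:8084" + jettyURI;   // the proxied Interlok instance
    if (jettyQueryString == null || jettyQueryString.isEmpty()) {
      return base;
    }
    return base + "?" + jettyQueryString;
  }

  public static void main(String[] args) {
    System.out.println(buildTargetUrl("/lookups/customer/1234", null));
    System.out.println(buildTargetUrl("/lookups/customer", "name=smith&limit=10"));
  }
}
```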
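Finally, the onward call and the reply to the original caller are handled in the real workflow by the apache-http producer and a Jetty response producer. The sketch below shows the equivalent moving parts with raw Apache HttpClient 4.x and the Servlet API; the forward method, the skipped-header set and the parameters are all illustrative assumptions, not Interlok classes:

```java
import java.util.Map;
import java.util.Set;

import javax.servlet.http.HttpServletResponse;

import org.apache.http.Header;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.RequestBuilder;
import org.apache.http.entity.ByteArrayEntity;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class ForwardToProxiedInstance {

  // Headers we never copy through; the HTTP client (or Jetty) will set these itself.
  private static final Set<String> SKIPPED = Set.of("Host", "Content-Length", "Transfer-Encoding", "Connection");

  static void forward(String method, String targetUrl, Map<String, String> requestHeaders,
                      byte[] body, HttpServletResponse originalClient) throws Exception {
    try (CloseableHttpClient client = HttpClients.createDefault()) {
      // RequestBuilder copes with any verb, including PATCH (the reason for using Apache HttpClient).
      RequestBuilder builder = RequestBuilder.create(method).setUri(targetUrl);
      requestHeaders.forEach((name, value) -> {
        if (!SKIPPED.contains(name)) {
          builder.addHeader(name, value);
        }
      });
      if (body != null && body.length > 0) {
        builder.setEntity(new ByteArrayEntity(body));
      }
      try (CloseableHttpResponse response = client.execute(builder.build())) {
        // Whatever the proxied instance said (including errors) is relayed verbatim;
        // in the workflow the status ends up in the adphttpresponse metadata key.
        originalClient.setStatus(response.getStatusLine().getStatusCode());
        for (Header h : response.getAllHeaders()) {
          if (!SKIPPED.contains(h.getName())) {
            originalClient.addHeader(h.getName(), h.getValue());
          }
        }
        byte[] payload = response.getEntity() != null
            ? EntityUtils.toByteArray(response.getEntity()) : new byte[0];
        originalClient.getOutputStream().write(payload);
      }
    }
  }
}
```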
Heroku bonus chatter
If you are deploying Interlok instances on Heroku, then Heroku only exposes a single port as the entry point to your instance. If you need JMX management capabilities and also want to expose an API endpoint, then this kind of proxy pattern is quite useful: you could enable ActiveMQ as a management component (you need to enable the ActiveMQ HTTP connector), proxy that with a workflow, configure other Heroku instances to use service:jmx:activemq:///http://ui:8080/activemq as their JMX URL, and use a single UI instance deployed on Heroku to manage the other adapter instances.
AdapterUI (via the jetty management component), ActiveMQ (via the activemq management component) and ProxyWorkflow are all running in the same JVM in the same Heroku instance. The proxy workflow is configured to forward all requests for /activemq to the HTTP endpoint configured in ActiveMQ. Under the covers, serialisation of the JMX objects is handled by XStream (the default). The only sticking point is that if you are using the bundled Derby database, it can take longer than 30 seconds to start up the UI instance (the ephemeral Heroku filesystem can be quite slow); you will probably need an external database to host the UI settings (or still use Derby, but the in-memory variant of the database).
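With that in place, anything that can resolve the JMX-over-JMS protocol can manage the remote instances through the proxied ActiveMQ endpoint. As a minimal sketch, assuming the Interlok JMX-over-JMS provider and the ActiveMQ HTTP transport jars are on the classpath (and with a purely illustrative MBean query):

```java
import java.util.Set;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteAdapterLookup {
  public static void main(String[] args) throws Exception {
    // The same URL the other Heroku instances are configured with; connecting will fail
    // with a MalformedURLException if no provider for the "activemq" protocol is on the classpath.
    JMXServiceURL url = new JMXServiceURL("service:jmx:activemq:///http://ui:8080/activemq");
    JMXConnector connector = JMXConnectorFactory.connect(url);
    try {
      MBeanServerConnection mbeans = connector.getMBeanServerConnection();
      // Illustrative query: list whatever MBeans are registered under the com.adaptris domain.
      Set<ObjectName> names = mbeans.queryNames(new ObjectName("com.adaptris:*"), null);
      names.forEach(System.out::println);
    } finally {
      connector.close();
    }
  }
}
```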