Has Anyone Configured a Forms Routing Service Cluster Using Citrix NetScaler?
Question
We have upgraded our Test environment to Laserfiche 11, and we use Citrix NetScaler. I have configured Forms to work in the LB but am very interested in configuring the Forms Routing Service Cluster. Has anyone who uses Citrix NetScaler done this yet? If so, how did you configure it?
Replies
If I am interpreting the help documentation correctly, I should be able to use a setup with NetScaler similar to the Nginx example.
We want the Forms IIS servers to be behind the same LB as the Forms Routing Service so we don't need two separate LBs, and the Primary Forms Server would be part of the LB.
This is how I have translated between the two; if someone knows differently, please let me know.
I would need to do the following in NetScaler if server1 = Primary Forms Server:
1. Create a VIP for the LB address laserfiche.company.com:443
2. Add bindings to server1, server2, and server3 to the VIP
3. Create a VIP for the LB address laserfiche.company.com:80
4. Add bindings to server1, server2, and server3 to the VIP
5. Create a VIP for the LB address laserfiche.company.com:8172
6. Add bindings to server1, server2, and server3 to the VIP
7. Create a VIP for the LB address laserfiche.company.com:8168
8. Add bindings to server1 to the VIP
9. Create a VIP for the LB address laserfiche.company.com:8732
10. Add bindings to server1 to the VIP
11. Create a VIP for the LB address laserfiche.company.com:8736
12. Add bindings to server1 to the VIP
13. Create a VIP for the LB address laserfiche.company.com:8170
14. Add bindings to server1 to the VIP
15. Create a VIP for the LB address laserfiche.company.com:8738
16. Add bindings to server1 to the VIP
17. Create a VIP for the LB address laserfiche.company.com:8268
18. Add bindings to server1 to the VIP
I am unsure about what to do for port 8181 though as I'm not sure how that translates to NetScaler's features. Would I create a VIP for port 8181 and bind server1 to it since it's the Primary Forms Server?
I find it helpful to categorize these into "frontend" and "backend" ports. The frontend ports are 80/443 and 8181; users' browsers make calls to these. The backend ports are all the rest in your list.
I would use a different address name/VIP for the frontend and backend port sets for clarity. That's just my preference though. Doesn't affect functionality. E.g.:
- Frontend: laserfiche.example.com (192.168.1.10)
- Backend: prod-lf-forms.example.com (192.168.1.11)
You should consider implementing network firewall rules that only allow traffic to the backend VIP and ports from the Forms servers themselves. End users will never send legitimate traffic over those ports.
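If it's easier to enforce that on the NetScaler itself than on an upstream firewall, extended ACLs along these lines should do it. The addresses are placeholders (Forms servers at 192.168.1.21-23, backend VIP at 192.168.1.11) and I haven't tested this on a live ADC, so treat it as a sketch:

    # Lower priority number = evaluated first: allow the Forms servers, then deny everyone else to the backend VIP
    add ns acl allow_forms_backend ALLOW -srcIP 192.168.1.21-192.168.1.23 -destIP 192.168.1.11 -protocol TCP -priority 10
    add ns acl deny_forms_backend DENY -destIP 192.168.1.11 -protocol TCP -priority 20
    # Extended ACLs only take effect once applied
    apply ns acls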
I am unsure about what to do for port 8181 though as I'm not sure how that translates to NetScaler's features. Would I create a VIP for port 8181 and bind server1 to it since it's the Primary Forms Server?
Yes, though technically not because it's the Primary Forms Server (defined as the server hosting the Primary Forms Routing Service).
The Laserfiche (Forms) Notification Hub Service listens on TCP port 8181. When a user opens the Forms web application in their browser, it makes an HTTP call to the configured Notification Service URL endpoint like https://forms.example.com:8181. The host in the Notification endpoint should generally be the same as for the frontend Forms URL. Provided this call successfully gets to the Notification Hub Service (and some auth steps happen), it's then "upgraded" to a WebSockets connection which stays open and provides real-time task updates in the Forms UI.
While I'm told you can have multiple Notification Hub Services and load balance between them, I've never gotten that working cleanly and haven't tried too hard. There's little if any value in doing so for either performance or availability. The service consumes almost no compute resources and there's still a single point of failure on the Notification Master Service (which the Hub Service(s) connect to) on the Primary Forms Server.
3. Create a VIP for the LB address laserfiche.company.com:80
4. Add bindings to server1, server2, and server3 to the VIP
You can either do this and let Forms handle HTTPS redirection or you can configure the NetScaler to do the HTTPS redirect itself. I personally like handling the redirect at the load balancer but neither way is intrinsically better than the other.
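If you do have the NetScaler own the redirect, the usual pattern from Citrix's responder documentation looks like the sketch below. I can't verify it on a live ADC, and vs_forms_http80 is just a placeholder name for whatever virtual server ends up answering on laserfiche.company.com:80:

    # Responder action that rebuilds the requested URL on https://
    add responder action act_http_to_https redirect "\"https://\" + HTTP.REQ.HOSTNAME + HTTP.REQ.URL"
    # Fire the redirect for any valid HTTP request hitting the port-80 vserver
    add responder policy pol_http_to_https HTTP.REQ.IS_VALID act_http_to_https
    # Bind the policy to the port-80 virtual server (placeholder name)
    bind lb vserver vs_forms_http80 -policyName pol_http_to_https -priority 100 -gotoPriorityExpression END -type REQUEST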
Sam, thank you very much for the reply. It was very helpful. Just to verify: I would create a VIP for the backend ports and point them to what you have posted above? I will make sure we talk to our security team about the firewall rules for the backend traffic.
And, what I hope is my last question: for the rest of the configuration (editing the web.config files, etc.), would I follow the instructions outlined in the online help documentation for the Nginx example?
Welcome =)
At this level of detail it's important to accurately define a few different technical terms as they're used by a Citrix NetScaler ADC. Quoted from the first link:
The entities that you configure in a typical Citrix ADC load balancing setup are:
- Load balancing Virtual Server. The IP address, port, and protocol combination to which a client sends connection requests for a particular load-balanced website or application. If the application is accessible from the Internet, the virtual server IP (VIP) address is a public IP address. If the application is accessible only from the LAN or WAN, the VIP is usually a private (ICANN non-routable) IP address.
- Service. The IP address, port, and protocol combination used to route requests to a specific load-balanced application server. A service can be a logical representation of the application server itself, or of an application running on a server that hosts multiple applications. After creating a service, you bind it to a load balancing virtual server.
- Server object. A virtual entity that enables you to assign a name to a physical server instead of identifying the server by its IP address. If you create a server object, you can specify its name instead of the server’s IP address when you create a service. Otherwise, you must specify the server’s IP address when you create a service, and the IP address becomes the name of the server.
This Citrix discussion thread notes "You can use the same VIP on multiple vservers as long as each vserver is on different ports."
Below is what I believe your configuration should be. Please note that I don't have an actual Citrix ADC to test this with and the guidance is based on my general understanding of networking/load balancers combined with reading their documentation.
Please first review Citrix - Configure SSL offloading with end-to-end encryption. For 443/8181, you must use either this end-to-end encryption config (frontend cert provided by VServer) or use the SSL_BRIDGE serviceType for both VServer and Service (certs provided by backend server IIS instance bindings - certs must include frontend host value in SAN field).
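If you go with end-to-end encryption, the frontend certificate has to exist on the ADC as a certkey so it can later be bound to the SSL virtual servers. A minimal sketch (file paths and the certkey name are placeholders):

    # Point the ADC at the cert + key files, then reference this certkey when binding to the SSL vservers later
    add ssl certKey laserfiche_example_com_cert -cert /nsconfig/ssl/laserfiche_example_com.pem -key /nsconfig/ssl/laserfiche_example_com.key
    # For the SSL_BRIDGE route there's nothing to install on the ADC; the vservers/services just use serviceType SSL_BRIDGE instead of SSL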
Create Servers using domain-name (FQDN) method:
- Server1: server1.example.com (Primary Forms Server)
- Server2: server2.example.com (Secondary)
- Server3: server3.example.com (Secondary)
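As CLI, using the domain-name method (the FQDNs are placeholders for your real Forms server names, and as I said above I can't test any of this on a live ADC):

    # Domain-name (FQDN) based server objects; the ADC resolves these via its configured DNS
    add server Server1 server1.example.com
    add server Server2 server2.example.com
    add server Server3 server3.example.com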
Create Services:
- Service_FormsFrontendWeb_SSL443_[Server1|Server2|Server3]
- Service_FormsFrontendNotificationHub_SSL8181_Server1
- Service_FormsBackendWCF_TCP8172_[Server1|Server2|Server3]
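The same services as CLI (names straight from the list above; the SSL services assume the end-to-end encryption route):

    # One service per backend server and port
    add service Service_FormsFrontendWeb_SSL443_Server1 Server1 SSL 443
    add service Service_FormsFrontendWeb_SSL443_Server2 Server2 SSL 443
    add service Service_FormsFrontendWeb_SSL443_Server3 Server3 SSL 443
    # Notification Hub runs only against the Primary Forms Server
    add service Service_FormsFrontendNotificationHub_SSL8181_Server1 Server1 SSL 8181
    # Forms Routing Service WCF endpoint (net.tcp), load balanced as raw TCP
    add service Service_FormsBackendWCF_TCP8172_Server1 Server1 TCP 8172
    add service Service_FormsBackendWCF_TCP8172_Server2 Server2 TCP 8172
    add service Service_FormsBackendWCF_TCP8172_Server3 Server3 TCP 8172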
Create service groups:
- ServiceGroup_FormsFrontendWeb_SSL443
  Members: Service_FormsFrontendWeb_SSL443_[Server1|Server2|Server3]
- ServiceGroup_FormsBackendWCF_TCP8172
  Members: Service_FormsBackendWCF_TCP8172_[Server1|Server2|Server3]
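As CLI. One wrinkle: on the ADC, service group members are bound as server + port pairs rather than by attaching the standalone services above, so if you use the service groups the individual 443/8172 services are effectively redundant:

    add serviceGroup ServiceGroup_FormsFrontendWeb_SSL443 SSL
    bind serviceGroup ServiceGroup_FormsFrontendWeb_SSL443 Server1 443
    bind serviceGroup ServiceGroup_FormsFrontendWeb_SSL443 Server2 443
    bind serviceGroup ServiceGroup_FormsFrontendWeb_SSL443 Server3 443

    add serviceGroup ServiceGroup_FormsBackendWCF_TCP8172 TCP
    bind serviceGroup ServiceGroup_FormsBackendWCF_TCP8172 Server1 8172
    bind serviceGroup ServiceGroup_FormsBackendWCF_TCP8172 Server2 8172
    bind serviceGroup ServiceGroup_FormsBackendWCF_TCP8172 Server3 8172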
Create VIPs:
- VIP1: 10.0.0.1 (Frontend)
  Create DNS A record: laserfiche.example.com -> 10.0.0.1
- VIP2: 10.0.0.2 (Backend)
  Create DNS A record: prod-lf-forms-routing-lb.example.com -> 10.0.0.2
Create Virtual Servers:
- VServer_FormsFrontend_SSL443
  Type: SSL
  IP: VIP1
  Port: 443
  Bind Service Group: ServiceGroup_FormsFrontendWeb_SSL443
- VServer_FormsFrontend_HTTP80
  Type: HTTP
  IP: VIP1
  Port: 80
  Configure: Add a responder action to redirect all traffic to VServer_FormsFrontend_SSL443, then add a responder policy specifying that action and bind it to this virtual server. See: Configure an HTTPS virtual server to accept HTTP traffic
- VServer_FormsFrontendNotificationHub_SSL8181
  Type: SSL
  IP: VIP1
  Port: 8181
  Bind Service: Service_FormsFrontendNotificationHub_SSL8181_Server1
- VServer_FormsBackendWCF_TCP8172
  Type: TCP
  IP: VIP2
  Port: 8172
  Bind Service Group: ServiceGroup_FormsBackendWCF_TCP8172
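And the virtual server layer as CLI, assuming the end-to-end encryption route (with SSL_BRIDGE you'd use serviceType SSL_BRIDGE for the 443/8181 objects and skip the cert bindings). The certkey and responder policy names are the placeholders from my earlier sketches:

    # Frontend web traffic on VIP1
    add lb vserver VServer_FormsFrontend_SSL443 SSL 10.0.0.1 443
    bind lb vserver VServer_FormsFrontend_SSL443 ServiceGroup_FormsFrontendWeb_SSL443
    bind ssl vserver VServer_FormsFrontend_SSL443 -certkeyName laserfiche_example_com_cert

    # Port 80 exists only to redirect to HTTPS (responder action/policy from the earlier sketch)
    add lb vserver VServer_FormsFrontend_HTTP80 HTTP 10.0.0.1 80
    bind lb vserver VServer_FormsFrontend_HTTP80 -policyName pol_http_to_https -priority 100 -gotoPriorityExpression END -type REQUEST

    # Notification Hub, Primary Forms Server only
    add lb vserver VServer_FormsFrontendNotificationHub_SSL8181 SSL 10.0.0.1 8181
    bind lb vserver VServer_FormsFrontendNotificationHub_SSL8181 Service_FormsFrontendNotificationHub_SSL8181_Server1
    bind ssl vserver VServer_FormsFrontendNotificationHub_SSL8181 -certkeyName laserfiche_example_com_cert

    # Backend Routing Service cluster traffic on VIP2
    add lb vserver VServer_FormsBackendWCF_TCP8172 TCP 10.0.0.2 8172
    bind lb vserver VServer_FormsBackendWCF_TCP8172 ServiceGroup_FormsBackendWCF_TCP8172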
You've probably noticed that I didn't include any Citrix ADC configurations for all the other backend Forms WCF ports: 8168, 8170, 8732, 8736, 8738, & 8268. These don't get load balanced, and I'm not actually sure there's any specific utility in passing them through the proxy to the Primary Forms Instance rather than simply sending the traffic directly there in the first place.
You'd have a Web.config that uses the routing cluster address for the 8172/lfinstance endpoint, the Primary Forms Server FQDN for 8168, 8170, 8732, 8736, 8738, & 8268, and leaves the rest as "localhost" as before. Your <client> block would look something like this:

    <client>
      <endpoint address="net.tcp://{prod-lf-forms-routing-lb.example.com}:8172/lfinstance" binding="netTcpBinding" bindingConfiguration="timeoutBinding" contract="Laserfiche.Forms.Routing.IInstanceProcessing" name="" />
      <endpoint address="net.tcp://{PrimaryFormsServerFQDN.example.com}:8168/lfrouting" binding="netTcpBinding" bindingConfiguration="timeoutBinding" contract="Laserfiche.Forms.Routing.IRoutingEngineService" name="" />
      <endpoint address="net.tcp://localhost:8268/lfpushnotification" binding="netTcpBinding" bindingConfiguration="timeoutBinding" contract="Laserfiche.PushNotificationService.SharedContracts.IPushNotificationService" name="" />
      <endpoint address="net.tcp://{PrimaryFormsServerFQDN.example.com}:8732/lfautotrigger" binding="netTcpBinding" bindingConfiguration="timeoutBinding" contract="FormsModel.SharedContracts.IAutoTrigger" name="" />
      <endpoint address="net.tcp://{PrimaryFormsServerFQDN.example.com}:8736/lfformexport" binding="netTcpBinding" bindingConfiguration="timeoutBinding" contract="FormsModel.SharedContracts.IFormExportService" name="" />
      <!-- ... more endpoints ... -->
      <endpoint address="net.tcp://{PrimaryFormsServerFQDN.example.com}:8170/attachmentTransfer" binding="netTcpBinding" bindingConfiguration="timeoutBindingStreamed" contract="FormsModel.SharedContracts.IAttachmentTransferService" name="" />
    </client>
I'm sanity checking that with the Forms team. Doesn't change anything about the rest of the Citrix ADC config for this either way.
I have confirmed with the Forms team that, provided the Secondary instances can reach the Primary directly, you can (and should) use the Primary Forms Server address instead of the load balancer address for the client endpoints other than the 8172 (or 8176) /lfinstance endpoint (where indicated above).
Thank You Sam!!
I'm starting to work on this project again and while reading back through everything, I wanted to make sure I was remembering correctly. In the help documentation it states:
Features
A Routing Service Cluster will be able to process the actions below over a distributed environment:
- Process user submissions.
- Process workflow callbacks.
- Process user tasks and service tasks.
- Stop instances, retry steps, and delete instances when deleting business process.
Other features and services will only be available on the primary Routing Service node.
Since all the Forms instances in the cluster can process workflow callbacks, do we point Workflow to the load balancer address or the Primary Forms Server address?
Belated reply. Point Workflow at the Primary Forms Server address. Workflow is calling a Forms HTTP web service like https://primaryforms.example.com:443/Forms/webapi/workflow_callbacks, which you'll note is a Forms IIS URL. Workflow does not communicate with the Forms Routing Service WCF services on the 8000 range ports, and so should not be pointed at the Forms Routing Service Cluster load balancer address.