
Question

Laserfiche deployment on AWS as IaaS

asked on July 1, 2019

I need a solution architecture guide or design for a Laserfiche deployment on AWS as IaaS.


Answer

SELECTED ANSWER
replied on July 5, 2019

Hi Mohammed,

Here's a general diagram I use as a guideline. Your actual number of servers and how you split up Laserfiche components among them will depend on the system's size, user count, and use case.

For example, if your solution doesn't use Quick Fields Agent or Import Agent, you don't need the "Capture Server". If your solution is especially Forms-heavy, you may want a dedicated Forms Server, and so on.

 


Replies

replied on July 1, 2019

Hi Mohammed,

Can you make your request any more specific? Deploying Laserfiche on AWS IaaS is similar to deploying it on-prem. You still have Windows Server VMs, file storage, and SQL Server databases.

Generally speaking, you'll have the following:

  • EC2 instances running Windows Server 2016 or 2019
    • Use M5 (General Purpose) instances for Laserfiche Server
    • Use M5 or C5 (Compute Optimized) instances for Web applications, Workflow, and Quick Fields (Agent)
    • T3 Burstable instances are appropriate for Dev/Test environments
  • EBS volumes (gp2 or st1) for repository storage
    • Laserfiche repositories cannot directly use Object storage like S3
    • EBS backup snapshots do use S3
  • AWS RDS for MSSQL instance(s) for the databases
    • Note that you'll need to pre-create empty application databases in RDS first; some Laserfiche DB setup wizards will fail to create a new database on RDS but can populate an empty, existing database just fine
    • M5 or R5 (Memory Optimized) instances
    • T series if you only need SQL Express for a small system
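To make the sizing guidance above concrete, here's a small sketch that maps server roles to example instance types. The role names, the family table, and the `pick_instance_type` helper are all invented for illustration; your actual instance choices depend on system size, user count, and use case, as noted above.

```python
# Illustrative sketch only: example EC2/RDS instance-family choices per
# Laserfiche role, following the guidance in this thread. Role names and
# this helper are hypothetical, not a Laserfiche or AWS API.

ROLE_INSTANCE_FAMILIES = {
    "laserfiche-server": "m5",   # General Purpose for Laserfiche Server
    "web-apps": "c5",            # Compute Optimized (m5 also works)
    "workflow": "c5",
    "quick-fields-agent": "c5",
    "sql-rds": "r5",             # Memory Optimized (m5 also works)
}

def pick_instance_type(role: str, environment: str = "prod",
                       size: str = "xlarge") -> str:
    """Return an example instance type for a role.

    Dev/Test environments use burstable T3 instances regardless of role,
    per the notes above.
    """
    if environment in ("dev", "test"):
        return f"t3.{size}"
    family = ROLE_INSTANCE_FAMILIES.get(role, "m5")
    return f"{family}.{size}"

print(pick_instance_type("laserfiche-server"))            # m5.xlarge
print(pick_instance_type("workflow", environment="dev"))  # t3.xlarge
```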
replied on July 2, 2019

Thanks for your reply. This information is good enough; however, I don't have experience with such an implementation. Do you have a sample diagram that shows how to place all the mentioned components in one architecture diagram?

replied on July 7, 2019

Dear Samuel,

Thanks for your reply, highly appreciated :)

Now I have a clear vision.
replied on March 6, 2020

Dear Samuel,

What about load balancing for the Laserfiche web applications (Forms, Web Client, WebLink)? Should we use AWS Application or Network Load Balancing, and why?

replied on March 6, 2020

NLBs have a much simpler form of sticky sessions (source IP affinity) than ALBs, which are cookie-based.

You should go with ALBs. That also allows you to use path-based routing to have one hostname for those Laserfiche web applications (e.g. lf.company.com). In addition to being simpler for end users, it can help avoid CORS issues in your solution.

NLBs do not support path-based routing, so you need an NLB per backend server with separate application roles. For example, if you have two Forms servers and two Web Client servers, you'll need two different endpoints (e.g. docs.company.com, forms.company.com), each of which requires a separate NLB.

ALBs also allow you to use Web Application Firewall (WAF), an incredibly useful security tool for anything public-facing.

Important note: Integrated Windows Authentication (IWA, the "Windows Authentication" button) does not work correctly through an ALB (layer 7, HTTP) because it is a layer 4, TCP-level protocol. Doing so also presents a security risk, as users' authentication tokens can get crossed. It is important to ensure that LFDSSTS login pages are either behind an NLB (layer 4, TCP) or not behind a load balancer at all.
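As a rough sketch of the path-based routing idea above, here's a toy rule evaluator in the style of ALB listener rules: one hostname, with path prefixes routed to different backend target groups. The path prefixes and target-group names here are assumptions made up for this example, not actual Laserfiche or AWS identifiers.

```python
# Toy sketch of ALB-style path-based routing: all Laserfiche web apps
# behind one hostname (e.g. lf.company.com), with path prefixes routed
# to different target groups. Rules are checked in priority order,
# first match wins, like ALB listener rules. Paths/names are illustrative.

ROUTING_RULES = [
    ("/Forms", "forms-target-group"),
    ("/WebLink", "weblink-target-group"),
    ("/laserfiche", "webclient-target-group"),
]
DEFAULT_TARGET = "webclient-target-group"  # ALB default action

def route(path: str) -> str:
    """Return the target group for a request path."""
    for prefix, target in ROUTING_RULES:
        if path.startswith(prefix):
            return target
    return DEFAULT_TARGET

print(route("/Forms/home"))   # forms-target-group
print(route("/WebLink/doc"))  # weblink-target-group
```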

replied on September 16

I know this is a bit of an older post, but I wanted to get clarification. If we want to have multiple WebLink or Web Client servers that would have used NLB if they were physical, would we instead use ALBs to accomplish the same goals?

replied on September 16

There's an important caveat to what I wrote above five years ago.

An AWS Network Load Balancer (NLB) is the direct functional equivalent of a Windows NLB: they are both Layer 4 TCP load balancers.

If end users are using any of the Laserfiche desktop client components (Scanning, Office Integration, Snapshot, Webtools Agent), you MUST use an NLB with target group sticky sessions enabled. See: Edit target group attributes for your Network Load Balancer - Elastic Load Balancing. While AWS doesn't state it explicitly, this type of NLB sticky session uses the client IP address for stickiness. Client IPs do not change based on the client application (i.e., the IP is the same for a user's web client browser session, Laserfiche Scanning, etc.).

AWS Application Load Balancers (ALBs) are Layer 7/HTTP and only support cookie-based sticky sessions. See: Edit target group attributes for your Application Load Balancer - Elastic Load Balancing.

Laserfiche desktop client applications that connect to the Repository Web Client use an embedded Edge WebView2 browser that does not share a cookie store with any other browser. Webtools Agent manages auth tokens for them via a special non-browser mechanism. Any existing ALB session affinity cookies aren't accessible to them, so the ALB treats them as new connections and assigns them to a target according to the load balancing algorithm, typically round-robin. That makes it a coin flip (or 1dX dice roll) whether the client app connects to the same web client server as your browser. If the client app's connection is sent to a different server, it fails at the application layer because none of the assumed/required session state is present.

This is only a consideration if you're actually load balancing among multiple backend targets. If you're using an ALB as a pure reverse proxy to a single backend server, you don't need the LB to handle session affinity because there's only one valid destination server for any request.
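As a toy illustration of that failure mode, here's a simulation of a round-robin load balancer with cookie-based stickiness. The class, cookie name, and mechanics are invented for this sketch, not AWS's actual implementation; the point is that a client with an isolated cookie store gets re-balanced on every request.

```python
# Toy simulation: cookie-based stickiness works for a browser that keeps
# its affinity cookie, but a desktop client with a separate cookie store
# (like an embedded WebView2) presents no cookie, so each of its requests
# is re-balanced round-robin. All names here are illustrative.

from itertools import cycle

class CookieStickyLB:
    def __init__(self, targets):
        self._round_robin = cycle(targets)

    def handle(self, cookies: dict) -> str:
        """Return the backend for a request; set an affinity cookie."""
        if "AWSALB" in cookies:              # existing affinity wins
            return cookies["AWSALB"]
        target = next(self._round_robin)     # otherwise round-robin
        cookies["AWSALB"] = target           # browsers store this cookie
        return target

lb = CookieStickyLB(["web1", "web2"])

browser_cookies = {}  # a browser reuses its cookie store across requests
assert lb.handle(browser_cookies) == lb.handle(browser_cookies)  # sticky

# A desktop client with an isolated cookie store sends no cookie each
# time, so consecutive requests can land on different backends.
first = lb.handle({})
second = lb.handle({})
print(first, second)
```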

In summary:

  • If using Laserfiche desktop client apps with the Repository Web Client, use an AWS NLB with (client IP-based) sticky sessions. They won't (reliably) work with an AWS ALB or any load balancer relying on cookie-based session affinity. You must use an NLB TCP listener, because TLS listeners (which terminate TLS at the LB) do not support sticky sessions. This means the backend servers running IIS handle TLS handshakes with clients directly.
  • If not using Laserfiche desktop client apps, you can use either an AWS ALB or NLB. Both will work. I generally prefer ALBs for the reasons described in my previous reply.

 

replied on September 17

Wow, you are awesome!  Thanks a ton for that in-depth info.  We are setting this up for WebLink first but the WebClient isn't far behind so I love knowing both options.  I appreciate the follow-up on an old post, providing us this updated knowledge.
