
Question

Server Architecture & Volumes

asked on May 5, 2013

I have a question about volumes: can they be designed to provide local storage and retrieval in an enterprise environment, or is it best for performance reasons to keep volumes local to the server?  Here is what I mean.

 

Say a client is headquartered in Carson, CA.  They have over 15 sites, mostly national, with a couple in Mexico and Canada.  The goals are as follows:

  • Allow each site to scan, store, and retrieve locally, while also having seamless access to documents from any other site. In other words, a single search for, say, “Boeing 747” should search all documents and bring back results.
  • Provide best performance while minimizing network bandwidth requirements

The customer feels they need a master repository.  If they went with multiple repositories, could they perform the search above?  My understanding, based on a simple test in the client, is that the search engine searches one repository at a time, but maybe there is a way using advanced search or some other method?

 

What about local volumes?  My understanding of the process is that Web Access sends the request to the Laserfiche Server, which queries SQL, retrieves the images from the volume, and returns them back through Web Access.  If so, having local volumes at each site would make retrieval slower than having the volumes located at headquarters.  Is this correct?

 

If someone in New York is performing the search, the images should not have to travel from NY to Carson and back to NY if they are already on a server in NY.  But if someone in Mexico is searching, then it would make sense for the images to go from NY to Carson and then to Mexico.  Is there a way to configure this so that content from local volumes stays local?


Answer

APPROVED ANSWER
replied on May 6, 2013

You’re right, performance will be much better if the volumes are in the same location as the Laserfiche Server.  For a situation like this, I think the best option is probably a hybrid approach.  You can have local repositories for each branch, where users do their daily operations, and a central repository into which all of the branch repositories are consolidated.  So, on a nightly basis, a scheduled Workflow replicates all of the new documents in each local repository into the central one.  This allows quick access to the documents users need the majority of the time, while still giving them access to the “main” repository, which has all of the documents from every branch.
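To make the replication idea concrete, here is a minimal in-memory sketch of the nightly step.  Everything here is hypothetical illustration, not the Laserfiche SDK: `Repository`, `new_documents`, and `replicate` are stand-in names, and a real Workflow would move actual entries rather than dictionary items.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Repository:
    """Stand-in for a Laserfiche repository (hypothetical, for illustration)."""
    name: str
    documents: dict = field(default_factory=dict)  # doc_id -> (title, modified)

    def new_documents(self, since: datetime) -> dict:
        """Return only the documents modified after the given timestamp."""
        return {doc_id: doc for doc_id, doc in self.documents.items()
                if doc[1] > since}

def replicate(branches, central, last_run):
    """Copy every document modified since last_run from each branch
    repository into the central repository."""
    for branch in branches:
        for doc_id, doc in branch.new_documents(last_run).items():
            # Prefix IDs with the branch name so entries from different
            # branches cannot collide in the central repository.
            central.documents[f"{branch.name}/{doc_id}"] = doc
```

The scheduler (the nightly Workflow in this scenario) would simply call `replicate` with the timestamp of the previous run, so only the delta travels over the WAN each night.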

 

There is no out-of-the-box way to search multiple Laserfiche repositories at the same time, but the system described above would require searching a maximum of two repositories to find any document.  If the search in the local repository doesn’t turn anything up, users can look through the central repository by quickly toggling the search settings.  Our SharePoint integration does have a federated search ability that allows users to look through multiple repositories at once, and it’s also possible to build a custom search utility that scans multiple locations.  The solution I’ve described, though, would probably render such options unnecessary.
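If someone did want to build that custom utility, the core pattern is just a fan-out: send the same query to each repository in parallel and merge the hits.  The sketch below assumes nothing about the Laserfiche API; `search_repository` is a hypothetical placeholder where a real utility would make one per-repository search call.

```python
from concurrent.futures import ThreadPoolExecutor

def search_repository(repo, query):
    """Search a single repository. Hypothetical: a real utility would call
    the Laserfiche search API for this one repository instead."""
    return [(repo["name"], title) for title in repo["documents"]
            if query.lower() in title.lower()]

def federated_search(repositories, query):
    """Fan the same query out to every repository in parallel and merge
    the per-repository hits into one result list."""
    with ThreadPoolExecutor(max_workers=len(repositories)) as pool:
        result_sets = pool.map(lambda r: search_repository(r, query),
                               repositories)
    return [hit for hits in result_sets for hit in hits]
```

Running the per-repository searches in parallel matters here: the slowest link (e.g., the Mexico site) sets the total search time, rather than the sum of all of them.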

 

