
Question

Best practice for copying a Laserfiche Server

asked on May 22, 2015

Hello,

We have a client with a Rio license who has a repository in one state and wants to replicate or copy it to another state. What would be the best way to do that? Through Workflow? Can it replicate the same folder structure? Or should we create a server there, copy the volumes, and attach them? Either way, is it possible for the copied documents to keep the same IDs as in the origin?

 

Thanks for the help.

Best Regards,

Vitor

0 0

Replies

replied on May 22, 2015

If they will be using a new server for this, then you can backup the database and volumes and restore it on the new server and register the repository there. That will ensure the entry IDs remain the same. You also wouldn't have to worry about changing the repository UUID in this case since it's a different Laserfiche Server and SQL Server (presumably). Also note that you'll need to update the volume paths if that ends up changing in the new repository.
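On the "update the volume paths" step, one small sanity check before pointing the restored repository at the new locations is to verify that every volume directory actually exists where you expect it. This is a generic sketch in plain Python, not a Laserfiche API; the volume names and paths in the comment are made up:

```python
import os

def missing_volume_dirs(volumes):
    """Given a {volume_name: expected_new_path} mapping, return the
    volumes whose directory does not exist at the new location."""
    return {name: path for name, path in volumes.items()
            if not os.path.isdir(path)}

# Hypothetical usage after restoring volumes to E:\LFVolumes:
#   missing_volume_dirs({"VOL1": r"E:\LFVolumes\VOL1",
#                        "VOL2": r"E:\LFVolumes\VOL2"})
# An empty result means every expected directory is in place.
```

Anything the function returns is a volume you would need to track down before updating its path in the repository.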

2 0
replied on May 22, 2015

Hi Alexander,

OK, it will be a new server. I thought about that too, but the sites are far from each other and the connection is not that great. Since it is almost 2 TB of images, they fear it will take a long time, so I was searching for an idea that would let me do it in parts. They don't have storage space for the backup, and they need to have a copy of these things before something happens, since backup is not working.

0 0
replied on May 22, 2015

So the main concern is getting the data to the new site then? I'm not sure what the best way would be for the customer, given that there is a large amount of data that needs to be transferred and the connection between the two sites isn't great, as you've stated.

That might be something to discuss further with their IT department. The data transfer is really an independent step at that point. Once the data is transferred to the new site, then you would just restore things as you normally would in Laserfiche, i.e. restore the database to the new SQL instance, register the repository, update the volume paths, etc.

1 0
replied on May 24, 2015

Hi

I have a similar request from a customer. They would like an offline copy kept at a different site. Although it would be possible to create a copy on a new server, is there an easy way to update this once a month with any file modifications or newly added files?

regards

Peter
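The "update once a month" part of Peter's question is essentially a one-way incremental mirror; on Windows, a tool like robocopy is the usual way to do this. Purely to illustrate the logic, here is a minimal sketch in Python (the paths are hypothetical, and this covers only the volume files; the SQL database needs its own backup/restore cycle):

```python
import os
import shutil

def sync_new_and_modified(src_root, dst_root):
    """One-way incremental mirror: copy a file from src_root to dst_root
    only when the destination copy is missing or older than the source.
    Returns the list of destination paths that were (re)copied."""
    copied = []
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        dst_dir = os.path.join(dst_root, rel)
        os.makedirs(dst_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(dst_dir, name)
            if (not os.path.exists(dst)
                    or os.path.getmtime(src) > os.path.getmtime(dst)):
                shutil.copy2(src, dst)  # copy2 preserves timestamps
                copied.append(dst)
    return copied
```

Because `copy2` preserves the modification time, unchanged files are skipped on the next monthly run, so only new and modified files cross the link.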

0 0
replied on May 28, 2015

What is the connection between the two servers? 

 

And, what is the goal?

0 0
replied on May 28, 2015

If I am not wrong, we have a link with about 50 Mb of bandwidth.

Our goal is to have the images on the other server, so that in case we lose the main server, we have the other one as a spare, perhaps ready for use, until we have the backup solution ready for the current server.

0 0
replied on May 28, 2015

Let's see...

There are a few ways you could handle this.

  1. Using strictly Laserfiche products.  You could set up the secondary server as a new repository.  You could set up Workflow to watch for creates/changes/deletes on one repository and mirror them on the secondary.  The complexity of the rule set depends on the repository and how it is used.
  2. Application-level tools.  You could use SQL replication and file replication tools to make updated copies of the SQL and volumes at the secondary location.  If the primary fails, you could start up the repository at the second site.  (You would need to take lots of care in the design, since you wouldn't want to have both started at the same time.  Also - you should consider what happens when you bring up the secondary site with regard to writing things there.  How do you commit those changes back to the primary?  Lots of issues here, but it can be done.)
  3. System-level tools.  You could use VMware to make virtual machines synced up to be live disaster-recovery machines.  You might need this: https://www.vmware.com/products/site-recovery-manager/  This would be the most expensive and also the most robust of the solutions.
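For option 2, one simple guard against the "both started at the same time" problem is a heartbeat check before the secondary site brings its server up. This is a generic sketch of the idea, not a Laserfiche or SQL Server feature; the heartbeat file and the 300-second timeout are invented for illustration:

```python
import os
import time

def primary_looks_alive(heartbeat_file, stale_after=300, now=None):
    """Return True if the primary's heartbeat file was touched within
    stale_after seconds. The primary side would touch this file on a
    schedule; the secondary checks it before starting its services."""
    now = time.time() if now is None else now
    try:
        return (now - os.path.getmtime(heartbeat_file)) < stale_after
    except OSError:  # heartbeat file missing: treat the primary as down
        return False
```

The secondary's startup script would refuse to start while this returns True, which makes an accidental split-brain at least require two failures instead of one.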


2 0
replied on May 29, 2015

That's great, thank you a lot!

If I choose number one and use Workflow, is there a way to build a workflow that replicates the same folder structure I have on the current server? And also replicates the same IDs?

0 0
replied on May 29, 2015

Replicating across a slow network will be a very long process for 2TB of data.  I would recommend getting an external drive, backing up the data to the external drive and then shipping it to the other location.
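To put a rough number on "a very long process": 2 TB over the ~50 Mb link mentioned earlier works out to several days of continuous transfer, even before retries and overhead. A quick back-of-the-envelope calculation (the 70% usable-bandwidth factor is just an assumption):

```python
def transfer_days(size_tb, link_mbps, efficiency=0.7):
    """Rough wire-transfer time in days for size_tb terabytes over a
    link_mbps megabit/s link at the given usable-bandwidth fraction."""
    bits = size_tb * 8 * 10**12                  # decimal TB -> bits
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 86400

# ~2 TB over a 50 Mb link: about 5.3 days at 70% efficiency,
# and about 3.7 days even at a theoretical 100%.
```

Shipping a drive starts looking very competitive at those numbers.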

0 0
replied on May 29, 2015

Vitor - you could replicate the structure.  If you have rules for the existing structure (i.e. if you are already filing things in the existing structure via workflow) then you can use the same rules for the incoming docs.

 

If you don't have rules, I think you could use WF to capture the original path from the changed document into a token and then use that to move it in the new repository. 

 

A challenge, but I think it can be done.
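The "capture the original path into a token" idea boils down to string handling like the following. This is a plain-Python sketch of the logic, not actual Workflow activities, and the example path is made up:

```python
def folders_to_create(source_path):
    """Split an entry path captured from the source repository (e.g. a
    Workflow path token) into the chain of parent folders that must be
    created in the destination repository, top-down."""
    parts = [p for p in source_path.split("\\") if p]
    return ["\\" + "\\".join(parts[:i + 1]) for i in range(len(parts))]

# folders_to_create("\\Invoices\\2015\\May") yields each folder level
# in order, so the mirror can create missing parents before filing.
```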

0 0
replied on May 29, 2015

> and also replicate the same IDs?

Workflow can't help with this since client applications like WF can't assign entry IDs. Even if IDs were handed out in a totally predictable way (which they are not; see Miruna's answer here), you can't guarantee that Workflow is creating entries in the remote repository in the same order in which they were created locally.

Michael's suggestions 2 and 3 are much better than the first one, since they are using tools for their designed purposes.

0 0