
Question

upgrade workflow database wait operation timed out

asked on May 15, 2014

Hi,

I upgraded my Workflow instance to version 9.1.1 and I am getting an error when proceeding with the database schema upgrade. I have noticed that my Workflow database has grown considerably (140 GB). Please advise on how I can fix this and get the Workflow server back up and running.

 

 


alter procedure get_instance_info
    @instance_id    uniqueidentifier
as
    set nocount on
    select SI.search_id, SI.instance_id, SI.parent_instance_id, SI.version, IWN.workflow_name, IRN.rule_name, IUN.user_name, SI.workflow_id,
                SR.repository_name, SR.server_name, 
                SE.entry_id, SE.entry_name, SE.entry_path, SE.entry_guid, 
                SS.start_time, SS.status_time, SS.idle_time, SS.status, SS.has_errors,
                SS.has_warnings, SS.has_informations,
                BPI.bpi_name, BPI.bpi_status 
    from search_instance as SI
    left join search_entry as SE on SI.search_id = SE.search_id AND SE.is_starting_entry = 1
    left join search_status as SS on SI.search_id = SS.search_id    
    left join search_rep SR on SR.repository_id = SE.repository_id 
    left join index_workflow_name IWN on SI.workflow_id = IWN.workflow_id
    left join index_rule_name IRN on SI.workflow_rule = IRN.rule_name_id
    left join index_user_name IUN on SI.workflow_user = IUN.user_name_id
    left join bp_instance BPI on SI.search_id = BPI.bpi_id 
    where SI.instance_id = @instance_id        
    union select SI.search_id, SI.instance_id, SI.parent_instance_id, SI.version, IWN.workflow_name, IRN.rule_name, IUN.user_name, SI.workflow_id,
                SR.repository_name, SR.server_name, 
                SE.entry_id, SE.entry_name, SE.entry_path, SE.entry_guid, 
                SS.start_time, SS.status_time, SS.idle_time, SS.status, SS.has_errors,
                SS.has_warnings, SS.has_informations,
                BPI.bpi_name, BPI.bpi_status
    from search_instance_log as SI
    left join search_entry_log as SE on SI.search_id = SE.search_id AND SE.is_starting_entry = 1
    left join search_status_log as SS on SI.search_id = SS.search_id    
    left join search_rep SR on SR.repository_id = SE.repository_id 
    left join index_workflow_name IWN on SI.workflow_id = IWN.workflow_id
    left join index_rule_name IRN on SI.workflow_rule = IRN.rule_name_id
    left join index_user_name IUN on SI.workflow_user = IUN.user_name_id
    left join bp_instance BPI on SI.search_id = BPI.bpi_id 
    where SI.instance_id = @instance_id    
                                     
    RETURN

 

alter procedure get_children_instance_info
    @parent_instance_id    uniqueidentifier
as
    set nocount on
    select SI.search_id, SI.instance_id, SI.parent_instance_id, SI.version, IWN.workflow_name, IRN.rule_name, IUN.user_name, SI.workflow_id,
                SR.repository_name, SR.server_name, 
                SE.entry_id, SE.entry_name, SE.entry_path, SE.entry_guid, 
                SS.start_time, SS.status_time, SS.idle_time, SS.status, SS.has_errors,
                SS.has_warnings, SS.has_informations,
                BPI.bpi_name, BPI.bpi_status 
    from search_instance as SI
    left join search_entry as SE on SI.search_id = SE.search_id AND SE.is_starting_entry = 1
    left join search_status as SS on SI.search_id = SS.search_id    
    left join search_rep SR on SR.repository_id = SE.repository_id 
    left join index_workflow_name IWN on SI.workflow_id = IWN.workflow_id
    left join index_rule_name IRN on SI.workflow_rule = IRN.rule_name_id
    left join index_user_name IUN on SI.workflow_user = IUN.user_name_id
    left join bp_instance BPI on SI.search_id = BPI.bpi_id 
    where SI.parent_instance_id = @parent_instance_id        
    union select SI.search_id, SI.instance_id, SI.parent_instance_id, SI.version, IWN.workflow_name, IRN.rule_name, IUN.user_name, SI.workflow_id,
                SR.repository_name, SR.server_name, 
                SE.entry_id, SE.entry_name, SE.entry_path, SE.entry_guid, 
                SS.start_time, SS.status_time, SS.idle_time, SS.status, SS.has_errors,
                SS.has_warnings, SS.has_informations,
                BPI.bpi_name, BPI.bpi_status 
    from search_instance_log as SI
    left join search_entry_log as SE on SI.search_id = SE.search_id AND SE.is_starting_entry = 1
    left join search_status_log as SS on SI.search_id = SS.search_id    
    left join search_rep SR on SR.repository_id = SE.repository_id 
    left join index_workflow_name IWN on SI.workflow_id = IWN.workflow_id
    left join index_rule_name IRN on SI.workflow_rule = IRN.rule_name_id
    left join index_user_name IUN on SI.workflow_user = IUN.user_name_id
    left join bp_instance BPI on SI.search_id = BPI.bpi_id 
    where SI.parent_instance_id = @parent_instance_id    
                                     
    RETURN

 

create index wf_reporting_log_status_ix
    on workflow_reporting_log 
    (
        workflow_id,start_time
    )
    include 
    (
        status,
        user_name_id
    )


System.Data.SqlClient.SqlException (0x80131904): Timeout expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated. ---> System.ComponentModel.Win32Exception (0x80004005): The wait operation timed out
   at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
   at System.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
   at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async, Int32 timeout)
   at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(TaskCompletionSource`1 completion, String methodName, Boolean sendToPipe, Int32 timeout, Boolean asyncWrite)
   at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
   at Laserfiche.Workflow.Utilities.WorkflowConfiguration.UpgradeExecutorBase.ExecuteQuery(UpgradeQuery query, ExecuteQueryContext context)
ClientConnectionId:965f6337-cc05-4955-8334-75a0d03d45a

 

Edit: It seems to be timing out when creating this index...

 

create index wf_reporting_log_status_ix
    on workflow_reporting_log 
    (
        workflow_id,start_time
    )
    include 
    (
        status,
        user_name_id
    )
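Before retrying, it may help to gauge how much data that index has to cover. A plain T-SQL sketch (the table name is taken from the failing statement above):

```sql
-- Row count of the table targeted by the failing CREATE INDEX.
-- NOLOCK avoids blocking, at the cost of a possibly approximate count.
select count_big(*) as row_count
from workflow_reporting_log with (nolock);
```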


Replies

replied on May 15, 2014

The timeout period is likely too short for indexing that much data. I would contact Support to see if it can be increased. You might also try to find out what all the data is (likely workflow history) and see if there is a way to clean it up. In the Workflow Admin Console I can't find any option to remove workflow history from the database, only log files.
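A quick way to see which tables are actually holding the data is a generic T-SQL size query (not Laserfiche-specific; table names vary by Workflow version):

```sql
-- Largest tables in the current database by reserved size.
select
    t.name as table_name,
    sum(ps.row_count) as rows,
    sum(ps.reserved_page_count) * 8 / 1024 as reserved_mb
from sys.dm_db_partition_stats ps
join sys.tables t on ps.object_id = t.object_id
group by t.name
order by reserved_mb desc;
```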

replied on May 15, 2014

Yes, when I saw the name of the table for the index I figured it was all that history; we had a workflow that went looping unnoticed a little while back. Digging around Laserfiche Answers, I found the procedures to clean up those logs. The first one works directly against the table where the index creation was failing; at 19 million rows it ran for a little while and I'm now down to about 300k rows. The second one has been running for 90 minutes now. I will try the upgrade process again when the cleanup has completed.

 

exec cleanup_reporting_log @cutoff_date = '2014-02-01 00:00:00'
exec cleanup_tracking_log @cutoff_date = '2014-02-01 00:00:00'
exec cleanup_message_table

 

replied on May 16, 2014

This is interesting; I have been concerned about Workflow database history too. I can't find an option anywhere in the configuration settings to set a maximum age for history stored in the database. We could use Agent to call the stored procedures on a regular basis.
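A scheduled job could use a rolling cutoff instead of a fixed date. A minimal sketch, using the cleanup procedures shown above; the 90-day retention window is an assumption, not a documented Workflow setting:

```sql
-- Hypothetical scheduled cleanup: keep the last 90 days of history.
-- EXEC parameters must be constants or variables, so compute the
-- rolling cutoff into a variable first.
declare @cutoff datetime = dateadd(day, -90, getdate());

exec cleanup_reporting_log @cutoff_date = @cutoff;
exec cleanup_tracking_log @cutoff_date = @cutoff;
exec cleanup_message_table;
```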

replied on May 19, 2014

Please open a case with Tech Support. What version are you upgrading from? The size of the database is unusual; do you have many running instances?

replied on May 20, 2014

After running the stored procedures the database is much lighter and the upgrade went through. I had to set the simple recovery model and use the procedure with the _chunk suffix to keep the transaction log from filling the drives.


exec cleanup_reporting_log @cutoff_date = '2014-02-01 00:00:00'
exec cleanup_tracking_log_chunk @cutoff_date = '2014-02-01 00:00:00', @chunk_size = 10000
exec cleanup_message_table
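The recovery-model switch described above can be sketched as follows; the database name is an assumption, and switching back to FULL requires a fresh full backup to restart the log chain:

```sql
-- SIMPLE recovery keeps the transaction log from growing
-- during the long-running chunked cleanup.
alter database WorkflowDatabase set recovery simple;

exec cleanup_tracking_log_chunk @cutoff_date = '2014-02-01 00:00:00', @chunk_size = 10000;

-- Restore the original recovery model, then take a full backup
-- so log backups can resume.
alter database WorkflowDatabase set recovery full;
```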
