Question

Solutions for the mutating problem


Userlevel 4
Badge +9

Hi all,

I constantly run into the mutating table error when creating events. Typically I need to:

  • Save (F12)
  • Run some PL/SQL in a custom change event (mutating if the same table is involved)

The only solution I know of is to start a background job. However, in some scenarios I need the code to run “online”, or to lock the UI until the job has finished.

 

Does anyone have a solution for this?

 


15 replies

Userlevel 3
Badge +5

Hi @Hans Andersen 

 

You can trigger events without hitting the mutating table error by using an AUTONOMOUS_TRANSACTION block. The tricky part is that you need to handle commits and rollbacks manually, as the block is not affected by the triggering transaction.

 

A sample would look something like this:

 

DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   -- Some logic
   COMMIT; -- or ROLLBACK; this must be handled manually within the autonomous block
END;

 

Userlevel 6

It really depends on the area, too. In some cases you can trigger your code on a different table, like a history or journal table, whereas sometimes you have to use PRAGMA AUTONOMOUS_TRANSACTION as Sajith mentioned. Another avenue to explore is replication jobs, as they also use triggers.

Userlevel 4
Badge +9

Thanks @Sajith D,

I have been hesitant to use PRAGMA AUTONOMOUS_TRANSACTION, because I am not sure in which order the transactions will execute (missing Oracle skills, I guess). For example:

The Estimate module has a “Calculate” function:

Let's say I change the version Qty from 12 to 13. Instead of RMB “Calculate”, I would like to “calculate” after I hit F12.

To do that, I create an event on the table and execute the “Calculate” procedure with PRAGMA AUTONOMOUS_TRANSACTION:

The question is: can I be 100% sure that “Calculate” will fire on qty=13 and not on qty=12?

Userlevel 3
Badge +4

Does this work for what Hans wants to achieve, though? I tried this once before to manipulate data on the same table that triggers the event, and for me the issues were:

  • On modify we would run into a deadlock, as the modify still has the record locked
  • On new we wouldn't find the dataset, as it is not committed yet

I haven't looked deeper into it since. What's the trick to get this to work?

Userlevel 3
Badge +5

@Hans Andersen & @Novacura_jst In this specific case I don't think an autonomous block would work, to be honest. Most probably Hans will run into the same locking issue, or the estimate will run with Qty 12, defeating the purpose. It all depends on how the Calculate option works internally. Worth giving it a go to see what happens; I am certainly curious to see the outcome.

 

cheers.

Userlevel 4
Badge +9

Why not move this event into a background job? From there you will be able to see what happens if you build in some logic to fill the text rows at the bottom.

Steve

Userlevel 4
Badge +9

@eqbstal

The problem with background jobs is that you cannot control when they will be executed. And there is no callback.

Userlevel 5
Badge +9

@eqbstal

The problem with background jobs is that you cannot control when they will be executed. And there is no callback.

 

With a little tweak, you can create a callback for the background job and halt the transaction until the background job has finished.

Below is what I came up with. Try it and see if it works for you :sunglasses:

declare
   id_     NUMBER;
   run_    BOOLEAN := TRUE;
   status_ VARCHAR2(32000);
begin
   Transaction_SYS.Deferred_Call(id_, <OTHER PARAMS>);
   COMMIT;
   WHILE run_ LOOP
      status_ := deferred_job_api.get_state_db(id_);
      IF (status_ = 'Created' OR status_ = 'Posted' OR status_ = 'Executing') THEN
         DBMS_LOCK.sleep(1); -- sleep so we do not overload the system
      END IF;
      IF (status_ = 'Ready') THEN
         run_ := FALSE;
      END IF;
      IF (status_ = 'Error' OR status_ = 'Incomplete' OR status_ = 'Warning') THEN
         ERROR_SYS.Appl_General('LuName', 'BGERROR: something wrong with the job.');
      END IF;
   END LOOP;
end;

 

/Damith

Userlevel 3
Badge +4

Nice idea. Can you do a commit in an event, though? Not sure if I remember it correctly, but wasn't there some restriction?

If this is implemented, you should probably run the background job in a dedicated queue to make sure it is processed as fast as possible. It will slow down a simple modify either way. I will try this the next time a requirement like this comes up. Thanks Damith

/Johannes.

Userlevel 5
Badge +9

Nice idea. Can you do a commit in an event, though? Not sure if I remember it correctly, but wasn't there some restriction?

If this is implemented, you should probably run the background job in a dedicated queue to make sure it is processed as fast as possible. It will slow down a simple modify either way. I will try this the next time a requirement like this comes up. Thanks Damith

/Johannes.

 

Thanks for the comment @Novacura_jst. I haven't tried it, but I also agree it's not possible to commit inside the event action. As an alternative, you can put the call that posts the background job in a dynamic statement. I've modified the code stub with the change.

declare
   id_     NUMBER;
   run_    BOOLEAN := TRUE;
   status_ VARCHAR2(32000);
   stmt_   VARCHAR2(2000);
begin
   stmt_ := 'BEGIN
                Transaction_SYS.Deferred_Call(:id_, :method_, :type_, :parameters_, :description_);
                COMMIT;
             END;';

   EXECUTE IMMEDIATE stmt_
      USING OUT id_,
            IN  <METHOD>,
            IN  'PARAMETER',
            IN  <PARAMETER ATTR>,
            IN  'Background job description';

   WHILE run_ LOOP
      status_ := deferred_job_api.get_state_db(id_);
      IF (status_ = 'Created' OR status_ = 'Posted' OR status_ = 'Executing') THEN
         DBMS_LOCK.sleep(1); -- sleep so we do not overload the system
      END IF;
      IF (status_ = 'Ready') THEN
         run_ := FALSE;
      END IF;
      IF (status_ = 'Error' OR status_ = 'Incomplete' OR status_ = 'Warning') THEN
         ERROR_SYS.Appl_General('LuName', 'BGERROR: something wrong with the job.');
      END IF;
   END LOOP;
end;

 

Userlevel 4
Badge +9

Just tried this. Not working. It is not possible to have the commit anywhere inside the event action.

Userlevel 5
Badge +9

Just tried this. Not working. It is not possible to have the commit anywhere inside the event action.

Sorry for the noob mistake. You should put the background job creation logic inside an inner function with PRAGMA AUTONOMOUS_TRANSACTION :sunglasses:

Version #3 of the example. Hopefully it works this time.

-- Created on 6/29/2020 by DSJ
declare
   -- Local variables here
   id_     NUMBER;
   run_    BOOLEAN := TRUE;
   status_ VARCHAR2(32000);

   -- Inner function with the pragma, used to create the background job
   function create_bg RETURN NUMBER IS
      PRAGMA AUTONOMOUS_TRANSACTION;
      idx_ NUMBER;
   begin
      Transaction_SYS.Deferred_Call(idx_,
                                    <METHOD>,
                                    'PARAMETER',
                                    <PARAMETER ATTR>,
                                    'Background job description');
      COMMIT;
      return idx_;
   end create_bg;
begin
   id_ := create_bg();
   WHILE run_ LOOP
      status_ := deferred_job_api.get_state_db(id_);
      DBMS_OUTPUT.put_line(id_ || '-' || status_);
      IF (status_ = 'Created' OR status_ = 'Posted' OR status_ = 'Executing') THEN
         DBMS_LOCK.sleep(1); -- sleep so we do not overload the system
      END IF;
      IF (status_ = 'Ready') THEN
         run_ := FALSE;
      END IF;
      IF (status_ = 'Error' OR status_ = 'Incomplete' OR status_ = 'Warning') THEN
         ERROR_SYS.Appl_General('LuName',
                                'BGERROR: something wrong with the job.');
      END IF;
   END LOOP;
end;

 

 

Userlevel 4
Badge +9

Yes, this makes the code run, with the background job always committing before the event action. It also halts the UI.

For scenarios where we want to do something before saving, this might work. I have been running it with queue=4 (touch app push); it executes almost instantly.

When the background job has to run after the event action (like the example above), I think another approach is needed.

As always @dsj, excellent.

 

  

Userlevel 5
Badge +9

Yes, this makes the code run, with the background job always committing before the event action. It also halts the UI.

For scenarios where we want to do something before saving, this might work. I have been running it with queue=4 (touch app push); it executes almost instantly.

When the background job has to run after the event action (like the example above), I think another approach is needed.

As always @dsj, excellent.

 

  

Thanks @Hans Andersen. Unfortunately, I think this is as far as we can get with events, since the event action logic triggers during the main transaction, and a background job created via an autonomous transaction won't “see” the data change.

One way to overcome this is to look for a mutually exclusive transaction created from your data change (e.g. adding a history record, or the creation of a new revision) and add the event to that, so you can be sure the data has already changed. In the Estimate window I see Log and History tabs. Maybe you can play around a little to find a good solution :wink:
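To make the history-table idea concrete, here is a rough sketch of what such an event action could look like. Note that everything in angle brackets, the `_API` names, and `Get_Qty`/`Calculate` are placeholders I made up for illustration, not real IFS identifiers; the actual table, logical unit, and methods depend on the module and IFS version:

```sql
-- Hypothetical event action attached to a history/log table rather than the
-- table being modified. Because the history row is inserted as part of the
-- same transaction that changed the parent row, the parent's new values are
-- already visible here, avoiding the "calculates on the old Qty" problem.
DECLARE
   qty_ NUMBER;
BEGIN
   -- Read the already-changed parent record through its public API
   -- (<PARENT_LU> is a placeholder for the actual logical unit)
   qty_ := <PARENT_LU>_API.Get_Qty('&NEW:PARENT_KEY');

   -- Run the calculation against the new value
   <PARENT_LU>_API.Calculate('&NEW:PARENT_KEY');
END;
```

The event itself would still be defined on the history table's insert, so it fires only once the parent change is part of the committed transaction flow.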

Good Luck!

Userlevel 2
Badge +4

I have used autonomous blocks in various instances and know that they only work in specific scenarios; to cut down the element of unpredictability, you should use them as a last resort.

 

In this example, I would say your best, safest, and possibly only bet is a background job. As the calculation logic needs to run after the version value change has been committed, it suits a background job better than an autonomous block.

 

You would just need to define a very fast queue that gets polled, say, every 1-2 seconds, and add this job and nothing else to it. That way your calculation could be completed within seconds. There are many windows in IFS where this pattern is used; for example, cost set copying in the Costing window triggers calculations to the background and the user must wait a few seconds and refresh the screen to see the changes.
