I constantly have to deal with the mutating table error when creating events. Usually I need to:
Save (F12)
Run some PL/SQL in a custom change event (mutating if the same table is involved)
The only solution I know is to start a background job. However, for some scenarios I need the code to run “online”, or to lock the UI until the job has executed.
Does anyone have a solution for this?
Hi @Hans Andersen
You can trigger events without hitting mutating-table errors by using an AUTONOMOUS_TRANSACTION block. The tricky part is that you need to handle commits and rollbacks manually, as the block is not affected by the triggering transaction.
A sample would be something like the one below.
DECLARE
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   -- Some logic
   COMMIT; -- or ROLLBACK; this needs to be handled manually within the autonomous block
END;
It really also depends on the area. In some cases you can trigger your code on a different table like a history or journal table, whereas sometimes you have to use PRAGMA AUTONOMOUS_TRANSACTION as Sajith mentioned. Another way you can explore is replication jobs as they also use triggers.
Thanks @Sajith D,
I have been hesitant to use PRAGMA AUTONOMOUS_TRANSACTION because I am not sure in which order the transactions will execute (missing Oracle skills, I guess). For example:
The Estimate module has a “Calculate” function:
Let’s say that I change the version Qty from 12 to 13. Instead of RMB “Calculate”, I would like to “calculate” after I hit “F12”.
For that I make an event on the table and execute the “calculate” procedure with PRAGMA AUTONOMOUS_TRANSACTION:
The question is: can I be 100% sure that “calculate” will fire on qty=13 and not on qty=12?
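For what it’s worth, the visibility question can be illustrated with a small standalone sketch (hypothetical table `demo_tab` with columns `id` and `qty`; not the actual Estimate tables). An autonomous transaction runs as an independent transaction, so it only sees committed data; an uncommitted change made by the main transaction is invisible to it:

```sql
-- Hypothetical illustration; demo_tab stands in for the real table.
DECLARE
   PROCEDURE read_qty IS
      PRAGMA AUTONOMOUS_TRANSACTION;
      qty_ NUMBER;
   BEGIN
      -- A plain SELECT does not block on the main transaction's row lock;
      -- it reads the last COMMITTED value.
      SELECT qty INTO qty_ FROM demo_tab WHERE id = 1;
      DBMS_OUTPUT.put_line('Autonomous block sees qty = ' || qty_);
      COMMIT; -- the autonomous block must end its own transaction
   END;
BEGIN
   UPDATE demo_tab SET qty = 13 WHERE id = 1; -- not yet committed
   read_qty; -- sees the old committed value (12), not the pending 13
   ROLLBACK;
END;
```

If this sketch is representative, the answer would be no: a “calculate” run inside an autonomous block before F12 commits would still see qty=12.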
Does this work with what Hans wants to achieve though? I tried this once before to manipulate data on the same table that triggers it and for me the issue(s) were:
On modify we would run into a deadlock as the modify still has the record locked
On new we wouldn’t find the dataset, as it’s not committed yet.
I haven’t looked deeper into it since. What’s the trick to get this to work?
@Hans Andersen & @Novacura_jst In this specific case I don’t think an autonomous block would work, to be honest. Most probably Hans will run into the same locking issue, or the estimate will run with QTY 12, defeating the purpose. It all depends on how the Calculate option works internally. Worth giving it a go to see what happens; I certainly am curious to see the outcome.
cheers.
Why not move this event into a background job? From there you will be able to see what happens if you build in some logic to fill the text rows at the bottom.
Steve
@eqbstal
The problem with background jobs is that you cannot control when they will be executed. And there is no callback.
With a little tweak, you can create a callback for the background job and halt the transaction until the background job is finished.
Below is what I came up with. Try it and see if it works for you.
declare
   id_     NUMBER;
   run_    BOOLEAN := TRUE;
   status_ VARCHAR2(32000);
begin
   Transaction_SYS.Deferred_Call(id_, <OTHER PARAMS>);
   COMMIT;
   WHILE run_ LOOP
      status_ := deferred_job_api.get_state_db(id_);
      IF (status_ = 'Created' OR status_ = 'Posted' OR status_ = 'Executing') THEN
         DBMS_LOCK.sleep(1); -- add sleep so we do not overload the system
      END IF;
      IF (status_ = 'Ready') THEN
         run_ := FALSE;
      END IF;
      IF (status_ = 'Error' OR status_ = 'Incomplete' OR status_ = 'Warning') THEN
         ERROR_SYS.Appl_General('LuName', 'BGERROR: something wrong with the job.');
      END IF;
   END LOOP;
end;
/Damith
Nice idea. Can you do a commit in an event, though? I’m not sure I remember correctly, but wasn’t there some restriction?
If this is implemented, you should probably run the background job in a dedicated queue to make sure it is processed as fast as possible. It will slow down a simple modify either way. I will try this the next time a requirement like this comes up. Thanks Damith
/Johannes.
Thanks for the comment @Novacura_jst. I haven’t tried it, but I also agree it’s not possible to commit inside the event action. As an alternative, you can put the call that posts the background job in a dynamic statement. I’ve modified the code stub with the change.
EXECUTE IMMEDIATE stmt_
   USING OUT id_, IN <METHOD>, IN 'PARAMETER', IN <PARAMETER ATTR>, IN 'Background job description';
WHILE run_ LOOP
   status_ := deferred_job_api.get_state_db(id_);
   IF (status_ = 'Created' OR status_ = 'Posted' OR status_ = 'Executing') THEN
      DBMS_LOCK.sleep(1); -- add sleep so we do not overload the system
   END IF;
   IF (status_ = 'Ready') THEN
      run_ := FALSE;
   END IF;
   IF (status_ = 'Error' OR status_ = 'Incomplete' OR status_ = 'Warning') THEN
      ERROR_SYS.Appl_General('LuName', 'BGERROR: something wrong with the job.');
   END IF;
END LOOP;
end;
Just tried this. Not working. It is not possible to have the commit anywhere inside the event action.
Sorry for the noob mistake. You should put the background-job creation logic inside an inner function with AUTONOMOUS_TRANSACTION.
Version #3 of the example. Hope it works this time.
-- Created on 6/29/2020 by DSJ
declare
   -- Local variables here
   id_     NUMBER;
   run_    BOOLEAN := TRUE;
   status_ VARCHAR2(32000);

   -- Inner function with pragma to create the background job
   function create_bg RETURN NUMBER IS
      PRAGMA AUTONOMOUS_TRANSACTION;
      idx_ NUMBER;
   begin
      Transaction_SYS.Deferred_Call(idx_, <METHOD>, 'PARAMETER', <PARAMETER ATTR>, 'Background job description');
      COMMIT;
      return idx_;
   end create_bg;
begin
   id_ := create_bg();
   WHILE run_ LOOP
      status_ := deferred_job_api.get_state_db(id_);
      DBMS_OUTPUT.put_line(id_ || '-' || status_);
      IF (status_ = 'Created' OR status_ = 'Posted' OR status_ = 'Executing') THEN
         DBMS_LOCK.sleep(1); -- add sleep so we do not overload the system
      END IF;
      IF (status_ = 'Ready') THEN
         run_ := FALSE;
      END IF;
      IF (status_ = 'Error' OR status_ = 'Incomplete' OR status_ = 'Warning') THEN
         ERROR_SYS.Appl_General('LuName', 'BGERROR: something wrong with the job.');
      END IF;
   END LOOP;
end;
Yes, this makes the code run, with the background job always committing before the event action. It also halts the UI.
For the scenarios where we want to do something before saving, this might work. I have been running with queue=4 (touch app push). It executes nearly instantly.
When the background job has to run after the event action (like the example above), I think another approach is needed.
As always @dsj, excellent.
Thanks @Hans Andersen. Unfortunately, I think this is as far as we can go with events, since the event action logic triggers during the main transaction, and a background job created via an autonomous transaction won’t ‘see’ the data change.
One way to overcome this is to look for a mutually exclusive transaction created by your data change (e.g. adding a history record, or the creation of a new revision) and attach the event to that, so that you can ensure the data has already changed. In the Estimate window I see Log and History tabs; maybe you can play around a little to find a good solution.
Good Luck!
I have used autonomous blocks in various instances and know that they only work in specific scenarios; to cut down the element of unpredictability, you should use them as a last resort.
In this example, I would say your best, safest, and possibly only bet is to use a background job. As the calculation logic needs to run after the version value change is committed, it suits a background job more than an autonomous block.
You would just need to define a very fast queue that gets polled, say, every 1-2 seconds, and add this job and nothing else to it. That way, your calculation could be completed within seconds. There are many windows in IFS, for example cost set copying in the Costing window, where calculations are sent to the background and the user must wait a number of seconds and refresh the screen to see the changes.
Have you considered using a Compound Trigger?
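For context on that suggestion, a compound trigger sidesteps the mutating-table error by buffering the affected rows in the AFTER EACH ROW section and only touching the table again in the AFTER STATEMENT section, when it is no longer mutating. A generic sketch with hypothetical names (`demo_tab`, `qty`), not IFS-delivered code:

```sql
-- Hypothetical illustration of the compound-trigger pattern.
CREATE OR REPLACE TRIGGER demo_tab_recalc
   FOR UPDATE OF qty ON demo_tab
   COMPOUND TRIGGER

   TYPE id_tab_t IS TABLE OF demo_tab.id%TYPE;
   changed_ids_ id_tab_t := id_tab_t();

   AFTER EACH ROW IS
   BEGIN
      -- Only remember which rows changed; do not query demo_tab here.
      changed_ids_.EXTEND;
      changed_ids_(changed_ids_.COUNT) := :NEW.id;
   END AFTER EACH ROW;

   AFTER STATEMENT IS
   BEGIN
      -- The table is no longer mutating, so reading/updating it is safe.
      FOR i IN 1 .. changed_ids_.COUNT LOOP
         NULL; -- e.g. call the recalculation for changed_ids_(i)
      END LOOP;
   END AFTER STATEMENT;
END;
```

Note this still runs inside the main transaction, and hand-written triggers may not be an option when events are created through the IFS tooling.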
@s-oflaherty ,
Thank you for the input, but we create the trigger through IEE and have limited options there.
Are you sure that is possible (allowed)?
One way to solve the mutating problem is to make a small customization to add the event in the code instead of the custom event. Then you do not use triggers and get rid of the problem.
It is such a small customization and just like custom events you can define new Event Actions with the code you want to have there.
Adding your own triggers outside of the IFS framework is not supported by IFS. (Imagine the difficulty of trying to trace errors coming from a trigger we do not know exists.)
One way to solve the mutating problem is to make a small customization to add the event in the code instead of the custom event. Then you do not use triggers and get rid of the problem.
Thank you for the info. Customizations are something we try hard to avoid.
At least we now know it can be done.
@Sajith D: Do you see any issues with using PRAGMA AUTONOMOUS_TRANSACTION in a method that gets executed by an event on a table like Customer_Order_tab?