
Hello,

I have an idea for creating a custom note on an LU and populating it with history log data.

  1. BUSINESS NEED: We want to log changes to certain fields on Purchase Orders and have that information assembled in a note attached to the Purchase Order. We do not want to use the Purchase Order note for this; we want a special field (so we will add it as a Custom Attribute of the PurchaseOrder LU).
  2. ELEGANT PATH: Use History Log to store changes to the Purchase Order for just the fields we want. Then add a Custom Event to the PurchaseOrder LU, “after” save, linked to a PL/SQL Action that grabs History Log data related to a recent time-frame, the current purchase order, and the current user. Append new History Log information to the custom note we added in Step 1.
  3. CHALLENGE: Timing and avoiding redundant data. Does IFS write data to the History Log, synchronously and completely, before an “after” Custom Event fires for the LU? How can we determine what changes just occurred and avoid re-writing the whole history for the purchase order over and over again?
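To make the idea concrete, the step-2 action could be sketched roughly like this. This is a sketch only -- the &NEW:ORDER_NO substitution, the one-minute look-back window, and the username filter are assumptions about how the event exposes its data, and nothing here has been tested against a live event:

```sql
-- Sketch of the step-2 event action (assumptions: the event exposes the
-- order number as &NEW:ORDER_NO, and a one-minute look-back window is
-- wide enough to catch the rows the save just wrote).
DECLARE
   new_text_ VARCHAR2(32767);
BEGIN
   SELECT LISTAGG(TO_CHAR(hl.time_stamp, 'YYYY-MM-DD HH24:MI:SS')
                  || ' ' || hla.column_name
                  || ': ' || hla.old_value || ' -> ' || hla.new_value,
                  CHR(13) || CHR(10))
            WITHIN GROUP (ORDER BY hl.time_stamp)
     INTO new_text_
     FROM history_log_tab hl
     JOIN history_log_attribute_tab hla
       ON hla.log_id = hl.log_id
    WHERE hl.lu_name = 'PurchaseOrder'
      AND hl.keys = 'ORDER_NO=' || '&NEW:ORDER_NO' || '^'
      AND hl.username = Fnd_Session_API.Get_Fnd_User   -- assumption
      AND hl.time_stamp >= SYSDATE - 1/1440;
   -- Append new_text_ to the custom note here (the generated custom-field
   -- API name depends on the installation, so it is omitted).
END;
```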

I understand that we could “roll our own” logging by using the OLD and NEW field values in the Event and Action. But if the History Log can do all that work for us it will make the PL/SQL much easier to write. I’m just not sure we can rely on the timing to work out so that History Log data is available immediately after the save (and before the Event fires and goes looking for the data).
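For comparison, the roll-your-own variant would build the line from the event's own values. The &OLD:/&NEW: substitutions are the usual IFS event mechanism, but the column name here is just a placeholder, and the snippet is an untested sketch:

```sql
-- Roll-your-own sketch: build the log line directly from the event's
-- OLD/NEW substitution values (RECEIPT_DATE is a placeholder column).
DECLARE
   line_ VARCHAR2(4000);
BEGIN
   IF '&OLD:RECEIPT_DATE' != '&NEW:RECEIPT_DATE' THEN
      line_ := TO_CHAR(SYSDATE, 'YYYY-MM-DD HH24:MI:SS')
               || ' RECEIPT_DATE from ' || '&OLD:RECEIPT_DATE'
               || ' to ' || '&NEW:RECEIPT_DATE';
      -- Append line_ to the custom note, as in the History Log version.
   END IF;
END;
```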

I also realize we could create strings to append to the custom log field where we could look for text already existing in the note. That way we would avoid re-appending data that was already in the note (e.g. from a change made previously). So I am not as worried about repeating data as I am about knowing the new data is going to consistently be there.
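The "look for text already existing" check is simple enough in PL/SQL -- a self-contained sketch with made-up sample lines:

```sql
-- Append a history line only if that exact text is not already present.
DECLARE
   note_ VARCHAR2(4000) := '2024-01-02 BUYER from SMITH to JONES';
   line_ VARCHAR2(4000) := '2024-01-03 CURRENCY from USD to EUR';
BEGIN
   IF note_ IS NULL OR INSTR(note_, line_) = 0 THEN
      note_ := note_ || CHR(13) || CHR(10) || line_;
   END IF;
END;
```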

Doing it this way also means we do not need to retain History Log data for very long. If we persist it to a note linked directly to the Purchase Order it doesn’t matter if that log information gets deleted later.

Has anyone developed a persistent logging/audit trail based on this approach? Can anyone confirm the timing and synchronicity of History Log writes? I can certainly start mocking up some tests, but if someone knows for a fact that History Log writes are asynchronous, for example, I really don’t need to bother because this idea could never work if that is the case.

 

Thanks,

Joe Kaufman

 

Sorry, and I promise I’ll only do this once -- bump…

Has no one tried this route of logging before? I assumed I’d see a bunch of replies. As an update, I am not sure a custom field will work because logged data could be larger than 4000 characters (would be nice if CLOB custom fields were supported). So, we may end up having to keep everything in the log anyway. I still would like to understand the timing of when history data gets saved and whether it can be depended on in a Custom Event.

 

Thanks!

Joe Kaufman


If you’re willing to keep the data resident on the Purchase Order custom field, what benefit are you getting by not retaining the history logging longer on the PurchaseOrder LU? Those bytes need to fit somewhere in your database, and they’re all in the IFSAPP_DATA tablespace, so it’s not like you’re gaining much by keeping them in PURCHASE_ORDER_CFT while purging them from HISTORY_LOG_TAB/HISTORY_LOG_ATTRIBUTE_TAB.

For a given set of triggers of the same type, Oracle does not guarantee the order in which they fire. Oracle Database 11g added the FOLLOWS/PRECEDES trigger-ordering clause, but IFS Events do not implement it, as far as I know.
https://docs.oracle.com/en/database/oracle/oracle-database/21/lnpls/CREATE-TRIGGER-statement.html#:~:text=information%20crossedition%20triggers-,trigger_ordering_clause,-FOLLOWS%20%7C%20PRECEDES

If you get lucky and find they work well together today, that might change tomorrow.

 

If you’re worried about the history log size, I’ve found that the cleanup job that purges old history benefits greatly from an index. I benchmarked this index and found it only slowed down our history writes by about 1%. It has also helped us with certain searches.

CREATE INDEX c_history_log_tabdate_ix
ON history_log_tab(table_name, time_stamp)
TABLESPACE ifsapp_index;

It sounds like you want to look inside a date window for changes made to one particular instance of the LU. There’s already an index on LU_NAME and KEYS, so performance won’t be a problem to do this lookup dynamically.

I recommend you implement this in a read-only fashion directly from the history, either as a multiline custom field or as an information card.

 

Try this as an information card:

Implementation type: Select

Arguments: v.order_no

Select Statement for the last 30 days’ worth of changes:

 

-- Formatted for readability:
SELECT LISTAGG(
          TO_CHAR(sub_.time_stamp, 'YYYY-MM-DD HH24:MI:SS')
          || ' by '   || sub_.username
          || ': '     || sub_.column_name
          || ' from ' || sub_.old_value
          || ' to '   || sub_.new_value,
          CHR(13) || CHR(10))
       WITHIN GROUP (ORDER BY sub_.time_stamp DESC,
                              sub_.column_name ASC) AS history_string
  FROM (SELECT hl.time_stamp,
               hl.username,
               hla.column_name,
               hla.old_value,
               hla.new_value
          FROM history_log_tab hl
          JOIN history_log_attribute_tab hla
            ON hla.log_id = hl.log_id
         WHERE hl.lu_name = 'PurchaseOrder'
           AND hl.keys = 'ORDER_NO=' || :order_no || '^'
           AND hl.time_stamp >= SYSDATE - 30) sub_

 

You’ll want to minify that to one line for the application.

 

Text, unformatted, multiline

(You may need to truncate this query to limit the size.)
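If truncation is a concern and you are on Oracle 12.2 or later, LISTAGG's overflow clause can cap the result instead of raising ORA-01489. Only the aggregate expression in the query above changes -- a sketch:

```sql
-- ON OVERFLOW TRUNCATE (Oracle 12.2+) caps the aggregated string at the
-- VARCHAR2 limit instead of raising ORA-01489.
LISTAGG(TO_CHAR(sub_.time_stamp, 'YYYY-MM-DD HH24:MI:SS')
        || ' by '   || sub_.username
        || ': '     || sub_.column_name
        || ' from ' || sub_.old_value
        || ' to '   || sub_.new_value,
        CHR(13) || CHR(10)
        ON OVERFLOW TRUNCATE '...' WITH COUNT)
   WITHIN GROUP (ORDER BY sub_.time_stamp DESC,
                          sub_.column_name ASC) AS history_string
```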


@durette,

Fantastic information, and I will read up on it!

Not sure why I care about trading off log space for note space -- it is true we can just leave it in the log. I came up with a query like yours (though not using LISTAGG() to get it all in one text field) and it should work well for us. The key is really easy for PurchaseOrders as well -- one of the simplest in IFS! I see you manually build the key instead of using something like Client_Sys.Get_Key_Reference_From_Objkey() -- we would probably go that route as well. I am still learning all the different ways IFS serializes data.

I am not sure we will try to present this as any sort of field at all, since we will run into the 4000-character limit (I assume that applies to Info Cards as well). Still, that is a nifty approach to presenting it that may be appreciated by the users.

Again, thanks for taking the time -- I appreciate your thoughtful posts here.

 

Thanks,

Joe Kaufman