
We have a few consultant-designed interfaces at present, which work well, but I would like to start creating simple ones myself.

For this example, I need to create a CSV output on a scheduled basis, with the output file having data placed in columns specific to my needs.

Is there a guide on the easiest way to achieve this?

Possibly in the future I may also use an input file, but for the moment I think I will be using external file loads and running them manually.

Thanks

Hi Bruce,

This is pretty straightforward using the data migration tool.

Create a job of type CREATE_OUTPUT_FILE.

Identify the source view, and then mark the fields you want to output. Then run the job to get the CSV file. Note that you can run this online, or you can schedule it to run on a recurring basis and write the file to a server location. If you do that, you can have it use the current datetime in the filename. (Setup screenshots omitted.)

If you want to schedule it, you can do things like this: (screenshot omitted)


Thanks for the really helpful input!

Wondering, is this possible when the output needs to contain data from different views?

Also, can MIG jobs be used like this for input files too?


Hi Bruce,

The migration tool is very flexible - you can create custom views and IALs and use those as the source. Even easier: you can also write a select statement consisting of a table join directly into the Source Name field - put it in parentheses, like:

(select * from ifsapp.project … )
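For a fuller illustration, a hypothetical join between two views (the view and column names below are assumed for the example, not taken from this thread):

(SELECT p.project_id,
        p.description,      -- hypothetical columns, for illustration only
        a.activity_no
   FROM ifsapp.project p
   JOIN ifsapp.activity a ON a.project_id = p.project_id)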

I'm not totally clear what you're asking about the input, but the answer is yes - you can use the output files as input elsewhere, e.g. into another environment.


Thanks! That's really helpful.

For input files, I mean: can I take a file and import it into IFS in a similar fashion, to create a 2-way interface?

I'm running through a MIG job now. Wondering, can I specify header names in the IFS MIG output?


Yes - to import easily you can use procedure = INSERT_OR_UPDATE. To make it harder for yourself, you can use CREATE_TABLE_FROM_FILE and then MIGRATE_SOURCE_DATA - but that route lets you populate multiple LUs at once. Check the IFS help files for more info.



With some effort, yes.

You need to set the names you want as column aliases in a sub-select in the Source Name field.
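For example (the view and alias names here are assumed, for illustration):

(SELECT part_no     AS "Part Number",   -- the alias becomes the CSV column heading
        description AS "Description"
   FROM ifsapp.inventory_part)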

 

Then manually create the file mappings for those columns (screenshot omitted).

Make sure this mapping is set Active so that you get a row of column headings (screenshot omitted).

 

Result: a CSV file with your chosen names in the heading row (screenshot omitted).

 

Alternatively, you could create a custom view (an IAL) and use that as the source for your output. Then you can define the column names in the IAL.
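In SQL terms the IAL ends up as a plain view, something like the sketch below (IALs are normally maintained through IFS's IAL administration rather than raw DDL, and these names are assumed):

CREATE OR REPLACE VIEW export_parts_ial AS  -- hypothetical IAL/view name
SELECT part_no     AS part_number,
       description AS part_description
  FROM ifsapp.inventory_part;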



Hi @paul harland

I've been browsing around and found your reply here, which relates to an issue I have at the moment.

I'm trying to update some columns in the view INVENTORY_PART_PLANNING, and was recommended this procedure. I'm new to IFS and don't know how to set up the File Mapping correctly.

I would appreciate it if you could share some guidance on the INSERT_OR_UPDATE procedure, and on this issue specifically.

Best,

Hazel

 


Most people are going to use Excel to consume the output of a migration job, and we found this combination has MANY shortcomings. We ended up writing our own tool from scratch to do this instead.

Most of our work is exporting custom SQL queries, which we then wrap in a custom view (like an IAL). Whenever business requirements change, you have to update your query AND the column setup in the migration job. It would be better if the job could pick up the columns dynamically from the query.

If the file write fails, like on a remote network share, it's unrecoverable - you can't go back in time to get the data as it existed at that moment. (It's important that we get backlog snapshots at midnight so we know on which day a particular booking occurred.)

The output mechanism uses UTL_FILE and writes one row at a time. This is okay if the destination is on the same server, but over a network with real-world latency a larger write buffer would be much faster.
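To illustrate the difference, a minimal PL/SQL sketch of a buffered write (the directory object, filename, and query are all assumed; it flushes roughly every 16K instead of once per row):

DECLARE
  l_file   UTL_FILE.FILE_TYPE;
  l_buffer VARCHAR2(32767);
BEGIN
  -- 'EXPORT_DIR' is a hypothetical Oracle directory object
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'output.csv', 'w', 32767);
  FOR r IN (SELECT part_no || ',' || description AS line
              FROM ifsapp.inventory_part) LOOP
    -- flush once the buffer would pass ~16K, instead of writing every row
    IF NVL(LENGTH(l_buffer), 0) + LENGTH(r.line) + 1 > 16384 THEN
      UTL_FILE.PUT(l_file, l_buffer);
      UTL_FILE.FFLUSH(l_file);
      l_buffer := NULL;
    END IF;
    l_buffer := l_buffer || r.line || CHR(10);
  END LOOP;
  IF l_buffer IS NOT NULL THEN
    UTL_FILE.PUT(l_file, l_buffer);  -- write whatever is left
  END IF;
  UTL_FILE.FCLOSE(l_file);
END;
/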

The date and time handling is configurable, but formats are interpreted regionally: no single format works on every machine.

Excel barfs on multi-byte characters because the migration job doesn't write a byte order mark.
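If you're writing the file yourself, the fix is small - open the file in byte mode and emit a UTF-8 BOM before the data (directory and filename assumed):

DECLARE
  l_file UTL_FILE.FILE_TYPE;
BEGIN
  l_file := UTL_FILE.FOPEN('EXPORT_DIR', 'output.csv', 'wb');  -- hypothetical directory object
  UTL_FILE.PUT_RAW(l_file, HEXTORAW('EFBBBF'));  -- UTF-8 byte order mark so Excel detects the encoding
  -- ... then write the rows themselves as UTF-8 encoded RAW data ...
  UTL_FILE.FCLOSE(l_file);
END;
/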

Dynamic filenames have limited configuration options.

Strings that look like dates get corrupted by Excel, such as part numbers containing dashes.

Strings that look like numbers get corrupted by Excel, such as ZIP codes with leading zeroes.

If a field contains a Windows-style CRLF inside the data, like inside a notes field, the line gets broken. (Oddly, a Unix-style LF works fine.)
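If you control the query, the usual workarounds for those last three points can go straight into the SQL. A sketch (table and column names assumed; the ="..." trick forces Excel to treat a value as text):

SELECT '="' || part_no  || '"'       AS part_no,   -- keeps dashes from becoming dates
       '="' || zip_code || '"'       AS zip_code,  -- keeps leading zeroes
       REPLACE(note_text, CHR(13))   AS note_text  -- strip CRs so embedded CRLF becomes plain LF
  FROM some_source_view;  -- hypothetical source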

 

Our solution uses XML. The data is first stored in pieces in a custom LU, and then the output is written to disk in 16K buffers.

