Digitalization CollABorative March 2024 Session

  • 27 March 2024

IFS Digitalization CollABorative: Tech Talk – R&D with Andrew Lichey, VP of Product Management for AI Strategy at IFS

Date of Meeting: 19 March 2024 10:00 AM US Eastern Time 

 

Andrew's Presentation:

Slide: IFS CollABorative Data Platform

  • Part of the reason we do these sessions is to provide periodic updates on what's changing and what's new within the IFS product suite. One of the things that's new: I've changed roles. When I last talked to you in October, I was the VP of Product Management for the platform group, the group looking after the infrastructure that IFS Cloud runs on top of. I've since moved over to the AI strategy side, so I'm now VP of Product Management for AI Strategy. Hence, a lot of what you're going to get today is about the things we're doing on the AI front. It's hard to overstate just how important AI is and how it's revolutionizing everything we're doing within the product. We're looking at embedding industrial AI throughout the entire product, both to improve engagement, improving the user experience with copilot-type interactions, and to improve the outcomes of different processes: improve the purchasing process, improve all these different processes throughout the application, through the use of AI. One of the key parts of getting good outcomes from AI or business intelligence is the quality of the data you have going in, and that's really what I'm going to focus on today: our new data platform, which aims to improve the overall quality of the data involved in decision making, both the data inherent in our system and data that you may have in other systems.

Slide: Evolving Data

  • The world of data has changed a lot in the last ten years. Ten years ago, when you looked at the standard data estates of many of our customers, it was pretty simple. They would have their ERP database, and all the data from their ERP system sat within that database. They may have had some unstructured data captured by field engineers: technical readings, pictures, videos and the like. But the overall quantity was very much in favor of the structured data that sat within the database. Over the last ten years, and at an accelerating pace, the complexity of the data estates that our customers have has grown substantially. Data is being stored in more places than ever before, and we are dealing more and more with types of data other than structured data. Structured data is the clearly defined data in a schema that sits within the database. Now we're getting more and more data outside of there: more images, pictures, data files, things like that. We're at the point now where the unstructured data is many times larger than the structured data we're dealing with. And that, again, is crucial for us to be able to pull into our AI and BI solutions.
  • The second thing that's changing a lot is the quantity of data we're capturing. Ten years ago, the amount of data we captured was largely limited by the number of people you had sitting at keyboards. We could only capture as much data as people could type in, for the most part. Now we're capturing more and more data automatically through systems, through industrial IoT solutions and things like that, and the overall volumes of data are many, many factors greater than they were ten years ago. And that's only accelerating as more and more industrial IoT solutions get rolled out.
  • The third thing we're seeing change is the quality of data. Because we have data coming in from so many different sources, there's less control over how that data gets entered. If you're using an ERP system, the ERP system governs how you input that data: it sets default values, it enforces requirements, and so on. But as we get more and more unstructured data, and more and more data from different systems, what we're seeing is that the quality of that data is highly variable. And again, the value of the outcomes you get from AI and BI is directly related to the quality of the data you put in.
  • The fourth area is the rise of the data citizens. Every company I talk to is now looking at how they can support and implement both BI and AI solutions internally. To do that, a lot of companies are looking at onboarding data scientists, data engineers, different people in data-centric roles: people whose responsibility is not running the application but looking at the data the applications have captured and trying to understand how they can derive business value from it. The challenge is that for many organizations these groups of data citizens have grown up organically within the company. Different groups have hired them, and as a result they often have a very diverse set of tools that they use to do their day-to-day jobs. At the end of the day, most data citizens are going through the same types of activities, be it someone writing a Power BI report who wants to visualize their organization's performance based upon the data, or a data scientist looking at understanding the data better to train a model against it. They're doing the same types of activities, but oftentimes with very different sets of tools and approaches.
  • The last thing, and this is very much an evolving area, is the data regulations that continue to grow. The EU just launched its new set of regulations around AI this week. We're all familiar with GDPR and other rules that are not only regulations but regulations with strong incentives to comply: the penalties companies are seeing for non-compliance with things like GDPR and the EU's new AI rules are significant. They're business risks, the kind of things that, if you get them wrong, will substantially impact your business or maybe put you out of business entirely. So being able to support those regulations, with everything we talked about above on the quantity of data, the quality of data and the dispersal of data, becomes a bigger and bigger challenge. Equally, there are security risks. We've all seen over the years the many data breaches that have happened and the severe impact they can have on companies. The more data you have, the more places you store it, and the more people you give access to it across all those different data stores, the more you are at risk of being exposed to some sort of attack.

Slide: Solution

  • So, if those are some of the things going on with data, what are we looking to do about it? Right now, we are working on a project to deliver what we're calling a data platform. The data platform, at its root, serves two main purposes. Number one, it becomes a single source of data truth for an organization: one place to go where you have access to all the data you need in order to do any sort of AI or BI activity. Secondly, it offers a unified experience for all your different data citizens, so there's one place to go, one set of tools that they use, and one way of supporting them in the work they're doing on a day-to-day basis.
  • So that's what a data platform is. What are the parts of it? First, we look at ingestion. We talked about data existing in a lot of different data sources. We want to be able to ingest that data from those different sources and put it into one common platform. Now, why do we want that single source of truth, that one platform for people to go to? One big reason is, again, risk. If you have data stored in five different data stores, you have five different sets of authentications and five different sets of access that you need to manage for those users. Whereas if you give them one place to go, you have a single point that you can better control. The second reason is to raise awareness of the data you have. The more data sources you have, and the more data sources you ask users to go into to find their data, the more likely it is that there's critical business data they won't even know exists. So, bringing it all together in one place makes it easy for them to find the data, discover the data, and then perform the subsequent activities we're going to talk about.
  • The second part is transformation and enrichment of data. One of the points from the previous slide was that the quality of data is all over the place, driven heavily by the balance shifting between structured and unstructured data. Unstructured data is typically captured in ways where you don't really have good control over its quality, but even with structured data you might be pulling from other systems, from legacy systems and the like. You need to be able to transform, enrich and cleanse that data and get it into a format you can use, both to get accurate results and to get efficient results from your AI and BI activities. You're not going to want to send a very complex schema into a report, because it will take that report a very long time to render. The same goes for a machine learning algorithm. You want to transform and enrich that data into a format that is as efficient as possible for those activities to consume.
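To make the transform-and-enrich idea concrete, here is a minimal, hypothetical sketch in plain Python of the kind of cleansing rule described: normalizing identifiers, filling defaults and coercing types on raw records pulled from different sources. The field names and rules are illustrative only, not part of any IFS product.

```python
def cleanse(record):
    """Normalize a raw record into a consistent, business-ready shape."""
    cleaned = dict(record)
    # Trim whitespace and normalize casing on identifiers.
    cleaned["vendor_id"] = cleaned.get("vendor_id", "").strip().upper()
    # Fill a sensible default where the source system left a gap.
    cleaned.setdefault("currency", "USD")
    # Coerce numeric text into a real number; flag bad values instead of failing.
    try:
        cleaned["amount"] = float(cleaned.get("amount", 0))
        cleaned["valid"] = True
    except (TypeError, ValueError):
        cleaned["amount"] = None
        cleaned["valid"] = False
    return cleaned

raw = [
    {"vendor_id": " acme01 ", "amount": "1250.50"},
    {"vendor_id": "BOLT7", "amount": "n/a", "currency": "EUR"},
]
cleaned = [cleanse(r) for r in raw]
```

In the platform described above, rules like these would be configured through IFS Cloud rather than hand-coded, but the principle is the same: downstream AI and BI consumers see one consistent shape regardless of the source.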
  • The third area is governance and lineage. We talked about all the risks, and it's critical that we control who has access to what data; having it all in one place makes that much easier for administrators. Second is lineage. While we're creating a single source of truth for people to use, it's still important for them to understand where that data came from and to be able to trace it to its source. One of the things we often talk about with industrial AI is the concept of explainable AI. When your organization is challenged over an outcome that was AI-driven, with accusations of bias or things like that, you need to be able to prove there was no bias. Part of that is showing what the algorithms were and how they worked, and part of it is showing the state of the data throughout the process: what its original state was, what you cleansed it into, and how it was processed. That's one example of lineage.
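One way to picture the lineage idea is that every derived record keeps a pointer back to the raw record and the rule that produced it, so an AI outcome can be traced to its original state. This is a toy sketch under assumed names, not an IFS schema:

```python
def transform_with_lineage(raw_rows, rule_name, fn):
    """Apply a transformation while recording where each output came from."""
    out = []
    for i, row in enumerate(raw_rows):
        out.append({
            "data": fn(row),                                   # transformed value
            "lineage": {"source_index": i,                     # position in source
                        "rule": rule_name,                     # which rule ran
                        "raw": row},                           # original state
        })
    return out

raw = [{"temp_f": 212.0}]
derived = transform_with_lineage(
    raw, "fahrenheit_to_celsius",
    lambda r: {"temp_c": (r["temp_f"] - 32) * 5 / 9},
)
```

With metadata like this retained at every layer, "what did the data look like before cleansing?" becomes a lookup rather than an investigation.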
  • We talked about tooling: this is about giving that common set of tools to all your data citizens. And then optimization, which I touched on when we were talking about transformation and enrichment: this is about getting the data into a state where it can be optimally consumed by AI and BI, and when I say that, I assume everyone knows we're talking artificial intelligence and business intelligence.

Slide: Principles

  • So, what are the principles of the data platform we're looking at? Number one, we're looking at providing a managed solution for a data platform that will include hosting the BI infrastructure as well as supporting data governance. What we're looking to deliver is a wholly managed solution for our customers as part of our cloud offering. We are looking at supporting this for remote customers as well, but we would be managing that solution for you, so that you don't have to set up all that expensive BI infrastructure and all the different data platform components yourself.
  • You can look at this as an expansion of what we started with our analytics as a service offering. Analytics as a service is something we're launching in 24R1. That was a more limited scope than what we're talking about here: it was just about hosting, offering a managed solution for BI. Now we're expanding beyond that to say it's not only BI, but also everything you need for data transformation, data governance and AI. But it is an expansion of that core infrastructure. And again, we are offering that in 24R1 for a limited group of Cloud customers, and in 24R2 it is our intent to offer it to remote customers and make it generally available.
  • The third thing is that we want to offer that unified platform for all your different data citizens. So, whether it's a BI engineer, a data scientist, or just a person in your organization who may not have an official data citizen title but is a business expert, the person you go to for Power BI reports because they've trained themselves to be an expert on it: that's who this data platform is for. It's for people to go in and analyze the data, look at the data, and decide how they can make inferences and decisions from it, how they can build or tune their ML algorithms based upon the data you're capturing, how they can create that Power BI report that shows how your business unit is functioning, things like that.
  • As for the user experience of this data platform, it's our intent that this will all be driven through IFS Cloud. There's not a different platform you're going to need to go to, not a different set of tools you'll have to get familiar with, or a different user experience you'll have to learn. This will all be delivered through IFS Cloud itself. That includes things like machine learning notebooks, like Jupyter notebooks, where you can go in and use Python to analyze the data, and it includes the ETL configuration for when we transform, enrich and cleanse data. All that configuration will be done through IFS Cloud.
  • So what are some of the advantages of this data platform for you? First, we're looking at this as a fully managed service. There's nothing you need to host yourselves, so there's no need for big capital investments in infrastructure to support it. It offers that common data platform, and it's all delivered through the web UI of IFS Cloud.

Slide: Users

  • So who is this for? I talked a little about this before, so I'm going to go through this one fairly quickly, but there are really four groups of people I see as the users of the data platform.
  • First, we have the data engineers. These are the people who will be setting up the pipelines to your other data stores. They're going to look at what different data stores you have in your environments and how we can connect those and pull that data into the data platform. They're going to be the ones defining the data transformation rules, things like that.
  • Second, the data scientists. These are the people who are looking at the data, studying it, and understanding what sort of automation they could build from it using industrial AI. That could range from people focused on expanding our copilot experience to people looking at ML algorithms for very specific use cases. For example, we will ship a model out of the box that looks at the likelihood of a vendor making payments based upon past results. A data scientist can go in and look at that, and you may have added some additional configuration to the system to capture extra data that would also be useful in that decision making. So, you might enrich the ML model we ship out of the box by adding that additional field. That's something the data scientists would likely do.
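The enrichment scenario above, taking an out-of-the-box feature set and adding a customer-specific field before retraining, can be sketched in a few lines of plain Python. Everything here (field names, the base feature list) is hypothetical and for illustration; it is not an IFS API.

```python
# Features the hypothetical out-of-the-box payment-likelihood model trains on.
BASE_FEATURES = ["days_past_due_avg", "invoices_paid_ratio"]

def build_training_rows(history, extra_features=()):
    """Project payment history into feature vectors for model training."""
    features = BASE_FEATURES + list(extra_features)
    return [[row[f] for f in features] for row in history]

history = [
    {"days_past_due_avg": 3.2, "invoices_paid_ratio": 0.98, "dispute_count": 0},
    {"days_past_due_avg": 41.0, "invoices_paid_ratio": 0.61, "dispute_count": 7},
]

# Out-of-the-box shape vs. the enriched shape with a custom field added.
base_rows = build_training_rows(history)
enriched_rows = build_training_rows(history, extra_features=["dispute_count"])
```

The data scientist's job in this picture is deciding that `dispute_count` (captured via extra configuration) carries signal, widening the training rows, and retraining; the platform's job is making that extra field discoverable and clean in the first place.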
  • We talked about BI engineers, and then there's this last group, the citizen developers. For many organizations, especially over the next five years, I think this is going to be your predominant source of people engaging with the platform. Right now, we all know it's hard and expensive to find data engineers and data scientists. But you have a lot of people in your organizations today who really understand the system, the way you're using it, and the data. They may have some expertise in these areas, and you can leverage them to do things like Power BI reporting or, with some assistance from us, teach them how to optimize the ML algorithms in the system.

Slide: Conceptual Architecture

  • So, what does this look like as a conceptual architecture? This is known as a medallion architecture. It was popularized by Databricks, which, if you're not familiar with it, was an early and leading entry into the data platform space. The idea is that data exists in one of three states. It exists in its raw state, the same version that existed in the external data store; in a cleansed and enriched state, transformed into something usable for the business; and in a third state, optimized for a very specific outcome that you want, such as an ML algorithm or a BI report. If we start from the left, we see the different types of data we want to ingest into the data platform. We have our structured data, which sits within the IFS Cloud Oracle database and within other databases in your environments. We have our unstructured data stores. This includes things like the IFS file storage service or, if you're storing attachments locally, your SMB file share. It also includes data archives, where you've archived data off into a low-cost service like a data lake. So, any area where you have unstructured data stores. And the third is time series and streaming data. Near term, this is the IoT data you're ingesting into the system; longer term, it could be time series data you're capturing from other systems as well. Each of these represents a first-class citizen for data that we want to be able to ingest into the architecture and use.
  • I just want to go back and say something else about the unstructured data stores. This doesn't just include business data. It might also include FAQs, training manuals, product manuals, things like that, which you want to leverage and expose through something like a copilot experience to the engineers in your organization. Say you've built up and acquired knowledge over the last 15 years of maintaining a product and put it in an FAQ: being able to ingest that FAQ and put it through this process means we can leverage it in a copilot experience and offer it back to a user.
  • So then, moving to the right: we have the data in all these different data stores, and we bring it into the bronze layer. A couple of things I want to touch on here. First, a reminder that the bronze layer is literally a copy of the data that exists in those external data stores. We haven't put it through any transformation or enrichment yet. That's important because it's what allows us to have lineage; in data science, because we all like big, fancy terms, this is what they call time traveling. At any point throughout this process, we can go back and look at the state of the data that existed previously, the data we made decisions on, so it's important that we capture that essential raw data extracted from these systems.
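The "time travel" property can be illustrated with a toy bronze table that keeps every ingested snapshot, so earlier states stay queryable. Real lakehouse table formats (Delta Lake, for example) do this far more efficiently with versioned metadata; this plain-Python sketch only shows the idea.

```python
import copy

class BronzeTable:
    """Toy bronze layer: raw snapshots are appended, never overwritten."""

    def __init__(self):
        self._versions = []  # each entry is a full snapshot of the source data

    def ingest(self, rows):
        """Append a new raw snapshot exactly as extracted; return its version."""
        self._versions.append(copy.deepcopy(rows))
        return len(self._versions) - 1

    def as_of(self, version):
        """Read the data exactly as it looked at an earlier version."""
        return self._versions[version]

bronze = BronzeTable()
v0 = bronze.ingest([{"asset": "PUMP-1", "status": "OK"}])
v1 = bronze.ingest([{"asset": "PUMP-1", "status": "FAULT"}])
```

Because nothing is transformed at this layer, `as_of(v0)` always answers "what did the source say when we made that decision?", which is the foundation for the explainability discussion earlier.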
  • The bronze layer is also unique in that it captures entire views of the data and stores them historically. It's not just a one-time view of the data; it has data captured at multiple points in time, so you can go back and see how the data has changed in the external systems we imported it from. Each of these layers uses what's called a data lakehouse. If you're not familiar with that term: data lakes have existed for a while. A data lake is simply an unstructured data store that makes it easy to drop files and other things in, at very low cost for the storage. It has a lot of limitations; it is essentially just blob storage. You drop stuff in, you pull stuff out, and that's about all you can do with it, but it's extremely low cost. Over roughly the last decade we've also seen the rise of data warehouses. Data warehouses are similar to data lakes in that they are data stores, but different in that they support schemas and ACID-type transactions, giving you the ability to ensure that your interactions with the data are transactional. A data lakehouse is the best of both: it includes both storage formats within it. In some cases, when we're dealing with unstructured data, we'll put it in the data lake portion of the lakehouse, whereas if we're dealing with structured data that we're going to be updating, we'll put it in the data warehouse portion. Now, in between each of these layers we have an ETL process, an extract, transform and load process, where we pull the data from the previous layer; transform, enrich and cleanse it using a set of rules that we define out of the box at IFS and that you may tailor for your specific requirements; and then put it into the subsequent layer.
For this ETL layer we're looking to use Spark, a common open-source framework for this, part of the Apache project, and again the configuration of those rules is all done within IFS Cloud. So, as we move through these layers, as we grab data from the bronze layer and get ready to put it in the silver layer, that ETL process looks at transforming the data from its raw structure into a structure that makes sense for your business. That may mean joining data from multiple sources together, or flattening out data structures; it depends on the use case.
  • Then, between the silver and gold layers, remember, we're transforming the data from a schema that supports common usage across your whole application into very specific use cases. Data in the gold layer is going to directly match a BI report or an ML algorithm, those kinds of things, so it is in an optimal format that allows them to run as efficiently as possible. That's the journey this supports: moving from all the different data sources you have on the left to an outcome for a BI or AI algorithm.
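The bronze-to-silver-to-gold journey described above can be sketched end to end in a few lines. This is a minimal illustration in plain Python (in the product the ETL steps would be Spark jobs configured through IFS Cloud); the source names and fields are invented for the example.

```python
# Bronze: raw rows exactly as extracted from two hypothetical source systems.
# Note the inconsistencies (mixed-case IDs, numbers stored as text).
bronze = [
    {"src": "erp", "asset": "pump-1", "hours": "120"},
    {"src": "iot", "asset": "PUMP-1", "hours": "35"},
    {"src": "erp", "asset": "FAN-2", "hours": "80"},
]

# Silver: cleanse and standardize (first ETL step) - normalize IDs, cast types,
# drop source bookkeeping the business view does not need.
silver = [
    {"asset": row["asset"].upper(), "hours": int(row["hours"])}
    for row in bronze
]

# Gold: aggregate into the exact shape one specific report consumes
# (second ETL step) - total运行 hours per asset, ready to render directly.
gold = {}
for row in silver:
    gold[row["asset"]] = gold.get(row["asset"], 0) + row["hours"]
```

The point of the gold layer is visible even at this scale: the report reads a pre-shaped aggregate instead of re-joining and re-cleansing raw rows on every render.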

Slide: Summary

  • In summary: we are planning to offer a data platform, and again, this is intended to be a single source of data truth across your entire estate, as well as offering a unified user experience for all your different data citizens. This data platform would be used for structured, unstructured and streaming data, including all your industrial IoT data, which could be everything from data files captured from a machine, telemetry, alerts and events, to things like video that you capture. We're looking at basing this on a medallion-style architecture: the architecture where the data exists in one of three states, the raw state, the business state and the optimized state. This is an expansion of something we're already doing in 24R1, our analytics as a service offering. And perhaps this last one will be the most surprising: we are being very aggressive here. We're rolling out a lot of new industrial AI solutions as part of IFS Cloud, so we need this platform in place, and we want you to have this platform in place, so that you can optimize your use of the data as soon as possible. We're targeting 24R2, which is our release tentatively planned for October this year. So this is coming, and it's coming fast. It's not something we're just talking about that's years away; it's something we're going to be delivering this year.

Slide: Want to get involved?

  • So, like I said, we're moving fast on this, but we'd really love to get you more involved in this process and hear from you: get your input on requirements, understand how this would impact you directly, get your feedback. I'd love to talk individually to each and every one of you who cares about this, and I would encourage you to reach out to me. We can do it on this call directly, or if you want to think about it, I've included my contact information here and you can reach out afterwards. The advantages of getting involved at this stage are that you get some influence, you get your voice heard in terms of what we're building so that it's exactly what you need, and you will get regular updates on where we're at throughout the process. And then, potentially, we can look at early access. We haven't really worked out exactly how we're going to do that yet, but for the people who want to get involved, we're eager to work with you, and if that means we can get you early access to the system, all the better. You can contact me at andrew.lichey@ifs.com or +1 262-290-7787.

 

Questions / Answers / Feedback / Responses:

  • Q: One of the questions that I had, Andrew: seeing this is very exciting. We have fallen into the world of multiple systems as the company has grown beyond being able to scale with just FSM as its primary operating system. We have now added NetSuite for finances, we are working with Smartsheet for some project management, and we are starting to add more systems left and right. Each of these has its own BI platform: NetSuite has its NSAW platform, which is incredibly tedious to work with; as far as BI platforms go, it's not great. So what you're doing with the data platform within IFS excites me, and I would ask: is there any thought being given to making sure that your legacy applications, such as FSM, PSO and the others, can also benefit from it? My mind goes toward the migration, right? For example, once our company is ready to migrate to the IFS Cloud platform, there would be significant benefit to having a data platform already in place to tie all of our data sources together, so that we can take a systematic approach to our migration path, which will be phase-driven and very complicated for us.
  • A: Like many complicated questions, this one has a slightly complicated answer. We are very much looking at how we can use this for more than just IFS Cloud. The investments we're making around AI, we're looking at making available in all of the products that we have. Now, having said that, our products exist in different states. We have products like PSO that are a core part of IFS Cloud as well as being able to run standalone; that's something we see as a differentiator and will be investing in for a long time. Conversely, we are nearing the end of life on FSM. Given my background, that's a hard thing for me to say, but we are reaching an end of life there, so it's unlikely that we'll do a lot in that space with the data platform. What we will do is have it as an integral part of the service offering we have in IFS Cloud, so when you're ready to make that transition, it'll be there for you. So, long answer short: it really depends on the product, but for products that we are fully investing in and that have a long-term future, it is fully our intent to have this be available to them as well as IFS Cloud.
  • Q: Would I be able to take in different ERP data connectors? I missed it if you mentioned that earlier.
  • A: That is the idea: this is not just a data platform for your IFS data; eventually it will also be a data platform that allows you to ingest data from other systems. Regardless of what master system you have in your overall architecture, there's going to be data in different systems that is crucial to your maintenance planning, your service planning and things like that, which we want to be able to make available.

 

  • Q: What about archiving of all the legacy systems? Is this a candidate for that as well? We are focusing on IFS rollouts, but then we get rid of AX 2009, AX 2012, SAP, etc. Is the intention that this could also hold the old data from the old systems? We would like to have access to it in case of audits, because we are not transferring the transactions to the new system; we start clean. But we would like to put the old systems into some kind of archive so we can turn off the old computers and, for example, the licenses.
  • A: If you want to use that data, if you want to report against it or use it to better understand your business or to train or optimize models, then this would be a natural spot to land that data, just like any other data source. We do have a data archiving investment that's focused on how we archive data from IFS Cloud, but generally, if we go back to the conceptual architecture slide, what I would see is that the archive data from these different systems would exist in one of those unstructured data stores. As you retire a system and want to archive its data, you're probably archiving it into a large data file or something like that, which you would drop in a data lake as low-cost storage. That could then be just another input into this process, if you want to use it for reporting or business intelligence or AI. If you don't want to use it, it would just sit in those unstructured data stores. It's not our goal as part of this to build a platform that lets you archive data from other systems. I see you archiving it using whatever tools are appropriate, and then you may import that data here, but you wouldn't use this to archive data from those systems.
  • F: This is actually exactly what we're doing. We are putting the whole databases of all the systems into data lakes. Then we are building a data warehouse semantic layer on top of that, just to access certain data for looking up prices, part statistics and things like that. So that is probably what you're talking about: having analysis on some part of the data.
  • R: Yeah, exactly. You could import that archive data from that data lake into here, and then use the tools that we're offering, such as the data-science-centric tools: loading it in through a Jupyter notebook and then using Python libraries to search through that data to find the inferences you want. We're also looking at how we can offer visual solutions for that data and make it easier to work with. But yeah, that sounds very similar.

 

  • Q: Following on from the previous question about archiving and putting data into data lakes: we have high security requirements, and we have a very tough challenge in executing on this data lake request, because we place a great many permission sets to restrict what users can see and use, down to part level, on every level possible in the system. How do you combine that with a data lake? To shorten it: the user shouldn't see more data in the data lake than they are allowed to see in the IFS system.
  • A: Good statement. We do not have it secured like you have, not at all. What we have started with is that we are not giving users access to the data in the data lake directly, like a query; we are creating a semantic layer on top of the data lake and extracting data in formats. We ran a survey: what data do you need to access if we close down the system? We have the auditors, and they have seven years for viewing transactions down to the source, but for the rest of us we don't go down to part numbers. In the data warehouse we apply the company and site level, so that is the level at which you can access the data. But it's a good question; maybe we need to be more careful. We have checked, though, and some say it's such old data, and our business is changing so rapidly. So for this old IFS from 2002, where we stopped the last company in November last year, we think that company and site level is OK for the limited data we provide to the users. And we're building access rights in the data warehouse via AD groups connected to it. That's our solution. The data will be accessed via Excel or specific SSRS reports. We are not investing in doing a nice Power BI application, not yet at least, not to start with, but it would be interesting, like you said, Andrew, to connect that data to the new data and let some machine learning and AI make some good analysis for the future. But then customers have changed IDs, part numbers have changed IDs, the business model has changed, so it's difficult as well. We have lots more dimensions to follow up on, etcetera. So it's not the same thing to compare.
  • R: I can't go into detail because we're still figuring out exactly how we want it to work, but data segregation is an important part of the data platform. It's important for copilot-type experiences: how do you make sure that a copilot doesn't offer up information that should be restricted? For example, say your HR team uploads payroll data that's used by your copilot. How do we make sure that doesn't get exposed to everyone across the company? It also matters from a machine learning perspective, especially for larger companies that might have multiple divisions or work with multiple subcontractors but don't want one subcontractor to get information from another. Data segregation is very important. There are a couple of different ways you can segregate data within IFS Cloud today, and that's an investment we're making as well, to further enrich our data segregation capabilities. But that will be a first-class citizen in the data platform; it's not something that you're going to need to build on top of it.
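One common way to keep a copilot from surfacing restricted data, consistent with the concern raised above, is to filter documents by the requesting user's entitlements before anything reaches the language model. The sketch below is purely illustrative; the tags, roles, and function names are assumptions, not the actual IFS design, which was not disclosed in the session.

```python
# Illustrative retrieval-time access control for a copilot:
# every document carries an access tag, and retrieval keeps only
# documents the user is entitled to see. All names are hypothetical.
DOCUMENTS = [
    {"id": 1, "tag": "public", "text": "Maintenance manual, pump X200"},
    {"id": 2, "tag": "hr_payroll", "text": "March payroll summary"},
    {"id": 3, "tag": "subcontractor_a", "text": "Sub A work orders"},
]

USER_ENTITLEMENTS = {
    "field_tech": {"public"},
    "hr_admin": {"public", "hr_payroll"},
    "sub_a_user": {"public", "subcontractor_a"},
}

def retrieve_for_copilot(user):
    """Only documents the user is entitled to can become LLM context."""
    tags = USER_ENTITLEMENTS.get(user, set())
    return [d for d in DOCUMENTS if d["tag"] in tags]

# A field technician's copilot context never includes payroll data:
print([d["id"] for d in retrieve_for_copilot("field_tech")])  # prints [1]
```

The key design point is that the filter runs before retrieval results are assembled into the model's context, so restricted content is never available for the copilot to leak.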
  • Q: Do we have any timeline planned for that? Because in my business, as you said, we get demands from customers that they must control what access there is to their data in our system. And today they don't have that.
  • Q: Like a GDPR requirement, like personally identifiable information?
  • A: No, actually. They have customer secrets in their solution that they don't want to share with other customers, and people assigned to one customer's issues shouldn't be able to view other customers' data. We must be able to restrict that somehow, some way. Today we are using separate installations to get that accepted by the customer. It's a real demand that we have in our business, so I have respect for the fact that you guys don't have these security demands. We can't even share how many times we use a certain part, because by counting, you could get a view of some customers' ability to use some products, and so on.
  • R: If I'm understanding correctly, there are a lot of restrictions around your being able to access and use any customer-related data.
  • F: Yes, and it highly affects our ability to use data platform tools like AI, because it comes down to the individual. For instance, if you were born in the wrong country, you shouldn't be able to see some parts of the installation.

 

Feedback:

  • Q: Are there others here doing similar things? Do we have people pulling data from multiple different data stores using some sort of data platform today?
  • A: We do. We use NSAW, and we have a couple of other storage locations, data lakes for project data from Smartsheet and whatnot, but it's not as extravagant. Most of our work is done from the IFS databases.
  • A: We do as well. We have an in-house data warehouse where we bring all our systems together and then report on them for our BI.
  • Q: Where do you put the IFS cubes? And is SQL Server connected to this infrastructure?
  • A: Yeah, I'm not actually sure how much of the IFS data we are housing; I'd have to talk to our engineer on that one. But it's small bits at best.
  • Q: How about machine learning in general? How many of you have active AI projects going on using machine learning that you're feeding data into?
  • A: We're about to. One of the things that we're about to engage in is an AI partner to identify market trends. They have a very exciting data mining tool that basically lets them mine the Internet for trends, and we're trying to figure out what our retailers are up to so that we can get in front of it, as well as identifying trends within our own data set. So we're going to have an AI system analyze all of the services we offer against our invoices from NetSuite and identify which services we are delivering that we are forgetting to get our customers to pay for. It's actually a big problem, and it sounds silly, but we're not charging for everything we do.

 

Next Meeting: 16 April 2024 10:00 AM US Eastern Time
IFS Digitalization CollABorative: Think Tank – Meet the Member with Lance Schultz of KLN Family Brands

If you are an IFS Customer and you do not have the next meeting invitation to this CollABorative and would like to join, please click here to fill out the form

