
IFS Combined CollABorative: Tech Talk with Bob De Caux, Chief AI Officer

Date of Meeting: 19 November 2024 10:00 AM US Eastern Time

 

Bob De Caux Presentation:

Slide: AI

  • Let's start with just a very quick overview of AI, though I hardly think that's necessary these days. Really, it's about the opportunity that exists here, particularly with the rise of Gen AI that we've seen over the past couple of years. With Gen AI we're really talking about large language models such as ChatGPT that allow us to give a whole new experience to users who are interacting, effectively, with a console that can answer questions using natural human language and potentially go and search for knowledge and bring that back in a very intelligent way.

Slide: AI is nothing new to IFS

  • AI is nothing new to IFS; we've been doing it for quite some time, as a lot of you will know, particularly in our PSO solution, which has been using AI algorithms under the hood. And there is this distinction between traditional or predictive AI, as the market is calling it, and generative AI, which is based more around large language models. We'll talk a little bit about that later.
  • But effectively we've been evolving over the past few years, developing the technology backbone that allowed us to deliver our IFS.ai platform at the end of 2023, and we're starting to see it in the product now in our 24R1 and 24R2 releases for IFS Cloud.

Slide: Our AI Vision – IFS.ai is Industrial AI. We are…

  • Our AI strategy: some of you will have seen and heard about this before. What does AI mean to IFS? It's all about industrial AI.
  • I've always been a strong believer that AI needs to be embedded deep within a product, and when it's surfaced to users, it has to be done in a way that is easy for them to use and very domain specific.
  • And so, we are not starting with the technology and working out how AI can help; we're starting with our traditional industries and our industry solutions, thinking about the problems that you face in those industries, and then thinking about how technology and AI can help us to solve them. So, it's always use case first when we're thinking about AI. It is embedded in the products, and that has a number of stages. It means we've got to automate a lot of the data science work, which traditionally, over the last few years, has required a lot of heavy lifting to create AI models and then even more to get them up and running and usable. So we've tried to create the pipelines and automate that process to be able to take your data, build and deploy good AI models, and put them into production so you can use them seamlessly in the background of the product.
  • And the third major stream for us is what we call the thread of intelligence: the fact that we are covering the whole range of a lot of our customers' business, from assets, taking feeds in from those assets in real time, making more strategic decisions, and flowing that all the way out into how we maintain and look after those assets in the field with service, as well as all the ERP components that are needed to control your businesses.
  • So, we have a really rich set of data across the whole decision life cycle that our customers have. That can all feed into the AI process, and as I'm sure you're all aware, data is really the driving force behind making AI successful.

Slide: IFS.ai

  • So how have we thought about building up the architecture? Well, as I mentioned, it all starts with data, and you can see the data services at the bottom.
  • That's typically the structured data that we would hold in our database. We obviously have huge, rich amounts of data there and capture all the relationships between all those different data sources. But there are a lot of new types of data that are available for use within AI now, particularly unstructured data: notes, manuals, all those things that before would be stored as PDFs and would be hard to use. Now, with the rise of large language models, these are available. We can turn those into readable text using OCR technology, and then we can use large language models to search through them effectively, pull knowledge back, and deliver content off the back of those manuals.
  • As well as that, we also have the real-time data. This is streaming data that we're increasingly dealing with coming off our customers' assets. It's a very high volume of data, it has a time element, so how things are changing over time, and this is very useful for driving processes such as anomaly detection and then being able to measure the performance of those assets. But you could see that equally applying across other areas, such as finance, for example: being able to monitor whether there are anomalies in transactions over time.
  • So it all starts with the data at the bottom and then on top of that we build out a set of AI services that do the work. That might be a service that does the optimisation based on scheduling, as with PSO, it might do the anomaly detection as I mentioned. It might be able to deliver solutions based around generative AI, so being able to pull back contextual knowledge, generate content within the product, be able to offer recommendations. We've built out a number of reusable services that are as easy as possible for our application teams to be able to take and use within their specific domain to drive a solution that will be relevant for telecoms or manufacturing or construction.
  • And the idea is that we deliver these data services and AI services on a shared platform. There are a number of advantages to this. We can scale that platform up and down to control the level of compute based on the demand that we're getting, but we can also put the governance and orchestration on top of those shared services, so we can make sure that we're checking the data provenance as it comes in, that we're provisioning those services correctly to our customers according to their entitlements, that we can control the third-party calls out to large language models and how we bring them back in, and that we maintain the sovereignty of the data for our customers. So being able to do the orchestration and governance on that platform is critical to the way that we are delivering our solutions. The idea is that with IFS Cloud we have our separate deployments for all of our customers, but all the AI services are delivered through a shared services platform that talks to IFS Cloud; it does the computation, it does the AI work, and it delivers that value back into IFS Cloud where it's surfaced to customers.
  • And you can see, obviously we have it across our 6 industry domains, and we've built out a number of use cases, based on the patterns that I mentioned, built on these AI services. And we'll look at what those AI services are shortly.
  • But of course, AI also allows us to drive a new experience for our customers, particularly around the concept of copilots, which is a more language-based way of interacting with the product: being able to ask questions, kick off processes, and pull back knowledge. And that can all be done through our IFS Copilot, which is now an embedded part of IFS Cloud. It's available within the product and it allows you to ask questions directly into that chat window to pull back data and perform a number of tasks. The idea of embedding it in the product is to give it context; that's where the value really comes. Otherwise you could just take data and pass it to all manner of external copilots, and obviously there are a lot of those on the market; we don't want to be giving the same answers as those. We want to be giving IFS-specific answers depending on where you're asking from within the product and what type of user or persona you are, and that's going to come back with a much more targeted answer to the questions that you are asking.
  • So that is effectively the broad architecture that we follow with AI that allows us to deliver all these different use cases and we'll look at some of the use cases that are coming shortly.

Slide: We have a complete stack for Industrial AI – IFS.ai Capabilities

  • So what are the 6 main capabilities, or those AI services, that can drive all these different use cases? Well, as I mentioned, they're really split into two; we can think of the top and the bottom here. On the bottom, we have the predictive AI capabilities. These are the ones that we've been building out for some time, particularly optimization, but also the ability to forecast and to simulate future scenarios using your operational data. These are sort of the key to unlocking the industrial value. They're very calculation intensive, they're very hard to replicate, and this is where IFS is particularly strong.
  • On the top, we have the three generative AI capabilities: contextual knowledge, recommendations, and content generation. These are all driven by the large language models and they provide a new and better experience to our customers, allowing us to pull back knowledge more effectively, to surface recommendations to users within the product, and to automatically generate content, such as being able to automatically fill in a form or produce a job description, for example.
  • So, two really very different types of AI. And the key to our success is that we can offer both of those together. We can put generative AI on top of our predictive AI capabilities, and that allows us to add an extra dimension to the optimization, because not only are we producing, hopefully, a very effective schedule in field service, for example, but we're able to offer a dispatcher a language-based experience over the top of that to ask questions of that schedule, to understand exceptions better, etcetera. So that's really what generative AI allows us to do within the stack.

Slide: Copilot timeline

  • And the copilot that I've already mentioned, we've seen the first example of that come out in 24R1, but just to give you a little bit of a timeline of how that is developing. The key to a successful copilot is all about using these multiple data sources: the structured and the unstructured data. Typically with a copilot, the language models are much better at dealing with unstructured data, because it's all text; it's not about capturing relationships, it's just reading words within documents. So, where we started out in 24R1 was offering a copilot only over IFS shared documentation, so our technical documentation and everything in our user community; you could ask questions of that from within the product. And you still can in 24R2, but what we've added is the ability to use your own customer-specific data: to ask questions of your own manuals and notes and anything that you are storing within our document management system.
  • In addition, we're also now allowing the ability to ask questions over structured data. As I mentioned, this is a little bit harder, but effectively what happens in the back end is we take the question that is asked, we parse it using the large language model, we work out what it is that you're trying to look for within the database, and we apply a number of functions to go away, pull that data back, and present it back to the user in the cleanest possible form to answer their question. So, that ability to query knowledge from the database in a more effective way is something that's in the release that's coming out next week in 24R2.
  • In 25R1, the key development is what we call formless interaction. At the moment, everything that comes back from the copilot is just presented to you in the chat window as a recommendation. It's not actually going to take action for you; it's always going to be user driven. From 25R1, with formless interaction, you will have the ability to kick off processes within the product using the chat window. In that case the back end, the agent as it's called, will actually go off and perform a number of tasks within the product after you've kicked it off in the chat window, without a human needing to intervene. Obviously there are inherent risks in doing that, and we are very careful to keep the guard rails in terms of what those agents are able to do within the product. But that's the concept of formless interaction that's coming in 25R1. A number of you will have seen a lot of the hype in the market at the moment around agents, and the stories we're seeing from other companies in this space; an agent really is something that can interpret a question and go away and perform a number of tasks from these relatively vague, language-based instructions.
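As a rough illustration of the formless-interaction idea above, the sketch below shows how an agent might map a parsed request onto a small whitelist of product actions and refuse anything outside its guard rails. The action names, the `route_request` helper and the hard-coded parsed intent are hypothetical; they are not IFS.ai APIs.

```python
# Hypothetical guard-railed agent routing: only whitelisted actions can run.
ALLOWED_ACTIONS = {
    "create_work_order": lambda args: f"Work order created for asset {args['asset_id']}",
    "query_open_orders": lambda args: f"Returning up to {args.get('limit', 10)} open orders",
}

def route_request(intent: dict) -> str:
    """Execute a parsed intent only if it names a whitelisted action."""
    action = intent.get("action")
    if action not in ALLOWED_ACTIONS:
        # Guard rail: anything the agent is not entitled to do is refused.
        return f"Action '{action}' is outside the agent's guard rails."
    return ALLOWED_ACTIONS[action](intent.get("arguments", {}))

# In practice an LLM would produce the parsed intent from the chat message;
# it is hard-coded here purely for illustration.
parsed = {"action": "create_work_order", "arguments": {"asset_id": "PUMP-7"}}
print(route_request(parsed))
```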

Slide: Service Applications 24R2 – Industrial AI for Service Management

  • So, let's talk a little bit more about some of the use cases that we're seeing in 24R2 off the back of this technology. In service management it's about developing the industry-specific copilots. So, if you are a dispatcher using IFS Cloud, or if you're a service manager, you're able to ask questions within the context of the relevant parts of the product, and it's going to come back with more detailed, contextual answers, knowing that you are the dispatcher persona and you're calling based on a certain type of data. Those copilots are available in the product from 24R2.
  • We also have the first version of what we call IFS Home, and we can think of this as the next generation of how we present back forms and widgets within the product. There are a number of widgets within the home space that allow us to present back the results from AI queries. These might be based around graphs or around explainability, but there are a number of user experiences that you can set up that are really designed to take advantage of these AI-based answers that are coming back.

Slide: Asset Applications 24R2 – IFS.ai Copilot for FMECA

  • In the asset space, probably the biggest example we've got is the copilot for FMECA, that is, failure mode, effects and criticality analysis. Again, if you are a technician using the product within the FMECA space, you'll be able to ask contextual questions and pull back relevant knowledge based on the manuals, maintenance guidance, and any uploaded documentation that is going to help you with your FMECA decision. So again, it's about providing that contextual, industry-specific experience within the product. And again, that is something that is in the 24R2 release.

Slide: Asset Applications 24R2 – Asset Performance Management

  • If we move back into the predictive AI space, there's been a lot of work on the asset performance management solution in 24R2, particularly now incorporating our P2 operational intelligence platform. We have the ability to contextualize and enrich the data coming off the assets and then use AI to automatically detect and handle anomalies. That's going to be fully embedded within IFS Cloud in 2025, but there's a customer-hosted solution that's available in the 24R2 release. And detecting and managing these anomalies can obviously be used to kick off processes further down the line that drive maintenance plans and potentially asset maintenance strategies.
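As a loose sketch of the anomaly-detection pattern described above (not the actual APM implementation), the snippet below flags sensor readings that drift several standard deviations away from a rolling baseline; the window size and threshold are arbitrary illustrative values.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=20, threshold=3.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations away from the mean of the preceding `window` readings."""
    history = deque(maxlen=window)
    for i, value in enumerate(readings):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                yield i, value
        history.append(value)

# Example: a steady vibration signal with one obvious spike at index 60.
signal = [1.0 + 0.01 * (i % 5) for i in range(100)]
signal[60] = 5.0
print(list(detect_anomalies(signal)))
```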

Slide: ERP Applications 24R2 – Manufacturing Scheduling & Optimization and Simulation

  • In the ERP space, one of the key things we've done over the last couple of releases is take our planning, scheduling and optimization solution, which has been so crucial to us in field service, and use that technology in different industries. And it's not simply a case of being able to port that over to manufacturing. There are a number of elements of manufacturing production planning that are very different to field service: much more of a focus on just-in-time, and on lots of machinery being able to do things concurrently, as opposed to field service technicians who are working in sequence. So there are a number of changes that we've had to make, but we've now created a domain-specific solution using that optimization technology that allows us to dynamically create and manage production schedules for manufacturing.
  • And we also have the first version of our simulation engine that can sit over the top of this, allowing you to ask what-if questions about your production planning process and test out what that is going to mean for the overall schedule. So again, that's something that is available in 24R2, and we're continuing to look at other use cases for that optimization technology over the next couple of releases, particularly in the asset space and in aerospace and defence for line planning.

Slide: IFS.ai Commercials

  • Alright, let's talk a little bit about the commercials. We've spoken a lot about the use cases, so how does this get paid for within the platform? Well, the idea is that this is token-based consumption. Users will buy a number of tokens that they can use across all the different IFS.ai use cases within the product. Those tokens are fungible, and it costs a certain number of tokens to perform actions within the product, such as running a production schedule or asking a question. One thing that we've needed, of course, to put this into practice is to build a token inventory and entitlement management system that allows customers to monitor and control their token usage and top up those tokens as and when required. And that also has its road map over the next few releases, allowing tokens to be automatically distributed across different use cases, with appropriate permissions and stop gaps, but that's something that we'll see the first version of in 24R2 when we switch to this token-based consumption model.
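Purely as an illustration of the token-based consumption model described above, the sketch below keeps a simple per-customer token ledger that debits a balance for each AI action and records usage for monitoring. The class, action names and token costs are invented for the example; they do not reflect actual IFS.ai pricing or entitlement APIs.

```python
class TokenLedger:
    """Toy per-customer token balance: debit per AI action, keep a usage history."""

    def __init__(self, balance: int):
        self.balance = balance
        self.usage = []  # (action, tokens) entries for monitoring

    def consume(self, action: str, tokens: int) -> bool:
        if tokens > self.balance:
            return False  # entitlement exhausted; a top-up would be needed
        self.balance -= tokens
        self.usage.append((action, tokens))
        return True

ledger = TokenLedger(balance=1_000)
print(ledger.consume("copilot_question", 5))           # True
print(ledger.consume("production_schedule_run", 200))  # True
print(ledger.balance)                                  # 795
```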

Slide: IFS.ai Commercials

  • Now, this is a new model, so there are a lot of questions, and I'd be happy to take more on this, but there are a few that we've certainly seen come up in early discussions. At the moment, because of the way we're delivering IFS.ai on the platform, it can only be taken if you are using a cloud-hosted version of IFS Cloud; that's how it authenticates into the IFS.ai platform. For remote deployments of IFS Cloud, that is something that is coming on our road map in 25R1, and at that point you would be able to connect via a hybrid model to all those AI services within the cloud. Now, the reason we can't deploy those fully on premise is mainly because of the large language models. They are particularly large and we consume those through third parties; we can't build and host our own models, they're too big, so that's why it has to form part of a cloud solution. But again, from 25R1, remote deployments will be able to connect into that shared platform through a hybrid model and consume and use those services.
  • The majority of the use cases, certainly all the ones we've discussed, are on the token-based model. There are exceptions to that where AI is just an intrinsic part of the product, such as PSO, Copperleaf and Demand Planner. And to be able to take tokens, you have to buy the IFS foundation and make sure that you have the relevant modules where the different use cases are. As we've discussed, it's really use case first, so this is not about just enabling AI; it's enabling the particular use cases that you have within your industry modules, so you have to have those modules to take advantage.
  • You can buy a starter pack of tokens which you can use across all these different use cases, and the key to making this successful is that it doesn't then require heavy extra implementation. Once you buy those tokens and you have the appropriate modules, you have the entitlement to start using the AI features, and they will automatically connect to and use the IFS.ai platform in the background without you having to switch anything on.
  • We currently can't offer these services within EURA or Government Cloud in the US, but as customers you can monitor your token usage directly through IFS Cloud, as I mentioned, through the entitlement management system.
  • So, a very new way of thinking about how we build, consume and use these AI services. Obviously it is going to develop a lot over time, but we're seeing the first example of that in this coming phase.

Slide: AI Product Governance Framework – A framework for building responsible, ethical, fair and transparent AI products

  • I want to do a little bit of looking under the hood, mainly to talk a bit about IP and governance considerations, and to understand large language models in particular and what they mean for your data.
  • So when we're building new AI products or use cases into the product, we are working and operating within this governance framework that we've built. And there are really four key parts to this.
  • There's the data: making sure that we're using your data in an appropriate way, only using your data to build AI models that you can use and that are not shared across other customers; making sure that data is anonymized where it needs to be and doesn't leave certain jurisdictions; and when it does pass out to third-party services, making it very clear how that works.
  • Building it into the product: there's a certain level of model governance around testing and performance that we have to step through as we put things into the product.
  • And then there's a large number of legal considerations around compliance, making sure that we fit in with the latest AI frameworks, which are constantly developing. There's a lot of difference between the EU framework around AI and the US ones; it really is changing all the time, so we are constantly assessing what this means for our use cases in terms of AI bias, how we use and process data, and the types of explainability we have to offer back to our customers. So again, that is a key factor in our product development process.
  • And then there's the actual process itself of developing the software: how we are checking at every stage of the journey that we are getting the necessary performance, and that we understand the limitations and concerns around using large language models, because of course they're not always going to return the same answer each time. We're moving beyond software that just lets you completely replicate a process at the click of a button; we have a certain level of uncertainty now around all these processes based on large language models, and we need to account for and test that as we build things into the product. So, this is the governance framework that we use.

Slide: IFS.ai Copilot – Q&A Dispatcher Agent example

  • So, what does it mean for your data when you're actually using the copilot? What's happening in the background? We've talked a little bit about those data sources, but I just want to talk a bit more about agents and how it works. On the left-hand side here, you can see an example of a user question, and when that question comes in, it effectively gets passed to that agent. The first thing the agent needs to do is parse and understand that question, work out what the question is about, and then it needs to know where it can go and look for the answer. That might be in the IFS Cloud documentation, it might be in all that unstructured data that you have as customers, it might be in your customer database, or it might be through one of the specific assistants that we've built. These assistants are those contextual elements within the product that allow us to give better answers; they're able to pull metadata from the page that you're on and see what your persona is. So, the agent is the key decision-making process for how we triage that question and go away and get an answer.
  • Now, the important thing when we're bringing back knowledge is that we have to understand the data that we're querying for that knowledge. What we don't want to do is take a question and just go out and ask ChatGPT some general question about your assets, and there are a number of reasons for that. One, your assets are very specific to you. ChatGPT is trained on publicly available data; it can probably give a plausible, convincing-sounding answer, but it's not going to know the nuances of your business. The nuances of your business are going to come from your data alone. More importantly, ChatGPT can also produce hallucinations when it comes back: it can interpolate answers, it can mix things up, it can add in and fill in gaps, and that's very dangerous. So, we have to make sure that when we're using large language models, we ground them only over the data that we want to search.
  • So, what we actually do when we answer a question is we first do a search. If we were going to answer a question over some unstructured data in your document management system, we would break all that data down into chunks. We take a PDF document, maybe break it down into sentences, we search for appropriate sentences, and we might come back with 10, 20, 30 possible answers. Only then do we use the large language model to distil all those 10 or 20 possibilities down into one final answer. But all the elements that it's using to come up with that answer are from your data alone. It's not looking outside, it's not looking to that publicly available knowledge it has; we're just using its language skills to come back with a nicely presented answer based on the data you have. So, all of these things are what allow us to give confidence in the answers that are coming back and minimise the hallucinations which otherwise would cause a lot of problems, especially in industry-specific solutions.
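The grounding flow described above can be pictured with a small sketch: chunk the customer's document, retrieve the best-matching chunks for the question, and only then let an LLM distil those chunks into a final answer. The naive keyword scoring and the `llm_summarise` callable are stand-ins for the real embedding search and LLM call.

```python
def chunk(text: str, size: int = 200):
    """Split a document into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question: str, chunks, top_k: int = 3):
    """Naive keyword-overlap scoring; a real system would use a vector index."""
    terms = set(question.lower().split())
    ranked = sorted(chunks, key=lambda c: len(terms & set(c.lower().split())), reverse=True)
    return ranked[:top_k]

def answer(question: str, document: str, llm_summarise) -> str:
    context = retrieve(question, chunk(document))
    # The LLM only ever sees the retrieved chunks, keeping the answer
    # grounded in the customer's own data and limiting hallucination.
    return llm_summarise(question, context)
```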

Slide: ML Recipes – Facilitating interaction with our ML Service

  • Another thing that we do to allow us to scale these use cases is to use what we call recipes. The idea here is that a lot of our developers are not data scientists; they're not going to be able to train and build data science models or understand those algorithms. What we want to do is give them a low-code way of saying: here's what I want to do, here's the data that I would like to do it with, here are the broad parameters that I need my reply to come back in. They can then pass that as a recipe to the service, and the ML service will do all the hard work. It will take the data, it will train the models, test the performance of those models, and then it will create an endpoint for those models. Once it creates that endpoint, that's the bit that you can call live when you want to make a decision. The concept with an AI model is that you train it on data and, when it's ready, you can show it new examples and it's going to come back with answers. So we've automated that whole process, the training and the deployment of those models, so that all our developers need to do, if they wanted to train a model on some business opportunity conversion data for example, as you can see here, is specify the data to do it on, and it will create that model and offer it back to them, which can then go into the product.
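The recipe idea might look something like the hedged sketch below: a developer declares the data, the target and the task, and a shared ML service handles training, evaluation and deployment, handing back an endpoint to call. The field names and the `submit_recipe` stand-in are invented for illustration and are not the actual IFS.ai recipe schema.

```python
# Hypothetical low-code recipe for a classification model.
recipe = {
    "task": "classification",
    "name": "business_opportunity_conversion",
    "training_data": "crm.business_opportunity",  # source table/view (illustrative)
    "features": ["industry", "deal_size", "sales_stage", "days_open"],
    "target": "converted",                        # label column
    "constraints": {"max_training_minutes": 30},
}

def submit_recipe(recipe: dict) -> str:
    """Stand-in for the ML service: train, evaluate, deploy, return an endpoint."""
    return f"https://ml.example/endpoints/{recipe['name']}"

endpoint = submit_recipe(recipe)
print(endpoint)  # the application would then call this endpoint for live predictions
```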

Slide: AI Recipes roadmap

  • So that concept of recipes can stretch across a whole range of the different AI services that we offer within the product. These include classification and regression models, which are typical AI models; classification is predicting which group something will fall into, for example. But also all the LLM skills that we offer back into the product can be consumed through recipes. So, for example, if we need to extract elements of text from a much larger block, we can offer that as a service to our developers, who can consume it using a recipe. If we need to summarize a block of text in a certain voice, or focusing on certain elements, again we can offer that service. So, all of those different AI services are offered up through a low-code platform to our developers, who can build them into industry-specific solutions. And that's how we're able to scale up the number of use cases we have very quickly.
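The same pattern could plausibly extend to the LLM skills mentioned above, with a recipe naming a reusable language skill rather than a model to train. The schema and the `submit_llm_recipe` stand-in below are purely illustrative.

```python
extraction_recipe = {
    "skill": "extract",                 # pull named fields out of free text
    "input": "maintenance_note_text",
    "fields": ["asset_id", "failure_symptom", "spare_parts_mentioned"],
}

summary_recipe = {
    "skill": "summarise",               # rewrite a block of text
    "input": "incident_report_text",
    "style": "formal",                  # the "certain voice" mentioned above
    "focus": ["root cause", "corrective action"],
}

def submit_llm_recipe(recipe: dict) -> str:
    """Stand-in for registering an LLM-skill recipe with the shared AI services."""
    return f"registered skill '{recipe['skill']}' over '{recipe['input']}'"

print(submit_llm_recipe(extraction_recipe))
print(submit_llm_recipe(summary_recipe))
```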

Slide: Responsible and Trusted AI

  • Just finally then, a focus on responsible and trusted AI. This is a topic that has been around for a number of years, but we see it more and more frequently. It's about the provenance of your data, being transparent about how data is used, but also about how the AI is used: what decisions is it making, and why is it making those decisions? As we move towards more and more complex models, we can't look inside those models and understand what's going on; they've moved beyond the limits of complexity that a human can understand. But there are still a number of ways that we can put frameworks around those models to prod them and poke them and get an understanding of what they're doing: the bounds in which they give good answers, when they're liable to break down, and some elements of them being able to explain their thought process, what it is that they're doing. So, these concepts of transparency, fairness, accountability and privacy are all fundamental to how we build and develop our AI solutions and how we present back the answers to users.

Slide: Explainability (XAI) – Implementation in IFS Cloud

  • So, an example would be explainability, again for the business opportunity prediction. Rather than just presenting back that there is a 70% chance of this opportunity converting, what we can do is present back a more detailed score which focuses on what the main drivers of that decision are: why do we think it's likely to convert, and how important are those elements? And conversely, why do we think it might not convert? So that element of being able to explain a decision forms an inherent part of all the use cases that we're presenting back, and we see that as well in scheduling, for example with our scheduling explainability service being able to explain why particular jobs do not form part of the optimal route.
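A much-simplified sketch of that kind of explanation is shown below: instead of returning only a conversion probability, the helper also returns the per-feature contributions that drove the score. The linear model, feature names and weights are invented for illustration; the real service may use a different explainability technique.

```python
import math

def explain(features: dict, weights: dict, bias: float = 0.0):
    """Return a conversion probability plus the per-feature contributions behind it."""
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    drivers = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return probability, drivers

prob, drivers = explain(
    features={"deal_size": 1.2, "days_open": -0.8, "prior_purchases": 0.5},
    weights={"deal_size": 0.9, "days_open": 0.6, "prior_purchases": 1.1},
)
print(f"{prob:.0%} likely to convert; main drivers: {drivers}")
```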

 

Questions / Answers / Feedback / Responses:

  • Q: Quick question on the commercials and the approach. So, you talked about these baseline tokens and usage, using the tokens for the events where you use the AI and the copilots. What about the outcomes of these events? What happens if, let's say I'm a user, I use these tokens, I end up using a lot of tokens, and the results that I get are less than satisfactory. What happens in that case?
  • A: What we're trying to do with the copilot interactions is, as much as possible, to keep them targeted: to try to pick up the context as much as possible when you're asking the questions, because as you say, you don't want to have to work your way around asking things in 10 different ways and consuming tokens. So, a lot of it is trying to keep the responses on rails in terms of what comes back. There's always an element of free form to how these copilot interactions take place, and at the moment they would consume tokens at the same rate, but there are elements, as you're asking the question, that can guide you down the paths which are likely to give you better answers. That's what the assistants within the pages will do, and if you're able to fit within those, then you know the quality of the answer you're going to get back is better. Obviously, over time, we're certainly thinking about how we can move this towards being a lot more outcome based. I think the whole concept of how customers use and get value from copilots is really just beginning. We know that they don't want to be wasting a lot of time asking questions that are not valuable to them, and the more we can do to keep them on the right path, potentially not letting them have to ask the questions at all, surfacing recommendations and insight to them without them having to interact, is certainly the direction that we want to go. But there's an element of us understanding how customers are using it and what they want to see from this that we're going to have to learn. As much as possible, we're trying to guide them towards the types of questions that are going to give them better answers.

 

  • Q: If I am understanding correctly, this will not be available for cloud on-prem customers?
  • Q: "Embedded" that this mean available for local installation?
  • A: The concept of embedded is that as a user within the different modules, you are not seeing and interacting with the AI directly; you're doing it through the industry solutions. But the AI services themselves are not deployed onto a customer's IFS Cloud; they run as a separate platform, and so if you have a remote on-premise deployment, you can connect to that next year, in 25R1, via the hybrid model.

 

  • Q: When you were talking about internal documents we can add to IFS, I was querying whether you apply RAG technology to these documents, or do you just use them as they are, without any RAG?
  • A: We have our own RAG service that we've built, so we would build indices over any documents. Say for you as a customer, in your document management system, we would build indexes over the documents, the data that you put there effectively. So we have our own RAG solution. We're using Azure AI technology for the LLM and for the AI search as well, but we have our own RAG engine that does the chunking, that breaks the documents up into the different indices, and that's an automated process.

 

  • Q: What is the AI model used by IFS? How do you manage its updates? At what frequency?
  • A: We have a model management system that we work on with Azure. At the moment we're only using Azure LLMs; we have three or four different ones that we're consuming for different tasks, some for interpreting language, some for what's called embeddings, that is, how we embed the data into a form where we can ask questions better. The idea, though, is that the IFS.ai architecture we've set up would allow us to use any type of LLM that we want for different tasks. We envisage that in such a fast-changing market we will find that some LLMs are better for answering certain types of questions, and others we might be able to deploy on premise. We are seeing these models get smaller and smaller; we might be able to run those ourselves without having to call out to third parties. So, at the moment there are probably three or four models that we are calling through Azure, but we would see a whole ecosystem of those models developing, and we have a management system that keeps them up to date, checking for performance as we go.

 

  • Q: When training IFS Copilot with customer specific data, is the algorithm able to learn from the user input (i.e. chat history)?
  • A: Yes. Just in terms of the concept of learning, I think this is an interesting one with LLMs. These language models are absolutely huge, and the ones that we use, certainly at the moment, are owned by third parties. So, we're not training those models, nor are we doing what's called fine tuning. The idea of fine tuning is that you take the model and tune it over all of your own data; that's a very expensive process to do, though. What we've found is much more effective is this RAG process, where we use the model as is, but we effectively ground it over a customer's data. So, we take the data we need to search over, and we break that down into manageable chunks. But with the question that is asked, we can provide all sorts of context, and the chat history is absolutely part of that context. When you ask a question to a large language model, you can pass it not just the question but all the conversation history you've had; you can pass it data from the screen that you're on; you can pass all that persona data. There's a limit, which is rapidly growing, to what you can use to provide context to the LLM as well as the question, but chat history is certainly one of the things we provide, so absolutely, through a conversation you're going to start honing in on better and better answers.
  • R: I just wanted to make sure that I'm on the same page with you. So, we as the user can access some kind of interface to pass our customer specific data to the model and do the training on our own. Is that the correct understanding?
  • A: Yes, there's an interface that allows you to upload documentation that can then be searched and queried over. Yes, absolutely. And that will be the unstructured documentation; plus it has access to all your structured data that's in the IFS database, and it can query over that as well.
  • R: And is passing the chat history also a manual process? Do we have to somehow tell the model to do that, or is it automatic?
  • A: Yeah, it is automatic. So, when you're having a conversation, the chat window will be picking up and passing chat history automatically as the conversation develops.
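A hedged sketch of the context assembly described in this exchange is shown below: the payload sent alongside the question carries the persona, page metadata and the most recent chat turns, trimmed to a budget. The structure and field names are illustrative, not the actual IFS.ai payload.

```python
def build_context(question, chat_history, page_metadata, persona, max_turns=10):
    """Assemble the context passed to the LLM along with the question."""
    return {
        "persona": persona,                    # e.g. "dispatcher"
        "page": page_metadata,                 # e.g. fields from the current screen
        "history": chat_history[-max_turns:],  # most recent turns only
        "question": question,
    }

payload = build_context(
    question="Why is job 4711 not on today's schedule?",
    chat_history=[("user", "Show me today's schedule"), ("assistant", "Here it is...")],
    page_metadata={"screen": "dispatch_board", "region": "North"},
    persona="dispatcher",
)
print(payload)
```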

 

  • Q: What does EURA mean?
  • A: That is the equivalent of the US restricted Gov Cloud. I think Eurocloud is the high security cloud. The difficulty is when you're within that cloud, you can't make the calls out to the third parties that you'd need for the large language models, so that's all that one meant. It's probably going to be a very small subset of our customers that that would apply to.

 

  • Q: Do you already have any use cases combining generative & predictive AI to deliver for example spare parts predictions for service visits? Everyone, any experience or references in this area?
  • A: It's a very good question, because as I said, that's the differentiating factor we have. I think the first examples we're seeing are in the service space, so the copilot for the dispatcher, because the dispatcher is effectively using the AI-driven scheduling technology to build the schedule, and the generative AI is allowing them to ask questions over that schedule and the data that's part of it. So that's probably the first example. But the idea of spare parts prediction, that's one that's on the road map for 25R1. I think we're going to see a lot more of these combination examples coming in the early part of next year, because that's the really exciting stuff, where you can start using the optimization, the anomaly detection and the simulation technology and ask questions over that. That's where this will go and become very exciting.

 

 

  • Q: Are you using GenAI frameworks like LangChain and LangGraph?
  • A: So yes, we've been using LangChain, we've been using Semantic Kernel, we've been trying out a number of things. Effectively these are frameworks that we can put around the large language models to string actions together and to assess performance. If you think about the automated testing world that you have within software, for large language models what you want to be able to do is have a framework of questions that you can ask to put functionality through its paces, to check what the edge cases do, etcetera. So, we're building those test frameworks around the LLMs. But yes, LangChain and Semantic Kernel are things that we've absolutely been using; they're a key part of the development process in a rapidly evolving space, and so we're constantly checking that we're doing things in the right way.
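The test-framework idea mentioned in this answer could be sketched roughly as below: a fixed suite of questions with simple checks on the answers, re-run after every model or prompt change. The test cases and the `ask_copilot` stub are invented for illustration.

```python
TEST_CASES = [
    {"question": "How many open work orders are overdue?",
     "must_contain": ["overdue"]},
    {"question": "Summarise the FMECA notes for asset PUMP-7",
     "must_contain": ["PUMP-7"]},
]

def ask_copilot(question: str) -> str:
    """Stub standing in for the real copilot call."""
    return f"Stub answer mentioning {question.split()[-1]}"

def run_suite():
    """Return the questions whose answers fail their checks."""
    failures = []
    for case in TEST_CASES:
        answer = ask_copilot(case["question"])
        if not all(term.lower() in answer.lower() for term in case["must_contain"]):
            failures.append(case["question"])
    return failures

print("failures:", run_suite())
```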

 

  • Q: Will you be able to connect to our current documentation repositories in the near future or will we always have to upload our documentation into IFS Cloud? I.e. will we be able to connect to documentation repositories outside of IFS Cloud?
  • A: It's a tricky one. In certain elements we are; in the sustainability space, we are allowing our customers, in certain ways, to bring in data from other sources to combine with what's in the IFS ecosystem to get answers. And in some of the use cases we have, we're bringing in controlled data sources for things like weather prediction or commodities pricing that allow us to make better decisions or offer better answers. What we're not doing at the moment is accessing full repositories that sit outside IFS Cloud. The idea is that we're answering questions on data that we know and can provide context on within the product; as soon as we allow any data into that ecosystem, it becomes harder. It's something that we are looking at, and it's something we will probably keep revisiting to see what the right answer is. But at the moment, the copilot is based on data that is within IFS Cloud, so that we can ensure the quality and the context is there.
  • R: So, like manuals or work instructions, things along those lines, all have to be uploaded?
  • A: It will all have to be in Doc Man, within IFS Cloud. Yeah, that's right.

 

  • Q: What APIs does IFS expose to allow customers to interface with IFS to create their Agentic Workflows?
  • A: I guess that's more of an API question, so that would be for agents that are operating across a range of systems, right? When I spoke about agents, it was very much within the context of the IFS product only. On the IFS APIs, we have a number of premium APIs that perform specific actions within the IFS product. That's not my area, so I don't have a full list, but I'm sure we can get that information about which premium APIs are available, and then of course those can be kicked off, triggered by an RPA process or any other agent-based process. So, there is a list of those exposed APIs. Some of them are well documented, particularly the premium ones. Everything that you can do within IFS Cloud is exposed via an API, but some of them are easier to use than others. You could build them into agentic workflows, but we can get you that list.

 

  • Q: Are there any plans to integrate with ClickLearn to source documentation for IFS.ai to use?
  • A: There are discussions around it, yes. I haven't got a timeline for that, but it's certainly something we've been discussing.

 

  • Q: How is IFS.ai connected to hosting (IFS managed cloud vs. company private cloud in Azure vs. on premise), and what is the commercial aspect of it?
  • A: So, I've mentioned it; there wasn't a specific slide on it, but at the moment it has to be IFS managed cloud to authenticate to the IFS.ai services. Coming in 25R1, you'd be able to do that with a remote deployment, so that would be on a company private cloud in Azure or an on-premise deployment, connecting to those cloud-based AI services. As for being able to deploy the AI services fully on prem, some of them are much more amenable to that, right? We can still deploy PSO and all the optimization services on premise; that's fine. The large language models are where that is difficult, because they're such large models and we're using third-party services for those. As those models become smaller and more use case and industry specific, we certainly envisage a world where we'd be able to deploy those on premise for certain use cases, but we're talking further out on the road map for that; until then, rather than deploying the AI services themselves on premise, you would need to, under a hybrid deployment, use the shared services in IFS Cloud.

 

Next Meeting: 10 December 2024 10:00 AM US Eastern Time
IFS Assets CollABorative: A conversation with Kevin Price, Global Head of EAM at IFS

 

If you are an IFS Customer and you do not have the next meeting invitation to this CollABorative and would like to join, please click here to fill out the form.
