
Assets CollABorative October 2024 - Bringing Modern Technology into your Reliability Practice

  • October 15, 2024
  • 0 replies
  • 109 views

  • Do Gooder (Employee)
  • 20 replies

IFS Assets CollABorative: Bringing Modern Technology into your Reliability Practice

Date of Meeting: 09 October 2024 10:00 AM US Eastern Standard Time

 

Thomas Heckmann Presentation:

 

Slide: Bringing Modern Technology into your Reliability Practice

  • I can't assume that everyone knows what reliability stands for, so I will introduce the topic a little and give you my perspective on it in general.
  • The title of my presentation is "Bringing Modern Technology into your Reliability Practice". Modern technology is the big phrase here; what do I mean by it? You will see later on how I try to relate and balance the two: how can you marry a rather traditional practice like reliability, which has been around since, I would say, the late 60s, with modern technology like artificial intelligence and machine learning, which you read about everywhere these days, not only within IFS but also outside, and what advantages could this bring?

Slide: Agenda

  • What is reliability? Why did I choose to talk about this topic today? I would like to explain what we mean when we say closed loop reliability process. The focus of the next couple of minutes is going to be on the use of artificial intelligence within the reliability practice, or within this closed loop reliability process, and then I will finish my presentation with a couple of conclusions.

Slide: Introduction

  • This is supposed to be a thought leadership session, not a product or marketing session. I'm not going to speak a lot about features, but more about concepts: a little bit about features, but mostly about the concepts, ideas, and logic behind what we think reliability, or contemporary reliability, should deliver to you.
  • Jon already introduced me. I'm on his team, a business architect in presales, focusing on asset performance management and enterprise asset management. Reliability, for me, is very closely related to asset management in general, and asset management the way we understand it does not equate to maintenance. Maintenance is an important part of it, but asset management for us is as stipulated by ISO 55000.
  • So asset management, for us, means generating value from your assets throughout their life cycle, and you will see in a moment that reliability is a key concept in ensuring that your equipment and your assets actually perform the way they should during that life cycle.
  • I am also personally interested in the topic: outside my work for IFS, I am active in the Institute of Asset Management, a British not-for-profit organization. I'm a member in Germany but also in Saudi Arabia, two very different regions, one very mature, the other growing very fast with its very own challenges when it comes to asset management.

Slide: Anecdotes

  • Before I go into the detail, I'd like to share two anecdotes. They are not made up; they are real experiences from the last six months, and they made me choose the topic and the title I'm speaking about today. Late last year I was engaged with a prospect who was planning to implement an EAM solution. That prospect was in Denmark and had a very clear focus on adopting data-driven maintenance, meaning everything from condition based maintenance onward, so also looking at predictive and prescriptive maintenance, in order to get more mature in maintenance management. That prospect confidently hired data analysts. No engineers, but data analysts: young people in their mid 20s who had previously worked for Amazon and Google. In the meetings they were very bold. They claimed: "We're no maintenance experts, but we don't need to be. Data, AI, machine learning, that's all we need. That's what's going to do the job." As a reliability professional I found that a little provoking, as you can imagine, because it questions what we learned and took for granted over the last decades: that in order to grow your maturity in maintenance, you need to be a maintenance expert. That, at least, has been my perception so far.
  • The second anecdote concerns an existing customer, again in the Nordics, in Sweden, who launched a digital service offering to monitor the health of production assets located at their customers' sites. That offering was quickly developed: sensors for various performance parameters were installed, data was collected and put into a data lake, and they too hired data analysts to analyse that data and translate it into actionable insights. But although the machine data was collected with the intention of optimizing maintenance, or optimizing the performance of the asset, no one had really considered beforehand what insights or benefits the collected data would actually deliver. This is a tendency we see very often: companies go into data-driven maintenance, collect data, and try to analyse it, but have not properly thought through what use that data could be. These two anecdotes matter because when I talk about reliability and reliability centred maintenance, you typically use a very systematic approach to decide what assets to focus on, what data to collect, and what you can do with that data. That is what I'm going to talk about in the 25 remaining minutes: how you can marry this classical reliability approach with the modern technology of AI, artificial intelligence, and ML, machine learning.

Slide: Reliability Challenges – Example: Gearbox Failure Modes

  • Before I do that, let's take a quick look at a typical reliability challenge. The image on my slide shows a small gearbox, and below it a small selection of failure modes. Failure mode is a reliability term describing what causes an asset or a piece of equipment to fail. Typically, maintenance decisions are taken at the level of a failure mode. So, for each failure mode, you decide that there is a PM, preventive maintenance, interval, or, in case there is condition data that gives you insight into an impending failure, maybe a different type of maintenance, namely condition based maintenance, is to be done.
  • In this example you can see that for these two failure modes, four PM actions were defined: checking the gearbox, checking the teeth, and grinding, honing, and polishing the teeth, each with an interval attached. Often these PM plans or PM actions are based on OEM requirements or specifications, or sometimes customers have even started to modify them by applying FMEA, failure modes and effects analysis, or RCM analysis. My example shows the initial state of a PM plan with a couple of selected failure modes and PM actions.
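The structure described above, failure modes each carrying one or more PM actions with intervals, can be sketched as a simple data model. The failure mode names, intervals, and sources below are made up for illustration, not taken from the slide:

```python
from dataclasses import dataclass, field

@dataclass
class PMAction:
    """A preventive maintenance action with its execution interval."""
    description: str
    interval_weeks: int
    source: str = "OEM spec"   # could also be "FMEA" or "RCM analysis"

@dataclass
class FailureMode:
    """What causes the equipment to fail; PM decisions hang off this level."""
    name: str
    pm_actions: list[PMAction] = field(default_factory=list)

# Illustrative gearbox PM plan (names and intervals are assumptions)
gearbox_plan = [
    FailureMode("gear tooth wear", [
        PMAction("check gearbox visually", interval_weeks=4),
        PMAction("inspect teeth for pitting", interval_weeks=12),
    ]),
    FailureMode("tooth surface fatigue", [
        PMAction("grind and hone teeth", interval_weeks=26, source="FMEA"),
        PMAction("polish teeth", interval_weeks=26, source="FMEA"),
    ]),
]

total = sum(len(fm.pm_actions) for fm in gearbox_plan)
print(total)  # 4 PM actions across 2 failure modes
```

Keeping PM actions attached to the failure mode they mitigate, rather than in one flat list, is what makes it possible later to see which failure modes are accumulating actions as the plan inflates.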

Slide: Reliability Challenges – Example: Gearbox Failure Modes

  • What typically happens if a failure ultimately occurs? I try to show this failure with the sparks that you can see here.

Slide: Reliability Challenges – Asset related decisions taken reactively and in isolation

  • When an equipment failure occurs, different people, different parts of the organization, different stakeholders look at the failure and have their own opinions on what should be done. These decisions are often taken reactively, after a failure has happened, and often in isolation.
  • For example, the risk people looking at the failure will say we need to update the risk plan. The engineering people might say: we told you, if you had re-engineered it, that failure would never have happened. The maintenance people might say: it was clear we over-maintained, or we under-maintained. The operations people might say: we told you to reduce the speed, then this failure would never have happened. So different stakeholders, different roles in the company, look at the asset and its failure behaviour and come to very different conclusions.

Slide: Reliability Challenges – Example: Gearbox Failure Modes

  • What we see very often as a consequence is that maintenance plans, primarily PM plans, have a tendency to grow, to get inflated. After such a failure happens, the differing opinions of these stakeholders lead to the PM plan getting inflated, so that all of a sudden you don't do just two PM actions on a certain failure mode, but you add another three. And it is common knowledge in reliability that doing more maintenance doesn't necessarily mean getting better results or better equipment reliability. On the contrary, over-maintaining equipment often introduces new failure modes: if you open a certain piece of equipment too often, sometimes you introduce dust or dirt, or the technician makes a mistake while fixing it. So over-maintaining something often introduces new failure modes.

Slide: Closed Loop Reliability Process

  • Our idea of how to respond to that is what we call a closed loop reliability process, ideally, by the way, within one single product. I promised not to talk about IFS Cloud or the product too much, but every now and then I'm going to refer to what we do in IFS Cloud. For those of you who will join Unleashed next week in Orlando, I can already tell you that one of the key aspects of today's presentation, namely the fully embedded FMECA and RCM analysis, is a feature that will be presented at Unleashed next week for the very first time.
  • Given that you are IFS users, and given that this is the Assets CollABorative, I assume you know what reliability is. But just in case there are participants who are not fully aware of the term, I have pasted a definition here. Reliability, when it comes to equipment, is the ability of a piece of equipment to consistently perform its intended function without failure, and it involves maintaining the equipment in a state where it operates as expected, minimizing the likelihood of breakdowns or unscheduled downtime. But reliability for us, because we see it as a closed loop process, actually contains all the steps you need: starting from defining what maintenance is required in order to make the equipment perform the way it should, for instance by performing an RCM or FMECA analysis; putting that analysis and its results into execution; then collecting feedback, because the feedback allows you to assess how efficiently the strategy you defined actually worked. If you take the example I presented before: if you do more maintenance and you don't get better results, obviously you have good reason to review your strategy. So a constant review and assessment of the efficiency of your strategy is also something we consider very, very important. And ultimately, to close the loop, you would update the RCM or FMECA analysis, because what you considered when you performed the analysis for the very first time may change. An RCM analysis, by the way, is often a table exercise where the different stakeholders I mentioned before sit around the table and discuss failure modes, and it could very well be that when such an RCM or FMECA is reviewed after some time, the stakeholders agree on changing the approach to mitigate a certain failure mode.
For instance, instead of maintaining more, you could come to the conclusion to engineer a failure out, to do a certain redesign and apply that redesign to your install base.
  • So this is the closed loop reliability process, ideally within one single solution, without the need to integrate or use interfaces to communicate between different solutions. Honestly speaking, there are also a lot of reliability solutions out there that can of course do these RCM analyses but might not be fully integrated into IFS Cloud or into your EAM solution, and ideally there are interfaces available to integrate their results into the enterprise asset management and the execution. However, if you can integrate without the need for interfaces, there are clear advantages: you reduce the risk of mistakes and inefficiency that interfaces introduce.
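The loop described above can be summarized as an ordered cycle of stages. A minimal sketch (stage names paraphrased from the talk, not product terminology):

```python
# The closed loop reliability process as an ordered cycle of stages
# (stage names paraphrased from the presentation, not product terms).
STAGES = [
    "perform RCM / FMECA analysis",
    "put the strategy into execution",
    "collect feedback from execution",
    "assess strategy efficiency",
    "update the RCM / FMECA analysis",
]

def next_stage(current: str) -> str:
    """Return the stage following `current`; the last stage wraps back to the first."""
    i = STAGES.index(current)
    return STAGES[(i + 1) % len(STAGES)]

# Updating the analysis closes the loop back to the analysis itself:
print(next_stage("update the RCM / FMECA analysis"))
```

The wrap-around is the point: updating the RCM/FMECA is not an endpoint but the input to the next round of strategy definition.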

Slide: AI in the Reliability Practice

  • Let me continue with my view on AI, artificial intelligence, in the reliability practice. You will agree with me that when you hear about AI, there is a lot of terminology flying around: generative AI, artificial intelligence, NLP, time series forecasting, first principle models, reinforcement learning, machine learning, regression models. I would like to focus primarily on the use of generative AI, the use of machine learning, and the use of NLP, which stands for natural language processing.
  • These support the various segments of the closed loop reliability process. Many of the things I'm going to speak about in our product, IFS Cloud, are already available or will be published next week during Unleashed. A couple of the topics are still on the roadmap or under consideration; I will tell you which ones are which as I speak about them.

Slide: AI in the Reliability Practice – Prepare RCM / FMECA

  • I just want to tell you that the things I will talk about are not super visionary. They are not very far out: as I said, some are already available in the solution, and others are already on the roadmap or under development.
  • The first topic is the use of AI when it comes to preparing an RCM or FMECA. RCM stands for reliability centred maintenance; FMECA, sometimes shortened to FMEA, stands for failure mode, effects and criticality analysis, or failure modes and effects analysis. All three are very systematic approaches to determining, per failure mode, per cause of an equipment failure, what the required maintenance is in order to make sure that the equipment performs properly.
  • The first thing you typically do when performing such an RCM or FMECA analysis is determine its boundaries: you determine what equipment or assets are part of the analysis. Related to this is the topic of criticality analysis. Before doing an RCM or FMECA analysis, you determine how critical an asset is, because these analyses are very time consuming and obviously require resources. You don't do them on all equipment or all assets; you focus on the most critical ones.
  • Criticality analysis is a standard feature in IFS Cloud. What a criticality analysis typically does, in case you're not aware of it, is assess the consequences if a piece of equipment fails: what are the consequences for safety, health, environment, production, cost, customer satisfaction? You multiply these consequences by the probability of a failure and come up with a risk priority number, which allows you to say how critical a certain piece of equipment is relative to others. As you can imagine, in order to create the biggest gain, you focus on the highly critical assets when you perform such an analysis.
  • If anyone has already participated in a proper RCM or FMECA, you will probably know that determining the boundaries is often a very painful exercise that sometimes takes a couple of days, with a lot of people sitting around the table with big plans, P&ID schematics. P&ID stands for Piping and Instrumentation Diagram. These people look at the diagrams and determine what should be in the analysis or not. Many of these P&ID schematics, and you see an example of one on the top right-hand side, are available as 2D or 3D models in your CAD solution, sometimes even as a BIM model, maybe as a PDF, in the worst case as an image. One of the things our R&D is currently investigating is the use of OCR, which stands for optical character recognition, and computer vision to analyse these 2D/3D BIM models, PDFs, and images, and come up with a list of equipment in an automated way. That list could be used to create a query within IFS Cloud listing all the equipment that is part of the P&ID schematic, which would then allow you to relatively simply determine whether a piece of equipment, based on its criticality, is part of the RCM or FMECA or not.
  • So that is a very nice piece of functionality which saves a lot of time and brings unambiguous understanding when it comes to determining the boundaries of an RCM or FMECA, and ultimately which equipment will be part of such an analysis. That use of AI would create a list, as I said before, and automatically add the equipment, as decided by you, to the actual FMECA.
  • By the way, the FMECA, as I said before, is one of the features that will be announced for the first time at Unleashed this year. You can see a small screen down here. FMECA and RCM, a feature that will be made available within IFS Cloud starting with 24R2, so coming up rather soon.
  • So that is the first example, a very nice example of how AI can be used to determine the boundaries of such an analysis.
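The criticality calculation described earlier on this slide, consequence scores multiplied by failure probability to produce a risk priority number, can be sketched as follows. The consequence categories and the 1-5 scales are assumptions for illustration; an actual IFS Cloud criticality analysis may use different categories, scales, and weightings:

```python
def risk_priority_number(consequences: dict[str, int], probability: int) -> int:
    """Sum consequence scores (assumed 1-5 each) and multiply by failure
    probability (assumed 1-5). A higher RPN means a relatively more critical asset."""
    return sum(consequences.values()) * probability

# Illustrative assessment of a pump; every score here is made up.
pump_consequences = {
    "safety": 4,
    "environment": 3,
    "production": 5,
    "cost": 3,
    "customer_satisfaction": 2,
}
rpn = risk_priority_number(pump_consequences, probability=4)
print(rpn)  # (4 + 3 + 5 + 3 + 2) * 4 = 68
```

Because the RPN is a relative measure, its value lies in ranking assets against each other: the highest-RPN assets are the ones worth including in a resource-intensive RCM or FMECA study.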

Slide: AI in the Reliability Practice – Execute RCM / FMECA

  • The second example I would like to give is about the use of AI when it comes to executing such an analysis. When you execute such an analysis, you have to use a lot of existing data and information. That data could be experience and knowledge from within your organization. It could be structured data, like failure databases or common failures on your critical assets. But it could also be unstructured data: documents like OEM manuals, scientific papers, white papers, or, if you want, even Internet research. When it comes to the use of generative AI, you can determine which data sources you find trustworthy and which you would like to use.
  • Some of you might be aware of ChatGPT or Copilot. What does generative AI mean? It is the use of AI, as in the example on the lower right, to interact with a system in natural language, either by typing or by speaking to it: you ask it a simple question, and the generative AI leverages structured or unstructured data. In the example you can see here, if I asked the system what the top five failure modes for a Parker Hannifin centrifugal crude oil pump would be, it could query your existing database, because you might have this information there, but it might also look at manuals, white papers, anything scientifically published, and come up with the answers, as you can see in this small screen.
  • This, by the way, is a feature which is already available. Of course you need to train the model. The model is not intelligent per se; you need to train it, or tell it, what the trusted data sources are and what failure modes ultimately are. But this ifs.AI copilot, as we call it, is already available for these kinds of use cases.
  • A very nice example, by the way, which is available not only in conjunction with RCM and FMECA, but with any other feature as well. The image in the background is a detail of a wind turbine; you can see a 3D BIM model. We used this in a recent demo, where you could ask the copilot: what are typical mistakes on a certain equipment type, what are typical errors on it, and how were they fixed? The AI would then mine your existing data, or the defined data sources, and come back with feedback in a very natural way of interacting. So that is the second example of the use of AI in reliability, this time when it comes to executing an RCM or FMECA. The beauty of generative AI is that it can leverage both structured and unstructured data. Structured data, as I said, is data you typically have in table or database form; unstructured data is everything in documents, manuals, white papers, sometimes even images, from your own data sources or external ones.
  • One last thing: this generative AI could also be used, and this is something we have under consideration for development, to go through the so-called RCM decision diagram, a topic I can't elaborate on here because it would take too long. Ultimately, the RCM decision diagram is a series of questions per failure mode, and based on the answers you provide, the system would tell you what the suggested maintenance strategy is, corrective maintenance, preventive maintenance, condition based maintenance, or another, or even calculate the so-called P-F interval for you. So, a very, very strong use of generative AI in RCM and FMECA.
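The RCM decision diagram mentioned above is essentially a fixed sequence of yes/no questions per failure mode that lands on a maintenance strategy. A heavily simplified sketch of that idea follows; the real diagram (as in Moubray's RCM II / SAE JA1011) has considerably more branches and question wording than shown here:

```python
def suggest_strategy(condition_detectable: bool,
                     failure_predictable_by_age: bool,
                     is_hidden_failure: bool) -> str:
    """Walk a (heavily simplified) RCM decision diagram for one failure mode.

    Each argument is the answer to one yes/no question about the failure mode;
    the order of the checks mirrors the order of questions in the diagram."""
    if condition_detectable:
        # There is a detectable P-F interval, so monitor the condition.
        return "condition based maintenance"
    if failure_predictable_by_age:
        # Failure correlates with age/usage, so a fixed interval works.
        return "preventive (time based) maintenance"
    if is_hidden_failure:
        # Failure is not evident in normal operation; test for it periodically.
        return "failure finding task"
    # No proactive task is technically feasible and worthwhile.
    return "run to failure (corrective maintenance)"

print(suggest_strategy(condition_detectable=True,
                       failure_predictable_by_age=False,
                       is_hidden_failure=False))
```

A generative AI walking this diagram would, per the talk, ask the review group each question in natural language and record the answer, rather than hard-coding booleans as this sketch does.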

Slide: AI in the Reliability Practice – Put data driven maintenance in action

  • I wanted to focus a little more on reliability and on RCM and FMECA, and I will maintain that focus, but of course part of the closed loop reliability process is also actually executing on your strategy. As I said, I will not elaborate too much on that, but when you bring everything you defined in your FMECA into execution and try to become more mature in your maintenance, moving into data-driven maintenance, away from reactive towards condition based, predictive, maybe even prescriptive maintenance, here too the use of AI is instrumental. This slide just shows that the big driver for AI in data-driven maintenance is ultimately to move away from a reactive way of doing maintenance, maybe even over-maintaining something, to an automated, machine-learning-based predictive way of doing it.

Slide: AI in the Reliability Practice – First principle and trainable ML models to support Anomaly Detection & PdM

  • One nice example, and again a feature which has been available in IFS Cloud since last year, since 23R2, is the use of AI and ML for anomaly detection. This is where you use an AI algorithm that looks at your data, establishes a normal baseline, and then manages to correlate unusual behaviour with failures, arriving at an understanding of how impending failures can be recognized. We distinguish, by the way, between first principle and trainable ML models. The same model is then also able to forecast time series, so it can tell you whether such a failure is coming up again by analysing a certain behaviour.
  • As you can imagine, when it comes to maintenance and AI and ML, the biggest topic we address in asset performance management is everything that makes you more mature by moving away from corrective, via condition based, into predictive and eventually even prescriptive maintenance, applying anomaly detection and time series forecasting.
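The anomaly detection idea above, learn a normal baseline from sensor data and flag readings that deviate from it, can be illustrated with a simple z-score detector. A trainable ML model in a product would be far more sophisticated; this only shows the principle:

```python
import statistics

def train_baseline(readings: list[float]) -> tuple[float, float]:
    """Learn a 'normal' baseline: mean and standard deviation of healthy data."""
    return statistics.mean(readings), statistics.stdev(readings)

def is_anomaly(value: float, baseline: tuple[float, float],
               threshold: float = 3.0) -> bool:
    """Flag a reading deviating more than `threshold` standard deviations
    from the learned baseline."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

# Illustrative vibration readings from a healthy gearbox (made-up values, mm/s)
healthy = [2.1, 2.3, 2.0, 2.2, 2.4, 2.1, 2.2, 2.3]
baseline = train_baseline(healthy)

print(is_anomaly(2.2, baseline))   # a normal reading
print(is_anomaly(6.8, baseline))   # unusual behaviour worth correlating with failures
```

Correlating such flagged anomalies with subsequent failures, and forecasting the time series forward, is the step that turns simple anomaly detection into predictive maintenance.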

Slide: AI in the Reliability Practice – Knowledge exchange and continuous improvement

  • Another topic: moving around the circle from performing the analysis to bringing it into execution, the next segment in this closed loop reliability process is collecting feedback. Collecting feedback not only in the sense of performance data, vibrations, temperatures and so on, but also in the sense of expertise from the colleagues in the field. That is another very interesting, but also very critical and important topic, because many of our customers face the situation where experienced, long-serving colleagues leave the business. You probably know this phenomenon: they are often either not replaced, or replaced by younger colleagues who tend not to stay as long, or who are less experienced. So one of the big questions many of our customers face is how to capture existing knowledge and expertise, and how to make that expertise available in the form of knowledge management. The screen captures you see here are another nice example of the application of AI to knowledge management, a topic which has been around for quite some time. What you see on screen is an example of our Poka solution. It's a mobile solution with processes, with digital forms, with various ways for the maintenance or service technician to capture different things: what he or she did, what failure occurred, images, audio recordings (because the equipment might have made a sound), but also procedures, tips, and suggestions, all put into a knowledge database.
  • So one of the things Poka supports is the capture of knowledge, either manually or automatically, by looking at maintenance logs, tips, and best practices. These are automatically captured, categorized, clustered, and made available in a shared knowledge base. That in itself is not yet the use of AI. AI actually comes in at the next part, retrieving this knowledge, which is hidden in structured and unstructured data: structured as database entries, failure codes, cost codes; unstructured as images and audio recordings.
  • In Poka, AI is used to provide contextual knowledge and retrieval, again using NLP, natural language processing. What that means, for example, is that a technician on site is working on a certain type of equipment, maybe the gearbox I showed you at the beginning, and might ask the system: I'm at the gearbox; it's vibrating; it makes a certain noise; it's a certain type of gearbox; was there anything else recognizable, a certain temperature, something unusual? The technician could communicate all that to the AI via natural language, typing or speaking, and the AI within Poka would then come up with contextual knowledge: suggest certain feedback, a relevant procedure to follow now, or tips and suggestions that colleagues have captured in a similar situation or scenario. We call that contextual knowledge: it's not static knowledge that you have to query yourself, but knowledge dynamically presented by AI within Poka in a certain situation.
  • While this is of course a very good example of how the maintenance or service technician in the field can be supported, it could also serve as decision support in general. Again, when you perform the RCM or FMECA analysis, this structured and unstructured data could help the RCM review groups and technicians take informed decisions when they cooperate around the table, as I explained before.

Slide: AI in the Reliability Practice – Assess Strategy Efficiency

  • I'm coming to the end of my presentation. To close the loop, you need a regular assessment of how efficient your strategy actually was. Remember the gearbox example from the beginning and the phenomenon we often see, that maintenance strategies tend to grow and inflate over time without necessarily creating the advantage the customer expected. It is of course extremely important that you regularly assess how efficient, and for that matter how effective, your strategy is. We close the loop, for instance, by taking a look at the health of a certain asset, as in the two screen captures here. On the left-hand side you see the asset health dashboard; we refer to it as asset insights in IFS Cloud. But of course there are many other dedicated lobbies as well, where you can compare KPIs over time: overall equipment effectiveness, the ratio between planned and corrective maintenance, and many others.
  • Comparing before and after you change the maintenance strategy allows you to assess how efficient the modifications you made actually were.
  • The asset health index, which sits behind the dashboard on the left-hand side, is for me an extremely interesting one. Asset health indices have been around for at least a decade. An asset health index is a mathematical formula or algorithm with which you try to normalize the health of an asset to make it comparable to others. In the past, though, they were considered rather static: you calculated them, you updated them maybe every couple of years. In fact, health behaves rather dynamically, and in order to get a proper contextual understanding of health, and to be able to align it with the evaluation criteria, we tend to treat asset health more as a dynamic parameter.
  • And this is a last example. Our asset health indices in IFS Cloud are based on first principle models. They are defined models that can be very complex or less complex, but many times are quite complex, and they look partially at real-time data and partially at static data, bring them together, consolidate them, and calculate the asset health index. That allows you to make decisions, for example when prioritizing maintenance: obviously, if you have a very critical asset in rather poor health, you would prioritize that asset. This information is also made available when you plan and schedule your maintenance in another component of the software, which we call maintenance planning and scheduling. This, by the way, also uses AI algorithms and leverages the information and parameters I mentioned here, particularly the asset health index.
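A first-principle asset health index like the one described above consolidates real-time and static parameters into one normalized, comparable score. A minimal sketch with made-up parameters and weights; an actual IFS Cloud model would be considerably more complex:

```python
def health_index(parameters: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized parameter scores
    (each score: 0.0 = failed, 1.0 = as new)."""
    total_weight = sum(weights.values())
    return sum(parameters[name] * w for name, w in weights.items()) / total_weight

# Illustrative inputs: condition scores from live data plus static ones like age.
pump_parameters = {
    "vibration_score": 0.8,    # derived from real-time sensor data
    "temperature_score": 0.9,  # derived from real-time sensor data
    "age_score": 0.5,          # static: fraction of design life remaining
}
pump_weights = {"vibration_score": 3.0, "temperature_score": 2.0, "age_score": 1.0}

score = health_index(pump_parameters, pump_weights)
print(round(score, 2))  # (0.8*3 + 0.9*2 + 0.5*1) / 6 = 0.78
```

Because the output is normalized, scores for different assets become comparable, which is exactly what lets a planner cross a criticality ranking with health to decide which asset to maintain first.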

Slide: Conclusions

  • I'd like to conclude. I know that these sessions are sometimes really overwhelming. I use a lot of jargon. Maybe I talk about things which for some of you are new. For the experts amongst you, I probably scratch too much on the surface of that unfortunately.
  • I think when I started my presentation, I told you about these two anecdotes and I remember quite well and maybe some of you can share this, this feeling when I saw that all of a sudden Data Analysts, young guys who've worked with Amazon and Google before entered our practice, they don't really care about RCM, about FMECA. They look at their data. They apply AI and ML and they think that gets the job done. Often though this does not necessarily yield the expected results, and I do believe that it's absolutely necessary but also very beneficial. If you were able to merge the traditional classical reliability practice that has been around since many decades with the newer technology, AI and ML, and I think the availability of data of AI and ML, the processing capabilities, they really have the potential to transform reliability by enabling very new ways of interacting and cooperating during the RCM, the FMEA and ultimately, also leverage data-driven maintenance.
  • I asked this question on the on the maintenance conference in the Netherlands last year to an expert. What do you think is the role of RCM and FMEA in the future and that person gave a very disappointing answer to me because he said I don't think it will have a future. He says criticality analysis may be yes, but AI and ML will replace these. I do not see it like that. I think that RCM and FMEA will remain foundational or reliability in general will remain foundational. For the understanding of systems, encrypted functions and failures. And also, be important to systematically determine the appropriate maintenance strategy. There will be less time consuming. There will be less expensive. They will be less difficult and annoying at times because of the possibilities of interacting that I had tried to explain to you in the last half an hour. Interacting in with natural language or querying the system, putting these technology will ultimately really give a benefit.
  • To finish up my part, I will say that blending traditional methods with modern technology will allow you to create synergies, and also acceptance, and will be integral to the successful implementation of AI- and ML-based maintenance, with the traditional methods providing the necessary structure, risk management and oversight to ensure that these digital tools are used effectively.

Slide: Quote (by John Moubray)

  • That brings me to the end of my session. The founder, or founding father if you want, of RCM and of reliability is a person called John Moubray. We talk about John Moubray, but in fact there were another handful of people who are commonly referred to as the Rat Pack; Terrence O'Hanlon is one of them, as you might be aware. There is a famous citation of John Moubray: he said that "there is little point in doing maintenance the right way if you are doing the wrong maintenance," and for me that one sentence summarizes what reliability stands for. As I said, if we were able, and we at IFS are doing that currently, to combine classical reliability with modern AI-based technology, we do think there is a lot of value in that for the practice.

 

Questions / Answers / Feedback / Responses:

  • Q: For Asset Health Monitoring, what parameters could be tracked? And is there any partner company that you work with for sensors/hardware?
  • A: Let's go through this question one by one. What parameters could be tracked? There is no limitation with respect to the parameters that you could embed into such an asset health index. Obviously, you need to collect parameters that make sense. We are a software company, but we do have consulting capabilities within IFS, and also in our new addition to the family, Copperleaf, which you might be aware of, who can work out a good understanding of how a piece of equipment functions and what parameters determine its health. So we have consulting capabilities that could help you come up with a proper asset health index: which parameters should you collect, which parameters should be in there. As I said, there is no real limitation. There are of course also consulting companies out there with readily available asset health indices which you could embed into our solution. It is a super interesting topic. Often we develop these together with our customers or with consulting firms, or you reach out to consulting firms, or you and your colleagues have your own understanding of what you would like to see in these asset health indices.
  • Is there a partner company that we work with for sensors and hardware? My gut feeling is no, there is not. I can tell you from my experience that there are many hardware vendors, Honeywell, Bently Nevada, GE, you name them, that we see a lot at our customer sites. But I could not say that we have a preferred one that we promote or work with more than any other.
  • IFS Cloud, by the way, is extremely open. As you probably know, it has generic APIs available that publish every data set and function. So we are not limited, if you will, to any of these.
  • A: We don't have a formal partnership with any hardware vendors or sensor companies, as far as I'm aware. We have open technology, and customers usually have something in place already. So our approach has always been to be as open as possible, and the software components that we use to link to those systems are designed for you to be able to bring the data in, whether it's from OSIsoft, Black & Veatch, Honeywell, or specific hardware vendors.
  • So, no partners that I'm aware of that we push; rather, we pull from what you have or what makes sense for the equipment that you have.
  • A: One final comment on that. I'd like to share with you an example of an existing customer who does this in a very interesting way. You might have heard that we have a big reinsurance company as a customer.
  • They are Munich Re, the world's biggest reinsurance company. They use IFS Cloud to run their equipment-as-a-service offering. Equipment as a service means they purchase very costly machines and provide them on a servitization model to their end users, and in fact this is a nice example.
  • If you would like to Google it, Relayr is an example. I don't want to promote them, just to tell you that this is one very practical example. They have a predictive maintenance offering, but also a hardware offering, and they are a subsidiary of Munich Re. That customer puts their sensors on machines that they place at customer sites, and they work out the analysis together with their customers, including what performance data should be collected. It is a super interesting example because it is a very modern, very trendy servitization model, where the customer actually pays per use. If they use the equipment more, they pay more; if they use it less, they pay less. And as you can imagine, the owner of this equipment, Munich Re in this case, has a very high interest in the machine being up and running.
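To make the asset health index idea from the answer above more concrete, here is a minimal sketch of one common approach: normalizing several tracked parameters and combining them into a weighted 0–100 score. The parameter names, operating limits and weights below are purely hypothetical illustrations, not IFS product behavior or recommended values.

```python
# Hypothetical sketch of a weighted asset health index (0 = failed, 100 = healthy).
# Parameter names, limits, and weights are illustrative assumptions only.

def normalize(value, lo, hi):
    """Map a reading into [0, 1], where 1 means healthy.

    Assumes a higher reading means worse health (e.g. vibration, temperature).
    """
    score = (hi - value) / (hi - lo)
    return max(0.0, min(1.0, score))

def asset_health_index(readings, limits, weights):
    """Combine normalized per-parameter scores into a single 0-100 index."""
    total_weight = sum(weights.values())
    weighted = sum(
        weights[p] * normalize(readings[p], *limits[p]) for p in readings
    )
    return 100.0 * weighted / total_weight

readings = {"vibration_mm_s": 4.5, "bearing_temp_c": 78.0, "oil_particles_ppm": 120.0}
limits = {"vibration_mm_s": (0.0, 11.0), "bearing_temp_c": (40.0, 120.0), "oil_particles_ppm": (0.0, 500.0)}
weights = {"vibration_mm_s": 0.5, "bearing_temp_c": 0.3, "oil_particles_ppm": 0.2}

print(round(asset_health_index(readings, limits, weights), 1))  # → 60.5
```

In practice, as the answer notes, the real work is deciding with domain experts or consultants which parameters belong in the index and how to weight them; the arithmetic itself is the easy part.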

 

  • Q: When using AI to find comparable equipment with other customers, how does it identify that the part used is the same type as others'?
  • A: That is a definition you would have to train the model on. I would assume that manufacturer plus manufacturer part number is a viable way of doing that. You need an unambiguous way of determining what the equipment is, and an unambiguous way could be manufacturer name, model, make number, and part number. I would assume those are the right ones to be using.
  • A: I will also make a comment, because there is another way of interpreting that question and I want to be very clear about it. When we do the training for our models, we don't do it across customers. When you do the training for your own equipment within your own business, you are using your data and what is within your enterprise; we are not bringing together all our different customers and trying to compare those. That becomes a little more complicated, and there are also contractual problems we would get ourselves into. So I just want to be clear: when you use AI within your IFS Cloud implementation, you are using your data, not everybody else's. Opening it up across everybody is an interesting prospect, and one that we have often discussed, but at this point in time it is per customer, using their own data.
  • A: Yes, which of course does not necessarily exclude using an external paper, a white paper, or, say, a statement on reliabilityweb.com that refers to a certain product name, if you find it a trustworthy source. But of course I agree with you. What you just explained was tried by GE with Predix, by ABB with Ability, and by our friends from SAP with the Asset Intelligence Network: bringing together different operators, OEMs and service providers. All of these ultimately failed, because besides the risk, no one is really interested in sharing this data; data is worth a lot, as you probably know.
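The answer above suggests using manufacturer, model and part number as an unambiguous way to decide that two pieces of equipment are the same type. A minimal sketch of that idea is a normalized matching key, so that cosmetic differences in how the fields were entered do not break the comparison. The vendor and part numbers below are made-up illustrations.

```python
# Hypothetical sketch: build an unambiguous matching key for equipment
# from manufacturer, model, and part number, as suggested in the answer.
import re

def match_key(manufacturer, model, part_number):
    """Normalize the identifying fields into a single comparable key."""
    def clean(s):
        # Lowercase and strip punctuation/whitespace so entry differences
        # like "ABB Ltd." vs "abb ltd" still produce the same key.
        return re.sub(r"[^a-z0-9]", "", s.lower())
    return (clean(manufacturer), clean(model), clean(part_number))

a = match_key("ABB Ltd.", "M3BP 160", "3GBP161410-ADA")
b = match_key("abb ltd", "m3bp160", "3gbp161410ada")
print(a == b)  # → True: both records normalize to the same key
```

A key like this is only a starting point; real master-data matching usually adds fuzzy comparison for typos and alias tables for manufacturer name changes, which is exactly the kind of definition the model would need to be trained on.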

 

  • Q: If customers, or people more generally, were looking at going down this path, what three things should they be doing to prepare for AI and reliability?
  • A: I think there is a logical sequence to it. The first thing is that you need to be prepared to understand what AI and ML entail. You need to do a lot of enablement internally to understand what it is and what its risks are, because it does carry risk. If you go to ChatGPT and ask something, we tend to believe that what ChatGPT or Copilot responds with is the truth, but it is not always the truth; I have found a couple of good examples where the feedback was definitely wrong. So you need to make your people aware. That would be number one: enablement. What is AI? What are its opportunities? How do you use it? What is the terminology? What are the risks?
  • The second thing is that you need to get your data governance clear.
  • Just like in the past, we tended to say "garbage in, garbage out": if you have bad data as an input, you get bad output. So if, for example, you would like to make beneficial use of your own structured failure data, and you don't have a systematic way of coding your failures or their causes, you will not get very far. So I would say data quality: looking at data quality and data governance is the second thing.
  • A third is applying systematic thought. What do you focus on? Where do you get the quickest gain? Don't try to do step three before step one; be very systematic in understanding where the criticality lies in your asset portfolio, and focus on that first. Try it with a pilot if you want, with a group of open-minded people. Take these people with you; once they have tested it and it works, make the next steps.
  • These three things, enablement, data quality and governance, and testing it in a limited use case: those would be my suggestions.
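The data-governance point above, that uncoded or free-text failure data will not get you far, lends itself to a small concrete illustration: an audit that flags failure records whose failure code is not in an agreed, controlled vocabulary. The codes, field names and records below are hypothetical examples, not from any IFS schema.

```python
# Hypothetical sketch: a simple data-governance check that flags work-order
# records whose failure code is missing or not in the agreed code list.
VALID_FAILURE_CODES = {"BRD", "ERO", "LEA", "VIB", "OHE"}  # illustrative codes

def audit_failure_codes(records):
    """Return the records whose 'failure_code' is absent or non-standard."""
    flagged = []
    for rec in records:
        code = (rec.get("failure_code") or "").strip().upper()
        if code not in VALID_FAILURE_CODES:
            flagged.append(rec)
    return flagged

records = [
    {"work_order": "WO-1001", "failure_code": "VIB"},
    {"work_order": "WO-1002", "failure_code": "broken"},  # free text, not coded
    {"work_order": "WO-1003", "failure_code": None},      # missing entirely
]
print([r["work_order"] for r in audit_failure_codes(records)])  # → ['WO-1002', 'WO-1003']
```

Running a check like this regularly, and feeding the flagged records back to the people who write them, is one practical way to work toward the systematic failure coding the speaker describes before attempting any AI or ML on the data.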

 

Next Meeting:  29 October 2024 11:00 AM US Eastern Time
IFS Combined CollABorative: Think Tank – IFS UNLEASHED Highlights and Reflections

If you are an IFS Customer and you would like to join the CollABoratives, please click here to fill out the form
