IFS Digitalization CollABorative: Tech Talk - Making AI Work in Practice with Örjan Ekström, Director of Product Management
Date of Meeting: 17 June 2025, 10:00 AM US Eastern Time
Discussion between Tom and Örjan
Örjan: I'm in R&D, working closely with product management, based in Sweden—so I'm comfortable with a bit of silence while we think things through. I’ve prepared a few demo examples we can discuss, but I’d like to start with a broader question:
Do you have a model or roadmap for AI maturity?
Many discussions begin with experimentation and assume we’ll eventually reach an autonomous enterprise. But getting there requires a clear strategy and deliberate planning.
We could even run a quick poll to see where we currently stand on that journey.
Also, I want to highlight something from the recent Connect event in Nashville. An article that followed emphasized AI not just as a trend, but as a value layer built on everything we've already delivered through IFS Cloud. It reflected the customer perspective well—AI is now a natural evolution of our offering, grounded in the context of our focused industries and platform.
Tom: Those are some really good points you've raised. I’ve put up a poll and we’ve started getting responses. Interestingly, no one has answered “yes” to having an AI maturity model in place. A few have said “no,” some responded with “planned but not yet,” and others asked, “what is that?”
That’s a really telling set of responses.
So maybe the next question we should explore is: What is an AI maturity model? Why does it matter? What is it useful for, and how do we go about developing one?
Could you talk us through how we should be thinking about AI maturity, and how that connects to broader, more strategic use of AI within a business?
Örjan: I can share an example of an AI maturity model, though I believe it’s important for each organization to define their own based on their context and goals.
A good starting point is experimentation. With IFS Cloud, for example, you can explore prebuilt AI use cases in a consumption-based model. This allows you to try out capabilities and only pay for what delivers value—making experimentation low-risk and accessible.
From there, the next step is automation. We’re already enabling more automation through AI. Coming from a background in supply chain, logistics, and manufacturing, I’ve seen how this often begins with automating individual process steps. Once a few of those are in place, you can start thinking in terms of end-to-end process optimization.
At that stage, you may even eliminate certain steps entirely. But you’ll also start to encounter the current limitations of AI—some parts of the process may not yet be mature enough for full automation.
That leads to a third level, where you begin to think about connected agents or intelligent orchestration. This is where AI starts to operate across roles, teams, and even organizations. In supply chain contexts, for example, this could mean automating customer or supplier communications using AI.
Then comes a fourth level, where you’re building toward a network of intelligent agents—what some might call “super agents.” These agents collaborate across departments and ecosystems, enabling more dynamic, cross-functional automation.
Finally, there’s the fifth level, which might be considered aspirational: the autonomous enterprise. This is where human-machine interaction becomes seamless. It aligns with concepts from Industry 5.0, where humans and AI systems work in tandem. In this model, you’re either:
- Using AI to enhance your decision-making, or
- Being guided by AI in your daily work—like a warehouse manager receiving real-time, AI-driven recommendations.
This progression—from experimentation to autonomy—isn’t just about technology. It’s about evolving how your organization thinks, works, and collaborates with AI.
Tom: You mentioned assessing where you are and where you want to go with AI. But what’s the best way to approach that?
Should organizations start by looking at individual processes, or take a department-by-department approach?
Or is it better to begin with a company-wide strategy and then identify what can be implemented to support that vision?
I realize there’s probably no one-size-fits-all answer, but for those who haven’t started this journey yet—especially those using IFS solutions—what should they be thinking about first?
Örjan: Someone else may want to jump in, but I’m happy to start.
Tom, I think a key element in getting the organization engaged and driving success with AI is to focus on business value—understanding what you can do with the technology now, and how it can lead to meaningful, value-based outcomes.
That’s the real power of AI, and it should guide decision-making.
However, I don’t think it’s effective to lead with that when you're just starting out. Talking about business value too early can feel abstract or disconnected. That’s why experimentation is so important in the beginning—it helps people get hands-on, build understanding, and generate internal momentum.
Once you’ve done some initial exploration and learned what’s possible, then you can start framing those insights in terms of business outcomes and value.
Tom: So when we think about applying this in practice—say you're an IFS customer looking to start using AI within your IFS implementation—what’s the right approach?
We know IFS offers some out-of-the-box, cloud-based AI solutions tailored to specific industries. Should customers start by looking at those and asking, “How can I fit this into my business?”
Or should they begin by looking at their business needs first and then ask, “Is there something already available that fits?” And if not, should they consider building something custom—maybe based on a value study, or even just a strong gut feeling?
What are you seeing companies actually doing in this space?
And with that, I’d love to open it up to the audience—what approaches are you taking, and how are you thinking about integrating AI into your business?
Örjan: Yes, we’re providing a number of prepared AI use cases that are validated from multiple angles—value, design, and technical feasibility—through real customer collaboration. That’s how we operate in IFS R&D. These use cases are designed to work out of the box and are deeply industry-focused, so there should be clear value for customers in our target sectors.
The use cases span a range of capabilities:
- Content generation
- Copilots and summarization tools
- Advanced predictive models, such as MSO (Manufacturing Scheduling and Optimization) and demand learning AI for forecasting
From my experience implementing and driving AI adoption, success often depends on having the right people in the organization—those early adopters and internal champions who are eager to explore and drive change. These are the people who can help turn experimentation into real business impact.
But that’s just one perspective. I’d love to hear from others—what are your experiences, challenges, or approaches when it comes to adopting AI in your business?
Discussion between Dan, Tom and Örjan:
Dan: We’re currently on Apps 10 and planning a cloud upgrade later this year, so we’re not quite there yet with AI adoption. From our perspective, there are several concerns:
- Data Quality: We know experimentation is important, but if the data being used is poor, the AI models could produce misleading results. That's a real risk—making decisions based on inaccurate data could do more harm than good. So a key question for us is: how do we get ready to even start experimenting in a meaningful way?
- Where to Start: As a large organization operating in both construction and manufacturing, we face challenges across two parallel industries. It's not always clear where to begin. Are there relevant use cases already available in IFS? Or will we need to advocate for new ones that address our specific pain points?
- Credit-Based Model: We're also concerned about the consumption-based pricing. If people start experimenting freely—asking copilots for things they don't really need—we could quickly burn through credits and face unexpected costs. That makes it hard to encourage exploration without also needing tight controls.
So while we’re eager to use AI, we need support across all these areas to move forward with confidence.
Tom: You’ve raised some really interesting points, and I think many in the audience can probably relate to them. Just on that third point about credit usage—I read something recently that really stuck with me.
It mentioned how platforms like ChatGPT process billions of calls, many of which are triggered by simple phrases like “please,” “thank you,” or even just “hi.” Each of those prompts generates a response, and each response consumes compute resources and, in some models, credits. So even polite or habitual interactions can add up quickly.
While that kind of usage might be more common in consumer environments, I completely understand your concern. In a business context, you want assurance—that experimentation won’t lead to runaway costs or unintentional overuse.
Dan: I can’t remember if it was Mark at UK Connect or James, but someone mentioned they enjoyed chatting with the chatbot—and of course they do! But that also means we’re potentially spending more money just saying “hello,” “please,” and “thank you.”
As a long-time customer, I get it—it’s about culture. People are used to interacting with AI socially, and they don’t always think about the cost implications in a business context. If AI is presented within IFS without any usage boundaries, there’s a real risk of burning through credits on non-essential interactions.
We need to think about how to guide usage—not to discourage engagement, but to ensure it’s purposeful and cost-effective.
Tom: That’s a really good point—especially around data readiness. Master data management is a huge topic, and one we’ve only just touched on today.
So let’s dig into that a bit more. In your experience, what are customers doing to prepare their data for AI? How are we advising them to get ready? And what have we done internally at IFS, given that we’re also starting to use this technology ourselves?
Örjan: In our experience, data quality is the biggest barrier to getting started and reaching meaningful outcomes with AI. It’s often the business owners who need to take responsibility for ensuring the data is accurate and properly maintained. From an IT perspective, it can be difficult to drive that work alone.
One of the first Copilot use cases we implemented was using our own Copilot on our own documentation. For example, many customers ask, “What data do I need to have in place?”—and Copilot can pull that information directly from the documentation. It’s a common and practical use case that helps bridge the gap between readiness and action.
Another important point is control. All AI use cases in IFS can be turned on or off. You’re not just activating AI and hoping for the best—you can manage which use cases are available to your business users, enabling a more thoughtful and strategic rollout.
This kind of dialogue between IT and business is key to successful adoption. It allows organizations to experiment safely, ensure relevance, and avoid unnecessary usage or costs.
Dan: Is it possible to track which users are using which AI use cases?
I asked this a few months ago, but I’d like to revisit it—can we monitor usage at the user level to see who’s engaging with which Copilot or AI features?
Örjan: Yes, I believe we should have full visibility into that kind of usage. I haven’t personally seen it yet, but it’s definitely something we should follow up on.
Dan: That kind of visibility would be really useful. If you can track usage, you might spot a hotspot—like someone spending all day chatting with the bot. That gives you the opportunity to step in and manage the situation before it becomes a problem.
Discussion between Dennis and Örjan:
Dennis: In our organization, we’ve transitioned from a legacy system into IFS Cloud. That legacy system was, in many ways, just a glorified typewriter. Now, we’re working to establish proper data structures and ownership.
My question to the group is this:
How much historical data do you actually need to make AI effective?
AI relies on trends and patterns, so it can’t deliver much value if you’ve only been live in IFS Cloud for three months. How much history is enough to start seeing meaningful results from AI?
Örjan: If I may start—when it comes to time series analysis, like in our demand planner, having around two years of historical data is ideal. That gives us strong seasonality insights, which are very valuable for accurate forecasting.
However, for generative AI and recommendation use cases powered by large language models, you can actually start seeing value with much less data than you might expect. Even three months of data can be enough, depending on the volume and quality.
So it really depends on the use case. That’s why experimentation is so important—it helps you discover where AI can deliver value, even with limited historical data.
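As a side note on the two-years guideline: classical seasonal decomposition needs at least two full cycles of the season length, which is where that rule of thumb comes from. A minimal Python sketch with synthetic data (not IFS code) illustrating this:

```python
# Why ~2 years of history is the practical floor for yearly seasonality on
# monthly buckets: seasonal_decompose with period=12 requires at least
# 2 * 12 = 24 observations, i.e. two complete cycles.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(42)
months = pd.date_range("2023-01-01", periods=24, freq="MS")  # 2 years, monthly
# Synthetic demand: trend + yearly seasonality + noise
demand = (100 + np.arange(24) * 2
          + 20 * np.sin(2 * np.pi * np.arange(24) / 12)
          + rng.normal(0, 5, 24))
series = pd.Series(demand, index=months)

result = seasonal_decompose(series, model="additive", period=12)
print(result.seasonal.round(1).head(12))  # the recovered seasonal profile
```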
Discussion between Örjan and Tom:
Tom: One of the points Dan raised—and that we were just discussing—is the challenge of not knowing where to start. You mentioned earlier that IFS provides industry-specific AI use cases, and I think it would be helpful to touch on a few examples.
One thing I’ve been wondering—and maybe this is just my own assumption—is whether many of these use cases are more structured and predefined, rather than open-ended chatbot-style interactions. In other words, are there AI features that operate behind the scenes or in a more controlled way, so users can benefit from AI without consuming large amounts of tokens through freeform chat?
If that’s the case, it would be great to walk through a few of those examples. It might help us better understand where the current gaps are and where we could focus next.
Örjan: That’s a great point. While the Copilot does involve open-ended chat—which can lead to overuse if not managed—many of the other AI use cases in IFS are structured, predefined, and task-specific, meaning they don’t behave like a chatbot and are more efficient in how they use tokens.
For example:
- Content Generation: This includes use cases like extracting data from a PDF or image to automatically generate a customer order. It's easy to measure the value here—how much faster the process becomes, and which types of documents it works well with. You can then apply AI selectively, only where it adds value.
- Summarization Use Cases: One example is the Supply Chain Analysis page, where users need to review a large number of connected orders, logs, events, and even unstructured documents. Previously, this could take 15–20 minutes to analyze. With AI summarization, that time can be reduced to 1–2 minutes, helping users quickly identify root causes and take action.
- Proactive Intelligence: We're also evolving these use cases to become more proactive—alerting users when something needs attention, rather than waiting for them to investigate manually.
These use cases are continuously evolving based on customer feedback. That’s a strength of how we work at IFS—co-developing and refining solutions with our customers. But it also means that if a use case improves after initial rollout, you may need to re-engage your internal users to try it again, which can be a challenge.
That’s why we’re careful about how and when we launch new AI use cases. At the same time, we rely on ongoing customer feedback to guide that evolution. It’s a collaborative process, and it’s central to how we operate at IFS.
Tom: You touched on some great use cases earlier, and I think it’s worth diving into a few specifics. I know we’ve previously discussed things like demand forecasting and shop floor operation planning, and I think those are great examples to revisit.
One of the challenges I’ve personally faced is understanding what AI in IFS actually looks like beyond the chatbot experience. Like Dan mentioned earlier, when most people hear “AI,” they immediately think of tools like ChatGPT, Copilot, or Grok—essentially conversational interfaces. But in a business context, especially for those of us in manufacturing, construction, or service delivery, it’s not always clear how that translates into something practical or valuable.
And while organizations like IFS have the in-house expertise to build AI solutions, most of your customers don’t have teams of developers ready to build custom AI tools. So naturally, they’re looking to IFS and asking:
“What do you already have that we can use?”
“What’s available out of the cloud?”
It would be really helpful if you could walk through some of the ready-to-use AI capabilities that customers can pick up and start using—especially those that align with different stages of AI maturity. Something like:
- Level 1 – Getting Started: Predefined use cases like summarization, document extraction, or basic copilots.
- Level 2 – Operational Efficiency: Demand forecasting, shop floor planning, or predictive maintenance.
- Level 3 – Strategic Optimization: Manufacturing scheduling and optimization (MSO), intelligent scheduling, or proactive supply chain insights.
This kind of breakdown would help customers understand where they are, what’s available to them now, and how they can grow their AI maturity over time—without needing to build everything from scratch.
Örjan: Many people, understandably, associate AI with chatbots—tools like ChatGPT, Copilot, or Grok—because that’s what we’re most familiar with as consumers. But my personal favorite area of AI is actually the combination of predictive (mathematical) AI and generative AI using large language models (LLMs).
This hybrid approach allows us to take the intellectual property and domain expertise we’ve built into IFS and enhance it with LLMs to deliver recommendations, insights, and guidance in a much more intelligent and user-friendly way. And this is something that’s already available in IFS Cloud.
- Example: Demand Forecasting with Predictive AI
One great example is our Demand Planner and Time Series Forecasting Engine, which was enhanced in the 25R1 release. This engine can be used not only for forecasting parts but also for any scenario where you have historical time series data.
Here’s how it works:
- On the left side of the graph, you see historical demand (purple line).
- The green line shows the actual historical demand.
- The yellow line represents the system-generated forecast using a basic model (e.g., moving average).
- You also see a forecast error visualization—a bubble chart where each bubble represents a part number. The size of the bubble indicates the forecast error. Green bubbles below the line show good accuracy; red bubbles above the line indicate poor accuracy.
This visualization helps users quickly identify where forecasts are working well and where they’re not.
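As a rough illustration of the view described above (not the IFS implementation), the sketch below builds a moving-average baseline per part, scores it with a standard error metric (MAPE), and draws a bubble chart where bubble size and color reflect the error. All data and thresholds are synthetic.

```python
# Toy recreation of the forecast-error bubble view: a 3-month moving-average
# baseline per part, MAPE as the error metric, and a scatter plot where each
# bubble is a part and its size grows with forecast error.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n_parts, n_months = 20, 24
history = rng.poisson(lam=rng.integers(20, 200, n_parts)[:, None],
                      size=(n_parts, n_months)).astype(float)

window = 3
# Forecast month t from the mean of the previous `window` months
forecast = np.stack([history[:, t - window:t].mean(axis=1)
                     for t in range(window, n_months)], axis=1)
actual = history[:, window:]
mape = np.mean(np.abs(actual - forecast) / np.maximum(actual, 1), axis=1) * 100

threshold = 25.0  # example accuracy cut-off, in percent
colors = np.where(mape <= threshold, "green", "red")
plt.scatter(np.arange(n_parts), mape, s=mape * 20, c=colors, alpha=0.6)
plt.axhline(threshold, linestyle="--")
plt.xlabel("Part number (index)")
plt.ylabel("Forecast error (MAPE, %)")
plt.title("Forecast error per part: moving-average baseline")
plt.show()
```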
- Applying a Pretrained Forecast Model
When we apply our pretrained AI forecast model, the system recalculates the forecast and significantly improves accuracy. For example:
- Forecast error drops from 124% to 14%.
- This translates into lower safety stock requirements while maintaining the same service level (see the sketch after this section).
- For organizations already expert at forecasting, this model improves accuracy by around 8%, but for those not yet mature in forecasting, the improvement can be much greater.
- Business Value
- Time savings: No need to manually apply seasonality profiles or adjust models.
- Inventory optimization: Better forecasts lead to more efficient stock planning.
- Scalability: The same engine can be reused across different forecasting scenarios.
This is a great example of how predictive AI can be made accessible and impactful—even for organizations that don’t have deep forecasting expertise.
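For readers who want the arithmetic behind the safety-stock point, the sketch below uses the textbook relationship between service level, forecast-error variability, and safety stock. This is an assumed standard formula, not a description of IFS internals, and the 124/14 figures are reused purely for illustration (in the demo they are percentage errors, not standard deviations).

```python
# Back-of-envelope: safety stock ~ z * sigma_error * sqrt(lead time), so at a
# fixed service level, stock scales linearly with forecast-error variability.
from scipy.stats import norm

service_level = 0.95
z = norm.ppf(service_level)   # ~1.645 for a 95% cycle service level
lead_time_periods = 4         # example lead time, in forecast buckets

def safety_stock(sigma_error: float) -> float:
    return z * sigma_error * (lead_time_periods ** 0.5)

# Illustrative only: if better forecasts shrink the error std-dev from 124 to
# 14 units, safety stock falls proportionally at the same 95% service level.
before, after = safety_stock(124.0), safety_stock(14.0)
print(f"before: {before:.0f} units, after: {after:.0f} units "
      f"({(1 - after / before):.0%} reduction)")
```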
Discussion between Lance, Tom and Örjan:
Lance: We're currently in the middle of our upgrade and are debating whether to move to 25R1 on-prem or stay on the 24R2 path. We're hearing mixed signals about what's available for on-prem users in 25R1, and that uncertainty is making our decision more difficult. Gaining clarity on this would help us determine the best direction.
Örjan: Yes, this is available on-prem and currently running in our environment. If you have your own Demand server on-prem, you can utilize it there. However, it's a SKU-based model, so you need to purchase the forecast model separately—it's a distinct SKU that must be added to your solution if you don't already have it. It's not a consumption-based model.
Tom: That's interesting, especially since we were discussing consumption-based models earlier. So just to confirm—this is a standalone SKU, correct? And to clarify, is this available in 25R1 or 25R2?
Örjan: Forecasting is already available in 24R2, so 25R1 is also good to go. What I mentioned earlier refers to the generic forecasting service that will support other use cases across different application teams in R&D. However, that service will not be available on-prem.
To clarify: with 25R1, you can access all IFS AI services using the hybrid model. This means remote installations can utilize IFS Nexus or IFS AI services, but these services are cloud-based—you’re calling into the IFS service, not running it locally. The ability to benefit from these services in a remote setup is a new feature introduced in 25R1.
Tom: One thing that stood out to me—something Örjan touched on earlier—is the potential for cost savings, particularly in stock or cash flow. But what really caught my attention, especially as a general user of AI, is the ability to access expertise without being an expert yourself.
Traditionally, organizations without this kind of technology would need to hire specialists or invest significant time and effort into developing forecasting capabilities. In manufacturing or supply chain businesses, that’s expected. But for companies where forecasting isn’t a core focus, AI provides access to that functionality without the need to build it from scratch.
That means you can leverage advanced forecasting without dedicating resources to develop or maintain it internally. It’s like having an expert built into your system, which is a huge advantage. Does that make sense?
Örjan: Many traditional manufacturing companies are rooted in rule-based systems. These approaches—like MRP and scheduling—originated in the 1950s and 60s and have shaped how manufacturing operations are run efficiently. The principles and terminology from that era still guide much of today’s thinking.
However, with AI tools, you no longer need deep expertise in those rules. Instead of evaluating outcomes based on predefined parameters, the focus should shift to the business value AI can deliver—even without fully understanding the AI itself. That’s a mindset shift.
We’ve seen companies try to make AI replicate their old rule-based processes—essentially forcing AI into legacy frameworks. But that’s not leveraging AI’s true potential. The goal isn’t to make AI follow the old rules, but to let it uncover new, more efficient ways of operating.
So Tom, your point is spot on. It’s about moving from rule-based thinking to value-based outcomes, and that transition is key for organizations looking to truly benefit from AI.
Tom: This ties back nicely to our original discussion around the maturity model. For me, AI accelerates an organization’s ability to operate at a higher level of maturity—especially in areas where they may not have had deep expertise or capacity before. Traditionally, achieving that level of maturity would require significant time and resource investment.
With AI, much of the heavy lifting is handled by the technology. That allows teams to shift their focus from building and maintaining processes to governing and optimizing them. In essence, AI becomes the creator, and people become the stewards—freeing up resources to focus on revenue-generating or cost-saving activities.
This shift can make businesses stronger, faster, and more agile. Of course, as Dan mentioned earlier, it’s not something you can just jump into—it requires thoughtful planning and a phased approach. But the good news is, there are plenty of areas where you can start small and begin realizing value quickly.
Örjan: Governance is becoming an increasingly important topic in the AI space. We're seeing a rise in smaller companies offering tailored AI solutions for specific use cases, which is great for flexibility and innovation. However, once those solutions are in place, the responsibility for managing and governing them falls on the organization. That includes everything from compliance and data integrity to ethical use and ongoing maintenance.
This is why governance is gaining traction—organizations are realizing that implementing AI isn’t just about building models, but also about managing them responsibly. And that’s where IFS can really add value. There's a growing expectation that IFS will bring not just AI capabilities, but also the governance framework needed to support them—especially tailored to our focus industries.
Tom: You briefly mentioned shop order planning earlier, and I think it would be interesting to explore that further as a different example of how AI is being applied within our use cases—particularly those we're building out using IFS technology.
Örjan: Manufacturing Scheduling and Optimization (MSO) is about automating production planning by setting goals for the PSO (Planning and Scheduling Optimization) engine. The objective is to minimize late and early orders while maximizing resource utilization—especially for bottleneck resources.
Traditionally, planning has focused on optimizing constrained resources, such as ovens or stamping presses, based on the Theory of Constraints (as outlined by Eliyahu Goldratt). By improving the efficiency of these resources, businesses can avoid costly investments in additional equipment.
Beyond resource optimization, customers are now seeing significant value in reducing the time spent on planning itself. AI and MSO enable more automated and responsive planning, which is especially beneficial when labor is the primary resource. Labor can be flexible, but it often requires specific skills and availability, making it a critical planning factor.
The key is to move away from rule-based thinking and instead focus on how AI can drive more efficient planning. Teams should experiment with new approaches and evaluate how increased productivity can be leveraged—whether to improve service levels or achieve other strategic goals.
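As a concrete, deliberately simplified reading of that goal-setting idea, the sketch below scores a schedule by penalizing late and early completions and rewarding bottleneck utilization. The weights and the formulation are assumptions for illustration, not the PSO engine's actual objective.

```python
# A simplified scheduling objective: late orders dominate the cost, earliness
# ties up stock, and high bottleneck utilization is rewarded. Lower is better.
from dataclasses import dataclass

@dataclass
class Order:
    completion: float  # scheduled completion time (days)
    due: float         # due date (days)

def schedule_cost(orders: list[Order], bottleneck_utilization: float,
                  w_late: float = 10.0, w_early: float = 1.0,
                  w_util: float = 5.0) -> float:
    lateness = sum(max(0.0, o.completion - o.due) for o in orders)
    earliness = sum(max(0.0, o.due - o.completion) for o in orders)
    return (w_late * lateness + w_early * earliness
            - w_util * bottleneck_utilization)

orders = [Order(12, 10), Order(9, 10), Order(15, 14)]
print(schedule_cost(orders, bottleneck_utilization=0.85))
```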
Use Case: Welding Robot Breakdown
- Scenario: A welding robot breaks down. Sensors trigger a maintenance request, creating a scheduling disruption.
- Visualization: The scheduling interface highlights overloaded work centers. Red indicators show orders at risk of delay.
- Action: Orders are rerouted to manual welding stations. The PSO engine recalculates the schedule.
- Result: The overload on the robot's work center is resolved, but manual welding becomes overloaded due to limited labor.
Simulation and Scenario Planning
- A simulation scenario is created to evaluate labor needs.
- Capacity for welders is increased from 4 to 30.
- The system compares the original and adjusted scenarios.
- Outcome: The number of late orders remains at 5, but delays are reduced from 5 days to just 1 day—potentially an acceptable solution.
- If needed, further simulations can be run to explore additional options.
This approach allows planners to quickly respond to disruptions, simulate outcomes, and make informed decisions—all while reducing manual effort and improving agility. It’s a powerful example of how AI-driven MSO can transform manufacturing operations.
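To show the shape of such a what-if comparison, here is a toy simulation (with invented job data, not the demo's) that dispatches the same order set against two labor capacities and reports late orders and worst-case delay:

```python
# Toy scenario comparison: same jobs, two welder capacities. Jobs are
# dispatched earliest-due-first to the next available worker; we then count
# late orders and the worst delay in each scenario.
import heapq

def simulate(jobs: list[tuple[float, float]], workers: int):
    """jobs: (duration_days, due_day); greedy earliest-available dispatch."""
    free_at = [0.0] * workers  # next free time per worker
    heapq.heapify(free_at)
    late, worst_delay = 0, 0.0
    for duration, due in sorted(jobs, key=lambda j: j[1]):  # earliest due first
        start = heapq.heappop(free_at)
        finish = start + duration
        heapq.heappush(free_at, finish)
        if finish > due:
            late += 1
            worst_delay = max(worst_delay, finish - due)
    return late, worst_delay

# Five urgent orders that will be late regardless, five with slack:
jobs = [(3.0, 2.0)] * 5 + [(3.0, 10.0)] * 5
for capacity in (4, 30):
    late, delay = simulate(jobs, capacity)
    print(f"{capacity} welders: {late} late orders, worst delay {delay:.1f} days")
```

With the invented numbers above, the late-order count stays the same in both scenarios while the worst delay shrinks, echoing the pattern in the demo: added capacity cannot rescue orders that are already past due, but it compresses how late they are.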
Tom: What I really appreciate here—and also in the earlier example—is how these use cases go beyond just manufacturing. Whether it's managing demand on assets, individuals, or other resources, these AI applications are relevant across a wide range of industries. The growing list of use cases within IFS reflects our commitment to making AI more practical and accessible for our customers.
As we approach the top of the hour, I want to open the floor for any final questions for Örjan. He's generously shared his time today, offering real-world examples of how AI is being applied both within IFS and by our customers. If there's anything you'd like to ask or clarify, now's a great time.
Discussion between Kindvall and Örjan:
Kindvall: I attended the Nordic Conference recently, and Dan Matthews mentioned that AI capabilities will be implemented in some form for on-premise environments. As you know, Örjan, this is quite important for us. Is there any confirmed timeline for when this will happen?
Örjan: With 25R1, we have access to the remote hybrid option. From what I understand, the timeline for full on-premise AI support is expected sometime in 2026. However, there hasn’t been any new information shared at the Connect event, and further communication will likely follow once the details are finalized.
Tom: That’s a really good question, and it’s highly relevant to many of our customers. Please don’t think we’ve overlooked it—on the contrary, we understand how critical this is, not just for you but for many others as well. Rest assured, it’s a priority for us, and we would never forget about your needs.
Kindvall: I just wanted to add this because it’s quite important to me. Essentially, you're extending the application’s capabilities with functionality that may not be available to us unless we gain access to it somehow. Without that, we risk reaching an end-of-life situation where we can’t keep up or fully utilize the platform. So it’s critical that we find a way to access those capabilities as well.
Tom: That’s absolutely right, and we’re fully aware of the importance of this. Organizations like yours—and many others in similar situations that rely on on-premise solutions—are top of mind for us. It’s a well-raised and relevant concern.
If anyone else has questions or comments, feel free to speak up. Otherwise, I hope you found this session useful—I certainly did. Even being part of the organization, I found the insights shared today really fascinating.
Örjan, thank you very much for your time and for sharing those real-world examples. For anyone who wants to learn more, where should they go, who can they speak to, and what's the best next step?
Örjan: Another key takeaway is that you don’t need to be an AI expert to start using or experimenting with it. If your data is in good shape, you can get started quickly and begin learning from the results.
AI is embedded into the tools, so using it and realizing its benefits isn’t complex. The real focus should be on how you adopt it—getting your team aligned and understanding the business value it can deliver. That’s the core of the discussion: not just using AI, but knowing how it will benefit your organization.
Tom: Some great points and takeaways there—very applicable. As we’ve discussed before, the key is to just get started. If you have access to AI solutions and you're on the latest version of IFS Cloud, take advantage of it. If not, reach out and ask. We’re more than happy to connect you with the right people.
If anyone on the call has questions or isn't sure about something, feel free to message or email me—or reach out to Örjan. We'll point you in the right direction. That's exactly why we run these sessions: to keep our customers front of mind and ensure you're supported every step of the way.
Örjan, thank you again for sharing your time and insights with us. And thank you to everyone who joined today. If there's anything you'd like us to explore further—whether it's this topic or something else—please let us know. These sessions are for you, and we want to make sure you get the most value out of them.
Next CollABoratives:
- 24 June 2025 10:00 AM US Eastern / 15:00 BST / 16:00 CEST
IFS Assets CollABorative: Tech Talk - Using Operational Intelligence to drive predictive maintenance
If you are an IFS Customer and you would like to join the CollABoratives, please click here to fill out the form.