
S&OP MasterClass™

E22: How to balance AI in supply chain planning with human judgment

Welcome to the S&OP MasterClass.

The podcast dives deep into integrated business planning and supply chain planning. Episodes feature seasoned experts who shed light on these topics in an understandable way.

Read more about PERITO IBP


Using AI where it counts, keeping humans where it matters.

AI is moving fast, and it’s changing the way supply chains are run. But a big question remains: how do you tap into its power without losing control or replacing the judgment you and your team bring to the table? In this episode of the S&OP MasterClass, Søren Hammer Pedersen sits down with Benjamin Obling, a supply chain expert with 16 years at Roima.

You’ll hear real examples of how companies are using AI in demand forecasting and inventory planning without cutting people out of the process. If you’ve been wondering how to get AI working for you, without creating blind spots or overreliance, you’ll want to listen in. We talk about practical ways to build trust, add smart governance, and use AI where it makes sense while keeping human judgment front and center.

In this episode, you’ll learn about:

  1. How to balance AI automation with human judgment in planning.
  2. Key AI-driven improvements in demand forecasting and inventory management.
  3. Importance of trust and governance in adopting AI strategies.
  4. How to determine what to automate and where humans are needed.
  5. Embracing AI while maintaining control through guardrails and alerts.

This podcast is brought to you by Roima and produced by Montanus.

In this episode

Listed below are essential timestamps from the podcast episode to make it easier for you to find the topics that interest you.

00:44 The rise of AI in supply chain planning

03:39 Current challenges in AI integration for planning

04:51 Trust and governance in AI predictions

07:03 Finding starting points for AI automation in planning

08:45 Leveraging AI across different planning disciplines

09:13 Identifying critical areas for human oversight

12:19 Balancing automation and human judgment

16:56 Knowing when to trust AI predictions

19:18 Ensuring transparency in AI-driven processes

23:36 Impacts of AI on sales and operations planning

29:10 Future outlook for AI in supply chain planning

Full episode transcript

Søren Hammer Pedersen (00:12):

Hello, everybody. A warm welcome to this S&OP MasterClass from Roima. My name is Søren Hammer Pedersen. On a daily basis, I work with supply chain planning, and today I'm going to be your host for this podcast. The purpose of these S&OP MasterClasses is to dive into trending, interesting topics within supply chain planning, give you our perspective on these matters, and hopefully give you something that you can use in your company and in your daily work with some of these areas.

(00:44):

Today's topic is no different. Have you ever seen the movie I, Robot? Maybe some of you have, but basically, robots are taking over the world. You could say, if you look at LinkedIn, at events, at everything that's going on within supply chain planning, that AI is taking over the world, or at least it seems like it. So how do we work with this wave of AI coming upon us in our daily professional lives, and what can we do to avoid the robots taking over? That is the topic here today. I hope you enjoy it, and luckily for you, I'm not alone in the studio. As always, I've brought my good friend and colleague, Benjamin Obling. Welcome, Benjamin.

Benjamin Obling (01:31):

Thank you.

Søren Hammer Pedersen (01:31):

We’re going to dive into this now, but Benjamin, before we get into this, the few people that don’t know Benjamin Obling, a few words on your background on this matter.

Benjamin Obling (01:42):

Yeah, sure. Benjamin Obling. I've been working with the Perito IBP product at Roima for the last 16 years. I'm overall responsible for onboarding new clients onto the platform and also for continuous development and improvements with our existing client base.

Søren Hammer Pedersen (01:58):

Yeah. Great. Thank you. Let's dive into it. With this wave of AI coming in, the main question is, of course: how do we balance this automation with human judgment so we don't end up in this I, Robot situation? But before we get into that, maybe the first area we could touch upon is why people should care about this. Of course, they've heard a lot about AI, and there's a lot of talk about AI transformation, but what is it that you are seeing out there now that makes you think people should pay attention here?

Benjamin Obling (02:39):

It's hard not to give a trivial answer to that, because of course there is a huge potential, and that's obvious to everybody, so of course we want to hunt it. When you say we don't want the robots to take over, not in a cruel way, of course, but there are a ton of tasks out there where we certainly do want them to take over, where they can do a better job and we can do something which is more fun, work a bit more strategically with the supply chain, for example, instead of pushing data around in Excel sheets and so on, which the AI can do for us. So the short answer is there is certainly a huge potential. We just need to make sure that we have a grasp of what is going on, so they don't take over and do a lot of bad things very fast.

Søren Hammer Pedersen (03:21):

So what you're saying is that of course we need to dive into the development we have, and I don't think people disagree on this, but what are the things that are lacking at the moment as we see this transformation towards AI in planning?

Benjamin Obling (03:39):

One thing you could say is missing is information, knowledge and so on. I guess everybody is a bit like, okay, where will it really make a difference in supply chain? We can certainly see how we can use the chatbots for making presentations, summarizing meetings, coming up with proposals on how to improve the supply chain, et cetera. All of that is fantastic, but where we really see it now is, for example, in demand forecasting: improving it, automating it, improving the accuracy, so you can actually automate it, remove a lot of the manual work and improve the accuracy at the same time. You can do the same in inventory planning, or in master data follow-up, predictions, et cetera. There really are a lot of ways you can improve the supply chain using it.

Søren Hammer Pedersen (04:25):

Yeah. Not to put words in your mouth, but I guess that with all these new functionalities and opportunities coming in, there's also an element of trust and governance, and that's where some of the issues might come in. What's your experience there? Are things just well adopted into the company, everybody's happy, let's move ahead with all this? Or what do you see in practice?

Benjamin Obling (04:51):

Yeah, no, I agree, because of course there's all the good stuff that we can obtain with AI; we can improve the accuracy of the forecast, to take that as a concrete example. But the pitfall, or the risk, is that we'll have less stable models, less stable predictions, because the AI will pick up on things, what is signal and what is noise, back to the old disciplines of looking at historical data and different data sources, bringing it together, making a new forecast. If we just feed it data without being critical about how we teach the algorithm what the AI should do, we might end up with something that goes completely crazy in some of the forecasts.

(05:36):

That's really where we need these guardrails around what the AI predictions are doing, because in the old world you had, let's say, predictable algorithms, where you have ”if this, then that” rules, or you're calculating an average or something, and you get exactly the same result every time. That's what we're coming from, and you even hear people saying computers are never wrong; they repeat everything completely consistently. But that's not true anymore, because with AI it will change: there is randomization built into the AI, and it's a complete black box by definition.

(06:17):

So you could say the predictability of the prediction is something that we'll talk a bit about here. How stable is the prediction? How much can we trust it? And how do we avoid it going completely crazy and making a forecast or a safety stock or a purchase proposal which is completely out of line, because it's interpreting as a signal something which was actually noise?

Søren Hammer Pedersen (06:39):

Yeah. Okay, so there's a whole discussion around fencing it in, and let's come back to that, but maybe the interesting point here is: if you know you need to go this way, how do you find out where to start, what to automate and what not to automate within your supply chain planning?

Benjamin Obling (07:03):

You could say one thing is, of course, looking at what the possibilities are and basically listing those: demand forecasting, safety stock setting, supply optimization and so on, for example network optimization. So we have a range of possibilities on one hand. On the other hand: what are the challenges you have in your company? Basically you could then say, ”Okay, so we do have a problem with our demand forecast. We don't have the accuracy we would like, or we believe we could get to a higher accuracy. Okay, what would be the benefit? What would be the business case around that?”

(07:40):

So I would say: back to old-school business teaching. Build a business case, find out what the different initiatives are, what the impact is, what the business case is, instead of just running with it. There is still a lot of hype, and I think that is going to be reduced quite soon, because people are starting to want to see results instead of just something that is made with AI. For a while it was almost: if it's something with AI, we want one of those.

Søren Hammer Pedersen (08:13):

Two, maybe.

Benjamin Obling (08:14):

Let's have two of those. Yeah, exactly. But back to basics: make a business case. How can we release workforce, how can we improve the accuracy, and what is the business case of that?

Søren Hammer Pedersen (08:26):

Okay. From my perspective, or at least from some of the talks I have with companies, there's also a different level, because you're talking about, let's call it level one, the planning: how can we optimize the planning? But there's also the element of the connected supply chain.

(08:45):

Isn't there, in your opinion, also a topic around how we can utilize what we're doing within this planning discipline in other planning disciplines in our company, or in other systems, in the sense that these new AI capabilities might provide some excellent data? Also thinking about the next level: okay, who's going to use this, and how can they utilize these new opportunities? That could be a deciding factor in where to start, where we can connect things, I guess.

Benjamin Obling (09:13):

Yeah, absolutely. Absolutely, and bringing the data together, and bringing the insights and the AI predictions together, could be something, to give concrete examples. You have the demand forecast, so now we have a higher forecast accuracy using AI. Now we know what we are going to sell. How can we utilize that? We are obviously utilizing it in purchasing and production planning, but could we take it a step further? Say, when we have inbound deliveries in the warehouse, could we also make a prediction on how the goods are eventually going to be picked, so that when we receive them, we can decide where to place them in the inventory and reduce the number of handling steps?

(09:56):

That would be one concrete example of where you can utilize the predictions across disciplines. If you think, okay, now we can make pretty good predictions on basically everything in the supply chain using AI, how can we then cross-utilize that? Because there will also be places where the prediction doesn't matter, because it doesn't change anything, and others where there is a business case, as in that example.

Søren Hammer Pedersen (10:18):

Yeah. So when we start to list where to automate: yes, biggest pain, business case, and could we increase the business case by utilizing this elsewhere? And then maybe as the last step, also the critical decisions, where the human is needed, mapping out already here that this is an area that is highly critical for us, not only a pain point, so there is a need for a human gate here as well.

Benjamin Obling (10:55):

Yeah, exactly. Looking at these guardrails, one thing is when you look at, to be concrete again, the forecast, the specific forecast for a specific SKU. One limitation, or fencing, or guardrail here could be to say: how much can it actually increase or decrease the forecast, so that we know it won't go completely nuts? Okay, that's one way. We need to know that it's in the ballpark of what we've seen in the history. There can be some trend, seasonality, et cetera, but it can't go into an exponential increase just because it's seeing something. That could be one guardrail at a very, very practical level.
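The per-SKU guardrail described above can be sketched in a few lines. This is a minimal illustration, not a Perito IBP feature; the band of plus or minus 50 percent around the recent historical mean is an arbitrary assumption you would tune per product.

```python
def clamp_forecast(ai_forecast, history, min_ratio=0.5, max_ratio=1.5):
    """Fence an AI forecast inside a band around the historical mean.

    min_ratio/max_ratio are illustrative guardrail settings: here the
    forecast may deviate at most +/-50% from recent average demand.
    """
    if not history:
        return ai_forecast  # nothing to fence against yet
    baseline = sum(history) / len(history)
    return max(baseline * min_ratio, min(baseline * max_ratio, ai_forecast))

# History averages 100 units/month; the AI suddenly predicts 400.
recent_demand = [90, 110, 95, 105]
print(clamp_forecast(400, recent_demand))  # capped at 150.0
```

However exotic the model behind `ai_forecast`, the planner knows the released number can never leave the fence.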

(11:32):

Another thing is to define the playing field of the AI. To say, for example, in the product portfolio: what can we forecast? What should be on inventory? What is the playing field? There we need human input, because should item number XY be on stock, for example? Is that something we'll offer to the customers from stock? Of course, the AI will have a very good input on whether we should move it from make-to-stock to make-to-order and back, and that is something that should be automated and where we should utilize the AI, but making that decision is actually a commercial decision. What is the value proposition that the company has towards the customers? What can they get off the shelf, ready? And in which areas can they actually wait out the lead time, for example?

Søren Hammer Pedersen (12:19):

So that means these guards around the AI are necessary, but you need to think about that already in your prioritization of what to automate. If you have our little checklist there, you say: okay, it's a pain, it's a priority, there's a big benefit. Yes, we can automate it, check, check, check. But then you actually need those two extra steps before you do anything: okay, how can we fence the AI in on this automation, and how can we optimize the use of the human element? Then you know you have all those steps in place to make sure you end up in a good place.

Benjamin Obling (13:05):

Yeah, because if you just go ahead and throw yourself into the arms of a fantastic AI algorithm, it will make you a new forecast or it will determine your safety stocks. And okay, it's just calculating safety stocks? No, it's not. It's actually determining your inventory strategy and your value proposition towards your clients. Don't you want to decide something on that? Yes, absolutely. But you could say that on the vast majority of SKUs, which also means the vast majority of the work, we can automate completely, or to a very large extent. So it's basically the changes that need attention.

(13:42):

Defining the playing field: okay, what do we actually have as a product portfolio in the company? What is phasing in, what is phasing out, et cetera. Again, of course, the AI can provide alerts on this, proposals, et cetera, but then you have the humans deciding on it and setting the guardrails around it. Then you can automate it, because if you just let it loose... You could say the beauty and the horror of automation is that you can also do a lot of very structured, scaled error creation very fast, and we don't want that.

Søren Hammer Pedersen (14:16):

No, of course. I think this is a vital point, because, in all modesty, where this podcast will bring some value to people out there is this: what I've experienced is that steps one, two and three are in place, and also the element of accepting that AI is a black box. I think people have adopted that by now. But what is really missing is the two last steps, meaning how do we guard it, and how do we ensure that there is trust from the people and the right decisions? That is the main point. If you change anything out there, remember those two things, because at the moment I think there's a lot of frustration around not having transparency into the black box. Why are we seeing the results or recommendations that we are? You're not going to get that transparency, so you need to guard it instead.

Benjamin Obling (15:12):

Yeah, and that can really be both. It can be guardrails, so you cannot go beyond the boundaries of where we've allowed it, the playing field of the AI, you could say. It can also be alert-driven: okay, if we are now making proposals to purchase or increase inventory by more than X euro, for example, then a pop-up says this needs approval, et cetera, so there is a workflow around it. Because, depending on which algorithms, which AIs, you choose to use, you could actually end up in a place where it would be better just to use an average for the forecast.

(15:50):

A rolling 12-month average is super boring, but it's very stable. It won't go completely nuts. You know approximately what will happen. Of course, you will have some outliers from time to time that will increase it a bit and so on, but the AI could interpret that as a completely new level, and you could throw a very, very high forecast into the supply chain. In order to have peace of mind about what's running: yes, it's super complicated, it's a black box, it's a network of weights, et cetera. But as long as you know that it's operating within these fences, then you know, okay, it can't be worse than that. I think that is super important. Otherwise, you might even be better off just doing something really simple.
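The alert-driven workflow described here, flagging proposals whose value deviates too far from a boring-but-stable baseline, can be sketched as follows. The euro threshold and the rolling 12-month average baseline are illustrative assumptions, not recommendations.

```python
def needs_approval(proposed_qty, monthly_history, unit_cost, max_delta_eur=10_000):
    """Flag an AI purchase proposal for human review when its value
    deviates from a rolling 12-month average baseline by more than an
    (illustrative) euro threshold. Cheap items can swing freely in
    quantity; expensive ones trigger an alert."""
    window = monthly_history[-12:]
    baseline_qty = sum(window) / len(window)
    delta_eur = abs(proposed_qty - baseline_qty) * unit_cost
    return delta_eur > max_delta_eur

# A ton of cheap O-rings: large quantity swing, negligible value -> no alert.
print(needs_approval(5000, [1000] * 12, unit_cost=0.05))  # False
# An expensive long-lead component: small swing, large value -> alert.
print(needs_approval(60, [20] * 12, unit_cost=400.0))     # True
```

The point is that the gate is value-based, so the workflow only interrupts the planner where a wrong decision is expensive.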

Søren Hammer Pedersen (16:37):

Yeah. So accepting the black box also means acting upon it, and that leads me to my next question, because I think one of the big questions is, of course: when do we trust the AI? When do we trust the model, and when not to, basically?

Benjamin Obling (16:56):

Yeah. I think that's again back to the predictability of the prediction. How does that make sense? Well, it's really about: how much does the prediction change when the input changes? If you have an area which is actually quite easy to predict, then it's also not a problem to let the AI take over, because it will most certainly do a very good job in almost all cases. If it's something which is actually pretty difficult, cases like lumpy demand, slow movers or medium demand and so on, where sometimes it's spiking, sometimes it's spiking two, three months in a row: is that a new level? Is it just a temporary thing? In some cases, if you feed it data and show it your product portfolio, it will be able to capture those tendencies. In other cases, it might interpret that as a new level, and then it will increase the forecast, or the safety stock, hugely.

(18:00):

I think those are really the risks or pitfalls here, and in those cases you need to have your judgment around it. But again, if you have defenses on one hand, or on the other hand you know that there is a governance, a technical governance, you could say, around you cannot purchase for more than X euro compared to history, blah, blah, blah, a more simple model, then you know: okay, I'll be alerted if something crazy happens. If we suddenly purchase a ton of, I don't know, this component which is very expensive with a long lead time and so on, then I'm alerted. If I buy a ton of O-rings that don't cost anything, okay, never mind.
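The ”predictability of the prediction” can even be measured directly: jitter the input history slightly and watch how much the output moves. A sketch under stated assumptions; the noise level and trial count are arbitrary, and `predict` stands in for whatever forecasting model you are evaluating.

```python
import random

def prediction_stability(predict, history, trials=200, noise=0.05, seed=1):
    """Jitter each point of the demand history by up to +/-noise and
    record the range of the resulting predictions. A stable model shows
    a small spread; one that chases noise shows a large one."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(trials):
        jittered = [x * (1 + rng.uniform(-noise, noise)) for x in history]
        outputs.append(predict(jittered))
    return min(outputs), max(outputs)

# A rolling average barely moves when the input is jittered.
rolling_avg = lambda h: sum(h) / len(h)
low, high = prediction_stability(rolling_avg, [100] * 12)
print(high - low < 10)  # True: the spread stays inside the noise band
```

Running the same check on an AI model over lumpy, slow-moving SKUs would reveal exactly the instability discussed above, before the forecast ever reaches the supply chain.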

Søren Hammer Pedersen (18:42):

Yeah, that's a fair point. Back to when to trust and when not to: of course we can see the results afterwards, but that's a bit too late. Are there any mechanisms that give us that level of transparency or certainty, where we can work more with transparency through reporting or other means, so that in our daily planning we get the gut feeling that we are going in the right direction planning-wise?

Benjamin Obling (19:18):

Yeah, you could say one element is also to separate the planning into different elements. Again, to be more concrete: when we purchase components, let's split that up. We have a safety stock, we have a lead time, we have an MOQ, we have a bill of material, we have a forecast that we have created. A lot of this will be created by the AI, but it's still put together and calculated at the end using an MRP engine, for example, because then we know that it will do the calculations, ”if this, then that,” multiplying, et cetera, completely predictably, you could say.

(19:59):

So now the prediction of the purchase is predictable, and then we make sure that for all of these different elements, forecast, safety stock, MOQs, et cetera, we have defenses in place so we know it's not exceeding X amount of euro in change compared to a more simple model which is not AI-driven, like a rolling monthly average, for example. And then at the end we have this governance around it: okay, I will be told if it's now purchasing for more than X euro compared to what is normal, defined, you could say, by a more predictable model. Then the AI might be right, but we'll just go in and review and say, ”Okay, in this case, it is actually a complete jump up in sales. We need to purchase a lot more. It was super good that we reacted this fast,” or that the AI did. In this case, yes, we'll go ahead, but it won't do it by itself.
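The split described here, AI-generated inputs combined by a fully deterministic MRP step, might look like this in miniature. The netting and lot-sizing are deliberately simplified to a single MOQ rounding; a real MRP run would also handle lead-time offsetting and bill-of-material explosion.

```python
import math

def purchase_proposal(forecast, on_hand, safety_stock, moq):
    """The deterministic 'if this, then that' part of an MRP run: the
    forecast and safety stock may come from an AI, but the arithmetic
    turning them into an order is completely predictable."""
    net_requirement = forecast + safety_stock - on_hand
    if net_requirement <= 0:
        return 0  # stock already covers demand plus buffer
    # Round up to a whole multiple of the minimum order quantity.
    return math.ceil(net_requirement / moq) * moq

# AI-supplied inputs, predictable output:
print(purchase_proposal(forecast=120, on_hand=40, safety_stock=30, moq=25))  # 125
```

Because this final step is transparent, any strange order quantity can always be traced back to one of the AI-generated inputs rather than to the calculation itself.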

Søren Hammer Pedersen (20:57):

And I guess that is also something that over time will increase the effectiveness of this whole automation. If you have seen proof through data that it actually works, that it's not totally crazy, and that if it does go crazy, we're either going to get an alert or it's going to be stopped in its tracks, then this will build the trust we need in the organization. People will stop being in a slight panic over what comes out of the machine.

Benjamin Obling (21:28):

Exactly, exactly, because you could say the alternative to that, if we throw ourselves completely into its hands, is that the AI would learn by what is called reinforcement learning. That means it will try out some things. It has its full network, it's already trained on the history of the behavior in the supply chain. Then it will start to make some predictions on when we should purchase and produce and so on. Most of those predictions will be very good, and that's perfect. But if we're using reinforcement learning, you might...

(21:59):

Or actually, the algorithm, the AI, is then told to, from time to time, try something a bit different, go a bit outside, because it needs to test: are the weights in the AI correct? Okay, then it needs to try something new. Okay, why don't we purchase a bit more? Let's try it. Let's see how that flies. Okay, that didn't go very well, so we won't do that next time. It makes sense, but you could say we don't want to try that out and have that trial and error. In many ways, that's what we do with the human way of ordering. So let's try...

Søren Hammer Pedersen (22:33):

Let’s try to be better.

Benjamin Obling (22:36):

Exactly, exactly, and then fence it, and have this question upfront before you purchase: okay, are we sure? Because here, the predictability of the prediction, should I purchase, yes or no, the uncertainty is actually quite high. We don't know from the algorithm whether this is actually correct, but it's a bet. It could be. Let's try it out, but let's have that workflow around trying it out.
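The exploration behaviour of reinforcement learning that the two are wary of is easy to picture as an epsilon-greedy rule: mostly exploit the best-known order quantity, occasionally gamble on another one to test the learned weights. A toy sketch with made-up scores, showing exactly the trial-and-error step you would want a workflow gate in front of.

```python
import random

def choose_order_qty(scores, epsilon=0.1, rng=random):
    """Epsilon-greedy selection: with probability epsilon, explore a
    random candidate order quantity (the risky trial-and-error step);
    otherwise exploit the quantity with the best learned score.
    `scores` maps candidate quantities to illustrative learned values."""
    if rng.random() < epsilon:
        return rng.choice(list(scores))   # explore: try something new
    return max(scores, key=scores.get)    # exploit: best known choice

learned = {50: 0.2, 100: 0.9, 200: 0.4}
print(choose_order_qty(learned, epsilon=0.0))  # 100: pure exploitation
```

In a live supply chain, every exploratory pick is a real purchase order, which is why the episode argues for routing those picks through an approval workflow instead of letting them execute silently.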

Søren Hammer Pedersen (23:01):

Let’s hope it’s a good bet.

Benjamin Obling (23:01):

Yeah.

Søren Hammer Pedersen (23:04):

Good. I think that's really, really good advice on, you could say, the daily and weekly planning, but there's also an element around governance. How does this affect our standard sales and operations process? Let's say we have a monthly process. Is it just business as usual when AI comes in, or are there new possibilities or concerns that we need to be aware of in how we organize ourselves around a sales and operations planning process?

Benjamin Obling (23:36):

I think to a large extent, it's basically enabling that now we can really do it, and we can do it with higher quality. We can get better forecasts, we can get better capacity simulations, et cetera, into the future. We can make what-if scenarios: okay, what happens if this turns out in a different way? How will that trickle through the supply chain? So one capability is that it is a lot easier to set up those scenarios.

(24:07):

Another way is to interpret the results: basically ask the AI, ”Okay, please compare these two scenarios. How does that look?” Make the bullets, make the PowerPoints, et cetera. So all the material for the S&OP and the analysis is easier and faster to produce, and the whole input for the planning process, the forecast, the safety stocks, the MOQs, the lead times, et cetera, will be improved by it, with a lower workload, you could say.

(24:41):

What we see, for example, in demand planning is that the manual work around the forecast is reduced by 30 to as much as 70 or 80% compared to before using these AI models, so it's very significant. The people working with creating the forecast, creating the supply chain scenarios and so on are normally a very scarce resource in supply chains; they're good analysts, and there are certainly a ton of things we want to use them for, so let's use them for that instead: defining the playing field, the value proposition towards the clients, and so on.

Søren Hammer Pedersen (25:23):

And I guess that speaks to how we're actually seeing some of the changes in the actual process steps, because if you take a traditional S&OP process, of course, it's very much about conducting tasks: okay, prepare the forecast. We need to use our people in a bit of a different way. The steps become more about controlling the AI or approving the AI's recommendations rather than actually doing the work yourself. So it becomes more of an analyst task in our process than a do-it-yourself-in-Excel kind of thing.

Benjamin Obling (26:04):

Yeah, and some of it is, of course, maybe a bit more boring in that sense: okay, all the forecasting is done for you. The inventory optimization is 90-something percent already done. You have the capacity overview, all of that, where you would maybe previously push data around in Excel and so on. You don't need to do that anymore. You'll have some alerts: okay, now we've exceeded the boundary of X euros, blah, blah, blah, and you respond yes or no to those alerts. Okay, that's maybe a bit trivial, and the analysis around it is what's interesting, but that's really only a small part of the work you would then do.

(26:42):

The larger part is then to say: okay, what if we change the service level towards the clients? What if we open up a new production line? What if demand in Germany increases by 10% on this brand? How will that trickle through? Preparing the decision material around that, for example, is a lot more fun and interesting, I think, than using ARIMA models and downloading data from SAP or whatever, which actually takes up, I don't know, 70% of companies' time.

Søren Hammer Pedersen (27:12):

Yeah, of course.

Benjamin Obling (27:14):

Waste of time.

Søren Hammer Pedersen (27:14):

Yeah. So we are changing the process, but maybe also a bit of advice from my side: remember the communication and the transparency when we move from the, let's call it, lower-level planning towards the executive end of S&OP. Remember that we probably have leadership, a management team in the company, that is well aware that AI is to some extent a black box and could be scared of what's going to happen here. So really remember to communicate how we control the AI within the supply chain planning, how we guard it, and how we ensure that we have this human validation as well. That can help us a lot in running this process, so we don't have this continuous backtracking of ”that can't be right” and things like that. Yes, we know, but this is how we control it.

Benjamin Obling (28:16):

Yeah, absolutely. And then, don't give up on your common sense. I even see the tendency of, ”Okay, that's a black box. That's okay. So it's 2,000. That's perfect. Let's purchase that.” Or take the whole project around a new AI setup in the supply chain: okay, it's something with AI, it's a black box, so you don't need to explain anything because we can't explain anything, so let's just go ahead and spend a lot of money on it.

(28:47):

Let's build the business case instead. What are we trying to solve? How is it actually going to solve it? Then we know that there is a deep learning network, a box, that we cannot understand, but what we can certainly understand is: what do we put into it? How do we validate the output? How do we fence the output? And how do we make alerts around the output so we can trust it? So yes, common sense and business cases still rule.

Søren Hammer Pedersen (29:10):

Yeah. I like that advice. Just keep your common sense. A bit trivial, but so valid. Good. I think that's really good advice. Benjamin, maybe as a last point here today, talk a bit about where you see this going now. There's so much coming out. I saw a very interesting example with ChatGPT of how much it had improved, within a short period of time, in answering the same questions. So things are moving rapidly here. Where do you see it going, from the state we're in now within supply chain planning, looking a few years ahead?

Benjamin Obling (29:56):

I guess the best advice is that we know it's going to change dramatically. It's very difficult to say where this will go, and what we have been talking about today will surely be at least partly wrong in two years. We just need to follow this very closely, and the advice and the ways that we set this up and use the AI will change almost from quarter to quarter as time goes by. An important part is certainly to build the business cases and make sure that you can realize the business case over a fairly short time horizon. If it takes five years before the business case turns positive, you probably shouldn't do it in this space, because things can look very different in two years.

(30:48):

But you could say that making sure things like master data are getting better, et cetera, those are things that you can certainly do. And then make sure that you can free up time. So automate as much as possible with the possibilities available right now, with a short time to business case. Then you free up the resources that can help you with the next steps, what will be there in one year, in two years, in three years, et cetera. We need to be quick on our feet and harvest the results fast.

Søren Hammer Pedersen (31:22):

Yeah. I think the worst advice we've ever given in a podcast is that everything we say will be outdated in two years' time, but it's the honest truth. That's how it is.

(31:33):

I think, Benjamin, thank you so much for your input. I think it was very interesting for people. My takeaway from this discussion is very much about remembering the steps we talked about in the beginning and how to utilize them. On numbers one, two and three, people really are pretty much spot on. They know they have to do something. They might build a business case. They might find out where it will make the most impact. But they forget, or at least don't build in, the fencing of this, and how it changes, and how we still need this human touch. So the message from Benjamin and me: don't treat it like magic. Make sure that we utilize the power of AI in a good way in our planning. Thank you very much, Benjamin.

Benjamin Obling (32:20):

Thank you.

Søren Hammer Pedersen (32:21):

And thank you all for joining us again here today. We appreciate you being with us and, of course, appreciate any feedback you might have. Any ideas for future topics here on the S&OP MasterClass would be highly appreciated; just reach out to Benjamin and me. If you have something interesting that you wish to discuss, we'll be happy to connect. Also, of course, if you want to know more about Roima or the things we do, drop by our website and have a look. We'll be happy to talk to you. Thanks for your time here today. Have a wonderful day out there, and we hope to see you soon again.
