Podcast EP 175 - How AI co-pilots are transforming the factory floor - Artem Kroupenev, Head of Strategy, Augury

May 05, 2023

For today's episode, we talked with Artem Kroupenev, Head of Strategy for Augury. Augury uses AI to detect machine failures before they occur by empowering the largest process manufacturers to integrate AI into their daily operations.

In this talk, we discussed how AI co-pilots transform the factory floor by guiding maintenance, quality, and production decisions with highly accurate diagnostics. We also explored the impact of GPT-4 and other large language models on how manufacturers use unstructured information and build more convenient user interfaces.

Key Questions:

  • How would you apply AI and machine learning to the manufacturing environment?
  • What does the journey look like for AI integration in industrial and manufacturing in the coming years?
  • How do you work with unstructured information to build more convenient user interfaces?

Erik: Artem, thanks for joining me on the podcast today.

Artem: Hey, nice to be here. Thanks for inviting me.

Erik: Yeah, this is going to be an exciting one. I had your CEO on about a year ago. And so, I think it's not necessary to kind of rehash it and deep dive into what Augury is doing. That gives us a bit of the luxury to focus on one of the topics that you're also taking a leading role in, which is bringing AI into the manufacturing environment. So, I'm really looking forward to this conversation. But let's still give folks a bit of a refresher on what Augury does. Can you give us maybe the two-minute, three-minute version of what's your value proposition? What solutions are you providing, and who you're providing them to?

Artem: Absolutely. Augury is today a market leader in machine health and now going into what we call process health. It's part of the production health category. When you think about manufacturing or industrial operations, today we make sure that industrial machines don't fail. We do that through AI, as well as IoT applications. When we talk about process health, that's making sure that your factories can produce product sustainably, the highest possible quality at the lowest possible cost, et cetera, and also applying AI technologies to be able to do that.

Then there are other things that we're starting to do, which go into the wider category of helping manufacturers start managing the network of their plants and their operations. We can delve into that as we talk. But in its essence, that's what Augury does. We work with over 100 different manufacturers and industrial companies across segments like chemicals, food and beverage, plastics, building products like cement and wood and metals, and a number of other materials. We recently ventured into the energy space, the oil and gas space, with a partner called Baker Hughes. That happened over the past couple of years. So, we like to say that we make sure that manufacturers of medicines, and beer, and snacks, and things that you use every day keep doing that, and do it sustainably at a higher efficiency. And you're welcome.

Erik: Perfect. Thanks, Artem. And yourself? What is your role at the company?

Artem: I head strategy at Augury. I joined Augury over seven years ago, when the company was about three or four years old. I joined Saar Yoskovitz, who's the CEO and co-founder, and Gal Shaul, who's CTO and Chief of Product as well as co-founder, and the early-stage team, in order to help build out the first product, bring it to product-market fit, and then develop beyond that.

My role evolved from product to go-to-market, to partnerships, ecosystems, and corporate development for the company. I'm making sure that we have a solid strategy and a good vision for how we take that forward, and we just keep building. Part of the role has been helping navigate the stage that we're in right now, which is hypergrowth — just very, very rapid growth across markets, across segments. That's been very exciting for me.

Erik: Yeah, absolutely. It's great. You're coming from a perspective where you were head of customer development and VP of product. So, you have these different perspectives before moving into the strategy role, which I think will be very useful for this conversation. Maybe let's start at a higher level and then go deeper.

If we talk about AI in the factory, I'm interested in hearing how you think through this. For me, there seems to be the traditional way of — we have data coming off machines. We use machine learning to process that data and have some sort of outcome of that process that we make decisions on. Then we have this newer development of GPT, and then you start to look at, okay, to what extent can we digitalize the knowledge? We have all these 55-year-old experts that have been working the factory for 15 years. They're all going to retire. How do we put their brains in the machine? That's quite a different process. Also, the front-end interaction that people might have with that machine would be quite different. That's a little bit my layman's way of thinking through this. But how would you, at a high level, think through how we apply AI or machine learning to the manufacturing environment?

Artem: I think, when we talk about AI, it's not one thing. It's a number of different things, so we can break that down. But ultimately, these technologies require slightly different thinking than we've had traditionally through, let's say, the earlier industrial revolutions, where we would think about taking what exists today, the processes that we have, and automating certain tasks or digitizing them, moving from analog to digital to make them faster, better, more repeatable, and so forth.

Here, we have the opportunity to create a partner to think through problems, to make better decisions, to expand the scope of how we think, how we do, and even how we sometimes imagine things to be, which is a bit of an abstract concept. But thinking about AI just in terms of pure automation does it some disservice. The way we think about it is, we apply these technologies not to a task, and not just to technical aspects like data or some part of a process. We're thinking about applying this technology to a function — a function like reliability and maintenance, or a function like process engineering or manufacturing.

Within those functions, we ask: what are people in these roles tasked to do? What are their goals and ambitions, and what do they actually want and need to do? How can we help them do that a lot better and more effectively? But also, how can we expand that in a way that makes those roles and functions a lot more productive? That perspective helped us hone in and focus on how we build our products and how we approach this industry and these types of solutions. We started with reliability and maintenance: how can we make sure those functions are a lot better, and that the key problems they have today are eliminated as much as possible? So, that's the holistic approach.

It does come down to both a set of tools and a different way of working. For instance, our product around machine health can accurately predict machine malfunctions and hone in on a specific issue with a specific component of a machine with over 99.9% accuracy. It can also tell you the severity and how much time you have to fix it. That flips the traditional approach to reliability and maintenance on its head. Traditionally, you have time-based approaches, where you would say, "Well, I don't really know when this machine is going to fail, but I'll make sure to maintain it at certain time intervals to reduce the risk of that failure."

Now, the first thing you do is monitor the machine and deploy AI to make sure you understand exactly what's happening. Based on that, you create your maintenance and reliability strategy. So, that changes the perspective. Eventually, they will change the way they teach these disciplines in university or at school, because it's just such a game changer in terms of how you approach it. We're doing the same for process engineering and some other functions that we have. So, I think that's the difference in approach in terms of how we think about it.
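The monitoring-first strategy Artem describes can be illustrated with a deliberately simplified sketch. This is illustrative only, not Augury's actual diagnostics (which draw on much richer vibration, temperature, and magnetic data): compare a machine's recent vibration energy against a healthy baseline and schedule maintenance only when the condition actually degrades, rather than on a fixed calendar interval.

```python
# Toy condition-based maintenance logic (NOT Augury's real models):
# flag a machine when its vibration RMS drifts beyond a learned baseline.
from statistics import mean, stdev

def rms(window):
    """Root-mean-square amplitude of one vibration sample window."""
    return (sum(x * x for x in window) / len(window)) ** 0.5

def assess(baseline_rms, current_window, sigma=3.0):
    """Flag the machine if current RMS exceeds baseline mean + sigma * std."""
    threshold = mean(baseline_rms) + sigma * stdev(baseline_rms)
    current = rms(current_window)
    status = "maintain_now" if current > threshold else "healthy"
    return status, current, threshold

# Baseline: RMS values collected while the machine was known to be healthy.
baseline = [1.00, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.00]
healthy_status, _, _ = assess(baseline, [0.7, -0.7, 0.7, -0.7])
faulty_status, _, _ = assess(baseline, [2.0, -2.1, 1.9, -2.0])
print(healthy_status, faulty_status)
```

The point of the sketch is the decision flow, not the signal processing: the maintenance action is driven by the machine's measured condition instead of a time interval.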

The other difference in approach is that you have to take into account how those functions interact with each other. We see a lot of silos and hierarchies in the manufacturing and industrial space, and for good reason: traditionally, you had to compartmentalize these roles in order to perform those functions effectively. But in a world where you have access to insights, almost in real time, in many cases in real time, into exactly what's happening with your equipment, your processes, your scheduling, and potentially large parts of your supply chain, all the different functions actually need to look at the same thing, the same data, make decisions based on those insights, and work towards the same goals.

You can no longer incentivize people to work on different things as cogs within this mechanical structure, this machine. Rather, they have to act as a team and work in very, very tight iterations in order to share learnings, understand what's happening, and then make decisions that are strategic for the business at their level, in order to move faster, be a lot more agile, and ultimately meet demand. I think that is the other shift that AI technologies have initiated within the space: we're changing the nature of how we manufacture things by changing how we think about these functions and roles, and by thinking differently about how they interact with each other.

Erik: Okay. Let me present a specific case so that we can make this as tangible as possible. Let's say we have a chemical manufacturer in a commodity industry. Let's say it's one of those industries where if one factory globally goes down, it starts to impact supply and then pricing starts to fluctuate globally, and so forth. In one of those situations, there's some kind of outage in a piece of equipment. Then you have, as you said, these different functions that have to react. So, you have maintenance reacting. You also have forecasting reacting. You might have salespeople. You might have supply chain people who all have to react to the situation and make decisions around it.

Let's say the baseline AI that you've probably been working on over the past number of years is processing of the machine data, so that at least the maintenance people can look at it and have a sense of what's going on. The other people might look at that data and not be able to make sense of it, but they know that something's wrong. Different people have different levels of understanding, and I would say that is very important. That's the first layer: okay, there's a machine that has a problem, and we have some sense of what the problem is. Then you have all of these other decisions that need to get made. What is the likelihood of us being able to fix this within one hour, within one day, within one week? What are the impacts on the supply chain, et cetera? So, you have all these other decisions that are made.

To what extent are we at a point today where Augury or maybe the broader AI ecosystem is starting to also impact these higher-level decisions that people would be making, perhaps not automating them in any case but providing some kind of guidance or insight to allow people to make more confident, quicker decisions?

Artem: Yeah, absolutely. I think one of our customers put this very, very well. He said, "Ultimately, I would like you — meaning, Augury — to help me understand whether it's better for me to make this product in this plant or in that plant, based on the health of my equipment tuned to specifically what I'm producing, the health of the materials that I have, the health of the production processes that I have, and also a good understanding of how my team might react to it."

It's not just reacting quicker to a machine failure and its impact on the supply chain, but being proactive about planning based on a good understanding of how predictably your plants will react to a change: the introduction of a new product, some kind of systemic change. That is the real insight and intelligence that we're aiming for, and also the ability to execute on it in an autonomous way. It's not just automation in terms of running those processes, but also having parts of the process controlled in a way that those decisions can actually be made and implemented very quickly.

Let's say you have an extrusion process or some kind of production line process. That process can automatically adjust the set point and the centerline based on the new product you're introducing, taking into account the reliability impact, the maintenance, the materials, and so forth. I wouldn't call it the Holy Grail, but that's the next stage in the evolution of predictive production capabilities.

Where we are today on this journey is that we can think about this as a three-stage evolution, three levels. The first level, we call predictability or stability: you make sure your plants are humming, your machines no longer fail unexpectedly or fail at a much lower rate, and you have good predictability. If I'm running this process, that's my centerline, and I can run it very effectively.

The second piece, we call agility. You can start introducing a new product, and your stops, your changeovers, the disruption to the system from a change, become a lot lower, because you can adjust much quicker. Then the next stage becomes autonomy, where not only do you adjust quickly, but you can also predictably say: is it better for me to do it here versus there? What will be the impact? What will be the effect? You can roll it out a lot more seamlessly than you can today. In some cases, the introduction of a new product takes a year, or months, for some manufacturers. Can we cut that down to days and accurately predict what will happen? That's how we're thinking about the evolution and the levels of autonomy, if you will, within factories.

Erik: And so, what does this mean in terms of integration of systems? Because I guess, at the lower, let's say first tier, you can almost look at a machine or a production line and say, okay, this machine flashed the red light. It sent an alert on an app to a group of people. It's fairly isolated in terms of the alert and the impact. As you move down that progression that you just outlined, then you start to look and say, okay, well, now we need information from MES and maybe two-way communication. We need to start automating processes. We might need to be sending alerts, not just to this group of people but to multiple other functions within the organization.

Once we move into that third tier, you're also potentially automating important decisions. Then you want to make sure that you really are doing those in the right way: you have safeguards, you have memos going out explaining why we're making this decision, et cetera. So, you start to get into this highly integrated system. Where are you today on this? And what does that journey look like from the perspective of the manufacturer? Because, of course, a lot of these plants are sitting on 20-year-old or maybe 40-year-old equipment and 20-year-old IT systems.

Artem: Yeah, absolutely. The level of complexity in the industrial world is, I would say, an order of magnitude higher than in applications like autonomous driving, which are already pretty complex. Because you have so many different ways of doing things and running a process, so many different types of equipment, different environments, different cultures, and also vastly varying levels of maturity. In autonomous driving, you have a level of standardization: very similar road signs that have been agreed upon across most of the world. In manufacturing, there are competing systems for how you approach things, for instance just-in-time or agile manufacturing, that have not been fully agreed upon. There are various flavors of that.

So, there's a lot of complexity when you get to that level of customization. I think that's where the limitations come in for a lot of companies that provide services around AI, services rather than products like Augury. Because there are a lot of unique snowflake solutions for that specific plant, that specific sub-process, that specific instance, that don't really scale.

The way we approach this is by looking at use cases that serve specific functions within manufacturing (ultimately people, not just processes) that are replicable and lend themselves really well to amplification with AI. We mentioned copilots, right? Well, maybe we haven't, but that's a term I want to introduce. Can we build a copilot for reliability? What does that entail? It entails the ability to predict machine failures across a vast array of equipment. Which machines are similar enough that we can actually do that? Rotating equipment is something that we started with, and we're adding additional types of equipment as we go along: different types of physical movement, linear movement that's not just rotating, and so forth. That is our approach. We start with a large use case that lends itself well, then we expand that use case and create more and more coverage within it. The same goes for process, and the same will go for the next two, three, or four use cases in our plan and our strategy.

Once we do that fairly effectively and provide value at each level of those use cases, the correlation between them becomes very clear and apparent. We really are starting to do that between machine health and process health, by helping identify which machines are contributing to quality issues, for instance. Some of them do; in some cases, a lot of them do. We're starting to bridge those gaps, but in a very tangible, very actionable manner. Not just, let's take all the data possible and try to glean insights from it. Rather, how can we bring these two functions together around the most critical parts of what they do and build those bridges between them? But first, we have to make sure that those functions are equipped with these AI copilots, these new capabilities. So, I hope that approach, that structure, makes sense.

Erik: Yeah, it makes a lot of sense. So, you don't go in and say, "Let's build a smart factory." You go in and say, "Let's focus on some high-value problems to solve that are initially isolated, maybe to one function. Then let's expand from there and start to connect these data points." I guess, as you go through that, you start to say, "Okay. For this particular application, to solve this problem, we're going to need to connect to your quality assurance system," and you start to build out those integrations.

My assumption is that, to start with, Augury was working with a lot of machine data that's highly structured. And that as you start to expand and provide value to these other use cases and challenges, you potentially move into cases where you say, well, we have root cause analysis data in PDFs that have been written about every problem that's happened over the past 20 years. That data would be quite valuable, but it's in PDF. We have BOMs. We have specifications. We have all sorts of information that would help us better understand the problem, but it's not coming off a machine in a nice structured way. To what extent are you working with this less structured information today in your deployments?

Artem: One way we think about it is that if you look at a factory floor, a lot of things are interconnected by design, through routines, or through some other variation, just by the nature of how it operates. But you can also see that the key issues, the more critical issues that can really unblock value for a factory, coalesce around a few centers: machine centers, a few ways of doing things, and so forth. The problems tend to cluster; they're not completely dispersed. Making the connection between those is really driven by value, by what is needed and necessary to unblock.

The other piece you mentioned is how we deal with gaps in data, or data that's legacy or exists in various places, especially the history of maintenance or of running a process. The way we approach it is to ask: do we need it? Where is the actual ground truth? Where's the proof? On the machine health side, it's actually in the current state of the machine's condition. We provide our own hardware to get the right quality data, and it's properly standardized. So we don't need a historical understanding of what happened with that machine. It's almost irrelevant, because we can tell you the current state going forward.

Within the process side, there's some need for historical data, for understanding how the process was running before, to understand the optimal way of running it. Then what matters is that when Augury provides an insight around how to better run that process and we implement it, it actually improves the process. It works. That's the majority of the proof and value that you need. I'm not saying the historical data or the previous processes are not relevant at all; there's a lot of knowledge there that you can get insights out of. But we focus on the actual operational side of the process and whether it's being improved in a tangible sense. Based on that, you can improve your routines, you can train people, and those insights and capabilities become ingrained as a natural part of how you operate. It's actually very difficult for somebody who's had the experience of a technology product like Augury to go back to the way it was before, because they vividly understand that they were running blind. Then you see the evolution of adjusting: the routines, the maintenance practices, the process recipes start changing to take into account the insights that Augury provides.

Erik: Okay. It makes sense. Yeah, the machine health, that makes sense. Then I guess on the process side, you're creating these feedback loops with every decision, every recommendation that the system makes and provides.

Artem: Absolutely.

Erik: One of the reasons I'm poking at this question here (and I'm really curious whether you think this is interesting or not at all) is that, obviously, the world is going a little bit crazy right now because of the launch of GPT-4. There's been this whole world of machine learning around production data that probably isn't actually impacted too much, that was basically already working quite well for the problems it was addressing. These large language models are addressing this other set of information, which is kind of messy human speech, images, and so forth, and making sense of that.

The reason I was looking at root cause analysis is that there's this whole set of data that is human speech. It's written down somewhere, or it's even — coming back to this 55-year-old maintenance engineer — in somebody's brain. Hypothetically, that would be valuable, but we haven't generally tried to address it before. So, if there's a root cause analysis, somebody might go and look through some old reports. That's very time-consuming; you're only going to do it if you really have to. And so, I'm curious, from Augury's perspective and also from your strategic perspective at the company, has this impacted how you think about the way you solve problems in the future, the ability to work with human language in a way that maybe 12 months ago didn't seem as feasible?

Artem: Yeah, absolutely. We're actually developing a number of things in that area. We're an AI company, so our customers look to us for answers when a new wave of technology comes in. Very early on, we were diving into and thinking about the impact of this, the applications, the use cases, and so forth. The way I think about it is that we have narrow AI, domain-specific AI, which is AI that helps you figure out the health of equipment. It takes certain signals that are very difficult for us, for people, to analyze and understand, but that make a lot of sense within an AI application, along with process data and so forth. You can take that to other areas of the enterprise: your financial transactions, ERP data, supply chain data, and so forth. That is a narrow, domain-specific AI application that requires a level of physics, optimization, and domain expertise to be baked into the solution to do well.

Now we have another area that's coming into the enterprise, and will come into the manufacturing industry, which is conversational AI. One way to think about this is that if domain-specific AI understands things, this AI understands people, or at least our language. When you think about the company as a whole, and the production floor as a whole, it is really a combination of things, people, and the conversations that we have. To truly understand what's going on within a company, you cannot have just one. You cannot judge a company only by its production rate, by the health of its equipment, and by its ERP and financial data. There's a lot that goes on between people all the time, that knowledge and so forth, which you alluded to.

That opens up an opportunity to take the whole enterprise, the whole company, into account when you think about solutions and making better decisions. When I think about the architecture of that future type of enterprise, it has a number of domain-specific AI applications, as well as a layer of conversational AI, taken together. Today, when you think about who you ask about what is happening in the company in terms of the business, in terms of how likely you are to produce something well, as well as what the morale of your team is, it's one of the executives. Probably a VP of manufacturing, or somebody who's most in tune with what's happening within that part of the organization. It takes a lot of judgment.

With an application like GPT, you actually have the ability to ask those questions, in some cases a lot more effectively than asking an executive, and to get those answers a lot quicker and with a lot more truth, assuming they're accurate. Also, at the same time, to connect that insight to the various insights coming out of domain-specific AI applications. So I believe a combination will emerge of both GPT-type applications and domain-specific applications working in tandem.

Now it's really just a matter of: what does the interface look like? What do the connections look like? What does the data look like, and so forth? This is where all that data you mentioned, whether it's work orders, knowledge held by certain people, or things we might have written down somewhere but never look at, becomes extremely valuable. Because that is the wealth of data and expertise that the company has but largely doesn't make use of today, because we just don't have access to it. I think that's really the future of it.

The other piece, if I may, is that conversations are a lot more natural for us, for people. If you can have a conversation that's beyond the rubbish chatbots that were around up until a year or a year and a half ago, you can elevate it to a level where it's actually insightful. Now you can have a true interface that's not just about commands or asks, like Alexa. It's about: what is blocking my production this year? What do I need to change within my maintenance and reliability strategy in order to account for this type of cost reduction initiative?

In the future, you can actually get intelligent answers, provided that AI applications as well as data sources are properly connected and can feed into it. That opens up a whole new set of opportunities. It's almost like having a whole additional management and executive team helping you and advising you on an ongoing basis. That's what I would call, when we're talking about AI copilots, enterprise-level AI copiloting. You can drive executive decisions. That's the high-level way I think about the future of this.
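The architecture Artem sketches, domain-specific AI applications under a conversational layer, can be illustrated with a toy example. All names here are hypothetical and the routing is naive keyword matching; a production system would use a real LLM to interpret the question and compose the answer.

```python
# Toy sketch of a conversational layer routing a plain-language question
# to domain-specific AI services and merging their structured outputs.
# All services and insights below are invented for illustration.

def machine_health_service(question):
    # Stand-in for a domain-specific model over vibration/IoT data.
    return {"domain": "machine_health",
            "insight": "pump 3 bearing wear, ~2 weeks to failure"}

def process_health_service(question):
    # Stand-in for a domain-specific model over process/quality data.
    return {"domain": "process_health",
            "insight": "line B running 4% below optimal centerline"}

# Naive keyword routing; an LLM would do intent classification here.
ROUTES = {
    "machine": machine_health_service,
    "failure": machine_health_service,
    "process": process_health_service,
    "quality": process_health_service,
}

def copilot_answer(question):
    """Route the question to every matching domain service and merge results."""
    hits = {ROUTES[kw] for kw in ROUTES if kw in question.lower()}
    insights = [svc(question) for svc in hits]
    if not insights:
        return "No domain service matched; escalate to a human expert."
    return "; ".join(i["insight"] for i in insights)

print(copilot_answer("What machine issues are blocking my production?"))
```

The design point is the split of responsibilities: the domain models own accuracy within their narrow scope, while the conversational layer only interprets the question and assembles their outputs.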

Erik: Okay. Very interesting. You have, on the one hand, access to information sources that were previously too cumbersome to really access in volume. Then you also have this new interface that allows people who otherwise might really struggle to make sense of information to actually access it in a way that works for a normal business person. Even a normal engineer, I suppose, would sometimes appreciate something other than a graph too, you know. Okay, interesting.

There also seem to be some unique challenges around that, right? There are integration challenges with these new datasets. There might be some privacy challenges. If you have a machine that's listening in on conversations, when should it ignore something? When should it call the police, and so forth? So, there might be some issues there. There are probably some IP-related topics: anytime you start moving into new datasets, you have to figure out what a company is comfortable with, especially if it means moving information to the cloud to be processed somewhere off premise.

If you look at this, is there already some low-hanging fruit where you say, in the next 12 months, Augury is already going to be using this new technology to make some impact? Maybe not the grand vision, but some impact. Then what do you see as a realistic timeline for addressing some of the other challenges that come with a new technology?

Artem: Yeah, some of the things that you mentioned are challenges that definitely need to be resolved. They have been resolved in similar situations in the past rather quickly. If you think about the evolution from closed software to open source, a lot of things happened in that evolution, but in a rather short amount of time companies figured out how to use those applications effectively, what the rules and regulations around them should be, how to share data, and so forth. I think that will happen here. I've already seen a lot of thinking around how that would happen, both from the users (the actual enterprises, the companies) and from the providers of large models: how to segregate data, how to make sure that the data is fair and stays private, and so forth. But there are still a lot of challenges to resolve.

We always take a cautious approach to this in terms of data: protecting our customers' data, making sure that what we do is what we promise, and holding to the security standards that we set for ourselves. But we are already in the process of integrating some of these technologies into our product. The way we approach it is very similar to how we approach the domain-specific AI applications that we have. It took some time for our machine health AI capabilities to become as accurate as they are, for reliability people in the plant to actually rely on them day to day, and even to issue automatic work orders once there's an Augury alert, without any review, because there's a high level of trust. But to do that, we had to review those alerts for a number of years.

We make sure that we create that last mile of accuracy with our own service people, with our own experts, holding to a very high standard to make sure the machines don't fail. The same approach applies here. If we're integrating a GPT-type application into, let's say, our customer service, and it provides recommendations to the customer, in the beginning we will make sure that we review its output until we're comfortable that it's accurate enough and useful enough for our users. It's a hybrid intelligence application. We always put a person in between in order to make sure that standard is there and that the quality is high. Then, once it is, we can start releasing it into direct interaction with our customers.

The applications that we have and will have range from really improving how fast we service our customers to helping them answer questions. Just to give you an example, we have a lot of conversations and interactions happening within our product. Once we provide a recommendation, users ask, "Well, should I fix this machine now or later? What's the best strategy to approach it? Is it a systemic issue?" So, we have a team of experts that helps dive deeper and look at strategy around machine health. Obviously, the same happens on the process engineering side.

Instead of recreating those conversations in text every time, you can use a GPT-type application to essentially pre-populate a lot of information for users. Internally, it's already working rather well, because those large language models do include quite a lot of information around reliability and engineering and so forth. So, they're rather accurate in their responses. They're just not always specific enough. So, we can adjust from there.

The other piece is what I mentioned: in the future, you can ask an application, "Well, what should I do in my factory? What would be the impact of a certain business decision?" and potentially get a natural language response back. That is another area we're pursuing within our product: to help users navigate better, but also to understand what the interaction looks like beyond the screen and the graph. Our users, thousands of them, have a lot of different questions they would like to ask if they had the chance. Having those questions and answers surface will also expand the type of functionality we'll provide in the future. We will see a lot more of the use cases that users would have with the product.

This is where we're going with this on the customer-facing side, and also internally, improving some of our processes, whether it's sales or marketing, or engineering, or product work. There's a lot of interaction going on with GPT applications, and we're really seeing very good results. You just need to learn how to use it to your advantage and how to augment your role with this type of tool.

Erik: Yeah, great. Well, it sounds like a very pragmatic approach. You basically start fast but roll out conservatively. Your customers expect 99.X% reliability here. Artem, I know you have to jump on your next call soon. So, let me just open up the last question: what have we not touched on yet that you think is important for folks to understand?

Artem: Well, you started with the question of what the approach to building AI applications is: do I just take my data and figure out what to do with it? Do I try to build a solution for a specific thing? I want to drive that home again: there's an opportunity to really think anew about how we structure manufacturing teams, how we structure the different functions within manufacturing, and how we create collaboration between them. The reason I'm saying that is not just because we now have these AI capabilities and these types of insights. It's important because this is the way you really unlock innovation in the manufacturing industry, or in any industry for that matter. It's about diversity and collaboration. We're enabling some of the infrastructure for that diversity and collaboration to actually emerge. Because once you have these types of insights, you will have to actually collaborate on them to make better, higher-order decisions. But I think we're just getting started.

When you think about industry, and manufacturing specifically, there's such fertile ground for innovation going forward. Maybe we're at 1% or 2% of the wave of innovation that we'll have. Based on that, I believe a lot of people will choose to go into this industry, this profession, in the future. Geopolitical movement aside, there's a lot to build for humanity. The level of innovation that's going to happen in the next few years is really exciting for me. Obviously, GPT and the new technologies around AI are going to play a huge role in it.

Erik: Yeah, great. I tell people why I'm in this space. I think, similar to what led you here, you look out over the next 20, 30 years and you can think, yeah, I could spend the rest of my career here, and life is going to stay exciting. We're just at the cusp, and it's already getting pretty complex. Well, folks listening, this is Augury, augury.com. Artem, thank you so much for taking the time to speak with us today.

Artem: Thanks for having me. Thank you.
