Podcasts
            EP 070 - Enabling smart manufacturing at the edge - John Younes, COO, Litmus Automation
            Thursday, Oct 01, 2020

In this episode, we discuss the state of edge computing adoption in manufacturing. We also explore the most common edge computing use cases in OEE optimization, predictive maintenance, and asset condition monitoring.

             

            John Younes is Co-Founder and Chief Operating Officer at Litmus Automation. He is in charge of operations and growth for the company and draws on considerable experience working with start-ups and early stage companies. Litmus enables out-of-the-box data collection, analytics, and management with an Intelligent Edge Computing Platform for IIoT. Litmus provides the solution to transform critical edge data into actionable intelligence that can power predictive maintenance, machine learning, and AI. litmus.io

            EP 069 - Building business models instead of technology - Ron Rock, CEO, Microshare
            Thursday, Sep 24, 2020

In this episode, we discuss how to monetize data for multiple user groups with different needs and how to simplify IoT deployments with end-to-end productized solutions.

             

Ron is the CEO of Microshare. Microshare provides Data Strategy as a Service, enabling its clients to quickly capture previously hidden data insights that produce cost savings, sustainability metrics, and business opportunities. Its solutions create a Digital Twin of physical assets, providing a comprehensive picture of their performance, the risks they face going forward, and the steps required to produce maximum returns from those assets. https://www.microshare.io/

            EP 068 - Device locations and communication technologies - Kipp Jones, Chief Technology Evangelist, Skyhook
            Monday, Aug 24, 2020

            In this episode, we discuss the key variables that determine locations of devices and guide technological architecture decisions. We explore the advantages and limitations of new IoT communication technologies like 5G and LoRa.

             

Kipp Jones is the Chief Technology Evangelist at Skyhook, where he works to guide and shape the company's innovative location intelligence technology. Skyhook, a Liberty Broadband company, is a pioneer in location technology and intelligence. Skyhook provides customers with real-time services and analytical insights via a combination of precise device location and actionable venues. Skyhook's products are built on the pillars of trust and respect for individual privacy. skyhook.com

            EP 067 - Data streaming for mission critical field assets - Matt Harrison, CEO, WellAware
            Tuesday, Aug 04, 2020

In this episode, we explain how streaming mission-critical data to remotely monitor and control physical assets works. We learn how to simplify the business model and connectivity for asset management, all against the backdrop of COVID-19's impact on industrial digitalization trends.

            Matt is Co-Founder, Chief Executive Officer and a member of the Board of Directors of WellAware Holdings, Inc. At WellAware, Matt drives the overall business and product strategy while also leading the day-to-day execution of the company’s vision of connecting people to the things that matter.

            WellAware empowers organizations to be efficient, safe and sustainable by streaming mission critical data to their employees, so they can remotely monitor and control physical assets.

            _________

            Automated Transcript

            [Intro]

Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today is Matt Harrison, co-founder and CEO of WellAware. WellAware empowers organizations by streaming mission-critical data to their employees and partners, so they can remotely monitor, control, and automate physical assets. In this talk, we discussed WellAware's approach to simplifying the business model and technology deployment process for edge connectivity and asset management. We also explored the impact of COVID-19 on industrial digitalization trends. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com. Thank you.

            [Erik]

            Matt. Thank you so much for joining us today,

            [Matt]

            Erik. Thanks for having us. We're excited to be here.

            [Erik]

            And so Matt, I'm really interested in kind of getting into the business and the technology behind well-aware.You know, I talked to a lot of companies that are very, very horizontal, and so it's very interesting to speak with a company that has more of a vertical focus, but before we get into the business and the technology, I want to understand a bit of your background cause you actually have, I think, a very interesting background quite, quite varied across healthcare and the energy sector. Also you're an investor. Can you just give us a quick run through of your background and then what led you to found or co-found well aware in 2012? Yeah, you bet. I'm happy to share a little bit of, of myself. I hate talking about myself, but we'll we'll at least provide a little bit of the, the breadcrumb history on, on how we got to where we are. It's probably tied back to my red hair more than anything.

            [Matt]

I think that just kind of made me come out of the womb with a little bit of a fiery DNA, but I've always just been a problem solver, naturally drawn to problems. I got an electrical engineering degree at Texas A&M, never necessarily practiced in engineering. Always wanted to apply that from a business perspective and use technology to solve problems. And so that's really what my career has been made up of: understanding problems and, you know, trying to really understand those better than most. And I think that's been a huge feather in our hat as we're understanding WellAware and understanding what true industrial IoT convergence looks like, and then trying to apply, you know, some of the latest IT technologies to help our customers solve problems. So I've been very fortunate to work with some amazing people here at WellAware, and in my past roles I've had a lot of incredible mentors and investors and board members.

So I really count myself blessed to just be kind of leveraged up by a lot of people along the way. But you know, if I could kind of characterize myself, I'd have to say, you know, we love to compete. My whiteboard has one quote on it from a pretty decent baseball player named Babe Ruth, and it says it's really hard to beat people who never give up. And I think that really kind of characterizes WellAware and the people that we try to bring into the company, so that we can work really hard to help solve problems for our customers.

            [Erik]

            It's a good model for the time right now, right? This is a challenging time, but there's always opportunities and challenge. If you just keep moving forward, there are. And, and you know, one of the things I, you know, just to add to that, that I would encourage the listeners that are in IOT to, to really hold onto that we found to be true is, you know, a lot of times, even our customers don't know exactly how big of the problem they're trying to get addressed is how difficult it is to solve. And so we really have had to lean in and, and work a lot with our customers to not give up on them. When they've said, Hey, look, I'm this IOT stuff is too tough. It doesn't work. Like I thought it would, we had bumps out of the gate. You know, we really have, have had a lot of success just locking arms with our customers and, you know, really kind of dragging them to the outcome. And we've had a lot of customer outcomes now that they've appreciated that journey. And so this is hard. This is not an easy space. It is the future, you know, giving machines a voice is obviously one of the most exciting things we could be working on, but it's not easy. And I think the twenties are going to be the decade for IOT. For sure.

            [Matt]

            Yeah. And you could say right now because of the COVID-19 background, it's on the one hand never been more important than ever. And on the other hand, it's maybe never been more challenging just because getting people out there to actually deploy sensors getting budgets allocated at a time when companies are seeing revenue go down, this is all more challenging, but on the other hand, the need to be able to remotely monitor and control and understand your operations is increasingly important. I think it's very interesting for people to have a bit of a snapshot on what the industry looks like, what growth looks like, what, you know, what deployments look like at this time, how is, let's say Q two and maybe the, for the Ford outlook for Q3, how are these looking for a well-aware, you know, especially if you're in an industry that's quite impacted.

            [Erik]

            Yeah, absolutely. Absolutely. Well, we're in a global economic situation that none of us have really faced before with the, you know, a pandemic that, that created a lot of the uncertainty. And, and so, you know, to your original point, we've got tremendous amount of jobs that have been lost, lots of top line revenue at our customers that has been lost. And so immediately they've got to go in and cut expenses or, you know, find ways to drive additional profitability into their businesses. Well guess what, the, one of the things that didn't change is the number of machines out there. There's still the same number of machines, even though there's fewer people. And so there's just a huge need and we've actually seen it come through in our commercial revenue growth and in our pipeline opportunities for the rest of the 2020, this is the time where people are really going to start placing huge bets on, you know, moving full stock to industrial IOT.

            [Matt]

            Yes, you have to have a platform that's reliable. Yes. You have to show your customers outcomes. They're not interested in tools or widgets. They want to actually help you. They want you to help them understand what the data translates into in terms of outcomes for them. But this is the time and, and, you know, for well-aware I, you know, we'll probably grow over a thousand X from last year to this year in terms of just our recurring revenue base. And it's just it's fascinating because I think a lot of people need help. Part of that growth is coming from our, our own globalization, both outside of the U S but also, you know, we really built the company in oil and gas. And so we've begun to now naturally expand into other industrial markets and applications. And so that's really helped our company see new opportunities and, and grow. And lastly, we've got a really incredible partner channel that's starting to build. So there's big companies, big telecommunication companies that need help with IOT solutions, big cloud companies that need help with IOT solutions. And so we've been really fortunate to to really begin to be on the front end of a lot of those relationships driving some growth,

            [Erik]

Right? Well, let's get into the business then. So as you said, you started in oil and gas, but now you're expanding outside of that vertical into other areas where there are, I suppose, heavy assets and complex operational situations. What would you say is your value proposition? And maybe you can also talk to whether that's shifting as you're moving into new verticals or whether the same value proposition applies.

            [Matt]

            Sure. Yeah. Well, our, our mission, which is really our value proposition, as well as we exist to connect people to the things that matter. And so things are clearly machines and sensors, but as we've talked about already, there are obviously outcomes as well. And the way those outcomes usually translate for our customers, three big buckets. One is operational efficiency, usually cost savings. And we get those, those metrics accomplished in a number of different ways. The other is improved safety. So we're just eliminating a lot of mileage and trucks and we're telling workers before they go onsite, what to experience what they're going to experience. And so it helps them do their jobs in a much more safe way. And then the last one is environmental and regulatory compliance. And so we're, you know, we're really helping our customers reduce their carbon footprint, deliver some ESG wins, which at the board level is always, you know, very interesting and something that people want to tout and just help them with, you know, a better economic or a better environmental footprint.

            [Erik]

            And so those are kind of the main value propositions that we're, we're delivering to our customer. And our mission statement is we're just, we're here to connect them to their critical infrastructure, their machines and their assets. And we make that easier. We work across pretty much any application. We work across lots of different manufacturers. I would dare to say most if not all manufacturers. So it really doesn't matter what kind of pump you have or gen set you have, or sensor you have well-aware as a unifying platform that can allow you to collect data from very hard to reach places and get it into a actionable format that you can really move the needle on business outcomes. And so that that's, that's been hard to build. It's a, it's a very full stack solution. It's taken us seven years to do it. We started in oil and gas, which I would say was, was really kind of the best and the worst decision for us.

            [Matt]

It was the best decision because it presented the most difficult environment for us to forge the platform. And so very remote, you know, very hazardous environments; power is a luxury, communications is a luxury. And we also had a very highly varying user base, so a lot of our users had very little technical competency. And so that presented a very, very difficult set of circumstances for us to build our technology and our platform and our user experience in a way that we just made it easy, easy to install. And so I think it was the best from that perspective. It was difficult because, you know, it's an industry that's very slow to change. And so there's a lot of headwinds on existing installs, existing infrastructure, hey, we've done it this way forever, existing operational patterns. And it's been fun when we begin to see the great shift change, you know, some more technology-oriented folks taking on some higher-level positions where they say, hey, you know what? I should be able to operate my business the same way I operate my thermostat in my house. I can change the temperature from bed in my house. Why can't I optimize my remote genset or my pump in my business operation? And the answer is you can, and the old legacy operational technology approach that's been owned and built by some very large industrial players over the years is very ripe for disruption.

            [Erik]

Yeah. And before we get into the technology, I want to dig in here a bit on this topic of the people that are involved in these technology adoption decisions. Who are you typically working with? Because I guess you have headquarters that might sit, you know, if we're talking about Total, it sits in Paris maybe, but then they have operations around the world. So are you talking to headquarters, or are you talking to, let's say, the local general manager of some facility that has specific pain points that they understand? And then you also have the split between the OT guys and maybe the IT infrastructure, and before, those were maybe kind of separate technology domains, they didn't interface too much, but you're right at the crux of that, right? You're kind of bringing those two ends together. And so I suppose you have decision makers, you know, people that might allocate budget, but also people that might want to control the deployment or the operation of a system, sitting in both of those organizations. Can you talk to us a little bit about, you know, on a, let's say, local versus headquarters, and then IT versus OT basis, who are you typically talking with? Who are the decision makers, who are the influencers? What does this perspective look like for WellAware?

            [Matt]

            That's a great question, Erik. So, you know, the way it works or has worked for us is with a few exceptions, I'll say, so we do have some large enterprise deals where the customers have decided that they are going to define and fund a very large IOT implementation that has typically C suite and budget approval behind it. And so it is very much a different pursuit than when you're actually building the opportunities themselves. So when you're knocking on the door, trying to create leads, so the enterprise level deals, a lot of times, those are RFP based. Again, you know, the, the stakeholders attached to those are usually your PNL managers. They may have some internal technology or internal internal technology guidelines that they want you to adhere to. They may want you to leverage some of the CapEx and the infrastructure that's already been put in place.

            We love that because again, we work across anything that's out there. And so well-aware is really ideal to help customers leverage the existing investment they've already made. And we just make it so much better. We also, a lot of times we'll, we'll collaborate with existing SCADA systems existing SCADA teams, it teams that handle historians and handle data analytics, predictive analytics, et cetera. So the set of stakeholders, if you will, is really kind of across the board, I could list all of them for you across our, our opportunities when we're creating opportunities. When we're, you know, sending messages out and creating leads, we're looking for people who have a P and L budget and responsibility, and usually need to save money or improve safety, or, you know, their environmental regulatory compliance. And when we find those opportunities and we show people, those are typically the stakeholders that have budget and can move fast. And a lot of times the business owners, they don't necessarily mandate a specific technology approach. They just mandate that you get it right. And they do like you to check the boxes with the CIO and the CSO and, you know, make sure that the field teams all agree that yes, in fact, this is a great, you know, secure, safe technology implementation, but those are our most successful opportunities are when we're, we're working with P and L managers, business managers that need to solve a very specific problem.

            [Erik]

And then you mentioned that you're moving into new verticals. I see in your company introduction that healthcare is one of those. And although you can think of them both as industrial, you would also say, well, energy and healthcare are somewhat dramatically different in terms of the operating environments, right? As you mentioned, energy is remote; in healthcare, you're in a controlled facility. You have different concerns in both; you have safety and privacy concerns, but from very different perspectives. Have you had to evolve your business significantly in order to move to this new vertical? Or do you find that the basic challenges that companies are addressing are fundamentally the same? What are the evolutions that are necessary for you to move into a new vertical?

            [Matt]

The healthcare space, Erik, is a little bit of a misnomer. It makes it sound like we're getting involved in medicine. We're really not. We're really doing exactly what you just suggested, which is we're taking our proven platform that works across pumps, motors, compressors, power equipment, and we're applying it in the facilities management or building management aspect of a big hospital. And so what we're basically doing is we're ensuring that hospital systems, that their hospital buildings, are ready to operate, if you will, right? So we're not doing anything that's taking care of patients inside. We're not in the OR suites or the ICU, or even the patient rooms. We're sitting below the, usually below the actual ground level, in the chillers and the basement rooms. And we're controlling the HVAC systems, the pumps, the lights, the power equipment. And we're ensuring that all of those very expensive assets are optimized in how they're being run, that they are extending their useful life, because healthcare as an industry right now obviously is in a very, very difficult position.

It's historically been an industry that runs on very, very tight margins, and the global pandemic has pushed it over the edge. And so they are very much looking for ways to save money. Their building management and facility management expenses are high, and we're helping our customers dramatically reduce costs associated with maintenance of a lot of these critical assets. They either don't have visibility into these assets, or they have very limited visibility from some very old, antiquated enterprise software solutions, you know, like a building management system or a building automation system, just legacy, expensive software platforms. And those are typically very OEM-specific. So they work great for Trane, or they work great for, you know, Emerson, but as you know, all of these installations and these systems are made up of many different manufacturers' pieces of equipment and OEMs.

And so having a single platform that has visibility across everything is really being well received by our customers. And so that's how we're doing it in healthcare, and we're doing the same thing in public utilities, the same thing in manufacturing, logistics, shipping. We're starting to really get the opportunity to leverage the common platform that we've built and get it installed on lots of new machines. And then every time we bring a new machine on, that's really a new SKU. It's part of the WellAware family at that point. We just keep adding to that base every day, every week. So it's a lot of fun.

            [Erik]

Okay, great. So that makes sense. So in all instances, you're dealing with legacy asset bases, somewhat conservative industries, and also a lot of legacy software that's not very well suited to modern needs. So I can see why this would not really be a critical change moving into these other markets. Let's then go into your tech stack here. So, you know, at least from what I've seen, you're covering, I don't know if you would frame it as a PaaS or a SaaS, but you're providing this kind of data management layer, you're doing the data ingestion, and I believe you have your own hardware, and then you also do managed services. And it seems like it's a fairly full-stack solution. Can you kind of walk us through what the architecture would look like, and then what would be the elements that are central, and which are the elements that are maybe optional or on a case-by-case basis? Yeah.

            [Matt]

            You bet happy to do that. Yeah. Let me start with the business model. Cause I think that's the most important one and it took us some time to kind of finally iterate to the place where it seems to be making the most sense for our customers, which is all that matters. So we're in this with our customers, Erik, meaning we only charge a monthly fee for the data service, that's it. So we do have our own hardware. We do have our own software. That's all included as part of our monthly implementation for our customers, no CapEx required. We handle everything, hardware, software warranty. We ensure that those data collection platforms work for the life of the contract. The easiest way I would, I would kind of draw a parallel or analogy to it is as like subscribing to direct TV. You know, they send you a hopper, you don't necessarily pay for the hopper.

You just pay for the monthly, you know, TV subscription to DIRECTV or to Dish Network, whichever one it is. And so WellAware is doing the same. We're providing intelligent edge equipment that goes out and is installed on the machine, and that equipment we retain ownership of, and we manage it, we provision it, it's an extendable platform. So it's got intelligence on the edge. We handle all of what we would call the OT protocol interfaces, so how our edge equipment actually talks to the machine, the ability for us to kind of get x-ray vision into the machine. So we look at any I/O opportunity we have with that machine or that sensor. We also look at, you know, the legacy OT protocols; we support pretty much all of them: Modbus, CAN bus, J1939, and a number of others, you know, BACnet and some other things as well.

            And so where that Rosetta stone that kind of connects to that legacy OT protocol infrastructure that machines might talk to, we unify it on the edge. We add intelligence on the edge, which is processing capability right there, where we can run local ML and digital twin technologies and things like that. We've already begun to do some of that. We have storage on the edge and then this is probably most, most cool. We put a very rich user experience on the edge. And so our, our edge hardware wirelessly communicates to mobile devices and we use our customers, mobile devices, whatever they may have an iPhone or Android. And that becomes the user experience for our customers. And that's a very, very rich user experience platform that we didn't have to build, but we can leverage over time. So that's just the edge. The second layer is pulling the data back from the last mile, wherever that might be right out in the middle of nowhere or in a basement or, you know, on a large factory floor, we bring that data back either via cellular satellite.

And in some cases, WiFi. Once the data is brought back, it's stored with both of our cloud partners, which are AWS and Azure, and then we have a normalized dataset. And so, you know, for those interested, what a normalized dataset means is it's time-synchronized and it's data aware, meaning we contextualize what that asset is that we're monitoring in the cloud. So it's not just, you know, call it a data tag with a voltage or a current; it's actually a compressor, or it's a Cummins genset, or it's a, you know, Baker Hughes ESP pump, or whatever the case may be. And so it's contextualized as a thing. And the data is normalized, meaning it's time-synced and it's high resolution, and that's all landed in the cloud. And then once it's in the cloud, we have kind of a fork of how the data can move.

And so the data can obviously be consumed through our user experience. We talked about the mobile before, but the WellAware mobile app, which is extremely handy if you're in the field with devices or if you're at home and you want to take a look at any critical infrastructure or receive alarms, your mobile platform is a nice way to do that. And then of course, we have a web-based platform that's got charts and a much richer user experience for notifications, reports, dashboards, and some things like that. The data, though, is also made available through APIs for our customers to consume in any additional user experiences. So some of our customers use Spotfire, some use Tableau, some want the data pushed into another historian like OSI PI. We do that for many of our customers today. So we've just really tried to make it simple.

            If I think about what well-aware is excellent at, it's really the data collection, the provisioning and the data orchestration from the edge. We really solved the last mile across any machine where a little weaker on the predictive analytics and a lot of the really, really advanced machine learning. What we do provide is excellent data sets for those engines. And, you know, it's like anything in life, Eric, if you're, if you've got a predictive analytics or an AI platform and you feed it bad data, yeah. It's going to give you a bad result. And so well-aware is a great partner for really advanced AI ML platforms. It's that particular area of the stack is not something we've chosen to invest in. And we really look to partner there more than anything. So that's kind of how that's a little bit of a walk through the, the full stack. Obviously it is a huge part of every element of that. And we have, we believe one of the best security platforms out there. And so we can make it easy to deploy, very simple to our customers from a business model perspective you can get going for as little as 50 bucks a month for a machine and it's secure.
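To make the "normalized dataset" idea in this walkthrough a little more concrete, here is a minimal Python sketch of the kind of record an edge gateway might emit: a raw register value is scaled into engineering units, stamped with a synchronized UTC time, and wrapped with asset context before being handed to a backhaul publisher. The register value, scaling, asset metadata, and publish stub are illustrative assumptions for this sketch, not WellAware's actual schema or API.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class NormalizedReading:
    # Asset context ("data aware"): what the tag physically belongs to.
    site: str
    asset_type: str        # e.g. "genset", "pump", "compressor"
    asset_id: str
    tag: str               # e.g. "discharge_pressure"
    unit: str
    value: float
    timestamp_utc: str     # time-synchronized ISO-8601 timestamp

def read_holding_register() -> int:
    # Stand-in for an OT protocol driver call (Modbus, BACnet, J1939, ...).
    # A real gateway would read this over the wire; here we simulate it.
    return 4012

def normalize(raw: int) -> NormalizedReading:
    # Scale the raw 16-bit register into engineering units (assumed 0-100 psi span).
    value = raw * 100.0 / 65535.0
    return NormalizedReading(
        site="west-texas-pad-07",      # hypothetical asset context
        asset_type="pump",
        asset_id="pump-0042",
        tag="discharge_pressure",
        unit="psi",
        value=round(value, 2),
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
    )

def publish(reading: NormalizedReading) -> None:
    # Placeholder for the cellular/satellite/WiFi backhaul to the cloud.
    print(json.dumps(asdict(reading)))

if __name__ == "__main__":
    publish(normalize(read_holding_register()))

The point of the contextualized record is that downstream consumers, whether a dashboard, a historian, or an ML pipeline, receive a self-describing data point rather than an anonymous tag value.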

            [Erik]

Okay, great. So thanks for the very comprehensive walkthrough. Let me follow up with a few questions on some of these elements of your business model and your tech stack. So going back to the business model, you just mentioned as low as 50 bucks per machine. So I suppose this is a per-machine, per-month fee, but then you would have different tiers, maybe based on the complexity, the number of connections for a machine, or the amount of data. Is that the case? Is there any aspect that's related to the volume of data usage or system integration? You mentioned ML. So if there's a need to develop a predictive maintenance application for a particular machine, you know, I guess somebody has to actually put some labor into that, whether that's WellAware or a system integrator or a customer. So maybe you can just walk through in a little bit more detail the business model, what would be the elements that would determine what this monthly fee would be?

            [Matt]

            Yeah. You bet. Well, the 50 bucks a month is really based on an entry layer solution and it's all inclusive. So it includes all of our edge hardware. It might include some some sensors depending on what kind of application it is. And it's, you know, it's going to be a fit for purpose solution for the value that we're creating. And, and so yes, there are scenarios where we have to, you know, rapidly develop some apps if you will, or some, some customization tailoring station for our customers. Everything that well-aware does is platform based. And so we don't, we don't really ever build anything for customers that the customer owns. We will tailor our solution though for customers. And so, you know, depending on the application, the machine and the value attached to that machine, that's really what determines you know, the ultimate price point for our customers.

And so, you know, we have applications where we get 50 bucks a month for our solutions. We have applications where we get hundreds of dollars a month for our solutions. And, you know, those include different layers of sensors and hardware. That's all included under a very simple, flat, you know, subscription model. So it really is just application dependent. You know, if it's tank level monitoring, it costs one thing. If it's pump monitoring and control, it costs another. If we're controlling a, you know, $350,000 Trane HVAC system with a lot more data, a lot more complexity, a lot more value, then it might cost a little more. So that's how it's set up. It's really value based. We don't waste a lot of time with customers; in every case, our customers' ROI is typically orders of magnitude more than what they're paying WellAware. And we're in a growth mode as a company where we don't want to be stingy and focus on extracting every pound of flesh possible from our customers. That's not our goal. Our goal is to keep delivering value and case studies and keep getting the word out, because, you know, look, there's 25 to 30 billion machines that need to be connected out there. And we're just beginning to scratch the surface.

            [Erik]

            Yeah, well, and I was going to ask also about connectivity, you know, satellite costs. And I assume those are, those are all wrapped in. And I like this model because, you know, as opposed to maybe the more traditional model of pushing either a fixed license or a, a, you know, kind of large CapEx asset investment which is kind of one time, and then you can walk away and then the customer gets to deal with the operations here. You're really well aligned, right? You need to be delivering value consistently. Otherwise the customer at some point will cancel the contract, right? And you've invested a lot of time in building the solution.

            [Matt]

That's exactly it, 100%. I mean, we are being paid to deliver TV service. So if I want to watch ESPN, I subscribe to a TV service. WellAware's business model for IoT is the exact same. We are aligned with our customers on business outcomes. And what we find is that they really like to reward us. When we work with them and we prove and show them the value that's been delivered, they will stand on the mountaintop and proclaim the value that we've helped them to capture, and use more of us. And so that's a win-win scenario.

            [Erik]

            A mobile application. So this is an area where at least in my experience, the requirements can shift. You know, whereas the, let's say the fixed, maybe you're using satellite, you have some sensors, you have a particular kind of connectivity environment around the edge. Those can be fairly fixed in the longer term, but the requirements around how users are using that data different. So we see this kind of shift towards more of a low code development environment, trying to allow users that are nontechnical to, to make some modifications. How do you approach this right now in terms of the, the end user application?

            [Matt]

Yeah, it's a great question. So, you know, we've standardized on some pretty baseline functionality that I think gets our customers, like I said, 80 to 90% of the way there. There's just a lot of, you know, core feature functionality that exists that is all configurable, but it's configurable, you know, in the platform by our customers. So they can set up assets, they can set up taxonomy, they can set up names, and, you know, they can configure charts and they can set up notifications and alarms. And all those things are meant to be very user managed and manipulated. And so that took us a while to get dialed in appropriately. Probably one of the things I'm most excited about is our true edge intelligence platform. We are building an open developer environment, a Linux-based environment on the edge, and it is going to allow our customers to, you know, write their own apps in low-code Python scripting, whatever the case may be, and, you know, really make it very easy for them to write new apps, deploy new apps, and manage it all on the WellAware edge platform.

And so the example that I'll very humbly use here, Erik, is, you know, we really love what Apple did for, you know, building an end-to-end developer community and the App Store, controlling the edge platform, which was the iPhone or the iPad or the iMac, and really owning that end-to-end user experience. And I think Apple's obviously been very successful in doing that. WellAware is doing the same thing, but for machines. And so we're taking that edge platform, and in the 2020s, over the next decade, it's our intent to continue to build that out and provide a full environment where customers can contribute to what they're doing on the edge. Third-party application developers and communities can be built around it. It's all an open platform. It's safe, it's secure, and it's proven and reliable. And so, yeah, we couldn't be more excited about, you know, what the future holds in terms of, you know, putting control algorithms or, you know, enabling new widgets on the edge. So it's already happening today. We're just on the front end of it. But it's something that we're all very excited about here at WellAware.
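As a rough illustration of what such a customer-written edge app might look like, here is a short Python sketch that watches a local tag and publishes a derived alarm tag when a threshold is crossed. The read_tag and write_tag helpers are hypothetical placeholders for whatever tag API an edge platform of this kind might expose; they are simulated here so the sketch runs on its own.

import random
import time

HIGH_LIMIT_C = 85.0   # assumed alarm threshold for motor winding temperature

def read_tag(name: str) -> float:
    # Hypothetical platform call; simulated with random data for this sketch.
    return random.uniform(60.0, 95.0)

def write_tag(name: str, value) -> None:
    # Hypothetical platform call; a real edge SDK would publish the tag.
    print(f"{name} = {value}")

def main() -> None:
    # Simple low-code style loop: read, evaluate, publish a derived tag.
    for _ in range(10):                      # a real app would loop indefinitely
        temp = read_tag("motor_A/winding_temp_c")
        write_tag("motor_A/high_temp_alarm", temp > HIGH_LIMIT_C)
        time.sleep(1)

if __name__ == "__main__":
    main()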

            [Erik]

            Yeah. Fascinating. Fascinating. I mean, this is one of our beliefs also is that the companies that are going to be very successful in the longterm are the companies that figure out how to create value by incentivizing other individuals or organizations to to engage in development of their platform. You know, in some way, right, as you said, this is, this is kind of the Apple model, but going it alone and trying to do the full stack yourself. That's probably not a winning proposition for the complexity of the industrial environment that a company like well-aware is trying to serve. So we wish you the best in executing that next stage of building out your platform.

            [Matt]

There's a lot of foundational infrastructure that's been built that we want to make available to the market and allow for much more rapid deployment of solutions. And so, you know, we have a limited set of developers, and our customers and the market don't have a limited set. So we are in the process of opening that up and making it a full, you know, shared, open community platform, which is exciting.

            [Erik]

Let's say if customer A builds something for a particular type of asset, you would then be able to, let's say, assess somehow the algorithm or the source code in order to deploy that for another customer. Is that the case?

            [Matt]

            We'd be able to certify it and make sure that it is functional. And then, you know, there's, there's an opportunity to potentially leverage that, that code environment, if that customer contributed contributes it to the open environment, then yes. There's other customers that could absolutely leverage that. Yes.

            [Erik]

Okay, great. One final question here. Is there also a monetization aspect for the developers? So I guess if it's an oil and gas company, they're maybe not worried about monetizing this, but if it's an ML company that maybe wants to use WellAware to build an algorithm, do you have an aspect where somebody that develops code could then monetize through WellAware?

            [Matt]

That is the ultimate intent. Yes, we are not doing that today, but I don't think we're very far off at all. That would be a 2021 milestone achievement for us.

            [Erik]

            Okay, cool. Let's go into a one or two case studies here. So it'd be, it'd be great to have a end to end perspective from, you know, who did you first start talking to at the client? What were the, you know, did you do a pilot for them and then, and then kind of walk us through two operations,

            [Matt]

A couple come to mind that are some of my favorite ones. One of them is with the largest steel manufacturer in the U.S., so I'll start with a non-oil and gas application. So we're installed at U.S. Steel. They've got a very large, very historical plant in Gary, Indiana. It's 10 square miles, so it's a very large plant. And it's been around for over a hundred years; it's actually originally a Carnegie site. So again, it has a lot of historical reference and relevance to it. We were contacted by U.S. Steel through one of our partners. And they basically were really struggling because the infrastructure of that plant had become pretty aged, pretty antiquated; a lot of the original sensors and infrastructure they had put out there had also become antiquated. And so they were having some pretty catastrophic negative outcomes associated with gas distribution and power substations across that 10-square-mile plant, of which there are a lot of both.

            And so what they wanted to do is, is get a much more high resolution monitoring and control capability across all their gas distribution, and also across all our power substations. And so we began talking to them, we went out there, we installed on a, on a pilot location for gas distribution. We were hooking into an existing gas sensor. And so I'm going to get into a little bit of detail here cause it's fun. And it'll show you how the platform works, Eric. But we were told when we showed up on site, that the sensor we were going to connect to was a pressure sensor. Okay. That would be very straightforward for well-aware. You go through all the certifications to get on site. You go through the onsite orientation to actually physically step foot on a, you know, an industrial location like us steel. Our team goes out to this first proof of concept and we walk up to the sensor and we realize, gosh, that's not a pressure sensor.

            That's actually a gas flow sensor. So any of our competitors, any of the old legacy players that are out there today, they would have packed their bags up at that point and headed home. They would have kind of probably yelled at everybody and said, Hey, you gave us bad information. Well, that's just what real life looks like out in the field. And so while we're used to that, we have a lot of scar tissue. And so we made a mobile configuration right on, on the spot. We changed our edge device to be able to accept gas flow instead of pressure. So we made the install, got the mechanical and the electrical connection. And all within a matter of five to seven minutes showed our customer standing behind us. We had a little bit of an audience. We showed them the gas application. So we were monitoring real time gas flow.

And unfortunately, one of the guys behind us said, oh, that's not what we wanted to see. We're not interested in real-time gas flow. We're interested in accumulated gas flow. You see right there on the display on that old sensor, it actually accumulates the gas flow for us. Well, the problem is there's not an electrical interface, there's not an electrical output from that legacy sensor, that would give us an accumulated gas flow. So right there on the spot, we had our team in the cloud build a gas accumulator. So within a couple of minutes, we were taking the real-time gas flow that we had just hooked into, we had built a gas accumulator in the cloud, and that gas accumulator was then giving the customer a complete totalized view of the gas. They loved it. Now we had one problem in order to get that level of resolution.

We had to backhaul data every second, and so that would be very expensive, obviously. So within two days, our team wrote an algorithm that could reside on our edge intelligence, pushed it over the air, and updated our unit, which was left behind, with a localized gas accumulator. That's pretty cool. So U.S. Steel loved it. They rolled it out across all their gas distribution. We did something very similar for their power substations, which they had very little visibility into. They were having leaks into their power substations, which was causing the substations to go down, and then they were having very expensive downtime on their factory floors. And so that's one example that I love; it kind of speaks to a little bit of the versatility and the flexibility of the WellAware platform. And now we've got the opportunity to continue to expand that across many, many more of not just that customer's sites, but very similar customers.
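The pattern Matt describes, totalizing on the edge instead of backhauling an instantaneous reading every second, is a common one. Below is a generic Python sketch of that idea: flow is integrated locally with a trapezoidal rule and only the accumulated total is reported at a longer interval. It is an illustration of the general technique under assumed units and intervals, not WellAware's actual algorithm, and the simulated flow source and reporting interval are stand-ins.

import random
import time

class FlowTotalizer:
    # Integrate an instantaneous flow rate (units per second) into a running total.

    def __init__(self) -> None:
        self.total = 0.0
        self._last_t = None
        self._last_rate = None

    def update(self, rate: float, t: float) -> None:
        # Trapezoidal integration between consecutive samples.
        if self._last_t is not None:
            self.total += 0.5 * (rate + self._last_rate) * (t - self._last_t)
        self._last_t, self._last_rate = t, rate

def read_flow_rate() -> float:
    # Stand-in for the legacy flow sensor's output (e.g. cubic feet per second).
    return random.uniform(8.0, 12.0)

def report(total: float) -> None:
    # Placeholder for the periodic, low-bandwidth backhaul message.
    print(f"accumulated flow: {total:.1f}")

if __name__ == "__main__":
    totalizer = FlowTotalizer()
    REPORT_EVERY_S = 10        # assumed reporting interval (would be longer in the field)
    next_report = time.time() + REPORT_EVERY_S
    for _ in range(30):        # sample once per second for this demo
        totalizer.update(read_flow_rate(), time.time())
        if time.time() >= next_report:
            report(totalizer.total)
            next_report += REPORT_EVERY_S
        time.sleep(1)

The bandwidth saving comes from the reporting loop: the per-second samples never leave the device, and only the running total crosses the cellular or satellite link.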

Another application, another case study, this one actually just came in Saturday night. I'll just share it with you. I got a panic call from one of our customers, who said, hey, look, we're in the city of Houston, there's a water management, water treatment facility here. We believe that we're potentially going to be dealing with chlorine gas emission. So we need real-time monitoring of air filtration platforms associated with ensuring that we don't get chlorine gas releases. And he wanted WellAware to come help monitor the equipment, monitor temperatures, et cetera. So we literally took that phone call on the 4th of July, Saturday night, and then that was successfully installed at multiple locations this morning. So all within 72 hours. There just aren't people who can do that, Erik. So that's the platform being built and being forged and being tested, being highly configurable and being remotely provisionable, that just allows us to move very quickly with our customers.

            So, you know, Houston is the fourth largest city here in the U S will now have the opportunity to expand out across a lot of their public utility infrastructure. And so just, just another example, outside of oil and gas that you know, that we just recently did over the weekend. One other one that, that I, I love to point to is we do work with the largest upstream midstream and downstream companies, helping them optimize their asset integrity programs. And so we work on pretty much every major upstream, midstream and downstream operator. In some capacity, we work in partnership with a variety of different service providers and we're ensuring for those companies that they are getting the right amount of chemical treatment to ensure that they're not experiencing corrosion, that they're not experiencing scale buildup, we're controlling pumps, we're monitoring tanks. We're doing real time control algorithms on the edge in the field on changing variables because that's, what's required to get to a, an ideal treatment solution.

And you just can't get that accomplished by sending a person out there at any frequency, which is the current state of the industry today. And so WellAware does that across thousands and thousands of sites here in the U.S., and increasingly now in other countries. And so, you know, we're learning as we go. It's not perfect. We've always had issues, and I like to say we have a lot of scar tissue along the way, but we've been very fortunate to have earned the trust of some of the largest Fortune 100 and Fortune 500 companies in the U.S., and I think we've got the opportunity to expand that now both stateside and internationally as well.

            [Erik]

So it sounds like the deployments are quite quick; actually, the timelines in these two examples are very, very quick. But I assume that there is also, I mean, a fair amount of customization needed. What would be, maybe for the two examples that you just gave, or maybe more generally, a typical timeline from, let's say, a first site visit to having an operational deployment across the facility?

            [Matt]

            There are absolutely exceptions some faster, some slower doing multiple sites and less than 72 hours as fast. And so that was something that was very, very fresh on my mind and something, I was very proud of our team for doing it. It was a safety issue and one that I was really proud that we responded to, but I would say typically, particularly for larger installations, hundreds of sites, it takes us, you know, between four to six weeks to, you know, once we receive an order, once the customer has provided us with a general idea of what the infrastructure and the machines and the sensors look like that we're going to be installing on, which by the way is many times absent of some detail. And so we get out there and when we start installing for them, one of the value adds for our customers is just really getting the well-aware inventory of what they've got out there.

And so it's an asset inventory solution in addition to everything else, but typically it takes four to eight weeks. We carry, you know, buffer inventory and stock. Arrow Electronics, which is a very large company, is our manufacturing partner. And so we can scale up very quickly with them, and, you know, they build units for us, edge equipment, and we get it provisioned and we get it out there. So that's a typical timeline. It really just depends on the machine, the application, the customer, the location, but, you know, very seldom does it take longer than eight weeks.

            [Erik]

            Gotcha. And you're doing all of the hardware deployment by yourself. The integration work.

            [Matt]

We have authorized technicians. We also like to train our customers, and so a lot of our customers become their own installers. We have a client success team that has incredible training and tools and videos on how to set everything up, how to configure everything. Our customers download our mobile app, which has setup wizards and provisioning wizards built into it. So we've really made it pretty straightforward and simple for customers. And that's how you get to those, you know, weeks of install time versus, you know, what the industry is used to seeing, which is months and sometimes years. You know, so again, Erik, we're tired of watching this very painful, bloated, you know, value chain that exists of legacy automation equipment, customers having to feel like they have to build their own telemetry networks and manage those and maintain them, and buy enterprise software for SCADA, another enterprise software for workflow and ticket management, another enterprise software for historians. We're just compressing that very legacy and bloated value chain, and our customers really appreciate it. Now, we're also working with whatever existing installs they have, so they don't have to throw the baby out with the bathwater. We're going to come in and work with what they've got. And usually when we show them how easy it is and what it looks like, then they like to give us a little bit more opportunity to expand.

            [Erik]

            Great. Matt, I really appreciate taking the time to walk us through the business. I think this is a fascinating company that you're running this, this trend towards, let's say away from kind of isolated, you know, functional software towards more flexible software is, is a trend that we're paying very close attention to because I mean, you've just kind of given us a good walk through of the challenges that companies have in managing the cost structure and the complexity of these isolated products. Is there anything else that you wanted to quickly share with the audience or, or discuss before we call it a day? Look, I appreciate you guys

            [Matt]

For having us on. And, you know, I am in this with every single person that's stuck with us this far to listen to the podcast. And if you're working in the IoT space, building solutions, I'm going to encourage you to keep doing that. I think the opportunity is substantial. It has been hard for WellAware, and we're starting to see really the rewards from a lot of the work that we've put in. So I just encourage you to hang with it. For the customers that are out there, partner with your vendors, partner with your suppliers, share the outcome information that you're trying to get to upfront, so that together you guys can work on projects that are successful. You know, I always walk around WellAware, which, by the way, is the third name of the company. Erik, it's the only one I didn't name, but I love the name.

            So it just means informed. It works extremely well for oil and gas, obviously, but it just means informed. And so it really fits our, you know, our ethos and our strategy, our mission, but, you know, we're here to connect people to the things that matter. We are here to make it easy. And I'll tell you, it's, it's funny if I just think through the history and the last seven years of well-aware, you know, I used to talk about our hardware and our software, a lot, our widgets and our things. And it's taken me a while to realize our customers really don't care about that. They really just want, they want the outcomes. And, and I've also learned that it's very complex to make things simple. And so, you know, it takes time. And so we've been working on this and we've, we've been learning through, you know, doing some things, right, doing some things wrong.

And you know, I think now we're getting to the point where it's just simple for customers. And even the business model, as you've heard, is getting much more highly simplified. And I think that's what we're going to need to really get industrial IoT market adoption to the place it needs to be. I hear people talking about IoT being in a state of pilot purgatory; that's because people are selling tools and they're saying, hey, good luck, you know, go implement, go figure it out. Our experience has been that when you partner with your customer and you're in it for the long haul and you're incentivized along with them on the outcomes, it's a much more successful experience. And so I think we're all working in areas that are very exciting, and I just want to encourage everybody to keep pulling and keep developing, and I really want to thank you again for having us join you. It's been a wonderful conversation. You've asked some great questions. Great. Well, Matt, really appreciate you taking the time, and I wish you and WellAware the best in the future. Thank you, Erik. I appreciate it. Take care.

            [Outro]

            Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter @IoTONEHQ, and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com.

            EP066 - Event streaming architectures enabling IoT applications beyond messaging - Kai Waehner, Enterprise Architect, Confluent
            Tuesday, Jul 14, 2020

            In this episode, we discuss event streaming technologies, hybrid edge-cloud strategies, and real-time machine learning infrastructure. We also explore how these technologies are applied at Audi, Bosch, and E.ON. 

             

            Kai Waehner is an Enterprise Architect and Global Field Engineer at Confluent. Kai’s main area of expertise lies within the fields of Big Data Analytics, Machine Learning, Hybrid Cloud Architectures, Event Stream Processing and Internet of Things. References: www.kai-waehner.de

            Confluent, founded by the original creators of Apache Kafka®, pioneered the enterprise-ready event streaming platform. To learn more, please visit www.confluent.io. Download Confluent Platform and Confluent Cloud at www.confluent.io/download.

            _________

            Automated Transcript

            [Intro]

            Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

            Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today is Kai Waehner, Enterprise Architect and Global Field Engineer with Confluent. Confluent is an enterprise event streaming platform built by the original creators of Apache Kafka for analyzing high data volumes in real time. In this talk, we discussed event streaming at the edge and in the cloud, and why hybrid deployments are typically the best solution. We also explored how to monitor machine learning infrastructure in real time, and we discussed case studies from Audi, Bosch, and E.ON. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com. Thank you.

            [Erik]

            Kai. Thank you so much for joining me today.

            [Kai]

            Thanks for having me, Erik. Great to be here.

            [Erik]

            So Kai, before we kick off: the discussion here is going to be a little bit more technical than usual, which I'm looking forward to. But before we get into the details, I want to learn a little bit more about where you're coming from. I think you've had some interesting roles. You're currently an Enterprise Architect and Global Field Engineer, so I'd actually like to learn what exactly that means. And previously you were a technology evangelist, both with your current company, Confluent, but also with TIBCO Software. So I also want to understand a bit more about what that actually means in terms of how you engage with companies. But can you just give a quick brief on what it is that you do with Confluent?

            [Kai]

            Yeah, sure. So I'm actually working in an overlay role, which means I speak to really a hundred, hundred and fifty customers a year, and if there is no travel ban, I really travel all over the world, and IoT and industrial IoT is a big part of that. And I talk to these customers really to solve their problems. So while it's technology under the hood, we try to solve problems, otherwise there is no business value out of that, and I think that's what we will also discuss today. And therefore what I really do is analyze the scenarios where our customers have challenges and problems, and how we can help them with event streaming. That's what we are going to talk about today. My history and background is that I've worked for different integration vendors in the past, and therefore this is also very similar to what I do today with event streaming. The key challenge typically is to integrate with many different systems and technologies. This is machines and real-time sensors and so on on the one side, but also the traditional enterprise software systems on the IT side, like an ERP system, but also customer relationship management or big data analytics on the other side. And that's really where I see the overview of these architectures and how event streaming fits in.

            [Erik]

            Okay, so you have kind of a technical-business interface role, where you're trying to understand the problem and then determine what architecture might be right to support that.

            [Kai]

            So I'm really exactly at this middle point. I talk to both: even to the executive level, but then also to the engineers on the other side, who need to implement it.

            [Erik]

            During those initial conversations, how much do you get into the completely nontechnical topics, about how an end user might potentially put in bad data, or these almost HR topics — topics related to, let's say, the completely human aspect of how a solution might be used? Do you get into those early on, or is it more that once you get into implementation, you figure out what those other challenges are and address them as you go?

            [Kai]

            No, it's really early stage. So, I mean, we talk to our customers on different levels, both on the business side and on the technical side. Before we really have something like a pilot project or proof of concept, we already talk to many different people from every level, from very technical to management and so on, to understand the problem. So we plan this ahead of time. It's not just about the technology and how to integrate with machines and software, but really how to process data and what's the value out of that.

            [Erik]

            And then do you have a very specific vertical focus, or are you quite horizontal in terms of the industries that you cover?

            [Kai]

            We are not industry-specific. Event streaming, to continuously process data, is used in any industry. However, having said that, with the nature of machines in industrial IoT producing continuous sensor data all the time, and more and more of it, industrial IoT is of course one of the biggest industries for us, but it's really not limited to that. So we are also working with banks, insurance companies, and telcos. In the end, they have very different use cases, but under the hood, from a technology perspective, it's often very similar.

            [Erik]

            Yeah. One of the issues that's both interesting, but I suppose also challenging, is that there's almost an infinite variety of things that you could analyze in the real world, right? I suppose there's also some kind of 80/20 rule. Is it the case that there's a short list of five or ten use cases that constitute 80% of the work you do, or is it actually much more varied than that?

            [Kai]

            It really varies, and it depends on how you use it, and that's what we will discuss later today. In some use cases, really all the data is processed for analytics, for example the traditional use cases like predictive maintenance or quality assurance. But as more and more of these industrial solutions produce so much data, sometimes the use case is more technical, so that you just deploy the solution at the edge in the factory to pre-filter, because it's so much data and you don't process all of it. And therefore the event streaming gets the sensor data, pre-filters and preprocesses it, and then ingests maybe 10% of that into an analytics tool for more use cases. So it's really many different use cases, but in the end it's typically about getting some kind of value out of the data. I think that's really the key challenge today: most of these factories and plants produce more and more data, but people cannot use it today. And that's where we typically help to connect these different systems.

            [Erik]

            And I know Confluent — is it right to say that it's built on Apache Kafka, or that's the solution that you use? Can you just describe to everybody, what is Apache Kafka?

            [Kai]

            That's a good point, and that also explains how this is related. So Apache Kafka was created at LinkedIn, the tech company in the US, around 10 years ago. They built this technology because there was nothing else on the market which could process big data sets in real time. We have had integration middleware for 20 years on the one side for big data, and we have had real-time messaging systems for 20 years, but we didn't have technologies which could combine both, and that's what LinkedIn built 10 years ago. And then, after they had it in production, they open sourced it. And this is exactly what Apache Kafka is: it can continuously process really millions of data sets per second, at scale, reliably. When they open sourced it, in the first few years only the other tech companies used it, like Netflix or Uber or eBay.

            However, because there was nothing else on the market and there was a need for this kind of data processing all over the world, in all industries, today most of the Fortune 2000 use Apache Kafka in different projects. And with that in mind, five years ago Confluent was created by founders who were the creators of Apache Kafka. They got venture capital from LinkedIn and from some Silicon Valley investors and founded Confluent with the idea of making Kafka production ready. The tech companies often can run things by themselves, but Confluent really helps to improve Kafka and build an ecosystem and tooling around it, and of course also adds the services and support so that, as I always say, the traditional company can also run mission critical workloads with Kafka, because they need help from a software vendor.

            [Erik]

            Okay. Very interesting. So this is a little bit of the Red Hat business model, right? Building enterprise solutions on top of open source software. It seems like that's kind of a trend, right? Because I guess open source has a lot of benefits in terms of being able to debug and so forth, but at some point people don't want to figure it out for themselves. They need a service provider.

            [Kai]

            Yes. And that's exactly how it works. So it's exactly like Red Hat. And the idea is really that everybody can use Kafka, and many people use it even for mission critical workloads without any vendor, because they have the expertise by themselves. And on the other side, these tech companies like LinkedIn also contribute to this framework, because it's an open framework, so everybody contributes and can leverage it. And that's exactly what we are doing as well. We are doing most of the contributions to Kafka; we have many, many full-time committers just for this project. But then, in addition to that, in the real world, like in industrial IoT, you also get questions, for example, about compliance and security, 24/7 operations, and guarantees. And this is where the traditional companies, like in industrial IoT, simply have different requirements than a tech company which runs everything in the cloud. And this is exactly where Confluent comes in: to provide not just a framework and support, but also the tooling and expertise so that you can deploy it according to the SLAs in your environment, which can be anywhere — just in a factory, or hybrid, or in the cloud.

            [Erik]

            Okay. Very interesting. Well, let's get into the topic then a little bit here. So maybe a starting point is just the question: what is event streaming? We have a lot of different terminologies around analytics; I guess people use real-time analytics a lot, and I think you also use that terminology on your website to some extent. But how would you compare real-time analytics to event streaming? What distinguishes those two terms?

            [Kai]

            That's really very important, because there are so many terms which are overlapping, and often different vendors and projects use the same word for different things. So this is really one of the key lessons learned from all my customer meetings: define the terms in the beginning. To explain: when I talk about event streaming, it is really about continuously processing data. That's the short version. This means some data sources produce data, and this can be sensors for real-time data, but it can also be a mobile app where you get a request from a click on a user button. So it's an event which is created, and then you consume these events and continuously process them. That's the main idea. Other terms for this are real-time analytics, or stream processing, or streaming analytics. But the really important point is that it's not just messaging; that's why I really sometimes get upset when people say Kafka is a messaging framework. And that's really the key point here. Yes, you can send data from A to B with Kafka, and people use it for that a lot, but it's much more, because you can also process the data, and you can build stateless and stateful applications with Apache Kafka. That's really the key difference. So, in summary, Kafka is built to continuously integrate with different systems — real time, batch, and other communication paradigms — and to process the data in real time, at scale, highly reliably. That's, in the end, what I mean by event streaming.
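            To make the "more than messaging" point concrete, here is a minimal Kafka Streams sketch in Java of the stateless-plus-stateful idea described above: the same stream is filtered (stateless) and continuously counted (stateful). The topic names, the temperature threshold, and the broker address are assumptions for illustration, not anything from the conversation.

```java
// Minimal sketch: one stream, a stateless filter and a stateful count.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

import java.util.Properties;

public class SensorStreamSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "sensor-stream-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Stateless: keep only readings above a threshold (values assumed to be plain numbers as text).
        KStream<String, String> readings = builder.stream("machine-sensor-readings");
        KStream<String, String> tooHot =
                readings.filter((machineId, value) -> Double.parseDouble(value) > 90.0);
        tooHot.to("overheating-alerts");

        // Stateful: continuously count alerts per machine.
        KTable<String, Long> alertCounts = tooHot.groupByKey().count();
        alertCounts.toStream().to("alert-counts", Produced.with(Serdes.String(), Serdes.Long()));

        new KafkaStreams(builder.build(), props).start();
    }
}
```

            The point of the sketch is simply that both outputs are computed continuously from the same event stream, rather than the data being sent from A to B and analyzed later.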

            [Erik]

            Okay, great. That's, that's very clear. And then there's another term, which is maybe not as common, but event driven architecture. Are you familiar with this? Would you say that's another thing that overlaps heavily with event streaming or is it a particular flavor or what would be the difference there?

            [Kai]

            Yeah, so it totally overlaps. Event streaming is more a concept, and event-driven architecture, as the name says, is the architecture behind it. How that works in the end is that you really think about events, and an event can be a technical thing, like a log event from a machine, or it can be a customer interaction from the user interface. All of these things are events, and then you process them event by event. And this is really key to this foundation. That's definitely important to understand, because no matter if you come more from a software business or more from the industrial IoT and OT business, in the past 20 years you typically stored information in a database. In the beginning it was something like an Oracle database or a file system, and today you talk more about big data analytics or cloud services, but the big point here is you always store the data in a database, and then it is at rest, and you wait until someone consumes it with [inaudible] or with another client. And this is really more or less a too-late architecture for many use cases. What event streaming and event-driven architectures do is allow you to consume the data while it's in motion, while it's hot. This is especially relevant for industrial IoT, where you want to continuously process and monitor and act on sensor data and other interactions. And this is really the key foundation and difference of event-driven architectures compared to traditional architectures with databases and web services and all these other technologies you know from the past.

            [Erik]

            And then that maybe brings us to, let's say, the first deep-dive topic of the conversation, which is event streaming at the edge versus hybrid versus cloud deployments. Because you just mentioned that there are certainly unique requirements around, for example, an autonomous vehicle, right, where a tenth of a second can be quite impactful. And in the real world, my assumption is — well, obviously you can deploy this across all of these, but of course it was initially developed primarily for cloud deployment. So I assume that the edge deployments are significantly more challenging, just given the architecture of limited compute capacity and so forth. How do you evaluate deployments across these, let's say, edge, cloud, and hybrid options?

            [Kai]

            Yeah, so that's a very important discussion. And actually, in the beginning, yes, Kafka was designed for the cloud, because LinkedIn built it, and that's the big advantage of all these tech companies: they build new services completely in the cloud, and most of them just focus on information, right? So it's not a physical thing like in industrial IoT, and therefore it's very different. But even at that time, the cloud 10 years ago was very different from today. Even then, you had to spin up your machines in the cloud — like on AWS, you spin up a Linux instance, for example — and therefore it's not that different from an on-premise deployment. And with that in mind, today of course you have all the options. I mean, at Confluent, on the one side we have Confluent Cloud, which is a fully managed service in the cloud that you just use in a serverless way, so you don't manage it.

            You just use it. However, having said that, 90% or so of Kafka deployments today are self-managed, and they are not just in the cloud, but really on premise, either in data centers or at the edge. And this is especially true for industrial IoT, where you want and need to do edge processing directly in the factory. With all of that in mind, there are all these different deployment options. We have use cases with just edge analytics and processing in a factory, for use cases like quality assurance in real time. But then we also see many hybrid use cases in industrial IoT, where on the one side you do edge processing, as I mentioned before — either just for preprocessing and filtering, or maybe even building business applications at the edge — but then you also replicate data to another data center or the cloud for doing the analytics. And this is really all very complementary. Especially in industrial IoT, it's really a common use case to have a hybrid architecture, because you need edge processing for some things. This is not just for latency, but also for cost: people often learn the hard way how expensive it is to ingest all the data into the cloud and process it there, especially if you really only look at all the sensor data once before you delete it again. And therefore these hybrid use cases are the most common deployments we see in industrial IoT.
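            A minimal sketch, in Java, of the hybrid pattern described above, assuming a plain Kafka consumer and producer: events are read from a hypothetical edge cluster, pre-filtered, and only the interesting fraction is forwarded to a hypothetical cloud cluster. Broker addresses and topic names are made up; in practice, tools such as MirrorMaker 2 or Confluent's replication features usually handle the cross-cluster part instead of a hand-written forwarder.

```java
// Minimal sketch: pre-filter at the edge, forward the remainder to the cloud.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class EdgeToCloudForwarder {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "edge-broker:9092");   // edge cluster (hypothetical)
        consumerProps.put("group.id", "edge-prefilter");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "cloud-broker:9092");  // cloud cluster (hypothetical)
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("raw-sensor-data"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Pre-filter at the edge: only anomalous readings are worth the cloud ingest cost.
                    if (Double.parseDouble(record.value()) > 90.0) {
                        producer.send(new ProducerRecord<>("filtered-sensor-data", record.key(), record.value()));
                    }
                }
            }
        }
    }
}
```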

            [Erik]

            Yeah. I was actually — let's see, was it last week or two weeks ago? — on the line with the CTO of Foghorn. Are you familiar with the company Foghorn?

            [Kai]

            I even listened to it.

            [Erik]

            Oh, okay. Okay, great. And so I guess one of the things that they were emphasizing was, let's say, the challenge of doing machine learning at the edge, right, just due to the compute power there. How do you view this? Or, let's say, when you're in a conversation with a client and the client is discussing their business requirements, how do you assess what is actually possible to do at the edge? And then, where at the edge are we talking about — actually at the sensor, which has maybe very limited compute, or at the gateway, or at the local server? How do you drive that conversation to understand what is possible from a technical perspective based on their business?

            [Kai]

            Yeah, that's a good question. And this discussion of course always has to be done, and therefore we really start from the business perspective: what's your problem and what do we want to solve? Then we can dive deep into what might be a possible solution, or maybe there are different options for you — not just one thing that you have to do. If you do want to do predictions with machine learning and AI and all these buzzwords, typically there is a separation between model training, which means taking a look at historical data to find insights and patterns, and then deploying this model somewhere for doing predictions. And this is the most common scenario we see, that these are separated from each other. And therefore, on the one side, you typically ingest all the sensor data into a bigger data lake or store, where you want to do the training to find insights.

            And this can be in a bigger data center, right? You need more compute power, and this often then is in the cloud. That's the one part, because there you really need the bigger infrastructure, so you often cannot, and shouldn't, do this directly on the edge device, which is smaller. But when you have done the training somewhere bigger, with more compute power, then the model scoring, the predictions — this really depends on the use case, but this can be deployed much closer to the edge. And here we see different scenarios, depending on what the use case is: you can either do the predictions in the cloud or in the data center as well, or even embed this model into a lightweight application. So, just from a technology perspective: the model training is done, for example, in a big data lake like Hadoop or Spark, or with cloud machine learning services — there are many options there. And then there is the model deployment.

            This can either be a Java application, for example, and be really scalable in a distributed system, or, on the other side, you can also use, for example, C or C++ with a Kafka client from Confluent and deploy this really at the edge, like on a microcontroller, if it's very lightweight. And this of course also depends on the machine learning technologies you use, but most modern frameworks have options here too. To give you one example, we see a lot of demand for TensorFlow, which is one of these cutting-edge deep learning frameworks, released by Google. And here you also have different options: you can train a model and deploy it, and if it's too big it really has to be deployed in a data center; or, on the other side, you can use TensorFlow Lite and export the model and then run it, for example, in a mobile client with JavaScript, or really on an embedded device with C. And therefore you have all these options, and it depends on the use case.
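            Here is a minimal sketch, in Java, of the "embed the trained model" deployment option described above: a Kafka consumer scores each sensor event locally instead of calling out to a remote model server. The Model interface and its score() method are hypothetical stand-ins for whatever embedded runtime is used (TensorFlow Lite, ONNX Runtime, or similar); the topic name and broker address are also assumptions.

```java
// Minimal sketch: local model scoring inside a Kafka consumer loop.
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class EmbeddedModelScorer {

    /** Hypothetical wrapper around an embedded model runtime. */
    interface Model {
        double score(double[] features);
    }

    public static void run(Model model) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "edge-broker:9092");
        props.put("group.id", "embedded-model-scorer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("machine-sensor-readings"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Assume the value is a comma-separated feature vector.
                    double[] features = parseFeatures(record.value());
                    double prediction = model.score(features);   // local inference, no network hop
                    if (prediction > 0.8) {
                        System.out.println("Predicted failure for machine " + record.key());
                    }
                }
            }
        }
    }

    private static double[] parseFeatures(String csv) {
        String[] parts = csv.split(",");
        double[] features = new double[parts.length];
        for (int i = 0; i < parts.length; i++) {
            features[i] = Double.parseDouble(parts[i].trim());
        }
        return features;
    }
}
```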

            [Erik]

            And I guess right now, just from a fundamental technology perspective, we have trends that are moving in both directions, making it a little bit easier to do compute on, let's say, both levels of the architecture. So you have improving hardware at the edge — greater compute power at the edge. You also have 5G potentially — maybe people would disagree with this — making it more cost-effective to move data to the cloud, or if not more cost-effective, at least giving better latency and bandwidth to move data to the cloud, which would allow you to do more of those real-time solutions without computing at the edge. Do you see any trend, based on the underlying technology development dynamics, that would drive us towards doing more work at the edge or more work in the cloud? I mean, obviously it's still going to be a hybrid, but do you see a direction one way or the other?

            [Kai]

            Actually, no, because it really depends on the use case. And it's also important to really define terms like "real time" here, right, because there are different views on what that means. But in general, I can give you one example of where it will always be this mixed state. If you have different plants all over the world, on the one side you want to do real-time analytics, like predictive maintenance or quality assurance. Those are things that should happen at the edge. It doesn't make sense to replicate all this data to the cloud to do the processing, for latency and for cost reasons; even with 5G, it's always more expensive to first send it somewhere else and then get it back. That is expensive from a cost and latency perspective, so you want to do this analytics at the edge, in the factories.

            However, having said that, for model training, or for doing other reports, or for integrating with other systems, or for correlating data between different plants — to answer questions like: we have one plant in China and one in Europe, so why is the same plant in China much more problematic? — then you have to correlate information to find out what the different temperature spikes and different environments are. And for this, the edge doesn't make sense, because you need to aggregate data from different sites, different regions, and here typically the cloud is the key, because there you can elastically scale up and down and integrate with new interfaces. For this, you want to do it in the cloud and replicate data in from many different other systems. So I think the trend is that maybe two, three years ago, everybody talked about getting everything into the cloud, and even the cloud providers of course wanted to do that.

            But now the trend is to do it more in a hybrid way, so that it's the cloud for some use cases, but also the edge for some others. And the best proof of this is if you take a look at the big cloud providers. If you take a look at Amazon, Microsoft, Google, Alibaba, they all started with the story of "just put everything into the cloud and do all your IoT analytics there." But today all of these vendors also release more and more edge processing tools, because it simply makes sense to have some things at the edge.

            [Erik]

            Okay. Okay, great. Then that's actually a good transition to the next topic here, which is event streaming for real-time integration at scale. What type of integration are we talking about? Are we talking about integrating data? Are we talking about integrating systems?

            [Kai]

            That's a good question, and actually it can be both. So first of all, also to clarify here: Kafka, or Confluent, doesn't do everything. What Kafka really is about is event streaming, and that includes integration and processing of data, but typically, especially in industrial IoT environments, it also complements other solutions. So if you're in a plant and want to integrate all these machines, or even connect directly to PLCs, you have different options. You can do a direct integration to a PLC — something like a Siemens S7 or Modbus — or you use a specific tool for that. To give you one specific example, in Germany of course people use a lot of Siemens, so they have Siemens S5 or S7 PLCs, and therefore you could use an IoT solution like Siemens MindSphere, which was built exactly for this integration, for these kinds of machines.

            On the other side, that is probably not the best solution to integrate with the rest of the world, which means your customer relationship management system, and other databases and data lakes or cloud services. And therefore, in most cases in industrial IoT, Kafka really complements other IoT platforms here. So it's more about the data integration and not so much about the direct system integration — but having said that, you can do this. We have customers who directly integrate to PLCs and machines, and on the other side also directly integrate into MES and ERP systems like SAP, for example. This is always something you have to discuss in a deeper dive. So there are all these options, and that's the great thing about Kafka and why people use it: it's open and flexible, and you can combine it with other systems. It's not a question of one or the other.

            And one last side note here, which might also be interesting for the listeners: the modern ERP and MES systems and all of these tools — many of them also run Kafka under the hood, because the software vendors, these enterprise vendors, have understood the value of Kafka and therefore also build their systems on it, because these systems have the same needs. The legacy approach of storing everything in a database with web services, like REST or SOAP web services, is not working for these new data sets, which are more real time and bigger. And that's the approach we now see everywhere.
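            To illustrate the "last mile" integration pattern discussed above, here is a minimal Java sketch that polls a value from a PLC and publishes it to Kafka, where downstream systems (MES, ERP, data lake, analytics) consume it in a decoupled way. The PlcReader interface is a hypothetical stand-in for a real PLC or OPC UA client library; the tag address, topic name, and broker address are made up. In practice this job is often done by Kafka Connect with a purpose-built connector rather than custom code.

```java
// Minimal sketch: poll a PLC tag and publish each reading as a Kafka event.
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class PlcToKafkaBridge {

    /** Hypothetical wrapper around a PLC or OPC UA client. */
    interface PlcReader {
        double readTag(String tagAddress);
    }

    public static void run(PlcReader plc) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "edge-broker:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            while (true) {
                // Read one hypothetical tag (e.g. a temperature register) and publish it.
                double value = plc.readTag("DB1.DBD0");
                producer.send(new ProducerRecord<>("plc-temperature", "line-1", Double.toString(value)));
                Thread.sleep(1000); // poll once per second in this sketch
            }
        }
    }
}
```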

            [Erik]

            I guess at the IT level, integration is typically quite feasible. At the OT level, at least my understanding is that we still have some challenges around data silos that companies put up in order to protect market share. Do you see any trend here in terms of opening up the OT level to make integration across vendors easier? Or let me ask it this way: when you're looking at a deployment, how significant a challenge is this? Is it something where you can always find a solution and it's just a matter of putting in a bit of extra time, or is it a significant challenge?

            [Kai]

            It's definitely one of the biggest challenges, and that's why people want to solve it. As I said in the beginning, when we talk to customers today, the biggest problem is not having access to that data, because it's proprietary, because it's not accessible. For newer infrastructures, the vendors in the end are forced to use standards like OPC UA or [inaudible]. They don't want to do that, but otherwise the customers would really get in trouble, and so the software vendors have to go in this direction a little bit. On the other side, as I said, there are also technologies to directly integrate with PLCs — for example, if you want a quick win, if you say: I have all these machines in my plant, and I just want to get data out of them to monitor them and get reports. Then you can also connect to the PLCs, so something like a Siemens S7. Having said this, it is definitely the biggest challenge to get all this data out. And this is also often why people then come to us, because they say: it's okay for me to do the last mile with a proprietary solution like Siemens MindSphere, but we are a global company, all over the world, with many different technologies; we cannot use every proprietary vendor everywhere. And what makes Kafka so strong is that on the one side you can integrate with all the systems, but you also decouple all of them from each other. This means on the one side you might have some Siemens, on the other side you might have some GE or whatever, and elsewhere you have direct integrations. You can integrate with all of that, and then also correlate all these different information systems and combine them with your MES, with your ERP system, or with your data lake. And this is what makes Kafka so strong: it's open and flexible in how you integrate it and what you use for the integration, either directly or with a complementary tool. And this is why we see Kafka used in IoT, but also in general for these use cases, because you can integrate with everything, but you still stay open and flexible.

            [Erik]

            Yeah, I suppose that's the real value of open source here: you have a large community that's problem solving and sharing the learnings, right, which you don't have otherwise. The next topic — and we've already touched on this a little bit — is the machine learning element here. We've already discussed model training in a data lake that might be better hosted in the cloud and so forth. But maybe the interesting topic here is: when you're implementing machine learning and you're segmenting it between different areas of your architecture, how do you view, let's say, the future of machine learning for live data?

            [Kai]

            Yeah, that's a very good question, and it's really often why people talk to us about this. What we clearly see — and this is true for any industry — is that there is an impedance mismatch between the data science teams, which want to analyze data, build models, and do predictions, and the operations teams, which can either be in the cloud or in a factory, where it's really deployed at scale. I've seen too many customers where they got all the data out of the machines into the cloud, so the data scientists could build great models, but then they could not deploy them into production anymore. And therefore you always have to think about this from the beginning: how do you give your data science people access to all the historical data? But then also, before you even start, you need to think about what my SLAs are for the later deployment. Does it have to be real time? What are the data sets — is this big data or small data? What are my SLAs? On production lines it's typically 24/7 mission critical, and then you configure Kafka differently than when you run it just in the cloud for analytics, where it's okay if it's down for a few hours. And with this in mind, this is also why we see so much Kafka here: there are huge advantages if you build this pipeline once with Kafka. So let's say you have Kafka at the edge to integrate with the machines, and then you also replicate the data to the cloud for analytics. This pipeline with Kafka is mission critical and runs 24/7. Kafka is built as a system which handles problems: even if a node is down, or if there is a network problem, Kafka handles that.

            That's how it's built; by nature it's a distributed system. It's not like an active-passive system where you have maintenance downtime — that doesn't exist in Kafka. And if you have this Kafka pipeline, you can use it for both. You can use it for the ingestion into the analytics cloud, where the data scientists use the data in historical mode, in batch, for training or for interactive analyses. But the same pipeline can then be used for production deployments, because it runs mission critical. And therefore you can easily use it to do predictions and quality assurance, because these applications run all the time, without downtime, even in case of failure. And that's one of the key strengths: you can build one machine learning infrastructure for everything. Of course, some parts of the pipeline use different technologies, but that's exactly the key.

            So the data scientists will always use a Python client, right? They typically do rapid prototyping with tools like Jupyter and scikit-learn, and these are the frameworks data scientists love. On the other side, in production, on the production line, you typically don't deploy Python, for different reasons: it doesn't scale as well, it's not as robust and performant. There you typically deploy something like a Java or C++ application. And with Kafka in the middle, which handles the back pressure and is also the decoupling system, you can use these different client technologies: the data scientists can use Python while the production engineers use Java, but you use the same stream of data for that.
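            A minimal sketch, in Java, of the decoupling idea described above: two independent consumer groups read the same topic at their own pace, so the analytics pipeline and the production application never interfere with each other. The group ids and topic name are hypothetical, and in practice the analytics consumer could just as well be a Python client.

```java
// Minimal sketch: two consumer groups, one topic, independent progress.
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class DecoupledConsumers {

    private static KafkaConsumer<String, String> consumerFor(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);                      // different group => independent offsets
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("machine-sensor-readings"));
        return consumer;
    }

    public static void main(String[] args) {
        // The production scorer and the analytics ingester read the same stream
        // independently; slowness or downtime of one does not affect the other.
        KafkaConsumer<String, String> productionScorer = consumerFor("production-scoring");
        KafkaConsumer<String, String> analyticsIngester = consumerFor("analytics-ingestion");

        while (true) {
            productionScorer.poll(Duration.ofMillis(100))
                    .forEach(r -> System.out.println("score in real time: " + r.value()));
            analyticsIngester.poll(Duration.ofMillis(100))
                    .forEach(r -> System.out.println("forward to data lake: " + r.value()));
        }
    }
}
```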

            [Erik]

            Do you get involved also in building the machine learning algorithms, or are you focused just on managing the flow of the data, and then the client would have some system that they're using to analyze it?

            [Kai]

            So we are really building the real-time infrastructure, including data processing and integration, and then the data science teams, for example, choose their own technology. But this is also important to understand and point out here: this is exactly the advantage, because all these teams are flexible and different. I actually had a customer call last week, and it's really normal that different teams use different technologies. In the past, everybody tried to have one standard technology for this, but in the real world one data science team uses this framework, like TensorFlow, and the other one says, no, I'm using Google ML with some other services. And here, because Kafka in the middle is the decoupling system, you're also flexible regarding what technology you choose. And therefore the reality is that most of our customers don't have one pipeline where you send all data from A to B; you typically have many different consumers, and this can include analytics tools, where you really are spoilt for choice depending on your problem and use case.

            [Erik]

            Okay. Interesting. I think the next topic we wanted to get into was use cases, and I think that's pretty important here for understanding how this is actually deployed. But before we go into some end-to-end use cases in detail, I have a bit of a tangent, which is a question that a number of companies have asked me recently, and I don't have a good answer, so I'm hoping you have a better one. Are there any use cases for 5G that really make sense in 2020, 2021? I've thought about this and talked to some people, and it seems like maybe augmented reality for industrial makes sense because of the high bandwidth requirements and wireless solutions, and AGVs probably make sense once you make them more autonomous, because you have that same situation of latency, bandwidth, and wireless — but there don't seem to be so many yet.

            And my hypothesis was that over time, as 5G becomes deployed, maybe the OT architecture of factories will start to change. There will be fewer wires, you'll have the option to build greenfield sites somewhat more wirelessly, so that might change the architecture, and then people would develop solutions specifically for this new connectivity architecture. And then you might say, okay, now it's providing real value. But aside from AGVs and AR, I was a little bit at a loss to identify anything that is really highly practical in the near term. Is there anything you've come across where you said, yeah, 5G would really solve a real problem for one of your customers?

            [Kai]

            I think yes, because one of the biggest problems today is definitely network and data communication. Today, when I go to a customer's factory which has existed for 20 years, typically the integration mode — how we get the data from these machines — is something like a Windows server that you connect to, and then you get a CSV file with the data from the last hour, because there is no better connectivity or integration. So I definitely think that, in general, better networks allow us to implement better architectures, also for OT at the edge. But having said this, I also see these discussions about 5G with different opinions. There is of course not just 5G; for factories there are also other standards and possibilities for how to build a network there. And what I also think is that if 5G gets into industrial IoT, I guess the bigger factories and so on will build private 5G networks for that.

            So that's also possible, and I think that's great, because what I don't expect to see — at least from my customer conversations — is what the cloud vendors want: that you directly integrate all these 5G interfaces from the edge with the cloud. That's probably not going to happen, because of security and compliance and all these kinds of things. But for private 5G networks, I think this would be a huge step towards more modern architectures in OT. And that, of course, is then the building block for getting more value out of the data, because today, again, the biggest problem in factories is that people don't get the data from the machines to other systems to analyze it.

            [Erik]

            Okay. Gotcha. And I guess in brownfield you still need some sort of hardware to deploy on the machines, but at least if you use 5G, then you can extract the data wirelessly. Although you can always just lay Ethernet, I suppose, right? But then that becomes a —

            [Kai]

            Yeah, exactly. I mean, those are just different options. You somehow need to get the data out of these machines and production lines into other systems, and it can be with Ethernet and it can be with 5G; what the best solution is depends on cost and scalability and TCO.

            [Erik]

            Okay, great. But sorry for taking it down that tangent let's go into some of these use cases. So you've, let's see. I actually, I won't mention any of these until you do. I don't want to throw out names, but there's a connected car infrastructure. Should we start there?

            [Kai]

            Yeah, that's a good first example, and it also relates nicely to the 5G question. I think we can cover three or four use cases here, because what's important for me when I talk about event streaming is to really talk about different use cases, so that people see this is not just for one specific scenario. A connected car infrastructure is one great example we see with many customers, and Audi, the German automotive company, is one of them; we started building a connected car infrastructure with them around four years ago. What they actually did is they had the need to integrate with all the cars driving on the streets. They started this with the A8, one specific, more luxury car, but they are now rolling it out to all the new cars. What's happening there is that all these cars connect, in the end, to a streaming Kafka cluster in the cloud, so that you can do data correlation in real time on all that data. From a use case perspective, there's demand for things like after sales, right?

            So you're always in communication with your customers, for different reasons. On the one side, sending them an alert that their engine has some strange temperature spikes, so that maybe they get to the next repair shop, but also to keep the customer happy, to do cross-selling — or, as you know it from Tesla, you can even upgrade your car to get more horsepower. There are plenty of use cases, and then you can even integrate with partner systems, like, for example, a restaurant on the German Autobahn where you're driving, and then you make a recommendation: if you stop at lunchtime at this restaurant, you get 20% off — these kinds of things. And you see the added value here is really not just getting the data out of the car into other systems, but really correlating and using this data in real time, at scale, 24/7. That's exactly one of these use cases for Confluent, what we are doing. From a technical perspective, the cars are of course using 4G today in this case, and this is a great example: if you have 5G here, you can do many more things, because the data transfer from the cars is still the most limiting factor regarding cost and latency and all these things.
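            As a purely illustrative sketch of the kind of real-time correlation described above (not Audi's actual design), here is a Kafka Streams topology in Java that watches per-vehicle engine temperature readings and publishes an alert when several spikes pile up within a short window. The topics, threshold, and window size are assumptions, and the windowing API used here requires a recent Kafka Streams version.

```java
// Hypothetical sketch: windowed spike detection per vehicle.
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.TimeWindows;

import java.time.Duration;
import java.util.Properties;

public class EngineTemperatureAlerts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "engine-temperature-alerts");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> temperatures = builder.stream("vehicle-engine-temperature");

        temperatures
                .filter((vehicleId, celsius) -> Double.parseDouble(celsius) > 110.0) // a spike
                .groupByKey()
                .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5)))
                .count()
                .toStream()
                .filter((windowedVehicleId, spikes) -> spikes >= 3)
                .map((windowedVehicleId, spikes) -> KeyValue.pair(
                        windowedVehicleId.key(),
                        "engine temperature spiked " + spikes + " times in 5 minutes"))
                .to("vehicle-alerts");

        new KafkaStreams(builder.build(), props).start();
    }
}
```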

            [Erik]

            Okay. Okay. Very interesting. One of the topics — maybe we don't have to speak specifically to Audi here, because this may be a topic that's a little bit more sensitive — but once you get into these situations where you have aftermarket services, for example, of course there's not just value for the OEM, but potentially value for a lot of different companies that might also want to be selling services to this driver, this vehicle owner. And this then becomes an issue of not just moving data, but also regulating who has access to data, in which way, to what extent it's anonymized or not, what metadata is available, and so forth. Do you get into these discussions? This gets into the legal and privacy discussion about, well, what can we do to monetize this data that we have?

            [Kai]

            Yeah. So actually this is all part of the problem. I mean, especially in Europe and Germany, privacy is really, really hard, right? And it's not so different in the US, for example. Therefore we get into this discussion all of the time, and you have to be security compliant, which is part of the conversation of course, and you need to be, for example, GDPR compliant in Germany and Europe. So this is part of the problem, and you really need to think about it from the architecture perspective: who has access to what data? And that's also the point where, for example, Confluent comes into play, because with open source Kafka you would have to implement this by yourself, while with Confluent you have things like role-based access control and audit logs and all these kinds of features, which help you here with multi-tenancy and all these questions.

            And with that in mind, this also brings up more problems and questions for all these vendors, because, as you said — not just Audi, or let's get away from Audi — in general, an automotive company wants to get the added value, but so do the tier one and tier two suppliers. And this is really a big discussion, and this is where all of these vendors today have a lot of challenges, and nobody knows where it's going. But today, everybody is implementing their own connected car solution. If you Google for that, you will find many automotive companies, many suppliers, and also many third-party companies implementing this today, and nobody knows where it's going. But already today I have seen a few automotive companies where the car is not just sending out data to one interface of one vendor, but to two or three different interfaces, because everybody wants to get the data out.

            And so this is really where the next years will definitely consolidate things, and new business models will emerge. My personal opinion is really that the only realistic future is that these different vendors also partner more with each other. And that will happen, because it's not just about the automotive company, but also the suppliers. If you take a look at their innovations, they are all working on software. If you go to some kind of conference, they are not talking about the hardware; they are talking about the software on top of it. And therefore this is really where the market is completely changing, because in this automotive example, in some years many people will not care whether it's an Audi or Mercedes or BMW, but about how well it's integrated with your smartphone and with the rest of the technology. And therefore this is a complete shift in the market, and we see this at every automotive or IoT company today.

            [Erik]

            Okay. Very interesting. Yeah, this is a topic that comes up a lot with our customers, who are sometimes automotive tier one, tier two suppliers, right? And they face the challenge of getting data out from an OEM: you know, we produce the air filters or something, and the OEMs are never going to give us our data — but we have these business cases. So yeah, this is a very interesting discussion. Okay, then the next one we're covering here is Bosch: track and trace for construction. I think track and trace is very interesting because it's applicable to basically anybody who is managing assets that are kind of in motion. What was the problem here, and what did you do with Bosch?

            [Kai]

            So that's another great example in the end, and it also clarifies the different use cases. The first one was getting all the data into the cloud for analytics and using the data — the normal hybrid one. The interesting part here, before I talk more about the use case, is that this is not all real-time data or big data. In this use case, it's really about smaller data sets and also about request-response communication, not just streaming data. The use case here is that Bosch has several different construction sites, which they use together with their partners, where they build new buildings, for example. And there you have a lot of devices and machines, and on the one side, of course, the new devices and machines have sensors which continuously give updates to the backend system.

            But they also had many different problems and use cases here, like the workers on the construction site not knowing where a machine or device is, or when to do maintenance on a device and replace batteries or other things. And therefore, in this case, it's really a track and trace system, where you monitor all the information from all the systems. And actually it's not just the machines and devices, but also track and trace information from the customer side. So whenever a worker has finished something, he uses his mobile app — and in this case it's not streaming data: he does a button click, and then it's an event that is sent to the backend, and the data is stored there and correlated. And this way Bosch gets a solution where they really have all the right information in the right context for each construction site.

            This is important for the edge in the end, which is the construction site, but then in the backend, of course, it's also important for the management and for monitoring all the different projects. And all this data also goes to analytics tools, because the data science team takes a look at all the construction sites and what's going on, and how to improve the products or the services they offer, or the new products they build. And this solution is also deployed to the cloud, so that they can integrate with all these different edge systems and store and correlate the information. And it's also important in this use case that they don't just continuously process the data, but they also store the data in Kafka, so that you can also consume old events. And this is a part we didn't discuss yet, but it's important.

            In Kafka, or event streaming systems, everything is append-only. It's event-based, with a guaranteed order of the log of events. And then you can also take older data. So the data scientist doesn't consume all data in real time like the others, but says: give me all the data from this construction site from the last few months. And then they want to correlate it with the last three months from another construction site, and they see that maybe this construction site had some specific problems, and then they can find out what the problems were. So this is a great additional use case, because this is hybrid, and this is not big data, and this is not only real-time data — but Kafka still makes so much sense for the integration and processing of these events.
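            A minimal sketch, in Java, of the replay pattern described above: because the Kafka log is append-only, a data scientist can rewind a topic to a point in time (here, 90 days ago) and re-consume historical events instead of only seeing new ones. The topic name, broker address, and time range are hypothetical.

```java
// Minimal sketch: rewind a topic to a timestamp and replay old events.
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.stream.Collectors;

public class HistoricalReplay {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "construction-site-replay");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        long ninetyDaysAgo = Instant.now().minus(Duration.ofDays(90)).toEpochMilli();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Assign all partitions of the topic manually so we can seek by timestamp.
            List<PartitionInfo> partitions = consumer.partitionsFor("construction-site-events");
            List<TopicPartition> topicPartitions = partitions.stream()
                    .map(p -> new TopicPartition(p.topic(), p.partition()))
                    .collect(Collectors.toList());
            consumer.assign(topicPartitions);

            // Look up the offsets that correspond to "90 days ago" and rewind to them.
            Map<TopicPartition, Long> query = new HashMap<>();
            topicPartitions.forEach(tp -> query.put(tp, ninetyDaysAgo));
            Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
            offsets.forEach((tp, offsetAndTimestamp) -> {
                if (offsetAndTimestamp != null) {
                    consumer.seek(tp, offsetAndTimestamp.offset());
                }
            });

            // From here on, poll() replays the old events in order.
            consumer.poll(Duration.ofSeconds(1))
                    .forEach(record -> System.out.println(record.timestamp() + " " + record.value()));
        }
    }
}
```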

            [Erik]

            Yeah, that's a very interesting one from, let's say, an end user perspective, right? Because even within the construction site, you have a number of different end users that have quite different requirements around the data — from the person looking for a tool, to the maintenance team, to management that's making decisions about how many assets they actually need, and so forth. And I suppose, again coming back: you're laying down the architecture, so you cover what architecture is necessary for this, but you're not going to be advising them on these individual use cases. Is that correct? Or do you ever get involved in advising on what use cases might make sense, or helping to define them?

            [Kai]

            We do, I mean, because we have the experience from all the other customers, so we also do some consulting and help with the engagement and the approach. But we are not doing the project itself; that is typically what a partner does, or what they do by themselves. We really help with the event streaming part and the infrastructure, but only from the perspective of Kafka, because we are not doing the whole project. And that's maybe also important: as I said before, event streaming is not competitive but really complementary to the total solution — like for the management team, which has some BI tools in the backend. That is not Kafka, right? That is where you connect your traditional BI tool, like Tableau or Power BI or Qlik, all of these vendors, and connect the two parts of the data. So this is really complementary.

            [Erik]

            Okay. Okay, great. Yeah, we were working with a European construction company about a month ago on track and trace — well, we were surveying how, in China, they are able to ramp up operations at construction sites by tracking where people are: are people grouping together, are people wearing masks, et cetera. So it's kind of a track and trace for people, and it's been extremely effective in China. And then the question is, how do we translate this to the European market, where this would probably all be highly illegal? And then the last one that we wanted to look into was energy: an energy distribution network for smart home and smart grid. So, yeah, a completely different set of problems. What was the background with this case?

            [Kai]

            Yeah, so one example is E.ON, which is an energy provider, and these kinds of companies also have a completely changing business model. That's often actually where Kafka comes into play, to really reinvent a company. The problem for them is that in the past they only produced their own energy, like nuclear energy, and this is obviously changing to more green energy and so on. But the business model also had to change, because they cannot just sell energy anymore; they also see more and more customers or end users who produce their own energy, like with solar energy on their houses. And often they produce more energy than they even use, so they want to sell it. And therefore, for this example, E.ON has built a streaming IoT platform, which is also hybrid: some of the analytics is in the cloud, but some other processing is more at the edge.

            What they are doing, in the end, is becoming more like a distribution platform. This means on the one side they still integrate with their own energy systems to sell their energy and do the accounting and billing and monitoring and all of these things — and, it has to be mentioned, this is still in real time, and even for the bigger data sets these systems produce, they can handle it. But on the other side, they now also integrate directly with smart homes and smart grids and other infrastructures, so that they can get into the system of the end user, like a customer who has a smart home. And with this, they are now providing many more services. In this case, for example, you could sell your solar energy to another person, and they provide the platform for that. And this is really just one of the examples — they have tens of them — because these companies in energy have to completely change their business models, in a way, and this is where Kafka helps.

            It fits so well because, again, on one side it's real-time data, so you can scale this and process data continuously, but on the other side it also decouples the systems. The smart home system is completely decoupled from the analytics side. Sometimes it sends an update, like sensor information, to the platform so the platform knows: hey, this house has produced a lot of energy, now we can sell it, so please distribute it somehow. And this is again where many different characteristics come into play. On one side it's hybrid: they do analytics in the cloud and other processing at the edge. But on the other side, this is really a mission-critical system; it has to run 24/7, so it's distributed over different geo-locations. With this infrastructure, it is really the critical center of their system, integrating with their own infrastructure, but also with all the customers and end users, and of course with partners. It's the same strategy as in automotive: in the future these companies will not build everything themselves, but complement it with partner systems that are very good in one specific niche, and they provide the distribution system for that.
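
            To make the decoupling point concrete, here is a minimal sketch, in Python with the confluent-kafka client, of how independent services can each read the same smart-home event stream without the producer knowing about them. The broker address, topic name, and consumer group names are assumptions for illustration, not details of E.ON's actual platform.

```python
from confluent_kafka import Consumer

def run_service(group_id: str) -> None:
    """One downstream service (e.g. billing or energy trading) reading meter events."""
    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",   # assumed broker address
        "group.id": group_id,                    # each service uses its own group
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["smart-home.meter-readings"])  # hypothetical topic name
    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            print(f"[{group_id}] meter event: {msg.value()!r}")
    finally:
        consumer.close()

# Each service runs as its own process with its own group.id,
# so both receive every event independently:
#   run_service("billing")
#   run_service("energy-trading")
```

            Because each consumer group tracks its own offsets, new services can be added later without touching the smart-home side at all, which is the decoupling described above.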

            [Erik]

            Okay. And this is quite a contrast of systems, right? You have a mission-critical utility, and then you have your grandfather's home. I suppose you have a lot of different device types, because we're not only talking about enterprise scale with smart grids, but also about home deployments, with probably quite a range of different technologies, connectivity solutions, and so forth. Was that a challenge, or is it already fairly standardized, so that when they install a solar deployment on a home, the right connectivity infrastructure is already there for an easy integration? Or is that a challenge?

            [Kai]

            In this case it's much easier than in plants and factories, because here you don't have the challenge that every vendor is very proprietary and doesn't really want to let the data out. In this case it's typically only one site, and it's also not 30-year-old machines like in a production line, but relatively new, small devices. So these manufacturers use somewhat more modern technologies, and the other difference is that they want you to integrate with other systems, so there is typically a standard interface or API, something like MQTT or HTTP. That makes it pretty straightforward to integrate, because the business model and the integration idea are very different from production lines and plants. The challenge is really more that some of these interfaces are real-time and sensor-based, while others are more pull-based, where you just ask the system every hour. And this is exactly what Kafka was built for: it's not just a messaging system, it also has integration capabilities. So it's pretty straightforward with Kafka to integrate these different technologies and communication paradigms, and still correlate all of these different data sets and protocols to get value out of them and send an alert, or whatever the use case is.
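
            As an illustration of the integration capability mentioned here, below is a minimal sketch of registering an MQTT source connector with Kafka Connect over its REST API from Python. The Connect endpoint, MQTT broker URI, and topic names are assumptions, and the exact config property names depend on which MQTT connector you have installed, so treat them as illustrative rather than definitive.

```python
import requests

# Hypothetical connector registration; adjust the property names to match the
# MQTT source connector you actually deploy.
connector = {
    "name": "smart-home-mqtt-source",
    "config": {
        "connector.class": "io.confluent.connect.mqtt.MqttSourceConnector",
        "tasks.max": "1",
        "mqtt.server.uri": "tcp://mqtt-broker:1883",   # assumed MQTT broker
        "mqtt.topics": "home/+/inverter/power",        # hypothetical device topics
        "kafka.topic": "iot.solar-inverter.raw",       # hypothetical Kafka topic
    },
}

# Kafka Connect exposes a REST API (default port 8083) for managing connectors.
resp = requests.post("http://localhost:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print("created connector:", resp.json()["name"])
```

            Once the device events land in a Kafka topic, the pull-based sources mentioned above (polled every hour, for example) can be written into other topics and correlated with the real-time streams using the same tooling.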

            [Erik]

            Okay, interesting. I was reading an article maybe a month or so ago which said that, I think in Germany or the UK, the percentage of energy on the grid from renewables spiked up to something like 33 percent, which was a significant high. It was due to a few factors, I think, like lower energy demand and lower air pollution because factories were shut down, and a few other factors. But that's something that, five years ago, people were projecting to be kind of an apocalypse, right? You couldn't handle that kind of swing in renewable energy. I suppose Kafka is part of the reason that energy grids are now able to handle a lot more variance in the load than they were designed for ten years ago.

            [Kai]

            Well, it's really changing; every year you see new innovation there, and Kafka is really at the heart of that in many different infrastructures. Often you don't see it because it's under the hood. And it's not just that these typical end-user projects are using Kafka; the software and technology vendors themselves also use Kafka under the hood for building new products.

            [Erik]

            Okay. This has been really interesting. What have we missed here? What else is important for people to understand about event streaming?

            [Kai]

            The most important thing is really that today it's much more than just ingesting data into a data lake. That's what people have known it for over the last five years, but today, with Kafka, event streaming is used mostly for mission-critical systems. That's what 95 percent of our customers do, and that's why they come to us: we have the expertise with Kafka and built many parts of it. And it doesn't matter if it's just at the edge or really a global deployment. We provide technologies so that you can deploy Kafka globally. We have many industrial customers that run it in plants all over the world, and you can still replicate and integrate in real time, even for big data sets, all over the world. There are different components and different architectural options, with different SLAs of course, but this is really the key point to take away from this session for industrial IoT.

            [Erik]

            Kai, thank you so much for taking the time. Last question from my side: how should people reach out to you?

            [Kai]

            I would be glad if you connect with me on LinkedIn or Twitter; I'm really present there with a lot of updates about use cases and architectures. And of course, you can check out my blog, kai-waehner.de, or just check the links in the show notes. I blog there about IoT a lot, every week or two, with many different use cases and architectures around event streaming.

            [Erik]

            Okay, perfect. Then we'll put those links in the show notes. Kai, thanks again.

            [Kai]

            Yeah, you're welcome. Great to be here.

            [Outro]

            Thanks for tuning in to another edition of the industrial IOT spotlight. Don't forget to follow us on Twitter at IoTONEHQ and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com

            EP065 - How cloud-edge hybrid strategies drive IoT success - Sastry Malladi, CTO Co-Founder, Foghorn
            Tuesday, Jun 30, 2020

            In this episode, we discuss the business value of machine learning on the edge, and the increasing need for hybrid edge-cloud architectures. We also propose some technology trends that will increase usability and functionality of edge computing systems.

             

            Sastry is the CTO and co-founder of Foghorn. He is responsible for and oversees all technology and product development. Sastry's expertise includes developing, leading, and architecting highly scalable and distributed systems in the areas of big data, SOA, microservices architecture, application servers, Java/J2EE/web services middleware, and cloud computing.

            FogHorn is a leading developer of edge intelligence software for industrial and commercial IoT application solutions. FogHorn’s software platform brings the power of advanced analytics and machine learning to the on-premises edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as smart grid, smart city, smart building and connected vehicle applications. info@foghorn.io

            _______

            Automated Transcript

            [Intro]

            Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

            Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today is Sastry Malladi, CTO and co-founder of FogHorn. FogHorn delivers comprehensive data enrichment and real-time analytics on high volumes of data at the edge by optimizing for constrained compute footprints and limited connectivity. In this talk, we discussed the business value of machine learning on the edge and the need for hybrid edge-cloud architectures. We also explored technology trends that will increase the usability and functionality of edge computing systems. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com.

            [Erik]

            Sastry, thank you for joining us today.

            [Sastry]

            It's my pleasure.

            [Erik]

            So today we have a slightly technical topic: how cloud-dominated solutions will adopt a more edge-first or cloud-edge hybrid approach.

            But before we get into the technical details, I want to understand more about where you're coming from and the background of FogHorn. Starting with your background: I know you're now the CTO of FogHorn, and I believe you joined about four and a half years ago. How did you come to end up at FogHorn? What was the path that led you to this company, and what was it about FogHorn that made you feel this is a company with high potential?

            [Sastry]

            Absolutely. So yes, I'm a co-founder and CTO of FogHorn; it has been about four and a half years plus. I'm an entrepreneur at heart with a technology background. I've worked in leadership roles as well as executive management roles for the past two decades or so, on and off, with big companies, startups, and self-funded companies. My background starts from hardware devices, operating systems, applications, and networking, and slowly worked back up to application servers, big data, and so on.

            As for how I got interested: our seed investor, The Hive, which is based here in Palo Alto, typically sets aside some seed funds and looks for founders to solve certain problems. One of the problem areas they wanted to address was the industrial IoT space, and they had been looking for founders who could come in and help build the technology to solve specific problems; we'll get into the context of what we're actually solving in a second. That's how they started talking to me, for about six months or so, and then the other co-founder, David King, also came on board around the same time. That's how I ended up at FogHorn, and I never looked back; I've been enjoying it since then. We're actually building a pretty cool company to solve a real problem for industrial IoT customers.

            [Erik]

            That's interesting. So it was actually a seed fund that had a problem they thought needed to be solved, and then they basically recruited you, or rather they scouted founders they thought would be able to solve it.

            [Sastry]

            That's it exactly. And the way it works is that once they recruit us, they leave it to us. We actually go raise the funds: we started with the seed round, and we just recently closed a Series C last November, after the A and B. Then we basically hired the rest of the team, built the product, took it to market, built the customer base, and so on. The seed fund company helps us bootstrap, and then we take it from there; that's really how their model works.

            [Erik]

            Okay. And I'm sure they're happy with the results, as FogHorn has great traction right now.

            [Sastry]

            Yeah, absolutely. So far so good.

            [Erik]

            Did you know David before you co-founded the company with him?

            [Sastry]

            Actually, I did not. I met David when our seed investor introduced us.

            [Erik]

            Tell us a little bit about what FogHorn is and what problems you solve, before we go into the technical details. What is the value proposition behind the company?

            [Sastry]

            So if we look at IoT, I know IoT is a buzzword a lot of people use. But if you look specifically at the industrial sector, whether you're talking about manufacturing, oil and gas, or transportation, problems do occur across the board, right? There are yield improvement issues, scrap issues, or predictive maintenance issues. Up until now, what companies have always been doing is trying to somehow collect data about the assets they want to monitor, to optimize their assets and their business outcomes, and then ship it all into a cloud environment, do some analysis there, and try to find what the problem is. That has not been working really well for them. It's not only not cost effective, but it's also not practical to send all of that information into a cloud environment, process it there, and then send the results back to the asset.

            By the time the data makes all of those hops, whatever issue they were trying to solve has already happened: maybe the machine is down, maybe the part they were manufacturing was bad. It's too late, and it hasn't helped them. So what we set out to solve is exactly that problem: how do we enable these customers to figure out problems in a proactive, predictive manner? Meaning, some time from now the machine is going to fail, or the parts coming out are going to be defective, so at the edge we automatically send an alert to the operators so they can fix it, with the simple goal of optimizing business outcomes: reducing their scrap, improving their yields, as well as doing predictive maintenance and things like that. That's the fundamental premise: deriving actionable insights that help the business outcomes. That's really what we do. Obviously there are lots of challenges in doing that, and that's where we had to invent our technology, which did not exist before. You're working in constrained environments, and typical existing software does not run in those environments, so we had to come up with an innovative way to do this on the live data coming in from these assets. That's really how we got started.

            [Erik]

            And you're purely software, right? For all of the hardware involved, you'd be working with partners.

            [Sastry]

            That is exactly right, we do the software. But as you can imagine, and as you've probably seen, we have lots of investors that are also hardware partners. Dell, for example, is an investor and a close partner; HPE is not an investor but a close partner. We've got Bosch, we've got Advantech, a number of hardware partners that we work closely with, and we certify their hardware, but we don't actually sell any hardware ourselves. In fact, these hardware manufacturers have a catalog of SKUs where they preload and test our software. If a customer wants to buy their hardware plus our software, we do have those kinds of bundled packages available, one way or another.

            [Erik]

            Okay, yeah. And actually that's a great business model, right? As a younger company, building up a sales force to reach a broad market is quite challenging, so having HPE, Dell, Bosch, and so forth bring your solution to market is probably the right way to enter. On the software side, I guess we can divide the software into, let's say, the architecture elements around how you capture, process, and manage data, and then the specific application. Every customer has their specific problem, and I suppose those applications are somewhat standardized in some cases, but in others there's going to be some customization around the requirements. Are you typically doing both the underlying architecture and the application, or would you in many cases provide the architecture but work with a third-party software provider that has an application for a specific problem?

            [Sastry]

            Well, that's actually a good question. Let me take a couple of minutes to explain. Our fundamental business model is annual subscription software. We provide the software, which is a core engine that customers can install onto existing devices. Remember, our core IP here is the ability to run on constrained devices, whether that's an existing PLC, an existing small ruggedized Raspberry Pi-class device, or in some cases the asset itself; we can fit our software into a small-footprint compute environment. Once you do that, you can configure the software with all of the local sensor information and program, using our tools, what it is you want to detect and what specific problem you want to solve. Now, what we learned early on, when we started shipping this product back in 2016, is that many of these customers, because of their industrial nature, are not necessarily technology focused, and therefore they would come and ask us: can you help us not only install it, but also configure and program your tools so we can detect the problem we're looking for?

            And we started doing that. Obviously, as a young company, we want to get into these customer accounts, so we did it, and within four months or so we realized that since every single customer was asking for it, we had to find a real solution. We solved that in two ways. One is that we started building an internal data science and technical services division to help with it: if you want to do a pilot and you need a couple of our data scientists to help you get set up using our tools, we can do that, and many customers continue to use that. We've also established a huge slew of partnerships across the globe, from the large SIs, such as Wipro and the other Indian SIs, to Accenture, Deloitte, and TCS, as well as smaller local system integrators around the world; those are all our partners.

            They are familiar with our software; we have trained them, and they run it in their labs. So when a customer comes to us, or to them frankly, looking for a solution that requires our type of technology, they can handle that as well. But we do have an internal data science division that also helps with a lot of the pilots. Now, one other point I'd make before I pass it back to you: over the last four or five years we have done a number of such pilots with many, many Fortune 500 customers, and we began to identify the repeatable, commonly used use cases across them. We started packaging those together so that neither the customer, nor we, nor a third party needs to do any customization per se; the customer gets a packaged, out-of-the-box solution, installs it themselves, fine-tunes it using our UI, and gets it up and running for these commonly used use cases. And that's where we're finding a lot more traction these days, over the last year or so.

            [Erik]

            Okay. These packaged solutions, you'd be developing those in house, potentially with a strategic partner. Is that the case?

            [Sastry]

            Not with a partner, necessarily; the software is developed only by ourselves. But we do have strategic partners. For example, say we're developing a solution that requires a camera (we'll get into use cases in a second); we've got partnerships with camera vendors, starting with Bosch and a number of others. Or say the solution requires particular sensors and somebody wants those installed too; we've got partnerships with sensor vendors for that. So it all depends, but our core packaged solution development, the software itself, is done in house, and the partnerships come in when it comes to bundling the hardware.

            [Erik]

            Okay, very clear. Let's cover use cases. What would be, let's say, the top five use cases that would typically be relevant for FogHorn?

            [Sastry]

            Before I get to use cases, just one word: we are a generic platform engine. Edge ML and Edge AI are some of our trademarks, meaning being able to run machine learning and AI in a constrained, small compute environment, and of course traditional analytics and CEP-based analytics too. That's our core engine, which we built from the ground up, and we've got several patents on it granted across the globe. Now, use cases. Initially we started out in manufacturing, doing both process and discrete manufacturing use cases. Almost all of them can be categorized as either yield improvement or scrap reduction: the machine is manufacturing a particular part or product, that product comes out defective, and our software tries to predict and detect it ahead of time, before defective parts are produced, so it can be fixed.

            That's one type. The second type applies to many kinds of machines, whether CNC machines, pumps, compressors, any number of asset types; regardless of the asset, we do a predictive analysis, either for the parts that come out of it or, secondarily, for the process itself. If there is a problem with the process - maybe it's not feeding the parts properly, maybe the inputs themselves are wrong, maybe the temperature control is wrong, whatever the process issues are - we can detect that as well. Those are a couple of types of use cases on the manufacturing side, and I'll give you specific examples when we get to case studies. Switching to a different vertical, like oil and gas, the types of problems are different, from upstream to downstream and midstream. When you're drilling for oil, there can be a number of issues.

            For example, there is blowout prevention optimization that may need to be done, or there is contamination of fluids while drilling that forces them to stop. Or there is flaring: as you're refining the gas you've just drilled, potentially due to compressor problems or other issues, you can get a phenomenon called flaring, where gases are released into the atmosphere, causing emission problems, EPA regulation violations, penalties, and so on. We can predict and proactively prevent that, and monitor whether emissions are exceeding certain thresholds and take care of that too. Then there are other problems while drilling, for example an issue with the drill bit, or with the steam traps, or with something else, plus gas leaks, methane emissions, things like that. The types of use cases vary widely within that sector as well.

            But again, you might be wondering how we can handle such a vast variety of use cases. Fundamentally, the way to solve a use case is to install our software, configure it to auto-detect the sensors, and use our tools to program specifically what you're trying to detect, unless there's a packaged solution available from us. Then, moving on to transportation, which is another vertical we're working in: the types of use cases there are, again, efficiency and optimization of assets. We started with locomotives, initially with GE, which was also an early investor in us. They installed our software within the locomotive itself, to optimize and predict fuel efficiency, detect abnormal conditions, and monitor the wear and tear of the equipment itself, things like that. Then we moved on to fleet management, especially for trucks, monitoring driver behavior and vehicle condition, all the way to autonomous driving vehicles.

            Now we're working with automotive companies to install our software inside their vehicles. This has been publicly announced: Porsche is one of our customers, and we have done a number of use cases there; we can talk about those as well. So as you can imagine, the use cases range from predictive maintenance and proactive failure-condition detection to condition monitoring of assets across these three different sectors. And lately, one more thing before I pass it back to you: we have also been getting into energy management use cases, especially in buildings - smart buildings, whether office buildings, school buildings, or hospitality buildings. We have also partnered with Honeywell, and we now have a solution to optimize energy consumption in those buildings by simply running our software, connecting to the sensors, and programming it to detect those conditions. So it's a whole gamut of use cases that we've been after.

            [Erik]

            A wide variety. I imagine that even though you're bringing your own integrated, more standardized solutions to market, in many cases some degree of customization is required. If you can give me a rough estimate: what proportion of your customers are able to do this in house, and what proportion require some external support, whether from you or from a third-party system integrator?

            [Sastry]

            Yeah. If you had asked me, say, up until last year, the majority of our customers were using help in one shape or form, either from us to customize or build the solution for them, or collaborating with an SI. But in the last 12 months that picture has been changing quite significantly and rapidly. Now that we've started rolling out these packaged solutions, the customers buying them no longer have to depend on any services support from us; part of the packaged solution is a UI that helps them. I'll give you a simple example. Let's say you're talking about flare monitoring. Flare monitoring is a vision-based ML/AI system: we take the video camera feed, we take the compressor sensors and valve positions, all of the different sensors.

            Then we build the packaged solution, which includes a machine learning model that processes those images to identify certain KPIs. Now, obviously, when you take that solution and install it in a different customer environment, maybe their flare looks slightly different, maybe the conditions are different, maybe the camera positioning is different, maybe the resolution is different; something in the environment is definitely different. Therefore, for the solution to accurately produce those KPIs, it has to be fine-tuned to their specific environment. To do that, rather than hiring us for services or going to an SI for help, we've built a UI where they can come in, upload their video, upload their parameters, and it walks them through how to fine-tune it themselves and recreate the solution for their environment. So in other words, in the last 12 months the number of customers depending on our services has been shrinking, whereas in the first few years it was the vast majority of them, although quite a few were also doing it on their own.

            [Erik]

            Okay, great. That's a very positive trend we've been seeing: companies making the interfaces much more comfortable for a nontechnical user to modify.

            [Sastry]

            That's exactly right. If I may add something I neglected to mention before: one of the core strengths of our offering is that we are OT centric, operational technology centric. As I mentioned early on, a lot of our customers are not highly technology savvy. They're all engineers, but from a manufacturing or mechanical standpoint; they're not necessarily computer science engineers. So if you go and ask them to program something complicated, it's going to be really hard for them. So from the get-go we started building what we call OT-centric tools: drag-and-drop tools where they can drag and drop a sensor definition, identify and express what it is they want to derive, and we take care of the coding behind the scenes. We definitely take pride in that; it's really, really important to put out OT-centric tools, as opposed to IT-centric tools, in order to be successful in this market.

            [Erik]

            That's a great point. I was just going to ask who the buyer or the system owner is. I suppose 10 years ago it would have been IT, but it sounds like that's not the case here. So is it the engineering team you'd be working with, or who would be a typical buyer and then system owner once the solution is running?

            [Sastry]

            Yeah, that's another great question. I would split it into two parts. The user of our software is the actual operator, the engineer in the plant environment, the refinery, the vehicle, or whatever it is; those are the users of the system. But obviously the person who holds the budget and actually buys it is somebody like the CIO or CTO, whatever the role is. Now, unlike a typical IT sale, where you convince the budget owner, the CIO or the CTO, sell it, and then everybody starts using it, it's not that simple in these environments, because the person who owns the budget and has the money to buy is not the same person or team that's actually going to use it. So we have to bring both to the table and make sure we convince the operator and the engineer that this actually solves the problem for them.

            And of course, if they don't have the budget, it doesn't help, even if the engineer and the operator think it's going to be useful. So it's a three-part conversation. You have to have the budget first, to make sure somebody has the money to pay for it. Second, you have to have a business problem - an identified, agreed-upon business problem that you want to solve. It can't be a science experiment where somebody wakes up one day and says, let's just try something new; it has to be a valid, agreed-upon business problem. And then the operator needs to feel that the solution we're offering actually solves that problem. That's how it starts.

            [Erik]

            Okay. And I suppose because you have these two different stakeholders - the user and the buyer are different entities - pilots are important to some degree. But it's a topic that's come up quite often lately: the challenge where a pilot is implemented, and then the solution at scale is fundamentally different from the problem being solved in the specific pilot, so in many cases pilots don't scale well. How do you address the issue of having to do a pilot to demonstrate value to both of these stakeholders, while ensuring that the pilot will actually scale and provide the same required value across the enterprise?

            [Sastry]

            As you guessed, it almost always starts with a pilot, because they want to make sure we're actually able to run the software in their environment, that we can connect to their equipment and sensors, and that we can in fact show we're able to predict the failure conditions they care about. Typically the pilot runs anywhere from two to six months, depending on the customer. But before we get into the pilot, we always have contract negotiations to agree on what happens next if the pilot is successful. And because we have been doing this for the last four or five years, with some deployments running in production at large scale, we have enhanced all of our tooling with scale in mind: it's not just one device, not just one machine that you're connecting to.

            Of course they'll connect just one in the pilot, but beyond the pilot you've got multiple locations, multiple sites, multiple machines. How do you scale that up? How do you take the same solution you already built and, with one click, deploy it across many sites, and then still further localize and customize it to each specific environment? So we built a tool called FogHorn Manager that helps with large-scale deployments and local customization, and also things like auto-discovery. To scale this up, manually configuring a system to list all of the sensors would be practically impossible, not to mention error prone, so we've got tools built in to automatically discover which sensors are actually available.

            We present that to the user and allow them to customize the solution; once they've customized and localized it, with one single click they can select a number of these devices at once and push the same thing to all of them. The same mechanism handles any updates afterwards, since it's not a one-time thing: from time to time we might release patches, bug fixes, or upgrades. We use container technology to ship this to a number of sites automatically, without anyone having to physically touch each site. So we have considered all of that - the management, monitoring, and configuration tools for scaling up - from the get-go. Luckily we have had big partners, many of them early investors, who helped us test the scaling aspects in their environments, and that's how we beefed it up. Of course, we continue to learn from each customer deployment and see what else we can improve; it's an ongoing process.
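
            For readers unfamiliar with the container-based update pattern described here, below is a generic sketch in Python using the Docker SDK. It is not FogHorn Manager or any FogHorn API; the image name, container name, and site endpoints are made up purely to illustrate pushing one release to many edge nodes.

```python
import docker

REPO = "registry.example.com/edge-analytics"  # hypothetical release image
TAG = "1.4.2"
IMAGE = f"{REPO}:{TAG}"
CONTAINER_NAME = "edge-analytics"

def update_edge_node(docker_host_url: str) -> None:
    """Pull the new image on one edge node and restart its analytics container."""
    client = docker.DockerClient(base_url=docker_host_url)
    client.images.pull(REPO, tag=TAG)
    try:
        old = client.containers.get(CONTAINER_NAME)
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass  # first deployment on this node
    client.containers.run(
        IMAGE,
        name=CONTAINER_NAME,
        detach=True,
        restart_policy={"Name": "always"},  # survive reboots at the remote site
    )

# Push the same release to a fleet of sites in one pass (endpoints are assumptions).
for host in ("tcp://site-a.example.com:2376", "tcp://site-b.example.com:2376"):
    update_edge_node(host)
```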

            [Erik]

            Okay, great. Very interesting. Let's turn to the technology, and in particular this discussion of when to use cloud, edge, or a hybrid system. We have maybe four terms we should define upfront for listeners who are not so familiar with them: the cloud, the edge, the fog, and hybrid systems. Could you, in your own words, define those four alternative architectures?

            [Sastry]

            Yeah, absolutely. Let's talk about the cloud first, which I'm sure all of us are familiar with. By the way, 15 or 20 years ago I had my own self-funded startup in that space, and at that time people called it different things: I called it grid, some other folks called it utility computing, and ultimately the word cloud stuck. In any case, it's the hosted, centralized, data-center-like environment where all of the data processing and computation happens in a central location, with the major providers being Microsoft, AWS, Google, and so on. That's fairly clear, so I won't spend a lot of time on it. Now, before I go to the edge, let's talk about fog, because we in fact named our company FogHorn early on, and the context dates back eight years or so.

            Cisco initially came up with the term fog computing, although they didn't quite execute on it. The concept was that at the edge of the network you've got assets - manufacturing machines, oil refineries, buildings, whatever these things are - and any computation that happens closer to them, sometimes on the assets themselves, is what was initially called fog computing. That's why we named our company FogHorn. You may have noticed that we no longer use the term fog in any of our reference materials, and there's a reason for that. What happened over the last six years or so is that a lot of folks, as well as standards organizations like the OpenFog Consortium, which we were part of, started to dilute the definition of fog computing. People started saying fog is anywhere between edge and cloud.

            It's a continuum, it's this, it's that, and the definition got diluted. So we stopped using the term fog; in fact, not many people use it anymore. Edge is the definition that is sticking right now. It simply means the edge of the network, close to the asset or on the asset itself, where you do some computation to identify or predict whatever you're trying to detect. Now, there is a slight variation of edge, especially with 5G and the mobile network operators coming in, called MEC; it has been called both mobile edge computing and multi-access edge computing. In their definition of edge, which is the other definition that sticks today, rather than treating the asset or the edge of the network as the edge, they treat the base station - the cell tower base station - as the edge: the data from the assets flows into the base station, and that's their edge computing.

            It's not all the way to the cloud, but it is somewhere in between; that's their definition. So we've talked about cloud, fog, two different flavors of edge, and then hybrid systems. What is really practical today is a hybrid system. Almost every single customer we have deployed to, anyone you talk to, uses a hybrid system, because edge is good for what it's good for, and cloud is good for what it's good for. When you have historical data, petabyte-scale services, aggregation across multiple sites, visibility across the whole company, and things like that, those services are typically hosted in the cloud, so the cloud still has a role to play. Edge, on the other hand, is for the immediate problems: people have plants and factories with real problems, and they need to solve them right then and there, as they're happening. They can't wait for the information to be shipped to the cloud

            and for somebody to tell them afterwards, here's the problem, go fix it; by then it's probably too late. So most customers deploy our edge software to find the problems in real time, derive the insight, and take care of business, and then use the cloud to transport those insights from each of the different locations into central cloud storage or a cloud service. That's where the aggregation happens, and that's where any fine-tuning of the machine learning models, or the building of the models, happens, not to mention central dashboarding and company-wide visibility. So hybrid systems are most definitely where every customer is landing these days.
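
            As a concrete illustration of this split, here is a minimal sketch, assuming a hypothetical cloud endpoint and a made-up anomaly rule, of an edge node reducing a window of raw readings to a single insight and shipping only that summary upstream; the raw samples never leave the site.

```python
import statistics
import requests

CLOUD_ENDPOINT = "https://cloud.example.com/api/insights"  # hypothetical endpoint

def summarize_and_ship(readings, asset_id):
    """Reduce a window of raw sensor readings to one insight and post it to the cloud."""
    mean = statistics.mean(readings)
    insight = {
        "asset": asset_id,
        "mean": mean,
        "stdev": statistics.pstdev(readings),
        "anomaly": max(readings) > 1.5 * mean,  # illustrative rule, not a real model
    }
    requests.post(CLOUD_ENDPOINT, json=insight, timeout=5)

# A few kilobytes of insight replace megabytes of raw samples per window.
summarize_and_ship([71.2, 70.9, 72.4, 96.3], asset_id="compressor-7")
```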

            [Erik]

            The edge, as you said, can be a number of different things. It could be a base station; in many cases it's a gateway, which I think is quite common; it could even be a sensor or something with very low compute power. But because you're dealing with ML and workloads that are a bit heavier, is it typically a gateway that you'd be deployed on, or is there a wider range of hardware where your compute could be located?

            [Sastry]

            It's a wider range. Remember, I mentioned that a core part of our technology is the notion that we can run in constrained environments. We have a technology we call edgification. What it means is that analytics and machine learning models that run in a cloud environment almost always assume an infinite amount of compute, storage, and memory. That's not the case on these constrained devices. So we came up with a number of techniques to edgify those analytics and machine learning models so they can run in a constrained environment: quantization, binarization, CEP, converting Python code to run on our CEP engine, plus software-based as well as hardware-based acceleration, things like that. Having said that, if you're not doing vision-based deep learning, machine learning models can normally run in about 100 to 150 megabytes of memory on a dual-core CPU.

            That's typically what you would find in a PLC, or in something half the size of a ruggedized Raspberry Pi; you can run a lot of the analytics there. But the moment you connect video cameras, audio, acoustic, or vibration sensors, combine them all together, and do deep learning, you need a bit more memory. That's where devices like gateways come in, whether Dell IoT gateways, HPE gateways, or other ruggedized ARM-based devices and ruggedized Raspberry Pis, things like that. So it depends on the use case: deep learning requires a bit more power, maybe a few gigabytes of memory, but traditional analytics without deep learning you can actually fit into a PLC or a very small gateway device.
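
            FogHorn's edgification techniques are proprietary, but the general idea of shrinking a model with quantization so it fits a constrained device can be illustrated with an open-source tool. Below is a minimal sketch using TensorFlow Lite post-training quantization; the model path and file names are assumptions.

```python
import tensorflow as tf

# Convert a trained model (assumed to be exported as a SavedModel) into a
# quantized TensorFlow Lite flatbuffer suitable for a small gateway device.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/")  # assumed path
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables post-training quantization
tflite_model = converter.convert()

with open("model_quantized.tflite", "wb") as f:
    f.write(tflite_model)

# On the edge device, the lightweight interpreter runs the shrunken model.
interpreter = tf.lite.Interpreter(model_path="model_quantized.tflite")
interpreter.allocate_tensors()
print("input tensors:", interpreter.get_input_details())
```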

            [Erik]

            We've covered the architecture. What would be the decision criteria? You've already alluded to this, but I want to make sure it's very clear for everybody listening. We're talking about latency, bandwidth, certainly cost of hardware, cost of data transfer, and so forth. How would you break down the decision when you're determining what type of architecture fits a particular use case?

            [Sastry]

            I think you started out listing some of those, and they are the right criteria. First of all, we always start with the business problem and the business impact, because if it's a simple science experiment, it's not really good for either of us. For example, a customer comes and says: here's my problem, every single day I've got X number of parts coming out of this machine defective, and that's costing me Y amount of impact to the business; we've got to solve that. It always starts with a business problem. Now, how do we solve it? The immediate, obvious approach might be: let's connect all the sensors, send the data to the cloud environment, process it, come back, analyze it, and see what it is.

            That's all good if it's a one-time thing, meaning the problem happens just once, you fix it, and you never have it again. Unfortunately, that's not the case; it's a continuous problem. Even when you have a solution, there is sometimes drift - environmental changes, calibration issues - and as the data changes, the same solution may not work, so it has to be continuously kept up to date. And then there's the other issue: the problem is not solved if you tell the operator after the fact that something happened yesterday, or even one hour ago. What's the point? It has already happened. How do you predict it and tell them ahead of time, just in time, so they have a chance to go fix and rectify what's happening? This is where the edge obviously makes more sense.

            Once you start from there, the next questions are: what kind of data is actually available, what kind of hardware do you need, is the existing hardware enough, do you need to acquire a new gateway, does the connectivity exist or not? Those kinds of questions come up, but ultimately it's about the cost of the edge deployment - and of course the total cost, not just the software: the hardware, any networking, any sensors that need to be installed. What's the total cost of that, in comparison to how much of the business problem it's actually going to solve? If they've got a million dollars of scrap happening every month, for example, and your software is going to cost them a million dollars, you're not solving anything either. So they always weigh those two: the cost of owning the software versus the actual business impact.

            And then it's not so much about cloud versus edge. As I alluded to, it's always a hybrid; there is always going to be some cloud part, because most customers don't have just one site, they have multiple sites, so you've got to send insights from each site into a central location. Plus you will need to fine-tune the models based on the calibration issues that crop up out there. So the hybrid part is always there, but whether or not it makes sense for them to deploy an edge-based solution is purely and directly related to the extent of the business problem they're trying to solve.

            [Erik]

            In many cases people already have cloud environments up, right? So the one-time cost there is not going to be a factor; it's just the cost of transferring and storing data and so forth, which in some cases could be high and in others insignificant. Then there's probably more of a one-time cost when it comes to deploying an edge, but maybe lower operating costs, so there's a bit of a trade-off there. But it sounds like, for you, it's more about looking at it from a business perspective, seeing where the value is, and letting that drive the decision.

            [Sastry]

            Yeah, ultimately the business value is going to drive the decision, but some of the points you're making are still valid. Leave aside the initial cost of setting up a cloud environment: when you're transporting all of the raw data into the cloud - assuming latency is not a problem for the customer, which is a big if - a typical use case generates anywhere from megabytes to petabytes of data every single day. Transporting all of it into the cloud has transport costs, and then storage costs. At the outset it seems really cheap to store - every gigabyte only costs a few cents - but what happens over time, over months? It actually adds up very significantly.

            More importantly, much of that raw data is not very useful after a few hours. For example, a reading telling you that the temperature sensor on the machine measured 76 degrees is not very useful on its own later on - okay, so what if it was 76? What you need to find out, in real time, is what actually happened. But you're absolutely right, there are trade-offs: what is the cost of storing this information, can we afford the latency of transporting all of it, and is it solving enough of a business problem in real time to justify it? A number of factors come in, but ultimately it goes back to the CFO: any company is going to ask whether it has contributed to the bottom line, whether it has helped prevent the losses. That's really how they're going to look at it.
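
            To make the "it adds up" point concrete, here is a small back-of-the-envelope calculation. Every figure in it - the daily data volume, the insight volume, and the per-gigabyte storage price - is an illustrative assumption, not a number from the interview or from any provider's price list.

```python
# Illustrative assumptions only; substitute your own volumes and prices.
GB_PER_DAY_RAW = 50.0        # raw sensor/video data per site per day (assumed)
GB_PER_DAY_INSIGHTS = 0.05   # edge-derived insights per site per day (assumed)
PRICE_PER_GB_MONTH = 0.023   # assumed object-storage price in USD per GB-month

def cumulative_storage_cost(gb_per_day, months):
    """Total spend over `months` when every retained GB is billed each month."""
    total = 0.0
    stored_gb = 0.0
    for _ in range(months):
        stored_gb += gb_per_day * 30              # new data added this month
        total += stored_gb * PRICE_PER_GB_MONTH   # pay for everything kept so far
    return total

for months in (6, 12, 24):
    raw = cumulative_storage_cost(GB_PER_DAY_RAW, months)
    slim = cumulative_storage_cost(GB_PER_DAY_INSIGHTS, months)
    print(f"{months:>2} months: raw ~${raw:,.0f} vs insights-only ~${slim:,.0f}")
```

            Because each month's data keeps being billed in every later month, the raw-data spend grows roughly quadratically with time, which is why a per-gigabyte price that looks negligible on day one becomes a real line item after a couple of years.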

            [Erik]

            I want to get to a case study, but just one more question on the technology side before we move there. We have a number of trends right now which could be pushing in different directions. We have 5G, which I think some of the cloud architecture providers are hoping will make cloud somewhat more real-time, potentially even lower cost. And then we have improvements in hardware that make edge computing much more powerful. So what technology trends are you paying closest attention to that will impact how this architecture is structured in the coming years?

            [Sastry]

            We are working very closely with all of the major telcos across the globe, not only here in the U.S. Mobile edge computing, or MEC as they call it, is real right now. In scenarios where, say, a manufacturing plant already has connectivity hardware and industrial networking installed, most of the time they're going to run the edge analysis locally. But with 5G coming up, and the notion of MEC coming up, what's happening is that people are going to these plants and manufacturing customers and saying: you don't really need to install this industrial networking or any of that connectivity; simply use 5G and send all that information to the mobile base station, and that's where the compute happens.

            These telcos are actually leveraging us: we're simply running our software in the base station instead of in the plant. That's the technology trend shaping up right now, and I think it's going to evolve quite a bit in the coming year or two. We're paying very close attention and are already working with telcos on shifting edge computing - in cases where there is no infrastructure installed - into these MEC base stations. So people will have a choice to make: whether to install the infrastructure within their plant, or to use 5G and transport the data to a MEC. And because of 5G's very low latency, they will not see a difference; it's a matter of choice whether the compute happens locally or in the base station, as opposed to all the way in the cloud.

            [Erik]

            Okay, great. Let's go into one or two of the case studies, then. Do you have one in mind that you could walk us through, ideally with an end-to-end perspective, from the initial conversation with the customer through deployment?

            [Sastry]

            Yeah, absolutely. Let's talk about Stanley Black & Decker. You've probably heard of the company; it's one of the largest tool manufacturers - pretty much anything you can think of that we all use in the household, they make. We met them maybe two and a half years ago at a GE-sponsored event. They came by, talked to us, and we showcased the manufacturing problems we had been solving for GE and a number of others. They started brainstorming with us and said: look, we've got a problem too. They've got about 80-plus plants across the globe, as you can imagine, manufacturing all kinds of things, from measuring tapes to tools, toolboxes, and high-powered hammers.

            What they found, after going through their analysis - a McKinsey analysis - and figuring out exactly what the business problem was, is that each and every one of these plants had a lot of scrap. They were only detecting it at quality inspection time, which was too late, so they threw all of that away, and it was costing them millions of dollars of pretty high-value product. So when they heard about us and came and talked to us, about two and a half years ago, and we showcased what we had done for similar use cases and similar customers, they wanted to do some kind of pilot. Initially they wanted to start with one or two use cases to see whether this was actually real or not.

            Obviously they did not get into any contract right away; first they wanted to understand how real the solution was, because their problem is huge across all of these plants. We have talked about this publicly, and we've also done joint seminars and papers, so some of what I'm saying is publicly available. I'll talk about one specific use case, which is kind of interesting. One of the plants, in New Britain, Connecticut, manufactures, among other things, measuring tapes - the ones we all use, the white tapes and the traditional yellow tapes that I'm sure you've seen. They make a lot of that tape every single day.

            It's a very high-speed, fast-moving piece of factory machinery. As they're making this tape, sometimes what comes out has extra ink, extra paint, broken markings, or measurement markings that aren't quite right - somewhere between 50 and 150 different types of defects can occur. It's almost impossible for anyone to notice, because the tape is being manufactured and moving at a very high rate. At the end of the day it goes to quality inspection, where somebody physically, manually checks whether everything is okay. Occasionally they spot a bad product and throw that part away right there; if they don't spot it, it goes on to the distribution center the next day, where somebody else notices it, and then it's thrown away.

That's costing them millions of dollars, right? So they wanted to solve that problem. Prior to talking to us, they had talked to another company — National Instruments, for example — and put in a system built on LabVIEW, which is basically a vision-based system. They installed a video camera pointing at the tape-measure manufacturing machine, mostly watching for defects occurring on the tape and displaying them on a dashboard. Of course an operator has to look at that dashboard, which is also impossible — nobody is going to watch a screen all the time — but the system is supposed to flag any defect so the operator can go and take care of it. What actually ended up happening was that 90% of the time it was giving them false positives.

So effectively they were not detecting anything — if anything, it was causing more churn for the operators and less productivity, because when the system raised a false positive they stopped the machine only to find out it was not actually a defect, or something else was the defect. Luckily, they already had this video camera installed, and they had the machine connectivity set up and everything else, so that part was easy. We went and dropped our software onto the same system — they've got a compute gateway device there — and connected our software to that same video camera they had installed. It's a high-speed, high-resolution camera doing 60 frames per second; the tape is moving quite fast. Then, using our data science capability, we built a machine learning AI model on that live data to detect up to a hundred different types of defects.

Of course, I'm simplifying this: they gave us several different constraints and conditions as to when a defect matters. A certain defect happening a certain number of times within a certain distance, within some length of tape, is exactly the variation they're looking for — there are so many nuances to it. So we take all of that into account, and within a few milliseconds — remember, this has to happen in milliseconds, because anything beyond that is already too late — if what we detect exactly meets the criteria the operator is interested in, we send an automatic alert to the operator. We also automatically display it on the dashboard, whether anybody is looking at it or not. The operator then goes and stops the machine, takes that particular process out, fixes the paint or print process, and restarts it. We've essentially eliminated their scrap within this process.
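To make that sliding-window condition concrete, here is a minimal sketch of the kind of rule an operator might configure on top of per-frame model output. It is illustrative only — the defect class, thresholds, and field names are assumptions, not FogHorn's actual API.

```python
# Hypothetical rule: alert if the same defect class is seen N times within
# D metres of tape, based on per-frame detections from the vision model.
from collections import deque

class DefectRule:
    def __init__(self, defect_class, max_distance_m, min_count):
        self.defect_class = defect_class
        self.max_distance_m = max_distance_m
        self.min_count = min_count
        self.hits = deque()  # tape positions (metres) where the defect was seen

    def update(self, tape_position_m, detections):
        """detections: iterable of (class_name, confidence) for one frame."""
        if any(cls == self.defect_class and conf > 0.8 for cls, conf in detections):
            self.hits.append(tape_position_m)
        # Drop hits that have fallen outside the sliding window of tape length.
        while self.hits and tape_position_m - self.hits[0] > self.max_distance_m:
            self.hits.popleft()
        return len(self.hits) >= self.min_count  # True -> raise operator alert

rule = DefectRule("broken_marking", max_distance_m=2.0, min_count=3)
if rule.update(tape_position_m=120.4, detections=[("broken_marking", 0.93)]):
    print("ALERT: stop the print process")
```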

Once it's been running for several months and the operator is comfortable — hey, there are no false positives, it's detecting everything correctly, this is what we want, it's actually saving us money — then we have, as part of our product, an SDK, a software development kit. If they want to automate things programmatically — say, whenever you see this particular defect, go ahead and stop the machine, we don't need an operator involved — they're able to do that as well. They can write a simple program using our SDK to do it programmatically. But that level of maturity comes after deploying and running this for a certain period of time. So that's one big use case, end to end. And before getting into all of that, we obviously had a big contract saying, look, if this actually works, we're going to deploy a similar solution to all 80 plants — so we had a value-based contract created for them. Since then we've been working with many, many plants: some here in the US, some in Europe, some in Asia. So that's the use case.
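As an illustration of the automation step described above, here is a hypothetical sketch of wiring a "defect confirmed" event to a machine-stop action. The EdgeClient class and event names are stand-ins invented for this example; the real FogHorn SDK is not shown in this conversation.

```python
# Hypothetical illustration only: subscribe to a confirmed-defect event and,
# once the operator trusts the model, let software stop the line directly.
class EdgeClient:
    """Stand-in for an edge SDK client; events are pushed in manually here."""
    def __init__(self):
        self._handlers = {}

    def on(self, event_name, handler):
        self._handlers.setdefault(event_name, []).append(handler)

    def emit(self, event_name, payload):
        for handler in self._handlers.get(event_name, []):
            handler(payload)

def stop_machine(payload):
    # In a real deployment this would write a stop command to the line PLC.
    print(f"Stopping line {payload['line_id']} due to {payload['defect']}")

client = EdgeClient()
client.on("defect_confirmed", stop_machine)
client.emit("defect_confirmed", {"line_id": "tape-03", "defect": "broken_marking"})
```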

            [Erik]

Okay, great. So it sounds like a great use case, and a very common one. To what extent are you also able to run root cause analysis? Right now we're looking through a camera, so we're basically identifying the problem in real time and minimizing the amount of scrap by stopping the machine and solving it. I suppose, hypothetically at least, you could also take data from equipment upstream and potentially identify what caused the problem. Is that something you've been able to do, or looked into, in this use case or another?

            [Sastry]

In this use case we did not, because they did not have sensors for the upstream equipment, but we have certainly done root cause analysis in other use cases. I'll give you one — remember the flare monitoring example I mentioned, where you're processing and refining gas? Occasionally the pressure builds up in the compressor — that's a compressor problem, as a result of which it's not refining properly and they have to release the gas. That's one problem. The second problem is that they take what they call sour gas and try to sweeten it using a chemical process, and in that process a condition called foaming can sometimes occur, as a result of which a flare can happen too.

We initially built a solution for identifying whether a flare is happening or is about to happen, and whether the contents of the flare — basically the composition of the gases — are beyond certain values. But then the customer obviously wanted to find out, and we all wanted to find out, what was actually causing it. It's great that you're tracking the KPIs for the flare and can stop it, but what is actually causing it? So we fed in information from the compressor — the compressor sound — and tried to correlate whether a bad compressor sound directly correlates to, for example, bad smoke coming out of the flare. Sure enough, we were able to find that. And of course it didn't stop there either; the next question is why the compressor is bad. So we took all of the sensor data from the compressor to identify when the compressor would go bad or when the foaming condition occurred. So we have done root cause analysis — as long as the data is available and the sensors are available, we can get to the root cause.
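A rough sketch of the correlation step described here, assuming you already have a minute-level compressor-sound anomaly score and a flare-smoke severity series (both simulated below); the lead-time scan is the part that matters.

```python
# Lagged cross-correlation: does a bad compressor sound precede bad flare smoke?
import numpy as np

rng = np.random.default_rng(0)
compressor_anomaly = rng.random(1440)  # one day of minute-level anomaly scores
# Simulated flare severity that follows the compressor signal with a 15-minute lag.
flare_severity = 0.7 * np.roll(compressor_anomaly, 15) + 0.3 * rng.random(1440)

lags = range(0, 60)  # candidate lead times in minutes
corrs = [np.corrcoef(compressor_anomaly[:-lag or None], flare_severity[lag:])[0, 1]
         for lag in lags]
best = int(np.argmax(corrs))
print(f"strongest correlation {corrs[best]:.2f} at a lead time of {best} minutes")
```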

            [Erik]

So far, all of the use cases we've covered have been efficiency related to some degree. This is, I think, an interesting topic in the IoT space: the value of efficiency-related use cases is quite clear, and building a business case is often fairly easy because you have a clear cost KPI attached. On the other side, you have revenue-related business cases. So if I'm thinking about Stanley Black & Decker, or about an equipment OEM, you could also see them deploying some sort of edge computing solution on their equipment in order to provide new functionality, or a new distribution model, a new business model, to their customers. It sounds like this is not a focus of FogHorn today, but what is your perspective on the value of revenue-oriented edge computing deployments in the future? Is this something you expect to invest in over the coming few years, or do you think it's more a number of years away before it's going to be big?

            [Sastry]

We're already doing that. I'll give you a couple of examples. Even in the Stanley Black & Decker scenario, there's an example right on the machine itself: how do you know if the machine is being used properly? Are there revenue opportunities in producing additional products? That kind of supply chain management and identification is closely related — the efficiency of the machine certainly opens up opportunities for new revenue lines too, so in the SBD case there's a direct correlation. But for a separate, concrete revenue opportunity: we work closely with Honeywell. One of their divisions is called SPS, and they manufacture a device called Mobility Edge, a ruggedized handheld. A lot of operators in logistics and retail — companies like FedEx, UPS, Walmart — all use them.

Now, on those devices, I'll tell you a couple of use cases where the opportunity was there for them, and we keep adding more. For example, these operators scan boxes with barcodes on them. Many times the barcodes are damaged — they're not printed correctly, or they're torn, or there's not enough lighting, or something else. Then what happens when the operator uses the device to scan the barcode? It doesn't work, it doesn't get recorded, and the package gets left behind and takes its own course — there is significant business impact from that. What we've done is run our software on those devices too; we put our edge AI software on those Android-based devices. As the operator is scanning the barcode, that image is simultaneously sent to a FogHorn solution running behind the scenes, which reconstructs the image and sends it back to the user's application. The user isn't even aware this is happening.
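The flow described above can be sketched as a simple fallback pipeline. The decode and reconstruction functions below are stand-ins — the actual on-device model and scanner library are not part of this conversation.

```python
# Illustrative pipeline only: try the normal scan first, fall back to an edge
# "reconstruction" step when decoding fails, so the operator never notices.
def decode_barcode(image):
    # Stand-in for the scanner library; returns None when the label is unreadable.
    return image.get("payload")

def reconstruct(image):
    # Stand-in for the on-device ML model that cleans up a damaged label image.
    repaired = dict(image)
    repaired["payload"] = image.get("damaged_payload")
    return repaired

def scan(image):
    result = decode_barcode(image)
    if result is None:
        result = decode_barcode(reconstruct(image))
    return result

print(scan({"payload": None, "damaged_payload": "PKG-001234"}))  # -> PKG-001234
```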

            This is all happening successfully. So this is all again, based on, based on UI, this is additional business impact for that. But the other thing, now that it's running there, now, they're saying all of a sudden, you look, mr. Customer, you've got this device with you already. I've got now fog on AI on, I showed you one solution. Now let's put on additional solutions. Now there is trying to use the same device for health monitoring. Now all of this Colbert situation we've got, we actually have just, this is what Kara was referring to. We're going to announce the solution that we dispelled. Many customers started using it, health monitoring solution. It can run on the same device. All you need is a device and the compute power and a camera attached stage. We can do social distance monitoring, temperature, elevated temperature monitoring, you know, cop detection, mass detection, things of that nature. Now customers don't need to invest in anything other than buying this addition solution. They've got the device it's already running. Fogging is already running on that too. So the opportunities for additional revenue channels is quite clear in, in those kinds of scenarios. And once you have that platform up and running, how do you add more and more of these solutions to the customer to solve their problems for additional business lines?

            [Erik]

Okay, great. It sounds like you're already quite mature in this space. So if you were to divide the market, at a very high level, between efficiency and revenue growth potential, how would you see it today, and how do you see it moving forward?

            [Sastry]

It's hard to tell. Why? Because many times the customers themselves are not clear on it either, and they're not necessarily going to tell you. It's two sides of the same coin: you reduce the scrap, therefore you improve productivity, you produce more, and therefore you improve revenue. Do you call that a revenue improvement or an efficiency improvement? Customers bucket it differently, so it's really hard to say whether they classify it as a revenue-increase type of solution or a yield improvement — they're both sides of the same coin.

            [Erik]

That's a super interesting conversation. I think we're coming up on the hour and I want to be cognizant of your time, but is there anything else that you would like to cover today?

            [Sastry]

No, I think we've covered a lot of topics. In summary, what I want to say about our core differentiation and what we're seeing in the market is that edge has definitely taken off. Four years ago, if we talked about it, everybody said, oh, we just need cloud, we don't need edge. Now even the cloud players — and by the way, we have very close partnerships with all the major cloud providers, because we're complementary to what they offer — are coming in and saying the same thing: you actually need edge, cloud alone doesn't do it. So people have clearly recognized the need for edge computing, the role it plays and where it plays well, and the fact that in many cases people need a hybrid system to complement what they've done in the cloud.

On the AI part of it — analytics and machine learning — we have done quite a bit. The other thing I would say is that, having deployed this at many, many sites, the problem we're now trying to address is this notion of an automated closed loop. Once you deploy a solution, it's not going to keep producing the exact same accurate results forever. How do you continuously update it, automatically, in the loop? This is what we've been calling closed-loop ML or AI. I know it sounds like another buzzword, but it really isn't: how do you automatically update a model when there are changes happening in the system? These are the kinds of things we're working on, and it's really promising. You mentioned 5G — I think that is really taking off too. This is a super interesting trend.
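A minimal sketch of the closed-loop idea, assuming live accuracy can be estimated from operator confirmations and that a retraining job can be triggered; all names and thresholds below are illustrative, not product APIs.

```python
# Track rolling accuracy from operator feedback and flag model drift.
import random

def feedback_stream(n=1000):
    # Simulated operator feedback: confirmations become rarer as the model drifts.
    for i in range(n):
        yield random.random() < (0.99 - 0.0004 * i)

class ClosedLoopMonitor:
    def __init__(self, accuracy_floor=0.95, window=200):
        self.accuracy_floor = accuracy_floor
        self.window = window
        self.outcomes = []  # True = prediction confirmed correct by the operator

    def record(self, was_correct):
        self.outcomes.append(was_correct)
        self.outcomes = self.outcomes[-self.window:]
        if len(self.outcomes) < self.window:
            return False
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.accuracy_floor  # True -> time to retrain

monitor = ClosedLoopMonitor()
for confirmed in feedback_stream():
    if monitor.record(confirmed):
        print("accuracy drifted below the floor -> retrain on recent data, redeploy")
        break
```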

            [Erik]

There are a lot of different technologies — we've mentioned 5G already, but also ML architectures — that are really trending in your direction right now. So, a very positive outlook for FogHorn. Really interesting conversation today; I appreciate your time. The last question from my side: if somebody is interested in learning more about FogHorn, what's the best way for them to get in touch with your team?

            [Sastry]

If they send a message to info@foghorn.io, somebody will get in touch with them.

            [Outro]

            Thanks for tuning in to another edition of the industrial IOT spotlight. Don't forget to follow us on Twitter at

            Read More
            EP064 - Connecting assets as easy as installing an app on a smartphone - Peter Sorowka, CTO, Cybus.
            Tuesday, Jun 16, 2020

            In this episode, we discuss how improved data access control can remove barriers to use case adoption in complex environments by enabling effective IoT governance. We also imagine a future of connected industry where data is an asset to be monetized effectively.

            Peter Sorowka is the Chief Technology Officer and Managing Director of Cybus. Cybus develops a smart networking solution for Industry 4.0 and the Industrial Internet of Things (IIoT). Cybus enables innovative industrial equipment manufacturers to provide data-driven value-added services, such as quality management, remote monitoring or predictive maintenance, to its customers from manufacturing and logistics. For more information, please visit cybus.io.

            _______

            Automated Transcript

            [Intro]

Welcome to the industrial IOT spotlight, your number one spot for insight from industrial IOT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome back to the industrial IOT spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today is Peter Sorowka, managing director and CTO of Cybus. Cybus lets you connect digital services as easily as installing apps on your smartphone, and enables you to keep control of all the data that is about to leave your factory. In this talk, we discussed how improved data access and control can remove barriers to use case adoption in complex environments by enabling effective IoT governance. We also explored a future for connected industry in which data becomes a tangible asset that can be monetized through new business models. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com.

            [Erik]

            Thank you, Peter. Thank you for joining us today.

            [Peter]

            Thanks for having me.

            [Erik]

So Peter, today we're going to be discussing primarily Cybus and the solutions that you're bringing to market for the smart factory. But before we jump into the company, I want to learn a little bit more about where you yourself are coming from and how you ended up as the CTO of Cybus. Can you give us a quick background on what led you to the managing director and CTO role here?

            [Peter]

I myself am an electrical engineer originally, but in my studies and my research work after university I've always touched more bits and bytes than electrons, so I would consider myself a software engineer today. Roughly five years ago — maybe five and a half — the whole IoT, Industry 4.0 hype started, and back then two friends of mine and I decided to start a company in this field. The main motivation we had was that we were quite excited that the concepts of the internet — interconnectivity, APIs and so on — were proposed to be extended to the real world, and we immediately understood the plethora of use cases, applications, and systems that would need to be connected. We decided to take a very specific niche and create a company that works on the layer usually called the gateway layer, because that's a necessary evil everybody needs, but one that supposedly doesn't create any value on its own. We decided to be a company whose core competence sits exactly where many others have a necessary evil, and to provide a focused solution on that layer.

            [Erik]

Okay, great. This actually just came up in a conversation with an investor last week — the pain she was having with her portfolio companies there. But before we jump into that, I have to take you on a quick diversion, so apologies for this: I see that you were running a peer-to-peer sharing platform for sailboats. Can you give us a quick 60 seconds — what was this company that you had set up?

            [Peter]

That was very good. The name was a German play on words — today you would describe it as an Airbnb for sailboats. We enabled private owners to share their sailboats through an app we created in 2010, before Airbnb was well known, at least in Germany. We created everything from scratch: we programmed the whole system to connect the sailboats and set up the backend from scratch. Today I would say what we built back then was an IoT platform for sailboat sharing. So we already had some insight into the technology required for IoT use cases, and we had already made a number of mistakes — technological mistakes, wrong decisions — and learned a lot. With that experience, we decided to go into the industrial market, apply our learnings, and create a company from that.

            [Erik]

Very good. So let's dive in here. You've already alluded to the problem that you're solving. Can you describe a bit the tech stack of a typical solution where Cybus would be useful and how you fit in there? What is the pain that you're solving, and what does the typical tech stack look like?

[Peter]

We have focused on the industrial IoT market, and I would narrow it down a bit further to connecting assets in factories and production areas. It's really about connecting machines to IT systems in general. The interesting thing is that there are so many perspectives from which you can approach IoT in the factory, so many different stakeholders interested in connecting the very same machines to very different systems. You have a big data team that tries to connect machine data to a data lake in order to do big data analysis.

You have a maintenance team that wants to do smart, on-demand maintenance planning. You have a production manager who wants to introduce an MES system. Then you have all the clouds, and then you have all the external stakeholders — the machine builders, suppliers, customers, insurances, banks — and everybody wants to connect the same machines to different target systems. And as everybody brings along their own gateway that, more or less uncontrolled, sends data somewhere, there's an interesting playground, I would say, to introduce a data distribution layer: to give control into the hands of the factories and, in the end, governance into the hands of the IT departments, so they don't lose the overview of which data is collected and which data is sent where, and to avoid double-connecting machines — two different gateways for the same machine because there are two different target systems. That's the playground we're working on. Our offering is a pure standalone — not cloud-connected, or let's say not cloud-dependent — software solution that runs on premise in the factory and allows multitenant and multipurpose data distribution.

[Erik]

So you mentioned that there are many stakeholders who want access to data for many different purposes. And then, of course, there are some stakeholders who expressly don't want other stakeholders to access their data. I think a number of the larger legacy sensor and PLC manufacturers try to put up barriers, because this helps protect their market share. Are you relevant in that battle to gain access to data when it's somewhat being protected by an OEM? Or does the data basically have to already be in the gateway — does there already have to be access to the end device — before Cybus is able to, say, regulate the flow of that data?

[Peter]

As you know, especially in industry, we are really coming from a world where secrets — secret protocols, secret proprietary data encodings — were put up to ensure some competitive advantage. And this kind of contradicts the "I" in IoT, right? The internet lives from APIs that are open, standardized, and very non-secret. That's an interesting contradiction, and it's something I've observed the industry wrestle with over the last five years: the idea that you always have to keep everything secret must be given up a little bit in order to gain a joint benefit from collaboration. Still, it's not something that is simple. But there are some very interesting associations that have been founded to address exactly this. Specifically, we are a member of the International Data Spaces Association, which tries to solve this problem that there is, at the same time, the demand to protect data and to share data with different stakeholders.

And you need to somehow create trust. The International Data Spaces Association has a very interesting reference model to solve this problem — there's even a specification just released for it, which will become an ISO standard soon. There are also other associations, like the Open Industry 4.0 Alliance, a German association of, specifically, sensor manufacturers. After years of trying to sell individual IoT solutions, they realized that customers have a hard time accepting incompatible solutions from different suppliers — most factories have not just one supplier but many, and if every supplier has their own full-stack IoT solution, it just doesn't work. So they have to collaborate. There are these standardization efforts, we are active in some of the working groups, and we try to keep our software solution up to date with them so we can be a solution provider within these specific groups.

            [Erik]

I think you have a fairly unique perspective based on where you sit in the tech stack. Where are we today in this, let's say, struggle? Are we looking at a three-year time horizon, a ten-year horizon, or a who-knows-when horizon until data from the vast majority of endpoints is standardized — maybe not to the point of HTML or HTTP, but standardized enough that an application provider can go into a factory with a good application and basically get access to the data they need in order to deploy it? Because even today I talk to a lot of companies that feel they need to build a full stack and deploy their own sensors for their solution to work. I think it's a very interesting topic: where we are today in terms of the status quo, and what the timeline might be to, let's say, nirvana.

            [Peter]

That's a super interesting question. When I look at the history of IoT in industry, I would say that for a number of years everybody has been working on proofs of concept and pilots. Many companies still have a hard time getting beyond that, because they realize that certain questions — like the one we're just discussing — really only arise when you try to scale. A single project is fairly easy with today's technology: even with a Raspberry Pi and some open source software you can just connect a machine to some cloud and do something. Now I think we are at the beginning of the phase where people start to realize they really need to find some standardization. But still, to be very honest, I think most associations are not on the right track for now — they try to over-standardize, so the standards are getting fairly big.

We had the same thing in the classic internet. The classic internet tried to establish very complex standardizations — for the tech experts in the audience, think of the SOAP standard. It was an interface technology based on XML which tried to be able to describe the whole world. It's super complex and super complete, I would say, but it's super hard and annoying to implement. And in the end, after all that effort on a very complex SOAP interface, where we ended up was the REST API. The REST API is what you see as the standard everywhere — it doesn't matter whether you look at PayPal or Amazon or any other project, everything has a REST API; you even start by creating your REST API when you program new software today. The REST API is the most simple and least standardized thing in the world, but it's so simple that it's trivially easy to adapt to a new API. I'm pretty convinced that in industry we will see two or three more years of trying to over-standardize everything, and then I hope we will see things get more pragmatic. Specifically, I have a lot of trust in MQTT as a protocol becoming something as simple, pragmatic, straightforward, and open as the REST API has become for the classic internet.
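For readers who have not used it, a minimal example of how little ceremony MQTT needs — one broker, plain topics, plain payloads. This assumes the paho-mqtt 1.x client API and an MQTT broker such as Mosquitto running on localhost; the topic names are made up.

```python
# Publish and receive a simple machine-status message over MQTT.
import json
import time
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()                      # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)
client.loop_start()                         # network loop in a background thread
client.subscribe("factory/press-01/status")
time.sleep(0.5)                             # let the subscription settle
client.publish("factory/press-01/status",
               json.dumps({"state": "running", "oee": 0.81}))
time.sleep(0.5)
client.loop_stop()
```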

            [Erik]

Okay, very interesting — I hope so too. I guess that's going to remain a bit of an open question, but two to three years would be a very welcome timeline for everybody. To make some sort of transition here: I suppose where Cybus is really providing value today is not necessarily in providing access to the endpoints, but in situations where data is already moving to the gateways. You're providing value around governance and the IT efficiency of managing who has access to data, and then also, let's say, optimizing the use of gateways by not requiring duplication for every use case that's being deployed. Would those be the key elements of your value proposition, or how would you frame the core elements when you're introducing yourselves to a new customer?

            [Peter]

Okay. That depends on which type of customer we're talking about, because we sell to two fundamentally different customer groups.

One is the factories — we sell directly to factories, to operators of industrial production equipment. For them, the governance problem is really the biggest one. As I said, they have dozens or hundreds of machines of different ages, from different vendors, with different maturity in terms of digitization. Some already have Ethernet, so you can just plug in your cable and start reading data over a modern protocol, and many don't have any digital interface at all. So being able to abstract this heterogeneity and then route the data to different IT systems — that's the governance problem. And it's also a scalability problem: how to implement standardized processes for how the next machine will be connected, how to extend the data that's gathered, and how, or where, to do the pre-processing.

If you look into the big enterprises, the biggest pain is usually called customization. ERP systems and MES systems have a very high degree of customization, because you're adapting your target system for every new piece of data that arrives with every new process. We try to take away the customization effort in the target systems and on the machines, because we believe you should never be required to change a PLC program just because you're connecting another cloud. We try to centralize that customization effort into a configuration effort on the middleware layer. So for factories it's really scalability, customization savings, and security — because when you connect to machines, you can also control machines, right? It's very critical to prevent that when you don't want it, and to allow it when it's required. And then the second customer group — that was a long sentence.

The second customer group is the suppliers — the machine vendors, the insurances — that are external to the factory but also want to bring in some infrastructure, some gateways, in order to get data out. For them we solve a totally different problem: it's mainly acceptance — acceptance that their customer will share the data, because there's this perennial question of who owns the data. My answer is that it doesn't really matter who owns the data; what's very clear is who has control over the network the data comes from, and that's always the factory. When you, as an external company, install a gateway, you usually create a black box that's even remote-controlled and somehow does what you want. We propose not a black-box gateway, but giving control of the gateway layer into the hands of your customer while you stay in control of the software and the configuration running there. That's a very special USP of Cybus Connectware, because we can combine these two desires: the factory's desire for control, and the supplier's delivery of the actual intelligence and configuration. These really are two very different value propositions for two very different customer groups, but in the end it's always one installation of our software in the factory serving both needs.

            [Erik]

            And is it more common that you'd be entering the factory initially through one or the other of these customer groups?

            [Peter]

Today it's really 50-50: 50% of our customers are direct end users and 50% are suppliers. You often have dual cases, because most suppliers have their own factories as well. And of course we are hoping — it's an open secret — to gain some acceleration when the machine vendors and component suppliers start rolling out at large scale, with our software distributed into factories. That's our strategy. But to be very honest, today the market is not that fast.

            [Erik]

I'd like to dive into a bit more detail for both of these cases. If we focus first on the factory — this is actually a point that's very relevant to me at the moment, because we're working with a company that has exactly this challenge. They're moving to a new greenfield facility. They have some legacy use cases, and they have new use cases they want to deploy. The question is how to minimize the complexity and cost of deploying sensors to acquire the data, deploying the connectivity, and then, to an extent, deploying specific applications — we don't necessarily want a unique app for each use case; we want to standardize some of this. So there's a lot of complexity. In this case, they prefer to have a platform that governs the processing and storage of data, and then there's the question of what requirements we put into that platform when we build it, with those requirements driven by the use cases. If you're moving into a messy situation like this, how would you view it? What would be your main questions, your main lines of inquiry, to understand the customer and then understand how your solution can help them resolve some of the complexity?

            [Peter]

My approach to complexity can be answered with a very simple word: decoupling. I try to decouple everything from everything. In classic software development we have a notion called microservice architecture, which means we split a big, complex software project into individual, independent modules. These modules can be developed on their own, know nothing about the others, and have individual lifecycles. This is very efficient — it has some challenges too, to be honest — but it's efficient because you can throw modules away, use new ones, and replace individual ones. And I see the smart factory as one very big microservice architecture. I have my data sources and my data sinks. The data sources are usually the sensors and the machines, and primarily, machines should do what they are there to do: produce stuff.

If you ask me, a PLC's primary task is to control the process in a real-time fashion, very efficiently. A PLC should never know anything about a cloud or an MES system. Instead, I see protocols like OPC UA — it doesn't really matter which — as the API to the machine: the machine should expose the data it can provide, and it might also expose some control endpoints, but that's it. The same applies on the other side of the table. An MES system is an MES system: it holds a lot of information about processes, orders, and order numbers, but it has no idea about big data or predictive maintenance, or even the maintenance process, and it shouldn't — that's not its job. I really believe in specialization, and I say: okay, MES system or ERP system, you too should implement your specific interface, your API, designed the way you need it, and you shouldn't have to know anything about OPC UA.

In between, I think, sits the middleware architecture that connects the respective APIs with each other — and we believe in MQTT, as I just said. A message-based architecture can very well collect data from one side and deliver it in the right format on the other side, and really draw the lines between the many elements, the many microservices. Then you're pretty efficient. Of course, installing such an architecture, such infrastructure, for the first use case is quite an effort — a bigger effort than just connecting directly — but already the second use case benefits hugely, because everything is already there. It allows you to iterate quickly, add more applications, try something and drop it again if it's not worth it, and change something at the bottom or at the top without touching the rest of the system. That's my approach to this complexity.
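A small sketch of that decoupling: the machine keeps its own address space, the sinks subscribe to a neutral topic, and a mapping in the middle translates. The node IDs, topics, and units below are illustrative, not Cybus configuration.

```python
# Translate a machine-specific reading into a neutral topic/payload shape that
# data sinks (MES, historian, cloud) subscribe to, without either side knowing
# about the other.
MAPPINGS = [
    {
        "source": {"protocol": "opcua", "node": "ns=2;s=Press01.SpindleSpeed"},
        "topic": "factory/press-01/spindle_speed",
        "unit": "rpm",
    },
]

def normalize(source_node, raw_value):
    for m in MAPPINGS:
        if m["source"]["node"] == source_node:
            return m["topic"], {"value": raw_value, "unit": m["unit"]}
    raise KeyError(f"no mapping for {source_node}")

topic, payload = normalize("ns=2;s=Press01.SpindleSpeed", 7450)
print(topic, payload)  # the MES-side adapter only ever sees this neutral form
```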

            [Erik]

Okay. And if you're in a brownfield environment, is it any different from greenfield? Or do you just start with what you have, build the infrastructure as you've just described it, and try to adapt the existing systems and infrastructure to this ideal format?

            [Peter]

The rough process is the same, but of course it's very different when it comes to the specifics, because in a brownfield environment the biggest problem is usually that connecting a single machine is a research project of its own. My typical customer interaction here — my favorite anecdote — is a customer coming to me and saying, can you connect my factory? I say, of course, which data do you need? The answer is usually: everything. Then I ask why, and the customer says, that's what I want to find out. And when I then ask how old their machines are and how different they are, we usually end up with an effort estimation that can quickly go towards five to ten thousand euros per machine, because we just have to find out everything — nobody knows anything about the interfaces, the protocols, or the address space. That's usually not effective.

So what we do in brownfield is recommend that customers start very simply and find common denominators between the machines. These might be much simpler than the customer expects — like connecting energy meters, because most machines need electrical energy, and that can already give you some hints. Or, my favorite, connecting the status lights: that little traffic light on top of the machine — green, yellow, red — which is very simple, just a 24-volt signal you can easily tap. By connecting those you can, with a very low investment, already know for all of your machines whether they are currently running or whether they have some kind of error. That's not much, but if you know it for all the machines, it's already a lot. I usually recommend customers start with that.

Then you can start implementing the first use cases: a dashboard that shows machine status, a very simplified OEE calculation, notification services for the maintenance team. As I say, it's better to start with that and get to the point where you say it's not enough anymore because XYZ is missing. Then we can specifically take care of adding XYZ to the data inventory, and you get to a point where you can define a real requirement. With the modularized approach I described in my first answer, it's easy to add more complexity to the system as you need it, rather than ending up on a dead-end road with the specific IT system you started from.
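As a concrete illustration of how far the status-light signal alone can take you, here is a simplified availability-style OEE calculation over a shift; the sample data and field names are invented.

```python
# Compute a very simplified availability figure from stack-light run/stop states.
from datetime import timedelta

samples = [  # (state, duration) pairs derived from the green/yellow/red light
    ("running", timedelta(hours=5, minutes=30)),
    ("error",   timedelta(minutes=20)),
    ("running", timedelta(hours=1, minutes=40)),
    ("idle",    timedelta(minutes=30)),
]

planned = timedelta(hours=8)
run_time = sum((d for s, d in samples if s == "running"), timedelta())
availability = run_time / planned
print(f"availability (simplified OEE proxy): {availability:.0%}")  # ~90%
```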

            [Erik]

Okay, very interesting. Thank you for sharing your perspective there. Then, if we look at the other customer group — the machine builders, the technology providers — one of the areas I think is very interesting, but maybe most challenging, is how to provide data to stakeholders when it doesn't necessarily benefit the owner or manager of the data. The typical case would be a machine builder going to a factory and saying: can I have access to the data coming off my machines? It will be very useful for my R&D process, and I'll be able to build better machines and sell them to you. And the factory says, okay, great, you can build better machines; maybe I get some minor benefit from that, but there are potentially some unforeseen risks.

And so, you know, I'm just not going to provide you access — the default answer is no. But there's certainly a lot of value in these cases in providing data, whether to a service provider, a supplier, or a customer who might want to know when their product is coming off the line, and so forth. So the question is, how do you enable companies to safely and securely provide access to the data in order to unlock that value? How do you see this evolving? Do you see many use cases among your customers where they're using Cybus to enable data sharing in cases where it doesn't directly benefit the owner of the data, but there's some monetization model, or some way to create a bit of a market ecology around data?

            [Peter]

I'm always a bit reluctant when somebody asks a question like this in the real world, because I think if you have to start by discussing the data, something's already wrong. You should always discuss the value of the predictive maintenance; you should discuss "you will never have downtime anymore" — that's something you can sell to a customer. And when the customer buys it, and one of the notes in the contract says, by the way, this only works if you share this data, specifically for that, then the problem has probably already gone away. But if you discuss the data in the first place — if the data is the central element and you then try to find an excuse for why you would need it — then something is still wrong.

I would say we are still very much in a world where nobody really has the full picture. But yes, I know examples where it worked, and I have a very specific customer case study in mind, from a German milling and drilling machine builder — one of the larger ones — who has done exactly that. They gave gateways to customers and gave them a reduction on the maintenance contract, so basically they paid them for the data. Then they collected that data for their research work, and they were quite successful with it: in the end they were able to bring a new product family onto the market that was 30% cheaper than all the other machines. It was more limited in its capabilities — the spindle didn't turn as fast, the machine was just smaller — but that was all based on learnings from how customers actually used the machines. They realized that although customers buy a machine with a spindle speed of, I don't know, 20,000 RPM, nobody ever exceeds 10,000 RPM. So they learned they could sell simpler machines. That's a story that actually happened, by basically paying the customer to get the data.

            [Erik]

Okay, interesting. I can see why you'd generally be averse to this, but I think it's quite interesting that a lot of the more profitable — and arguably less moral — internet companies have developed businesses where they sell or give away services basically to acquire data, and then they sell the data. So it's an open question whether that's viable in the internet of things world. There was a company that a friend of mine ran back in 2013 or 2014; it went bankrupt, or they closed it down. What they were doing was selling sensors into factories at very low cost, and the value proposition was: you can deploy the sensor, collect energy data, and understand your energy consumption, which will enable you to reduce your energy bill. So that's the benefit to you; the benefit to us is that we get the energy data. This was in China, and their thought process was: if we can get enough factories to deploy this and we get enough energy data, then we have a leading indicator for energy consumption.

They could use that data — sell it to Wall Street, sell it to, let's say, commodity traders, or themselves become a trader on the energy markets — and have a unique data set that nobody else has, which I thought was at least quite an interesting concept. Obviously it didn't work out in practice, and maybe it was a bit too early. But do you see the potential for these kinds of energy-broker or data-broker businesses to start to evolve? Let's say you're extremely successful and enough factories have control of their data at the right level — do you see that as a likely course of market development, or do you see some fairly significant barriers to that type of business model becoming successful?

            [Peter]

I think that's absolutely the future. What I find most interesting is to think about who will be able to provide data-based services in the future — who will be able to offer something like the best predictive-maintenance-as-a-service. Today that's very much restricted to the machine builders themselves, but I believe that as data becomes more easily available and accessible, there's a big potential for other players, pure digital companies, to provide such services. And at the very least, because you can actually build a business on data, something like a data stock exchange will develop. But as you draw this comparison between our private data and company data, I think a big distinction has to be made between consumer and B2B scenarios — because the transparency we're aware of around our private data in our private lives is exactly one of the things that motivates businesses to be much more careful about their data.

So I think that in order to have such a data exchange market in the future, we need to bring the data producers — the factories, in this case — into the driver's seat and give them tools to control which data they sell. That's exactly what the International Data Spaces Association is doing. They're even creating something like a digital rights model, so you can attach usage rights to data: it may only be used for a specific purpose, or only kept for a specific amount of time. So I think that is absolutely the future. And I always make a very simple comparison: on your smartphone, we have this very simple user experience where each app tells you what it wants to access from your private data, right?

Google Maps wants to access your location, and of course you allow it, because you cannot navigate on Google Maps without sharing your location. But if a random website or a random app that has nothing to do with navigation asks to access my location, I say no, because I need a clear reason to share my data in return for receiving a specific service. That's what we try to copy in our software. As I explained earlier, we have a multitenant scenario where a machine builder, for example, can provide a plugin to your IoT infrastructure, and this plugin sends certain data to the machine builder's cloud. When you activate the plugin, it first presents you with which data will be transferred — very similar to the smartphone process. And second, it gives you transparency over time: you can always see who receives which data, and you can immediately interrupt it. So control and transparency, I think, beat trust.
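A sketch of that consent flow in code, with a made-up plugin manifest: the vendor declares what it wants to export and where, the factory approves it, and only approved topics are ever forwarded. None of the names below are Cybus APIs.

```python
# Smartphone-style consent for an industrial data plugin (illustrative only).
PLUGIN_MANIFEST = {
    "vendor": "machine-builder-xyz",
    "destination": "https://cloud.example-machinebuilder.com",
    "requested_topics": ["factory/mill-07/spindle_load", "factory/mill-07/alarms"],
}

approved_topics = set()

def review(manifest, approve):
    # The factory operator sees the manifest and decides.
    if approve:
        approved_topics.update(manifest["requested_topics"])

def forward(topic, payload):
    """Only approved topics ever leave the factory network."""
    if topic in approved_topics:
        print(f"-> {PLUGIN_MANIFEST['destination']}: {topic} {payload}")
    else:
        print(f"blocked: {topic}")

review(PLUGIN_MANIFEST, approve=True)
forward("factory/mill-07/alarms", {"code": "E42"})
forward("factory/mill-07/recipe", {"secret": "..."})  # never requested -> blocked
```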

            [Erik]

I'd love to dive into a use case or a case study, but before we go there, I wanted to quickly touch on one other topic that we've only briefly mentioned: security. This is certainly a top-three concern or priority for just about every manufacturer. Where do you see the status quo of the security landscape right now, and how does Cybus fit into that landscape?

            [Peter]

When people talk about cybersecurity, they very often talk about firewalls, intrusion detection, and so on. The connotation is that what you'd usually call cybersecurity companies are companies that install something in your network which monitors your network traffic and warns you when something unusual is happening, or which controls which device is allowed to communicate with which other device. If you ask me, that's super important, but it's not everything — it's like having a security team in your building, or locked doors. With a system as complex as the one we discussed earlier, you have so many data sources and data sinks that I believe there is a security layer which is hardly talked about, and that's a pure access control layer. For example, I have a dashboard which shows the current status of a machine; the dashboard might be running in the cloud, but maybe also locally.

This dashboard should be able to get data from the machine — it needs to — but it should never be allowed to also control the machine. That's the security layer that's hardly talked about: access control, not just about who is able to talk to whom at all, but narrowed down to the specific data points that are allowed to be read, or the specific data points that are allowed to be written back. That is the security layer we try to add to the system — I'm not saying we replace anything; we really add to it. For some more technical background on protocols: OPC UA obviously has a strong security focus and is the most modern protocol in the industry, but I have never seen an OPC UA server that implements a fine-grained access control scheme.

When you have access, you have access; when you don't, you don't — there's no in between, no gray scale. And other protocols, like the S7 protocol you can use to reprogram a Siemens PLC over the network, don't even have encryption, have no password, and have no access control either — yet these are the protocols we are basing the whole industrial IoT on. So this access control layer, in a positive sense, ensures a least-privilege approach: each microservice that retrieves data from another one is only able to retrieve the minimum it requires. That is the security notion we add to the whole system with our software.
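A minimal sketch of such a per-data-point, least-privilege check — purely illustrative, not Cybus's implementation: the same client can be allowed to read one tag and denied writes everywhere.

```python
# Per-data-point access control list with separate read and write rights.
ACL = {
    "dashboard": {"read": {"press-01/status", "press-01/oee"}, "write": set()},
    "mes":       {"read": {"press-01/status"}, "write": {"press-01/order_id"}},
}

def authorize(client, action, datapoint):
    allowed = datapoint in ACL.get(client, {}).get(action, set())
    if not allowed:
        raise PermissionError(f"{client} may not {action} {datapoint}")
    return True

authorize("dashboard", "read", "press-01/status")         # allowed
try:
    authorize("dashboard", "write", "press-01/setpoint")   # denied
except PermissionError as err:
    print(err)
```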

            [Erik]

Interesting. Yeah, I've heard other people discuss the shortfalls of a firewall, because you're not only trying to protect the system from external bad actors; there are also internal actors who might have legitimate access to a system but whose motives are not aligned with the company. So I can see how an access control system would at least constrain their ability to act within it.

[Peter]

It's not only attack prevention, to be honest. It's also about failures — a system can simply work wrong and cause damage; it doesn't need to be malicious in the first place. And it's also a governance issue, because when you end up with hundreds of systems connected to each other, you need to know what happens when you unplug a particular cable, so to speak, or deactivate a specific sensor. Which systems will be affected? That's a relationship you need to keep track of, and a well-maintained access control list is one approach to that.

[Erik]

Yeah, good perspective.

Let's then discuss a case study, and let's look at it from an end-to-end perspective — from, let's say, the first conversations with the customer about their challenge through to the deployment. Do you have a particular case in mind?

            [Peter]

Yeah, I would jump back into the case of the milling and drilling machine builder I mentioned earlier; it's one of the top three in Germany. They were thinking quite early about digital services — providing basically a web portal for their customers where they could see their machines, the health status of the machines, probably recommendations on how to get more out of the machines, and order spare parts, maybe even automatically. This customer, like everybody else, implemented everything on their own in the first place. I'm not even sure which cloud platform they chose, but that doesn't matter.

They implemented this portal, and they implemented a very simple gateway they were able to deploy via VPN to their customers. They rolled it out first on their own premises, because it's a company that produces on its own machines. That's basically the situation in which we found them — or they found us. Technically everything worked; as I said, it's technically not very hard. But they had a hard time rolling it out to customers, because this was one of those black-box gateway situations that didn't find much acceptance, especially at larger customers. That was one problem. The second problem was that this customer had created a whole new department for these digital services, with a number of software developers. Sooner or later they realized that the gateway is not as simple as they thought: you have to be very universal when it comes to proxy scenarios at customers, you have to be efficient in deployment, and you have to maintain the gateway, which is security-critical, as we said. And they concluded: our core competence is not creating gateways; our core competence must be creating the digital services. So they went looking for a supplier with a plug-and-play gateway solution that solves the acceptance problem.

            [Erik]

In this case, are you providing the entire gateway — kind of white-labeling gateways and installing your technology — or are you deploying on the existing gateways that they had already rolled out?

            [Peter]

What we did was pretty interesting, because they had already created their gateway, or at least the specific business logic they needed for it. Like most people today, they implemented it as a Docker container, and Docker is quite a good plug-and-play solution when it comes to integrating somewhere else. In our software we have a runtime environment for third-party Docker containers, because that's how we think plugins are handled most straightforwardly. So it was pretty straightforward to take their existing gateway technology, integrate their specific business logic into our solution, and deploy that to the end customer. We then basically provided the wrapper for product lifecycle management, updates, integration into the customer's IT systems, Active Directory integration, the customer's proxy integration, and so on. But we never do white labeling.

So what the end customer gets is our Cybus software with what we call a service — you could also call it an app, but we call it a service — and that service carries the label of the machine builder. The service can be activated or deactivated. If you activate it, you are presented with a little dialogue that explains: machine builder XYZ wants to access data from the following machines — then of course you have to enter the IP addresses of the machines — and specifically the following data will be transferred to the manufacturer's cloud. And then it really leaves you with allow or don't allow. If you don't allow it, then of course it doesn't get activated, but that's a sales problem for the machine builder.

If you allow it, our software ensures that the connections to the machines are set up, that the right data points are subscribed, that these data points are made available in the right format to the machine builder's Docker container, and that the Docker container is running. So it's a plug-and-play user experience on the same deployment of our software, where you could also install a similar service from another vendor or do something else with the same data. That's also how this scales: we have even had situations where this machine builder could deploy their services and our software was already there, because we had already made another sale earlier at the same customer.
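As a rough illustration of the "subscribe the approved data points and hand them to the vendor's container in the right format" step described above, here is a minimal sketch assuming an MQTT broker on the gateway (paho-mqtt 1.x API) and an HTTP ingest endpoint exposed by the vendor's container. Topic names, the endpoint URL, and the payload shape are assumptions for illustration, not the actual product's interfaces.

```python
# Hedged sketch: forward only operator-approved data points to the vendor
# container. Broker address, topics, endpoint, and payload format are assumed.
import json
import urllib.request

import paho.mqtt.client as mqtt

ALLOWED_TOPICS = ["machines/mill-01/spindle_speed", "machines/mill-01/temperature"]
VENDOR_ENDPOINT = "http://vendor-connector:8080/ingest"  # vendor container on the gateway network


def on_connect(client, userdata, flags, rc):
    # Subscribe only to the data points the operator approved in the consent dialog.
    for topic in ALLOWED_TOPICS:
        client.subscribe(topic)


def on_message(client, userdata, msg):
    # Re-shape the raw payload into the format the vendor container expects.
    record = {"datapoint": msg.topic, "value": json.loads(msg.payload)}
    req = urllib.request.Request(
        VENDOR_ENDPOINT,
        data=json.dumps(record).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=5)


client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883)   # local broker on the gateway
client.loop_forever()
```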

            [Erik]

Okay, very interesting. So this is really almost a sales enablement proposition for your customers.

What was the deployment time for this — let's say from the time you started having conversations to the time this was deployed? Maybe you could say what it was in this situation, and then what a typical timeline would be.

            [Peter]

It really depends on the prerequisites. They were quite good here, because, as I said, the machines were already equipped with the necessary interfaces and the cloud platform was already developed. So I think we had three to four months where we worked together with them, but not full time on our side — just because they have their sprint planning, things develop, and we have individual reviews. I think in a few weeks you can already achieve a lot; it totally depends on the prerequisites. If today a customer approaches us and says, here's an OPC server, we need a gateway solution for Azure IoT, then it's something we can implement in very few days. If we start at the machine — can you explain OPC UA, and which cloud should we choose? — then of course a more complex consulting project starts. The pure implementation is pretty straightforward, because we usually don't have to customize anything; it's just configuration.
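For the simple "OPC UA server in, Azure IoT out" case mentioned above, a bare-bones bridge can be sketched with the open-source asyncua client and the azure-iot-device SDK. The endpoint URL, node ID, and connection string below are placeholders, and a production gateway would add buffering, retries, and security configuration.

```python
# Hedged sketch of an OPC UA to Azure IoT Hub bridge. All identifiers are
# placeholders; real deployments need certificates, buffering, and retries.
import asyncio
import json

from asyncua import Client as OpcUaClient
from azure.iot.device import Message
from azure.iot.device.aio import IoTHubDeviceClient

OPCUA_ENDPOINT = "opc.tcp://192.168.0.10:4840"          # assumed machine endpoint
NODE_ID = "ns=2;i=2"                                     # assumed data point node
IOTHUB_CONN_STR = "HostName=...;DeviceId=...;SharedAccessKey=..."  # from Azure IoT Hub


async def bridge_once() -> None:
    # Read one value from the machine, then forward it as a device-to-cloud message.
    async with OpcUaClient(url=OPCUA_ENDPOINT) as opc:
        value = await opc.get_node(NODE_ID).read_value()

    hub = IoTHubDeviceClient.create_from_connection_string(IOTHUB_CONN_STR)
    await hub.connect()
    await hub.send_message(Message(json.dumps({"node": NODE_ID, "value": value})))
    await hub.disconnect()


if __name__ == "__main__":
    asyncio.run(bridge_once())
```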

            [Erik]

Okay. So your business model — I guess it's dictated to an extent by the maturity of the market right now — but would a typical solution look like some upfront advisory or development cost as a one-time fee, and then move to a SaaS model based on the number of gateways or machines or facilities? How does a typical business model look for, let's say, a factory versus a technology provider?

            [Peter]

Of course we can do some consulting work or system integration, where we really sell days, when that's required by the customer, but that's not our business model. We have a small team of support engineers that can help here, but we are not trying to maximize those days. Instead, if a customer really needs consulting or system integration support, we have partners — namely, from the big ones, MHP and DXC — that are trained on our software, and they do the bigger projects here. Our business model is a license model, and there are two models, basically. When we sell to factories, it's a very classic software license: a monthly subscription that scales with the number of machines and also with the enterprise grade. In the smaller tiers we don't support complex user management and so on.

In the larger tiers you get high availability, clustering support, enterprise Active Directory integration, and so on. For the service providers like the one I just described, it's more complex, because usually they are at the beginning of defining their own business model. So we have a value-based selling approach here, and we usually agree on a license per deployment of the service that is a ratio of the service provider's turnover. But that's a very individual thing, because we have realized that the service providers are often at a point where they are discussing technology but their business case is not final yet, and we try not to be in the way there.

            [Erik]

That's super interesting, though. One of the things that really fascinates me about the IoT market is that the technology enables a tremendous amount of business model innovation, right? And you're now in the position of basically innovating together with your customers to figure out what makes sense for both of you in the long term. Very good.

Well, I think this has been a really fascinating conversation for me. Is there anything that we haven't touched on yet that you think is important to cover?

            [Peter]

I think we discussed a lot of things. I'm just looking at my notes, but there's nothing in particular that I think we have missed.

            [Erik]

Great. Then maybe one last question from me: what is the best way for people to reach out to you or the team if they're interested in learning more?

            [Peter]

The easiest way is to send us an email at hello@cybus.io. You can also visit our website, cybus.io, where there are some white papers to download, and we would be very happy if you would reach out to us. It doesn't matter if you're a factory at the beginning of your own digital endeavor, or a machine builder or component supplier that wants to provide digital services — everybody who needs a gateway to get machine data into a cloud might be a potential customer.

            [Outro]

Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IoTONEHQ and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com.

            Read More
            EP 063 - How IoT can be sustainable and humane - Bob Sharon, Chief Innovation Officer, Blue IoT.
            Thursday, Apr 30, 2020

In this episode, we discuss smart building environmental management systems. A crisis is the best time for disruptive technologies because there is motivation and incentive to change the old system; the environmental crisis sparked the shift toward technologies that address sustainability. Improvements in wireless connectivity, cloud computing, open source APIs, and SaaS operating models enable more flexible and cost-effective solutions. Potential benefits of improved building management systems include reduced carbon footprints, more sustainable solutions, lower costs, and increased cybersecurity and productivity.

            Bob Sharon is the Founder and Chief Innovation Officer of Blue IoT. Blue IoT is the global leader in data and information driven virtual building and facilities automation. Blue IoT delivers end to end integrated data and technology driven services around real time optimisation, predictive maintenance and machine learning that maximises clients’ operational effectiveness and efficiencies across all building, precinct, asset, facility, maritime and smart city sectors. https://www.blueiot.com.au/

            Read More
            EP062 - Advanced machine vision and deep learning systems - Iain Smith, Managing Director, Fisher Smith
            Tuesday, Apr 07, 2020

In this episode, we discuss the use of deep learning mechanisms to accomplish tasks that are not possible with traditional rule-based systems. We use two cases to illustrate how deep learning can be used to solve non-traditional recognition problems within hours.

            This is part 2 of 2 with Iain Smith on machine vision.

            Iain Smith is Managing Director and Co-Founder at Fisher Smith. Fisher Smith designs and supplies machine vision systems for automatic inspection and identification of manufactured parts on industrial production lines. https://fishersmith.co.uk

             

            Read More
            EP061 - A primer on machine vision: technologies and use cases - Iain Smith, Managing Director, Fisher Smith
            Wednesday, Mar 25, 2020

In this episode, we give an introduction to machine vision technology, use case adoption trends, and key success factors for a high-accuracy solution. We also review the decision-making process for determining the right tech stack, and the cost structure for low- and high-complexity systems.



            This is part 1 of 2 with Iain Smith on machine vision. 

            Iain Smith is Managing Director and Co-Founder at Fisher Smith. Fisher Smith designs and supplies machine vision systems for automatic inspection and identification of manufactured parts on industrial production lines. https://fishersmith.co.uk

             

            Read More
            EP060 - How to integrate the IT and OT for better IIoT deployments - Keith Higgins, Vice President of Digital Transformation, Rockwell Automation
            Tuesday, Mar 17, 2020

            In this episode, we discuss how to improve the ways a company accesses, processes, and leverages data to make better decisions. We open the question of what digital transformation actually means for industry as opposed to non-industrial sectors. Keith also discusses what the PTC partnership means for IT and OT integration.

            Keith Higgins is Vice President of Digital Transformation at Rockwell Automation. He previously served as Vice President at FogHorn and CMO at RiskVision until its acquisition in 2017. Rockwell Automation, Inc. (NYSE: ROK), is a global leader in industrial automation and digital transformation. We connect the imaginations of people with the potential of technology to expand what is humanly possible, making the world more productive and more sustainable. Headquartered in Milwaukee, Wisconsin, Rockwell Automation employs approximately 23,000 problem solvers dedicated to our customers in more than 100 countries. To learn more about how we are bringing The Connected Enterprise to life across industrial enterprises, visit www.rockwellautomation.com.

            Read More
            EP059 - Connectivity, processing, and storage trends reshaping the IoT landscape - Ed Kuzemchak, CTO, Software Design Solutions
            Tuesday, Mar 10, 2020

            In this episode of the IIoT Spotlight Podcast, we discuss the improving connectivity, storage, and processing cost structures, and 5G’s relevance or irrelevance for IIoT. 

            Read More
            EP057 - Databases designed for the IoT - Syed Hoda, Chief Commercial Officer, Crate.io
            Thursday, Feb 13, 2020

            In this episode of the IIoT Spotlight Podcast, we discuss the challenges facing companies when they scale industrial use cases that rely on large complex data streams and trends driving the development of systems at scale by traditional industrials that are expanding their business scope.

            • What is different about data in IoT and why is it harder than other projects?
            • Why do traditional databases and dev-ops methods fail in IoT projects?
            • How do consumer application development trends interact with, and drive, the development of industrial applications?

            Syed Hoda is the Chief Commercial Officer and President North America at Crate.io, which develops IIoT data management solutions for customers around the world. An IoT industry veteran, Hoda most recently served as CMO of digital manufacturing platform company Sight Machine, and previously had been CMO of ParStream, the IoT analytics company acquired by Cisco Systems Inc. He lives in Palo Alto, California. https://crate.io/products/crate-iot-data-platform/

            ________

            Automated Transcript

            [Intro]

Welcome to the Industrial IoT Spotlight, your number one spot for insight from Industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE, and our guest today will be Syed Hoda, chief commercial officer and president of North America at Crate.io. Crate.io is a data management software company that helps businesses put machine data to work more easily in real time to enable gains in operational efficiency and profitability. Together, we discuss the challenges facing companies as they scale industrial use cases that rely on large and complex IoT data streams. We also explore trends driving the development of systems at scale by traditional industrials who are expanding their business scope. If you're building a complex, large-scale system that requires real-time access to IoT data, I highly suggest reaching out to Syed or his team. And if you find these conversations valuable, please leave us a comment and a five-star review. If you'd like to share your company's story or recommend a speaker, you're also welcome to email us at team@iotone.com.

            [Erik]

Thank you for joining us today. This is an interesting topic for me because we're going to be discussing database technology for IoT. Databases are one of those technology domains where everybody pretty much understands what a database is and why it's needed, but once you scratch the surface, most people, myself included, know almost nothing about how a database actually works or what differs between one and another. So this is going to be, I think, very enlightening for me and the rest of the audience. I really appreciate you taking the time out of your afternoon to speak with us today.

            [Syed]

My pleasure. I've been passionate about IoT for a long time, as you'll realize as we talk further.

            [Erik]

Before we jump in there, I want to understand a little bit more about your background — you have quite an interesting one. You were working with Cisco for a number of years, and then recently you've had a couple of senior roles with startups. What triggered this transition from working with a large corporate to taking these roles with young, scaling companies?

            [Syed]

I spent 13 years at Cisco. It was a great career, and I have such deep respect for the company. I learned so much, with stints around the world — in Paris, in Bangalore, India, as well as the US. In fact, the first time I ever used the phrase IoT was at Cisco, around 2007 or 2008. I heard this weird phrase, the Internet of Things, and it didn't even sound like a normal phrase at the time. I learned a heck of a lot there and was probably the chief storyteller for a lot of the smart city IoT work we were doing at Cisco for many years. Eventually, as much as I loved my time at Cisco, I decided to move on. It was really fun building a business at Cisco: although I was in a very large company, I often ran new emerging-business groups that were in the business of monetizing a new idea, scaling it around the world, and creating a business out of it.

I basically ran startups within Cisco in my previous jobs there, and I got the bug to do startups. I live here in Palo Alto, California. We have a few famous startups that started around us — a company that began in a garage called HP that people have heard of, and then companies called Facebook and Google. When you live around these big names, you get inspired and you want to be part of that world. So I decided to leave Cisco and went to my first startup, which less than two years later got bought by Cisco. That was exciting. Since then I've done a couple of others, and now I find myself here at Crate.io as chief commercial officer.

            [Erik]

Okay, great. Was it a coincidence that you joined the company shortly before it was acquired by Cisco, or did Cisco already have an interest in the company?

            [Syed]

I think it was just a really good fit with what Cisco was trying to do back then. They needed some edge technologies. It was a company called ParStream, which was a database for edge analytics and the emerging IoT use cases, and Cisco really wanted to gain some software footprint in that area. So it was a good strategic fit, and no, it was just a coincidence that I was at ParStream at the time.

            [Erik]

Okay, great. And now you're at Crate.io, which was founded in 2013, right? So it's in this phase now of having a fully functional product and really scaling. Where do you see Crate now in terms of its development as a company, moving from the ideation stage toward really scaling out the solution?

            [Syed]

Yeah, we started a number of years ago, and of course, like any startup, there's a point where it's just a good idea, then the question of when we have a product that works, then eventually moving toward a product that actually works well at a large company. And then eventually you get to the point of: we have product-market fit, and now we're hopefully going to increase the amount of repeatable business and grow. That's where we are now. We're raising our next round of funding and we're growing the company. We've got around 50 to 60 people right now, and in about a year's time we'll probably double that number. Right now we're headquartered in Europe and have most of our employees there, but I think over the next year or so it'll become more balanced between North America and Europe, where most of our business is. So now we're growing the go-to-market side. We have a tremendous engineering team and, I would say, a light go-to-market team, and the goal is to add to that and grow it. Our biggest challenge looking ahead is getting to customers on a wide enough basis and helping them close the deals.

            [Erik]

I'm curious: when you left your last company, I'm sure there were a number of different avenues you could have taken. What convinced you to land at Crate and take on this role? What was it about this particular company that was intriguing?

            [Syed]

Yeah, that's a great question. I learned from some of my most influential mentors throughout my career that when you think about new opportunities, you never leave a job because you're unhappy with it. You should leave a job, and consider opportunities, when you're actually happy and content and doing well, because that's when you'll make a more prudent decision. You don't want to run away from something; you want to run to something. And I really had deep respect for my last team at a company called Sight Machine, a digital manufacturing platform — a great company, great people. I just felt it was time to stretch myself in perhaps a different, or wider, direction. I was the CMO over there, and now at Crate.io I'm running sales and marketing, partnerships, and business development.

What attracted me here wasn't the technology, it wasn't even the market we're serving — it was the people I met. I think it is such a privilege to work with people that you respect, that you like, that you enjoy working with, and that you can learn from. When you have those things coming together, it makes your life better, frankly. So when I met the people that created Crate.io, I was very impressed. These were incredibly smart people, but also very nice people, people that you enjoy working with. That really matters in a startup, much more so than in a big company, because unlike a big company where you might have 50,000 people and you can go hide in building number 45, in a startup you're a family. I mean, this is it — there's no hiding. We work together, we spend a lot of time together, we play together, we fight, we brainstorm. So you really have to be part of a culture that you believe in and feel like you can learn from and thrive in. For me, that was why I joined Crate: I was very impressed with the people. Of course, after that: is there something you can do to help? Does it align with what you're good at, what you want to do, and what their business needs?

            [Erik]

Yeah, great. Well, I can certainly buy into that — we're 12 people sitting here, and it's certainly a family. That's the game when you're young. Let's learn a little bit more about what Crate actually does. Again, I think if you say Crate does databases for IoT, people generally have some vague concept of what that means, but what is really Crate's value proposition?

            [Syed]

Let me step back a bit. As the chief commercial officer of a database company, here's my challenge: nobody gets up in the morning and says, hey, I want to buy a database. Nobody says that, ever. And in fact, nobody actually wants to buy a database — they already have databases, plenty of them, more than they wish they had. So the last thing anyone wants is just another database. It's not an easy job, because we're selling in a space where someone goes, oh, I already have one — actually, I already have five. So the question is, why should somebody care? Why should somebody listen? Before we go down to the database level, let's go a bit higher up. In the space of IoT, when you look at the research done by a number of firms — Cisco, McKinsey, Bain, BCG, et cetera — these firms have found that roughly 70% of IoT projects have failed.

And what does failed mean? It means the project didn't reach the expectations or aspirations of its original charter. Now, maybe there were unrealistic expectations, maybe not, but roughly there's a 70% failure rate. Why does that happen, and what can be done about it? If we look at the top reasons why IoT projects have failed, there are three that come up consistently — the same kinds of firms, including others like IDC and Microsoft, have studied this and found three common reasons. Number one, the lack of necessary skills. Number two, siloed and change-resistant corporate cultures. So the first two reasons are about people, organizations, and culture. The number one technology reason is the data and IT infrastructure: it's just not what it needs to be to support these applications and initiatives.

In other words, IoT projects fail technologically because of data. And the question is why it is so hard in the industrial space — harder than in any other space. A Morgan Stanley study of factory data found that, compared with other verticals, there were a couple of unique attributes. One, there was more data in factories, manufacturing, and supply chains than anywhere else in the world — vertical number two was government, but number one was the industrial space: manufacturing, factories, and supply chains. And it was growing faster, and the data variety was higher, than anywhere else. So that's what makes it so hard: the shape and scale of industrial data is very different from anywhere else. The problem is that the legacy tools being used to manage this data and bring it to life weren't designed for it. So we have a challenge on our hands: traditional databases and traditional infrastructure technologies weren't designed for this.

Many of them were designed for the web-scale world. Let me give you a brief history of how we got here. A long time ago, there was ERP, client-server computing, and mainframes, which all made up the landscape. Back then, relational databases started to emerge, and companies like Teradata, Oracle, IBM, and Microsoft all flourished with relational databases. Then, a few years later, something happened called the internet — e-business, e-commerce, e-whatever. That was the beginning of web scale, and web scale meant that, more than ever, availability and redundancy mattered, along with the ability to support a massive number of concurrent users. So all of a sudden companies like MongoDB, Splunk, Hadoop, and others came out, were born, and flourished. And then something else happened: the world of digital transformation and IoT.

What this meant is that the landscape changed in a number of ways. It meant that now machines were users; we had connected things — IoT — and everything began to converge. What's expected from databases and the technologies that support data became very different from before. What's happening now is that the volume of data is very high — big data has become bigger and faster — but the variety is way different than it used to be. That's the biggest difference: the variety of data has massively increased compared to yesterday, and that's what has made it very hard for infrastructure to catch up and support IoT projects. We expect data to be ingested fast, analyzed quickly, and scaled, and that's not easy to do.

            [Erik]

Okay. So you have a kind of nest of issues here, right? You have the variety of data. You have maybe different real-time requirements around that data, so some of it might need to be stored and processed on the edge — on the device or in the facility — while some can be moved to the cloud. You have a lot of messy data that doesn't have good metadata attached to it, so data that's maybe high in volume but hard to actually interpret or analyze. What is possible for a database to address, and what are the pain points here that are somewhat outside of your scope and maybe need to be addressed at the protocol level or by some other solutions?

            [Syed]

What a good database does in this world is handle the data. Quite simply, it's able to take the data you've got, make sense of it, and give it context — number one. So one part is just handling it: handling the kind of data that exists out there, handling the speed of the business. The ingestion speed and the real-time nature of the use cases require databases to be fast, but fast at scale, and that's what has been very hard. A lot of databases and technologies can be really fast with small amounts of data or very simple queries, but what if the queries are complex, and what if they're different every single time? What if the data is in terabytes and petabytes and beyond? What if the data is mixed — even a mix of structured and unstructured data? What if it's not just time series? That's what's hard, and that's why almost nothing, frankly, has been purpose-built for this world right now. If you look at most IoT platforms, most applications bought by industrial companies, they're using web-scale technologies to try to do IoT scale, and they're failing because you can't scale it that way. What databases have to do is handle the scale with ease, the speed with ease, and the data types with ease.
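To make the "mixed, fast-moving data in one place" point concrete, here is a minimal sketch using CrateDB's SQL dialect and its Python client as one example of this class of database. The table, columns, values, and local URL are illustrative assumptions, not a customer schema or the product's required setup.

```python
# Hedged sketch: fixed columns plus a schema-flexible OBJECT payload, so
# differently shaped machine messages can land in one table and still be
# queried ad hoc with SQL. Names and values are illustrative only.
from crate import client

conn = client.connect("http://localhost:4200")  # single local node for the sketch
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS sensor_readings (
        ts        TIMESTAMP WITH TIME ZONE,
        device_id TEXT,
        payload   OBJECT(DYNAMIC)
    ) CLUSTERED INTO 4 SHARDS
""")

cur.execute(
    "INSERT INTO sensor_readings (ts, device_id, payload) VALUES (?, ?, ?)",
    ("2020-02-13T10:15:00Z", "press-07", {"temperature_c": 81.4, "vibration_mm_s": 2.9}),
)

# Ad hoc query mixing fixed columns with a field inside the semi-structured payload.
cur.execute("""
    SELECT device_id, payload['temperature_c'] AS temp
    FROM sensor_readings
    WHERE payload['temperature_c'] > 80
    ORDER BY ts DESC
""")
print(cur.fetchall())
```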

            [Erik]

And where does Crate do this? Are you doing this all the way from the device through to the cloud — where can Crate be deployed?

            [Syed]

One of the advantages we have is exactly what you said: device to cloud. We can be at the edge and we can be in the cloud, and what's unique about us is that it's the exact same functionality, whether at the edge, in the cloud, or in between. If you look at most use cases, especially in factories, that's what people want: they want to be able to do certain things in the cloud and certain things on the edge, and go back and forth as they wish. We have no limitations in helping them do that, with massive amounts of data. We can run on something as small as a Raspberry Pi — which isn't recommended for most use cases; it sounds nice and looks good in a POC, but the reality is it's probably a little light for most of the cases we get involved with, which are more from a gateway up to the cloud. And we have a very strong partnership with Microsoft Azure, while one of our biggest customers runs on AWS, so it's important to give people a bit of flexibility in that realm. But we are friendly with both edge and cloud.

            [Erik]

Okay, yeah, it's an interesting topic. We had a panel a few weeks ago with somebody from Microsoft and somebody from Nvidia, and it came up that they were partnering with each other. So we asked: don't you have competing stories? Microsoft is saying move everything to the cloud, and Nvidia is saying you should compute on the edge because you need to be faster and more agile. And it seems to be emerging that we really need this more nuanced approach — actually, they both bought into this, and they're partnering because companies really require the right deployment in the right place. This, I think, has been one of the challenges in the IoT space: companies figuring out where data needs to be deployed, and I think it's still somewhat of an open question. How involved do you get when you're working with your customers and advising them? Do you have this kind of advisory function? Because I think this is a big open question for a lot of operators — where they should be managing their data.

            [Syed]

Yeah, this is a great question, and we hear it all the time: should I do this at the edge or should I do this in the cloud? And the answer is super simple: the answer is yes. Where is the use case happening? What do you need to do? I tell my customers, here and at previous companies: don't get lost in the buzzwords. Don't look for an edge project, don't look for a cloud project. Do what makes logical and business sense for your project and the use case. I can tell you that whatever you want to do with the use case, there is a technology that can help you make it happen — but don't let the technology lead you; let the use case lead you. So the answer is that it's going to be a blend. It's the same thing people talked about with public clouds, private clouds, hybrid clouds: yes, all of the above. But don't let the technology dictate how you do what you're going to do. Figure out what you want to do first; the technology will follow. We tell them: look, we're agnostic, we can do either one. What makes more sense, what's going to be more effective — that's what you should do.

            [Erik]

Yeah. And this brings us back a little bit to your earlier point that, of those three pain points, two are human-related. One of the big challenges here is that determining which technology to use would typically be within the purview of the IT department, but especially in the IoT space, the use case, the application, and the form factor of the application are critical, and these help determine the technology. So you really need strong input from the business — whoever is going to be operating this application — and there's a lot of friction there between decision makers, right? Intuitively, I would think you're selling to the CIO and the CIO's team, but the application is so critical here. How do you communicate with organizations across the different stakeholders?

            [Syed]

That's a really great question. We've actually modelled the buying process, and it's pretty complicated. There are really only two points in time when people say, gee, I want to buy a database. Number one is when a company is building a new application, or buying a new application, and asks: can this application I'm going to build run on the databases I already have? If the answer is yes, great — they just go right in and do it. If the answer is no, then they figure out: well, what database should I build this application on? The buying center in that case is very heavily influenced by recommenders, and the recommenders are DevOps and developers. Developers have preferences, and frankly, it's like a religion, right? They love a certain kind of database and they don't like another kind.

And it's always a religious conversation. Normally the CIO doesn't really care, frankly; if I'm going to buy a new database for a modern application, DevOps will have a very strong voice in that, and a strategic CIO will say fine, whatever is most effective. The business leader just wants to deliver the use case: they want good data they can trust, and applications that run in the way they expect the use case to be delivered. So when I want to build a new application and I'm looking for a database, DevOps has a very strong role there, more so than probably anybody else — maybe not the decision maker, but certainly a strong recommender. Now, the other point in time — we call it the second control point of database opportunities — is this: I have an application, I've developed it, let's say, at small scale in a POC or a pilot, and now I want to scale it across 150 plants or machines or whatever.

I'm rolling it out and, oh, now all of a sudden it comes to a standstill. Performance isn't at the level it needs to be, or it's very expensive, because now I've got to store a certain amount of data and I've got to buy more instances or larger databases. So the cost goes way up, the performance goes way down, and all of a sudden my application is not running the way it should — I've got a problem. Who is the buying center in that case? Is it the DevOps person who's already built the application? Typically no. It's the CIO, to your point earlier, and the business side saying: gee, we just invested all this money and time, it's not working, fix it. Now the buying center has been widened, and there's more scrutiny on the choices that were made, looking for the choice that can properly enable this use case.

            [Erik]

Gotcha. And I guess then the business side is primarily going to be the side that's complaining — going to the CIO and saying, hey, listen, this application is not meeting our requirements. But they're not going to be the ones making the decision; they're just going to be driving the fact that a decision needs to be taken.

            [Syed]

Yes. And the reason we've had a problem here with OT and IT and DevOps is because a lot of the flavors of technology that the DevOps folks have learned on aren't the ones that scale the best. And very often the ones that have scaled the most — the large legacy vendors; we all know about the Oracles and IBMs of the world, et cetera — aren't the ones that modern developers, frankly, like to use; they like these other tools. So these worlds have to connect. You have to have the ease of a database and tools built for modern applications, combined with the scalability and reliability of the legacy ones. And that's where, frankly, we play: we come in with as much focus on ease of development as on ease of scale.

            [Erik]

Yeah, that seems to be a big trend now: industrial applications are learning from consumer applications to an extent. Whereas if you looked 30 or 40 years ago, industrial was really leading the way and consumer tech was somewhat following — at least that's my impression — now it seems like usability is where industrial technologies are really trying to play catch-up, and they're learning from the consumer landscape.

            [Syed]

And it'll continue, exactly. I'm going to age myself here, but there was a point in my career — I worked at IBM when I came out of college — when you knew why you went to work: you went to work to check your email. When I was a fresh grad, you went to work to do your expense reports. You did the electronic stuff at the office because you didn't have a laptop. Now you don't have to go to work to do any of those digital things; it can all be done from anywhere in the world. You go to work to meet people, talk to people, brainstorm live — and even that you can do on video, of course. But what's happened, generation after generation, is that when we're at the office now, we're often a generation behind in technology compared to at home.

When I was a very young fresh grad, I went to work and I was ahead in technology; at home, I was behind. Now it's flipped. The things we have in our homes are smart — Google and Amazon devices, et cetera — and can often be more advanced than our work environment. That's why what we expect to be delivered in our daily lives at home, we now also expect at work, and work is catching up. So what used to be considered a good interface in an industrial application now looks terrible, because companies like Apple have redefined our expectations of what a good interface means, and certainly we're learning from that. It's not just pretty charts and pretty slides; it's also the ease. In my world, the database world, it means that when you install something, it should be zero configuration — it should just automatically get set up. When you expand and scale it, you should just quickly drag and drop things. It shouldn't take a PhD in databases to be able to run this thing; it should be anybody.

            [Erik]

Interesting trend. Let's talk a little bit about your customers, because on the one hand this is a solution that could be applied by pretty much any industry for any use case, but I imagine that in reality there are some industries and some use cases that you tend to serve more. Is there kind of an 80/20 rule — a cluster of industries and use cases that you've found, for whatever reason, have a greater need right now for databases really designed for IoT?

            [Syed]

Yeah, and I'll say we use the word IoT as a moniker, but I think it's really any industry with a lot of data — in particular, a large quantity of machine data where time matters. So we have a lot of industrial companies as customers, but we also have, for example, McAfee, the security software company, as one of our biggest customers; Qualtrics, the recent SAP acquisition, is a customer as well, alongside big industrial companies. So it's a wide array of industries, but certainly with a strong industrial weighting. The common theme among these companies is that their use cases involve lots of data — terabytes, petabytes, et cetera — and time matters. The shelf life of data can often be very short. In a company like McAfee, they're doing security, and in security time matters.

Being able to find out what's going on and do something about it is very time sensitive. Or take a factory: one of our customers — you probably haven't heard of them — is a multibillion-dollar packaging company that makes packaging for companies you have heard of, like Coca-Cola and P&G. What matters there is that when I'm running a production line at very high speed, with high numbers of products, if there's any issue I can know about it immediately and act on it fast. So: lots of data, real-time use cases that rely on data being delivered where it needs to go quickly, with insights that help me act on it immediately.
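A rough sketch of the "know about a line issue immediately" pattern described here: aggregate the last few minutes of readings per station and flag anything outside a limit. It reuses the hypothetical schema from the earlier sketch; the threshold, names, and connection details are assumptions, not the customer's actual setup.

```python
# Hedged sketch: rolling five-minute check per device, feeding an alert or
# dashboard. Schema, threshold, and connection details are illustrative only.
from datetime import datetime, timedelta, timezone

from crate import client

conn = client.connect("http://localhost:4200")
cur = conn.cursor()

cutoff = datetime.now(timezone.utc) - timedelta(minutes=5)
cur.execute("""
    SELECT device_id,
           avg(payload['vibration_mm_s']) AS avg_vibration,
           max(payload['temperature_c'])  AS max_temp,
           count(*)                       AS samples
    FROM sensor_readings
    WHERE ts > ?
    GROUP BY device_id
    HAVING max(payload['temperature_c']) > 90
    ORDER BY max_temp DESC
""", (cutoff.isoformat(),))

for device_id, avg_vibration, max_temp, samples in cur.fetchall():
    # In a real deployment this would feed an operator dashboard or alerting system.
    print(f"ALERT {device_id}: max temp {max_temp} C over {samples} samples")
```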

            [Erik]

Yeah, that makes sense. Is this a cluster that's expanding in terms of the number of industries dealing with this? Right now you have your core niche of industrials, and then other companies serving these use cases, which are maybe technology providers like McAfee. Are system integrators heavily involved? Because needs appear when an application is being built, and the system integrators or the application providers are going to be the ones that know when an application is being built, or when an application is struggling — probably more than your sales team. Are you typically going to market together with other companies who are helping to build these applications, saying, when you encounter this problem, we have a solution you can bundle in? Or are you going direct and managing the relationships yourself? How does that work?

            [Syed]

Yeah, both. The channel is very important to us, and it'll grow over the next year and beyond — it will become bigger and bigger. I would say a very big part of our actual revenue will come from channels over the next year or two; we're building that out right now. Today we're mostly selling direct, but in 2020 the shift is beginning to happen in a dramatic way. A lot of our customers are IoT-platform types of companies, and I mean that in multiple ways. For example, there are two large industrial companies right now that we're very deeply involved with — two different companies that are both building their own IoT platforms for their own manufacturing as well as selling those platforms to their customers. So these are companies that are not software companies; they're industrial companies that happen to be selling a platform.

That's a very typical customer. Similarly, there are software companies we're talking to right now for whom that's all they do: they don't actually make a physical product, they build software. With one of them we'll have a press release coming out in the next 30 days or so announcing a partnership — we're going to be the engine inside their platform. Their platform is the application layer that delivers business value based on certain use cases; we're the engine inside that helps take all that data, make it useful, and give it context so the application works better and faster, frankly. So it's a combination of companies that are building their own applications and companies that are building applications to sell. That's where we are right now, and the channel will become an extremely important part of our business.

            [Erik]

That's an interesting trend. This first set of examples you gave — industrial companies who are building platforms both for their internal operations and as a new revenue model — is something we've seen very actively in China, where you have PetroChina and State Grid and the big construction equipment OEMs: very traditional industrial companies who are building platforms. They use them first for their internal operations, then they start to sell them up and down their supply chain to partners, and then they aspire to move into new verticals. But it's created kind of a new dynamic, because these companies don't have very much IT competency — even if they hire a couple of hundred developers, it's still not really in the DNA of their organization. So they're much more in a partnering role. There's a concept that Alibaba is pushing here, which is 'one plus N': Alibaba provides the high-level functionality.

And then you build your vertical platform on top of that — you provide the domain expertise. But of course it's really 'one plus N plus X': you still need people, you need Crate, you need other companies to fill in specific competencies. I think this is going to be a big trend. The application is so critical that we need the industrials to take an active role in building these applications, but the tech stack is so complicated that there's no way they're going to manage the entire stack themselves, which means it really needs to be a partner approach.

            [Syed]

Great observation. It's so interesting you say that, because across three different companies I've seen a couple of different philosophies in technology strategy, and I think one of them was wrong and a couple of them were right. Here's the difference: you have to figure out what you're good at. What is your real value proposition, what are you really good at, and what is secondary? The smart platform companies — the smart industrials — realize their strength is the shop floor, the process, the industrial side, the automation side. And while they have a lot of software people, some very good ones, they should use off-the-shelf tools where they can, because a database company puts all its blood, sweat, and tears into the database. You're not going to design a database better than us, or any other database company, because that's all we do for a living.

So you've got to focus on what you're good at and pull in products where you don't have that level of depth, and the smart ones are doing just that. The smart ones are also trying to minimize what I'll call the grunt work of their own teams and focus on the value layer. What does that mean? Forrester studied the world of data analytics and data science a couple of years ago and had a very well-written document about how — I'm going to be rough here — about 70% of the time a data scientist works in a company is spent on grunt work: cleaning data, sorting data, organizing data, et cetera. 70%. Only 30% is spent on deep data science. So grunt work, the stuff they hate to do, is the majority of their job. All right, now let's translate this into the world of applications and IoT platforms.

As the big industrial company, you should not be focusing your time on getting the data ready. The data gets collected, and if you have good tools — good databases, for example — that should be handled by the database. We were with another company just yesterday where they're using three different databases because no single one can do the job: database A for this part of the task, database B for that part, database C because they have a time-series-only database, plus one for this other kind of data and one that's expensive so it's used as cold storage — all of that junk. What happens is they have all these people whose jobs are just to administer databases. That's a waste. Focus your attention on the application, the value add, and get the infrastructure products that can take the grunt work away for you. That's the value we add: we allow them to not have to manage five databases to do the job of what Crate can do. Go focus your attention on the value you bring to your customers, not the database.

            [Erik]

Yeah, I think this is a very positive trend, because it means that as companies start to adopt this approach, we're going to free up a lot of effort to focus on solving real problems, as opposed to building replicas of existing systems. This is the way we need to be moving, so we can lower that failure rate from 70% and start really growing the market. Can you talk to us about a couple of cases? Maybe walk us through, if you're able to, from when the conversation first began, through the decision process, into deployment, so that we can understand the cycle of a Crate deployment.

            [Syed]

There are several examples where a company was trying to run an application on a database or a set of tools, and as the application grew, it started to fall apart. One example: a company that built an application that was running fine, but their data grew, and all of a sudden they got into hundreds and hundreds of nodes and billions of rows, et cetera. It just started to fall apart, and they needed to do things differently. We went in there and replaced their SQL and Elasticsearch setup with us. We were 20 times faster, with a 75% lower footprint cost, and we allowed them to grow very easily. What happened was that they had built a really good application on what I would call legacy underpinnings. How did it start?

Well, often, as I said earlier, you have a business leader of a function, or the sponsor of a use case, who has a plan to deploy and a plan to then scale. And when all of a sudden the scaling doesn't happen, we can come in and help — hopefully save the day by bringing an IoT-scale database to an IoT project. Sometimes it works like that. Sometimes you have a company like the one I mentioned earlier, the manufacturer of packaging products, where they have an aspiration: they want to do much more data-driven, smart manufacturing in their plants. They have about 180 plants in 45 countries around the world. They have world-class OEE — I've never heard numbers that high in my career; I can't disclose them, but I'll just tell you I'm amazed at how good it is — yet they wanted to do better.

They realized the only way to get better at this is by taking the analysis to the next level of digitization — the next, or last, mile of the process. What they found, at a very generic level, was that if the people on the plant floor just knew certain information sooner, or if junk information could be filtered out more effectively, they would be more effective. In other words, if I'm telling you ten things and four of them don't really matter, three you can wait on, and three you've got to act on right now — if I just did that filtering better, I could improve even further. This requires being able to take a lot of hard-to-analyze data and make sense of it very quickly, on a real-time basis. In this case, we aligned with the aspiration of what I'll call very visionary leadership thinking, gee, how can I do this better? And we helped bring it to life by doing it in a couple of plants and then growing it from there to what will very soon be a hundred plants — really good use of data on the shop floor.

            [Erik]

For the first case, I'm curious: when you move into a situation where an application has been built on more of a legacy database that doesn't scale well and you deploy Crate, does the application architecture have to change at all? Is it zero change, or are there some moderate changes?

            [Syed]

Yeah, of course, the classic answer: it depends. But I can tell you that in most cases we've seen, these are modern applications built on what I would call generally more modern databases. What you'll see is that there are databases — popular databases for modern applications — that are really good for developing applications on, but not really good at scaling. The good news is that the migration from those databases to ours is very easy. Migration has never ended up being much of an issue; it has been relatively painless, because we can handle a wide variety of data quite easily. We've had very little problem with migration, because the applications were often built on more modern databases — just databases that weren't designed to scale very well.
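For a sense of what such a migration can look like in practice, here is a generic sketch that streams rows out of a legacy PostgreSQL-style store (psycopg2 is assumed) and bulk-inserts them into the new database through its DB-API client. This is not Crate's own tooling; the connection strings, table name, and columns are placeholders.

```python
# Hedged sketch: batch migration from a legacy relational store into the new
# database. All connection details and schema names are placeholders.
import psycopg2
from crate import client

BATCH = 5000

src = psycopg2.connect("host=legacy-db dbname=telemetry user=reader password=secret")
dst = client.connect("http://localhost:4200")

src_cur = src.cursor(name="export")   # server-side cursor streams rows without loading them all
src_cur.execute("SELECT ts, device_id, value FROM readings")

dst_cur = dst.cursor()
while True:
    rows = src_cur.fetchmany(BATCH)
    if not rows:
        break
    # Convert timestamps to ISO strings so the target client can serialize them.
    batch = [(ts.isoformat(), device_id, value) for ts, device_id, value in rows]
    dst_cur.executemany(
        "INSERT INTO readings (ts, device_id, value) VALUES (?, ?, ?)",
        batch,
    )

src.close()
```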

            [Erik]

Yeah, that's a good point — there's a whole set of technologies that are really designed to help companies take a product from R&D through a pilot phase. That's necessary: you need tools designed for cost effectiveness and quick deployment. Those tools don't necessarily scale, but they're not bad tools; they're just suited to that particular set of problems of taking a product through validation.

            [Syed]

They scale well — just not at IoT scale. I've heard this before from customers: they say, well, wait a minute, such-and-such company scales a lot, it scales to this size. Sure, they scale to that size as long as all you're doing is time series data and no other kind of data, and you have extremely simple, known queries that get repeated a lot. But the reality in the industrial space is that the queries change and aren't always super simple, and it's not just time series data, as we all know. So they do scale, just not in the world we're talking about today.

            [Erik]

Okay. So we have to stop thinking about scale just in terms of the number of queries or data points, and think also about the complexity of the system. You mentioned pricing earlier. We don't need to get into Crate's specific pricing, but what are the variables that impact it — what would a typical equation be? I think it's often quite challenging for companies to understand what the lifetime cost of a solution might be, because at first the cost is very minimal, but then of course it can scale very quickly when you're talking about large data sets, if the pricing is based on the volume of queries or something similar. What is a typical formula here?

            [Syed]

Yeah, it revolves around the number of nodes you use and the amount of data you have. I can give you rough numbers: if you look at most of what I would call traditional modern databases — our competitors — we're often three to five times cheaper. It's the way they were architected; it's the way we scale and the amount of resources we use. So the question becomes, why are some of these more expensive? Well, it's the hardware utilization and the cloud footprint they have that makes a lot of these products more expensive. We're actually really efficient in terms of how much hardware we need, and we run on commodity hardware, really cheap instances. Those are the kinds of things that make the cost a lot lower.

So there's the pure operational running cost, and the pricing model is really based on the number of nodes and the amount of data, et cetera, which is very typical of most databases. The other piece where we have a big cost advantage is the human part: the amount of people you need to run the system, manage it, and scale it. Some of these flavors of technology are great technologies, but they need a lot of people — really smart PhDs — to be able to do really simple things, and that's part of the challenge too. You want database technologies that are easy to operate, easy to scale, and that don't need super deep, expensive, high-cost resources to do basic things, because that adds to the cost.

            [Erik]

            Yeah. I've seen a big trend there of looking at how to improve the efficiency of teams, also on the application development side. There are a lot of different areas where labor ends up being the very significant cost, right? And this is a real need, especially when you're selling to industrial companies that don't naturally have these people on staff. It means new hires, and hires they're maybe not used to making, right? It's quite different when you're hiring your first set of data scientists and PhDs and you haven't really managed those people before. It's not just the cost, but also the complexity of managing a new group of individuals in your organization.

            [Syed]

            When prospects talk to our customers on reference calls, they'll ask: so who manages the Crate database? And when you scale it, when you grow it, who does that? Our prospects are shocked at just how simple it is to use, and by the level of person who's running it; they almost push back: well, wait a minute, what about the rest of the team? And they're surprised by how easy it is. That goes back to your earlier point: ease of use is something that's expected, and the standard for ease of use has greatly increased over the years. You should expect that from your database. It should be effortless; it shouldn't require massive amounts of people or training to run it. That's one of the value propositions we talk about. So yes, hardware costs are low, but the people cost and the complexity of managing it are also things you really should look at.

            [Erik]

            Talk to me, Syed: what is going on in 2020 for you? I know that you're moving from product development towards ramping up sales, right? What are the milestones for the coming year?

            [Syed]

            Yeah. So for us, there's a big push toward expanding our cloud offering. As you mentioned earlier, we're going to have some very interesting partnerships that we'll be announcing and growing with. You're going to see some of our customers who started off with a smaller deployment of a few plants growing to a hundred and more plants. So this is a year of scale for us: massive scale in terms of growth within some of our existing customers, some new big names that you'll hear about, as well as partnerships. We're at that stage now where, you know how it is in the life of a startup, it's never a straight line; it's lines that go up really fast, then flatten out, then go up again. I would say 2020 for us is a year of scale and a year of pretty good growth. We've already had that growth: we grew three X this past year, and we expect 2020 to be another three X year over this year. So we're on a very good trajectory, and I'm pretty excited about what's coming up.

            [Erik]

            Right. Yeah, fantastic. And I know you're also raising a round, so I guess there could be a number of people interested in getting in touch with you, whether they're investors or potential partners or customers. What is the best way for people to reach out to you, or more generally to the Crate team?

            [Syed]

            I mean, certainly, I am on LinkedIn and on Twitter, so I welcome people to say hello, and I'm happy to talk about anything they want to talk about: our business, our strategy, products, or fantasy football. You name it; all those things are fair game.

            [Erik]

            Cool. We'll certainly put your Twitter feed and LinkedIn in the show notes. Syed, I really appreciate you taking the time to talk with us. Is there anything else that you want to cover that we missed before we call it a day?

            [Syed]

            No, although I always like to ask a good host like you a question as well. So you tell me something, looking ahead: it's the time of year when people have predictions for the year to come. What do you think about 2020? What are a couple of interesting or exciting trends in the world of IoT that you're keeping your eye on?

            [Erik]

            Yeah. There are a couple of points we already touched on here. When we set up our company in 2015 and you went to conferences, people were still asking the core questions: what is Industry 4.0, what is industrial IoT? Then we moved fairly quickly over the past few years from "what is it" to "what are the use cases," then "what are the challenges," and then "how should I start." I think now we're really getting to the point where a lot of companies have done their pilots. They've maybe learned some hard lessons, they've wasted some money, but they've learned. And now I think we're really moving towards a point where companies are going to be able to see scale. Part of this is the learning curve of the end users developing the applications for their business.

            And part of it is the development of companies like Crate that are providing better functionality to build on top of. So this is really what I'm looking forward to in the coming year. We see a lot of this in our client base: companies that in the past were talking to us about what they should be doing, or how to start, are now moving more towards, okay, how do we scale this system we've been playing with for the past 12 months? That's, I think, the big trend. And then there's the other trend that we've already touched on a bit: companies figuring out how to build solutions that are not just an operational improvement, but that impact the bottom line. We see a lot of companies that have been electrical or mechanical engineering companies, whatever kind of engineering company, for the past 50 or a hundred years, and are now asking how this new set of digital tech [inaudible] can enable them to grow revenue.

            So we see chemical companies, for example, saying: maybe we can sell a smart factory solution, including sensors, platform, and applications, to our customers, so that when they purchase our chemicals, they can utilize them more efficiently. That makes them more likely to use us as a supplier, but we can also earn incremental revenue on this IoT system we're selling them. And we see construction equipment companies looking at how they can compete not just on lifting capability, but on usability or remote control and so forth. So we see a big trend among these companies to look beyond operational improvements, although those are still critical, and really look at how they can start to integrate this technology into their offering.

            So those are the two things: I think we're going to see companies really moving from pilot to scale, and we're going to see more companies figuring out how to adopt digital technologies; non-IT companies that are not digital natives adopting digital technologies into their core offering. Those are the two things I'm really looking forward to. I think we're going to see a lot of growth here in the next year, but this is a decade-long trend.

            I think we're just at the start of a longer-term trend here.

            [Syed]

            I totally see that as well, and those things are very connected, by the way, right? You've got to go from pilot to scale to fundamentally changing your business model, as you're suggesting. We're seeing the same thing. We're seeing the smart companies, the leaders, not getting excited about a fourteenth pilot or another science project, but really asking: how are we fundamentally improving the business? And if so, let's do it all over the place. So your first point, pilot to scale: we see that often, and that excites us. That's why we're in business, frankly. Here's to a great 2020 for both of our organizations. It's been a real pleasure to speak with you.

            [Outro]

            Thanks for tuning into another edition of the industrial IOT spotlight. Don't forget to follow us on Twitter at IoTONEHQ and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com

            EP058 - A conversation on AI, social responsibility, and future disruptions - Neil Sahota, Master Inventor, IBM
            Monday, Mar 02, 2020

            In this episode of the IIoT Spotlight Podcast, we discuss the current state of AI in practice, the integration of AI and IoT, and the importance of new data sources designed for machine use. On a more human angle, we highlight the importance of measuring social impact while deploying disruptive technology.

            Neil Sahota is an IBM Master Inventor, United Nations Artificial Intelligence subject matter expert, and on the Faculty at UC Irvine. With over twenty years’ experience, Neil works with clients and business partners to create next generation products/solutions powered by emerging technologies. He is also the author of Own the A.I. Revolution, a practical guide to leveraging A.I. for social good and business growth. 

            EP056 - Airbnb for telecommunications - Frank Mong, COO, Helium
            Monday, Dec 09, 2019

            In this episode of the IIoT Spotlight Podcast, we discuss how Helium is providing universal and affordable connectivity for IoT devices, provided by people, and secured through a blockchain.



            How do you provide connectivity where there is no infrastructure?

            How do you incentivize people to contribute to an open-source connectivity ecosystem?

            What are the connectivity challenges facing IoT device manufacturers today?



            Frank Mong is COO at Helium, where he is leading the global go-to-market strategy for the company. He is responsible for sales, marketing and business development at Helium. Before Helium, Mong spent 20 years in cybersecurity including CMO at Hortonworks, SVP of Marketing at Palo Alto Networks, and VP/GM of security at HP. frank@helium.com



            Helium is building the world’s first peer-to-peer wireless network designed specifically for IoT devices. Helium's technology will help spark a wave of innovation by enabling a new generation of small, mobile, low-powered IoT devices that include remote wildfire and agricultural sensors, micro-mobility trackers, and more. helium.com



            Use cases: Helium.com/enterprise

            Developer resources (software, hardware, firmware, connectivity documentation): helium.com/developers

            Buy Helium Hotspot: helium.com/store

            ________

            Automated Transcript

            [Intro]

            Welcome to the industrial IoT spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

            Welcome back to the industrial IoT spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today is Frank Mong, COO of Helium. Prior to joining Helium in 2017, Frank gained 20 years of experience in cybersecurity as CMO of Hortonworks, SVP of Marketing at Palo Alto Networks, and VP/GM of Security at HP. Helium was founded in 2013 by Shawn Fanning, the founder of Napster, and aims to provide affordable and universal connectivity for IoT devices through a decentralized network owned by people and secured through a blockchain. Together, we discuss the connectivity challenges facing IoT device manufacturers today and how Helium's peer-to-peer wireless network could reduce costs and improve coverage in open spaces as an alternative to cellular. We also explore the role of blockchain and crypto tokens in securing the network and providing an incentive model for the individuals who build the network by deploying Helium hotspots. I hope you find our conversation valuable, and I look forward to your thoughts and comments.

            [Erik]

            So Frank, thank you so much for joining us today.

            [Frank]

            Thank you, Erik. It's a pleasure to be on.

            [Erik]

            Frank, before we dive into Helium, the technology, and the business model, I want to share a little bit of your background, because you have quite a deep background in cybersecurity, so people know who they're talking to. You've worked with Hortonworks, with Palo Alto Networks, with HP. Can you give a quick overview of what your path has been and how you ended up joining a startup like Helium?

            [Frank]

            Sure. It's always been cybersecurity, and it started in the early days of startups as well. When I first began my working life, I was at a startup called Ignite Technologies, a company focused on managed security services, managed firewalls, back in 1999. I was 23 or 24 at the time, and it was when companies and individuals were first connecting to something called the internet and using something called the Netscape browser, if you remember those days. That was definitely a revolutionary time. In the years since, it's really been about focusing and honing in on different aspects of cybersecurity: running product marketing at antivirus companies, then going to Trend Micro to run marketing. Then I steered away into more general management roles in cybersecurity, including at Palo Alto Networks, where I was the senior VP of product marketing for the next-gen firewall business. And then I took a jump into more of a COO type of role. I think at that point I realized I wanted something different; I wanted to really challenge myself and get out of my comfort zone. That's when my search for startups started up again, and I really thought through what it would mean to actually be at a startup again. I found Helium, or Helium found me, through one of their Series A investors.

            After meeting with the team, I realized Helium is definitely a company I'm interested in, and what sealed the deal for me was my meeting with the co-founder, Amir Haleem. We realized that he and I have a lot in common growing up: we both love video games, he also likes to tinker with hardware, and really the mission and the challenge are what sold me. Figuring out how to build a ubiquitous network for IoT devices to connect to, that was it. And here we are, two years later.

            [Erik]

            Yeah, great. It's really quite the team behind Helium, and quite the ambitious mission as well: shaping the future by rewarding anyone for creating a global, decentralized wireless network for billions of machines. That's at least your mission as it stands on LinkedIn. Can you give us the high level: why is this important? Why is the work that Helium's doing important for the future of the internet?

            [Frank]

            Yeah. Think about the challenge of the internet of things, IoT. For folks that aren't familiar, we're talking about little sensors, little devices, potentially autonomous machines that have to do simple work, or maybe complex work, but they need to connect not just inside a factory or inside a smart building. Think outside the building, outside of where traditional WiFi sits, right? If you think about extending outside the building, the only connectivity option is cellular, and the tooling isn't easy; creating something from nothing using cellular is actually quite difficult. So we saw an opportunity as a company to change that. Part of that is having a wireless network that can compete with the connectivity of cellular, not from a speed perspective, but from a coverage perspective, built for IoT and just for IoT.

            That's one challenge: how do you do that? The second piece is how you make it extremely easy and simple for anyone that has a great idea, or an application they want to bring to reality, to build on that platform. Usually, when you think about super easy and super simple to build, it involves open source; it involves a community of like-minded folks with the same interests who build the network using a decentralized method. It involves a lot of components beyond just wireless technology, but I think the key component is incentive. How do you incentivize individuals, consumers, to create a network that's ubiquitous, that works for everything? And once you're able to do that, how do you get folks to build technology and applications using it? That's the challenge; that's the road we're on right now: creating a two-sided marketplace of network operators, who are consumers and individuals, and network users, who might be simple hobbyists building IoT devices or large enterprises that want to deploy millions and billions of machines.

            [Erik]

            And you're using the Helium tokens to create that incentive model. Can you give a quick overview of the different elements? You have the Helium Hotspot, the connectivity device, and you have the Helium tokens that provide the incentive mechanism. What are the other elements, and how do they fit together into the comprehensive solution that enables this marketplace?

            [Frank]

            Sure. There are probably three main components to think about. One is the Helium Hotspot itself. It's standard, off-the-shelf hardware running on LoRa technology: LoRa hardware and LoRa chips built by Semtech, or any compatible version of that. To make it work and run at as low a cost as possible, we use off-the-shelf componentry complementing that LoRa technology. There's a dual-core processor in there, and I want to say something like six gigs of RAM and 64 gigs of SSD storage on the hotspot itself. And we created an open source protocol running on the LoRa hardware. The reason we had to do that is that our network isn't centrally owned. We could have used some of the other protocols out there, like LoRaWAN, but that requires a centralized server for data and key management.

            We are decentralized because of the second component, which is the blockchain. The blockchain is homegrown, invented by Helium from the ground up. Our blockchain is similar to Bitcoin, which most of your audience knows, but where Bitcoin has proof of work, hashing and rehashing data, our proof is proof of coverage. Helium has created a blockchain that allows a Helium Hotspot to communicate with other hotspots nearby and create a trusted network automatically. That trusted network involves the reliability, health, and security of the network: proving that the hotspots are providing coverage, and that the coverage is healthy, reliable, and secure. So that's the second component of what we've built. The hotspots have to prove that, and when they prove it, they mine the cryptocurrency, Helium; the work they've done is used to prove that they provided coverage.

            So as a reward for doing that work of proving they're providing coverage, the hotspots get to mine the cryptocurrency. And then the third component is really the users using the network: how do you get people to build on top of it? That's where Helium's open source SDKs and APIs and all of our developer kits come into play. It's doing everything possible to make it easy, affordable, and simple to onboard new applications, whether software or hardware, onto the Helium network. Folks can take a look at our designs, specs, and code and build their own hotspots if they want. So those are probably the three main components that make up what we've created.
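
            As a purely illustrative sketch of the three pieces Frank describes (hotspot hardware, a blockchain that rewards proof of coverage, and applications using the network), here is a simplified Python model. The class names, reward amounts, and the challenge/witness logic are invented for illustration and do not reflect Helium's actual protocol or APIs.

```python
from dataclasses import dataclass, field

# Toy model of the three components: hotspots, a coverage-rewarding ledger,
# and devices that use the network. Everything here is hypothetical.

@dataclass
class Hotspot:
    owner: str
    location: str                     # hotspots assert a physical location
    balance: float = 0.0              # mined tokens, in illustrative units

@dataclass
class Ledger:
    """Stands in for the blockchain that records coverage proofs and rewards."""
    reward_per_proof: float = 1.0
    entries: list = field(default_factory=list)

    def proof_of_coverage(self, challenged: Hotspot, witnesses: list) -> bool:
        # Extremely simplified: coverage counts as proven if nearby hotspots
        # witnessed the challenged hotspot's transmission.
        proven = len(witnesses) > 0
        if proven:
            challenged.balance += self.reward_per_proof
            self.entries.append((challenged.owner, challenged.location, len(witnesses)))
        return proven

@dataclass
class Device:
    """A network user: a sensor that sends small packets via any hotspot."""
    oui: str                          # identifies which application owns the data

    def send(self, payload: bytes):
        # The hotspot only forwards; routing is resolved from the OUI.
        return (self.oui, payload)

# Usage: one hotspot proves coverage (witnessed by another) and earns a reward.
ledger = Ledger()
hs_a, hs_b = Hotspot("alice", "austin"), Hotspot("bob", "austin")
ledger.proof_of_coverage(challenged=hs_a, witnesses=[hs_b])
print(hs_a.balance)                   # 1.0 in this toy model
```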

            [Erik]

            Frank, when you look at who's purchasing the Helium hotspots, are we talking about small businesses that want to develop their own networks to serve their devices? Are we talking about enthusiasts who are particularly interested in the technology of wireless connectivity and want to be part of this trend? Or are we talking about well-intentioned individuals like my mother, who might not know very much about the technology itself but wants to provide connectivity for her neighborhood?

            [Frank]

            I call them crypto enthusiasts: people that are into cryptocurrency, that probably have a Coinbase account and have purchased some Bitcoin or Ethereum. They tend to also be very interested in Helium Hotspots, purely from a mining perspective, mining cryptocurrency, and they like this technology because it's very consumer friendly. We're probably the most consumer-friendly IoT wireless gateway hotspot, with a very customer-friendly blockchain that takes away all the technicalities of blockchain and cryptocurrency and really makes it a nice user experience. So those are the early adopters of our technology. And definitely, if your mother knows what blockchain is, or sorry, knows what Bitcoin is, she could be interested in this, because the hotspot is also very low power. It's about five watts, like an LED light bulb, right? It's not going to consume a noticeable amount of energy.

            So from that perspective, it's fairly low cost over time, and it lets anyone that understands how to use a smartphone and apps get going. That's attractive for a lot of folks, and that's sort of the profile of our initial adoption. We actually did a survey recently; I think we asked about 280 people who've purchased hotspots why they bought them. Interestingly, an overwhelming roughly 60% of respondents told us they purchased a hotspot because they want to build a network. They want the sense that they're an entrepreneur who can own and operate a network, which I'm surprised but not surprised by, because if you think about what we're doing, it's very analogous to Airbnb. We're enabling everyday folks, average people, to become network operators, the way Airbnb has enabled average people to become hotel management, renting their room or their apartment to anyone. Helium is doing the same with the network: your network at home can become something you can leverage. And we're trying to create the other side, where others want to use your network, and we'll reward you for it. So that system, like an Airbnb, is what we think we've done for telecommunications: really changing that model.

            [Erik]

            Yeah. And of course, for Airbnb, people are not necessarily doing this out of an idealistic objective; it's because there's an ROI on an asset that's otherwise being underutilized. Certainly that's the case for many people's internet connection much of the time. I guess it's a little bit early to point to what the ROI is, but is there any indication of what somebody might expect as a return on a hotspot in a moderate, medium-use case at present?

            [Frank]

            We don't know, and that's not part of how we sell. We can't talk about potential future gains, and we're not involved with that at all, so it's unknown. For someone to purchase a hotspot today, they're the ones that really believe in the idea: you're doing this to build a network, to become an independent network operator in a very simple and user-friendly way. And the hope is that someday it becomes big, the network becomes extremely useful and highly valued, and in that world, everyone in the ecosystem wins. So it's definitely not for everyone, but there are those that believe in it, and we're super happy they're with us.

            [Erik]

            Yeah, well, you don't need everybody. I guess if you have a couple of hundred people...

            [Frank]

            That's right.

            [Erik]

            Your site says limited quantity available for US ordering, 77% sold. Are you able to share with us the number that are out in adoption right now?

            [Frank]

            Yeah. If you download the app, there's something called a blockchain viewer, so in our app you can actually see the total number of hotspots that are out there today. I think right now the exact number of hotspots installed across the United States is 395. The predominant cities covered are Austin, Texas; San Francisco; Boston, Massachusetts; Chicago; Atlanta; parts of Florida; and then I would say another dozen or so cities. We launched in Austin, Texas first; that was August 1, our first city. San Francisco has a nice network because of our employees and friends and family, who are doing a lot of the early alpha testing with us. What we've done since August 1 is start really testing our supply chain. We've shipped hotspots to various customers that ordered, just so we can make sure our logistics are sound, iron out the wrinkles, and correct mistakes, of which there are a lot; it's early days.

            So we're learning quite a bit about that, but everything we've done so far is assembled in the United States. We're building everything in San Jose; we're not offshoring this anywhere else, and we're proud of that. At the same time, it's very, very expensive. You're in Shanghai, so you know it could be a lot cheaper if we built this somewhere in Asia, but we've elected to do it in the United States for security reasons, because our encryption keys and key injections happen on site in our factory, and we're going to keep it that way for as long as we can. Our next shipment is coming out October 15, so in less than two weeks we're shipping thousands of units across the United States, and the indicator you see online today is for a November shipment. We're probably going to start doing monthly rollouts, where if folks order, we should be able to fulfill within 30 days max, probably less. Right now, because we're shipping in unit volumes of just thousands, not tens of thousands or hundreds of thousands, it's very expensive; it's almost just-in-time manufacturing, which is costly for us. So we try to batch the orders as best we can, and that's what that bar is indicating: the November shipment, which is a few thousand units, and then another batch for December, which is a few thousand more.

            [Erik]

            Okay. So if you're putting out a few thousand a month, when do you expect to have sufficient coverage? I guess this is still city by city, but when do you get to the point where you can go to, let's say, a large corporate that needs a solution covering a significant portion of the US and make a viable proposition: listen, the network is ready for you; we've got the top 50 cities, or 20 cities, or whatever, with 50% average coverage in those cities. Do you already have a roadmap for when that will be?

            [Frank]

            When we ship on October 15, we will be in 263 cities across the US. Some cities will have more density than others, but I would say 10 to 15 cities will be very well covered, and these are the major metropolitan areas of the United States: the San Francisco Bay Area, Austin, Texas, New York City, the Bronx, Boston, Boulder, Colorado, other parts of Texas, as well as Atlanta, Georgia, and Chicago. So the typical major metro areas in the United States should be very well covered at that point.

            [Erik]

            Great. And then let's say I'm manufacturing smartwatches or bikes and I want coverage for my device, so I come to you. What's the conversation? What do I have to install in terms of hardware and software to join the network?

            [Frank]

            Yeah. First and foremost, we're open sourcing all of our tech, and the open source SDKs and APIs are coming out; I believe the beta is coming towards the end of October. That will be the set of tools developers need. Then we'll have a hardware development kit scheduled for release towards the end of the year, which will include boards, development kits, and reference designs, whether it's for a farm, a dog collar, a smartwatch, or some kind of tracking device for bikes and scooters. All of those things will be available. We have a lot of stuff in our GitHub, which we'll release, and we already have over 200 developers on the waiting list; some have already started developing with us in our early alpha. So all the tools developers need will be there for them very soon, within the next couple of weeks.

            [Erik]

            Okay, great. Do you have any visibility into what the cost of the hardware might be in comparison to the legacy protocols or solutions that manufacturers are using today?

            [Frank]

            Because we're on LoRa hardware, existing LoRa devices, which a lot of developers in the world of IoT understand or have played with, should just work. We'll publish what they need alongside the LongFi SDKs and LongFi APIs, so that's essentially zero cost: they can take their existing hardware and modify it to be compatible with the Helium network. Anything beyond that is exploring the latest and greatest hardware, which is off the shelf, so nothing should be too pricey; everything should be fairly competitive in price based on volume. And certainly, to play around and test, it should be pretty simple. We're going to package up dev kits for folks. I don't have pricing on that yet, but we're going to try to make it as competitive as possible and as easy as possible to adopt the Helium network and our open source technology.

            [Erik]

            And then connecting the device: let's say the manufacturer has adopted the necessary technology to sync to the network, and the device is now on the street. Will it then pair automatically with whichever Helium hotspot is closest, or how does the pairing happen?

            [Frank]

            It's really cool, because we're on a blockchain. Every hotspot is running a node, an independent node on the blockchain, and what that means is the blockchain knows how to route the traffic. So the owner of that sensor does not need to worry about onboarding the device; they simply ship it out into the world. As soon as it talks, a Helium hotspot that understands LongFi will hear the sensor and know how to route that traffic back to its owner, fully encrypted. It uses something we call an OUI, an organizational unique identifier, with a private key and public key pairing between the device and the cloud database of that particular device owner. It doesn't matter where you are in the world: everything is permissionless, and from a wireless onboarding perspective it's ubiquitous. There's no concept of roaming, because you don't need to roam. That's a huge advantage of what we're trying to build: it's a public network, it's decentralized, it's owned by the people. If your device is talking LongFi and reaches the Helium network, the data gets back to you, regardless of what country you're in and regardless of whose hotspot it is. The routing is essentially built in, so the hotspot delivers the data.
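
            To make the routing idea concrete, here is a minimal, hypothetical sketch of the flow Frank describes: the hotspot never decrypts the payload, it only looks up the destination registered for the packet's OUI and forwards the still-encrypted bytes. The function names, the routing table, and the placeholder "encryption" (a trivial reversible tag standing in for real hardware-backed asymmetric crypto) are all invented for illustration; this is not Helium's actual implementation.

```python
# Hypothetical sketch: a hotspot forwards encrypted packets by OUI without
# ever being able to read them. The "encryption" below is a trivial tag that
# stands in for real hardware-backed asymmetric cryptography.

ROUTING_TABLE = {
    # OUI -> destination endpoint of the device owner's cloud (invented value)
    "oui-1234": "https://example-owner.cloud/ingest",
}

def device_encrypt(payload: bytes, owner_key: str) -> bytes:
    # Placeholder: a real device would encrypt with a key held in secure hardware.
    return b"ENC[" + owner_key.encode() + b"]" + payload

def hotspot_forward(oui: str, ciphertext: bytes):
    # The hotspot sees only the OUI and opaque ciphertext. It resolves the
    # destination for that OUI and forwards the bytes unchanged.
    return ROUTING_TABLE[oui], ciphertext

def owner_decrypt(ciphertext: bytes, owner_key: str) -> bytes:
    # Placeholder: only the owner's backend can recover the payload.
    prefix = b"ENC[" + owner_key.encode() + b"]"
    assert ciphertext.startswith(prefix), "wrong key"
    return ciphertext[len(prefix):]

# Usage: the sensor encrypts, any hotspot forwards, only the owner reads it.
packet = device_encrypt(b'{"temp_c": 21.4}', owner_key="owner-key")
destination, blob = hotspot_forward("oui-1234", packet)
print(destination, owner_decrypt(blob, owner_key="owner-key"))
```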

            [Erik]

            Okay, interesting. I guess two topics come up: one is security, and I know this is your background, and the second is, let's say, government regulation. I suppose some governments are going to be more receptive than others to a new network that's not under the control of state-owned enterprises, for example. Maybe we talk first about security. I imagine companies are always very sensitive about their data, and now the data is moving through devices that are owned by a lot of individuals. I'm sure blockchain has a role here, but how do you ensure that somebody is not hacking into a data stream?

            [Frank]

            Yeah. So each device that's out in the world has what's called an ECC chip into which a private key is injected, a private key actually sitting in hardware, and only the owner of the device holds the corresponding key to decrypt that data. As soon as the data is collected on the sensor and transmitted over the air, the hotspot hears an encrypted packet, essentially. The hotspot does not know the contents of the data; it cannot unpack it; it doesn't have the keys to unpack it. What it does is look at the organizational unique ID, look in the blockchain to see where the destination address is, an encrypted address, and send the data to that address. Once the address receives the data, the credit then goes to that hotspot.

            So the hotspot gets credited for transmitting the data, and that's part of our mining protocol. That's one aspect of security: the data, from collection to moving through a public network, is fully encrypted, and it's never decrypted until it hits the sensor owner's database, where their private key is the only thing that can unlock it. So security, at least in transit, is fully covered. Once it hits the customer's AWS or Azure or Google Cloud instance, that's where the processing of the data and the handling of their keys occurs. The other aspect of security is the blockchain itself. Because every hotspot is a node on the blockchain, the blockchain is immutable; it's a collective peer-to-peer network, and if there's an attempt to tamper with the blockchain,

            it takes 51% of the network to overrule consensus. If you have a network of thousands or tens of thousands of nodes, it is a lot of work to try to take over that network. It can be done, but these hotspots, these nodes, require you to be in a unique physical location, and you have to prove your location. You can't just be a virtual machine on some cloud instance; you can't spin up hundreds of thousands of virtual machines to try to fake the blockchain, because it won't work. So we're censorship resistant, and we're protecting against Sybil attacks, DDoS, and so forth. All of that is built into our blockchain logic; it's native to what we do. Hopefully that answers the first part of your question. The second part, I think, is really less about government philosophies and more about what governments empower their people to do, right?

            Especially in a world of IoT: regardless of your government's mission or philosophy, if every person under your government's care, in your country, benefits because they get a chance to own and operate something, in this case coverage for devices, I think the incentive is there, and everyone will want to do it and keep doing it, just like with Airbnb and other things. It's good for everyone. I don't know why a government would be against it, especially in the world of telco, where it's so expensive to build infrastructure. If you can offload that to the masses and give them a piece of the action, I don't see how that hurts anybody.

            [Erik]

            Yeah. Maybe the point here is a little bit more that there are very strong legacy players, and it's a highly regulated industry. Has this been an issue for the other players, like Sigfox, as they roll out? Have they been coming up against lawsuits?

            [Frank]

            I don't know about lawsuits, but if you think about AT&T and Verizon, they have their own approach: they're supporting things like NB-IoT and LTE-M. Ultimately, we think those are flawed approaches, because if a small sensor has to pay anything close to three or four dollars a month on a data plan to provide somebody temperature information, as an example, that's never going to work. It just doesn't scale, right? It should be fractions of a penny. And of course, traditional telcos aren't really interested in that business, because they don't have to be; they're providing great service for traditional cell phones, iPhones and Android phones, and every person with a cell phone plan is a product, right? Every year it gets more expensive. It's a great business; why leave it? Why not focus on that farm for as long as you can? And if there's a budding, emerging IoT market happening, let it grow. I think that's best handled by disruptive technology, disruptive entrepreneurs, perhaps the people, and let them grow it.

            [Erik]

            Yeah, perfect. You're probably right that this is not interesting to them right now, and by the time it becomes interesting to them, the landscape will look quite different. What about 5G? What is the potential impact? I think there's a lot of hype; probably some of it is well placed and some is misplaced. But how do you see 5G impacting your business in the future?

            [Frank]

            I see zero impact. I think your audience in the world of industrial IoT knows this: so far, I think 5G is a farce. There's nothing unique about it. To truly get gigabit, fiber-like speeds on your phone, you need a backhaul to carry that through, right? And how many homes in the world have fiber backhaul? Certainly not the majority; maybe a minority, in new buildings and new metro areas, which is great. But to get the amount of data you need, imagine streaming 4K on your phone over 5G for 10,000 people at the same time. That pipe is massive, and then you have to be super close to the gateway; it may not even reach a block.

            You have to be within half a block of the gateway to get that kind of speed. So what I've heard is that's a lot of hotspots, a lot of gateways telcos will have to deploy. It's an insane amount of equipment, which is expensive, right? So again, we're back to building infrastructure. If you had to put a 5G gateway on every block to provide 5G service, how is that even possible? What kind of cost does that incur? It's just ridiculous. Then you think, okay, maybe it's not every block; maybe it's every five blocks, every ten blocks. Guess what? That bandwidth drops down to LTE speeds. Oh, that's what we're on, right? We're on 4G LTE; that's what the world's on. So I don't know how this is any different. I think, unfortunately, the idea of 5G is the doing of guys like Qualcomm, the phone makers, the chip makers, who have to iterate and need everything to refresh, and the network equipment guys, the Ericssons of the world selling telco equipment. If you don't have a new standard and a new chip, there's no swap, right? No infrastructure change, so no revenue. So to me, the whole thing is just a game; they have to upgrade.

            [Erik]

            But in any case, you see this more as them upgrading towards higher bandwidth, streaming video more efficiently, not really serving IoT. I mean, at best, my understanding was that that was one of the propositions, although I know the standard is still incomplete, so there's a lot still up in the air. At least one of the propositions was that 5G would also be better suited to low-bandwidth devices, small data packages, and so forth. But yeah, I guess we won't know if that is the reality for some number of years.

            [Frank]

            If 5G has chip technology that uses so little power that we can run it on a little lithium battery for three years, maybe. I doubt it; I doubt that's the value prop. I haven't seen that. I haven't actually seen anything other than 900 megahertz.

            [Erik]

            Okay, great. It's funny, I was having a conversation with somebody last week who was just convinced that 5G was like flipping a switch, and then IoT would suddenly be everywhere. And I'm thinking, when we talk to people about making IoT use cases work, this is not necessarily the core problem; they have a lot of problems, but this is not the core one. Let's talk about one or two specific use cases from some of your early adopters. You mentioned ag tech earlier; maybe that would be a good starting place.

            [Frank]

            Yeah, certainly. So we have a customer located in South Carolina, a company that provides telemetry for greenhouse farming, things like vegetables and fruits year round. What they've done is help their farmers leverage technology and the Helium network by building out sensors that measure soil conditions, air conditions, and temperature across all the greenhouses and acres of land, using one Helium Hotspot as the collection point for those sensors, and then routing that data back to their analytics platform to help the farmers better manage their crop yield over time. That's a great example of leveraging the technology for farming purposes, to help improve yield and manage land use over the years. The great thing is that they don't have to deploy WiFi across their property, and they don't have to use cellular, which could be very expensive; there are also certain areas on their land that don't get cell coverage.

            So that's problematic as well; this really solves a lot of those problems. The sensors, depending on use, are typically very low power, so you can plug a sensor into the soil and more or less forget it for a couple of years without having to worry. Those are all cool things. In addition, they've been testing a lot of our tracking sensors that send GPS coordinates back to the farm owners, so heavy equipment and machinery on the farm can be tracked, whether it's in use or not. Apparently theft of farm equipment is decently high, so there's a real problem there, and having an independent, battery-operated sensor that can last for a while is very useful from an equipment tracking perspective. That ties into a lot of what we've seen around bike sharing and scooter sharing in cities. I don't know how things are in Shanghai for you, but in cities across the US there are tons of scooters. One of the companies we've been working with is Lime, which is servicing 200-plus cities around the world, and they've been testing our technology in San Francisco and Austin for some time now, basically to keep their scooters available on the streets for customers to use. Because they're highly mobile, they're easy to steal, but they can track them down with our Helium network.

            [Erik]

            What were they using for connectivity before? I was curious about that, because they all have some sort of cellular for transactions.

            [Frank]

            The problem with the cellular implementation is that most of these scooter companies, these sharing companies, are using the cheapest possible cellular available. That's likely 2G, and a lot of 2G in the US is getting decommissioned, so coverage is really poor and it's not great. So having Helium as a backup, sort of as the aide, or the LoJack, let's say, for scooters, has been really helpful for them.

            [Erik]

            Yeah. Well, I know in China some of these bikes were being manufactured for something like $40 of CapEx. So I imagine even a $1-a-month subscription, and I don't know what it would actually look like, is already a significant cost for them to bear, so I'm sure they're happy to find an alternative. How many devices can one Helium Hotspot potentially service?

            [Frank]

            We don't have real-world numbers yet, because we have fairly good density across cities, but I imagine it would be something like 300 to 500 in a burst. Remember, these are not sustained connections; they're bursts of data being sent. So depending on the application and use, if it's simultaneous, three to five hundred sensors talking at exactly the same time, exactly the same nanosecond, that's supported per hotspot. And if you have a couple of hundred hotspots in a city, you can multiply that out: decent coverage for the number of devices at any moment in time.
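
            As a rough illustration of that back-of-the-envelope math, hundreds of simultaneous bursts per hotspot multiplied across a city's hotspots, here is a tiny Python sketch. The figures are the ballpark numbers from the conversation, not measured capacity.

```python
# Ballpark capacity estimate based on the figures mentioned above.
bursts_per_hotspot = (300, 500)   # simultaneous bursts one hotspot might handle
hotspots_in_city = 200            # e.g. a city with a couple of hundred hotspots

low = bursts_per_hotspot[0] * hotspots_in_city
high = bursts_per_hotspot[1] * hotspots_in_city
print(f"Roughly {low:,} to {high:,} devices could burst at the same instant city-wide.")
```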

            [Erik]

            And if we had a device like a smartwatch that was not just sending simple bursts of data but, say, streaming audio for a conversation, would that be feasible, or does that require too much bandwidth for the device to manage?

            [Frank]

            I would say likely not a good idea, only because for us to achieve, you know, a hundred square miles of range, the data packet has to be small. We're talking 24 bytes up to 20 kilobytes at the very high end. So I would argue voice is probably not a good application. However, text messaging or limited email could be fine. If communication is the application you need, you could certainly create some old-form pager system, or something like the old BlackBerrys texting each other; simple things.
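
            To see why voice does not fit while short text does, here is a quick, hypothetical feasibility check in Python. It uses the 20 KB upper packet size Frank mentions; the 8 kbps voice bitrate is an assumed figure for a low-bitrate codec, chosen only for illustration.

```python
# Rough feasibility check against the 24-byte-to-20-KB payload range above.
MAX_PACKET_BYTES = 20 * 1024      # upper bound mentioned in the conversation
VOICE_BITRATE_BPS = 8_000         # assumed low-bitrate voice codec (~8 kbps)

bytes_per_minute_of_voice = VOICE_BITRATE_BPS / 8 * 60
packets_per_minute = bytes_per_minute_of_voice / MAX_PACKET_BYTES
print(f"Streaming voice needs ~{packets_per_minute:.1f} max-size packets every minute, continuously.")

# A short alert, by contrast, fits comfortably in a single small burst.
message = "Pump 3 offline - temperature alarm"
print(f"Alert size: {len(message.encode('utf-8'))} bytes.")
```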

            [Erik]

            Gotcha. Okay. So it's really designed for IoT, machine to machine.

            [Frank]

            That's right. That's right.

            [Erik]

            Is there anything that we're missing here that's critical for people to understand?

            [Frank]

            The question I probably get the most when talking to potential folks interested in using the network is about network coverage, so maybe it's worth reiterating how fast we're creating coverage. On August 1 we shipped to Austin, Texas, and within, I would say, four or five days, the entire city of Austin was covered. On October 15 we're shipping to 263 or 264 cities; that means within four or five days there will be 264 cities with Helium coverage. And that's days. Worst case, a couple of weeks. That is amazing. If you think about it, that is bringing a network to market at lightning speed; that's never been seen before. I think that's the power of the Helium incentive and of leveraging the people who create it: the incentive model creates hyper speed for deploying infrastructure that's useful for enterprises and industrial IoT. So that's something to think about. It's a hard idea to grasp, because it's hard to believe there are thousands of people out there willing to pay $500 for a hotspot for the sake of mining crypto, but it's a whole new world out there; times are changing.

            [Erik]

            Well, hey, I believe it. I think finding 10,000 people to pay a bit of money for a new technology, and for ownership in a new business model, is something that's very feasible today. The challenge for you, then, is going to be engaging with the device manufacturers and device operators and getting them on board.

            If somebody is interested in exploring how their device could use the network, what's the best way for them to get in touch with Helium, or to inform themselves about what the steps would be?

            [Frank]

            Sure. Check out our website, helium.com; there are a number of great resources there. helium.com/enterprise talks about the use cases we've already shared. If they're a developer and want to develop devices to operate on the Helium network, check out helium.com/developers; there are great resources there. And certainly, for folks listening to your show who are interested in buying the hotspot, take a look at the helium.com/store page. If anyone's interested, just send me an email at frank@helium.com, or write to sales@helium.com, and we're happy to talk more about how they can onboard devices to the network. Or if you want a discount on the hotspot, contact me; I'm happy to share a promo code.

            [Erik]

            Okay, awesome. Well, I really enjoyed the conversation. Thanks for taking the time to brief us on what you're building here. I think it's incredible; we really do need new solutions to make connectivity more affordable and simpler. So I wish you all the success, and I really appreciate your time today.

            [Frank]

            Thank you, Erik. Appreciate it.

            [Outro]

            Thanks for tuning in to another edition of the industrial IOT spotlight. Don't forget to follow us on Twitter at IoTONEHQ and to check out our database of case studies on iotone.com/casestudies. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at team@iotone.com.

            EP055 - How to cut 20 minutes off an emergency 911 response with data - Michael Martin, CEO, RapidSOS
            Monday, Dec 02, 2019

            In this episode of the IIoT Spotlight Podcast, we discuss the challenges facing 911 IT systems, the impact of data on emergency outcomes, and the role of IoT devices in enabling first responders and predictive alerts.

             

            How much data does a 911 center have?

            How do we know which data is useful to a 911 first responder for deciding actions to take?

            How can software and device companies use their data to increase the safety of their products and services?

            EP 054 - How removing the battery from sensors can enable a trillion sensor world - Bob Nunn, CEO, Everactive
            Wednesday, Nov 06, 2019

            In this episode of the IIoT Spotlight Podcast, we discuss the battery problems in IoT that hinder deployments and value creation today, and how innovations in ultra low power circuit design and wireless communications allow sensors to operate solely on energy harvested from the surrounding environment.

            • Are we trading one problem for another with IoT sensors?
            • How do we create a sensor the size of a stamp with larger range and compute ability than the average sensor today?
            • How accurate are consultancy forecasts about the size of the IoT market? 

            Bob Nunn is the CEO of Everactive. Everactive produces data intelligence for the physical world. Operating without batteries, the company’s always-on wireless sensor networks deliver continuous cloud-based analytics at a scale not possible with battery-powered devices.  Everactive’s end-to-end solutions are built upon groundbreaking advances in ultra-low-power circuit design and wireless communication that allow the company to power its Eversensors exclusively from harvested energy. For additional information, visit www.everactive.com.

            EP053 - The first iteration of cybernetic workers - Gabe Grifoni, CEO, Rufus Labs
            Monday, Oct 28, 2019

            In this episode of the IIoT Spotlight podcast, we discuss the evolution of warehousing for e-commerce, and modern solutions and flexible service-based business models to support the new business and competitive challenges in logistics.

             

            What is driving the development of industrial wearables?

            How do you sell Hardware-as-a-Service?

            What are the disruptive technologies to come in the IIoT?

            Gabe Grifoni is the CEO of Rufus Labs. Rufus Labs’ WorkHero is the most advanced connected operator platform for enterprise. Comprised of machine learning software, rugged wearable technology, and superhuman support, WorkHero is a complete Productivity as a Service solution for the evolving workplace (Industry 4.0). Compatible with existing WMS/ERP systems, it connects workers to each other and to automation, allowing for optimal coordination between humans & machines. WorkHero provides our logistics, e-commerce, and fulfillment customers superhuman efficiency gains, safety enhancements, and zero downtime, while providing management with robust, actionable, visibility & analytics into their teams and facilities.

            For info and demo: getrufus.com or enterprise@rufuslabs.com 

            EP052 - Empathy is the key to successful industrial design - Gordon Stannis, Director of Design and Strategy, Twisthink
            Tuesday, Sep 17, 2019

            In this episode of the IIoT Spotlight Podcast, we discuss human-centric design and change management in high uncertainty innovation processes. 

            • What is human-centric design?
            • What is empathy in an industrial environment?
            • How to manage and communicate through the process of trials and failure?

            Gordon Stannis is the Director of Design and Strategy at Twisthink, A business growth accelerator / innovation firm that partners with clients to imagine, develop and launch exciting new user experiences and products that “their” customers will love! For nearly 2 decades the Twisthink team has consistently unlocked game changing opportunities for clients in a broad range of markets including home, transportation, workplace, healthcare, sports and many things in between. Because at Twisthink, they’re not tethered to industry paradigms or mired by the day to day constraints of those industries. They flow between industries and can spot consumer, design and technology trends that affect them all. They’re entirely devoted to creating the ideal user experience which allows their customers to become market leaders and grow.

            twisthink.com/iotspotlight

            EP 051 – Hyper converged infrastructure for scale and simplicity in IIoT, and other success factors – Satyam Vaghani, VP of IoT, Nutanix
            Tuesday, Jul 16, 2019

            In this episode of the IIoT Spotlight Podcast, we discuss the adoption of hyper converged infrastructure to enable scale and simplicity in IIoT applications, how decisions migrate from IT to project managers and business decision makers, and the factors behind successful IIoT deployments.

            Why must the OT understand the IT?

            How to design an IoT application with strong value propositions for users?

            What are the technologies on the rise that will drive progress of solutions available today and future potential solutions?
