SUPPLIER MANAGED

FogHorn

Edge Intelligence for Industrial and Commercial IoT
United States
2014
Private
$10-100m
51 - 200

FogHorn is a leading developer of “edge intelligence” software for industrial and commercial IoT applications. FogHorn’s software platform brings the power of machine learning and advanced analytics to the on-premises edge environment, enabling a new class of applications for advanced monitoring and diagnostics, asset performance optimization, operational intelligence, and predictive maintenance use cases.

FogHorn is a provider of Industrial IoT analytics and modeling and Infrastructure as a Service (IaaS) technologies, and is also active in the automotive, buildings, cities and municipalities, healthcare and hospitals, mining, oil and gas, renewable energy, retail, transportation, and utilities industries.
Technologies
Analytics & Modeling
Edge Analytics
Machine Learning
Real Time Analytics
Infrastructure as a Service (IaaS)
Others
Use Cases
Edge Computing & Edge Intelligence
Machine Condition Monitoring
Functions
Discrete Manufacturing
Process Manufacturing
Industries
Automotive
Buildings
Cities & Municipalities
Healthcare & Hospitals
Mining
Oil & Gas
Renewable Energy
Retail
Transportation
Utilities
Services
Data Science Services
FogHorn’s Technology Stack maps FogHorn’s participation in the analytics and modeling and Infrastructure as a Service (IaaS) layers of the IoT technology stack.
  • Application Layer
    Functional Applications
  • Cloud Layer
    Platform as a Service
    Infrastructure as a Service
  • Edge Layer
    Automation & Control
    Processors & Edge Intelligence
    Actuators
    Sensors
  • Devices Layer
    Robots
    Drones
    Wearables
  • Supporting Technologies
    Analytics & Modeling
    Application Infrastructure & Middleware
    Cybersecurity & Privacy
    Networks & Connectivity
Number of Case Studies: 2
Pump Cavitation Detection
Cavitation is a condition that can occur in centrifugal pumps when there is a sudden reduction in fluid pressure. Lower pressure lowers the boiling point of the liquid, and if the liquid boils, vapor bubbles form. This is most likely to happen at the inlet of the pump, where pressure is typically lowest. As the vapor bubbles move toward the outlet of the pump, where pressure is higher, they rapidly collapse (return to a liquid state), producing shock waves that can damage pump components.
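Cavitation risk of this kind is commonly reasoned about in terms of Net Positive Suction Head (NPSH): if the head available at the inlet falls too close to the head the pump requires, vapor bubbles can form. The sketch below is a minimal, illustrative rule-of-thumb check, not FogHorn's actual detection logic; all function names, numbers, and the safety margin are assumptions.

```python
# Illustrative NPSH-based cavitation risk check (not FogHorn's detection logic).
G = 9.81  # gravitational acceleration, m/s^2

def npsh_available(inlet_pressure_pa, vapor_pressure_pa, fluid_density, inlet_velocity):
    """NPSH available at the pump inlet, in meters of head."""
    static_head = (inlet_pressure_pa - vapor_pressure_pa) / (fluid_density * G)
    velocity_head = inlet_velocity ** 2 / (2 * G)
    return static_head + velocity_head

def cavitation_risk(npsha, npshr, margin=0.5):
    """Flag risk when available head falls within `margin` meters of required head."""
    return npsha < npshr + margin

# Water at ~25 C: vapor pressure ~3,170 Pa, density ~997 kg/m^3
npsha = npsh_available(101_325, 3_170, 997.0, 2.0)
print(round(npsha, 2), cavitation_risk(npsha, npshr=9.0))
```

In practice an edge system would evaluate a check like this continuously against live inlet pressure and flow readings, rather than one static calculation.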
GE Detects Early Defects and Improves Capacitor Production
Hard-to-Detect Capacitor Failure Conditions Reducing Yield, Increasing Scrap
GE was facing multi-million-dollar scrap problems due to limited real-time insight into the production process. They believed they could significantly improve yield and reduce scrap in their manufacturing operation by analyzing the large volume of RFID sensor data produced by 30+ machines during the production cycle. This included correlating processing data in real time from several sources to create an edge intelligence layer with FogHorn for real-time condition monitoring throughout the production process. The goal was to identify defects early, quickly determine the root cause, and speed remediation actions to improve yield and reduce scrap costs.
Number of Podcasts: 4
EP015a: At the Edge of IoT Intelligence - An Interview With Foghorn's David King
Tuesday, Oct 24, 2017

Fog computing is the concept of bringing a layer of computing power closer to the edge of an enterprise's network; the term was coined by Cisco several years ago as part of their "Internet of Everything" initiative. While Cisco succeeded at the networking element of fog computing, the computing side of the equation was left unanswered. In the first episode of this three-part series, we chat with a company that was formed with the core objective of answering that question. We are pleased to welcome David King, CEO of FogHorn Systems, to the show, who tells us more about the company and how they are working to bring IoT intelligence closer to the edge.

EP015b: Cutting-edge Edge Technology - An Interview With Foghorn's David King
Thursday, Oct 26, 2017

There has been increasing interest in the IoT world recently in the edge domain of the IoT technology stack. In particular, the focus has been on moving more compute power to the edge in order to improve real-time IoT application response, with big firms such as Dell doubling down on the edge with their recent IQT initiative. In the second part of this episode, we are again joined by FogHorn's CEO David King, who does a deep dive into the technology driving the latest advancements in edge analytics.

EP015c: Navigating The IIoT With Edge Analytics - An Interview With Foghorn's David King
Friday, Oct 27, 2017

The Industrial IoT world is seeing soaring adoption of edge analytics solutions as a means of staying ahead of the IoT curve. In fact, a recent report by Transparency Market Research estimated that the global edge analytics market will be worth US$25.569 billion by 2025. It is vital that your business is aware of the benefits that edge analytics solutions can bring to your firm's IoT portfolio. In the third and final part of this episode, FogHorn's CEO David King runs us through a use case showing how their edge analytics technology is being implemented.

EP065 - How cloud-edge hybrid strategies drive IoT success - Sastry Malladi, CTO Co-Founder, Foghorn
Tuesday, Jun 30, 2020

In this episode, we discuss the business value of machine learning on the edge, and the increasing need for hybrid edge-cloud architectures. We also propose some technology trends that will increase usability and functionality of edge computing systems.

 

Sastry is the CTO and co-founder of FogHorn. He is responsible for and oversees all technology and product development. Sastry’s expertise includes developing, leading, and architecting highly scalable, distributed systems in the areas of big data, SOA, microservices architecture, application servers, Java/J2EE/web services middleware, and cloud computing.

FogHorn is a leading developer of edge intelligence software for industrial and commercial IoT application solutions. FogHorn’s software platform brings the power of advanced analytics and machine learning to the on-premises edge environment enabling a new class of applications for advanced monitoring and diagnostics, machine performance optimization, proactive maintenance and operational intelligence use cases. FogHorn’s technology is ideally suited for OEMs, systems integrators and end customers in manufacturing, power and water, oil and gas, renewable energy, mining, transportation, healthcare, retail, as well as smart grid, smart city, smart building and connected vehicle applications. info@foghorn.io

_______

Automated Transcript

[Intro]

Welcome to the Industrial IoT Spotlight, your number one spot for insight from Industrial IoT thought leaders who are transforming businesses today, with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. Our guest today is Sastry Malladi, CTO and co-founder of FogHorn. FogHorn delivers comprehensive data enrichment and real-time analytics on high volumes of data at the edge by optimizing for constrained compute footprints and limited connectivity. In this talk, we discussed the business value of machine learning on the edge and the need for hybrid edge-cloud architectures. We also explored technology trends that will increase the usability and functionality of edge computing systems. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@iotone.com.

[Erik]

Thank you Sastry. Thank you for joining us today.

[Sastry]

It's my pleasure.

[Erik]

Today we have a slightly technical topic: how cloud-dominated solutions will adopt a more edge-first or cloud-edge hybrid approach.

But before we get into the technical details, I want to understand more about where you're coming from and the background of FogHorn. Starting with your background: you're now the CTO of FogHorn, and I believe you joined about four and a half years ago. How did you come to end up at FogHorn? What was the path that led you to the company, and what was it about FogHorn that made you feel this was a company with high potential?

[Sastry]

Absolutely. Yes, I'm a co-founder and CTO of FogHorn, and I've been here about four and a half years. I'm an entrepreneur at heart with a technology background. I've worked in leadership as well as executive management roles for the past two decades or so, on and off, with big companies, startups, and self-funded companies. My background runs from hardware devices, operating systems, and applications networking, slowly working back up to application servers and big data.

As for how I got interested: our seed investor, The Hive, which is based here in Palo Alto, typically sets aside some seed funds and looks for founders to solve certain problems. One of the problem areas they wanted to solve was in the industrial IoT space, and they had been looking for founders who could come and help build the technology to solve specific problems; we'll get into the context of what we're actually solving in a second. So they started talking to me, for about six months or so, and our other co-founder, David King, came on board around the same time. That's how I ended up at FogHorn, and I never looked back; I've been enjoying it ever since. We're building a pretty cool company to solve a real problem for industrial IoT customers.

[Erik]

That's interesting. So it was actually a seed fund that had a problem they thought needed to be solved, and then they basically recruited you; they scouted founders they thought would be able to solve it.

[Sastry]

That's it exactly. And the way it works is that once they recruit us, they leave it to us. We actually go raise the funds from seed onward: we started with the seed round, and after the A and B rounds we just recently closed a Series C last November. Then we hire the rest of the team, build the product, take it to market, build the customer base, and so on. The seed fund company helps us bootstrap, and we take it from there; that's really how their model works.

[Erik]

Okay. And I'm sure they're happy with the results, as FogHorn has great traction right now.

[Sastry]

Yeah, absolutely. So far so good.

[Erik]

Did you know David before you co-founded the company with him?

[Sastry]

Actually, I did not. I first met and talked to David when our seed investor introduced us.

[Erik]

Tell us a little bit about what FogHorn is and what problems you solve, before we go into the technical details. What is the value proposition behind the company?

[Sastry]

So if we look at IoT, I know IoT is a buzzword a lot of people use. But if you look specifically at the industrial sector, whether you're talking about manufacturing, oil and gas, or transportation, across the board, problems do occur, right? There are yield improvement issues, scrap issues, or predictive maintenance issues. Up until now, what companies have always been doing is trying to somehow collect data about the assets they're trying to monitor, in order to optimize those assets and the business outcomes, and then ship all of it into a cloud environment, do some analysis, and try to find what the problem is, right? That has not been working well for them. It's not only not cost effective, it's also not practical to send all of that information into a cloud environment, process it there, and then send the results back to the asset.

And by the time the data makes all those hops, whatever issue they were trying to solve has already happened: maybe the machine is down, maybe the part they were manufacturing was bad. It was too late, and it wouldn't have helped them. So what we set out to solve is exactly that problem: how do we enable these customers to figure out problems in a proactive, predictive manner? Meaning that some time from now, the machine is going to fail, or the parts coming out are going to be defective, so we automatically raise an alert to the operators and they can fix it, with the simple goal of optimizing the business outcomes: reducing their scrap, improving their yields, as well as doing predictive maintenance and things like that. That's the fundamental premise: deriving actionable insights that help the business outcomes. That's really what we do. Obviously, there are lots of challenges in doing that, and that's where we had to go invent our technology, which did not exist before. There are constraints, because you're working in these constrained environments, and typical existing software does not run in those environments. So we had to come up with an innovative way to do this on the live data coming in from these assets. That's really how we got started.
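The kind of proactive, on-device detection Sastry describes can be sketched with a simple rolling statistic over live sensor readings, so an alert fires before any data leaves the site. This is an illustrative toy in plain Python, not FogHorn's engine; the window size, warm-up count, and z-score threshold are all assumptions.

```python
# Toy streaming anomaly detector of the kind an edge engine might run
# against live sensor data (illustrative only; not FogHorn's engine).
from collections import deque
import statistics

class EdgeAnomalyDetector:
    def __init__(self, window=50, threshold=3.0):
        self.readings = deque(maxlen=window)  # rolling window of recent values
        self.threshold = threshold            # z-score that triggers an alert

    def observe(self, value):
        """Return True if `value` deviates sharply from the recent window."""
        alert = False
        if len(self.readings) >= 10:  # wait for a minimal warm-up
            mean = statistics.fmean(self.readings)
            stdev = statistics.pstdev(self.readings)
            if stdev > 0 and abs(value - mean) / stdev > self.threshold:
                alert = True
        self.readings.append(value)
        return alert

detector = EdgeAnomalyDetector()
# Steady readings around 20, then one sharp spike at the end.
stream = [20.0 + 0.1 * (i % 5) for i in range(40)] + [35.0]
alerts = [v for v in stream if detector.observe(v)]
print(alerts)
```

A real deployment would of course use a trained model or CEP expressions rather than a bare z-score, but the shape is the same: evaluate each reading locally and raise the alert at the edge.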

[Erik]

Purely software, right? So for all of the hardware involved, you'd be working with partners.

[Sastry]

That is exactly right, just the software. But as you can imagine, and as you've probably seen, many of our investors are also hardware partners. Dell, for example, is an investor and a close partner; HPE is not an investor but a close partner. We've got Bosch, Advantech, and a number of other hardware partners that we work closely with and certify, but we don't actually sell any hardware ourselves. In fact, these hardware manufacturers have a catalog of SKUs where they preload and test our software, so if a customer wants to buy their hardware plus our software, we do have those kinds of bundled packages available, one way or another.

[Erik]

Okay, yeah, and actually that's a great business model, right? As a younger company, building up a sales force to reach a broad market is quite challenging, so having HPE, Dell, Bosch, and so forth bring your solution to market, I imagine that's the right way to enter on software. I guess we can divide the software into the architecture elements, how you capture, process, and manage data, and then the specific application. Every customer has their specific problem, and I suppose those applications are in some cases somewhat standardized, but in other cases there's going to be some customization around the requirements. Are you typically doing both the underlying architecture and the application, or would you in many cases provide the architecture but work with a third-party software provider that has an application for a specific problem?

[Sastry]

Well, that's actually a good question; let me take a couple of minutes to explain. Our fundamental business model is annual subscription software, right? We provide the software, a core engine that customers can install onto existing devices. Remember, our core IP here is the ability to run on constrained devices, whether you have an existing PLC, an existing small ruggedized gateway or Raspberry Pi class device, or in some cases the asset itself; we can fit our software into a small-footprint compute environment. Once you do that, you can configure the software with all of the local sensor information and program, using our tools, what it is you want to detect and what specific problem you want to solve. Now, what we learned early on, when we started shipping this product back in 2016, is that many of these customers, because of their industrial nature, are not necessarily technology focused, and therefore they would come and ask us: can you help us not only install, but also configure and program your tools, so we can detect the problem we're looking for?

And we started doing that. Obviously, as a young company, we had to get into these customer accounts. Within four months or so we realized that, since every single customer was asking for it, we had to find a real solution. We solved it in two ways. One, we started building an internal data science and technical services division to help with that: if you want to do a pilot and need a couple of our data scientists to help you use our tools and get set up, we can do that, and many customers continue to use it. Two, we established a broad partnership ecosystem across the globe, from the large Indian SIs like Wipro and TCS to regional SIs, as well as Accenture, Deloitte, and a number of others; those are all our partners.

They are familiar with our software; we have trained them, and they are running it in their labs. So when a customer comes to us, or to them frankly, looking for a solution that requires our type of technology, they can handle it as well. But we do have an internal data science division that also helps with a lot of the pilots. Now, one other point I'd make before I pass it back to you: over the last four or five years we have done a number of such pilots with many, many Fortune 500 customers, and we began to identify the repeatable, commonly used use cases across them. We started packaging them together, so that neither the customer, nor us, nor a third party needs to do any customization per se; they get a packaged, out-of-the-box solution, install it themselves, fine-tune it using our UI, and get it up and running for these commonly used use cases. That's where we're finding a lot more traction these days, in the last year or so.

[Erik]

Okay. These packaged solutions, you'd be developing those in house, potentially with a strategic partner. Is that the case?

[Sastry]

Not with a partner necessarily; the software is developed only by ourselves. We do have strategic partners, though. For example, say we're developing (we'll get into use cases in a second) a solution that requires a camera: we've got partnerships with camera vendors, starting with Bosch and a number of others. Say that solution requires two sensors and somebody wants to install the sensors too: we've got partnerships with sensor vendors as well. So it all depends, but our core packaged solution development, the software itself, is done in house, and the partnerships come in when it comes to bundling the hardware.

[Erik]

Okay, very clear. Let's move on to use cases. What would be, let's say, the top five use cases that would typically be relevant for FogHorn?

[Sastry]

Before I get to use cases, just one word, right? We are a generic platform engine. Edge ML and Edge AI are some of our trademarks: being able to run machine learning and AI in a constrained, small-footprint compute environment, and of course traditional analytics and CEP-based analytics too. That's our core engine, which we built from the ground up; we've got several patents on it, granted across the globe. Now, use cases. Initially we started out, obviously, in manufacturing; we did process manufacturing and discrete manufacturing use cases. Almost all of them can be categorized as either yield improvement or scrap reduction type use cases, meaning that the machine is manufacturing a particular part or product, that product comes out defective, and our software is trying to predict and detect ahead of time, before it produces defective parts, so they can fix it.

That's one type. The second is predictive analysis across many, many types of machines, whether CNC machines, pumps, or compressors; regardless of the type of asset, we do a predictive analysis, either for the parts that come out of it or, secondarily, for the process itself. If there is a problem with the process, maybe it's not feeding the parts properly, maybe the inputs themselves are wrong, maybe the temperature control is wrong; whatever the process issues are, we can detect those as well. Those are a couple of the types of use cases on the manufacturing side; I'll give you specific examples when we get to case studies. Switching to a different vertical, something like oil and gas, the types of problems are different, from upstream to midstream and downstream. In these cases, when you're drilling for oil, there can be a number of issues.

For example, there is blowout prevention optimization that may need to be done, or contamination of fluids while drilling that forces them to stop. Or there is flaring: as you're refining the gas you just drilled, due to compressor problems or other issues there can be a phenomenon called flaring, where you start releasing gases into the atmosphere, causing emission problems, EPA regulation violations, penalties, and so on. We can predict and proactively prevent that, and monitor whether it is exceeding certain thresholds and take care of that too. And then there are other problems while drilling: a problem with the drill bit, an issue with the steam traps, an issue with something else; there are a number of use cases, gas leak detection, things like that. The types of use cases vary vastly within that sector as well.

But again, you might be wondering how we can take care of such a vast variety of use cases. Fundamentally, the way to solve a use case is to install our software, configure it to auto-detect the sensors, and use our tools to specifically program what you're trying to detect, unless there's a packaged solution available from us. Moving on to transportation, another vertical we're working on: the types of use cases there are, again, efficiency and optimization of assets. We started with locomotives initially, with GE, which was also an early investor in us. They had locomotives, and by installing our software within the locomotive itself we were able to optimize and predict fuel efficiency conditions, detect anomalies, the wear and tear of the equipment itself, things like that. Then we moved on to fleet management, especially for trucks, monitoring driver behavior and road conditions, all the way to autonomous driving vehicles.

Now we're working with companies to install our software inside their vehicles. This is publicly announced: Porsche was one of our customers, and we have done a number of use cases there; we can talk about those as well. So as you can imagine, the types of use cases vary from predictive maintenance and proactive failure condition detection to condition monitoring of assets across these different sectors. And lately, one more thing before I pass it back to you: we have also been getting into energy management use cases, especially in buildings, smart buildings, whether office buildings, school buildings, or hospitality buildings. We're also partners with Honeywell. We now have a solution to optimize energy consumption in those buildings by simply running our software, connecting it to the sensors, and programming it to detect those conditions. So it's a whole gamut of use cases that we've been after.

[Erik]

A wide variety. So I imagine that even though you're bringing your own integrated, more standardized solutions to market, in many cases some degree of customization is required. Can you give me a rough estimate: what proportion of your customers are able to do this in house, and what proportion require some external support, whether from you or from a third-party system integrator?

[Sastry]

Yeah. Up until last year, the majority of our customers were using some help, in one shape or form, either from us, to customize or build a solution for them, or from an SI. But in the last 12 months that picture has been changing quite significantly and rapidly. Now that we've started rolling out these packaged solutions, the customers buying them no longer have to depend on any services support from us. Part of the packaged solution is a UI which helps them. I'll give you a simple example. Let's say you're talking about flare monitoring. Flare monitoring is a vision-based ML/AI system: we take the video camera feed, the compressor sensors, the valve positioning, all of the different sensors.

Then, in the packaged solution, we have a machine learning model that processes the images to identify certain KPIs. Now, obviously, when you take that solution and install it in a different customer environment, maybe their flare looks slightly different, maybe the conditions are different, maybe the camera positioning is different, maybe the resolution is different; something in the environment is definitely different. Therefore, for the solution to accurately produce those KPIs, it has to be fine-tuned to that specific environment. To do that, rather than hiring us as a services organization or going to an SI for help, we've built a UI where they can come in, upload their video, upload their parameters, and be walked through how to fine-tune it themselves. So in other words, in the last 12 months the number of customers depending on our services has been shrinking, whereas in the first few years it was the vast majority of them, although quite a few were also doing it on their own.

[Erik]

Okay, great. Yeah, that's a very positive trend we've been seeing: companies making the interfaces much more, let's say, comfortable for a non-technical user to modify.

[Sastry]

That's exactly right. If I may add something I neglected to mention before: one of the core strengths of our offering is that we are OT centric, operational technology centric. As I mentioned early on, a lot of our customers are not really highly technology savvy. They're all engineers, but engineers from a manufacturing or mechanical standpoint; they're not necessarily computer science engineers. So if you go and ask them to do anything complicated in terms of programming, it's going to be really hard for them. So from the get-go we started building what we call OT-centric tools: drag-and-drop tools where they can drag and drop a sensor definition, identify and express what it is they want to derive, and then we take care of the coding behind the scenes. We definitely take pride in that; it's really, really important to put out OT-centric tools, as opposed to IT-centric tools, in order to be successful in this market. Yes.
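The idea of an OT engineer expressing a condition declaratively while "the coding happens behind the scenes" can be sketched as a tiny rule compiler: a plain data spec (the kind a drag-and-drop UI might emit) is turned into an executable check. The spec format, field names, and alert text below are invented for illustration and are not FogHorn's actual tooling.

```python
# Illustrative rule compiler: a declarative spec becomes a checking function
# (invented format; not FogHorn's tooling).
import operator

OPS = {">": operator.gt, "<": operator.lt, ">=": operator.ge, "<=": operator.le}

def compile_rule(spec):
    """Turn a {'sensor', 'op', 'limit', 'alert'} dict into a checking function."""
    op = OPS[spec["op"]]
    def check(readings):
        value = readings[spec["sensor"]]
        return spec["alert"] if op(value, spec["limit"]) else None
    return check

# A hypothetical rule an OT engineer might assemble in a visual tool.
rule = compile_rule({"sensor": "bearing_temp_c", "op": ">", "limit": 85.0,
                     "alert": "Bearing over-temperature"})
print(rule({"bearing_temp_c": 91.3}))
print(rule({"bearing_temp_c": 72.0}))
```

The point of the pattern is that the engineer only ever touches the spec dict, while the generated `check` function is what runs against each batch of live readings.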

[Erik]

That's a great point. I was just going to ask you: who is the buyer or the system owner? I suppose ten years ago it would have been IT, but it sounds like that's not the case here. Is it the engineering team you'd be working with? Who would be a typical buyer, and then the system owner, once you're in?

[Sastry]

Yeah, that's actually another great question. I would split it into two parts. The user of our software is the actual operator, the engineer in the plant environment, refinery environment, vehicle, or whatnot; those are the users of the system. But obviously the person who has the budget and actually buys it is somebody else, their CIO, CTO, whatever the role is. Now, unlike a typical IT sale, where you go and convince the budget owner, the CIO or the CTO, of the budget, sell it, and then everybody starts using it, it's not that simple in these environments, because the person who owns the budget, who has the money to buy it, is not the same person or team that's actually going to use it. So we have to bring both to the table, to make sure we convince the operator, convince the engineer, that this actually solves the problem for them.

And of course, if you don't have the budget, it doesn't help, even if the engineer or the operator thinks it's going to be useful. So it's a three-way conversation. You've got to have the budget first, to make sure somebody has the money to pay for it. Second, you have to have an identified, agreed-upon business problem that you want to solve; it can't be a science experiment where somebody wakes up one day and says, oh, let's just try something new, right? And then the operator needs to feel that the solution we're offering actually solves that problem. That's how it starts.

[Erik]

Okay, yeah. And I suppose because you have these two different stakeholders, the user and the buyer being different entities, pilots are important to some degree. But it's a topic that's come up quite often lately: the challenge where a pilot is implemented, and then the solution at scale is fundamentally different from the problem solved in the specific pilot, so pilots in many cases don't scale well. How do you address the issue of having to do a pilot to demonstrate value to both stakeholders, while ensuring that the pilot will actually scale and provide the same required value across the entity?

[Sastry]

As you guessed, it almost always starts with a pilot, because they want to make sure we're actually able to run the software in their environment, that we can connect to their equipment and sensors, and that we can in fact show we're able to predict the failure conditions they want. Typically the pilot runs anywhere from two to six months, depending on the customer. But before we get into the pilot, we always have contract negotiations to establish: if the pilot is successful, what's next, right? Because we have been doing this for, I would say, four or five years now, and some deployments are running in production at large scale, we have enhanced all of our tooling to recognize that it's not just one device,

It's not just one machine that you're connecting to. Of course, they're going to do that in the pilot, but beyond the pilot, you've got multiple locations, multiple sites, multiple machines. How do you scale that up? How do you take the same solution that you already built and, with one click, deploy it across many sites, and then still be able to further localize and customize it to each specific environment? So we built a tool called FogHorn Manager that helps with large-scale deployments and local customization, and also things like auto-discovery. Many times, when scaling up, manually configuring a system to list all of the sensors is going to be practically impossible, not to mention error-prone. So we've got tools built in to automatically discover what sensors actually exist and are available.

We present that to the user and allow them to customize the solution. Then, once you've customized and localized it, with one single click you can select a number of these devices at once and push the same thing to all of them. And it's not a one-time thing: from time to time we might release patches, bug fixes, maybe updates, and the same mechanism handles those as well. We use container technology to ship this to a number of sites automatically, without physically shipping bits around. So we actually considered all of that, the management, monitoring, and configuration tooling for scaling up, from the get-go. Luckily we have had big partners, many of them investors early on, who helped us test this scaling aspect in their environments, and that's how we beefed it up. Of course, we continue to learn from each customer deployment and see if there are other things we can improve on; it's an ongoing process.
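The scale-up pattern Sastry describes, auto-discover sensors instead of hand-listing them, let the user localize a shared template, then push one solution to many sites in a single action, can be sketched roughly as follows. The function names and the in-memory registry are illustrative assumptions, not the actual FogHorn Manager API.

```python
# Hypothetical sketch: discover per-site sensors, localize a shared solution
# template, and "one-click" deploy it to every site with optional overrides.

BASE_TEMPLATE = {"solution": "pump-cavitation", "version": "1.4.2", "sensors": []}

def discover_sensors(site_registry, site):
    """Return the sensors a site actually exposes (simulated auto-discovery)."""
    return sorted(site_registry[site])

def localize(template, site, sensors, overrides=None):
    """Bind the shared template to one site's discovered sensors."""
    cfg = dict(template, site=site, sensors=sensors)
    cfg.update(overrides or {})          # per-site customization
    return cfg

def deploy_everywhere(site_registry, template, overrides_by_site=None):
    """The 'one click': localize and push the same solution to every site."""
    overrides_by_site = overrides_by_site or {}
    return {
        site: localize(template, site, discover_sensors(site_registry, site),
                       overrides_by_site.get(site))
        for site in site_registry
    }

registry = {
    "plant-a": {"vibration-1", "temp-1"},
    "plant-b": {"vibration-1", "temp-1", "pressure-3"},
}
deployed = deploy_everywhere(registry, BASE_TEMPLATE,
                             {"plant-b": {"sample_rate_hz": 500}})
```

The key design point mirrors what's said above: the solution is authored once, and discovery plus per-site overrides handle the localization, so adding a site is configuration rather than engineering.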

[Erik]

Okay, great. Very interesting. So let's turn to the technology, and in particular this discussion of when to use cloud, edge, or a hybrid system. I guess we have four terminologies that we should define upfront for listeners who are not so familiar here: the cloud, the edge, the fog, and hybrid systems. Could you, in your own words, define those four alternative architectures?

[Sastry]

Yeah, absolutely. So let's talk about the cloud, which I'm sure all of us are familiar with. By the way, early on, 15 or 20 years ago, I had my own self-funded startup in this area. At that time, people used to call it different things: I called it grid computing, some other folks called it utility computing, and ultimately the word cloud stuck. In any case, it's a hosted, centralized, data-center-like environment where all of the data processing and all of the computation happens in a central location. That's the cloud. There are major providers, whether it's Microsoft, AWS, Google, and whatnot. That's fairly clear, so I won't spend a lot of time on it. Now, before I go to the edge, let me talk about fog for a second, because in fact we named our company FogHorn in that context early on, what, eight years ago or so.

Right. In fact, Cisco initially came up with the term fog computing, although they didn't quite execute on it. The concept behind it is that at the edge of the network you've got assets: manufacturing machines, oil refineries, windmills, buildings, whatever these things are. They sit at the edge of the network, and any computation that happens closer to them, sometimes on the assets themselves, is what initially started off as fog computing. That's how we came to name our company FogHorn. You might have noticed that we no longer use the term fog in any of our reference materials, and there is a reason behind it. What happened in the last six years or so is that a lot of folks, as well as standards organizations like the OpenFog Consortium, which we were part of, started to dilute the definition of what fog computing is. People started talking about, oh, fog is anywhere between edge and cloud.

It's the continuum, it's this, it's that, and the definition got diluted. So we actually stopped using the term fog; in fact, not many people even use it anymore. Edge is the definition that is sticking right now. It simply means the edge of the network, closer to the asset or on the asset itself, where you start doing some computation to identify or predict whatever you're trying to do. Now, there is a slight variation of edge, especially with 5G and the mobile network operators coming in, called MEC. Previously it stood for mobile edge computing; now it's multi-access edge computing, or vice versa. Their definition of edge, which is the other definition that sticks today, is that rather than considering the edge to be the asset or the edge of the network, they consider the edge to be the base station: the cell tower base station, where the data from the assets flows in, and that's their edge computing.

It's not all the way to the cloud, but it is somewhere in between. That's their definition. So we've talked about cloud, we've talked about fog, we've talked about two different flavors of edge, and then there are hybrid systems. What is really practical today is actually a hybrid system. Almost every single customer that we have deployed to, anyone that you talk to, always uses a hybrid system, because edge is good for what it is good for, and cloud is good for what it is good for. When you have historical data, petabyte-scale storage, aggregation across multiple sites, company-wide visibility, and things like that, those services are typically hosted in a cloud. So cloud still has a role it needs to play. Edge, on the other hand: people have real problems. They've got plants, they've got factories, and they need to solve those problems right then and there, as they're happening. They can't wait for that information to be shipped to the cloud.

And then somebody tells them afterwards, here's the problem, go fix it? It's probably too late. So most customers deploy our edge software to find the problems in real time, derive the insights, take care of business, and then use the cloud to transport those insights from each of these different locations into central cloud storage, into a cloud service. That's where the aggregation happens, and that's where any fine-tuning of the machine learning models, or the building of the models, happens. That's where the cloud gets used, not to mention central dashboarding, company-wide visibility, and things like that. So most definitely, hybrid systems are where every customer is choosing to place their bets these days.
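The hybrid split above, act on every raw reading locally and forward only derived insights to the cloud, can be sketched in a few lines. The threshold rule and payload shape are illustrative assumptions, not FogHorn's actual insight format.

```python
# Minimal sketch of the edge/cloud split: the edge evaluates every raw sample
# locally and emits an "insight" only when something noteworthy happens, so
# only a tiny fraction of the data ever crosses the network.

def edge_process(readings, limit=80.0):
    """Act locally on every reading; emit an insight only on a breach."""
    insights = []
    for t, value in enumerate(readings):
        if value > limit:                       # real-time local decision
            insights.append({"t": t, "value": value, "event": "over-limit"})
    return insights

raw = [72.0, 75.5, 81.2, 79.9, 84.0]            # raw samples stay at the edge
to_cloud = edge_process(raw)                    # only these are forwarded
```

Here five raw samples reduce to two forwarded insights; at industrial data rates that reduction is what makes the cloud side of the hybrid affordable.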

[Erik]

The edge, I suppose, as you said, can be a number of different things. It could be a base station; it can be, I think in many cases, a gateway, which is quite common; it could even be a sensor or something with very low compute power. But because you're dealing with ML and things that require somewhat heavier compute: is it typically going to be a gateway that you're deployed on, or is there a wider range of hardware where your compute could be located?

[Sastry]

It is a wider range. Remember, I mentioned that a core part of our technology is this notion that we can run in constrained environments. We have a technology we call edgification. What it means is that analytics and machine learning models built for a cloud environment almost always assume an infinite amount of compute, storage, and memory is available. That's not the case on these constrained devices. So we have come up with a number of techniques to edgify those analytics and machine learning models to run in a constrained environment: quantization, binarization, converting Python code into our CEP engine, a number of techniques, software-based as well as hardware-based acceleration, things like that. Having said that, if you're not doing vision-based deep learning, machine learning models can normally run out there in about 100 to 150 megabytes of memory on a dual-core CPU.

That's typically what you would find in a PLC, or a ruggedized Raspberry Pi half the size of a rack unit; you can run a lot of the analytics there. But the moment you connect video cameras, or audio, acoustic, or vibration sensors, where you're combining them all together and doing deep learning, that's where we need a little bit more memory. That's where devices like the gateways come in, whether they are Dell IoT gateways, HPE gateways, Samsung ruggedized ARM-based devices, ruggedized Raspberry Pis, things like that. So it depends on the use case. Deep learning requires a little bit more power, we're talking about maybe a few gigabytes of memory, but for traditional, typical analytics, you can actually fit it into a PLC or a very small gateway device.
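Quantization, one of the edgification techniques Sastry names, can be illustrated with a toy, pure-Python sketch of 8-bit affine quantization: store each weight in one byte plus a shared scale factor, roughly a 4x size reduction at a bounded accuracy cost. FogHorn's actual pipeline is proprietary and certainly more involved.

```python
# Toy int8 quantization: float weights -> one byte each plus a scale factor.

def quantize_int8(weights):
    """Map float weights into the int8 range [-127, 127] with a shared scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate reconstruction; per-weight error is at most scale / 2."""
    return [v * scale for v in q]

w = [0.52, -1.27, 0.03, 0.98]    # pretend these are model weights
q, s = quantize_int8(w)          # q fits in one byte per weight
w_hat = dequantize(q, s)         # close to w, within s / 2 per entry
```

Real deployments quantize per-layer or per-channel and often retrain to recover accuracy, but the memory arithmetic is the same: 32-bit floats become 8-bit integers, which is what lets a model fit in the 100 to 150 MB budget mentioned above.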

[Erik]

We've covered the architecture. You've already alluded to this, but I just want to make sure it's very clear for everybody listening: what would be the decision criteria? We're talking about latency, bandwidth, certainly cost of hardware, cost of data transfer, and so forth. How would you break down the decision when you're determining what type of architecture fits a particular use case?

[Sastry]

I think you started out listing some of those, and those are the right criteria. First of all, we always start with: what's the business problem, and what's the business impact? Because if it's a simple science experiment, it's not really good for either of us. For example, a customer comes and says, here's my problem: every single day I've got X number of parts coming out defective from this machine, and that's costing me Y amount of impact to the business. We've got to solve that. It always starts with a business problem. Now, how do we solve it? Of course, the immediate, obvious thing might be: let's connect all the sensors, send the data to the cloud environment, process it, analyze it, and see what it is.

That's all good if it's a one-time thing, meaning the problem happens only once, you go fix it, and you never have the problem again. Unfortunately, that's not the case; it's a continuous problem. What happens is, even when you have a solution, there is sometimes drift: environmental changes, calibration issues. As the data changes, the same solution may no longer work, so it has to be continuously kept up to date. And the problem is also not solved if you tell the operator after the fact that something happened yesterday, or even one hour ago. What's the point? It's already happened. How do you predict it and tell them ahead of time, just in time, so they've got a chance to go fix and rectify what's happening? This is where they decide, well, edge obviously makes more sense.

From there, the next questions come: what kind of data is actually available? What kind of hardware do you need? Is the existing hardware enough? Do we need to go acquire a new gateway? Does the connectivity exist or not? Those kinds of questions come up. But ultimately, what's the cost of the edge deployment? And of course it's the total cost, right? Not just the software: the hardware, any networking, any sensors that remain to be installed. What's the total cost of that in comparison to how much of the business problem it's actually going to solve? Meaning, if they've got a million dollars of scrap happening every month, for example, and your solution is going to cost them a million dollars, you're not solving anything either. So they're always going to weigh those two: the cost of owning the solution versus the actual business impact.

And then, it's not so much about cloud versus edge. As I alluded to, it's always a hybrid. There is always going to be some cloud part, because most customers do not have just one site; they have multiple sites, and therefore you've got to send insights from each of these sites into a central location. Plus, you will need to fine-tune the models based on the calibration issues that occur out there. So the hybrid part is always there. But whether or not it makes sense for them to deploy an edge-based solution is directly related to the extent of the business problem they're trying to solve.
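The weighing Sastry describes, total cost of ownership against the monthly business impact, is back-of-the-envelope arithmetic. A sketch, with all figures made up for illustration:

```python
# Payback-period sketch: upfront cost (software + hardware + networking +
# sensors) recovered by monthly losses prevented, net of operating costs.

def payback_months(monthly_loss_prevented, software, hardware=0.0,
                   networking=0.0, sensors=0.0, monthly_opex=0.0):
    """Months until the deployment pays for itself, or None if it never does."""
    upfront = software + hardware + networking + sensors
    net_monthly = monthly_loss_prevented - monthly_opex
    if net_monthly <= 0:
        return None                      # costs as much as it saves: no deal
    return upfront / net_monthly

# $1M/month of scrap prevented against a $1M total solution: ~1 month payback.
fast = payback_months(1_000_000, software=800_000, hardware=200_000)
# A solution whose running cost equals what it saves never pays back.
never = payback_months(50_000, software=100_000, monthly_opex=50_000)
```

This is exactly the million-dollars-of-scrap example from the transcript: the decision collapses to whether net monthly impact comfortably exceeds the total cost of owning the system.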

[Erik]

In many cases, people already have cloud environments up, right? So the one-time cost there is not going to be a factor; it's just the cost of transferring and storing data and so forth, which in some cases could be high and in other cases insignificant. But then there's probably going to be more of a one-time cost when it comes to deploying at the edge, and then maybe lower operating costs. So there's a bit of a trade-off there. But it sounds like for you, it's more about looking at it from a business perspective, seeing where the value is, and letting that drive the decision.

[Sastry]

Yeah. Ultimately, the business value is going to drive the decision, but some of the points you're making are still valid. So leave aside the initial cost of setting up a cloud environment. When you're transporting all of the raw data into the cloud, assuming latency is not a problem for the customer, and that's a big if, then in any typical use case you've got megabytes to petabytes of data coming in every single day. Transporting all of it into the cloud has transport costs, and then there are storage costs. At the outset it seems like, oh, it's actually really cheap to store, every gigabyte only costs a few cents and all of that. But what happens over time, over the months? It actually adds up very significantly.

And then, more importantly, many times that raw data is actually not very useful after a few hours. For example, if the sensor keeps telling you the temperature measured at the machine is 76 degrees, that data by itself is not very useful afterwards. What you need to find out, in real time, is what actually happened. But you're absolutely right, there are trade-offs. What is the cost of storing this information? Can we afford the latency of transporting all of it? And is it solving enough of a business problem in real time to justify it? A number of factors come in, but ultimately it goes back to the CFO. Any company is going to look at it and ask: has it contributed to my bottom line, in the sense that it has helped the bottom line, has it prevented the losses? That's really how they're going to look at it.
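The "cheap per gigabyte, expensive over months" point is easy to quantify: if raw data keeps arriving and is retained, you pay storage on the cumulative total every month. The rate and volume below are illustrative, not any provider's actual pricing.

```python
# Cumulative storage cost when each month's raw data is retained forever:
# month N is billed on N months' worth of accumulated data.

def cumulative_storage_cost(gb_per_month, price_per_gb_month, months):
    """Total storage spend over `months` with an ever-growing backlog."""
    total, stored = 0.0, 0.0
    for _ in range(months):
        stored += gb_per_month              # backlog keeps growing
        total += stored * price_per_gb_month
    return total

# 10 TB/month at $0.02 per GB-month: month 1 is only $200, but it compounds.
year_one = cumulative_storage_cost(10_000, 0.02, 12)    # about $15,600
year_three = cumulative_storage_cost(10_000, 0.02, 36)  # about $133,200
```

The quadratic growth (triangular-number sum) is the whole argument: three years costs roughly 8.5x one year, not 3x, which is why shipping only edge-derived insights instead of raw data changes the economics.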

[Erik]

I want to get to a case study, but just one more question on the technology side before we move there. We have a number of trends right now which could be pushing in different directions. We have 5G, which I think some of the cloud architecture providers are hoping will make cloud somewhat more real-time, potentially even lower cost. And then we have improvements in hardware that make edge computing much more powerful. So what are the technology trends you're paying closest attention to that will impact how this architecture is structured in the coming years?

[Sastry]

So we are working very closely with all of the major telcos, not only here in the US but across the globe. Mobile edge computing, or MEC as they call it, is real right now. In scenarios where, let's say, you take a manufacturing plant that has existing connectivity hardware, industrial Ethernet installed, and everything else, most of the time they're going to run this edge analysis locally. But with 5G coming up, and the notion of MEC coming up, what is happening is that people are going to these plants and manufacturing customers and saying: you don't really need to install industrial Ethernet or deal with any of these connectivity issues; simply use 5G and send all that information to the mobile base station. That's where the compute happens.

And these telcos are actually leveraging us: we're simply running our software in the base station as opposed to running it in the plant. So the technology trend that's shaping up right now, and I think it's going to evolve quite a bit in the coming year or two, and one we're paying very close attention to and already working with telcos on, is shifting edge computing, in cases where there is no infrastructure installed, into these MEC base stations. People will have a choice to make: whether they want to install the infrastructure within their plant, or use 5G and transport the data into the MEC. And because of 5G's very low latency, they will not see a difference. It's a matter of choice whether they install locally or have the compute done in the base stations, as opposed to going all the way to the cloud.

[Erik]

Okay, great. Let's go into one or two of the case studies then. Do you have one in mind that you'd be able to walk us through, ideally from an end-to-end perspective, from the initial communication with the customer through deployment?

[Sastry]

Yeah, absolutely. Let's talk about Stanley Black & Decker. You've heard of the company; it's one of the largest tool manufacturers. Pretty much any tool you can think of that we all use in the house, they make. We met them maybe two, two and a half years ago at a GE-sponsored event. They came by, they talked to us, and we showcased the manufacturing problems we'd been solving for GE and a number of others. Then they started brainstorming with us and said, look, we've got a problem too. They've got about 80-plus plants across the globe, as you can imagine, manufacturing all kinds of different things, from measuring tapes to tools to toolboxes to high-powered hammers, all kinds of tools.

When they went through the analysis, the McKinsey analysis, figuring out what exactly the business problem was and so on, it was clear to them that each and every one of these plants was generating a lot of scrap. They were only detecting defects at quality inspection time, which was too late, so they threw all of that away, and it was costing them millions of dollars, a pretty high value. So when they heard about us and came and talked to us about two and a half years ago, and we showcased what we had done for similar use cases and similar customers, they wanted to do some kind of pilot. Initially they wanted to start with one or two use cases to see whether this was actually real or not.

Obviously, they did not get into a contract right away or anything like that; first they wanted to understand how real the solution was, because their problem was huge across all of these plants. We have actually talked about this publicly; we've also done joint seminars and papers, so some of what I'm saying is publicly available. I'll talk about one specific use case, which is kind of interesting. One of the plants, in New Britain, Connecticut, manufactures, among other things, measuring tape, the kind we all use, the white tapes and the yellow tapes; the traditional ones are the yellow tapes that I'm sure you've seen. They make a lot of that tape every single day.

It's very high-speed, fast-moving factory machinery. As they're making this tape, sometimes defects come out of it: extra ink, extra paint, broken markings, or measurement markings that are not quite right. There are anywhere from 50 to 100 different types of defects that can occur, and it's almost impossible for anyone to notice them, because the tape is being manufactured and moving at a very high rate. At the end of the day it goes to quality inspection, where somebody physically, manually decides whether everything is okay. Occasionally they spot a bad product and throw that part away right there; if they don't spot it, it goes on to the distribution center the next day, where somebody else notices it, and then it's thrown away.

That's costing them millions of dollars, so they wanted to solve that problem. Prior to talking to us, they had talked to another company, National Instruments, for example, and put in a system called LabVIEW, which is basically a vision-based system. They installed a video camera pointing at this tape manufacturing process, at the machine, observing, for example, whether defects were occurring on the tape, and then displaying them on a dashboard. Of course, an operator has to watch the dashboard, which is also impractical; nobody is going to look at a screen all the time. But the system was supposed to flag any defect so the operator could go and take care of it. What actually ended up happening was that 90% of the time it was giving them false positives.

So effectively they were not detecting anything. If anything, it was causing more churn for the operators, and less productivity, because on a false positive they stopped the machine only to find that there was actually no defect, or that something else was the defect. So what we did: luckily, they had this existing video camera installed, and they had the machine connectivity set up and everything else, so that part was easy. We went and dropped our software onto the same system, the same compute gateway device they had, and connected our software to the same video camera they had already installed. It's a high-speed, high-resolution camera doing 60 frames per second; the tape is moving quite fast. Then, using our data science capability, we built a machine learning AI model on that live data to detect up to 100 different types of defects.

And then, and of course I'm simplifying this, they gave us several different constraints and conditions, such as: only when the defect happens a certain number of times within a certain distance, within some length of the tape, is it exactly the variation they're looking for. There are just so many nuances to it. So we take all of that into account, and within a few milliseconds, remember, this has to happen in milliseconds because anything beyond that is already too late, if what we detect exactly meets the criteria and conditions the operator is interested in, we send an automatic alert to the operator and display it on the dashboard as well, whether anybody is looking at it or not. The operator then goes and stops the machine, takes that particular process out, fixes the paint or print process, and restarts it. We've essentially eliminated their scrap in this process.

Once it's been running for several months, and once the operator is comfortable that there are no false positives, that it's detecting everything correctly, that this is what we want and it's actually saving us money, then, as part of our product, we have an SDK, a software development kit. If they want to automate this programmatically, to say, whenever you raise this particular defect, go ahead and actually stop the machine, without needing an operator involved, they're able to do that as well. They can write a simple program using our SDK to do it programmatically. But that level of maturity comes after deploying and running this for a certain period of time. So that's one big use case, end to end. And obviously, before getting into all of that, we had a contract saying that if this actually works, we are going to deploy a similar solution to all 80 plants, and so we had a value-based contract created for them. Since then, we've been working with many, many plants, some here in the US, some in Europe, some in Asia. So that's the use case.
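The alerting rule described above, trigger only when a defect class recurs a minimum number of times within a given stretch of tape, then alert or programmatically stop the line, is a sliding-window pattern. The sketch below is hypothetical: the rule shape and the stop callback illustrate the idea, not the actual FogHorn SDK.

```python
# Sliding-window defect rule: fire `on_trigger` when `min_hits` detections of
# one defect class occur within `window_m` meters of tape.
from collections import deque

def make_monitor(defect_class, min_hits, window_m, on_trigger):
    positions = deque()                       # tape positions of recent hits

    def observe(position_m, detected_class):
        if detected_class != defect_class:
            return False
        positions.append(position_m)
        # drop hits that fell outside the sliding window of tape length
        while positions and position_m - positions[0] > window_m:
            positions.popleft()
        if len(positions) >= min_hits:
            on_trigger(position_m)            # e.g. alert, or stop the line
            positions.clear()
            return True
        return False

    return observe

stops = []                                    # stand-in for a stop-machine call
observe = make_monitor("broken-marking", min_hits=3, window_m=2.0,
                       on_trigger=stops.append)
for pos, cls in [(0.1, "broken-marking"), (0.5, "extra-ink"),
                 (1.2, "broken-marking"), (1.9, "broken-marking")]:
    observe(pos, cls)
```

The isolated "extra-ink" hit is ignored, while the third "broken-marking" detection inside the 2-meter window fires the callback, mirroring the "certain number of times within a certain distance" condition and the later SDK-driven auto-stop.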

[Erik]

Okay, great. That sounds like a great use case, and a very common one. To what extent are you also able to run root cause analysis? Right now we're looking through a camera, so we're basically identifying the problem in real time and minimizing the amount of scrap by stopping the machine and addressing it. I suppose, hypothetically at least, you could also take data from equipment upstream and identify what the cause of the problem was. Is that something you've been able to do, or that you've looked into, in this use case or another?

[Sastry]

In this use case we did not, because they did not have sensors on the upstream equipment, but we certainly did root cause analysis in another use case. I'll tell you one: remember the flare monitoring example I was giving you, where you're processing and refining the gas? Occasionally, the pressure builds up in the compressor; that's a compressor problem, as a result of which it's not refining properly, and they have to release the gas. That's one problem. The second problem is that they take what they call the sour gas and try to sweeten it using a chemical process, and in that process a condition called foaming can sometimes occur, as a result of which a flare can occur too.

We initially built a solution for identifying whether a flare is happening or is going to happen, and whether the content of the flare, which is basically the composition of the gases, is beyond certain values. But then the customer obviously wanted to find out, and we all wanted to find out, what was actually causing it. Okay, it's great that you're tracking the KPIs for the flare and then we can stop it, but what is actually causing it? So we fed in information from the compressor, the compressor sound, and tried to correlate whether a bad compressor sound directly correlates with, for example, bad smoke coming out of the flare. Sure enough, we were able to find that. And of course it didn't stop there either. The next question is: why is the compressor bad? So we took all of the sensor data from the compressor to identify when the compressor would go bad, or when the foaming condition occurred. So we have done root cause analysis; as long as the data is available and the sensors are available, we can get to the root cause.
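The correlation step in that root-cause chain can be sketched in a toy form: compare a compressor-sound anomaly score against a flare smoke severity KPI and check whether they move together. The data below is synthetic; the real system works on live multi-sensor streams and considerably richer models.

```python
# Toy root-cause check: Pearson correlation between a compressor-sound anomaly
# score and flare smoke severity. A strongly positive r supports the "bad
# compressor sound precedes bad flare" hypothesis.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

compressor_anomaly = [0.1, 0.2, 0.8, 0.9, 0.3, 0.7]   # sound-model score
smoke_severity     = [0.0, 0.1, 0.7, 0.9, 0.2, 0.6]   # flare camera KPI
r = pearson(compressor_anomaly, smoke_severity)        # strongly positive
```

Correlation alone doesn't prove causation, which is why, as Sastry notes, the analysis then recurses: having linked compressor sound to flare smoke, the next step is correlating the compressor's own sensor data against its failures.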

[Erik]

So far, all of the use cases we've covered have been efficiency-related to some degree. This is, I think, an interesting topic in the IoT space: the value of efficiency-related use cases is quite clear, and building a business case is often fairly easy because you have a clear cost KPI attached. On the other side, you have revenue-related business cases. If I'm thinking about Stanley Black & Decker, or about an equipment OEM, you could also see them deploying some sort of edge computing solution on their equipment in order to provide new functionality, or a new distribution model, a new business model, to their customers. It sounds like this is not a focus of FogHorn today, but what is your perspective on the value of revenue-oriented edge computing deployments in the future? Is this something that you expect to invest in over the coming few years, or do you think it's a number of years away from being big?

[Sastry]

We're already doing that. I'll give you a couple of different examples. Even in the case of a Stanley Black & Decker scenario, here's one example, right on the machine itself: how do you know if the machine is being properly utilized? Are there revenue opportunities in producing additional products? That kind of supply chain management and identification. These are all closely related: the efficiency of the machine certainly creates opportunities for new revenue lines too, so in the case of SBD there is a direct correlation. But as a separate, distinct opportunity: we work closely with Honeywell. One of their divisions, called SPS, manufactures a device called Mobility Edge, which is a handheld, ruggedized device. A lot of operators in logistics and retail, companies like FedEx, UPS, and Walmart, all use them.

Now, on those devices, I'll tell you a couple of use cases. For example, these operators scan boxes with barcodes on them. Many times the barcodes are damaged: either they're not printed correctly, or they're torn, or there's not enough lighting, or something else. Then what happens? The operator uses the device to scan the barcode, it doesn't work, it does not record, and therefore the package gets left behind and takes its own course; there's a significant business impact from that. What we have done now is run our software on those devices too; we put our edge AI software on those Android-based devices. As the operator is scanning the barcode, the image is simultaneously sent to a FogHorn solution running behind the scenes, which reconstructs the image and sends it back to the user's application. The user is not even aware this is all happening.

It all happens successfully, and it's all again based on AI, so that's additional business impact right there. But the other thing is, now that it's running there, they can say: look, Mr. Customer, you've already got this device, and you've got FogHorn AI on it; I showed you one solution, now let's add additional solutions. Now they're trying to use the same device for health monitoring. With this whole COVID situation, and this is what Kara was referring to, we're about to announce a solution that we've built and that many customers have started using: a health monitoring solution that runs on the same device. All you need is the device, its compute power, and an attached camera, and we can do social distance monitoring, elevated temperature monitoring, cough detection, mask detection, things of that nature. Customers don't need to invest in anything other than buying this additional solution; they've got the device, and FogHorn is already running on it. So the opportunity for additional revenue channels is quite clear in those kinds of scenarios: once you have the platform up and running, you add more and more of these solutions to solve the customer's problems for additional business lines.

[Erik]

Okay, great. It sounds like you're already quite mature in this space. So if you were to divide the market, just in terms of potential, at a very high level between efficiency and revenue growth, how would you see it today, and how would you see it moving forward?

[Sastry]

It's hard to tell. Why? Because many times it's not entirely clear to the customers either, and they're not necessarily going to tell you. It's two sides of the same coin, right? You're reducing the scrap, and therefore you're improving productivity; you're producing more things, and therefore improving revenue. Now, do you call that a revenue improvement, or do you call it efficiency? Customers themselves sometimes aren't sure how to bucket it. That's why it's really hard to tell exactly whether they classify a solution as a revenue-increase type of solution or as yield improvement; they're two sides of the same coin.

[Erik]

So that's a super interesting conversation. I think we're coming up on the hour, and I want to be cognizant of your time, but is there anything else that you would like to cover today?

[Sastry]

No, I think we've covered a lot of topics. In summary, what I want to say about our core differentiation is that what we're seeing in the market is that edge has definitely taken off. Four years ago, if we talked about it, everybody was saying, oh, we just need cloud, we don't need edge. Now even the major cloud players, with whom, by the way, we have very close partnerships because we're complementary to what they offer, are all coming in and saying the same thing: oh, you actually need edge, cloud alone doesn't do it. So I think people have clearly recognized the need for edge computing, the role it plays, and where it plays well. And the fact that, in many cases, people need a hybrid system to complement what they've done in the cloud is happening.

On the AI part of it, analytics and machine learning, we have done quite a bit. The other thing I would say is that, having deployed this in many, many sites, the problem we're now trying to solve is this notion of an automated feedback loop. Once you deploy a solution, it's not going to keep producing the exact same accurate results all the time. How do you continuously update it, automatically, in the loop? This is what we've been calling closed-loop ML or AI. I know it looks like another buzzword, but it's not: how do you automatically update a model when there are changes happening in the system? So these are the kinds of things, and it's really promising. You mentioned 5G; I think that is really taking off too. This is a super interesting trend.
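The closed-loop ML idea described here can be sketched as a simple control loop: track a deployed model's recent accuracy and trigger an automatic refresh when it drifts below a threshold. All names, window sizes, and thresholds below are illustrative, not FogHorn's implementation.

```python
# Minimal sketch of closed-loop ML: a rolling accuracy window that
# triggers an automatic model refresh when performance drifts.

from collections import deque

class ClosedLoopModel:
    def __init__(self, window=100, min_accuracy=0.9):
        self.recent = deque(maxlen=window)   # rolling record of hits/misses
        self.min_accuracy = min_accuracy
        self.version = 1

    def record(self, prediction_was_correct):
        """Log one prediction outcome; retrain if the full window
        shows accuracy below the acceptable floor."""
        self.recent.append(prediction_was_correct)
        if (len(self.recent) == self.recent.maxlen
                and self._accuracy() < self.min_accuracy):
            self._retrain()

    def _accuracy(self):
        return sum(self.recent) / len(self.recent)

    def _retrain(self):
        # Stand-in for pushing buffered data to a training pipeline
        # and swapping in the updated model.
        self.version += 1
        self.recent.clear()

loop = ClosedLoopModel(window=10, min_accuracy=0.8)
for ok in [True] * 7 + [False] * 3:   # 70% accuracy over the window
    loop.record(ok)
print(loop.version)  # prints 2: the model was refreshed once
```

The design choice worth noting is that the loop clears its window after a refresh, so the new model gets a clean evaluation period rather than inheriting the old model's misses.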

[Erik]

We have a lot of different technologies, 5G, which we've already mentioned, but also ML architectures, that are really trending in your direction right now. So it's a very positive outlook for FogHorn, and a really interesting conversation today; I really appreciate your time. Last question from my side: if somebody was interested in learning more about FogHorn, what's the best way for them to get in touch with your team?

[Sastry]

If they send a message to info@foghorn.io, somebody will get in touch with them.

[Outro]

Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at

Number of Similar Suppliers4
C3 IoT
C3 IoT provides a full-stack IoT development platform (PaaS) that enables the rapid design, development, and deployment of even the largest-scale big data / IoT applications that leverage telemetry, elastic Cloud Computing, analytics, and Machine Learning to apply the power of predictive analytics to any business value chain. C3 IoT also provides a family of turn-key SaaS IoT applications including Predictive Maintenance, fraud detection, sensor network health, supply chain optimization, investment planning, and customer engagement. Customers can use pre-built C3 IoT applications, adapt those applications using the platform’s toolset, or build custom applications using C3 IoT’s Platform as a Service. Year founded: 2009
Altizon Systems
Altizon empowers Industrial Digital Revolutions globally by helping enterprises use Machine Data to drive business decisions. With a global footprint of over 100 enterprise users, Altizon is a leading Industrial IoT platform provider as recognized by Gartner, Forrester, BCG, Frost & Sullivan, and others.
Dataranya Solutions Pvt Ltd
An Industrial IoT startup providing customised solutions to solve clients' operations problems impacting their top- and bottom-lines.
Augury
Augury is on a mission to give our customers superior insights into the health and performance of the machines they use to make products, deliver services and improve lives. We are all surrounded by machines and rely on them in everything we do, from the buildings in which we live and work, the goods that we consume, to the power and running water that we use. There is significant cost and effort that goes into designing, manufacturing, installing and maintaining the machines that enable and support our daily lives. We combine Artificial Intelligence and the Internet of Things to make machines more reliable, reduce their environmental impact, and enhance human productivity.