Podcast EP 147 - How low-code platforms can transform your AI development process - Brian Sathianathan, co-founder and CTO, Iterate AI

Sep 28, 2022

In this episode, we interviewed Brian Sathianathan, co-founder and CTO of Iterate AI. Iterate AI is a low-code rapid prototyping platform, founded in 2013, that helps large enterprises accelerate innovation projects by developing and deploying digital solutions faster using its 465 pre-built software components.

In this talk, we discussed how a low-code platform simplifies tasks by using pre-built components from AI and IoT tech stacks. We also covered how low-code platforms can transform the go-to-market path for edge and cloud AI solutions.

Key Questions:

  • What is the difference between the traditional AI-development process and low-code platforms for enterprises?
  • What are the unique challenges of deploying a platform for edge AI versus cloud AI?
  • How can high volumes of data be efficiently tagged to train AI algorithms?

Erik: Brian, thank you so much for joining us on the podcast today.

Brian: Erik, thank you for having me. I'm super excited to be here today.

Erik: Yeah, and you're a fellow podcaster, so I think this is going to go really smoothly. Maybe before we kick off, just as a little bit of an intro, I'd love to hear a little bit more about the podcast that you're running.

Brian: Yeah, absolutely, Erik. Along with Jeff Roster, who's a retail veteran, I'm running a podcast called This Week in Innovation. It's www.thisweekininnovation.com. Essentially, we are focused on how innovation is happening, specifically in retail, but we also cover a broad range of themes. We look primarily at the five forces of innovation: AI, IoT, blockchain, data, and startups, and how all these things are coming together. Typically, we interview startup founders and lots and lots of retail and other executives. We wanted to give our listeners a bit of a view of how innovations are influencing retail.

Erik: Great. I'm going to make a note for myself to return to the topic of blockchain later. Because the other pillars of innovation that you mentioned there make a lot of sense to me. Blockchain is always a bit of a puzzle for me. On the one hand, it's very, very interesting theoretically. Then, at least in our space, in manufacturing, it's still struggling a little bit to find those applications. But that's true for a lot of early technologies. That happens. So, I'd love to hear your thoughts. But let's return to that later in the conversation.

Maybe you can also walk through a little bit of the background of the founding of Iterate.ai. So, this is a low-code AI application development platform set up in 2013, which actually feels a bit early to me. I've seen more similar platforms developed, I'd say, in the '16, '17, '18 timeframe. What was the initial impetus behind the setup of the company?

Brian: Interesting story. I think it's a great question, too, Erik. I'm a serial entrepreneur. I've also, in the past, done corporate venture capital, so I was sitting on a couple of boards. There was one board we were on that was actually in Ukraine. It was a startup accelerator board, where we looked at and worked with companies across Ukraine, Belarus, and various Eastern European countries. There are a lot of computer science PhDs who come there, and they build a lot of advanced technologies. So, me and Jon Nordmark, my current co-founder, we met there. That's the time when AWS was gaining traction among startups.

Building a startup was so cheap. When I did my first startup in 2007, it cost us seven figures just to buy servers, because infrastructure was expensive and compute was very expensive. But in 2011, '12, that cost was coming down. At one point, it even came down to the point where it almost cost $5,000 to do a startup. Somebody actually wrote a blog about it. We would talk to amazing entrepreneurs, deeply technical, highly skilled. Some of them even lived with their parents, but they actually had a company up and running on AWS. We were like, this great explosion is going to start happening. At that time, I was also a corporate VC at Turner. I had invested in a couple of companies at that point. My co-founder, Nordmark, was an angel investor, and he had invested in a couple of companies. We were like, "How do we get traction for all these companies in the corporate world?"

The more and more we thought about it, there wasn't a very creative, formal mechanism for leaders and executives in large companies to have a gateway point to work with startups. There were venture funds. There were a couple of accelerators starting at that point, but there wasn't a feed mechanism. So, we ended up building this gateway company, Iterate.ai. Initially, it was actually called Iterate Studio, a gateway accelerator, like a clearing house for startups. But very quickly in that journey, we realized we needed not only a clearing house mechanism but also a cataloging platform, where we could catalog the world's startups.

Today, we have the ability to catalog up to about 70 million startups and about 78 million patents through our platform, Signals. What we realized is that that's great, because it gives information to the leaders. When they are in a board meeting and somebody talks about some trend in AI, the C-level actually knows what he's talking about, right? It's education. But what we realized very quickly is that that is one part of the puzzle. The other part of the puzzle, which is even bigger, is that a lot of times, an innovation leader's tenure in a big organization is exactly two years. They become a VP of innovation, EVP of innovation, head of new products or whatever. Then within two years, they go get a job somewhere else.

One of the reasons is that, a lot of times, in big companies, it takes a lot to move the company. So, there have to be actionable capabilities, software and digital solutions that go out to the market. The ones who have successfully built an innovation practice within bigger companies are leaders who have actually brought things to market. They may not have changed the revenue from day one, but they have enough products and solutions they brought out to the market.

In that journey, we realized there are so many moving pieces. There are the startups. AI was definitely coming into the market, with a lot of deep learning companies coming in. It was the '17, '18 timeframe. Quite a lot of deep learning companies. There was AI, IoT, data, and startups. At that point, we had relationships with 80-odd startups. Why don't we take all these things and put them into these software blocks? Then we can actually drag and drop them. Low-code wasn't even there. I mean, there were low-code solutions; Microsoft and others had solutions. But they were essentially enterprise RPA solutions. They were not targeting the innovation market. So, why don't we bring these five forces of innovation and everything that can be there? Then create these lego blocks, and create a canvas and a runtime environment, or some sort of a middleware, where organizations can start building digital solutions, both consumer-facing as well as back-end-facing applications, a lot faster, right?

When we did that, our business boomed. The customers we started working with said, "We love it. Let us bring some products to the market." We ended up doing conversational commerce, an AI application, with one of our customers first. That blossomed into another AI-based application with another retailer. Then one thing led to another. Before we knew it, we had double-digit millions in revenue. We went from some 20 people during the pandemic to now about 70 people here in the Valley, in Denver, and, of course, all over the world.

Erik: Okay. Fascinating. I always love to hear the backstory for a company, because it's always so interesting. You look at the tagline on the website, and it feels very straightforward. This is our value proposition. This is what we do. But then you dig a little bit, and you figure out that, actually, there was a winding path to arrive there.

Brian: Yeah. I mean, it's interesting for every entrepreneur to say, "Oh, I had this great vision from day one." Honestly, I couldn't say that. I think we had an understanding of where things could lead. We pitched the low-code idea somewhere around 2014 to a few customer partners. I don't think it got traction. But then in 2017, '18, it got traction. When you have a startup, it's a bit like water. You go down the path of least resistance. Eventually, all the different water streams get together, and you have a big lake going on.

Erik: The logic of why this is useful for companies makes complete sense. I was chatting with the head of innovation for a large telco here in China recently. He said that when he was going through the recruiting process, HR asked him, "Hey, why do you keep moving jobs every two years? We're a little bit concerned about your stability." He said, "Hey, I'm not trying to move jobs. It's just that every two years in this field, some senior executive changes strategy and direction. The first thing to change is innovation, because we always have a pipeline of projects that are easy to cut, say, when we have to do some cost cutting. Because we don't have them out in the market yet; it takes too long to get things out into the market." Being able to reduce that timeframe from 36 months down to 12 months is huge. It allows you to actually show people success and then build on that. I think it's a big pain point for anybody who's working in corporate innovation.

I'd love to get your perspective on what this looks like in practice, this process of developing an AI application at a corporate. Let's say it's distinct from the way a very lean, tech-oriented startup might do it. Because you have this tagline on your website, which is, "We can help companies build AI applications 17x faster." So, if we break that down from ideation through deployment, what does the process look like? Where are the stages in that process with the most time, the most risk of a cost or deadline overrun, the most risk of failure because you don't have the data and didn't discover the correct data set in advance, et cetera? Can you walk us through what that traditional process would look like? Then we can get into where a low-code platform like Iterate.ai can shorten that process. But maybe first, just start with the traditional process that a large enterprise might follow.

Brian: Yeah, let me explain a little bit of the traditional process. The traditional process in the enterprise is, everything starts with business analysis and business need. A lot of times, as part of strategic planning, like in annual project planning, there are certain projects that are earmarked for these companies to do. If a given AI or ML project is already earmarked and is already recognized by the business as value creation for them, that's the first place to start, because it's actually easier; the business justification is already done. Sometimes, for innovation teams, it's a push and pull mechanism. The pull is the business: they need it, and of course, the innovation and IT teams figure it out. The push mechanism is when the IT and innovation teams actually say, "Based on what we see in the strategy across the sphere, these are exactly five things we could do."

Here is something that's interesting. They build a prototype. They go to business, and business swallows it. Then it becomes history. That's the starting process. So, if the product is already earmarked and there's budget for it, it's much, much better, because it goes a lot faster. The thing with large organizations is that it's about not only building the product but also getting the snowball around it. It's like the typical setup in many startups. When you build a product, you have these three roles. You have the hacker who builds it. You have the designer who designs it. Then you have the hustler who sells it. Same strategy here. You have the product development teams, then the design teams, and then the strategy teams in this case, because strategy is also getting involved. Then you actually have a hustler selling it: a sponsor who actually goes and sells it into other business units. That starting process is interesting.

But then what happens is, after that process, a lot of times they say, "Okay. Let's get going." Then there is a lot of time spent on analysis, because typically, they end up bringing in a third-party firm or somebody to actually go analyze all the problems. This analysis creates a lot of paralysis right there. Then they basically figure it out. Okay. If you just want to put out a mobile app or a simple solution, that is easy, because you can go to an agency and get it built. But that era is over. From 2007, when Apple released their product, and 2010, when mobile devices became popular, to somewhere in 2015, '16, you could just hire an agency to build an app.

But now everybody has apps. Consumers typically use 13 apps a day. Nowadays, when people are releasing a solution, they think deeply about what the value proposition is. They don't want to put out an app just to find where my store is. That's Google, right? Google Maps or whatever. We want to provide something that's far deeper. It's an engagement app. For example, in the convenience store space, it's an app where they can put their card in, drive the vehicle in, and automatically it'll open the pump. They can pump gas, just like the Uber experience. But if I were to do something like that, it gets very complicated, because it's not just the app that needs to be built. You need to actually have cameras in every one of these convenience stores and gas stations. Then you need to basically talk to all the gas pumps, which are mostly legacy systems. They're not all internet-connected. They're not even IoT. They actually have to put adapters on them to make them IoT, right? Then you have a lot of accounting and all the legacy systems in there.

When that complexity is unearthed, that's when, a lot of times, projects fail within big companies. You can just build a prototype or an MVP that fakes it. But when you talk about unearthing the complexity, it becomes harder. Once you unearth the complexity, that's where the forces of innovation come in. That's what you're dealing with. In this example I told you about, you drive your vehicle to a gas station, and the camera recognizes it. That is AI. That's where computer vision comes in. Then you're unlocking pumps; that is where you're dealing with IoT connectivity. Then you are working with companies that are providing various forms of resolution, data capability, or loyalty. That's where your startup force comes in.

When every one of the forces comes in, that's when innovation breaks down in a large company. Because suddenly the head of innovation says, "Oh, my God. This is not what we originally thought of as an MVP. This is far bigger." That's where I think some of our approach works very well. In traditional cases, that's where the breakdown in the process happens, once you start unearthing complexity. And what I just explained to you is only the technology complexity.

But on top of that, you also have a series of legacy and business complexities. The legacy complexities are the systems that are out there, enterprise systems. Typically, you have to have IT integrate them, because they are essentially owned, managed, and operated by IT. Then it depends on who is going to take responsibility for integrating them. So, that's one challenge that comes in. The second challenge is essentially the business complexity: have we looked at all the cases? Are all the cases satisfied? That's not so much of a low-code problem per se; that's more of a selling job and a phased approach. Because a lot of times, if you take all your business requirements and want to build something, you will never release a product to the industry. So, it's a little bit of test and learn, and also teaching your business team to take a phased approach. So that's the traditional method. I can explain next where low-code actually makes it even simpler.

Erik: Okay. Great. That feels very familiar. I like your framework. You have the technical complexity and the business complexity. Also, maybe, organizational complexity. I guess, on the technical side, there are some very clear ways where a low-code platform can simplify the completion of tasks. On the organizational and business side, there are probably also ways that you can modify those processes based on a different tool set. So, how do you look at the changes that low-code platforms enable?

Brian: On the technology side, it does a couple of things. The best analogy is building a house. In California, with all the city permissions and everything, it'll take you 19 months to build a home. I don't know how it works in China, but it takes forever, right? At the same time, you could actually get a modular home: it comes in a truck, and you put the pieces together really rapidly, which is actually very fast. The same philosophy applies to low-code. In low-code, a lot of components and building blocks, either in AI or in IoT, are already somewhat pre-built, and you can bring them and put them together.

So, that actually creates two or three benefits. One is, it actually creates upskilling. A lot of times, when you're dealing with advanced technologies like AI and so on, even in larger organizations, you might have a data science team or a computer vision team, but there might be only two or three people in it, and they might already be loaded with projects. But if you actually bring in components that an application engineer can configure, you don't necessarily need the AI people there. Then a lot more people become open to that project: they will build it, and the innovation team will take the responsibility of building it.

Then, for an application or web engineer, it upskills them. In the world today, across traditional organizations, basically companies whose core is not technology, there are almost 25 million web engineers or application engineers. If you look at retail, they have a lot of web engineers, a lot of application engineers. That's true in every industry, because the internet has been around for a long time, at least from a commercial use since about 2000, almost 25 years. So, there's a lot of that. But in the ML and AI space, you only have about 30,000 engineers across traditional companies. Even if you count the researchers and everybody in this space, you're looking at about 160,000 people. So, there's a big gap.

Because you have this code pre-built, it enables an application engineer to do the same things a data scientist can do. Now, you can't technically do everything; at some point, you might still need to know ML. But it's also an 80/20 problem. Most of the problems can be solved by connecting things together, assembling things together. So, that provides a can-do ability to a lot of companies, especially if you start abstracting these five forces of innovation: AI, IoT, blockchain, data, and startups. It creates that can-do attitude, which gives better confidence for the innovation team to say, "Hey, we're going to go build this quickly."

The other thing about organizations: I remember I used to be at Apple during the early iPhone days. I was there for eight years. One of the things that I learned there is, the leader who builds first wins. It's true in Silicon Valley, in every company. If you build something very quickly, then a lot of people see things working, because everybody else is actually pitching PowerPoints. So, if you're a leader who actually brings a solution, you are first. It may not be perfect, but it gives you that edge to build the next thing. The next thing will allow you to build the thing after that. So, that's essentially the empowerment we provide. Also, a lot of times in companies, the sponsors for this type of project are in marketing, and marketing people are, of course, visual people. If they see more things working, more things that are visual that they can see, you can actually convince a lot more people. Low-code enables that. First, it actually abstracts the five forces of innovation. Second, it enables even web and application engineers to build applications faster with even more advanced technology, sort of upskilling them. Third, it creates that iterative process and that can-do attitude. Then it enables a lot of the outsiders; it allows what I call the snowballing effect. You can snowball people.

Erik: Okay. That's a very well-structured thought process. We struggle with this a lot, obviously. I'm sitting here in Shanghai. You have that dynamic where China often contributes 10% to 20% of global revenue for a company. But then, you were talking about having small teams. Here, that often looks like a team of one. We have our data scientist. His name is Jang. He sits over there. This poor guy is trying to figure out how to service all of these requests. What that usually means is that you're funneling those back to headquarters and trying to get somebody in Germany or in the US to prioritize your application. They come back to you and say, "Oh, yeah. Okay. We'll work on that feature in Q2, 2023. Don't worry about it."

Brian: Yeah, I actually call it the DMV line. I mean, if you go to DMV, they'll tell you to take a ticket. You wait on the ticket. Now they'll say, now serving 3128. Then you wait until your name is called. That's done for a reason. Because companies have very limited resources, and they are really trying to prioritize. It's not negative. But the thing is, we want to find additional and alternate mechanisms to work with those restrictions.

Erik: Yeah, exactly. Every company has constraints, right? It's about optimizing around those. But it is a frustrating process. You mentioned in your case study this connection between AI and IoT. I think that is a particularly interesting point to dive into, especially if we look at industrial companies or anybody who's managing supply chains. There you're starting to deal with the application of AI to problems that touch the physical world. It's a much messier process, because you don't have all of your data nicely in a cloud environment where you can control all the parameters. You have all these messy human parameters: a sensor is out, and we have to get somebody out there to deploy a new sensor. We have integrations with legacy equipment, et cetera.

Can you share a few thoughts on the difference between deploying a low-code platform for edge versus cloud AI? Is it significantly more challenging for edge? Is the technology on the market already mature enough to deploy that effectively? What are your thoughts on this type of deployment?

Brian: I think that's a very good question. I think it also leads into the future of where all these things are going. If you look at a lot of the processing that happens today, IoT is everywhere, in everything that we see. I mean, even in my house right here, there are at least 100-plus IoT things. I'm sure that's true for everybody. With this propagation, especially in manufacturing and industrial sectors, the data that comes out of these IoT devices can be harnessed in a much smarter way when you apply AI to it. That's why I think edge deployments are critical: you need real time. You need to be able to connect to these devices and also be able to respond really rapidly.

Doing something in the cloud has been around for the last 5, 10 years. The thing with that cloud capability is that there is a lot of processing ability in the cloud. You have all your GPUs, and everything naturally auto-scales. You have Nvidia GPU instances that just scale as your load comes. That privilege, that luxury, is not available when you're running on the edge, because a lot of times, edge devices are very limited in compute power. They may have one or two specialized systems. So, the limitation happens from many, many different angles. One of them is compute. Storage becomes a problem. The chipset that you run on becomes a problem. Then the support for AI models on that chipset may also be limited, depending on the type of computations and operations you're trying to run.

What we've done with our platform, at least, and typically in low-code, is that when we built our platform in the cloud, we made sure it not only horizontally scales but also vertically scales. One of the challenges today is that a lot of kids are coming out of school taught on high-level languages, Python, NodeJS, and everything. A lot of them don't actually know what happens underneath the application code; they have no idea what it does. I don't blame the engineers. It's just how we are going up the stack as technology evolves.

What's really interesting is that we focus a lot on how to make sure the runtime, whatever we are running on top of NodeJS, on top of Docker, can actually scale at the same benchmark as the underlying NodeJS, or very close. That means, in a given system, if you're running on a single node, can you scale vertically on that node? I think that's important for every new technology provider who has aspirations to run on the edge. We do that in the cloud itself, so that we make sure that part scales. In fact, we have benchmarks where we show that we can run 35,000 simultaneous sessions on a four-core CPU, maybe 100 sessions fewer than the core NodeJS runtime. When you start thinking that way, you begin to figure out what all your overheads are and optimize them. That's mostly the vertical compute side.
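As a rough illustration of the kind of single-node concurrency test Brian describes, here is a minimal sketch, not Iterate's actual benchmark: an echo server plus N concurrent client sessions in one process, measuring how many sessions per second one node sustains. The port, payload, and session count are made up.

```python
# Minimal single-node concurrency smoke test: one echo server and
# n_sessions concurrent clients in a single process. Raise n_sessions
# toward a real target (e.g., 35,000) once OS file-descriptor limits allow.
import asyncio
import time

async def handle(reader, writer):
    while data := await reader.read(64):   # echo until the client closes
        writer.write(data)
        await writer.drain()
    writer.close()

async def session(port):
    reader, writer = await asyncio.open_connection("127.0.0.1", port)
    writer.write(b"ping")
    await writer.drain()
    await reader.read(64)                  # wait for the echo
    writer.close()

async def main(n_sessions=1000, port=8888):
    server = await asyncio.start_server(handle, "127.0.0.1", port)
    t0 = time.perf_counter()
    await asyncio.gather(*(session(port) for _ in range(n_sessions)))
    elapsed = time.perf_counter() - t0
    print(f"{n_sessions} sessions in {elapsed:.2f}s "
          f"({n_sessions / elapsed:.0f} sessions/s)")
    server.close()
    await server.wait_closed()

asyncio.run(main())
```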

The other thing is to really think about storage. A lot of times, when you run AI models, with versioning and all those things, storage becomes a problem, because AI engineers tend to be very liberal with storage. In the cloud, storage is so cheap. But when you go onto an edge device, you don't have that luxury. So, think about storage in a much more careful way, especially around versioning and things like that. Those are some of the things I would think about.
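One simple form of that discipline, sketched minimally below (the directory layout and ONNX naming are assumptions for illustration, not anything Iterate has described), is pruning old model versions so an edge device only ever keeps the last couple:

```python
# Keep only the `keep` most recent model versions on an edge device.
# The /opt/edge/models path and model_v*.onnx naming are made up.
from pathlib import Path

def prune_model_versions(model_dir: Path, keep: int = 2) -> None:
    """Delete all but the `keep` most recently modified model files."""
    versions = sorted(model_dir.glob("model_v*.onnx"),
                      key=lambda p: p.stat().st_mtime, reverse=True)
    for stale in versions[keep:]:
        print(f"removing {stale} ({stale.stat().st_size / 1e6:.1f} MB)")
        stale.unlink()

prune_model_versions(Path("/opt/edge/models"), keep=2)
```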

The other is also just a lot of corner and edge cases. Sometimes people assume they can always phone home, that there is something they can update. But when you really run in an industrial environment, sometimes the connection goes away. There is no phoning home. So, you want to be super sensitive to that. Can the system recover automatically? Those are things you need to pay lots and lots of attention to.
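A minimal sketch of that recovery pattern (the update URL and file paths are hypothetical): try to fetch a new model, and fall back automatically to the last known-good local copy when the device can't phone home.

```python
# Fetch-then-fallback: download an updated model if possible, otherwise
# recover with the cached local copy. URL and paths are hypothetical.
import urllib.request
from pathlib import Path

MODEL_URL = "https://updates.example.com/model_latest.onnx"  # hypothetical
LOCAL_MODEL = Path("/opt/edge/models/model_current.onnx")

def load_model_bytes(timeout: float = 5.0) -> bytes:
    try:
        with urllib.request.urlopen(MODEL_URL, timeout=timeout) as resp:
            data = resp.read()
        LOCAL_MODEL.write_bytes(data)   # cache the fresh version
        return data
    except OSError:                     # offline: recover automatically
        return LOCAL_MODEL.read_bytes()
```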

I would also say, and this is a really interesting point you bring up, it's critical not just for one company like us to optimize, but for the whole industry to have a lot of cross-training. In fact, one of the things I've been advocating in my own company right now is training all the engineers on how kernels work and how operating systems work, because at the end of the day, you have to scale. I think a lot of cross-training within these companies is important, especially if you are building software. Everything is converging together. You can be an expert on one thing, but you also have to know something else really well in order to make it scale.

Erik: Yeah, I think that's a good perspective. We're building very powerful tools that allow you, without specialized expertise, to perform certain functions. But you still need this broad array of generalist capabilities. On the business side, I feel it's equally important to get the engineers a bit of training in marketing. Just simple things, like if you have an idea, go out on the street, or go visit some customers and talk to 10 people. Annoying as it is, it's a simple thing, but it's very uncomfortable for people if they're not accustomed to doing it. These are just little things that can really delay a project or cause you to go down the wrong path, because this little skill set or mindset issue is preventing you from moving ahead efficiently.

Brian: Yeah, I 100% agree, Erik, especially if the engineers are on innovation teams. If you're on an innovation team in a large enterprise, just like being in a startup, you want to be able to articulate what you do and what your product is very clearly to the business teams. Essentially, you're doing a bit of a sales job. It's very, very critical to have that capability.

Erik: Maybe this is a good point to discuss who you're working with. It sounds like we're discussing the functions a little bit. Maybe we can start with the industry, the functions within those industries, the roles. Who would be the buyer? Who would be your main counterpart that you're collaborating with? What does your customer and your partner network typically look like?

Brian: Basically, we today work across industries. We can work in a number of industries, retail being one of them. We work in the convenience store industry. We work in automotive. We are especially good at selling technology to traditional industries, because we understand large enterprise and legacy. The more traditional the industry, the better we are. In fact, we joke internally that we work with companies greater than a billion dollars in market cap, because those are companies we serve very well; we understand enterprise systems really well. Right now, we've also started working with banks and companies selling to banks. We are also working with companies that sell into manufacturing, into industrial systems, and so on. So, it's very cross-industry.

Depending on the relationship and the company, it starts at different spots. But on average, in most cases, it starts with the VP of innovation or somebody in marketing who actually has a business problem. So, it starts with the solver. The buyer is somebody who's responsible for bringing something to market and solving the business problem. That is true for any company that has a B2C front: retail, a lot of banks, the convenience sector, or automotive. But there are also scenarios where we have worked with IT or leaders who are running certain business units and have internal projects they need to do. Typically, though, it starts with somebody who is responsible from the business side for getting that problem sorted out. Either there is new revenue, or it's attached to a part of existing revenue. So, that's the sponsor and the buyer.

The users of the platform, essentially, are innovation teams, IT teams, engineering teams, or product teams who are actually using the product. Typically, it starts with some sort of an architect, one of those leaders or low-code leads working there. They become an evangelist. Initially, they get to learn it. Sometimes we also provide services in certain areas, and we build it for them and then transition it to them. There are some big companies where our service team is still constantly running and hosting it for them as well, because sometimes innovation takes time, and some companies don't even have the resources to do that. It's a combination of all of the above that comes into play.

Erik: Got you. Do you often work with system integrators or service providers, whether it's consultancies, or are you typically working directly with, let's say, the end-using company?

Brian: Typically, we work with the end-using company, because that has worked really well for us. We're also now establishing relationships with various service providers. When we work with a big end-using company, we often interact with the service providers on the other side. They're sort of our partners, even though we don't go through them; we hold the paper directly with the end company.

Now, we are also looking at working with a few service providers, especially in international regions where we don't have arms and legs. We are going through some service providers, especially for specific industries that require a lot of regulation, like banks and so on. That makes our life a little bit easier, because they already have all the established bells and whistles, and we can provide our core competence in between.

Erik: Got you. Well, I'd love to walk through one or two case studies. But before we do that, I want to quickly touch on this topic of blockchain. As I was mentioning earlier, I get pitched a lot on blockchain concepts for industry. Often, I feel like a lot of these approaches are not user-centric. They're either looking at it from the perspective of, okay, this is something a little bit sexy, a little bit of quick money, or they're very passionate individuals thinking, okay, we can rethink the internet. They have a lot of passion. But often, the user and their real problems are not central to the concept.

I was thinking of one case I was pitched on, which was tracking blood donorship. You donate blood, and you have Type O or whatever. One of the medical providers wants to be able to track that blood through the supply chain. They were pitching blockchain as a solution to anonymize the data behind this, so you can do the tracking. I'm thinking, okay, if I just donated blood, I don't care. Frankly, you can track my identity. You know what the blood type is. Just put it in a standard database, and share the data with whoever needs it. That seems a fine solution. So, I can understand why blockchain, a decentralized platform, might have some value there. But it's hard for me to imagine any end users who really care about that, when all they really want to know is: is this the right type of blood to match to the recipient? I see that kind of situation a lot. I'm always exploring. I know in financial services, there are some real underlying applications of infrastructure using blockchain very efficiently. But beyond that, in more industrial environments, what have you seen that has inspired you, that is really solving real-world problems today?

Brian: I'll get to that in a minute. Before I go there, I'm sure you are very familiar with the S-curve. For every technology, depending on where it is on the curve, on the path of evolution, a lot of times there's an expectation that it will pick up really fast, but the industry doesn't develop fast enough. Then, suddenly, you see it taking off. It depends on where you are on the curve. What's interesting in blockchain specifically is, I think the applications are going to be very much internal. That's where it's initially going to take off, because I think the plays are going to be efficiency plays, security plays, reporting plays, compliance plays. Those are going to be the strongest before the consumer side.

On the consumer side, I think there are a couple of things that can grow. I've seen some interesting things. One of them is, right now there are all these restrictions and laws for advertising companies, and even large retailers and others have to be very careful in buying and using third-party data because of privacy. At some point, everybody took advantage of the data being available. But now there are more and more restrictions, especially with how Apple is controlling their devices, rightfully so, putting the power in the hands of the user. What would be really interesting is a blockchain-based methodology that works just like Facebook Connect. Look at your Apple Health app. If I want to give my health data to another cloud provider, I can click to authorize health data, and it'll share the information. Very similar to Facebook Connect.

This consumer-initiated data sharing, I think, is a very powerful place for blockchain. Because now I know who I shared it with. I have my logs of when I shared it. It's all in the blockchain, and that data is something that I know. I can share only the portion of the data that I want. Protocols for that are very powerful. Look at the health industry a few years back. In fact, we were doing a project with a medical company. We were trying to get some data on asthma and so on, and run AI models on top of that. But then we realized the actual data for the patient visits and various health data was spread across several providers. So, you had to do this long stitching across institutions, and you needed third-party anonymization companies coming in and removing PII. There were four providers. By the time you get access to your data, before the AI even happens, it's months and years.

Now the same thing can be done very gracefully, because all you've got to do is put up an app with an authorization model. The user takes these things and gives them out. It's selected by the user. If the user doesn't like it, it's not going to be shared. You're putting the power back into the hands of the user. I think there is a lot of trust that can be built by showing that this is in a blockchain rather than just a database somebody can hack. It's more authenticated. It's distributed. It's protected better by more standard tech. I think users will care about that. That's a place where it would be very, very interesting for blockchain companies.

Then, of course, there is the other part, the noise. There is the metaverse. There are NFTs. The market goes up and down. I think the idea is to look beyond the noise and see that there is something really valuable, and to ask, as a consumer as well as a company, can I take advantage of those pieces? I think all that track-and-trace back end is going to happen. It's going to slowly propagate across the industry. But the consumer side is also going to get interesting if companies build some of these solutions on blockchain. Consumer-initiated authentication and authorization are very interesting. Say I walk into a beauty company, and I want to talk about all of my skin ailments and everything. That can be a blockchain thing, right? If I am going to a beauty company, I want to give my information, everything in my Apple Watch, like how much water I am drinking, because in beauty, water consumption is critical. With things like that, I could basically press a button and give that information away. That's where I think blockchain gets very interesting.
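To make the idea concrete, here is a minimal, purely illustrative sketch, not a real blockchain and not anything Iterate has built: a consumer-initiated sharing log that chains each consent grant to the previous one by hash, so the user can later verify who received which fields and when. The recipients and field names are invented.

```python
# A hash-chained consent log: each grant records the recipient, the
# shared fields, a timestamp, and the hash of the previous entry, so
# tampering with history breaks the chain.
import hashlib
import json
import time

def append_grant(log, recipient, fields):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"recipient": recipient, "fields": fields,
             "ts": time.time(), "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return log

log = []
append_grant(log, "beauty-brand", ["hydration", "skin_type"])
append_grant(log, "clinic", ["blood_type"])
print(json.dumps(log, indent=2))
```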

Erik: This makes sense, I think, for simplifying internal interfaces. And the value proposition on the consumer side also makes complete sense. That's my struggle with blockchain: a lot of these concepts make a lot of sense. Then I look at the world and try to find examples of whose mom has been helped by this, and I find very few moms that have been helped by blockchain solutions. But you're absolutely right about the S-curve. It's very easy to imagine things. It's very hard to build things. I guess that's where we are right now.

Brian: It's adoption, right? I mean, look at the phone. Today, there's an iPhone in everybody's hands. I was actually on the team that worked on the iPhone in 2005, 2006 at Apple. But Motorola put out a phone in the late '80s or early '90s. It went through an evolution of 20-plus years before it got to a point where it was ready. I think that evolution is going to happen in certain technologies. But I was very impressed with how fast it happened in IoT and AI, which I think is really interesting. IDC, one of the analysts, said a few years ago that the AI market size was, I think, $120 billion. Then they went to $300 billion. Now it's $600 billion. Within a few years, they've upped the market. Then there are things happening in IoT. Those two technologies are propagating a lot faster, which is interesting to see. But one could also argue the same thing there, too. Sometimes when you talk to AI folks, they say, "Well, I studied AI in college in 1978, in my PhD." So, there is also a little bit of that.

Erik: Exactly. Somebody who was working on IoT systems for missiles back in the '70s or something is like, "Yeah, I know." It's just taken 30, 40 years for it to become commercially viable so we can start playing around with a wide array of use cases. So, yeah, that's very true. Great. Well, maybe we can wrap up here, Brian, with one case study. Is there an end-to-end case that you can walk us through for one of your customers?

Brian: Yeah, I'm actually going to go through a very interesting case. I have quite a lot of cases at this point. We've deployed in some 3,100 stores, and we're at about 40 million monthly unique visitors across our deployments. Our solutions are quite scaled at this point. But there is one solution that was really interesting. We worked with an automotive provider in the collision damage estimation industry. If your car gets hit, they'll work with the insurance company. Then they'll do the collision repair: repainting, body work, whatever needs to be done. They had a solution in the past that didn't quite work very well, so we rebuilt it for them. We used machine learning, both computer vision and tabular data, to build a solution that estimates very accurately. Typically, in the old world, you would run an online or mobile estimation, and your price would be significantly different from what you get in the shop. As a customer, that's just a terrible experience. People actually started dropping out. So, we decided: why don't we look at images of prior damage? But the problem is, the customer we were working with didn't have enough images. For various logistical reasons, they couldn't get enough images.

So, we worked with another partner, in a completely different part of the world, and got a completely different set of images used in a completely different scenario. But it was also an insurance accident scenario. We used advanced machine learning and transfer techniques to transfer those images back into our domain. Then we built a whole image database, about a couple of hundred thousand images. It was really interesting how we used vision data in a different way.
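As a rough sketch of that kind of domain transfer (the dataset path, the four severity classes, and the off-the-shelf ResNet backbone are all assumptions for illustration, not the models Iterate actually used), fine-tuning a pre-trained network on the new damage images might look like this:

```python
# Transfer learning: start from an ImageNet-pre-trained backbone,
# freeze it, and fine-tune only a new head on the target-domain images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor()])
train_ds = datasets.ImageFolder("data/damage_train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
for p in model.parameters():            # freeze the pre-trained backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 4)  # e.g., 4 severity classes

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:           # one pass for brevity
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```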

Then we looked at transactional records with the customer, a couple of million transaction records, and merged the transactional records and vision data together to build an ML pipeline. Because typically, what happens is, you might have a little dent in your car, but depending on the type of car you drive, it might mean replacing the entire backplate. You may think it's a little dent: if I hire some mobile service, I could get it done for 100 bucks. But you actually have to change the backplate, and that's a $1,500 job.

By looking at transactions, we were able to identify the depth of the job and match it with scenarios. It was really interesting, one of those fusions between vision data, image data, and actual transactional records. We built this thing that was very accurate. Today, it's running in lots of stores, which was a very, very interesting case. It also hit a number of proof points. One, customers are happy, because it created a better customer experience. So, it's not only a back-end job but a customer experience. It was an interesting experience from the AI side, because we worked in an environment where there was not enough data; we took domain-transfer techniques and combined them with a lot of transactional data, merging vision and transactional records. Then there is the accuracy, the business benefits, and the type of KPIs it created. And, of course, the success of the innovation, of it being rolled out into lots of stores. From a lot of angles, it created the checkmark. I mean, we've done more exciting projects, but that's one that, in my mind, I would call a holistic checkmark across all the different parts.
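A minimal sketch of that vision-plus-tabular fusion (the feature count, ResNet backbone, and regression head are illustrative assumptions, not the customer's actual architecture): image features from a CNN are concatenated with transactional features and fed to a single cost-estimation head.

```python
# Fuse CNN image features with tabular transaction features in one
# model that regresses an estimated repair cost.
import torch
import torch.nn as nn
from torchvision import models

class DamageCostModel(nn.Module):
    def __init__(self, n_tabular: int = 12):
        super().__init__()
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Identity()          # expose 512-d image features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + n_tabular, 128), nn.ReLU(),
            nn.Linear(128, 1))               # predicted repair cost

    def forward(self, image, tabular):
        feats = torch.cat([self.backbone(image), tabular], dim=1)
        return self.head(feats)

model = DamageCostModel()
cost = model(torch.randn(8, 3, 224, 224), torch.randn(8, 12))
print(cost.shape)  # torch.Size([8, 1])
```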

Erik: Yeah, it's a very practical solution. It sounds like a great use case. I have a question about the dataset. So, you have this dataset that basically has the images you require. Do those need to be tagged? Do you need an expert team to say, okay, here's the type of dent, here's how I'd interpret it? Because those had not been matched historically. You said they were coming from another insurance case, so maybe those had already been matched to some assessment.

Brian: That's a great question. There was some tagging involved. Also, these days, there's a lot of auto-tagging capability. So, it's a combination of prior tagging, doing transfer capabilities on prior tagging, using machine-generated auto-tagging, and then humans on top of it.

Erik: Let me ask you for a professional opinion. We're working on a case right now that's actually similar: machine vision for paint scratches, in a different environment but a similar case. The internal team of the company we're working with, which is very expensive, said, "We estimate it will take two hours per image to tag. Then it's like $1,000 per day for manpower." So, it becomes quite expensive to tag any number of images. Does two hours seem like a reasonable amount of time to tag an image, or does that seem a bit excessive?

Brian: I think it depends on the problem. I don't know the problem deeply, so I'd need to look at it. But typically, these days, there are a lot of outsourced services out there that could actually do it for you fairly fast. You could probably do fairly complex tagging in 5 to 10 minutes an image. I've seen scenarios, even in the beauty industry, where they tag lips and whole facial structures fairly quickly, within a couple of minutes. So, that's one option.

The other is, there are Amazon Mechanical Turk type services that can also do tagging, which is more like a service on demand. You pay for what you need, as opposed to dedicated taggers. You can hire dedicated, outsourced taggers, or you can do the pay-per-use service type of thing. The other option is, there is quite a lot of auto-tagging and transfer-tagging technology out there, where you train on certain things, the machine generates the remaining tags, humans correct them, and then you run it in a loop, and it gets better. You can do a lot of those techniques. Nowadays, the kits are fairly, fairly good. Especially with kits like YOLOv7 and the Detectrons, it's come a long way; there are fairly advanced capabilities.
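As a small sketch of that human-in-the-loop pre-labeling idea (using an off-the-shelf torchvision detector for brevity rather than YOLOv7; the image path and score threshold are made up), the machine proposes boxes and a human then corrects them before the data goes back into training:

```python
# Model-assisted pre-labeling: a pre-trained detector proposes boxes
# above a confidence threshold; a human reviewer corrects the output.
import torch
from torchvision import io, models

detector = models.detection.fasterrcnn_resnet50_fpn(
    weights=models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
detector.eval()

def propose_tags(image_path: str, min_score: float = 0.6):
    img = io.read_image(image_path).float() / 255.0   # CHW, 0-1 range
    with torch.no_grad():
        pred = detector([img])[0]
    keep = pred["scores"] >= min_score
    # These machine proposals are what a human reviewer then corrects.
    return pred["boxes"][keep], pred["labels"][keep]

boxes, labels = propose_tags("scratch_001.jpg")
```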

Erik: Yeah, we had SuperAnnotate on the podcast maybe six months ago or so. They're basically an annotation software developer, and they have a large resource team.

Brian: I mean, the people who have figured out how to do it well are going to have an edge. I agree.

Erik: Yeah, great. Okay. Good. Well, last question from my side. I know that you've recently launched version seven of the platform. What's new? Also, what's on the horizon for you in the next two or three years?

Brian: Great question, Erik. When Interplay came in, from 2017 onwards, we went through like six versions really fast. Then came version 7. In fact, our version 6 was called 'Tiger,' and version 7 is code-named 'Spirit.' Version numbers are great, but from the old Apple days, you've got to give it a really nice name that you've got to love, right? So, I ended up calling this thing Spirit. The goal is basically about helping the creative spirit of the engineers. You might be an AI engineer, or you might be a UX front-end engineer. We looked at these two large buckets.

In the prior versions, through Tiger, we were purely concentrating on the backend. In fact, Interplay's power is actually in building complicated backend technologies. But in version 7, we also enabled some front-end capabilities, because a lot of times, internal enterprise teams have to build a UX quite quickly. We also worked with a lot of agencies and teams that wanted to build some UX. So, we enabled some front-end capability. One thing we built is a Figma builder. You can upload a Figma file, and it'll convert it into actual UX code, writing actual React code.

The other is, we also introduced a new page builder, which has the ability to drag and drop and connect things together. In the old days, when I was growing up, I was an engineer who wrote code in C. But when I was not in school, my dad and I had a business in Sri Lanka, where we made accounting software for companies. I wrote most of that software in Visual Basic, where you drag and drop, double-click on a button, and you can write code. As modern technologies came in, a lot of those things somehow vanished. So, I wanted to make it really simple: you can drag and drop, double-click on something, a function appears, and you can write code. That one-stop capability, we created in this new version.

Then on the AI side, as we know, there are a lot of data scientists who still like to work in Jupyter, because most data science work is done in Jupyter. So, we built an integration from Jupyter back into Interplay, our low-code environment. You can write a piece of code, and with the plug-in we built, you click on an Interplay button, and automatically that code is wrapped in a lego block and brought in as a lego block, so an application engineer can use it. Pretty nifty capabilities like that.
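Interplay's plug-in itself isn't public, so purely as a hypothetical sketch of the general pattern (the decorator, registry, and example function are all invented for illustration, not Interplay's API), wrapping a data scientist's function as a reusable block might look like this:

```python
# Hypothetical "block" registration: a decorator records a function plus
# its declared inputs/outputs in a registry that a visual canvas or
# runtime could discover and wire together.
BLOCK_REGISTRY = {}

def block(name, inputs, outputs):
    def register(fn):
        BLOCK_REGISTRY[name] = {"fn": fn, "inputs": inputs,
                                "outputs": outputs}
        return fn
    return register

@block("score_risk", inputs=["claim_amount"], outputs=["risk"])
def score_risk(claim_amount: float) -> float:
    return min(claim_amount / 10_000.0, 1.0)

# A runtime can now look up and invoke the block by name:
print(BLOCK_REGISTRY["score_risk"]["fn"](2500.0))
```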

The other thing that's a problem in the ML world is, a lot of times when you run AI models, you're talking about a lot of data, millions and millions of records. Those records are sitting in a data frame, typically a Pandas DataFrame, or Dask, or one of those systems. The question is, when you're dragging and dropping, if you write them as separate components, that data is getting copied in memory. The engineer who's dragging and dropping won't realize that, but when you run in real time in production, suddenly you're going to see a performance lag. So, we built actual internal memory bridges and Python bridges that can gracefully rethread them and do shared memory to manage all these things. As an engineer, you can create so many things and connect them together, but internally, the resources are used very intelligently and efficiently. All of that came in the new version. We are also, right now, working on a lot of performance enhancements post-release. We have like 7.05 or something now, and we'll release five versions after that just as we benchmark. Lots and lots of capability.
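The underlying issue is easy to demonstrate with a minimal sketch (nothing here is Interplay's actual bridge mechanism): a component that copies a DataFrame doubles its memory footprint, while a zero-copy NumPy view of a column typically does not.

```python
# Two ways a pipeline "component" can consume a large DataFrame: a
# naive .copy() duplicates all the data, while to_numpy() on a
# single-dtype column is typically a zero-copy view.
import numpy as np
import pandas as pd

df = pd.DataFrame({"amount": np.random.rand(1_000_000)})

def component_copy(frame: pd.DataFrame) -> pd.DataFrame:
    out = frame.copy()                   # duplicates the whole frame
    out["flag"] = out["amount"] > 0.5
    return out

def component_view(frame: pd.DataFrame) -> np.ndarray:
    values = frame["amount"].to_numpy()  # usually a zero-copy view
    return values > 0.5                  # only the result allocates

print(component_copy(df).memory_usage(deep=True).sum())  # full copy cost
print(component_view(df).nbytes)                         # result only
```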

This Spirit version is all about creativity. We are also focusing a lot on GPUs and enhancing the platform there. We just became an Nvidia Inception partner, and we are integrating a lot of Nvidia kits into our platform. Our goal is to make it a seamless innovation platform where you can build for the five forces really fast. The promise is that it will enable an application engineer to tackle the five forces, or anything else, that much faster. We'll keep working on it.

Erik: Great. It sounds like you're making very quick progress. I'm looking forward to it. Maybe we'll have you back on in two years or so; I'd love to see where you are at that point.

Brian: Awesome. Erik, thank you for having me. It was such a pleasure talking to you. Great questions, by the way.

Erik: Well, then, just a last point here. What's the best way for folks to reach out to you?

Brian: I can be reached at brian@iterate.ai. That's my company. If you want to learn more about us, you can go to www.iterate.ai.

Erik: Awesome. Brian, thanks so much for the time.

Brian: Yeah, thank you, Erik. Thank you for having me.
