Podcast EP 154 - How are data marketplaces streamlining data monetization - Nick Jordon, founder & CEO, Narrative

Dec 02, 2022

Our guest this week is Nick Jordon, founder and CEO of Narrative. Narrative is a category-creating company that has redefined data commerce by helping organizations acquire and monetize data more efficiently.

In this talk, we took a deep dive into data monetization, from common use cases to marketplace pricing mechanisms and data quality assurance. We also explored how large, mature companies can monetize data from customer interactions with their products to improve margins.

Key Questions:

  • What are the buyer and seller dynamics of a data marketplace?
  • Which use cases are mature today, and which are being tested?
  • How are prices and purchase rules set on the platform, and by whom?
  • What is the process to guarantee data ownership and accuracy?

Erik: Nick, thanks for joining us on the podcast today.

Nick: Thanks for having me, Erik.

Erik: Great. Well, Nick, I always love talking to founders. This is a particularly interesting business model, focused on monetizing data. I think it's a topic everybody thinks about. A lot of people, including myself, are not quite sure how to do it well, let's say, when you get into the details. So, I'm really looking forward to getting into that.

Maybe before we get into the topic and your company, we can talk about you a little bit. You've had a very successful career with Yahoo, Demdex, and Adobe. Now you are a founder. So, it'd be interesting to understand what led you to leave the successful, well-paid, stable, I suppose, career to jump into the unknown.

Nick: Yeah, what I've realized over the years is, my real passion is solving problems. I got to solve really interesting problems in all of those companies. My last company was a company called Tapad before I founded Narrative. I found a problem that we just had internally at the company: we were both a big buyer and a big seller of data. That process was hopelessly broken. It was maybe one of the most inefficient business processes I've ever experienced in my life. Being naive, I said to myself, "Oh, certainly someone has solved this problem. I will find that person or that company, and then I will take their product. Then I will integrate it into my workflows." The more I looked, the less I found anyone that had even come close to solving the problem.

So, I found a problem. Again, I have a passion for solving problems. Step two was to say, do I have a solution for the problem? It could have been a problem that I also couldn't figure out. So, I gave myself about six months just to noodle on the idea, whiteboard, run some things by interesting colleagues. At the end of that six months, I basically decided that I thought I had a way to solve the problem that was both unique and effective. I started Narrative, basically, because I saw a problem that I couldn't unsee. I thought that problem was important enough, that it needed to be solved, and I couldn't live with myself if I didn't go try to solve it.

Erik: Great. Tapad — that was sold, right? Did you leave after the acquisition? What was the timeline there?

Nick: Yeah, maybe one of the most fortunate timing situations of my career. I originally had the idea for Narrative in the beginning of — I'm going to get the years wrong, because so many years have gone by. Let's call it the beginning of 2015. I'd really given myself six months to convince myself that it was a good idea. I've got hundreds of ideas, and I'd say most of them are probably bad. So, an idea alone is not a good reason to go start a company.

So, I had made the final decision in October or November of that year that I was going to leave and start the company. I actually went to our CEO, because I was on our management team. I said, "Hey, I really found something that I'm passionate about. I want to go. I want to go start this company. I'm not going to give you two weeks’ notice. I'm telling you now. Let's spend the next two or three months coming up with a transition plan. Maybe after the holidays, I won't come back to the company."

Literally, the next week, like six days later, the CEO emailed me. He said, "Hey, there's a management consulting team from Telenor — the Norwegian Telecom — that's going to be in the office for 30 minutes next week. Would you sit in with me and just help them understand what we do?" I don't know all of the details of how that meeting got set up, and what the intentions were. But I suspect it wasn't, "Hey, this company wants to acquire us." Because it was a 30-minute meeting with someone that didn't even work at the company. It was someone they had to hire to look at opportunities outside of Norway.

That meeting, which was scheduled for 30 minutes, lasted for three and a half hours. They asked us, "Hey, can you guys come to Norway next week to have a further discussion? Because we're interested in acquisition." I think they probably referenced and referred to it as a strategic partnership or something like that. So, that deal went fairly quickly. I'd say within 45 days, they had a term sheet where they were going to make an offer to buy the company. There was another couple of months of due diligence before the deal closed.

So, I stayed through all of that. But the day the deal closed, it, basically, was my last day employed by Tapad. That's when I started Narrative. So, the fortunate part is I had the upside of being a senior leader in a company that exited for $400 million. So, it was a fairly sizable exit. It actually gave me a little bit of personal runway to go do the silly thing, which is start your own company and not pay yourself for a number of years, without having to worry about how I was going to put food on the table.

Erik: Wow. Okay. That's good timing. I guess, you get a little bit of luck and you've earned that luck by putting yourself in the right place. But, still, timing is good. Interesting. You mentioned the problem here, that you were struggling with buying and selling data and just being very frustrated. I think a lot of folks that are listening in on this are going to be in a position where maybe their organizations are involved in that, but they might not personally understand the challenges. So, what does the status quo look like if you're trying to buy and sell data?

Nick: What's funny is, because we're a venture-backed company, from the very earliest days through today we're out talking to the venture community and the investment community. Typically, when I talk to people that don't know anything about buying and selling data, they say, "Well, you cut someone a check, and they ship you a thumb drive. That is how you buy and sell data." I can say, definitively, that is not how data is bought and sold.

The first challenge is, we talk about data as if it's a monolith, as if it's homogenous. Data comes, literally, in an infinite number of forms. Even datasets that represent similar things tend to come in different schemas, in different formats, in different units of measure, and all of these different things. Oftentimes, the early parts of buying and selling data are buyers and sellers finding each other. Historically, there hasn't been a directory for people that want to buy or sell data. So, they have to fill that gap themselves. Then there is a very long period where they are talking about, what is the data that you have? How do you collect the data? What happens in these scenarios with the data?

There's this almost translation layer where the buyer thinks about data in a very specific way. The seller thinks about data in a very specific way. They have to translate the way they think about data to something common to figure out if there's even a deal to be had. If there's a deal to be had, that's great. That's a checkmark along the way. But then, you get into how do we build a commercial agreement around this? How do we concern ourselves with government and regulations, to the extent data is regulated in any way? What are the technical implications of how data gets from point A to point B? Then, ultimately, how do we potentially rinse and repeat that process across additional partners — be it on the buy-side or the sell-side?

So, we usually tell people, if you're going to do a deal manually — just I'm a buyer, and I want to buy data from a seller — if everything goes well, it's probably six months from start to finish — from the time you say, "Hey, I've got an initiative that requires me to buy data," to actually getting that data productionalized and the commercial agreement signed, and starting to use that data. If everything goes well, it takes about six months.

Erik: Okay. That sounds very labor-intensive, and very expensive labor at that: a lot of decision making, maybe a lot of lawyers and other specialists involved in this process, a lot of overhead.

Let's touch quickly on what data we're talking about. Because I guess when we're talking about IoT, we could be talking about virtually anything, from social media and personally identifiable data that would be used for marketing, to data that would be used for R&D of new devices, or supply chain management. Are you focused here on particular types of datasets, or is this a horizontal platform that could potentially support transactions of any dataset?

Nick: Yeah, it's a horizontal platform. In fact, we think of the underlying technology as basically being a database. We have a lot of the same constructs of a database. Companies can create tables. They can populate those tables with any arbitrary schema. We have query languages. We have ACLs or access rules that determine who can access what data and under what terms. So, it's very much a horizontal platform.
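
To make the database analogy concrete, here is a minimal Python sketch of the kind of constructs Nick describes: a table with an arbitrary, seller-defined schema and an access rule attached to it. The class names, fields, and values are hypothetical illustrations, not Narrative's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified stand-ins for the platform's constructs: a "table"
# with an arbitrary schema, plus an access rule (ACL) describing who may read
# it and under what commercial terms.

@dataclass
class DatasetTable:
    name: str
    schema: dict  # column name -> type, entirely up to the seller

@dataclass
class AccessRule:
    table: str
    allowed_buyers: list = field(default_factory=lambda: ["*"])  # "*" = public
    blocked_buyers: list = field(default_factory=list)
    price_per_record_usd: float = 0.0

weather = DatasetTable(
    name="field_sensor_readings",
    schema={"lat": "float", "lon": "float", "soil_temp_c": "float",
            "collected_at": "timestamp"},
)

rule = AccessRule(table=weather.name,
                  blocked_buyers=["competitor_inc"],
                  price_per_record_usd=0.00001)
print(rule)
```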

I'd say the big area that we don't focus on today is binary data BLOBs. We don't do things with satellite imagery, or LiDAR, or things that are producing truly unstructured data. We can do things with that type of data. But we're more focused on the metadata layer contained within those BLOBs, versus us trying to figure out that a satellite image is a picture of a parking lot at a Walmart in upstate New York.

Generally speaking, because data can be anything, we'd really be constraining ourselves if we said, "Hey, there's only one type of data that our platform will work with." Because I think, over time, most people may start with a single type of data that they care about. But very quickly, they want to start co-mingling and joining, and expanding that to a much larger variety of datasets.

Erik: So, where is the marketplace today? Is it still primarily buyer meets seller, they identify the opportunity for a transaction? Then instead of the manual process that used to exist, they would go on a platform like yours to streamline the transaction? Or is it more of a marketplace where you can make a micro transaction and say, "I'm going to buy a small bit of data for $300, and test it out. Then maybe I'll scale up. I can scale up and down like I'm using a cloud service?" Where are we today, one to one transaction versus marketplace dynamics?

Nick: Yeah, it's very much the latter. We support the former. But I think one of the big innovations we have in the platform is that, because we treat the software as a database, as a buyer you can actually run a query.

Let's say that you're looking for weather data over a specific region that covers farmland that is owned by an agriculture conglomerate. As the buyer of data, you can go in and say, "I am looking for weather data that includes soil temperature, air temperature, precipitation rates, relative humidity, where that data was collected between June 1 2022 and September 1 2022 between the latitudes and longitudes of x, y, x, y." Assuming that the data is in the platform, and the sellers on the platform are willing to sell that to you, our query engine — again, this is almost a SQL statement — will basically go record by record to make sure that every record in those data sets, across multiple providers, meets your criteria. The transaction model in our marketplace is, you're actually buying the data on a record-by-record basis. So, no one technically buys a dataset. They buy a collection of thousands, or millions, or billions of records where the actual price is associated to each one of those records.
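
As an illustration of the record-by-record query Nick describes, here is a short Python sketch that filters records against criteria like the ones above (time window, region, required fields). The field names, coordinate bounds, and sample records are made up for the example; this is not Narrative's query language.

```python
from datetime import datetime

# Illustrative only: a record-level filter of the kind described above.
# Field names, bounds, and the inline records are invented for the example.
def matches(record):
    in_window = datetime(2022, 6, 1) <= record["collected_at"] <= datetime(2022, 9, 1)
    in_region = (41.0 <= record["lat"] <= 43.5) and (-94.0 <= record["lon"] <= -90.0)
    has_fields = all(k in record for k in
                     ("soil_temp_c", "air_temp_c", "precip_mm", "rel_humidity"))
    return in_window and in_region and has_fields

records = [
    {"lat": 42.1, "lon": -93.6, "collected_at": datetime(2022, 7, 4),
     "soil_temp_c": 24.0, "air_temp_c": 29.5, "precip_mm": 0.0, "rel_humidity": 0.62},
    {"lat": 48.9, "lon": -122.5, "collected_at": datetime(2022, 7, 4),
     "soil_temp_c": 18.0, "air_temp_c": 21.0, "precip_mm": 1.2, "rel_humidity": 0.80},
]

purchased = [r for r in records if matches(r)]  # only matching records are bought
print(len(purchased))  # 1
```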

We have people say, "Well, how many transactions does your marketplace do?" I look at them and I say somewhere around 9 trillion last year. The average transaction size was a 1,000th of a penny or something very, very tiny. But we are very much taking that microeconomic model of a marketplace. Again, no one's buying one record. They're still buying millions of records, typically. But they can craft an execution that gets them the exact right million records that they want based on their criteria, versus buying a dataset that might have 30 million records in it, of which they're going to throw out 95% because it's not germane to the problem they're trying to solve.
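
Some back-of-envelope arithmetic, using the rough per-record figure mentioned above, shows why buying only the matching records matters. The numbers are illustrative, not actual marketplace prices.

```python
# Rough illustration of why record-level buying matters, based on the
# "thousandth of a penny" figure above; prices are illustrative only.
price_per_record_usd = 0.01 / 1000   # about $0.00001 per record

targeted = 1_000_000                  # buy exactly the records you need
bulk = 30_000_000                     # or a whole dataset, 95% of it irrelevant

print(f"targeted purchase: ${targeted * price_per_record_usd:,.2f}")   # $10.00
print(f"bulk dataset:      ${bulk * price_per_record_usd:,.2f}")       # $300.00
```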

Erik: Okay. Great. This is super interesting. I think we have to approach it from a few different perspectives. So, let's take them one at a time. I'm probably going to miss some things, and you'll have to help me fill in some gaps at the end. But maybe we can start with the less technical bits here.

If we look at the use cases, I guess, marketing data is probably the thing that first comes to mind with people. But I guess, there's going to be a really long tail of use cases. Then there's probably a shortlist of more common use cases. Maybe we can cover the more common ones, and maybe some of the more interesting long tail use cases that you've seen users of the platform adopt.

Nick: Sure. There are certainly marketing use cases. We work with a number of customers that have relationships with their customers. They have a CRM, but they know a very limited amount about their customers. You can imagine if you sell beverages, if you sell Cola to customers. First, there's probably a number of people that you don't have a direct relationship with, because people don't go to Coca Cola and Pepsi directly to buy drinks. They go to grocery stores and bodegas and Amazon to buy drinks. But they may have events that they run where they're actually able to collect information around the people that drink their beverage of choice. But even in those examples, they know very little about their customers. They may know their email address. They may know that they signed up for a promotion around a particular product. But beyond that, they have a very limited understanding of who their customers are.

So, a use case that they execute against the platform is to basically enrich their CRM with additional information about their customers, so they can create better cohorts or better understanding of who is buying and consuming their products. And so, our platform allows them to push their existing CRM into our system. Say, I would like to buy additional information about these users. I may specifically want to buy their household income, or if they prefer to shop at Kroger or Piggly Wiggly. Going back to that concept of a micro transaction, they're only buying data on users that are already in their system, which is important from an efficiency play. Then they can choose which data they want to buy about those users to fill in the additional information in their CRM, so their marketing teams or their sales teams can go execute against their newly-found knowledge.
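
A minimal sketch of the enrichment flow described above: attributes are bought only for people already in the CRM, matched here on a hashed email purely for illustration. The identifiers, attribute names, and matching mechanics are assumptions for the example, not the platform's actual implementation.

```python
import hashlib

# Illustrative sketch: only buy attributes for people already in your CRM.
def hash_email(email: str) -> str:
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

crm = {hash_email(e): {"email": e} for e in ["a@example.com", "b@example.com"]}

# Hypothetical seller-side records keyed by the same hashed identifier.
seller_records = {
    hash_email("a@example.com"): {"household_income": "75k-100k",
                                  "preferred_grocer": "Kroger"},
    hash_email("z@example.com"): {"household_income": "25k-50k",
                                  "preferred_grocer": "Piggly Wiggly"},
}

# Only records that match an existing CRM entry are purchased and merged.
for key, attrs in seller_records.items():
    if key in crm:
        crm[key].update(attrs)

print(crm[hash_email("a@example.com")])
```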

Erik: Okay. So, that one makes intuitive sense. I think a lot of businesses have acquired some data set around that use case. Our podcast is obviously heavily oriented on IoT. I think there's a lot of interesting areas to explore there, but also a lot more complexity when it comes to acquiring IoT data — transportation data, weather data, sensor data, whatever that might be. Are you already transacting IoT data today? What is the potential there that you see for that marketplace?

Nick: Yeah, we are. We see a huge potential. We work with a fairly large customer that has weather sensors all around the US. I think their traditional business model is helping farmers understand when frost is going to occur, and help them take precautions to make sure that that frost doesn't impact their crop yield. But they do that by putting sensors in fields all around the US. Those sensors collect all of the traditional weather-related information that we would expect as consumers, and then a bunch of things that are very specific to agriculture — pH levels, nitrogen levels, things that, frankly, I don't understand all of that well.

Historically, the company has collected all of this data. Then they basically provide predictions or reports to farmers or agricultural conglomerates that help them run their business. But by virtue of collecting all of this data from these sensors, they've actually opened up a number of other data monetization opportunities. They may want to sell some of that data to governmental organizations that are trying to better understand how climate change is impacting, at the ground level, certain conditions that are related to agriculture. There may be financial services institutions that want to understand how the changing climate, and the modalities being collected, might impact crop yield from a financial perspective, from a commodities perspective.

They've already collected the data. The data is already being collected. What we allow them to do is pipe all of that data into our platform, and then allow a fairly non-technical user to package that data into datasets that can be sold to financial institutions, governmental institutions, and NGOs, and really anyone else that's looking for that data that will help them solve a problem.

Erik: Okay. Great. Since you mentioned it, maybe we can get into the users here. I mean, this is an area where there's obviously a lot of technology involved. But you mentioned that it's possible for users that are not particularly technical to use the platform. So, what does it look like? If I go on the platform, and I'm trying to find a particular data set, I have to make sure that the metadata also matches my requirements. But I might be a little bit vague on what that actually entails. What would be the user journey for a non-technical individual?

Nick: We, basically, have a data catalog. So, you can open up a page and see hundreds of different types of data that are available in the platform. When I say types of data, I don't mean data sets, really. I mean, fields that exist in datasets. One of the things in that data catalog might be air quality, particulate matter in parts per million. One of them might be air temperature. One might be soil temperature. One might be the collection timestamp — when a particular reading was collected.

Each of these things lives in a list. Usually, the first thing that people do is they define what they're looking for. So, if you're an analyst, or even if you're a business development person that's working with an analyst, and you say, "I need a dataset where I can correlate particulate matter, air quality, and soil temperature with crop yield. I already have a data set that shows me crop yields. I now just need to buy a data set that has the air quality, that has the soil temperature, that has the time of when those measurements happened, and maybe the location — the latitude and longitude of those measurements."

If I could get that data set, and I could combine it with the data set that I already have, I can now go produce a report that shows the correlation between air quality, soil temperature, and crop yield. So, they go into that data catalog. They find the four pieces of information that they need, that they don't already have. We can basically say, "Yes, we have one or more providers on the platform that can make that data available to you." If you tell us more specifically which states, which latitude and longitude you want that data from, we can actually give you a forecast of how much of that data is available and how much it would cost. Then even beyond that, if you like the cost and how much data is available based on the settings that you've put in, you can click a button and put in your credit card, and buy that data and have that data available on your hard drive within 15 or 20 minutes.
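
The forecast step Nick describes, estimating availability and cost before committing to buy, might look something like this in spirit. The catalog entries, field names, and prices below are invented for illustration.

```python
# Illustrative forecast step: before buying, estimate how many matching
# records exist and what they would cost. Catalog entries are made up.
catalog = [
    {"provider": "agri_sensors_llc",
     "fields": {"air_quality", "soil_temp", "timestamp", "lat_lon"},
     "matching_records": 4_200_000, "price_per_record_usd": 0.00002},
    {"provider": "weather_net",
     "fields": {"air_quality", "timestamp", "lat_lon"},
     "matching_records": 9_000_000, "price_per_record_usd": 0.00001},
]

needed = {"air_quality", "soil_temp", "timestamp", "lat_lon"}
eligible = [c for c in catalog if needed <= c["fields"]]  # has every needed field

total_records = sum(c["matching_records"] for c in eligible)
total_cost = sum(c["matching_records"] * c["price_per_record_usd"] for c in eligible)
print(total_records, f"records, est. ${total_cost:,.2f}")
```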

Erik: Okay. I'm fascinated. The cost structure here, is it the seller? Is it like you have a range, like a recommended range, but then the seller would decide how much they want to price their data? How do you align on pricing for the datasets?

Nick: Narrative is an entity that doesn't buy data. We don't sell data. We provide the software and the pipes that allow the buying and selling to happen. And so, we are very dogmatic that both the buyer and seller should be able to set their own business rules when working within this platform. From a seller's perspective, they can set whatever price that they want. They can set price based on the number of records that are purchased. They can have a flat fee for their data. They can give their data away for free, if they want to give their data away for free.

They also have controls on who can buy their data. We talk about it as if it's a fully public marketplace. Many data providers do make their data fully publicly available. But some sellers will actually constrain it to say, "Hey, I'll sell this data, but I don't want to sell it to my competitor," or, "I'll sell this data, but I don't want to sell it to someone in an industry where maybe I don't have the contractual rights to sell the data." So, our customers aren't losing any control over how they price, or how they monetize, or who they sell their data to.

Then on the buy-side, you can also say, "I'm willing to pay a particular price." If you're willing to pay a price that isn't congruent with the price that the seller is looking to get, then you can't buy their data without going into a process within the platform where you can actually say to the seller, "Hey, I'd really like your data. I work as an academic researcher at Iowa State University, and so I don't have a huge budget to buy your climate IoT data. But I'd really like to use this data to write a paper. Is there any chance that you would sell it to me for cheaper or give it to me for free based on my use case?"

So, you can actually have those conversations within the platform. Because while we'd like to automate everything, we understand that oftentimes, in business, there are negotiations. I think our goal is to keep those negotiations in the platform as much as possible, and have them happen as quickly as possible, so we don't end up in a situation like before Narrative, where those conversations might take months and require 10, 15, 20 meetings to get done.
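
To make the buyer and seller business rules above concrete, here is an illustrative check of whether an order clears a seller's rules. The rule fields, names, and prices are hypothetical, not the platform's actual schema.

```python
# Illustrative matching of seller rules against a buyer's order.
seller_rule = {
    "price_per_record_usd": 0.00002,
    "blocked_buyers": {"rival_weather_co"},
    "allowed_industries": {"research", "finance", "government"},
}

buyer_order = {
    "buyer": "iowa_state_research",
    "industry": "research",
    "max_price_per_record_usd": 0.00001,   # below the seller's ask
}

def can_transact(rule, order):
    if order["buyer"] in rule["blocked_buyers"]:
        return False, "buyer is blocked by the seller"
    if order["industry"] not in rule["allowed_industries"]:
        return False, "industry not permitted by the seller"
    if order["max_price_per_record_usd"] < rule["price_per_record_usd"]:
        return False, "bid below ask: negotiate in-platform or adjust"
    return True, "ok"

print(can_transact(seller_rule, buyer_order))
# (False, 'bid below ask: negotiate in-platform or adjust')
```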

Erik: Okay. Yeah, that's pragmatic. Then your business model, would it be charging a fraction of a cent on every piece of data that's moved? How does that work?

Nick: Yeah, we have two lines of revenue. Where we are the facilitator of the transaction, we will take a small percentage of that transaction. We actually take the percentage on both the buy-side and sell-side. If someone buys $100 worth of data, we invoice them for $110. We pay $90 to the seller. We do that because we don't see ourselves as a fiduciary for the buyer or the seller separately; we see ourselves as a fiduciary for both sides. We really want to be agnostic from an incentive perspective.
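
Using the example figures Nick gives, the transaction fee works out like this. This is simple arithmetic, assuming the same 10% take on each side as in his example.

```python
# The fee structure from the example above, as simple arithmetic.
gross = 100.00
platform_take_rate = 0.10          # assumed from the $110 / $90 example

buyer_invoice = gross * (1 + platform_take_rate)   # $110 charged to the buyer
seller_payout = gross * (1 - platform_take_rate)   # $90 paid to the seller
platform_revenue = buyer_invoice - seller_payout   # $20 kept by the platform

print(f"{buyer_invoice:.2f} {seller_payout:.2f} {platform_revenue:.2f}")
# 110.00 90.00 20.00
```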

Then we also license the software itself. Just like Snowflake licenses their database technology: you pay for Snowflake based on the amount of data that you store, the queries that you run, and how much processing happens. A lot of our value add is we take these really complex transactions and automate them. The size and the complexity of those transactions largely backs into what our cost basis is. And so, we will also charge just for use of the platform, because we're providing compute and transfer and storage for all of our customers.

Erik: Got you. Okay. Interesting. Then there's the question of verification, I guess. I get probably one email a day or so where somebody's selling me some dataset about an event that they scraped. I'm sure that you're able to filter that out. But, still, there could be issues here regarding ownership of data. Does somebody really have the right to sell the data? And around accuracy: is the data any good? Somebody could be using an algorithm to just auto-generate datasets and make them look valid. What's the process to guarantee ownership and accuracy?

Nick: There's a number of different protections in place. First and foremost, the platform is entirely transparent. When a buyer buys data from a seller, the buyer knows who the seller is. The seller knows who the buyer is. And so, if there are any concerns, if there's a level of diligence that either side wants to do on the other, they are free to do that — either inside or outside of the platform. As I like to say, sunlight is the best disinfectant. To the extent that no one's hiding their identity on the platform, it allows for each company to make their own decisions in terms of risk and reward, in terms of governance and quality, and all of the rest.

The second thing is, we have some apps that sit on top of our pipes that can actually do things like score the validity of datasets. Those are typically done in two different ways. One might be for a buyer to upload data that they believe is already very high quality. We can actually compare the buyer's data with the seller's data within the platform, without either the buyer or the seller seeing each other's data. Basically, create a confusion matrix that says, "Hey, where you think X thing is true, 98% of the time the supplier also thinks X thing is true. Where you think X thing is false, the supplier only thinks that thing is false 65% of the time." So, you can create some metrics around if you have a data set that you strongly believe in, how in line, how accurate, what is the precision of that data set when compared to the supplier's data set. At that point, the buyer can make a decision based on whatever it may consider a good score across those.
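
Here is an illustrative version of the scoring Nick describes: comparing a buyer's trusted records against a supplier's claims for the same keys and deriving agreement rates. The data and key names are made up, and in the platform this comparison happens without either side seeing the other's raw data.

```python
# Illustrative overlap scoring: buyer's trusted labels vs. supplier's claims.
buyer_truth = {"u1": True, "u2": True, "u3": False, "u4": False, "u5": True}
supplier    = {"u1": True, "u2": False, "u3": False, "u4": True, "u5": True}

tp = sum(1 for k, v in buyer_truth.items() if v and supplier.get(k))
fn = sum(1 for k, v in buyer_truth.items() if v and not supplier.get(k))
tn = sum(1 for k, v in buyer_truth.items() if not v and not supplier.get(k))
fp = sum(1 for k, v in buyer_truth.items() if not v and supplier.get(k))

agreement_when_true = tp / (tp + fn)    # "where you think X is true..."
agreement_when_false = tn / (tn + fp)   # "...where you think X is false"
precision = tp / (tp + fp)

print(f"{agreement_when_true:.2f} {agreement_when_false:.2f} {precision:.2f}")
```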

Then we also have third-party data providers whose entire business is doing data validation. They make their data available in the platform for a fee. So, if a buyer wanted to score some data, where maybe they didn't have any ground truth, they could basically license that trusted third-party's data to do that same process of scoring the supplier, and figuring out if it was effective for whatever their given use case might be.

Erik: Okay. It makes sense. Well, let's talk a bit about the use cases, then. Maybe before we get into specific cases, we can talk more generally about the users here. It's a horizontal platform, so I'm sure there's a very fragmented set of users. If we look at industry, if we look at organization type, are we talking about tech companies? Are we talking about research organizations? Are we talking about marketing teams at large consumer products companies? Who would be the most common set of users today?

Nick: I'll give you the same answer that I give my investors. They don't love the answer either. So, I don't know if you will. But it's kind of all of the above. We work with some very big CPGs, both within their research and their marketing arms, to access data. We work with some research groups and a couple of different polling and survey-related firms. On the polling side, with every question a pollster asks you, you're more likely to stop responding to the pollster. So, they're really focused on getting the answers to the most important questions they want to ask. But then, they want to add additional answers in a way that is passive for the person. They enrich their datasets with additional things.

Within technology organizations, it's often product management teams, or data science, or data engineering, or data analytics teams. Data has become so pervasive through almost every organization, that we see almost every team, every vertical, and every size of company as a potential customer. I will say that our focus has been more on medium to large enterprises than it has been to the small business and mid-market. But we do have some two- and three-person startups or companies that maybe have 80 or 90 people that have a very specific data need. Most recently, we started working with a lot of retailers, as retailers look to figure out how they can use their data to augment their retail business. So, it really is broad-based.

I contend my investors should be very happy about that. I think the challenge becomes, as a 40-person company, trying to serve everyone is a little bit of a fool's errand. So, we need to show a little bit of focus. But the software was built in such a way that almost anyone can use it. As a CEO, as a founder, I have trouble saying, "I have software that can help you solve a problem, but because you're not the specific focus of my company, I'm not going to let you go use it to solve your problem."

Erik: Yeah, sure. To some extent, you also have to learn, as a company, where the value is. That requires experimentation by your customers as well. What about on the other side of the marketplace? You've mentioned a couple types of contributors of data from the sell-side. Is it mostly companies that are running a business and happen to have a dataset? They say, "Well, we can earn an extra 3% on our profit margin if we monetize this data. It doesn't cost very much, so let's do it." Or are there companies that are building businesses specifically around monetizing data, where that's a primary revenue stream for them? What does that marketplace, the sell-side, look like?

Nick: It's both. I mean, there certainly are companies where that is their primary model. They sell data, and we give them an easier, faster, better way to sell data. That happens with companies that have been selling data for 20 years, and we just give them a piece of technology that makes their lives easier. We also talk to companies that say, "That's our primary business, and we've been doing it for six weeks." They're really nowhere yet.

Arguably, the ones that have been doing it for six weeks are a little bit easier to work with, because they don't have any sunk cost in the systems they may have built themselves. They're just trying to get off the ground. We very much look like Salesforce for data monetization to them. No company in 2022 builds their own CRM. 30 years ago, 40 years ago, that wasn't true. You might have built your own CRM, because it was really hard to get something turnkey that did what you wanted. But we see both there.

What I find most interesting is that we're seeing more and more companies, and I think this is especially true with IoT or, really, anyone that builds hardware. Hardware, historically, is a very low-margin business. If you go buy a 75-inch ultra-high-def television on Amazon, you can basically buy it for $700, I think, today. It may cost the television manufacturer $650 if you include their marketing and their distribution to actually sell that TV. The retailer has taken another 30% or 40% of the profit margin. So, you sell a $750 TV, you may keep $45, maybe less, in profit.

So, over the lifetime of that TV — which may be five to 10 years — if you can also find ways to take data from that TV and monetize it, almost act like a Nielsen for folks that are trying to understand what people are watching on their TV, you may be able to earn another $50 to $100 for that same TV. On a top-line basis, the revenue may seem like it's 5% or 10% of your gross revenue. But when you actually look at the profitability of those things, you can sell a lot of $750 TVs, but the profit remains fairly low. The profit margin on data is very, very high. And so, even if it doesn't make up a huge chunk of gross revenue, it can make up a much larger chunk of your profit margin.

Erik: Yeah, it makes sense. I mean, there's been a lot of talk in the automotive industry about this. Again, another industry with a lot of data, very high revenue, very low margins. Some interesting cases.

Nick: Erik, I think you're seeing that a ton in retail right now. I don't fully grok the grocery store margins. But my guess is they're pretty thin. I guess, places like Amazon selling groceries make their margins even thinner. And so, they certainly are looking at ways to augment revenue. If you have a family that does all of their grocery shopping at one grocery chain, that's very rich data about how those people behave. It can be monetized for all of those CPG brands. It can be monetized in any number of ways.

I think, almost ironically, the Amazons of the world are leading in the retail data monetization space. Because arguably, they're the ones that need to do that least. They also happen to be the most technically savvy. You're starting to see almost all of the other retailers, not just in grocery but in other areas start to say, "Hey, I'm sitting on a goldmine worth of data. My traditional business runs at very low margins. We need to figure this out sooner, rather than later."

Erik: Yeah, it makes sense. In cases like this, you're dealing with PII data. Is it up to the seller to ensure that they're complying with everything (if they're dealing with Europe, GDPR, and so forth, that they're in compliance)? Or do you have that built into the platform?

Nick: A little bit of both. It certainly is up to them to make sure that they're complying. In GDPR terms, we are a processor, not a controller. That being said, we give our customers a bunch of tools to make that compliance easier — tools around right-to-be-forgotten requests: us actually passing those requests downstream to anyone else that may have access to that data, getting acknowledgments from the downstream providers that they received the request, receiving another acknowledgement when they've actually deleted the data, propagating that acknowledgement back to the originator of the data, and basically keeping a paper trail of the entire thing.
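
A sketch of what the right-to-be-forgotten paper trail described above could look like as data: a request fans out to downstream recipients, and each receipt and deletion acknowledgement is logged and reportable back to the originator. The event names and structure are assumptions for illustration, not Narrative's actual tooling.

```python
from datetime import datetime, timezone

# Illustrative right-to-be-forgotten audit trail; names are hypothetical.
def now():
    return datetime.now(timezone.utc).isoformat()

rtbf_request = {"subject_id": "hashed-user-123", "requested_at": now()}
downstream = ["buyer_a", "buyer_b"]

audit_log = [{"event": "request_created", **rtbf_request}]

for recipient in downstream:
    audit_log.append({"event": "request_forwarded", "to": recipient, "at": now()})
    audit_log.append({"event": "receipt_acknowledged", "by": recipient, "at": now()})
    audit_log.append({"event": "deletion_confirmed", "by": recipient, "at": now()})

audit_log.append({"event": "originator_notified", "at": now()})
for entry in audit_log:
    print(entry)
```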

We tell people that we're not a silver bullet where you no longer have to worry about GDPR, CCPA, or any of the other four-letter acronyms that go along with this stuff. Frankly, if anyone tells you they're a silver bullet, run as fast as you can, because they're not. Our goal is to lessen the administrative burden by giving folks a bunch of tools that let them be compliant and be confident in their compliance, while still keeping an eye on what the regulatory frameworks are asking them to do.

Erik: Okay. Great. Well, let's talk about the future. You've mentioned already that you're venture-backed. I imagine that you always have a certain amount of pressure from your investors to look towards the future and carve out that vision. If you look, maybe, medium term, 24 months, what's on the horizon for your product roadmap, maybe new markets that you want to enter? Then, if there are topics longer term — 5, 10 years down the road — that you're particularly excited about in terms of potential developments, we'd love to hear those as well.

Nick: We're at somewhat of an interesting point. Early on in any startup's lifecycle, there tends to be a lot of innovation. You're making a lot of step-function improvements or changes in the product. That certainly was true for us over the first three or four years of the product. We've really hit a place where the product is very strong. I tell people — not facetiously, although I think they assume it is — that I've got a 100-year roadmap in my head.

So, there is a lot of work to be done. A lot of that work is now making incremental improvements to different modules and different components within the platform. A lot of the core functionality is actually there. The thing that we rolled out earlier this year, and continue to improve on — I like to refer to it as machine learning; the marketer in me would refer to it as artificial intelligence — is that as data comes into the platform, our system is able to classify the data and also infer any transformations that may need to be made to that data to eliminate the normalization problem that we typically see in data marketplaces.

So, I've talked about weather data, right? Air temperature could be in Celsius. It could be in Fahrenheit. It could be in Kelvin. We've actually seen with geolocation data that everyone assumes that latitude and longitude is using the standard latitude and longitude that's used by consumer mapping applications. But oftentimes, they use these different earth projections that are based on standards released by the oil industry. So, you can't just take a latitude and longitude, and assume it's going to be the same across all datasets.

So, there's artificial intelligence basically acting as the normalization and standardization layer in the middle, which just removes the need for both buyers and sellers to have to try to figure out how each other talks about the data or is going to consume the data. We call the technology Rosetta Stone, because we think it acts as that universal translation layer. Certainly, we will keep improving on that over time.
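
As a small illustration of the normalization problem Rosetta Stone addresses, here is a sketch that converts temperature readings in mixed units to a single canonical unit. The record shapes are invented, and a real system would also handle coordinate reference systems, schema mapping, and much more.

```python
# Illustrative normalization pass: detect a record's temperature unit and
# convert it to a canonical one (Celsius here).
def to_celsius(value: float, unit: str) -> float:
    unit = unit.lower()
    if unit in ("c", "celsius"):
        return value
    if unit in ("f", "fahrenheit"):
        return (value - 32.0) * 5.0 / 9.0
    if unit in ("k", "kelvin"):
        return value - 273.15
    raise ValueError(f"unknown temperature unit: {unit}")

raw_records = [
    {"air_temp": 86.0, "unit": "F"},
    {"air_temp": 30.0, "unit": "C"},
    {"air_temp": 303.15, "unit": "K"},
]

normalized = [{"air_temp_c": round(to_celsius(r["air_temp"], r["unit"]), 2)}
              for r in raw_records]
print(normalized)  # all three readings come out as 30.0 C
```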

I think, to your question about verticals or use cases, our product today is, in some ways, generalized. You can go in and use it for agricultural use cases. You can go in and use it for marketing. You can go in and use it for financial services use cases, but you basically get the same interface across all of those. One of the things that we're really focused on is either building applications on top of our products or our platform ourselves that are much more use case or solution specific — taking the same functionality, but just creating a better user experience around it. And also, opening up via a developer SDK to allow other companies and other developers to add on to the platform.

I started this out by saying that data comes in an infinite number of forms. That's one of the reasons that data marketplaces are very challenging. The other challenging part is people want to do an infinite number of things with the data once they get it. Our focus as a company is not really giving people the answers from the data. It's getting them the data. But oftentimes, people want answers, or they want visualizations, or they want predictions to come from the data.

So, allowing other developers to extend our platform, to take the data and the data liquidity that's available there, and actually build applications that go into that much more specific realm of giving someone an answer but doing it natively within our platform will be a big part of our growth over the next couple of years.

Erik: That's interesting. I could imagine a use case like Tableau or something, where you're doing your analysis and you say, "Well, I would like to have that data set," and just plug in. Do you have any partnerships or anything on the horizon where people would be able to embed your datasets in an existing functionality? Maybe that's what you're referring to here.

Nick: Yeah, actually, there's a press release that's going out — literally, as we talk right now — about allowing people to take data from our platform and embed it into a marketing platform called The Trade Desk. So, that's an instance where we're actually taking the data and pushing it downstream on someone's behalf, but doing it in a very native way.

Then you asked the question about data quality. Our first externally developed application on our platform was from a company called Truthset. Truthset's business is that they have data that is scored across a number of different criteria. They built an application on our platform that allows you to take their data scoring methodology and apply it to data that you already own (is this data that I have any good?), or apply it to data that you may be considering buying. So, that's not actually something we do natively. In fact, in some ways, since we want to be agnostic, I don't know if we really want to be the ones that say this data is good, or this data is bad. But they built an application on top of our platform where, with four clicks of a mouse, you can basically score the data. So, you can make a better decision about whether you actually want to buy it or not.

Erik: Okay. Very interesting. Well, I think you got a great business. You certainly timed it well. I can imagine this market is just going to explode in the coming decades. Anything that we didn't touch on, Nick, that you think is important for folks to understand?

Nick: No, I think we talked about a lot. I guess, something that I don't say often enough is the platform is fully self-service. If anyone wanted to go play with it, and see how it works, they can go to app.narrative.io. They can sign up for an account. They can start using it without talking to our amazing enterprise sales team. They can get an idea of how the technology actually works.

Our real goal is to take things that traditionally are really hard or really painful for our customers, and make them easy. We think we've certainly taken a lot of good first steps in that direction. But we're always happy to hear feedback and things that we could do better or places that we may be failing that mission. So, we would love to hear feedback from any of your listeners.

Erik: Awesome. Nick, thanks for joining us today.

Nick: Thanks for having me.
