EP061 - A primer on machine vision: technologies and use cases - Iain Smith, Managing Director, Fisher Smith

Mar 25, 2020

In this episode, we give an introduction to machine vision technology, use case adoption trends, and the key success factors for a high-accuracy solution. We also review the decision-making process for determining the right tech stack, and the cost structure of low- and high-complexity systems.


This is part 1 of 2 with Iain Smith on machine vision. 

Iain Smith is Managing Director and Co-Founder at Fisher Smith. Fisher Smith designs and supplies machine vision systems for automatic inspection and identification of manufactured parts on industrial production lines. https://fishersmith.co.uk

 


Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today will be Iain Smith, Managing Director and cofounder of Fisher Smith. Fisher Smith designs and supplies machine vision systems for automatic inspection and identification of manufactured parts on industrial production lines. And together we discuss trends in machine vision technology and use case adoption, as well as the key success factors for a high accuracy solution. We reviewed the decision making process to determine the right tech stack, and we broke down the cost structure for high and low complexity systems.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Thank you. Iain, thank you so much for joining us today.

Iain: Pleasure to be here.

Erik: So, Iain, before we get into the details of what Fisher Smith does and the machine vision technologies that you're working with, I want to understand a little bit more where you personally are coming from. I see that you've been with Fisher Smith for quite a while, about 15 or 16 years now. Were you part of the founding team? What's the background story for your time with Fisher Smith?

Iain: Yeah, so I'm one of the founder-owners of the business. We started it in 2004. I did a degree in engineering mathematics at the University of Bristol here in the UK. And as part of that, my dissertation ended up drifting towards computer vision and some of the software aspects around that. And when I started working after university, I joined a company that was effectively a machine vision machine builder. So they were making complete machines with mechanical handling, part feeding, rejection systems, the electrical stuff, the mechanical design, the build, as well as the software, the cameras, the vision aspects as well.

And when that company ceased trading, myself and one of the other people working there decided that we would start up our own business to continue some of that work and cherry-pick. We changed our focus a little bit from that point, because we could see that our personal skills were not mechanical design or electrical design and build; we were vision engineers. It was the software, the configuration, the integration of the vision equipment that were the strengths we had. So we focused on that and stopped building machines, and started looking for other partners in the marketplace that either had machines that needed vision added to them, or were building machines that we could work on in partnership with them. And that's been one of our main business models going forward.

Erik: So you've pretty much been a machine vision company from day one. I guess you probably made the right bet in terms of focusing on the software. That seems to be where most of the innovation has been. But what have been the big changes in your business since you started up in 2004?

Iain: So I guess there are two or three things that we could point to as really big changes. One is probably an economic factor: we had the big financial crash. And prior to that, a lot of the automation companies and robot integrators, people making assembly machines in the UK, were starting to grow their teams. So they not only had mechanical and electrical design and build, they also had software for PLC and machine control capability in-house. But they were also starting to expand the team and add more value in-house. So they were starting to integrate vision cameras. They were starting to do other bits of ancillary equipment.

And then when the financial crash happened, a lot of people withdrew very much to their core. They trimmed out any additional staff that were not 100% committed to their core work and started looking for subcontractors for some of the stuff that they might need to do on every third machine or every fifth machine. And vision was certainly one of those. So that was a big change, which suited us quite nicely at that point, because we were there offering the vision aspect as a subcontractor, effectively. And that dovetailed in quite nicely with some of those automation companies who no longer wanted to do the vision in-house.

Erik: And has that trend reversed as the economy has rebounded or?

Iain: I think a little bit. So one aspect is that people were also aware that when they were doing it themselves, they weren't always getting it right and they were struggling sometimes. And there's a number of reasons for that. But there's certainly a legacy from that where people who had that experience now value our expertise to solve those problems, get the solution right in the first place, and then implement it for them. So that's certainly one aspect.

The other aspect, which goes back to your earlier question of what's changed in this time, is that the maturity of the equipment on the market has really changed, certainly in the last 5 to 10 years. When I first started, all-in-one smart cameras, so a camera with a computer and some IO all built into one unit, were pretty rare. There were probably a few around, but they were pretty limited, pretty expensive and pretty big. So pretty much all the vision I was doing in my early years of working in industry was PC-based. We were often integrating analog cameras with analog frame grabbers, which are boards that slotted into the industrial computers, and then we were writing computer software in Visual Basic or C++ to give a user interface.

So every single system felt like it was very unique, built very much from the ground up. And therefore we were adding a lot of value, but there was a lot of cost. And the projects were relatively larger but less frequent. The market has matured massively, with a lot of different products now going down to very low price points, which are quite capable of doing quite simple tasks if they're deployed correctly. And then there's really a complete spectrum of products in the market now which allows you to solve most problems at a pretty cost-effective level.

So that's also been a very big change: where we're adding value to the vision hardware has changed. We're able to add less value to the products that have got better, more user-friendly, simpler to use, quicker to deploy; our involvement in them is slimming right down. But then at the higher end of the market, we're getting different and newer technologies coming in, which are the ones that still require quite a lot of value add from us.

Erik: I want to kind of dive into the technology and what the tech stack looks like and what some of these newer technologies that are coming to market are. But before we go there, I want to let everybody know a little bit more about what you do and where you sit in the value chain here. So how would you define Fisher Smith's value proposition and how you interface with the other technology providers in deployment of the solution?

Iain: So we, I guess, describe ourselves as machine vision integrators. So we're not manufacturing any of the hardware. We're taking generally hardware that's commercially available, making decisions on what's going to solve the problem for the customer and then deploying those. So that's putting the hardware together, writing the software, or configuring the software to whatever degree that needs.

Our customers are a mixture, but they probably fall into three distinct categories. So, one group of customers are the automation companies. So they're people that are building machines, maybe one offs, maybe multiple units of a similar thing. But they're the ones that are putting the robots together, making assembly systems, combining parts, making parts and then we're coming in and adding the vision bits to their offering.

Then we've got another group of customers which are the actual end users as we describe them where these are people that are actually manufacturing something. So they've got a factory, they're making widgets. They might be plastic bits, metal bits. And they have a production line and they need some vision to do some aspects of quality control or to assist in automatic handling on those lines.

But again, the customers that we tend to work best with are the ones that have some degree of in-house engineering, so they're able to take our equipment, mount it, put it in position, wire it up, and then we come in and do the software aspects. It's not a hard and fast line; sometimes we do little bits of those. But we do try and keep a line we don't cross, because we know that for most of our customers, that's the skill set they have, and we don't want to tread on their toes, so they keep coming back to use us.

The third group of customers that we deal with are what we call OEMs, or Original Equipment Manufacturers. And these tend to be people that are building a specific commercialized system. So they're building multiples of the same machine over and over. And with those, we're often selling much more at a component level. So it's more like distribution that we do, but with more value added. Because often at the start of that process, we're making some quite detailed decisions about what equipment they need, possibly assisting in writing some of the software to go in those cameras. And then the deployment, once it's fixed, just rolls out as a hardware sale for us.

And even with what we consider our standard customers, the automation customers, some of them have more or less in-house skill, and some want us to work with them from a very early stage. Really, the best relationships we have are where, at an early point in the sales process, we get presented with: here are the parts, here is the customer's requirement for those parts. This is the problem that they've got, or perceive that they're going to have, where they need to guarantee the quality of the part, or they need a way of automatically feeding it to other bits of the process.

And then we can say, well, to do that, we're going to need to deploy this sort of technology. We get a price in as part of the quotation package, and we then have ownership of the vision bit of that through the quotation, to order, to the build, and then to the sign-off. And usually, it's up to us to decide or to specify what the vision equipment is and to make sure that it will fulfill the customer's requirement, to the point where we usually get paid for the last sections of those jobs only when it's been fully accepted and proven to be working in the environment on their site with freshly manufactured parts.

Erik: I imagine when you first started up, you were working with a lot of customers who were doing this for the first time, and maybe the project was approached as somewhat of an innovation project or a special project. I imagine a lot of your first customers didn't necessarily have processes and preferred vendors and so forth around this category. Has it matured to the point today where most of the companies you're working with already know pretty much what they want, they've done this before in maybe different factories or on different equipment, and now they're coming through a more standardized production process? Or is it often still driven as somewhat of a special project or an innovation-related project?

Iain: I think we're somewhere between the two scenarios that you paint there. Vision is now an accepted technology. It's used very widely. But there's still an element of, I'd say, misunderstanding for a lot of end customers; they don't know enough about it to make a very informed choice. And the customers that we've done multiple projects with are starting to get a very good idea of: we think that's going to be hard, or we think that's going to be okay.

So it's starting to mature in that respect, in that people are starting to give quotations at early stages in the sales process with a budget figure for vision in them, based on their own experience. And then as the sale progresses and it becomes more firm and more real, they'll come back to us and say: we've budgeted for a single smart camera, which we thought would be okay for this. Can you check this out and firm up those prices, because we need to give a detailed breakdown quote now to the end customer for them to make the decision?

So we are seeing that more. But there's still the feeling that our customers could easily get it wrong, so they still prefer to use us to double-check these, rather than saying, yeah, we've done all this before, it will be fine, we'll just buy the hardware from you this time, we don't need any integration time or any support. Certainly, the best relationships we have are the ones that value our input during that process.

Erik: Because I guess you also have a range of fairly mature technologies here but also a lot of quite newer technologies that are addressing challenges that maybe were not easily addressed with previous generations. Have you ever thought about productizing your expertise? I'm sure you've worked on projects before where you finished a large integration, and you probably sat back and thought, you know what, we could just take this product that we've just built and productize this, and there's probably 100 other potential customers out there. Is that something that you've done before, or you've seriously considered?

Iain: It's something that we have considered. It's often not quite as clear-cut, because the vision is often part of a larger machine. For some of those, we really need to work in tandem or in partnership with the automation company: what we've made here is a machine that does this, and the vision inspection or vision guiding bit is a very important part of that, but we need both parts to be fully functional.

With one of the automation companies we've done quite a lot with, they effectively have a couple of standard machines that have vision as a large portion of them. But I would say, more generally, we're seeing most projects are one-offs; they're fairly unique. And even the ones that are apparently a repeat or a duplicate of an earlier system often have modifications: we're doing exactly the same product, but it's twice as tall as the previous one, or it's a different color. There's always something slightly different, which means you can't just repeat exactly the same thing. You might be able to repeat most of it, but you then need to tweak and vary and alter it to suit the new scenario.

Erik: Let's talk about some of the use cases. It sounds like you're working primarily in industrial settings with machine builders or machine users. What are the primary use cases that you would be working on?

Iain: So the most often seen one is quality control. And that can go across almost all market sectors, where customers just want to guarantee the product they're making is correct. And different customers have different reasons to come to us for vision. One says: we've just sent a big batch of products to our customer, and they've all had a fault, and they've sent them all back and given us a big fine, and it's costing us a lot to sort them, and it's bad publicity. It's all the negatives. They say, we need to stop this happening again, what can we do? And then it's, okay, we need some system to check that, and vision's often the way to look at that.

Other customers have a more proactive approach to their quality control and say: we want to maintain our market position as being the best, or the particular brand leader or market leader. So even though we haven't had any quality issues, we cannot afford for any quality issues to develop. So we're going to proactively introduce inspection in this area. We're not expecting to find anything, but we want to completely guarantee that nothing is going to get past.

Every different product can be checked. For some of it, if you're looking at the food side of things, it's often the packaging: checking that date codes are readable, barcodes are readable, the text or the label on the packaging is correctly applied in the right place and is for the right product being run, so you're not incorrectly labeling a box as something different.

We've historically done a lot of work with plastics, so bottles, caps and closures. Various either screw-on caps or caps with flip tops, things like that, whether it's a bottle of ketchup or a shower gel or shampoo bottle, where the consumer will be handling that bit of plastic. And therefore it's quite critical, certainly as brands become more and more premium, that you don't have a bit missing from the cap, or the cap doesn't close and seal properly and you end up with a leak.

That means a big mess in transportation, a mess on the supermarket shelves. And with a lot of the plastic manufacturing, you're talking about high volumes of product as well. So it's very difficult for a human to be part of that process when you might be making tens of parts every second, whizzing along a production line straight into a box, packaged up and sent on to the next part in the process. If a fault starts occurring in the process, you need something that's constantly monitoring that line.

And this is where vision is really a big win: it's very objective in the decisions it makes. It doesn't need any time off. It doesn't go to sleep. It doesn't change its feelings after a weekend. And it can be really quick. And it's non-contact as well, which is often a benefit, rather than having to stop or manipulate parts. We can often just take an image as the part is moving past, without any intervention.

Other areas where you would think vision would be a good use case, but that often don't use it, are places like aerospace, and some of, say, the motorsports. For instance, around us here in the UK, we have two or three of the major Formula One motor racing teams. They are making fantastically high quality, high specification bits of metal for the engines and things like that. But they're making maybe 5 of these a year, or maybe 10. And each one is going to take maybe a week's worth of effort to make out of a solid block of titanium.

Although the quality requirement is through the roof, if each part takes a week to make, then checking it manually or semi-automatically is no real change to their process. It's a very different thing if you're talking about automotive parts going onto a major production line for a big car manufacturer: they need thousands of these to build one small SUV, and they need every single one of them to be right at the point of manufacture when it goes onto the car, or else it stops the production line. And then there's the cost of lost manufacturing.

Erik: What about manual production lines? I've heard more people lately talking about using machine vision to tackle the challenge of a manual assembly production line, where maybe it's not a quality issue you're trying to understand, it's: what is the pace of output, where is there a bottleneck?

And I think the machine vision solutions here tend to be tracking the axes of a person's body and understanding that this person is taking 2.3 seconds to assemble this piece and pass it on to the next one, and then it takes 2.7 seconds to go to the next, and there's maybe a bottleneck here, and maybe it's because the person is not optimally structuring their movements. This has been one of the areas that's been very difficult to manage efficiently, because everything is being done manually; you don't have the automation that you're describing. Have you deployed any solutions there?

Iain: So we've not done anything where we're actually spotting what the humans are doing and saying this human could be more efficient if they sat over here, actually monitoring the human. But we have done systems where we've been checking the production at a final assembly point. So the part has passed a number of workstations, and then it gets to the end of the line and goes off to packaging or whatever the next process is, and at that point we check: have all the assembled parts been correctly put together? Is the assembly complete, or has anything been missed, not properly tightened, or incorrectly placed? Sometimes that happens at the end of the process. And sometimes it's broken down along the line.

So there's a key area: there are three screws manually put in by an operator here. If one of those is missing, we really don't want to add any more value to a product that needs to be rectified at that point. So you end up with a specific check, just in that area, just to look at one thing and stop it going any further.

Obviously, if you've got a manual production line, then the variability can be greater, and vision gets deployed, possibly with a mixture of technologies. With some of these, you have a mixture of simple reflective or inductive sensors just to say, yes, a part or a bit of the product is present here or not, possibly combined with vision looking for something else that a sensor would be unable to check. So you end up with a mixture of technologies doing pretty basic checks. It's the no-fault-forward methodology: making sure that value is not added to a component that's already wrong.

Erik: Maybe from the Industrial Internet Consortium, which is a group of companies driving collaborative efforts, I think it was Huawei and one other company, but one of the solutions they were building was for doing QC of the axle of a car, so the welding of an axle. And the challenge there was, I think, that the standard process was you take this axle after it's been welded and you do an x-ray of it, because you have to check the strength of the welding.

And so they had a solution that used infrared while the person was doing the welding, and then used some machine learning behind that to look at the heat pattern to determine whether something had been properly welded. Maybe I'm wrong, but it felt like a bit of a cutting-edge application. Are there any applications that are pushing the boundaries, that you've maybe seen for the first or second time, but that you think might scale up, applications maybe enabled by technologies that have just become commercially viable over the past couple of years?

Iain: Absolutely. I think there are probably two areas that we're seeing really opening up at the moment. One of those is 3D. And 3D is not really a new technology; we were certainly deploying 3D systems 10 or more years ago. But what we're now seeing is that some of the 3D products have matured to the point of being products, rather than us having to almost put together a 3D system ourselves with a camera and a laser, and then write software to do the triangulation.

Now we're getting off-the-shelf products that are ready to deploy in industrial environments, that will take ready-to-go 3D images, pre-calibrated out of the box, ready to give you millimeter measurements over a certain volume. And really, in the last five years, we've started to see a big advance in the software and the software tools catching up with the hardware. So we deal with a number of different brands in our marketplace.

And some of them have tool sets that are very powerful, but they're almost at an academic level: you have to understand what's going on and put the pieces together to get to where you want. And now we're starting to see some of the other brands, which are much more commercially focused, getting to the point where you can say, okay, I'm going to teach on that feature and look for it in this 3D image. And it's almost as easy as deploying a 2D solution.

So it's opening doors on applications where before we'd have said, that's going to be really tricky, or it's going to need multiple cameras to achieve. And now we're looking at them and thinking, actually, one of these 3D cameras would do that and give us all the information we need. It helps because it often cuts out some of the color variations, backgrounds, things like that, that might have been an issue with a 2D image. We get all the data we want, and it's now becoming quite easy to process it.

Erik: Just to understand this a little bit more: when you say a 3D image, does this mean cameras looking at the object from different sides, so you're creating a 3D image of this object? Is that the objective here?

Iain: Yeah, so there are probably three main ways of generating a 3D image. One is to have multiple cameras looking at a scene, and you calibrate them all together. And from the fact that each one can overlap and see features of the others, you can build up a 3D image, a sort of stereo type of system. That tends to be used less, because the two other technologies we're seeing are easier for hardware manufacturers to productize.

One is laser triangulation. So you've got, usually, a laser line, either straight down or at an angle, and a camera that's set to look at that laser line from an angle. And given the geometry of the laser and the camera, as the 3D object passes underneath, it displaces the laser line, the camera sees the profile change, and then you can make a height measurement from the displacement of the laser line.

And those have become quite mature, to the point where there are a number of manufacturers that make this as a ready-to-go unit with the laser and the camera pre-calibrated, often with onboard processing, which will actually find the line and give you 3D points from the camera, so that you end up with a 3D image coming into the system. Historically, that's been limited by the fact that they need linear motion.

So you either need the parts going down a conveyor belt underneath the camera, with encoder feedback to build up your image line by line, or you need to move the camera on an axis over the parts to achieve the same thing. We've got a couple of manufacturers now where that technology has changed, or they've made innovations on it. So we've now got laser patterns that are being scanned by the camera.

So you have a static object, the object is not moving, and the camera scans a laser pattern over it with a little galvo mirror system. The camera takes multiple images of this pattern moving across the scene and can build up a 3D image from that. We've used a few of those, and they've been very effective.
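
To make the triangulation geometry concrete, here is a minimal sketch of the height calculation a laser profiler performs. The numbers and parameter names are hypothetical, not any particular vendor's API; it assumes the laser projects straight down and the camera views the line at a known angle from vertical.

```python
import math

def height_from_displacement(pixel_shift, mm_per_pixel, camera_angle_deg):
    """Estimate surface height from the sideways shift of the laser line.

    With the laser pointing straight down and the camera viewing the line
    at camera_angle_deg from vertical, a surface raised by h shifts the
    line in the image by h * tan(angle).
    """
    lateral_shift_mm = pixel_shift * mm_per_pixel  # shift on the object plane
    return lateral_shift_mm / math.tan(math.radians(camera_angle_deg))

# Hypothetical numbers: a 12-pixel shift at 0.05 mm/pixel with the camera
# at 30 degrees puts the surface about 1.04 mm above the reference plane.
print(round(height_from_displacement(12, 0.05, 30), 2))  # -> 1.04
```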

And then the other technology type is where you have a stereo pair of standard cameras, but in the middle, normally, you have a pattern projector, and that pattern projector puts onto your object a pseudo-random pattern of small shapes, which basically adds 3D texture to the image. So each camera is able to start matching up: I can see these features in my image, and the other camera can see the same features in its image. Then, given the known geometry of the camera and pattern projector setup, you can triangulate the two together, and that gives you 3D points.

And then we're getting advances in that technology where the patterns are changing and moving and being shifted around, which starts to eliminate black spots, where you might have had a reflection in one camera, so you get no data and no 3D points. If you can keep manipulating that pattern, then you can fill in quite a lot of those dark areas. So again, we've got ranges of cameras which use this technology and, within a few hundred milliseconds, will acquire a full 3D image of the top of the part from a stationary position. So those are really commercial, ready-to-go products now. Whereas five years ago, maybe there was the odd one or two, but you were breaking new ground to deploy them; now they're becoming standard.
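
The triangulation step behind these projected-pattern stereo cameras is the classic rectified-stereo relation Z = f * B / d. A minimal sketch with hypothetical rig numbers (the products described here do this, plus the feature matching, on board):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Rectified-stereo triangulation: Z = f * B / d.

    disparity_px: horizontal shift of the same projected-pattern feature
                  between the left and right images, in pixels
    focal_px:     focal length expressed in pixels
    baseline_mm:  distance between the two camera centres
    """
    if disparity_px <= 0:
        return None  # feature unmatched, e.g. a reflection black spot
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 1200 px focal length, 100 mm baseline. A pattern
# feature shifted 240 px between the two images sits 500 mm away.
print(depth_from_disparity(240, 1200, 100))  # -> 500.0
```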

Erik: You said there was one other area that was quite interesting right now?

Iain: The other area is deep learning. And again, deep learning is not a new technology. But it's now getting to the point where we've got software packages that are ready to deploy in quite short periods of time. Several of the major players in our marketplace have deep learning software where the neural network has been pre-trained on a lot of industrial images. So the overhead of training it for your specific products is less, because the network has already been pre-prepared to deal with images that are approximately what you might be looking at.

This has really allowed us to look at some products and some projects where, if you tried to do it with a standard rules-based vision system, where you're counting pixels, making measurements, looking for patterns, you just couldn't do it. The deep learning is opening up doors to things you just couldn't have contemplated before; it would have been so complicated that it would never have been robust. These technologies are still more complicated than deploying something more standard, but they're very much ready to deploy and ready to solve problems we just couldn't have tackled before.
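
As a rough illustration of the pre-trained-network idea described here, below is a minimal transfer-learning sketch using torchvision's ResNet-18 as a generic stand-in; the commercial vision packages wrap this kind of workflow behind their own tooling, and the two-class setup and training details are hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone as a stand-in for a vendor's industrial model
# (string weights argument assumes torchvision >= 0.13).
model = models.resnet18(weights="DEFAULT")

# Freeze the general-purpose feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and retrain only a small head on your own good/defect images,
# which is why so few example images are needed.
model.fc = nn.Linear(model.fc.in_features, 2)  # 0 = good, 1 = defect

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """images: a batch of part photos; labels: 0 = good, 1 = defect."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```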

Erik: So maybe you can walk us through your thought process for determining the right tech stack for a situation, because we have quite a range here, between very standard off-the-shelf products and quite new technologies with different capabilities. Maybe it would be interesting to understand the major variables you consider. There's, of course, cost and reliability and so forth, but maybe there are other variables that also play into that decision process. Do you have a checklist, or how would you evaluate what tech stack or what architecture makes sense for a company?

Iain: So normally, we start from the bottom of the range of products we've got, and then eliminate and work up. Our remit as a vision integrator is to solve the problem for the customer. But as you said, they don't want us to do that at any price; they want a cost-effective solution. So we'd certainly start with: can this be done with the bottom end of the market? A very simple vision sensor, an all-in-one, where you've got the camera, the light, the lens, and [inaudible 37:28] or some communication protocols all built into one box. Very low cost, but very restricted.

If that will do it, then why not deploy that? It's easy to deploy. It's also very easy to hand over to the end customer with some training, so that if they do need to change something in the future, the software that comes with it is very user-friendly and they can easily pick it up and make a small modification or a slight adjustment.

So although it's nice to do the complex projects, we also appreciate there's little overhead for us in deploying these. There's also a complexity issue: if we hand over a very complex system, some customers will struggle to live with it and to maintain it without a lot of our input going forward. So there are several reasons why it suits our customers to go to the simpler end of the market. But there are also good reasons for us to deploy simpler systems if we can. Whilst they don't put such big figures on our turnover, they're often easier to deploy, draw a line under, stop supporting, and move on to the next project.

So, typically, when a new product comes to us with a specification of, we want to check for this and this and this, we check with our customer, or the customer's customer, whoever is putting constraints on the system: how fast are the parts going, how are they going to be handled, are they going to be stopped in front of the camera or moving, are we doing them one at a time or in multiples?

And then all of that starts to inform our decisions. Okay, if we're doing them at really high speed, then maybe we can't look at one of these 3D cameras, or a really high resolution camera, because we might only get three to five frames a second from it. So can we do it with the technology in front of us? Or do we need to say, we need four running at the same time, because each camera can only go at a certain speed and you're telling us the parts are coming off faster than that? So there are decisions around quantity, and obviously all of those decisions influence what system we might need to put together.
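
The camera-count decision is simple arithmetic once the constraints are known. A back-of-envelope sketch with hypothetical numbers in the range mentioned above:

```python
import math

# Hypothetical line: parts arriving at 12 per second, a high-resolution
# 3D camera that manages 3 frames per second, one inspection per part.
parts_per_second = 12
frames_per_second_per_camera = 3

cameras_needed = math.ceil(parts_per_second / frames_per_second_per_camera)
print(cameras_needed)  # -> 4 cameras sharing the stream of parts
```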

So at the bottom end of the market, you've got the smart sensors. These tend to be a camera with a built-in lens, with a built-in light shining from the front of the camera, and then some connections at the back. It'll do everything, almost as a sensor: once you set it up, you can unplug the configuration software, and it just works. They're fine if they work. Often, you then find, well, I might have the right tool set in that smart camera, I might have the right software inside there to do the inspection I need, but the light is wrong, I can't see enough of the part. Or I can't focus in on a small enough area to pick out the feature that I want.

So one of the key considerations, something we haven't really talked about yet but which is absolutely critical to vision, is the lighting and the optics. And that really is the first point, because however good your camera's resolution, however fast your camera can acquire, however fancy your software and the tools and capabilities you've got, with the deep learning and things behind there, if your image is not of sufficient quality, then you've got very poor data to start with. However hard you process that data, you're not changing the fact that the data you started with is poor.

So the first thing we try and do is get the image data as good as we can get it. We're looking to highlight any features or any faults with as much contrast as possible. So it's often a question of choosing the right lenses, the right optics, to focus in on the areas we care about, possibly ignoring other areas if they're not deemed to be needed for inspection at all. And then, very critically, it's getting the light right. That might be the color of the light. It might be the direction the light's coming from.

For some things, like measurements, the ideal is to have a silhouette of the part. So you really want light from behind the part and the camera in front, and you get a nice silhouette around the part. But for other aspects, you want to light the surface. And if you've got a product that's very reflective, you may not be able to shine light straight at the product, because you just get reflections straight back into the camera, and then you don't see anything you need to see.

So often part of our sales process is to analyze these products under a number of different lighting scenarios and work out what is going to give us a nice, strong, high-contrast image of the features we're looking at. And this is something our customers often struggle a little bit with: we're not looking for a nice photographic image. We're not looking to make their product look nice on the screen. We're looking to highlight the faults, ideally black to white, white to black, so that when a fault occurs, the software has to do the least possible to find it, and find it reliably. The choice of the lenses and the lighting is probably the most critical bit of any system.
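
This is why good lighting shrinks the software problem. A minimal, self-contained sketch (synthetic image, hypothetical threshold and area gate) of how little processing a high-contrast fault needs:

```python
import numpy as np
import cv2

# Synthetic stand-in for a well-lit inspection image: a bright part
# surface with one dark fault, i.e. the black-to-white contrast the
# lighting is chosen to produce.
image = np.full((200, 200), 220, dtype=np.uint8)    # bright part surface
cv2.circle(image, (120, 80), 12, 30, thickness=-1)  # dark fault blob

# With that contrast, one fixed threshold separates fault from surface;
# no clever processing is required.
_, fault_mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY_INV)
fault_pixels = cv2.countNonZero(fault_mask)

print("reject" if fault_pixels > 50 else "pass")  # hypothetical area gate
```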

Erik: I imagine in some of these situations you then have to change the manufacturing environment, right? Because you have a production line with parts streaming down it, and it's probably hard, in some cases, to configure this so that you have the optimal setup. Are the manufacturers that you're working with often amenable to modifying their production line in order to create this ideal solution?

Iain: Sometimes. If we've got in at an early enough stage of the process, usually at the point of quotation, we can start to specify some of this and say: in our experience, this is going to be really difficult unless we hold it like this, or unless we stop it, or unless we spin it in front of the camera, or whatever the handling might be.

And then we can push that back to the automation company, because they haven't yet fully designed the machine, or the machine's only designed on paper, in CAD, and say: if you put the part through like that, we're really going to struggle to reliably see that feature. Either can we move the inspection to a different place where we can see it reliably, or can you manipulate the part in a different way to give us the optimal view of the component?

Sometimes that's not possible: there's already a production line, and we need to add vision at this point. And there it becomes more difficult, because you're maybe not in control of some of those things. And we might have to say to some of those customers, well, in theory, you can do this, but if the handling isn't good enough, we're going to struggle, or we're going to reject more of your products. If you want us to guarantee that only the good ones go through, then the ones that have moved out of position, twisted, sat up, hit a bump in the conveyor belt, whatever other issue is happening in the process, we might have to say, well, we can't say they're good, so we've got to assume they're bad.

And then there's often a big learning curve for those customers. They think their production line is really smooth and works perfectly, until you put a camera on there that's analyzing every single part and needs them to be well presented. And then you find out that the transport mechanism is not as smooth, or not as reliable, or not as accurate as the customer believed it to be, because they've never had to analyze it to the level we're asking.

If the faults are obvious, or they present very nicely, then it's not a problem. But it always rings alarm bells if the customer says, we need to measure this down to some tiny fraction of a millimeter, but it's going to be bumping down this old conveyor belt. Then we're saying, well, probably the variation we're going to see in the presentation of the part is going to be greater than the tolerance you're trying to measure to; it's just not going to work. If you want to do this, you're probably going to have to stop it and hold it in a very controlled manner, so that we can make a controlled measurement on it.

We're part of a bigger conversation, rather than just bolting a camera on the side of the line. We've got to understand a bit of the process that our customers are either restricted by or already have. And they've also got to understand what we need. And if we don't get certain things, that might compromise the inspection; sometimes those compromises are acceptable, and sometimes they're not.

Erik: I've seen a lot of conveyor belts where the parts are not necessarily even facing the same way, because maybe that's not necessary for the existing process. But how would you break down the main cost variables? Because I guess you've got the hardware, you have the software, you maybe have some cloud deployment or SaaS that's going to be a monthly or regular bill on top in the long term. What are typically the big variables when somebody is looking at the lifetime cost of a machine vision solution?

Iain: Yeah. So I would say for most of our customers, the cost is treated as a capital expenditure. They are looking at spending an amount of money upfront on day one, and then getting paid back on that investment over the next 2, 5, 10 years of production. There's not generally a lot currently where we're looking at SaaS or subscription models; certainly, that's our experience. The industrial market is a very conservative sector, and that change is happening very, very, very slowly.

In terms of cost: at the bottom end of the market, I've touched on what we describe as smart sensors, these all-in-one devices where you've got a camera, light, lens, some processing, some IO and communication, all in one, usually pretty small units. Those are available with varying levels of capability, but we're certainly looking at below 1,000 euros or $1,000 for a device like that. They're almost sold as sensors now. Often there's very little value we can add to them, because by the time we've charged somebody for a day of our time to come and set one up, we could have easily doubled the cost of the hardware.

So at one end, you've got, in industrial terms, very low cost devices, but they are very restricted. They might only have a single software tool in them. And they only make sense if the built-in lens and light work for your application and show up the feature you need to see. But the cost can be as low as that, which for industrial processes is pretty minimal really.

Then I would say our standard is the middle of the range: a more fully featured smart camera, where we can choose the lens and the lighting as separate items, but you've still got the processor, the software tools, and the IO and communications all built into one unit. For an integrated solution, so including our time and sign-off, you're talking, in very round figures, 10,000 euros, $10,000, that sort of price point. And then if you're starting to go to multiple cameras, maybe 3D cameras, the deep learning, then you can be going to 5, 10 times that cost, because you've just got lots more hardware, it's more expensive hardware, and there may be software licenses that go with it as well.

And the deployment time for us goes right up, because we've often got to write graphical user interfaces, GUIs. We may also have to maintain subscriptions for the development licenses for some of those software tools. And rather than just configuring a bit of proprietary software to bolt some tools together in an order, you're talking about a software library that you configure in a proper programming language, C++, C#, whatever it would be, to put all these building blocks together, build exactly the solution you're looking at, talk to multiple cameras, talk to other bits of automation, get data to and from them, and then present all of that, allowing the operator or the user to interact with it, see statistics, see results, see the images, make changes. All of these things take time to generate software for. So the range of costs runs from roughly $1,000 at one end to $100,000 at the high end.

Erik: This has been a great conversation. You're definitely the right person to talk to on the topic. Any critical things that we missed that we should be covering?

Iain: I think what we haven't really talked about is how the data from these camera systems is used. And this is one of our frustrations, really. Often, certainly with quality control, we're putting a camera system on a production line and rejecting parts, so the bad parts don't get through to the customer. But the manufacturing line ideally should be taking that data from the camera and asking: what's happened in our process to make these failures? Why are we getting this? How can we stop it at the source, so the camera rejects nothing?

We often see that customers are happy that they're not sending bad products out. But it's only when the rejects get to a level where either they're getting charged too much for recycling or destroying that waste product, or their customer does an audit and says, actually, you're only 90% efficient on this, why isn't it better, that they start to ask why.

So there's a lot of talk about the Internet of Things, Industry 4.0, all this data collection that's now possible. But the uptake on actually taking that data, processing it, monitoring it, feeding it back into other bits of the process is still pretty disjointed. Often the camera system is put at the very end of the line as a gatekeeper, just to stop bad things getting out. It's not seen as an integral part of: let's feed that back in, let's monitor that data. If it's rejecting there, maybe it's ambient temperature, maybe it's a material change, maybe a motor is running at high current, something is sticking or jamming, or something has changed further upstream that's causing the reject. Then you can start to build that into preventive maintenance; you can start to build that into monitoring which raw materials are critical and which aren't.

We've seen this occasionally with one particular customer we've dealt with, where they're making a plastic part that's basically got a rubber aspect to it as well. And the rubber is black. And we're checking for bits that are missing from that black rubber. And occasionally, they've changed the material, and the rubber comes out with white spots on it: black rubber with white spots. And at every one of those white spots, we say, there's a bit of black rubber missing there. And they say to us, no, that's fine, you can wipe it off the surface. Well, the camera can't tell that. It just sees there's a bit of black missing.

But it should feed back into the thinking: that material causes us rejects. If we pay a tiny amount more for slightly better material, we can eliminate all those rejects, and actually we get a higher efficiency machine at the end of it. And that's what we're not seeing enough of. A lot of these cameras are capable of outputting the data they gather. And although the majority of them can now communicate over the standard industrial protocols, EtherNet/IP, PROFINET, Modbus, all these very standard protocols, it needs whoever's at the other end to be collecting that data, collating it, and then doing something with it to start to improve and monitor the process.
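
As one illustration of closing that loop, here is a minimal, protocol-agnostic sketch of a rolling reject-rate monitor; the window size and alarm threshold are hypothetical, and the pass/fail results would arrive over whichever protocol the line uses.

```python
from collections import deque

class RejectRateMonitor:
    """Sketch of the feedback loop described above: instead of only gating
    bad parts, track the reject rate the camera reports and flag upstream
    drift (material change, temperature, wear) early. Results would come
    in over whatever the line speaks (EtherNet/IP, PROFINET, Modbus, ...).
    """

    def __init__(self, window=500, alarm_rate=0.02):
        self.results = deque(maxlen=window)  # rolling window of parts
        self.alarm_rate = alarm_rate         # hypothetical 2% threshold

    def record(self, passed: bool) -> bool:
        """Feed one inspection result; returns True when the drift alarm fires."""
        self.results.append(passed)
        rate = self.results.count(False) / len(self.results)
        # Only alarm once the window is full, to avoid noisy early rates.
        return len(self.results) == self.results.maxlen and rate > self.alarm_rate

# e.g. wire record(...) to the camera's pass/fail output and page
# maintenance, or log to the plant historian, when it returns True.
```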

Erik: And you have all these platforms coming out, Siemens MindSphere and so on, and I think that's pretty core to their value proposition. This is one of the reasons I was assuming that people were moving more of this data to the cloud: if it's all in firmware, or just on premise, it's hard to process this effectively. Do you advise in this space? Do you start to step more into the business model and say, maybe you have some vibration sensors, and if we start to see scratches, maybe we should determine whether there was an increase in vibration in a particular piece of equipment, we may have [inaudible 58:24] that equipment then needs to be calibrated. Do you get involved in these discussions?

Iain: Occasionally. All too often, we see that once the system has been put into the factory and deployed, the people that were running the project initially, who might have had the overview to say, okay, maybe we need to add an additional sensor here, or maybe we need to monitor this and join these up, have moved on to another machine they're building for the factory, or another project they're managing.

And the day-to-day running of the machine is now usually split between quality and factory maintenance, or production. Production wants parts to come off the machine. Maintenance generally doesn't want to intervene unless it's a breakdown. And quality are reluctant to change anything, so they won't want to change any parameters. And none of those groups of people is really making the decisions to say, actually, we could do this better by monitoring this data and that data; if this one goes up, we know we've got a problem, which means maintenance needs to come in and fix this vibration or whatever it is. It often falls between roles on the factory floor, which means the conversation is often not happening, certainly at a lot of the manufacturing sites we deal with, and therefore it doesn't come up to us at all.

Erik: Okay, so we have a call to action for the general managers that are listening here: think about the use of your data.

Iain: Clearly, there's big potential with the cloud to do a lot of this data processing. But we also see that a lot of the companies we deal with are very cautious about linking up their factory processes, their internal industrial processes and IT systems, to the cloud. With a lot of the systems that we deploy, we try and get remote support, say TeamViewer access. Then when the customer has a problem, rather than driving or flying to site to support them, we can actually see the screen, see what's happening, and work with them to adjust the parameters if need be to address it.

But even just getting remote access to a computer that's on the shop floor can be really, really difficult, because the IT departments are really cautious about allowing external access into their internal processes. And I think we're going to see the same where a lot of these big companies are going to have to decide: do they look at these commercial third-party clouds as somewhere they can store and process data? Or do they see it as, we've got all this proprietary data that is ours and valuable to our company?

We really don't want to trust that to some third party in the cloud on the other side of the world, even if there's a benefit to having the data processed by them. And again, I think it's something that will happen very slowly, certainly from our perspective in the industrial market, just because the market's so conservative, and people are worried about opening up doors into their IT systems that they're not in control of.

Erik: Well, then we can say the solutions have started to productize, but there is still a frontier of challenges and opportunities ahead for Fisher Smith to take care of in the next decade. Iain, I really appreciate you taking the time to walk us through this. It has been super interesting for me, and I'm sure for our audience as well. What is the best way for people to reach out to you or your team?

Iain: So obviously, our website has got case studies of what we've done, information about who we are and what we're doing. So that's www.fishersmith.co.uk. On there, you'll find links to our LinkedIn and Twitter and things like that. So you can see more about what we do day to day, have a look at our website, feel free to get in touch and we can have a chat about machine vision.

Erik: Perfect. Well, we'll link through to that. And yeah, again, Iain, really appreciate the conversation today.

Iain: Nice. Thanks for having me on, Erik.

Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.
