How to be Innovative on a Budget using Simulation

Presented by Arlen Ward, PhD, PE, from System Insight Engineering – October 11, 2018

Reading Time: 16 minutes


Dr. Ward: What can we do besides standard-issue testing in animal labs to get answers in a more cost-effective manner?

Anyone who has ever built or designed a device from the ground up knows: when it’s a sketch on a napkin, it’s cheap to change things. When it’s production tooling, that’s when people start to cry.

If a change costs 1x in the concept phase, by the time you get to the production and test side of things, it’s 500 to 1,000 times what the cost was in the beginning. So you do simulation work first.

Okay, so everybody wants to foster innovation, they want to create innovation, they want to bring innovative devices to market, they want to do innovative things. And there’s lots and lots of theories about how to do that, right?

Every device company I know of talks about innovation in one way or another, from their mission statement all the way through to every corporate meeting they have in their R&D department, because they have the most innovative people and they come up with all these great devices. And every single one of them changes the world, even if it’s a laparoscopic device where we changed the shaft length by an inch and a half. And it’s the most innovative thing that they’ve ever seen.

Not that I’ve ever been a part of those projects.

But here’s the debate, because everybody says these things are self-evident. If you want to be innovative, half of the world says you have to “Fail Fast.” In fact, you have to “Fail Often.” They write books about these things.

And I had to pick this one because it has the arrows and whatnot. But there are probably three dozen books out there on innovation and failing fast. Even Mark Zuckerberg is famous for the line “move fast and break things” at Facebook, and that’s what they credit their success to, and things like that.

The other half of the camp says none of that works, that it’s just complete garbage, and that you need a plan. And it’s not that anyone wants failure. People in both of these camps want the same thing: answers, right?

So the “Fail fast and fail often” crowd says, “Go try it and get an answer.” And the “All that failure stuff is garbage” crowd says, “Just go get the answer.”

Everybody’s after just getting that answer. And the way you get that is through testing things and trying things out and analyzing things, and using all those engineering skills that we’ve been talking about today.

The problem is the budget.

If I say I want to try 300 different tests, nobody’s budget will really sustain that. You know, the product development budgets are smaller and smaller and smaller, and you’re expected to do things faster and faster and faster.

And the small text over here in the corner says, “Pre-clinical data collection costs go up about 15% a year” – that’s independent of any changes to your product launch schedule.

So if you are a company that puts out things on a regular basis, you can expect those costs to go up 15% every year. And it’s not only because those tests themselves get more expensive.

It’s because of the regulatory requirements: the questions that are asked by the FDA or in the EU, and the fringe cases that people are interested in, are all things that have to be investigated. And so those costs go up.

So if you have a two-year device development process, where you have a budget for your pre-clinical testing, by the time you get towards the end, where you’re burning most of that money, you’re off by about 30% or a little bit more than that.
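As a sanity check on those numbers: a 15% annual increase compounds, so over a two-year cycle the costs land about 32% above where they started, which matches the “30% or a little more” figure. Here is a minimal sketch of that arithmetic (the 15% rate comes from the talk; the starting budget is hypothetical):

```python
# Compound growth of pre-clinical data collection costs at ~15% per year
# (rate cited in the talk; budget and horizon are illustrative).
annual_growth = 0.15
years = 2
budget_at_start = 100_000   # hypothetical starting budget, in dollars

cost_at_end = budget_at_start * (1 + annual_growth) ** years
overrun = cost_at_end / budget_at_start - 1
print(f"cost at end: ${cost_at_end:,.0f} ({overrun:.0%} over the original budget)")
# -> cost at end: $132,250 (32% over the original budget)
```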

What’s the alternative to standard issue testing in animal labs?

The question I wanted to talk about today, no surprise based on our conversation this morning, is: what else can we do besides the standard-issue testing in animal labs that might get us those answers in a more cost-effective manner? We’ve talked before about how much faster it might be. But really, at this point, we’re looking at the cost, and what we save by looking at these in a different way.

So we want to address these changes as early as possible, right? As everybody who has ever built or designed a device from the ground up knows, when it’s a sketch on a napkin, it’s really cheap to change things. When it’s production tooling, that’s when people start to cry when you tell them that they need to change something about their design.

And that’s where this line here in the middle comes from, the cost to extract defects. You know, if it’s 1x in the concept phase, by the time you get to the production and test side of things, it’s 500 to 1000 times what the cost was in the beginning.

And if you wait until you start testing things, in the production and test phase, where it’s literally called out right there, that’s an expensive time to start answering questions about whether your device does what it’s supposed to do, and whether you really want to be making those design changes.

So we do simulation work first. Again, that was the hot seat question this morning, where Joe and I got a chance to chat.

And as a company, what we look at is using tissue testing as part of your development. We’re certainly not against using tissue testing. In fact, it’s definitely a requirement for understanding how your device works. But if there is a way you can answer those questions that doesn’t involve the variability of tissue, it’s certainly worth investigating, given the problems with in vivo testing.

The problem with those is they’re time-consuming, expensive, and difficult. And the difficulty comes from the fact that the tissue is just not the same.

The more control we have over technology in that energy-tissue interaction space, the more we’re looking at control of that energy. You can control lasers and electrical energy in ways today that were unheard of 25 years ago. Control systems are much faster, processors are much faster, sensors are more accurate.

So the question becomes: if we have that fine level of control, and a lot of different knobs in the way we design our device, and we’re looking for those effects in something as expensive, difficult, and noisy as tissue data, we’re going to lose a lot of the subtle effects. Not because they don’t exist, but because what you’re using to measure them covers them up, unless you’re looking at very large sample sizes.

Some studies we’ve been a part of looked at the variability of fusing renal arteries, which is used a lot in vessel sealing. There was a 30% variance in that data, even when controlled to the same animal, the same side, the same day, and everything else they could think of. There was still that variance in the performance of the device that couldn’t be accounted for by anything they could come up with on the environmental side.

So collecting data through computer simulations, the FDA and other regulatory bodies refer to that as In Silico data, or In Silico trials. They view all of these different data collection methods in the same way. They consider simulation to be a model much in the way that they consider animal testing to be a model, because it’s a model of what they expect to happen in humans. And in fact, even clinical data is considered a model, because it should reflect what’s happening in the larger population, even though they’re working with a subset.

The FDA puts those all in the category of models. We have different ways of doing this. For the In Silico side of things, we are looking at the design of the device; at how the tissue behaves, because liver versus cardiac muscle, say, will behave differently, reacting differently to heat, to force, to energy absorption; and also at the way that you apply that energy. If you turn it up to 11, as the show goes, things are going to vaporize in different ways.

If you apply the same amount of energy over a longer period of time, you’re going to get a different effect. If you pulse it, you’re going to get a different effect. If you put in a control system where you’re getting feedback from a sensor, you’ll get a different effect. Those are all things that you have to take into account for specific cases when you’re doing a simulation.
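To make that concrete, here is a minimal sketch, not from the talk, of how the delivery profile changes the outcome even in the simplest lumped thermal model: the same total energy delivered continuously versus in pulses produces different peak temperatures, because the tissue cools between pulses. All parameter values are illustrative, not real tissue properties:

```python
# Minimal lumped thermal model: dT/dt = P(t)/C - k * (T - T_amb).
# All values are illustrative, not real tissue properties.
C = 4.0               # heat capacity (J/K)
k = 0.5               # cooling rate (1/s), stands in for perfusion/conduction
T_amb = 37.0          # baseline tissue temperature (deg C)
dt = 0.001            # integration time step (s)
t_end = 10.0          # simulated window (s)
total_energy = 100.0  # same total energy (J) in both cases

def peak_temperature(power_profile):
    """Integrate the lumped model and return the peak temperature."""
    T = T_amb
    peak = T
    for step in range(int(t_end / dt)):
        t = step * dt
        T += (power_profile(t) / C - k * (T - T_amb)) * dt
        peak = max(peak, T)
    return peak

# Continuous delivery: 10 W for the full 10 s.
continuous = lambda t: total_energy / t_end

# Pulsed delivery: the same 100 J in five 0.2 s bursts, one every 2 s.
pulsed = lambda t: total_energy / (5 * 0.2) if (t % 2.0) < 0.2 else 0.0

print(f"peak temperature, continuous: {peak_temperature(continuous):.1f} C")
print(f"peak temperature, pulsed:     {peak_temperature(pulsed):.1f} C")
```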

So it used to be that doing these types of simulations was really the purview of places that had a lot of computing power. And that was, you know, places like IBM, where they had people who spent their entire careers designing new mathematical models, pushing the envelope slightly, because a lot of these can get very complicated.

If you’re looking at computational fluid dynamics, you could easily have problems with multiple millions of degrees of freedom, even just in the fluid flow, much less anything else that’s happening in the system.

And so that required a lot of computing power, or a lot of trade-offs in terms of simplifying your model to something you could actually calculate in a reasonable amount of time versus what you needed to answer for the application. And it was expensive: you had to have in-house experts whose full-time job it was, and you had to have IT staff who could support those large computing centers, and things like that.

And that’s really no longer the case, because Amazon and Google and places like that have made data centers available, and there are even companies like Rescale now where you can use cloud-based computing resources to do the processing for you. So even though it used to be that IBM was the place that did all these calculations, now even a startup has access to this. When we do these types of simulations and run even hundreds of processors against a problem for six hours, Amazon is super-excited about it, because it’s not a user-interface workload. You don’t need an answer back in sub-second time. So when you say, “I don’t care when it comes back, as long as it’s not days from now,” they can load-balance among all of their different data centers around the world.

And then you get your answer back. Instead of something that would run on a very powerful workstation under your desk for a month, you get an answer back in a couple of hours. And then you can look at your result before you forget what it was that you changed in the model in the first place, which is key.
[Laughter]

Doug and I can commiserate about that, you know, where something runs for a few days, and then you realize that you’ve completely forgotten what it was you changed from the last time you ran it, and you have to go back into it again.

But now, that’s a thing of the past. And it puts that computing power in the hands of small startups and design houses. When you only need to use it on an intermittent basis, you don’t have to build up your own computing systems and then maintain them even when they’re idle.

At this point, you’re only paying for the time that you’re actually using, which is surprisingly inexpensive since Amazon started to monetize their idle processing capacity.
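The batch pattern Dr. Ward describes looks roughly like this: fan a set of independent simulation cases out across many workers and collect results whenever they finish, since nothing needs a sub-second answer. This sketch uses local processes as the worker pool; in practice the same pattern maps onto cloud nodes through a service like Rescale or AWS Batch. The run_case function and its placeholder physics are hypothetical stand-ins for a real solver job:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

def run_case(power_watts: float) -> tuple[float, float]:
    """Hypothetical stand-in for one long-running solver job.

    A real workflow would launch a finite-element run here and parse
    the result; this returns a dummy ablation-size value instead.
    """
    ablation_size_mm = 0.8 * power_watts ** 0.5  # placeholder relationship
    return power_watts, ablation_size_mm

if __name__ == "__main__":
    powers = [10, 20, 30, 40, 50]  # one independent job per design point

    # Fan the independent cases out across workers; completion order
    # doesn't matter because each case stands alone.
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(run_case, p) for p in powers]
        for future in as_completed(futures):
            power, size = future.result()
            print(f"{power} W -> ablation size {size:.1f} mm")
```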

This was something that came up last April [at the 10x Conference], when we were talking about using simulation and speed of time-to-market: simulation isn’t really a one-shot deal where you’ve answered everything.

I know from the very beginning of my time working in simulation and medical devices, the Holy Grail that everybody would love is to be able to take their SolidWorks model, upload it into a simulation, and get an answer to everything they could possibly want to know about that particular device in a very short amount of time.

But that’s not how this works. At least not yet. And “not yet” is: I’ve been working on this for 15 years, and it’s still not yet.

Instead, what we have to look at is that particular application, and add enough complexity to answer those questions. And as we can see in these orbits, each orbit touches base with the physical validation, and we get a little bit further out into the complexity space. Eventually we get out to where we believe our models, where we have confidence in our models.

You’re eventually out there, where you’re answering the questions that you need, with a model that you believe. And not only will you believe it, you’ll also have the data to show it to the FDA, and they’ll believe it as well.

So when you start looking at multiple design parameters, even if you just have two or three options for each one, the number of iterations gets huge in a hurry. Once you have models where you feel you have the physics represented in ways that make sense for your application, you can turn them loose on a parameter space and get response curves like this, where you’re looking at, say, tumor ablation size versus power and time, or electrode diameter, or whatever it is you need an answer to. And you can look at where your minimums are, and drive the design from there.
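As a quick illustration of how fast that iteration count grows, a full-factorial sweep multiplies the options together; the parameter names and values below are hypothetical:

```python
from itertools import product

# Hypothetical design parameters, a few candidate values each.
powers = [20, 30, 40]                   # W
durations = [5, 10, 15]                 # s
electrode_diameters = [1.0, 1.5, 2.0]   # mm
delivery_modes = ["continuous", "pulsed", "feedback"]

grid = list(product(powers, durations, electrode_diameters, delivery_modes))
print(f"{len(grid)} runs for a full-factorial sweep")  # 81 runs

# Each entry is one case to hand off to the solver:
power, duration, diameter, mode = grid[0]
```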

You can also look at things like design tolerances. So you know your middle-of-the-road case, right? But then your manufacturing guy says, “Well, you know, how much room do I have to work with here?” You can look at those best-case, worst-case scenarios, where the tolerances are at either end, and look at the performance and see whether you’re on the edge of a cliff or whether you have some room to work, because that can cut down on your production costs.

There are optimizations, which I’ll show you in a second. And then there’s the Monte Carlo side. Monte Carlo is where, instead of putting in individual values for things like tissue properties (because we all know the thermal conductivity of tissue isn’t one value, it’s a distribution across patients and whatnot), you put those distributions into the simulation and look at what kind of distribution you get out on the other end for the thing you care about. Ablation size was the example we used earlier.
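A minimal sketch of that Monte Carlo idea, using a toy surrogate model and an illustrative thermal-conductivity distribution (none of the numbers here come from the talk):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Tissue thermal conductivity isn't one value across patients; draw it
# from an illustrative normal distribution (W/(m*K), toy numbers).
n_runs = 10_000
conductivity = rng.normal(loc=0.50, scale=0.05, size=n_runs)

# Toy surrogate for the full simulation: ablation size shrinks as the
# tissue conducts heat away faster. A real study would run the physics
# model at each sampled property value instead.
ablation_size_mm = 8.0 / np.sqrt(conductivity / 0.50)

print(f"mean ablation size: {ablation_size_mm.mean():.2f} mm")
print(f"5th-95th percentile: {np.percentile(ablation_size_mm, 5):.2f}"
      f"-{np.percentile(ablation_size_mm, 95):.2f} mm")
```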

On the regulatory side, even though there are increasing requirements, we can start using some of the simulation information to address things like patient BMI and differences between diseased tissue and healthy tissue. You can say: we validated all of this in a porcine model, and here’s the simulation that matches the porcine model, but when we change the properties to match human tissue, this is what we expect in our human cases.

And the FDA is completely on board with this. There are two things that have come out recently, in the last couple of years.

The first one is the guidance document that came out in 2016 around reporting computational modeling studies as part of your device submissions.

What’s in there is a whole bunch of checks that you need to hit in order to include simulation data as part of your submission. But the spoiler alert is: it’s exactly what you should be doing as a good simulation person anyway. You’re validating your model, you’re verifying that your code is calculating things correctly. All the things that you should be doing anyway as a good simulation person are things that the FDA wants to see as part of that submission as well.

And the other one is ASME V&V 40, on verification and validation of computational modeling of medical devices. That’s an ASME standard that is supposed to come out this year. At the committee meeting in April, it was supposed to be out in July, and I haven’t seen it yet. So hopefully that should be out before the end of the year.

But that is less about what actual verification and validation you have to do, and more along the lines of context of use: what kind of risks are you looking at, and that’s going to drive what kind of simulation is appropriate. In some places where it’s high risk, you’re going to need to do both the simulation and the animal studies, and answer questions like applicability, things along those lines.

I put this slide in pretty much every presentation I do, because it’s important, and it’s surprisingly important in medical devices.

But you have to do validation; this isn’t an either-or sort of thing. We don’t get to just do simulation and never, ever actually go work in tissue.

And you have to do convergence tests, especially if you run lots and lots of these, so that you know you’re getting to the right solution.
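For readers unfamiliar with convergence testing, the idea is to refine the discretization and watch the quantity of interest settle toward a stable value. A toy one-dimensional sketch, with a stand-in for a real solver:

```python
import numpy as np

def quantity_of_interest(n_cells: int) -> float:
    """Toy 'solver': integrate a smooth temperature-like profile over
    [0, 1] with n_cells trapezoids. Stands in for any mesh-dependent
    result from a real simulation."""
    x = np.linspace(0.0, 1.0, n_cells + 1)
    y = np.sin(np.pi * x) ** 2
    dx = 1.0 / n_cells
    return float(np.sum((y[:-1] + y[1:]) * 0.5 * dx))

# Refine the grid and watch the result converge; a stalled or growing
# relative change would signal a problem with the model or the mesh.
previous = None
for n in [10, 20, 40, 80, 160]:
    q = quantity_of_interest(n)
    if previous is not None:
        rel_change = abs(q - previous) / abs(q)
        print(f"{n:4d} cells: Q = {q:.6f} (rel. change {rel_change:.2e})")
    previous = q
```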

And the last bullet point down there is one that I never thought was going to be an issue in medical devices, but it turns out it is more often than I think we’d be comfortable with: you can’t model what you don’t understand. If you don’t know the physics of why your device works, you can’t leverage simulation as part of your development. I don’t know if it’s still the case at the FDA, but it certainly was a while ago: if you know that you do X, Y, and Z and you always get the result that you want, you can do that enough times and get it through the FDA. But if you don’t understand why X, Y, and Z drive that result, that will prevent you from being able to use at least physics-based modeling for accelerating your development and saving your budget.

Some rough examples real quick. One thing that we were involved in recently: this is actually a device that came out of the Texas Medical Center that we were talking about with Lance Black. And I think he referred to someone at Methodist who was just handed their IP with no strings from the hospital.

This is that project. This is a urologist who came up with an idea for a device, and they were moving from their initial concepts into a first-in-human trial. The top design is the end of the probe that they’re using. But rather than having to build their own prototypes and get those approved for use in humans, they wanted to use bipolar forceps as their proxy electrodes, and they wanted to know what the difference was, whether there was any risk of damaging the tissue by applying these electrical pulses through bipolar forceps (which is what’s in the bottom image) versus their device.

So we did a lot of simulation work around how far away that is from the ureter. We were looking for current concentrations at possible places where you’d get some thermal damage from the application of these electrical pulses.

So we were able to create the visualizations and say it’s unlikely you’re going to have any thermal damage. They were able to take this to the IRB and get approval for their first-in-human based on this type of analysis, rather than requiring more porcine models.

Another powerful technique using simulation is optimization: if you can describe something mathematically, you can turn the computer loose to solve those sorts of things.

I’ve found videos are a good way to communicate what we do in the simulation world to non-technical audiences, because it gives them an idea and walks them through it at a reasonable pace. What we’re looking at here is half of a jaw set, a hemostat-styled device, where they’re trying to minimize the mass of the jaws themselves in order to improve visualization down at the tip of the device. But at the same time, it can’t be so flexible that the jaws are going to deflect and touch and short out if there’s no tissue between them. That was a subset of the different iterations that the computer ran through for the shape optimization; I think it ran through about 350 different designs as it zeroed in on the proper curve, given a certain load at the root and simply supported at the tip. So it was an opportunity to drive through a bunch of those, have them created, and move forward from there as the first pass for prototypes.

Another example is a device for renal denervation. This is a cooled catheter, where you run coolant through the balloon that occludes the vessel. There’s a nerve about four to six millimeters below the surface that you want to apply RF energy to and ablate, but you want to run enough coolant through there that you protect the vessel itself. And we were using optimization techniques on applying the energy into the tissue, to get a good idea of what the behavior needed to be on the algorithm and energy-delivery side rather than on the device itself.

Joe Hage: Are there some medical devices or situations that are not suitable for simulation first?

Arlen Ward: I would say that’s a balance, right? If you have a device that’s easy to prototype and not expensive to test, simulation is going to lose out to just building the prototype and testing it.

If you’ve done a lot of these, whatever the devices are that you’re looking at, and your experience tells you the answers to those questions within a certain margin, that becomes an area where you’re just going to want to build it and test it rather than spend the time on the simulations. I certainly don’t say that simulation is the be-all, end-all that applies in every case. You use them both; it becomes just another tool in the toolbox.

Tor Alden: Great speech. Tor Alden, HS Design. We’re seeing a lot of AI, artificial intelligence, coming into the SolidWorks models and basically taking away our jobs eventually. At what point do you see the use of AI with your simulation tools? And when you mentioned the ablation blades, you said you ran 137 – I forget how many models – did the machine optimize and pick the best one?

Did you just run it overnight and it gives you the right one, or do you have to do the post-modeling analysis to choose which one is the ideal?

Arlen Ward: So in that particular case, for the jaw, we had a math equation that defined what would be optimal.

We basically said: we want to minimize the mass in the jaws by changing the shape, subtracting material away from the shape, but not exceeding a certain deflection in the jaw.

So those were the two things that had to be driven. In that case, it was just turned loose. As it solved cases, it took an initial guess, looked at how it compared to the original, and worked its way through until it came to a minimum in terms of mass, where any further removal of mass or change of shape in any direction would either increase the mass or exceed that deflection limit.

So if you can describe your design goals in terms of the optimization equations, you can turn it loose, and at the end you have one result that the simulation run says is the optimum, based on where you’re at. For the most part, if you can describe what you’re trying to do in those terms, you can turn it loose.
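A minimal sketch of that kind of constrained optimization, using a toy rectangular cantilever as a stand-in for the jaw problem. The objective (mass) and constraint (tip deflection under load) mirror the setup described above, but every number here is illustrative, not from the actual study:

```python
from scipy.optimize import minimize

# Toy stand-in for the jaw problem: a rectangular cantilever beam.
# Design variables: cross-section width b and height h (meters).
E = 2.0e11               # Young's modulus (Pa), steel-like, illustrative
L = 0.03                 # beam length (m)
F = 5.0                  # tip load (N)
rho = 7800.0             # density (kg/m^3)
max_deflection = 0.5e-3  # allowed tip deflection (m)

def mass(x):
    b, h = x
    return rho * b * h * L

def tip_deflection(x):
    b, h = x
    I = b * h**3 / 12.0              # second moment of area
    return F * L**3 / (3.0 * E * I)  # cantilever tip deflection

result = minimize(
    mass,
    x0=[5e-3, 5e-3],                 # initial guess, the optimizer's first case
    method="SLSQP",
    bounds=[(1e-3, 10e-3), (1e-3, 10e-3)],
    constraints=[{                   # deflection must stay under the limit
        "type": "ineq",
        "fun": lambda x: max_deflection - tip_deflection(x),
    }],
)

b, h = result.x
print(f"optimal cross-section: {b*1e3:.2f} x {h*1e3:.2f} mm")
print(f"mass: {mass(result.x)*1e3:.2f} g, "
      f"deflection: {tip_deflection(result.x)*1e3:.3f} mm")
```

The optimizer walks the same loop the talk describes: start from a guess, evaluate mass and deflection, and keep adjusting the shape until removing any more material would violate the deflection constraint.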

Now, there’s a whole other field of topology optimization, where it’s removing material from designs. It works very well with additive manufacturing, because it’s no longer constrained by what you can machine; you’re now essentially printing out designs. That’s a whole other area that’s just getting started, and the impacts, I think, are yet to be seen.

Srikoundinya Punnamaraju: I have a couple of questions.

This is at the interface of biology and engineering, in a way. Are there other simulation tools available? A simulation is only as good as the model and the inputs you begin with. So do you account for the biological environment, to see the adverse impacts, if any, on that environment?

And the second question is: if you have to do a number of models to get there, how does that compare to the testing?

Joe Hage: Well, let’s repeat the question because, despite protestation, he still spoke softly.

Arlen Ward: So the second half of that question was: if you have to go back and run multiple simulations, how does that compare to building these things, right?

A good time to use simulation is when you’re looking at six-to-eight-week lead times to build something, or when you have a system with an energy-delivery side and a disposable side, and maybe you need to make progress on the disposable but your test fixture for the energy-delivery side isn’t ready yet. That’s a great time to use simulation, because even though you may not get 100% of the answer, if you’re at 80 or 75% or somewhere short of that, you’re still going to be making progress toward narrowing in on the answer you want.

So time-wise, I’ve yet to come across a case, well, other than the things we passed on doing in the first place, where you’d say, you know, for your device you’re better off just going out and testing it.

The one that I’m thinking of in particular was a needle insertion force test, where they had already built the prototype and it was a matter of sticking it through some porcine skin and putting (??) on it. And it’s like: just go do that. You don’t need us to develop models to address that. So time-wise, the simulation wins out.

When it’s the other side of that, where there are complicated things to physically build and get answers from, you also have to instrument them up and find an appropriate test.

On the boundary condition side, depending on whether you’re looking at mechanical (??) or thermal boundary conditions, you can certainly match those to the soft tissue and the biological environment. Things like perfusion are included in a lot of these models because it matters.

If you’re looking at electrical pulses and want to know whether there’s going to be a thermal damage risk, the blood that’s going through there is going to reduce that risk. So you want to include that in the model, right? Those conditions are included as they need to be.

One of my pet peeves in the simulation world is companies, especially implant companies, that are using analysis to analyze their implant but don’t analyze the soft tissue that’s around it. Because that soft tissue is the loading condition, right?

Joe Hage: Dr. Ward, thank you very much. Thanks.
