Video: Where statistical rigor meets visual industrial analytics | Duration: 3030s | Summary: Where statistical rigor meets visual industrial analytics | Chapters: Webinar Introduction (4.942s), Spotfire Platform Overview (111.572s), STATISTICA Integration Overview (249.702s), STATISTICA Integration Overview (387.487s), Statistical Algorithms Integration (519.907s), Normality Test Demo (679.532s), Augmenting Visualizations (1091.582s), Classification Modeling (1219.087s), Scoring New Data (1507.492s), Spotfire Action Mods (1734.132s), Time Series Correlations (1997.417s), Action Mods Recap (2491.092s), Future Roadmap (2739.007s), Q&A and Wrap-Up (2909.232s), Closing Remarks (3012.872s)
Transcript for "Where statistical rigor meets visual industrial analytics":
Hello, everyone, and welcome to today's webinar, Where Statistical Rigor Meets Visual Industrial Analytics. We're thrilled to have you with us today. I'm JP Richard Charman, and I'll be your host for today's session. Before we get started, I wanted to cover a few housekeeping items to ensure you have the best experience during today's session. The webinar will last up to forty-five minutes, with a ten-to-fifteen-minute Q&A segment held at the very end. If you have any questions during the presentation, please do not hesitate to use the Q&A panel, located on the right side of your screen, and we'll address as many questions as we can during the Q&A session. Additionally, we've made a few assets linked to today's webinar available in the docs section of our webinar platform, which you can find right next to the Q&A section on the right-hand side of your screen. After today's session, a recording of the webinar will be made available on demand, and we will email you the link shortly after the event. With that, let's dive right in. I'm excited to introduce our presenters today: Dan Ropp, senior principal product manager here at Spotfire; Tomasz Jurczyk, our Spotfire principal data scientist; and Adam Faskowitz, our lead data scientist here at Spotfire. With that, I'll hand it over to Dan to jump right into the presentation.

Very good. Thanks, JP. I'll start with a general introduction. Spotfire, of course, is a visual industrial analytics platform. We uniquely combine advanced analytics and industry-specific visualizations along with AI, and that helps experts turn complex industrial data into trusted, actionable intelligence. There are really three things that come together to make Spotfire truly unique. First, visual-driven analytics: those of you who use Spotfire understand the unique way Spotfire combines analytics that drive the visuals you're looking at. Second, industry focus: this is an initiative we began a year or more ago, adding industry-specific visualizations and industry-specific analytics natively into Spotfire that you can leverage. And third, enterprise scale, meaning governed, secure, and extensible to thousands of users, both in terms of the number of users and the sizes of datasets. We think it's those three things coming together that make Spotfire the ultimate platform for scientists and engineers. Spotfire Industry Pro is where our most advanced features come together: industry-specific visualizations, analytics, and connectivity, including the feature we'll be talking about today. Some of you may have attended the recent What's New in Spotfire webinar, and hopefully you saw mention of the new, enhanced integration we have with Statistica, which is all about how you can more easily leverage the power of Statistica directly from within Spotfire. That's what today's presentation is all about.
Importantly, this is a feature that's not only coming out in 14.8 but is also something you can use in the 14.6 LTS, with version 14.6.2. Alright, so first, a little background. Statistica provides a rich set of statistical algorithms and has long been in use for data science and machine learning. It's very rich in capability, very widely used in manufacturing, and very useful for those problems. It's no-code, but it's oriented more towards someone who has experience with data science. We do already have an integration of Statistica with Spotfire: if you create workflows in Statistica, you can then use them within Spotfire, and it offers a lot of flexibility. A lot of this is applied to manufacturing use; we have manufacturing use cases in Spotfire that require the algorithms that Statistica provides. Our customers have told us: we want that combination, but we want to use one product, and we want that product to be Spotfire. That's essentially what was needed, and that's effectively what we have done here. With Statistica analytics in Spotfire, scientists and engineers who are familiar with data science can apply these Statistica algorithms directly from within Spotfire, and we'll talk a lot about how this combination makes these algorithms much easier to use. There's wide application for these algorithms: general manufacturing; high-tech manufacturing; quality control, for doing quality control charts; process discovery, using multivariate analysis and clustering to find hidden patterns; root cause analysis, where many of these models can surface the factors actually driving yield and quality; readiness, making sure your data is coming from the correct distributions so your assumptions are valid for further analysis; time series and temporal behavior, such as finding anomalies, time series smoothing, autocorrelations for individual time series, and cross-correlations between time series; and finally, making use of models you might have trained in Statistica, and then using the scores, the predictions from those models, in your Spotfire visualizations. So there's a wide variety of application for what we have here. Now, just to revisit the existing integration of Statistica, which has been available in Spotfire for many years, and to give a contrast with what's changed: prior to 14.8, the way it would work is that you would use Statistica directly. You would create a workspace in Statistica, then come over to Spotfire and register it as a data function. You'd create a data function in Spotfire using the Tools menu and the Statistica path provided there, registering it as a data function that references the workspace you created. Then you would define the parameters: in Statistica, you expose various parameters, and in Spotfire you would declare what those are. And then, ultimately, as with any data function, you would map that data function to what you have in your actual data. This is still there and will continue to be supported.
It provides ultimate flexibility because, when you're working with a workflow in Statistica, you can essentially do anything you want, and that will still be there. But in contrast, for many use cases we've simplified the ability to use Statistica. Here's a little of what the enhanced integration looks like. For those of you who have used the integration, you'll notice there is a new option on the menu called Algorithms. When you click it, you'll be able to choose from a list of the algorithms available to you; there are 23 available in 14.8. You can click on them and see what they're all about, and when you choose to use one, defining the parameters is optional, because they come predefined with common parameters for these use cases. You may often find that the way they come out of the box is essentially ready to go, and you don't even need that step. The final step is then to apply that data function to your specific data, fill in your variables and so forth, and go forward with your analysis. So this provides a much easier, more seamless way in Spotfire to access these algorithms. We wind up with 23 statistical algorithms ready for use for manufacturing problems, statistics, machine learning, and general data science. You can access them directly from Spotfire, they easily become Spotfire data functions, and you can even do things like train models in Statistica and then make use of those models for scoring directly in Spotfire. For example, if you're looking at some machine data and you want to be assured it comes from a normal distribution, you might first use standard features in Spotfire, like a box plot to see the distribution, overlaid with a violin plot, a recent feature we've added, to see the shape of the distribution. But you could introduce a Statistica data function to give you statistical evidence as to whether that data is actually coming from a normal distribution, for example via a Kolmogorov-Smirnov test statistic that you can integrate directly within that visualization (a minimal sketch of that test follows below). We think this combination is the ultimate way to easily bring in these algorithms and use them directly in your visuals. Now, we recognize that the algorithms themselves are only part of the problem; to really leverage what you can do in Spotfire, many of these algorithms have natural visualizations associated with them. So we're developing a series of actions that execute the Statistica algorithm and then portray the results in the way that's expected for those algorithms. For example, if you're doing correlations, being able to easily see a correlation matrix; for quality control, being able to quickly have a quality control chart appear; when you're looking at predictive models, being able to assess how good the fit was using the standard visualizations for that. We're developing actions that, with a click, can provide you these diagnostics; for time series auto- and cross-correlation, that means being able to easily see what the algorithm is saying in visual form, so you don't have to construct those visuals yourself.
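For readers who want the normality check above made concrete: here is a minimal Python sketch of the Kolmogorov-Smirnov test using SciPy, as an illustration of the underlying statistic rather than the Statistica data function itself (the data and column are made up):

```python
import numpy as np
from scipy import stats

# Illustrative measurement column; in the demo this would come from
# the Spotfire data table feeding the box/violin plot.
measurements = np.random.default_rng(0).normal(loc=10.0, scale=0.5, size=200)

# Kolmogorov-Smirnov test against a normal distribution fitted to the data.
# (With parameters estimated from the same data, the nominal p-value is
# approximate; Lilliefors' correction addresses this.)
mean, std = measurements.mean(), measurements.std(ddof=1)
ks_stat, p_value = stats.kstest(measurements, "norm", args=(mean, std))

# A small p-value would be evidence against normality.
print(f"KS statistic = {ks_stat:.4f}, p-value = {p_value:.4f}")
```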
And so we think that really completes the full analysis workflow you need. With that, we have some demonstrations so you can see what it actually looks like. First, Tomasz will use the data functions directly so you can see how that works, and then Adam will come up and show you some of the actions we've been working on that use these data functions. So I will transition now to Tomasz.

Thank you, Dan. Let me share my screen. Okay, great. So I will try to show you how everything looks and feels in real use, and I'll also try to explain the new features around it. As a first example, I will use something Dan described. On the screen, just so you know what I'm working with, I have two datasets. One is typical raw data with some measurements, many parameters, and some categories. The second table has limits, like upper and lower control limits, for each parameter. On the right, I have a box plot, which is typically important for manufacturers to check where your distribution sits and whether it is properly within the control limits. You can also see these bell-shaped violin estimations here, but you may want to check with a test whether they are normal or not. If you have this need, you do not need to develop anything: you can use, for example, the normality test from our new Statistica menu, which you will find, as was mentioned, under Tools. If you have the Statistica extension installed in this new version, you will have two options under the Statistica menu. In the past there was only one: the old integration, which now lives under the Workspaces section. The new way of applying algorithms directly from Spotfire is here under Algorithms. If I click on it, you can see the list. I do not need to wire up a workspace, and I do not care what is under the hood. From these, in my case, I would like to choose the normality test. You have a description here, and you can go forward right away and set up the function. These algorithms are prepared to expose the most important parameters for you. In this case, it's simple: you only need to select the data you would like to use. Let's say I would like to do a normality test for these variables. You have descriptions, and for outputs there are two defaults, for descriptive stats and the normality test, again with descriptions. If I click here, you've seen it was a simple process: I only defined inputs and outputs, nothing more. And in the background, in the data canvas, a data function was created. You can see it looks like a normal data function: you have your inputs, you can change them here in the canvas, and, as you know, there is a standard way to view data functions.
You also have the results: a descriptive statistics table and normality test results, where you have normality tests in various forms, with results you can use right away. Of course, since a data function was created here, you can access it from the typical places where you have data functions. If you go to Data Function Properties, you will see it there, with a new type, Statistica algorithms, and you can edit the parameters from there as well. Or you can do the same thing, because it's also a Statistica feature, by going to Statistica and Workspaces; you can find that this function is there, with a new flag of type Algorithm. If you are familiar with the Statistica integration from the past, this menu is where you loaded your custom workflows, from workspaces on your disk or the Statistica environment, and here there was also the possibility to modify the data or the parameters. That's the intermediate step: if you are not happy with the options, because we expose only really simple options for the inputs and outputs, and you want to define something more, there is a set of exposed parameters you can add to your analysis. For example, if you want this multivariate test, you click on it and can see what it does; this switch controls whether that test is output or not. And in outputs, I can say I want this test. So you can modify the existing function, and it changes on the canvas. Now I have these test results here: a new table, the multivariate output, and also a new parameter to set up. There are no results yet because I need to say that I want this output; this parameter is whether I want that multivariate test or not, and now it will be recalculated. And of course I need to specify where it will go, because I had not done so. Also, please be aware that you can add the results to existing tables or create a new table from them; everything is possible. So now I have the multivariate test here as well for this function. Afterwards, you can save this function to the library if you want, and then access it from the f(x) flyout like any other data function. So that's how it works. Once you have these results, you can work with these tables and, of course, add them to your visuals. You can even add these test results to this table by simply adding a new calculation to your statistics table; for example, I can add the Shapiro-Wilk test, and now I have it here. It's still not perfect yet, because I need to set up column matches so that the normality tests are connected with my original table: the Variable column in the test results in fact matches the column names of the raw data, and then I have it here with the proper values.
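The column-match step described here amounts to joining per-column test results back onto the raw measurements by column name. A rough pandas equivalent of that relationship, with made-up column names, might look like this:

```python
import pandas as pd

raw = pd.DataFrame({
    "Param_A": [9.8, 10.1, 10.0, 9.9],
    "Param_B": [4.9, 5.2, 5.1, 5.0],
})

# Hypothetical normality-test output: one row per analyzed variable.
tests = pd.DataFrame({
    "Variable": ["Param_A", "Param_B"],
    "ShapiroWilk_p": [0.41, 0.07],
})

# Unpivot the raw data so it is keyed by column name, then join on that
# key -- the same idea as the Spotfire column match in the demo.
long = raw.melt(var_name="Variable", value_name="Value")
augmented = long.merge(tests, on="Variable")  # test stats travel with each value
print(augmented.head())
```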
So I've augmented my box plot with this normality test information, and I can make a better decision based on it. That was the first example. Now I would like to show you another example, which is important. Of course, we have a lot of features here, but I would like to show you something that is a bit new, so you can see how this concept works: modeling. You can see that we have a classification section, a clustering section, a regression section, and also a scoring section. If you look at the data we have in the background, it could be data from any manufacturing process: we have a dataset in which we are monitoring whether something was scrapped or not scrapped. We have some categorical predictors, and then a lot of parameters that could contribute to that scrapping or not scrapping. So we can create a classification model based on this data; let's do it, to show you how it works. And maybe one more thing, which I did not mention before, sorry about that: if I go again to Tools and Algorithms and this normality test, there is a checkbox that's a bit hidden. If you know the default options are not satisfactory for you, you can check it, and you will go to that intermediate menu we reached earlier from a different place. So if you want to define your parameters a bit differently, please use that advanced menu. And now, sorry for jumping, let us go back to the modeling. We can use, for example, boosted trees. Again, we need to specify some parameters; there are more here because, of course, it's not as simple: you need to specify your target and other things. So, data: that's the data coming in, as you can see from the description. Then we need to specify the target variable, which here is Target. Notice there is already some clever logic under the hood: it's not showing me all the parameters, only the categorical ones, so it's helping the user a bit. And here it's the same: I have thousands of parameters, but let us use only several of them for the moment. There is another function that will pick the most important parameters for you, but let us use these. For categorical predictors I can do the same, maybe Equipment and Station; ID is not relevant here. Then there is one more useful feature: you can define which variables go into the output together with the predictions, which would typically be some ID, if you want to merge it with other datasets afterwards or drill down. That's very useful. For outputs, we have many: one is the predictions, and then a number of outputs connected with the performance of this particular model. But what is also very important here, at the end, is that one of the outputs is the model itself. That's very, very important. So now the modeling is happening under the hood, and again, a data function will be created for it.
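As a rough illustration of what this boosted-trees step computes, here is a minimal scikit-learn sketch, a stand-in for the Statistica algorithm with hypothetical column names, including the ID pass-through idea mentioned above:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

df = pd.read_csv("process_data.csv")            # hypothetical training extract
target = df["Target"]                           # scrap / no-scrap label
predictors = ["Param_1", "Param_2", "Param_3"]  # a few of the many parameters
categorical = ["Equipment", "Station"]

# One-hot encode the categorical predictors; keep ID out of the model
# but carry it along so predictions can be joined back later.
X = pd.get_dummies(df[predictors + categorical], columns=categorical)

model = GradientBoostingClassifier().fit(X, target)

scores = pd.DataFrame({
    "ID": df["ID"],                             # pass-through column
    "p_scrap": model.predict_proba(X)[:, 1],    # class probabilities
})

# Predictor importance, analogous to the importance output in the demo.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head())
```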
If I go to the data canvas, I can now see my more complex data function, with these inputs and outputs. I can see I have my predictions here, which are the probability of no-scrap and the probability of scrap. Then I have predictor importance, showing which of my predictors were most important in the modeling; some risk estimates; and a confusion matrix. So even though I picked these variables somewhat randomly, the model is not that bad. And I have a lift table if I want to construct a lift chart. But in the end, the important thing is that I have the model here. This model is in fact the PMML code that was created, and I can reuse it. It is implemented in the form of a table: you have this table, and you can use it for scoring new data. So let me add new data, data for scoring. Now I do not have a target here; I have the same variables, the same parameters, but no information about whether it was scrapped or not scrapped. So I can score these rows based on my model, using the functions in the scoring section. This one is classification scoring, and you can see from the description that it will apply my PMML model table, this PMML model, to my new data, or any data you pick here. To make my life simpler I will pick the full dataset; in fact I only need the variables that were used by the model, not necessarily all of them, but I will do it this way. Then, importantly, you provide the model as input, and the model, in the way this was implemented, is in the form of a table. It can be one or more models; that's also important. If you have more rows in this model table, each row is one model, and you will get predicted values for multiple models at once. And the last input is the same as before: you may want to carry over your ID column, or the response, but we do not have the response here. And that's it; the output is one table with predictions. To make it more interesting, and to show that it's really doing something, I will use the scoring only for the marked data here. The new data function is created, and I can see it's outputting eight rows: this is what was predicted; this is in principle the residual, so if you have information about the real label, there will be a correct-or-not-correct indication; then you have probabilities for the two groups; and then the ID, which I decided to transfer. And just to show you that it responds to marking: if I highlight something else in classification scores, it recalculates and the new predictions appear here. So I mainly wanted to share this new concept of models that you can carry in these tables. And once a model is stable, you can store it in the library and use it in other DXPs, dashboards, or applications. So that was my demo.
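Scoring here is essentially "apply a stored model to rows that lack the label." Continuing the scikit-learn stand-in from the previous sketch (the real feature ships the model as a PMML table; the one-row-per-model DataFrame below only mimics that shape), a sketch of the same idea:

```python
import pickle
import pandas as pd

# Stand-in for the PMML model table: one row per serialized model.
# `model`, `predictors`, `categorical`, and `X` come from the sketch above.
model_table = pd.DataFrame({
    "name": ["boosted_trees_v1"],
    "blob": [pickle.dumps(model)],
})

new_data = pd.read_csv("data_for_scoring.csv")   # same columns, no Target
X_new = pd.get_dummies(new_data[predictors + categorical],
                       columns=categorical).reindex(columns=X.columns,
                                                    fill_value=0)

# One prediction set per model row, mirroring the multi-model behavior
# Tomasz describes: more rows in the model table, more prediction columns.
for _, row in model_table.iterrows():
    m = pickle.loads(row["blob"])
    new_data[f"p_scrap_{row['name']}"] = m.predict_proba(X_new)[:, 1]

print(new_data[["ID"] + [c for c in new_data if c.startswith("p_scrap")]].head())
```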
And now Adam will show something even more interesting: packaging these data functions together in actions and creating additional visuals. With that, I will hand over to Adam.

Great, thanks, Tomasz. Let me go ahead and share my screen. Okay. Today I'm really excited to tell you a bit more about how we can use Spotfire actions to build on top of all the great statistical algorithms now available in the product. If you're not familiar with action mods, they're similar in a way to IronPython in that they expose the Spotfire APIs, so you can build all sorts of automations with these actions. What we're doing here is running these statistical algorithms through an action mod and then creating a visual output, whether that's a visualization or additional information, in this case for the modeling step. What we have here is basically a snapshot of where Tomasz was in his modeling process, where he had built a model to predict whether something was going to be scrapped or not. We have the tables, but naturally what we want to do is view diagnostics to see whether this model is any good. So there's this button: if I click View Model Diagnostics, it creates a new page, basically instantaneously, with all sorts of visualizations for evaluating our model. We have the confusion matrix, showing where the model was right or wrong and where the errors were. We have the ROC curve, which plots our false positive rate against our true positive rate; what we want to see is this curve hugging the upper-left corner of the chart, which it does pretty well. We have an error rate versus cutoff chart, looking at different cutoff points for the model. Typically, for a tree-based model like the one Tomasz used here, the cutoff is going to be at 0.5, but if we want a model that's a bit more specific and does a better job at picking up when something was scrapped or not scrapped, we might prefer a higher or lower cutoff. Then we also have charts at the bottom for gains and lift, basically showing how well the model differentiates between the two classes. Also, as was alluded to earlier, we have variable importance as one of the outputs for this type of model. This can be very helpful for finding exactly which predictors were most important, especially if we put, say, a thousand parameters into a model; variable importance highlights what the model thinks mattered most in predicting between these classes. And we also have a bar chart to see how many of each target class the model predicted. All of this was available with just a click of a button; that's what the action was able to do here. What I can also show is that, for classification, this is a classification action mod: we have View Model Diagnostics, which we dragged in as a button on a visualization, but we can also invoke it from the flyout, and we can do more than just create the visualizations as we did.
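The diagnostics Adam walks through are standard classification metrics. Here is a compact sketch of how the same quantities could be computed in Python; this is illustrative only, not the action mod's internals, and the labels and scores are toy data:

```python
import numpy as np
from sklearn.metrics import auc, confusion_matrix, roc_curve

# Toy stand-ins: actual scrap labels and model probabilities. In the demo,
# these come from the trained model's prediction outputs.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)
p_scrap = np.clip(0.6 * y_true + rng.normal(0.2, 0.2, size=200), 0, 1)

# Confusion matrix at the default 0.5 cutoff (rows: actual, cols: predicted).
y_pred = (p_scrap >= 0.5).astype(int)
print(confusion_matrix(y_true, y_pred))

# ROC curve across all cutoffs; a good model hugs the upper-left corner.
fpr, tpr, _ = roc_curve(y_true, p_scrap)
print(f"AUC = {auc(fpr, tpr):.3f}")

# Error rate versus cutoff, like the cutoff chart in the demo.
for t in (0.3, 0.5, 0.7):
    err = ((p_scrap >= t).astype(int) != y_true).mean()
    print(f"cutoff {t}: error rate {err:.3f}")
```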
There's also a Train Classification Model script we can use, which builds the model as part of the action mod process. If I click on it, you'll see it exposes many of the same things as the data function Tomasz showed, but instead of producing just the result tables, this output page will be there when the model is done training. It's a really great way to get that easy visualization and easy added value from the algorithm. So that's just the first of a couple of examples I'm going to show. The next one looks at some reactor data, showing how we can use these algorithms and actions from start to end. In this case, I have variables on pressure, flow, and temperature, and essentially I'm trying to figure out how all of this affects yield in the end. In many cases I'll start by building a scatter plot, which I see on the right here: feed flow versus pressure, colored by the catalyst rate. I'm seeing some interesting patterns here, but as a data scientist, what I obviously like to start with is a correlation matrix. To do that, I can go into this actions side panel, where I have a Calculate Correlation Matrix action. If I click on it, I can run it: I have the data here, I select these variables, and there's a dropdown for the different correlation methods available. Here we have Pearson, Spearman, Kendall tau, and gamma. Each of these corresponds to a different statistical algorithm that we expose in the product; the action mod deciphers which one is selected and then conditionally creates and runs that algorithm. I'm going to keep it at Pearson and click Run. What this does is pick up that algorithm from the library, run it, and, as you'll see, create this new page as a result for us to look at. We have this heat map showing all of the correlations. It's interactive: you can click on it and see what happens at the bottom left, where there's a scatter plot showing what these relationships look like. At the top left, we have a KPI chart showing the strongest correlations. I click on this, and it shows a very strong relationship between these two variables; I click on a few others and see the same. The ones in red have a very strong negative correlation. And I'm able to get all of this with a very easy click of a button. Now, something that stood out to me when looking at this data is that the conversion efficiency percentage, which is like our yield in this case, actually has a low correlation with the catalyst rate. I was thinking this makes some sense: it's possible the catalyst needs a little time before its effect shows up in yield, and I suppose that might be why the correlation is low. But I'm wondering whether I can still find that relationship, and I can, using our time series auto- and cross-correlation action.
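For intuition, the correlation matrix itself is easy to reproduce outside the action. A minimal pandas sketch with hypothetical column names (pandas covers three of the four methods named in the demo; the gamma statistic is not built into pandas):

```python
import numpy as np
import pandas as pd

df = pd.read_csv("reactor_data.csv")  # hypothetical extract of the demo data
cols = ["feed_flow", "pressure", "reactor_temp",
        "catalyst_rate", "conversion_efficiency"]

# pandas supports 'pearson', 'spearman', and 'kendall' directly.
corr = df[cols].corr(method="pearson")
print(corr.round(2))

# Strongest absolute off-diagonal pairs, like the KPI chart in the demo.
mask = ~np.eye(len(cols), dtype=bool)
strongest = corr.where(mask).abs().unstack().dropna().sort_values(ascending=False)
print(strongest.head(4))
```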
So if I go here on the left again, I have a different action: Calculate Time Series Correlations. First, I'm going to select the same data I was looking at earlier; you can put in all sorts of time series here. I add them into the action, and then I set a lag; I'll set this to 20. I'm not going to change the rest, and I'll click Run. What this does is take each time series and look at its correlation with itself when lagged by one, two, three, all the way up to 20 lags. It's examining the relationship and determining whether there's a high correlation with itself at lag one, two, three, four, meaning a strong relationship between the time stamps at those offsets. And as we see on this results page, that's the case for the first variable we're looking at, the catalyst rate: there's a strong autocorrelation among the lowest lags, the steps closest to the original time series. What's neat here too is that the partial autocorrelation is able to isolate the relationship and show that there's really just a very strong correlation for this variable at lag one; the rest is an intermediate effect of that. Interesting to see. And on this new results page, I can unclick this and see all the autocorrelations at once. What stands out to me is that the reactor temperature has a similar-looking pattern to the catalyst rate: there's a gradual autocorrelation, showing a gradual effect in this variable, but the partial autocorrelation stands very high at lag one. Another interesting thing: the coolant flow has a case where there's a very strong autocorrelation to start, but then it actually goes negative; maybe there's some cyclic nature to this variable. But I'm going to move on to the next page, where we can look at the cross-correlation results. What we looked at before was autocorrelation: one time series compared with itself at a range of lags. We can do the same thing with the correlations between all of our different variables at all of those lags. In the end, we get a pretty neat chart at the bottom left that shows each of our variables against all the other variables, colored by all the different lags. In this case, what we see is a gradient showing where the correlation between two variables is highest; here it's strongest in the middle, the deepest blue, showing that right around lag zero the correlation between these two variables is strongest. So, the temperature and purity, basically; and we can see from the visual output that these do look very much aligned, with a very similar pattern that starts high and then descends. But what's neat is that if I look at the rest of these variables, some of them aren't centered at zero in their relationship. Instead, let me take a look at the original variables that got me started on this analysis.
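The auto-, partial auto-, and cross-correlations shown here are standard time series quantities. A short sketch with statsmodels, as an illustration of the statistics rather than the Statistica implementation (file and column names are made up):

```python
import pandas as pd
from statsmodels.tsa.stattools import acf, ccf, pacf

df = pd.read_csv("reactor_data.csv")   # hypothetical demo data
series = df["catalyst_rate"]

# Correlation of the series with itself at lags 0..20.
autocorr = acf(series, nlags=20)

# Partial autocorrelation: the lag-k correlation with the intermediate
# lags partialed out, isolating the direct relationship the demo highlights.
partial = pacf(series, nlags=20)
for lag in (1, 2, 10, 20):
    print(f"lag {lag:2d}: acf={autocorr[lag]:+.2f}  pacf={partial[lag]:+.2f}")

# Cross-correlation between two different series at lags 0, 1, 2, ...;
# the lag with the largest value suggests a delayed relationship.
cross = ccf(series, df["conversion_efficiency"])[:21]
print(f"strongest lag: {cross.argmax()} (r={cross.max():+.2f})")
```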
I have the catalyst rate and the conversion efficiency percentage. By clicking on the highest blue point here, you'll see that it wasn't centered at zero; instead it was at around a lag of 10. And if we look at the two time series next to each other, we see this effect where the catalyst rate rises slightly offset from what we see above. It shows that at around a lag of 10 time stamps, the effect of the catalyst rate appears and the same pattern is mimicked after the fact. So with time series auto- and cross-correlation, this very strong statistical algorithm now exposed in the product, I'm able to find a type of relationship that might have been hidden or unavailable to me before. This is a really cool case, and I'm going to show you just one more example of how we can use these actions. Right here, I have some SPC data: measurements across all different types of sensors. This table alone isn't enough to decipher whether something was expected or not; instead, I want to create quality control charts to tell whether anything is out of control, or to look deeper into the data. Going into my actions flyout, I can click Visualize Quality Control Charts. For this, I select the input data, with a whole lot of variables available to look at. Then I click on a sample ID and pick out a measurement to analyze. I have a few parameters exposed for the control limits: I'm going to actually calculate the control limits here, use some of the spec limits if they're there, and set the graph type. What this is doing is leveraging the statistical algorithm to create all sorts of information about these quality control charts, and then the action creates all of these visualizations and this new page. I'll say that this one is quite complex, probably the most complex of all the ones I've shown today, because so much configuration is happening behind the scenes in these visualizations. You'll see there are many lines here, there's conditional coloring, and we have an added curve to show whether this is a normal histogram or not; all sorts of information is packed into this action mod to create these control charts on the fly. What we have at the top is the actual measurement variables, where we're trying to see whether everything is as expected. Here I see that this is looking mostly normal, which is good. You'll see there are a couple of points on this chart in yellow, indicating they've gone a bit over the warning limit, above the yellow line in this case; but nothing is out of control or beyond the red here, so that's okay. At the bottom, however, we're looking at the variance of the data, and we see a couple of points in red. These are alarms, basically, saying that the variance has gone outside the control limits, beyond what's expected, and we should investigate because something might have gone wrong. So that's all the visual output.
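The warning and alarm flags behind charts like these typically follow the Shewhart convention of a center line plus or minus two and three standard deviations. A compact sketch of that flagging logic on made-up data; the demo's exact limits come from the Statistica algorithm, so treat this as the textbook version:

```python
import numpy as np

rng = np.random.default_rng(2)
measurements = rng.normal(loc=50.0, scale=1.0, size=100)  # toy sensor samples

center = measurements.mean()
sigma = measurements.std(ddof=1)
warn_lo, warn_hi = center - 2 * sigma, center + 2 * sigma  # yellow lines
ctrl_lo, ctrl_hi = center - 3 * sigma, center + 3 * sigma  # red lines

for i, x in enumerate(measurements):
    if not ctrl_lo <= x <= ctrl_hi:
        print(f"sample {i}: {x:.2f} ALARM (outside control limits)")
    elif not warn_lo <= x <= warn_hi:
        print(f"sample {i}: {x:.2f} warning (outside warning limits)")
```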
And what's really great about this too is that these actions are built to meet you where your data is, in your own analysis. You don't have to go to an SPC template or something on the community to leverage this; instead, if you download the action mod, you can run it, and when you run it, we create this new page in your DXP. That's very valuable, because we work on your data: you don't have to deal with all the steps of working with a template or creating these visualizations on your own. We're doing that for you, leveraging the statistical algorithm to do so. So those were the three examples I wanted to show you today, leveraging action mods on top of all this great new functionality. I hope it was helpful, and I'm more than happy to answer any questions we may have.

Alright, thanks, Adam. Let me just pull up the presentation again. I thought that was a really fantastic demonstration. You could really see the richness of the statistical computations, the calculations, the advanced analytics, and then how they come to life, and how the patterns are revealed, when that's combined via the actions and you can see those results in a curated way in Spotfire. So thanks, Adam, and thanks, Tomasz. Now, we do have plans for the future here. Generally, we're intending to double the number of algorithms from the first release, and here are a few examples of the ones we're looking at. On the predictive modeling side: partial least squares; linear and logistic regression; generalized linear models, for the more complex models you might need for more accurate predictions; model comparisons, for more easily choosing a best-performing model (as Tomasz mentioned, you could have a series of models and want to know which one is best); more advanced, model-driven feature selection, beyond the feature selection we have now; and ANOVA and MANOVA as well. On the industrial analytics side: process capability; more on distribution fitting, working with different distributions; more advanced, more accurate clustering; and gauge R&R as well. And there will probably be others too. I just want to give you a sense of where we're going with this. So, to summarize: we have advanced analytics powered by Statistica, available in both the Spotfire 14.6.2 LTS and 14.8. These are algorithms for quality control, root cause analysis, and many other cases applicable to manufacturing as well as general use. And we have a series of actions being developed to make use of these algorithms and produce visuals, making it easy to interpret what an algorithm is actually telling you about your data, in a way that's very natural in Spotfire, with plenty more to come. So with that, I will turn it back over to JP; we have a few webinars coming up as well.

Beautiful.
Thank you very much, Dan. And once again, thank you to Tomasz and Adam for those great demos, and to all three of you for that insightful presentation. I also want to thank everyone who joined us today. Before we get to our Q&A segment, a few things we wanted to share. In terms of upcoming webinars, we've got two great series that are being added to on a regular basis. So whether you're looking to find out more about Spotfire or to learn the latest about what's new, please don't hesitate to register for the full series. And as you'll see, we've got a great session around AI taking place on May 5, so that's upcoming. In terms of on-demand access, a recording of today's webinar will be available soon, so please keep an eye on your inbox for the link. If you're interested in learning more, please feel free to visit our website at spotfire.com or contact us directly. There are lots of ways to interact with us, whether via our socials or through our community. Additionally, our blog has lots of great content where we share the latest on visual industrial analytics and dive into Spotfire Industry Pro in more detail. And last but not least, if there are any enhancements you'd like to see in Spotfire, or ideas you'd like to share with us, please don't hesitate to visit our ideas portal and log them there. Once again, we'd like to thank everyone for joining us today, and we hope to see you at one of our future webinars very soon. With that, wishing everyone a great day, and we'll see you all very shortly.