Model Validation

The Virtualitics AI Platform supports model validation in two main ways. The first is to use Virtualitics Explore to validate models in its no-code environment, which lets users with or without a technical background run embedded AI routines to uncover insights about model performance. A common analysis path is to run Smart Mapping, one of our proprietary AI routines, to identify the key drivers of error in a model, along with suggested visualizations that flag areas for improvement.
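
While the Smart Mapping step itself happens inside Explore, the data fed into it typically carries a per-row error column for the routine to target. The sketch below shows one way to prepare such a column with standard open-source tooling; the model, file names, and column names ("target", "abs_error") are hypothetical, and the output file is simply an example of a dataset you might then load into Explore.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical dataset: feature columns plus a numeric column named "target".
df = pd.read_csv("training_data.csv")
X = df.drop(columns=["target"])
y = df["target"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestRegressor(random_state=42)
model.fit(X_train, y_train)

# Attach the per-row absolute error to the held-out data. Once this file is
# loaded into Explore, Smart Mapping can be run with "abs_error" as the target
# to surface the features that drive model error.
validation = X_test.copy()
validation["abs_error"] = (y_test - model.predict(X_test)).abs()
validation.to_csv("validation_with_error.csv", index=False)
```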

The second is to use the Virtualitics Developer Experience and the SDK to build applications that monitor and assess model performance over time. Users can create AI apps that continuously track model performance and leverage open-source Python packages to test, evaluate, and refine their models. By monitoring data and model results over time, you can detect changes or drift in the data that could degrade your model's accuracy. This ongoing validation is crucial for keeping models trustworthy and effective as conditions change.
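
As one illustration of the kind of open-source check such an app might wrap, the sketch below uses scipy's two-sample Kolmogorov-Smirnov test to compare each numeric feature in a recent window of live data against a reference window. The file names and significance threshold are hypothetical; a production app would run a check like this on a schedule and surface the results in the platform.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

# Hypothetical data windows: the reference window the model was trained on,
# and the most recent window of live data the model has been scoring.
reference = pd.read_csv("reference_window.csv")
live = pd.read_csv("live_window.csv")

ALPHA = 0.01  # significance threshold; tune per feature and tolerance for noise

# Flag any numeric feature whose live distribution has drifted from reference.
drifted = []
for col in reference.select_dtypes(include=np.number).columns:
    stat, p_value = ks_2samp(reference[col].dropna(), live[col].dropna())
    if p_value < ALPHA:
        drifted.append((col, stat, p_value))

for col, stat, p in drifted:
    print(f"Drift detected in {col}: KS statistic={stat:.3f}, p={p:.2e}")
```

An app built this way can raise alerts or update a dashboard whenever features drift, prompting retraining before accuracy degrades.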