How is accuracy measured in Relevance AI?

State-of-the-art training and test metrics

For most of our AI-driven analysis, we use state-of-the-art neural networks trained and tested on large datasets across a variety of tasks. These models have therefore already passed the critical and widely used evaluation metrics in AI, and our own experiments and client feedback confirm this.
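For illustration, below is a minimal sketch of the kind of held-out test evaluation such models pass, using standard scikit-learn metrics. The labels and predictions are placeholders, not Relevance AI outputs:

```python
# A sketch of standard held-out evaluation with common test metrics.
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical ground-truth labels and model predictions on a held-out test set.
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 0, 1, 1]

print(f"Accuracy: {accuracy_score(y_true, y_pred):.2f}")
print(f"F1 score: {f1_score(y_true, y_pred):.2f}")
```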

Unsupervised methods

When possible, we apply unsupervised testing methods, such as clustering metrics that evaluate how well clustering results satisfy the general criteria of "good clusters" (for example, cohesion within clusters and separation between them), as sketched below.
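As a hedged example, silhouette score is one widely used metric of this kind; the sketch below assumes Python with scikit-learn and synthetic data, and the specific metrics Relevance AI uses are not stated here:

```python
# A sketch of unsupervised cluster evaluation via silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Illustrative 2-D points forming two loose groups (stand-in for real data).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
# Silhouette ranges from -1 to 1; higher means tighter, better-separated clusters.
print(f"Silhouette score: {silhouette_score(X, labels):.2f}")
```

Higher silhouette values indicate cohesive, well-separated clusters, which matches the "good clusters" criteria described above.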

Human evaluation

A randomly selected subset of analysis results undergoes human assessment to identify weak aspects, and improvements are planned accordingly.
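As a rough sketch, drawing such a subset can be as simple as seeded random sampling; the sample size and the `results` structure below are illustrative assumptions:

```python
# A sketch of selecting a random audit sample for human review.
import random

results = [f"analysis_result_{i}" for i in range(1000)]  # placeholder results

random.seed(42)  # fixed seed so the audit sample is reproducible
review_sample = random.sample(results, k=50)
# `review_sample` would then be handed to human assessors.
```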

Supervised tests

One limitation of working with real-world data is the scarcity of labelled datasets for supervised testing. In a labelled dataset, each entry is accompanied by its expected analysis result. Our systems are designed so that, if such datasets are available, we can produce test results on them (i.e. reports showing comparisons and overlaps between the analysis results and the expected results).
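As an illustration, below is a minimal sketch of such a comparison report, assuming a labelled dataset where each entry pairs an input with its expected result. The field names and comparison logic are assumptions, not Relevance AI's actual report format:

```python
# A sketch of comparing analysis results against a labelled dataset.
labelled = [
    {"text": "great product", "expected": "positive"},
    {"text": "broken on arrival", "expected": "negative"},
    {"text": "works as described", "expected": "positive"},
]
predicted = ["positive", "negative", "neutral"]  # placeholder analysis results

# Count entries where the analysis result matches the expected label.
matches = sum(p == row["expected"] for p, row in zip(predicted, labelled))
overlap = matches / len(labelled)
print(f"Overlap with expected results: {overlap:.0%} ({matches}/{len(labelled)})")
```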