

The only model fairness and interpretability platform for SMEs


Lack of transparency and underlying bias are key challenges in the deployment of advanced machine learning (ML) models. Fairness, accountability, auditability and trust in AI are difficult for most organizations to achieve. That is where StarFair comes in – leveraging proprietary methods and clear visualizations to help users who are not data scientists understand the decisions made by complex ML models and assess their fairness.


The Problem: Black Box Models


The capabilities of ML models are constantly improving, and companies across a range of industries are interested in leveraging their power. However, complex ML models lack transparency and accountability – qualities that are particularly important in regulated sectors like finance, healthcare and insurance. The challenge of assessing these models objectively is further amplified because the consumer of a model’s decisions is usually not a trained data scientist. Thus, there is a need to bridge the gap between the technical complexity of ML models and in-house domain expertise – and to ensure results are transparent and interpretable.


Putting the Domain Expert in the Center


Most work in the field of ML is confined to academia or aimed at helping data scientists develop their models. Existing tools require advanced technical know-how, and each is intended for use in a narrow field. StarFair shifts this focus by improving ML model interpretability and explainability – i.e., the extent to which the internal mechanics of an ML system can be explained in human terms – giving domain experts the power to truly understand what is happening behind the scenes of the model. StarFair gives domain experts simple access to advanced techniques and augments raw outputs with clear and actionable insights.
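Model-agnostic explainability techniques of this kind can be illustrated with permutation importance, a standard method that measures how much a model's predictions degrade when one input feature is shuffled. The toy scoring model and feature names below are hypothetical stand-ins for illustration, not StarFair's actual methods:

```python
import random

# Toy "black box": a scoring model whose internals the domain expert
# cannot inspect (hypothetical stand-in for any fitted ML model).
def black_box(income, age, noise):
    return 0.8 * income + 0.2 * age  # the "noise" feature is ignored

random.seed(0)
rows = [(random.random(), random.random(), random.random()) for _ in range(200)]
baseline = [black_box(*r) for r in rows]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def permutation_importance(feature_idx):
    """Shuffle one feature column and measure how much predictions change.
    A large error increase means the model relies heavily on that feature."""
    col = [r[feature_idx] for r in rows]
    random.shuffle(col)
    permuted = [tuple(col[j] if i == feature_idx else c for i, c in enumerate(r))
                for j, r in enumerate(rows)]
    return mse([black_box(*p) for p in permuted], baseline)

for name, idx in [("income", 0), ("age", 1), ("noise", 2)]:
    print(f"{name}: {permutation_importance(idx):.4f}")
```

Here the shuffled "income" column degrades predictions the most and "noise" not at all, exposing what the model actually depends on without opening it up.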


The Solution: “Glass Box” Transformation


StarFair is a complete solution for assessing model fairness, interpretability and performance. By combining advanced explainability techniques with proprietary methods, StarFair helps users who are not data scientists understand the decisions of complex ML models – creating a language that can be fully understood by the domain experts who use them. StarFair’s model-agnostic solution runs multiple analyses and automatically raises alerts. Its intuitive visualizations help users obtain a clear, detailed picture of model behavior, clarifying its strengths and weaknesses.


Key Features


  • Clear and intuitive visualizations that explain model behavior
  • Prediction analysis using meaningful, representative samples
  • Actionable insights that augment raw outputs
  • Global model explanations
  • Advanced counterfactual analysis
  • Model agnostic
  • Quick, easy-to-use interface
  • Establishes a common language for domain experts and data scientists
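Counterfactual analysis, one of the features listed above, asks what minimal change to an input would flip the model's decision ("how much higher would this applicant's income need to be for approval?"). A minimal single-feature sketch, using a hypothetical approval model and feature names, is:

```python
def approve(applicant):
    # Hypothetical scoring rule standing in for a real fitted model.
    score = 0.5 * applicant["income"] + 0.3 * applicant["credit_history"]
    return score >= 0.5

def counterfactual(applicant, feature, step=0.01, max_steps=200):
    """Nudge one feature upward until the decision flips; return the value
    that achieves approval, or None if no flip is found within max_steps."""
    trial = dict(applicant)  # leave the original input untouched
    for _ in range(max_steps):
        if approve(trial):
            return trial[feature]
        trial[feature] += step
    return None

applicant = {"income": 0.4, "credit_history": 0.5}
print(approve(applicant))  # prints False (rejected)
needed = counterfactual(applicant, "income")
print(f"income needed for approval: {needed:.2f}")
```

Production-grade counterfactual tools search over many features jointly and constrain the changes to be realistic and actionable; this sketch shows only the core idea of the technique.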


Recent advancements in ML present an immediate opportunity for businesses to employ more accurate and complex models without compromising fairness or transparency. StarFair is domain- and model-agnostic and can be used across a wide range of industries to significantly improve business processes, including:



  • Evaluate ML model decisions on treatments or claims
  • Evaluate ML model decisions on loans, credit scores and risks
  • Evaluate ML model decisions on claims, policy costs and risks
  • Inspect whether a model is fair or biased