Empowering Data Governance with AI: A UX Case Study of Anomaly Detection at Solidatus
Overview
At Solidatus, a leading data lineage and governance platform, I led the design and implementation of an AI-powered co-pilot that transformed how users interact with and respond to data anomalies. This initiative focused on leveraging machine learning to automatically detect anomalies in data lineage, compliance, and governance workflows, turning complex data irregularities into actionable insights.
By integrating an intelligent assistant into the Solidatus platform, we aimed to enhance user decision-making, improve operational efficiency, and reduce risk exposure for our clients, which include major financial institutions and public sector organisations. This case study details the journey of creating a user-centred AI solution that met technical and business objectives and set a new standard for user experience in the data governance industry.
Business Context and Objectives
Solidatus empowers organisations to map, manage, and understand their data landscape, helping them overcome complex data problems. Recognising the growing need for proactive data management, we embarked on a project to integrate AI-driven anomaly detection to enhance Solidatus's value.
Objectives
- Enable Proactive Decision-Making: Equip users with an AI co-pilot that identifies and alerts them to real-time anomalies, enabling proactive rather than reactive data management.
- Simplify Complex Data Insights: Design intuitive interfaces that translate complex anomaly patterns into understandable insights for users of all technical levels.
- Enhance Trust in AI Outputs: Incorporate explainability and transparency into the anomaly detection features to build user confidence in the AI's recommendations.
- Streamline Workflow Integration: Seamlessly integrate the AI co-pilot into existing Solidatus workflows, minimising disruption and enhancing user productivity.
Challenges
Developing an AI-driven anomaly detection system that was both powerful and user-friendly presented several key challenges:
- Technical Complexity: Translating intricate data anomaly patterns and behaviours into a language non-technical users could quickly grasp.
- User Trust: Providing enough transparency into the AI's reasoning to build confidence, without overwhelming users with technical jargon.
- Integration Complexity: Ensuring the new feature integrated seamlessly with the existing Solidatus platform and could adapt to diverse client data ecosystems and legacy systems.
- User Adaptation: Designing a solution that added value from day one and fit the diverse needs of various user roles, each with a different understanding of the underlying data.
Approach
Designing the AI Co-Pilot
- Functionality: The co-pilot was designed to proactively monitor data lineage, regulatory compliance, and data governance workflows, flagging anomalies and potential risks.
- Contextual Guidance: Beyond simple alerts, the co-pilot provides users with tailored recommendations, explanations, and potential next steps to investigate and resolve detected anomalies.
- Integration: We prioritised seamless integration with existing Solidatus features, ensuring users could access the co-pilot's insights without leaving their current workflow.
User-Centered Design for Anomaly Insights
- Research: Conducted in-depth interviews with key clients across financial services and the public sector to understand their pain points, needs, and expectations regarding anomaly detection.
- Dashboard Design: Developed intuitive dashboards that present each anomaly's type (e.g., data inconsistency, compliance breach), severity level, and potential impact.
- Notifications: Implemented a real-time notification system to alert users to critical anomalies as they occur.
- Guided Resolution: Designed step-by-step workflows within the co-pilot to guide users through understanding, investigating, and resolving anomalies.
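The routing logic behind the notification system can be illustrated with a minimal sketch. The function and severity names below are hypothetical, not the production implementation; the idea is simply that critical anomalies interrupt the user in real time while lower-severity ones land in the dashboard feed:

```python
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    CRITICAL = 3

def route_notification(anomaly_type: str, severity: Severity) -> str:
    """Decide how an anomaly alert reaches the user.

    Critical anomalies trigger an immediate real-time push; everything
    else is batched into the dashboard's anomaly feed.
    """
    if severity is Severity.CRITICAL:
        return f"push: {anomaly_type}"
    return f"dashboard: {anomaly_type}"
```

Keeping the routing rule this explicit makes it easy to tune later, for example by letting each client map severities to channels of their choosing.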
Collaboration with AI Engineers
- Model Refinement: Worked closely with AI engineers to fine-tune the machine learning models, ensuring high accuracy in anomaly detection while minimising false positives.
- Explainability: Collaborated on designing features that clearly explain why an anomaly was flagged, including the specific patterns or thresholds that were breached.
- Ethical Considerations: Ensured the anomaly detection models were developed and deployed following ethical AI principles, addressing potential biases and promoting fairness.
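To make the explainability point concrete, the sketch below shows one way a detector can attach a plain-language reason to each flag. The z-score approach and all names here are illustrative assumptions, not the production model:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class AnomalyFlag:
    value: float
    score: float       # deviation from baseline, in standard deviations
    explanation: str   # plain-language reason surfaced to the user

def flag_anomalies(history: list[float], threshold: float = 3.0) -> list[AnomalyFlag]:
    """Flag values that deviate from the series baseline, with a reason."""
    baseline, spread = mean(history), stdev(history)
    flags = []
    for value in history:
        score = abs(value - baseline) / spread
        if score > threshold:
            flags.append(AnomalyFlag(
                value=value,
                score=round(score, 2),
                explanation=(
                    f"Value {value} is {score:.1f} standard deviations from the "
                    f"baseline of {baseline:.1f}, breaching the {threshold}-sigma threshold."
                ),
            ))
    return flags
```

Because the explanation is generated alongside the flag itself, the UI never has to reverse-engineer the model's reasoning after the fact.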
Rapid Prototyping and Testing
- Prototyping: Utilised Figma and ProtoPie to create interactive prototypes that simulated the end-to-end user experience of detecting, understanding, and resolving anomalies with the co-pilot's assistance.
- Usability Testing: Conducted iterative rounds of usability testing with target users to evaluate the clarity of anomaly presentation, the effectiveness of the co-pilot's guidance, and the solution's overall usability.
- Iteration: Continuously refined the design based on user feedback, simplifying complex information and optimising the user flow.
Visualising Anomalies in Data Lineage
- Enhanced Diagrams: Augmented existing data lineage diagrams with real-time anomaly markers, visually highlighting problematic data points or flows.
- Drill-Down Functionality: Enabled users to drill down into specific anomalies to access detailed contextual information, such as the source of the issue, impacted data elements, and historical trends.
Ensuring Ethical and Trustworthy AI
- Explainability Tooltips: Implemented "How was this anomaly detected?" tooltips that provide concise explanations of the AI's reasoning.
- Confidence Indicators: Incorporated clear visual indicators of the AI's confidence level into each anomaly prediction.
- User Controls: Empowered users to adjust anomaly detection sensitivity thresholds, giving them control over the system's behaviour and letting them tailor it to their needs.
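The interplay between confidence indicators and user-adjustable sensitivity can be sketched as a simple mapping from a raw anomaly score to the confidence band shown in the UI. The band boundaries and the multiplier-style sensitivity control are illustrative assumptions:

```python
def confidence_label(score: float, sensitivity: float = 1.0) -> str:
    """Map a raw anomaly score (0.0-1.0) to the confidence band shown in the UI.

    `sensitivity` is a hypothetical user-controlled multiplier: values above
    1.0 make the system more eager to escalate, values below 1.0 make it
    more conservative.
    """
    adjusted = score * sensitivity
    if adjusted >= 0.9:
        return "High confidence"
    if adjusted >= 0.6:
        return "Medium confidence"
    return "Low confidence"
```

Exposing the sensitivity knob to users, rather than hard-coding it, is what lets different client teams tune the same model to their own risk tolerance.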
Results
Implemented over two months, the AI-powered anomaly detection co-pilot yielded valuable outcomes:
- Improved user workflows by providing proactive and actionable insights, reducing the burden of manual data analysis.
- Enhanced the ability of teams to identify and address anomalies efficiently, contributing to better decision-making and reduced risk.
- Strengthened user trust in AI outputs through explainability and transparency features, fostering confidence in automated recommendations.
- Streamlined integration into existing workflows, ensuring minimal disruption and maximum usability for diverse client needs.
Conclusion
The AI-powered co-pilot for anomaly detection at Solidatus exemplifies the transformative potential of user-centred AI solutions in the data governance domain. By prioritising user needs, focusing on transparency and explainability, and seamlessly integrating AI into existing workflows, we successfully created a powerful tool that empowers users to proactively manage data anomalies, improve decision-making, and reduce risk. This project not only delivered significant value to Solidatus and its clients but also set a new standard for user experience in data governance, demonstrating a commitment to innovation and user empowerment.