Educational analysis of LLM alignment, safety behavior, and framing-sensitive response patterns.
A tool for auditing bias through large language models
Market Bias AI is a professional XGBoost-based algo-trading engine that analyzes multi-timeframe OHLC data to generate a market bias, a probabilistic liquidity target (DOL), and a trust score.
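For context, a minimal sketch of the kind of pipeline such an engine implies, assuming hand-rolled OHLC features and a binary up/down label; the feature names, parameters, and synthetic data below are illustrative assumptions, not the repo's actual code:

```python
# Illustrative only: an XGBoost classifier over simple OHLC-derived
# features, predicting a binary up/down "bias" for the next bar.
import numpy as np
import pandas as pd
import xgboost as xgb

def ohlc_features(df: pd.DataFrame) -> pd.DataFrame:
    """Derive simple per-bar features from OHLC columns (hypothetical set)."""
    out = pd.DataFrame(index=df.index)
    out["bar_range"] = df["high"] - df["low"]
    out["body"] = (df["close"] - df["open"]).abs()
    out["ret_1"] = df["close"].pct_change()
    out["ret_4"] = df["close"].pct_change(4)  # crude higher-timeframe proxy
    return out.dropna()

# Synthetic random-walk candles stand in for real multi-timeframe data.
rng = np.random.default_rng(0)
close = 100 + np.cumsum(rng.normal(0, 1, 500))
df = pd.DataFrame({
    "open": close + rng.normal(0, 0.2, 500),
    "high": close + rng.uniform(0.1, 1.0, 500),
    "low": close - rng.uniform(0.1, 1.0, 500),
    "close": close,
})

X = ohlc_features(df)
# Label: 1 if the next close is higher than the current close.
y = (df["close"].shift(-1) > df["close"]).astype(int).loc[X.index]

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X[:-1], y[:-1])  # drop the last bar, whose label is unknown

# The class probability can double as a crude confidence ("trust") score.
print(f"bullish-bias probability: {model.predict_proba(X[-1:])[0, 1]:.2f}")
```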
Analyzing geographic and cultural bias in AI therapy advice. Interactive visualization showing how AI systems draw from predominantly Anglophone sources when advising users about culturally specific dilemmas in India, Nigeria, and the Philippines.
AI-powered skin analysis prototype with Power BI dashboard highlighting model performance, bias, and generalisation challenges in melanin-rich datasets.
A Disability Justice Approach to Fine-Tuning Language Models for Mental Health and Neurodiversity Contexts
Open-source audit toolkit for Global South developers to benchmark, document, and reduce AI tool bias in their markets.
An in-depth exploration of Large Language Models (LLMs): their potential biases, limitations, and the challenges of controlling their outputs. The project also includes a Flask application that uses an LLM to research a company and generate a report on its potential for partnership opportunities.
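A minimal, hypothetical sketch of the Flask pattern this describes; the `/report` route, model name, and prompt are assumptions, not the repo's implementation:

```python
# Hypothetical sketch of the described Flask + LLM report flow;
# the route name, model, and prompt are assumptions.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.route("/report", methods=["POST"])
def report():
    company = request.json.get("company", "")
    prompt = (
        f"Research the company '{company}' and write a brief report on "
        "its potential for partnership opportunities."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return jsonify({"company": company, "report": resp.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```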
A research framework for testing whether AI systems reflect and amplify human strategic communication patterns (‘game language’) across multiple models and runs.
Simulation framework for NHS emergency department triage optimization using a Mixture-of-Agents (MoA) architecture — built with SimPy, LangGraph, and FHIR-compliant synthetic data to reduce patient wait times and improve resource utilization.
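A stripped-down SimPy sketch of the queueing core such a simulation rests on; the arrival and service rates are illustrative, and the MoA/LangGraph and FHIR layers are deliberately omitted:

```python
# Illustrative SimPy queue for ED triage; rates and capacity are made up.
import random
import simpy

WAITS = []  # time each patient waited for a triage nurse

def patient(env, triage):
    arrival = env.now
    with triage.request() as req:
        yield req                                      # queue for a nurse
        WAITS.append(env.now - arrival)
        yield env.timeout(random.expovariate(1 / 8))   # ~8 min assessment

def arrivals(env, triage):
    while True:
        yield env.timeout(random.expovariate(1 / 5))   # ~1 arrival / 5 min
        env.process(patient(env, triage))

random.seed(42)
env = simpy.Environment()
triage = simpy.Resource(env, capacity=2)               # two triage nurses
env.process(arrivals(env, triage))
env.run(until=8 * 60)                                  # one 8-hour shift

print(f"patients triaged: {len(WAITS)}, mean wait: {sum(WAITS) / len(WAITS):.1f} min")
```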
Repository for the LWDA'24 presentation on 'Psychometric Profiling of GPT Models for Bias Exploration', featuring conference materials including the poster, paper, slides, and references.
This project investigates bias in image classification models, specifically addressing a historical misclassification problem. Using a modified ResNet-50 architecture and SHAP values, we analyze how model decisions are made, explore potential biases, and aim to contribute to the development of fairer AI systems.
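A hedged sketch of the ResNet-50 + SHAP attribution step this implies; random weights and tensors stand in for the project's modified architecture and dataset:

```python
# Sketch of SHAP attribution on a ResNet-50; random weights and tensors
# keep it self-contained. Swap in the trained model and real images.
import torch
import shap
from torchvision.models import resnet50

model = resnet50(weights=None).eval()  # random weights, for illustration

# The background sample anchors the explainer's expected output;
# both tensors would come from the actual dataset in real use.
background = torch.randn(8, 3, 224, 224)
image = torch.randn(1, 3, 224, 224)

explainer = shap.GradientExplainer(model, background)
# ranked_outputs=1 attributes only the top predicted class, which keeps
# the computation tractable for a 1000-way classifier.
shap_values, indexes = explainer.shap_values(image, ranked_outputs=1)

print("explained class index:", indexes)
print("attribution shape:", shap_values[0].shape)
```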
Recursive Aesthetic Reinforcement (RAR) | First Author | Midwest Graduate Research Symposium 2026
🌍 Visualize cultural bias in AI therapy advice, revealing how local knowledge is overshadowed by dominant psychological frameworks in Filipino, Indian, and Nigerian contexts.
🔍 Track contradictions in AI and human content with LBOS-LCAS, enhancing bias and coherence analysis for clearer understanding and insights.