What is the Cognitive Bias Foundation?

The Cognitive Bias Foundation is an open-source collaborative project that documents methods for identifying cognitive bias and provides resources to support that identification. Five organizations are currently involved, and we're happy to expand that number.

Want to help?  Contact us at: bias@AGILaboratory.com

Learn about what a cognitive bias is here.

Ways to get involved:

Whether you are talented in academic writing, coding, linguistics, psychology, analytics, engineering, mathematics, solutions architecture, or any number of other specialties, there are ways for you to contribute. To volunteer to advance humanity's understanding of cognitive bias through our ability to detect, differentiate, and measure it, please reach out to us via bias@artificialgeneralintelligenceinc.com, the form at https://norn.ai/cognitive-bias-detection-system/, or message our project coordinator Kyrtin Atreides directly on LinkedIn at https://www.linkedin.com/in/kyrtin-atreides/.

We’ve also added a mechanism for direct contribution of bias-positive sentences here: Bias Classifier

Additional details, clarifications/corrections, references, and notes may be submitted for any given bias on the site to provide more data for contributors to work with. Academic writers, linguists, and psychology professionals, for example, could offer great value here.

Project Status

When this project began in 2019, interest in cognitive bias detection was severely lacking, but since generative AI entered the public consciousness in late 2022, recognition of the urgent need for bias detection systems has begun to emerge. We've since integrated the prototype bias detection system with our more recent work on graph algorithms, as well as other tooling and recent advances in the field of AI.

As of May 3rd, 2023, we've conducted three rounds of testing on the new system, including running a dataset of 150 quotes from 25 individuals (6 quotes each) through a system that attempts to detect the presence of cognitive bias across 4 high-level categories and 20 subcategories. This tool operates in the cloud; a local version that drills down to detect each of the 188 individual cognitive biases listed in the Cognitive Bias Codex (2016) is stable, pending a cloud-accessible release for further testing.
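As a rough illustration of the kind of hierarchy involved, a bias taxonomy like the Codex can be represented as nested mappings from high-level category to subcategory to individual biases. This is a hypothetical sketch, not the project's actual data model; the high-level category names follow the 2016 Codex, while the subcategory and bias entries are abbreviated examples:

```python
# Hypothetical nested taxonomy: category -> subcategory -> individual biases.
# Entries are illustrative and heavily abbreviated, not the full 188-bias Codex.
TAXONOMY = {
    "Too Much Information": {
        "We notice things already primed in memory": [
            "Availability heuristic",
            "Attentional bias",
        ],
    },
    "Not Enough Meaning": {
        "We fill in characteristics from stereotypes": [
            "Group attribution error",
            "Stereotyping",
        ],
    },
}

def roll_up(bias_name):
    """Map an individual bias to its (category, subcategory), or None if unknown.

    A detector scoring individual biases could use a lookup like this to
    aggregate results into the 4 high-level categories and 20 subcategories.
    """
    for category, subcategories in TAXONOMY.items():
        for subcategory, biases in subcategories.items():
            if bias_name in biases:
                return category, subcategory
    return None
```

For example, `roll_up("Stereotyping")` returns the `"Not Enough Meaning"` category with its subcategory, which is the kind of aggregation that lets a fine-grained detector report at coarser levels.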

The current round of volunteer testing follows the instructions and methodology described here: Round 3 Participant Instructions

Subsequent rounds will apply the scientific method to iteratively validate, isolate problems with, and further refine the detection system. Some steps in this process will require new groups of volunteers, as we attempt to compare human bias detection with that of the system from many different angles.


[ Codex, Reference ]