Responsible AI and Ethics Statement
Oct 27, 2025
Introduction
Dalil is developed and owned by Siren Analytics. At Siren Analytics, we are committed to building and deploying AI technologies that serve the public good and protect fundamental human rights. Dalil, our AI-powered disinformation detection system, is designed to support truth, accountability, and information integrity across digital platforms. This document outlines the principles, safeguards, and ethical commitments that guide the development and use of our AI technology.
Purpose and scope
This Responsible AI statement applies to all components of our disinformation detection platform, including:
Machine learning models (e.g., credibility scoring, clustering, and threat detection)
Human-in-the-loop workflows (e.g., analyst validation, lead confirmation, and fact-checking)
Data collection and labeling
Model evaluation and deployment
It also governs internal processes, third-party integrations, and external use cases of the system.
Core ethical principles
Fairness and Non-Discrimination
We ensure that our AI models are trained, tested, and monitored to reduce biases across race, gender, ethnicity, geography, political orientation, and language. Disinformation affects people unequally; our system strives to mitigate these disparities rather than amplify them.
Privacy and Data Protection
User and source data are handled in compliance with applicable data protection laws (e.g., GDPR, LGPD). We implement strong security protocols and collect only the minimal data necessary. Our system does not profile individuals or store identifiable user data without consent, except for standard usage-related data collected for billing and performance purposes.
Transparency and Explainability
We prioritize interpretability by providing:
Scoring rationales
Justifications for clustering decisions
Audit trails for any flagged content
We acknowledge AI limitations and communicate clearly when results are probabilistic or AI-assisted.
Human Oversight and Accountability
All major decisions — including the classification of a threat or campaign — involve human analysts or team leads. AI augments, but does not replace, expert judgment. We assign clear roles and responsibilities for oversight, including:
Ethical AI lead
Product team accountability
Incident reporting mechanisms
Safety and Harm Prevention
We proactively assess our models for the potential to cause direct or indirect harm (e.g., wrongful labeling, censorship risks). We reject use cases that may suppress legitimate free speech or enable surveillance of vulnerable groups.
Integrity Against Misuse
We implement safeguards to:
Prevent adversarial attacks (e.g., poisoning, evasion)
Restrict misuse by authoritarian regimes, troll farms, or commercial disinformation actors
Vet partners and clients through ethical reviews
Model Governance & Lifecycle
The following outlines the ethical considerations throughout the AI lifecycle:
Phase      | Ethics Action
-----------|---------------------------------------------------------
Design     | Conduct an Ethical Impact Assessment
Data       | Curate diverse, high-quality, non-harmful data
Training   | Use bias-aware methods and evaluate on fairness metrics
Evaluation | Test for accuracy, bias, robustness, and explainability
Deployment | Set thresholds, apply risk flags, enable human overrides
Monitoring | Continuously audit outputs and allow for public feedback
Stakeholder Engagement
We actively seek input from:
Journalists, researchers, and fact-checkers
Civil society and human rights organizations
Policy experts and regulators
This helps ensure our system remains aligned with evolving ethical standards and real-world needs.
Commitments to Continuous Improvement
We commit to:
Public disclosure of key findings and updates when risks are identified
Dedicated feedback channel for users, watchdogs, and researchers
Contact and Feedback
For inquiries, partnerships, or reporting ethical concerns, please contact: hello@dalil.io