
Ethical whitepaper
13 April 2024
Dembrane is launching an initiative to maximise user agency in our tools, accelerating a move away from naively implementing large frontier AI models. While overusing these powerful models might be easier, it doesn't align with our goal of empowering people with simple and lovable tools.
Concretely, this means we strive to:
Develop AI components that put users in control, with clear boundaries around AI-generated content. Users will have the ability to accept, reject, or modify AI suggestions.
Implement transparency measures so users understand when they are interacting with AI and how it is enhancing their experience. No hidden AI, minimal black boxes.
Rigorously test for biases and blind spots in our AI applications, and proactively seek out diverse perspectives in our data and user feedback loops.
Prioritise AI applications that augment human creativity and decision-making rather than automating it away. Our tools should require active human engagement and critical thinking.
Continuously monitor and assess the impacts of our AI systems in the real world, and remain willing to iterate and adjust our approach as needed to achieve more ethical outcomes.
This blog post serves as an ethical whitepaper, outlining our position and commitment to investing in human-centered “low AI” tools. We believe this approach, though more challenging in the short term, will yield more empowering and ethically aligned technologies in the long run. Our goal is to be leaders in responsible AI innovation that genuinely benefits society.

Attending to Complexity and Pioneering "Low AI" for Ethical Collective Intelligence
As artificial intelligence (AI) continues to advance at a rapid pace, governments, businesses, and individuals are grappling with the implications. The Dutch government's recent actions, embracing generative AI while simultaneously preparing for potential regulation, highlight the complex ethical considerations surrounding this transformative technology. At Dembrane, we recognize the need to navigate this uncharted territory with care, and as a team, we often draw upon the wisdom of ethical luminaries to guide our approach.
Context
The Dutch government's apparent contradiction highlights the difficult balancing act that governments must navigate in the era of AI. They must find a way to harness the benefits of AI while simultaneously mitigating its risks and ensuring that its development and deployment align with societal values and priorities. This is no easy task, as it requires grappling with a host of complex ethical, social, and political considerations.
Moreover, the government's stance reflects the broader societal tensions surrounding AI. On one side, there is growing public enthusiasm for the potential of AI to transform our lives for the better. Many people are excited about the prospect of more efficient services, personalized experiences, and data-driven decision-making. On the other side, there is a rising tide of concern about the potential downsides of AI, including job displacement, privacy violations, the amplification of existing biases and inequalities, and the creation of new ones.
“Bias is a problem, even if it suits me.”
a short story about the challenge of bias, wherever it arises
While analysing the results of a particular participatory process, I had my first experience with AI bias. I had asked for a summary of a set of documents, and the answer I was presented with was perfectly reasonable. Exactly what I asked for, even including the perspectives of minorities and presenting a small conflict between two stakeholder groups.
It felt too polished. I asked which kinds of stakeholders would feel unseen, perhaps even offended by the summary. The AI proceeded to tell me:
Small business owners
People living in the suburbs
Frequent car drivers
Emergency services
I do not belong to any of those groups. Blinded by my own bias, I had almost submitted the report without including their crucial perspectives.
I now saw “in the wild” how the development and deployment of AI is not a neutral process, but one that can both perpetuate and amplify existing inequalities and discrimination, as well as create new biases. This bias was not intentional or malicious, but rather a reflection of the values embedded in the data and algorithms that powered the AI system. It was a stark reminder that AI is not a panacea, but a powerful tool that must be wielded with care and responsibility.
Staying with the trouble
Haraway's call to "stay with the trouble" and "muddle through" is a powerful reminder that the goal of AI development should not be to achieve some mythical state of perfection or neutrality, but rather to engage with the messy realities of the world as it is. It's about getting our hands dirty, making mistakes, and learning from them. It's about embracing the uncertainty and ambiguity that comes with charting new territory.
This perspective is deeply resonant with Edgar Morin's paradigm of complexity. Morin argues that the dominant paradigm of simplification, which seeks to reduce the world to simple, deterministic models, is fundamentally inadequate for understanding the world in all its richness and diversity. Instead, he advocates for a paradigm of complexity that embraces uncertainty, contradiction, and emergence as essential features of reality.

For Morin, engaging in complex thought is not just an intellectual exercise, but an ethical imperative. It requires us to resist the temptation of easy answers and quick fixes, and to grapple with the full complexity of the challenges we face. It demands a willingness to consider multiple perspectives, to engage in dialogue and debate, and to remain open to new ideas and possibilities.
Low AI
What if we fail?
Potential and direction
Finally, I’d like to leave you with this.
Between saying and doing, many a pair of shoes is worn out. - Iris Murdoch
Sources
Dutch government to embrace generative AI: https://www.government.nl/latest/news/2024/01/18/dutch-government-dutch-government-presents-vision-on-generative-ai
Leaked preparations for a ban on generative AI use by Dutch civil servants and their suppliers: https://www.volkskrant.nl/tech/regering-bereidt-verbod-voor-op-gebruik-ai-software-door-ambtenaren~b8948cc7/
Edgar Morin, On Complexity: https://www.amazon.com/Complexity-Advances-Systems-Theory-Sciences/dp/1572738014
Rosi Braidotti, The Ethics of Joy, Posthuman Glossary: https://www.bloomsbury.com/uk/posthuman-glossary-9781350030244/
Iris Murdoch: https://plato.stanford.edu/entries/murdoch/
Audre Lorde, Your Silence Will Not Protect You: https://www.amazon.com/Your-Silence-Will-Not-Protect/dp/0995716226
Rutger Bregman, De Meeste Mensen Deugen: https://www.amazon.com/meeste-mensen-deugen-nieuwe-geschiedenis/dp/9082942186
Donna Haraway, Staying With the Trouble: https://www.amazon.com/Staying-Trouble-Chthulucene-Experimental-Futures/dp/0822362244
In collaboration with Claude 3 Opus: https://www.anthropic.com/claude