Dutch Innovation Days

24 jun 2024

At this year's Dutch Innovation Days festival, we had the unique opportunity not only to give a keynote about the potential of combining democracy with AI and AI with democracy, but also to showcase the latest iteration of our app: the Dembrane Participation Portal.

The QR code to enter the portal was in the “Do Book”, a really fun resource for the whole festival written by the director of DID, Peter Oosterwijk. The app was up for the duration of the festival and we recorded a total of 13 conversations, from adversarial tests where participants asked the app to transcribe whale sounds (they rather amusingly got it to hallucinate Chinese characters), to deep conversations with some of the keynote speakers, Jaap Peters, Wendy van Ierschot and Bernhard Lenger.

The “Do book”

The method:
The experiment not only demonstrated the potential of our technology but also perfectly complemented the event's themes of innovation and societal impact. Let's dive into the insights we gathered, organised by key areas:

About the app

Conversations about our tool's performance at the event provided valuable insights into its strengths, weaknesses, opportunities, and threats (SWOT), as well as sparking discussions about participation and stakeholder engagement. Here’s what the participants discussed:

Strengths:

  • Efficient gathering of ideas from large groups

  • Immediate summarisation of conversations

  • Inclusion of more voices, especially quieter participants

One attendee highlighted the potential for inclusivity: "You can break it down into groups and let them all talk in a smaller group, then more voices can be heard. That would be the idea."

Another attendee enthusiastically noted: "I think it's super efficient, so that you can quickly gather a lot of ideas and things by a small group of people. It can be summarized immediately."

We’re glad you think so!

Weaknesses:

  • Lack of real-time interaction compared to in-person discussions

  • Potential for capturing unrefined thoughts

  • Privacy concerns

  • Transcription errors and potential for hallucinations

One participant expressed concern about the depth of conversation: "I'm worried [about] the weakness... it keeps people in the immature thoughts... interacting with each other but not going deep." The potential for errors was humorously highlighted: "Okay, now he thinks we're Chinese. Oh, nice. Okay, so this is going completely wrong."

Opportunities:

  • Application in civic participation and decision-making processes

  • Use in business meetings and conferences for efficient note-taking

  • Integration with other technologies for enhanced experiences

  • Potential for analysing reactions and feedback in various contexts

A participant saw potential in political applications: "I feel that there is something that, so what I understand is that they are trying to introduce this to the political aspect."

Another suggested broader applications: "Well, you can also look at it as, it's not always in a political sense, elections, etc., but you also can have on a basic level, what kind of meetings, on a village level, in your streets, if you have a street meeting or in your football club."

Threats:

  • Potential misuse of data and privacy violations

  • Over-reliance on AI summaries vs. human judgment

  • Possible amplification of misinformation if not properly filtered

  • Job displacement for human note-takers/transcriptionists

Privacy concerns were evident in a few conversations: "It feels weird to talk knowing that we're being recorded."

Our response:

We love to receive thoughts and feedback such as this! It keeps us honest and humble and lets us know where we can improve. We are happy to announce that we have included a language setting that lets the host set the language of the speaker, and we are working on capabilities to highlight potential errors so that hosts can dive into the data and catch whatever falls through the cracks.
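To make that concrete, here is a minimal sketch of the idea, assuming the open-source openai-whisper package and hypothetical threshold values; it illustrates how a host-set language and simple confidence checks could work, and is not our actual pipeline:

    import whisper  # open-source openai-whisper package

    def transcribe_with_review_flags(audio_path, host_language="nl"):
        """Transcribe audio in the host-chosen language and flag dubious segments."""
        model = whisper.load_model("base")
        # Fixing the language stops the model from guessing a language
        # (e.g. Chinese characters for whale sounds) when the audio is ambiguous.
        result = model.transcribe(audio_path, language=host_language)

        flagged = []
        for segment in result["segments"]:
            # Low average log-probability or a high no-speech probability are
            # rough signs that a segment may be hallucinated or garbled.
            if segment["avg_logprob"] < -1.0 or segment["no_speech_prob"] > 0.6:
                flagged.append((segment["start"], segment["end"], segment["text"]))
        return result["text"], flagged

Flagged segments could then be surfaced to the host for a quick manual check, rather than silently ending up in the summary.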

As for the fear of being recorded: we have noticed in other tests that this is a typical reaction when using the app for the first time, but people quickly adapt to the new situation and forget the app is recording within a few minutes. Obviously, as a startup we still have to build up a reputation you can trust, which we strive to earn by being as careful as possible with the data entrusted to us.

As for the fear about the app not encouraging deeper conversations: that’s exactly what our model wants to avoid! We’re working on that in a number of ways:

  • A great insight isn’t just a great answer, it’s also a great question: that’s why we are working on tools to enhance interactivity and iteratively improve the questions asked.

  • By increasing the speed of iteration on the group’s perspective, people are able to go deeper into what matters.

  • Because our tool is so minimally invasive, all other methods for going deeper into the conversation are still available! We have used our tool in sessions with facilitators and interview specialists, who all note that because they don’t have to focus on note taking, they can better focus on the people and the content during the conversation, leading to more complete insights during analysis.

In general though, I both recognise and cherish the responses we received. We’re not here to replace cherished jobs, or to amplify misinformation. We are here to stop conversations from going in circles by providing timely interactive documentation. We’re here to reduce the workload of high-quality participation for SMEs and public organisations. We’re here to make sure every voice that wants to contribute is heard, and that no good ideas are lost in piles of post-its or piles of reports. We want to make the wisdom of the crowd accessible to all communities.

Talks with Keynote speakers

At the heart of our conversation with Jaap Peters, Wendy van Ierschot, and Bernhard Lenger, a fascinating thread emerged: the evolving nature of human responsibility and decision-making in an increasingly AI-driven world. This discussion wove through various topics, from organisational models to justice systems, highlighting the tension between optimisation for the few and the well-being of the many.

The Rijnlands Model: Decentralization and Responsibility

The conversation kicked off with a discussion of the Rijnlands Model, a business philosophy that stands in stark contrast to the Anglo-Saxon or American model. Jaap Peters, author of several books on the subject, succinctly described its core principles:

"Het is decentraal. Het is gebaseerd op samenwerking in plaats van, wij doen alles zelf. Het is gebaseerd op langetermijndenken in plaats van korte termijn shareholder value."

As found on SlideShare, the core choice and its consequences compare as follows, with the Anglo-American view first and the Rijnland view second:

  • Starting point: the market is the engine of society vs. cooperation is the engine of society

  • The winner takes all vs. respect for minorities

  • Also in the not-for-profit sector (care, education, government) vs. attention, love, beauty and collective ambition

  • Do → think (begin before you consider) vs. think → do (consider before you begin)

  • The state is the problem vs. the state plays a key role

  • Freedom vs. freedom, order and a sense of community

  • Mechanical / rational vs. organic / intuitive

  • Dogmas vs. axioms

  • Every man for himself and God for us all vs. all for one and God for oneself

This model emphasises flipping the organisational hierarchy 90 degrees, and pushing responsibility forward through the organisation, enabling those closest to the action to make decisions. As Jaap noted:

"De verantwoordelijkheid zo ver mogelijk naar voren brengen. Want hoe dichter je bij de echte tijd zit, in plaats van de geplande tijd... Hoe beter die mensen een mandaat moeten hebben om daar iets te kunnen spelen."

(Bringing responsibility as far forward as possible. Because the closer you are to real time, instead of planned time... The more important it is for those people to have a mandate to be able to play.)

The Tension: Optimisation vs. Holistic Success

The discussion then pivoted to comparing the American and European business models, highlighting a fundamental tension: optimising for a select few versus raising the bar for all. This echoes ideas from Nassim Nicholas Taleb, who emphasises the importance of robustness and antifragility: systems should be designed to withstand extreme events and failures (focusing on the good of the whole) rather than merely optimised for incremental improvements (the good of some parts).

One participant, responding to a comment that there is more investment in innovation in America, said:

"Je hebt hier gewoon gelijk in. Er is daar heel veel meer geld. En het werkt ook dat wel met die energie en dingen uit de grond stampen enzovoort, het werkt ook. Alleen het gaat ook ten koste van iets. […] Het ten koste gaat van onze klimaat, van de gezondheid van mensen enzovoort. Dat nemen we niet mee, ook niet in ons beeld van hoe succesvol het is. Het is financieel heel succesvol, dat mag. Maar ja, er zijn ook altijd mensen die daar in een burn-out komen en er is extra armoede."

(You are simply right about this. There is much more money there. And it does work, with that energy and getting things off the ground and so on, it works. But it also comes at a cost. […] It comes at the cost of our climate, of people's health and so on. We don't take that into account, not even in our picture of how successful it is. Financially it is very successful, fine. But there are also always people who end up in burnout, and there is extra poverty.)

This tension between optimisation and holistic success is at the core of many contemporary debates about business ethics, wealth distribution, and societal progress.

Another participant noted that a system in which excellence is not rewarded does not allow for the radical innovations that can lift up a society, whereas a system that only optimises for excellence creates a lot of wealth but dehumanises those less fortunate. How can we combine the best of both worlds? The challenge is finding the right balance: implementing robust safety nets to ensure no one is left behind, while also creating environments that encourage innovation and high achievement.

AI and the Future of Responsibility

As the conversation shifted to AI and its implications, a provocative idea emerged: in a world where AI can do almost everything, human responsibility might become our most valuable asset. One participant hypothesised:

"Mijn hypothese is, er is een prijs op verantwoordelijkheid en die prijs gaat omhoog. En uiteindelijk als AI alles kan, dan is het enige wat overblijft is verantwoordelijkheid nemen."

(My hypothesis is, there's a price on responsibility and that price is going up. And ultimately, if AI can do everything, then the only thing left is taking responsibility.)

This idea challenges us to reconsider the role of humans in an AI-driven world. AI alignment is fundamentally a double question: “How do we align AI to human values?” and “Which values do we align to?” Rather than being replaced, our uniquely human capacity for ethical decision-making and responsibility-taking might become more crucial than ever, as we take responsibility for the actions of AIs acting on our behalf.

The Human Need for Recognition in Justice

The conversation took an interesting turn when discussing justice systems, highlighting the human need for recognition of harm done. This ties back to the theme of responsibility in a profound way. One participant shared an anecdote about a victim of a crime who demanded personal justice.

This insight reminds us that even as we develop more sophisticated AI systems, including those that might be involved in decision-making or justice processes, we must not lose sight of the deeply human aspects of responsibility, accountability, and reconciliation.

Conclusion: Balancing AI Capabilities with Human Values

As we navigate the complex landscape of AI integration into our businesses, organisations, and society at large, the insights from this conversation remind us of the importance of maintaining and enhancing human agency and responsibility.

The Rijnlands Model, with its emphasis on decentralisation and long-term thinking, offers a valuable framework for considering how we might structure our organisations and societies to promote responsible decision-making at every level, but especially in the thick of the action. At the same time, the potential of AI challenges us to reconsider and perhaps elevate the uniquely human capacity for taking responsibility.

As we move forward, the key challenge will be to harness the power of AI while preserving and enhancing the human elements that make our societies and organisations truly thrive. This means not just optimising for efficiency or profit, but consciously designing systems that raise the floor for everyone, promoting responsibility, ethical decision-making, and human dignity.

As we continue to develop and deploy AI technologies, let's ensure we're building a society that not only leverages the power of AI but also amplifies our human capacity for responsibility, empathy, and collective progress.

These discussions, captured in real time by our tool and analysed by me, Jorim, working with Dembrane’s AI, give a taste of the rich conversations at the Dutch Innovation Days. They also highlight the potential of our AI-powered feedback tool to capture and synthesise complex, multifaceted conversations in a way that can inform future innovation and decision-making processes.

Isn't that something!

Ready to take your next event to the next level with Dembrane?