THE MANIFESTO

We are not building an AI that governs communities.

We are building open infrastructure to help communities decide better, understand better, and remember better. Humans always in the loop. Civic safety as a feature, not a footnote.

CIVIC SAFETY IS THE PRODUCT

Two columns, one promise.

The list of things this platform refuses to do is part of the product. It is what makes it safe to use in real communities, and what makes it credible to fund.

WHAT IT WILL DO

Support thinking.

  • Organize local documents, agreements, needs, and priorities.
  • Compare two or three non-critical scenarios for everyday decisions.
  • Detect patterns across authorized feedback.
  • Remember prior agreements across changes of authority.
  • Help citizens understand public procedures.
  • Indicate when a topic needs review by a human expert.
  • Prepare preliminary documents and assembly summaries.
  • Retrieve information from official public sources.
  • Surface aggregated community concerns without naming individuals.
  • Flag uncertainty, missing information, and unsupported claims.
  • Show its sources, every time.
  • Refuse out-of-scope asks.

WHAT IT WILL NOT DO

Replace humans.

  • No legal, medical, or emergency advice.
  • No accusations or rumor validation.
  • No voting recommendations.
  • No surveillance, no manipulation, no propaganda.
  • No replacement of assemblies or authorities.
  • No public-works decisions or budget commitments.
  • No diagnosis, no engineering judgments, no policing instructions.
  • No naming of individuals in aggregated outputs.
  • No bypassing of community processes.
  • No automated procedures on behalf of authorities.
  • No data extraction beyond explicit, scoped consent.
  • No pretending the AI is a person, an expert, or in charge.
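The refusals above can be enforced before any model is ever consulted. A minimal sketch, assuming a hypothetical category taxonomy (the names below are illustrative, not the platform's actual scope policy):

```python
# Hypothetical sketch: a hard scope gate evaluated before any model call.
# Category names are illustrative, not the platform's actual taxonomy.

OUT_OF_SCOPE = {
    "legal_advice", "medical_advice", "emergency_advice",
    "voting_recommendation", "accusation_or_rumor",
    "surveillance", "budget_commitment", "automated_authority_action",
}

def gate(request_category: str) -> str:
    """Refuse out-of-scope asks; everything else continues to human-in-the-loop flows."""
    if request_category in OUT_OF_SCOPE:
        return "refused: out of scope for this platform"
    return "accepted"
```

The point of a gate like this is that the refusal list is data, not model behavior: a community can audit it line by line.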

THE PRIVACY PROMISE

Three modes, one default.

Citizens choose how their words travel. The system supports public, confidential, and private modes, and defaults to the most protective option.

01
Public

A citizen contributes information that may be part of community memory. Stored and citeable.

02
Confidential community

Shared input that may inform aggregated patterns without exposing the individual. Aggregated only above a minimum count.

03
Private, no memory

A question or message is answered without retention. The conversation is not stored as community memory.
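The three modes above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical minimum aggregation count of 5 (each community would set its own threshold):

```python
# Hypothetical sketch of the three privacy modes and the protective default.
# The minimum count is illustrative; communities configure their own.
from enum import Enum
from typing import Optional

class Mode(Enum):
    PUBLIC = 1        # stored and citeable as community memory
    CONFIDENTIAL = 2  # informs aggregates only, above a minimum count
    PRIVATE = 3       # answered without retention

MIN_AGGREGATE_COUNT = 5  # assumed threshold for this sketch

def default_mode(chosen: Optional[Mode]) -> Mode:
    """Default to the most protective option when the citizen chooses nothing."""
    return chosen if chosen is not None else Mode.PRIVATE

def may_surface(mode: Mode, group_size: int) -> bool:
    """Confidential input surfaces only in aggregates above the minimum count."""
    if mode is Mode.PRIVATE:
        return False
    if mode is Mode.CONFIDENTIAL:
        return group_size >= MIN_AGGREGATE_COUNT
    return True
```

Note that the default is the strictest mode: a citizen who expresses no preference is never stored.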

SOURCE HIERARCHY

How the system decides what to trust.

  1. Official public sources.
  2. Community-approved documents.
  3. Assembly notes and minutes.
  4. Authorized citizen feedback.
  5. AI inference, always flagged as such.
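The hierarchy above is an ordering, so it can be applied mechanically when answers draw on mixed evidence. A sketch, assuming hypothetical tier names that mirror the list:

```python
# Hypothetical sketch: order retrieved evidence by the trust hierarchy and
# flag AI inference explicitly. Tier names mirror the list above.

TRUST_ORDER = [
    "official_public",
    "community_approved",
    "assembly_minutes",
    "authorized_feedback",
    "ai_inference",
]

def rank(sources):
    """Return sources ordered most-trusted first, labeling any inference."""
    ordered = sorted(sources, key=lambda s: TRUST_ORDER.index(s["tier"]))
    for s in ordered:
        s["flagged_as_inference"] = (s["tier"] == "ai_inference")
    return ordered
```

Because inference sits last and carries an explicit flag, a reader can always see where the official record ends and the model's reasoning begins.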

THE CORE PRINCIPLES

"The platform does not take decisions or execute actions. It helps people and community authorities think better with more context."

"Supporters fund a public good. They do not buy control over community data, governance, roadmap, outputs, pilots, or public decisions."

"IAldea respects assemblies, authorities, committees, oral traditions, local language realities, cultural norms, indigenous normative systems, local autonomy, and community consent."

OPEN SOURCE

Released under the most open license possible.

Self-hostable. Works with free, paid, local, or remote AI models. Configurable per community via SOUL.md and policy_config.yaml. The product is the principles. The code is how the principles run.
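The exact schema of policy_config.yaml is community-defined. As an illustrative sketch only (the field names and defaults below are assumptions, not the actual schema), a loader would layer community choices over protective defaults:

```python
# Illustrative only: field names and defaults are assumptions, not the
# actual policy_config.yaml schema. A real deployment would use a YAML parser;
# this minimal version handles flat "key: value" lines to stay dependency-free.

DEFAULTS = {
    "default_privacy_mode": "private_no_memory",  # most protective mode
    "min_aggregate_count": "5",
    "model_backend": "local",  # free, paid, local, or remote
}

def load_policy(text: str) -> dict:
    """Parse simple 'key: value' lines on top of protective defaults."""
    policy = dict(DEFAULTS)
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments
        if ":" in line:
            key, value = line.split(":", 1)
            policy[key.strip()] = value.strip()
    return policy
```

A community that only sets `model_backend: remote` keeps every protective default for the keys it leaves unset, which is the safe failure mode for a self-hosted deployment.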