Love the bottom-up assembly. You do a great job building the outgoing flows from individual to group to society. But there is a strange gap or leap that you make when it comes to incoming flows.
"Over time, the society can see that one district is consistently exporting food while importing debt, another is consuming fuel without replenishing local ecological capacity, and unpaid care work is concentrated in households that appear “inactive” in formal markets.... Legible commitments would make the pattern available for judgment, response, and repair."
This just sounds like the solution you offer is the welfare state, with the professional class having a new improved surveillance device, thanks to CP. Surveillance is the problem, not the solution. We need less data colonialism, less surveillance capitalism, less vulnerability of our cherished rights to privacy invaded. Look at current Republican proposals: collecting voting information, collecting data on which purchases women make so they can arrest anyone who attempts reproductive control of their own bodies.
I think a better model is one in which the CP begins, as you say, as a small group of mutual aid. The commodities they decide upon during curation and valuation include their own data. They decide what will be publicly legible. This is how biology does it too. Your body has amazing layers of security: for example the blood/brain barrier acknowledges the fact that free exchanges need to take place with the CP of neurons, but that is somewhat privatized from flows outside the brain. This is true at every scale down to single cells and the semi-permeable membrane. It is NOT "fully transparent" -- just the opposite, it has selective flows controlling what can go in or out.
The problems you point towards--one region is out of balance, lacking generative justice--are indeed real problems, and the solutions need semi-permeable flows. From the state we need laws that nurture the flows missing from the bottom-up. For example, laws in Northern Italy have for many decades protected and nurtured worker-owned collectives. In California, new laws permit cities to start community banking systems. Support, not surveillance.
Thank you for such a careful and generous reading. I really appreciate how deeply you engaged … especially in catching that asymmetry between outgoing and incoming flows. That’s a real gap, and your response helps sharpen what the article is actually trying to say.
I think the key may be here: The example of “society seeing imbalances” can sound like a centralized, top-down surveillance system. But that’s not the model I’m trying to point toward.
If anything, I’m trying to question exactly that.
The core idea of the article is not that more visibility is good, or that we need better surveillance. It’s that:
groups need to decide what becomes legible to each other in order to care for one another.
That applies at every scale:
• a family
• a mutual aid group
• a cooperative
• a community
• a state
The question is: what should be visible, to whom, and under what conditions, so that care becomes possible without collapsing into control?
This is where commitment pools come in.
They are not about full transparency. They are inherently semi-permeable:
• curation (what is included)
• valuation (how it is interpreted)
• limitation (how much can flow)
• exchange logic (under what conditions flows occur)
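To make those four properties concrete, here is a minimal sketch (all names hypothetical, not Sarafu code) of a pool whose membrane is enforced by curation, valuation, limitation, and exchange logic:

```python
from dataclasses import dataclass, field

@dataclass
class Pool:
    """A toy commitment pool: a semi-permeable membrane, not an open ledger."""
    curated: set = field(default_factory=set)   # curation: which commitment types are admitted
    rates: dict = field(default_factory=dict)   # valuation: how each type is interpreted
    caps: dict = field(default_factory=dict)    # limitation: how much of each type may flow in
    held: dict = field(default_factory=dict)    # current holdings per commitment type

    def deposit(self, kind: str, amount: float):
        """Exchange logic: a flow occurs only if the type is curated and under its cap."""
        if kind not in self.curated:
            return None                          # not curated: the membrane rejects it
        if self.held.get(kind, 0) + amount > self.caps.get(kind, 0):
            return None                          # over the limit: flow refused
        self.held[kind] = self.held.get(kind, 0) + amount
        return amount * self.rates.get(kind, 1)  # credited at the pool's own valuation

pool = Pool(curated={"farm-voucher"}, rates={"farm-voucher": 0.5}, caps={"farm-voucher": 100})
print(pool.deposit("farm-voucher", 40))   # 20.0 — accepted, valued at the pool's rate
print(pool.deposit("my-health-data", 5))  # None — never curated, never enters the pool
```

The point of the sketch is that rejection is the default: nothing enters unless the group has already curated it.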
In that sense, I agree very strongly with your biological analogy.
A commitment pool is much closer to a cell membrane than a surveillance system.
It is not trying to expose everything.
It is trying to make certain relationships legible within a bounded context, for a specific group, for a specific purpose.
And importantly: different pools can choose different levels of legibility.
Some may be highly private.
Some may be more open.
Some may share only aggregates.
Some may share nothing externally at all.
So the model is not: society becomes fully visible to a central authority
It is: many semi-permeable groups choose what to make legible to each other
..... and selectively, across groups
…. On the STATE question:
I share your concern. I grew up with a very similar distrust of how state apparatuses use information…. (often to punish, exclude, or control.)
So I don’t see the state as the default locus of this legibility.
But I also don’t think the answer is simply “less visibility.”
There is another problem: harm and neglect can remain invisible when nothing is legible.
The example points to ... regions out of balance ... unpaid care hidden .... ecological depletion unseen
These are not problems created by visibility.
They are often problems sustained by lack of shared perception.
So the tension is real:
• too much legibility → control, surveillance, coercion
• too little legibility → neglect, invisibility, inability to respond
The essay here is trying to sit inside that tension, not resolve it prematurely. (hope that is clearer)
…
The deeper point I was aiming at is this:
A society (social organism at any scale) cannot care for what it cannot perceive.
But what it perceives must never collapse the people (or cells) within it into objects of control.
That’s why the question is not transparency vs privacy.
It is: what forms of legibility support care without enabling domination?
….
I think where your critique really strengthens the piece is here:
You’re emphasizing that:
• legibility must be chosen, not imposed
• flows must be semi-permeable, not open
• support should come from institutions that nurture, not surveil
…
One last thought:
You mention the welfare state model. I think that’s a useful contrast…. And not at all what I’m aiming for.
The welfare state typically works like:
→ central authority gathers data
→ classifies populations
→ redistributes based on categories
What I’m exploring is closer to:
→ local groups make commitments legible to each other
→ those commitments form pools
→ pools connect selectively
→ larger patterns become perceptible without requiring total exposure
More like: confederated perception, not centralized surveillance.
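As a hedged sketch of that contrast (hypothetical names, not any real Sarafu API): each pool applies its own export policy, so the wider network only ever perceives what each membrane lets through:

```python
# Confederated perception: the "society" level sees only what each pool exports.
def export_view(pool_ledger: list, policy: str):
    """pool_ledger: internal records like {"who": ..., "what": ..., "amount": ...}."""
    if policy == "closed":
        return None                                   # share nothing externally
    if policy == "aggregate":
        return {"total": sum(r["amount"] for r in pool_ledger),
                "count": len(pool_ledger)}            # pattern visible, people not
    if policy == "open":
        return [dict(r, who="member") for r in pool_ledger]  # records, pseudonymized
    raise ValueError(policy)

ledger = [{"who": "A", "what": "school fees", "amount": 30},
          {"who": "B", "what": "tomatoes", "amount": 12}]
print(export_view(ledger, "aggregate"))  # {'total': 42, 'count': 2}
print(export_view(ledger, "closed"))     # None
```

An "aggregate" policy is what lets larger patterns (regional imbalance, hidden care work) become perceptible without exposing individuals.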
And your biological framing is exactly right:
Not full transparency.
Not opacity.
But selective permeability.
….
As usual … I’m grateful for your push on this … it helps clarify that the article isn’t about “seeing everything,” but about: learning how to see enough, in the right way, to care without controlling.
I would welcome a closer re-read of the (very long) essay and would love your thoughts on whether that framing lands better.
(You may also find this essay useful; it touches on the benefits and dangers of radical transparency: https://willruddick.substack.com/p/on-letting-go )
That is a great response (as usual my remarks are not "you are wrong" but more "I am pretty sure you do not intend to have this read as saying the following"). It is still not clear to me how this matches up with the Sarafu code. You have every transaction routed through blockchain, including purchase information about my health, my sexual practices, my reading list etc. that I do not want made public. How, at the coding side, do we make Sarafu into Noneofyourdamnedbusiness-fu?
Most of what we currently call “private systems” are not actually private.
They are honeypots.
They gather information in one place, under the promise of protection, but all it takes is one relayer, one admin, one compromised server, one subpoena, one frightened participant, one incentivized insider… and the whole thing becomes exposed. In many cases, the system is not just vulnerable to this, it quietly depends on it.
Even highly sophisticated stacks, including zero-knowledge approaches, often assume a stable and trustworthy surrounding environment that simply does not exist in most real-world contexts.
So I don’t actually believe that “perfect privacy” systems, in the way we often imagine them, are the answer. In practice, they tend to create the illusion of safety, while concentrating risk. (they certainly make law enforcement suspicious .. and often for good reason)
And I say this from experience, not theory.
In 2012, our entire team, along with a network of businesses in Bangladesh (an informal settlement in Mombasa, Kenya), was arrested.
We were not using blockchain.
We were not using digital systems.
We were using paper vouchers and paper ledgers.
Total privacy. Zero online transparency.
And still, a local police station decided they didn’t like what we were doing.
That moment clarified something very deeply for me:
privacy alone does not protect you.
If anything, it can leave you more vulnerable.
Because when something is invisible, it is also easier to misrepresent, to criminalize, to shut down without scrutiny.
What protected us was not privacy.
It was legibility in the right direction.
As a non-profit, working openly with the local chief, with transparent books, we were helping people make their real economic lives visible:
a woman paying school fees over time with tomatoes
debts being settled across multiple parties
credit clearing through relationships, not cash
These are not new ideas. Multilateral clearing, systems like mweria, these are ancient.
We were simply making them more legible.
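For readers unfamiliar with the mechanics, here is a minimal illustrative sketch of multilateral clearing (not the Bangla-Pesa bookkeeping itself): obligations around a cycle net out without any cash changing hands:

```python
# Multilateral clearing: the smallest debt in a cycle can be cleared all the way
# around the loop, shrinking every obligation in the cycle by that amount.
def clear_cycle(debts: dict) -> dict:
    """debts: {(debtor, creditor): amount} forming a cycle; returns what remains."""
    smallest = min(debts.values())
    return {pair: amt - smallest for pair, amt in debts.items() if amt - smallest > 0}

# A owes B 30, B owes C 20, C owes A 25: clearing 20 around the loop
remaining = clear_cycle({("A", "B"): 30, ("B", "C"): 20, ("C", "A"): 25})
print(remaining)   # {('A', 'B'): 10, ('C', 'A'): 5}
```

The same netting is what "credit clearing through relationships, not cash" does in practice, whether on paper ledgers or in code.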
And when the case went to the high court, that legibility mattered.
That woman stood and testified:
she did not have Kenyan shillings!
she would gladly use them if she did!
she just wanted to send her child to school!
That truth, made visible, changed the outcome.
We won.
And more than that, the process exposed something else:
that the system trying to stop us was out of alignment with people’s lived reality.
That exposure was powerful.
It didn’t harm us.
It strengthened the legitimacy of what people were already doing.
So when you ask:
how do we make Sarafu into “none-of-your-damned-business-fu”?
My answer is:
that is not the only question.
The deeper question is:
what must remain private, and what must be made legible so that people are protected, supported, and able to coordinate with care?
And importantly:
to whom?
Because transparency is not one-directional.
If people are visible to a system, that system must also be visible to them.
If commitments are legible, the governance around them must also be legible.
Otherwise it is surveillance.
In the case of Sarafu and commitment pools, the design is not about exposing everything.
It is about creating semi-permeable boundaries.
Different pools can:
choose what is visible
choose what is aggregated
choose what stays internal
choose what is shared across groups
And yes, we can build more privacy-preserving versions.
If you want a fully private version of Sarafu.Network, please do build it. That is exactly why it is open source!
Use the code. Use the docs. Use AI. Ask for help!
That experimentation is necessary.
But I don’t think the answer lies in trying to make systems completely opaque.
Because in the real world, opacity does not eliminate power.
It often just hides it.
.....
What I am trying to explore is something more difficult:
how to create forms of legibility that protect people, reveal care, and expose injustice when it happens… without collapsing into control.
.... I think it requires letting go of the idea that privacy will save us.
And instead working with:
selective visibility
shared governance
mutual accountability
and systems that can show the good as clearly as they could show the harm
Because sometimes, being seen is not the danger.
Sometimes, being unseen is.
Great points about the positive role that visibility can play.
"Make it entirely opaque" is not what I said. My suggestion was "The commodities they decide upon during curation and valuation should include their own data. They decide what will be publicly legible."
You seem to be OK with that *in theory*. You said "Different pools can choose what is visible". I just don't see any code development for that in Sarafu. Blockchain does not allow that option.
Maybe what you mean is "Different pools can choose what is visible in theory, but that is not possible in Sarafu; it would have to be a different platform, one that is not using blockchain."
If that's the case, I would not think less of Sarafu (or you!!). But I would really like to know.
ok, this helps clarify your point much more precisely, and I think I see exactly where the gap is.
You’re not arguing for opacity.
You’re arguing for agency over legibility .... that the group itself decides what becomes visible. (please provide a spec.)
And I agree with that.
Where the confusion seems to be is in what layer that choice exists in Sarafu Network.
.... So let me try to say this very concretely.
In the current Sarafu design, there are two distinct decisions:
First, what you choose to express as a commitment at all
Second, what you choose to pool (i.e. make exchangeable within the network)
Only that layer (your chosen commitments and pools) becomes public.
So if you create a voucher, or list an offering into the pool, that is intentionally legible ..... very much like listing a product on Amazon, or posting an offer in a market. You are saying: this is something I am willing to stand behind and exchange with others. .. Honestly you should never publish anything if you can't make this clear to yourself and others.
But/and everything outside of that .... your internal life, your relationships, your private exchanges that you don’t formalize into public commitments ... is not / should not be part of the sarafu.network registry at all. (DO NOT PUBLISH IT.)
So the “choice of visibility” in Sarafu is not happening after the fact through privacy controls on transactions.
It is happening before the fact, at the level of:
what do I choose to make into a formal, exchangeable commitment?
That’s the boundary.
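A tiny illustration of that before-the-fact boundary (hypothetical code, not the Sarafu implementation): the registry never sees anything that was not deliberately published into it:

```python
class Registry:
    """A toy public registry: everything in it is, by design, legible to all."""
    def __init__(self):
        self.public = []

    def publish(self, commitment: str):
        """Crossing this boundary is the act of choosing visibility."""
        self.public.append(commitment)

# Everything below stays on the private side unless explicitly published.
private_life = ["read a banned book", "bought medicine", "offer: 10kg maize weekly"]

registry = Registry()
registry.publish(private_life[2])   # only the chosen commitment crosses the boundary

print(registry.public)                        # ['offer: 10kg maize weekly']
print("bought medicine" in registry.public)   # False: never published, never visible
```

There are no privacy controls after the fact because nothing private was ever handed over in the first place.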
Now, you are absolutely right about something important:
Once a commitment is made legible in Sarafu, it is public within that system.
And that is not accidental.
Grassroots Economics, as a foundation, is legally and ethically on the hook for what is listed and exchanged. That means we have to curate the registry. We cannot allow illegal or harmful commitments to circulate. So there is a governance layer there that necessarily implies visibility.
In that sense, Sarafu is not trying to be a private system.
It is closer to a public curation (registry) of commitments and pools, with explicit legal accountability.
Different people/groups can choose what they bring into legibility at all.
And once they do, that commitment enters a shared, accountable space.
Your question then becomes very sharp, I think, Ron:
Could there be a version where visibility is controlled after commitments are created .... where data itself is treated as a commodity, and selectively shared?
Yes. (Spec that out, Ron, and share / ask for help ... my sense is that it is not trivial at all and may actually backfire at the moment ... see the history of privacy with Bangla-Pesa.)
But that would, at the moment, require a fundamentally different (ledger) architecture.
Not just a tweak to Sarafu.Network, but a different approach to:
storage
access control
validation
governance
And likely a different set of tradeoffs (some not so great).
So I think the honest answer is:
Sarafu, as it exists, does not provide fine-grained control over transaction visibility within the network (once you publish your vouchers or pools they are public).
What it provides is:
control over what becomes a commitment (you choose)
control over whether you participate (you choose)
governance over what is allowed into your pools (you choose)
But once something is in, it is legible.
And I want to say .... I don’t see that as a flaw, but as a design stance.
It is trying to create a space where:
commitments are clear
exchanges are accountable
and trust can be built across people who may not know each other
That is healthy and requires a certain level of shared visibility.
At the same time, I fully agree with your instinct:
There is room ( and probably a need ) for systems where:
data itself is curated as a privately held commodity
visibility is selectively shared
and groups define their own permeability more granularly
If you want to explore that direction, I genuinely think that’s valuable.
Sarafu.Network is open source for exactly that reason.
You can fork it, modify it, layer privacy mechanisms on top, or build something entirely different using the same core ideas.
And I would be very happy to support that (while weighing the tradeoffs).
.... Sarafu answers one question:
What happens if we make commitments legible in a shared, governed space?
Your question is another:
What would it look like if legibility itself were selectively governed as a shared resource? (n.b. this is part of what cosmo-local credit DAO is looking at)
I think both are important.
And they don’t have to be the same system - and can still be interoperable with the same commitment pooling protocol.
The data part of this is becoming increasingly important in the context of AI. In the grant we received from the OpenAI foundation we initially proposed that the platform we were building for African artisans (ubuntu-AI.net) would allow them to market their creative images as data. But that turned out to be a poor approach because once sold, they lose control over the design images. So we have now switched to a system in which they can create AI models for themselves, and those can be shared internally, marketed externally, whatever they decide they want to do. Interestingly this means the raw data of their design images remains hidden, even if the AI model is made public. As you say there is room for research on selective legibility (i.e. society's semi-permeable membranes!) More about this at https://www.researchgate.net/publication/391972500_Ubuntu_AI_Machine_Learning_for_Regenerative_Design_Ecologies
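One hedged way to picture that switch (illustrative only; the real Ubuntu-AI pipeline is of course far richer): the shareable artifact is a trained model's parameters, from which the raw design data is not directly recoverable:

```python
# Sketch of "share the model, keep the data": the artisan publishes only
# summary parameters, while the raw designs stay on their side of the membrane.
def train_local(raw_designs: list) -> dict:
    """Stand-in for model training: returns parameters, never the data itself."""
    n = len(raw_designs)
    mean = sum(raw_designs) / n
    var = sum((x - mean) ** 2 for x in raw_designs) / n
    return {"mean": mean, "var": var}          # the shareable artifact

raw = [2.0, 4.0, 6.0]        # stays private: never uploaded or listed
model = train_local(raw)

# Only `model` is published; the individual values in `raw` cannot be read off it.
print(model)   # {'mean': 4.0, 'var': 2.6666666666666665}
```

The model here is deliberately trivial (a mean and variance), but the membrane logic is the same as with shared image-generation models: the public artifact is derived from, not identical to, the private data.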
Thanks for this, it validates what a growing number have claimed, some for more than a decade. I've referenced your article in this essay:
The Pattern Was Always There - How structural analysis, vindicated across a decade, points toward what comes next
https://www.outersite.org/the-pattern-was-always-there/