librelist archives

Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-01 @ 23:07
I just started a personal blog, and my first post includes some thoughts 
I've wanted to get down for a while:

http://adamierymenko.com/decentralization-i-want-to-believe/

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Michael Rogers
Date:
2014-08-07 @ 16:32

Hi Adam,

This is a great post. I share your frustration with the difficulty of
building decentralised systems that are usable, efficient and secure.
But I have some doubts about your argument.

I don't think the Tsitsiklis/Xu paper tells us anything about
centralisation vs decentralisation in general. It gives a very
abstract model of a system where some fraction of a scarce resource
can be allocated wherever it's needed. I'm not surprised that such a
system has different queueing behaviour from a system with fixed
allocation. But it seems to me that this result is a poor fit for your
argument, in two respects.

First, the result doesn't necessarily apply beyond resource allocation
problems - specifically, those problems where resources can be moved
from place to place at no cost. I don't see the relevance to the
lookup and routing problems you're aiming to solve with ZeroTier.

Second, the advantage is gained by having a panoptic view of the whole
system - far from being a blind idiot, the allocator needs to know
what's happening everywhere, and needs to be able to send resources
anywhere. It's more Stalin than Lovecraft.

I'm not denying that a touch of centralisation could help to make
ZeroTier more usable, efficient and secure - I just don't think this
paper does anything to support that contention.

You mention split-brain and internet weather as problems ZeroTier
should cope with, but I'm not sure centralisation will help to solve
those problems. If the network is partitioned, some nodes will lose
contact with the centre - they must either stop operating until they
re-establish contact, or continue to operate without the centre's
guidance. A distributed system with a centre is still a distributed
system - you can't escape the CAP theorem by putting a crown on one of
the nodes.

It's true that nobody's been able to ship a decentralised alternative
to Facebook, Google, or Twitter. But that failure could be due to many
reasons. Who's going to buy stock in a blind-by-design internet
company that can't target ads at its users? How do you advertise a
system that doesn't have a central place where people can go to join
or find out more? How do you steer the evolution of such a system?

All of these questions are easier to answer for infrastructure than
for public-facing products and services. Facebook, Google and Twitter
sit on top of several layers of mostly-decentralised infrastructure.
Since you're building infrastructure, I wonder whether it would be
more useful to look at how centralisation vs decentralisation plays
out at layers 2-4, rather than looking at the fully-centralised
businesses that sit on top of those layers.

The blind idiot god is a brilliant metaphor, and I agree it's what we
should aim for whenever we need a touch of centralisation to solve a
problem. But if we take into account the importance of metadata
privacy as well as content privacy, I suspect that truly blind and
truly idiotic gods will be very hard to design. A god that knows
absolutely nothing can't contribute to the running of the system. So
perhaps the first question to ask when designing a BIG is, what
information is it acceptable for the BIG to know?

Cheers,
Michael

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some
> thoughts I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-19 @ 19:52
Getting to this a bit belatedly… :)

On Aug 7, 2014, at 9:32 AM, Michael Rogers <michael@briarproject.org> wrote:

> I don't think the Tsitsiklis/Xu paper tells us anything about
> centralisation vs decentralisation in general. It gives a very
> abstract model of a system where some fraction of a scarce resource
> can be allocated wherever it's needed. I'm not surprised that such a
> system has different queueing behaviour from a system with fixed
> allocation. But it seems to me that this result is a poor fit for your
> argument, in two respects.
> 
> First, the result doesn't necessarily apply beyond resource allocation
> problems - specifically, those problems where resources can be moved
> from place to place at no cost. I don't see the relevance to the
> lookup and routing problems you're aiming to solve with ZeroTier.

I have an admission to make. I did a very un-academic right-brainy thing, 
in that I made a little bit of a leap. When I read “phase transition” it 
was sort of an epiphany moment. Perhaps I studied too much complexity and 
evolutionary theory, but I immediately got a mental image of a phase 
transition in state space where a system takes on new properties. You see 
that sort of thing in those areas all the time.

But I don’t think it’s a huge leap. The question Tsitsiklis/Xu were 
looking at was storage allocation in a distributed storage pool (or an 
idealized form of that problem). Their research was backed by Google, which 
is obviously very interested in storage allocation problems. But I don’t 
think it’s a monstrous leap to go from storage allocation problems to 
bandwidth, routing, or trust. Those are all “resources” and all can be 
moved or re-allocated. Many are dynamic rather than static resources.

It’d be interesting to write these authors and ask them directly what they
think. Maybe I’ll do that.

If you’ve been reading the other thread, we’re talking a lot about trust 
and I’m starting to agree with David Geib that trust is probably the root 
of it. These other issues, such as this and the CAP theorem, are probably 
secondary in that if trust can be solved then these other things can be 
tackled or the problem space can be redefined around them.

> Second, the advantage is gained by having a panoptic view of the whole
> system - far from being a blind idiot, the allocator needs to know
> what's happening everywhere, and needs to be able to send resources
> anywhere. It's more Stalin than Lovecraft.

I think it’s probably possible to have a coordinator that coordinates 
without knowing *much* about what it is coordinating, via careful and 
clever use of cryptography. I was more interested in the over-arching 
theoretical question of whether some centralization is needed to achieve 
efficiency and the other things that are required for a good user 
experience, and if so how much.

ZeroTier’s supernodes know that point A wants to talk to point B, and if 
NAT traversal is impossible and data has to be relayed then they also know
how much data. But that’s all they know. They don’t know the protocol, the
port, or the content of that data. They’re *pretty* blind. I have a 
suspicion it might be possible to do better than that, to make the blind 
idiot… umm… blinder.

It would be significantly easier if it weren’t for NAT. NAT traversal 
demands a relaying maneuver that inherently exposes some metadata about 
the communication event taking place. But we already know NAT is evil and 
must be destroyed or the kittens will die.

> It's true that nobody's been able to ship a decentralised alternative
> to Facebook, Google, or Twitter. But that failure could be due to many
> reasons. Who's going to buy stock in a blind-by-design internet
> company that can't target ads at its users? How do you advertise a
> system that doesn't have a central place where people can go to join
> or find out more? How do you steer the evolution of such a system?

Sure, those are problems too. Decentralization is a multifaceted problem: 
technical, political, business, social, ...

But it’s not like someone’s shipped a decentralized Twitter that is 
equivalently fast, easy to use, etc., and it’s failed in the marketplace. 
It’s that nobody’s shipped it at all, and it’s not clear to me how one 
would build such a thing.

Keep in mind too that some of the profitability problems of 
decentralization are mitigated by the cost savings. A decentralized 
network costs orders of magnitude less to run. You don’t need data centers
that consume hundreds of megawatts of power to handle every single 
computation and store every single bit of data. So your opportunities to 
monetize are lower but your costs are also lower. Do those factors balance
out? Not sure. Nobody’s tried it at scale, and I strongly suspect the 
reason to be technical.

The bottom line is kind of this:

Decentralization and the devolution of power are something that lots of 
people want, and they’re something human beings have been trying to 
achieve in various ways for a very long time. Most of these efforts, like 
democracy, republics, governmental balance of power, anti-trust laws, 
etc., pre-date the Internet. Yet it never works.

When I see something like that — repeated tries, repeated failures, but 
everyone still wants it — I start to suspect that there might be a law of 
nature at work. To give an extreme case — probably a more extreme case 
than this one — people have been trying to build infinite energy devices 
for a long time too. People would obviously love to have an infinite 
energy device. It would solve a lot of problems. But they never work, and 
in that case any physicist can tell you why.

Are there laws of nature at work here? If so, what are they? Are they as 
tough and unrelenting as the second law of thermodynamics, or are they 
something we can learn to work within or around? That’s what I want to 
know.

> The blind idiot god is a brilliant metaphor, and I agree it's what we
> should aim for whenever we need a touch of centralisation to solve a
> problem. But if we take into account the importance of metadata
> privacy as well as content privacy, I suspect that truly blind and
> truly idiotic gods will be very hard to design. A god that knows
> absolutely nothing can't contribute to the running of the system. So
> perhaps the first question to ask when designing a BIG is, what
> information is it acceptable for the BIG to know?

Good point about metadata privacy, but I think it’s ultimately not a 
factor here. Or rather… it *is* a factor here, but we have to ignore it.

The only way I know of to achieve metadata privacy with any strength 
beyond the most superficial sort is onion routing. Onion routing is 
inherently expensive. I’m not sure anyone’s going to use it for anything 
“routine” or huge-scale.

… that is unless someone invents something new. I have wondered if linear 
coding schemes might offer a way to make onion routing more efficient, but 
that would be an awfully big research project that I don’t have time to 
do. :)

We can get most of the way there by making it at least difficult to gather
meta-data, and by using encryption to make that meta-data less meaningful 
and transparent. There’s a big difference between Google or the NSA or the
Russian Mob being able to know everything I’ve ever bought vs. them being 
able to know with some probability when and where I’ve spent money but not
what on and not how much. The latter is less useful.

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Michael Rogers
Date:
2014-09-12 @ 16:14

On 19/08/14 20:52, Adam Ierymenko wrote:
> Getting to this a bit belatedly… :)

Likewise. :-)

> If you’ve been reading the other thread, we’re talking a lot about 
> trust and I’m starting to agree with David Geib that trust is 
> probably the root of it. These other issues, such as this and the
> CAP theorem, are probably secondary in that if trust can be solved
> then these other things can be tackled or the problem space can be 
> redefined around them.

I totally agree. Perhaps Tor would be an interesting example to think
about, because it's decentralised at the level of resource allocation
but centralised at the level of trust. The Tor directory authorities
are the closest thing I can think of to a Blind Idiot God: they act as
a trust anchor for the system while remaining deliberately ignorant
about who uses it and how. They know even less than ZeroTier's
supernodes, because they're not aware of individual flows and they
don't relay any traffic themselves.

> It would be significantly easier if it weren’t for NAT. NAT
> traversal demands a relaying maneuver that inherently exposes some
> metadata about the communication event taking place. But we already
> know NAT is evil and must be destroyed or the kittens will die.

NAT is the biggest and most underestimated obstacle for P2P systems.
I'm glad you're tackling it head-on.

> Good point about metadata privacy, but I think it’s ultimately not
> a factor here. Or rather… it *is* a factor here, but we have to
> ignore it.
> 
> The only way I know of to achieve metadata privacy with any
> strength beyond the most superficial sort is onion routing. Onion
> routing is inherently expensive. I’m not sure anyone’s going to use
> it for anything “routine” or huge-scale.

Onion routing will always be more expensive than direct routing, but
bandwidth keeps getting cheaper, so the set of things for which onion
routing is affordable will keep growing.

Latency is a bigger issue than bandwidth in my opinion. In theory you
can pass a voice packet through three relays and still deliver it to
the destination in an acceptable amount of time, but the system will
have to be really well engineered to minimise latency. Tor wasn't
built with that in mind - and again, the question is who's going to
pay an engineering team to build a decentralised anonymous voice
network they can't profit from?

> … that is unless someone invents something new. I have wondered if 
> linear coding schemes might offer a way to make onion routing more 
> efficient, but that there would be an awfully big research project 
> that I don’t have time to do. :)

There have been some papers about anonymity systems based on secret
sharing and network coding, but nothing that's been deployed as far as
I know. In any case, they all used multi-hop paths so the bandwidth
and latency issues would remain.

> We can get most of the way there by making it at least difficult
> to gather meta-data, and by using encryption to make that meta-data
> less meaningful and transparent. There’s a big difference between
> Google or the NSA or the Russian Mob being able to know everything
> I’ve ever bought vs. them being able to know with some probability
> when and where I’ve spent money but not what on and not how much.
> The latter is less useful.

Again, I totally agree and I'm happy to see any progress towards
somewhat-more-blind, somewhat-more-idiotic internet deities.

Cheers,
Michael

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-12 @ 15:30
On Aug 5, 2014, at 11:11 PM, David Geib <trustiosity.zrm@gmail.com> wrote:

> > I was thinking: does this almost reduce to the "hard AI problem?"
> 
> Detecting which nodes are malicious might not even be computable. It's 
the lack of verifiable information. Unless you have some trust anchors to 
create a frame of reference you can never tell who is defecting vs. who is
lying about others defecting. And as I think about it, the only way to 
distinguish a targeted attack from a node being offline is to establish 
that it is online, which requires you to have a communications path to it,
which would allow you to defeat the attack. So unless you can efficiently 
defeat the attack you can't efficiently detect whether one is occurring. 
> 
> So I guess "detect then mitigate" is out. At least without manual 
intervention to identify that an attack is occurring. 

I think you're ultimately right, and you've shifted my thinking just a 
little. The CAP theorem, while relevant, is probably not the central 
bugaboo. The central problem is trust.

What and who do you trust, and why, and how do you compute this?

The solution most of the Internet uses is for real-world political 
entities (corporations, governments, etc.) to create signing certificates.
This is also the solution ZeroTier uses, more or less. Supernodes are 
currently designated as such by being hard coded; soon that designation 
will be determined by a signing certificate that I plan to put somewhere 
very safe (and keep encrypted) once I'm done signing the topology root 
dictionary.

Trust without some centralized "god" somewhere is extraordinarily hard for
the reasons you discuss. How do I trust? How do I compute trust? How do I 
cooperate with peers to compute trust while being sure these peers are not 
defecting?

If there is an answer, it's going to come from game theory.

Finally, on the subject of "manual intervention..."

That manual intervention must by definition take place over some other 
network, not the network in question, since the network being intervened 
with may be compromised.

It reminds me of Gödel's incompleteness theorem. To intervene on behalf of
a decentralized network requires that the conversation be taken somewhere 
*outside* that network. We see this with Bitcoin's response to GHASH.IO 
temporarily getting 51%. The response was rapid, and was coordinated via 
sites like Reddit /r/bitcoin and other things completely separate from the
block chain.

This also makes me think more and more about hybrid systems where you've 
got multiple types of systems -- including both centralized and 
decentralized -- that back each other to create an "antifragile" network.

> > The Bitcoin network solves the trust problem by essentially trusting 
itself. If someone successfully mounted a 51% attack against Bitcoin, 
nothing would be broken as far as the network is concerned. But that's not
what *we*, the sentient beings that use it, want. We want the network to 
do "the right thing," but what's that? How does the network know what the 
right thing is? As far as it's concerned, when 51% of the network extends 
the block chain that's the right thing... right?
> 
> Another way of putting this is that the Bitcoin users solve the trust 
problem by trusting the majority, where resistance to a Sybil attack comes
from allocating votes proportional to computing power. Which works great 
until some entity amasses enough computing power to vote itself God. And 
you can do similar with other scarce or expensive things. IPv4 public 
addresses come to mind. Useful for banning trolls on IRC but if your 
attacker has a Class A or a botnet you're screwed.

Yep. It's one of the reasons I don't think Bitcoin in its present form is 
necessarily *that* much more robust than central banks and other financial
entities.

Don't get me wrong. It's awesome, especially on a technical level, and 
it's clearly useful. I just don't think it's the absolute panacea that 
some think it is.

It has other issues too. A lot of people call it "anonymous virtual 
currency." It most certainly is not. That particular piece of Bitcoin 
evangelism is almost Orwellian in its doublespeak-iness. Bitcoin is the 
least anonymous currency ever devised. Every single transaction is 
recorded verbatim forever. Yes, the addresses can be anonymous but... 
umm... educate yourself about data de-anonymization with machine learning 
and data mining techniques. That's all I'm gonna say.

... and once one Bitcoin address is de-anonymized, you can now begin 
traversing the transaction graph and de-anonymizing others. If the geeky 
methods fail you, you can always fall back on gumshoe detective work. "Hey 
dude, you sell Bitcoins on localbitcoin right? Who did you meet with on X 
date?"

> > You could solve that problem pragmatically though by shipping with 
acceptable defaults. If a user wanted to change them they could, but they 
don't have to.
> 
> Right.

Maybe a good solution to the trust problem is exactly this:

Build in acceptable trust defaults, but let the user change them if they 
want or add new entities to trust if they want.

The challenge is making the *interface* and *presentation* of trust 
comprehensible to the user so the user understands exactly what they're 
doing and the implications of it clearly (without having to be an expert 
in PKI). Otherwise malware will game the user into trusting things they 
shouldn't. Of course you can never be totally safe from social 
engineering, but at least you should present the system in a way that 
makes the social engineers' job harder.
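
To make "acceptable defaults, user-overridable" concrete, here's a tiny 
sketch of what the configuration side could look like. Everything in it 
(file path, field names, the fingerprint) is hypothetical, not ZeroTier's 
actual format:

    # Hypothetical sketch: ship a default trust list, let the user edit it.
    # Nothing here reflects ZeroTier's actual configuration format.
    import json, os

    DEFAULT_TRUST = {
        # identity fingerprint -> what that identity is trusted to do
        "a1b2c3d4e5f60708": {"roles": ["supernode", "relay"],
                             "label": "vendor default"},
    }

    def load_trust(path="~/.config/example-net/trust.json"):
        """Start from the shipped defaults, then merge in user overrides."""
        trust = dict(DEFAULT_TRUST)
        path = os.path.expanduser(path)
        if os.path.exists(path):
            with open(path) as f:
                trust.update(json.load(f))  # user entries replace or extend
        return trust

    def is_trusted(trust, fingerprint, role):
        entry = trust.get(fingerprint)
        return bool(entry and role in entry.get("roles", []))

The point is just that the default list works out of the box, while an 
edited file changes who is trusted without the user touching any PKI 
machinery directly.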

Complicated things like webs of trust are, I think, a no-go because they 
ask the user to solve the same non-computable trust problems a trustless 
network would have to solve except with lots of people and other entities.
If something is non-computable for machines it is also non-computable for 
humans.

> > One idea I've had is a hybrid system combining a centralized database 
and a decentralized DHT. Both are available and they back each other. The 
central database can take over if the decentralized DHT comes under attack
and the decentralized DHT will work if the central system fails or is 
blocked (e.g. in a censorship-heavy country).
> 
> I've been considering doing federation similar to that. You have some 
node which is essentially a dedicated DHT node and a bunch of clients 
which use it as a gateway to access the DHT instead of participating 
themselves. So you have a lot of ostensibly related clients all using the 
same gateway and when they want to contact each other they get one hop 
access and no Sybil exposure. And if the gateway is down the clients can 
still participate in the DHT themselves so it isn't a single point of 
failure. 

Yeah, that's basically the identical idea except in your model the 
centralized node(s) are the defaults and the DHT is fallback.
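
A minimal sketch of that fallback arrangement, with query_central and 
query_dht standing in for whatever lookup mechanisms a real system would 
actually use:

    # Sketch of a hybrid lookup: try the centralized service first and
    # fall back to the DHT if it's unreachable (or flip the order if the
    # DHT is the default). Purely illustrative, not any real project's API.

    def query_central(key, timeout=2.0):
        raise TimeoutError("central service unreachable")  # placeholder

    def query_dht(key, timeout=10.0):
        return {"key": key, "source": "dht"}                # placeholder

    def lookup(key):
        try:
            return query_central(key)
        except (TimeoutError, ConnectionError):
            # Central database down or blocked: the slower DHT still answers.
            return query_dht(key)

    print(lookup("example-node-id"))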

> > Everything related to TUN/TAP on every platform is nearly 
documentation-free. :)
> 
> The Linux implementation never gave me any trouble. 
https://www.kernel.org/doc/Documentation/networking/tuntap.txt says how to
create one and then you configure it the same as eth0.
> 
> Maybe the trouble with TAP-Windows is that it's idiosyncratic (to be 
kind) in addition to undocumented. Have you discovered any good way to 
identify your TAP-Windows interface as something not to be molested by 
other TAP-Windows applications like OpenVPN? There is some language in the
.inf about changing the component ID which seems to imply recompiling the 
driver and then probably needing a code signing key from Microsoft to make
it work, but there has to be some less ridiculous way of doing it than 
that. 

Umm... sorry to break this to you, but that's exactly what I did.

I had to do it anyway because I had to add a new IOCTL to the tap driver 
to allow the ZeroTier service to query multicast group subscriptions at 
the Ethernet layer. Windows has no such thing natively, while on OSX/BSD 
you can get it via sysctl() and Linux exposes it in /proc.
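
For what it's worth, the Linux side really is about as simple as that 
tuntap.txt document suggests. A rough Python equivalent (illustrative 
only, not ZeroTier's actual code; needs root or CAP_NET_ADMIN):

    # Create a TAP interface on Linux via /dev/net/tun (see tuntap.txt),
    # then configure it like any other interface (same as eth0).
    import fcntl, os, struct

    TUNSETIFF = 0x400454ca   # ioctl to set interface name/flags
    IFF_TAP   = 0x0002       # layer-2 (Ethernet) device
    IFF_NO_PI = 0x1000       # no extra packet-info header

    fd = os.open("/dev/net/tun", os.O_RDWR)
    ifr = struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI)
    fcntl.ioctl(fd, TUNSETIFF, ifr)
    # Frames written to fd appear on tap0; frames sent to tap0 are read from fd.
    frame = os.read(fd, 2048)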

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
David Geib
Date:
2014-08-13 @ 00:23
> Trust without some centralized "god" somewhere is extraordinarily hard
for the reasons you discuss. How do I trust? How do I compute trust? How do
I cooperate with peers to compute trust while being sure these peers are
not defecting?

I think the problem is trying to compute trust algorithmically. In a
completely decentralized network the information necessary to do that is
not intrinsically available so you have to bootstrap trust in some other
way.

Everybody trusting some root authority is the easiest way to do that but
it's also the most centralized. It also doesn't actually solve the problem
unless the root authority is also the only trusted party, because now you
have to ask how the root is supposed to know whether to trust some third
party before signing it. That's the huge fail with the existing CAs.
They'll sign anything. Moxie Marlinspike has had a number of relevant
things to say about that.

> That manual intervention must by definition take place over some other
network, not the network in question, since the network being intervened
with may be compromised.

In a theoretical sense that's true, because if the network is totally
compromised, meaning no communication can take place between anyone, then
you can't do anything in the direction of fixing it without having some
external network to use to coordinate. But that's only a problem before
bootstrap. If you can discover and communicate with several compatriots
using the network and over time come to trust them before any attack is
launched against the network, you can then designate them as trusted
parties without any external contact. This is like the Bitcoin solution
except that instead of using processing power as the limit on Sybils you
use human face time. Then when the attack comes you already have trusted
parties you can rely on to help you resist it.

So you *can* bootstrap trust (slowly) but you have to do it before the
attack happens or suffer a large inefficiency in the meantime. But using an
external network to bootstrap trust before you even turn the system on is
clearly a much easier way to guarantee that it's done before the attack
begins, and is probably the only efficient way to recover if it *isn't*
done before the attack begins.

> This also makes me think more and more about hybrid systems where you've
got multiple types of systems -- including both centralized and
decentralized -- that back each other to create an "antifragile" network.

That definitely seems like the way to go. Homogeneous systems are inherently
fragile because any attack that works against any part of the system will
work against the whole of it. It's like the Unix Way: Make everything
simple and modular so that everything can interface with anything, that way
if something isn't working you can swap it out with something else. Then as
long as you have [anything] that can perform the necessary function (e.g.
message relay or lookup database), everything requiring that function can
carry on working.

> Yep. It's one of the reasons I don't think Bitcoin in its present form is
necessarily *that* much more robust than central banks and other financial
entities.

I tend to think that Bitcoin is going to crash and burn. It has all the
makings of a bubble. It's inherently deflationary, which promotes hoarding
and speculation, which causes the price to increase in the short term, but
the whole thing is resting on the supremacy of its technical architecture.
So if somebody breaks the technology *or* somebody comes up with something
better or even a worthwhile but incompatible improvement to Bitcoin itself,
when everyone stops using Bitcoin in favor of the replacement the Bitcoins
all lose their value. For example if anyone ever breaks SHA256 it would
compromise the entire blockchain. Then what do you do, start over from zero
with SHA3?

> The challenge is making the *interface* and *presentation* of trust
comprehensible to the user so the user understands exactly what they're
doing and the implications of it clearly (without having to be an expert in
PKI).

A big part of it is to reduce the consequences of users making poor trust
decisions. The peers that are "trusted" should be trusted only to the
smallest extent possible, and one peer's poor trust decisions should have
minimal consequences for the others. That's one of the reasons web of trust
is so problematic. Using web of trust for key distribution is desperation.
Key distribution is the poster child for applying multiple heterogeneous
methods. It's the thing most necessary to
carry out external to the network but they're trying to handle it
internally using one method for everyone.

The ideal would be for nodes to only trust a peer to relay data and then
have the destination provide an authenticated confirmation of receipt. Then
if there is no confirmation you ask some different trusted peer(s) to relay
the message. That way all misplaced trust costs you is efficiency rather
than security. If a trusted peer defects then you try the next one. Then
even if half the peers you trusted will defect, you're still far ahead of
the alternative where 90% or 99.9% of the peers you try could be Sybils.
And that gets the percentage of defecting peers down to the point where you
can start looking at the Byzantine fault tolerance algorithms to detect
them, which might even allow defecting peers to be algorithmically ejected
from the trusted group.
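
A toy sketch of that loop (try trusted relays in turn until the destination 
itself acknowledges receipt), with send_via and verify_receipt as stand-ins 
for the real transport and signature check:

    # Illustrative only: try trusted relays in turn until one produces a
    # receipt that the *destination* (not the relay) has authenticated.

    def send_via(relay, message, timeout):
        ...   # placeholder transport; returns a receipt blob or None

    def verify_receipt(receipt, destination_key):
        ...   # placeholder signature check against the destination's key

    def send_with_fallback(message, destination_key, trusted_relays,
                           timeout=5.0):
        for relay in trusted_relays:
            receipt = send_via(relay, message, timeout)
            if receipt is not None and verify_receipt(receipt, destination_key):
                return relay   # this relay actually delivered the message
            # No authenticated receipt: the relay may be down or defecting.
            # Misplaced trust cost us one round-trip, not our security.
        raise RuntimeError("no trusted relay could deliver the message")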

> Yeah, that's basically the identical idea except in your model the
centralized node(s) are the defaults and the DHT is fallback.

Part of the idea is to decentralize the centralized nodes. Then there are
big nodes trusted by large numbers of people but there is no "root" which
is trusted by everybody. And big is relative. If each organization (or
hackerspace or ...) runs their own supernode then there is nothing to shut
down or compromise that will take most of the network with it, and there is
nothing preventing a non-supernode from trusting (i.e. distributing their
trust between) more than one supernode. Then you can have the supernode
operators each decide which other supernodes they trust which shrinks the
web of trust problem by putting a little bit of hierarchy into it, without
making the hierarchy rigid or giving it a single root. The result is
similar in structure to a top down hierarchy except that it's built from
the bottom up so no one has total control over it.

> Umm... sorry to break this to you, but that's exactly what I did.

Argh. Why does everything related to Windows have to be unnecessarily
complicated?

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-14 @ 04:04
On Aug 12, 2014, at 5:23 PM, David Geib <trustiosity.zrm@gmail.com> wrote:
> 
> > Trust without some centralized "god" somewhere is extraordinarily hard
for the reasons you discuss. How do I trust? How do I compute trust? How 
do I cooperate with peers to compute trust while being sure these peers 
are not defecting?
> 
> I think the problem is trying to compute trust algorithmically. In a 
completely decentralized network the information necessary to do that is 
not intrinsically available so you have to bootstrap trust in some other 
way.
> 
> Everybody trusting some root authority is the easiest way to do that but
it's also the most centralized. It also doesn't actually solve the problem
unless the root authority is also the only trusted party, because now you 
have to ask how the root is supposed to know whether to trust some third 
party before signing it. That's the huge fail with the existing CAs. 
They'll sign anything. Moxie Marlinspike has had a number of relevant 
things to say about that.
> 

That's the general pattern that I see. The easiest approach is the most 
centralized approach... at least if you neglect the longer term systemic 
downsides of it. Maybe over-centralization should be considered a form of 
technical debt.

I agree that root CAs are horrible. I have had them do things like send me
a private key unencrypted to gmail. I am not making that up. No 
passphrase. To gmail.

Hmm... Yeah, I think doing trust better is a must.

Btw... Some folks responded to my post lamenting that I had given up on 
decentralization. That's not true at all. I am just doing two things. One 
is trying to spin the problem around and conceptualize it differently. The
other is giving the problem the respect it deserves. It's a very, very 
hard problem... Which is part of why I like it. :)

> > That manual intervention must by definition take place over some other
network, not the network in question, since the network being intervened 
with may be compromised.
> 
> In a theoretical sense that's true, because if the network is totally 
compromised, meaning no communication can take place between anyone, then 
you can't do anything in the direction of fixing it without having some 
external network to use to coordinate. But that's only a problem before 
bootstrap. If you can discover and communicate with several compatriots 
using the network and over time come to trust them before any attack is 
launched against the network, you can then designate them as trusted 
parties without any external contact. This is like the Bitcoin solution 
except that instead of using processing power as the limit on Sybils you 
use human face time. Then when the attack comes you already have trusted 
parties you can rely on to help you resist it.

I'm not sure those kinds of approaches can work on a global scale. How do 
people in Russia or South Africa determine their trust relationship with 
someone in New York? I guess you could traverse the graph, but now you are
back to trying to compute trust.

> So you *can* bootstrap trust (slowly) but you have to do it before the 
attack happens or suffer a large inefficiency in the meantime. But using 
an external network to bootstrap trust before you even turn the system on 
is clearly a much easier way to guarantee that it's done before the attack
begins, and is probably the only efficient way to recover if it *isn't* 
done before the attack begins. 

Another point on this... History has taught us that governments and very 
sophisticated criminals are often much further ahead of the game than we 
suspect they are. My guess is that if a genuine breakthrough in trust is 
made it will be recognizable as such and those forces will get in early. 
The marketing industry is also very sophisticated, though not quite as 
cutting edge as the overworld and the underworld.

On a more pragmatic note, I think you have a chicken or egg problem with 
the idea of bootstrapping before turning the system on. History has also 
demonstrated that in computing "release early, release often" wins hands 
down. Everything that I am familiar with, from the web to Linux to even 
polish-obsessed creatures like the Mac, has followed this path. If it doesn't 
exist yet nobody will use it, and if nobody is using it nobody will 
bootstrap trust for it because nobody is using it therefore nobody will 
ever use it therefore it's a waste of time...

> Then as long as you have [anything] that can perform the necessary 
function (e.g. message relay or lookup database), everything requiring 
that function can carry on working. 

You can have your cake and eat it too. It's easy. Just make two cakes. 
Make a centralized cake and a decentralized cake.

> I tend to think that Bitcoin is going to crash and burn. It has all the 
makings of a bubble. It's inherently deflationary which promotes hoarding 
and speculation which causes the price to increase in the short term, but 
the whole thing is resting on the supremacy of its technical architecture.
So if somebody breaks the technology *or* somebody comes up with something
better or even a worthwhile but incompatible improvement to Bitcoin 
itself, when everyone stops using Bitcoin in favor of the replacement the 
Bitcoins all lose their value. For example if anyone ever breaks SHA256 it
would compromise the entire blockchain. Then what do you do, start over 
from zero with SHA3? 

I think the tech behind it is more interesting than Bitcoin itself. It 
reminds me of the web. Hypertext, browsers, and the new hybrid thin client 
model they led to were interesting. The internet was certainly damn 
interesting. But pets.com and flooz? Not so much.

I still need to take a deep, deep dive into the block chain technology. I 
get the very basic surface of it, but I am really curious about how it 
might be used as part of a solution to the trust bootstrapping problem. If
hybrid overlapping heterogeneous solutions are the way forward for network 
robustness, then maybe a similar concurrent cake solution exists for 
trust.

At some point I think someone is going to successfully attack Bitcoin. 
What happens then? I don't know. It has some value as a wire transfer 
protocol if nothing else, but the sheen will certainly wear off.

> The ideal would be for nodes to only trust a peer to relay data and then
have the destination provide an authenticated confirmation of receipt. 
Then if there is no confirmation you ask some different trusted peer(s) to
relay the message. That way all misplaced trust costs you is efficiency 
rather than security. If a trusted peer defects then you try the next one.
Then even if half the peers you trusted will defect, you're still far 
ahead of the alternative where 90% or 99.9% of the peers you try could be 
Sybils. And that gets the percentage of defecting peers down to the point 
where you can start looking at the Byzantine fault tolerance algorithms to
detect them, which might even allow defecting peers to be algorithmically 
ejected from the trusted group. 

This is basic to any relayed crypto peer to peer system including the one 
I built. Every packet is MAC'd using a key derived from a DH agreement, 
etc.
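
To make that concrete, the general pattern (not ZeroTier's actual wire 
format or cipher suite) looks roughly like this, using X25519, HKDF and 
HMAC-SHA256 from the Python cryptography package:

    # Sketch of "MAC every packet with a key derived from a DH agreement".
    # Cipher choices here are illustrative, not ZeroTier's actual suite.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes, hmac

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()

    # Each side combines its own private key with the other's public key.
    shared = alice.exchange(bob.public_key())

    # Derive a fixed-length MAC key from the raw shared secret.
    mac_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"example-packet-mac").derive(shared)

    def mac_packet(payload: bytes) -> bytes:
        h = hmac.HMAC(mac_key, hashes.SHA256())
        h.update(payload)
        return h.finalize()

    tag = mac_packet(b"hello relay world")

A relay that never learns mac_key can carry the packet but can't forge or 
silently alter it, which is the property being relied on here.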

I think the harder thing is defending not against Sybils attacking the data 
itself but against Sybils attacking the infrastructure. Criminals, enemy governments, 
authoritarian governments, etc. might just want to take the network down, 
exploit it to carry out a DDOS amplification attack against other targets,
or make it unsuitable for a certain use case.

> Part of the idea is to decentralize the centralized nodes. Then there 
are big nodes trusted by large numbers of people but there is no "root" 
which is trusted by everybody. And big is relative. If each organization 
(or hackerspace or ...) runs their own supernode then there is nothing to 
shut down or compromise that will take most of the network with it, and 
there is nothing preventing a non-supernode from trusting (i.e. 
distributing their trust between) more than one supernode. Then you can 
have the supernode operators each decide which other supernodes they trust
which shrinks the web of trust problem by putting a little bit of 
hierarchy into it, without making the hierarchy rigid or giving it a 
single root. The result is similar in structure to a top down hierarchy 
except that it's built from the bottom up so no one has total control over
it. 

I like this...especially the part about shrinking the problem.

It reminds me of how old NNTP and IRC and similar protocols were run. You 
had a network of servers run by admin volunteers, so the trust problem was
manageable. But there was no king per se... A bit of an oligarchy though.

> 
> > Umm... sorry to break this to you, but that's exactly what I did.
> 
> Argh. Why does everything related to Windows have to be unnecessarily 
complicated?
> 

That's nothing. Get a load of what I had to pull out of my you-know-what 
to get Windows to treat a virtual network properly with regard to firewall
policy. As far as I know I am the first developer to pull this off, and 
it's not pretty. I think I am first on this one by virtue of masochism.


https://github.com/zerotier/ZeroTierOne/commit/f8d4611d15b18bf505de9ca82d74f5102fc57024#diff-288ff5a08b3c03deb7f81b5d45228018R628

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
David Geib
Date:
2014-08-14 @ 08:30
> That's the general pattern that I see. The easiest approach is the most
centralized approach... at least if you neglect the longer term systemic
downsides of it. Maybe over-centralization should be considered a form of
technical debt.

It's more like a security vulnerability. Single point of failure, single
point of compromise and a choke point for censorship and spying.

> I agree that root CAs are horrible. I have had them do things like send
me a private key unencrypted to gmail. I am not making that up. No
passphrase. To gmail.

And don't forget that they're all fully trusted. So it's completely futile
to try to find a secure one because the insecure ones can still give the
attackers a certificate with your name.

> Btw... Some folks responded to my post lamenting that I had given up on
decentralization. That's not true at all. I am just doing two things. One
is trying to spin the problem around and conceptualize it differently. The
other is giving the problem the respect it deserves. It's a very, very hard
problem... Which is part of why I like it. :)

It's definitely a fun problem. Part of it is to pin down just what
"decentralization" is supposed to mean. If you start with the ideologically
pure definition where each node is required to be totally uniform you end
up banging your head against the wall. You want a node running on batteries
with an expensive bandwidth provider to be able to participate in the
network but that shouldn't exclude the possibility of usefully exploiting
the greater resources of other nodes that run on AC power and have cheap
wired connections. So once you admit the possibility of building a network
which is both decentralized and asymmetrical it becomes an optimization
problem. How close to the platonic ideal can you get without overly
compromising efficiency or availability?

> I'm not sure those kinds of approaches can work on a global scale. How do
people in Russia or South Africa determine their trust relationship with
someone in New York? I guess you could traverse the graph, but now you are
back to trying to compute trust

But that's the whole problem, isn't it? If you have no direct contact and
you have no trusted path you really have nothing. That's why web of trust
is the last resort. It's the thing that comes closest to working when
nothing else will. Which is also why it's terrible. Because you only need
it when nothing else works but those are also the times when web of trust
is at its weakest.

The key is to find something better from the context of the relationship.
Even if you live far apart you might be able to meet once and exchange
keys. If you have a mutual trusted friend you can use that. If you have an
existing organizational hierarchy then you can traverse that to find a
trusted path. If one of you has a true broadcast medium under your control
then you can broadcast your key so that anyone can get it.

If you don't have *anything*, you have to ask what it is you're supposed to
be trusting. If you start communicating with some John Doe on the other
side of the world with no prior relationship or claim to any specific
credentials, does it actually matter that he wants to call himself John
Smith instead of John Doe? At that point the only thing you can really ask
to be assured of is that when you communicate with "John Smith" tomorrow
it's the same "John Smith" it was yesterday.

> Another point on this... History has taught us that governments and very
sophisticated criminals are often much more ahead of the game than we
suspect they are. My guess is that if a genuine breakthrough in trust is
made it will be recognizable as such and those forces will get in early.
The marketing industry is also very sophisticated, though not quite as
cutting edge as the overworld and the underworld.

Oh sure. Trust is a social issue. Criminals and marketing departments (now
there's a combination that fits like a glove) have engaged in social
engineering forever. That's nothing new. Maybe the question is whether
there are any new *solutions* to the old problems. Some combination of
global instantaneous communication and digital storage might make it harder
for people to behave dishonestly or inconsistently without getting caught.
But then we're back to computing trust.

And maybe that's not wrong. The real problem is trying to compute trust
with no points of reference. Once you have some externally-sourced trust
anchors we're back to heterogeneous and hybrid solutions.

> On a more pragmatic note, I think you have a chicken or egg problem with
the idea of bootstrapping before turning the system on.

Just the opposite. Bootstrapping first *is* the ship early method because
you bootstrap based on existing trust networks rather than trying to
construct a new one from whole cloth. The question is how to gather the
existing information in a way that provides a good user experience. You can
imagine something like Facebook: You need to add a couple of friends
manually but then it can start asking whether their friends are your
friends. Though that obviously brings privacy implications; maybe something
like homomorphic encryption could improve it? But now it's starting to get
complicated. I wonder if it makes sense to factor it out. Separate the
trust network from the communications network. A personal trust graph as a
local API could be extremely useful in general. And then the entities can
start tagging themselves with other data like their email address, PGP key,
snow key, website, etc. A little bit social network + web of trust +
key:value store.
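
A rough sketch of what such a local trust-graph API might look like; every 
name in it is made up:

    # Hypothetical local trust graph: people/keys as nodes, trust as edges,
    # arbitrary key:value tags on each entity. Not any existing API.

    class TrustGraph:
        def __init__(self):
            self.edges = {}   # node -> set of nodes it trusts
            self.tags = {}    # node -> {"email": ..., "pgp": ..., ...}

        def trust(self, who, whom):
            self.edges.setdefault(who, set()).add(whom)

        def tag(self, who, **attrs):
            self.tags.setdefault(who, {}).update(attrs)

        def trusted_path(self, src, dst, max_hops=3):
            """Breadth-first search for a chain of trust from src to dst."""
            frontier, seen = [[src]], {src}
            while frontier:
                path = frontier.pop(0)
                if path[-1] == dst:
                    return path
                if len(path) > max_hops:
                    continue
                for nxt in self.edges.get(path[-1], ()):
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append(path + [nxt])
            return None

    g = TrustGraph()
    g.trust("me", "alice"); g.trust("alice", "bob")
    g.tag("bob", email="bob@example.org", pgp="0xDEADBEEF")
    print(g.trusted_path("me", "bob"))   # ['me', 'alice', 'bob']

Any application (chat, key lookup, relay selection) could then query the 
same local graph instead of each one reinventing its own notion of trust.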

> I think the tech behind it is more interesting than Bitcoin itself. It
reminds me of the web. Hypertext, browsers, and the new hybrid thin client
model they led to was interesting. The internet was certainly damn
interesting. But pets.com and flooz? Not so much.

Agreed. It's interesting because it solves a lot of the hard problems with
digital currencies but not all of them. It's clearly an evolutionary step
on the road to something else. Which is what concerns me about it: Inertia
and market share will allow it to survive against competitors that are only
slightly better but that just means more people will have built their homes
on the flood plain by the time the rain comes.

> I still need to take a deep, deep dive into the block chain technology. I
get the very basic surface of it, but I am really curious about how it
might be used as part of a solution to the trust bootstrapping problem. If
hybrid overlapping heterogeneous solutions are the way forward for network
robustness, then maybe a similar concurrent cake solution exists for trust.

Relevant: http://www.aaronsw.com/weblog/squarezooko

This is essentially the roadmap that led to namecoin, which (among other
things) disproved Zooko's Triangle.

Actually that's an interesting point. Zooko's triangle was supposed to be
that you couldn't have a naming system which is decentralized, has global
human-readable names and is secure. And it fails by the same
overgeneralization as we had here. You don't need centralization as long as
you have trust. So bitcoin/namecoin puts its trust in the majority as
determined by processing power and solves the triangle by providing trust
without centralization.

An interesting question is what might we use instead of computing power to
create a trust democracy that would allow the good guys to retain a
majority.

> This is basic to any relayed crypto peer to peer system including the one
I built. Every packet is MAC'd using a key derived from a DH agreement, etc.

Right, the crypto is a solved problem. The issue is that if you send a
packet to a Sybil, it throws it away. After the timeout you send the packet
via some other node. If it's also a Sybil it throws it away. If the large
majority of the nodes are Sybils that's where the inefficiency comes from.
You would essentially have to broadcast the message in order to find a path
that contains no Sybils. Trust should be able to solve the problem by
making available several "trusted" paths only a minority of which contain
Sybils.

> I think the harder thing is defending not against Sybils vs. the data
itself but Sybils vs the infrastructure. Criminals, enemy governments,
authoritarian governments, etc. might just want to take the network down,
exploit it to carry out a DDOS amplification attack against other targets,
or make it unsuitable for a certain use case.

Some attacks are unavoidable. If the attacker has more bandwidth than the
sum of the honest nodes in the network, you lose. But those are the attacks
that inverse scale. The more honest nodes in the network, the harder the
attack. And the more you can reduce the number of centralized choke points,
the harder it is to take down the network as a whole.

Amplification is also relatively easy to mitigate. Avoid sending big
packets in response to small packets. And if you have to do that, first
send a small packet with a challenge that the target node has to copy back
in order to provide evidence that the target node is the requesting node.
Relevant: http://tools.ietf.org/html/draft-eastlake-dnsext-cookies-02
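
Sketched out, the cookie exchange is tiny. Hypothetical message names, same 
idea as the DNS cookies draft above:

    # Anti-amplification sketch: never answer a small request with a big
    # response until the requester has echoed an unpredictable cookie,
    # proving it can actually receive traffic at the claimed source address.
    import os

    pending = {}   # addr -> cookie we sent

    def on_small_request(addr):
        cookie = os.urandom(16)
        pending[addr] = cookie
        return ("CHALLENGE", cookie)       # small packet back to addr

    def on_challenge_echo(addr, echoed_cookie, big_response):
        if pending.get(addr) == echoed_cookie:
            del pending[addr]
            return big_response            # now safe to send the large reply
        return None                        # spoofed or replayed; stay silent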

It's the targeted attacks that are a bear because they're heterogeneous
*attacks*. In order to contact a node, the node itself has to be online,
there has to be an honest path between you and the node, and you have to be
able to discover that path. So the attacker can dump traffic on the
low-capacity honest paths to take them offline and then create a bunch of
Sybils to make discovering the higher-capacity paths more difficult, and
you have no way to distinguish between the target legitimately being
offline and merely all the paths you've tried being compromised. The answer
is to somehow know ex ante which paths are honest, but easier said than
done.

> That's nothing. Get a load of what I had to pull out of my you know what
to get windows to treat a virtual network properly with regard to firewall
policy. As far as I know I am the first developer to pull this off, and
it's not pretty. I think I am first on this one by virtue of masochism.

Thanks again, Microsoft. Though I think the OpenVPN users might have beat
you to the equivalent solution, e.g.

http://superuser.com/questions/120038/changing-network-type-from-unidentified-network-to-private-network-on-an-openvpn

(And as a public service announcement, 1.1.1.1 is no longer a "fake"
address as the 1.0.0.0/8 block was assigned to APNIC.
http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml)

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-19 @ 19:22
On Aug 14, 2014, at 1:30 AM, David Geib <trustiosity.zrm@gmail.com> wrote:

> It's more like a security vulnerability. Single point of failure, single
point of compromise and a choke point for censorship and spying. 

Not a bad way of framing it…

Try listing all the “trusted” CAs in a modern OS for example. You’ll see 
crap in there like the Chinese Government and a ton of corporations whose 
names I don’t recognize, and it’s all “trust: ultimate”. It makes a total 
joke out of HTTPS security.

> > Btw... Some folks responded to my post lamenting that I had given up 
on decentralization. That's not true at all. I am just doing two things. 
One is trying to spin the problem around and conceptualize it differently.
The other is giving the problem the respect it deserves. It's a very, very
hard problem... Which is part of why I like it. :)
> 
> It's definitely a fun problem. Part of it is to pin down just what 
"decentralization" is supposed to mean. If you start with the 
ideologically pure definition where each node is required to be totally 
uniform you end up banging your head against the wall. You want a node 
running on batteries with an expensive bandwidth provider to be able to 
participate in the network but that shouldn't exclude the possibility of 
usefully exploiting the greater resources of other nodes that run on AC 
power and have cheap wired connections. So once you admit the possibility 
of building a network which is both decentralized and asymmetrical it 
becomes an optimization problem. How close to the platonic ideal can you 
get without overly compromising efficiency or availability?

Yes, and I also think it’s a toy version of an even larger problem: how to
devolve power in general.

Human societies are networks too. I think this work has political and 
philosophical implications inasmuch as the same information theoretic 
principles that govern computer networks might also operate in human ones.

If we can fix it here, maybe it can help us find new ways of fixing it there.

> But that's the whole problem, isn't it? If you have no direct contact 
and you have no trusted path you really have nothing. That's why web of 
trust is the last resort. It's the thing that comes closest to working 
when nothing else will. Which is also why it's terrible. Because you only 
need it when nothing else works but those are also the times when web of 
trust is at its weakest. 

Hmm… so it’s a six months of canned food solution. You’ll only need it if 
things are so bad you’re probably already dead.

> The key is to find something better from the context of the 
relationship. Even if you live far apart you might be able to meet once 
and exchange keys. If you have a mutual trusted friend you can use that. 
If you have an existing organizational hierarchy then you can traverse 
that to find a trusted path. If one of you has a true broadcast medium 
under your control then you can broadcast your key so that anyone can get 
it.

Broadcast. That brings up another thing I’ve wondered about for a while.

We have the ability to broadcast radio signals globally, and cheaply, via 
things like shortwave. The bandwidth is awful but keys aren’t very big, 
nor are node identifiers, top-of-hierarchy trust lists, etc.

I wonder what might be done if we could pair mesh nets with broadcast 
media? Has anyone looked into that? I picture a digitally encoded 
shortwave analog to a “numbers station” that continuously broadcasts the 
current mesh net consensus for trust anchor points and high-availability 
nodes.

Or… maybe every mesh box has a transmitter and a receiver and some kind of
constantly revolving byzantine agreement can occur via 
frequency-modulation-consensus or something weird like that… something 
that would sound to the ear a lot like a modem carrier negotiation 
happening all day long. Shortwave nuts would call that frequency “the pig”
cause it’d just squeal. :)

Probably violates a hundred FCC regulations but we’re talking theory.

I’m sure somebody’s done Ph.D thesis work or something in that area… we’ve
had a half-century of digital telecom research at this point. Have to dig 
sometime.

> If you don't have *anything*, you have to ask what it is you're supposed
to be trusting. If you start communicating with some John Doe on the other
side of the world with no prior relationship or claim to any specific 
credentials, does it actually matter that he wants to call himself John 
Smith instead of John Doe? At that point the only thing you can really ask
to be assured of is that when you communicate with "John Smith" tomorrow 
it's the same "John Smith" it was yesterday. 

You might condense this down to “it is what it is.” If it’s an unknown 
endpoint, it’s an unknown endpoint.

> Oh sure. Trust is a social issue. Criminals and marketing departments 
(now there's a combination that fits like a glove) have engaged in social 
engineering forever. That's nothing new. Maybe the question is whether 
there are any new *solutions* to the old problems. Some combination of 
global instantaneous communication and digital storage might make it 
harder for people to behave dishonestly or inconsistently without getting 
caught. But then we're back to computing trust.
> 
> And maybe that's not wrong. The real problem is trying to compute trust 
with no points of reference. Once you have some externally-sourced trust 
anchors we're back to heterogeneous and hybrid solutions.

Or… we are admitting that trust is inherently asymmetrical because of 
course it is! Nobody trusts everyone equally. The question then is whether
people need to agree at the “meta” level on some common things that they 
all trust, and if so how this is accomplished. Seems to me that they do, 
otherwise cooperation becomes difficult (game theory territory).

> Just the opposite. Bootstrapping first *is* the ship early method 
because you bootstrap based on existing trust networks rather than trying 
to construct a new one from whole cloth.

Of course you do. Duh.

> Separate the trust network from the communications network. A personal 
trust graph as a local API could be extremely useful...

And yeah. Duh. It’s also the Unix Way, which seems to work: separation of 
concerns into small, reusable modules.

But that itself gets problematic if you do it badly. Those small modules 
have to themselves be clean, reusable, well-packaged, and well-behaved, not 
hodgepodges of Rube Goldberg crap requiring fifty dependencies to install.
Unfortunately a lot of modern software authors do that kind of thing because
people don’t know how to write installable software, but that’s another 
rant for another day.

> [Bitcoin...] It's clearly an evolutionary step on the road to something else.

That’s a very concise statement of what I think too.

I’m not even sure if the thing it’s on the road to is a “currency” per se.
I’m also not sure Bitcoin is a currency.

I was chatting at a conference with a tech-biz VC type once about Bitcoin 
and he said “think of it this way. Imagine you have a company that 
provides wire transfers that work everywhere, can be fairly anonymous, 
and… well… Bitcoin. How much would that company be worth? Look at the 
total value of BTC and then imagine that’s a stock for that company. Then 
it makes sense.”

I think he was right. Bitcoin isn’t so much a currency as it is a virtual 
corporation of which BTC is the stock. Then it gets weird… the service 
that this corporation provides *is* its stock, in the form of 
cryptographically secure tradable commodities that people can use *like* a
currency or as a wire transfer mechanism.

When you look at it that way I’m not sure if it’s brilliant or a 
ridiculous bubble. Maybe both.

But that also gets me closer to what I think Bitcoin might be an 
evolutionary step toward: a new kind of financial entity, a 
cryptographically-defined and possibly leaderless (in the conventional 
sense) corporation.

Remember that the corporation and the concept of “stock” were themselves 
late-medieval to early-Renaissance inventions. Such notions have not always 
existed. There’s innovation in the arena of social and financial 
constructs just like there is everywhere else. Perhaps we are seeing the 
birth of a new creature in that landscape.

“Cryptocorps” remind me a lot of the Loa from William Gibson’s Sprawl 
trilogy (Neuromancer, Count Zero, Mona Lisa Overdrive). But then again we 
are living in a cyberpunk novel. Nobody writes that stuff anymore because 
it would be placed in the nonfiction section of the library.

> Actually that's an interesting point. Zooko's triangle was supposed to 
be that you couldn't have a naming system which is decentralized, has 
global human-readable names and is secure. And it fails by the same 
overgeneralization as we had here. You don't need centralization as long 
as you have trust. So bitcoin/namecoin puts its trust in the majority as 
determined by processing power and solves the triangle by providing trust 
without centralization.
> 
> An interesting question is what might we use instead of computing power 
to create a trust democracy that would allow the good guys to retain a 
majority. 

Right. Computing power is expensive. Bitcoin gets away with it because 
it’s riding on a speculative frenzy. It’s also not necessarily a heuristic
for “good guys.” Lots of bad actors can buy lots of silicon, and like I 
said, if Bitcoin gets big enough there may be people who just want to burn 
it down (i.e., actors who are not rationally profit-motivated, like 
militaries and terrorists).

I wonder if we could actually define good guys in some meaningful way, 
like via game theory? Are they actors that tend toward cooperation in an 
environment of mostly cooperators?
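
To make that question concrete, here’s a toy sketch (the payoffs and 
population are illustrative assumptions): an iterated prisoner’s dilemma 
where a reciprocating strategy sits in a population of mostly reciprocators 
plus one unconditional defector.

    # Toy iterated prisoner's dilemma: does cooperation prosper among mostly
    # cooperators? Standard payoffs; all names and numbers are illustrative.
    PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
              ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

    def tit_for_tat(opponent_history):        # cooperate first, then mirror
        return opponent_history[-1] if opponent_history else 'C'

    def always_defect(opponent_history):
        return 'D'

    def play(a, b, rounds=200):
        ha, hb, sa, sb = [], [], 0, 0
        for _ in range(rounds):
            ma, mb = a(hb), b(ha)             # each sees the other's past moves
            pa, pb = PAYOFF[(ma, mb)]
            ha.append(ma); hb.append(mb)
            sa += pa; sb += pb
        return sa, sb

    population = [tit_for_tat] * 9 + [always_defect]
    scores = [0] * len(population)
    for i in range(len(population)):
        for j in range(i + 1, len(population)):
            si, sj = play(population[i], population[j])
            scores[i] += si; scores[j] += sj

    print("average reciprocator score:", sum(scores[:9]) / 9)   # 4999.0
    print("defector score:            ", scores[9])             # 1836

In a population of mostly cooperators the reciprocators come out far ahead, 
which at least gestures at “good guys” as actors whose strategy rewards 
cooperation without being endlessly exploitable.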

> It's the targeted attacks that are a bear because they're heterogeneous 
*attacks*. In order to contact a node, the node itself has to be online, 
there has to be an honest path between you and the node, and you have to 
be able to discover that path. So the attacker can dump traffic on the 
low-capacity honest paths to take them offline and then create a bunch of 
Sybils to make discovering the higher-capacity paths more difficult, and 
you have no way to distinguish between the target legitimately being 
offline and merely all the paths you've tried being compromised. The 
answer is to somehow know ex ante which paths are honest, but easier said 
than done. 

I think the bottom line is that there will never be a system that *can’t* 
be attacked. But then again, if you’re really powerful you can just nuke 
people with hydrogen bombs, so that line of reasoning ends in reductio ad 
absurdum. The goal is just to build a system where the cost of an attack 
is so high that (a) most potential attackers are priced out (i.e., it’s way 
above the skr1pt k1dd13 threshold), (b) economically “rational” actors would 
see no net benefit after costs, and (c) anyone both able and “irrationally” 
motivated (terrorists, a police state, etc.) could attack you in so many 
other ways that the point is moot.

This logic works domestically as well as with regard to foreign or criminal 
agencies. If you create a robust network that is reliable, easy to use, 
and generally useful, then it’s going to get used for commerce. That in 
turn means that your own government can’t shut it down without interfering
with commerce, which in effect puts a “tax” on stifling communication.

> Thanks again, Microsoft. Though I think the OpenVPN users might have 
beat you to the equivalent solution, e.g. 
http://superuser.com/questions/120038/changing-network-type-from-unidentified-network-to-private-network-on-an-openvpn

Almost. I read those threads. But their solution requires that one use a 
*real* IP on the other side of the tunnel, and if that IP is not up it 
doesn’t work. It also means that you must go to the trouble of determining
a real IP and updating it if that IP changes, not to mention the security 
issues. Since it relies on ARP, anyone capable of ARP spoofing can change 
the security designation of a Windows network. (This doesn’t just work 
over VPNs of course… anyone at a coffee shop can make your laptop think 
it’s “at work” if they somehow know your work router’s MAC and IP. Those 
aren’t terribly hard to infer.)

The solution I wrote uses static ARP and a static fake IP to literally 
hold down the Windows network classifier and instruct it which network 
this is, giving it an opaque static unique faux-MAC. Not spoofable, and 
always works.
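
Roughly, the shape of that approach is something like the following sketch 
(not the actual ZeroTier code; the interface name, the anchor IP from a 
reserved test range, and the locally administered MAC are all illustrative 
assumptions):

    import subprocess

    # Sketch: pin a static neighbor (ARP) entry for a fake "anchor" address on
    # the virtual adapter, so the Windows network classifier always sees the
    # same, unspoofable identity for this network. Requires an elevated prompt.
    IFACE = "ZeroTier One"                  # assumed adapter name
    ANCHOR_IP = "203.0.113.1"               # reserved TEST-NET-3 address
    ANCHOR_MAC = "02-00-00-00-00-01"        # locally administered, never on a real LAN

    subprocess.run(["netsh", "interface", "ipv4", "add", "neighbors",
                    IFACE, ANCHOR_IP, ANCHOR_MAC], check=True)

Because the entry is static, an ARP spoofer on the segment can’t rewrite it, 
and the classifier keys off an address/MAC pair that only exists on this 
virtual network.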

> (And as a public service announcement, 1.1.1.1 is no longer a "fake" 
address as the 1.0.0.0/8 block was assigned to APNIC. 
http://www.iana.org/assignments/ipv4-address-space/ipv4-address-space.xhtml)

Thanks. I’ll change that.

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
David Geib
Date:
2014-08-20 @ 04:56
> Yes, and I also think it’s a toy version of an even larger problem: how
to devolve power in general.
> Human societies are networks too. I think this work has political and
philosophical implications inasmuch as the same information theoretic
principles that govern computer networks might also operate in human ones.
> If we can fix it here, maybe it can help us find new ways of fixing it
there.

Or the other way around for that matter. Look at the societies that work
best and see how they do it.

> I wonder what might be done if we could pair mesh nets with broadcast
media? Has anyone looked into that? I picture a digitally encoded shortwave
analog to a “numbers station” that continuously broadcasts the current mesh
net consensus for trust anchor points and high-availability nodes.

I think we have two different problems here and it makes sense to
distinguish them.

The first problem is the key distribution problem, which is an
authentication problem. You have some name or other identity and you need a
trustworthy method of obtaining the corresponding public key.

The second problem is the communication problem, which is a
reliability/availability problem. You have some public key and you want to
make a [more] direct connection to it so you need to identify someone or
some path that can be trusted to reliably deliver the request.

Traditional broadcast media can actually solve both of them in different
ways. Key distribution has the narrower solution. If you're The New York
Times or CBS then you can e.g. print the QR code of your public key
fingerprint on the back page of every issue. A reader who picks up an issue
from a random news stand can have good confidence that the key isn't forged
because distributing a hundred thousand forged copies of The New York Times
every day or setting up a 50KW transmitter on a frequency allocated to CBS
would be extremely conspicuous and would quickly cause the perpetrator to
get sued or arrested or shut down by the FCC. But that only works if you
yourself are the broadcaster (or you trust them to essentially act as a
CA). And pirate radio doesn't have the same effect because the fact that
the FCC will find you and eat you is *why* you can trust that a broadcast
on CBS is actually from CBS. Without that it's just self-signed
certificates.
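
The back-page-of-the-newspaper trick boils down to publishing a short digest 
of the key through a hard-to-forge channel and letting readers recompute it. 
A sketch of the digest side (the key bytes and the grouping format are 
illustrative assumptions):

    import hashlib

    def fingerprint(public_key_der):
        # Short, human-comparable digest of a public key; this is what a
        # publisher would print (or encode as a QR code) on the back page.
        digest = hashlib.sha256(public_key_der).hexdigest().upper()
        return ":".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    printed = fingerprint(b"...the publisher's public key bytes...")
    fetched = fingerprint(b"...the publisher's public key bytes...")
    assert printed == fetched   # reader compares the printed value to the key fetched online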

By contrast, broadcasting could theoretically solve the availability
problem for everyone. If anyone can broadcast a message and have it be
received by everyone else then you've essentially solved the problem. The
trouble is the efficiency. That's just the nature of broadcast. NBC
broadcasting TV to millions of households who aren't watching it is an
enormous waste of radio spectrum but it's a sunk cost (at least until the
FCC reallocates more of their spectrum). You can even do the same thing
without a broadcast tower; it just has the same lack of efficiency. It's
simple enough to have every node regularly tell every other node how to
contact it, but it's not very efficient or scalable.
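
Back-of-the-envelope on why "everyone tells everyone" doesn't scale: the 
number of announcements per round grows roughly with the square of the 
network size.

    # n nodes, each announcing its contact info to the other n-1 nodes
    for n in (100, 10_000, 1_000_000):
        print(f"{n:>9} nodes -> {n * (n - 1):>18,} announcements per round")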

> Or... we are admitting that trust is inherently asymmetrical because of
course it is! Nobody trusts everyone equally. The question then is whether
people need to agree at the “meta” level on some common things that they
all trust, and if so how this is accomplished. Seems to me that they do
otherwise cooperation becomes difficult (game theory territory).

That's probably true in an "all must agree on what protocol to use" sense
but I don't think dynamic global consensus is actually required in general.
The things like that which everyone has to agree about are relatively
static. Meanwhile if Alice and Bob want to communicate then Alice and Bob
have to agree on how to do it but that doesn't require everybody else to do
it in the same way or trust the same parties.

> I wonder if we could actually define good guys in some meaningful way,
like via game theory? Are they actors that tend toward cooperation in an
environment of mostly cooperators?

The hard part is that the bad guys can behave identically to the good guys
until they don't. So establishing a trusted identity has to be in some way
difficult or expensive so that burning one would be a significant loss to
an attacker.

> The goal is just to build a system where the cost of an attack is so high

Right, of course. The trouble is there could be realistic DoS attacks
within the capabilities of various notorious internet trolls which are
legitimately hard to defend against.

> Decentralization and the devolution of power are something that lots of
people want, and they’re something human beings have been trying to achieve
in various ways for a very long time. Most of these efforts, like
democracy, republics, governmental balance of power, anti-trust laws, etc.,
pre-date the Internet. Yet it never works.

I don't really agree that it never works. For all the failings of free
market capitalism, it's clearly better than a centrally planned economy.
The thing about functioning decentralized and federated systems is that
they often work so well they become invisible. Nobody notices the *absence*
of a middle man.

And it seems like the more centralized systems work even less well. Look at
Congress. Their approval ratings are lower than hemorrhoids, toenail
fungus, dog poop, cockroaches and zombies. Say what you will about PGP, at
least it's preferable to zombies.

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Mike
Date:
2014-08-20 @ 11:04
On Wed, 2014-08-20 at 00:56 -0400, David Geib wrote:
> ...

> I don't really agree that it never works. For all the failings of free
> market capitalism, it's clearly better than a centrally planned
> economy. The thing about functioning decentralized and federated
> systems is that they often work so well they become invisible. Nobody
> notices the *absence* of a middle man. 


This is a great conversation and I'm enjoying the way the ideas are
flowing.  This paragraph has pushed one of my buttons, so I'm weighing
in.

I agree with the failure of the planned economy experiment, but I
think the comparison with the free market needs expansion.  It's
important to emphasise that we don't actually _have_ a free market,
not as Hayek and his followers envisaged.  The potential for market
imbalances (of power, knowledge, choice and such) is too great, so we
end up with laws (against fraud, weights-and-measures abuse, and goods
that are not of marketable quality) and regulations to reduce power and
knowledge imbalances.  Of course we also have deliberate imbalances,
such as immigration restrictions (to control the market in workers),
trade tariffs (to reinforce local industry), and trade agreements (to
enhance power imbalances).

Most of these problems come out of the sheer size of states and
corporations, and most of the normal human interactions that might
protect against abuse assume relatively small groups.  A sports club, a
church community, even a village are all self-managing.  Regulation
still happens, but the detection and response are (or can be)
relatively lightweight.  This doesn't work even with cities, where
everyone is a stranger, and police are required.

To bring the point home, we can consider a market as a collection of
protocols.  This conversation, or the re-decentralise thing, probably
started by assuming these protocols all work perfectly, as per Hayek.
Clearly that doesn't work.  We need rules and regulations, we need
detection and response, and the response has to have some real impact.
These are, I suspect, human things.  Humans are interacting, and humans
need to address problems.  As a direct outcome of the human model, we
might look at community size.  This depends on the facilities being
offered.  Distributed search (YaCy, for example) could have a very
large number of users.  Social networks, on the other hand, might need
very focused small communities.  I can imagine a sort of federated
facility, using something like Diaspora, where smallish groups can
share a server, but servers can talk to each other in some limited way
to allow for groups that overlap.  Problems can then be resolved
through side channels and appropriate server management tools.  (And a
'server' could be a collection of distributed nodes, of course.)

OK, that'll do from me.  Thanks for listening.


Mike S.

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-20 @ 11:10
Am 20.08.2014 06:56, schrieb David Geib:
> Or the other way around for that matter. Look at the societies that 
> work best and see how they do it.

BTW: That's been the concept we followed when we came up with Askemos.

Understanding that we currently have an internet akin to some kind of 
feudal society, we asked: what came next, and how did they do it?

Next came democracy (again), in the form of constitutional states: balance 
of powers, social contracts, bi- and multilateral contracts, etc.

Let's set aside the fact that we see them failing all too often. Maybe 
we can make real societies better (i.e., the governments less broken) 
once we understand how to implement these concepts with the rigor 
required in programming.

So instead of inventing anything anew – which people would then have to 
learn, adopt and accept – we tried to map these concepts as well as we 
could into a minimal language.  Then we implemented a prototype interpreter 
for this language (BALL) to learn how this could work.

Best

/Jörg

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
David Burns
Date:
2014-08-20 @ 22:49
On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko <adam.ierymenko@zerotier.com
> wrote:

> Human societies are networks too. I think this work has political and
> philosophical implications inasmuch as the same information theoretic
> principles that govern computer networks might also operate in human ones.
>
> If we can fix it here, maybe it can help us find new ways of fixing it
> there.
>


And networks are human societies: every node has at least one person
associated with it, trying to cooperate/communicate with at least one
other. But it seems like it would be easy to push the analogy too far, as
custom, law, contracts, etc. are only vaguely similar to software. I would
expect at least a few very interesting and annoying differences, though
maybe also some surprising and useful isomorphisms.
Dave

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-21 @ 09:23
Am 21.08.2014 00:49, schrieb David Burns:
>
> On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko 
> <adam.ierymenko@zerotier.com <mailto:adam.ierymenko@zerotier.com>> wrote:
>
>     Human societies are networks too. I think this work has political
>     and philosophical implications inasmuch as the same information
>     theoretic principles that govern computer networks might also
>     operate in human ones.
>
>     If we can fix it here, maybe it can help us find new ways of
>     fixing it there.
>
>
>
> And networks are human societies, every node has at least one person 
> associated with it, trying to cooperate/communicate with at least one 
> other. But it seems like it would be easy to push the analogy too far, 
> as custom, law, contracts, etc. are only vaguely similar to software. 
> I would expect at least a few very interesting and annoying 
> differences, though maybe also some surprising and useful isomorphisms.

That's pretty much our experience.

You don't want to push the analogy too far.  After all it *is* an 
analogy.  Not only would it be too complicated, we *know* there are 
inconsistencies at least in law. (Let alone custom!)  Which we might 
want to fix.

The useful isomorphism is pretty obvious.  At least to computer 
scientists, lawyers and ethics professionals as it turned out during the 
project.  And the rigor it enforces upon the programmer, when she must 
treat code as if it were a contract, did actually help in the end.  
But it *is* a nightmare to the newcomer.

"Annoying differences" we have not found so far.  What we did find, 
widespread, was incomplete understanding of the actual business case.  
We barely found a CS master's student who could identify all the 
contracts of even a single 
trade transaction.  When we began the project I would have failed badly 
myself.

However, I don't understand your "vaguely similar".  It doesn't seem to be 
that vague.  It's just a different "machine" executing it: physical 
hardware or human agents.  But both are supposed to stick precisely to 
the rules until the software is changed.  (And both are usually buggy.)

/Jörg

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Stephan Tual
Date:
2014-08-22 @ 13:41
I've been a bit silent because I don't want to be seen as 'plugging' our 
project, but we're having these wonderful conversations about 
decentralization, social contracts and privacy without having mentioned 
http://www.ethereum.org.   

We're in PoC5 and about to release PoC6, which includes a decentralized 
namereg and various other prototype contracts. We intend to also release 
an embryo of a reputation system, limited in its scope since ethereum is in 
effect featureless, and where the existence of one reputation system 
doesn't preclude the existence of others.  

Taylor Gerring wrote a recent blogpost on what ethereum means for the 
decentralized web: 
https://blog.ethereum.org/2014/08/18/building-decentralized-web/
I've made a little video explaining what ethereum is, also in 
non-technical terms at: https://www.youtube.com/watch?v=Clw-qf1sUZg

For those who are in London, we have a panel on the topic of personal 
liberties and decentralization of social contracts on the 12th September:
http://www.meetup.com/ethereum/events/201709682/. On that note I'd love to
meet up with other redecentralize members; is there a regular meetup?

Cheers!


ethereum (http://ethereum.org/)  

Stephan Tual  

--  
t. @stephantual
s. stephan.tual



On Thursday, 21 August 2014 at 10:23, Jörg F. Wittenberger wrote:

> Am 21.08.2014 00:49, schrieb David Burns:
> >  
> > On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko 
<adam.ierymenko@zerotier.com (mailto:adam.ierymenko@zerotier.com)> wrote:
> > > Human societies are networks too. I think this work has political 
and philosophical implications inasmuch as the same information theoretic 
principles that govern computer networks might also operate in human ones.

> > >  
> > > If we can fix it here, maybe it can help us find new ways of fixing 
it there.  
> >  
> > And networks are human societies, every node has at least one person 
associated with it, trying to cooperate/communicate with at least one 
other. But it seems like it would be easy to push the analogy too far, as 
custom, law, contracts, etc. are only vaguely similar to software. I would
expect at least a few very interesting and annoying differences, though 
maybe also some surprising and useful isomorphisms.
>  
> That's pretty much our experience.
>  
> You don't want to push the analogy too far.  After all it *is* an 
analogy.  Not only would it be too complicated, we *know* there are 
inconsistencies at least in law. (Let alone custom!)  Which we might want 
to fix.
>  
> The useful isomorphism is pretty obvious.  At least to computer 
scientists, lawyers and ethics professionals as it turned out during the 
project.  And the rigor it enforces upon the programmer, when she is must 
treat code as if it was a contract did actually help in the end.  But it 
*is* a nightmare to the newcomer.
>  
> "Annoying differences" we did not find so far.  Widespread we found 
incomplete understanding of the actual business case.  We barely found a 
CS master student who could identify all the contracts of even a single 
trade transaction.  When we began the project I would have failed badly 
myself.
>  
> However I don't understand you "vaguely similar".  It seems not to be 
that vague.  It's just a different "machine" executing it: physical 
hardware or human agents.  But both are supposed to stick precisely to the
rules until the software is changed.  (And both are usually buggy.)
>  
> /Jörg

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-22 @ 20:51
I don’t think there’s anything wrong with plugging a project. Part of what
this group is about is discussing various work going on in this area.

I’ve been following Ethereum for a long time, and I’m really fascinated by
it. It strikes me as a step out beyond just currency for the block chain 
and into the realm of being able to truly define autonomous organizations,
etc. I think “cryptocorps” is much more interesting than currency 
personally.

I of course have a lot of questions about how these things work, and in 
particular how you avoid some of the scaling problems with the block chain
— namely that it grows without bound. I’ve heard various checkpointing 
solutions to that, but so far nothing definitive.

On Aug 22, 2014, at 6:41 AM, Stephan Tual <stephan.tual@ethereum.org> wrote:

> I've been a bit silent because I don't want to be seen as 'plugging' our
project, but we're having this wonderful conversations about 
decentralization, social contracts and privacy without having mentioned 
http://www.ethereum.org. 
> 
> We're in PoC5 and about to release PoC6, which include a decentralized 
namereg and various other prototype contracts. We intend to also release 
an embryo of reputation system, limited in its scope as ethereum is in 
effect featureless, and where the existence of one reputation system 
doesn't preclude the existence of others. 
> 
> Taylor Gerring wrote a recent blogpost on what ethereum means for the 
decentralized web: 
https://blog.ethereum.org/2014/08/18/building-decentralized-web/
> I've made a little video explaining what ethereum is, also in 
non-technical terms at: https://www.youtube.com/watch?v=Clw-qf1sUZg
> 
> For those we are in London we have a panel on the topic of personal 
liberties and decentralization of social contracts on the 12th September:
> http://www.meetup.com/ethereum/events/201709682/. On that note I'd love 
to meetup with other redecentralize members, is there a regular meetup?
> 
> Cheers!
> 
> 
> <signature0.png>
> ethereum
> 
> Stephan Tual
> 
> --
> t. @stephantual
> s. stephan.tual
> On Thursday, 21 August 2014 at 10:23, Jörg F. Wittenberger wrote:
> 
>> Am 21.08.2014 00:49, schrieb David Burns:
>>> 
>>> On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko 
<adam.ierymenko@zerotier.com> wrote:
>>>> Human societies are networks too. I think this work has political 
and philosophical implications inasmuch as the same information theoretic 
principles that govern computer networks might also operate in human ones.
>>>> 
>>>> If we can fix it here, maybe it can help us find new ways of fixing it there.
>>> 
>>> 
>>> And networks are human societies, every node has at least one person 
associated with it, trying to cooperate/communicate with at least one 
other. But it seems like it would be easy to push the analogy too far, as 
custom, law, contracts, etc. are only vaguely similar to software. I would
expect at least a few very interesting and annoying differences, though 
maybe also some surprising and useful isomorphisms.
>> 
>> That's pretty much our experience.
>> 
>> You don't want to push the analogy too far.  After all it *is* an 
analogy.  Not only would it be too complicated, we *know* there are 
inconsistencies at least in law. (Let alone custom!)  Which we might want 
to fix.
>> 
>> The useful isomorphism is pretty obvious.  At least to computer 
scientists, lawyers and ethics professionals as it turned out during the 
project.  And the rigor it enforces upon the programmer, when she is must 
treat code as if it was a contract did actually help in the end.  But it 
*is* a nightmare to the newcomer.
>> 
>> "Annoying differences" we did not find so far.  Widespread we found 
incomplete understanding of the actual business case.  We barely found a 
CS master student who could identify all the contracts of even a single 
trade transaction.  When we began the project I would have failed badly 
myself.
>> 
>> However I don't understand you "vaguely similar".  It seems not to be 
that vague.  It's just a different "machine" executing it: physical 
hardware or human agents.  But both are supposed to stick precisely to the
rules until the software is changed.  (And both are usually buggy.)
>> 
>> /Jörg
> 

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Stephan Tual
Date:
2014-08-24 @ 21:15
Scalability is indeed the #1 'problem' with decentralized-consensus-at-scale 
technologies. It's not the only challenge of course; others include 
transaction speed, useful PoW (if PoW is used), etc. Vitalik Buterin 
made a little compendium with proposed avenues of research (at the time) a
few months ago: https://github.com/ethereum/wiki/wiki/Problems.

Note that I put the word 'problem' above in quotes, because while it's 
certainly a long term issue, it's by no means a game ender, especially 
while we are still in the development phase, and even a few months (if not
a year) after our launch next Feb/March. While I'd love to see Ethereum 
Dapps ('decentralized apps') hit the mainstream in the first few months 
(think instant adoption of a decentralized ebay, or decentralized 
facebook), history tells us that the adoption of new technologies tends to
be far more gradual than that. Furthermore, Visa Europe processes 10,000 
transactions per second on Christmas Eve, and Bitcoin as it stands is 
hard-capped at 7 (yes, seven). Does it make bitcoin useless, not worth 
using, researching and improving? Of course not. The same principle 
applies to Ethereum's concepts of DAOs (decentralized autonomous 
organizations) and immutable social contracts.  

That being said, in the particular case of Ethereum, we have already done 
quite a bit of work on alleviating scalability problems. In terms of file 
sharing (videos, music, long text), we'll be leveraging a DHT-like network
we codenamed swarm. Files would therefore not be stored in our blockchain,
but only pointers to said files. This opens up a new can of worms for 
particular use-cases such as Dropbox, and the incentivisation of nodes is 
in itself a fascinating topic, on which we've written more at: 
https://blog.ethereum.org/2014/08/16/secret-sharing-erasure-coding-guide-aspiring-dropbox-decentralizer/

In terms of offloading high frequency messaging, we're also working on 
Whisper, a secure, anonymous, onion routed messaging platform. This is 
also in the early stages, but I digress. In terms of pure-blockchain 
scalability, we're already not UTXO-based but accounts-based. It will
be possible to download only the current state and authenticate it without
bothering with most of the ethereum history. We're also working on 
Microchains, something Gavin announced 2 weeks ago in London to be 
potentially featured in PoC6 C++.

And yes, I totally agree with you that 'currency' is just one of the many 
applications of these types of technology, and (in my humble opinion), not
the most interesting when taken at face-value: far more interesting are 
DAOs, incentivization of 3rd party processes (think BOINC on steroids), 
decentralized and censorship-proof social networks, and ultimately, social
change (by bringing 'reputation' as a first class citizen in pseudonymous 
financial dealings - think ROSCAs and ASCAs on a massive scale).  

For anyone interested in Ethereum-specific questions, I won't hog the 
thread - instead ping me on skype at stephan.tual and we can keep the 
convo going there.  

Cheers!  

ethereum (http://ethereum.org/)  

Stephan Tual  
Chief Communications Officer
--
t. @stephantual
s. stephan.tual



On Friday, 22 August 2014 at 21:51, Adam Ierymenko wrote:

> I don’t think there’s anything wrong with plugging a project. Part of 
what this group is about is discussing various work going on in this area.
>  
> I’ve been following Ethereum for a long time, and I’m really fascinated 
by it. It strikes me as a step out beyond just currency for the block 
chain and into the realm of being able to truly define autonomous 
organizations, etc. I think “cryptocorps” is much more interesting than 
currency personally.
>  
> I of course have a lot of questions about how these things work, and in 
particular how you avoid some of the scaling problems with the block chain
— namely that it grows without bound. I’ve heard various checkpointing 
solutions to that, but so far nothing definitive.
>  
> On Aug 22, 2014, at 6:41 AM, Stephan Tual <stephan.tual@ethereum.org 
(mailto:stephan.tual@ethereum.org)> wrote:
> > I've been a bit silent because I don't want to be seen as 'plugging' 
our project, but we're having this wonderful conversations about 
decentralization, social contracts and privacy without having mentioned 
http://www.ethereum.org.   
> >  
> > We're in PoC5 and about to release PoC6, which include a decentralized
namereg and various other prototype contracts. We intend to also release 
an embryo of reputation system, limited in its scope as ethereum is in 
effect featureless, and where the existence of one reputation system 
doesn't preclude the existence of others.  
> >  
> > Taylor Gerring wrote a recent blogpost on what ethereum means for the 
decentralized web: 
https://blog.ethereum.org/2014/08/18/building-decentralized-web/
> > I've made a little video explaining what ethereum is, also in 
non-technical terms at: https://www.youtube.com/watch?v=Clw-qf1sUZg
> >  
> > For those we are in London we have a panel on the topic of personal 
liberties and decentralization of social contracts on the 12th September:
> > http://www.meetup.com/ethereum/events/201709682/. On that note I'd 
love to meetup with other redecentralize members, is there a regular 
meetup?
> >  
> > Cheers!
> >  
> >  
> > <signature0.png>
> > ethereum (http://ethereum.org/)
> >  
> > Stephan Tual
> >  
> > --
> > t. @stephantual
> > s. stephan.tual
> >  
> >  
> >  
> > On Thursday, 21 August 2014 at 10:23, Jörg F. Wittenberger wrote:
> >  
> > > Am 21.08.2014 00:49, schrieb David Burns:
> > > >  
> > > > On Tue, Aug 19, 2014 at 9:22 AM, Adam Ierymenko 
<adam.ierymenko@zerotier.com (mailto:adam.ierymenko@zerotier.com)> wrote:
> > > > > Human societies are networks too. I think this work has 
political and philosophical implications inasmuch as the same information 
theoretic principles that govern computer networks might also operate in 
human ones.  
> > > > >  
> > > > > If we can fix it here, maybe it can help us find new ways of 
fixing it there.  
> > > >  
> > > > And networks are human societies, every node has at least one 
person associated with it, trying to cooperate/communicate with at least 
one other. But it seems like it would be easy to push the analogy too far,
as custom, law, contracts, etc. are only vaguely similar to software. I 
would expect at least a few very interesting and annoying differences, 
though maybe also some surprising and useful isomorphisms.
> > >  
> > > That's pretty much our experience.
> > >  
> > > You don't want to push the analogy too far.  After all it *is* an 
analogy.  Not only would it be too complicated, we *know* there are 
inconsistencies at least in law. (Let alone custom!)  Which we might want 
to fix.
> > >  
> > > The useful isomorphism is pretty obvious.  At least to computer 
scientists, lawyers and ethics professionals as it turned out during the 
project.  And the rigor it enforces upon the programmer, when she is must 
treat code as if it was a contract did actually help in the end.  But it 
*is* a nightmare to the newcomer.
> > >  
> > > "Annoying differences" we did not find so far.  Widespread we found 
incomplete understanding of the actual business case.  We barely found a 
CS master student who could identify all the contracts of even a single 
trade transaction.  When we began the project I would have failed badly 
myself.
> > >  
> > > However I don't understand you "vaguely similar".  It seems not to 
be that vague.  It's just a different "machine" executing it: physical 
hardware or human agents.  But both are supposed to stick precisely to the
rules until the software is changed.  (And both are usually buggy.)
> > >  
> > > /Jörg
> >  
>  

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
David Burns
Date:
2014-08-23 @ 06:30
On Wednesday, August 20, 2014, Jörg F. Wittenberger <
Joerg.Wittenberger@softeyes.net> wrote:
>
>
> However I don't understand you "vaguely similar".  It seems not to be that
> vague.  It's just a different "machine" executing it: physical hardware or
> human agents.  But both are supposed to stick precisely to the rules until
> the software is changed.  (And both are usually buggy.)
>

I was trying to compensate for my bias by using understatement and
ambiguity. But now that you challenge me, I feel obligated to try
to respond.

Has anyone written a mathematical analysis of the isomorphism, its
features and limits? Custom and law typically operate by defining
constraints that must not be violated, leaving agents free to pursue
arbitrary goals using arbitrary strategies within those limits. Software
typically provides a menu of capabilities, defined (usually) by a
sequential, goal oriented algorithm, often employing a single prechosen
strategy. Constraints limit software, but do not dominate the situation as
in law.
I must obey the traffic laws while driving to work. The law knows nothing
about my goal. I am in charge. If/when we all have self-driving cars,
traffic laws will serve no purpose, but the car has to know where I want to
go, in addition to the constraints and heuristics that allow it to navigate
safely there. I am still in charge, but not in control. Action in each case
combines intent, strategy, resources and constraints, but the mix is
different. Or maybe the level of abstraction?

I can use software to break the law, and I can use the law to break
software, but it is an accident of language that I can make these
statements, the meaning is not at all similar.

I would be delighted for you to convince me that I am being too
pessimistic, ignorant and unimaginative. I would prefer to be on the other
side of this argument.
Cheers,
Dave


-- 
"You can't negotiate with reality."
"You can, but it drives a really hard bargain."

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-25 @ 09:58
Am 23.08.2014 08:30, schrieb David Burns:
>
>
> On Wednesday, August 20, 2014, Jörg F. Wittenberger 
> <Joerg.Wittenberger@softeyes.net 
> <mailto:Joerg.Wittenberger@softeyes.net>> wrote:
>
>
>     However I don't understand you "vaguely similar".  It seems not to
>     be that vague.  It's just a different "machine" executing it:
>     physical hardware or human agents.  But both are supposed to stick
>     precisely to the rules until the software is changed.  (And both
>     are usually buggy.)
>
>
> I was trying to compensate for my bias by using understatement and 
> ambiguity. But now that you challenge me, I feel obligated to try 
> to respond.
>
> Has anyone written a mathematical analysis of the isomorphism, it's 
> features and limits?

We can't claim a mathematical analysis.  At least not of the full problem.

We did, however, build a programming environment to gather 
experience.  The limits of the system are rather tight: it is 
essentially a system to collect/assert proofs of the state of software 
agents.  The agent's code, however, is treated like a contract: no change, 
no upgrade.  The system actually starts by creating a social contract 
holding all the code required to boot the system.  By analogy this would 
be the constitution and the body of law a human inherits.


> Custom and law typically operate by defining constraints that must not 
> be violated, leaving agents free to pursue arbitrary goals using 
> arbitrary strategies within those limits. Software typically provides 
> a menu of capabilities, defined (usually) by a sequential, goal 
> oriented algorithm, often employing a single prechosen 
> strategy. Constraints limit software, but do not dominate the 
> situation as in law.

At this point we might want to subclass "software".  Consumer-grade 
software, as you're talking about here, looks like the assembly 
instructions that come with your furniture, not so much like law.  Both are 
expressed in words.

So "law-like software" would probably be a class of assertions. 
Application code would be supposed to include checks for the relevant 
assertions.
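
A toy sketch of that distinction (my own illustration, not Askemos code): 
the "law-like" part is an assertion that knows nothing about the goal, and 
the goal-oriented application code is expected to check it.

    # An assumed "law" expressed as an assertion: it only rejects actions that
    # violate the constraint and says nothing about what the agent is trying to do.
    SPEED_LIMIT = 50

    def assert_lawful(action):
        assert action["speed"] <= SPEED_LIMIT, "constraint violated"

    def drive_to_work(route):
        # Goal-oriented application code: free to pick any strategy within
        # the law, but supposed to include checks for the relevant assertions.
        for leg in route:
            action = {"leg": leg["name"],
                      "speed": min(leg["desired_speed"], SPEED_LIMIT)}
            assert_lawful(action)
            print("driving", leg["name"], "at", action["speed"])

    drive_to_work([{"name": "main street", "desired_speed": 60},
                   {"name": "on-ramp", "desired_speed": 45}])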

> I must obey the traffic laws while driving to work. The law knows 
> nothing about my goal. I am in charge. If/when we all have 
> self-driving cars, traffic laws will serve no purpose, but the car has 
> to know where I want to go, in addition to the constraints and 
> heuristics that allow it to navigate safely there. I am still in 
> charge, but not in control. Action in each case combines intent, 
> strategy, resources and constraints, but the mix is different. Or 
> maybe the level of abstraction?

I'd say: the level of abstraction.  We can't take the human intent out 
of the game.  (In our model, agents representing users are free to send 
arbitrary messages.  Akin to no regulation and freedom of expression.)

>
> I can use software to break the law, and I can use the law to break 
> software, but it is an accident of language that I can make these 
> statements, the meaning is not at all similar.

Again: what is software for you?  Can I use software to break software?  
What is "to break"?

IMHO software is first and foremost an expression.  In some language. 
For which some interpreter exists. Which maintains some ongoing process.

>
> I would be delighted for you to convince me that I am being too 
> pessimistic, ignorant and unimaginative. I would prefer to be on the 
> other side of this argument.

As a programmer, I'd say: given enough time I can program everything I 
can understand well enough to express in a formal language.

The risk is that in formalizing law, we might discover inconsistencies 
in the law.  Too bad ;-)

/Jörg


Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
David Burns
Date:
2014-08-03 @ 21:38
On Friday, August 1, 2014, Adam Ierymenko <adam.ierymenko@zerotier.com>
wrote:

> I just started a personal blog, and my first post includes some thoughts
> I've wanted to get down for a while:
>
> http://adamierymenko.com/decentralization-i-want-to-believe/
>
>
Adam, your blog post interested me a lot. Best of luck with your efforts.
One quibbly question:

>*efficiency, security, decentralization, pick two.*

 Assuming certain sorts of threats, decentralization contributes a lot to
security. In those circumstances, your trichotomy devolves to a
dichotomy, "efficiency or security, pick one."

Fortunately, your actual approach, the peer-(super) peer-peer idea,
finesses the problem nicely. Instead of "I am Spartacus," "I am the blind
idiot god." Still, might attackers find a vulnerability there? In order to
assure the efficiency you desire, someone must provide some resources
intended to act as the superpeer or superpeers. An attacker censors those 
nodes, network efficiency falls below the tolerable threshold, and the bad guys 
win. How do you plan to defend against this attack?

Cheers,
Dave


-- 
"You can't negotiate with reality."
"You can, but it drives a really hard bargain."

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-04 @ 22:58
On Aug 3, 2014, at 2:38 PM, David Burns <tdbtdb@gmail.com> wrote:

> 
> On Friday, August 1, 2014, Adam Ierymenko <adam.ierymenko@zerotier.com> wrote:
> I just started a personal blog, and my first post includes some thoughts
I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 
> 
> Adam, your blog post interested me a lot. Best of luck with your 
efforts. One quibbly question:
> 
> >efficiency, security, decentralization, pick two.
> 
>  Assuming certain sorts of threats, decentralization contributes a lot 
to security. In those circumstances, your trichotomy devolves to a 
dichotomy, "efficiency or security, pick one." 

You're absolutely correct there. Decentralized systems are more robust 
against censorship, most naive denial of service attacks, and the failure 
of critical systems.

What they usually don't offer is a good user experience and high 
performance the other 99% of the time when everything is working well.

A decentralized network under attack will be more robust, but a 
centralized network *not* under attack will be faster, more 
consistent/reliable, easier to reach, consume fewer resources at the edge 
(important for mobile), and generally be easier to use... at least 
according to any known paradigm. Facebook is down every once in a while, 
but when it's up it's fast and incredibly easy to use compared to 
alternatives.

Everything I've written on this subject comes with a caveat: something new
could be discovered tomorrow. Everything I write assumes the current state
of the art, so obviously any big discoveries could change the whole 
picture. Personally I think a discovery in an area like graph theory that 
let us build *completely* center-less networks with the same performance, 
efficiency, and security characteristics as centralized ones would rank up
there with the discovery of public key cryptography. It'd be Nobel Prize 
material if there were a Nobel Prize for CS.

> Fortunately, your actual approach, the peer-(super) peer-peer idea, 
finesses the problem nicely. Instead of "I am Spartacus," "I am the blind 
idiot god." Still, might attackers find a vulnerability there? In order to
assure the efficiency you desire, someone must provide some resources 
intended to act as the superpeer or superpeers. Attacker censors those 
nodes, network efficiency falls below the tolerable threshold, bad guys 
win. How do you plan to defend against this attack?

Yeah, that's basically it. All my current thinking is around the idea of 
minimal central hubs that allow us to have the benefits of central points 
without the downsides. I'm working on a follow-up blog post going into 
more detail about zero-knowledge hubs and what might be required there.

If I can find the time I might try to hack something up, but don't count 
on it in the next few months... so much other stuff going on.

-Adam

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-03 @ 09:31
Adam,

I've got a question:

Am 02.08.2014 01:07, schrieb Adam Ierymenko:
> I just started a personal blog, and my first post includes some thoughts
I've wanted to get down for a while:
>
> http://adamierymenko.com/decentralization-i-want-to-believe/
>

In this blog post you wrote:

 > I designed the protocol to be /capable/ of evolving toward a more 
decentralized design in the future without disrupting existing users, 
but that's where it stands today.

My situation: we wrote a p2p network for replicating state machines with 
Byzantine fault tolerance 
<http://en.wikipedia.org/wiki/Byzantine_fault_tolerance>.

That would be a kind of "global database no single individual 
controls"; I actually like your "blind idiot god" term.  We always 
thought of it as implementing some "general will", like the legal system in 
a constitutional state.  Not so different, is it?

So far we have concentrated on building a practical, working system (e.g., 
self-hosting).  The networking layer is just a plug-in, and the default 
plug-in was always intended to be replaced with state-of-the-art 
implementations.  It will probably not scale, and hence we never tested 
how it scales.  Looking at zerotier, I'm asking: could it possibly 
be a transport plug-in?

What we need:

A) Our identifiers are self-sealing.  That is, they are required to 
match some hash of the (initial) content and four more predefined metadata 
elements.  (We need this to prove their correctness, as in Ricardian 
Contracts, etc.)

So we'd need to register one such identifier per peer in a DHT.

B) We need some kind of Byzantine (Paxos-like) protocol which is 
capable of conveying hash-verified agreement on the proposed update.  (This 
is slightly more than most Paxos implementations provide, since those 
are, for some reason beyond me, designed to TRUST the origin of an 
update.)  Fortunately we have this code.  So what we really need is 
"network traffic" between peers identified by some key.

I understand that zerotier provides (B).  But since I see "some kind" 
of "noise" as the identifier in zerotier, I'm unsure how easy it would be to 
get (A) too.
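
For what it's worth, requirement (A) can be pictured with a minimal sketch 
like the following (the field names and the canonical encoding are 
assumptions for illustration, not the actual Askemos format):

    import hashlib, json

    def seal(content, metadata):
        # "Self-sealing" identifier: a hash over the initial content plus a
        # fixed set of metadata fields, recomputable by anyone holding the object.
        canonical = json.dumps(metadata, sort_keys=True).encode() + content
        return hashlib.sha256(canonical).hexdigest()

    def verify(identifier, content, metadata):
        return identifier == seal(content, metadata)

    meta = {"author": "alice", "created": "2014-08-03",
            "type": "agent", "version": 1}        # four illustrative fields
    ident = seal(b"initial agent code", meta)
    assert verify(ident, b"initial agent code", meta)

The DHT entry would then map this identifier to the peer's current contact 
information.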


Further, I take your "capable of evolving" as a warning: how far does the 
implementation deviate?

Thanks

/Jörg

PS:

As you share my reservations about Bitcoin while at the same time 
looking for trust and accountability, you might want to look at how the 
alternatives compare.  Bitcoin's 51% of hash power is just one way; 
Byzantine agreement instead requires more than two thirds of the 
participants to be honest, but those participants are *well known* and 
contractually bound.  Advantages: a) speed: transactions take a 
fraction of a second over a WAN; b) privacy: data lives precisely where 
you expect it to be and is not leaked elsewhere.  Downside: Bitcoin is 
open-join (anybody can participate), whereas with Askemos you get 
closed-join, like WhatsApp: the owner needs to accept the other party 
before messages are accepted.

Actually I'm currently gathering more info towards a fair comparison.  
Comments welcome:
http://ball.askemos.org/?_v=wiki&_id=1786

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-03 @ 12:21
Sorry, this was supposed to be a private message.
(But hitting "reply" instead of "reply to list" sends it to the list 
anyways.)

One more question: am I correct to understand that zerotier serves 
essentially the same purpose as cjdns?
https://github.com/cjdelisle/cjdns

Thanks

/Jörg

Am 03.08.2014 11:31, schrieb "Jörg F. Wittenberger":
> Adam,
>
> I've got a question:
> ...
> In this blog post you wrote:
>
> > I designed the protocol to be /capable/ of evolving toward a more 
> decentralized design in the future without disrupting existing users, 
> but that's where it stands today.
>

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-04 @ 23:06
Not exactly, but close. CJDNS is a mesh protocol that creates a single L3 
IPv6 network. ZeroTier One is a hybrid peer to peer protocol that creates 
virtual Ethernet networks (plural). ZeroTier is more like SDN for 
everyone, everywhere. (SDN is software defined networking, and refers to 
the creation of software defined virtual networks in data centers.)

I've been following CJDNS for a while. I know it's being used by several 
community meshnet projects. Anyone tried it? I admit I haven't yet, but 
I've heard it basically does work but not perfectly. I'm curious about how
large it could scale though. I'll try it out at some point.

On Aug 3, 2014, at 5:21 AM, Jörg F. Wittenberger 
<Joerg.Wittenberger@softeyes.net> wrote:

> Sorry, this was supposed to be a private message.
> (But hitting "reply" instead of "reply to list" sends it to the list anyways.)
> 
> One more question: am I correct to understand that zerotier serves 
essentially the same purpose as cjdns?
> https://github.com/cjdelisle/cjdns
> 
> Thanks
> 
> /Jörg
> 
> Am 03.08.2014 11:31, schrieb "Jörg F. Wittenberger":
>> Adam,
>> 
>> I've got a question:
>> …
>> In this blog post you wrote:
>> 
>> > I designed the protocol to be capable of evolving toward a more 
decentralized design in the future without disrupting existing users, but 
that's where it stands today.
>> 
> 

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Eric Mill
Date:
2014-08-05 @ 04:48
One line of research and technology that I personally find very exciting,
and highly relevant to the idea of zero-knowledge centralization -- even
though it's still some time off from being scalably useful -- is homomorphic
encryption <http://en.wikipedia.org/wiki/Homomorphic_encryption>.

Homomorphic encryption is a technique where you take two inputs, encrypt
them with a private key, hand them off to some other machine, have that
machine perform a known computation *on the ciphertext*, and give you back
the encrypted result, so you can decrypt it and get the answer. The machine
that did the computation knows nothing about the inputs or the outputs --
it can only blindly operate on them.

While some techniques (like RSA) were partially homomorphic, what you need
to make arbitrary homomorphic computation is a system that can do both
multiplication and addition (together, these are Turing complete), and no
system to do this was found for 40 years, until Craig Gentry's PhD thesis
<http://crypto.stanford.edu/craig/> showed a working algorithm to do it.
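
The "partially homomorphic" part is easy to see with plain RSA, whose 
ciphertexts multiply. A toy sketch (deliberately tiny, insecure numbers, 
purely to show the algebra):

    # Textbook RSA is multiplicatively homomorphic:
    # Enc(m1) * Enc(m2) mod n decrypts to m1 * m2 mod n.
    p, q = 61, 53            # toy primes; real keys use ~2048-bit moduli
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                   # public exponent
    d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

    def enc(m): return pow(m, e, n)
    def dec(c): return pow(c, d, n)

    m1, m2 = 7, 11
    c_product = (enc(m1) * enc(m2)) % n   # done by a party that never sees m1 or m2
    assert dec(c_product) == (m1 * m2) % n
    print(dec(c_product))                 # 77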

The bad news is that it is many, many orders of magnitude too slow to be useful --
and uses "lattice encryption", which requires very large private/public
keys (like GBs). IBM has since scooped up Gentry, and made advances on the
original scheme that have sped it up by a trillion times -- but it is still
a trillion times too slow.

But, someday -- and maybe someday sooner than we think, as these things go
-- maybe it will be feasible to have things like zero-knowledge search
engines. Maybe low-level zero-knowledge tasks, like packet-switching or
whatever, could be feasible much sooner.

It's something to watch!

http://crypto.stackexchange.com/a/9706/16707
http://www2.technologyreview.com/article/423683/homomorphic-encryption/
https://github.com/shaih/HElib

-- Eric


On Mon, Aug 4, 2014 at 7:06 PM, Adam Ierymenko <adam.ierymenko@zerotier.com>
wrote:

> Not exactly, but close. CJDNS is a mesh protocol that creates a single L3
> IPv6 network. ZeroTier One is a hybrid peer to peer protocol that creates
> virtual Ethernet networks (plural). ZeroTier is more like SDN for everyone,
> everywhere. (SDN is software defined networking, and refers to the creation
> of software defined virtual networks in data centers.)
>
> I've been following CJDNS for a while. I know it's being used by several
> community meshnet projects. Anyone tried it? I admit I haven't yet, but
> I've heard it basically does work but not perfectly. I'm curious about how
> large it could scale though. I'll try it out at some point.
>
> On Aug 3, 2014, at 5:21 AM, Jörg F. Wittenberger <Joerg.Wittenberge
> r@softeyes.net <Joerg.Wittenberger@softeyes.net>> wrote:
>
>  Sorry, this was supposed to be a private message.
> (But hitting "reply" instead of "reply to list" sends it to the list
> anyways.)
>
> One more question: am I correct to understand that zerotier serves
> essentially the same purpose as cjdns?
> https://github.com/cjdelisle/cjdns
>
> Thanks
>
> /Jörg
>
> Am 03.08.2014 11:31, schrieb "Jörg F. Wittenberger":
>
> Adam,
>
> I've got a question:
> …
>
>  In this blog post you wrote:
>
> > I designed the protocol to be *capable* of evolving toward a more
> decentralized design in the future without disrupting existing users, but
> that's where it stands today.
>
>
>
>


-- 
konklone.com | @konklone <https://twitter.com/konklone>

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-05 @ 18:57
Oh definitely.

Homomorphic crypto could have a *lot* of uses. It opens up the potential 
for things like black box certificate authorities that could be 
distributed as open source software. The CA signs your key. With what? A 
key pair it generated internally that cannot *ever* be viewed by *anyone*.
:)

-Adam

On Aug 4, 2014, at 9:48 PM, Eric Mill <eric@konklone.com> wrote:

> One line of research and technology that I personally find very 
exciting, and highly relevant to the idea of zero-knowledge centralization
-- even though it's still some time off from being scalably useful -- is 
homomorphic encryption.
> 
> Homomorphic encryption is a technique where you take two inputs, encrypt
them with a private key, hand them off to some other machine, have that 
machine perform a known computation *on the ciphertext*, and give you back
the encrypted result, so you can decrypt it and get the answer. The 
machine that did the computation knows nothing about the inputs or the 
outputs -- it can only blindly operate on them.
> 
> While some techniques (like RSA) were partially homomorphic, what you 
need to make arbitrary homomorphic computation is a system that can do 
both multiplication and addition (together, these are Turing complete), 
and no system to do this was found for 40 years, until Craig Gentry's PhD 
thesis showed a working algorithm to do it.
> 
> The bad news it is many many orders of magnitude too slow to be useful 
-- and uses "lattice encryption", which requires very large private/public
keys (like GBs). IBM has since scooped up Gentry, and made advances on the
original scheme that have sped it up by a trillion times -- but it is 
still a trillion times too slow.
> 
> But, someday -- and maybe someday sooner than we think, as these things 
go -- maybe it will be feasible to have things like zero-knowledge search 
engines. Maybe low-level zero-knowledge tasks, like packet-switching or 
whatever, could be feasible much sooner.
> 
> It's something to watch!
> 
> http://crypto.stackexchange.com/a/9706/16707
> http://www2.technologyreview.com/article/423683/homomorphic-encryption/
> https://github.com/shaih/HElib
> 
> -- Eric
> 
> 
> On Mon, Aug 4, 2014 at 7:06 PM, Adam Ierymenko 
<adam.ierymenko@zerotier.com> wrote:
> Not exactly, but close. CJDNS is a mesh protocol that creates a single 
L3 IPv6 network. ZeroTier One is a hybrid peer to peer protocol that 
creates virtual Ethernet networks (plural). ZeroTier is more like SDN for 
everyone, everywhere. (SDN is software defined networking, and refers to 
the creation of software defined virtual networks in data centers.)
> 
> I've been following CJDNS for a while. I know it's being used by several
community meshnet projects. Anyone tried it? I admit I haven't yet, but 
I've heard it basically does work but not perfectly. I'm curious about how
large it could scale though. I'll try it out at some point.
> 
> On Aug 3, 2014, at 5:21 AM, Jörg F. Wittenberger <Joerg.Wittenberger@softeyes.net> wrote:
> 
>> Sorry, this was supposed to be a private message.
>> (But hitting "reply" instead of "reply to list" sends it to the list anyways.)
>> 
>> One more question: am I correct to understand that zerotier serves 
essentially the same purpose as cjdns?
>> https://github.com/cjdelisle/cjdns
>> 
>> Thanks
>> 
>> /Jörg
>> 
>> On 03.08.2014 at 11:31, "Jörg F. Wittenberger" wrote:
>>> Adam,
>>> 
>>> I've got a question:
>>> …
>>> In this blog post you wrote:
>>> 
>>> > I designed the protocol to be capable of evolving toward a more 
decentralized design in the future without disrupting existing users, but 
that's where it stands today.
>>> 
>> 
> 
> 
> 
> 
> -- 
> konklone.com | @konklone

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Tom Atkins
Date:
2014-08-02 @ 10:16
Thank you so much! I thoroughly enjoyed reading that and had lots of
'this makes so much sense' moments. Time to start thinking more about
'provably minimal hubs'. :-)

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some thoughts
I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Nicholas H.Tollervey
Date:
2014-08-02 @ 14:59
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 02/08/14 00:07, Adam Ierymenko wrote:
> I just started a personal blog, and my first post includes some 
> thoughts I've wanted to get down for a while:
> 
> http://adamierymenko.com/decentralization-i-want-to-believe/
> 

Bravo Adam,

You have succinctly put into words many of the fuzzy "hand wavy"
thoughts I've been reaching for. Reading your blog post has allowed them
to become more concrete.

Regarding peer-(super)peer-peer: when I read this turn of phrase I had
an "aha!" moment. My work/thinking about DHT (via the drogulus project)
has led me to wonder about the nature of hierarchy - when someone or
some node in a network is more important than another. I skirt around it
in my recent Europython talk.

Given the way the Kademlia DHT algorithm I'm using works I expect
several things to happen in this context:

* Individual nodes prefer peers that are reliable (for example, they're
always on the network and reply in a timely fashion). The reliable peers
are the ones that end up in the local node's routing table (that keeps
track of who is out there on the network); see the sketch after this list.

* Nodes share information from their routing tables with each other to
discover who else is on the network and keep themselves up-to-date (it's
part of the process of a lookup in the DHT).

* I would expect (super)peers to *emerge* from such interactions (note
my comments in the Europython talk on hierarchy based upon evidence
rather than architecture).

* If a (super)peer fails or doesn't "perform", the algorithm works
around it - i.e. I expect (super)peers to both emerge via evidence and
for the network to ignore them if or when they fall over or die. This
addresses the "how do we get rid of you?" question from the blackboard
slide in my talk.
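
As a rough sketch of how that first point tends to play out in
Kademlia-style implementations (illustrative Python, not drogulus code):
when a bucket is full, a node pings its least-recently-seen contact and
only evicts it if it fails to respond, so stable, responsive peers end up
holding on to the routing-table slots.

    from collections import deque

    K = 20  # max contacts per bucket (Kademlia's "k")

    class KBucket:
        def __init__(self, ping):
            self.contacts = deque()   # least-recently-seen on the left
            self.ping = ping          # callable: contact -> bool (responded?)

        def update(self, contact):
            if contact in self.contacts:
                self.contacts.remove(contact)     # seen again: move to the
                self.contacts.append(contact)     # most-recently-seen end
            elif len(self.contacts) < K:
                self.contacts.append(contact)     # room for the newcomer
            else:
                if self.ping(self.contacts[0]):
                    self.contacts.rotate(-1)      # old peer still answers:
                                                  # keep it, drop the newcomer
                else:
                    self.contacts.popleft()       # old peer is gone: evict it
                    self.contacts.append(contact) # and admit the newcomer

(Super)peers in this sense are never designated anywhere; they are simply
the contacts that keep answering.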

Also, I like your use of the word "feudal". I've been exchanging emails
with an old school friend who (surprisingly to me at least) is
interested in P2P. Here's a quote from a recent (private) exchange via
email about the Europython talk:

"Consider my slide about hierarchy and power: it works when the people
with authority derive their power through evidence. Unfortunately,
technology can be used to manipulate power in a way that is analogous
to the way aristocratic power works: "Why are you my King?", "Because
my father was your King!" It's the result of an imposed system
(Feudalism or a certain technical architecture) rather than merit or
consensus of opinion based upon tangible evidence (the king has
authority via accident of birth, the website has authority because of
the client/server model; contrast that to a doctor who has authority
because they have years of training and demonstrate a certain skill -
making ill people feel better)."

Finally, you end with "in the meantime, please do your own hacking and
ask your own questions". For this very reason I'm sitting in my shed on
my 17th wedding anniversary hacking on the drogulus (I have a young
family, a "real" job and organise UK Python community stuff - so I get
very little time to work on it; this needs to change). I'd be interested
to know how you'd see this sort of decentralised / peer to peer work
being funded. The best plan I can come up with is to save money and then
take some months off (likely around March next year).

Once again, congratulations on such an interesting and thought-provoking
blog post!

All the best,

Nicholas.
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iQEcBAEBAgAGBQJT3Py6AAoJEP0qBPaYQbb6UQMH/icGQvgEmLLsEczPP4pS1vvz
gRWMf4MLeW8ROLR1+Xp+NtiTk85chDYmtXOndTc7mdR1IKUC5PbSiosuhR8Pk1aH
p9dtuzO9IVbn608KwbRTjtQjgEDzZysm1q8JNlfj64x2NtJP2h22yGMpvhoOwwLB
AJUJOQaGfAk3t9MQBouQ3Ocm/wV6RV4/hiTW0C3e7q4F7bJLdmfDR8zlf2X03d1b
usLTnIeTnIl0MYt6q9TotCUZMeyeYwp0SrKG0S5qfc1oqYIODXfHBUrLLM5PMhPW
7TaZ+Bg3tRvA210/+HVRJTOxSTVV8v9FG0HEVpDDFVcwBbkDnkV0wntezYzDdtw=
=XCUD
-----END PGP SIGNATURE-----

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Steve Phillips
Date:
2014-08-02 @ 04:02
Hi Adam,

Great post!  About 75% of the way through, my mindset shifted from, "I am
disappointed that he has given up on decentralization" to "*zero-knowledge
centralization* is a fucking fantastic idea."

0. Suppose I'm trying to, say, send an IM over a maximally-decentralized IM
network that uses a centralized zero-knowledge server for tracking the IPs
and open port numbers of people or devices connected to said network, which
chat clients somehow query so they know where the IM should be sent.

In this scenario, do you think it's possible for me to get this information
(by decrypting the IP/port pairs somehow) without the server also being
able to get it, which would eliminate the critical zero-knowledge aspect?
Is this the kind of system and situation you have in mind?
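
One possible shape for this, purely as a sketch (it assumes the people on a
network already share a group key out of band, e.g. when they're invited;
PyNaCl for the crypto; the server only ever sees opaque blobs):

    # pip install pynacl
    import nacl.secret, nacl.utils, nacl.hash

    group_key = nacl.utils.random(nacl.secret.SecretBox.KEY_SIZE)  # shared by
    box = nacl.secret.SecretBox(group_key)                         # peers only

    # Publish: the directory server stores ciphertext under an opaque id.
    record = b"alice = 203.0.113.7:9993"
    lookup_id = nacl.hash.sha256(b"alice" + group_key)  # opaque to the server
    directory = {}                                      # stand-in for the server
    directory[lookup_id] = box.encrypt(record)

    # Query: any peer holding the group key recovers the record; the server
    # never can, because it never sees group_key.
    assert box.decrypt(directory[lookup_id]) == record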

1. Could something like the Fluidinfo API, which is world-writable
(assuming it's still working), play the role of The People's Zero-Knowledge
Data Store?

2. Similarly, what if we all shared some world-writable DB-backed API
running on Heroku, GAE, or some other free architecture?  Couldn't that
serve as such a system, which we'd only write encrypted data to?  We could
even have several of these servers, which perhaps exchange information with
one another (simple DB replication?), in which case we'd have a
*federated zero-knowledge
system* hosted by many providers.  (If the servers are independent and
don't communicate, we could have one server that publicly lists the IPs of
the other servers.)  This is basically the Fluidinfo scenario, but hosted
by multiple parties.

Would either of these be helpful?

3. For a year or so I've had a design for a zero-knowledge server that
nonetheless implements partial search/querying functionality for anyone
with the key.  Perhaps this could also play some role in the ecosystem.
 I'll try to write something up.

Thanks for jump-starting this conversation (thread), whose core focus is so
critical to the future of (maximally-)decentralized systems.

--Steve


On Fri, Aug 1, 2014 at 4:07 PM, Adam Ierymenko <adam.ierymenko@zerotier.com>
wrote:

> I just started a personal blog, and my first post includes some thoughts
> I've wanted to get down for a while:
>
> http://adamierymenko.com/decentralization-i-want-to-believe/
>
>

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-02 @ 19:47
On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:

> 3. For a year or so I've had a design for a zero-knowledge server that 
nonetheless implements partial search/querying functionality for anyone 
with the key.  Perhaps this could also play some role in the ecosystem.  
I'll try to write something up.

I've been thinking about that too, but I think it's important to take a 
step back and think through the problem. I really want to push through the
Little Centralization Paper (Tsitsiklis/Xu) a little more.

To me the key thing is this:

Our hypothetical "blind idiot God" must be as minimal as possible. That's 
why I said "provably minimal hub." The Tsitsiklis/Xu paper gives us a 
mathematical way to calculate exactly what percentage of traffic in a 
network must be centralized to achieve the phase transition they describe,
but they do not give us an answer for what functionality is required.

Imagine a stupid-simple key-value store with PUT and GET. Each key has a 
corresponding public key submitted with it that can be used to authorize 
future updates of the same key. Keys expire after one year. That's it.
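
As a rough sketch of just how little that hub has to understand (Ed25519
via PyNaCl for concreteness; none of this is ZeroTier code):

    import time
    from nacl.signing import VerifyKey
    from nacl.exceptions import BadSignatureError

    ONE_YEAR = 365 * 24 * 3600
    store = {}   # key (bytes) -> (value, verify_key_bytes, expires_at)

    def put(key, value, verify_key_bytes, signature):
        entry = store.get(key)
        if entry is not None and entry[2] > time.time():
            verify_key_bytes = entry[1]     # live key: only the original
                                            # public key may update it
        try:
            VerifyKey(verify_key_bytes).verify(key + value, signature)
        except BadSignatureError:
            return False
        store[key] = (value, verify_key_bytes, time.time() + ONE_YEAR)
        return True

    def get(key):
        entry = store.get(key)
        if entry is None or entry[2] < time.time():
            return None
        return entry[0]

    # Client side: sk = SigningKey.generate(), then
    # put(b"k", b"v", bytes(sk.verify_key), sk.sign(b"k" + b"v").signature)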

Or could we go even more minimal than that?

In Turing-completeness there are shockingly minimal systems that are 
universal computers: 
https://en.wikipedia.org/wiki/One_instruction_set_computer

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-03 @ 08:45
Am 02.08.2014 21:47, schrieb Adam Ierymenko:
> On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:
>
>> 3. For a year or so I've had a design for a zero-knowledge server that 
nonetheless implements partial search/querying functionality for anyone 
with the key.  Perhaps this could also play some role in the ecosystem.  
I'll try to write something up.
> I've been thinking about that too, but I think it's important to take a 
step back and think through the problem. I really want to push through the
Little Centralization Paper (Tsitsiklis/Xu) a little more.
>
> To me the key thing is this:
>
> Our hypothetical "blind idiot God" must be as minimal as possible.

I'm with you.

We've been toying with such an idea for a while too.

But looking into this "little centralization paper" I'm left puzzled as
to what *function* the centralized thing should provide.


My overall impression so far is that the paper mostly concerns 
efficiency and load balancing.  I'm not yet convinced that these are the 
most important points.  IMHO reliability and simplicity are much more 
important (as you mentioned in your blog post too).  I view efficiency 
more like an economic term applicable to central service providers 
operating services like FB.

I can only guess what the to-be-centralized functionality would be: point
#1 of your problem definition, the name lookup.

Why?  Because any following operation could be arranged to only ever 
talk to known peers.

>   That's why I said "provably minimal hub." The Tsitsiklis/Xu paper 
gives us a mathematical way to calculate exactly what percentage of 
traffic in a network must be centralized to achieve the phase transition 
they describe, but they do not give us an answer for what functionality is
required.
>
> Imagine a stupid-simple key-value store with PUT and GET. Each key has a
corresponding public key submitted with it that can be used to authorize 
future updates of the same key. Keys expire after one year. That's it.
>
> Or could we go even more minimal than that?

Maybe: forget the keys, don't allow any peer to simply update a value.  
(Why? Assume the "value" is the ownership property of some coin or other 
unique resource.  How to manage transfer? The update could be malicious.)

Instead: link the value to some script, which is invoked by the 
"storing" (better: "maintaining") node to compute the update upon 
request.  (Take the script as some kind of "contract" governing the 
update rules.  No peer simply accepts updates; they check the contract to 
verify the update complies with the terms.)

At this point the design is down to a simple pointer stored with the 
first value, pointing to a second value (the script, which in turn points 
to an update policy; since this is a contract, chances are that no 
updates to it are allowed).

All handling of keys, expiration time etc. would suddenly be user defined.
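
Roughly what I have in mind, as a sketch (the "script" here is just a
Python callable looked up by hash; in a real system it would be something
sandboxed and content-addressed):

    import hashlib

    scripts = {}   # script_hash -> callable(old, proposed, args) -> bool
    store = {}     # key -> (value, script_hash)

    def register_script(label, fn):
        h = hashlib.sha256(label).hexdigest()  # stand-in for hashing the code
        scripts[h] = fn
        return h

    def put(key, proposed, args=None):
        value, script_hash = store[key]
        contract = scripts[script_hash]
        # The maintaining node never simply accepts an update: it runs the
        # contract linked from the value and applies the update only if the
        # contract agrees.
        if contract(value, proposed, args):
            store[key] = (proposed, script_hash)
            return True
        return False

    # Example contract: ownership transfers only with the current owner's
    # consent (the "coin" case above).
    def transfer_ok(old, new, args):
        return args is not None and args.get("signed_off_by") == old["owner"]

    h = register_script(b"transfer-with-owner-consent-v1", transfer_ok)
    store["coin:42"] = ({"owner": "alice"}, h)
    put("coin:42", {"owner": "bob"}, {"signed_off_by": "alice"})     # accepted
    put("coin:42", {"owner": "carol"}, {"signed_off_by": "mallory"}) # rejected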

> In Turing-completeness there are shockingly minimal systems that are 
universal computers: 
https://en.wikipedia.org/wiki/One_instruction_set_computer

I'm afraid there needs to be some compromise.  That's too simple to be 
usable.  How about allowing some kind of hashbang syntax in the script 
to pull in the language of the user's choice to execute the update?

/Jörg

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Adam Ierymenko
Date:
2014-08-04 @ 23:04
On Aug 3, 2014, at 1:45 AM, Jörg F. Wittenberger 
<Joerg.Wittenberger@softeyes.net> wrote:

> Am 02.08.2014 21:47, schrieb Adam Ierymenko:
>> On Aug 1, 2014, at 9:02 PM, Steve Phillips <steve@tryingtobeawesome.com> wrote:
>> 
>>> 3. For a year or so I've had a design for a zero-knowledge server that
nonetheless implements partial search/querying functionality for anyone 
with the key.  Perhaps this could also play some role in the ecosystem.  
I'll try to write something up.
>> I've been thinking about that too, but I think it's important to take a
step back and think through the problem. I really want to push through the
Little Centralization Paper (Tsitsiklis/Xu) a little more.
>> 
>> To me the key thing is this:
>> 
>> Our hypothetical "blind idiot God" must be as minimal as possible.
> 
> I'm with you.
> 
> We've been toying with such an idea for a while too.
> 
> But looking into this "little centralization paper" I'm left puzzled 
> what *function* the centralized thing should provide?

That's what I'm scratching my head about too. Their work is so theoretical
it simply doesn't specify *what* it should do, just that it should be 
there and its presence has an effect on the dynamics of the network.

I'm toying around with some ideas, but it's still cooking.

> My over-all impression so far is, that the paper mostly concerns 
> efficiency and load balancing.  I'm not yet convinced that these are the 
> most important points.  IMHO reliability and simplicity are much more 
> important (as you mentioned in your blog post too).  I view efficiency 
> more like an economic term applicable to central service providers 
> operating services like FB.

Efficiency is really important if we want to push intelligence to the 
edges, which is what "decentralization" is at least partly about. Mobile 
makes efficiency *really* important. Anything that requires that a mobile 
device constantly sling packets is simply off the table, since it would 
kill battery life and eat up cellular data quotas. That basically 
eliminates every mesh protocol I know about, every DHT, etc. from 
consideration for mobile.

>> In Turing-completeness there are shockingly minimal systems that are 
universal computers: 
https://en.wikipedia.org/wiki/One_instruction_set_computer
> 
> I'm afraid there needs to be some compromise.  That's too simple to be 
> usable.  How about allowing some kind of hashbang syntax in the script 
> to pull the language of users choice to execute the update?

I agree... I just furnished it as an example to show that the complexity 
*floor* for systems like this can be pretty low. Usually the practical 
design is less minimal than what theory allows.

Re: [redecentralize] Thoughts on decentralization: "I want to believe."

From:
Jörg F. Wittenberger
Date:
2014-08-05 @ 10:31
Am 05.08.2014 01:04, schrieb Adam Ierymenko:
>> >My over-all impression so far is, that the paper mostly concerns
>> >efficiency and load balancing.  I'm not yet convinced that these are the
>> >most important points.  IMHO reliability and simplicity are much more
>> >important (as you mentioned in your blog post too).  I view efficiency
>> >more like an economic term applicable to central service providers
>> >operating services like FB.
> Efficiency is really important if we want to push intelligence to the 
edges, which is what "decentralization" is at least partly about. Mobile 
makes efficiency *really* important. Anything that requires that a mobile 
device constantly sling packets is simply off the table, since it would 
kill battery life and eat up cellular data quotas. That basically 
eliminates every mesh protocol I know about, every DHT, etc. from 
consideration for mobile.

I did not want to say that efficiency is not important at all.

But I don't really see the value in an application that is not 
reliable.  What's the value of a virtual asset stored on a mobile device 
when the device is lost?  Manual backup is no solution.  As long as data 
does not outlive the gadgets, there is little value left.