What would a web protocol look like if it wasn’t influenced by SGML?
I think about how tech on Earth would have developed without the military-industrial complex, and whether a document-sharing protocol would look different. Fascinating, ne?
So wait, is this for non-HTTP transmission protocols for documents? Or non-SGML-descendant markup languages? Or both? Because both of these things are somewhat intoxicating on their own in a way.
I keep thinking IPFS inherits so much junk, at least on the web side (I don’t know all the details). And folks always talk about how difficult HTTPS is: centralized CAs, and embedded devices.
Also, what lessons did we learn from UUCP? NNTP?
This came to me when I realized that nearly every resource I come upon for webcraft is geared to treating the “web” platform as a commercial platform.
Still digesting, but it seems like we need an alternative to the browser. If we are loading protocols arbitrarily (and I want a world where that is true), then we might rethink our human-knowledge-focused document-sharing protocol without having to fit it into today’s web.
Because who among us really loves today’s web? What could we love?
So I’m going to paint, in very large brush strokes, a kind of daydream of what I’m envisioning here. Which also radically rethinks DNS at the same time. (Because why not.)
Every client/server both serves as a host for documents it is publishing and as a kind of cache for documents it has received from other peers within some time frame we will call recent. Some elements of the cache may be permanently retained because the user in charge of that host has bookmarked them.
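To make that concrete, here is a minimal sketch of such a host-side cache in Python. The class name, the seconds-based recency window, and the "pinned" set are all my invention, not anything specified above:

```python
import time

class DocumentCache:
    """Host-side cache: recent documents expire after a time window,
    while bookmarked ("pinned") documents are kept permanently."""

    def __init__(self, recent_seconds=86400):
        self.recent_seconds = recent_seconds
        self.entries = {}    # doc hash -> (document bytes, time received)
        self.pinned = set()  # doc hashes the user has bookmarked

    def store(self, doc_hash, document):
        self.entries[doc_hash] = (document, time.time())

    def bookmark(self, doc_hash):
        self.pinned.add(doc_hash)

    def evict_stale(self):
        """Drop anything older than the recency window, unless pinned."""
        now = time.time()
        for h in list(self.entries):
            _, received = self.entries[h]
            if h not in self.pinned and now - received > self.recent_seconds:
                del self.entries[h]

    def get(self, doc_hash):
        entry = self.entries.get(doc_hash)
        return entry[0] if entry else None
```

The interesting policy question this hides is what "recent" means; a wall-clock window is just the simplest stand-in.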
There is no filesystem hierarchy per se, simply metadata. Each document has a computable hash, and possibly a cryptographic signature where identifying an author may be important. Fully encrypted documents meant for a specific recipient would also be possible.
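The metadata-plus-computable-hash part might look something like this sketch (the record layout and field names are illustrative, not a proposed format; signatures and encryption are left out):

```python
import hashlib
import json

def make_document(body: bytes, metadata: dict) -> dict:
    """Wrap a document body in metadata plus a computable hash."""
    record = {
        "metadata": metadata,
        "body_sha256": hashlib.sha256(body).hexdigest(),
    }
    # The record itself is also content-addressed: hash a canonical
    # JSON serialization so any peer can recompute and verify it.
    canonical = json.dumps(record, sort_keys=True).encode()
    record["record_sha256"] = hashlib.sha256(canonical).hexdigest()
    return record
```

The canonical-serialization step matters: two peers must derive the same hash from the same document, so the serialization has to be deterministic.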
Every client/server is in a P2P network, where, via some agreed-upon query structure, it can identify matching documents. Should a specific document be needed, the network is queried for its unique hash. Documents are queried and retrieved onion-routing style from any peer that has a cached copy.
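The nice property of retrieval-by-hash is that the fetcher can verify whatever any peer hands back. A sketch, where `peers` is any iterable of objects with a `get(doc_hash)` method standing in for the real onion-routed transport (all names here are mine):

```python
import hashlib

def fetch_by_hash(doc_hash, peers):
    """Ask each peer for a document by hash; return the first copy
    that actually hashes to what we asked for. A lying or corrupted
    peer is simply skipped."""
    for peer in peers:
        candidate = peer.get(doc_hash)
        if candidate is not None and hashlib.sha256(candidate).hexdigest() == doc_hash:
            return candidate
    return None  # no peer had a valid cached copy
```

Because the hash is the identity, it genuinely doesn't matter which peer answered.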
If we admit that we all have tricorders in our pockets, and are allowed to daydream that they are ethical, then it makes sense that we could simply share QR codes of document hashes and drop web addresses entirely. It doesn’t matter where we retrieve the document from, as long as its hash proves it is the correct document.
Documents will need some sort of metadata or markup able to express relationships between documents. In such a model, all the forms of social interaction that happen via webpages today would happen via lots of small peer-to-peer published micro-documents with relationships to one another. Browsers could visualize these relationships as a forum, a live chat, or a social media stream as needed, based on metadata.
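As a sketch of the "browser visualizes relationships" idea: given micro-documents carrying a hypothetical `reply_to` relationship in their metadata (the field name and record shape are my invention), a client could render a forum-style thread purely from that metadata:

```python
def threadify(docs):
    """Render micro-documents as an indented forum-style thread,
    using a 'reply_to' relationship in their metadata."""
    children = {}
    roots = []
    for d in docs:
        parent = d["metadata"].get("reply_to")
        if parent is None:
            roots.append(d)
        else:
            children.setdefault(parent, []).append(d)

    def render(doc, depth=0):
        lines = ["  " * depth + doc["body"]]
        for child in children.get(doc["id"], []):
            lines.extend(render(child, depth + 1))
        return lines

    out = []
    for root in roots:
        out.extend(render(root))
    return out
```

The same documents with the same relationships could just as well be rendered flat as a chat log; the view is the browser's choice, not the publisher's.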
Everything is published, nothing is submitted. Identities are authenticated but disposable. E-commerce could be facilitated by publishing a ledger of desired goods encrypted to the public key of a seller. E-commerce agents would have to run regular queries on who is requesting what, but each could only decrypt the requests addressed to it, along with the requester’s identity.
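The addressing half of that idea can be sketched without committing to a cipher: each published request is tagged with a fingerprint of the intended seller's public key, and a seller's "regular query" is just filtering the public stream by its own fingerprint. In this sketch the ciphertext is a placeholder; a real system would actually encrypt the ledger to the seller's key with a vetted scheme:

```python
import hashlib

def publish_request(ledger_bytes, seller_pubkey):
    """Publish a request addressed to one seller. Only the key
    fingerprint is public; the ledger would be encrypted to
    seller_pubkey (encryption itself is out of scope here)."""
    return {
        "for_key": hashlib.sha256(seller_pubkey).hexdigest(),
        "ciphertext": ledger_bytes,  # placeholder for real ciphertext
    }

def requests_for_me(published, my_pubkey):
    """A seller's regular query: pick out requests addressed to it."""
    me = hashlib.sha256(my_pubkey).hexdigest()
    return [p for p in published if p["for_key"] == me]
```

One wrinkle this exposes: the fingerprint itself leaks *which seller* is being contacted, so a more private variant would need to hide even that.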
Cryptographic identities could be distributed web-of-trust style among trusted friends and family via Shamir’s secret sharing scheme or something similar. That way, should you lose your primary internet device, or have your digital identity compromised and need a revocation key you yourself cannot get, your friends and family could vouch for your identity to recover the lost key or revocation certificate.
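For concreteness, here is a toy version of the Shamir split/recover step: split a secret among n friends so that any k of them together can reconstruct it, while fewer than k learn nothing. The prime, parameters, and code are mine and purely illustrative; a real identity-recovery system would use a vetted library and authenticated shares:

```python
import random

# Toy Shamir secret sharing over a prime field.
PRIME = 2**127 - 1  # a Mersenne prime, large enough for a short secret

def split(secret, n_shares, threshold):
    """Split `secret` into n shares; any `threshold` of them recover it.
    Shares are points (x, f(x)) on a random polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret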