A reader points me to something that Gray Mirror wrote last year.
Let’s call a protocol transparent if anyone can send or read a message in the protocol. In a transparent protocol, the whole public has both the technical information and the legal right to encode or decode messages, at every layer of the protocol stack. The opposite is opaque.
His proposal is to require that protocols be transparent. Anyone should be able to write an application that uses the protocol. This would change Facebook from a walled garden to an open database.
Here is an example, using Fantasy Intellectual Teams. Suppose that I maintain the definitive database for keeping score, and a schematic of the file format looks like this:
var teams = [
  {
    teamname: "Clan Graham", owner: "Geoff", players:
    [
      {name: "Joe Rogan", Bets: 0, Memes: 1, Steelmans: 0},
      {name: "Matt Ridley", Bets: 0, Memes: 2, Steelmans: 0}
    ]
  },
  {
    teamname: "Tim the Enchanter", owner: "Jon T", players:
    [
      {name: "Tyler Cowen", Bets: 1, Memes: 3, Steelmans: 2}
    ]
  }
]
In the walled-garden model, only I know the file format, and thus only I can write reports based on the data. In a transparent-protocol model, pretty much anyone who has ever written code could write reports based on the data. Gray Mirror would force me to use the transparent-protocol model. As an aside, if a team owner wants to keep his name secret, this is something that could be accommodated in the transparent-protocol model. Data security and protocol transparency are different features, and they are not incompatible.
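As a concrete illustration, here is the kind of report any outsider could write once the file format above is public. This is a minimal sketch: the scoring (a simple sum of the three categories) and the function name are mine, not part of the actual Fantasy Intellectual Teams rules.

```javascript
// The open data format from the example above.
const teams = [
  { teamname: "Clan Graham", owner: "Geoff", players: [
      { name: "Joe Rogan", Bets: 0, Memes: 1, Steelmans: 0 },
      { name: "Matt Ridley", Bets: 0, Memes: 2, Steelmans: 0 },
  ]},
  { teamname: "Tim the Enchanter", owner: "Jon T", players: [
      { name: "Tyler Cowen", Bets: 1, Memes: 3, Steelmans: 2 },
  ]},
];

// A third-party report: total each team's points and rank them.
// (Summing the three categories equally is an invented scoring rule.)
function leaderboard(teams) {
  return teams
    .map(t => ({
      teamname: t.teamname,
      points: t.players.reduce((s, p) => s + p.Bets + p.Memes + p.Steelmans, 0),
    }))
    .sort((a, b) => b.points - a.points);
}

console.log(leaderboard(teams));
// Tim the Enchanter: 6 points; Clan Graham: 3 points
```

The point is that nothing here depends on any private knowledge: once the schema is published, this report is a few lines of code for anyone.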
In the walled-garden model, since I control the reporting, I can sell advertising to be placed on the reports. In the transparent-protocol model, I would have to sell subscriptions to the database. The transparent-protocol model still allows me to have a monopoly, but it strictly limits the uses that I can make of that monopoly.
Would you go to the trouble to create the data protocol for Facebook and, most importantly, undertake the effort to induce people to enter data into your database, if you knew that sooner or later you would be forced to make the protocol transparent? If the answer is “yes,” then Gray Mirror’s suggestion might be a good one.
Perhaps the answer is ‘no’ and the suggestion is still a good one, if the effort to create Facebook in its current form was better spent elsewhere anyway.
https://developers.facebook.com/docs/apis-and-sdks/
The “transparent to middleware” idea has been around a long time. The first time I remember reading about it was back in the mid-to-late ’90s days of IRC and ICQ, then AOL-IM, and the whole Cambrian explosion of instant messenger protocols and interfaces. ICQ used OSCAR, Microsoft used MSNP, Yahoo had its own protocol, and then MySpace came up with something totally proprietary.
Almost immediately – keep in mind this was about a quarter-century ago – people were getting annoyed about lock-in, network effects, centralization, having to maintain multiple accounts and keep multiple applications open to chat with different people who mostly preferred to operate on just one chat service, and the inability to bring people using different services all together into one group chat – which was easy in IRC times.
And so, also almost immediately, you saw a bunch of attempts to solve this problem and develop cross-platform interfaces. IIRC one attempt was called “OneTalk”, but poking around, it looks like a lot of different companies have tried to use that name since, so it is probably lost in the sands of the internet’s wild-west era. These efforts ran into prohibitive amounts of legal trouble almost from day one; AOL was particularly protective, with a reputation for being litigious and vicious. Seeing this, starting to grasp the nature of the issue, and in line with the general vibe at the time, people started doing the “information wants to be free” thing and calling for protocol transparency. Well, it didn’t work out.
This has all been revived in the past few years for obvious reasons. Fukuyama also put out the Report of the Working Group on Platform Scale about five months ago, and the consistently great Stewart Baker has a good interview with him about it on the Cyberlaw podcast (recommended!).
“The internet is for …” actually, it turns out, to be for the extremely lucrative, incessant, and penetrating surveillance operations that would make Stalin blush and surprise even Orwell, and which feed into the advertising, the shopping, and everything else.
No one is going to give up maximum control without total war and an existential struggle for their very lives, and the war chests of the players make Croesus look like a destitute bum.
Curtis argues we can’t trust the law to help with enforcing common-carrier requirements at the individual level, because the people running the system can’t be relied upon to adjudicate those questions in the intended manner, so it is better to just ask them to enforce it at the protocol level. I admit he’s got a point. But there is also the question of how much of a fight these very interested parties would put up.
Since there is a lot of money at stake, and the need to preserve option value – not to mention non-negligible control over the future, and I am not exaggerating at all – they are going to fight tooth and nail to defend those garden walls, like the Night’s Watch in Game of Thrones.
On the other hand, they may only put up a pro-forma fight to keep up appearances on the matter of being dumb-pipe common carriers, since they can still monitor and control everything, without intermediaries, and keep their spy apps on every smartphone. They will have a socially acceptable excuse and alibi: “Sorry, I’d love to, but there’s nothing I can do, my hands are tied!”
While they can’t admit it, they may even secretly *want* to get out of the business of moderation and censorship. It’s expensive, requires a small army of (troublesome) employees and the equivalent in ‘AI’, and it causes them to get caught in the middle of a lot of political tension and churn they might prefer to avoid. Not to mention having to constantly bribe hundreds of politicians and, it is alleged, a sizable portion of the mainstream conservative commentariat.
Furthermore, not having your hands tied means that you are constantly going to be lobbied, pressured, and subtly threatened by the usual suspects to use those hands in the way they want. Tied hands mean that kind of pressuring is like talking to the wall, which takes all the oxygen out of the room and dramatically lowers the incentive for engaging in precisely the kind of bad behavior we all want to see much less of.
It seems like IPFS could enable this; the problem, as usual, is getting a critical mass to use it.
There was a social media experiment called app.net that tried something like this. It gave users storage space that would host their content (posts, pictures, audio, etc.) and an open API that allowed developers to make whatever kind of app they wanted. Developers were creative and made apps that interacted with the data in interesting ways. In addition, app.net was a paid service, so it was not beholden to ad impressions and all the ways those steer a service.
It was formed after the general angst caused by Twitter clamping down on its API. App.net thought that by being open infrastructure it could allow customers to interact with their data however they saw fit. Because the developers made such different apps, users could have completely different experiences. Most just did microblogging, but there were also full-length blogs, private messaging apps, podcasting, image sharing, online annotating; the list goes on. In other words, the API allowed it to be a replacement for Facebook, Instagram, Signal, Pastebin, Anchor, etc.
Back then the emphasis was on users having their choice of how to use their data. They could migrate from app to app seamlessly because the backend was the same. It was the best experience I have ever had with a service. Alas, not enough people were willing to pay the $36 a year to use it, and it closed up after 5 years. Dalton Caldwell (now at Y Combinator) and Bryan Berg were the driving forces behind it. When they closed down, they released a lot of the API under an open-source license. Two small services, mainly built around microblogging, were built by and for the app.net diehards: 10Centuries and Pnut are small but thriving communities.
Anyway, app.net solved the customer experience problem with social media but I’m not so sure how it would have handled the current social media climate. It had the problem of being the central holder of info and so would have been the final arbiter for what was allowed on the service.
A social media challenge that may be so hard it is probably unsolvable as a technological matter, even in principle, is how to deal with the “paper trail” problem. Even some theoretically possible proposals that are “almost sci-fi” at this point – like beaming certain hard-to-intercept exotic quantum states directly into neurons, the ultimate version of the “cone of silence” gag from Get Smart – are nowhere near ‘fully developed’.
To state the problem: you want to communicate something now with verification, assurance, etc., but then you want to preserve the option to be able to make it disappear from digital records forever, so that it is only a matter of disputable human memory and testimony, if that should become necessary in the future.
Obviously there are some attempts at ‘ephemeral’ records, and apps that try to prevent screen-capturing or other video / audio recording (e.g., Clubhouse). But such prevention is usually only feasible on smartphones, because those are more controlled devices; the schemes tend to get cracked pretty quickly anyway (again, see Clubhouse); and of course people can always use other recording devices, and then use ‘AI’ to identify voices or faces and make transcripts with attribution, and so forth.
People try to delete their twitter records or blogs sometimes, but if it’s been archived, it can come back to haunt you anyway. Indeed, there are companies and even law firms that specialize in the “secretly scrape and store” game, because when you need to vet or dig up dirt on someone, that kind of stuff can prove pretty valuable.
This is a real problem if you can’t predict the kind of thing that seems innocuous today but is deemed to be cancellation-worthy tomorrow. Or if you think you’re having a private conversation under implicit honor codes of something akin to the Chatham House rules, and then somebody with a grudge wants to throw you under the bus later on at the moment calculated to do the most damage.
Like I said, not only are there no feasible technical solutions to the problem, but even the attempts at mere mitigation, short of ‘solved’, are not very good.
Unless one embraces Brin’s ideas about a transparent society, we’ve inadvertently painted ourselves into a corner for which our instincts and culture are not well adapted. We don’t have what it takes to coordinate on trustworthy commitments to never capture or reveal these records.
My hobby-horse argument – you knew this was coming – is that this seems like a “technical failure” akin to “market failure” and thus justifies some kind of sovereign intervention to protect people from reasonable fears of unjust treatment.
Could talk in person.
I think you’re right, though. Moderns find it impossible to use such difficult ancient techniques. It requires spending time and effort to meet somewhere, and who has friends that are worth such extravagant expenditure?
Well yes, real is the alternative to virtual, but finding a way to do it virtually is the problem. Because there is no solution to the problem, face-to-face conversation is what people do in practice when they believe the personal costs of disclosure of the contents of confidential conversations are extremely high, as they are perceived to be at the top tiers of most major organisations.
I believe this is a generally unappreciated reason for urban centralization which puts a hard limit on what can be done remotely via telework. Those who can telework the most are like a bureaucratic middle class, while front-line, customer-facing workers must show up in person, and top level executives must be able to have credibly confidential conversations with each other.
This is the telework U-curve where I work, for sure.
It’s interesting how nearly all criticism of Facebook and Twitter, in both MSM and alt-media, is limited to their practices of moderating and banning users and individual message content, and never mentions the ways in which both companies market and use our private information – even though the latter is much more dangerous if it gets into the hands of “cancel culture” people who want to render their opponents unemployed and homeless just for disagreeing with them.
Now that the very institutions that protect you from physical attack in your home and your neighborhood are being attacked politically, the need for individuals to keep secrets is becoming much more important. Social or software practices which break that security will have to be discarded as untenable, or they will make serfs of their users.
Because of the technical difficulties in understanding the alternatives with different pros and cons, most folk will accept both the current status quo and any significant change that gets thru the political process.
Would you go to the trouble to create the data protocol for Facebook and, most importantly, undertake the effort to induce people to enter data into your database, if you knew that sooner or later you would be forced to make the protocol transparent?
If I knew my company would be valued over $1 billion in market capitalization before being made to make the protocol transparent, I would.
Which is my direct, tho less likely, proposal: companies gaining advertising revenue thru digital means using the internet, once they reach $1 billion in market capitalization for 12 continuous months, must begin making their data protocols transparent. They have 12 months to complete the transparency promulgation, with significant fines for non-compliance. If their market cap drops by more than 90%, so that their cap falls below $100 million, they can cease making their protocols public.
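The trigger rule above can be sketched precisely. The thresholds ($1 billion sustained for 12 months to start the obligation, below $100 million to end it) are from the proposal; the function and variable names are hypothetical:

```javascript
// Sketch of the proposed transparency trigger. Thresholds come from the
// proposal itself; the rest is illustrative.
const TRANSPARENCY_CAP = 1_000_000_000; // $1B, sustained for 12 months
const RELEASE_CAP = 100_000_000;        // falling below $100M ends the duty

// monthlyCaps: the last 12+ monthly market-cap readings, most recent last.
// alreadyObligated: whether the company is currently under the rule.
function mustBeTransparent(monthlyCaps, alreadyObligated) {
  if (alreadyObligated) {
    // The obligation ends only if the cap has fallen below $100M.
    return monthlyCaps[monthlyCaps.length - 1] >= RELEASE_CAP;
  }
  // The obligation begins after 12 continuous months above $1B.
  return monthlyCaps.length >= 12 &&
         monthlyCaps.slice(-12).every(c => c >= TRANSPARENCY_CAP);
}
```

Note the asymmetry the proposal builds in: it takes a year above $1B to enter the regime, but a single reading below $100M to exit, which keeps companies from flickering in and out near the upper threshold.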
This or any direct, single law seems very unlikely, less than 10% chance in the next 4 years.
Much more possible, and thus interesting to talk about IRL (in real life), would be the creation of a Digital Utilities Commission. Then this DUC would make such a rule, or some other rule, to reduce the Facebook problems with data.
And in 2, 5, or 10 years (or months?) it would be at least somewhat “captured” by Big Tech. Still, getting the DUC up and running allows some action to be taken soon, and focuses the politics on what policies the DUC should be oriented at.
Including what level of transparency for what companies and for which customers.