Restarting the whole computer stack from scratch is not possible in the foreseeable future. The thing is, even if things are broken at the processor level (Heartbleed, et al.), not all processors are broken (RISC-V?).
I cannot imagine things changing overnight, even if **some** people seize the means of production. Like you wrote, the whole (software) system is built around domination. Seizing production facilities will, in my opinion, change the people in power, not the system. In particular, it will not change people's minds Earth-wide.
FOSS was a step in a good direction, because the system is the way it is. FOSS has put another oligarchy in power and created new monsters, but those kinds of monsters existed before FOSS. The good thing about FOSS is that it exhibits, once more, that **together we are stronger**.
One can swap a processor with another, given enough work.
One needs to give some existence to an alternative system, and that should be done at *every* level of the stack.
The lowest and middle layers must be fully re-engineered, let's say up to the kernel level, and eventually we can easily adapt existing OSes on top of this.
You know why I think like this. Hope you have not changed your mind.
This is why I have asked you to participate in the definition of crypto-anarchism & demilitarized technologies. It's a way to adjust goals and clearly remember them, allowing several strategies for reaching them to naturally emerge. Theruran, we've both done a lot of good work together in our informal chats on several matters.
We should deepen all this.
yes, we need a decentralized economic system to support the decentralized information system development and stewardship. to the extent possible, these should be designed together.
I sent you a DM earlier about definitions. Lemme know and we can iterate on some things.
I can testify that we did nice work, a conceptual and theoretical one, leading to what we can call crypto-anarchist situationism, which differentiates itself from classical crypto-anarchism by the emphasis put both on the architectures of all known technological layers and on time.
I introduce to you @mouloud, who is more aware of the philosophical implications, and sometimes the technical details, of the actual (sort of) system we have been working on.
I forward a very **serious** question from him, since the convo had not federated to his instance yet:
What is the point of money or crypto-money?
My question is indeed what is the point of money, whether it is crypto or not. I know a little about Bitcoin (proof-of-work: evil for the climate so far) and Ethereum (algorithmic contracts, a good idea as far as I understand, but still PoW).
The idea of a single source of truth is neat, and would be useful to avoid lies or fakes in a distributed system.
Money used to be a means of exchange for something of value to the rich, that is, gold. That by itself shows how dubious money is, because gold is almost useless in practice. Nowadays, money has mostly virtual value, because people trust the system, and the people in power somewhat trust each other and agree through the market on exchange rates.
Anyway, take for instance the "carbon budget" of countries: it can be exchanged for money. And that "carbon budget" can be used to produce new products.
With the money, a low-carbon-footprint country can bargain to buy some products.
During this exchange, the low-carbon-footprint country might have lost value because of the conditions of the exchange and the supposed added value of the products.
It is far-fetched, but to me there should be no money, hence probably no crypto-money.
A single source of truth is helpful, but I am not convinced it is necessary, and it is certainly not necessary in a fully cooperative system with no evil.
Thanks for including me in the convo.
They have locked down our ability to change most of the architectures because they know that cyber-powers and cyber-rights models are exclusively the consequence of those architectures, and they want to impose their own models on us by forcing us to play with their architectures, protecting their models and, therefore, their political and failing economic system: in essence, capitalism. Doing so, they are preventing a crypto-anarcho-communist revolution.
Here it is.
But in order to implement this at world scale, we obviously need an alternative cyberspace architecture that offers the equivalent of blockchain functionalities in its core, as a service, scalable to billions of transactions per second.
And this clearly cannot be achieved with the current cyberspace architecture design and paradigm, nor with the current digital system architectural paradigm.
In such a paradigm, every citizen would have a kind of multi-wallet attached to them, besides standard wallets and bank accounts, to count the credit left to them on those hundreds of "criteria", and they would not have the possibility to "recharge" a specific line with money. In order to buy a good or service, a citizen would be obliged to have credit left in all fields, besides having the money to buy the good or service. This incentive would force citizens to @zig @theruran @emsenn
By the way, homomorphic cryptography would be very useful in helping create a cyberspace architecture that can easily handle, for each citizen, the multi-wallet holding those hundreds of "credit lines".
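A toy sketch of that multi-wallet logic (criterion names and numbers below are my own placeholders, not from the thread): a purchase must pass both the money check and every per-criterion credit line, and the credit lines cannot be refilled with money.

```python
from dataclasses import dataclass, field

@dataclass
class MultiWallet:
    """A citizen's wallet: ordinary money plus per-criterion credit lines.

    The criteria here ("carbon", "water") are hypothetical examples;
    the idea above imagines hundreds of such lines."""
    money: float
    credits: dict = field(default_factory=dict)  # criterion -> remaining credit

    def can_buy(self, price: float, footprint: dict) -> bool:
        # A purchase requires both money AND remaining credit on every
        # criterion the good consumes; lines cannot be topped up with money.
        if price > self.money:
            return False
        return all(self.credits.get(c, 0.0) >= used for c, used in footprint.items())

    def buy(self, price: float, footprint: dict) -> bool:
        if not self.can_buy(price, footprint):
            return False
        self.money -= price
        for c, used in footprint.items():
            self.credits[c] -= used
        return True

wallet = MultiWallet(money=100.0, credits={"carbon": 5.0, "water": 2.0})
assert wallet.buy(30.0, {"carbon": 4.0})      # enough money and carbon credit
assert not wallet.buy(30.0, {"carbon": 4.0})  # carbon line exhausted; money alone won't do
```

Homomorphic encryption would then let the network verify such checks without the balances being readable in the clear, which the plain dict above obviously does not attempt.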
It's typically a functionality that would need to be provided at the cyberspace architecture level in order to be scalable. And this is not possible with the current cyberspace architecture paradigm.
Cybernetics of trust cannot be achieved with current cyberspace.
@mouloud The kind of logic I presented you here is anarcho-communist compatible. It would lead to a moneyless and classless society, without fascism, just with incentive logic, but it can work only if it is incorruptible.
This is because such functionalities, scalable and real-time, can only be achieved with the revolutionary alternative cyberspace architectures, enabling such cybernetics of trust, that we advocate, as crypto-anarchist situationists, to change of
What is hard, and Theruran knows it, is how to ensure those fundamental blocks cannot be "hacked", and how to guarantee they will really work as expected with no treachery possible. This is indeed what we are working on. Globally, this is called the cybernetics of trust, but it is also fully demilitarized as it is not hackable; there are no backdoors possible of any kind.
Yes, and about code - that is the kind of knee-jerk reaction that people have nowadays, and it prevents everyone else from understanding what they are doing. Documentation of every kind is key, unlike the prevailing software engineering practices that lack a rigorous conceptual design development phase. Visual documentation is of course also important, and to maximize its utility it must also be an executable architecture model.
Well - if we can theorize another way of achieving an equivalent security model and utility to Bitcoin without the energy consumption, that would be incredible. As far as I know, there is no alternative yet conceived, and the energy consumption keeps the system honest. And unfortunately, no one I have met in the fediverse thus far is qualified to theorize such an alternative. There are real engineering constraints and trade-offs that are glossed over in these kinds of discussions, and I doubt that billions of transactions per second is achievable due to the laws of physics. It is my expectation that such a decentralized and trustworthy cybersystem will be slower in many ways but nevertheless fast enough for us to get real work done and not just mindlessly consume Big Media.
P.S. come to hackers.town - we got 10,000-character toots!
@theruran can be achieved when integrating this natively into the cyberspace architecture itself. Time will tell. I'm still thinking about this and working on it, exploring possible native implementations a lot. I tend to mix the DHT concept with PoW in a native mesh cyberspace architecture to do it for the moment, but I am exploring other possibilities too. Will tell if I find something promising. @mouloud @zig
@theruran things become fun.... And new possibilities for simpler algorithms, with less energy consumption than classical PoW, become possible, enabling many of the current blockchain functionalities at rates reaching billions of operations per second. But the technological digital paradigm is radically different; we're talking about fully synchronous, time-sensitive digital systems and cyberspace architectures. So you see, I'm confident. @mouloud @zig
@theruran In such a paradigm, simpler protocols, with fully distributed small mining capabilities in each node of a truly mesh cyberspace architecture, and within each microprocessor, can provide the time-sensitive trust chain needed to replace the current blockchain implementation, with its energy-hungry mining. That's what I currently think.
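To make the "small mining in each node" idea concrete, here is a minimal sketch (my own illustration, not stman's actual protocol): each node stamps its own append-only chain of records with a deliberately low-difficulty proof-of-work, so the per-record energy cost stays tiny.

```python
import hashlib

def small_pow(data: bytes, difficulty_bits: int = 12) -> int:
    """Tiny proof-of-work: find a nonce so sha256(data + nonce) has
    `difficulty_bits` leading zero bits. Low difficulty keeps the
    per-node energy cost small (a few thousand hashes on average)."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while int.from_bytes(hashlib.sha256(data + nonce.to_bytes(8, "big")).digest(), "big") >= target:
        nonce += 1
    return nonce

class NodeChain:
    """Each mesh node keeps its own lightweight stamped chain of records."""
    def __init__(self):
        self.chain = [b"genesis"]

    def append(self, record: bytes):
        # Link the new record to the previous entry's hash, then stamp it.
        prev = hashlib.sha256(self.chain[-1]).digest()
        payload = prev + record
        nonce = small_pow(payload)
        self.chain.append(payload + nonce.to_bytes(8, "big"))
```

How per-node stamps would combine into a network-wide, time-synchronous trust chain is exactly the open design question the thread is circling; this sketch covers only the cheap local mining step.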
@theruran How shall I understand time-server here?
We are in a truly serverless paradigm. I would prefer you to talk about a fully distributed way to create a global synchronous clock. I know it is hard, so thoroughly have we been drugged and brainwashed with the client-server paradigm elevated to the rank of religion by cyber-creationists, but wipe it from your head, at least when reasoning within those alternative cyberspace architecture paradigms.
It is called TrueTime
@zig @mouloud @theruran Sincerely, very sincerely, the kernel concept as we know it, fully dependent on the microprocessor's programming model, is to me fully outdated. The OS concept is outdated too, and must be re-engineered. Both concepts, kernel and OS, suffer from a centralized aspect that nobody ever managed to get rid of, because nobody ever asked himself whether these centralized conceptions were necessary, and why we might prefer decentralized approaches. To me, kernels will disappear, and OS
Of course, the cyber-powers that be and rule, crypto and cyber creationists (it's another way to politely say cyber-imperialist hegemonists), will disagree, but I don't give a fuck.
May I ask all of you a question?
What is the most important constraint to ensure an alternative cyberspace architecture belongs to, and serves, equally, all the people on earth? I will answer last.
What is the most important constraint to ensure an alternative cyberspace architecture belongs to, and serves, equally, all the people on earth?
Thinking about this, at first it seemed that decentralized governance of a copyleft architecture specification and implementation would be the deciding factor in its sustainability. I don't think this is what you're asking, though.
Even with a successful global multistakeholder cooperative that provides architecture governance, the architecture itself could still be shit. We see this today already and we should be aware that people like to pile on their CVs because they're careerists and don't care about doing great work. A co-op would look nothing like the Linux Foundation of today though, but that's a topic for another time.
The deciding factor in ensuring the alternative cyberspace architecture remains free to everyone forever -- I am convinced -- is that the architecture model is fully self-documenting. We don't see this anywhere except in aerospace / military industrial applications which are obviously classified and proprietary. Even there it is extremely expensive and not widely practiced. Fully model-driven engineering is extremely powerful but again, every model element needs to be richly documented so full documents and books can be generated from the model.
Part of the problem is the garbage tooling available. The rest is the capitalist culture of withholding information for profiteering. When only an elite cadre of engineers understand the system, everyone suffers. We see this today with our computing systems both proprietary and FOSS. Public domain or strong copyleft are of course important but useless without the effort spent documenting the architecture.
We may understand constraints differently, so I still don't know if I am answering your question as you intended. Constraints have a specific engineering definition and are dealt with differently by systems engineers - I am just learning they may be recast as optimization goals.
What is the most dangerous centralized aspect in software kernels like Linux, in terms of the cyber-security model?
The MMU is provided by the microprocessor, so do you mean the virtual memory / page tables managed by the kernel? Linux's fault is that it's a monolithic kernel of C code. But even advanced separation kernels are not saved from the vulnerabilities of the microprocessor microarchitecture. Then, the idea of an Abstract C Machine is all about sequential processing, so that's why things like the MMU exist, and there's your architectural bottleneck - Achilles' heel right there! So yeah - there is software maintaining security barriers around memory and that software sucks; and the microcode and hardware architecture of the microprocessor also try to place security barriers but, due to their unmanaged complexity, create more side channels instead.
I am impressed by your answers, not only because they are to my mind fully true, but because I was desperate to find some people able to write such things.
As promised, I am going to give mine, very close to yours, as you will see, and another one I got from a sociologist who is also a computer hacker & FOSS + open hardware enthusiast. His answer was mainly driven by his sociological culture, and he threw a very interesting idea into the mix for how to answer the question.
You and I are mostly influenced by our technical skills, therefore taking care of the architectural and implementation aspects of the question, but a sociologist sees such a question differently, and this is why such questions should be asked of other motivated people who have different skillsets than ours.
Going to generate a pic, as writing them out doesn't fit in a post. Plus, I'm definitely going to move to your instance because I am fed up with this.
Now we can start prioritizing more things, and open some collective pads to do it, but at least between the two of us, the consensus is huge, indeed total. We both stimulate each other's creativity intellectually and very efficiently, and I am personally very satisfied by the conceptual work, innovation, and exploration we are doing. We are more than complementary.
Kind regards to you,
@theruran A few public words, besides the pad, about the meta-cyberspace definition and concept: you could have the impression, from the first draft of the few definitions I started developing, that my approach is very nation-centered, but ultimately it is not. I will detail all the meta-cyberspace's multiple goals, one of them being favoring and stimulating global consensus-finding by democratic nations on worldwide common cyber-powers and cyber-rights models.
@stman this is great!! and we are in agreement on all these points.
the universal translator is a great idea and will definitely be incorporated. it is related to the issue of the cyber-imperialist Unicode standard which cannot be used.
and it reminds me of another important feature that must be discussed. in order for the fully-mesh complex-adaptive cybersystem to maintain its resiliency and robustness against a variety of environmental conditions, it must somehow measure its own performance and adapt accordingly. there are many possibilities here for adaptive optimization to environmental constraints and internal emergent behaviors. I believe this can be done without compromising user security and privacy, which means the system metrics must also be cryptographically secured or otherwise blinded.
would like your feedback on the above because it may also be one of the most important constraints to ensure the viability of the system.
@theruran I fully agree. I share the same goal. It is needed, and indeed, as we have discussed in private about "clock distribution issues" and objectively criticized the current standardized Time-Sensitive Networking (TSN) paradigm, which is indeed a multi-stratum hierarchical client-server approach needing a master clock oracle, I wanted to take advantage of your remarks on performance metering, dynamic adaptability, and resiliency to introduce the bases of my ideas regarding a way
@theruran to have a distributed, accurate, adaptive clock-synchronization mechanism in the paradigm of such a mesh network: it is based on node performance measurements, with a simple averaging mechanism that would lead all nodes to synchronize to a clock frequency resulting from each node's self-measurements. I won't text all the details of the ideas I am currently digging into and deepening, but I can already claim that I consider this decentralized clock distribution and synchronization
@theruran mechanism solved, with a simple algorithm based on each node's performance self-measurement.
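A minimal sketch of that averaging idea (my reading of it, not stman's actual algorithm): each node repeatedly replaces its frequency estimate with the mean of its own and its neighbours' estimates, and the whole mesh converges to one shared frequency with no master clock.

```python
def sync_round(freqs, neighbours):
    """One gossip round: every node moves to the mean of itself + its neighbours."""
    return [
        sum([freqs[i]] + [freqs[j] for j in neighbours[i]]) / (1 + len(neighbours[i]))
        for i in range(len(freqs))
    ]

# 4 nodes in a ring, starting from drifted local self-measurements (MHz)
freqs = [99.7, 100.4, 100.1, 99.8]
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(50):
    freqs = sync_round(freqs, ring)

# After enough rounds all nodes agree to within a tiny epsilon,
# having converged to the network-wide average frequency.
assert max(freqs) - min(freqs) < 1e-6
```

This symmetric averaging preserves the network-wide mean, so the consensus frequency is the average of all nodes' self-measurements; real clock sync would also have to handle measurement noise, latency, and malicious nodes, which this sketch ignores.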
I am convinced, same as you, that such a principle can solve many other issues, in a fully resilient way. I thus fully agree with your remark. Such an integrated service is an important functionality to implement from the very beginning. It is something that will be very handy for testing and benchmarking alternative cyberspace architectures hosted on the meta-cyberspace, or for optimizing the
@theruran meta-cyberspace architecture itself. Even if we were to choose not to have a meta-cyberspace but a single alternative cyberspace architecture handled by such a mesh network, it would allow better evolutions of it. Still, I am convinced we won't have difficulties reaching consensus on the meta-cyberspace concept, as it gives much more agility to implement, test, benchmark, modify, and upgrade several specialized or localized custom alternative cyberspace architectures and
@theruran new concepts. In all this, objective evaluation and benchmarking of everything is a way to facilitate and speed up the finding of global worldwide consensus in many fields, going far beyond those cyber-powers and cyber-rights models. So yes, I am totally in sync with you on that.
@theruran As long as we take all the necessary design constraints, as for all the rest, so that these metrics cannot be used unwillingly as covert metadata or side channels. Indeed, these metrics are data that should also be addressed with chosen cyber-powers and cyber-rights models, at the meta-cyberspace level if the meta-cyberspace concept is accepted. I'm very confident it will be: everybody prefers fair, objective model benchmarking leading to concept merging rather than cat fights.
@theruran Fortunately, in those citizen-owned mesh physical network topologies, we can restrict and control the broadcast / use of such metric data very finely, so that the calculations needed for the network's self-optimization or resilience involve only a limited number of surrounding peers, and the use of homomorphic cryptography does the rest of the securing of such data. I am very cautious with adaptive mechanism implementations, because it is very easy, if not well engineered, to build side
@theruran channels out of them. Anyway, it is not the moment to discuss implementation details yet; I will just personally claim that I think we have enough technological bricks available to prevent further fuckeries with these things.
I am longing to show you the best idea I am digging into and deepening so far for a fully distributed, "master-clock-less", accurate, synchronous, adaptive clock-distribution mechanism.
I will do it once I have found an operational solution for the
@theruran issue I mentioned to you regarding the possibility of reusing existing asynchronous networks, with the usual known protocols, to interconnect remote zones like in the picture I sent you, with the red / black links issue. From what I could see so far, it should be possible at the cost of a slightly longer latency on those red links, in order to emulate the black links' synchronous characteristics, and allow remote dense-zone interconnection reusing existing infrastructure when we
@theruran have no other way to do it.
So your theorized TSN synchronizes the clock frequency, meaning it has only an internally-consistent time measure that does not rely on a time oracle?
So let me ask you what is the difference between real-time networks and time-sensitive networks? I am just now learning about the former, e.g. CANbus, that guarantee message delivery within a bounded latency.
I really liked your diagram of the meshnet region interconnects, as that is what I was imagining too. Based on your description, I think we will agree that there will be different classes of network traffic based on real-timeness guarantees, such that services like teleconferencing can function without jarring delays - that can be achieved with copper wiring or fiber optics. The three (or maybe there are more or fewer) classes of #meshnet traffic could be:
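The toot is truncated before the classes are actually named, so purely as a hypothetical illustration (the class names and latency bounds below are mine, not from the thread), per-class guarantees might be encoded like this:

```python
from dataclasses import dataclass

# Hypothetical traffic classes - the original toot breaks off before
# listing them, so these three and their bounds are placeholders only.
@dataclass(frozen=True)
class TrafficClass:
    name: str
    max_latency_ms: float   # delivery must complete within this bound
    drop_ok: bool           # whether late packets may be discarded

CLASSES = [
    TrafficClass("hard-real-time", 1.0, drop_ok=False),   # e.g. control signals
    TrafficClass("interactive", 50.0, drop_ok=True),      # e.g. teleconferencing
    TrafficClass("bulk", float("inf"), drop_ok=False),    # e.g. file sync
]

def pick_class(required_latency_ms: float) -> TrafficClass:
    """Choose the loosest class whose bound still meets the requirement."""
    for tc in sorted(CLASSES, key=lambda t: t.max_latency_ms, reverse=True):
        if tc.max_latency_ms <= required_latency_ms:
            return tc
    return CLASSES[0]  # fall back to the strictest class

assert pick_class(100.0).name == "interactive"
```

Whatever the real classes turn out to be, the point is that a mixed copper/fiber mesh can honor different bounded-latency contracts per class rather than treating all traffic as best-effort.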
@theruran Yes, this is the idea.
@theruran You understood what I think is best for decentralized clock synchronization without clock oracles. Then, within the digital end-points connected through such a meshnet, PLLs will simply allow higher-frequency clocks for mechanisms requiring them. Please note that all current FPGAs, even if not architecturally perfect according to me, already allow such a thing natively. This is just something to keep in mind for implementation issues later on.
For the rest, I mostly agree with you
@theruran but what you are describing is somehow very close to the ATM paradigm. The ATM paradigm is cool, but we can do better, much better.
As a preliminary, I'd say that ATM was designed as a universal telecommunication protocol intended to carry other protocols, with its adaptation layer notion. That was a very good idea, and the rest of the ATM concept, fast cell switching, was a fucking well-justified idea for low latency.
But within massive mesh networks, low latency is even
@theruran more important, and other tricks, better tricks than the ATM "fast cell switching matrix" within ATM switches, must be implemented. And it's here that I innovate a lot. But not randomly, for the sake of low latency alone. The lowest possible latency is very high on the top-constraint list, but it's not the only one.
I am using a personal design-approach methodology that tries to serve the goals of the meta-cyberspace concept as I defined it. It is similar to ATM, with its ability to
@theruran transport any higher-level protocol of any kind, except this time the goal is much wider, much deeper, because it is to be able to host as many very different new conceptual alternative (or even existing) cyberspace architectures as possible.
So my self-imposed design and conceptual constraints are of a similar kind to those that led to ATM as it was ultimately designed. I said similar. This is for people reading these lines to understand what is actually driving
@theruran my creativity and my best-solution-finding research and evaluation process. I have no definite preferred solution yet for everything, but the ruleset of constraints & goals my brain works with keeps being refined day after day. I indeed do my research work with a kind of iterative process, a bit like artificial intelligence does. And I am confident about finding several different smart implementations that meet my constraints and goals very soon. I used this same
@theruran technique to find my first definitive solution to stack and buffer overflows and ROP/JOP, and then to optimize it. You know, the thing that was later stolen from me by CIA & UK spies working for Intel and BAE Systems - something I will not forgive them for personally, by the way, and I miss on their mind-control compromission attempts counter-attacks, by the way, too.
What is important in my work is to have clear goals, a consistent list of goals and constraints. Once you have
@theruran this, and keep refining it, and if you voluntarily choose to start from a blank page, creativity can do its conceptual exploration work rather easily. I have other personal tricks I use to boost my creativity, and until now they have not failed me. This is why the way we are working together should lead to good collective results, if we both play the game, and the game is publishing the best ideas / concepts as they emerge, peacefully.
@theruran I chose to publish the "answer" given by this sociologist, hacker-friendly guy I know, because he is not obsessed with or focused on the same things as we are. He has other specialities, but in the case that he really tried to answer the question honestly (I can't be surrounded exclusively by asshole evil spies, even if there are a lot around me because of my antitrust case against Amazon and postal operators), then, as you mentioned once, it is useful to have his
@theruran view of "the most important constraint" that would lead to the political goal included in the question. It brings another angle of view on the complex question at hand. I am sure you must have been surprised, positively, by the nature of his simple answer, because for once it was not a technically/technologically/implementation-oriented answer, but a high-level conceptual one. And we need more of these, for people like us to further integrate them as goals / constraints.
@theruran Unfortunately, very few persons have the honesty, the integrity, and the peaceful crypto-anarchist ethics to play this exciting game, which is so important for people like you and me: to refine & improve our lists of goals and constraints, to debate them, and further, to be able to release our creativity into the wild and boost it to its maximum level, and ultimately innovate, collectively and truly, deeply, lastingly.
@theruran And here, I want to put under direct accusation all the fucking spying agencies, of all nations concerned, no exceptions, for deploying crazy efforts to infiltrate, destroy, instrumentalize, and corrupt the worldwide true crypto-anarchist internationalist and universalist scene in order to preserve the current unacceptable status quo. May they all go and fuck themselves; they are the biggest obstacle to our collective ability to innovate faster. Fucking bastards. And when we do,
@theruran they fucking steal our work, file patents, and then play with their mind control toyz to blackmail us and have us flagged or seen as garbage.
This is totally unacceptable.
@theruran By the way, this sociologist hacker-friendly folk is @af, who threw the concept of an embedded universal translator as "the most important constraint" into the mix. He doesn't know it, but this idea, conceptually, is very interesting for addressing the "equality" part of the question asked. It also reminds us of essential characteristics of the data itself, forcing us to integrate what data really is, and this is important for conceptual innovation and, ultimately, for the political goals.
@theruran One last thing I wanted to highlight to you about those spies stealing our work: the persistent danger is, and we must integrate this as a constraint as we work together, that they would reuse parts of our work to reach opposite, fascist political goals. We must both be aware of this. There are a few smart strategies to prevent them from doing so, and we must talk about them; it is another important point I wanted to mention to you. @af
@theruran See my choice for the "most important constraint" in answering the question asked as serving several goals, as I described in my answer, but also this goal as described in the previous toots: preventing them from instrumentalizing our research and innovations for opposite, fascist political goals.
@theruran But it is not enough to prevent them from doing so. It helps a lot, but it is not enough; we must talk and debate about other complementary strategies. This is really a very interesting and important topic. They abused me once with my research on stack and buffer overflows, and this won't happen again, even if it leads to fucking new constraints in our work; we'll handle it.
@mouloud I know. I think it's way too early to focus on implementation details. What is important in this concept / idea is that an incorruptible, integrated way of translating should be available as a universal translator service. Personally, I don't deduce anything yet, either as implementation or final concept; I just note that a sociologist saw this "functionality" as essential, and that it deals with a category of architectural characteristics that enforce equality. @theruran @af
Again (?), even if I do not know electronics, I am not just a bytes or DOM nodes wrangler. I can do other things, I know other things and I know that I don't know.
About implementation, a word-for-word translation system will go a long way. And those systems are much more manageable than seq2seq alignment approaches that rely on machine-learning black boxes (and which, among other things, require a lot of data and time to train). That is, having a big multi-way dictionary will go a long way. As for grammar, humans can bridge the gap. Learning the vocabulary is the much more difficult task, and for that there is already existing data.
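A tiny sketch of the multi-way dictionary approach (the three-language vocabulary below is a toy placeholder, not real lexicon data): every concept has one shared entry, and translation pivots word-for-word through that entry, leaving grammar to the human reader.

```python
# Multi-way dictionary: one entry per concept, one surface form per language.
LEXICON = {
    "water": {"en": "water", "fr": "eau", "eo": "akvo"},
    "drink": {"en": "drink", "fr": "boire", "eo": "trinki"},
}

# Reverse index: (language, word) -> concept key, built once from the lexicon.
REVERSE = {(lang, word): concept
           for concept, forms in LEXICON.items()
           for lang, word in forms.items()}

def translate(words, src, dst):
    """Word-for-word translation between any language pair in the lexicon.
    Unknown words pass through unchanged; grammar is left to the reader."""
    out = []
    for w in words:
        concept = REVERSE.get((src, w))
        out.append(LEXICON[concept][dst] if concept else w)
    return out

assert translate(["drink", "water"], "en", "fr") == ["boire", "eau"]
```

Because every language maps through the same concept keys, adding one language gives translation to and from all the others, which is what makes the multi-way dictionary cheaper to grow than pairwise models.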
The ground truth is that computers are dispensable: on a local basis, one could fall back on horses or foot to communicate.
The thing computers and cyberspaces enable is communication at the Earth level.
An EMP could take down all UTs. A nuke could take down all humans, but then there would be no one to do the essential tasks (farming, healing, garbage collection, and others?).
I voluntarily play with conceptual approaches, for many reasons.
Of course, I have thought about Esperanto too.
It is like the definitions of the meta-cyberspace, or of cyberspace, that I have written so far in the pad and that are incomplete: I want to add many comments that justify them.
There are several schools of thought about the definition of cyberspace, and none of them is neutral, either in terms of political ideology or cyber-geopolitically speaking.
@mouloud Take it for what it is: an opportunity to discuss this primary idea, sent into the mix by a sociologist who, I think, was justifying his choice for many reasons. It's an opportunity to take a closer look at what it can mean conceptually. I have worked similarly with time synchronization and its current implementation and representation in current centralized computer systems.
It's the occasion to rethink these abstract concepts in a fully decentralized paradigm,
and see where that can lead. It can lead to several new high-level abstract concepts, and their possible decentralized implementations, and this is typically a creativity booster.
Maybe we will conclude that it need not be integrated as a root service embedded into the cyberspace architecture, maybe not. This raises the question: what are the "limits" or criteria for taking such a decision?
@theruran Your question :
"So let me ask you what is the difference between real-time networks and time-sensitive networks ? I am just now learning about the former, e.g. CANbus, that guarantee message delivery within a bounded latency."
This depends on how strictly we define them. As far as I know, the first rely on a master clock distributed to all network routing nodes through a master-to-many direct signal sent through things like GPS, leading to truly synchronous networks,
@theruran with real-time protocols like ATM, while the second exploit asynchronous network protocols like raw TCP/IP or TCP/IP over ATM, and use a master clock oracle that distributes the clock with time-server protocols in a multi-stratum hierarchical distribution scheme. TSN networks as standardized by the IEEE, as described on Wikipedia, are another approach, unable to cross switches that don't comply with their specs (unable to reuse existing asynchronous links), and my proposal