Nym promises to be the 'better than #Tor' anonymous network - a bold claim but one she says they don't make lightly.
@stman what's your take on Nym? (see OP)
@theruran Cool, some new homework. I'll let you know, I have to read it first. Chances are, since it is something "over TCP/IP" like Tor (vulnerable to the hidden-channel and IC serial-number tagging attacks), and running on unsafe PCs without any critical execution protection like SGX, that it will be equivalent to Tor in terms of anonymity protection against major players like the NSA. Still, as it is new, it may work better for a short while, before they adapt their govware to it.
@theruran By the way, talking about the fight against hidden channels in general, I see two complementary approaches:
• Designing and using hidden-channel-safe protocols. In the current paradigm this is completely fucked up, as TCP/IP itself is full of possible hidden channels.
• Guaranteeing strict code execution, to ensure no malware can insert data into hidden channels. Fully fucked up in the current paradigm.
And then they say guys like us are crazy to desire a new paradigm. LOL.
@theruran And my latest bet is to try to suppress the protocol notion as we know it, which should help solve, or completely solve, the hidden-channel issues. But this means a complete change of paradigm for digital systems and for the concept of cyberspace.
Do you have any other complementary ideas or approaches to stop hidden channels?
@stman what about printing the messages on paper? or passing them through a transformer that displays an easily verifiable artifact?
Unused bits, as indicated in specs, must either be formally verified to ensure against their use or removed altogether - possibly making for awkward packing or an inefficient representation.
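The "unused bits" hazard can be sketched in a few lines. This is a hypothetical illustration, not any real protocol: the 16-bit header layout and reserved-bit mask are invented, and the two policies shown (reject vs. scrub) are the standard options a receiver has.

```python
# Hypothetical sketch: a receiver that either rejects or zeroes reserved
# header bits so they cannot carry a covert channel. The 16-bit header
# layout and the mask are invented for illustration.

RESERVED_MASK = 0b0000_0000_0011_1000  # pretend bits 3-5 are "reserved, must be zero"

def reject_covert(header: int) -> int:
    """Strict policy: refuse any frame whose reserved bits are set."""
    if header & RESERVED_MASK:
        raise ValueError(f"covert data in reserved bits: {header & RESERVED_MASK:#06x}")
    return header

def scrub(header: int) -> int:
    """Lenient policy: force reserved bits to zero before forwarding."""
    return header & ~RESERVED_MASK

clean = 0b1000_0000_0000_0111
dirty = clean | 0b0000_0000_0010_0000  # a bit smuggled into the reserved field

print(f"{scrub(dirty):#018b}")  # reserved bits forced back to zero
assert scrub(dirty) == clean
```

The strict policy is the formally verifiable one (reserved bits set is a protocol violation); the lenient one silently destroys the channel but lets a tampered peer go unnoticed.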
@theruran There are two categories of hidden channels:
• Time-based ones,
• Data-format / protocol ones.
Making digital systems and the cyberspace concept fully synchronous can make solving the time-based ones easy.
For the unused-bits issue, I see two approaches: first, have no unused bits at all (hidden-channel-safe data formats and protocols); but the second approach, which I tend to prefer today, is to get rid of the protocol notion as we know it.
@theruran File formats are to be considered a special category of protocols indeed.
Visualizing things this way is helpful to envision how to suppress both protocols and file formats.
And here we're falling back onto the path I am exploring by revisiting the memoryspace concept, its dimensional characteristics, and what concept we would push for data.
As you can see, by pushing a new concept for data and memoryspaces, we can feel that we can almost solve everything.
@theruran To improve visualization, we can try to list the differences between file formats, which include the notion of a file, and protocols.
We're close to a solution.
We can't see it yet because our minds are still too polluted by the existing paradigm, but you can feel, as I do, that we're very close.
In 2013, while giving a public talk on free integrated circuits with a nice cypherpunk red mohawk, a French military officer came to me and told me that hidden channels were their
@stman I think what you want is to transmit Abstract Syntax Trees, i.e. executable programs. They are only serialized when present in the transmission cables. There are even cryptographic ways for the sender to ensure the program is executed properly. The program's environment is encapsulated, or can be swapped with a trusted environment, more safely than the joke that sandboxes are today.
It's kind of like sending the image decoder along with the image data and metadata. Except these programs can be a lot simpler and more standardized than they are today. We essentially know the breadth of common use cases and can design for that.
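The "transmit programs, not formats" idea can be sketched concretely. This is a toy illustration under invented node shapes: a tiny expression AST that exists as a structured object on both ends and is serialized only for the wire, then evaluated by a small closed interpreter on the receiving side.

```python
import json

# Toy sketch: an AST is serialized only while in transit, and the receiver
# runs it through a closed interpreter instead of parsing an ad-hoc format.
# The {"op": ..., ...} node shapes are invented for illustration.

def serialize(node) -> str:
    return json.dumps(node)

def deserialize(wire: str):
    return json.loads(wire)

def evaluate(node):
    op = node["op"]
    if op == "lit":
        return node["value"]
    if op == "add":
        return evaluate(node["left"]) + evaluate(node["right"])
    if op == "mul":
        return evaluate(node["left"]) * evaluate(node["right"])
    raise ValueError(f"unknown node: {op}")  # closed set: nothing else executes

# sender side: the program (2 + 3) * 4
program = {"op": "mul",
           "left": {"op": "add",
                    "left": {"op": "lit", "value": 2},
                    "right": {"op": "lit", "value": 3}},
           "right": {"op": "lit", "value": 4}}

wire = serialize(program)            # serialized only in the "cable"
result = evaluate(deserialize(wire)) # receiver side
print(result)                        # 20
```

Because the interpreter accepts only a closed set of node types, there is no ad-hoc parser with unused bits or padding for a covert channel to hide in.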
@theruran It is clear that the concept of Abstract Syntax Trees is the closest to what I am trying to extract from my visions. But it is still a concept formulated within the current digital-systems paradigm.
In this regard, I am trying to improve it by generalizing it better, by reworking the underlying or fundamental concepts it relies on. I think we must stay in a concept-fuzzing and concept-merging state of mind. But we are on the right path anyway.
Remember what I used to
@theruran repeat when we started working together: there is no difference between hardware, software, protocols and network physical topologies: everything is code. I was saying this to underline the fact that the boundaries between these specialties are purely subjective, and shape our minds in a way that prevents innovating or seeing other approaches to digital systems. The same corollary remark applies to the commonly understood concepts of personal computers, CPUs and networks.
@theruran I am recalling this because we should try to see the AST concept in that light, but also within the new paradigm slowly forming in our minds: inventing new alternative memoryspace concepts, the full-synchronicity constraint, and what alternative processing units could be as concepts (plural), in a fully decentralized way.
We are in the most fascinating phase of our research, where we have visions and many carefully selected useful elementary concepts in mind,
@theruran several strong new constraints or characteristics, and we're about to merge all this into a brand-new global approach.
We are very close to finding, inventing, or simply seeing and discovering several stunning new alternative paradigm proposals that will all make sense.
It's really starting to get fascinating.
We are close.
@theruran What cryptographic ways are you thinking about?
And yes, only a serialization issue would remain, but by the way, such issues would be greatly simplified if we were in a fully synchronous paradigm.
The research directions we have been slowly revealing through all our talks and debates are very coherent. It is obvious we are on the right path to what we want to achieve.
This is the project I was remembering and referencing in my post:
An Ironclad App lets a user securely transmit her data to a remote machine with the guarantee that every instruction executed on that machine adheres to a formal abstract specification of the app’s behavior. This does more than eliminate implementation vulnerabilities such as buffer overflows, parsing errors, or data leaks; it tells the user exactly how the app will behave at all times.
Going through the cryptography section of their website, I found some related topics that may be of interest: verifiable computing, homomorphic encryption, Secure Multi-Party Computation and EzPC, Certification of Symbolic Transactions, and Differential Privacy (also under Database Privacy).
This may catch your attention, from the EzPC page:
Secondly, to execute these protocols, one must express the computation at the low-level of circuits comprising of AND and OR gates, which is both highly cumbersome and inefficient.
So there's a lot here that ought to stimulate your imagination. What I imagine for the future of computing is that these cryptographic mechanisms are native and used to guarantee privacy of data and computation.
@theruran Remember our discussion about Abstract State Machines: though at first sight they were interesting, we concluded that the introduction of a VM was ruining most of the benefits in terms of proven execution, because the machine could be compromised.
I would argue the same here: so instead of sending an AND/OR gate schematic, we would send a kind of VHDL code, compiled and assembled into AND/OR gates on the remote device. But if such a compiler/assembler is
@theruran compromised on the remote machine, what can we do then?
• Ensure these compilers/assemblers cannot be compromised.
For this, we can use redundancy strategies: choose two remote devices randomly, send them the same code with the same data set, and we should obtain the same output; then implement a mechanism that checks the outputs are identical before validating the result. It is not that hard to do, since we would be in a fully synchronous paradigm.
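The redundancy check described above can be sketched as follows. The "devices" here are simulated as local functions (an assumption for illustration); the point is only the structure: pick two at random, run the same program on the same data, and refuse any result the pair disagrees on.

```python
import hashlib
import random

# Sketch of the redundancy strategy: run the same program with the same
# inputs on two randomly chosen devices and accept the result only if both
# outputs match. Devices are simulated locally for illustration.

def honest_device(program, data):
    return program(data)

def backdoored_device(program, data):
    return program(data) ^ 1  # silently flips a bit of the result

def run_redundant(devices, program, data):
    a, b = random.sample(devices, 2)  # two distinct randomly chosen devices
    out_a = a(program, data)
    out_b = b(program, data)
    if out_a != out_b:
        raise RuntimeError("outputs diverge: one device is compromised")
    return out_a

# the deterministic computation both devices must agree on
def checksum(data: bytes) -> int:
    return int(hashlib.sha256(data).hexdigest(), 16) & 0xFFFF

fleet = [honest_device, honest_device, honest_device]
print(run_redundant(fleet, checksum, b"bitfile"))  # any honest pair agrees
```

Note the limit of the scheme: it only detects a compromise when the two sampled devices disagree, so it assumes an attacker cannot corrupt both identically.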
@theruran @yaaps To ensure an integrated circuit's integrity, typically an FPGA's but not only those, redundancy (sending the same bitfile to two random remote FPGAs) is by far the simplest strategy we have at hand today, until we invent something better, which is a topic I am researching too.
Two years ago, during the French conference organized by LibreSilicon, I was surprised that none of the researchers' projects dealt with post-manufacturing full integrity check
@theruran @yaaps issues. We were presented many interesting, useful, and very promising research projects, mainly focused on free toolchains, but nothing to ensure an IC's post-manufacturing FULL integrity, or "on site" full integrity checking by end users.
To me this is an essential matter that must be addressed.
When one knows about the NSA's TAO program, and their ability to intercept any parcel, change its contents, and then silently reinject it into a postal or logistics
@theruran @yaaps operator's stream, this means that even if you have a fab making your own chips and you personally check their integrity directly in the fab itself by analyzing a few random samples, you have no guarantee that these ICs will not be replaced by backdoored ones when the fab sends them to you via logistics operators.
I have been researching several ways to implement on-site full integrity checks, but also strategies like redundancy that can somehow, under
Still, the best would be to have some kind of IC whose full integrity check can be performed locally, on site, by any end user, with a low-cost apparatus.
The small international free-integrated-circuits community is somehow betraying itself by not researching this topic enough, in my humble opinion.
@theruran @yaaps In this regard, to invent a new mechanism allowing end users to check IC dies' integrity on site with a low-cost apparatus, I asked David from LibreSilicon this week whether it was possible to create LEDs whose light wavelength could be modulated by a signal (a few bits). He told me such LEDs are actually Light Emitting Transistors. They already exist, but don't allow modulating the wavelength yet.
Such light emitting transistors would be very
Being able to modulate the wavelength allows transmitting more information for a given density of LETs (Light Emitting Transistors) on a die, lowering the cost of the low-cost apparatus needed to perform the integrity check of an IC equipped with such LETs at strategic points.
@theruran @yaaps As the wavelength is directly a function of the material used for the junctions, I asked him whether a kind of LET could be implemented as a FET transistor whose junction would not be made of the same material over its whole surface, so that, depending on the electric field applied, the "active" zone of the LET junction could be of a different material, selected by the electric field.
Here the idea is to lower the density of such LETs in an IC like an FPGA so as to be able
@theruran @yaaps to collect these lights with a low-cost CCD matrix and classical microscope optics. In other words, to enable a low-cost CCD apparatus to collect those LETs' integrity-check lights on an IC like an FPGA equipped with them, for "in situ" checking by end users.
David told me he would contact me when he manages to prototype such a LET with wavelength-modulation capabilities.
Such an integrity check through LETs can be performed without wavelength
Reducing the density of such LETs in an FPGA is, in my view, one of the major variables for lowering the cost of the corresponding checking apparatus for end users.
All I have just said here are just ideas. They may not be practical, but they are worth investigating, in my opinion.
It is complementary to the redundancy
@stman @yaaps I like your idea of using LETs to display the self-test process and output. Earlier, what I mentioned about printing or displaying an artifact is really a checksum that is easier for humans to visually check. If there are 1000 LETs that display a stable output at the end of the self-test, it can be very easy to miss a few LETs that are not illuminated but which may indicate a drastic difference in the IC's behavior. So a checksum is more user-friendly for this purpose, but maybe you are concerned about its implementation correctness, since after all, how do we know the checksum algorithm is correct or can be trusted? What I am imagining is a printed image in the computer user manual of the correct self-test output, that the user can verify at any time by running the self-test.
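The "verifiable artifact" idea above can be sketched: collapse a wide self-test output (here, 1000 simulated LET bits) into a short artifact a human can compare against a printed reference. The word list and three-word rendering are invented for illustration; SHA-256 stands in for whatever checksum the manual would specify.

```python
import hashlib

# Sketch: hash the raw self-test bits, then render a few digest bytes as
# words. Comparing three words against the user manual is far easier than
# eyeballing 1000 LETs. Word list and format are invented for illustration.

WORDS = ["red", "blue", "gold", "iron", "moss", "dawn", "echo", "frost"]

def artifact(self_test_bits: str) -> str:
    digest = hashlib.sha256(self_test_bits.encode()).digest()
    return "-".join(WORDS[b % len(WORDS)] for b in digest[:3])

good = "1" * 1000                    # all LETs lit as expected
bad = "1" * 400 + "0" + "1" * 599    # a single dark LET

print(artifact(good))
print(artifact(bad))  # almost certainly a completely different word triple
```

A single flipped bit changes the whole digest, so the rendered words diverge with overwhelming probability, unlike a bank of LEDs where one dark spot is easy to miss. The trust question stman raises remains, though: the checksum circuit itself must be part of what is verified.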
@stman @yaaps It's a more difficult issue because you are talking about compilers, but the research I referenced above describes how to ensure the remote machine is executing your program according to formal semantics. The benefit of sending HDL instead of TTL would be less information on the wire.
I don't understand what you say here about Abstract State Machines. An ASM is just another abstract machine that is represented mathematically (formal semantics). As with any abstract machine, we can theoretically implement it in hardware, just as they did with the LISP microprocessor. The output and stepwise process of an ASM can be tested on many different machines using different software, thereby using the principle of redundancy to verify correctness. It could even be implemented as an FPGA softcore (again theoretically; I don't really know the space requirements of typical ASMs versus what's available on a run-of-the-mill FPGA).