May 19, 2022
By Bu
239 views

[Last edited on Friday, May the 20th, 2022, 07:30]

Nowadays there is a serious security problem with webapps claiming to provide end-to-end encryption. The problem does not lie in end-to-end encryption itself: when it is well implemented, and its implementations are regularly audited by trustworthy third parties, it can be secure enough for any kind of end user. The main and by far biggest problem with end-to-end encryption through webapps (that is, through web sites) is that every webapp, with its client-side javascript code, is delivered anew to every user – that is you, me, anyone else – every time he/she/* opens a “page” (a URL). This means there can be no effective auditing by trustworthy third parties to ensure that a webapp claiming to be secure does what it should and nothing it shouldn’t, since any malicious actor with access to the web server(s) it runs on could, at any time, change the code to steal any (possibly targeted) user’s supposedly client-side-only, secure data.

I think there are various ways to address this problem. What follows is the best way I can currently think of that would not require end users to change their habits, except for a tiny bit. It would require defining a new secure attribute for the opening <html> tag, plus some other new, official HTML extensions (see the next paragraphs), in a future release of the HTML language definition by the W3C, along with a single rule every browser should follow: when an HTML page has the secure attribute set on its opening <html> tag, the browser must not execute any javascript, nor any other existing or future client-side code language, coming from the web server; if such code is present on the page, it should stop the rendering process and display a big “This page is insecure and won’t be rendered” alert, or something like that. For any page with the secure attribute set on its opening <html> tag and no client-side code, the browser should display a new “green symbol” (better if standardized by the W3C) somewhere, meaning that page offers the maximum security.
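A page honouring the proposed rule might look like the following sketch (the secure attribute, and the rendering behaviour it triggers, are this proposal’s invention, not part of any published HTML specification):

```html
<!DOCTYPE html>
<!-- Hypothetical syntax: the "secure" attribute is proposed in this
     post, it does not exist in current HTML -->
<html secure>
  <head>
    <title>A secure end-to-end encrypted webapp</title>
    <!-- No <script> elements and no inline event handlers anywhere:
         a conforming browser finding any client-side code on this page
         would stop rendering and show the "This page is insecure and
         won't be rendered" alert instead -->
  </head>
  <body>
    <p>All cryptographic operations happen natively in the browser.</p>
  </body>
</html>
```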

The W3C could define a new hash="<hash algorithm>" attribute for the already existing HTML <input type="password"> element, which would require any browser to hash the password entered by the user before sending it to the web server. The W3C should then also define an initial set of currently secure enough hash algorithms that every browser must support as <hash algorithm> values, reserving for itself the role of deprecating those algorithms that become insecure in the future, and of adding new secure enough hash algorithms to that set as they appear.
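As a sketch, assuming sha3-512 were one of the identifiers in this hypothetical W3C-maintained algorithm set, a form could declare local password hashing like this:

```html
<!-- Hypothetical syntax: the hash="..." attribute is part of this
     proposal, not of existing HTML. The browser would hash the entered
     password locally and submit only the digest to the server -->
<form action="/login" method="post">
  <label>Password:
    <input type="password" name="password" hash="sha3-512">
  </label>
  <button type="submit">Login</button>
</form>
```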

The W3C could define a new HTML <openpgp-private-key-picker> element that would let the user pick his/her/* private OpenPGP key file for local, internal use by the browser only. The browser should ensure that the picked private OpenPGP key file path is available to other pages only on the very same domain the page containing the <openpgp-private-key-picker> element comes from, and only in the very same browser tab. The W3C could also define a new HTML <openpgp-forget-private-key-path> element, which web developers could and should include in any logout page. In any case, closing the browser, or the tab containing the secure webapp, should cause the browser to forget the picked private OpenPGP key file path.

These new HTML elements I have drafted would make it possible to build login pages requiring only the usual “username” and “password” fields, plus the private OpenPGP key file picker element. The picker could be rendered as a simple button labelled “Please pick your private OpenPGP key file path” (or something like that) while no path has yet been picked through the usual local file browser, and “Private OpenPGP key file path is set” (or something like that) once it has; the button would remain usable to pick a different private OpenPGP key file path until the user pushes the usual “Login” button, or the like.
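Putting the pieces together, such a login page might be sketched as follows (the openpgp-* elements and the hash attribute are all hypothetical syntax from the paragraphs above, not existing HTML):

```html
<!-- Hypothetical login form using the proposed extensions -->
<form action="/login" method="post">
  <label>Username: <input type="text" name="username"></label>
  <label>Password: <input type="password" name="password" hash="sha3-512"></label>
  <!-- Rendered as a button; its label switches from
       "Please pick your private OpenPGP key file path" to
       "Private OpenPGP key file path is set" once a file is chosen -->
  <openpgp-private-key-picker></openpgp-private-key-picker>
  <button type="submit">Login</button>
</form>

<!-- Hypothetical logout page fragment: its presence makes the browser
     forget the picked private OpenPGP key file path -->
<openpgp-forget-private-key-path></openpgp-forget-private-key-path>
```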

Any HTML “input” element could support these new attributes: openpgp-encrypt, openpgp-sign, openpgp-decrypt and openpgp-verify; any HTML element that can be used to define an “output text area” could support only the new openpgp-decrypt and openpgp-verify attributes.
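For instance, a webmail compose form and a message view could combine these hypothetical attributes as follows (none of them exist in current HTML; this is only a sketch of the proposal):

```html
<!-- Outgoing message: the browser would encrypt it to the recipient's
     public key and sign it with the user's picked private OpenPGP key
     before submission -->
<form action="/send" method="post">
  <textarea name="message" openpgp-encrypt openpgp-sign></textarea>
  <button type="submit">Send</button>
</form>

<!-- Incoming message: the browser would decrypt it and verify its
     signature locally before displaying it -->
<div openpgp-decrypt openpgp-verify>
  -----BEGIN PGP MESSAGE----- ... -----END PGP MESSAGE-----
</div>
```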

I don’t think this first draft, which I wrote today (Thursday, May the 19th, 2022) in some hurry, covers all the use cases; the paragraphs above would surely benefit from better-defined details, and some other HTML extensions may be necessary. But I am sure the underlying concepts are a good demonstration that truly secure enough end-to-end encrypted communication through the web is feasible, requiring only regular enough auditing, by trustworthy third parties, of those web browsers that would like to call themselves “secure end-to-end encryption compatible” (or something like that), and of their own OpenPGP implementations, or of the OpenPGP library implementations they may use.

Nonetheless, I personally think it’s unlikely something like this will actually be implemented, because I think we are facing a power struggle between institutional actors (like the EU with its proposed CSAM legislation) trying to gain the power to read every encrypted content they want, and the biggest private actors in this field, trying to keep that power, which they already have, for themselves.




Source: Bu.noblogs.org