• 0 Posts
  • 40 Comments
Joined 2 years ago
Cake day: July 2nd, 2023

  • I did indeed have a chuckle, but also, this shouldn’t be too foreign compared to other, more-popular languages. The construction func param1 param2 can be found in POSIX shell, with Bash scripts regularly using that construction to pass arguments around. And although wrapping that call in parentheses would create a subshell, it should still work, so you could have a Lisp-like invocation in your sh script. Although if you want one of those parameters to be evaluated, then you’re forced to use the $() construction, which adds the dollar symbol.
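    To illustrate the parallel, here is a minimal sketch in POSIX sh (greet is a made-up function):

```shell
#!/bin/sh
# A function call in POSIX sh uses the same shape as a Lisp form:
# the function name first, then its arguments, separated by spaces.
greet() {
    echo "hello $1 $2"
}

greet alice bob         # func param1 param2
( greet alice bob )     # parenthesized: the same call, run in a subshell
msg=$(greet alice bob)  # $() is needed when the result must be captured
echo "$msg"
# each of the three invocations prints: hello alice bob
```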

    As for Lisp code that often looks like symbol soup, like (= 0 retcode), the equal-sign is just the name of the numerical equality function, which takes two (or more) numbers. Using “=” as a function name should not seem abnormal to C++ programmers, because operator overloading allows defining functions with names like operator==.

    So although it does look kinda wonky to anyone who hasn’t seen Lisp in school, sufficient exposure to popular codebases and languages should impart an intuition as to how Lisp code is written. One doesn’t even need to use an RPN calculator, although that also aids understanding of Lisp.

    Addendum: perhaps in a century, contemporary programmers will find it bizarre that C used the equal-sign to mean assignment rather than equality, when an arrow like <- would more accurately describe assignment, while also avoiding the common error of mixing up = and == in an if-conditional. What looks normal today will not necessarily be so obvious in hindsight.


  • im not much of a writer, im sure its more clear from AI than if i did it myself

    Please understand this in the kindest possible way: if you were not willing to write the documentation yourself, why should I want to review it? I too could use an AI/LLM to distill documentation rather than write this comment, but I choose not to, because I believe that open discussion is a central tenet of open-source software. Even if you are not great at writing technical English, any attempt at all will be more germane to your intentions and objectives than what an LLM generates. You would have had to first describe your intentions and objectives to the LLM anyway; might as well get real-life practice at writing.

    It’s not that AI and LLMs can’t find their way into the software development process, but the question is to what end: using an AI system to give the appearance of a fully fleshed-out project when it isn’t, that is deceitful. Using an AI system to learn, develop, and revise the codebase, to the point that you yourself can adequately teach someone else how it works, that is divine.

    With that out of the way, we can talk about the high-level merits of your approach.

    how the authentication works: https://positive-intentions.com/docs/research/authentication

    What is the lifetime of each user’s public/private keypair? What is the lifetime of the symmetric key shared between two communicating users? The former is important because people can and do lose their private key, or have a need to intentionally destroy the key. In such an instance, does the browser app explicitly invalidate a key and inform the counterparty? Or do keys silently disappear and take the message history with them?

    The latter is important because the longer a symmetric key is used, the more ciphertext a malicious actor can store now and decrypt later, possibly once quantum computers can break today’s encryption. More pressing, though, is that a leak of the symmetric key reveals all prior and future messages, until the key is rotated.

    how security works: https://positive-intentions.com/blog/security-privacy-authentication

    I take substantial notice whenever a promise of “true privacy” is made, because it either delivers a very strange definition of privacy, or relies upon the reader to supply their own definition of what privacy means to them. When privacy is on offer, I’m always inclined to ask: privacy from whom? From network taps? From other apps running in the same browser?

    This document pays only lip service to some sort of privacy notion, but not in any concrete terms. Instead, it spends a whole section on attempting to solve secure key exchange, which simply boils down to “user validates the hash they received through a secure medium”. If a secure medium existed, then secure key exchange would already be solved. If there isn’t one, using an “a priori” hash of the expected key is still vulnerable to attacks on the hash itself.

    this is my sideproject and im trying to get it off the ground

    I applaud you for undertaking an interesting project, but you also have to be aware that many others have tried their hand at secure messaging, with more failures than successes. The blog posts of Soatok show us the failures within just the basic cryptography, and that doesn’t even get to some of the privacy issues that exist separately. For example, until Signal added support for usernames, it was mandatory to reveal one’s phone number to bootstrap a user’s identity. That has since been fixed, but they went into detail about why it wasn’t easy to arrive at the present solution.

    am i a cryptographer yet?

    I recall a recent post I saw on Mastodon, where someone who was implementing a cryptographic library made sure to clarify that they were a “cryptography engineer” and not a cryptographer, because they themselves have to consult with a cryptographer regarding how the implementation should work. That is to say, they recognized that although they are writing the code which implements a cryptographic algorithm, the guarantees come from the algorithm itself, and those guarantees are understood by and discussed amongst cryptographers. Sometimes nicely, and other times necessarily very bluntly. Those examples come from this blog post.

    I myself am definitely not a cryptographer. But I can reference the distilled works of cryptographers, such as this 1999 post which still finds relevance today:

    The point here is that, like medicine, cryptography is a science. It has a body of knowledge, and researchers are constantly improving that body of knowledge: designing new security methods, breaking existing security methods, building theoretical foundations, etc. Someone who obviously does not speak the language of cryptography is not conversant with the literature, and is much less likely to have invented something good. It’s as if your doctor started talking about “energy waves and healing vibrations.” You’d worry.

    I wish you the very best with this endeavor, but also caution as the space is vast and the pitfalls are manifold.



  • This doesn’t answer OP’s question, but is more of a PSA for anyone that seeks to self-host the backend of an E2EE messaging app: only proceed if you’re willing and able to uphold your end of the bargain to your users. In the case of Signal, the server cannot decrypt messages when they’re relayed. But this doesn’t mean we can totally ignore where the server is physically located, nor how users connect to it.

    As Soatok rightly wrote, the legal jurisdiction of the Signal servers is almost entirely irrelevant when the security model is premised on cryptographic keys that only the end devices have. But also:

    They [attackers] can surely learn metadata (message length, if padding isn’t used; time of transmission; sender/recipients). Metadata resistance isn’t a goal of any of the mainstream private messaging solutions, and generally builds atop the Tor network. This is why a threat model is important to the previous section.

    So if you’re going to be self-hosting from a country where superinjunctions exist or the right against unreasonable searches is being eroded, consider that well before an agent with a wiretap warrant demands that you attach a logger for “suspicious” IP addresses.

    If you do host your own Signal server and it’s only accessible through Tor, that is certainly an improvement. But you must still adequately inform your users about what they’re getting into: even Tor is not fully resistant to deanonymization, and by the very nature of using a non-standard Signal server, your users would be under immediate suspicion and subject to IRL side-channel attacks.

    I don’t disagree with the idea of wanting to self-host something which is presently centralized. But also recognize that the network effect with Signal is the same as with Tor: more people using it for mundane, everyday purposes provides “herd immunity” to the most vulnerable users. Best place to hide a tree is in a forest, after all.

    If you do proceed, don’t oversell what you cannot provide, and make sure your users are fully abreast of this arrangement and they fully consent. This is not targeted at OP, but anyone that hasn’t considered the things above needs to pause before proceeding.



  • Can I expose webserver, SSH, WireGuard to the internet with reasonable safety?

    Yes, yes, and yes. Though in all three cases, you would want to have some sort of filtering and IPS in place, like fail2ban or similar, at an absolute minimum. There are port scanners of all kinds scanning for vulnerable software that can be exploited. Some people suggest changing the port numbers away from the default, and while security through obscurity can be a valid tactic, it alone is not a layer of your security onion.
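    As a starting point for that minimum, a minimal fail2ban sketch might look like the following (the values are illustrative, not a tuned recommendation):

```ini
# /etc/fail2ban/jail.local -- minimal sketch protecting SSH
[sshd]
enabled  = true
port     = ssh
filter   = sshd
maxretry = 5      ; failed attempts before a ban
findtime = 10m    ; window in which attempts are counted
bantime  = 1h     ; how long the offending IP stays banned
```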

    A reverse proxy plus tunnel is a reasonable default recommendation because it is easy and prevents a large class of low-effort attacks and exploits, but tunneling has its drawbacks, such as adding a component that exists outside of your direct control. It is also not a panacea. Reverse proxying on its own is also workable, as it means just one point of entry to reinforce with logging and blocking.
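    For reference, a bare-bones reverse proxy sketch in nginx (the hostname and backend port are hypothetical):

```nginx
# Single point of entry: TLS terminates here, requests are
# forwarded to a backend that is not directly exposed.
server {
    listen 443 ssl;
    server_name app.example.com;            # hypothetical hostname

    location / {
        proxy_pass http://127.0.0.1:8080;   # hypothetical backend
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```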

    But I feel like if I cant open a port to the internet to host a webserver then the internet is no longer a free place and we’re cooked.

    The Internet is still (for now) a free place, but just like with free speech, effort must be expended to keep it free. The threats have increased, and while other, simpler options have arisen to fill the demand for self-hosting, this endeavor is about investing sufficient time and effort to keep it going.

    In my estimation, it is no different than tending to a garden in the face of rising environmental calamities. You can and should do it, so long as you’re fully informed about the effort required.



  • Setting aside the cryptographic merits (and concerns) of designing your own encryption, can you explain how a URL redirector requiring a key would provide plausible deniability?

    The very fact that a key is required – and that there’s an option for adding decoy targets – means that any adversary could guess with reasonable certainty that the sender or recipient of such an obfuscated link does in-fact have something to hide.

    And this isn’t something like with encrypted messaging apps where the payload needs to be saved offline and brute-forced later. Rather, an adversary would simply start sniffing the recipient’s network immediately after seeing the obfuscated link pass by in plain text. What their traffic logs would show is the subsequent connection to the real link, and even if that’s something protected with HTTPS – perhaps https://ddosecrets.com/ – then the game is up because the adversary can correctly deduce the destination from only the IP address, without breaking TLS/SSL.

    This is almost akin to why encrypted email doesn’t substantially protect the sender: all it takes is someone to do a non-encrypted reply-all and the entire email thread is sent in plain text. Use PGP or GPG to encrypt attachments to email if you must, or just use Signal, which Just Works ™ for messaging. We need not reinvent the wheel when it’s already been built. But for learning, that’s fine. Just don’t use it in production or ask others to trust it.


  • Re: “Wifi Portal” (litchralee@sh.itjust.works to Selfhosted@lemmy.world, 4 months ago)

    But how do they connect to your network in order to access this web app? If the WiFi network credentials are needed to access the network that has the QR code for the network credentials, this sounds like a Catch 22.

    Also, is a QR code useful if the web app is opened on the very phone needing the credentials? Perhaps other phones are different, but my smartphone cannot scan a QR code shown on its own display.
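    For what it’s worth, the payload behind such a Wi-Fi QR code is just a well-known string format; a minimal sketch in POSIX sh (the SSID and passphrase are made up, and special characters would need escaping):

```shell
#!/bin/sh
# Compose the de-facto WIFI: payload that phone camera apps understand.
ssid="HomeLab"        # hypothetical network name
psk="correct-horse"   # hypothetical passphrase
payload="WIFI:T:WPA;S:${ssid};P:${psk};;"
echo "$payload"
# prints: WIFI:T:WPA;S:HomeLab;P:correct-horse;;
# Pipe the payload into any QR encoder, e.g.: qrencode -t ansiutf8 "$payload"
```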



  • Before my actual comment, I just want to humorously remark about the group which found and documented this vulnerability, Legit Security. With a name like that, I would instinctively hang up the phone if I got a call from them haha:

    “Hi! This is your SBOM vendor calling. We’re Legit.”

    Me: [hangs up, thinking it’s a scam]

    Anyway…

    In a lot of ways, this is the classic “ignore all prior instructions” type of exploit, but with more steps, and it is harder to scrub for. Which makes it so troubling that GitLab’s AI isn’t doing anything akin to data separation between taking instructions and referencing other data sources. What Legit Security revealed really shouldn’t have been a surprise to GitLab’s developers.

    IMO, this class of exploit really shouldn’t exist, in the same way that SQL injection attacks shouldn’t be happening in 2025 due to a lack of parameterized queries. Am I to believe that AI developers are not developing a cohesive list of best practices, to avoid silly exploits? [rhetorical question]


  • Typically, business-oriented vendors will list the hardware that they’ve thoroughly tested and will warranty for operation with their product. The lack of testing larger disk sizes does not necessarily mean anything larger than 1 TB is locked out or technically infeasible. It just means the vendor won’t offer to help if it doesn’t work.

    That said, in the enterprise storage space where disks are densely packed into disk shelves with monstrous SAS or NVMeoF configurations, vendor-specific drives are not unheard of. But hardware that even remotely has that possibility tends to make it readily apparent.

    To be clear, the mobo has a built-in HBA which you’re using, or you’re adding a separate HBA over PCIe that you already have? If the latter, I can’t see how the mobo can dictate what the HBA supports. And if it’s in IT mode, then the OS is mostly in control of addressing the drive.

    The short answer is: you’ll have to try it and find out. And when you do, let us know what you find!


  • Congrats on the acquisition!

    DL380 G9

    Does this machine have its iLO license? If so, you’re in for a treat, if you’ve never used IPMI or similar out-of-band server management. Starting as a glorified KVM, it then has full power control authority (power on/off, soft reset, hard reset), either a separate or shared Ethernet connection, virtual CD and USB, SNMP reporting, and other whiz-bang features. Used correctly, you might never have to physically touch the machine after installation, except for parts replacement.

    What is your go-to place to source drive caddies or additional bays if needed?

    When my Dell m1000e was missing two caddies, I thought about buying a few spares on eBay. But ultimately, I just 3D-printed a few and that worked fine.

    Finally, server racks are absurdly expensive of course. Any suggestions on DIY’s for a rack would be appreciated.

    I built my rack using rails from Penn-Elcom, as I had a very narrow space I wanted to fit my machines. Building an open-frame 4-post rack is almost like putting a Lego set together, but you will have to take care to make sure it doesn’t become a parallelogram. That is, don’t impart a sideways load.

    Above all, resist the urge to get by with a two-post rack. This will almost certainly end in misery, considering that enterprise servers are not lightweight.


  • Yep, sometimes acetone will do that. But other times, another solvent like gasoline might do the trick. Or maybe a heat gun.

    I see it as an engineering challenge, how to best remove intrusive logos from stuff. IMO, all this is part-and-parcel to the second part of: reduce, reuse, recycle. Also, sometimes certain logos can be clipped in very creative ways haha


  • It doesn’t work for backpacks that might have the company name embroidered on, but for cheaper print-on-demand items like hats and water bottles, acetone will cause the logo to dissolve or shift.

    That said, I have personally removed embroidered logos from clothes before, when the product itself is excellent but aesthetically ruined by a logo. It’s very finicky work with a seam ripper, and has gained me a lot of nice thrift store finds.


  • I agree with this comment, and would suggest going with the first solution (NAT loopback, aka NAT hairpin) rather than split-horizon DNS. I say this even though I have a strong dislike of NAT (and would prefer to see networks using flat IPv6 addresses, but that’s a different topic). It should also be fairly quick to configure the hairpin on your router.

    Specifically, problems arise with split-horizon DNS because the same hostname might resolve to two different results, depending on which DNS nameserver is queried. This is distinct from some corporate-esque DNS nameservers that refuse to answer external requests but provide an answer to internal queries. Having no “single source of truth” (SSOT) for what a hostname should resolve to will inevitably make future debugging harder. And that’s on top of debugging NAT issues.

    Plus, DNS isn’t a security feature unto itself: successful resolution of internal hostnames shouldn’t increase security exposure, since a competent firewall would block access. Some might suggest that DNS queries can reveal internal addresses to an attacker, but that’s the same faulty argument used to suggest that ICMP pings should be blocked; they shouldn’t be.

    To be clear, ad-blocking DNS servers don’t suffer from the ails of split-horizon described above, because they’re intentionally declining to give a DNS response for ad-hosting hostnames, rather than giving a different response. But even if they did, one could argue the point of ad-blocking is to block adware, so we don’t really care if SSOT is diminished for those hostnames.



  • I previously proffered some information in the first thread.

    But there’s something I wish to clarify about self-signed certificates, for the benefit of everyone. Irrespective of whichever certificate store an app uses – either its own or the one maintained by the OS – the CA/Browser Forum, which maintains the standards for public certificates, prohibits issuance of TLS certificates for reserved IPv4 or IPv6 addresses. See Section 4.2.2.

    This is because those addresses will resolve to different machines on different networks. Whereas a certificate for a global-scope IP address is fine because it should resolve to the same destination. If certificate authorities won’t issue certs for private IP addresses, there’s a good chance that apps won’t tolerate such certs either. Nor should they, for precisely the reason given above.

    A proper self-signed cert – either for a domain name or a global-scope IP address – does not create any MITM issues, as long as the certificate was manually confirmed the first time and added to the trust store, either in-app or in the OS. Thereafter, only a bona fide MITM attack would raise an alarm, the same as if a MITM attacker tried to impersonate any other domain name. SSH is the most similar, where trust-on-first-use is the norm, not the outlier.

    There are safe ways to use self-signed certificates. People should not discard that option so wantonly.
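    As a concrete sketch, here is one way to generate such a cert with OpenSSL (the domain name is a placeholder) and print the fingerprint that would be verified out-of-band before trusting it on first connection:

```shell
#!/bin/sh
# Generate a self-signed cert for a hypothetical internal domain name.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt -days 365 \
    -subj "/CN=example.internal" \
    -addext "subjectAltName=DNS:example.internal"

# Print the SHA-256 fingerprint; compare it out-of-band, then add the
# cert to the app or OS trust store.
openssl x509 -in server.crt -noout -fingerprint -sha256
```

(The -addext flag requires OpenSSL 1.1.1 or later; a SAN entry matters because modern clients ignore the CN.)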


  • After reviewing the entire thread, I have to say that this is quite an interesting question. In a departure from most other people’s threat models, your LAN is not considered trusted. In addition, you’re seeking a solution that minimizes subscription costs, yet you already have a VPN provider, one which has a – IMO, illogical – paid tier to allow LAN access. In my book, paying more money for a basic feature is akin to hostage-taking. But I digress.

    The hard requirement to avoid self-signed certificates is understandable, although I would be of the opinion that Jellyfin clients that use pinned root certificates are faulty, if they do not have an option to manage those pinned certificates to add a new one. Such certificate pinning only makes sense when the client knows that it would only connect to a known, finite list of domains, and thus is out-of-place for Jellyfin, as it might have to connect to new servers in future. For the most part, the OS root certificates can generally be relied upon, unless even the OS is not trusted.

    A domain name is highly advised, even for internal use, as you can always issue subdomains for different logical network groupings. Or maybe even ask a friend for a subdomain delegation off of their domain. As you’ve found, without a domain, TLS certificates can’t be issued and that closes off the easy way to enable HTTPS for use on your untrusted LAN.

    But supposing you absolutely do not want to tack on additional costs, then the only solution I see that remains is to set up a private VPN network, one which only connects your trusted devices. This would be secure when on your untrusted LAN, but would be unavailable when away from home. So when you’re out and about, you might still need a commercial VPN provider. What I wouldn’t recommend is to nest your private VPN inside of the commercial VPN; the performance is likely abysmal.


  • For timekeeping, you’re correct that timezones shouldn’t affect anything. But in some parts of law, the local time of a particular place (eg state capital, naval observatory, etc…) is what might control when a deadline has passed or not.

    If we then have to reconcile that with high-speed space travel, there’s a possibility of ending up in a legal pickle even when the timekeeping aspect might be simple. But now we’re well into legal fanfiction, which is my favorite sort, though without any ground rules to follow.