Up until a year ago, the README explicitly said they didn’t claim to be an open source project: https://github.com/jgraph/drawio/commit/8906f90ac0cc50a0c6da77c28cf9b2b2339277b1#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5L10
For starters, it was never “open source”…
From your link:
Instead, as Winamp CEO Alexandre Saboundjian said, “Winamp will remain the owner of the software and will decide on the innovations made in the official version.” The sort-of open-source version is going by the name FreeLLama.
While Winamp hasn’t said yet what license it will use for this forthcoming version, it cannot be open source with that level of corporate control.
If I upload the source code for my project on Github/Forgejo/Gitlab/Gitea and license it under an open source license, allowing you to fork it and do whatever you want (so long as you follow the terms of my copyleft license), and I diligently ensure that code is uploaded to my repository before being deployed, but I ignore all issues, feature requests, PRs, etc., is my project open source?
Yes.
Likewise, if Winamp had been licensed under an open source license, it would have been open source, regardless of how much control they kept over the official distribution.
Winamp wasn’t open source because its license, the WCL, wasn’t open source.
You could’ve scrolled down to the bottom, clicked on “Links,” then clicked on the repo link
The repo has instructions to install a Snap or build from source. If you build from source, it looks like you should download an archive from the releases page rather than just pulling from master.
Open-Webui published a docker image that has a bundled Ollama that you can use, too: `ghcr.io/open-webui/open-webui:cuda`. More info at https://docs.openwebui.com/getting-started/#installing-open-webui-with-bundled-ollama-support
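If it helps, running that image looks roughly like this; the port mapping, volume names, and GPU flag are my assumptions, so check the linked docs for the current command:

```
# Sketch only - volume names, port mapping, and --gpus flag are assumptions; see the docs above
docker run -d \
  --name open-webui \
  --gpus=all \
  -p 3000:8080 \
  -v ollama:/root/.ollama \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:cuda
```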
For the purposes of this project, you could at least reproduce them by running `wget` and downloading them from the original projects.
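Something like this, where the URL is purely a placeholder for wherever the original project actually hosts the files:

```
# Placeholder URL - point this at the original project's real download/release link
wget https://example.org/original-project/releases/latest/asset.tar.gz
```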
I made a typo in my original question: I was afraid of taking the services offline, not online.
Gotcha, that makes more sense.
If you try to run the reverse proxy on the same server and port that an existing service is using (e.g., port 80), then you’ll run into issues. You could also run into conflicts with the ports the services themselves use. Likewise if you forward the same external port on your router. But IME those issues will mostly just stop the new service from starting - you’d have to stop the existing services or restart your machine for the new service to have a chance to grab the ports while they were unused. Otherwise I can’t think of any issues.
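If you want to be sure before starting the proxy, you can check what’s already listening; for example (assuming a Linux box with iproute2):

```
# List listening TCP sockets and see if anything already owns 80/443
sudo ss -tlnp | grep -E ':80 |:443 '
```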
I’m afraid that when I install a reverse proxy, it’ll take my other stuff online and cause me various headaches that I’m not really in the headspace for at the moment.
If you don’t configure your other services in the reverse proxy then you have nothing to worry about. I don’t know of any proxy that auto discovers services and routes to them by default. (Traefik does something like this with Docker services, but they need Docker labels and to be on the same Docker network as Traefik, and you’re the one configuring both of those things.)
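For illustration, this is roughly what that opt-in looks like with Traefik (assuming it’s running with exposedByDefault disabled and attached to a Docker network named proxy - both names are just examples):

```
# Nothing gets routed to this container unless you add the labels
# and attach it to Traefik's network yourself
docker run -d --name whoami \
  --network proxy \
  --label 'traefik.enable=true' \
  --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
  traefik/whoami
```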
Are you running this on your local network? If so, then unless you forward a port to your server on the port your reverse proxy is serving from, it’ll only be accessible from the local network. This means you can either keep it that way (and VPN in to access it) or test it by connecting directly to your server on that port and confirm that it’s working as expected before forwarding the port.
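A quick way to test that from inside the network, before touching the router (the IP and hostname here are placeholders):

```
# Hit the proxy directly on the LAN and force the right Host/SNI without touching DNS
curl -vk --resolve service.example.com:443:192.168.1.50 https://service.example.com/
```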
I don’t know that a newer drive cloner will necessarily be faster. Personally, if I’d successfully used the one I already have and wasn’t concerned about it having been damaged (mainly due to heat or moisture) then I would use it instead. If it might be damaged or had given me issues, I’d get a new one.
After replacing all of the drives there is something you’ll need to do to tell it to use their full capacity. From reading an answer to this post, it looks like what you’ll need to do is to select “Change RAID Mode,” then keep RAID 1 selected, keep the same disks, and then on the next screen move the slider to use the drives’ full capacities.
upper capacity
There may be an upper limit, but on Amazon there is a 72 TB version that would have to come with at least 18 TB drives. If 18 TB is fine, 20 TB is also probably fine, but I couldn’t find any reports by people saying they’d loaded 20 TB drives into theirs without issue.
procedure
You could also clone them yourself, but you’d want to put the NAS into read only mode or take it offline first.
I think cloning drives is generally faster than rebuilding them in RAID, as well as easier on the drives, but my personal experience with RAID is very limited.
Basically, what I’d do is:
In terms of timing… I have a Sabrent offline cloning hub (about $50 on Amazon), and it copies data at 60 MB/s, meaning it’d take about 9 hours per clone. Startech makes a similar device ($96 on Amazon) that allegedly clones data at 466 MB/s (28 GB per minute), meaning each clone would take 2.5 hours… but people report it being just as slow as the Sabrent.
Also, if you bought two offline cloning devices, you could do steps 1-3 and 4-6 simultaneously, and do the same again with steps 7-8.
I’m not sure how long it would take RAID to rebuild a pulled drive, but my understanding is that it’s going to be fastest with RAID 1. And if you don’t want to make the NAS read-only while you clone the drives, it’s probably your only option, anyway.
Good to know! I saw that mentioned on some (apparently outdated) Comodo marketing copy as a benefit over LE
EV certs give you an extra green bar or something along those lines. If your customers care about it, then you have to get one. If they don’t - and they probably don’t - it’s a waste.
What exactly are you trusting a cert provider with and what are the security implications?
End users trust the cert provider. The cert provider has a process that they use to determine if they can trust you.
What attack vectors do you open yourself up to when trusting a certificate authority with your websites’ certificates?
You’re not really trusting them with your certificates. You don’t give them your private key or anything like that, and the certs are visible to anyone navigating to your website.
Your new vulnerabilities are basically limited to what you do for them - any changes you make to your domain’s DNS config, or anything you host, etc. - and depend on that introducing a vulnerability of its own. You also open a new phishing attack vector, where someone might contact you, posing as the certificate authority, and ask you to make a change that would introduce a vulnerability.
In what way could it benefit security and/or privacy to utilize a paid service?
For most use cases, as far as I know, it doesn’t.
LetsEncrypt doesn’t offer EV or OV certificates, which you may need for your use case. However, these are mostly relevant at the enterprise level. Maybe you have a storefront and want an EV cert?
LetsEncrypt also only offers community support, and if you set something up wrong you could be less secure.
Other CAs may offer services that enhance privacy and security, as well, like scanning your site to confirm your config is sound… but the core offering isn’t really going to be different (aside from LE having intentionally short renewal periods), and theoretically you could get those same services from a different vendor.
You can get wildcard certs with LetsEncrypt (since 2018): https://community.letsencrypt.org/t/acme-v2-production-environment-wildcards/55578
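For reference, wildcards require the DNS-01 challenge, so with certbot it looks roughly like this (manual flow shown; in practice you’d usually use a DNS plugin for your provider instead):

```
# DNS-01 challenge: certbot will prompt you to create a TXT record at _acme-challenge
certbot certonly --manual --preferred-challenges dns \
  -d 'example.com' -d '*.example.com'
```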
Eligible libraries, archives, and museums have a few exemptions to the DMCA’s anti-circumvention clauses that aren’t available to ordinary citizens, but these aren’t unique to the Internet Archive. For example:
Literary works, excluding computer programs and compilations that were compiled specifically for text and data mining purposes, distributed electronically where:
(A) The circumvention is undertaken by a researcher affiliated with a nonprofit institution of higher education, or by a student or information technology staff member of the institution at the direction of such researcher, solely to deploy text and data mining techniques on a corpus of literary works for the purpose of scholarly research and teaching;
(B) The copy of each literary work is lawfully acquired and owned by the institution, or licensed to the institution without a time limitation on access;
(C) The person undertaking the circumvention views the contents of the literary works in the corpus solely for the purpose of verification of the research findings; and
(D) The institution uses effective security measures to prevent further dissemination or downloading of literary works in the corpus, and to limit access to only the persons identified in paragraph (b)(5)(i)(A) of this section or to researchers affiliated with other institutions of higher education solely for purposes of collaboration or replication of the research.
This exemption doesn’t allow them to publish the content, though, nor would it provide them immunity to takedown requests, if it did.
These exemptions change every three years and previously granted exemptions have to be renewed. The next cycle begins in October and they started accepting comments on renewals + proposals for expanded or new exemptions in April, so that’s why we’re hearing about companies lobbying against them now.
Dunno, I think regardless of the method used by the extension, any extension called “Bypass Paywalls” that does what it says on the tin can pretty unambiguously be said to be designed to circumvent “technological protection measures”.
“Bypass” and “Circumvent” are nearly synonymous in some uses - they both mean “avoid” - but that’s not really the point.
From a legal perspective, it’s pretty clear no circumvention of technological protection measures is taking place*. Yes, bypassing or circumventing a paywall to get to the content on the site itself would be illegal, were that content effectively protected by a technological measure. But they’re not doing that. Rather, a circumvention of the entire site is occurring, which is completely legal (an obvious exception would be if they were hosting infringing content themselves or something along those lines, but we’re talking about the Internet Archive here).
* - to be clear, I’m referring to what was detailed in the request, not the part that was redacted. That part may qualify as a circumvention.
In this case, it circumvents the need to login entirely and obviously it circumvents the paywall.
Following the same logic, Steam could claim that a browser extension showing where you can get the same game for cheaper or free circumvents their technological protection measure. It doesn’t. It circumvents the entire storefront, which is not illegal.
That’s the same thing that’s happening here - linking to the same work that’s legally hosted elsewhere.
Though as you said, these guys should probably be sending DMCAs to the Internet Archive
Yes - if they don’t want their content available, that’s what they should do. They might not want to do that, because they appreciate the Internet Archive’s mission (I wonder if it’s possible to ask that content be taken down until X date, or for content to be made inaccessible but for it to still be archived?) or they might be taking a multi-pronged approach.
Maybe archive.today is the problem? Maybe they don’t honor DMCA requests.
Good point. If so, and if their site isn’t legally compliant in the same ways, then the extension becomes a lot less legally defensible if it’s linking there. That’s still not because it’s circumventing a technological protection, though - it’s because of precedent that “One who distributes a device with the object of promoting its use to infringe copyright, as shown by clear expression or other affirmative steps taken to foster infringement, going beyond mere distribution with knowledge of third-party action, is liable for the resulting acts of infringement by third parties using the device, regardless of the device’s lawful uses,” (Source), where “device” includes software. Following that precedent, plaintiffs could claim that the extension promoted its use to infringe copyright based off the extension’s name and that it had knowledge of third-party action because it linked directly to sites known to infringe copyright.
The Digital Media Law Project points out that there are two ways sharing links can violate the DMCA:
I’m not sure how the extension searches web archives. If it uses Google, for example, then it would make sense to serve Google a DMCA takedown notice (“stop serving results to the known infringing archive.piracy domain”), but if the extension directly searches the infringing web archive, then the extension developers would need to know that the archive is infringing. Serving them a DMCA takedown (“stop searching the known infringing archive.piracy domain”) would give them notice, and if they ignored it, it would then be appropriate to send the takedown directly to their host (Github, the browser extension stores, etc.) citing that they had been informed of the infringement of a site they linked to and were de facto committing contributory infringement themselves.
Given that they didn’t do that, I can conclude one of the following:
How is the accused project designed to circumvent your technological protection measures?
The identified Bypass Paywalls technology circumvents NM/A’s members’ paywalls in one of two ways. [private]
For hard paywalls, it is our understanding that the identified Bypass Paywalls technology automatically scans web archives for a crawled version of the protected content and displays that content.
If the web archives have the content, then a user could just search them manually. The extension isn’t logging users in and bypassing your login process; it’s just running a web search for them.
while (true) { print money; }
Someone’s never heard of Bitcoin
I use `--format-sort +res:1080`, which, if my understanding of the documentation is correct, will make it prefer 1080p, the smallest video larger than 1080p if 1080p isn’t available, or the largest video if nothing 1080p or larger is available.
`res` is the smallest dimension of the video (so for a 1080x1920 portrait video, it would be 1080). Default sort is descending order. The `+` makes it sort in ascending order instead.
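So the full command ends up looking like this (the URL is just a placeholder):

```
yt-dlp --format-sort '+res:1080' 'https://www.youtube.com/watch?v=VIDEO_ID'
```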
Ah, you’re right - Trilium doesn’t use file-backed notes at all - it saves them in a database (I think SQLite but I’m not positive).
Do you only experience the 5-10 second buffering issue on mobile? If not, then you might be able to fix the issue by tuning your NextCloud instance - upping the memory limit, disabling debug mode and dropping log level back to warn if you ever changed it, enabling memory caching, etc…
Check out https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html and https://docs.nextcloud.com/server/latest/admin_manual/installation/php_configuration.html#ini-values for docs on the above.
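As a rough sketch of the kind of changes those docs describe (the paths, PHP version, and NextCloud location are assumptions for a typical Debian-style php-fpm setup - adjust for yours):

```
# Assumptions: php-fpm 8.2, NextCloud installed in /var/www/nextcloud

# Raise PHP's memory limit (the docs recommend at least 512M)
sudo sed -i 's/^memory_limit = .*/memory_limit = 512M/' /etc/php/8.2/fpm/php.ini

# Enable local memory caching (requires the php-apcu extension) and
# put debug/log settings back to their defaults
sudo -u www-data php /var/www/nextcloud/occ config:system:set memcache.local --value '\OC\Memcache\APCu'
sudo -u www-data php /var/www/nextcloud/occ config:system:set debug --value false --type boolean
sudo -u www-data php /var/www/nextcloud/occ config:system:set loglevel --value 2 --type integer

sudo systemctl restart php8.2-fpm
```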