• 0 Posts
  • 51 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • Both are, incidentally, categories where I will never be happy with slopcode.

    The point here isn’t necessarily that any particular use of LLMs is a good tradeoff (I can accept that many will not be, especially when security and correct operation are very important), just that quantity clearly matters, which refutes the point you were making earlier that it doesn’t.

    We are actively building a history of cases where LLM usage correlates heavily with that slope you mentioned. But hey, that’s OK; we aren’t allowed to call things out before they happen, and judgement may only be passed once the damage is done, right?

    Out of curiosity, we know that LLM usage increases cognitive deficit and in some cases leads to psychosis. How many fatalities would you say is an acceptable number before governments act? How degraded do we let our societies get before we rein it in?

    I think it’s a mistake to consider all LLM usage as one thing, and that thing as some kind of sin to be denounced as a whole rather than in part, and not considered beyond thinking of ways to get rid of it (which is effectively impossible). There were people who had this attitude towards, for example, electricity, which is actually very dangerous when misused and caused lots of fires and electrocutions. But the way those problems eventually got mitigated was by working out more sensible ways to use it, rather than by returning to an off-grid world.


  • One example of a place where quantity is lacking is web browsers. Another might be mobile operating systems. I am glad projects like Firefox and GrapheneOS exist, but it’s obvious that the volume of work needed to achieve broad compatibility and competitiveness for these types of software is a limiting factor.

    As for the idea that any LLM use is a slippery slope: the way to avoid the slippery slope fallacy would be to have compelling evidence or a rationale that any use really does lead naturally to problematic use. Without that, the argument could apply to basically any programming technology that gets associated with things done badly (e.g. Java). But I think it isn’t usually the case that a popular tool has genuinely no good or safe ways to use it, and I don’t think that’s true for AI.


  • I will complain about quantity: in many areas where open source projects are competing with closed source commercial products, they have not achieved feature parity or a comparable level of polish. Quantity matters. So do, as someone else touched on, quality-of-life improvements to the process of writing code, like ease of acquiring and synthesizing information. That doesn’t mean it’s necessarily a worthwhile tradeoff, but how much is really being sacrificed depends on what exactly is being done with an LLM. To me, one part of what’s described here that’s clearly going too far is using it to automate communication with other people contributing to the project; there’s no way that is worth it.

    As for the gun thing, I will support entirely banning LLM powered weapons intended to kill people, that’s an easy choice.




  • The main complaints about Matrix I’ve heard, though, are about behind-the-scenes stuff rather than features, which the video touches on:

    But there are some reasons why I think XMPP is superior. In Matrix, when you join a room, your server downloads and stores the entire history of that room. If someone on a federated server posts illegal content in a room you’re in, your server is now hosting it, and you are liable. Whereas in XMPP, messages are relayed in real time: group chat (MUC) history stays on the server hosting that room, so your server only stores messages for your own users, which means there is no content caching from other servers. This is a fundamental architectural difference which makes the XMPP protocol better in my opinion.

    Personally I don’t know that much about it but I briefly looked into what it would take to write a client for Matrix a few years ago and it seemed pretty daunting to work with. Maybe it would be possible to write software that implements more Discord features on top of XMPP to have something that works more smoothly.



  • If your focus is LLMs, get a 3090 GPU. VRAM is the most important thing here because it determines what models you can load and run at a decent speed, and having 24 GB will let you run the mid-range models that specifically target that amount of memory, since it’s a very standard amount for hobbyists to have. These models are viable for coding; the smaller ones are less so. Looking at prices, it seems like you can get this card for 1-2k depending on whether you go used or refurbished. I don’t know if better price options are going to be available soon, but with the RAM shortage and huge general demand, it kind of doesn’t seem like it.
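    To make the VRAM point concrete, here’s a back-of-envelope sizing sketch; the 1.2x overhead factor and the example model size are illustrative assumptions, not measurements:

    ```python
    # Rough VRAM needed to load a model's weights at a given quantization,
    # with a fudge factor for KV cache and runtime overhead.
    # The 1.2x overhead and the 32B example are illustrative assumptions.

    def vram_gb(params_billion: float, bits_per_weight: int, overhead: float = 1.2) -> float:
        bytes_per_weight = bits_per_weight / 8
        return params_billion * bytes_per_weight * overhead

    # A ~32B-parameter model at 4-bit quantization:
    print(round(vram_gb(32, 4), 1))   # → 19.2  (fits in 24 GB)
    # The same model at full 16-bit precision:
    print(round(vram_gb(32, 16), 1))  # → 76.8  (does not fit)
    ```

    Quantized models in roughly the 20-35B range are the ones that land just under 24 GB, which is part of why cards with that much VRAM are such a common target.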

    If you want to focus on image or video generation instead, I understand there are advantages to going with newer-generation cards, because certain features and speed are more of a factor than just VRAM, but I know less about this.








  • We can’t afford to make any of this. We don’t have the money for the compute required or to pay for the lawyers to make the law work for us.

    I don’t think this is entirely true; yes, large foundation models have training costs that are beyond the reach of individuals, but plenty can be done that is not, or that can be done by a relatively small organization. I can’t find a direct price estimate for Apertus, and it looks like they used their own hardware, but it’s mentioned that they used ten million GPU hours on GH200 GPUs. I found a source online claiming a rental cost of $1.50 per hour for that hardware, so the cost of training this could be loosely estimated at around 15 million dollars.
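    For what it’s worth, the arithmetic on those quoted figures is simple enough to check directly (the $1.50/hour GH200 rate is the unofficial figure mentioned above, not a published price):

    ```python
    # Back-of-envelope training cost estimate from the figures quoted above:
    # ten million GPU hours at a claimed $1.50/hour GH200 rental rate.
    gpu_hours = 10_000_000
    rate_per_hour = 1.50  # USD, unofficial figure found online

    total_cost = gpu_hours * rate_per_hour
    print(f"${total_cost:,.0f}")  # → $15,000,000
    ```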

    That is a lot of money if you are one person, but it’s an order of magnitude smaller than the settlements of billions of dollars being paid so far by the biggest AI companies for their hasty unauthorized use of copyrighted materials. It’s easy to see how copyright and legal costs could potentially be the bottleneck here preventing smaller actors from participating.

    It should benefit the people, so it needs to change. It needs to be “expanded” (I wouldn’t call it that, rather “modified” but I’ll use your word) in that it currently only protects the wealthy and binds the poor. It should be the opposite.

    How would that even work though? Yes, copyright currently favors the wealthy, but that’s because the whole concept of applying property rights to ideas inherently favors the wealthy. I can’t imagine how it could be the opposite even in theory, but in practice, it seems clear that any legislation codifying limitations on use and compensation for AI training will be drafted by lobbyists of large corporate rightsholders, at the obvious expense of everyone with an interest in free public ownership and use of AI technology.



  • What about a way to donate money (held in reserve for that purpose?) after the fact for specific commits, and then a way to indicate which things you’d be most likely to donate to going forward if they are completed? This would mean less reliable payments, since there wouldn’t be a guarantee that any given contribution would result in a payout, but there would be no disincentive to work on things, and there would be a general idea of what donors want. Plus, doing it that way would eliminate the need for a manual escrow process.
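    A minimal sketch of how that mechanism could look; all names, identifiers, and data shapes here are hypothetical, just to make the idea concrete, not based on any real platform:

    ```python
    # Hypothetical sketch: post-hoc per-commit donations plus a non-binding
    # "interest" signal that hints at what donors would likely fund next.
    from collections import defaultdict

    class DonationLedger:
        def __init__(self):
            self.donations = defaultdict(float)  # commit hash -> total donated
            self.interest = defaultdict(set)     # issue id -> donors signalling interest

        def donate(self, commit: str, amount: float) -> None:
            """Pay out reserved funds for work that has already been merged."""
            self.donations[commit] += amount

        def signal_interest(self, issue: str, donor: str) -> None:
            """Record a non-binding hint; no escrow, no payment guarantee."""
            self.interest[issue].add(donor)

        def popular_issues(self) -> list:
            """Rank open issues by how many donors have signalled interest."""
            return sorted(self.interest, key=lambda i: len(self.interest[i]), reverse=True)

    ledger = DonationLedger()
    ledger.donate("a1b2c3", 25.0)           # donation for a landed commit
    ledger.signal_interest("gh-42", "alice")
    ledger.signal_interest("gh-42", "bob")
    ledger.signal_interest("gh-7", "alice")
    print(ledger.popular_issues())  # → ['gh-42', 'gh-7']
    ```

    Because interest signals are non-binding, maintainers get a rough demand ranking without anyone having to administer a formal escrow.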