I’d still keep it, even though it doesn’t appear to be a particularly rare CPU (like a 5950X or similar). Might become worth a little bit in a few years.
Natanox
Lemmy account of natanox@chaos.social
- 0 Posts
- 25 Comments
Rust: Borrow checker got mad at you for asking
(I’d assume)
And switch cases (called match statements) are there as well.
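For reference, a minimal sketch of what Python’s match statement looks like (structural pattern matching, available since 3.10); the example values are made up for illustration:

```python
# Minimal sketch of Python's match statement (3.10+), roughly a switch/case
# that can also destructure values.
def describe(value):
    match value:
        case 0:
            return "zero"
        case int() if value < 0:
            return "negative integer"
        case [x, y]:
            return f"pair of {x} and {y}"  # destructures a two-element sequence
        case _:
            return "something else"

print(describe(0))        # zero
print(describe(-5))       # negative integer
print(describe([1, 2]))   # pair of 1 and 2
print(describe("hello"))  # something else (str is not matched as a sequence)
```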
I use lambdas all the time to shovel GTK signal emissions from worker threads into GLib.idle_add in a single line, works as you’d expect.
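A minimal sketch of that pattern, assuming PyGObject; the Worker class, the “work-done” signal name and do_expensive_work are made up for illustration:

```python
import threading
from gi.repository import GLib, GObject

class Worker(GObject.Object):
    # Hypothetical custom signal carrying the result back to the UI side.
    __gsignals__ = {
        "work-done": (GObject.SignalFlags.RUN_FIRST, None, (str,)),
    }

    def start(self):
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        result = do_expensive_work()  # placeholder for the real work off the main thread
        # The single line in question: hand the emission over to the GLib main loop.
        # The lambda returns None, so the idle callback runs exactly once.
        GLib.idle_add(lambda: self.emit("work-done", result))

def do_expensive_work():
    return "done"
```

On the GTK side you connect to “work-done” as usual; the handlers then run on the main loop instead of the worker thread.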
Previous commenters probably haven’t looked at Python in a really long time.
Part of me wants to argue that “experienced devs” can’t seriously still ask ChatGPT for syntax correction. Like, I do that with Codestral as I’m learning Python (despite the occasional errors it’s still so much better than abstract docs…), but that should just be a learning thing… or is it because nowadays a single codebase often consists of 5+ languages and devs are expected to constantly learn all the new “hot shit”, which obviously won’t make anyone an expert in one specific language like back when there just weren’t as many?
No wonder there are some older developers who defend Lisp so passionately. Sounds like a dream to work with once you got the hang of it.
Natanox@discuss.tchncs.de to Programmer Humor@lemmy.ml • certificate_of_quality.png · 2 months ago
Interesting moral question here:
Given that the huge problems are power consumption, the morals behind training data and blind trust in AI slop, do you think there is a window of acceptable usage for LLMs as a locally run (on existing hardware) coding assistant (not an executive tool that does it for you) to help with work on FOSS projects (giving back to where it has taken from), with no money flowing to any company (therefore not bolstering that commercial ecosystem)? While this obviously doesn’t address the energy consumption during training, it may alleviate the moral issues to the point where people start to think about it as an acceptable tool.
To make it abundantly clear, this is neither about “vibe coding”, where it writes the code for you (badly), nor about any other bullshit like generative “art”. It’s about the question of humble, educated use of a potentially useful tool in a way that might be morally acceptable.
Natanox@discuss.tchncs.de to Selfhosted@lemmy.world • Beelink ME mini is a NAS with an Intel N200 processor and support for up to 6 SSDs · 2 months ago
To my knowledge it isn’t them constantly running that wears them out most, but spinning up and down very often. Weren’t NAS drives designed to never spin down for that very reason?
Natanox@discuss.tchncs.de to Selfhosted@lemmy.world • Beelink ME mini is a NAS with an Intel N200 processor and support for up to 6 SSDs · 2 months ago
Well, they arguably can also be used as one big long-term storage. Not sure who’d need to save so much data for a long time, but there surely will be at least some people who do and buy the “modern solution” over old HDDs thinking they’re better in general. As the “family backup” for example, or as a cold storage solution in facilities that can be quickly accessed if needed.
Read somewhere (perhaps in the comments of the article above) about a professor who used SSDs to “permanently” store important data for a few years. Well, wasn’t that permanent…
Natanox@discuss.tchncs.de to Selfhosted@lemmy.world • Beelink ME mini is a NAS with an Intel N200 processor and support for up to 6 SSDs · 2 months ago
> More reliable
Heavily depends. If you want to use it as long-term cold storage you absolutely should not use SSDs; they lose data when left unpowered for too long. While HDDs are also not perfect in retaining data forever, they won’t fail as quickly when left on a shelf.
Yeah… I’m quickly reaching the point where I’m faster thinking through and writing the Python code myself than even writing the prompts. Let alone the additional time going through the generated stuff to adjust and fix things.
It’s good for getting a grip on syntax and terminology, and as an overly fancy (but very fast) search bot that can (mostly) apply your question to the very code that’s in front of you, at least in popular languages. But once you’ve got that stuff in your head… I don’t think I’ll bother too much in the future. There surely are tons of useful things you can do with multimodal LLMs, coding on its own properly just isn’t one of them. At least not with the current generation.
Natanox@discuss.tchncs.de to Selfhosted@lemmy.world • Postiz v1.39.2 - Open-source social media scheduling tool, Introducing MCP. · 3 months ago
I try really hard to like your project given it’s open source, the only proper one in the social media manager category that’s self-hostable at that… but my god, this whole generative AI stuff combined with social media and marketing sounds like the epitome of sloppy shit.
Depends on the language, I’d assume. The last thing I heard was that the current Codestral version works best with Python, for example.
Yeah, same with Codestral. You have to tell it what to do very specifically, and once it gets stuck somewhere you have to move to a new session to get rid of the history junk.
Both it and ChatGPT also repeatedly told me to save binary data I wanted to store in memory as a list, with every 1024 bytes being a new entry… in the form of a string (supposedly). And the worst thing is that, given the way it extracted that data later on, this unholy implementation from hell would’ve probably even worked up to a certain point.
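For context, roughly what that suggestion boiled down to, next to the boring way of just keeping it as bytes (a sketch; the payload is made up for illustration):

```python
import io

payload = bytes(range(256)) * 8  # stand-in for the binary data in question

# Roughly what the chatbots kept suggesting: chop the data into a list,
# one entry per 1024 bytes, each entry stored as a string.
chunks = [payload[i:i + 1024].decode("latin-1") for i in range(0, len(payload), 1024)]
rebuilt = b"".join(chunk.encode("latin-1") for chunk in chunks)
assert rebuilt == payload  # round-trips, so it "works"… up to a certain point

# The boring way: just keep it as bytes (or a BytesIO if a file-like object is needed).
buffer = io.BytesIO(payload)
assert buffer.getvalue() == payload
```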
Natanox@discuss.tchncs.de to Selfhosted@lemmy.world • How to use GPUs over multiple computers for local AI? · 3 months ago
Depends on which GPU you compare it with, what model you use, what kind of RAM it has to work with, et cetera. NPUs are purpose-built chips after all. Unfortunately the whole tech is still very young, so we’ll have to wait for stuff like ollama to introduce native support for an apples-to-apples comparison. The raw numbers do look promising, however.
Natanox@discuss.tchncs.de to Selfhosted@lemmy.world • How to use GPUs over multiple computers for local AI? · 3 months ago
Maybe take a look at systems with the newer AMD SoCs first. They utilize the system’s RAM and come with a proper NPU; once ollama or mistral.rs support those, they might give you sufficient performance for your needs at way lower cost (incl. power consumption). Depending on how NPU support gets implemented it might even become possible to use NPU and GPU in tandem, which would probably enable pretty powerful models to run on consumer-grade hardware at reasonable speed.
They would run at x8 speed each. That shouldn’t be too much of a bottleneck though; I don’t expect the performance to suffer noticeably more than 5% from this. Annoying, but getting a CPU + board with 32 lanes or more would throw off the price/performance ratio.
I’m currently looking into this as well. As far as my investigation has gone, I’ll probably go for 2x AMD Instinct MI50. Each of them offers performance equivalent to or slightly higher than a P40, however usually with only 16GB of VRAM (if you’re super lucky you might get one with 32GB; those are usually not labeled as such though, probably binned MI60s). With two of them you get 32GB of VRAM and quite a lot of performance for, right now, 200€ per card. Alternatively you should be able to run quantized models on a single card as well.
If you don’t mind running ROCm instead of CUDA this seems like good bang for the buck. Alternatively you might look into AMD’s new line of “AI” SoCs (for example Framework’s Desktop computer). They seem to be really good as well, and depending on your use case might be more useful than an equally priced 4090.
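As a quick sanity check that both cards are actually picked up, something like this should work with the ROCm build of PyTorch (a sketch; assumes the ROCm wheel of torch is installed, ollama does its own device discovery):

```python
# Sketch: verify the ROCm build of PyTorch sees both MI50s and how much VRAM each exposes.
import torch

# The ROCm build reuses the torch.cuda API for AMD GPUs.
print("ROCm/HIP available:", torch.cuda.is_available())
for idx in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(idx)
    print(f"GPU {idx}: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
```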
Natanox@discuss.tchncs.de to Programmer Humor@lemmy.ml • What’s stopping you from writing your Rust like this? · 4 months ago
For a moment I wondered why the Rust code was so much more readable than I remembered.
This would make a nice VS Codium plugin to deal with all the visual clutter. I actually like this.
Instead they’ll become curiosities leading down rabbit holes to understand why and how they happened.
Rarer than an i5-8600, and it’ll probably become rather rare as time moves on.