The actual recommended solution is to just read in a loop until you have everything.
Note that this isn’t specific to Go. Reading from stream-like data, be it TCP connections, files or whatever, always comes with the risk that not all data is present in the local buffer yet. The vast majority of read operations return the number of bytes that could actually be read, and you should call them in a loop. The same goes for write operations if you’re writing to a stream-like object, as the write buffers may be smaller than what you’re trying to write.
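A minimal sketch of what that loop looks like in Go; in real code you’d just call io.ReadFull, which does exactly this:

```go
package streams

import "io"

// readFull keeps calling Read until buf is full or the stream reports an
// error. The standard library already ships this as io.ReadFull; the loop
// is written out here only to show the pattern.
func readFull(r io.Reader, buf []byte) (int, error) {
	total := 0
	for total < len(buf) {
		n, err := r.Read(buf[total:])
		total += n
		if err != nil {
			// io.EOF here means the stream ended before buf was filled.
			return total, err
		}
	}
	return total, nil
}
```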
Looks exactly like Visual Studio 2022.
I guess the joke implies that automated (or incorrect manual) conflict resolution produces code that doesn’t compile. But that’s still not git’s fault. They should probably have merged earlier, and in the rare cases where that isn’t possible, you just have to bite the bullet and fix this stuff.
Literally the plot twist in…
Soma
Longer days. Which kind of works in an area where the sun doesn’t rise all winter and doesn’t set all summer. Until you consider having to work with anyone else: not only do you get timezone offsets that change every day, you get date offsets. After less than a month, you’re already two days off from the rest of the world.
I have a portable switch dock (it’s the size of a small power brick) and the cable that came with it broke. Finding a cable that supports the exact spec that the Switch needs to both get enough power to put it into docked mode and transmit the video signal took a few tries.
Not sure whose fault it is, but the ports on the AMD-based Framework laptops are a mess. The Framework 16 has four different specs across its six USB-C ports and the Framework 13 has three different specs across its four ports. Meanwhile, their Intel-based laptops have four identical ports with USB4, USB-PD and DisplayPort.
If posted in the right circles, this might motivate someone to get Linux running on a Spartan-6.
Wikipedia says 55. I guess the LLM is missing either South Sudan or Western Sahara.
No joke here. Large language models (which people keep calling AI) have no way of checking whether what they’re saying is correct. They are essentially just fancy text completion machines that answer the question “what word comes next?” over and over. The result looks like natural language but tends to have logical and factual problems. The screenshot shows an extreme example of this.
In general, never rely on any information an LLM gives you. It can’t look up external information that wasn’t in its training set. It can’t solve logic problems. It can’t even reliably count. It was made to give you a plausible answer, not a correct one. It’s not a librarian or a teacher, it’s an improv actor who will “yes, and” everything. LLMs will often rather make up information than admit that they don’t know. As an easy demonstration, ask ChatGPT for a list of restaurants in your home town that offer both vegan and meat-based options. More often than not, it will happily make you a list with plausible names and descriptions, but when you google them, none of the restaurants actually exist.
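To make the “what word comes next” point concrete, here’s a toy sketch in Go. The canned word table stands in for a real model, which does the same thing with learned statistics instead of a lookup; note that nothing in the loop ever checks whether the growing sentence is true, only whether it continues plausibly:

```go
package main

import (
	"fmt"
	"strings"
)

// fakeModel stands in for an LLM: given the last two words, it returns a
// "plausible" next word. A real model does this with learned statistics,
// but like this toy, it never verifies the facts it emits.
var fakeModel = map[string]string{
	"Africa has": "54",
	"has 54":     "countries.",
}

func nextToken(context []string) string {
	key := strings.Join(context[len(context)-2:], " ")
	return fakeModel[key] // "" once we run out of canned continuations
}

func main() {
	tokens := []string{"Africa", "has"}
	for {
		next := nextToken(tokens)
		if next == "" {
			break
		}
		tokens = append(tokens, next)
	}
	// Prints "Africa has 54 countries.", which sounds plausible but, if
	// Wikipedia's count of 55 is right, is wrong; the loop has no way to
	// notice.
	fmt.Println(strings.Join(tokens, " "))
}
```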
Hours? I would give up a week of lunches for them to only cancel stuff that took me hours to code. My personal record is over a year.