Take your phone number. Now add/subtract 1. Those are your number neighbors.
Re: To join Facebook these days, one must record a video selfie (Mildly Infuriating@lemmy.world)
Can’t really be a bit of both, because they can’t confirm shit if they don’t know what you look like in the first place. It could be to confirm that you are human (and maybe that you don’t already have an account), but they can’t confirm your “identity”.
Stop using timezones? Then every day would actually be two weekdays, because at some seemingly random point during the day the date would switch. “Let’s meet next Monday” wouldn’t even specify a single day anymore in most countries. And there is no real benefit to dropping timezones, just downsides. Yes, you’d know what time it is anywhere, but you still wouldn’t know if people there are awake or not, and you’d have to either look it up or remember it - the same thing you have to do now.
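To illustrate the date switch (a minimal sketch, assuming Python’s standard zoneinfo module and an arbitrary date): in a hypothetical timezone-free world everyone would read the universal clock, and one ordinary waking day in Tokyo already spans two dates, and therefore two weekdays, on that clock.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

tokyo = ZoneInfo("Asia/Tokyo")  # UTC+9, no DST

# One ordinary waking day in Tokyo, expressed in local time:
morning = datetime(2024, 6, 3, 7, 0, tzinfo=tokyo)   # Monday 07:00 local
evening = datetime(2024, 6, 3, 23, 0, tzinfo=tokyo)  # Monday 23:00 local

# The same two moments on a single universal clock:
fmt = "%A %Y-%m-%d %H:%M UTC"
print(morning.astimezone(timezone.utc).strftime(fmt))  # Sunday 2024-06-02 22:00 UTC
print(evening.astimezone(timezone.utc).strftime(fmt))  # Monday 2024-06-03 14:00 UTC
```

The universal date flips at 09:00 local time, right in the middle of the morning, so “Monday” would mean two different stretches of waking time depending on where you live.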
I’ve used KeePass for ages, and about two years ago I switched to a self-hosted Vaultwarden instance, and I still think it was a great choice. So if you have some Docker experience and a little VM lying around, you could give Vaultwarden/Bitwarden a try.
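For reference, a minimal docker-compose sketch (assuming the official vaultwarden/server image; the domain and host port are placeholders, and in practice you’d put an HTTPS reverse proxy in front because the Bitwarden clients require TLS):

```yaml
# docker-compose.yml - minimal Vaultwarden setup (sketch, adjust to taste)
services:
  vaultwarden:
    image: vaultwarden/server:latest
    container_name: vaultwarden
    restart: unless-stopped
    environment:
      DOMAIN: "https://vault.example.com"  # placeholder; set your real domain
    volumes:
      - ./vw-data:/data  # persistent storage for the vault database
    ports:
      - "8080:80"        # web vault reachable on localhost:8080
```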
It’s not about mythology or Mesopotamia. Those numbers are called highly composite numbers (HCN) and superior highly composite numbers (SHCN) and are great for doing calculations (especially divisions) in your head because they have a lot of factors. That’s why they were used everywhere before calculators were a thing.
Not true. You have math to thank for that, and there is a good reason for numbers like that (and why the Babylonians used them). They are very useful for doing calculations in your head, especially division, because they have a lot of factors. The concept is called highly composite numbers (HCN) and superior highly composite numbers (SHCN); they are practically “anti-primes”. That’s why base-6 or base-12 is objectively a better number system than base-10, but it’s pretty much too late to switch now.
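A quick brute-force illustration (plain Python; the function names are my own): a number is highly composite if it has more divisors than every smaller number.

```python
def divisor_count(n: int) -> int:
    """Count the divisors of n by trial division."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def highly_composite(limit: int) -> list[int]:
    """Numbers up to limit with more divisors than any smaller number."""
    result, best = [], 0
    for n in range(1, limit + 1):
        count = divisor_count(n)
        if count > best:
            result.append(n)
            best = count
    return result

print(highly_composite(100))  # [1, 2, 4, 6, 12, 24, 36, 48, 60]
```

12 and 60 both show up, which is exactly why dozens, hours, and minutes divide so cleanly by 2, 3, 4, and 6.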
There is more truth to that than to OP’s claim.
Probably because a lot of them (especially _iel) use literal translations that nobody in their right mind would use in everyday conversation. Like in this post with “michmichs”.
No, we don’t say that, and I was completely confused until I read the English line.
Re: WhatsApp provides no cryptographic management for group messages (Technology@lemmy.world)
It’s not called Meta data by accident 🤣
Shut that section down and ground the wires. Not really that dangerous. It’s only dangerous if you don’t follow protocol.
Re: AI models routinely lie when honesty conflicts with their goals (Technology@lemmy.world)
“Amazingly” fast for biochemistry, but insanely slow compared to electrical signals, chips, and computers. But to be fair, the energy usage really is almost magic.
But by that definition, passing the Turing test might be the same as superhuman intelligence. There are things that humans can do but computers can’t, yet there is nothing a computer can do that it still does slower than a human. That’s because our biological brains are insanely slow compared to computers. So once a computer is as good or as accurate as a human at a task, it’s almost instantly superhuman at that task because of its speed. So if we had something that’s as smart as humans (which is practically implied, because it’s indistinguishable), we would have superhuman intelligence, because it’s as smart as humans but (numbers made up) can do 10 days of cognitive human work in just 10 minutes.
AI isn’t even trained to mimic human social behavior. Current models are all trained by example, so they produce output that would score high in their training process. We don’t even know (and it’s likely not even expressible in language) what their goals are, but (anthropomorphised) they are probably more like “answer something that the humans who designed and oversaw the training process would approve of”.
To be fair, the Turing test is a moving goalpost, because if you know that such systems exist, you’d probe them differently. I’m pretty sure that even the first public GPT release would have fooled Alan Turing personally, so I think it’s fair to say that these systems have passed the test at least since that point.
We don’t know how to train them to be “truthful” or make that part of their goal(s). Almost every AI we train is trained by example, so we often don’t even know exactly what the goal is, because it’s implied in the training. In a way, AI “goals” are pretty fuzzy because of the complexity. A tiny bit like in real nervous systems, where you can’t just state in language what the “goals” of a person or animal are.
The “may” carries a lot of weight, so it probably depends. The way US law works is pretty weird IMHO and the reason for many such disclaimers/waivers: “Objects in mirror are closer than they appear”, “Contents may be hot”, etc.