Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 44 Posts
  • 2.76K Comments
Joined 2 years ago
Cake day: October 4th, 2023



  • You have all your devices attached to a console server with a serial port console set up on the serial port, and if they support accessing the BIOS via a serial console, that enabled so that you can access that remotely, right? Either a dedicated hardware console server, or some server on your network with a multiport serial card or a USB to multiport serial adapter or something like that, right? So that if networking fails on one of those other devices, you can fire up minicom or similar on the serial console server and get into the device and fix whatever’s broken?

    Oh, you don’t. Well, that’s probably okay. I mean, you probably won’t lose networking on those devices.
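    A minimal sketch of the software side, assuming ser2net's classic 3.x config format and a USB serial adapter showing up as /dev/ttyUSB0 (port number and device path are illustrative):

    ```
    # /etc/ser2net.conf: expose the serial console on TCP port 2001
    # format is PORT:STATE:TIMEOUT:DEVICE:OPTIONS
    2001:telnet:600:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
    ```

    Then telnet to port 2001 on the console server (or point minicom at it locally) and you're on the attached device's console.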


  • You have remote power management set up for the systems in your homelab, right? A server set up that you can reach to power-cycle other servers, so that if they wedge in some unusable state and you can’t be physically there, you can still reboot them? A managed/smart PDU or something like that? Something like one of these guys?

    Oh. You don’t. Well, that’s probably okay. I mean, nothing will probably go wrong and render a device in need of being forcibly rebooted when you’re physically away from home.


  • You have squid or some other forward HTTP proxy set up to share a cache among all the Web-accessing devices on your network, to minimize duplicate traffic?

    And you have a shared caching DNS server set up locally, something like BIND?

    Oh. You don’t. Well, that’s probably okay. I mean, it probably doesn’t matter that your devices are pulling duplicate copies of data down. Not everyone can have a network that minimizes latency and avoids inefficiency across devices.
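    A minimal sketch of both, assuming squid plus unbound (a lighter-weight caching resolver than BIND); the file paths and the 192.168.0.0/16 LAN range are assumptions to adjust for your network:

    ```
    # /etc/squid/squid.conf: shared forward cache on port 3128
    http_port 3128
    cache_dir ufs /var/spool/squid 10000 16 256   # ~10 GB on-disk cache
    acl localnet src 192.168.0.0/16
    http_access allow localnet
    http_access deny all

    # /etc/unbound/unbound.conf.d/lan.conf: shared caching DNS
    server:
        interface: 0.0.0.0
        access-control: 192.168.0.0/16 allow
    ```

    Point clients' http_proxy at port 3128 and their resolver at the unbound host.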







  • ping spikes

    If you’re talking about the game-server time, I’d want to isolate that to the WiFi network first, to be sure that it’s not something related to the router-ISP connection or a network issue even further out. You can do something like run mtr (which does repeated traceroutes) to see at what hop the latency starts increasing. Or leave ping running pinging your router’s IP address, the first hop you see if you run a traceroute or mtr. If it’s your WiFi connection, then the latency should be spiking specifically to your router, at the first hop, and you might see packet loss. If it’s an issue further down the network, then that’s where you’ll start seeing latency increase and packet loss.

    You might need to install mtr — I don’t know whether Fedora has it installed by default.
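    As a concrete sketch (assuming a typical Linux box; GATEWAY defaults to localhost here purely for illustration, so substitute your router's address, the first hop in your traceroute/mtr output):

    ```shell
    # Ping the first hop repeatedly and watch the summary lines for
    # latency spikes and packet loss.
    GATEWAY="${GATEWAY:-127.0.0.1}"
    ping -c 5 "$GATEWAY" | tail -n 2    # min/avg/max/mdev and loss summary
    # For per-hop latency and loss in a single report:
    #   mtr -rwc 20 "$GATEWAY"          # -r report mode, -w wide, -c 20 cycles
    ```

    If the spikes show up at that first hop, the WiFi link is the likely culprit; if not, keep walking the mtr output outward.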

    Please help. I don’t want to go beneath house to run cat6. It’s dark and there are spiders.

    Honestly, I think that everyone should use wired Ethernet unless they need their device to be able to move around: it maintains lower and more-consistent network latency, provides higher bandwidth, and keeps the traffic off the air. The 2.4 GHz band is used for all sorts of other useful things, like gamepad controllers (I have a Logitech F710 that uses a proprietary 2.4 GHz protocol, and at some point, when some other 2.4 GHz device showed up, it caused loss of connectivity for a few seconds, which was immensely frustrating), and you get interference from stuff like microwaves in the same frequency range. Wired also avoids some security issues; we’ve had problems discovered with wireless protocols over the years.

    But, all right. I won’t lecture. It’s your network.

    If you think that it’s Fedora and maybe your driver is at fault, one thing you might check is your kernel logs. If the driver is hitting some kind of problem and then recovering by resetting the interface, that might cause momentary drop-outs. After it happens, take a gander at $ journalctl -krb, which will show your kernel log for the current boot in reverse order, with the most-recent stuff up top. If you see messages about your wireless driver, that’d be pretty suspicious.

    If the driver is at fault, I probably don’t have a magic fix, unless you want to try booting into an older kernel, which may still be viable; if you still have it installed and GRUB, the bootloader that Linux distros typically use, is set up to show your list of kernels at boot, then you can try choosing that older kernel and see if it works with your newer distro release and if the problem goes away. I don’t know if Fedora defaults to showing such a list or hides it behind some splash screen; it used to show it, but I haven’t used Fedora in quite some years. You might want to whack Shift or an arrow key during boot to get the boot process to stop at GRUB. If you discover that it’s a regression in the driver, I’d submit a bug report (“no problems with kernel version X, these messages and momentary loss of connectivity with kernel version Y”), which would probably help get it actually fixed in an update.
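    If the menu is hidden behind a splash screen, the knobs typically live in /etc/default/grub; a sketch assuming Fedora’s grub2 tooling (values and paths here are the common defaults, not verified against your system):

    ```
    # /etc/default/grub: show the boot menu instead of hiding it
    GRUB_TIMEOUT=5
    GRUB_TIMEOUT_STYLE=menu
    # then regenerate the config (Fedora):
    #   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    ```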

    You might also try just using a different wireless interface, like a USB WiFi adapter, and seeing if that magically makes the problem go away. Inexpensive USB adapters are maybe $10 or $15. I’d probably look for some indication that it’s a driver problem before doing that, though.


  • tal@lemmy.today to Ask Lemmy@lemmy.world · Everything and Nothing · 14 hours ago

    I would say that the Web in 2026 is much less isolated in terms of person-to-person interaction than it was in the late '90s: a lot of websites in the late 1990s were static or mostly-static, and the major rise of social media hadn’t yet happened. Much social interaction happened on non-Web platforms like IRC, Usenet, or mailing lists.

    https://en.wikipedia.org/wiki/Web_2.0

    The term “Web 2.0” was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article “Fragmented Future”:[3][20]

    “The Web we know now, which loads into a browser window in essentially static screenfuls, is only an embryo of the Web to come. The first glimmerings of Web 2.0 are beginning to appear, and we are just starting to see how that embryo might develop. The Web will be understood not as screenfuls of text and graphics but as a transport mechanism, the ether through which interactivity happens. It will […] appear on your computer screen, […] on your TV set […] your car dashboard […] your cell phone […] hand-held game machines […] maybe even your microwave oven.”

    Instead of merely reading a Web 2.0 site, a user is invited to contribute to the site’s content by commenting on published articles, or creating a user account or profile on the site, which may enable increased participation. By increasing emphasis on these already-extant capabilities, they encourage users to rely more on their browser for user interface, application software (“apps”) and file storage facilities. This has been called “network as platform” computing.[5] Major features of Web 2.0 include social networking websites, self-publishing platforms (e.g., WordPress’ easy-to-use blog and website creation tools), “tagging” (which enables users to label websites, videos or photos in some fashion), “like” buttons (which enable a user to indicate that they are pleased by online content), and social bookmarking.

    Users can provide the data and exercise some control over what they share on a Web 2.0 site.[5][28] These sites may have an “architecture of participation” that encourages users to add value to the application as they use it.[4][5] Users can add value in many ways, such as uploading their own content on blogs, consumer-evaluation platforms (e.g. Amazon and eBay), news websites (e.g. responding in the comment section), social networking services, media-sharing websites (e.g. YouTube and Instagram) and collaborative-writing projects.[29] Some scholars argue that cloud computing is an example of Web 2.0 because it is simply an implication of computing on the Internet.[30]

    Off-the-cuff, I don’t know exactly when the first Web forum software packages started spreading, but they took a while to catch on; they were much less common early on, and at first forums were custom implementations on a per-site basis.

    phpBB’s been around for a while and was popular in the “standalone Web forum” era.

    goes to look when it started out

    https://en.wikipedia.org/wiki/PhpBB

    phpBB was founded by James Atkinson as a simple UBB-like forum for his own website on June 17, 2000.

    It looks like UBB is a commercial package that dates to 1997, though it certainly didn’t meet with phpBB’s level of use, and I don’t know if I’ve ever used a UBB-based forum.


  • I don’t know the specific situation there, but traditionally if you have a military conflict going on, battle damage assessment is part of a military’s job.

    Battle damage assessment (BDA), sometimes referred to as bomb damage assessment, is the process of evaluating the physical and functional damage inflicted on a target as a result of military operations. It is a core component of combat assessment and is used to inform judgments about mission effectiveness and potential follow-on actions, including reattack recommendations.[1]

    Information on battle damage is highly valuable to the enemy and military intelligence and censors will endeavor to conceal, exaggerate or underplay the extent of damage depending on the circumstances.

    With long-range weapons — which is what Iran is using against UAE targets — it can be hard to know whether or not you’re actually hitting something. You need some sort of reconnaissance platform or a physical person to go out and take a look. So in general, a defending military would rather not permit an attacking military to know what has actually been hit. If the attack missed, then they don’t want the attacking military to know, so that they can fire another at the target, for example. And if there are accuracy issues or jamming or other things going on, they don’t want the attacking military to know about that. If the attacking military is defeating jamming efforts or has resolved accuracy issues or similar, they also don’t want the attacking military to know about that. They’re going to want their attacker to be as blind as they can keep them, to deny them a useful battle damage assessment.

    In one extreme case of this, the UK, in World War II, had Nazi Germany fire V-2 rockets, early ballistic missiles, at them. Guidance systems at the time were primitive, limiting accuracy, and the British conducted an extensive disinformation effort, mis-reporting where rockets were hitting and seeking to prevent Germany from obtaining access to accurate information. This led to Germany consistently shooting V-2s at the wrong place, because they were trusting that bad information for their battle damage assessment.

    https://en.wikipedia.org/wiki/V-2_rocket#Direct_attack_and_disinformation

    The only effective defences against the V-2 campaign were to destroy the launch infrastructure—expensive in terms of bomber resources and casualties—or to cause the Germans to aim at the wrong place by disinformation. The British were able to convince the Germans to direct V-1s and V-2s aimed at London to less populated areas east of the city. This was done by sending deceptive reports on the sites hit and damage caused via the German espionage network in Britain, which was secretly controlled by the British (the Double-Cross System).[79]

    EDIT: Another WW2 example that comes to mind: for some time, Japanese warships had been trying to depth-charge American submarines, but using an incorrect depth. A congressman released information to the public about this fact. That information then made its way to Japan, at which point the Japanese military corrected their weapon use.

    https://en.wikipedia.org/wiki/Andrew_J._May

    May was responsible for the release of highly classified military information during World War II known as the May Incident.[6] U.S. submarines had been conducting a successful undersea war against Japanese shipping during World War II, frequently escaping their anti-submarine depth charge attacks.[6][7] May revealed the deficiencies of Japanese depth-charge tactics in a press conference held in June 1943 on his return from a war zone junket.[6][7] At this press conference, he revealed the highly sensitive fact that American submarines had a high survival rate because Japanese depth charges were exploding at too shallow a depth.[6][7] Various press associations sent this leaked news story over their wires and many newspapers published it, including one in Honolulu, Hawaii.[6][7]

    After the news became public, Japanese naval antisubmarine forces began adjusting their depth charges to explode at a greater depth.[6][7] Vice Admiral Charles A. Lockwood, commander of the U.S. submarine fleet in the Pacific, estimated that May’s security breach cost the United States Navy as many as 10 submarines and 800 crewmen killed in action.[6][7] He said, “I hear Congressman May said the Jap depth charges are not set deep enough. He would be pleased to know that the Japs set them deeper now.”[6][7]


  • One minor thing that I am not super enthusiastic about when it comes to emojis is that they are typically colored. This has two drawbacks:

    • In a number of environments, it’s possible to set text color. This is only really practical because most characters are not colored, so the color can be variable. If we start introducing colored characters in general, that stops working. It also has at least the potential to create issues for colorblind users (though we could potentially also create workarounds).

    • It means that onscreen text may not be practical to present well in a monochrome environment, like a monochrome e-ink display or printed on paper. Traditionally, if you can see text onscreen, you can print it and it’s still legible on a monochrome printer. But, for example, there’s U+1FA75, LIGHT BLUE HEART: 🩵, and U+1FA77, PINK HEART: 🩷. Most non-sight-impaired users can probably distinguish between the two on a color display, but I suspect that a situation where one was using it to write text — maybe using blue to indicate male and pink to indicate female or something like that — wouldn’t be very easy to distinguish after being printed on a monochrome printer.

    Both of these are kind of minor complaints. In practice, I just don’t see a whole lot of emoji use, and haven’t run into practical issues. But I do think that if we wanted to adopt a writing system that incorporated color, I’d probably favor a more-considered approach than just throwing whatever someone happens to propose in.

    One other minor issue is that some emojis have political or social weight that get people upset. For example, you have U+1F52B, PISTOL.

    Some people felt that people shouldn’t be able to portray an actual pistol, so vendors changed the glyph to a water pistol. I personally think that the whole debate is kind of absurd, because one can just write “pistol”, but it clearly has been a topic of political infighting.

    https://en.wikipedia.org/wiki/Pistol_emoji

    The pistol emoji (U+1F52B 🔫 PISTOL) is an emoji defined by the Unicode Consortium as depicting a “handgun” or “revolver”.[1]

    It was historically displayed as a handgun on most computers (although Google once used a blunderbuss);[2] as early as 2013, Microsoft chose to replace the glyph with a ray gun,[3] and in 2016 Apple replaced their glyph with a water pistol.[4] Since then, its rendering has been inconsistent across vendors. Microsoft changed its glyph back to an icon of a revolver during 2016 and 2017, before switching it to a (differently-styled) ray gun; in 2018, Google and Samsung changed their devices’ rendering of the emoji to a water pistol,[2] as well as the websites Facebook and Twitter. In 2024, Twitter (by then known as “X”) chose to restore the glyph of a handgun, although instead of a revolver it used a semi-automatic M1911.[5]

    Based on the above, it looks like Elon Musk moved things back to being a classic American handgun.

    But, point is, you have this political spat and platform inconsistency going on (where the imparted meaning of someone’s text might reasonably change based on how the Unicode characters are portrayed) where it’s not at all clear to me that anyone ever had a particular desire to embed a pistol in text in the first place, be it a water gun or semiautomatic pistol or revolver or whatever.

    I’ve seen people arguing about the skin color of characters in various emojis. In text, I can just say “sad person” without attaching additional information, but if I have a visual representation, then I have to choose things like the skin color.

    It just seems like room for friction that doesn’t really need to exist.

    Oh, and another point — one of the things that initially seemed to me like a great application for Unicode emojis is flags, because in theory, those are designed to let one identify a country at a distance, and often people look at lists of countries. But…there are actually a lot of flags that look really similar to each other or are even identical, like the flag of Romania (U+1F1F7, U+1F1F4: 🇷🇴) and the flag of Chad (U+1F1F9, U+1F1E9: 🇹🇩). I remember some Romanians a bit back poking fun at some Romanian politician who had inadvertently used the Chad flag in some important post on social media. I’d imagine that it’s more-obnoxious if someone decides to do it in, say, a country-selection menu. Like, in most cases, I think that it’s probably preferable to use ISO country codes rather than flag emojis if you really need a short form, or to just write out the name of the country fully.


  • It’s 2026, and I still have a hard time seeing major gains from emojis.

    They are maybe useful for something like Twitter, where people had artificially-constrained message lengths, and wanted to pack as much information into as few characters as possible, but that seems like a pretty niche use.

    I get that conversational text has reasons to want to add information that normally comes out-of-band, via tone or expression, but it’s not clear to me that that requires emojis. I’ll use an emoticon for that. There’s a pretty small number of emotions that one really needs to indicate, so even if one wants to use an emoji, there just aren’t all that many that one needs.

    I think that there’s a case for a heart emoji. I certainly have seen people embedding hearts in handwritten text, so people do want to do that. I don’t, but I think that providing a way to do the same thing in typed text as one does in handwritten text is certainly reasonable.

    But…the overwhelming majority of emojis just don’t have an analog in handwritten text.

    On phones using on-screen keyboards, where text entry is slower, it might be faster to pick out an emoji than to write an associated word…but if that’s the goal, present-day phone on-screen keyboards also typically do predictive text, which is a more-general solution to the problem.

    I think that having Unicode include characters for various languages is nice; it lets one embed quotes from various languages together. Line-drawing characters are convenient for monospace-font text stuff. I like having some typographic characters, like printer’s quotes, em- and en-dashes, and superscript and subscript characters.

    But I’ve just never really benefited much from emojis. They don’t really hurt much, but I don’t feel that they’ve provided much of a benefit, either.




  • You mean for the Linux kernel specifically? Linux distributions?

    For software in general — not Linux-specific — updates fix bugs (some of which might be security-related) and add features.

    That may be too general to be useful, but the question doesn’t have much by way of specifics.

    I feel like maybe more context would make for better answers. Like, if what you’re asking is “I have a limited network connection, and I’d like to reduce or eliminate downloading of updates” or “I have a system that I don’t want to reboot; do I need to apply updates”, that might affect the answer.

    EDIT: Okay, you updated your post, and it sounds like it’s the Ubuntu distribution and the new release frequency that’s an issue.

    Well, if you want fewer updates and are otherwise fine with Ubuntu, you could try using Ubuntu LTS.

    https://ubuntu.com/about/release-cycle

    LTS releases

    LTS are released every two years and receive 5 years of standard security maintenance.

    LTS releases are the go-to choice for users who value stability and extended support. These versions are security maintained for 5 years with CVE patches for packages in the Main repository. They are recommended for production environments, enterprises, and long-term projects.

    You’ll still get security updates, but you won’t see new releases on a six-month basis.

    It can be nice to have a relatively-new kernel, as it means support for the latest hardware (like, say you have a desktop with a new video card), but if you have some system that’s working and you don’t especially want it to change, a lower frequency might be preferable for you.

    I use Debian myself, and Debian stable tends to have less-frequent new releases: you’ll normally get a new stable release every two years, with inter-release updates generally just being bugfixes.

    https://www.debian.org/releases/

    Debian announces its new stable release on a regular basis. The Debian release life cycle encompasses five years: the first three years of full support followed by two years of Long Term Support (LTS).

    EDIT2: If you already have Ubuntu on your system and only want LTS updates, it looks like this is how one selects notification of new LTS releases or all releases.

    https://ubuntu.com/tutorials/upgrading-ubuntu-desktop#5-optional-upgrading-to-interim-releases

    Navigate to the ‘Updates’ tab and change the menu option titled ‘Notify me of a new Ubuntu version’ to For any new version.
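    If you’d rather do it from a terminal, the same setting lives in a small config file; this is the standard update-manager mechanism (the path and Prompt values are Ubuntu’s, to the best of my knowledge):

    ```
    # /etc/update-manager/release-upgrades
    [DEFAULT]
    Prompt=lts    # "normal" would prompt for every six-month release
    ```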

    EDIT3: If you aren’t currently using LTS, I’d wait until an LTS release to switch, so that you aren’t left on a system that isn’t getting updates. Looking at that Ubuntu release page, it looks like 26.04 is an LTS release. The Ubuntu versioning scheme refers to the year and month (26.04 being the fourth month of 2026). It’s the third month of 2026 right now, so the next release will be an LTS one, and switching over to LTS notifications now is probably a good time. You’ll get a release update notification next month; do that update, and you’ll be on LTS and won’t receive another notification for the next two years.