The catarrhine who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

  • 0 Posts
  • 513 Comments
Joined 10 months ago
Cake day: January 12th, 2024

  • I am not sure, but I believe that this political abuse is further reinforced by something not mentioned in the text:

    • Twitter is mostly short texts, lacking situational info, subtlety, signs of doubt, etc. Those require a lot of contextual info to understand accurately, but as a piece of content is retweeted, most of that context is lost.
    • plenty of people are not honest; they’re as assumptive as a brick. They make shit up, assume, and bullshit as they go, never acknowledging “hey, I don’t actually know this, it’s just a shower thought, it might be wrong”.
    • people holding minority views are dogpiled more often, and by bigger dogpiles, than people holding majority views. Kind of like the Petrie Multiplier, but with worldviews instead of sex.

    It’s a breeding ground for witch hunting: people don’t get why someone said something, they’re dishonest so they assume the why, and they bring out the pitchforks because they’ve found a witch. And that’s bound to affect anyone voicing anything slightly outside the echo chamber.

    And I think this has been going on for years; cue “the Twitter MC of the day”. It predates Musk, but after Musk took over he actually encouraged the witch hunts for his own political goals.


  • When it comes to how people feel about AI translation, there is a definite distinction between utility and craft. Few object to using AI in the same way as a dictionary, to discern meaning. But translators, of course, do much more than that. As Dawson puts it: “These writers are artists in their own right.”

    That’s basically my experience.

    LLMs are useful for translation in three situations:

    • declension/conjugation tables - faster than checking a dictionary
    • listing potential translations for a word or expression
    • a second pass of spell/grammar-proofing, to catch the issues you didn’t

    Past that, LLM-based translations are a sea of slop: they screw up the tone and style, add stuff not present in the original, repeat sentences, remove critical bits, pick unsuitable synonyms, and so on. All the bloody time.

    And if you’re handling dialogue, they will fuck it up even in shorter excerpts, by making all characters sound the same.


  • For real. Companies being extra pushy with their product always makes me picture their decision makers saying:

    “What do you mean, «we’re being too pushy»? Those are customers! They are not human beings, nor do they deserve to be treated as such! This filth is stupid and un-human-like, it can’t even follow simple orders like «consume our product»! Here we don’t appeal to its reason, we smear advertisement on its snout until it needs to open its mouth to breathe, and then we shove the product down its throat!”

    Is this accurate? Probably not. But it does feel like this, especially when they’re trying to force a product with limited use cases down everyone’s throats, even after plenty of potential customers said “eeew no”. Such as machine text and image generation.

  • Bots are parasites: they only thrive if the host population is large enough to maintain them. Once the hosts are gone, the parasites are gone too.

    In other words: botters only bot a platform when they expect human beings to see and interact with their bots’ output. As such, bots can never become the majority: once they are, botting there becomes pointless.

    That applies even to repost bots - you could have other bots upvoting the repost, but a botter won’t do that unless they can sell the account to an advertiser, and the advertiser will only buy it if it lets them “reach” an “audience” (i.e. spam humans).