• 0 Posts
  • 209 Comments
Joined 1 year ago
Cake day: July 1st, 2023





  • I don’t know the specifics, but there is such a thing as keyboard rollover. MOST KEYBOARDS—whoa, sorry. Most keyboards support up to 6 keys at once, but it might be that they’re still divided into sections with lower rollover numbers, such as the arrow keys and space. Some “gaming” keyboards support up to 25 though, so your best bet if this bothers you is just upgrading to a spiffier typer.
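    If you want to see where your board tops out, a quick-and-dirty tester is easy to throw together. Here’s a rough sketch in Python (pygame is just my pick here, nothing special about it): hold down more and more keys and watch when the count stops climbing.

```python
# Quick-and-dirty rollover tester (a sketch, assuming pygame is installed).
# Hold down keys and see how many the keyboard actually reports at once.
import pygame

pygame.init()
pygame.display.set_mode((320, 120))      # the window needs keyboard focus
pygame.display.set_caption("rollover test")

held = set()                             # keys currently reported as down
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.KEYDOWN:
            held.add(event.key)
        elif event.type == pygame.KEYUP:
            held.discard(event.key)
    print(f"\rkeys registered: {len(held)}   ", end="")

pygame.quit()
```

    If the number stalls at 6 no matter how many fingers you add, that’s the classic 6-key limit; if it stalls lower for certain clusters, you’ve probably found one of those shared-zone sections.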



  • petrol_sniff_king@lemmy.blahaj.zone to Science Memes@mander.xyz · Calculatable · 14 days ago

    There’s actually a neat reason for this! The way that simple keys work, like those in a calculator, is by connecting a circuit and letting a small amount of voltage through. This is usually fine because the keypad is broken up into different rollover zones, which is how multi-key input works. But if you find and press keys that are all in the same zone, their voltages add up and can actually overwhelm the little CPU in there. Really old calculators were really easy to break because designers never thought users would need to press keys like division, multiplication, subtract, add, square and square root all at once, which, as you can imagine, caused a massive power spike.

    Now, is any of this true? I have no idea dude, your calculator was probably fucking haunted or something. I’d have taken that thing to a seance with a ouija board immediately.












  • Ah, but here we have to get pedantic a little bit: producing an AGI through current known methods is intractable.

    I didn’t quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc, whatever. But yeah, this divide between solvable and “unsolvable” shifts if we ever break NP-hard and have to define some new NP-super-hard category. This does feel like the piece I was missing. Or a piece, anyway.

    e.g. humans don’t fit the definition either.

    I did think about this, and the only reason I reject it is that “human-like or -level” matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn’t have to mean that we aren’t still below some curve, of course, but I do struggle to imagine how our own complexity wouldn’t still be too large to solve, AGI or not.
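    To put a rough number on what I mean by “too large to solve” (purely my own back-of-envelope, not anything from the paper): if there are n distinguishable situations and b possible behaviors, there are b^n candidate situation-to-behavior mappings, which gets silly fast.

```python
# Back-of-envelope only: how fast the space of situation -> behavior mappings grows.
# The sizes here are invented for illustration; the paper's actual setup is richer.
def mapping_space_size(n_situations: int, n_behaviors: int) -> int:
    """Each situation gets one behavior, so there are n_behaviors ** n_situations mappings."""
    return n_behaviors ** n_situations

for n in (10, 50, 100):
    print(f"n = {n:3d} situations, 5 behaviors -> {mapping_space_size(n, 5):.3e} mappings")
```

    Even modest n puts exhaustive search completely out of reach, and whatever n is for us, it isn’t modest.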


    Anyway, the main reason I’m replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.



  • Hey! Just asking you because I’m not sure where else to direct this energy at the moment.

    I spent a while trying to understand the argument this paper was making, and for the most part I think I’ve got it. But there’s a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:

    If producing an AGI is intractable, why does the human meat-brain exist?

    Evolution “may be thought of” as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the “AI” it produces matches this distribution of successful behaviors is yada yada darwinism. The answer we care about, because this is the inspiration I imagine AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
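    To make my hand-waving a bit more concrete (a toy of my own, not the paper’s formalism and obviously not real evolution): picture it as repeatedly proposing situation-to-behavior mappings and keeping whichever one survives a crude fitness check.

```python
import random

# Toy "evolution as a sampler": propose random situation -> behavior mappings,
# keep mutations that don't hurt. All sizes and the scoring are invented for illustration.
N_SITUATIONS = 20
BEHAVIORS = list(range(5))
TARGET = [random.choice(BEHAVIORS) for _ in range(N_SITUATIONS)]  # the "successful" behavior per situation

def fitness(candidate):
    """Count situations where the candidate picks the 'successful' behavior."""
    return sum(c == t for c, t in zip(candidate, TARGET))

best = [random.choice(BEHAVIORS) for _ in range(N_SITUATIONS)]
for _ in range(10_000):                       # "generations"
    challenger = list(best)
    challenger[random.randrange(N_SITUATIONS)] = random.choice(BEHAVIORS)
    if fitness(challenger) >= fitness(best):  # yada yada darwinism
        best = challenger

print(f"matched {fitness(best)}/{N_SITUATIONS} situations")
```

    The catch, I suspect, is that this only looks quick because my toy target is tiny and fully known, which real evolution never gets.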

    The question is, where does this line of thinking fail?

    Going by the proof, it should be one of:

    • That evolution is an intractable method. 60 million years is a long time, but it still feels quite short for this answer.
    • Something about it doesn’t fit within this computational paradigm. That is, I’m stretching the definition.
    • The language “no better than chance” for option 2 is actually more significant than I’m thinking. Evolution is all chance. But is our existence really just extreme luck? I know that it is, but this answer is really unsatisfying.

    I’m not sure how to formalize any of this, though.

    The thought that we could “encode all of biological evolution into a program of at most size K” did make me laugh.