Hey thanks, both AW1 and Control are games I might pick up again. Didn’t hate them, they just didn’t really hook me.
I found the pacing of the first few chapters of the first Alan Wake sublime in terms of storytelling. The gameplay, on the other hand, frustrated me; it quickly became monotonous and tedious. So I only played about a third of the game, much as I liked the story and was curious to see where it went. Control, meanwhile, left me completely unmoved. So I’ve been hesitant to pick up the second Alan Wake, basically because I didn’t much like the first iteration, or Control, which I’ve heard is somehow connected. Maybe I’m missing out. Or maybe these games appeal only to a certain audience.
Never used this. Never cared for it 🤷🏻‍♂️
Do you use the integrated AI in new versions of Excel or do you ask ChatGPT or some other AI to write it out for you?
Much of this looks good. But some of it looks painfully mid/generic. Might still be fun.
Well with an average in the 80s on metacritic one would assume it’s a very decent game. But user reviews tend to be a lot harsher indeed.
Thanks for the review. Disappointing to be sure. I was hoping to play it at some point and that it wouldn’t suck as much as people say it does. Or that they would turn it around in time.
Sometimes I wonder whether Starfield truly deserves all the bad publicity, or whether people are also still upset that it became an Xbox exclusive and that is clouding their judgement. I know it affects me, for one. I got a PS5 for gaming and I’m automatically much less interested in anything that isn’t on the platform. And I was of course very disappointed when Microsoft outright bought all these huge IPs and made them exclusive to Xbox.
I’m waiting for the ultimate edition that will include everything here
This looks more like a scythe that doubles as a weapon to me
This thread is like a lesson in the importance of x- and y-axis ranges in time series plots
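A toy sketch of the point, using only the standard library: the exact same series reads as volatile with a tight y-axis and as flat with a zero-based one. The `ascii_plot` helper and the numbers are made up for illustration.

```python
def ascii_plot(values, y_min, y_max, height=5):
    """Render each value as a bar of '#' scaled within [y_min, y_max]."""
    rows = []
    for v in values:
        # clamp the value into the chosen axis range, then scale to 0..height
        frac = (min(max(v, y_min), y_max) - y_min) / (y_max - y_min)
        rows.append("#" * round(frac * height))
    return rows

data = [100, 101, 102, 101, 103]  # ~3% total variation

# Tight y-axis (100..103): every wiggle looks like a dramatic swing
print(ascii_plot(data, y_min=100, y_max=103))

# Zero-based y-axis (0..103): the same series looks essentially flat
print(ascii_plot(data, y_min=0, y_max=103))
```

Same data, two stories, and the only thing that changed is the axis range.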
Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.
The AI did not “decide” anything. It has no will, and no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact, we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.
Nice but paywalled for me
It was a more optimistic time, perhaps a more naive one depending on your perspective. A time when most people felt that crowds were wise and the truth would surface spontaneously. When the internet would help us spread knowledge and democracy and none of the bad things. When conspiracy theories, disinformation, outright hatred and bigotry were considered fringe phenomena that could be kept at bay. When people would point to 4chan as the worst the internet had to offer, if they even knew about it. When politicians and their voters could argue passionately without necessarily feeling that the other side were “extremists” or “fascists” who would literally “destroy our country” if they won an election.
The world is cracking at the seams lately, and this leads more people to wanna put the brakes on the internet. Liberals especially, witnessing with horror the surge of the far right and attributing it in part to the internet’s unmatched ability to amplify anything, any voice, any shitty little take, no matter how extreme, misinformed, or bigoted. Most likely misinformed and bigoted with someone like Musk at the helm, the thinking goes. In short, liberals have shifted from the exuberant naïveté of the past to protection mode, trying to stem the tide of right-wing populism and perhaps, ultimately, fascism. And thus they come off as overbearing censors to anyone who doesn’t understand why they do what they do, or who is still optimistic that a lack of censorship will only lead to good things.
Freedom only works with a social contract in place, some consensus, some ground truths about the world that we can all agree on. Or that a solid, relatively stable majority at least can agree on. When that starts to break down, freedom to say and do whatever you want online may in fact bring the downfall sooner by stoking the fires of division. Of course the likes of Musk probably do think they are fighting the good fight and championing free speech, but he seems to be shifting to the right politically, and fighting less for some ideal of freedom and democracy than for his presumed right to shape the world in his image and grow his business empire unchecked. The likes of him, businessmen with nearly unchecked power and ultimately more concern for their businesses and personal aspirations than for democracy, are probably going to become a bigger threat to our freedoms than the government of Australia. Maybe. Probably.
I personally do think that liberals have often gone overboard in their speech policing zeal, but on the other hand understand why they do what they do. Policing the internet seems like a much easier alternative than actually addressing all the major, sometimes seemingly existential socioeconomic challenges liberal democracies face today. The latter would deprive right wing populists and extremists of much of their influence, but is of course way, way harder than policing speech.
Well, it is one thing to automate a repetitive task in your job, and quite another to eliminate entire professions. The latter has serious ramifications and shouldn’t be taken lightly. What you call “menial bullshit” is the entire livelihood and profession of quite a few people, taxi drivers for one. And the means to make some extra cash for others. Also a stepping stone for immigrants who may not have the skills or means to get better jobs but are thus able to make a living legally. And sometimes the refuge of white-collar workers down on their luck. What are all these people going to do when taxi driving is relegated to robots? Will there be (less menial) alternatives? Will these offer a livable wage? Or will such people end up long-term unemployed? Will the state have enough cash to support them and help them upskill, or whatever else is needed to survive and prosper?
A technological utopia is a promise from the 1950s. Hasn’t been realized yet. Isn’t on the horizon anytime soon. Careful that in dreaming up utopias we don’t build dystopias.
Though I am not a lawyer by training, I have been involved in such debates personally and professionally for many years. This post is unfortunately misguided. Copyright law makes concessions for education and creativity, including criticism and satire, because we recognize the value of such activities for human development. Debates over the excesses of copyright in the digital age were specifically about humans finding the application of copyright to the internet and all things digital too restrictive for their educational, creative, and yes, also their entertainment needs. So any anti-copyright arguments back then were in the spirit specifically of protecting the average person and public-interest non-profit institutions, such as digital archives and libraries, from big copyright owners who would sue and lobby for total control over every file in their catalogue, sometimes in the process severely limiting human potential.
AI’s ingesting of text and other formats is “learning” in name only, a term borrowed by computer scientists to describe a purely computational process. It does not hold the same value socially or morally as the learning that humans require to function and progress individually and collectively.
AI is not a person (unless we get definitive proof of a conscious AI, or are willing to grant every implementation of a statistical model personhood). Also, AI is not vital to human development, and as such, one could argue, does not need special protections or special treatment to flourish. AI is a product, even more clearly so when it is proprietary and sold as a service.
Unlike past debates over copyright, this is not about protecting the little guy or organizations with a social mission from big corporate interests. It is the opposite. It is about big corporate interests turning human knowledge and creativity into a product they can then use to sell services to - and often to replace in their jobs - the very humans whose content they have ingested.
See, the tables are now turned and it is time to realize that copyright law, for all its faults, has never been only or primarily about protecting large copyright holders. It is also about protecting your average Joe from unauthorized uses of their work. More specifically uses that may cause damage, to the copyright owner or society at large. While a very imperfect mechanism, it is there for a reason, and its application need not be the end of AI. There’s a mechanism for individual copyright owners to grant rights to specific uses: it’s called licensing and should be mandatory in my view for the development of proprietary LLMs at least.
TL;DR: AI is not human, it is a product, one that may augment some tasks productively, but is also often aimed at replacing humans in their jobs - this makes all the difference in how we should balance rights and protections in law.
Be careful with that logic, these are jobs forever lost to robots. They will eventually come for your job or the job of someone you know. Increasingly the question won’t be whether robots can do X better than humans, but whether they should.
Unfortunately, unless you are in a tiny niche community that isn’t ever targeted by spam or idiots (and how common is that, really), moderators are a necessary evil. You probably don’t hate moderators. You probably hate bad/aggressive/biased/etc. moderators. Or maybe sometimes you are the problem, I don’t know. It is not a problem with an easy solution. Large forums with no moderation usually become unbearable to most people quickly. And then moderators in turn become unbearable to some people.
Maybe a trusted AI can do a better job at this - like give it the community rules and ask it to enforce them objectively, transparently, and dispassionately, unless a certain number of participants complain, in which case it can reverse its decision and learn from that.
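A rough sketch of that complaint-driven reversal idea, purely hypothetical: a removal stands until enough participants object, at which point it is reversed and logged for future tuning. The `AutoModerator` class, the banned-word check standing in for the “trusted AI”, and the threshold of 3 are all made-up placeholders.

```python
COMPLAINT_THRESHOLD = 3  # assumed number of objections needed to reverse

class AutoModerator:
    def __init__(self, banned_words):
        self.banned_words = set(banned_words)
        self.removed = {}        # post_id -> complaint count so far
        self.reversal_log = []   # removals overturned by the community

    def review(self, post_id, text):
        """Remove the post if it breaks a rule; stub for the real AI check."""
        if any(w in text.lower() for w in self.banned_words):
            self.removed[post_id] = 0
            return "removed"
        return "kept"

    def complain(self, post_id):
        """Count an objection; reverse the removal once enough accumulate."""
        if post_id not in self.removed:
            return "no action"
        self.removed[post_id] += 1
        if self.removed[post_id] >= COMPLAINT_THRESHOLD:
            del self.removed[post_id]
            self.reversal_log.append(post_id)  # feed back into future decisions
            return "reversed"
        return "noted"
```

The interesting design question is the threshold: too low and any brigade can un-moderate anything, too high and the reversal mechanism is decorative.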