A Day in the Life of the Gen Z Worker - The Atlantic

jlvanderzwan
14 hours ago
Whoever came up with "microretirement" deserves a macroretirement in a special hell
acdha
20 hours ago
“Finally, it is 1:50 p.m. Just 10 more minutes until her Microvacation! To participate in this new trend, she gets up from her desk to travel briefly to a second, more fun location—in this case, a coffee shop—for fewer than 30 minutes. Some workers take multiple Microvacations per week, and employers warn it can be addictive.”
Washington, DC

Roblox Solved The Physics Problem That Stumped Everyone!

From: Two Minute Papers
Duration: 6:19
Views: 68,615

❤️ Check out Vast.ai and run DeepSeek or any AI project: https://vast.ai/papers

📝 The paper is available here:
https://graphics.cs.utah.edu/research/projects/avbd/

Play with it!
https://graphics.cs.utah.edu/research/projects/avbd/avbd_demo2d.html

📝 My paper on simulations that look almost like reality is available for free here:
https://rdcu.be/cWPfD

Or use the original Nature Physics link, with clickable citations:
https://www.nature.com/articles/s41567-022-01788-5

🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Benji Rabhan, B Shang, Christian Ahlin, Gordon Child, John Le, Juan Benet, Kyle Davis, Loyal Alchemist, Lukas Biewald, Michael Tedder, Owen Skarpness, Richard Sundvall, Steef, Sven Pfiffner, Taras Bobrovytsky, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi
If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers

My research: https://cg.tuwien.ac.at/~zsolnai/
X/Twitter: https://twitter.com/twominutepapers
Thumbnail design: Felícia Zsolnai-Fehér - http://felicia.hu

jlvanderzwan
14 hours ago
Just skip the video and go straight to the demo page

https://graphics.cs.utah.edu/research/projects/avbd/avbd_demo2d.html

So I generally stopped following this channel because all the AI glazing gets very grating, and calling graphics cards "consumer grade" when it's always the latest models that easily hit $1,000 is disingenuous at best. To top it off, in this video he says "imagine a Tesla Model S hanging off a chain", and wtf would you use a Tesla in 2025 as an example unless you're either a Yarvin-pilled racist or at best so blatantly in denial that you should be shunned for being dangerously delusional?

Also Roblox is uhm... not so great.

Having said that, I sadly have to admit that this is a genuinely impressive paper, as anyone with a physics background can tell you.

How do you stop an AI model turning Nazi? What the Grok drama reveals about AI training

[Image: Anne Fehres and Luke Conroy & AI4Media, CC BY]

Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk’s company xAI, is back in the headlines after calling itself “MechaHitler” and producing pro-Nazi remarks.

The developers have apologised for the “inappropriate posts” and “taken action to ban hate speech” from Grok’s posts on X. Debates about AI bias have been revived too.

But the latest Grok controversy is revealing not for the extremist outputs, but for how it exposes a fundamental dishonesty in AI development. Musk claims to be building a “truth-seeking” AI free from bias, yet the technical implementation reveals systemic ideological programming.

This amounts to an accidental case study in how AI systems embed their creators’ values, with Musk’s unfiltered public presence making visible what other companies typically obscure.

What is Grok?

Grok is an AI chatbot with “a twist of humor and a dash of rebellion” developed by xAI, which also owns the X social media platform.

The first version of Grok launched in 2023. Independent evaluations suggest the latest model, Grok 4, outpaces competitors on “intelligence” tests. The chatbot is available standalone and on X.

xAI states “AI’s knowledge should be all-encompassing and as far-reaching as possible”. Musk has previously positioned Grok as a truth-telling alternative to chatbots accused of being “woke” by right-wing commentators.

But beyond the latest Nazism scandal, Grok has made headlines for generating threats of sexual violence, bringing up “white genocide” in South Africa, and making insulting statements about politicians. The latter led to its ban in Turkey.

So how do developers imbue an AI with such values and shape chatbot behaviour? Today’s chatbots are built using large language models (LLMs), which offer several levers developers can pull.

What makes an AI ‘behave’ this way?

Pre-training

First, developers curate the data used during pre-training – the first step in building a chatbot. This involves not just filtering unwanted content, but also emphasising desired material.

GPT-3 was shown Wikipedia up to six times more often than other datasets because OpenAI considered it higher quality. Grok is trained on various sources, including posts from X, which might explain why Grok has been reported to check Elon Musk’s opinion on controversial topics.
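For illustration, here is a minimal sketch of how a weighted training mix can favour one corpus over others. The corpus names and weights are invented for the example; they are not OpenAI’s or xAI’s actual configuration.

import random

# Illustrative pre-training mix: each corpus gets a sampling weight, so
# higher-weighted sources are seen more often during training. These
# names and numbers are invented for illustration only.
CORPUS_WEIGHTS = {
    "web_crawl": 1.0,
    "books": 2.0,
    "wikipedia": 6.0,  # upweighted on (assumed) quality grounds
}

def sample_corpus(weights: dict[str, float]) -> str:
    # Pick which corpus the next training document is drawn from.
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Rough check: "wikipedia" should be drawn about six times as often as "web_crawl".
draws = [sample_corpus(CORPUS_WEIGHTS) for _ in range(90_000)]
print({name: draws.count(name) for name in CORPUS_WEIGHTS})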

Musk has shared that xAI curates Grok’s training data, for example to improve legal knowledge and to remove LLM-generated content for quality control. He also appealed to the X community for difficult “galaxy brain” problems and facts that are “politically incorrect, but nonetheless factually true”.

We don’t know if these data were used, or what quality-control measures were applied.

Fine-tuning

The second step, fine-tuning, adjusts LLM behaviour using feedback. Developers create detailed manuals outlining their preferred ethical stances, which either human reviewers or AI systems then use as a rubric to evaluate and improve the chatbot’s responses, effectively coding these values into the machine.
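As a rough sketch of how such a rubric becomes training signal, consider the toy example below. The scoring rules are invented stand-ins for human reviewers or a judge model, not any company’s actual criteria.

def rubric_score(response: str) -> float:
    # Toy rubric standing in for human reviewers or a judge model:
    # reward sourced answers, penalise sweeping absolutes.
    text = response.lower()
    score = 0.0
    if "according to" in text:
        score += 1.0
    if "always" in text or "never" in text:
        score -= 1.0
    return score

def preferred(response_a: str, response_b: str) -> str:
    # Build a preference pair: the higher-scoring response "wins".
    # Pairs like this are what preference-based fine-tuning consumes.
    return response_a if rubric_score(response_a) >= rubric_score(response_b) else response_b

print(preferred(
    "According to the cited study, the effect varies by context.",
    "This is always true, without exception.",
))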

A Business Insider investigation revealed that xAI instructed its human “AI tutors” to look for “woke ideology” and “cancel culture”. While the onboarding documents said Grok shouldn’t “impose an opinion that confirms or denies a user’s bias”, they also stated it should avoid responses that claim both sides of a debate have merit when they do not.

System prompts

The system prompt – instructions provided before every conversation – guides behaviour once the model is deployed.
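Concretely, the system prompt is a message prepended to the conversation before any user input. The sketch below uses the generic chat-message format common to LLM APIs; the prompt text is invented, not Grok’s.

# The system prompt is injected by the developer on every turn; users
# never write it and normally never see it. Prompt text here is invented.
SYSTEM_PROMPT = (
    "You are a helpful assistant. Treat media claims sceptically and "
    "substantiate any controversial statements."
)

def build_conversation(user_message: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},  # developer-controlled
        {"role": "user", "content": user_message},     # end-user input
    ]

for message in build_conversation("Is this news story accurate?"):
    print(f"{message['role']:>6}: {message['content']}")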

To its credit, xAI publishes Grok’s system prompts. Its instructions to “assume subjective viewpoints sourced from the media are biased” and “not shy away from making claims which are politically incorrect, as long as they are well substantiated” were likely key factors in the latest controversy.

These prompts are being updated daily at the time of writing, and their evolution is a fascinating case study in itself.

Guardrails

Finally, developers can also add guardrails – filters that block certain requests or responses. OpenAI claims it doesn’t permit ChatGPT “to generate hateful, harassing, violent or adult content”. Meanwhile, the Chinese model DeepSeek censors discussion of Tiananmen Square.
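A guardrail can be as simple as a filter wrapped around the model’s output. The sketch below uses a crude keyword denylist purely to show the shape of the idea; production systems typically rely on trained moderation classifiers rather than word lists.

BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # placeholders, not a real list

def guarded_reply(model_reply: str) -> str:
    # Post-generation filter: refuse rather than return a matching reply.
    text = model_reply.lower()
    if any(term in text for term in BLOCKED_TERMS):
        return "Sorry, I can't help with that."
    return model_reply

print(guarded_reply("Here is an ordinary, harmless answer."))
print(guarded_reply("An answer containing blocked_term_1 gets replaced."))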

Ad hoc testing while writing this article suggests Grok is much less restrained in this regard than competitor products.

The transparency paradox

Grok’s Nazi controversy highlights a deeper ethical issue: would we prefer AI companies to be explicitly ideological and honest about it, or maintain the fiction of neutrality while secretly embedding their values?

Every major AI system reflects its creator’s worldview – from Microsoft Copilot’s risk-averse corporate perspective to Anthropic Claude’s safety-focused ethos. The difference is transparency.

Musk’s public statements make it easy to trace Grok’s behaviours back to his stated beliefs about “woke ideology” and media bias. Meanwhile, when other platforms misfire spectacularly, we’re left guessing whether this reflects leadership views, corporate risk aversion, regulatory pressure, or accident.

This feels familiar. Grok resembles Microsoft’s 2016 hate-speech-spouting Tay chatbot, also trained on Twitter data and set loose on Twitter before being shut down.

But there’s a crucial difference. Tay’s racism emerged from user manipulation and poor safeguards – an unintended consequence. Grok’s behaviour appears to stem at least partially from its design.

The real lesson from Grok is about honesty in AI development. As these systems become more powerful and widespread (Grok support in Tesla vehicles was just announced), the question isn’t whether AI will reflect human values. It’s whether companies will be transparent about whose values they’re encoding and why.

Musk’s approach is simultaneously more honest (we can see his influence) and more deceptive (claiming objectivity while programming subjectivity) than that of his competitors.

In an industry built on the myth of neutral algorithms, Grok reveals what’s been true all along: there’s no such thing as unbiased AI – only AI whose biases we can see with varying degrees of clarity.

The Conversation

Aaron J. Snoswell previously received research funding from OpenAI in 2024–2025 to develop new evaluation frameworks for measuring moral competence in AI agents.

jlvanderzwan
15 hours ago
By putting it out of its misery, chief.

Also:

> previously received research funding from OpenAI in 2024–2025 to develop new evaluation frameworks for measuring moral competence in AI agents.

... kindly f off.

Saturday Morning Breakfast Cereal - Summary



Click here to go see the bonus panel!

Hovertext:
I saw an article that said it was a 3-minute read, then offered an AI summary, and I believe it may be included in an eventual epitaph for civilization.


jlvanderzwan
15 hours ago
Influencers: "We gotta protect our phoney baloney jobs!"

https://www.youtube.com/watch?v=uTmfwklFM-M
acdha
20 hours ago
Washington, DC
llucax
2 hours ago
This is pretty much how I feel lately...
Berlin

Irregular Webcomic! #2824 Rerun

Comic #2824

Oops. Looks like James Clerk Maxwell was about to record his unified field theory.


2025-07-13 Rerun commentary: Farmers who raise both sheep and cattle in the one grazing area also operate on a unified field theory.
jlvanderzwan
15 hours ago
> Farmers who raise both sheep and cattle in the one grazing area also operate on a unified field theory.

Surely that would be a unified field *practice*, Mr. Morgan-Mar?

A terrible comic about baseball


On social media recently, it was “Make a Terrible Comic Day.” Since making terrible things comes naturally to me, I decided to participate! Here is my contribution.

It was fun to draw this by hand, which I am quite out of practice at.

You can see more “terrible comics” using the #makeaterriblecomicday2025 hashtag on Bluesky and Instagram!

jlvanderzwan
15 hours ago