www.lesswrong.com | Bookmarks (712)
-
Announcing Progress Conference 2025 — LessWrong
Published on April 17, 2025 5:12 PM GMT Last fall the Roots of Progress Institute hosted the...
-
Host Keys and SSHing to EC2 — LessWrong
Published on April 17, 2025 3:10 PM GMT I do a lot of work on EC2,...
-
AI #112: Release the Everything — LessWrong
Published on April 17, 2025 3:10 PM GMT OpenAI has upgraded its entire suite of models. By...
-
How worker co-ops can help restore social trust — LessWrong
Published on April 17, 2025 2:14 PM GMT The US is experiencing a great decline in trust....
-
On AI personhood — LessWrong
Published on April 17, 2025 12:31 PM GMT It seems to me the question of consciousness of...
-
8 PRIME SKILLS - An analysis — LessWrong
Published on April 17, 2025 11:36 AM GMT What is this about? With some parameters we have thus...
-
8 PRIME SKILLS - A simplified construction from MaxEnt Informational Efficiency in 4 questions — LessWrong
Published on April 17, 2025 11:04 AM GMT What is this about? We often experience complex things (like...
-
Understanding and overcoming AGI apathy — LessWrong
Published on April 17, 2025 1:04 AM GMT Crossposting from my substack. Note for LW: This post is...
-
ALLFED emergency appeal: Help us raise $800,000 to avoid cutting half of programs — LessWrong
Published on April 16, 2025 9:47 PM GMT SUMMARY: ALLFED is making an emergency appeal here due to...
-
Prodromes and Biomarkers in Chronic Disease — LessWrong
Published on April 16, 2025 9:30 PM GMT Thanks to Renaissance Philanthropy for their support of my...
-
To be legible, evidence of misalignment probably has to be behavioral — LessWrong
Published on April 15, 2025 6:14 PM GMT One key hope for mitigating risk from misalignment is...
-
AISN #51: AI Frontiers — LessWrong
Published on April 15, 2025 4:01 PM GMT Welcome to the AI Safety Newsletter by the Center...
-
Surprising LLM reasoning failures make me think we still need qualitative breakthroughs for AGI — LessWrong
Published on April 15, 2025 3:56 PM GMT Introduction: Writing this post puts me in a weird epistemic...
-
OpenAI #13: Altman at TED and OpenAI Cutting Corners on Safety Testing — LessWrong
Published on April 15, 2025 3:30 PM GMT Three big OpenAI news items this week were the...
-
3M Subscriber YouTube Account 'Channel 5' Reporting On Rationalism — LessWrong
Published on April 15, 2025 1:02 PM GMT Hi, I thought it might be interesting to some...
-
The real reason AI benchmarks haven’t reflected economic impacts — LessWrong
Published on April 15, 2025 1:44 PM GMT Basically, the linkpost argues that the broad reason why...
-
ASI existential risk: reconsidering alignment as a goal — LessWrong
Published on April 15, 2025 1:36 PM GMT
-
Map of AI Safety v2 — LessWrong
Published on April 15, 2025 1:04 PM GMT The original Map of AI Existential Safety became a...
-
Can SAE steering reveal sandbagging? — LessWrong
Published on April 15, 2025 12:33 PM GMT Summary: We conducted a small investigation into using SAE features...
-
Risers for Foot Percussion — LessWrong
Published on April 15, 2025 11:10 AM GMT The ideal seat height for foot percussion is...
-
Intro to Multi-Agent Safety — LessWrong
Published on April 13, 2025 5:40 PM GMT We live in a world where numerous agents, ranging...
-
Vestigial reasoning in RL — LessWrong
Published on April 13, 2025 3:40 PM GMT TL;DR: I claim that many reasoning patterns that appear...
-
Four Types of Disagreement — LessWrong
Published on April 13, 2025 11:22 AM GMT Epistemic status: a model I find helpful to make...
-
How I switched careers from software engineer to AI policy operations — LessWrong
Published on April 13, 2025 6:37 AM GMT Thanks to Linda Linsefors for encouraging me to write my...