Jack Clark
@jackclarkSF
POLICY
@AnthropicAI, ONEAI OECD, co-chair @indexingai, writer @ http://importai.net. Past: @openai, @business @theregister. Neural nets, distributed systems, weird futures
David Duvenaud
@DavidDuvenaud
ACADEMIC
Machine learning prof @UofT. Former team lead at Anthropic. Working on generative models, inference, & latent structure.
Dwarkesh Patel
@dwarkesh_sp
CREATOR
Host of @dwarkeshpodcast https://www.youtube.com/DwarkeshPatel https://open.spotify.com/show/4JH4tybY1zX6e5hjCwU6gF https://apple.co/3ujLQkZ
augustus odena
@gstsdn
RESEARCHER
Something new. Previously: AI research at TBD Labs / Meta; cofounder at @AdeptAILabs; Invented Scratchpad / Chain-of-Thought; Google Brain
Stephanie Chan
@scychan_brains
RESEARCHER
Staff Research Scientist at DeepMind. Artificial & biological brains 🤖 🧠 Societal impacts of AI + Science of AI. Views are my own.
rishi
@RishiBommasani
POLICY
Economic impacts of AI; AI policy & governance @StanfordHAI. Previous: Stanford CS PhD w/ @percyliang @jurafsky, Cornell CS
Owain Evans
@OwainEvans_UK
AI SAFETY
Runs an AI Safety research group in Berkeley (Truthful AI) + Affiliate at UC Berkeley. Past: Oxford Uni, TruthfulQA, Reversal Curse. Prefer email to DM.
Alex Tamkin
@AlexTamkin
RESEARCHER
machine learning, science & society @AnthropicAI | recently: Clio, Anthropic Economic Index, Claude Artifacts | prev: phd @StanfordAILab, @stanfordnlp
⿻ Andrew Trask
@iamtrask
RESEARCHER
i teach AI on X. building AI with attribution-based control @openminedorg, @GoogleDeepMind, @OxfordUni, @UN, @GovAIOrg, and @CFR_org
Dylan Hadfield-Menell
@dhadfieldmenell
AI SAFETY
Associate Prof @MITEECS working on value (mis)alignment in AI systems; Safety & Alignment Advisor at http://Character.AI; @dhadfieldmenell@bsky.social; he/him
Evan Hubinger
@EvanHub
AI SAFETY
Alignment Stress-Testing lead @AnthropicAI. Opinions my own. Previously: MIRI, OpenAI, Google, Yelp, Ripple. (he/him/his)
Ajeya Cotra
@ajeya_cotra
AI SAFETY
Helping the world prepare for extremely powerful AI. Risk assessment @METR_evals. Writing at Planned Obsolescence (about AI), Good Bones (about whatever).
Cas (Stephen Casper)
@StephenLCasper
AI SAFETY
AI safeguards & gov. research. PhD student @MIT_CSAIL (mnr. Public Policy), and Fellow at @BKCHarvard. Fmr. @AISecurityInst. https://stephencasper.com/
Adrien Ecoffet
@AdrienLE
RESEARCH ENGINEER
Trying to make AGI go well. Researcher at @openai. Views my own.
William MacAskill
@willmacaskill
RESEARCHER
Consider donating 10% to effective charities: http://www.givingwhatwecan.org/pledge Or a career for impact: http://80000hours.org My research: http://forethought.org
Saffron Huang
@saffronhuang
POLICY
how shall we live together? societal impacts researcher @AnthropicAI • ex @GoogleDeepMind @AISecurityInst • @collect_intel co-founder • views mine
Connor Leahy
@NPCollapse
AI SAFETY
US Director @ControlAI - Leave me anonymous feedback: http://bit.ly/3RZbu7x - I don't know how to save the world, but dammit I'm gonna try
Joe Carlsmith
@jkcarlsmith
AI SAFETY
Philosophy, futurism, AI. Working on Claude's values @AnthropicAI. Formerly @coeff_giving. Opinions my own.
Nathan Labenz
@labenz
CREATOR
AI Scout, building text-2-video @Waymark, host of The Cognitive Revolution podcast
Rob Wiblin
@robertwiblin
CREATOR
Host of the 80,000 Hours Podcast. Exploring the inviolate sphere of ideas one interview at a time: http://80000hours.org/podcast/
Lisan al Gaib
@scaling01
CREATOR
lead them to paradise. LisanBench: https://lisanbench.com/ Legal notice & privacy policy: https://lisanbench.com/legal
Toby Shevlane
@tshevl
FOUNDER
@_Mantic_AI cofounder & CEO, on a mission to solve forecasting. Prev: research scientist @GoogleDeepMind, PhD at @UniofOxford.
Toby Ord
@tobyordoxford
AI SAFETY
Senior Researcher at Oxford University. Author of The Precipice: Existential Risk and the Future of Humanity.
Rosie Campbell
@RosieCampbell
AI SAFETY
Forever expanding my nerd/bimbo Pareto frontier. AI welfare 🤝 AI safety. Managing Director @eleosai, Ex-OpenAI, 2024 @rootsofprogress fellow
Allan Dafoe
@AllanDafoe
AI SAFETY
AGI governance: navigating the transition to beneficial AGI (Google DeepMind)
rohit
@krishnanrohit
CREATOR
Essays: http://www.strangeloopcanon.com | Book: http://amazon.com/dp/B0CJ9F327M | World model: https://github.com/Strange-Lab-AI/vei
Marius Hobbhahn
@MariusHobbhahn
AI SAFETY
CEO at Apollo Research @apolloaievals. Prev: ML PhD with Philipp Hennig & AI forecasting @EpochAIResearch
Ashvin Nair
@ashvinair
RESEARCH ENGINEER
RL foundations @cursor_ai. Prev: o1, o3, Code Interpreter @openai, 9 years learning to poke by poking at UC Berkeley
Matt Clifford
@matthewclifford
POLICY
Co-founder @join_ef; Chair @ARIA_Research; Make Britain Rich Again.
Maksym Andriushchenko
@maksym_andr
AI SAFETY
Principal investigator at @ELLISInst_Tue & @MPI_IS, mentor at @MATSprogram, PhD from @EPFL. Past works: AgentHarm, OS-Harm, HalluHard, PostTrainBench, Claudini.
Steven Adler
@sjgadler
AI SAFETY
AI safety researcher (ex-OpenAI: danger evals, AGI readiness, etc), writing at https://clear-eyed.ai
Buck Shlegeris
@bshlgrs
AI SAFETY
CEO @ Redwood Research (@redwood_ai), working on technical research to reduce catastrophic risk from AI misalignment. bshlegeris@gmail.com