[Slate Star Codex](https://slatestarcodex.com/ "Slate Star Codex")
==================================================================
* ### Blogroll
* ### Economics
* [Artir Kel](https://nintil.com/)
* [Bryan Caplan](https://www.econlib.org/author/bcaplan/)
* [David Friedman](http://daviddfriedman.blogspot.com/)
* [Pseudoerasmus](https://pseudoerasmus.com/)
* [Scott Sumner](http://www.themoneyillusion.com/)
* [Tyler Cowen](http://marginalrevolution.com/)
* ### Effective Altruism
* [80000 Hours Blog](https://80000hours.org/blog/)
* [Effective Altruism Forum](http://www.effective-altruism.com/)
* [GiveWell Blog](http://blog.givewell.org/)
* ### Rationality
* [Alyssa Vance](http://rationalconspiracy.com/)
* [Beeminder](https://blog.beeminder.com/tag/rationality/)
* [Elizabeth Van Nostrand](http://acesounderglass.com/)
* [Gwern Branwen](https://www.gwern.net/)
* [Jacob Falkovich](http://putanumonit.com/)
* [Jeff Kaufman](http://www.jefftk.com/index)
* [Katja Grace](http://aiimpacts.org/)
* [Kelsey Piper](https://www.vox.com/authors/kelsey-piper)
* [Less Wrong](http://lesswrong.com)
* [Paul Christiano](https://sideways-view.com/)
* [Robin Hanson](http://www.overcomingbias.com/)
* [Sarah Constantin](https://srconstantin.github.io/)
* [Zack Davis](http://zackmdavis.net/blog/)
* [Zvi Mowshowitz](https://thezvi.wordpress.com/)
* ### Science
* [Andrew Gelman](http://andrewgelman.com/)
* [Greg Cochran](https://westhunt.wordpress.com/)
* [Michael Caton](http://cognitionandevolution.blogspot.com/)
* [Razib Khan](https://www.gnxp.com/)
* [Scott Aaronson](http://www.scottaaronson.com/blog/)
* [Stephan Guyenet](https://www.stephanguyenet.com/)
* [Steve Hsu](http://infoproc.blogspot.com/)
* ### SSC Elsewhere
* [SSC Discord Server](https://discord.gg/RTKtdut)
* [SSC Podcast](https://linktr.ee/sscpodcast)
* [SSC Subreddit](https://www.reddit.com/r/slatestarcodex/)
* [Unsong](http://unsongbook.com)
* ### Archives
* [Full Archives](https://slatestarcodex.com/archives/)
[Introducing Astral Codex Ten](https://slatestarcodex.com/2021/01/21/introducing-astral-codex-ten/ "Permalink to Introducing Astral Codex Ten")
-----------------------------------------------------------------------------------------------------------------------------------------------
Posted on [January 21, 2021](https://slatestarcodex.com/2021/01/21/introducing-astral-codex-ten/ "3:55 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin2/ "View all posts by Scott Alexander")
Thanks for bearing with me the past few months. My new blog is at [https://astralcodexten.substack.com/](https://astralcodexten.substack.com/). I'll try to have a less unwieldy domain name working soon.
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | [11 Comments](https://slatestarcodex.com/2021/01/21/introducing-astral-codex-ten/#comments)
[Update On My Situation](https://slatestarcodex.com/2020/09/11/update-on-my-situation/ "Permalink to Update On My Situation")
-----------------------------------------------------------------------------------------------------------------------------
Posted on [September 11, 2020](https://slatestarcodex.com/2020/09/11/update-on-my-situation/ "1:03 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin2/ "View all posts by Scott Alexander")
It's been two and a half months since I deleted the blog, so I owe all of you an update on [recent events](https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-safety-by-revealing-my-real-name-so-i-am-deleting-the-blog/).
I haven't heard anything from the _New York Times_ one way or the other. Since nothing has been published, I'd assume they dropped the article, except that they approached an acquaintance for another interview last month. Overall I'm confused.
But they definitely haven't given me any explicit reassurance that they won't reveal my private information. And now that I've publicly admitted privacy is important to me – something I tried to avoid coming on too strong about before, for exactly this reason – some people have taken it upon themselves to post my real name all over Twitter in order to harass me. I probably inadvertently Streisand-Effect-ed myself with all this; I still think it was the right thing to do.
At this point I think maintaining anonymity is a losing battle. So I am gradually reworking my life to be compatible with the sort of publicity that circumstances seem to be forcing on me. I had a talk with my employer and we came to a mutual agreement that I would gradually transition away from working there. At some point, I may start my own private practice, where I'm my own boss and where I can focus on medication management – and not the kinds of psychotherapy that I'm most worried are ethically incompatible with being a public figure. I'm trying to do all of this maximally slowly and carefully and in a way that won't cause undue burden to any of my patients, and it's taking a long time to figure out.
I'm also talking to [Substack](https://substack.com/) about moving to their blogging platform. While part of me wants to jump right back into blogging here and pretend nothing ever happened, the Substack option has grown on me. I think I'd feel safer as part of a big group that specifically [promises to defend their bloggers when needed](https://techcrunch.com/2020/07/15/substack-launches-defender-a-program-offering-legal-support-to-independent-writers/). And also, I'd feel safer with a lot of diverse income streams, and Substack has made me an extremely generous offer. Many people gave me good advice about how I could monetize my blog without Substack – I took these suggestions very seriously, and without violating a confidentiality agreement all I can answer is that Substack's offer was _extremely_ generous.
When I [originally asked readers](https://www.reddit.com/r/slatestarcodex/comments/i10p4m/survey_on_moving_ssc_to_substack/) about this possibility, they raised a lot of valid concerns: some were confused by Substack's commenting system, others were annoyed by its pop-up reminders to subscribe, and others were concerned about being stuck outside a paywall. I've talked to Substack about this, and they've made some really impressive promises to address these things – they're going to code a maximally-SSC-like commenting experience, they're going to let me opt out of the subscription reminders, and I won't have to "paywall" anything besides some Hidden Open Threads. This isn't the time for me to go over the dozens of examples of concerns I had that Substack went above and beyond to address, but assume I had most of the same ones you did and put a lot of work into addressing them.
(and if you're worried about the Hidden Open Threads, check out [Data Secrets Lox](https://www.datasecretslox.com/index.php), a forum that has done a great job keeping the SSC Open Thread tradition going over the past few months.)
So that's where I am right now – trying to wind things down at my day job, very preliminarily planning [a private practice](https://slatestarcodex.com/2018/06/20/cost-disease-in-medicine-the-practical-perspective/), and negotiating writing details with Substack. I'm also looking into some other things to protect my physical safety. When all of that is done, I'll start blogging again. Right now I'm expecting that to be some time between October and January – and obviously when it happens I'll let you know. I would appreciate if people continued to respect my preferences about anonymity until then. After that I'll stop caring as much – though I'll still go by "Scott Alexander" to keep the brand the same, and I'll still do what I can to avoid publicity.
I might have hinted at this already, but I should say it explicitly – I'm really grateful for all the support I got throughout this whole incident. You people are all great. I'll say so at more length later, and talk more about some specific examples, but for now just accept on faith that you're all great.
I still plan to do the book review contest! I'll do it sometime after I start the new blog! Those of you who sent me reviews didn't waste your time! It's going to happen! Pestilence may afflict every corner of the world, [the skies may turn red as blood and the sun go dark at noon](https://abc7news.com/smoke-in-the-air-today-why-is-sky-orange-quality-index-oakland-bay-area/6414147/), the [earth may shake](https://www.sacbee.com/news/california/article245534855.html) and [plagues of locusts](https://www.bbc.com/future/article/20200806-the-biblical-east-african-locust-plagues-of-2020) cover the land, but never doubt that there will be a book review contest someday, in the golden future, when all of this is over.
As all the kids are saying these days, "thank you for your continuing support during these difficult times".
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [meta](https://slatestarcodex.com/tag/meta/) | [76 Comments](https://slatestarcodex.com/2020/09/11/update-on-my-situation/#comments)
[NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog](https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-safety-by-revealing-my-real-name-so-i-am-deleting-the-blog/ "Permalink to NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog")
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Posted on [June 22, 2020](https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-safety-by-revealing-my-real-name-so-i-am-deleting-the-blog/ "10:54 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
**\[EDIT 2/13/21: This post is originally from June 2020, but there's been renewed interest in it because the NYT article involved just came out. This post says the NYT was going to write a positive article, which was the impression I got in June 2020. The actual article was very negative; I feel this was retaliation for writing this post, but I can't prove it. I feel I was misrepresented by slicing and dicing quotations in a way that made me sound like a far-right nutcase; I am actually a liberal Democrat who voted for Warren in the primary and Biden in the general, and I generally hold pretty standard center-left views in support of race and gender equality. You can read my full statement defending against the Times' allegations [here](https://astralcodexten.substack.com/p/statement-on-new-york-times-article). To learn more about this blog and read older posts, go to [the About page](https://slatestarcodex.com/about/).\]**
So, I kind of deleted the blog. Sorry. Here's my explanation.
Last week I talked to a _New York Times_ technology reporter who was planning to write a story on Slate Star Codex. He told me it would be a mostly positive piece about how we were an interesting gathering place for people in tech, and how we were ahead of the curve on some aspects of the coronavirus situation. It probably would have been a very nice article.
Unfortunately, he told me he had discovered my real name and would reveal it in the article, ie doxx me. "Scott Alexander" is my real first and middle name, but I've tried to keep my last name secret. I haven't always done great at this, but I've done better than "have it get printed in the _New York Times_".
I have a lot of reasons for staying pseudonymous. First, I'm a psychiatrist, and psychiatrists are kind of obsessive about preventing their patients from knowing anything about who they are outside of work. You can read more about this [in this _Scientific American_ article](https://webcache.googleusercontent.com/search?q=cache:rcPcxd0-nqQJ:https://blogs.scientificamerican.com/mind-guest-blog/why-psychiatrists-don-t-share-personal-information-with-patients/) – and remember that [the last psychiatrist](https://thelastpsychiatrist.com/) blogger to get doxxed [abandoned his blog](https://twitter.com/JohnDiesattheEn/status/1067072479192068096) too. I am not one of the big sticklers on this, but I'm more of a stickler than "let the _New York Times_ tell my patients where they can find my personal blog". I think it's plausible that if I became a national news figure under my real name, my patients – who run the gamut from far-left to far-right – wouldn't be able to engage with me in a normal therapeutic way. I also worry that my clinic would decide I am more of a liability than an asset and let me go, which would leave hundreds of patients in a dangerous situation as we tried to transition their care.
The second reason is more prosaic: some people want to kill me or ruin my life, and I would prefer not to make it too easy. I've received various death threats. I had someone on an anti-psychiatry subreddit put out a bounty for any information that could take me down (the mods deleted the post quickly, which I am grateful for). I've had dissatisfied blog readers call my work pretending to be dissatisfied patients in order to get me fired. And I recently learned that someone on SSC [got SWATted](https://slatestarcodex.com/re-swatting/) in a way they link to having used their real name on the blog. I live with ten housemates including a three-year-old and an infant, and I would prefer this not happen to me or to them. Although I realize I accept some risk of this just by writing a blog with imperfect anonymity, getting doxxed on national news would take it to another level.
When I expressed these fears to the reporter, he said that it was _New York Times_ policy to include real names, and he couldn't change that.
After considering my options, I decided on the one you see now. If there's no blog, there's no story. Or at least the story will have to include some discussion of NYT's strategy of doxxing random bloggers for clicks.
I want to make it clear that I'm not saying I believe I'm above news coverage, or that people shouldn't be allowed to express their opinion of my blog. If someone wants to write a hit piece about me, whatever, that's life. If someone thinks I am so egregious that I don't deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists. This wasn't that. By all indications, this was just going to be a nice piece saying I got some things about coronavirus right early on. Getting punished for my crimes would at least be predictable, but I am not willing to be punished for my virtues.
I'm not sure what happens next. In my ideal world, the _New York Times_ realizes they screwed up, promises not to use my real name in the article, and promises to rethink their strategy of doxxing random bloggers for clicks. Then I put the blog back up (of course I backed it up! I'm not a monster!) and we forget this ever happened.
Otherwise, I'm going to lie low for a while and see what happens. Maybe all my fears are totally overblown and nothing happens and I feel dumb. Maybe I get fired and keeping my job stops mattering. I'm not sure. I'd feel stupid if I caused the amount of ruckus this will probably cause and then caved and reopened immediately. But I would also be surprised if I never came back. We'll see.
I've gotten an amazing amount of support the past few days as this situation played out. You don't need to send me more – message _very much_ received. I love all of you so much. I realize I am making your lives harder by taking the blog down. At some point I'll figure out a way to make it up to you.
In the meantime, you can still use the [r/slatestarcodex subreddit](https://www.reddit.com/r/slatestarcodex/) for sober non-political discussion, the not-officially-affiliated-with-us [r/themotte](https://www.reddit.com/r/TheMotte/) subreddit for crazy heated political debate, and the [SSC Discord server](https://discord.gg/RTKtdut) for whatever it is people do on Discord. Also, my biggest regret is I won't get to blog about [Gwern's work with GPT-3](https://twitter.com/gwern/status/1275079173770317839), so go over and check it out.
There's a SUBSCRIBE BY EMAIL button on the right – put your name there if you want to know if the blog restarts or something else interesting happens. I'll make sure all relevant updates make it onto [the subreddit](https://www.reddit.com/r/slatestarcodex/), so watch that space.
There is no comments section for this post. The appropriate comments section is [the feedback page](https://www.nytimes.com/2019/10/15/homepage/contact-newsroom.html) of the _New York Times_. You may also want to email the _New York Times_ technology editor Pui-Wing Tam at [email protected], contact her on Twitter at @puiwingtam, or phone the _New York Times_ at 844-NYTNEWS. \[EDIT: The time for doing this has passed, thanks to everyone who sent messages in\]
(please **be polite** – I don't know if Ms. Tam was personally involved in this decision, and whoever is stuck answering feedback forms definitely wasn't. Remember that you are representing me and the SSC community, and I will be very sad if you are a jerk to anybody. Please just explain the situation and ask them to stop doxxing random bloggers for clicks. If you are some sort of important tech person who the New York Times technology section might want to maintain good relations with, mention that.)
If you are a journalist who is willing to respect my desire for pseudonymity, I'm interested in talking to you about this situation (though I prefer communicating through text, not phone). My email is [email protected]. \[EDIT: Now over capacity for interviews, sorry\]
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Comments Off on NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog
[Slightly Skew Systems Of Government](https://slatestarcodex.com/2020/06/17/slightly-skew-systems-of-government/ "Permalink to Slightly Skew Systems Of Government")
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
Posted on [June 17, 2020](https://slatestarcodex.com/2020/06/17/slightly-skew-systems-of-government/ "10:34 am") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
_\[**Related To:** [Legal Systems Very Different From Ours Because I Just Made Them Up](https://slatestarcodex.com/2020/03/30/legal-systems-very-different-from-ours-because-i-just-made-them-up/), [List Of Fictional Drugs Banned By The FDA](https://slatestarcodex.com/2013/10/25/list-of-fictional-drugs-banned-by-the-fda/)\]_
**I.**
Clamzoria is an acausal democracy.
The problem with democracy is that elections happen before the winning candidate takes office. If somebody's never been President, how are you supposed to judge how good a President they'd be? Clamzoria realized this was dumb, and moved elections to the last day of an official's term.
When the outgoing President left office, the country would hold an election. It was run by approval voting: you could either approve or disapprove of the candidate who had just held power. The results were tabulated, announced, and then nobody ever thought about them again.
Clamzoria chose its officials through a prediction market. The Central Bank released bonds for each candidate, which paid out X dollars at term's end, where X was the percent of voters who voted Approve. Traders could provisionally buy and sell these bonds. On the first day of the term, whichever candidate's bonds were trading at the highest value was inaugurated as the new President; everyone else's bonds were retroactively cancelled and their traders refunded. The President would spend a term in office, the election would be held, and the bondholders would be reimbursed the appropriate amount.
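If it helps to see the mechanics spelled out, here is a toy sketch of the selection-and-settlement rule; the candidate names, prices, and vote counts are invented for illustration.

```python
# Toy sketch of Clamzoria's bond-market election (all names and numbers
# invented for illustration). Each candidate's bond pays out X dollars at
# term's end, where X is the percent of voters who vote Approve; whoever's
# bond trades highest on the first day of the term takes office.

def inaugurate(bond_prices):
    """Return the candidate whose bond trades at the highest value.
    Everyone else's bonds are cancelled and their traders refunded."""
    return max(bond_prices, key=bond_prices.get)

def settle(approve_votes, total_votes):
    """Dollar payout per bond: the approval percentage at term's end."""
    return 100 * approve_votes / total_votes

prices = {"Candidate A": 62.0, "Candidate B": 71.5, "Candidate C": 48.2}
president = inaugurate(prices)                        # -> "Candidate B"
payout = settle(approve_votes=55_000, total_votes=100_000)
print(f"{president}'s bonds pay ${payout:.2f} each")  # -> $55.00
```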
The Clamzorians argued this protected against demagoguery. It's easy for a candidate to promise the sun and moon before an election, but by the end of their term, voters know if the country is doing well or not. Instead of running on a platform of popular (but doomed) ideas, candidates are encouraged to run on a platform of unpopular ideas, as long as those unpopular ideas will genuinely make the country richer, safer, stronger, and all the other things that lead people to approve of a President's term after the fact. Of course, you're still limited by bond traders' ability to predict which policies will work, but bond traders are usually more sober than the general electorate.
This system worked wonderfully for several decades, until Lord Bloodholme's administration. He ran for President on an unconventional platform: if elected, he would declare himself Dictator-For-Life, replace democracy with sham elections, and kill all who opposed him. Based on his personality, all the bond traders found this completely believable. But that meant that in the end-of-term election, he would get 100% approval. His bond shot up to be worth nearly $100, the highest any bond had ever gone, and he won in a landslide. Alas, Lord Bloodholme was as good as his word, and – after a single sham election to ensure the bondholders got what they were due – that was the end of Clamzoria's acausal democracy.
**II.**
Cognito is a constitutional mobocracy.
It used to be a regular mobocracy. It had a weak central government, radicals would protest whenever they didn't like its decisions, the protests would shut down major cities, and the government would cave. Then people on the other side would protest, and that would also shut down major cities, and the government would backtrack. Eventually they realized they needed a better way, made a virtue out of necessity, and wrote the whole system into their constitution.
The Executive Branch is a president elected by some voting system that basically ensures a bland moderate. They have limited power to make decrees that enforce the will of the legislature. The legislature is the mob. One proposes a bill by having a protest in favor of it. If the protest attracts enough people – the most recent number is 43,617, but it changes every year based on the population and a few other factors – then the bill is considered up for review. Anyone can propose amendments (by having a protest demanding amendments) or vote against it (by having a protest larger than the original protest demanding that the bill not be passed). After everyone has had a fair chance to protest, the text of the bill supported by the largest protest becomes law (unless the largest protest was against any change, in which case there is no change).
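A toy tally of that decision rule might look like the sketch below; the bill labels and head counts are invented for illustration.

```python
# Toy tally of Cognito's protest-legislature (bill labels and head counts
# invented for illustration). A bill is up for review once its protest
# clears the threshold; whichever version draws the largest protest becomes
# law, unless the largest protest was against any change at all.

THRESHOLD = 43_617  # the most recent number; it changes every year

def outcome(protests):
    """protests: dict mapping each proposal (including 'against any change')
    to its head count."""
    if protests.get("original bill", 0) < THRESHOLD:
        return "not up for review"
    winner = max(protests, key=protests.get)
    return "no change" if winner == "against any change" else f"law: {winner}"

print(outcome({"original bill": 50_000,
               "amended bill": 61_000,
               "against any change": 58_000}))  # -> "law: amended bill"
```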
The Cognitans appreciate their system because protests are peaceful and nondisruptive. The government has a specific Protesting Square in every city with a nice grid that lets them count how many protesters there are, and all protests involve going into the Protesting Square, standing still for a few minutes to let neutral observers count people up, and then going home. It's silly to protest beyond this; your protest wouldn't be legally binding!
There's been some concern recently that corporations pay protesters to protest for things they want. Several consumer watchdog organizations are trying to organize mobs in favor of a bill to stop this.
**III.**
Yyphrostikoth is a meta-republic.
Every form of government has its own advantages and disadvantages, and the goal is to create a system of checks and balances where each can watch over the others. The Yyphrostikoth Governing Council has twelve members:
The Representative For Monarchy is a hereditary position.
The Representative For Democracy is elected.
The Representative For Plutocracy is the richest person in the country.
The Representative For Technocracy is chosen by lot from among the country's Nobel Prize winners.
The Representative For Meritocracy is whoever gets the highest score on a standardized test of general knowledge and reasoning ability.
The Representative For Military Dictatorship is the top general in the army.
The Representative For Communism is the leader of the largest labor union.
The Representative For Futarchy is whoever has the best record on the local version of [Metaculus](https://www.metaculus.com/questions/?show-welcome=true).
The Representative For Gerontocracy is supposedly the oldest person in the country who is medically fit and willing to serve, but this has been so hard to sort out that in practice they are selected by the national retirees' special interest group from the pool of willing candidates above age 90.
The Representative For Minarchy is an honorary position usually bestowed upon a respected libertarian philosopher or activist. It doesn't really matter who holds it, because their only job is to vote "no" on everything, except things that are sneakily phrased so that "no" means more government, in which case they can vote "yes". If a Representative For Minarchy wants to vote their conscience, they may break this rule once, after which they must resign and be replaced by a new Representative.
The Representative For Republicanism is selected by the other eleven members of the council.
The Representative For Theocracy is the leader of the Governing Council, and gets not only her own vote but a special vote to break any ties. She is chosen at random from a lottery of all adult citizens, on the grounds that God may pick whoever He pleases to represent Himself.
Long ago, the twelfth Councilor was the Representative For Kratocracy (rule by the strongest). The Representative For Kratocracy was whoever was sitting in the Representative For Kratocracy's chair when a vote took place. This usually involved a lot of firefights and hostage situations, which was fine in principle – that was the whole point – except that the rest of the Governing Council kept getting caught in the crossfire. During the Nehanian Restoration, the Representative For Kratocracy's chair was moved to a remote uninhabited island, with the Representative permitted to vote by video-link, but environmentalist groups complained that the constant militia battles there were harming migratory birds. Finally, a petition was sent to the Oracle of Yaanek, asking what to do. The God recommended that the position be eliminated, and offered to decide who filled the newly vacated seat Himself; thus the beginning of the Representative For Theocracy.
The Constitution was never fully amended, so technically the position is still the Representative For Kratocracy, and technically anyone who kills the Representative For Theocracy can still take that seat and gain immense power. But for some reason everyone who tries this dies of completely natural causes just before their plan comes to fruition. Must be one of those coincidences.
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [fiction](https://slatestarcodex.com/tag/fiction/) | [190 Comments](https://slatestarcodex.com/2020/06/17/slightly-skew-systems-of-government/#comments)
[Open Thread 156.25 + Signal Boost For Steve Hsu](https://slatestarcodex.com/2020/06/16/open-thread-156-25/ "Permalink to Open Thread 156.25 + Signal Boost For Steve Hsu")
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Posted on [June 16, 2020](https://slatestarcodex.com/2020/06/16/open-thread-156-25/ "2:25 pm") by [a reader](https://slatestarcodex.com/author/bughimamborag/ "View all posts by a reader")
**\[UPDATE: As of 6/19, Professor Hsu [resigned as VP of Research](https://statenews.com/article/2020/06/michigan-state-vp-of-research-stephen-hsu-resigns). He still encourages interested people to sign the petition as a general gesture of support.\]**
Normally this would be a hidden thread, but I wanted to signal boost [this request for help](https://infoproc.blogspot.com/2020/06/support-freedom-of-ideas-and-inquiry-at.html) by Professor Steve Hsu, vice president of research at Michigan State University. Hsu is a friend of the blog and was a guest speaker at one of our recent online meetups – some of you might also have gotten a chance to meet him at [a Berkeley meetup](https://slatestarcodex.com/2019/01/02/bay-area-ssc-meetup-1-6/) last year. He and his blog [Information Processing](https://infoproc.blogspot.com/2009/09/some-favorite-posts.html) have also been instrumental in helping me and thousands of other people better understand genetics and neuroscience. If you've met him, you know he is incredibly kind, patient, and willing to go to great lengths to help improve people's scientific understanding.
Along with all the support he's given me personally, he's had an amazing career. He started as a theoretical physicist publishing work on black holes and quantum information. Then he transitioned into genetics, spent a while as scientific advisor to the Beijing Genomics Institute, and helped discover genetic prediction algorithms for gallstones, melanoma, heart attacks, and other conditions. Along with his academic work, he also sounded the alarm about the coronavirus early and has been helping shape the response.
This week, some students at Michigan State are trying to cancel him. They point to an interview he did on an alt-right podcast (he says he didn't know it was alt-right), to his allowing MSU to conduct research on police shootings (which concluded, like most such research, that they are generally not racially motivated), and to his occasional discussion of the genetics of race (basically just repeating the same "variance between vs. within clusters" distinction everyone else does, see eg [here](https://infoproc.blogspot.com/2008/01/no-scientific-basis-for-race.html)). You can read the case being made against him [here](https://twitter.com/GradEmpUnion/status/1270829003130261504), although keep in mind a lot of it is distorted and taken out of context, and you can read his response [here](https://infoproc.blogspot.com/2020/06/twitter-attacks-and-defense-of.html).
Professor Hsu will probably land on his feet whatever happens, but it would be a great loss for Michigan and its scientific community if he could no longer work with them; it would also have a chilling effect on other scientists who want to discuss controversial topics or engage with the public. If you support him, you can sign the petition to keep him on [here](https://docs.google.com/document/d/14n8AJuUpRooDJAZYdRIp2gdEkkKjmnOOH5FOJ8ESmRs/edit). If you are a professor or other notable person, your voice could be especially helpful, but anyone is welcome to sign regardless of credentials or academic status. See [here](https://infoproc.blogspot.com/2020/06/support-freedom-of-ideas-and-inquiry-at.html) for more information. He says that time is of the essence since activists are pressuring the college to make a decision right away while everyone is still angry.
This was supposed to be a culture-war-free open thread, but I guess the ship has sailed on that one, so, uh, just do your best, and I'll delete anything that needs deleting.
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [open](https://slatestarcodex.com/tag/open/) | [2,996 Comments](https://slatestarcodex.com/2020/06/16/open-thread-156-25/#comments)
[The Vision Of Vilazodone And Vortioxetine](https://slatestarcodex.com/2020/06/15/the-vision-of-vilazodone-and-vortioxetine/ "Permalink to The Vision Of Vilazodone And Vortioxetine")
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Posted on [June 15, 2020](https://slatestarcodex.com/2020/06/15/the-vision-of-vilazodone-and-vortioxetine/ "7:31 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
**I.**
One of psychiatry's many embarrassments is how many of our drugs get discovered by accident. They come from [random plants](https://en.wikipedia.org/wiki/Valproate#History) or [shiny rocks](https://en.wikipedia.org/wiki/Lithium#Medicine) or [stuff Alexander Shulgin invented to get high](https://slatestarcodex.com/2016/08/11/book-review-pihkal/).
But every so often, somebody tries to do things the proper way. Go over decades of research into what makes psychiatric drugs work and how they could work better. Figure out the hypothetical properties of the ideal psych drug. Figure out a molecule that matches those properties. Synthesize it and see what happens. This was the vision of vortioxetine and vilazodone, two antidepressants from the early 2010s. They were approved by the FDA, sent to market, and prescribed to millions of people. Now it's been enough time to look back and give them a fair evaluation. And…
…and it's been a good reminder of why we don't usually do this.
Enough data has come in to be pretty sure that vortioxetine and vilazodone, while effective antidepressants, are no better than the earlier medications they sought to replace. I want to try going over the science that led pharmaceutical companies to think these two drugs might be revolutionary, and then speculate on why they weren't. I'm limited in this by my total failure to understand several important pieces of the pathways involved, so I'll explain the parts I get, and list the parts I don't in the hopes that someone clears them up in the comments.
**II.**
SSRIs take about a month to work. This is surprising, because they start increasing serotonin immediately. If higher serotonin [is associated with](https://slatestarcodex.com/2015/04/05/chemical-imbalance/) improved mood, why the delay?
Most [research](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4048143/) focuses on presynaptic 5-HT1A autoreceptors, which detect and adjust the amount of serotonin being released in order to maintain homeostasis. If that sounds clear as mud to you, well, it did to me too the first twenty times I heard it. Here's the analogy which eventually worked for me:
Imagine you're a salesperson who teleconferences with customers all day. You're bad at projecting your voice – sometimes it's too soft, sometimes too loud. Your boss tells you the right voice level for sales is 60 decibels. So you put an audiometer on your desk that measures how many decibels your voice is. It displays an up arrow, down arrow, or smiley face, telling you that you're being too quiet, loud, or just right.
The audiometer is presynaptic (it's on your side of the teleconference, not your customer's side). It's an autoreceptor, not a heteroreceptor (you're using it to measure yourself, not to measure anything else). And you're using it to maintain homeostasis (to keep your voice at 60 dB).
Suppose you take a medication that stimulates your larynx and makes your voice naturally louder. As long as your audiometer's working, that medication will have no effect. Your voice will naturally be louder, but that means you'll see the down arrow on your audiometer more often, you'll speak with less force, and the less force and louder voice will cancel out and keep you at 60 dB. No change.
This is what happens with antidepressants. The antidepressant increases serotonin levels (ie sends a louder signal). But if the presynaptic 5-HT1A autoreceptors are intact, they tell the cells that the signal is too loud and they should release less serotonin. So they do, and now we're back where we started – ie depressed.
(Why don't the autoreceptors notice the original problem – that you have too little serotonin and are depressed – and work their magic there? Not sure. Maybe depression affects whatever sets the autoreceptors, and causes them to be set too low? Maybe the problem is with reception, not transmission? Maybe you have the right amount of serotonin, but you want excessive amounts of serotonin because that would fix a problem somewhere else? Maybe you should have paid extra for the premium model of presynaptic autoreceptor?)
So how come antidepressants work after a month? The only explanation I've heard is that the autoreceptors get "saturated", which is a pretty nonspecific term. I think it means that there's a _second_ negative feedback loop controlling the first negative feedback loop – the cell notices there is way more autoreceptor activity than expected and assumes it is producing too many autoreceptors. Over the course of a month, it stops producing more autoreceptors, the existing autoreceptors gradually degrade, and presynaptic autoreception stops being a problem. In our salesperson analogy, after a while you notice that your audiometer doesn't match your own perception of how much effort you're putting into speaking – no matter how quietly you feel like you're whispering, the audiometer just keeps saying you're yelling very loud. Eventually you declare it defective, throw it out, and just speak naturally. Now the larynx-stimulating medication _can_ make your voice louder.
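Here's a toy numerical version of that two-loop story, in the terms of the salesperson analogy; the setpoint, boost, and decay rate are invented and not meant to be physiologically calibrated.

```python
# Toy numerical version of the two nested feedback loops described above
# (setpoint, boost, and decay rate are invented, not physiological).
# The SSRI makes the raw signal louder; the autoreceptor cancels the boost
# at first, and the boost only comes through as autoreceptors degrade.

TARGET = 60.0         # the "60 dB" setpoint the autoreceptor defends
SSRI_BOOST = 20.0     # how much louder the drug makes the raw signal
DECAY_PER_DAY = 0.08  # fraction of autoreceptors lost per day once "saturated"

def effective_signal(day):
    autoreceptor_strength = (1 - DECAY_PER_DAY) ** day  # 1.0 at day 0, ~0.1 by day 28
    raw = TARGET + SSRI_BOOST
    # The autoreceptor pushes the signal back toward the setpoint in
    # proportion to how many autoreceptors are still listening.
    correction = autoreceptor_strength * (raw - TARGET)
    return raw - correction

for day in (0, 7, 14, 28):
    print(f"day {day:2d}: signal ~{effective_signal(day):.1f} dB")
# day 0: ~60.0 (boost fully cancelled) ... day 28: ~78.1 (boost mostly through)
```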
In the late '90s, some scientists wondered – what happens if you just block the 5-HT1A autoreceptors directly? At the very least, seems like you could make antidepressants work a month faster. Best case scenario, you can make antidepressants work _better_. Maybe after a month, the cells have lost _some_ confidence in the autoreceptors, but they're still keeping serotonin somewhere below the natural amount based on the anomalous autoreceptor reading. So maybe blocking the autoreceptors would mean a faster, better, antidepressant.
Luckily we already knew a chemical that could block these – pindolol. Pindolol is a blood-pressure-lowering medication, but by coincidence it also makes it into the brain and blocks this particular autoreceptor involved in serotonin homeostasis. So a few people started giving pindolol along with antidepressants. What happened? According to [some small unconvincing systematic reviews](https://ebmh.bmj.com/content/7/4/107), it did seem to kind of help make antidepressants work faster, but according to [some other small unconvincing meta-analyses](https://onlinelibrary.wiley.com/doi/abs/10.1002/hup.2465) it probably didn't make them work any better. It just made antidepressants go from taking about four weeks to work, to taking one or two weeks to work. It also gave patients dizziness, drowsiness, weakness, and all the other things you would expect from giving a blood-pressure-lowering medication to people with normal blood pressure. So people decided it probably wasn't worth it.
**III.**
And then there's buspirone.
Buspirone is generally considered a 5-HT1A agonist. The reality is a little more complicated; it's a full agonist at presynaptic autoreceptors, and a partial agonist at postsynaptic receptors (uh, imagine the customer on the other side of the teleconference also has an audiometer).
Buspirone has a weak anti-anxiety effect. It may also weakly increase sex drive, which is a nice contrast to SSRIs, which decrease (some would say "destroy") sex drive. Flibanserin, a similar drug, is FDA-approved as a sex drive enhancer.
Buspirone stimulates presynaptic autoreceptors, which should cause cells to release less serotonin. Since high serotonin levels (eg with SSRIs) decrease sex drive, it makes sense that buspirone should increase sex drive. So far, so good.
But why is buspirone anxiolytic? I have just read a dozen papers purporting to address this question, and they might as well have been written in Chinese for all I was able to get from them. [Some of them](https://www.biologicalpsychiatryjournal.com/article/S0006-3223\(09\)00392-8/pdf) just say that decreasing serotonin levels decreases anxiety, which would probably come as a surprise to anyone on SSRIs, anyone who does tryptophan depletion studies, anyone who measures serotonin metabolites in the spinal fluid, etc. Obviously it must be more complicated than this. But how? I can't find any explanation.
A few books and papers [take a completely different tack](https://books.google.com/books?id=h7lva61Muq4C&pg=PA331&lpg=PA331&dq=buspirone+desensitizes+presynaptic+autoreceptors&source=bl&ots=AyBkjBrZw0&sig=ACfU3U0ivAN_NSngnyZrCmLhRcmFsa_uCA&hl=en&sa=X&ved=2ahUKEwiFqfiyjoDqAhUXsJ4KHRGnDvMQ6AEwC3oECAwQAQ#v=onepage&q=buspirone%20desensitizes%20presynaptic%20autoreceptors&f=false) and argue that buspirone just does the same thing SSRIs do – desensitize the presynaptic autoreceptors until the cells ignore them – and then has its antianxiety action through stimulating postsynaptic receptors. But if it's doing the same thing as SSRIs, how come it has the opposite effect on sex drive? And how come there are some studies suggesting that it's helpful to add buspirone onto SSRIs? Why isn't that just doubling up on the same thing?
I am deeply grateful to SSC commenter Scchm for [presenting an argument that this is all wrong](https://slatestarcodex.com/2019/04/30/buspirone-shortage-in-healthcaristan-ssr/#comment-748930), and buspirone acts on D4 receptors, both in its anxiolytic and pro-sexual effects. This would neatly resolve the issues above. But then how come nobody else mentions this? How come everyone else seems to think buspirone makes sense, and writes whole papers about it without using the sentence "what the hell all of this is crazy"?
And here's one more mystery: after the pindolol studies, everyone just sort of started assuming buspirone would work the same way pindolol did. This doesn't really make sense pharmacologically – pindolol is an _antagonist_ at presynaptic 5-HT1A receptors; buspirone is an _agonist_ of same. And it doesn't actually work in real life – someone [did a study](https://sci-hub.tw/10.1016/0014-2999\(96\)00185-9), and the study found it didn't work. Still, and I have no explanation for this, people got excited about this possibility. If buspirone could work like pindolol, then we would have a chemical that made antidepressants work faster, _and_ treated anxiety, _and_ reduced sexual dysfunction.
And here's one _more_ mystery – okay, you have unrealistically high expectations for buspirone, fine, give people buspirone along with their SSRI. Lots of psychiatrists do this, it's not really my thing, but it's not a _bad_ idea. But instead, the people making this argument became obsessed with the idea of finding a single chemical that combined SSRI-like activity with buspirone-like activity. A dual serotonin-transporter-inhibitor and 5-HT1A partial agonist became a pharmacological holy grail.
**IV.**
After a lot of people in lab coats poured things from one test tube to another, Merck announced they had found such a chemical, which they called vilazodone (Viibryd®).
Vilazodone is an SSRI and 5-HT1A partial agonist. I can't find how its exact partial agonist profile differs from buspirone, except that it's more of a postsynaptic agonist, whereas buspirone is more of a postsynaptic antagonist. I don't know if this makes a difference.
The FDA approved vilazodone to treat depression, so it "works" in that sense. But does its high-tech promise pan out? Is it really faster-acting, anxiety-busting, and less likely to cause sexual side effects?
On a chemical level, things look promising. [This study](https://pubmed.ncbi.nlm.nih.gov/12183683/) finds that vilazodone elevates serotonin faster and higher than Prozac does in mice. And this is kind of grim, but [toxicologists have noticed](https://emcrit.org/toxhound/dont-pick-the-scab/) that vilazodone overdoses are much more likely to produce serotonin toxicity than Prozac overdoses, which fits what you would expect if vilazodone successfully breaks the negative feedback system that keeps serotonin in a normal range.
On a clinical level, maybe not. Proponents of vilazodone [got excited about](https://pubmed.ncbi.nlm.nih.gov/25470094/) a study where vilazodone showed effects as early as week two. But "SSRIs take four weeks to work" is a rule of thumb, not a natural law. You always get a couple of people who get some effect early on, and if your study population is big enough, that'll show up as a positive result. So you need to compare vilazodone to an SSRI directly. The only group I know who tried this, [Matthews et al](https://pubmed.ncbi.nlm.nih.gov/25500685/), found no difference – in fact, vilazodone was nonsignificantly slower ([relevant figure](https://pubmed.ncbi.nlm.nih.gov/25500685/#&gid=article-figures&pid=fig-1-uid-0)). There's no sign of vilazodone working any better either.
What about sexual side effects? Vilazodone does better than SSRIs [in rats](https://link.springer.com/article/10.1007/s00213-015-4198-1), but whatever. There's supposedly [a human study](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4457500/) – from the same Matthews et al team as above – but sexual side effects were so rare in all groups that it's hard to draw any conclusions. This is bizarre – they had a thousand patients, and only 15 reported decreased libido (and no more in the treatment groups than the placebo group). Maybe God just hates antidepressant studies and makes sure they never find anything, and this is just as true when you're studying side effects as it is when you're studying efficacy.
[Clayton et al](https://www.sciencedirect.com/science/article/abs/pii/S1743609515301466) do their own study of vilazodone's sexual side effects. They find that overall vilazodone improves sexual function over placebo, probably because they used a scale that was very sensitive to the kind of bad sexual function you get when you're depressed, and not as sensitive to the kind you get on antidepressants. But they did measure how many people who didn't start out with sexual dysfunction got it during the trial, and this number was 1% of the placebo group and 8% of the vilazodone group. How many people would have gotten dysfunction on an SSRI? We don't know because they didn't include an active comparator. Usually I expect about 30–50% of people to get sexual side effects on SSRIs, but that's based on me asking them and not on whatever strict criteria they use for studies. Remember, the Matthews study was able to find only 1.5% of people getting sexual side effects! So we shouldn't even try to estimate how this compares. All we can say is that vilazodone definitely doesn't have _no_ sexual side effects.
I can't find any studies evaluating vilazodone vs. anything else for anxiety, but I also can't find any patients saying vilazodone treated their anxiety especially well.
There's really only one clear and undeniable difference between vilazodone and ordinary SSRIs, which is that vilazodone costs $290 a month, whereas other SSRIs cost somewhere in the single digits (Lexapro costs $7.31). If you're paying for vilazodone, you can take comfort in knowing your money helped fund a pretty cool research program that had some interesting science behind it. But I'm not sure it actually panned out.
**V.**
Encouraged by Merck's success…
(not necessarily clinical success, success at getting people to pay $290 a month for an antidepressant)
…Takeda and Lundbeck announced their own antidepressant with 5-HT1A partial agonist action, vortioxetine. They originally gave it the trade name Brintellix®, but upon its US release people kept confusing it with the unrelated medication Brilinta®, so Takeda/Lundbeck agreed to change the name to Trintellix® for the American market.
Vortioxetine claimed to have an advantage over its competitor vilazodone, in that it also antagonized 5-HT3 receptors. 5-HT3 receptors are weird. They're the only ion-channel-based serotonin receptors, and they're not especially involved in mood or anxiety. They do only one thing, and they do it well: they make you really nauseous. If you've ever felt nauseous on an SSRI, 5-HT3 agonism is why. And if you've ever taken Zofran (ondansetron) for nausea, you've benefitted from its 5-HT3 antagonism. Most antidepressants potentially cause nausea; since vortioxetine also treats nausea, presumably you break even and are no more nauseous than you were before taking it. Also, there are complicated theoretical reasons to believe maybe 5-HT3 antagonism is kind of like 5-HT1A antagonism in that it speeds the antidepressant response.
After this Takeda and Lundbeck kind of just went crazy, claiming effects on more and more serotonin receptors. It's a 5-HT7 antagonist! (what is 5-HT7? No psychiatrist had ever given a second's thought to this receptor before vortioxetine came out, but apparently it…exists to make your cognition worse, so that blocking it makes your cognition better again?) It's a 5-HT1B partial agonist! (what is 5-HT1B? Apparently a [useful potential depression target](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5919989/), according to a half-Japanese, half-Scandinavian team, who report no conflict of interest even though vortioxetine is being sold by a consortium of a Japanese pharma company and a Scandinavian pharma company). It's a 5-HT1D antagonist! (really? There are four different kinds of 5-HT1 receptor? Are you sure you're not just making things up now?)
If we take all of this seriously, vortioxetine is an SSRI with faster mechanism of action, fewer sexual side effects, additional anti-anxiety effect, additional anti-nausea effect, plus it gives you better cognition (technically "relieves the cognitive symptoms of depression"). Is any of this at all true?
[A meta-analysis of 12 studies](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4409435/) finds vortioxetine has a statistically significant but pathetic effect size of 0.2 against depression, which is about average for antidepressants. In a few head-to-head comparisons with SNRIs (similar to SSRIs), vortioxetine treats depression about equally well. Patients are more likely to stop the SNRIs because of side effects than to stop the vortioxetine, but SNRIs probably have more side effects than SSRIs, so unclear if vortioxetine is better than those. [Wagner et al](https://sci-hub.tw/10.1016/j.jad.2017.11.056) are able to find a study comparing vortioxetine to the SSRI Paxil; they work about equally well.
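(For intuition about what an effect size of 0.2 buys you, here is the standard back-of-the-envelope conversion to a "probability of superiority"; this is just generic Cohen's-d arithmetic, not a figure reported by the meta-analysis itself.)

```python
# Back-of-the-envelope reading of a standardized effect size of d = 0.2
# (generic Cohen's-d arithmetic, not a number reported by the meta-analysis).
# With d = 0.2, a randomly chosen treated patient does better than a randomly
# chosen control patient only slightly more than half the time.
from scipy.stats import norm

d = 0.2
prob_superiority = norm.cdf(d / 2**0.5)  # P(treated outcome > control outcome)
print(f"probability of superiority: {prob_superiority:.1%}")  # ~55.6%
```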
What about the other claims? Weirdly, vortioxetine patients have [more nausea and vomiting](https://sci-hub.tw/10.1016/j.jad.2017.11.056) than venlafaxine patients, although it's not significant. [Other studies](https://www.tandfonline.com/doi/full/10.1080/24750573.2017.1338823#) confirm nausea is a pretty serious vortioxetine side effect. I have no explanation for this. Antagonizing 5-HT3 receptors does one thing – treats nausea and vomiting! – and vortioxetine definitely does this. It must be hitting some other unknown receptor really hard, so hard that the 5-HT3 antagonism doesn't counterbalance it. Either that, or it's the thing where God hates antidepressants again.
What about sexual dysfunction? [Jacobsen et al](https://www.sciencedirect.com/science/article/abs/pii/S1743609515344192) find that patients have slightly (but statistically significantly) less sexual dysfunction on vortioxetine than on escitalopram. But the study was done by Takeda, and the difference is so slight (a change of 8.8 points on a 60-point scale, vs. a change of 6.6 points) that it's hard to take it very seriously.
What about cognition? I was sure this was fake, but it seems to have more evidence behind it than anything else. [Carlat Report](https://www.thecarlatreport.com/the-carlat-psychiatry-report/trintellix-and-cognition-a-closer-look/) (paywalled) thinks it might be legit, based on a series of (Takeda-sponsored) studies of performance on the Digit Symbol Substitution Test. People on vortioxetine consistently did better on this test than people on placebo or duloxetine. And it wasn't just that being not-depressed helps you try harder; they did some complicated statistics and found that vortioxetine's test-score-improving effect was independent of its antidepressant effect (and seems to work at lower doses). What's the catch? The improvement was pretty minimal, and only shows up on this one test – various other cognitive tests are unaffected. So it's probably doing _something_ measurable, but it's not going to give you a leg up on the SAT. The FDA seriously considered approving it as indicated for helping cognition, but eventually decided against it on the grounds that if they approved it, people would think it was useful in real life, whereas all we know is that it's useful on this one kind of hokey test. Still, that's one hokey test more than vilazodone was ever able to show for itself.
In summary, vortioxetine probably treats depression about as well as any other antidepressant, but makes you slightly more nauseous, may (if you _really_ trust pharma company studies) give you slightly fewer sexual side effects, and may improve your performance on the Digit Symbol Substitution Test. It also costs $375 a month (Lexapro still costs $7). If you want to pay $368 extra to be a little more nauseous and substitute digits for symbols a little faster, this is definitely the drug for you.
**VI.**
In conclusion, big pharma spent about ten years seeing if combining 5-HT1A partial agonism with SSRI antidepressants led to any benefits. In the end, it didn't, unless you count benefits to big pharma's bottom line.
I'm a little baffled, because pharma companies generally don't waste money researching drugs unless they have very good theoretical reasons to think they'll work. But I can't make heads or tails of the theoretical case for 5-HT1A partial agonists for depression.
For one thing, there's a pretty strong argument that buspirone exerts its effects via dopamine rather than serotonin — a case that it seems like nobody, including the pharma companies, is even slightly aware of. If this were true, the whole project would have been doomed from the beginning. What happened here?
For another, I still don't get the supposed model for how buspirone even _could_ exert its effects through 5-HT1A. Does it increase or decrease serotonergic transmission? Does it desensitize presynaptic autoreceptors the same way SSRIs do, or do something else? I can't figure out a combination of answers to these questions that is consistent both internally and with the known effects of these drugs. Is there one?
For another, the case seems to have been premised on the idea that buspirone (a presynaptic 5-HT1A agonist) would work the same as pindolol (a presynaptic 5-HT1A antagonist), even after studies showed that it didn't. And then it combined that with an assumption that it was better to spend hundreds of millions of dollars discovering a drug that combined SSRI and buspirone-like effects, rather than just giving someone a pill of SSRI powder mixed with buspirone powder. Why?
I would be grateful if some friendly pharmacologist reading this were to comment with their take on these questions. This is supposed to be my area of expertise, and I have to admit I am stumped.
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [psychiatry](https://slatestarcodex.com/tag/psychiatry/) | [94 Comments](https://slatestarcodex.com/2020/06/15/the-vision-of-vilazodone-and-vortioxetine/#comments)
[Open Thread 156](https://slatestarcodex.com/2020/06/14/open-thread-156/ "Permalink to Open Thread 156")
--------------------------------------------------------------------------------------------------------
Posted on [June 14, 2020](https://slatestarcodex.com/2020/06/14/open-thread-156/ "4:30 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
This is the biweekly visible open thread (there are also hidden open threads twice a week you can reach through the Open Thread tab on the top of the page). Post about anything you want, but please try to avoid hot-button political and social topics. You can also talk at the **[SSC subreddit](https://www.reddit.com/r/slatestarcodex/)** — and also check out the **[SSC Podcast](https://linktr.ee/sscpodcast)**. Also:
**1.** Comment of the week: superkamiokande from the subreddit [explains the structural and computational differences between Wernicke's and Broca's areas](https://www.reddit.com/r/slatestarcodex/comments/h7egp3/wordy_wernickes/fumiaix/).
**2.** There's another SSC virtual meetup next week, guest speaker Robin Hanson. More information [here](https://cephalopods.blog/2020/04/12/ssc-lesswrong-meetup/).
**3.** As many areas reopen, local groups will have to decide whether or not to restart in-person meetups. I can't speak to other countries that may have things more under control, but in the US context, I am against this. Just because it's legal to hold medium-sized gatherings now doesn't mean it's a good idea. I would feel really bad if anyone became sick or spread the pandemic because of my blog. I don't control local groups, and they can do what they want, but I won't be advertising meetups on the blogroll until I feel like they're safe. Exceptions for East Asia, New Zealand, and anyone else who can convince me that their country is in the clear.
**4.** Some people have noticed that [my toxoplasma post](https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage/) seems disconfirmed by recent protests, which reached national scale even though the incident was very clear-cut and uncontroversial. I agree this is some negative evidence. The toxoplasma model was meant to be a tendency, not a 100% claim about how things always work. Certainly it is still mysterious in general why some outrageous incidents spark protests and other near-identical ones don't. I think it's relevant that everyone is in a bad place right now because of coronavirus (remember, just two months ago Marginal Revolution posted [When Will The Riots Begin?](https://marginalrevolution.com/marginalrevolution/2020/04/when-will-the-riots-begin.html)), and that 2020 is the peak of Turchin's fifty-year [cycle of conflict](https://slatestarcodex.com/2019/09/02/book-review-ages-of-discord/).
**5.** Speaking of protests, the open threads have been getting pretty intense lately. I realize some awful stuff has been going on, and emotions are really high, but I want everyone to take a deep breath and try to calm down a little bit before saying anything you'll regret later. I will be enforcing the usually-poorly-enforced ban on culture war topics in this thread with unrecorded deletions. I may or may not suspend the next one or two hidden threads to give everyone a chance to calm down. I hope everybody is staying safe and sane during these difficult times.
**6.** If you haven't already taken last week's nootropics survey, and you are an experienced user of nootropics, you can [take it now](https://slatestarcodex.com/2020/06/07/take-the-new-nootropics-survey/).
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [open](https://slatestarcodex.com/tag/open/) | [1,208 Comments](https://slatestarcodex.com/2020/06/14/open-thread-156/#comments)
[Wordy Wernicke's](https://slatestarcodex.com/2020/06/11/wordy-wernickes/ "Permalink to Wordy Wernicke's")
----------------------------------------------------------------------------------------------------------
Posted on [June 11, 2020](https://slatestarcodex.com/2020/06/11/wordy-wernickes/ "10:01 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
There are two major brain areas involved in language. To oversimplify, Wernicke's area in the superior temporal gyrus handles meaning; Broca's area in the inferior frontal gyrus handles structure and flow.
If a stroke or other brain injury damages Broca's area but leaves Wernicke's area intact, you get language which is meaningful, but not very structured or fluid. You sound like a caveman: "Want food!"
If it damages Wernicke's area but leaves Broca's area intact, you get speech which has normal structure and flow, but is meaningless. I'd read about this pattern in books, but I still wasn't prepared the first time I saw a video of a Wernicke's aphasia patient ([source](https://tactustherapy.com/what-is-fluent-aphasia-video/)).
During yesterday's discussion of GPT-3, a commenter mentioned how _alien_ it felt to watch something use language perfectly without quite making sense. I agree it's eerie, but it isn't some kind of inhuman robot weirdness. Any one of us is a railroad-spike-through-the-head away from doing the same.
Does this teach us anything useful about GPT-3 or neural networks? I lean towards no. GPT-3 already makes more sense than a Wernicke's aphasiac. Whatever it's doing is on a higher level than the Broca's/Wernicke's dichotomy. Still, it would be interesting to learn what kind of computational considerations caused the split, and whether there's any microstructural difference in the areas that reflects it. I don't know enough neuroscience to have an educated opinion on this.
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [ai](https://slatestarcodex.com/tag/ai/), [psychology](https://slatestarcodex.com/tag/psychology/) | [101 Comments](https://slatestarcodex.com/2020/06/11/wordy-wernickes/#comments)
[The Obligatory GPT-3 Post](https://slatestarcodex.com/2020/06/10/the-obligatory-gpt-3-post/ "Permalink to The Obligatory GPT-3 Post")
--------------------------------------------------------------------------------------------------------------------------------------
Posted on [June 10, 2020](https://slatestarcodex.com/2020/06/10/the-obligatory-gpt-3-post/ "6:24 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
**I.**
I would be failing my brand if I didn't write something about GPT-3, but I'm not an expert and discussion is still in its early stages. Consider this a summary of some of the interesting questions I've heard posed elsewhere, especially comments by [gwern](https://www.gwern.net/newsletter/2020/05) and [nostalgebraist](https://nostalgebraist.tumblr.com/post/619672884731904000/gpt-3-and-scaling-trends). Both of them are smart people who I broadly trust on AI issues, and both have done great work with GPT-2. Gwern has gotten it to [write poetry](https://slatestarcodex.com/2019/03/14/gwerns-ai-generated-poetry/), [compose music](https://www.gwern.net/GPT-2-music), and even [sort of play some chess](https://slatestarcodex.com/2020/01/06/a-very-unlikely-chess-game/); nostalgebraist has created [nostalgebraist-autoresponder](https://nostalgebraist-autoresponder.tumblr.com/) (a Tumblr written by GPT-2 trained on nostalgebraist's own Tumblr output). Both of them disagree pretty strongly on the implications of GPT-3. I don't know enough to resolve that disagreement, so this will be a kind of incoherent post, which will hopefully stimulate some more productive comments. So:
OpenAI has released a new paper, [Language Models Are Few-Shot Learners](https://arxiv.org/pdf/2005.14165.pdf), introducing GPT-3, the successor to the wildly-successful language-processing AI GPT-2.
GPT-3 doesn't have any revolutionary new advances over its predecessor. It's just _much_ bigger. GPT-2 had 1.5 billion parameters. GPT-3 has 175 billion. The researchers involved are very open about how it's the same thing but bigger. Their research goal was to test how GPT-like neural networks scale.
Before we get into the weeds, let's get a quick gestalt impression of how GPT-3 does compared to GPT-2.
Here's a sample of GPT-2 trying to write an article:
> **PROMPT: MILEY CYRUS WAS CAUGHT SHOPLIFTING FROM ABERCROMBIE AND FITCH ON HOLLYWOOD BOULEVARD TODAY**
>
> The 19-year-old singer was caught on camera being escorted out of the store by security guards.
>
> The singer was wearing a black hoodie with the label "Blurred Lines" on the front and "Fashion Police" on the back
>
> The singer was also wearing a pair of black-rimmed glasses, a black jacket, black jeans and black sandals.
>
> She was carrying a pair of black and white striped gloves and a small black bag.
And here's GPT-3 at the same task:
> **PROMPT: UNITED METHODISTS AGREE TO HISTORIC SPLIT — THOSE WHO OPPOSE GAY MARRIAGE WILL FORM THEIR OWN DENOMINATION**
>
> After two days of intense debate, the United Methodist Church has agreed to a historic split — one that is expected to end in the creation of a new denomination, one that will be "theologically and socially conservative," according to the Washington Post. The majority of delegates attending the church's annual General Conference in May voted to strengthen a ban on the ordination of LGBTQ clergy and to write new rules that will "discipline" clergy who officiate at same-sex weddings. But those who opposed these measures have a new plan: They say they will form a separate denomination by 2020, calling their church the Christian Methodist denomination.
>
> The Post notes that the denomination, which claims 12.5 million members, was in the early 20th century the "largest Protestant denomination in the US", but that it has been shrinking in recent decades. The new split will be the second in the church's history. The first occurred in 1968, when roughly 10% of the denomination left to form the Evangelical United Brethren Church. The Post notes that the proposed split "comes at a critical time for the church, which has been losing members for years," which has been "pushed toward the brink of a schism over the role of LGBTQ people in the church." Gay marriage is not the only issue that has divided the church. In 2016, the denomination was split over ordination of transgender clergy, with the North Pacific regional conference voting to ban them from serving as clergy, and the South Pacific regional conference voting to allow them.
In case you're wondering, no, the Methodist Church did not really split from the Evangelical United Brethren Church in 1968, it _merged with it_ in that year. GPT-3 is doing what I used to do on essay questions — throw out a bunch of half-remembered names and dates and hope nobody looks too closely at the exact relations.
These kinds of comparisons are inherently fuzzy, since both GPTs have produced a lot of material and it's easy to cheat by taking better or worse samples of one or the other. You can see more GPT-2 samples [here](https://openai.com/blog/better-language-models/) and more GPT-3 samples [here](https://read-the-samples.netlify.app/).
Or compare some of GPT-2's poetry:
> The gods are they who came to earth
> And set the seas ablaze with gold.
> There is a breeze upon the sea,
> A sea of summer in its folds,
> A salt, enchanted breeze that mocks
> The scents of life, from far away
> Comes slumbrous, sad, and quaint, and quaint.
> The mother of the gods, that day,
> With mortal feet and sweet voice speaks,
> And smiles, and speaks to men: "My Sweet,
> I shall not weary of thy pain."
>
> …Let me drink of the wine of pain
> And think upon the agonies of hope,
> And of the blessed Giver of all good things;
> For, man for man, mine is the deepest love
> That sorrow takes upon the humblest soul;
> But who hath learned how sorrow turns to gall
> The places where my feet have trod before.
…to some of GPT-3's:
> I must have shadows on the way
> If I am to walk I must have
> Each step taken slowly and alone
> To have it ready made
>
> And I must think in lines of grey
> Must have dim thoughts to be my guide
> Must look on blue and green
> And never let my eye forget
> That color is my friend
> And purple must surround me too
>
> The yellow of the sun is more more
> Intusive than the bluish snow
> That falls on all of us. I must have
> Grey thoughts and blue thoughts walk with me
> If I am to go away at all.
Ignore the style differences — the GPT-2 instance was trained on 18th-century verse, GPT-3 on Wallace Stevens. I notice that the first poem, while flowery, is kind of incoherent. The second has more of a structure and a theme that it maintains throughout. I'm not sure it really symbolizes anything interesting, but I've never been sure of that with real Wallace Stevens either, so good work.
(you can read more GPT-2 poetry [here](https://www.gwern.net/docs/ai/poetry/2019-03-06-gpt2-poetry-prefix-1000samples.txt). There's not much more public GPT-3 poetry — you can read one or two more samples in the paper linked above).
Enough writing. What about math? Last year, I posted the following excerpt from GPT-2:
> **PROMPT: MY 10 FAVORITE ANIMALS ARE:**
>
> My ten favorite animals are:
> 1\. Zebras with a white scar on the back
> 2\. Insiduous spiders and octopus
> 3\. Frog with large leaves, hopefully black
> 4\. Cockatiel with scales
> 5\. Razorbill with wings hanging about 4 inches from one's face and a heart tattoo on a frog
> 3\. Cockatric interlocking tetrabods that can be blind, cut, and eaten raw:
> 4\. Black and white desert crocodiles living in sunlight
> 5\. Zebra and many other pea bugs
Despite the obvious flaws in this piece, I was impressed. GPT-2 was clearly trying to make a numbered list, and almost kind of getting it right! It counted to 4 successfully! Remember, this is a text prediction engine that didn't necessarily need to have any concept of numbers. But it still kind of counted to 4! I wrote:
> Imagine you prompted the model with "What is one plus one?" I actually don't know how it would do on this problem. I'm guessing it would answer "two", just because the question probably appeared a bunch of times in its training data.
>
> Now imagine you prompted it with "What is four thousand and eight plus two thousand and six?" or some other long problem that probably didn't occur exactly in its training data. I predict it would fail, because this model can't count past five without making mistakes. But I imagine a very similar program, given a thousand times more training data and computational resources, would succeed. It would notice a pattern in sentences including the word "plus" or otherwise describing sums of numbers, it would figure out that pattern, and it would end up able to do simple math. I don't think this is too much of a stretch given that GPT-2 learned to count to five and acronymize words and so on.
I said "a very similar program, given a thousand times more training data and computational resources, would succeed \[at adding four digit numbers\]". Well, GPT-3 is a very similar program with a hundred times more computational resources, and…it can add four-digit numbers! At least sometimes, which is better than GPT-2's "none of the time".
**II.**
In fact, let's take a closer look at GPT-3's math performance.
The 1.3 billion parameter model, equivalent to GPT-2, could get two-digit addition problems right less than 5% of the time — little better than chance. But for whatever reason, once the model hit 13 billion parameters, its addition abilities improved to 60% — the equivalent of a D student. At 175 billion parameters, it gets an A+.
What does it mean for an AI to be able to do addition, but only inconsistently? For four digit numbers, but not five digit numbers? Doesn't it either understand addition, or not?
Maybe it's cheating? Maybe there were so many addition problems in its dataset that it just memorized all of them? I don't think this is the answer. There are about 100 million possible 4-digit addition problems; it seems unlikely that GPT-3 saw that many of them. Also, if it were memorizing its training data, it should have gotten all of the roughly 8,000 possible two-digit multiplication problems right, but it only has about a 25% success rate on those. So it can't be using a lookup table.
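A quick sanity check on those counts (my arithmetic, not the paper's; counting ordered pairs of genuinely four-digit and two-digit numbers):

```python
# How many distinct problems exist in each class?
four_digit = range(1000, 10_000)   # 9,000 four-digit numbers
two_digit = range(10, 100)         # 90 two-digit numbers

print(len(four_digit) ** 2)   # 81,000,000 four-digit addition problems (~100 million)
print(len(two_digit) ** 2)    # 8,100 two-digit multiplication problems
```

Eighty-one million is far too many problems to have all shown up in the training data; eight thousand is not, which is why the 25% score on two-digit multiplication is the more telling number.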
Maybe it's having trouble _locating_ addition rather than _doing_ addition? (thanks to [nostalgebraist](https://www.lesswrong.com/posts/ZHrpjDc3CepSeeBuE/gpt-3-a-disappointing-paper#1_2__On__few_shot_learning_) for this framing). This sort of seems like the lesson of Table 3.9:
"Zero-shot" means you just type in "20 + 20 = ?". "One-shot" means you give it an example first: "10 + 10 = 20. 20 + 20 = ?" "Few-shot" means you give it as many examples as it can take. Even the largest and best model does only a mediocre job on the zero-shot task, but it does better on the one-shot and best on the few-shot. So it seems like if you remind it what addition is a couple of times before solving an addition problem, it does better. This suggests that there is a working model of addition somewhere within the bowels of this 175 billion parameter monster, but it has a hard time drawing it out for any particular task. You need to tell it "addition" "we're doing addition" "come on now, do some addition!" up to fifty times before it will actually deploy its addition model for these problems, instead of some other model. Maybe if you did this five hundred or five thousand times, it would excel at the problems it can't do now, like adding five digit numbers. But why should this be so hard? The plus sign almost always means addition. "20 + 20 = ?" is not some inscrutable hieroglyphic text. It basically always means the same thing. Shouldn't this be easy?
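For concreteness, here is roughly what those three settings look like as raw text handed to the model. This is a minimal sketch of my own, not the paper's actual evaluation harness; the only thing that changes between zero-, one-, and few-shot is how many solved examples you prepend.

```python
import random

def make_addition_prompt(a, b, n_examples=0, seed=0):
    """Build a zero-shot (n_examples=0), one-shot (1), or few-shot (K)
    addition prompt; the model is asked to complete the final line."""
    rng = random.Random(seed)
    lines = []
    for _ in range(n_examples):
        x, y = rng.randint(10, 99), rng.randint(10, 99)
        lines.append(f"{x} + {y} = {x + y}")   # a solved example
    lines.append(f"{a} + {b} =")               # the actual question, left unfinished
    return "\n".join(lines)

print(make_addition_prompt(20, 20))                  # zero-shot: just "20 + 20 ="
print(make_addition_prompt(20, 20, n_examples=50))   # few-shot: fifty solved examples first
```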
When I prompt GPT-2 with addition problems, the most common failure mode is getting an answer that isn't a number. Often it's a few paragraphs of text that look like they came from a math textbook. It feels like it's been able to locate the problem as far as "you want the kind of thing in math textbooks", but not as far as "you want the answer to the exact math problem you are giving me". This is a surprising issue to have, but so far AIs have been nothing if not surprising. Imagine telling Marvin Minsky or someone that an AI smart enough to write decent poetry would not necessarily be smart enough to know that, when asked "325 + 504", we wanted a numerical response!
Or maybe that's not it. Maybe it has trouble getting math problems right consistently for the same reason _I_ have trouble with this. In fact, GPT-3's performance is very similar to mine. I can also add two digit numbers in my head with near-100% accuracy, get worse as we go to three digit numbers, and make no guarantees at all about four-digit. I also find multiplying two-digit numbers in my head much harder than adding those same numbers. What's my excuse? Do I understand addition, or not? I used to assume my problems came from limited short-term memory, or from neural noise. But GPT-3 shouldn't have either of those issues. Should I feel a deep kinship with GPT-3? Are we both [minds heavily optimized for writing](https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/), forced by a cruel world to sometimes do math problems? I don't know.
\[EDIT: an alert reader points out that when GPT-3 fails at addition problems, it fails in human-like ways — for example, forgetting to carry a 1.\]
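To see what "forgetting to carry a 1" looks like concretely, here is a toy digit-by-digit adder of my own with the carry propagation switched off; it is just an illustration of the failure mode, not a claim about GPT-3's internals.

```python
def add_without_carries(a, b):
    """Add two non-negative integers digit by digit, dropping every carry,
    the kind of slip a careless human (or, apparently, GPT-3) makes."""
    result, place = 0, 1
    while a or b:
        digit = (a % 10 + b % 10) % 10   # keep the units digit, forget the carry
        result += digit * place
        a, b, place = a // 10, b // 10, place * 10
    return result

print(1348 + 2785)                       # 4133, the right answer
print(add_without_carries(1348, 2785))   # 3023, a very human-looking wrong answer
```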
**III.**
GPT-3 is, fundamentally, an attempt to investigate scaling laws in neural networks. That is, if you start with a good neural network, and make it ten times bigger, does it get smarter? How much smarter? Ten times smarter? Can you keep doing this forever until it's infinitely smart or you run out of computers, whichever comes first?
So far the scaling looks logarithmic — a consistent multiplication of parameter number produces a consistent gain on the benchmarks.
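In other words, benchmark score grows roughly linearly in the logarithm of the parameter count: each 10x jump in size buys about the same number of points. A toy sketch of that shape (the coefficients below are invented for illustration, not fit to the paper's curves):

```python
import math

def toy_benchmark_score(n_params, a=20.0, b=5.0):
    """Toy scaling law: score = a + b * log10(parameters).
    The constants a and b are made up for illustration, not fit to GPT-3."""
    return a + b * math.log10(n_params)

for n in (1.5e9, 13e9, 175e9, 1.75e12):
    print(f"{n:9.2e} params -> score {toy_benchmark_score(n):.1f}")
# Every ~10x increase in parameters adds about the same number of points,
# which is the "consistent multiplication, consistent gain" pattern.
```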
Does that mean it really is all about model size? Should something even bigger than GPT-3 be better still, until eventually we have things that can do all of this stuff arbitrarily well without any new advances?
This is where my sources diverge. Gwern says yes, probably, and points to years of falsified predictions where people said that scaling might have worked so far, but definitely wouldn't work past _this_ point. Nostalgebraist says maybe not, and points to decreasing returns of GPT-3's extra power on certain benchmarks (see Appendix H) and to [this OpenAI paper](https://nostalgebraist.tumblr.com/post/619511470655504384/the-gpt-3-paper-cites-another-recent-jan-2020), which he interprets as showing that scaling should break down somewhere around or just slightly past where GPT-3 is. If he's right, GPT-3 might be around the best that you can do just by making GPT-like things bigger and bigger. He also points out that although GPT-3 is impressive as a general-purpose reasoner that has taught itself things without being specifically optimized to learn them, it's often worse than task-specifically-trained AIs at various specific language tasks, so we shouldn't get too excited about it being close to superintelligence or anything. I guess in retrospect this is obvious — it's cool that it learned how to add four-digit numbers, but calculators have been around a long time and can add much longer numbers than that.
If the scaling laws don't break down, what then?
GPT-3 is very big, but it's not pushing the limits of how big an AI it's possible to make. If someone rich and important like Google wanted to make a much bigger GPT, they could do it.
> GPT-3 is terrifying because it's a tiny model compared to what's possible, trained in the dumbest way possible on a single impoverished modality on tiny data, yet the first version already manifests crazy runtime meta-learning—and the scaling curves _still_ are not bending! [https://t.co/hQbW9znm3x](https://t.co/hQbW9znm3x)
>
> — Gwern (@gwern) [May 31, 2020](https://twitter.com/gwern/status/1267215588214136833?ref_src=twsrc%5Etfw)
Does "terrifying" sound weirdly alarmist here? I think the argument is something like this. In February, we watched as the number of US coronavirus cases went from 10ish to 50ish to 100ish over the space of a few weeks. We didn't panic, because 100ish was still a very low number of coronavirus cases. In retrospect, we should have panicked, because the number was constantly increasing, showed no signs of stopping, and simple linear extrapolation suggested it would be somewhere scary very soon. After the number of coronavirus cases crossed 100,000 and 1,000,000 at exactly the time we could have predicted from the original curves, we all told ourselves we _definitely_ wouldn't be making that exact same mistake again.
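(The "extrapolation" here is linear on a log scale, i.e. constant-ratio growth. A minimal sketch with invented numbers, purely to show the shape of the argument:)

```python
# Suppose cases went roughly 10 -> 100 over two weeks, as in February.
# Constant-ratio growth from there; the numbers are invented for illustration.
start, end, weeks_observed = 10, 100, 2
weekly_ratio = (end / start) ** (1 / weeks_observed)   # ~3.16x per week

cases = float(end)
for week in range(1, 9):
    cases *= weekly_ratio
    print(f"week +{week}: ~{cases:,.0f} cases")
# Crosses 100,000 about six weeks out and 1,000,000 about eight weeks out,
# which is the sort of thing the original curve already implied.
```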
It's always possible that the next AI will be the one where the scaling curves break and it stops being easy to make AIs smarter just by giving them more computers. But unless something surprising like that saves us, we should assume GPT-like things will become much more powerful very quickly.
What would much more powerful GPT-like things look like? They can already write some forms of text at near-human level (in the paper above, the researchers asked humans to identify whether a given news article had been written by a human reporter or GPT-3; the humans got it right 52% of the time).
So one very conservative assumption would be that a smarter GPT would do better at various arcane language benchmarks, but otherwise not be much more interesting — once it can write text at a human level, that's it.
Could it do more radical things like write proofs or generate scientific advances? After all, if you feed it thousands of proofs, and then prompt it with a theorem to be proven, that's a text prediction task. If you feed it physics textbooks, and prompt it with "and the Theory of Everything is…", that's also a text prediction task. I realize these are wild conjectures, but the last time I made a wild conjecture, it was "maybe you can learn addition, because that's a text prediction task" and that one came true within two years. But my guess is still that this won't happen in a meaningful way anytime soon. GPT-3 is much better at writing coherent-sounding text than it is at any kind of logical reasoning; remember it still can't add 5-digit numbers very well, get its Methodist history right, or consistently figure out that a plus sign means "add things". Yes, it can do simple addition, but it has to use supercomputer-level resources to do so — it's so inefficient that it's hard to imagine even very large scaling getting it anywhere useful. At most, maybe a high-level GPT could write a plausible-sounding Theory Of Everything that uses physics terms in a vaguely coherent way, but that falls apart when a real physicist examines it.
Probably we can be pretty sure it won't take over the world? I have a hard time figuring out how to turn world conquest into a text prediction task. It could probably imitate a human writing a plausible-sounding _plan_ to take over the world, but it couldn't implement such a plan (and would have no desire to do so).
For me the scary part isn't the much larger GPT we'll probably have in a few years. It's the discovery that even very complicated AIs get smarter as they get bigger. If someone ever invented an AI that did do more than text prediction, it would have a pretty fast takeoff, going from toy to superintelligence in just a few years.
Speaking of which — can anything based on GPT-like principles ever produce superintelligent output? How would this happen? If it's trying to mimic what a human can write, then no matter how intelligent it is "under the hood", all that intelligence will only get applied to becoming better and better at predicting what kind of dumb stuff a normal-intelligence human would say. In a sense, solving the Theory of Everything would be a failure at its primary task. No human writer would end the sentence "the Theory of Everything is…" with anything other than "currently unknown and very hard to figure out".
But if our own brains are also prediction engines, how do _we_ ever create things smarter and better than the ones we grew up with? I can imagine scientific theories being part of our predictive model rather than an output of it — we use the theory of gravity to predict how things will fall. But what about new forms of art? What about thoughts that have never been thought before?
And how many parameters does the adult human brain have? The responsible answer is that brain function doesn't map perfectly to neural net function, and even if it did we would have no idea how to even begin to make this calculation. The irresponsible answer is [a hundred trillion](https://blog.piekniewski.info/2018/08/28/fun-numbers-about-the-brain/). That's a big number. But at the current rate of GPT progress, a GPT will have that same number of parameters somewhere between GPT-4 and GPT-5. Given the speed at which OpenAI works, that should happen about two years from now.
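The arithmetic behind "somewhere between GPT-4 and GPT-5": GPT-2 to GPT-3 was roughly a 100x jump in parameter count, and two more jumps of that size straddle the hundred-trillion figure. A quick check, extrapolating the post's own numbers rather than forecasting anything OpenAI has actually announced:

```python
gpt2_params = 1.5e9
gpt3_params = 175e9
jump = gpt3_params / gpt2_params       # ~117x per generation

gpt4_guess = gpt3_params * jump        # ~2.0e13 parameters
gpt5_guess = gpt4_guess * jump         # ~2.4e15 parameters
brain = 1e14                           # the "irresponsible" hundred-trillion figure

# The brain's count lands between the two naive extrapolations.
print(gpt4_guess < brain < gpt5_guess)   # True
```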
I am definitely not predicting that a GPT with enough parameters will be able to do everything a human does. But I'm really interested to see what it _can_ do. And we'll find out soon.
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [ai](https://slatestarcodex.com/tag/ai/) | [263 Comments](https://slatestarcodex.com/2020/06/10/the-obligatory-gpt-3-post/#comments)
[Take The New Nootropics Survey](https://slatestarcodex.com/2020/06/07/take-the-new-nootropics-survey/ "Permalink to Take The New Nootropics Survey")
-----------------------------------------------------------------------------------------------------------------------------------------------------
Posted on [June 7, 2020](https://slatestarcodex.com/2020/06/07/take-the-new-nootropics-survey/ "10:52 pm") by [Scott Alexander](https://slatestarcodex.com/author/admin/ "View all posts by Scott Alexander")
A few years ago I surveyed nootropics users about their experiences with different substances and posted [the results](https://slatestarcodex.com/2016/03/01/2016-nootropics-survey-results/) here. Since then lots of new nootropics have come out, so I'm doing it again. If you have nootropics experience, please take **[The 2020 SSC Nootropics Survey](https://forms.gle/6yLP6qU818hRSgUc7)**. Expected completion time is ~15 minutes.
Thanks!
Posted in [Uncategorized](https://slatestarcodex.com/category/uncategorized/) | Tagged [nootropics](https://slatestarcodex.com/tag/nootropics/), [survey](https://slatestarcodex.com/tag/survey/) | [20 Comments](https://slatestarcodex.com/2020/06/07/take-the-new-nootropics-survey/#comments)