WH #10: I'm changing things up
Square pegs do not fit in round holes
This will be the last newsletter I send from the Substack platform. I’ve decided to switch to Beehiiv for one feature Substack is missing: the ability to create a cool referral program with rewards and incentives. I’m going to attempt to make it as fun as I can. You know…after 4 consecutive weeks of no growth, it’s clear a square peg doesn’t fit in a round hole…so I have to stop doing the same thing and try something else.
Instead of the usual giant dump of hyperlinks I come across, this week will simply be one video interview with Sam Altman, CEO of OpenAI. You can still find all of the cool AI stuff I come across in the bookmark vault.
Next week, be sure to look out on Monday for the email subject line that starts with “WH #11,” in case the new platform stashes the email outside of your inbox. The regular newsletter format will resume then.
What I watched this week
Summarized by ChatGPT
This is a conversation between Lex Fridman and Sam Altman, the CEO of OpenAI, a research company focused on advancing artificial intelligence in a safe and beneficial manner. The conversation covers a wide range of topics, including the history of OpenAI, the development of GPT-4, the possibilities and dangers of AI, the distinction between tools and creatures, and the challenges of managing a talented team in the AI field.
Altman discusses the early days of OpenAI and the skepticism the company faced when it announced its goal of working on AGI (artificial general intelligence) at the end of 2015. He talks about the pettiness and rancor in the field at that time and how OpenAI and DeepMind were among the few brave enough to talk about AGI in the face of mockery. He notes that they do not get mocked as much now, but emphasizes the importance of having conversations about the power of AI and the need for checks and balances to ensure it is aligned with human values.
Altman then goes on to discuss GPT-4, a language model that he believes will be looked back on as a very early form of AI, much like the earliest computers. He notes that while GPT-4 is slow, buggy, and doesn’t do everything well, it represents a significant breakthrough in the history of AI, much as those early computers did.
Altman also discusses the reinforcement learning component of GPT-4, called RLHF (reinforcement learning from human feedback), which uses human feedback to align the model with what humans want it to do. He explains that while the base model can do many things, it’s not very useful without RLHF. He also talks about the process of designing a great prompt to steer GPT-4 and emphasizes the need for OpenAI to be heavily involved and responsible in the process.
The conversation then turns to the possibilities and dangers of AI, with Altman noting that we stand on the precipice of fundamental societal transformation where the collective intelligence of the human species may pale in comparison to the general superintelligence of AI systems. He notes that while this is exciting because of the many applications it will empower, it is also terrifying because of the power that superintelligent AGI wields, which could destroy human civilization intentionally or unintentionally.
Altman emphasizes the importance of AI being aligned with human values and not hurting or limiting humans. He also discusses the need to educate people about the distinction between tools and creatures, noting that projecting “creatureness” onto a tool can make it more usable but is also dangerous.
The conversation touches on various other topics, such as AI economics, politics, and the psychology of engineers and leaders in the AI field. Altman also talks about his role as CEO of OpenAI and the challenges of managing a talented team in the AI field, including the importance of hiring the right people and putting a ton of effort into the process.