Welcome to this week’s Market Pulse, your five-minute update on key market news and events, with takeaways and insights from the Sidekick Investment Team.
Our stories this week:
Read the full Market Pulse below, or if you want to access it on the go, download the Sidekick app.
Adrian (Portfolio Manager), and the rest of the Sidekick team.
It’s important to note that the content of this Market Pulse is based on current public information which we consider to be reliable and accurate. It represents Sidekick’s view only and does not represent investment advice - investors should not take decisions to trade based on this information.
A fiduciary is a person who occupies a position of such power and confidence regarding the property of another that the law requires them to act solely in the interest of the person they represent [1]. Examples of fiduciaries include executors and administrators, trustees, and… company boards of directors, such as that of OpenAI, the company behind ChatGPT. But things get complicated when the interest to be protected is "humanity," as in the case of OpenAI, a non-profit organisation, albeit one with a capped-profit arm [2].
In a for-profit company, things are straightforward: everyone answers to the board, which is appointed by investors who usually have financial returns on their minds. OpenAI, by contrast, answers to its ultimate decision-making body, an independent non-profit board of six directors who own no equity in the OpenAI entities and who, broadly speaking, appoint themselves. They answer to their own consciences, not to any investors.
Furthermore, OpenAI was deliberately structured to resist the values that drive much of the tech industry in Silicon Valley—a relentless pursuit of scale and a build-first, ask-questions-later approach to launching consumer products.
So, last Friday, when OpenAI's non-profit board, the ultimate decision-maker, ousted co-founder and star CEO Sam Altman, citing a lack of consistent candour that hindered their oversight [3], the Board presumably acted in good faith. They expressed concerns about Altman's aggressive push to commercialise products like ChatGPT, fearing it could escalate the risk of creating a rogue AI with potentially catastrophic consequences for humanity [4].
Microsoft was caught off guard, informed of Sam Altman's dismissal with only a few minutes' warning [5], despite its substantial $13 billion investment in OpenAI's capped-profit arm. However, the Board retains ultimate authority and is not obliged to consider the interests of investors or employees. Its primary responsibility is to safeguard humanity's best interests.
The problem is that although the board acted in good faith, safeguarding humanity is capital-intensive. It requires lots of computing power and expensive engineers. And although OpenAI's operating agreement explicitly warns investors that "It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation," nobody expected them to mean it.
The saga is far from over, and OpenAI's future hangs in the balance. As we write this, there is an agreement in principle for Altman to return [6]. A significant governance overhaul is needed to ensure the organisation's long-term viability and adherence to its altruistic mission. Only then can OpenAI regain the trust of its stakeholders and navigate the complex challenges that lie ahead.
Given that AGI could lead to extinction, according to experts [7], a reasonable question is: How much time do we have left?
For those unfamiliar with AGI, it stands for Artificial General Intelligence, a hypothetical level of AI that matches human intelligence. Currently, AI systems are specialised, excelling in specific tasks like chess or text comprehension. However, no AI can replicate the human ability to perform a wide range of tasks with equal proficiency.
Predicting the arrival of AGI is challenging due to two key unknowns: our current distance from achieving AGI and the future pace of AI research advancements. While significant progress has been made, the path to AGI remains uncertain.
Rupert Macey-Dare from Oxford University took on the challenge and employed a simple exponential growth model to estimate the timeline for the emergence of AGI [8]. Exponential growth models assume that a quantity increases at a constant rate over time, providing a helpful framework for analysing trends. But they should be taken with a pinch of salt as the development is likely to be influenced by various factors, including breakthroughs in AI research, availability of computational resources, and societal considerations.
Nevertheless, he assumed that we are currently between 5% and 30% of the way towards achieving AGI and that AI capabilities increase by 10% to 100% every two years. Based on these assumptions, Macey-Dare estimated that AGI could arrive between 2032 and 2048, with the most likely date being 2041. This implies that AGI could be reached within 18 years. However, in a worst-case scenario (or best-case, depending on one's perspective), AGI could be achieved as early as 2028, just five years from now.
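The shape of this estimate can be sketched with a back-of-the-envelope calculation. This is a simplification of Macey-Dare's model, not his exact method: we just assume current capability is a fraction of the AGI level and multiplies by a fixed factor every two years, then solve for the crossing time.

```python
import math

def years_to_agi(progress: float, biennial_growth: float) -> float:
    """Years until capability reaches the AGI level, assuming capability
    starts at `progress` (a fraction of AGI, 0 < progress < 1) and
    multiplies by (1 + biennial_growth) every two years."""
    # Solve: progress * (1 + g)^(t / 2) = 1  for t.
    return 2 * math.log(1 / progress) / math.log(1 + biennial_growth)

# Sweep the assumption ranges quoted above: 5%-30% of the way there,
# capabilities growing 10%-100% every two years.
for p in (0.05, 0.30):
    for g in (0.10, 1.00):
        print(f"progress={p:.0%}, growth={g:.0%}/2yr "
              f"-> AGI in ~{years_to_agi(p, g):.0f} years")
```

The most aggressive combination (30% progress, capabilities doubling every two years) lands within a handful of years, consistent with the 2028 early-arrival scenario. Note that this naive sweep produces a wider spread than the paper's quoted 2032–2048 range; the paper's estimates presumably aggregate across the assumptions rather than taking raw extremes.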
Despite the uncertainties and assumptions in the methodology, Macey-Dare's work underscores the potential for AGI to arrive sooner than many experts have predicted. This raises important questions about the societal implications and the need to proactively consider its potential impact on our lives.
There are many ways to reduce the wealth gap, but Altimeter Capital's CEO Brad Gerstner has a unique proposal: give every newborn an investment account. He's working with lawmakers to promote a legislative program known as Invest America that would create an investing account seeded with $1,000 for each of the 3.7 million children born each year in the U.S. [9].
Gerstner argues that a $3.7 billion yearly expenditure by the federal government is less than 0.1% of the national budget and would make everyone an equity owner. Most importantly, it would allow everyone to take advantage of the 8th wonder of the world: compounding.
Indeed, assuming consistent contributions, an account with a $1,000 initial deposit, a $200 monthly contribution and an 8% average annual return would grow to $310,995 by the time the account holder is 30 years old [10]. Furthermore, employers could match individual contributions to increase those returns further. But even without any contributions, the $1,000 could grow to $192,959 by the time they reach retirement age.
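These figures can be reproduced with a month-by-month simulation. The compounding setup is our assumption from reverse-engineering the quoted numbers: monthly compounding at 8%/12, contributions at the start of each month, and retirement at age 66 for the no-contribution case (the article does not state the age behind $192,959).

```python
def future_value(initial: float, monthly: float, annual_rate: float,
                 years: int) -> float:
    """Grow `initial` with a `monthly` deposit at the start of each
    month, compounding at annual_rate / 12 per month."""
    balance = initial
    monthly_rate = annual_rate / 12
    for _ in range(years * 12):
        balance = (balance + monthly) * (1 + monthly_rate)
    return balance

# $1,000 seed + $200/month at 8% for 30 years -> roughly $311,000
print(round(future_value(1000, 200, 0.08, 30)))

# Seed alone, no contributions, to an assumed retirement age of 66
# -> roughly $193,000
print(round(future_value(1000, 0, 0.08, 66)))
```

The loop makes the mechanics of compounding visible: each month's growth applies to all previous deposits and their accumulated interest, which is why the final balance dwarfs the $73,000 actually deposited over 30 years.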
Critics argue that custodial brokerage accounts and wrappers already allow parents to invest for their children with tax benefits. However, Gerstner contends that those benefiting from these tools are often already building wealth. To bridge this gap, he envisions a collaboration between private businesses and the government, aiming to foster a more inclusive system.
Income inequality is a global concern, extending beyond the borders of the United States. The United Kingdom, too, faces a significant level of income inequality [11]. While Gerstner's proposals are tailored for the United States, their potential relevance goes beyond geographical boundaries. Crucially, regardless of government assistance, early investment is the key to long-term impact on one’s wealth.
Please remember, investing should be viewed as longer term. Your capital is at risk — the value of investments can go up and down, and you may get back less than you put in.
[1] https://www.britannica.com/money/fiduciary
[2] https://openai.com/our-structure
[3] https://openai.com/blog/openai-announces-leadership-transition
[6] https://www.ft.com/content/e0bd5dad-6e70-4aa8-9cc9-56cd7a470df1
[7] https://www.bbc.co.uk/news/uk-65746524
[8] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4496418
[9] https://www.investamerica2024.com/
[10] https://www.cnbc.com/2022/04/11/compound-interest-calculator.html