Finding signal on X is more difficult than it used to be on Twitter. We curate the best tweets on topics like AI, startups, and product development every weekday at 10 AM EST so you can focus on what matters.
84 tweets
Google Meet shows how late people will be based on meeting history
Paramount is launching a hostile takeover bid to buy Warner Bros. They are going directly to shareholders with a bid valued at $108.4 billion (Source: https://variety.com/2025/tv/news/paramount-hostile-takeover-bid-warner-bros-discovery-1236603175/…)
We need a shorthand way of saying: "An AI did the work, but I vouch for the result." Saying "I did it" feels slightly sketchy, but saying "Claude did it" feels like avoiding responsibility.
Some U.S. tech firms are beginning to recruit for so-called “996” roles—an intense work schedule borrowed from China’s startup culture that runs from 9 a.m. to 9 p.m., six days a week, per Forbes.
Disney has signed a deal with OpenAI and invested $1 billion into the company. Sora will now be able to AI-generate videos based on animated, masked, and creature characters from Disney, Marvel, Pixar, and Star Wars. Curated selections of AI-generated videos will be released on Disney+.
A number of people are talking about the implications of AI for schools. I spoke about some of my thoughts to a school board earlier; some highlights:

1. You will never be able to detect the use of AI in homework. Full stop. All "detectors" of AI imo don't really work, can be defeated in various ways, and are in principle doomed to fail. You have to assume that any work done outside the classroom has used AI.

2. Therefore, the majority of grading has to shift to in-class work (instead of at-home assignments), in settings where teachers can physically monitor students. The students remain motivated to learn how to solve problems without AI because they know they will be evaluated without it in class later.

3. We want students to be able to use AI; it is here to stay and it is extremely powerful, but we also don't want students to be naked in the world without it. Using the calculator as an example of a historically disruptive technology: school teaches you how to do all the basic math and arithmetic so that you can in principle do it by hand, even if calculators are pervasive and greatly speed up work in practical settings. In addition, you understand what it's doing for you, so should it give you a wrong answer (e.g. you mistyped the "prompt"), you should be able to notice it, gut check it, verify it in some other way, etc. The verification ability is especially important in the case of AI, which is presently a lot more fallible in a great variety of ways compared to calculators.

4. A lot of the evaluation settings remain at the teacher's discretion and involve a creative design space of no tools, cheatsheets, open book, provided AI responses, direct internet/AI access, etc.

TLDR: the goal is that students are proficient in the use of AI but can also exist without it, and imo the only way to get there is to flip classes around and move the majority of testing to in-class settings.
This has been said a thousand times before, but allow me to add my own voice: the era of humans writing code is over. Disturbing for those of us who identify as SWEs, but no less true. That's not to say SWEs don't have work to do, but writing syntax directly is not it.
I'm having the most fun I've ever had in my career right now. The AI overlords might take over five years from now, but for right now, I'm happy as hell.
Film’s biggest night is headed to @YouTube, starting 2029.
We are incredibly excited at @compoundvc to launch Compound Reverie Grants today. These are micro-grants for those exploring ideas at the edges, in a world that needs more support for strange and meaningful work.

Why micro-grants? The world needs more out-of-distribution ideas, and people need more space to explore them. Part of being a thesis-driven investment firm operating on long-term time horizons is applying that same ethos to helping build the world we want to live in. We find this is rarer than it should be, especially within the echo chambers of technology and startups.

As we've thought about the kind of firm we want to build over the next decade, amid our excitement we also kept returning to a sameness we couldn't shake. Tech has grown insular, and respect for different types of thinkers feels improperly distributed. Dollars and belief flow to the most legible people, the ones who fit an overly obvious pattern, who match what a small few have decided success should look like. The less legible get passed over. This reality means we're narrowing the dispersion of what can be built and by whom, while increasing the correlation of the ideas and aesthetics that get funded or even just supported.

To help create a more imaginative future, we've created Compound Reverie Grants. These are micro-grants to fund people exploring ideas that otherwise would not easily get funded through traditional means, but that we think are important for the present and future. They range from as little as $500 to as much as $7,500 each, in fully non-dilutive, no-strings-attached capital.

There are moments in life when a small amount of support (intellectual, emotional, and/or fiscal) matters more than anyone appreciates at the time. A month or two of runway. A few hours of debate. The cost of materials for a prototype. Enough breathing room to see if something is real. We hope these grants find people in those moments.
Reverie Grants are intentionally broad, but there are two specific types of grantees that we know we want to help (and many others that we will learn we want to help, who we hope apply).

The first are people going deep on something that touches the edges or possible fringes of areas we think about: machine learning, robotics, biology, healthcare, energy, crypto, and the places where those things blur together. These should not be ideas that could obviously become venture-backed companies. You shouldn't be starting a company (though it could lead you to one); instead, you should be working on research, building a prototype, writing, open-sourcing something, or tinkering on a strange project that resists easy explanation.

The second are thoughtful people who are simply in transition, figuring things out. You might not have a thesis-aligned project, but you have a goal, a direction, something you're trying to understand or become, and you could use support, a sounding board, a broader network, or a bunch of other things that equate to having people like us in your corner who can help with the complexities of navigating life's idea mazes. We know how hard that can be, and we want to meet you and help you.

Our hope is that this first wave of grants is only the beginning, and that as we continue to expand, both with capital that is not just our own and with support from others in our sphere who want to be involved, we will get greater clarity and hopefully more scaled impact. More and more we feel we are only creativity-constrained, and we want to help people who feel their creativity is limited, under-explored, or not believed in because of its strangeness, at a time when we think the world needs these ideas most.

If you're in one of those moments, tell us about it below. If you'd like to refer someone or have any questions, feel free to email grants@compound.vc or DM us on Twitter.
human data will be a $1 trillion/year market

This is not a short-term prediction. It is a structural claim about where the economy converges. To believe this, you need to accept two assumptions:
• Digital and physical intelligence can eventually automate the tedious parts of the economy
• Self-learning intelligence without human data is impossible at the frontier

automation is the most useful & liberating thing humanity can do

If AI systems can automate functions, then automating all functions is the highest-leverage task for humanity. Automation compresses time. It allows:
• Aspirations to be fulfilled faster, by orders of magnitude
• Humans to focus on the enjoyable, judgment-heavy parts of work while robots and agents handle the rest

As humans gain time, they create more. Net-new work is initially creative and high-value. Over time it becomes legible, repeatable, and ready for automation. Once automated, it continues delivering value while freeing humans to focus on new creative work. This loop is permanent. Automation does not eliminate human work. It pushes humans toward higher-value, more creative work.

At a societal level, automation reshapes the economics of the world. As AI systems take on more production and coordination, the cost of producing goods and services collapses while availability explodes. At the same time, distribution becomes increasingly optimal. Digitally and physically intelligent systems coordinate supply and demand with less friction, less waste, and less delay, making access faster, cheaper, and more reliable every year.

AI models learn from humans forever

Every artificially intelligent system learns from humans in some form:
• Demonstrations
• Supervised fine-tuning
• Preference learning
• Complex rubrics and evaluations
• Continual corrections

Even self-play and synthetic data depend on human grounding: humans define objectives, rewards, and what "good" looks like.
As a result:
• Every function in the economy contains useful learning signal
• Every decision, exception, failure, and tradeoff creates data

But raw activity is not enough. That data must be:
• Recorded
• Structured
• Evaluated
• Packaged into usable pipelines

And importantly, functions must continue running while they are being automated. Automation is iterative, not instantaneous.

this creates a universal obligation and opportunity

To iteratively automate functions, every company, government agency, or institution running real operations must consume and produce structured data related to those functions. In most cases, it will not be optimal for them to create or structure that data themselves, due to scale inefficiencies, high fixed costs, and the operational difficulty of producing high-quality, reusable structured data in-house.

We already see this dynamic today. For example, many lawyers produce more leverage per hour working on standardized, structured legal data through platforms like micro1 than they do performing unstructured work inside individual law firms. At micro1, over 1,000 lawyers work in structured data creation and earn on average ~20% more than in traditional firm roles. Law firms themselves are unlikely to become large-scale producers of structured training data, but they will increasingly be consumers of that data, either directly or by having it embedded in the tools they use.

This creates a powerful incentive structure. Labs that are automating functions will pay for this data, because long-term the value gained from incremental automation far exceeds the cost of acquiring the data.
As a result:
• Entities are incentivized to produce high-quality human data, not just to automate themselves, but because that data has external market value
• Every hour of work can simultaneously:
  • Run the organization
  • Train AI models
  • Generate additional revenue for the organization

Human labor becomes not just labor that produces goods and services, but a revenue-generating asset on its own.

the ultimate convergence: 5%+ of human time is spent on human data

It's reasonable to think that most functions in the economy will spend some amount of time trying to automate themselves. Not fully, and not all at once, but continuously pushing work out of the human loop as it becomes repeatable and scalable.

Today, even knowledge workers spend the majority of their time on communication and coordination rather than on what we would consider actual productive work. As automation advances, the tedious parts of knowledge work are progressively removed, and automation increasingly absorbs coordination, scheduling, routing, and routine communication. The result is a larger share of human time being spent on judgment-heavy knowledge work.

Even under conservative assumptions, it is reasonable to expect that in a more automated economy roughly 75% of work time is still spent on communication and coordination, while about 25% is spent doing actual work. Not all of that work needs to be structured, but a meaningful fraction does. Work that produces decisions, judgments, demonstrations, evaluations, and exceptions becomes far more valuable when captured in a structured, reusable form, both to complete the task and to enable future automation.

If only one fifth of that actual work is performed in structured environments, that implies roughly 5% of total human labor time (25% × 1/5) is spent generating structured human data. With global GDP at roughly $100T, and labor representing about 50% of that, total labor spend is around $50T annually.
Five percent of that corresponds to roughly $2.5T per year of human time directed at enabling automation: creating demonstrations, feedback, evaluations, and learning signals for AI systems. Certainly not all of this will become explicit spend in the human data market. Much of it will remain implicit, fragmented, or unpriced. But even with aggressive discounting, you still arrive at something on the order of $1T per year.

automation reshapes labor, it doesn't shrink it

As automation scales, some of what was spent on human labor is redirected towards:
• Energy
• Compute
• AI labor

However, total human labor spend continues to increase. Why? Automation creates time. Time enables creativity. Creativity produces net-new functions within the economy. Those functions are initially done by humans. Over time, they follow the same automation cycle.

human labor gets more expensive because:
• Human time is finite at any moment
• Creativity and judgment are scarce
• Net-new ideas command premium value

As automation expands, humans concentrate more of their time on higher-leverage work. While total human hours do grow over time, that growth cannot be rapidly accelerated in response to demand. The fastest and dominant way the labor market expands is by increasing the value created per human hour. As this continues:
• Total human labor spend rises
• A larger share of human time is spent generating learning signals and enabling automation

we should never call it annotation again

The importance of this work in shaping AI means calling it "data labeling" or "annotation" is completely inaccurate. These phrases describe mechanical tasks, when the real value comes from human judgment, expertise, and decision-making expressed in structured form. A more accurate description is expert human data creation, or structured human judgment. This is how human expertise compounds in an automated economy.
It explains why human data scales with automation rather than disappearing, and why it becomes a first-class economic input over time.

human brilliance is needed more than ever

This does not require extreme assumptions. It only requires that automation continues to work, and that intelligence continues to learn from humans. If that is true, then human data is not a phase or a temporary bottleneck. It is a structural input to the economy. Human judgment is captured, structured, and refined. That judgment becomes the training substrate of intelligence. That intelligence, in turn, produces more automation. As functions are automated, human time is freed. That time is spent creating new functions to automate, and the beautiful cycle continues.
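The thread's back-of-envelope arithmetic can be sketched in a few lines. Every input below is one of the thread's stated assumptions (not measured data), and the variable names are ours:

```python
# Sketch of the thread's market-size estimate; all inputs are the
# thread's own assumptions, not measured data.
global_gdp = 100e12        # ~$100T global GDP
labor_share = 0.50         # labor is ~50% of GDP
actual_work_share = 0.25   # ~25% of work time is "actual work"
structured_share = 0.20    # ~1/5 of actual work happens in structured settings

total_labor_spend = global_gdp * labor_share                      # ~$50T/year
structured_time_fraction = actual_work_share * structured_share   # ~5% of labor time
human_data_upper_bound = total_labor_spend * structured_time_fraction

print(f"total labor spend:      ${total_labor_spend / 1e12:.0f}T/year")
print(f"structured time share:  {structured_time_fraction:.0%}")
print(f"human-data upper bound: ${human_data_upper_bound / 1e12:.1f}T/year")
```

This yields the ~$2.5T/year upper bound the thread cites; its headline $1T/year figure is that number after discounting for the share that stays implicit, fragmented, or unpriced.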
JUST IN: Elon Musk says AI and humanoid robots will "eliminate poverty" and "make everyone wealthy."
The more I have AI agents write all my code, the more I feel that us devs will be alright (and possibly more in-demand for important stuff) Hard for me to imagine anyone building *reliable* software without an understanding of how to do this (either via experience or study)
I underestimated how emotional the impact of AI would be. For a decade, I was depressed about work. The best part of business is manifesting an idea. Seeing a problem, then fixing it for yourself and others. The worst part of business is trying to herd cats. Motivating the dozens of people required to execute on a vision. I didn’t fully realize this until I started using Claude Code, but I was borderline depressed when it came to work. I’ve always been frustrated by the gap between vision and execution. I’ve never been good at managing large groups of people. I’d have an idea I was excited about, but the moment it exceeded my own skillset (coding, for example), I had to hand it off to others. Wonderful team members—but with their own ideas. Their own pace. Their own taste. And often, after months of meetings and false starts, we’d end up with something I didn’t love, or felt frustrated by. I found this process so draining that I eventually gave up. I delegated almost all operations to CEOs. From a business standpoint, this worked out great. But my original love—building things, with my hands on the tools—was lost for almost a decade. Claude Code has brought back my fire. It feels like I have 50 (nearly) free super-genius employees living in my terminal. If I wake up in the middle of the night, there’s a 50% chance I say “fuck it” and go downstairs to mainline Claude Code at 4 a.m. There’s no longer a gap between vision and execution. It feels like: if you can imagine it, you can build it. Sure, there are frustrating bugs. It can get you 95% of the way there—and the final 5% can take an astonishingly long time. But it feels like magic. I genuinely can’t believe this technology exists. 
In the last month alone, I've used it to:
• Adapt 10 hours of interviews with my father into a beautifully written memoir
• Build a personality analysis tool for individuals and couples to explore mental health and relationship dynamics (coming soon)
• Design an astonishingly beautiful website for a vacation rental I own
• Create a bot that helps manage and execute all my Things tasks (including drafting emails and doing research)
• Optimize my home Wi-Fi network
• Build a deal analysis tool to deep-dive potential acquisitions and write investment memos
• Create an automated personal journal that captures notable moments from my day (things my kids said, decisions I made, meeting notes, etc.)

And an endless number of random tasks—easily a year or more of human work. The craziest part: I've only spent a few thousand dollars on Claude credits.

Those of us using this are probably still 0.0001% of the population. I can't even imagine what gets built once this is widely distributed—especially when it takes physical form through robotics. The next two years are going to be mental. This makes the printing press look like a joke.
Anthropic CEO Dario Amodei: "we might be 6-12 months away from models doing all of what software engineers do end-to-end." We're approaching a feedback loop where AI builds better AI. But the loop isn't fully closed yet; chip manufacturing and training time still limit speed.
2025 was the year that "Chinese peptides" took over SF. I wrote about it in my first piece for the NYT; gift link here: https://nytimes.com/2026/01/03/business/chinese-peptides-silicon-valley.html?unlocked_article_code=1.BlA.bSI-.5puwhP1yiF6B&smid=url-share…
JUST IN: Elon Musk says there is "no need" to save money because universal high income is coming
ai just made you ordinary. can you still win?

most people who'll lose their jobs to ai won't lose because they were stupid, lazy, or incompetent, but purely because they kept sharpening their sword in a world that had already moved to rifles.

you see, for a long time, intelligence worked for us. if you were sharper than the people around you, if you knew more about things, if you could catch patterns easily, money followed. doors opened up. people listened to you.

ai didn't kill the intelligence ceiling. it anal-fucked it. intelligence is now everywhere. you're one tap away from summoning an oracle that knows more than any single person ever could, can go into incredible depth on almost anything, and do it much faster.

you'd think "upskilling" could maybe give you an edge here. trying out new ai tools, frameworks, skills. everyone learning the same thing at the same time and calling it an edge.

i can do the funniest thing here. i can tell you that learning ai is the way forward. actually, wait, let me correct myself. the only way forward. i can show you your career's death, then the aftermath, then sell you peace of mind today for a future you have no control over, in the form of an "ai upskilling course." many of you will buy that course. not because you're dumb, but because fear loves anything that looks like a checklist. this is what insurance companies do. fear for hope works every time.

or i could tell you to find your safe camp.

safe camp 1: doctor, ca, lawyer, therapist. titles that require professional licenses to practice, and are somewhat protected by law, regulation, and institutions.

safe camp 2: plumbing, carpentry, hvac, welding. work that requires physical skill, where ai is too expensive to take over, yet.

both safe camps optimize for survival. but neither guarantees total protection from being replaced over a longer horizon, say two decades from now. but a tiny minority still exists, and will continue to exist.
the minority that will also have access to the same tools you do, but whose work will not feel interchangeable. these are the people who will win with ai, not against it.

do not mistake these people for being necessarily louder or smarter. they're just choosing fewer things, making stranger combinations, and taking decisions that, to an observer, make no sense. yet somehow, they win.

if i were to decode this strange pattern and give you a single attribute that makes this possible, it's called taste. taste is not limited to preference, vibes, or aesthetics. it's the ability to look at a million options and say, "this one matters. everything else does not." it's judgment that sits close to intuition. the difference is that taste can be developed, refined, and tuned over time, while intuition has no clear framework.

you can already see this in the real world. why do some creators thrive while thousands using the same tools disappear? why do some founders ship products that feel obvious in hindsight? why do some writers feel irreplaceable even when ai can write a thousand words a minute like it's nothing? it's not output volume. it's that they already know what to build, what to ignore (important), and when to stop.

this is also why resisting ai is a dead end. moralizing, dooming, or opting out by calling it a fad is pure denial. the only two guaranteed outcomes of resistance are this: ai will not wait for your comfort, and the winners won't be the ones who reject it.

i'm sorry if these ideas make you uneasy. i'm well aware that "learning more tools," "stacking more skills," and "copying playbooks" are the go-to advice of anyone who claims to help you safeguard yourself from ai. i'm betting my money on taste and individuality in its truest sense.

build judgment. develop a seductive taste. expose yourself to extremes and startling depth. be aggressive in your use of ai, but never blind.

this is war. but one where the losers will be the loudest. the winners won't be.
they’ll just have taste.