We doomscroll, you upskill.
Finding signal on X is harder than ever. We curate high-value insights on AI, Startups, and Product so you can focus on what matters.
23 tweets
A number of people are talking about the implications of AI for schools. I spoke about some of my thoughts to a school board earlier; some highlights:

1. You will never be able to detect the use of AI in homework. Full stop. All "detectors" of AI imo don't really work, can be defeated in various ways, and are in principle doomed to fail. You have to assume that any work done outside the classroom has used AI.

2. Therefore, the majority of grading has to shift to in-class work (instead of at-home assignments), in settings where teachers can physically monitor students. The students remain motivated to learn how to solve problems without AI because they know they will be evaluated without it in class later.

3. We want students to be able to use AI, it is here to stay and it is extremely powerful, but we also don't want students to be naked in the world without it. Using the calculator as an example of a historically disruptive technology: school teaches you how to do all the basic math & arithmetic so that you can in principle do it by hand, even if calculators are pervasive and greatly speed up work in practical settings. In addition, you understand what the calculator is doing for you, so should it give you a wrong answer (e.g. you mistyped the "prompt"), you can notice it, gut check it, verify it some other way, etc. The verification ability is especially important in the case of AI, which is presently a lot more fallible in a great variety of ways compared to calculators.

4. A lot of the evaluation settings remain at the teacher's discretion and involve a creative design space of no tools, cheatsheets, open book, provided AI responses, direct internet/AI access, etc.

TLDR: the goal is that students are proficient in the use of AI but can also exist without it, and imo the only way to get there is to flip classes around and move the majority of testing to in-class settings.
Sharing an interesting recent conversation on AI's impact on the economy. AI has been compared to various historical precedents (electricity, the industrial revolution, etc.), but I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job to look at was the extent to which its algorithm was fixed: are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators)? Back then, that was the class of programs the computing capability of the era allowed us to write (by hand, manually).

With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago.

In this new programming paradigm, the new most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It comes down to the extent to which an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to score any specific attempt that was made). The more verifiable a task/job is, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out of the neural net magic of generalization, fingers crossed, or via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs.

Tasks that are verifiable progress rapidly, possibly even beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative or strategic tasks, tasks that combine real-world knowledge, state, context, and common sense). Software 1.0 easily automates what you can specify. Software 2.0 easily automates what you can verify.
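The "specify an objective, then search program space with gradient descent" idea above can be shown in miniature. This is a hypothetical toy sketch, not anything from the original post: instead of hand-writing the program y = 2x + 1 (Software 1.0), we only specify a verifiable objective (squared error, cheap to evaluate and automated to score) and let gradient descent find the parameters. Finite-difference gradients are used just to keep the sketch dependency-free.

```python
def loss(w, b, data):
    # The objective: an automated, cheap-to-evaluate score of any attempt.
    # This is what makes the task "verifiable" in the sense above.
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

# Ground truth we pretend not to know how to program by hand: y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b, lr, eps = 0.0, 0.0, 0.01, 1e-5

# Resettable, efficient, rewardable: many cheap attempts, each scored.
for step in range(2000):
    # Central finite-difference gradients of the objective
    gw = (loss(w + eps, b, data) - loss(w - eps, b, data)) / (2 * eps)
    gb = (loss(w, b + eps, data) - loss(w, b - eps, data)) / (2 * eps)
    w, b = w - lr * gw, b - lr * gb

print(round(w, 2), round(b, 2))  # recovers approximately w=2, b=1
```

The same structure scales up: swap the two scalars for millions of neural-net weights and the squared error for a reward function, and you have the Software 2.0 loop the post describes.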
Gemini Nano Banana Pro can solve exam questions *in* the exam page image. With doodles, diagrams, all that. ChatGPT thinks these solutions are all correct except Se_2P_2 should be "diselenium diphosphide" and a spelling mistake (should be "thiocyanic acid" not "thoicyanic") :O
Next up… Slide Decks! Turn your sources into a detailed deck for reading OR a set of presentation-ready slides. They are fully customizable, so you can tailor them to any audience, level, and style. Officially rolling out to Pro users now (free users in the coming weeks)!

We entered YC with $16K MRR. 100% bootstrapped. Today, we’re at $75K MRR, a 4.7× increase in just 6 weeks (and $1.1M in annualized run rate), and on track to double before the end of the year. It’s been wild watching Parrot evolve from a scrappy experiment into a product people genuinely love using daily. Every day, new users tell us the same thing: “It doesn’t feel like I’m studying. I’m just scrolling.” That’s exactly the point. We’re building the first language app designed for the way people actually spend time on their phones: swiping through short, entertaining videos. Except this time, it’s productive scrolling. We’re just getting started. The retention is improving, the love is real, and the growth is compounding. Super excited to keep talking to users and making Parrot 100x better.
New workflow: 1. Open a voice memo app 2. Brain dump your thoughts on a topic; record it 3. Transcribe it 4. Import transcript into NotebookLM 5. Get it to generate a slide deck using Nano Banana Pro 6. See your rambling thoughts visualized into structured & beautiful slides and feel smart
I’m starting to get into the habit of reading everything (blogs, articles, book chapters, …) with LLMs. Usually pass 1 is manual, then pass 2 is “explain/summarize”, and pass 3 is Q&A. I usually end up with a better/deeper understanding than if I had just moved on. This is growing into one of my top use cases. On the flip side, if you’re a writer trying to explain or communicate something, we may increasingly see less of a mindset of “I’m writing this for another human” and more of “I’m writing this for an LLM”. Because once an LLM “gets it”, it can then target, personalize, and serve the idea to its user.
The best are always learning. Read like crazy. Think alone. Keep a journal. Write stuff down the moment you see it. Review regularly. Memorize the big ideas to fluency. Attack your best ideas. And never get high on your own supply. You don't have to be gifted. You do have to be deliberate.
Unfortunately, the rumors are true… I can no longer hide the truth. Yes, I did use ChatGPT to write a few tweets when I first started this account. Even worse… I posted in Build in Public. I didn’t have an audience. I didn’t have confidence in my voice yet. But I knew I wanted to get better. That was the beginning of the journey — not the definition of it. Since then, every tweet has been mine. No prompts. No shortcuts. Just reps. I hope you’ll forgive me — and stick around for what comes next. Would you like a spicier or more humorous variant as well to post later as a follow-up?
One of my favorite lessons I’ve learnt from working with smart people: Action produces information. If you’re unsure of what to do, just do anything, even if it’s the wrong thing. This will give you information about what you should actually be doing. Sounds simple on the surface - the hard part is making it part of your everyday working process.
>be Andrej Karpathy >studied computer science from Toronto to Stanford, specializing in deep learning >became Tesla’s director of AI in his early 30s, leading the Autopilot vision team >helped build the foundations of OpenAI as one of its earliest researchers >teaches millions through free lectures, notebooks, and open-source work >keeps his life simple, quiet, and focused on learning >steps away from big titles when he feels the need to reset >builds small AI projects for fun, shares them openly >lives calmly, thinking deeply, working on what he believes matters most Has Karpathy quietly optimized life in a way most people never figure out?
Starting to believe that the best place to use Nano Banana Pro might actually be in NotebookLM
Flashcards and Quizzes are officially rolling out TODAY on the mobile app! You can customize the number of questions, difficulty, and topics all from the convenience of your phone. Because being a busy, popular socialite should never get in the way of your studies