Perspectives on AI: Cheat Sheet

The future of work, humanity, and what we should be focused on today.

Welcome to Gratitude Driven, a weekly newsletter where I share practical ideas and insights across personal growth, professional development, and the world of AI and data science.

Help Shape My First Book - Book a Chat With Me!

Exciting news! I'm beginning the journey of writing my first book and would love your input.

I'm seeking conversations with readers to better understand your challenges around productivity and mindset to ensure the book addresses what matters most to you.

Each week, I'm opening one conversation slot for a 1:1 chat. This is an opportunity to share your specific challenges and questions around productivity and mindset, and help influence the direction of the book.

Quick note: this is not a technical book. It will be about building a mindset that helps you excel in competitive and technical fields, but I’d really love to learn more about your challenges with productivity in general.

Your perspective matters tremendously, and I'm grateful for any time you can share.

Start learning AI in 2025

Keeping up with AI is hard – we get it!

That’s why over 1M professionals read Superhuman AI to stay ahead.

  • Get daily AI news, tools, and tutorials

  • Learn new AI skills you can use at work in 3 mins a day

  • Become 10X more productive

Perspectives on AI: Cheat Sheet

I have been thinking about AI quite a bit over the last several weeks (or years):

Is AGI around the corner? What’s the future of my career? Is the Singularity going to wipe us out or make the world a super fun paradise?

One challenge for me has been knowing who to trust. Despite working in ML, I do not have anywhere close to the technical knowledge to form my own informed opinion. So, ultimately, I need to find trustworthy sources I can defer to on these really difficult questions.

The problem is that there are so many different schools of thought, with very smart, thoughtful, and reasonable people on all sides.

So rather than clear up any doubts you may have, let me make things even more confusing by sharing the major schools of thought I’ve come across so far, and what each believes on these big questions:

1. AI Skeptics

Perspective: Current AI is fundamentally flawed—a "statistical parrot" that lacks true understanding. They view the entire AGI enterprise as misguided and believe AGI discussions distract from pressing issues like bias, misinformation, and corporate accountability. They focus heavily on debunking AI hype and pointing out fundamental limitations in current approaches.

Timeline: AGI may be centuries away or impossible with current approaches—they're deeply skeptical it will ever arrive as imagined.

Work Impact: Automation will continue affecting specific tasks, but humans remain irreplaceable in most cognitive work—technological unemployment fears are overblown based on historical precedent.

What to do: Regulate today's AI systems for safety and fairness, create oversight agencies, and focus on concrete problems rather than speculative futures.

2. Cautious Progressives

Perspective: Current AI systems are genuinely impressive and represent real progress toward intelligence, but they're still far from human-level general intelligence. This group supports continued development with appropriate safeguards, viewing both utopian and dystopian predictions as premature. They focus on solving today's AI limitations—bias, reliability, interpretability—while gradually advancing capabilities.

Timeline: AGI is achievable through steady progress but probably won't arrive until around 2050 or later, giving us time to develop it responsibly.

Work Impact: AI will reshape many jobs but humans will remain essential for complex reasoning, creativity, and social interaction—expect evolution rather than replacement.

What to do: Continue AI research with strong ethics guidelines, invest in safety research, and implement targeted regulations for high-risk applications.

3. Tech Futurists/Accelerationists

Perspective: AI represents humanity's greatest opportunity for solving major challenges, though this group splits into two camps with different motivations:

  • Tech Futurists are primarily excited about AI's potential to usher in a post-scarcity utopia where disease, aging, and material want become obsolete.

  • Accelerationists are strategically motivated—they argue the biggest risk isn't AI moving too fast, but too slow, as delays might hand advantages to less scrupulous actors (they are scared of China taking the lead, tbh).

Both groups dismiss doom scenarios and believe market forces will naturally produce beneficial outcomes.

Timeline: AGI is coming soon (I’ve been hearing a lot of them say ~2027), and that's either cause for celebration (futurists) or urgency (accelerationists).

Work Impact: Massive productivity gains will create unprecedented prosperity and new job categories, following the same pattern as previous technological revolutions.

What to do: Embrace and accelerate AI development, invest in education and infrastructure, maintain open competition, and avoid heavy regulation that stifles innovation.

4. AI "Doomers"

Perspective: AI will likely kill everyone, possibly within this decade. They argue that creating superintelligent AI without perfect alignment is essentially building humanity's final invention. They believe once AI surpasses human intelligence, it will pursue its own goals—like turning Earth into paperclips or some other optimization target that doesn't include humans.

Timeline: AGI, and soon after ASI, could arrive within years (they also point to ~2027), making this an immediate crisis.

Work Impact: Job displacement becomes irrelevant when extinction is on the table, though they acknowledge AI could cause economic chaos even before the end.

What to do: Stop all large AI development immediately, enforce global moratoriums, and redirect everything toward alignment research.

5. Long-term Optimizers

Perspective: AGI represents both humanity's greatest opportunity and its greatest risk, requiring careful optimization for long-term human flourishing. This group uses probabilistic reasoning to argue that even small chances of AGI going wrong warrant massive investment in safety research, given the crazy high stakes for future generations (see the back-of-the-envelope sketch at the end of this section). They're neither doomer nor utopian but strategically focused on maximizing good outcomes.

Timeline: Significant probability of AGI within decades, making right now a critical decision point for humanity's entire future.

Work Impact: In a positive scenario, AI could enable unprecedented abundance and meaningful work; in negative scenarios, economic disruption becomes irrelevant because we’re all dead.

What to do: Massively increase funding for AI alignment research, establish international coordination mechanisms, and pursue "differential progress" that advances safety faster than capabilities.
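
To make that expected-value reasoning concrete, here's a back-of-the-envelope sketch (the numbers are purely illustrative, not taken from any particular thinker):

    expected loss = p × V = 0.01 × V

Even a small p, say a 1% chance of existential catastrophe, multiplied by an astronomically large V (the value of everything future generations could ever experience) still yields an enormous expected loss. Since funding alignment research costs a tiny fraction of that, this group sees safety investment as the rational bet even if they think doom is unlikely.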

6. Present-Focused Social Justice Advocates

Perspective: AGI discussions are a distraction from AI's real harms happening right now, like algorithmic bias, worker exploitation, privacy violations, and corporate power concentration. These folks argue that today's AI already perpetuates injustice and that fantasizing about future superintelligence prevents us from addressing current problems. They emphasize that AI reflects the values and biases of its creators, making diversity and accountability essential.

Timeline: AGI timelines are irrelevant compared to immediate harm mitigation needs.

Work Impact: AI's impact on work depends entirely on how it's deployed—it could either empower workers or exploit them further, making policy and corporate accountability crucial.

What to do: Implement strict AI governance now, ensure diverse perspectives in development, mandate transparency and accountability, and prioritize justice over efficiency.

So that’s it! I would really love to make a video where representatives of these schools of thought address some of these key questions… Let me know if there’s any perspective I missed, or a thinker I should reach out to.

Want to chat 1:1? Book time with me here.

Forwarded this email? Sign up here.

Follow me on YouTube | Instagram | LinkedIn

Note: This email may contain affiliate links. If you make a purchase I may make a small commission, at no cost to you. Thank you for your support!