The future of intelligence and entry-level jobs
A look at the building blocks of AI today and reasons to be concerned/hopeful about its impact on labor markets
After diving into the historical arc of AI, I found myself wanting to pause and ask: What do we actually mean today when we say “AI”? How does it work? What does it do?
To ground my thinking, I revisited the original framework laid out at the 1956 Dartmouth Summer Research Project on Artificial Intelligence. This group proposed a set of core capabilities machines would need to be considered intelligent. Applying those principles to today’s landscape provides a useful lens to understand how modern AI functions—and where it's headed.
Participants in the 1956 Dartmouth Summer Research Project on Artificial Intelligence
The Dartmouth Framework, Revisited
1. Learning from Experience
The Dartmouth group believed machines should improve through experience, not just execute pre-coded instructions.
“Machines should learn from experience.”
Today:
Modern AI learns by ingesting vast amounts of data—text, images, audio, video, and code—and identifying patterns.
Technique: Deep learning (e.g. Transformers)
Scale: Trained on billions or trillions of tokens, using hundreds of billions of parameters
Examples: GPT-4o (trained on multimodal data), vision-language models like CLIP and Gemini
Key idea: Learning isn’t about memorizing facts, but discovering latent structures across data domains.
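The "learning from experience" loop can be sketched in miniature: a toy model with a single parameter sees (x, y) examples and nudges its weight to reduce error. This is the same principle deep learning applies across billions of parameters; the sketch below is an illustration, not how any production model is trained.

```python
# Toy version of "learning from experience": fit y = 2x by gradient descent.
# Real deep learning does this across billions of parameters, not one.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]  # examples of the hidden pattern y = 2x

w = 0.0    # the model's single parameter; starts knowing nothing
lr = 0.01  # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x         # model's guess
        error = pred - y     # how wrong it was
        w -= lr * error * x  # nudge the weight to reduce the error

print(round(w, 2))  # converges toward 2.0: the pattern, not the memorized pairs
```

The model never stores the training pairs; it compresses them into a weight that generalizes to unseen inputs, which is the sense in which learning means "discovering latent structure" rather than memorizing.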
2. Natural Language Understanding
“A major goal of the project is to understand how a machine could be made to use language.”
Today:
LLMs like GPT, Claude, and Gemini interpret and generate human language through token prediction and sequence modeling.
Key techniques:
Tokenization (breaking text into subword units)
Next-token prediction
Attention mechanisms
Applications:
Chatbots
Coding assistants
Translation
Text-to-SQL tools
Instruction-following agents
Caveat: These models don’t understand language like humans—but they’re highly effective at simulating it.
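The two core mechanics above, splitting text into tokens and predicting the next one, can be illustrated with a toy bigram model. This is a deliberately tiny stand-in: real LLMs use subword tokenizers and attention over long contexts, not whole-word bigram counts.

```python
from collections import Counter, defaultdict

# Toy illustration of tokenization + next-token prediction.
# Real LLMs use subword tokenizers (e.g. BPE) and attention, not word bigrams.

corpus = "the cat sat on the mat and the cat slept"

tokens = corpus.split()  # crude "tokenizer": whitespace words instead of subwords

# Count how often each token follows each other token (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most likely next token seen in training."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (follows "the" twice, "mat" once)
```

Scaled up by many orders of magnitude, with learned subword tokens and context-aware probabilities instead of raw counts, this prediction loop is the backbone of chatbots, coding assistants, and translation tools alike.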
3. Abstraction and Concept Formation
“A computer can be programmed to form concepts and generalizations.”
Today:
AI models create abstractions via distributed representations encoded in billions of model weights.
How it works:
Parameters are adjusted during training to represent semantic and conceptual structure.
Concepts are encoded as vectors in high-dimensional space—not explicit symbols.
Examples:
“Cat” and “dog” are close in vector space
Some neurons activate for abstract ideas like justice or tool use
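The claim that "cat" and "dog" sit close together can be made concrete with cosine similarity. The 3-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions learned during training.

```python
import math

# Hypothetical 3-d "embeddings"; real models learn vectors with 100s-1000s of dims.
vectors = {
    "cat": [0.9, 0.8, 0.1],
    "dog": [0.8, 0.9, 0.2],
    "car": [0.1, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

print(cosine(vectors["cat"], vectors["dog"]))  # high: related concepts
print(cosine(vectors["cat"], vectors["car"]))  # low: unrelated concepts
```

"Closeness in vector space" is just this similarity score: concepts the model treats as related point in similar directions, without any explicit symbol for "animal" ever being defined.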
4. Reasoning and Problem Solving
“A machine can be made to solve kinds of problems now reserved for humans.”
Today:
Problem-solving relies on prompt engineering, algorithmic scaffolding, and external tool integration.
Techniques:
Chain of Thought (CoT) prompting
Tree of Thought / Program-aided CoT
ReAct (reason + action)
Function calling with external tools (e.g. calculator, browser)
Applications:
Math
Programming
Strategic planning
Limits: LLMs are still weak at symbolic logic and multi-step planning without help.
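Tool use, the function-calling pattern above, boils down to a loop: the model emits a structured request, the host program runs the real tool, and the result is fed back for the model's final answer. A sketch with a hard-coded stand-in for the model (`fake_model` is an assumption for illustration; a real LLM API returns similar structured requests and decides dynamically):

```python
# Sketch of the function-calling pattern: the "model" asks for a tool,
# the host executes it, and the result goes back into the conversation.
# fake_model is a mock; a real LLM would decide this dynamically.

def calculator(expression: str) -> str:
    """A tool the model can request. (Never eval untrusted input in production.)"""
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def fake_model(messages):
    """Pretend LLM: requests the calculator once, then answers with its result."""
    for m in messages:
        if m["role"] == "tool":
            return {"role": "assistant", "content": f"The answer is {m['content']}."}
    return {"role": "assistant", "tool_call": {"name": "calculator", "args": "17 * 23"}}

messages = [{"role": "user", "content": "What is 17 * 23?"}]
while True:
    reply = fake_model(messages)
    if "tool_call" not in reply:
        break  # plain answer: we're done
    call = reply["tool_call"]
    result = TOOLS[call["name"]](call["args"])  # host runs the real tool
    messages.append({"role": "tool", "content": result})

print(reply["content"])  # -> "The answer is 391."
```

This is why tool integration compensates for the limits noted above: the model only has to decide *that* a calculation is needed; the arithmetic itself is delegated to software that cannot get it wrong.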
5. Self-Improvement
Today:
AI doesn’t yet autonomously rewrite its own code or weights, but performance improves through interaction, feedback, and human-guided training.
Methods:
Prompt engineering
Few-shot learning
Chain-of-Thought prompting
Reflection (self-critique and improvement)
Fine-tuning and reinforcement learning (RLHF)
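Several of these methods are just structured prompting. Few-shot learning, for instance, means prepending worked examples so the model infers the pattern at inference time, with no weight updates at all. A sketch of how such a prompt is assembled (the exact format below is illustrative, not any specific API's):

```python
# Few-shot prompting: show the model worked examples so it infers the task
# from context alone. No training or weight updates are involved.

examples = [
    ("great product, works perfectly", "positive"),
    ("broke after two days", "negative"),
    ("does exactly what it says", "positive"),
]

def build_few_shot_prompt(examples, new_input):
    """Assemble the worked examples plus the new case into one prompt string."""
    lines = ["Classify the sentiment of each review."]
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {new_input}\nSentiment:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(examples, "arrived late and scratched")
print(prompt)  # ends with "Sentiment:" so the model completes the label
```

Fine-tuning and RLHF, by contrast, do change the weights, which is why they sit at the far end of this list: they are the only methods here that permanently alter the model rather than its context.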
Brute Force vs. Efficient Learning
What struck me most while revisiting these principles is how much brute force today’s AI systems require.
Humans learn from far fewer examples, but we compute more slowly. I’ve probably seen a million fewer bananas than GPT-5 has, yet I can still recognize one. The model likely ingested its millions of bananas within a few weeks of training; I didn’t reach peak banana-recognition until I was 4 or 5.
So how can machines learn so inefficiently, yet still so much faster than humans?
Six Core Components of Today’s AI
Here’s a simplified way to think about AI systems’ building blocks—effectively a modern restatement of the Dartmouth framework:
Massive amounts of high-quality input data
Models that turn data into knowledge
Training loops that reinforce and refine understanding
Clean, efficient infrastructure to move data and context through the system
Immense computing power (and electricity!)
A human interaction layer—tools like ChatGPT or vertical AI platforms that make AI usable and useful
Each layer is still seeing enormous gains in cost, productivity, and output. That’s part of what makes this an exciting time to invest in AI infrastructure.
A simple example from the interaction layer: querying an AI model with GPT-3.5-level performance has dropped from approximately $20 per million tokens in November 2022 to just $0.07 per million tokens by October 2024, a roughly 285-fold reduction in under two years.
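The arithmetic behind that price drop, and what it means for a concrete workload, is worth a quick check:

```python
# Price drop for GPT-3.5-level queries, per million tokens (figures from the
# Nov 2022 vs Oct 2024 comparison above).
price_2022 = 20.00  # $ per million tokens
price_2024 = 0.07   # $ per million tokens

fold_reduction = price_2022 / price_2024
print(round(fold_reduction))  # about 286x cheaper

# Concretely: processing 1 billion tokens
tokens = 1_000_000_000
print(round(tokens / 1_000_000 * price_2022, 2))  # $20,000 in 2022
print(round(tokens / 1_000_000 * price_2024, 2))  # $70 in 2024
```

A workload that cost $20,000 two years ago now costs about as much as a dinner out, which is what makes entirely new categories of applications economically viable.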
What does this mean for “intelligence?”
As I thought about these layers, and how humans interact with AI, I started asking: how does AI reshape our definitions of knowledge and intelligence?
We’ve long equated knowledge with intelligence. The smarter you were, the more facts you knew. But what happens when ChatGPT “knows” more than any person ever could?
It forces a shift. Traditional IQ tests—and education models built on memorization—start to feel basically obsolete, headed down the same path as mental math wizardry after the arrival of digital calculators.
Instead, skills like strategic thinking, empathy, and relationship building become more important than ever.
Entry-Level Jobs and the AI Disruption
This leads to a real and pressing tension. If AI handles most “high confidence” work—providing correct answers, doing structured tasks—what happens to entry-level roles?
If today’s senior sales exec learned the ropes as a BDR, but we now have AI doing BDR tasks… where do tomorrow’s execs come from?
We're in a painful transition. Many hiring managers haven’t adjusted to this reality yet. They still prioritize credentials and experience—stand-ins for “knowledge”—even as the jobs themselves start to require different capabilities.
What happens when AI can effectively replicate the functions of an entry-level position, but hiring organizations haven’t redefined what they expect of entry-level hires? Fewer job opportunities. This is painful in the near term, will undoubtedly impact many individuals, and I don’t see a shortcut around it.
A Hopeful Take: Rethinking the Entry-Level
On a societal level, though, I think that over the next few years (if we can make it there), AI won’t reduce the number of entry-level jobs; it will force a new definition of them.
Entry-level roles will demand more strategic thinking from day one. That’s not necessarily a bad thing. On the contrary, rising talent can tackle harder challenges and take on more stimulating responsibilities earlier in their careers.
What would it take for this new path to work?
AI must deliver its promised economic boost, allowing companies to reinvest in hiring.
Hiring managers must evolve, prioritizing creativity, empathy, and adaptability over checkboxes.
Job seekers must come AI-native, ready to use tools to fill gaps in their own knowledge.
Education must adapt, focusing less on memorization, and more on collaboration, judgment, and tool fluency.
A Personal Note
In some ways, I’m living this shift.
I joined Propeller VC in a mid-level role without much full-time venture experience. That’s uncommon—but it may become more common as career paths evolve in an AI-native world.
Still TBD whether it works out (🤞), but I’m hopeful. I should note that VC might just be a unique case; it wasn’t uncommon for successful practitioners to move into the field from others well before the advent of AI.
Carve-Outs (Inspired by Acquired)
I've been running an angel syndicate via Viaka. If you’ve had a great experience—either as a founder or an angel—please reach out. I’d love to learn from it.
I spent the first week of June in San Francisco. Being new to venture, it was a kid in a candy store moment. The big question now: how do we (Propeller and I) earn the right to win allocation in top-tier deals in the world’s top geography?
My sister-in-law just returned from Korea, and Propeller’s GP came back from China. Both commented on how much more modern the infrastructure is there compared to the U.S. Perhaps both countries have the “benefit” of industrializing later than the U.S., but that doesn’t stand up as an excuse for us falling behind. How do we think about avoiding the “sunk cost fallacy” when it comes to our existing infrastructure, and make intelligent investments in new projects?
I would love to hear your thoughts. Especially if you’re thinking about AI, infrastructure, or what the future of work looks like from here.
— Hani
Sources:
https://techcrunch.com/2025/05/25/from-llms-to-hallucinations-heres-a-simple-guide-to-common-ai-terms/
Perplexity Deep Research
ChatGPT