GlobalNews.one
Technology

The 'What/How' Loop: How LLMs Can Deepen Our Understanding of Software Abstraction

January 21, 2026

In a dynamic discussion, industry experts Unmesh Joshi, Rebecca Parsons, and Martin Fowler delve into the transformative role of Large Language Models (LLMs) in software development, focusing on the critical relationship between defining the problem ('What') and implementing the solution ('How'). Their conversation, held on January 21, 2026, emphasizes that while LLMs offer undeniable speed and efficiency in code generation, the art of building resilient and maintainable software remains deeply rooted in human understanding and careful abstraction.

The core challenge of software development, they argue, is creating systems that adapt gracefully to change. Traditional approaches often treat programming as a straightforward translation of requirements into code, neglecting the iterative feedback loop crucial for uncovering stable system components and anticipating future modifications. This linear view, often reflected in the phrase 'Human in the loop', undervalues the programmer's expertise in shaping abstractions and managing cognitive load.

Martin Fowler emphasizes that minimizing cognitive load is paramount for creating adaptable systems. By structuring code into manageable modules and mirroring real-world domains with familiar concepts (like ships, ports, and containers in a shipping system), developers can navigate complexity more effectively. Rebecca Parsons adds that understanding the domain at varying levels of granularity enables reasoning about system properties without becoming overwhelmed by intricate details.
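The shipping example can be sketched as a handful of small classes whose names mirror the domain. This is an illustrative sketch, not code from the discussion; the class and method names are our own:

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    """A unit of cargo, identified by its contents."""
    contents: str

@dataclass
class Ship:
    """A vessel that carries containers between ports."""
    name: str
    containers: list = field(default_factory=list)

    def load(self, container: Container) -> None:
        self.containers.append(container)

    def unload_all(self) -> list:
        # Hand the cargo over and leave the ship empty.
        cargo, self.containers = self.containers, []
        return cargo

@dataclass
class Port:
    """A location where ships dock and exchange cargo."""
    name: str
    yard: list = field(default_factory=list)

    def receive(self, ship: Ship) -> None:
        # Unloading a ship moves its cargo into the port's yard.
        self.yard.extend(ship.unload_all())
```

Because the vocabulary matches the domain, a reader can reason about "a port receiving a ship" without holding the underlying list manipulations in mind, which is precisely the cognitive-load reduction Fowler describes.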

Unmesh Joshi highlights the fundamental act of programming as mapping the 'real' domain ('What') onto a computational model ('How'). This isn't a one-way process; instead, the 'What' and 'How' continuously inform each other, revealing stable parts of the system and potential avenues for future change. He warns that relying solely on LLMs for code generation without a firm grasp of underlying abstractions can lead to procedural, less understandable code or overly complex designs.
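As a hypothetical illustration of the risk Joshi describes, compare a flat, procedural computation with the same logic expressed through a named abstraction. Both functions and the `OrderLine` type are invented for this sketch:

```python
from dataclasses import dataclass

# Procedural style: the 'How' is fully exposed, and the intent
# (the 'What') must be re-derived by every reader.
def total(items):
    t = 0
    for price, qty in items:
        t += price * qty
    return t

# Abstracted style: the domain concept (an order line) is named,
# so the 'What' is visible at the call site.
@dataclass
class OrderLine:
    price: float
    quantity: int

    def subtotal(self) -> float:
        return self.price * self.quantity

def order_total(lines) -> float:
    return sum(line.subtotal() for line in lines)
```

Both versions compute the same result, but only the second encodes a stable domain concept that can absorb future change, say, a per-line discount, without rippling through every caller.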

The discussion champions Test-Driven Development (TDD) as a strategy that operationalizes the feedback loop between 'What' and 'How'. Writing tests before implementation forces developers to consider the interface and desired behavior independently of the implementation details, promoting better encapsulation. LLMs, in this context, can be used as a rapid prototyping tool to sketch initial versions, but the final shape of the code should be guided by human refactoring and design principles.
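A minimal sketch of that workflow in Python, with an invented `slugify` example that is not from the discussion: the test states the 'What' first, and the implementation, whether hand-written or LLM-drafted, must satisfy it:

```python
import unittest

# Step 1: the test describes the desired behavior (the 'What')
# before any implementation exists.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_punctuation(self):
        self.assertEqual(slugify("What/How Loop!"), "whathow-loop")

# Step 2: the implementation (the 'How') -- possibly an LLM draft,
# then human-refactored -- is written to make the tests pass.
def slugify(title: str) -> str:
    cleaned = "".join(c for c in title if c.isalnum() or c == " ")
    return "-".join(cleaned.lower().split())
```

Because the tests pin down the interface independently of the implementation, the body of `slugify` can later be regenerated or refactored freely without renegotiating its contract with callers.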

Ultimately, the panel's takeaway is that LLMs are powerful tools but not replacements for human expertise. While they can significantly accelerate initial code generation and exploration, the ability to craft maintainable, adaptable, and well-understood software hinges on a deep understanding of the 'What/How' loop and the careful creation of meaningful abstractions. By using LLMs strategically within this framework, developers can leverage their strengths while preserving the essential human element in software design.


Alex Chen

Senior Tech Editor

Covering the latest in consumer electronics and software updates. Obsessed with clean code and cleaner desks.


Read Also

Y Combinator CEO's AI Obsession: Genius or Delusion?
Artificial Intelligence
TechCrunch

Garry Tan, head of Y Combinator, is pushing the boundaries of AI-assisted coding with his open-source 'gstack' setup, designed for Anthropic's Claude. But is this a revolutionary leap forward, or just another case of Silicon Valley hype? Critics are divided, questioning the tool's uniqueness and real-world value.

#Claude #SoftwareDevelopment
AI's Ephemeral Memory: How to Build Durable Understanding with Context Anchoring
Artificial Intelligence
Martin Fowler

Generative AI coding assistants offer incredible potential, but their short-term memory can lead to frustrating context loss. Rahul, a Principal Engineer at Thoughtworks, introduces 'Context Anchoring,' a powerful strategy to externalize and preserve crucial decision-making processes, ensuring long-term alignment and architectural integrity in AI-assisted development.

#SoftwareDevelopment #GenerativeAI