Learn how AI Skills in Microsoft Agent Framework solve context window limitations through progressive disclosure. This post explores why agent contexts become bloated with instructions and demonstrates how modular skills keep context lean while providing rich domain knowledge on demand.
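The progressive-disclosure idea described above can be sketched as follows. This is a minimal illustration, not the Microsoft Agent Framework API: the `Skill` class, field names, and helper functions are hypothetical, and the point is only that the base context carries short descriptions while full instructions load on demand.

```python
from dataclasses import dataclass

@dataclass
class Skill:
    name: str
    description: str   # always visible in the agent's base context
    instructions: str  # rich domain knowledge, loaded only when invoked

skills = [
    Skill(
        name="invoice_processing",
        description="Extract and validate fields from supplier invoices.",
        instructions="Step-by-step invoice workflow (long text omitted here).",
    ),
]

def base_context(skills: list[Skill]) -> str:
    # Only names and one-line descriptions enter the system prompt,
    # keeping the context window lean.
    return "\n".join(f"- {s.name}: {s.description}" for s in skills)

def load_skill(skills: list[Skill], name: str) -> str:
    # The full instructions are pulled in only when the model
    # actually selects this skill for the current task.
    skill = next(s for s in skills if s.name == name)
    return skill.instructions
```

With dozens of skills registered, the prompt grows by one line per skill rather than by each skill's full instruction set.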
Context-engineering
A language model does not have its own memory. It can only respond based on the messages and context provided to it in each conversation. Providing the right context is critical to getting results that feel personalized and intelligent. This is where memory comes in. But what does memory look like in an AI agent? How can we build systems that remember user preferences across conversations, without overwhelming users with questions or forcing them to repeat themselves? In this post, we will explore short-term and long-term agent memory.
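In its simplest form, short-term memory is just the replayed conversation history. The sketch below is a hypothetical illustration of that idea (the `ShortTermMemory` class and its trimming policy are assumptions, not the post's implementation): every turn is appended to a bounded message list that is sent with each model call.

```python
class ShortTermMemory:
    """Keeps the recent conversation turns that are replayed to the model."""

    def __init__(self, max_turns: int = 10):
        self.max_turns = max_turns
        self.messages: list[dict] = []

    def add(self, role: str, content: str) -> None:
        self.messages.append({"role": role, "content": content})
        # Trim the oldest turns so the context window stays bounded.
        self.messages = self.messages[-self.max_turns:]

memory = ShortTermMemory(max_turns=4)
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
memory.add("user", "What is my name?")
# memory.messages is sent in full with the next model call --
# replaying history is the only way the model "remembers" earlier turns.
```

Long-term memory, by contrast, has to survive across conversations, which is where persistent stores and preference extraction come in.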