A language model does not have its own memory. It can only respond based on the messages and context provided to it in each conversation. Providing the right context is critical to getting results that feel personalized and intelligent, and this is where memory comes in. But what does memory look like in an AI agent? How can we build systems that remember user preferences across conversations, without overwhelming users with questions or forcing them to repeat themselves? In this post, we will explore short-term and long-term agent memory.
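Because the model itself is stateless, even "short-term memory" is really the application re-sending the conversation history on every call. A minimal sketch of that idea in plain Python (no real LLM involved; `fake_llm` is a hypothetical stand-in for a chat-completion call):

```python
def fake_llm(messages):
    """Stand-in for a real chat-completion API call (assumption for illustration)."""
    return f"echo: {messages[-1]['content']}"

class ShortTermMemory:
    """Accumulates the conversation and replays it to the model each turn."""

    def __init__(self):
        self.messages = []  # the full history, re-sent on every call

    def chat(self, user_text):
        self.messages.append({"role": "user", "content": user_text})
        reply = fake_llm(self.messages)  # the model sees the whole history
        self.messages.append({"role": "assistant", "content": reply})
        return reply

memory = ShortTermMemory()
memory.chat("My name is Ada.")
memory.chat("What is my name?")
# The second call carries the first exchange along with it, which is the
# only reason a real model could answer the question.
print(len(memory.messages))  # 4: two user turns plus two replies
```

The key point is that nothing persists inside the model; drop the `messages` list and the "memory" is gone, which is exactly the limitation that long-term memory strategies address.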