This article discusses building an AI journal with LlamaIndex, focusing on a seek-advice feature within the journal.

- The initial implementation passes all relevant journal content into the LLM context, which can lead to low precision and high token costs.
- An enhanced implementation uses Agentic RAG, combining dynamic decision-making with data retrieval to answer questions more precisely.
- Creating and persisting an index to a local directory is straightforward with the LlamaIndex SDK.
- Observations include how parameters affect LLM behavior and the importance of matching inference capability to the content being queried.
- Completing the seek-advice feature calls for multiple agents working together, which leads to an Agent Workflow implementation.
- Agent Workflows can transition between agents dynamically based on LLM function calls, or give explicit control over each step for a more personalized experience.
- A custom workflow example illustrates a structured approach to agent interactions, controlling step transitions to generate effective advice.
- The article's takeaway is that Agentic RAG combined with a customized workflow in LlamaIndex optimizes user interactions in an AI journal.
- The source code for this AI journal implementation is available on GitHub for further exploration and development.
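The index creation and persistence step mentioned above follows LlamaIndex's standard pattern: build a `VectorStoreIndex` from documents on the first run, persist it to a local directory, and reload it on later runs instead of re-embedding. This sketch assumes `llama-index` is installed with a configured embedding model (the default requires an OpenAI API key); the `./storage` and `./journal_entries` paths are illustrative, not the article's actual paths.

```python
import os

from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

PERSIST_DIR = "./storage"        # where the index is saved (assumed path)
DATA_DIR = "./journal_entries"   # journal files to index (assumed path)

if not os.path.exists(PERSIST_DIR):
    # First run: embed the journal documents and persist the index.
    documents = SimpleDirectoryReader(DATA_DIR).load_data()
    index = VectorStoreIndex.from_documents(documents)
    index.storage_context.persist(persist_dir=PERSIST_DIR)
else:
    # Later runs: reload the persisted index instead of re-embedding.
    storage_context = StorageContext.from_defaults(persist_dir=PERSIST_DIR)
    index = load_index_from_storage(storage_context)

query_engine = index.as_query_engine()
```

Persisting to disk matters for a journal: entries accumulate over time, and reloading a stored index avoids paying embedding costs on every launch.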
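The Agentic RAG idea above can be illustrated with a library-agnostic sketch: the agent first decides whether a query needs journal context at all, and only then retrieves the most relevant entries, instead of always passing everything into the prompt. The `decide_needs_retrieval` heuristic, the keyword-overlap `retrieve`, and the sample journal entries below are hypothetical stand-ins for LLM tool-use and vector retrieval, not LlamaIndex APIs.

```python
# Minimal, library-agnostic sketch of Agentic RAG for a journal:
# decide whether retrieval is needed, then pull only the top entries.

JOURNAL = {
    "2024-01-03": "Felt anxious before the team demo; breathing helped.",
    "2024-01-10": "Long run in the morning, much better focus afterwards.",
}

def decide_needs_retrieval(query: str) -> bool:
    # Stand-in for an LLM tool-use decision: personal questions need
    # journal context; general questions can be answered directly.
    personal_markers = ("i", "my", "me")
    return any(w in query.lower().split() for w in personal_markers)

def retrieve(query: str, top_k: int = 1) -> list[str]:
    # Stand-in for vector retrieval: rank entries by keyword overlap.
    terms = set(query.lower().split())
    scored = sorted(
        JOURNAL.values(),
        key=lambda e: len(terms & set(e.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def seek_advice(query: str) -> str:
    context = retrieve(query) if decide_needs_retrieval(query) else []
    # Stand-in for the final LLM call: report what the prompt would hold.
    return f"advice(context={len(context)} entries) for: {query}"

print(seek_advice("How can I manage my anxiety before demos?"))
print(seek_advice("What is RAG?"))
```

The precision and cost win comes from the decision step: queries that need no personal context skip retrieval entirely, and queries that do only carry the top-k entries rather than the whole journal.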
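The custom-workflow idea — explicit control over which step runs next, rather than letting LLM function calls drive transitions — can be sketched without the library as a small state machine where each step names its successor. The step names and routing logic here are illustrative stand-ins, not LlamaIndex's Workflow API.

```python
# Sketch of a custom seek-advice workflow with explicit step transitions:
# each step mutates shared state and returns the name of the next step,
# so control flow is fixed in code rather than decided by the LLM.

def gather_context(state: dict) -> str:
    # Stand-in for retrieving journal entries relevant to the question.
    state["context"] = ["entry about morning runs improving focus"]
    return "draft_advice"

def draft_advice(state: dict) -> str:
    # Stand-in for an LLM call that drafts advice from the context.
    state["draft"] = f"Based on {len(state['context'])} entries: try a routine."
    return "review_advice"

def review_advice(state: dict) -> str:
    # Stand-in for a reviewer agent that approves or requests a redraft.
    state["advice"] = state["draft"]
    return "done"

STEPS = {
    "gather_context": gather_context,
    "draft_advice": draft_advice,
    "review_advice": review_advice,
}

def run_workflow(question: str) -> dict:
    state = {"question": question}
    step = "gather_context"
    while step != "done":
        step = STEPS[step](state)  # each step names its successor
    return state

result = run_workflow("How do I stay focused?")
print(result["advice"])
```

Because transitions are explicit, a step such as `review_advice` could loop back to `draft_advice` under a quality check, giving the deterministic, personalized control the article contrasts with purely function-call-driven agent handoffs.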