Agentic AI for Developers

Gain practical, hands-on skills in Agentic AI for developers to drive innovation, automation and measurable impact in your organisation. Designed for developers in Malaysia who want to build intelligent, autonomous AI workflows.

Category: Generative AI & Prompt Engineering
Duration: 5 Days
Enrolled: 1000+
(200+ reviews)


Why Choose Garranto Academy for Agentic AI Training?

Garranto Academy offers expert-led training with real-world projects, ensuring developers gain practical, job-ready Agentic AI skills.

Course Overview:

Agentic AI for Developers is an immersive, five-day, hands-on program for developers with existing Python skills and foundational GenAI knowledge. The course dives deep into building agentic workflows: AI systems capable of planning, acting, and collaborating autonomously. Participants work with LangGraph, Python Agent-to-Agent (A2A) architectures, and n8n automation to design multi-agent systems that communicate, call tools, and execute tasks across environments. Through guided labs and real-world scenarios, learners integrate agents with external services, implement safety guardrails, and apply observability to monitor system behavior. By the end of the program, participants will have built a complete, production-ready agentic workflow linking LangGraph, Python services, and n8n automations.

What You'll Learn in Our Agentic AI for Developers Course

Course Objectives:

Upon successful completion of this course, learners will be able to:
  • Understand the core building blocks of agentic systems, including tools, memory, state, and control flow.
  • Model and debug multi-step agentic workflows as executable graphs in LangGraph.
  • Implement Python Agent-to-Agent (A2A) messaging patterns with robust schemas, validation, and retries.
  • Build lightweight API endpoints to expose tools and agents for external interaction.
  • Orchestrate cross-system automations in n8n using triggers, webhooks, and secure credentials.
  • Configure guardrails for prompts, inputs, and secrets to ensure safety and reliability across the stack.
  • Test agent behaviors, log execution runs, capture metrics, and package a complete LangGraph–Python A2A–n8n demo workflow.

Prerequisites

  • Working Python knowledge, plus an understanding of GenAI fundamentals and prompting.
  • Ability to install packages, run commands, and manage virtual environments.
  • Familiarity with APIs, JSON schemas, FastAPI, and OAuth basics.

Course Outlines:

Module 1.1 — LangGraph: Concepts
  • Key Concepts:
  • Agent graph primitives: nodes, edges, state, and control flow
  • Tool calling, function signatures, and input/output schemas
  • Memory patterns: short-term state vs. persisted context
  • Branching, loops, retries, and timeouts in graphs
  • Prompt templates and parameterization for reliability
  • Guards and validation on tool inputs/outputs
  • Run logging, traces, and debugging strategies
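The primitives above (nodes, edges, shared state, conditional control flow) can be illustrated with a small hand-rolled sketch. Note this is not the LangGraph API itself, just a stdlib-Python stand-in for the same ideas: each node is a function that reads and updates a shared state dict, and each edge is a router that picks the next node from state.

```python
# Hand-rolled illustration of agent-graph primitives: nodes, edges,
# shared state, and conditional control flow. NOT the LangGraph API.
from typing import Callable, Dict

State = Dict[str, object]
Node = Callable[[State], State]

def run_graph(nodes: Dict[str, Node],
              edges: Dict[str, Callable[[State], str]],
              entry: str, state: State, max_steps: int = 10) -> State:
    """Walk the graph from `entry` until a router returns 'END'."""
    current = entry
    for _ in range(max_steps):           # loop guard stands in for timeouts
        state = nodes[current](state)    # node updates the shared state
        current = edges[current](state)  # edge/router picks the next node
        if current == "END":
            return state
    raise RuntimeError("graph did not terminate")

# Two toy nodes: a planner that decides whether a tool is needed,
# and a calculator tool node that does the work.
def plan(state: State) -> State:
    state["needs_tool"] = "2+2" in state["query"]
    return state

def calculator(state: State) -> State:
    state["answer"] = eval(state["query"])  # demo only; never eval user input
    return state

graph_nodes = {"plan": plan, "tool": calculator}
graph_edges = {
    "plan": lambda s: "tool" if s["needs_tool"] else "END",  # conditional edge
    "tool": lambda s: "END",
}

final = run_graph(graph_nodes, graph_edges, "plan", {"query": "2+2"})
print(final["answer"])  # -> 4
```

In LangGraph itself the same shape is expressed with a `StateGraph`, `add_node`, and conditional edges; the lab in Module 1.2 builds exactly that.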

Module 1.2 — LangGraph: Hands-On Lab

  • Scenario:
  • Build a two-step agentic workflow that answers a user query and calls a calculator/search tool.
  • Persist brief conversation state and emit a structured final answer.
  • Steps:
  • Create a new Python project and initialize a LangGraph workflow with state.
  • Define one LLM node and one tool node; register tool with input/output schema.
  • Add control flow: route to tool on demand; retry on tool failure.
  • Inject prompt variables; validate outputs; log each run.
  • Run the graph locally; test with two sample queries; capture traces.
  • Deliverables:
  • Source code for the graph and tool
  • Run logs/traces showing at least two successful executions
  • A short README with usage instructions and limitations
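The "retry on tool failure" step above can be sketched as follows. The flaky tool here is simulated (it fails twice, then succeeds); in the lab the wrapped call would be your registered calculator/search tool.

```python
# Retry-on-tool-failure sketch: wrap a tool call so transient errors
# are retried a fixed number of times before the workflow gives up.
attempts = {"n": 0}

def flaky_tool(expr: str) -> int:
    # Simulated tool: raises a transient error twice, then succeeds.
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated transient failure")
    return sum(int(x) for x in expr.split("+"))

def call_with_retry(tool, arg, retries=3):
    last_error = None
    for _ in range(retries):
        try:
            return tool(arg)
        except TimeoutError as exc:   # retry only transient failures
            last_error = exc
    raise last_error                  # retries exhausted: surface the error

result = call_with_retry(flaky_tool, "2+2")
print(result, attempts["n"])  # -> 4 3
```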

Module 2.1 — Python A2A: Concepts
  • Key Concepts:
  • Agent-to-agent messaging patterns (request/response, brokerless handoff)
  • Message contracts with Pydantic models and JSON schema
  • Lightweight API layer for agents (choose FastAPI)
  • Idempotency, retries, and backoff for robust exchanges
  • Concurrency basics with asyncio; avoiding deadlocks
  • Structured logging and correlation IDs for tracing conversations
  • Safety: input sanitization, rate limits, and secret handling
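The message-contract idea can be sketched with stdlib dataclasses (the course itself uses Pydantic models; `TaskMessage` and its fields here are hypothetical). Each agent-to-agent message carries a correlation ID so every hop of a conversation can be traced, and invalid payloads are rejected before an agent acts on them.

```python
# Stdlib stand-in for a Pydantic message contract: typed fields,
# validation, and a correlation ID shared across the conversation.
import json
import uuid
from dataclasses import dataclass, asdict

@dataclass
class TaskMessage:
    correlation_id: str   # same ID across every hop of one conversation
    sender: str
    goal: str

    def validate(self) -> None:
        if not self.goal.strip():
            raise ValueError("goal must be non-empty")

def encode(msg: TaskMessage) -> str:
    msg.validate()
    return json.dumps(asdict(msg))

def decode(raw: str) -> TaskMessage:
    msg = TaskMessage(**json.loads(raw))  # unknown fields raise TypeError
    msg.validate()
    return msg

wire = encode(TaskMessage(str(uuid.uuid4()), "planner", "summarise report"))
roundtrip = decode(wire)
print(roundtrip.sender, roundtrip.goal)  # -> planner summarise report
```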

Module 2.2 — Python A2A: Hands-On Lab
  • Scenario:
  • Implement two Python agents: a “Planner” and an “Executor” that exchange structured tasks.
  • Expose minimal endpoints and verify round-trip messaging.
  • Steps:
  • Scaffold two FastAPI services; define shared Pydantic message models.
  • Implement Planner to accept a user goal and emit a structured task list.
  • Implement Executor to receive a task, simulate execution, and return results.
  • Add retries/backoff; include correlation IDs; log each hop.
  • Write a small client script to drive end-to-end flow and verify responses.
  • Deliverables:
  • Two FastAPI services with shared schema package
  • Client script demonstrating at least one full round-trip
  • Logs showing retries and correlated request IDs
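The Planner-to-Executor round trip can be simulated in-process before wiring up the two FastAPI services: both agents are plain functions here, so the message flow itself can be run and inspected. The payload shapes are illustrative, not a prescribed schema.

```python
# In-process stand-in for the lab's Planner -> Executor round trip.
import uuid

def planner(goal: str) -> dict:
    # Break a user goal into a structured task list, tagged with one
    # correlation ID that every later hop must echo back.
    return {
        "correlation_id": str(uuid.uuid4()),
        "tasks": [{"id": i, "step": s}
                  for i, s in enumerate(goal.split(" then "))],
    }

def executor(plan: dict) -> dict:
    # Simulate execution of each task and answer under the same
    # correlation ID, so the client can match response to request.
    return {
        "correlation_id": plan["correlation_id"],
        "results": [{"id": t["id"], "status": "done"}
                    for t in plan["tasks"]],
    }

plan = planner("fetch data then write summary")
result = executor(plan)
print(len(result["results"]))  # -> 2
```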

Module 3.1 — n8n: Concepts
  • Key Concepts:
  • n8n workflow model: nodes, triggers, webhooks, and credentials
  • Calling external APIs and transforming JSON payloads
  • Error handling: try/catch nodes, fallbacks, and alerts
  • Secrets management and environment variables in n8n
  • Testing, versioning, and exporting/importing workflows
  • Connecting n8n with LangGraph/Python services via HTTP
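What an n8n "transform JSON" step does between two HTTP nodes can be written as a plain function: keep only the fields the next service's contract expects, rename the rest, and fill defaults. The field names here are hypothetical.

```python
# The shape of an n8n payload transform, as a plain function:
# map the upstream response onto the downstream request contract.
def transform(payload: dict) -> dict:
    return {
        "goal": payload["query"],                       # rename field
        "priority": payload.get("priority", "normal"),  # default if absent
        "source": "n8n-webhook",                        # constant tag
    }

incoming = {"query": "summarise report", "extra": "ignored by contract"}
outgoing = transform(incoming)
print(outgoing["goal"])  # -> summarise report
```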

Module 3.2 — n8n: Hands-On Lab
  • Scenario:
  • Create a webhook-triggered workflow that routes a user request to your LangGraph service, then forwards structured results to the Python Executor and returns a combined response.
  • Steps:
  • Set up an n8n webhook trigger; define sample payload and test it.
  • Add HTTP Request node to call LangGraph endpoint; map inputs/outputs.
  • Add a second HTTP Request to call the Python Executor; merge results.
  • Implement basic error handling and a notification step (e.g., email/log).
  • Export the workflow JSON and document credentials used.
  • Deliverables:
  • n8n workflow JSON export
  • Screenshot of a successful run with node outputs
  • Brief notes on credentials and error paths tested
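For orientation, the exported workflow JSON has roughly this shape: a list of nodes and a map of connections between them. This is an illustrative sketch only, trimmed of fields a real n8n export contains (positions, type versions, credential references), and the localhost URLs are placeholders for your own services.

```json
{
  "name": "agentic-demo",
  "nodes": [
    { "name": "Webhook", "type": "n8n-nodes-base.webhook",
      "parameters": { "path": "agentic-demo" } },
    { "name": "Call LangGraph", "type": "n8n-nodes-base.httpRequest",
      "parameters": { "url": "http://localhost:8000/plan" } },
    { "name": "Call Executor", "type": "n8n-nodes-base.httpRequest",
      "parameters": { "url": "http://localhost:8001/execute" } }
  ],
  "connections": {
    "Webhook": { "main": [[{ "node": "Call LangGraph" }]] },
    "Call LangGraph": { "main": [[{ "node": "Call Executor" }]] }
  }
}
```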

Module 4.1 — Integration: Concepts
  • Key Concepts:
  • End-to-end architecture: n8n trigger → LangGraph plan → Python execute
  • Data contracts across the boundary; schema evolution and versioning
  • Observability across services: logs, traces, and minimal dashboards
  • Deployment options for a demo stack (local first) and next-step hardening
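A minimal version of "observability across services" looks like this: each service emits JSON log lines that carry the correlation ID of the request, so one search over all three services' logs reconstructs an end-to-end run. The `records` list exists here only so the example is inspectable; real services would write to files or a log collector.

```python
# Correlated structured logging: one JSON object per line ("JSONL"),
# every line tagged with the request's correlation ID.
import json

records = []

def log(service: str, correlation_id: str, event: str, **fields) -> None:
    record = {"service": service, "correlation_id": correlation_id,
              "event": event, **fields}
    records.append(record)
    print(json.dumps(record))

cid = "demo-123"  # would come from the incoming request in a real service
log("n8n", cid, "webhook_received")
log("langgraph", cid, "plan_created", tasks=2)
log("executor", cid, "tasks_done", ok=2, failed=0)
# A search for '"correlation_id": "demo-123"' across all three services'
# logs now returns the full trace of this one run.
```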

Module 4.2 — Integration: Hands-On Lab
  • Scenario:
  • Assemble a working demo that performs a user-requested multi-step task using all three components.
  • Steps:
  • Wire n8n webhook to call LangGraph Planner; pass structured plan to Python Executor.
  • Add validation and guardrails at each hop; handle errors with fallback steps.
  • Capture correlated logs across all services; verify end-to-end outputs.
  • Package artifacts and instructions for rerunning the demo locally.
  • Deliverables:
  • Running demo across n8n, LangGraph, and two Python agents
  • Logs/traces proving a full successful run and one handled failure
  • Final README with setup, run, and troubleshooting steps
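The "validation and guardrails at each hop; handle errors with fallback steps" pattern can be sketched as follows: validate the plan before the Executor runs it, and route invalid plans to a structured fallback instead of letting the pipeline crash. The plan shape is hypothetical.

```python
# Guardrail-with-fallback sketch: reject malformed plans up front
# and return a structured error instead of raising mid-pipeline.
def validate_plan(plan: dict) -> list:
    errors = []
    if not isinstance(plan.get("tasks"), list) or not plan["tasks"]:
        errors.append("tasks must be a non-empty list")
    return errors

def execute_with_fallback(plan: dict) -> dict:
    errors = validate_plan(plan)
    if errors:
        # Fallback step: structured rejection the caller can act on.
        return {"status": "rejected", "errors": errors}
    return {"status": "done", "count": len(plan["tasks"])}

ok = execute_with_fallback({"tasks": [{"step": "fetch data"}]})
bad = execute_with_fallback({"tasks": []})
print(ok["status"], bad["status"])  # -> done rejected
```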

Module 5.1 — Hardening and safety: Concepts
  • Key Concepts:
  • Prompt and tool guardrails; input validation and output filtering
  • Secrets hygiene, API key rotation, and .env management
  • Rate limiting and simple quota controls
  • Test strategies: golden prompts, fixture payloads, smoke tests
  • Lightweight CI hooks for linting, type checks, and unit tests
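The "rate limiting and simple quota controls" bullet is commonly implemented as a token bucket. A sketch, with the clock injected so the behaviour is deterministic here; in a service you would pass `time.monotonic`:

```python
# Token-bucket rate limiter: allow bursts up to `capacity`, refill
# at `rate` tokens per second, reject when the bucket is empty.
class TokenBucket:
    def __init__(self, rate: float, capacity: int, clock):
        self.rate = rate              # tokens refilled per second
        self.capacity = capacity      # maximum burst size
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self) -> bool:
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

fake_time = [0.0]  # controllable clock for a deterministic demo
bucket = TokenBucket(rate=1.0, capacity=2, clock=lambda: fake_time[0])
first = [bucket.allow() for _ in range(3)]
print(first)            # -> [True, True, False]
fake_time[0] = 1.0      # one second later: one token has refilled
fourth = bucket.allow()
print(fourth)           # -> True
```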

Module 5.2 — Hardening and safety: Hands-On Lab
  • Scenario:
  • Add safety checks, tests, and minimal CI to your integrated demo.
  • Steps:
  • Introduce schema validation and unsafe-content filters in the LangGraph nodes.
  • Add request validation and rate limits in FastAPI services.
  • Write unit tests for message models and one end-to-end smoke test.
  • Configure a simple CI run (lint, type, tests) and record results.
  • Deliverables:
  • Updated services with guardrails enabled
  • Test report output and CI run log/screenshot
  • Short risk register noting remaining gaps and next steps
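The "unit tests for message models" step can be sketched with stdlib `unittest`: a golden fixture payload is decoded and its invariants asserted. The `TaskMessage` model here is a hypothetical stand-in for your own schemas.

```python
# Unit tests for a message model: a golden fixture round-trips,
# and unknown fields are rejected rather than silently accepted.
import json
import unittest
from dataclasses import dataclass

@dataclass
class TaskMessage:
    correlation_id: str
    goal: str

FIXTURE = '{"correlation_id": "demo-123", "goal": "summarise report"}'

class TestTaskMessage(unittest.TestCase):
    def test_fixture_roundtrip(self):
        msg = TaskMessage(**json.loads(FIXTURE))
        self.assertEqual(msg.correlation_id, "demo-123")
        self.assertTrue(msg.goal.strip())

    def test_rejects_unknown_fields(self):
        with self.assertRaises(TypeError):
            TaskMessage(correlation_id="x", goal="y", evil=1)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestTaskMessage)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # -> True
```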

Course Outcomes:

Upon completing the Agentic AI for Developers course, participants will:
  • Understand the core building blocks of agentic systems: tools, memory, state, and control flow.
  • Design, debug, and run multi-step agentic workflows as executable graphs in LangGraph.
  • Implement robust Python Agent-to-Agent (A2A) messaging with validated schemas, retries, and correlation IDs.
  • Orchestrate cross-system automations in n8n using triggers, webhooks, and secure credentials.
  • Integrate n8n, LangGraph, and Python services into a complete end-to-end agentic workflow.
  • Apply guardrails, secrets hygiene, and rate limiting for safety and reliability across the stack.
  • Test, log, and monitor agent behavior, and package a reproducible demo with documentation.

Key Benefits of Learning Agentic AI for Developers

Master cutting-edge Agentic AI capabilities that empower you to build autonomous, self-improving applications using advanced LLM workflows.

How Agentic AI Can Transform Your Development Workflow

Agentic AI radically accelerates development by automating complex tasks, generating code, and enabling applications to adapt dynamically.

Course Highlights

Comprehensive Learning

In-depth coverage of all key concepts and practical applications

Industry Certificate

Recognized certification upon successful completion

Expert Instructors

Learn from industry professionals with real-world experience

Ongoing Support

Continuous support and resources for career advancement

Course Information

Duration
5 Days
Effort
8 hours/day
Subject
Generative AI & Prompt Engineering
Quizzes
Yes
Level
Advanced
Language
English
Certificate
Yes
