Tech Stack

Coming from a traditional infrastructure background (VMware, server administration, and some networking), I shifted my focus toward DevOps, automation, and AI infrastructure. Today, my work mainly revolves around three pillars:

AI & Data Stack

  • Enterprise AI Platform: NVIDIA AI Enterprise
  • Open-Source Inference: Ollama • Open WebUI
  • LLM Gateway & Tracing: LiteLLM • Langfuse
  • Vector DBs: Milvus • PostgreSQL + pgvector (see the query sketch after this list)
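
As a minimal illustration of how the vector-database layer of this stack is typically queried, here is a sketch of a nearest-neighbour search against PostgreSQL + pgvector. The table, columns, connection string, and the psycopg2 driver are illustrative assumptions, not details of my actual setup.

    # Minimal sketch: cosine-distance search with PostgreSQL + pgvector.
    # Table name, columns, and connection string are hypothetical placeholders.
    import psycopg2

    def top_k_similar(query_embedding, k=5):
        # pgvector accepts a text literal such as '[0.1,0.2,0.3]' cast to vector
        vector_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"
        conn = psycopg2.connect("dbname=rag user=rag")  # placeholder connection string
        with conn, conn.cursor() as cur:
            cur.execute(
                """
                SELECT id, content
                FROM documents
                ORDER BY embedding <=> %s::vector  -- <=> is pgvector's cosine-distance operator
                LIMIT %s;
                """,
                (vector_literal, k),
            )
            return cur.fetchall()

In practice the query embedding would come from the inference layer above (for example an embedding model served via Ollama), but that call is omitted to keep the sketch short.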

DevOps & Platform Stack

  • Containerization: Docker • Docker Compose
  • Orchestration: Kubernetes (K3s • Tanzu • OpenShift)
  • Infrastructure as Code: Terraform • Ansible
  • CI/CD Pipelines: GitHub Actions • Jenkins
  • Observability: Grafana • Prometheus

Infrastructure & Automation Stack

  • Scripting: Python • Bash • PowerShell
  • Workflow Automation: n8n
  • Virtualization: VMware vSphere
  • HPC: Accelerated computing on NVIDIA GPUs

Certifications

Soft Skills

Awards & Recognition

NVIDIA Enterprise Platform Advisor Badge

Selected as one of only 32 professionals worldwide for NVIDIA’s Enterprise Platform Advisor program, recognizing excellence in AI infrastructure and platform engineering.

Learn more →

Latest Reference Project

In my latest project, I developed a sovereign AI framework built on NVIDIA-accelerated AI infrastructure as the backend. The solution integrates open-source and source-available services with NVIDIA NIM microservices for high-performance inference. It currently serves over 200 active users in production and offers API integrations for connecting to public LLMs.
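
Since NIM microservices expose an OpenAI-compatible API, the framework's services can reach the inference backend through a standard client. The snippet below is only a minimal sketch of that interface; the base URL, API key, and model name are placeholders rather than the project's real configuration.

    # Minimal sketch: calling a self-hosted NIM via its OpenAI-compatible API.
    # Base URL, API key, and model name are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://nim.internal:8000/v1",  # placeholder NIM endpoint
        api_key="not-used-by-local-deployment",  # local NIMs typically ignore the key
    )

    response = client.chat.completions.create(
        model="meta/llama-3.1-8b-instruct",  # example model id; depends on the deployed NIM
        messages=[{"role": "user", "content": "Summarise the platform architecture."}],
        temperature=0.2,
    )
    print(response.choices[0].message.content)

Because the interface is OpenAI-compatible, the same client pattern can also point at public LLM endpoints, which is one way the API integrations to public LLMs can be kept uniform.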

Read the full case study →