
My AI Journey: From Lagrange to LLMs

A decade with AI — from reading textbooks on Lagrange multipliers to understand SVMs, to building KyberAI, to vibe coding this website. Each era felt like the pinnacle. Each was just the beginning.

People sometimes call me "Long AI" or the "Father of KyberAI." It's flattering, but the truth is more interesting: my relationship with AI spans over a decade, through every phase of the field's evolution. Each moment felt like we'd reached the peak. Each time, it was just the beginning.

2014: The Lagrange Era

My first serious encounter with AI wasn't glamorous. I was a student, and I wanted to understand Support Vector Machines and K-means clustering — not just use them, but truly understand them.

So I did what any obsessive learner would do: I read an entire book on Lagrange multipliers.

This was before AI was cool. Before "machine learning engineer" was a job title. Before anyone talked about "AI safety" or "AGI timelines." It was just math — beautiful, foundational math that would underpin everything that came after.

Looking back, that investment paid dividends I couldn't have imagined. When you understand the optimization theory behind SVMs, you develop an intuition for how models learn. That intuition transfers across paradigms.
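
For the curious, this is the construction I was chasing: the soft-margin SVM is a constrained optimization problem, and Lagrange multipliers are what turn it into the dual form that kernels plug into. This is the standard textbook formulation, included here as a sketch rather than a reference.

```latex
% Soft-margin SVM primal (standard formulation):
\min_{w, b, \xi} \ \frac{1}{2}\lVert w \rVert^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad y_i (w^\top x_i + b) \ge 1 - \xi_i, \quad \xi_i \ge 0.

% Introduce multipliers \alpha_i \ge 0 and \mu_i \ge 0:
L(w, b, \xi, \alpha, \mu) = \frac{1}{2}\lVert w \rVert^2 + C \sum_i \xi_i
  - \sum_i \alpha_i \bigl[ y_i (w^\top x_i + b) - 1 + \xi_i \bigr]
  - \sum_i \mu_i \xi_i.

% Setting the derivatives in w, b, \xi to zero and substituting back
% yields the dual problem:
\max_{\alpha} \ \sum_i \alpha_i
  - \frac{1}{2} \sum_{i,j} \alpha_i \alpha_j \, y_i y_j \, x_i^\top x_j
\quad \text{s.t.} \quad 0 \le \alpha_i \le C, \quad \sum_i \alpha_i y_i = 0.
```

Swap the inner product x_i^T x_j for a kernel k(x_i, x_j) and you get the nonlinear SVM; that one substitution is why the dual view, and the Lagrange machinery behind it, is worth the textbook.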

2017: When AI "Didn't Work"

By 2017, I had started collaborating remotely with an AI startup in Vietnam. NLP was the focus, and it was... rough.

I still remember the debates. Engineers would argue passionately that their hand-crafted algorithms outperformed any ML approach for named entity recognition. And here's the thing: they were right. At the time, carefully tuned rule-based systems often beat neural approaches for specific tasks.

What a time to be alive.

Those debates taught me something crucial: AI skepticism is healthy. The field has always had more hype than substance in any given moment. But the skeptics made one critical error — they underestimated compounding progress.

The rule-based NER systems that beat neural networks in 2017? They're museum pieces now. The engineers who built them have long since converted to the church of deep learning. Progress in AI isn't linear — it's exponential, punctuated by paradigm shifts.

2017-2020: PhD — Decentralized Meets Machine Learning

My PhD research at UTT/Montimage/INRIA focused on something that seemed esoteric at the time: applying machine learning to distributed network security in Named Data Networking.

But here's what made it interesting. While everyone focused on how ML could help networks, I found the reverse also held: network effects could improve machine learning itself.

I broke down Bayesian Network functions into lambda calculus — a theoretical exercise that seemed purely academic. The idea was that IoT devices could share computation instead of rerunning everything from scratch. With the rise of edge computing and federated learning, these ideas are becoming relevant again.
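That sentence is abstract, so here is a toy sketch of the flavor: break a probabilistic computation into small pure functions, memoize them, and a later query (or, in the thesis setting, a neighboring device) can reuse the cached pieces instead of recomputing everything. The two-node network, the probability tables, and the lru_cache stand-in for cross-device sharing are all invented for illustration; this is not the actual PhD work.

```python
# Illustrative sketch only: a toy two-node Bayesian network (Rain -> WetGrass)
# whose inference is split into small pure functions. A shared cache stands in
# for devices reusing each other's partial results instead of recomputing them.

from functools import lru_cache

# Conditional probability tables for the toy network (invented numbers).
P_RAIN = {True: 0.2, False: 0.8}
P_WET_GIVEN_RAIN = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.1, False: 0.9}}

@lru_cache(maxsize=None)  # the "shared computation" piece: results are memoized
def p_rain(rain: bool) -> float:
    return P_RAIN[rain]

@lru_cache(maxsize=None)
def p_wet_given_rain(wet: bool, rain: bool) -> float:
    return P_WET_GIVEN_RAIN[rain][wet]

@lru_cache(maxsize=None)
def p_wet(wet: bool) -> float:
    # Marginalize over the parent: P(wet) = sum_r P(wet | r) * P(r)
    return sum(p_wet_given_rain(wet, r) * p_rain(r) for r in (True, False))

print(f"P(WetGrass=True)  = {p_wet(True):.2f}")   # 0.26
print(f"P(WetGrass=False) = {p_wet(False):.2f}")  # 0.74, reuses cached pieces
```

The structure is the point: once inference is expressed as a composition of small, pure functions, caching and sharing partial results comes almost for free.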

This work also revealed the thread that connects everything I do: decentralization. My PhD was about decentralized network security; DeFi is decentralized finance. The common thread isn't just the technology — it's trusting systems instead of gatekeepers.

2021-Now: DeFi Research at Kyber Network

When I joined Kyber Network in 2021, I started as a researcher exploring DeFi protocols, AMMs, and aggregators. In 2022, I became Research Lead, taking on greater responsibility for research strategy and team leadership.

But I couldn't escape AI.

In 2023, I led the development of KyberAI with the team — a system that applied machine learning to on-chain data to help DeFi traders make informed decisions. KyberScore used both on-chain signals and off-chain technical analysis to predict short-term token performance. Colleagues jokingly called me the "Father of KyberAI" or simply "Long AI."
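To make "on-chain signals plus off-chain technical analysis" concrete, here is a deliberately simplified sketch. Every feature name, weight, and scaling constant in it is invented for illustration; it is not KyberScore's actual methodology.

```python
# Hypothetical sketch of the *idea* behind a token score that blends on-chain
# signals with off-chain technical analysis. Feature names, weights, and
# scaling below are invented for illustration only.

from dataclasses import dataclass

@dataclass
class TokenSignals:
    netflow_usd: float         # on-chain: net exchange inflow in USD (negative = outflow)
    active_addresses_z: float  # on-chain: z-score of active addresses vs. trailing average
    rsi_14: float              # off-chain TA: 14-period relative strength index (0-100)
    macd_histogram: float      # off-chain TA: MACD histogram value

def clamp(x: float, lo: float = 0.0, hi: float = 100.0) -> float:
    return max(lo, min(hi, x))

def token_score(s: TokenSignals) -> float:
    """Blend on-chain and TA features into a 0-100 score (illustrative weights)."""
    onchain = 50.0 - s.netflow_usd / 1e6 + 10.0 * s.active_addresses_z
    ta = 50.0 + (50.0 - s.rsi_14) * 0.4 + 20.0 * s.macd_histogram
    return clamp(0.6 * onchain + 0.4 * ta)

signals = TokenSignals(netflow_usd=-2_500_000, active_addresses_z=1.2,
                       rsi_14=38.0, macd_histogram=0.3)
print(f"score = {token_score(signals):.1f}")  # prints 63.0 with these invented numbers
```

The point of the sketch is the shape of the problem: two very different data sources reduced to one number a trader can act on.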

KyberAI was eventually discontinued in December 2023, but the lessons remained. Building ML products isn't just about model accuracy — it's about understanding user needs, data pipelines, and the entire ecosystem.

In 2025, I took on data ownership: building the company's data infrastructure from scratch with a small team, migrating from unreliable third-party sources to in-house pipelines, and supporting major product launches. My daily work now spans data product ownership, analytics, and research support.

2023: The Sputnik Moment

Then ChatGPT happened.

I've been in this field long enough to be skeptical of hype cycles. I've seen "AI winters" and "AI springs." But this was different. The transformer architecture, scaled to billions of parameters, trained on the internet's text — it felt like a genuine phase transition.

I built personal project after personal project. Not because I needed to, but because I needed to understand. How do these models really work? What are their failure modes? Where are the boundaries of their capabilities?

I also became an evangelist inside my company. AI tools weren't going to replace anyone — but people who could use AI tools effectively would outpace those who couldn't.

2026: Vibe Coding This Website

And now? I'm writing this post on a website I built entirely through "vibe coding" — collaborative coding with AI assistants. The irony isn't lost on me.

A decade ago, I was reading Lagrange multiplier textbooks to understand the math behind SVMs. Now I'm having conversations with AI systems that understand context, generate code, and help me think through problems.

The field has evolved from "AI doesn't work" to "AI might be too powerful." From hand-crafted features to learned representations. From narrow task-specific models to general-purpose systems.

What I've Learned

A few principles have emerged from this journey:

1. Invest in fundamentals. The time I spent on Lagrange multipliers wasn't wasted. Understanding foundations makes you adaptable across paradigm shifts.

2. Healthy skepticism, unhealthy betting. Be skeptical of any specific claim. But never bet against compounding progress in a field with strong feedback loops.

3. The thread matters. My work spans network security, DeFi, and AI. The common thread — decentralized systems, trust through verification — ties it together. Find your thread.

4. Each peak is a plateau. Every moment felt like we'd reached the pinnacle of AI capability. Every moment was just the beginning of the next phase.

5. Build to understand. Reading papers isn't enough. You have to build. The act of creation reveals what theory obscures.


I don't know what the next decade holds. If the past is any guide, it'll make everything before look quaint. The 2014 version of me reading Lagrange textbooks would be stunned by what's possible today. The 2026 version of me will probably look back on this post the same way.

Each era feels like the pinnacle. Each is just the beginning.