Democratizing AI: Insights from Chirag Agrawal, Tech Leader & AI Platform Builder
I spoke with Ashutosh Garg on The Brand Called You about platform leverage, ethics, and democratizing AI development — covering open-source versus proprietary AI model economics and the future of agentic systems.
Transcript
Host: Welcome to another episode of The Brand Called You. I'm your host, Ashutosh Garg, and today I'm delighted to welcome a senior tech leader from Seattle, USA, Mr. Chirag Agrawal. Chirag, welcome to the show.
Chirag: Thank you, Ashutosh. It's my pleasure to be here. Thank you.
Host: Tell me about your journey. What took you to the US, and what first drew you to the world of AI and voice technology?
Chirag: Yeah, so my first experience of building something with a computer was when I was 15. I built a website for my school. After school, in the evening, I had this really nice computer teacher who would open the lab just for me. I would go to the computer lab and tinker around for hours, and I taught myself to build a website. That was the first spark of curiosity. Something clicked. That was the first time I built something that was alive using a computer, and that was really powerful.
In the second year of my undergrad, I built a search engine from scratch where you could upload a PDF book and then search for answers to your questions within it. The engine would tell you that the answer is on this page or that page. That was really my first interaction with AI, using natural language processing to find answers to questions. That's when I first understood the power of machine-based intelligence and how productive it could be.
I got a lot of encouragement from my friends, my professors, my family, pretty much everybody who mattered to me at that point. So I decided to build an AI game. I had an Android phone, so I taught myself to build an AI player that I could play against. Then I started sending it out to a lot of startups in Bangalore and asked them for an internship, and I got a great response. I got a lot of offers from gaming companies in Bangalore, and I joined one of them as an intern. That set me on this path of being an AI builder.
In my final year, I got an offer from Amazon to come work for them as a software engineer in Hyderabad. So I moved to Hyderabad and joined Amazon. One day at work, I heard that somebody from the Seattle office had come down to Hyderabad to demo this new product called Alexa. And I was like, wow, that sounds wonderful. I have to go and watch that. I went to the demo, and when I saw it, I was left speechless. There was this simple device, and you could speak to that device, and it would actually understand you and speak back to you. I was so amazed to see that. I was like, how is this even possible? I'm so curious how this technology works. What is behind this box?
Fortunately, the very next year, I got an offer from Alexa to join them in Seattle. And since then, I have been working there.
Host: Amazing. You also said you chose discipline fit over prestige early on. How did that decision shape your journey?
Chirag: Yeah, this is a very personal story. Not a lot of people know about this, but when I was in school, like most Indian parents, my parents also had this goal for me, that I should go to the best engineering college in India. Obviously I also wanted that.
Fortunately, I got an admit from BITS Pilani, and you might have heard of it. It's one of the best institutes in India. They invited me to a counseling session at their Goa campus, and when I was there, I could see the pride on my parents' faces. They were so happy that their son was joining.
But unfortunately, after counseling, I didn't get the branch I wanted. I wanted to study computer science, and that's not what I got. As an 18-year-old, it was a very difficult decision for me to choose my interest over this prestigious institute.
I remember I was sitting in the orientation class with my father, which is technically the first class for a new student in college. I gathered all the courage I had and I told him that I don't want to study here. I want to go to Bangalore and I want to study computer science. That was really difficult for me to do. My father was a bit disappointed, but he understood me.
When the dean or the principal of the college heard that there is a student who doesn't want to study here, he actually called me to his office. I don't remember what he said to me, but I remember the look on his face. It was basically: you must be an idiot for not wanting to study here, like this is the place to be. And he's not wrong. But what I couldn't explain to him was that I can't imagine myself learning anything other than computers, so that's where I will go.
In retrospect, that was one of the best decisions that I could have made at that point.
Host: What an amazing story. Moving on, you often speak about democratizing voice and Gen AI. What does true democratization mean to you in practice?
Chirag: So when I say democratization, I mean removing barriers that keep developers from building with AI.
In 2016, when I started working with voice AI technology, it wasn't really possible for a developer to create an application that can take speech as an input and map it to their business logic. That was a very hard thing to do. It required expertise in NLP, machine learning, AI infrastructure. Basically, you needed millions of dollars and a research team to build something like that.
The focus of my work since then has been to move those difficult parts of building with AI into the platform so that developers don't have to worry about all of these infrastructure-related concerns.
So when I say democratization, what I mean is making building with AI as easy as possible so that people can focus on their business logic, which is where the true innovation at scale actually happens.
Host: Well said. And tell me, what are some common misconceptions developers still hold about building agentic systems?
Chirag: Great question. One of the things I keep seeing is people trying to build these agents that can do everything. These general-purpose agents that should be able to do this and that, basically trying to achieve AGI in some form.
I understand why that idea is so seductive. It's because there is an impression that AI can actually think like humans, that it can meticulously plan things. But that is not true. In practice, it doesn't work. Even big AI platforms like ChatGPT and Claude haven't quite been able to get there.
The key here is to understand that your agents should be defined very narrowly, and they should be very specialized at one particular task. Then they can collaborate with other agents for the things they cannot do themselves. So that's one misconception.
The second one that I feel is important to mention is people trying to ship AI into the world without proper evals and guardrails, which kind of misses the whole point of making AI helpful to people. If it's not tested for safety and accuracy, then it's not a useful AI. So that's another misconception.
Host: Very interesting. How can a platform embed ethical behavior into design rather than just monitoring outcomes?
Chirag: I think the key here is to make the right behavior the path of least resistance on the platform, so that doing the right thing is also doing the easy thing.
A common scenario that platform developers run into is: we want to prevent people from abusing the platform in X or Y way. Usually the two choices they have are: can we just publish some guidelines and people will hopefully follow them, or should we build something into the system that prevents the wrong behavior in the first place?
Usually, the second choice is the best choice. That can be shaped in many different ways. You can have interfaces that prevent the abuse of AI or the platform in general. Or you can actively build guardrails into the system which prevent its abuse.
A common one is to build a response classifier that flags harmful content on its way out to the user, so that the AI stays aligned with human goals and it doesn't behave unethically.
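The response-classifier pattern described above can be sketched in a few lines. This is a toy illustration of the shape of such a guardrail, not a real safety system: the classifier here is a keyword stub standing in for a trained model, and all names are hypothetical.

```python
# Output guardrail sketch: every model response passes through a classifier
# before it reaches the user. The classifier below is a stub; a production
# system would call a trained safety model instead.

BLOCKED_TOPICS = {"weapon instructions", "self-harm methods"}

def classify_response(text: str) -> str:
    """Return 'harmful' or 'safe'. Stand-in for a real safety classifier."""
    lowered = text.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "harmful"
    return "safe"

def guarded_reply(model_output: str) -> str:
    """Replace flagged responses with a refusal before they leave the system."""
    if classify_response(model_output) == "harmful":
        return "Sorry, I can't help with that."
    return model_output

print(guarded_reply("Here is a recipe for pasta."))
```

The point is structural: the check sits on the serving path itself, so no product built on the platform can skip it.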
Host: Well said, Chirag. You're also an AI platform builder. How do you define platform leverage, and why does it matter so much for scale?
Chirag: Good question. When we talk about software platforms, what it really means is building a general solution to a problem, basically building a platform for that problem. Another way to put it is: problems that can be generalized should be handled at platform level so that other people can come and conduct their business on top of it.
Platform leverage is the multiplier that you get when you make one change in the platform and its value is created across products, across teams, across users, without them doing anything. Basically, one improvement benefits everybody who uses the platform.
Shopify or Amazon are great examples of platforms where people can come and sell their goods to millions of customers, because the platform has already handled the hard problems of e-commerce, like payments, web infrastructure, user experience, and inventory management.
An AI platform is similar. If you optimize usage of compute resources in the platform, then that is cost saving for all products built on that platform. If you reduce latency at platform level, then that gain is distributed to all product builders on the platform. If you add memory to the platform, like an AI memory, then suddenly all the products on that platform are smarter. That's why building platform and handling these fundamental concerns at platform level is the path to scale.
Host: Very interesting. Often people tell me that fast AI will mean cutting corners. How do you build systems that are both fast and robust?
Chirag: Interesting question. I think this notion that fast means cutting corners comes from the assumption that speed and robustness are opposing forces, which is not really the case.
If an AI system is using prompt caching, for example, it's not really cutting any corners. It's just trying to be less wasteful of resources. Systems which are robustly designed are designed with the goal of being very efficient from the ground up, and hence they are also fast.
There are many ways to make an AI platform or system fast. Like I mentioned, caching. But there is also processing partial responses from AI, or routing simpler requests to smaller models and complex requests to larger models. All of these are techniques which reduce resource waste in the platform, and that also simultaneously makes the platform very fast.
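The caching idea mentioned above can be illustrated with a toy sketch. Real prompt caching in serving stacks reuses attention state for shared prompt prefixes; this simplified version just memoizes whole responses to show the waste-reduction principle, with an illustrative stand-in for the model call.

```python
# Toy illustration of caching in an AI serving path: identical prompts
# should not pay for inference twice. `expensive_model_call` stands in
# for a costly GPU inference.
from functools import lru_cache

calls = {"count": 0}

def expensive_model_call(prompt: str) -> str:
    calls["count"] += 1          # each increment represents one GPU inference
    return f"answer to: {prompt}"

@lru_cache(maxsize=1024)
def cached_answer(prompt: str) -> str:
    """Serve repeated prompts from the cache instead of re-running the model."""
    return expensive_model_call(prompt)

cached_answer("What is platform leverage?")
cached_answer("What is platform leverage?")   # second call is served from cache
print(calls["count"])                         # only one real inference happened
```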
So yeah, I think this is a misconception: that fast means cutting corners.
Host: And what role does automation play in managing the ongoing costs of AI systems?
Chirag: Automation plays a huge role in managing the ongoing cost. It's one of the biggest levers when it comes to streamlining costs and making them predictable.
I would say there are three important factors. If you work on them, you can really streamline the cost of your AI product. Number one is the size of the model you are using to serve your customers, which determines the GPU capacity your product requires. The second is the number of input tokens you are sending to the AI to process a single customer request. And the third is the number of output tokens you are producing to serve one customer request.
Automation can help you in all these three areas. For example, you can automate request routing that uses a smaller model for simpler requests and a larger, more expensive model for complex requests. Most requests are usually simple and they will be handled by a cheaper model.
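A minimal sketch of that routing idea: a cheap heuristic scores each request and picks a model tier. The complexity heuristic and model names here are illustrative placeholders, not any specific provider's API.

```python
# Cost-aware request routing sketch: simple requests go to a small, cheap
# model; complex ones go to a large, expensive model. The scoring rule is
# a deliberately crude placeholder for a real complexity classifier.

SMALL_MODEL = "small-fast-model"
LARGE_MODEL = "large-capable-model"

def estimate_complexity(request: str) -> float:
    """Crude proxy: long, multi-step analytical requests score higher."""
    lowered = request.lower()
    score = len(request) / 500
    score += 0.5 * sum(lowered.count(w) for w in ("step", "analyze", "compare"))
    return score

def route(request: str) -> str:
    """Pick a model tier based on the estimated complexity."""
    return LARGE_MODEL if estimate_complexity(request) > 1.0 else SMALL_MODEL

print(route("What time is it in Tokyo?"))
```

Because most traffic is simple, even a rough router like this shifts the bulk of requests onto the cheaper tier.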
You can implement context compression, which reduces the number of input tokens you're sending to the model. There are many ways to compress context. It depends on the domain you're working in. You can have more intelligent compression, which uses AI itself to compress, or more rule-based compression, defined according to your business rules.
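A rule-based variant of that compression can be sketched as a sliding window over the conversation: keep the system prompt, keep the newest turns that fit a token budget, drop the rest. The whitespace token count below is a rough stand-in for a real tokenizer, and all names are illustrative.

```python
# Rule-based context compression sketch: cap input tokens by keeping the
# system prompt plus only the most recent turns that fit the budget.

def count_tokens(text: str) -> int:
    """Whitespace word count as a rough stand-in for a model tokenizer."""
    return len(text.split())

def compress_context(system_prompt: str, turns: list[str], budget: int) -> list[str]:
    """Keep the newest turns that fit within `budget` total tokens."""
    kept = []
    used = count_tokens(system_prompt)
    for turn in reversed(turns):          # walk from newest to oldest
        cost = count_tokens(turn)
        if used + cost > budget:
            break                         # older turns are dropped
        kept.append(turn)
        used += cost
    return [system_prompt] + list(reversed(kept))

history = ["user: a b c d", "bot: e f g h", "user: i j k l"]
print(compress_context("system prompt here", history, budget=10))
```

Smarter variants replace the dropped turns with an AI-written summary instead of discarding them outright.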
Similarly, for output tokens, you can use smarter prompting techniques, or a model that produces more concise responses. Fewer output tokens consume fewer GPU resources, and that helps cut down the ongoing cost of your AI system.
Host: Fascinating. And how do open source and proprietary models fit together in the economics of AI development?
Chirag: Yeah, another interesting question. This one is very strategic and competitive.
Closed models are usually the state-of-the-art models. That makes sense: if somebody has a model that is leading the market, they can capture immediate revenue by selling access to it. That revenue helps them fund the next phase of research, which pushes the frontier even further.
Open models allow us to conduct a lot more research and development through community contribution. This is what we are seeing with Meta's Llama model, which is open source.
You'll notice that companies with a lot of existing surfaces, usually big tech companies with existing products, both internal and external, where AI can be used to improve the product, benefit more from open-source models. You get community development around those models, and hence around those products. That's why you see companies like Google, Meta, and Microsoft releasing open-source models. Smaller companies, which don't have this advantage, tend to prefer a closed approach. Both are important.
Host: How close are we truly to autonomous, self-improving agents that can operate safely in real settings?
Chirag: I would say we are getting closer every month or every few months. There are benchmarks which track how much time it takes for an AI to achieve a certain task, and AI is consistently improving on these benchmarks.
You can take coding AI assistants as an example. These are the ones that I use at my job regularly. In the beginning of the year, these assistants were okay. They were good, but they were not really, really good. But if you compare them to where they are now, the difference is huge. That's because the scaffolding of tooling that exists now just didn't exist before, and the kind of models we have now didn't exist at the beginning of the year.
So we are improving in this area of being autonomous. I can't predict a timeline, because that's usually pretty difficult.
Host: As someone who's spent the last 10-15 years in technology and AI, what excites you the most about the evolution of agentic systems in the coming years?
Chirag: There is a lot to look forward to. AI agents, from being these standalone tools, are moving towards being this interconnected ecosystem, similar to what we have right now with web technology.
There are industry standards like MCP, which was proposed by Anthropic, and the Agent2Agent protocol, which is Google's standard for agent collaboration. These standards are taking us to a place where independent AI agents will be able to collaborate autonomously to work for you.
We are seeing AI browsers coming to market, and right now they are mostly built to work with web technology. But I think next year you will see them use these AI agents with the help of these standards.
We are in the early days right now. It reminds me of the early days of Android or iOS phones, where we had these apps and they were functional, but not really good, and then they became good very, very fast. A similar thing is going to happen with these AI agents, just on a much shorter timeline.
Host: And my last question, how do you see the human role changing as AI agents take on more complex decisions?
Chirag: Great question. I think there will always be humans in the loop in every AI agentic workflow, and the scope and productivity of every human will be remarkably different going forward. They will have a different role to play in the future than they have now, which is mostly working with the tools directly.
The problem with AI is that it lacks critical judgment and it cannot deal with ambiguity. That's where humans come into the picture. They can define the goal and the intent for the AI to move in the right direction, and they can organize different AI systems to achieve a goal, which is something that's hard for AI to do.
The important skills that people will bring to the table would be critical thinking and introspection, and really defining what you want to do next.
Host: What a fascinating conversation this has been. Thank you again for speaking to me, and all the very best to you.
Chirag: Thank you so much for having me. It was a pleasure. Thank you.