We build AI that actually ships
Most companies we talk to don’t have an AI problem. They have a deployment problem.
The models are there. The APIs are there. What’s missing is someone who can take a promising prototype and turn it into a system that runs reliably in production, that handles edge cases, that doesn’t cost a fortune at scale, that your team can maintain after we leave.
That’s our work. Cartesian Trees is an AI solution partner based in Paris. We work alongside your engineering team to design, build, and deploy AI-powered systems. Not slide decks. Not proofs of concept that live in Jupyter notebooks. Working software that solves a specific problem for your business.
Where we come from
We started with a research question: what if your database could reason under uncertainty? That led us to build Bayesian inference algorithms directly into PostgreSQL (MCMC sampling, Gibbs sampling, variational inference), running where the data lives instead of in a separate Python process. That research earned us Jeune Entreprise Innovante (JEI) status, the French government’s recognition that a company is doing genuine R&D, not just applying existing tools.
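To make “Gibbs sampling” concrete for readers who haven’t met it: the idea is to draw from a joint distribution by alternately sampling each variable from its conditional given the others. Here is a minimal, didactic sketch in pure Python, sampling a bivariate normal with correlation rho. This is an illustration of the technique, not our PostgreSQL implementation; the function name and parameters are ours for exposition.

```python
import random

def gibbs_bivariate_normal(rho, n_samples=20000, burn_in=1000, seed=42):
    """Sample a standard bivariate normal with correlation rho by Gibbs
    sampling: alternate exact conditional draws of each coordinate."""
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5  # both conditionals have variance 1 - rho^2
    x, y = 0.0, 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)  # x | y ~ N(rho * y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)  # y | x ~ N(rho * x, 1 - rho^2)
        if i >= burn_in:            # discard early draws before convergence
            samples.append((x, y))
    return samples

def corr(xs, ys):
    """Sample correlation coefficient, stdlib-only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

samples = gibbs_bivariate_normal(rho=0.8)
xs = [s[0] for s in samples]
ys = [s[1] for s in samples]
est_rho = corr(xs, ys)
```

After the burn-in, the chain’s draws behave like samples from the joint distribution, so the empirical correlation of the chain recovers rho. The same alternating-conditional idea scales to the multivariate models we ran inside the database.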
That background shapes how we approach everything. We don’t just call an API and hope for the best. We understand the math underneath, which means we can debug problems that purely integration-focused teams can’t, and we know when a fancy approach is overkill and a simple one will do.
As AI moved from research to product, from Bayesian models to LLMs, from batch inference to real-time agents, we moved with it. Today we build LLM-powered applications, agentic systems, RAG pipelines, and the full production stack around them. But the foundation is the same: understanding the technology deeply enough to make it work reliably, not just impressively.
How we work
We don’t do 6-month proposals. AI moves too fast for that, and honestly, nobody knows upfront exactly what will work until you try it against real data.
Instead, we start by understanding what business decision you’re trying to improve or what workflow you’re trying to automate. Then we build a working prototype against your actual data, usually within the first couple of weeks, so we can see what works and what falls apart early, before you’ve invested months. Once the approach is validated, we do the harder, less glamorous work of making it production-ready: evaluation pipelines, error handling, cost optimization, monitoring, and proper testing.
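An evaluation pipeline, in its simplest form, is just a labeled test set and a scoring loop that runs before every change ships. The sketch below shows the shape of the idea; the toy model, cases, and exact-match scorer are placeholders standing in for a real LLM call and a real metric.

```python
def evaluate(model_fn, cases, score_fn):
    """Score a model over labeled cases; return accuracy plus the failures,
    so regressions are visible before deploying a change."""
    failures = []
    for prompt, expected in cases:
        got = model_fn(prompt)
        if not score_fn(got, expected):
            failures.append({"prompt": prompt, "expected": expected, "got": got})
    accuracy = 1 - len(failures) / len(cases)
    return accuracy, failures

def toy_model(prompt):
    # Placeholder standing in for an LLM call.
    return {"capital of France?": "Paris", "2 + 2?": "4"}.get(prompt, "unknown")

def exact_match(got, expected):
    return got.strip().lower() == expected.strip().lower()

cases = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
    ("capital of Japan?", "Tokyo"),
]
accuracy, failures = evaluate(toy_model, cases, exact_match)
```

Real pipelines swap in model-graded or semantic scorers and track cost and latency alongside accuracy, but the loop stays this simple: every failure is a named, reproducible case rather than an anecdote.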
We work embedded with your team throughout. When the project is done, your engineers own the system and understand it well enough to extend it. We’re not interested in creating dependency. We’d rather you call us for the next problem because the last engagement went well, not because you can’t maintain what we built.
Who we are
Ayush Tiwari, founder, based in Paris.
Engineering degree from IIT Roorkee (B.Tech, 2012–2016). Before starting Cartesian Trees, Ayush worked with both American and French companies across a wide range of engineering challenges: Stripe’s payment infrastructure, Facebook’s Conversion API integrations, Metabase (open-source BI), and Pyodide (Python in the browser).
That breadth matters. AI projects don’t exist in isolation. They need backend APIs, frontend interfaces, payment systems, data infrastructure, and measurement. Having done all of that means we can build the whole system, not just the model.
Why JEI matters
The Jeune Entreprise Innovante certification is awarded by the French government to companies for which R&D is a core part of the business: not companies that merely use AI, but companies that advance it. It means our expertise is built on original research, not OpenAI API wrappers. It also means we approach problems with a researcher’s rigor: we measure, we evaluate, and we don’t ship things we can’t prove work.
What we work with
We use whatever is right for the problem, but here’s where our depth lies:
AI and data: LLMs (OpenAI, Anthropic, Mistral, open-source models), agentic frameworks, RAG systems, Bayesian inference and probabilistic programming (PyMC), NLP with spaCy, embeddings and vector search, evaluation frameworks.
Backend: Python, Django, FastAPI, PostgreSQL (deep expertise; it’s where our Bayesian research lives), asyncpg, SQL.
Frontend: React, Next.js, TypeScript. We build the interfaces that make complex systems usable.
Infrastructure: GCP, Kubernetes, Docker, CI/CD. The deployment and monitoring layer that keeps production systems running.
Data: NumPy, SciPy, Pandas, Metabase, data pipeline engineering, web scraping and extraction.
Open source: Active contributor to Stripe API, Facebook CAPI, Metabase, Pyodide.
Get in touch
If you’re figuring out how AI fits into your product, or you’ve started and hit the wall between “demo” and “production,” we should talk. We’re most useful when you have real data, a real problem, and a team that’s ready to build.
