Speed vs. Discipline: Technology Trade-offs in Early-Stage Platforms
By Nirmal Chandra Nayak, Cofounder · November 29, 2025
Building fast helps you validate the market. Building right helps you scale. The hard part is knowing when to switch gears.
This is the story of how we navigated that tension at CenterShops—what we optimized for, what we sacrificed, and what we learned about technical trade-offs in early-stage platforms.
The Speed Imperative
When we started CenterShops, we had a hypothesis: customers in Tier-2 cities want hyperlocal delivery, and local merchants want digital reach. But we did not know whether that hypothesis was correct.
We needed to test it quickly.
That meant making deliberate choices to optimize for speed:
No custom design system: We used off-the-shelf UI components from Flutter libraries. They were not perfect, but they were good enough. Custom design could wait until we validated demand.
Manual operations: Merchant onboarding was manual. We called merchants, explained the platform, and walked them through setup. It did not scale, but it let us learn what merchants actually needed.
Minimal automation: Order assignment to delivery partners was semi-automated. We had algorithms, but operations staff could override them. This gave us flexibility to handle edge cases while we learned the operational patterns (see the sketch after this list).
Monolithic architecture: Everything in one codebase. Simpler to develop, test, and deploy. We could ship features in days, not weeks.
Pragmatic testing: We wrote tests for critical paths (payments, order flow) but not for every function. We prioritized shipping over 100% test coverage.
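To make "semi-automated with a manual override" concrete, here is a minimal sketch of the idea. The Partner fields, the scoring weights, and the function names are illustrative, not our production code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Partner:
    partner_id: str
    distance_km: float   # distance from the pickup point
    active_orders: int   # current load

def assign_partner(partners: list[Partner],
                   override_id: Optional[str] = None) -> Partner:
    """Pick a delivery partner, preferring nearby and lightly loaded,
    unless an operations user forces a specific partner."""
    if override_id is not None:
        # Manual override path: ops staff handle the edge cases the
        # heuristic does not understand yet.
        return next(p for p in partners if p.partner_id == override_id)
    # Simple heuristic: penalize distance and current load.
    return min(partners, key=lambda p: p.distance_km + 2.0 * p.active_orders)
```

The override path was the point: when the heuristic made a bad call, an operator could correct it in seconds, and the pattern of overrides showed us what the algorithm was missing.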
The Trade-offs We Made
Every speed decision came with a cost:
Technical Debt: We knew the code was not perfect. We had TODO comments. We had hardcoded values. We had functions that did too much. But we shipped.
Operational Overhead: Manual processes meant we needed more people to run the platform. But it also meant we understood the operations deeply.
Scalability Constraints: Our architecture could handle 2,000 orders per month, but it would struggle at 10x that volume. We accepted that constraint because we needed to prove we could get to 2,000 first.
Maintenance Burden: Thin test coverage meant regressions were caught in production, not in CI. We fixed them quickly, but it was reactive, not proactive.
When Speed Stopped Being Enough
Around month six, we started hitting the limits of our speed-first approach:
Performance Issues: Database queries that were fast with 100 orders per day became slow with 500 orders per day. We needed indexing, query optimization, and caching strategies.
Operational Bottlenecks: Manual merchant onboarding could not keep up with demand. We needed self-service onboarding flows and automated verification.
Code Complexity: The monolith was becoming harder to navigate. New features took longer to ship because we had to understand more of the codebase.
Support Overhead: Customer support tickets were growing faster than our ability to respond. We needed better logging, error tracking, and diagnostic tools.
This was the inflection point. We had validated demand. Now we needed to build for scale.
The Shift to Discipline
We did not rewrite everything. That would have been wasteful. Instead, we prioritized refactoring based on operational pain:
1. Database Optimization
Problem: Slow queries during peak hours.
Solution: Added indexes on frequently queried columns (merchant_id, order_status, delivery_partner_id), implemented query result caching in Redis, and moved analytics queries to read replicas (the caching piece is sketched below).
Impact: Query response times dropped from 2-3 seconds to 200-300ms. Peak hour performance improved significantly.
Trade-off: Added operational complexity (managing read replicas, cache invalidation). But the performance gains were worth it.
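For illustration, the caching piece can be written as a read-through helper. The key scheme, the 60-second TTL, and the fetch_from_db hook are assumptions for the example, not our actual schema:

```python
import json

import redis  # redis-py client

r = redis.Redis(host="localhost", port=6379, db=0)

def get_merchant_orders(merchant_id: str, fetch_from_db) -> list:
    """Read-through cache: serve from Redis if present, otherwise run
    the (indexed) database query and cache the result briefly."""
    key = f"orders:merchant:{merchant_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    rows = fetch_from_db(merchant_id)  # hits the indexed table or read replica
    # A short TTL bounds staleness; explicit invalidation on order writes
    # would tighten it further, at the cost of more moving parts.
    r.set(key, json.dumps(rows), ex=60)
    return rows
```

The index side is plain SQL, for example a composite index covering the merchant and status columns that the order-listing queries filter on.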
2. Automated Merchant Onboarding
Problem: Manual onboarding was a bottleneck.
Solution: Built a self-service onboarding flow. Merchants could sign up, upload documents, and configure their menu without human intervention. We added automated verification checks (document validation, address verification, bank account verification); the check-runner pattern is sketched below.
Impact: Onboarding time dropped from 2-3 days to 24 hours. The operations team could focus on merchant support instead of data entry.
Trade-off: Upfront development time (3 weeks). But it freed up ongoing operational capacity.
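A sketch of the check-runner idea: run every check, collect all failures, and only then decide. The field names and placeholder rules stand in for the real document-validation, geocoding, and bank-verification services:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Application:
    merchant_name: str
    document_url: str
    address: str
    bank_account: str
    failures: list[str] = field(default_factory=list)

# Placeholder predicates; the real checks call external services.
def has_document(app: Application) -> bool:
    return app.document_url.lower().endswith((".pdf", ".jpg", ".png"))

def has_address(app: Application) -> bool:
    return len(app.address.strip()) >= 10

def has_bank_account(app: Application) -> bool:
    return app.bank_account.isdigit()

CHECKS: list[tuple[str, Callable[[Application], bool]]] = [
    ("document", has_document),
    ("address", has_address),
    ("bank_account", has_bank_account),
]

def verify(app: Application) -> bool:
    """Run every check and collect all failures, so a merchant can fix
    everything in one pass instead of one rejection at a time."""
    app.failures = [name for name, check in CHECKS if not check(app)]
    return not app.failures
```

Collecting all failures rather than stopping at the first matters for self-service: the merchant fixes everything in one pass instead of being bounced back repeatedly.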
3. Observability and Monitoring
Problem: We were debugging issues reactively, often after customers reported them.
Solution: Integrated Sentry for error tracking, structured logging shipped to the ELK stack (Elasticsearch, Logstash, Kibana), and CloudWatch for infrastructure monitoring. Set up alerts for critical failures (payment processing, order assignment). A minimal sketch of the logging pattern follows below.
Impact: We could detect and fix issues before customers noticed. Mean time to resolution (MTTR) dropped from hours to minutes.
Trade-off: Added infrastructure cost and learning curve. But the operational visibility was essential.
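The core pattern is JSON-per-line logs that Logstash can parse without regex, plus the Sentry SDK for exceptions. The DSN, logger names, and fields below are placeholders:

```python
import json
import logging
import sys

import sentry_sdk

# sentry_sdk.init is the real SDK entry point; the DSN is a placeholder.
sentry_sdk.init(dsn="https://examplekey@o0.ingest.sentry.io/0")

class JsonFormatter(logging.Formatter):
    """One JSON object per line, so the ELK pipeline gets named fields
    instead of free-text messages."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "order_id": getattr(record, "order_id", None),
        })

handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("payments")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

try:
    raise RuntimeError("gateway timeout")  # stand-in for a payment failure
except RuntimeError:
    logger.error("payment failed", extra={"order_id": "ORD-1042"})
    sentry_sdk.capture_exception()  # reports the active exception to Sentry
```

Once logs carry named fields, finding every line for one stuck order becomes a single Kibana filter instead of a grep.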
4. API Versioning and Backward Compatibility
Problem: Breaking API changes forced all users to update the mobile app immediately.
Solution: Implemented API versioning (v1, v2). Maintained backward compatibility for at least two versions. Deprecated old endpoints gradually with clear migration timelines. The routing pattern is sketched below.
Impact: We could ship API improvements without breaking existing mobile app users. Reduced customer friction during updates.
Trade-off: More code to maintain (supporting multiple API versions). But it improved user experience significantly.
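The versioning pattern itself is simple routing. Here is a sketch assuming FastAPI for illustration; the payload shapes and partner fields are made up:

```python
from fastapi import FastAPI, Response

app = FastAPI()

# v1 stays online while older app builds are still in the field.
@app.get("/v1/orders/{order_id}")
def get_order_v1(order_id: str, response: Response) -> dict:
    # Signal the sunset so client teams can plan their migration.
    response.headers["Deprecation"] = "true"
    return {"id": order_id, "status": "DELIVERED"}

# v2 adds fields; v1 clients are untouched.
@app.get("/v2/orders/{order_id}")
def get_order_v2(order_id: str) -> dict:
    return {
        "id": order_id,
        "status": "DELIVERED",
        "delivery_partner": {"id": "dp-17", "eta_minutes": 0},
    }
```

The trade-off named above is visible here: two handlers to maintain for one resource. The Deprecation header is one lightweight way to communicate a migration timeline.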
5. Automated Testing and CI/CD
Problem: Regressions were caught in production.
Solution: Increased test coverage for critical paths. Added integration tests for order flow, payment processing, and delivery assignment. Configured GitHub Actions to run tests on every pull request. A sketch of one such test follows below.
Impact: Caught bugs before deployment. Gave us confidence to refactor without breaking existing functionality.
Trade-off: Slower development initially (writing tests takes time). But it paid off in reduced production incidents.
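For flavor, here is what a critical-path test might look like with pytest. The service functions are stubs standing in for the real order, payment, and assignment services:

```python
# test_order_flow.py -- the kind of critical-path test that runs on
# every pull request.
import pytest

# Stubs standing in for the real service layer.
def create_order(items: list) -> dict:
    return {"id": "ORD-1", "items": items, "status": "CREATED"}

def pay_order(order: dict) -> dict:
    order["status"] = "PAID"
    return order

def assign_partner(order: dict) -> dict:
    if order["status"] != "PAID":
        raise ValueError("cannot assign an unpaid order")
    order.update(status="ASSIGNED", partner="dp-17")
    return order

def test_order_flow_happy_path():
    order = assign_partner(pay_order(create_order(["idli", "dosa"])))
    assert order["status"] == "ASSIGNED"
    assert order["partner"] == "dp-17"

def test_assignment_requires_payment():
    with pytest.raises(ValueError):
        assign_partner(create_order(["idli"]))
```

The GitHub Actions side is a short workflow file that installs dependencies and runs pytest on every pull request.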
The Real Lesson
The question is not "speed or discipline?" The question is "when do you shift from speed to discipline?"
Here is what we learned:
Start with speed: In the early days, the biggest risk is building something nobody wants. Optimize for learning, not perfection.
Recognize the inflection point: When operational pain exceeds the cost of refactoring, it is time to invest in discipline. For us, that was around 500-1,000 orders per month.
Refactor incrementally: Do not rewrite everything. Prioritize based on operational pain. Fix the bottlenecks that hurt the most.
Measure the impact: Track metrics before and after refactoring. Did query performance improve? Did onboarding time decrease? Did MTTR drop? Use data to justify the investment.
Accept ongoing trade-offs: Even after refactoring, you will still have technical debt. That is normal. The goal is not zero debt. The goal is manageable debt that does not block progress.
Why This Matters to Employers
This experience demonstrates several critical capabilities:
Technical Judgment: Knowing when to optimize for speed versus discipline. Understanding that the right answer changes as the product matures.
Operational Thinking: Recognizing that technical decisions have operational consequences. A slow query is not just a technical problem—it is a customer experience problem.
Prioritization: Refactoring everything is wasteful. Refactoring nothing is unsustainable. The skill is knowing what to fix and when.
Measurement: Using metrics to guide decisions. Not refactoring based on gut feel, but based on operational data.
Looking Ahead
The lessons from CenterShops now inform how we approach ChittaTaxis:
- Start with speed to validate demand
- Build observability from day one (we learned this the hard way)
- Plan for the inflection point (know what you will refactor when you hit scale)
- Measure everything (you cannot optimize what you do not measure)
The core insight: Speed and discipline are not opposites. They are phases. The best teams know when to switch gears.
Next in this series: Merchant Onboarding at Scale: Trust and Operational Consistency