Building a Hyperlocal Marketplace: Architecture Decisions for CenterShops
By Nirmal Chandra Nayak, Cofounder | Prashanth Kumar Nagapaga, Cybersecurity Architect | Technical Advisor
November 22, 2025
When you are building a hyperlocal marketplace in a Tier-2 city, the architecture decisions are not just about technology. They are about speed, cost, operational reality, and what you can actually maintain with a small team.
This is the story of how we built CenterShops—the technical choices we made, why we made them, and what we learned.
The Core Requirements
Before writing a single line of code, we mapped out what the platform actually needed to do:
Customer-facing:
- Browse merchants by category (food, groceries, vegetables, parcels)
- Place orders with real-time pricing
- Track delivery status
- Pay via UPI, cards, or cash on delivery
- Rate merchants and delivery partners
Merchant-facing:
- Receive and accept orders
- Update inventory and availability
- Manage order fulfillment
- Track earnings and payouts
Delivery partner-facing:
- Receive delivery assignments
- Navigate to pickup and drop locations
- Update delivery status
- Track earnings
Admin/Operations:
- Onboard merchants and delivery partners
- Monitor platform health
- Handle disputes and support tickets
- Manage geofencing and service areas
The Technology Stack We Chose
We made deliberate choices based on team expertise, time constraints, and operational needs:
Mobile Apps: Flutter
Why: Single codebase for Android and iOS. Our team had Flutter experience. Fast iteration cycles. Good enough performance for our use case.
Trade-off: Native performance would have been better for complex animations, but we prioritized speed to market. We could always rewrite critical paths later if needed.
Backend: Laravel (PHP)
Why: Mature ecosystem. Built-in authentication, queues, and scheduling. Easy to deploy. Good documentation. Team familiarity.
Trade-off: Not the "cool" choice. Node.js or Go would have been more performant at scale, but Laravel gave us productivity and stability. We optimized for developer velocity, not theoretical scale.
Database: MySQL
Why: Relational data model fit our domain (orders, merchants, users, transactions). ACID guarantees mattered for payments. Well-understood operational characteristics.
Trade-off: NoSQL would have given more flexibility, but we valued data consistency over schema flexibility. Financial transactions require strong guarantees.
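Concretely, those guarantees meant wrapping order placement in a single database transaction. A minimal sketch of the idea, with illustrative model and column names rather than our actual schema:

```php
<?php

use App\Models\Order;
use App\Models\Payment;
use Illuminate\Support\Facades\DB;

// Either the order, its payment record, and the inventory decrement
// all commit together, or none of them do. $cart and $user are
// assumed to come from the request context.
$order = DB::transaction(function () use ($cart, $user) {
    $order = Order::create([
        'user_id' => $user->id,
        'total'   => $cart->total(),
        'status'  => 'pending',
    ]);

    Payment::create([
        'order_id' => $order->id,
        'amount'   => $order->total,
        'status'   => 'initiated',
    ]);

    foreach ($cart->items() as $item) {
        // The UPDATE takes a row lock inside the transaction,
        // which prevents overselling under concurrent checkouts.
        DB::table('inventory')
            ->where('sku', $item->sku)
            ->decrement('quantity', $item->qty);
    }

    return $order;
});
```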
Caching: Redis
Why: Fast key-value store for session management, rate limiting, and frequently accessed data (merchant menus, geofencing rules).
Trade-off: Added operational complexity (one more service to monitor), but the performance gains for read-heavy operations were worth it.
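In practice this meant wrapping hot reads in Laravel's cache layer with a short TTL. A sketch of the pattern (the MenuItem model and the key format are illustrative):

```php
<?php

use App\Models\MenuItem;
use Illuminate\Support\Facades\Cache;

// Merchant menus change rarely but are read on almost every browse
// request, so a short-TTL Redis cache absorbs most of the load.
$menu = Cache::store('redis')->remember(
    "merchant:{$merchantId}:menu",  // one cache key per merchant
    now()->addMinutes(10),          // TTL keeps stale menus short-lived
    fn () => MenuItem::where('merchant_id', $merchantId)
        ->where('available', true)
        ->get()
);
```

When a merchant edits their menu, a matching Cache::forget call on the same key makes the change visible immediately instead of waiting out the TTL.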
Load Balancing: Nginx
Why: Reverse proxy, SSL termination, static file serving. Battle-tested. Low resource footprint.
Trade-off: Could have used a managed load balancer, but Nginx gave us more control and lower costs.
Cloud Infrastructure: AWS
Why: EC2 for compute, RDS for managed MySQL, S3 for file storage, CloudFront for CDN, Route 53 for DNS. Mature ecosystem. Pay-as-you-go pricing.
Trade-off: AWS has a learning curve and can get expensive if not optimized. But it gave us reliability and the ability to scale when needed.
CI/CD: GitHub Actions
Why: Integrated with our repository. Simple YAML configuration. Free tier covered our private repos. Automated testing and deployment.
Trade-off: More complex pipelines might need Jenkins or GitLab CI, but GitHub Actions was sufficient for our needs.
Payments: Razorpay, PhonePe, Stripe, PayPal
Why: Multiple payment options for customer preference. Razorpay and PhonePe for Indian UPI/cards. Stripe and PayPal for international testing and future expansion.
Trade-off: Managing multiple payment gateways increased integration complexity, but it reduced dependency on a single provider and improved customer choice.
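To keep four integrations manageable, the natural pattern is to put every provider behind one contract and wrap each SDK in an adapter. A simplified sketch; the interface and class names are illustrative, and the SDK calls are stubbed:

```php
<?php

// One contract for every provider; checkout code never calls a
// gateway SDK directly.
interface PaymentGateway
{
    /** Creates a charge and returns a provider-specific payment reference. */
    public function createCharge(int $amountPaise, string $orderId): string;

    /** Verifies an incoming webhook signature. */
    public function verifyWebhook(string $payload, string $signature): bool;
}

final class RazorpayGateway implements PaymentGateway
{
    public function createCharge(int $amountPaise, string $orderId): string
    {
        // ...call the Razorpay SDK here and return its payment id...
        return 'pay_stub';
    }

    public function verifyWebhook(string $payload, string $signature): bool
    {
        // ...delegate to the SDK's signature check...
        return true;
    }
}

// Resolving the adapter from the customer's chosen method means
// adding a gateway is one new class and one new match arm.
function gatewayFor(string $method): PaymentGateway
{
    return match ($method) {
        'razorpay' => new RazorpayGateway(),
        // 'phonepe', 'stripe', and 'paypal' adapters follow the same shape.
        default => throw new InvalidArgumentException("Unsupported gateway: {$method}"),
    };
}
```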
Architecture Patterns We Used
Monolith First, Microservices Later
We started with a monolithic Laravel application. All business logic in one codebase. Simpler to develop, test, and deploy.
Why: Premature microservices add complexity without clear benefits. We did not know which parts of the system would need independent scaling. A monolith let us iterate fast.
Trade-off: As the platform grew, certain modules (payment processing, notification service) became candidates for extraction. But we waited until we had real operational data to guide those decisions.
Event-Driven for Asynchronous Tasks
We used Laravel queues (backed by Redis) for tasks that did not need immediate responses:
- Sending order confirmation emails/SMS
- Processing payment webhooks
- Generating merchant payout reports
- Updating analytics dashboards
Why: Keeps the main request-response cycle fast. Improves user experience. Allows retry logic for transient failures.
Trade-off: Debugging asynchronous failures is harder than synchronous code. We invested in logging and monitoring to compensate.
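A typical queued job looks like this: implement ShouldQueue, dispatch from the request handler, and let the Redis-backed worker handle retries. The names and retry numbers below are illustrative:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class SendOrderConfirmation implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    // Retry transient SMS/email provider failures before giving up.
    public int $tries = 3;
    public int $backoff = 30; // seconds between attempts

    public function __construct(public int $orderId) {}

    public function handle(): void
    {
        // ...look up the order and send the confirmation SMS/email...
    }
}
```

Dispatching is one line in the controller (SendOrderConfirmation::dispatch($order->id)); the response returns immediately and the worker does the rest.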
Geofencing for Service Area Management
We implemented geofencing to define service areas. Customers could only see merchants within their delivery radius. Delivery partners only received assignments within their operating zones.
Why: Hyperlocal models depend on fast delivery. Geofencing ensures realistic delivery times and prevents over-promising.
Trade-off: Required accurate GPS data and real-time location tracking. Privacy concerns meant we had to be transparent about location usage.
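Under the hood, the visibility check is a great-circle distance comparison against each merchant's delivery radius. A minimal haversine sketch; a production version would typically pre-filter with a database-side bounding-box query before computing exact distances:

```php
<?php

// Great-circle distance in kilometres between two lat/lng points.
function haversineKm(float $lat1, float $lng1, float $lat2, float $lng2): float
{
    $earthRadiusKm = 6371.0;
    $dLat = deg2rad($lat2 - $lat1);
    $dLng = deg2rad($lng2 - $lng1);

    $a = sin($dLat / 2) ** 2
       + cos(deg2rad($lat1)) * cos(deg2rad($lat2)) * sin($dLng / 2) ** 2;

    return $earthRadiusKm * 2 * asin(sqrt($a));
}

// A merchant is visible only if the customer sits inside its radius.
// The $merchant fields here are illustrative.
function inServiceArea(float $custLat, float $custLng, object $merchant): bool
{
    return haversineKm($custLat, $custLng, $merchant->lat, $merchant->lng)
        <= $merchant->delivery_radius_km;
}
```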
What We Got Right
Pragmatic Technology Choices: We chose boring, proven technologies over trendy ones. This reduced risk and let us focus on product, not infrastructure.
Fast Iteration: Flutter + Laravel gave us rapid development cycles. We could ship features weekly, learn from user feedback, and adjust.
Payment Flexibility: Supporting multiple payment gateways reduced friction. Customers had options. We were not locked into one provider.
Operational Visibility: We built admin dashboards early. This helped us monitor platform health, identify bottlenecks, and support merchants effectively.
What We Would Do Differently
Earlier Investment in Monitoring: We added logging and monitoring reactively, after issues occurred. Starting with structured logging, error tracking (Sentry), and performance monitoring (New Relic or Datadog) would have saved debugging time.
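By "structured logging" we mean attaching machine-searchable context to every line, so an asynchronous failure can be traced end to end. In Laravel that is just a context array (the field names here are illustrative):

```php
<?php

use Illuminate\Support\Facades\Log;

// Context fields make the line searchable: every payment-related
// log entry carries the ids needed to trace it across services.
Log::info('order.payment_webhook_received', [
    'order_id'   => $order->id,
    'gateway'    => 'razorpay',
    'event'      => $payload['event'] ?? 'unknown',
    'request_id' => $request->header('X-Request-Id'),
]);
```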
API Versioning from Day One: We did not version our APIs initially. When we needed to make breaking changes, it caused friction with mobile app updates. API versioning should be built in from the start.
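Versioning can be as light as a URL prefix per version, with old app builds pinned to the routes they shipped against. A sketch of what routes/api.php could look like (controller names are illustrative):

```php
<?php

use Illuminate\Support\Facades\Route;

// Old app builds keep hitting /api/v1; new builds target /api/v2.
// Breaking changes land in a new version instead of breaking clients.
Route::prefix('v1')->group(function () {
    Route::get('/merchants', [App\Http\Controllers\V1\MerchantController::class, 'index']);
});

Route::prefix('v2')->group(function () {
    // v2 is free to change response shapes; v1 stays frozen.
    Route::get('/merchants', [App\Http\Controllers\V2\MerchantController::class, 'index']);
});
```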
Automated Testing: We wrote tests, but not enough. More unit tests, integration tests, and end-to-end tests would have caught regressions earlier and given us confidence to refactor.
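Even one feature test per critical flow pays for itself. A sketch of what an order-placement test might look like with Laravel's HTTP test helpers (the route and payload are illustrative):

```php
<?php

namespace Tests\Feature;

use App\Models\User;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Tests\TestCase;

class PlaceOrderTest extends TestCase
{
    use RefreshDatabase;

    public function test_customer_can_place_an_order(): void
    {
        $user = User::factory()->create();

        // Hit the endpoint the mobile app uses and assert both the
        // HTTP contract and the database side effect.
        $response = $this->actingAs($user)->postJson('/api/v1/orders', [
            'merchant_id' => 1,
            'items'       => [['sku' => 'VEG-001', 'qty' => 2]],
        ]);

        $response->assertCreated();
        $this->assertDatabaseHas('orders', ['user_id' => $user->id]);
    }
}
```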
Database Indexing Strategy: We added indexes reactively as queries slowed down. A proactive indexing strategy based on expected query patterns would have prevented performance issues.
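Proactive here means indexing the columns your hot queries filter and sort on before they slow down. In a Laravel migration, with illustrative table and column names:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            // The merchant dashboard filters by merchant and status;
            // one composite index covers both predicates.
            $table->index(['merchant_id', 'status']);
            // Customer order history sorts by recency.
            $table->index(['user_id', 'created_at']);
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropIndex(['merchant_id', 'status']);
            $table->dropIndex(['user_id', 'created_at']);
        });
    }
};
```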
The Real Lesson
The best architecture is not the most elegant or the most scalable. It is the one that lets you learn fast, adapt quickly, and deliver value to users.
For CenterShops, that meant:
- Choosing technologies we knew well
- Building a monolith first
- Optimizing for developer velocity
- Adding complexity only when justified by real operational needs
The trade-off: We accepted technical debt in exchange for speed. We knew we would need to refactor later. But we also knew that a perfect architecture for a product with no users is worthless.
The result: We launched in months, not years. We validated demand. We learned what mattered. And when we hit scaling constraints, we had real data to guide our decisions.
Next in this series: Speed vs. Discipline: Technology Trade-offs in Early-Stage Platforms