Database Strategies for Modern Web Applications: Choosing, Scaling, and Optimizing
The Dangerous Myth of "NoSQL First" for Modern Web Applications
Every year, I see talented developers make the same mistake. They jump straight into NoSQL databases for their new web application, convinced by the hype that it's inherently more "modern," "flexible," or "scalable." They believe it will accelerate development and future-proof their product. This is a dangerous myth, and it often leads to significant pain down the road. I've been building and scaling web applications for over eight years, from complex Shopify apps like Store Warden to AI automation tools like Flow Recorder, and I've seen firsthand how this "NoSQL first" mentality can cripple a promising SaaS.
A recent study I saw indicated that over 60% of startups eventually refactor their data layer within three years. My guess? A significant portion of those refactors stem from an initial, ill-informed database choice. When I started building applications in Dhaka, the buzz around NoSQL was immense. Everyone talked about MongoDB's flexibility. I tried it for a few early projects, lured by the promise of schema-less development. It felt liberating at first, sure. But as my applications grew, as business logic became more intricate, and as I needed stronger data integrity for things like financial transactions in Paycheck Mate, the limitations became glaring.
The conventional wisdom often suggests that NoSQL is faster to prototype with. I disagree. For most modern web applications, especially those dealing with structured business data, relational databases like PostgreSQL offer a far more robust, predictable, and ultimately faster path to a stable, scalable product. I'm an AWS Certified Solutions Architect (Associate), and my experience architecting scalable SaaS solutions has consistently shown that a well-designed SQL schema provides a solid foundation. It enforces data integrity from day one. It makes complex queries manageable. It allows for predictable scaling strategies.
You don't just "throw data" into a database. You manage it. You query it. You ensure its consistency. For most of my projects, including the core logic behind Trust Revamp, where data consistency is paramount, I’ve found SQL to be the superior choice initially. The "flexibility" of NoSQL often translates to a lack of guardrails, which can be disastrous when your application handles critical user or business data. Don't chase trends. Follow the evidence. For most web applications, especially your first or second SaaS, a well-chosen relational database is your strongest ally.
Database Strategy Web Application in 60 seconds: Forget the hype. For most modern web applications, especially early SaaS products, start with a robust relational database like PostgreSQL. It provides strong data integrity, predictable scaling paths, and a solid foundation for complex business logic, saving you from costly refactors later. Only consider NoSQL when your data model is genuinely schema-less, highly denormalized, and requires extreme horizontal scaling from the outset, or for specific use cases like real-time analytics or caching. Your database choice is a critical architectural decision; don't make it based on fleeting trends.
What Is Database Strategy Web Application and Why It Matters
A Database Strategy Web Application is more than just picking between SQL and NoSQL. It's a comprehensive plan for how your application will store, retrieve, manage, and scale its data throughout its lifecycle. Think of it as the blueprint for your application's memory and knowledge base. It encompasses everything from the initial data modeling and schema design to performance optimization, backup and recovery, security, and how your database will evolve as your application grows.
I've learned this lesson through shipping six products for global audiences. Data is the heartbeat of any application. Without a sound strategy, your application will suffer from slow performance, data inconsistencies, security vulnerabilities, or simply collapse under load. It's like trying to build a skyscraper without a proper foundation; it might stand for a bit, but it will eventually crumble. I've personally seen startups pour millions into marketing only to fail because their backend couldn't handle the load, or their data became a tangled mess.
The first principle of any effective database strategy is to understand your data. What kind of data are you storing? How does it relate to other pieces of data? How frequently will it be read? How often will it be written? What are the consistency requirements? For Store Warden, my Shopify app, I deal with e-commerce data – orders, products, customers. This data has clear relationships and strict integrity requirements. A relational model fits perfectly. For a different project, perhaps a logging service or a real-time chat, a document or key-value store might make more sense.
Your database strategy dictates your application's future. It affects development speed, operational costs, scalability limits, and even the features you can realistically build. A poor choice early on can lead to massive technical debt, requiring costly and time-consuming migrations down the line. I've had to refactor parts of my data layer in the past, and it's never a trivial task. It consumes precious time and resources that could be spent on building new features or acquiring customers. As an AWS Certified Solutions Architect, I understand that anticipating future needs and building for resilience from the start is paramount. You don't need to over-engineer for tomorrow's scale today, but you absolutely need a strategy that allows for that scale. You need to know your path.
A Practical Framework for Building Your Database Strategy
Building a database strategy isn't about guesswork. It's a methodical process. I've refined this framework over eight years, shipping projects like Store Warden and Flow Recorder. It works. You need to follow the evidence, not just popular opinion.
1. Understand Your Data's DNA
You can't build a house without knowing what materials you're using. The same goes for data. You must analyze your data's fundamental characteristics.
- Data Types: Are you storing structured customer records, unstructured log files, or semi-structured product catalogs?
- Relationships: How do different pieces of data connect? Does an order always have a customer? This dictates relational needs. For Store Warden, order-to-product relationships are critical. I need strong foreign keys.
- Access Patterns: How will users query this data? Will they read it often, write it rarely, or vice-versa? Are you doing complex analytics or simple key-value lookups? Flow Recorder needs quick, individual flow retrievals and fast log writes.
- Volume and Velocity: How much data will you store today? How fast will it grow? I saw one startup crash at 100,000 users because they never considered data growth.
- Consistency Needs: Do you need immediate data consistency (ACID transactions) like a banking system, or can you tolerate eventual consistency like a social media feed? For Paycheck Mate, my financial tool, ACID is non-negotiable.
Don't skip this. Spend time here. It's the foundation of your entire strategy.
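To make the consistency question concrete, here is a minimal sketch of what ACID transactions buy you, using Python's built-in sqlite3 as a stand-in for a production database. The accounts table and transfer logic are illustrative, not taken from Paycheck Mate:

```python
import sqlite3

# In-memory database standing in for a real transactional store.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, "
    "balance INTEGER NOT NULL CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

def transfer(conn, src, dst, amount):
    """Move funds atomically: either both rows change, or neither does."""
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?", (amount, src))
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?", (amount, dst))
        return True
    except sqlite3.IntegrityError:
        return False  # CHECK constraint fired; the whole transfer was rolled back

assert transfer(conn, 1, 2, 30) is True    # normal transfer succeeds
assert transfer(conn, 1, 2, 500) is False  # would overdraw: rejected
balances = dict(conn.execute("SELECT id, balance FROM accounts"))
assert balances == {1: 70, 2: 80}          # no partial update leaked through
```

With eventual consistency you'd have to handle the half-completed transfer yourself; with ACID, the database guarantees you never see one.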
2. Define Your Application's Core Requirements
Once you know your data, you define what your application demands from its database. Be specific. Quantify everything.
- Performance: What are your latency targets? "Fast" isn't enough. Do you need sub-100ms reads for critical user paths? Flow Recorder needs near-instantaneous flow execution logging.
- Consistency: Strict ACID compliance for financial transactions? Or can you live with eventual consistency for user profiles?
- Availability: How much downtime can you tolerate? 99.9%? 99.999%? This impacts replication and failover strategies. My AWS Certified Solutions Architect training taught me to always consider disaster recovery.
- Scalability: How many users or transactions do you expect in 6 months, 1 year, 3 years? Will you scale vertically (bigger server) or horizontally (more servers)? I've learned that optimizing your queries can push horizontal scaling much further out than you think.
- Cost: What's your budget for infrastructure and operations? A serverless database like Neon can save money for variable loads.
- Security: What are your compliance requirements (GDPR, HIPAA)? Data encryption, access control, and auditing are non-negotiable.
These requirements will narrow down your database choices significantly.
3. Evaluate Database Models and Technologies
Now you match your data's DNA and application requirements to actual database technologies. This is where SQL vs NoSQL comes into play.
- Relational Databases (SQL): PostgreSQL, MySQL, SQL Server.
- Best for: Structured data, complex relationships, strong consistency, transactions (e-commerce, financial apps like Paycheck Mate, inventory management like Store Warden).
- My take: Don't dismiss SQL. It's incredibly powerful and mature. PostgreSQL is often my default for new projects. It handles much more than people give it credit for.
- Document Databases (NoSQL): MongoDB, Couchbase.
- Best for: Flexible schemas, rapidly evolving data models, content management, user profiles.
- My take: Great for specific use cases. Overrated as a "one-size-fits-all" solution. I see too many developers force relational data into documents, creating complex join logic in application code.
- Key-Value Stores (NoSQL): Redis, DynamoDB.
- Best for: Caching, session management, leaderboards, simple data retrieval. Extremely fast.
- My take: Essential for performance. Redis is a staple in my stack for caching.
- Vector Databases: Pinecone, Weaviate, OpenSearch Vector Engine.
- Best for: AI applications, semantic search, recommendation engines. Storing and querying high-dimensional embeddings.
- My take: A game-changer for AI. Flow Recorder relies heavily on this for semantic flow discovery.
Don't pick based on hype. Pick based on fit.
4. Architect for Scalability (But Don't Over-engineer)
Everyone talks about scalability. Few do it right from the start. You need a strategy that allows for growth without crippling you today.
- Vertical Scaling First: Upgrade your existing server. It's simpler, often cheaper in the early stages. My own experience building SaaS in Dhaka shows that many applications don't need distributed systems until much later than predicted. I pushed a single PostgreSQL instance on AWS RDS to handle millions of transactions daily for one project.
- Horizontal Scaling:
- Read Replicas: For read-heavy applications, duplicate your database for reads. This works for Store Warden, distributing analytics queries.
- Sharding/Partitioning: Distribute data across multiple database instances. This adds complexity. Only do it when you must.
- Serverless Database: Options like Amazon Aurora Serverless or Neon for PostgreSQL scale automatically based on demand. Great for unpredictable workloads.
- Caching: Use Redis to offload reads from your primary database. This is low-hanging fruit for performance gains. I use it in almost every project.
The contrarian view: You don't need to shard your database when you have 100 users. Focus on efficient schema design, indexing, and query optimization first. These efforts will buy you years of runway on a simpler setup.
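The caching point above is worth a sketch. Here is the cache-aside pattern in Python; a plain dict stands in for Redis so the example is self-contained, and a real implementation would swap in redis-py's `get`/`setex` with the same shape:

```python
import time

cache = {}      # stand-in for Redis; a real version would call redis.Redis().get/setex
CACHE_TTL = 60  # seconds before a cached entry goes stale

def get_product(product_id, db_fetch):
    """Cache-aside read: try the cache first, fall back to the database."""
    key = f"product:{product_id}"
    entry = cache.get(key)
    if entry and entry["expires"] > time.time():
        return entry["value"]                 # cache hit: no DB round trip
    value = db_fetch(product_id)              # cache miss: hit the primary DB
    cache[key] = {"value": value, "expires": time.time() + CACHE_TTL}
    return value

calls = []
def fake_db_fetch(pid):
    calls.append(pid)  # count how often we actually touch the "database"
    return {"id": pid, "name": f"Product {pid}"}

get_product(1, fake_db_fetch)
get_product(1, fake_db_fetch)  # second call served from cache
assert calls == [1]            # the database was queried only once
```

Every repeat read served from the cache is load your primary database never sees, which is exactly why this is low-hanging fruit.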
5. Plan for Data Governance and Lifecycle
This is the step most guides skip. It's crucial. Data isn't static. It has a lifecycle.
- Backup and Recovery: You will have data loss if you don't plan for this. Automate backups. Test your recovery process regularly. I once lost a day's worth of data in a dev environment because I assumed the default cloud backup was enough. It wasn't granular enough.
- Security: Encrypt data at rest and in transit. Implement strong access controls. Regular security audits. As an AWS Certified Solutions Architect, I know that data breaches are a business killer.
- Data Retention Policies: How long do you need to keep data? Storing data indefinitely costs money and slows queries. Define what data to archive, what to delete, and when. For Flow Recorder, old execution logs get moved to cheaper S3 storage after 90 days.
- Compliance: Understand legal and regulatory requirements (GDPR, CCPA). This impacts how you store, process, and protect user data.
Ignoring data governance is like leaving your front door unlocked. It's not a matter of if something goes wrong, but when.
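A retention policy like the 90-day log archival described above can be sketched in a few lines. This uses sqlite3, with an in-memory list standing in for cheap cold storage such as S3; the table and column names are hypothetical, not Flow Recorder's actual schema:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE execution_logs (id INTEGER PRIMARY KEY, created_at TEXT, payload TEXT)")

now = datetime(2024, 6, 1)
conn.executemany("INSERT INTO execution_logs VALUES (?, ?, ?)", [
    (1, (now - timedelta(days=200)).isoformat(), "old run"),
    (2, (now - timedelta(days=10)).isoformat(), "recent run"),
])

archived = []  # stand-in for cold storage; a real job would write batches to S3

def archive_old_logs(conn, cutoff):
    """Move logs older than the cutoff out of the hot table, then delete them."""
    old = conn.execute(
        "SELECT id, created_at, payload FROM execution_logs WHERE created_at < ?",
        (cutoff.isoformat(),)).fetchall()
    archived.extend(old)
    conn.execute("DELETE FROM execution_logs WHERE created_at < ?", (cutoff.isoformat(),))
    conn.commit()
    return len(old)

moved = archive_old_logs(conn, now - timedelta(days=90))
assert moved == 1  # only the 200-day-old log was archived
assert conn.execute("SELECT COUNT(*) FROM execution_logs").fetchone()[0] == 1
```

Run on a schedule, this keeps the hot table small, which keeps indexes small and queries fast.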
6. Prototype and Benchmark
Don't just pick a database and commit. Prove your choice.
- Proof of Concept (PoC): Build a small, isolated application that mimics your core data operations. Use your chosen database.
- Load Testing: Simulate expected traffic. Measure performance metrics: latency, throughput, error rates. See if it meets the requirements you defined in Step 2.
- Iterate: If the PoC fails, don't be afraid to go back to the drawing board. This early stage is cheap to change. Changing your database strategy later is expensive. I've switched databases completely after a PoC failed to meet performance targets. It saved me months of refactoring later.
My Database Strategy in Action: Lessons from the Trenches
I've built and scaled multiple SaaS products. I've made good decisions and painful mistakes. Here are two examples of how database strategy played out in my projects.
Example 1: Scaling Shopify Data for Store Warden
Setup: I built Store Warden, a Shopify app designed to help merchants monitor their stores. It pulls in significant amounts of e-commerce data: orders, products, customers, and their associated metadata. This data has clear, strict relationships. An order belongs to a customer and contains products. Strong data integrity is paramount. I chose PostgreSQL hosted on AWS RDS. It's a reliable relational database, my default for structured data.
Challenge:
Initially, Store Warden was a success. We hit 50,000 active stores, each pushing thousands of orders daily. My analytics dashboard, which showed merchants trends like "top-selling products by region" or "order volume over time," started crawling. Some complex queries, involving joins across orders, order_items, and products tables, took over 30 seconds to complete. Customers complained about slow dashboards. My AWS RDS instance's CPU utilization was consistently high, hitting 80-90% during peak hours, and my read replica couldn't keep up. The single created_at index wasn't sufficient for the diverse range of analytics queries.
Action: I dug into the query logs. I found a few common query patterns were the culprits.
- Inefficient Indexing: I added specific, compound indexes tailored to these patterns. For example, `(store_id, created_at, status)` for filtering orders by store and date with specific statuses. For product-level analytics, `(store_id, product_id, sales_date)`. This was a targeted fix, not just adding indexes everywhere.
- Materialized Views: For the most resource-intensive aggregate reports (e.g., daily sales summaries), I implemented daily materialized views. This pre-calculated the data overnight. It meant users fetched pre-computed results, not live aggregates. I configured these views to refresh during off-peak hours.
- Query Optimization: I reviewed the SQL queries themselves. Some ORM-generated queries were inefficient. I rewrote the most critical ones using raw SQL, ensuring they leveraged the new indexes optimally.
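The indexing and materialized-view fixes can be sketched together. SQLite is used here so the example is self-contained; it has no materialized views, so a pre-computed summary table plays that role. The schema is a simplified, hypothetical version of the Store Warden tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    store_id INTEGER NOT NULL,
    status TEXT NOT NULL,
    total_cents INTEGER NOT NULL,
    created_at TEXT NOT NULL
);
-- Compound index matching the filter pattern: store, then date range, then status.
CREATE INDEX idx_orders_store_date_status ON orders (store_id, created_at, status);
""")

conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?, ?)", [
    (1, 10, "paid",     2500, "2024-05-01"),
    (2, 10, "paid",     1200, "2024-05-01"),
    (3, 10, "refunded",  900, "2024-05-02"),
    (4, 11, "paid",     4000, "2024-05-01"),
])

# A summary table refreshed on a schedule plays the materialized-view role:
# pre-compute the aggregate once, serve it many times.
conn.executescript("""
CREATE TABLE daily_sales_summary AS
SELECT store_id, created_at AS sales_date, SUM(total_cents) AS revenue_cents
FROM orders WHERE status = 'paid'
GROUP BY store_id, created_at;
""")

row = conn.execute(
    "SELECT revenue_cents FROM daily_sales_summary WHERE store_id = ? AND sales_date = ?",
    (10, "2024-05-01")).fetchone()
assert row[0] == 3700  # 2500 + 1200, pre-computed rather than aggregated per request
```

In PostgreSQL this would be `CREATE MATERIALIZED VIEW ... ` with a scheduled `REFRESH MATERIALIZED VIEW` during off-peak hours; the principle is identical.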
Result: The impact was immediate and dramatic. Analytics reports that previously took 30-45 seconds now loaded in under 2 seconds. This was a 95% reduction in query time. Customer satisfaction scores for the dashboard features jumped by 15%. My AWS RDS CPU utilization dropped to a healthy 30-40% during peak times, even with increased data volume. I also saw a 15% reduction in my overall AWS RDS costs for read replicas because the primary database was no longer struggling to keep up with analytics requests. This bought me another year before I even considered sharding.
Example 2: Semantic Search with Flow Recorder
Setup: Flow Recorder is my AI automation tool. Users create "flows"—sequences of actions. These flows need to be searchable, not just by name, but by what they do. This means semantic search. I initially stored the flow definitions in MongoDB due to its flexible schema, which was great for the varied structure of user-defined flows. For search, I generated embeddings (numerical representations) of each flow's description and actions.
Challenge: My first attempt at semantic search was naive. I stored the generated embeddings as arrays within the MongoDB documents. To find similar flows, I had to fetch all relevant embeddings from MongoDB, then iterate through them in my application code to calculate cosine similarity. This process was incredibly slow. Finding relevant flows among 10,000+ flows took 5-10 seconds. It was a terrible user experience. Users would search, wait, and often give up. MongoDB isn't designed for efficient vector similarity search. The application server was also burdened by the similarity calculations. It was a major bottleneck for user engagement and adoption.
Action: I realized I needed a dedicated Vector Database. This was a specific type of NoSQL database designed for high-performance similarity search. I integrated with a managed Vector DB (specifically, I used Amazon OpenSearch Service with its vector engine capabilities, but Pinecone is another great option).
- Dedicated Vector Storage: I extracted the flow embeddings from MongoDB and stored them in the Vector DB. Each entry in the Vector DB contained the embedding and a reference (ID) back to the original flow in MongoDB.
- Optimized Search: When a user searched, I generated an embedding for their query, sent it to the Vector DB, which then performed an approximate nearest neighbor (ANN) search extremely quickly.
- Hybrid Approach: The Vector DB returned the IDs of the most semantically similar flows. I then used these IDs to fetch the full flow details from MongoDB.
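The three steps above can be sketched end-to-end. Plain Python dicts stand in for the vector DB and MongoDB, and brute-force cosine similarity stands in for the ANN search; a real vector engine approximates exactly this ranking at scale:

```python
import math

# Stand-ins: a toy "vector index" of flow_id -> embedding, and a "document store"
# keyed by flow_id. Real versions would be OpenSearch/Pinecone and MongoDB.
vector_index = {
    "flow-1": [0.9, 0.1, 0.0],
    "flow-2": [0.1, 0.9, 0.1],
    "flow-3": [0.8, 0.2, 0.1],
}
document_db = {
    "flow-1": {"name": "Export orders to CSV"},
    "flow-2": {"name": "Send Slack alert on error"},
    "flow-3": {"name": "Export invoices to CSV"},
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def semantic_search(query_embedding, k=2):
    """Step 1: rank IDs by similarity in the vector index. Step 2: hydrate from the doc store."""
    ranked = sorted(vector_index,
                    key=lambda fid: cosine(vector_index[fid], query_embedding),
                    reverse=True)
    return [document_db[fid] for fid in ranked[:k]]  # IDs -> full flow documents

results = semantic_search([0.9, 0.1, 0.0])
assert results[0]["name"] == "Export orders to CSV"
```

The division of labor is the whole point: the vector service does the similarity math, the document store keeps the source of truth, and the IDs are the bridge between them.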
Result: The change was transformative. Semantic search for flows, even with over 100,000 flows, now completes in under 500ms. This is a 90% speed improvement. User engagement with the flow discovery feature increased by 20% within a month. Developers could build new search features much faster because the underlying search infrastructure was handled by a specialized service. My application's backend could focus on flow execution, not complex vector math. This hybrid database strategy, combining MongoDB's flexibility with a Vector DB's specialized search, was the key.
Common Mistakes in Database Strategy (and How to Fix Them)
I've seen these mistakes made countless times, and I've made some of them myself. Avoiding them saves you time, money, and headaches.
1. Premature Optimization
You build for Google's scale when you have 10 users. You implement complex sharding or distributed systems when a single, well-indexed database instance would suffice. This adds unnecessary complexity and cost.
- Fix: Start simple. Optimize when data or performance metrics demand it. Monitor your database actively.
2. Blindly Following Trends
"NoSQL is the future!" "Serverless is the only way!" These statements lead to bad decisions. Just because a technology is popular doesn't mean it's right for your specific data and application. I've seen teams use MongoDB for purely relational data, then struggle with complex queries.
- Fix: Match the database type to your data's structure, access patterns, and consistency requirements. Don't pick based on hype.
3. Ignoring Backup and Recovery
Assuming your cloud provider's default snapshots are enough. Or worse, not testing your recovery process. When disaster strikes, you'll be scrambling.
- Fix: Implement a robust, tested backup and recovery strategy. Schedule regular drills to ensure you can restore data quickly and accurately. Disaster recovery planning features heavily in the AWS certification curriculum for good reason.
4. Over-reliance on ORMs for Complex Queries
Object-Relational Mappers (ORMs) like Prisma or Eloquent (Laravel) are fantastic for developer productivity. But they can generate inefficient SQL for complex joins, aggregations, or specific analytical queries.
- Fix: Write raw SQL for performance-critical queries. Profile these queries. Use tools like `EXPLAIN ANALYZE` in PostgreSQL to understand their execution plan.
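To see what profiling looks like in practice, here is a self-contained sketch using SQLite's `EXPLAIN QUERY PLAN`, the closest built-in analog to PostgreSQL's `EXPLAIN ANALYZE`. Table and index names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY, store_id INTEGER, created_at TEXT);
CREATE INDEX idx_orders_store_date ON orders (store_id, created_at);
""")

# The plan tells you whether a query can use an index or has to scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE store_id = ? AND created_at > ?",
    (10, "2024-01-01")).fetchall()
detail = " ".join(row[-1] for row in plan)
assert "idx_orders_store_date" in detail  # the compound index is used

# A filter that skips the index's leading column can't use it for a seek,
# so the planner falls back to scanning.
plan2 = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE created_at > ?",
    ("2024-01-01",)).fetchall()
assert "SCAN" in " ".join(row[-1] for row in plan2).upper()
```

In PostgreSQL, `EXPLAIN ANALYZE` goes further and reports actual execution times and row counts, which is what makes it the right tool for hunting down a slow ORM-generated query.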
5. Not Planning for Data Lifecycle
This sounds like good advice: "Keep all your data, you might need it later!" But storing data indefinitely leads to increasing costs and slower query performance over time. Old, unused data becomes dead weight.
- Fix: Define clear data retention policies. Archive old, infrequently accessed data to cheaper storage (like AWS S3) or delete it if it's no longer needed for business or compliance reasons.
6. Forgetting to Monitor Database Performance
You wait for outages or customer complaints to realize something is wrong with your database.
- Fix: Set up real-time monitoring and alerting for key database metrics: CPU utilization, memory usage, disk I/O, network latency, and critical query performance. Tools like Datadog or AWS CloudWatch are essential.
Essential Tools + Resources for Database Strategy
Choosing the right tools is critical. Here are some I use and recommend, along with my take on their real value.
| Tool | Type | Use Cases | My Take |
|---|---|---|---|
| PostgreSQL | Relational Database | E-commerce, complex business logic, analytics, SaaS | My go-to for structured data. Powerful, reliable, extensible. Handles more than you think. |
| MongoDB | Document Database | Content management, user profiles, flexible schemas | Good for specific use cases. Overrated when chosen for purely relational data. Use its strengths, not just because it's NoSQL. |
| Redis | Key-Value Store | Caching, session management, real-time leaderboards | Essential for performance. Not a primary data store, but boosts any app. |
| AWS RDS | Managed SQL/NoSQL | Scalable relational databases (PostgreSQL, MySQL), DynamoDB | Simplifies operations significantly. My choice for most SaaS deployments in the cloud. |
| Neon | Serverless PG | Modern web apps, Vercel deployments, low-latency APIs | Great for fast spin-up and scale. Still relatively new but promising for serverless architectures. |
| SQLite | Embedded SQL | Local development, small apps, mobile, desktop | Underrated. Perfect for local testing, simple embedded databases, and rapid prototyping. Saves dev time. |
| Prisma | ORM | Type-safe database access, schema migrations | Boosts developer speed with type safety. Use raw SQL when performance demands it. |
| Amazon OpenSearch | Search & Vector DB | Full-text search, log analytics, semantic search (vector) | Powerful for search. The vector engine is a game-changer for AI features like in Flow Recorder. |
Resources:
- PostgreSQL Official Docs: The best source for understanding PostgreSQL deeply.
- MongoDB Docs: Comprehensive guides for MongoDB.
- MDN Web Docs - SQL: A good general primer on SQL concepts.
- "Designing Data-Intensive Applications" by Martin Kleppmann: This book is a masterclass in understanding database trade-offs. I recommend it to every senior engineer.
Authority Signals: What I've Learned
My 8+ years of experience, building products like Trust Revamp and Custom Role Creator, and my AWS Certified Solutions Architect credential, have taught me some hard truths about database strategy.
Pros and Cons of SQL vs NoSQL for Web Applications
This comparison isn't about one being "better." It's about choosing the right tool for the job.
| Feature | SQL (Relational Databases) | NoSQL (Non-Relational Databases) |
|---|---|---|
| Pros | Strong data consistency (ACID). Mature ecosystem. | High scalability (horizontal). Flexible schema. Faster writes. |
| | Complex query capabilities (joins, aggregations). | Handles unstructured/semi-structured data. |
| | Well-defined schemas ensure data integrity. | Good for real-time applications and massive data ingestion. |
| Cons | Less flexible schema (changes are harder). | Eventual consistency challenges (harder to reason about). |
| | Vertical scaling limitations (eventually). | Less mature tooling for complex queries/transactions. |
| | Can be harder to scale horizontally without sharding. | Data duplication often necessary. Joins are not native. |
The Surprising Truth About Database Scaling
Here's a finding that surprised me and contradicts common advice: You don't always need to go distributed from day one.
Many developers, especially those starting new SaaS projects, are immediately drawn to distributed NoSQL databases or complex sharding setups for SQL. They fear "hitting a wall" with a single database instance. I've been there. When I built Flow Recorder, I started with a single managed PostgreSQL instance on AWS RDS. My initial plan was to shard it within a year.
However, I never needed to shard. Careful indexing, query optimization, and a Redis caching layer kept that single instance comfortably ahead of demand. The lesson: scale a simple setup as far as it will go before you reach for distributed complexity.
From Knowing to Doing: Where Most Teams Get Stuck
You now understand the critical components of a robust Database Strategy Web Application. You know why it matters for scalability, performance, and long-term maintainability. You've seen the frameworks, the metrics, and the common pitfalls. But knowing isn't enough – execution is where most teams fail. I’ve seen it repeatedly, from small startups in Dhaka to larger, established platforms. Developers get bogged down in the day-to-day, pushing features, and "solving" database issues reactively. This reactive approach, while seemingly fast in the short term, is a slow, error-prone killer of scalable systems.
Conventional wisdom often suggests that for a small project, you can just "wing it" with the database, or rely solely on an ORM's defaults. I fundamentally disagree. That's a recipe for technical debt and sleepless nights down the line. When I built Flow Recorder, a crucial part of its stability came from a meticulously planned database strategy from day one, even when it was just a prototype. We didn't wait for performance bottlenecks to appear; we designed for them not to happen. You don't build a skyscraper without an architectural blueprint, and your web application's database is its foundation. Doing it manually, without a clear, documented strategy, doesn't just slow you down; it guarantees you'll hit a wall you can't scale. I've seen WordPress sites become unmanageable due to database chaos, requiring massive overhauls that could have been avoided with a proper strategy.
Want More Lessons Like This?
I’m constantly experimenting, building, and breaking things to learn what truly works in software development and business. If you're tired of conventional advice and want insights grounded in real-world experience, join my journey.
Subscribe to the Newsletter - join other developers building products.
Frequently Asked Questions
Is a detailed Database Strategy Web Application really necessary for every project?
Absolutely, yes. This isn't over-engineering; it's foundational engineering. I've built and scaled systems for 8+ years, and the projects that thrive are the ones with a clear database strategy from day one. Even for a simple CRUD app, understanding your data access patterns and future scaling needs prevents costly refactors. When I was building Store Warden, a Shopify app, anticipating data growth and query types was non-negotiable for performance. Skipping this step means you're building on sand.
How long does it typically take to implement a robust database strategy?
It depends heavily on project complexity and team size. For a new, moderately complex web application, an initial strategy can be drafted and refined within 2-4 weeks. This includes schema design, indexing plans, and initial scaling considerations. For existing applications, an audit and strategy implementation might take 1-3 months, depending on the current state of technical debt. The key is to start, even if it's imperfect. I found that even a few days spent planning early on saved weeks of debugging later for projects like Trust Revamp.
Where do I even begin with defining my database strategy?
Start with your application's core use cases and data flow. Map out your entities, their relationships, and the most frequent read/write operations. Think about data volume and user concurrency projections. Don't worry about perfection; focus on understanding the "why" behind your data. I always begin with simple ERDs (Entity-Relationship Diagrams) and then move to query profiling. Tools like `EXPLAIN` in SQL are your best friends here. My AWS Certified Solutions Architect experience taught me that a well-defined data model is the bedrock of any scalable cloud architecture.
What if I'm already deep into development? Is it too late to implement a strategy?
It's never too late, but it will be harder. Think of it as refactoring. You'll need to allocate dedicated time to audit your existing database, identify bottlenecks, and then systematically apply strategic changes. This often means carefully planning migrations and ensuring backward compatibility. For a WordPress plugin like Custom Role Creator, retrofitting a strategy after launch involved significant testing to avoid breaking existing installations. It's a painful process, but less painful than letting your application crumble under its own weight.
Doesn't using advanced database strategies add unnecessary complexity and cost?
This is a common misconception. A well-defined strategy simplifies complexity in the long run by providing clear guidelines and preventing ad-hoc solutions. Cost-wise, an optimized database runs more efficiently, reducing infrastructure expenses and developer time spent on performance issues. For example, proper indexing can drastically cut down query times, meaning you need less powerful (and cheaper) database instances. I've seen this firsthand on Python Flask/FastAPI projects where early optimization meant lower AWS bills and happier users. Complexity arises from *lack* of strategy, not from having one.
Can I just rely on ORMs or framework defaults for my database needs?
You can, but you shouldn't *solely* rely on them. ORMs like those in Laravel or Node.js are powerful for rapid development, but they often abstract away critical database performance details. Relying purely on defaults means you're leaving a lot of performance on the table. You need to understand the SQL they generate, especially for complex queries. I often start with ORMs for convenience but then dive into raw SQL or query builders for critical paths after profiling. This hybrid approach ensures both development speed and optimal performance, a lesson hard-learned from scaling various SaaS products.
The Bottom Line
You've moved past just coding features to understanding the strategic foundation of your web application's data. You've gained insights into how a well-defined database strategy transforms a fragile system into a robust, scalable product.
The single most important thing you can do TODAY is pick one core feature of your current project and draft a simple ERD for its data model. Don't worry about perfection; just sketch it out. If you want to see what else I'm building, you can find all my projects at besofty.com. Take control of your database, and you'll find your development velocity increases, your systems become more reliable, and your peace of mind returns.
Ratul Hasan is a developer and product builder. He has shipped Flow Recorder, Store Warden, Trust Revamp, Paycheck Mate, Custom Role Creator, and other tools for developers, merchants, and product teams. All his projects live at besofty.com. Find him at ratulhasan.com. GitHub LinkedIn