Algorithmic Thinking for Frontend Developers: Mastering Data Structures for UI Performance
From 3-Second Loads to Instant UI: How Algorithmic Thinking Transformed My React Apps
In 2022, I faced a critical problem with Store Warden, my Shopify app designed to help merchants manage product data. We were onboarding larger stores, some with tens of thousands of product variants. The main dashboard, built with React, started choking. It took over three seconds to load, sometimes five, even on a fast connection. Users saw a spinner, then a sluggish UI. They complained. Abandonment rates on the dashboard spiked by nearly 20% in a month. This wasn't just a minor bug; it threatened the product's viability. I knew then that raw coding skill wasn't enough. I needed to apply a deeper understanding of Data Structures Algorithms Frontend principles to fix it.
I spent weeks digging. The backend was solid. My AWS certifications and 8 years of experience building scalable SaaS applications told me the database queries and API responses were optimized. The bottleneck was clearly in the browser. Specifically, how the frontend processed and rendered that massive dataset. My initial React code, while clean, treated every array as just an array. I was iterating, filtering, and mapping without considering the underlying computational cost. Every user interaction triggered cascades of unnecessary re-renders. This wasn't a React problem; it was an algorithmic problem manifesting in React.
The solution wasn't a new library or a framework upgrade. It was a fundamental shift in how I thought about data on the client side. I began implementing specific data structures – not just for trivial tasks, but for core UI components. I replaced linear searches with hash maps for quick lookups. I used trees to represent hierarchical data efficiently. Within two months, I reduced the dashboard load time by 70% for our largest clients. That 3-second wait dropped to under 1 second. User engagement bounced back. We saved countless hours of customer support. This wasn't magic; it was applying algorithmic thinking on the frontend to solve real-world performance issues. This is the kind of specificity I bring from building and shipping 6+ products. You'll find that vague advice won't get you there. You need concrete strategies.
Data Structures Algorithms Frontend in 60 seconds: Data Structures and Algorithms (DSA) are critical for frontend performance, especially in complex UIs. They dictate how efficiently your JavaScript code stores, retrieves, and manipulates data, directly impacting rendering speed and user experience. By choosing the right data structure – like a Map for quick lookups instead of an array for linear search – you drastically reduce computational complexity. This means faster React component updates, smoother animations, and a more responsive application, even with large datasets. I've seen this directly improve user retention and satisfaction across my Shopify apps.
What Is Data Structures Algorithms Frontend and Why It Matters
When I first started building web applications in Dhaka, the focus was mostly on getting things to work. You'd fetch data, display it, and handle basic interactions. For simpler sites, that approach works fine. But when you start building something like Flow Recorder, an AI automation tool that processes complex user flows, or Store Warden, which manages thousands of product variants and orders, "just getting it to work" quickly breaks. This is where Data Structures Algorithms Frontend becomes non-negotiable.
At its core, Data Structures Algorithms Frontend means consciously choosing how you organize and manipulate data within your client-side JavaScript applications to optimize performance. It's not about memorizing every algorithm from a textbook. It's about understanding the trade-offs. It's about knowing when an array is fine, when you need a Map, or when a custom tree structure will save your application from grinding to a halt.
Think of it this way: every time your React component re-renders, or your Svelte store updates, your JavaScript engine is performing operations on data. If that data is poorly organized, these operations become incredibly slow. A simple filter or find on a large array has a time complexity of O(N), meaning the time it takes increases linearly with the number of items. Do that repeatedly, or with thousands of items, and you introduce noticeable lag. I learned this the hard way when Paycheck Mate's transaction history started slowing down for users with years of data.
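To make that concrete, here's a minimal sketch of the difference. The `Product` shape and names are illustrative, not from Store Warden's actual code:

```typescript
interface Product {
  id: string;
  title: string;
}

// Imagine this array holds tens of thousands of variants.
const products: Product[] = [
  { id: "variant-1", title: "Blue T-Shirt / M" },
  { id: "variant-2", title: "Blue T-Shirt / L" },
];

// O(N) per lookup: find() scans the array until it hits a match.
const slow = products.find((p) => p.id === "variant-2");

// Build a Map index once (O(N))...
const productsById = new Map(products.map((p) => [p.id, p]));

// ...then every lookup after that is O(1) on average.
const fast = productsById.get("variant-2");

console.log(slow === fast); // true — same object, very different cost at scale
```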
This isn't theoretical computer science for some backend server. This impacts the immediate user experience. A fast frontend feels snappy. A slow frontend feels broken. My AWS Solutions Architect Associate certification taught me about efficient resource utilization on the server, but it's equally vital on the client. The browser has limited resources. You're competing with other tabs, extensions, and the user's hardware. Every millisecond counts.
For frontend developers, understanding this means:
- Optimizing DOM Operations: Reducing the number of times the browser has to update the DOM. React's virtual DOM helps, but if your data processing is slow, the reconciliation process still suffers.
- Efficient State Management: Storing and updating application state in a way that allows for quick lookups and minimal re-renders.
- Faster User Interactions: Ensuring that sorting, filtering, and searching data in the UI respond instantly, without perceptible delays.
- Scalability: Building UIs that can handle growing datasets and increasing complexity without degrading performance. When I built Trust Revamp, handling dynamic content blocks, I quickly realized a simple array of objects wouldn't cut it for drag-and-drop reordering. I needed to think about how to represent the hierarchy efficiently.
The unexpected insight here is that many frontend developers, myself included early on, treat data structures as a backend concern. We assume JavaScript arrays and objects are "good enough" for everything. They are not. They are building blocks. How you use those blocks, how you structure them, makes all the difference. It's the difference between a website that feels professional and one that feels clunky. It's the difference between users staying or leaving.
My Step-by-Step Framework for Algorithmic Frontend Performance
Building a snappy frontend isn't magic. It's a methodical process. Over eight years of shipping products like Flow Recorder and Store Warden, I've refined a framework. This isn't about memorizing every algorithm. It's about a systematic approach to identify, diagnose, and solve performance bottlenecks before they hit your users.
1. Define Performance Baselines & User Stories
Before writing any code, I define what "fast" means for this feature. What's the acceptable latency? For Flow Recorder's drag-and-drop builder, "fast" meant reordering 100 blocks in under 100ms. If a user is filtering a list, I aim for sub-50ms response times. This isn't just a technical metric; it's tied to a user's expectation. When a user clicks a button, they expect an immediate reaction. If I'm building a complex data table for a Shopify app, I specify that filtering 10,000 items must complete within 200ms. Without these baselines, "performance" becomes a vague goal. You'll never know if you've succeeded.
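To keep those numbers honest, I wrap hot operations in a dev-only timing check so a blown budget is a console warning, not a user complaint. A minimal sketch — `withBudget` is a helper I'm inventing here for illustration:

```typescript
// Dev-only helper: run an operation and warn when it blows its latency budget.
function withBudget<T>(label: string, budgetMs: number, op: () => T): T {
  const start = performance.now();
  const result = op();
  const elapsed = performance.now() - start;
  if (elapsed > budgetMs) {
    console.warn(`${label}: ${elapsed.toFixed(1)}ms (budget ${budgetMs}ms)`);
  }
  return result;
}

// Usage with the 200ms filtering budget from above.
const products = [{ title: "Blue T-Shirt / M" }, { title: "Red Hoodie / S" }];
const visible = withBudget("filter-products", 200, () =>
  products.filter((p) => p.title.includes("Shirt"))
);
```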
2. Profile Early and Often
Don't wait until things feel slow. That's reactive. I use Chrome DevTools Performance tab religiously. It shows me exactly where JavaScript execution time is spent. I learned this building Store Warden's order dashboard. Initial load times were fine, but filtering 5,000 orders became painful. The DevTools showed hundreds of milliseconds spent in array filter and map operations. I profile new features immediately after building a working prototype, even before styling. This helps catch fundamental architectural flaws early. It's much cheaper to fix a data structure issue on day two than day ninety-two.
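When I already suspect a code path, I mark it explicitly with the User Timing API so it shows up by name in the Performance tab instead of as an anonymous sliver. A sketch with hypothetical mark names and a stand-in dataset:

```typescript
const orders = [{ status: "open" }, { status: "closed" }]; // stand-in dataset

performance.mark("filter-orders:start");

// The suspect code path — in Store Warden's case, filtering thousands of orders.
const openOrders = orders.filter((o) => o.status === "open");

performance.mark("filter-orders:end");

// This appears as a named bar in the DevTools Performance timeline.
performance.measure("filter-orders", "filter-orders:start", "filter-orders:end");
```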
3. Map Data Flow and Identify Hotspots
Trace how data moves from your API, through your state management, to your UI components. Where does it get transformed? Which operations touch large datasets? For Paycheck Mate, the transaction history was the obvious hotspot. Thousands of transactions meant any operation on the full dataset would be slow. For Trust Revamp's content editor, the challenge was dynamic reordering of nested blocks. Mapping this flow helped me see that the data manipulation logic, not just the rendering, was the critical path. I draw simple diagrams. I ask: "If this dataset grows 10x, where will it break?"
4. Choose the Right Data Structure for the Job
This is where algorithmic thinking directly impacts the frontend. Is it a simple list (JavaScript array)? A key-value store (Map or Object)? A tree? A graph? For Trust Revamp's nested content, a deeply nested array of objects was painful for drag-and-drop reordering. Flattening the data into an array where each item had a unique id and a parentId made updates significantly faster. This transformed an O(N^2) operation into something closer to O(N log N) or O(N) for specific manipulations. Don't default to an array just because it's easy. Consider your most frequent operations: lookups, insertions, deletions, reordering. Pick the structure that optimizes those.
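Here's roughly what that flattened shape looks like. This is a sketch under my own naming (`FlatNode`, `byId`); Trust Revamp's real types are more involved:

```typescript
interface FlatNode {
  id: string;
  parentId: string | null; // null for root-level sections
  type: "section" | "row" | "column";
  order: number; // position among siblings
}

// The whole hierarchy lives in one flat array — no deep nesting.
const nodes: FlatNode[] = [
  { id: "s1", parentId: null, type: "section", order: 0 },
  { id: "r1", parentId: "s1", type: "row", order: 0 },
  { id: "c1", parentId: "r1", type: "column", order: 0 },
];

// An id -> node index makes parent lookups O(1).
const byId = new Map(nodes.map((n) => [n.id, n]));

// Children of any node are a single pass plus a small sibling sort.
const childrenOf = (id: string | null) =>
  nodes.filter((n) => n.parentId === id).sort((a, b) => a.order - b.order);
```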
5. Implement Algorithms Thoughtfully
Once you have the right data structure, apply the right algorithm. Don't just filter and map blindly on large arrays. Can you use a memoized selector? A binary search on a sorted array? For a large table with dynamic sorting, I've implemented quicksort or merge sort variations in JavaScript. For Paycheck Mate, instead of re-filtering a huge array, I built indexes using Map objects, allowing O(1) lookups for common filter criteria like transaction type. If you need to search for an item in a list of 10,000, a simple find is O(N). If that list is sorted, a binary search is O(log N), which is orders of magnitude faster.
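For reference, here's the classic lower-bound binary search I reach for on sorted data — the same shape as the sorted-dates index I describe in the Paycheck Mate example below:

```typescript
// Index of the first element >= target in a sorted array,
// or arr.length if every element is smaller. O(log N).
function lowerBound(arr: number[], target: number): number {
  let lo = 0;
  let hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (arr[mid] < target) lo = mid + 1;
    else hi = mid;
  }
  return lo;
}

const sortedTimestamps = [100, 250, 400, 400, 900];
console.log(lowerBound(sortedTimestamps, 400)); // 2 — found without a full scan
```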
6. Test with Real-World Data & Scenarios
My local dev environment rarely mirrors production. I load up 10,000 items. I simulate slow networks. I test on older hardware if possible. When I built Custom Role Creator for WordPress, testing with 200+ custom capabilities showed where naive array lookups became slow. I don't just test the "happy path." I test edge cases: empty states, maximum data loads, concurrent operations. This often reveals bottlenecks that simple unit tests miss. It's easy to build a fast UI with 10 items. The real challenge comes with 10,000.
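I keep a tiny generator around for exactly this. A sketch — the `Transaction` fields are illustrative, not Paycheck Mate's actual schema:

```typescript
interface Transaction {
  id: string;
  amount: number;
  type: "income" | "expense";
  date: number; // unix epoch, ms
}

// Generate N fake transactions spread over roughly three years.
function makeTransactions(n: number): Transaction[] {
  const now = Date.now();
  const threeYearsMs = 3 * 365 * 24 * 60 * 60 * 1000;
  return Array.from({ length: n }, (_, i) => ({
    id: `txn-${i}`,
    amount: Math.round(Math.random() * 10_000) / 100,
    type: Math.random() > 0.5 ? "income" : "expense",
    date: now - Math.floor(Math.random() * threeYearsMs),
  }));
}

const bigDataset = makeTransactions(10_000); // stress-test with this, not 10 items
```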
7. Revisit and Refactor with a Target
Performance is not a one-time fix. It's a continuous process. After launch, I monitor key metrics. If a metric degrades, I revisit the code. This is the step most guides skip, but it's essential for long-term product health. For Store Warden, I optimized the order filtering from 500ms to 50ms by switching from array filtering to a Map lookup for product IDs. This was a post-launch optimization driven by user feedback and monitoring. Performance goals evolve as your product grows. You'll always find new bottlenecks. Be ready to iterate.
My Real-World Frontend Performance Battles
My journey as a developer from Dhaka has been about solving real problems for global users. These aren't theoretical exercises; they're situations where slow code directly impacted user satisfaction and business metrics.
Example 1: Trust Revamp's Laggy Drag-and-Drop Editor
Setup: Trust Revamp is a Shopify app I built for creating dynamic, customizable content blocks. Think of it as a page builder for specific sections of an e-commerce store. A core feature is a drag-and-drop editor where users arrange sections, rows, and columns to build their layouts.
Challenge: My initial implementation used a deeply nested array of objects to represent the UI hierarchy. Each section had an array of rows, each row an array of columns, and so on. Dragging an item meant deep cloning this entire structure, traversing it to find the source, splicing it out, and then inserting it at the new destination. Reordering just 50 blocks became sluggish. It often took 300-500ms, especially in React, causing noticeable lag and a clunky user experience.
Failure: I first tried optimizing the React component re-renders with React.memo and useCallback, thinking the issue was excessive rendering. It helped a little, reducing some unnecessary component updates. But the underlying data manipulation — the core algorithmic problem — was still the bottleneck. I was treating the symptom, not the cause. The JavaScript execution time for the data operations remained high, even if the React components themselves were more optimized.
Action: I completely refactored the data structure. Instead of a deeply nested array, I flattened the data into a single array where each item had a unique id and a parentId. This represented the hierarchy without deep nesting. For reordering, I implemented a custom tree-traversal algorithm that manipulated parent-child relationships and indices within this flat array. This involved updating just a few properties on a couple of objects, rather than deep cloning and traversing the entire tree. I also used a custom useDraggable hook combined with a virtualized list for rendering very large, complex structures efficiently, ensuring only visible items were in the DOM.
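The payoff is that a move becomes a handful of property updates instead of a deep clone. A simplified sketch of the idea, reusing the flat id/parentId shape from step 4 of my framework (the real hook also handles validation and cleanup):

```typescript
interface FlatNode {
  id: string;
  parentId: string | null;
  order: number; // position among siblings
}

// Move a node under a new parent at a given position. Only the moved node
// and its new siblings change — no cloning or traversing the whole tree.
function moveNode(
  nodes: FlatNode[],
  nodeId: string,
  newParentId: string | null,
  newIndex: number
): FlatNode[] {
  return nodes.map((n) => {
    if (n.id === nodeId) {
      return { ...n, parentId: newParentId, order: newIndex };
    }
    // Shift new siblings at or past the insertion point down by one.
    if (n.parentId === newParentId && n.order >= newIndex) {
      return { ...n, order: n.order + 1 };
    }
    return n; // untouched nodes are reused as-is
  });
}
```

Gaps left behind in the old parent's `order` values don't matter here, because siblings are always rendered in sorted order.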
Result: Reordering 100 blocks now completes in under 50ms. The UI feels instant and responsive. This change alone reduced perceived latency by over 90% for complex layouts. Users immediately noticed the improvement. The app's usability scores went up.
Example 2: Paycheck Mate's Slow Transaction History
Setup: Paycheck Mate is another product I shipped. It helps users track income and expenses. Some users accumulate thousands of transactions over years, with datasets reaching 10,000 to 20,000 entries.
Challenge: The transaction history table needed filtering, sorting, and pagination. Initially, all these operations were performed on a single, large array of transaction objects stored directly in the frontend state. Filtering 10,000 transactions by a date range, for example, took 800ms to 1.2 seconds. Sorting took similar times. Users frequently reported the app "freezing" or becoming unresponsive when interacting with their history.
Failure: My first instinct was to move all filtering and sorting to the backend. While this solved the initial load time by paginating the data, I still needed some client-side interactivity. If I fetched 100 items and the user applied a filter, I had to make another backend call. This led to a poor experience with constant network requests. I also tried debouncing the filter inputs, but that just delayed the lag; it didn't eliminate the underlying slowness of processing the large array.
Action: I implemented a client-side indexing strategy. For filtering by transaction type or category, I created a Map where keys were the types/categories, and values were arrays of relevant transaction IDs. This allowed O(1) lookups. For date range filters, I maintained a pre-sorted array of transaction dates, enabling efficient binary search (O(log N)) to find the start and end indices. For sorting, instead of re-sorting the entire array on every change, I used a stable sort algorithm and memoized the sorted results based on the sort key. Critically, I moved the core filtering and sorting logic into a Web Worker. This ensured that even during heavy computations, the main thread remained unblocked, and the UI stayed responsive.
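A stripped-down version of the worker handoff. The file names, message shapes, and the `renderTransactions` callback are illustrative, not Paycheck Mate's actual code:

```typescript
// filter.worker.ts — runs in its own thread, off the main thread.
interface Transaction { id: string; type: string; amount: number }

self.onmessage = (e: MessageEvent<{ txns: Transaction[]; type: string }>) => {
  // Heavy filtering happens here without blocking the UI.
  const result = e.data.txns.filter((t) => t.type === e.data.type);
  postMessage(result);
};

// main.ts — the UI thread hands the dataset off and keeps responding.
declare const allTransactions: Transaction[]; // hypothetical app state
declare function renderTransactions(txns: Transaction[]): void; // hypothetical UI update

const worker = new Worker(new URL("./filter.worker.ts", import.meta.url), {
  type: "module",
});
worker.onmessage = (e: MessageEvent<Transaction[]>) => renderTransactions(e.data);
worker.postMessage({ txns: allTransactions, type: "expense" });
```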
Result: Filtering 10,000 transactions now consistently takes under 100ms. Sorting is virtually instantaneous. The Web Worker ensures the UI remains responsive, preventing any perceived "freezing." This drastically improved user satisfaction. I even received an email from a user praising the "new speed and responsiveness" after these changes were deployed. My AWS Solutions Architect Associate certification taught me about efficient resource utilization, and this was a direct application of that principle on the client-side.
Common Algorithmic Mistakes in Frontend Development
I've made my share of mistakes. Every product I've shipped, from Paycheck Mate to Store Warden, has taught me a new lesson about what not to do. Here are some pitfalls I've fallen into, and how you can avoid them.
Blindly using filter, map, reduce on large arrays.
These array methods are convenient, but they are O(N) operations. If you're repeatedly calling them on arrays with thousands of items, your UI will crawl. I saw this with Paycheck Mate's transaction history.
Fix: For repeated lookups or filtering on large datasets, consider creating an index. A Map for key-value lookups (O(1) average) or a Set for existence checks (O(1) average) can be orders of magnitude faster. If your data is sorted, use binary search (O(log N)).
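A sketch of the indexing idea (the field names are mine):

```typescript
interface Txn {
  id: string;
  category: string;
}

const txns: Txn[] = [
  { id: "t1", category: "food" },
  { id: "t2", category: "rent" },
  { id: "t3", category: "food" },
];

// Build once, O(N): category -> matching transaction ids.
const idsByCategory = new Map<string, string[]>();
for (const t of txns) {
  const bucket = idsByCategory.get(t.category) ?? [];
  bucket.push(t.id);
  idsByCategory.set(t.category, bucket);
}

// Every later "filter by category" is an O(1) Map lookup
// instead of another O(N) array.filter().
const foodIds = idsByCategory.get("food") ?? []; // ["t1", "t3"]

// And a Set gives O(1) existence checks.
const selected = new Set(foodIds);
console.log(selected.has("t1")); // true
```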
Deeply nesting objects for UI state.
Representing complex UI components with deeply nested objects can quickly become a performance nightmare, especially with immutable updates. Cloning and traversing these structures is expensive. Trust Revamp's drag-and-drop editor initially suffered from this.
Fix: Flatten your data when possible. Use IDs and parent IDs to represent relationships instead of direct nesting. This simplifies updates and reduces the cost of immutability. Libraries like Normalizr can help normalize nested API responses into a flatter structure.
Over-relying on framework re-rendering optimizations.
React's Virtual DOM or Svelte's reactivity system are powerful. But they optimize rendering, not data processing. If your JavaScript takes 500ms to prepare the data, the fastest rendering engine won't make your UI feel instant.
Fix: Profile JavaScript execution, not just component renders. Use Chrome DevTools to see where your scripts are spending time. Optimize your data structures and algorithms before the render cycle.
Premature Optimization without Profiling.
Optimizing early sounds like diligence, but it's often a trap. Assuming "this will be slow" and complicating your code with advanced algorithms or data structures before you know it's actually slow is a common mistake. I've done it, and it usually just adds complexity without real benefit.
Fix: Profile first. Identify actual bottlenecks using Chrome DevTools. Optimize only when a specific performance target is not met. A simple O(N) solution is often perfectly fine and more readable for N < 100 or even N < 1000. Complexity should be introduced only when justified by data.
Not considering data mutation costs.
JavaScript's immutability patterns (spread operators, Object.assign, map, filter) create new objects or arrays. For small datasets, this is negligible. For very large datasets or frequent updates, these operations can be slow, consuming significant memory and CPU.
Fix: Understand the trade-offs. For critical performance paths with large datasets, consider libraries like Immer.js, which allow you to "mutably" update state while maintaining immutability under the hood. Alternatively, for heavy computations, perform mutable operations inside a Web Worker and then pass the final immutable result back to the main thread.
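A minimal Immer sketch showing the structural sharing that keeps this cheap:

```typescript
import { produce } from "immer";

const state = {
  sections: [
    { id: "s1", rows: [{ id: "r1", title: "Hero" }] },
    { id: "s2", rows: [{ id: "r2", title: "Footer" }] },
  ],
};

// Write "mutations" against a draft; Immer produces a new immutable state.
const next = produce(state, (draft) => {
  draft.sections[0].rows[0].title = "Hero (updated)";
});

console.log(next === state); // false — the touched path is copied
console.log(next.sections[1] === state.sections[1]); // true — untouched branches are shared
```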
Blocking the main thread with heavy computation.
Any JavaScript execution that takes more than 50-100ms will cause a noticeable stutter in your UI, blocking user interactions and animations. Paycheck Mate's large data filtering initially suffered from this.
Fix: Offload CPU-intensive tasks (complex calculations, large data transformations, heavy filtering/sorting) to Web Workers. Web Workers run in a separate thread, ensuring the browser UI remains responsive, even if the computation takes longer.
My Essential Toolkit for Frontend Algorithmics
Building performant UIs requires the right mindset and the right tools. Here's what I use regularly in my work, from building Shopify apps to scaling WordPress plugins.
| Tool/Resource | Purpose | Why I Use It |
|---|---|---|
| Chrome DevTools Performance Tab | Profiling runtime performance, identifying JavaScript bottlenecks | Unbeatable for granular analysis of JavaScript execution, call stacks, and identifying exactly which functions consume the most time. It shows why your app is slow. |
| Lighthouse | Auditing web page quality, including Core Web Vitals | Provides a quick, holistic overview of performance, accessibility, SEO, and best practices. Great for setting high-level targets for products like Store Warden. |
| React DevTools Profiler | Component-level render performance analysis in React | Helps pinpoint unnecessary re-renders, expensive components, and identify where React.memo or useCallback could be beneficial. |
| Immer.js | Immutable state updates with mutable syntax | Simplifies complex immutable state updates, making them easier to write and read, while maintaining performance benefits. I use this when managing large, nested state objects. |
| structuredClone API | Deep cloning JavaScript values (browser native) | Efficiently duplicates complex objects and arrays without needing external libraries. It's built into the browser and handles more types than JSON.parse(JSON.stringify()). |
| Web Workers | Offloading heavy computations from the main thread | Essential for keeping the UI responsive during CPU-intensive tasks, such as filtering large datasets in Paycheck Mate. They prevent jank and freezing. |
| MDN Web Docs | Comprehensive web technology documentation | My go-to resource for deeply understanding native JavaScript APIs, data structures, and browser features. It’s always up-to-date and accurate. |
| "Cracking the Coding Interview" | Algorithms and data structures fundamentals | While geared towards interviews, it's an excellent resource for refreshing core computer science concepts that are directly applicable to optimizing frontend data processing. |
Underrated Tool: The structuredClone API. Many developers still reach for lodash.cloneDeep or use JSON.parse(JSON.stringify()) for deep cloning. structuredClone is a native browser API that's often faster, more robust (handles Date, RegExp, Map, Set, etc.), and simpler to use. I frequently use it for snapshotting state before complex operations or creating mutable copies for Web Worker processing. It removes a dependency and improves performance.
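A quick sketch of where the JSON round-trip loses information and structuredClone doesn't:

```typescript
const snapshot = {
  createdAt: new Date(),
  tags: new Set(["featured", "sale"]),
};

// The JSON round-trip mangles types: Date becomes a string, Set becomes {}.
const lossy = JSON.parse(JSON.stringify(snapshot));
console.log(lossy.createdAt instanceof Date); // false
console.log(lossy.tags); // {}

// structuredClone preserves Date, Set, Map, RegExp, typed arrays, and more.
const clone = structuredClone(snapshot);
console.log(clone.createdAt instanceof Date); // true
console.log(clone.tags.has("sale")); // true
```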
Overrated Tool: Some "universal" state management libraries for simple apps. While powerful (e.g., Redux with all its middleware), they often introduce too much boilerplate and complexity for applications that don't truly need a global, highly predictable state architecture. I've seen teams get bogged down in Redux patterns when a simpler useState, useReducer, or useContext approach would have been more performant and easier to maintain. For Trust Revamp, I initially explored a very complex state manager but ended up simplifying to a custom context API for better performance and less overhead. Choose the simplest tool that solves your problem.
Why Algorithmic Thinking is a Business Imperative for Frontend Developers
As an AWS Certified Solutions Architect, I know that efficient resource utilization drives business value. This applies directly to the frontend. Fast UIs don't just feel good; they make money.
Google found that a 0.1-second improvement in site speed can lead to an 8% increase in conversion rates for retail sites. This isn't just a technical detail; it's a direct impact on revenue. For my products like Store Warden, where every millisecond counts in a user's decision to complete an action, this makes a huge difference.
| Aspect | Pros of Algorithmic Frontend Thinking | Cons (or Perceived Cons) |
|---|---|---|
| Performance | Faster UIs, instant feedback, reduced lag, smoother animations. | Requires more upfront planning, deeper CS knowledge. |
| Scalability | Handles larger datasets, complex interactions, and growing user bases. | Can add initial development time and cognitive load. |
| User Experience | Higher satisfaction, increased engagement, improved retention, fewer bounces. | Might seem like "over-engineering" to some stakeholders. |
| Maintainability | Clearer logic for data processing, easier debugging of performance issues. | Can be harder for junior developers to grasp initially. |
| Developer Growth | Deeper understanding of systems, enhanced problem-solving skills. | Steeper learning curve for advanced concepts. |
| Business Impact | Higher conversion rates, lower bounce rates, stronger brand perception, competitive edge. | Not always immediately visible to non-technical teams. |
One finding that surprised me and contradicts common advice: Many frontend developers (including my past self) often optimize for bundle size first, assuming smaller bundles always mean faster apps. While bundle size matters for initial load, I've consistently found that runtime performance — how quickly the UI responds to user input after loading — has a far greater impact on user satisfaction and retention for interactive applications.
A small app that lags on every click is worse than a slightly larger app that feels instant. My AWS Solutions Architect Associate certification emphasizes resource efficiency, and that applies just as much to client-side CPU cycles as it does to server-side memory. For Trust Revamp, I traded a slightly larger JavaScript bundle (due to a custom tree data structure implementation) for a 10x improvement in interaction speed. The user feedback was overwhelmingly positive. No one complained about the bundle size, but they did complain about lag. Focus on perceived speed and responsiveness over raw file size, especially for highly interactive applications. It's about how the app feels to the user, not just how quickly it downloads.
From Knowing to Doing: Where Most Teams Get Stuck
You now understand the core principles of Data Structures Algorithms Frontend. You've seen why it matters and how it can transform your frontend. But knowing isn't enough — execution is where most teams fail. I’ve seen this firsthand building products in Dhaka. It's easy to read about optimal algorithms, much harder to implement them under pressure.
The manual, ad-hoc way works for a while. You push features. You ship. But it's slow, error-prone, and doesn't scale. When I first built Flow Recorder, I started with a straightforward data rendering approach. It worked fine for a few hundred records. When users started pushing thousands of data points, the UI choked. We had to refactor the data handling completely, moving from simple arrays to a more structured, indexed approach. This wasn't about a new library; it was about applying a tree-like structure to manage deeply nested user interactions. That experience taught me that delaying proper data structuring only accumulates technical debt. You'll spend more time fixing performance regressions than building new features. The cost of a naive approach isn't just slow load times; it's lost development velocity.
Want More Lessons Like This?
I share these specific, hands-on lessons because I believe in learning by doing. My insights come from shipping products like Store Warden and Trust Revamp, not just theorizing. If you're a developer who wants to build real, scalable products, join my journey.
Subscribe to the Newsletter - join other developers building products.
Frequently Asked Questions
Does 'Data Structures Algorithms Frontend' really apply to modern frameworks like React or Svelte?
Yes, absolutely. Modern frameworks abstract away some complexity, but they don't eliminate the need for sound data structure choices. React's component tree is a graph. Its virtual DOM manipulation relies on efficient diffing algorithms. State management, especially with complex nested data, directly benefits from understanding trees, maps, and sets. When I optimized the data visualization in Flow Recorder, mapping complex user interactions to a specific tree structure allowed for significantly faster rendering and updates. This isn't just theoretical; it's how performant UIs are built.
I'm a solo developer / my team is small. Is this overkill for us?
No, it's not overkill; it's even more crucial. Small teams benefit disproportionately from building things right the first time. You don't have a large team to throw at performance issues later. With Paycheck Mate, I started with a clear understanding of its financial data structures. This foresight prevented many common pitfalls related to data consistency and calculation speed, even as a solo project. It allows you to build a robust foundation that scales with your ambition, not just your team size. It's about efficiency, not complexity.
How long will it take to see benefits from applying these principles?
The timeframe varies. For a critical, performance-bottlenecked component, you might see significant improvements within days of refactoring with better data structures. For a larger project-wide shift, it could be weeks or months. When I refactored the product data handling in Store Warden, we saw a 30% improvement in product listing load times within two weeks. The key is to start small, identify specific pain points, and apply targeted data structure optimizations. Don't try to refactor everything at once.
What's the absolute first step I should take to integrate DSA into my frontend workflow?
Start by identifying one component or data flow in your current project that feels slow or complex. Draw out the data. Literally, on paper or a whiteboard. See how data flows in and out. Then, consider if a different data structure – maybe a Map for faster lookups instead of an array, or a tree for hierarchical data – would make that specific interaction more efficient. For example, if you're constantly searching a list, switching to a `Map` or `Set` can dramatically improve lookup times from `O(n)` to `O(1)`, as I did with user permissions in Custom Role Creator.
Is 'Data Structures Algorithms Frontend' just about performance, or are there other benefits?
It's much more than just performance. While speed is a primary driver, proper data structures also lead to more maintainable, scalable, and less buggy code. They simplify complex logic, making it easier to reason about your application's state. When I was building Trust Revamp, a well-defined graph structure for user review data made it far simpler to implement complex filtering and aggregation features. It improved developer experience, reduced bug surface area, and ensured the platform could handle growth without constant refactoring.
Should I try to implement complex algorithms from scratch, or use libraries?
You should use libraries where appropriate, but always understand the underlying principles. Don't reinvent the wheel for common tasks like sorting or searching if a battle-tested library like Lodash provides an optimized solution. However, understanding *how* that library works under the hood – its time and space complexity – empowers you to choose the right tool and debug effectively. My approach is always to leverage existing robust solutions, but only after I grasp the core Data Structures Algorithms Frontend concepts they embody.
The Bottom Line
You now have the tools to transform your frontend from a collection of ad-hoc components into a performant, scalable, and maintainable system. The single most important thing you can do today is to pick one component in your current project and map its data flow. Identify where data is being inefficiently processed or stored.
If you want to see what else I'm building, you can find all my projects at besofty.com. Start building frontend experiences that don't just work, but excel. You'll ship faster, build more robust features, and develop with a confidence that few frontend developers possess.
Ratul Hasan is a developer and product builder. He has shipped Flow Recorder, Store Warden, Trust Revamp, Paycheck Mate, Custom Role Creator, and other tools for developers, merchants, and product teams. All his projects live at besofty.com. Find him at ratulhasan.com, on GitHub, and on LinkedIn.