Friday, June 14, 2024

Unleashing the Power of Event-Driven Architecture: A Deep Dive into AWS Solutions

Introduction: Hey there, tech enthusiasts! 🙌 Are you ready to embark on a thrilling journey into the world of event-driven architecture (EDA)? If you're part of an organization heavily invested in AWS and seeking to revolutionize your data processing capabilities, you've come to the right place! In this blog post, we'll dive deep into the realm of EDA, exploring the best solutions AWS has to offer. Get ready to discover how you can supercharge your system's scalability, streamline integration with services like Databricks, and unlock the full potential of real-time data processing. Let's get started! 🚀

The Quest for the Ultimate EDA Solution: Picture this: you're on a mission to find the perfect EDA solution for your organization. You've got a checklist of requirements that would make even the most seasoned architect break a sweat. 😅 From configurable event routing and deep filtering capabilities to event retry and retention support, you need a solution that can handle it all. But wait, there's more! You also need event storage for playback, guaranteed event ordering, topic/partition support, and seamless integration with external services like Databricks. Oh, and let's not forget about maintainability, scalability, and ease of use. Phew! 😮‍💨

Fear not, my friend! AWS has got your back with a range of powerful options. Let's take a closer look at the contenders:

🥊 In the blue corner, we have Amazon EventBridge. With its serverless model and seamless integration with the AWS ecosystem, EventBridge packs a punch when it comes to operational simplicity. However, it falls short in terms of event storage for playback and inherent event ordering.
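
To give you a taste of that routing and filtering, here's a minimal sketch using the AWS SDK for JavaScript (v2). The bus name, rule name, event source, and detail fields are all hypothetical; the point is the content-based event pattern:

const AWS = require('aws-sdk');
const eventBridge = new AWS.EventBridge({ region: 'us-east-1' });

// Route only high-value order events from a custom bus to downstream targets.
async function createHighValueOrderRule() {
  await eventBridge.putRule({
    Name: 'high-value-orders',                       // hypothetical rule name
    EventBusName: 'orders-bus',                      // hypothetical custom bus
    EventPattern: JSON.stringify({
      source: ['com.example.orders'],
      'detail-type': ['OrderPlaced'],
      detail: { amount: [{ numeric: ['>', 1000] }] } // deep, content-based filter
    })
  }).promise();
}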

🥊 In the red corner, we have Apache Kafka and its managed variants, MSK and MSK Serverless. Kafka is a heavyweight champion in the world of event streaming, boasting extensive capabilities for event processing, strong guarantees on event ordering, and data retention. But be warned, it comes with a steep learning curve and requires significant DevOps resources to tame.
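
For contrast, here's a minimal kafkajs sketch (broker address, topic, and key are hypothetical) showing the per-key ordering guarantee that makes Kafka a heavyweight here:

const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'demo-app', brokers: ['localhost:9092'] });
const producer = kafka.producer();

async function sendOrder() {
  await producer.connect();
  // Records with the same key always land on the same partition,
  // so Kafka preserves ordering per customer here.
  await producer.send({
    topic: 'orders',
    messages: [{ key: 'customer-42', value: JSON.stringify({ amount: 1200 }) }]
  });
  await producer.disconnect();
}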

🥊 And in the green corner, we have Confluent Cloud, a fully managed Kafka service that offers the power of Kafka without the operational overhead. It's like having a personal trainer for your Kafka clusters! 💪 However, cost considerations and potential vendor lock-in are factors to keep in mind.

Case Study: XYZ Corporation's EDA Transformation

Let me tell you a story about XYZ Corporation, a company that was drowning in data and struggling to keep up with the demands of real-time processing. They knew they needed an EDA solution, but the options seemed overwhelming. 😵

That's when they discovered Amazon Kinesis (On-demand Mode). It was like finding the perfect piece to complete their AWS puzzle! 🧩 Kinesis offered a balanced approach with its managed service model, seamless AWS integration, and capabilities that aligned perfectly with XYZ Corporation's requirements.

With Kinesis, they were able to:
✅ Streamline data streaming and processing
✅ Seamlessly integrate with AWS Lambda for scalability
✅ Effortlessly connect with Databricks for powerful analytics
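
Here's a minimal sketch of what the first two capabilities might look like with the AWS SDK for JavaScript (v2); the stream name and event shape are hypothetical:

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis({ region: 'us-east-1' });

// Producer side: publish an event to the on-demand stream.
async function publishEvent(event) {
  await kinesis.putRecord({
    StreamName: 'crm-events',         // hypothetical stream
    PartitionKey: event.customerId,   // same key => same shard => ordered per customer
    Data: JSON.stringify(event)
  }).promise();
}

// Consumer side: a Lambda handler wired to the stream via an event source mapping.
exports.handler = async (kinesisEvent) => {
  for (const record of kinesisEvent.Records) {
    const payload = JSON.parse(
      Buffer.from(record.kinesis.data, 'base64').toString('utf8')
    );
    // ... process the payload, then hand off to Databricks or other consumers
  }
};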

The implementation journey was a breeze! They started with a prototype deployment in their CRM Implementation - Service Layer project, allowing them to validate Kinesis's effectiveness in handling event processing, routing, and filtering. The results were mind-blowing! 🤯

XYZ Corporation conducted thorough integration testing with Databricks, ensuring a seamless data flow and unlocking advanced analytics capabilities. They leveraged AWS Lambda for processing scalability, monitored costs closely, and implemented comprehensive monitoring with CloudWatch and DataDog. It was like having a superhero team watching over their EDA solution! 🦸‍♀️🦸‍♂️

The Verdict: Amazon Kinesis Reigns Supreme! 👑 After careful consideration and analysis, the verdict is in: Amazon Kinesis (On-demand Mode) emerges as the champion for XYZ Corporation's EDA needs. It strikes the perfect balance between operational simplicity, scalability, AWS ecosystem integration, and the ability to meet their specific functional requirements.

But wait, there's more! 🎉 The appendices section of this blog post is a treasure trove of additional insights. These architectural considerations will help you optimize your EDA solution and unlock even more possibilities! 💡

Conclusion and Call-to-Action: Phew, what a journey! We've explored the world of EDA, battled through the options, and emerged victorious with Amazon Kinesis as the recommended solution for organizations heavily invested in AWS. 🏆

But the adventure doesn't stop here! It's time for you to take action and embark on your own EDA transformation. Start by assessing your organization's requirements, dive deep into the capabilities of Amazon Kinesis, and unlock the power of real-time data processing. 💪

Remember, the key to success lies in thorough planning, testing, and continuous improvement. Don't be afraid to experiment, iterate, and push the boundaries of what's possible with AWS and Kinesis. 🚀

If you have any questions, need further guidance, or want to share your own EDA success stories, drop a comment below! Let's keep the conversation going and empower each other in this thrilling world of event-driven architecture. 💬

Happy architecting, everyone! 🎉👩‍💻👨‍💻 

Appendices:

Thursday, June 13, 2024

Mastering Event Processing with EventBridge Pipes and Kinesis: A Comprehensive Guide to Monitoring, Retry Mechanisms, and Dead Letter Queues

Introduction: In the world of event-driven architectures, reliable event processing is paramount. As a seasoned developer, I've worked extensively with AWS services like EventBridge Pipes and Kinesis to build robust and scalable event processing pipelines. In this blog post, I'll share my experiences and insights on how to effectively monitor and handle event failures using retry mechanisms and dead letter queues (DLQs).


The Importance of Monitoring: Picture this: you've built a sophisticated event processing system using EventBridge Pipes and Kinesis, but suddenly, events start failing silently. Without proper monitoring in place, you might not even realize there's an issue until it's too late. That's why setting up comprehensive monitoring is crucial.
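
One simple place to start is an alarm on the DLQ itself, so a single failed event pages someone. Here's a hedged sketch with the AWS SDK for JavaScript (v2); the queue name and SNS topic are hypothetical:

const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({ region: 'us-east-1' });

async function alarmOnDlqDepth() {
  // Fire as soon as any message is sitting in the DLQ.
  await cloudwatch.putMetricAlarm({
    AlarmName: 'pipe-dlq-not-empty',                            // hypothetical
    Namespace: 'AWS/SQS',
    MetricName: 'ApproximateNumberOfMessagesVisible',
    Dimensions: [{ Name: 'QueueName', Value: 'orders-dlq' }],   // hypothetical queue
    Statistic: 'Maximum',
    Period: 60,
    EvaluationPeriods: 1,
    Threshold: 0,
    ComparisonOperator: 'GreaterThanThreshold',
    AlarmActions: ['arn:aws:sns:us-east-1:123456789012:oncall-alerts'] // hypothetical topic
  }).promise();
}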

EventBridge Configuration: To ensure your EventBridge Pipes are resilient to failures, you need to configure them with the right settings. First, let's talk about the retry policy. EventBridge lets you customize the retry policy for each target, specifying the maximum number of retry attempts and the maximum age an event can reach before it's dropped (the spacing between retries is handled for you, as described below). It's like giving your events multiple chances to succeed before giving up.
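
For illustration, here's a hedged sketch using the EventBridge PutTargets API, which accepts a per-target RetryPolicy and DeadLetterConfig; the rule, bus, function, and ARNs are hypothetical, and for a pipe the analogous settings live on the pipe's source parameters:

const AWS = require('aws-sdk');
const eventBridge = new AWS.EventBridge({ region: 'us-east-1' });

async function configureRetries() {
  await eventBridge.putTargets({
    Rule: 'high-value-orders',
    EventBusName: 'orders-bus',
    Targets: [{
      Id: 'order-processor',
      Arn: 'arn:aws:lambda:us-east-1:123456789012:function:process-order', // hypothetical
      RetryPolicy: {
        MaximumRetryAttempts: 8,          // give up after 8 tries...
        MaximumEventAgeInSeconds: 3600    // ...or once the event is an hour old
      },
      DeadLetterConfig: {
        Arn: 'arn:aws:sqs:us-east-1:123456789012:orders-dlq' // failed events land here
      }
    }]
  }).promise();
}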

But how does EventBridge handle retries? It uses a clever technique called exponential backoff and jitter. Imagine your events as adventurers trying to cross a treacherous bridge. With exponential backoff, the wait time between retries gradually increases, giving the events more time to recover from temporary failures. And jitter adds a touch of randomness to the retry intervals, preventing multiple events from retrying simultaneously and overwhelming the target system.
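
To make the idea concrete, here's a toy "full jitter" calculation. This is not AWS's internal implementation, just the standard technique:

// Wait a random amount between 0 and an exponentially growing (but capped) ceiling.
function retryDelayMs(attempt, baseMs = 100, capMs = 30000) {
  const ceiling = Math.min(capMs, baseMs * Math.pow(2, attempt));
  return Math.random() * ceiling;
}

// attempt 0 → up to 100 ms, attempt 1 → up to 200 ms, attempt 5 → up to 3,200 ms, ...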

Storing Complete Event Data in Amazon S3: Now, let's dive into the world of dead letter queues (DLQs). When an event fails to be processed and lands in the DLQ, it's like sending a distress signal. The DLQ holds valuable information about the failed event, but here's the catch: it only contains the metadata, not the complete event data itself.

To ensure you have access to the full event details, even if the DLQ message expires, you need to store the complete event data in a persistent storage system like Amazon S3. Picture this: a hydrate Lambda function springs into action whenever a message lands in the DLQ. It's like a detective on a mission to retrieve the complete event data from the Kinesis stream using the metadata from the DLQ message. Once the data is retrieved, it's securely stored in Amazon S3 for safekeeping.
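
Here's a hedged sketch of such a hydrate function with the AWS SDK for JavaScript (v2). The bucket name is hypothetical, and the DLQ message shape is an assumption that loosely mirrors Kinesis failure metadata:

const AWS = require('aws-sdk');
const kinesis = new AWS.Kinesis();
const s3 = new AWS.S3();

exports.handler = async (sqsEvent) => {
  for (const msg of sqsEvent.Records) {
    // The DLQ message carries pointers to the failed record, not the payload.
    // Shape assumed here: { streamArn, shardId, startSequenceNumber }.
    const meta = JSON.parse(msg.body);
    const streamName = meta.streamArn.split('/').pop();

    // Rewind the stream to the exact failed record...
    const { ShardIterator } = await kinesis.getShardIterator({
      StreamName: streamName,
      ShardId: meta.shardId,
      ShardIteratorType: 'AT_SEQUENCE_NUMBER',
      StartingSequenceNumber: meta.startSequenceNumber
    }).promise();

    // ...read it back...
    const { Records } = await kinesis.getRecords({ ShardIterator, Limit: 1 }).promise();

    // ...and persist the full payload to S3 before the DLQ message can expire.
    for (const record of Records) {
      const date = new Date().toISOString().slice(0, 10).replace(/-/g, '/');
      const key = `crm-consumer/${date}/${record.PartitionKey}/` +
                  `${Date.now()}-${record.SequenceNumber}.json`;
      await s3.putObject({
        Bucket: 'failed-events-bucket',   // hypothetical bucket
        Key: key,
        Body: record.Data
      }).promise();
    }
  }
};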

But why is this important? Imagine an outage occurs on a Friday night, and by Sunday night, the DLQ message has vanished into thin air. Without the complete event data stored in S3, investigating and reprocessing the failed event would be like searching for a needle in a haystack. By storing the event data in S3, you create a reliable and persistent source of information that's always available for investigation and reprocessing, even if the DLQ message has expired.

Naming Convention for S3 Objects: When storing event data in S3, it's crucial to have a well-defined naming convention for the objects. It's like organizing your closet—you want to be able to find what you need quickly and easily. I recommend a naming convention that includes the consumer name, date and time components, a partition key, a timestamp, and a unique identifier. It's like giving each event a unique address in the S3 universe.
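
As a concrete (and purely illustrative) version of such a convention, using a Kinesis record's partition key and sequence number as the identifiers:

// consumer / yyyy/MM/dd / partitionKey / timestamp-uniqueId.json
function buildKey(consumer, record) {
  const now = new Date();
  const datePart = now.toISOString().slice(0, 10).replace(/-/g, '/'); // yyyy/MM/dd
  return `${consumer}/${datePart}/${record.PartitionKey}/` +
         `${now.getTime()}-${record.SequenceNumber}.json`;
}

// e.g. "crm-consumer/2024/06/13/customer-42/1718236800000-4965412300123456789.json"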

With this naming convention, you can query and filter stored events based on specific criteria. Want to investigate events for a particular date? No problem! Just query objects with the appropriate prefix. Need to analyze events from a specific consumer or shard? Easy peasy! The naming convention acts as a map, guiding you to the right events effortlessly.
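
With keys shaped like the sketch above, an investigation query is just a prefix listing (bucket name hypothetical; only the first page of results is fetched, for brevity):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// List everything a given consumer stored on a given date ('yyyy/MM/dd').
async function listStoredEvents(consumer, date) {
  const res = await s3.listObjectsV2({
    Bucket: 'failed-events-bucket',   // hypothetical bucket
    Prefix: `${consumer}/${date}/`
  }).promise();
  return res.Contents.map(obj => obj.Key);
}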

Flow and Logic: Now that we have all the pieces in place, let's take a step back and look at the bigger picture. The flow and logic of the event processing pipeline is like a well-choreographed dance. EventBridge Pipes are configured with a retry policy, ready to handle any missteps. When an event fails to be delivered, EventBridge gracefully retries based on the configured policy, giving the event multiple chances to succeed.

If the event still can't make it to its destination after the specified retry attempts, it's sent to the DLQ. That's when the hydrate Lambda function springs into action, retrieving the complete event data from the Kinesis stream and storing it safely in Amazon S3. It's like a rescue mission, ensuring no event is left behind.

But the journey doesn't end there. If an investigation or reprocessing is needed, the SRE/DevOps team can access the complete event data from Amazon S3 using the trusty naming convention and metadata. It's like having a treasure map that leads directly to the needed information, even if the DLQ message has long since disappeared.

Once the event data is retrieved, it can be reprocessed and sent to the API destination via EventBridge or directly to the API endpoints. It's like giving the event a second chance at success. And if the event is successfully processed and delivered, it can be removed from S3 or marked as processed—a happy ending to its eventful journey.
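
A hedged sketch of that replay step, re-publishing the stored payload onto the bus via PutEvents (bus name and source tag are hypothetical; the stored body is assumed to be the original JSON event):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();
const eventBridge = new AWS.EventBridge();

async function replayEvent(bucket, key) {
  const obj = await s3.getObject({ Bucket: bucket, Key: key }).promise();

  // Re-publish the original payload onto the bus so normal routing takes over.
  await eventBridge.putEvents({
    Entries: [{
      EventBusName: 'orders-bus',   // hypothetical bus
      Source: 'replay.hydrate',     // hypothetical source tag
      DetailType: 'ReplayedEvent',
      Detail: obj.Body.toString('utf8')
    }]
  }).promise();

  // Happy ending: remove the object once it has been reprocessed.
  await s3.deleteObject({ Bucket: bucket, Key: key }).promise();
}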

Conclusion: In the grand scheme of event processing, EventBridge Pipes and Kinesis form a dynamic duo that ensures reliable and resilient event delivery. By leveraging monitoring, retry mechanisms, dead letter queues, and persistent storage in Amazon S3, you can build a robust event processing pipeline that can handle any challenge thrown its way.

Remember, the key to success lies in the details—configuring the right retry policies, storing complete event data in S3, and following a well-defined naming convention. With these tools and best practices in your arsenal, you'll be able to navigate the complex world of event processing with confidence and finesse.

So go forth, intrepid developer, and conquer the world of event processing! May your events flow smoothly, your retries be successful, and your S3 buckets be well-organized. Happy coding!

Friday, September 2, 2016

Making HTTP/HTTPS requests in C#

// Requires: using System; using System.IO; using System.Net; using System.Text;
private static string LoadUrl(string path)
{
    Uri myUri = new Uri(path, UriKind.Absolute);

    // Some servers reject requests that don't carry a browser-like user agent.
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(myUri);
    request.UserAgent = "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.0.5) Gecko/2008120122 Firefox/3.0.5";

    // Dispose the response, its stream, and the reader deterministically.
    using (WebResponse response = request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    using (StreamReader reader = new StreamReader(responseStream, Encoding.UTF8))
    {
        return reader.ReadToEnd();
    }
}

Tuesday, August 19, 2014

jQuery $(document).ready() fires twice

At a client site, there was an issue where a jQuery carousel/slider was behaving erratically (moving at a fast pace).

After inspection, I found that jQuery's $(document).ready() was getting called twice, which initialized the carousel/slider twice. The cause: someone had moved the markup (which contained our page's custom jQuery) from the bottom of the page into the head tag and forgotten to remove the old copy.

With the script included twice, every $(document).ready() handler it registered ran twice.
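
A minimal illustration (initCarousel and the #carousel element are hypothetical): including the same script twice registers the handler, and therefore runs it, twice. A small guard makes the initialization safe even if that happens again:

// Included twice, this registers two identical handlers; both will run.
$(document).ready(function () {
    initCarousel();   // hypothetical initializer; ends up called twice
});

// Defensive version: remember that initialization already happened.
$(document).ready(function () {
    var $el = $('#carousel');                  // hypothetical carousel element
    if ($el.data('carouselInitialized')) {
        return;                                // second registration becomes a no-op
    }
    $el.data('carouselInitialized', true);
    initCarousel();
});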

Saturday, July 27, 2013

SubmitName issue while creating Name Attribute in IE using jQuery

A few days back, I saw that one of my jQuery selectors wasn't working in IE. We were using the "Name" attribute to make the selection.
Example: $('input[name="someName"]').val();

But, hold on!! We had been using the "Name" attribute to select DOM elements for such a long time; how come we had never seen this issue?!

I then tried to select a few other elements in the same DOM that had a "Name" attribute, using the Developer Tools console window (pressing F12 in IE should take you to Developer Tools). Ah, those worked like a charm. So the issue was with just that one INPUT element.

After closely inspecting the element and its attributes, I found that it had no "Name" attribute at all; instead, it had an attribute called "submitName" (ever heard of that one? :))

After inspecting the jQuery code, I also found that we were creating this INPUT element dynamically and appending it to the form:

$("< input/>", {
            type: "hidden",
            name: name,
            value: "somevalue"}).appendTo($form);

In every browser except IE, the attribute comes out as "Name" and not "SubmitName". As always, no surprise that IE does things differently (read: "IE has issues").

I wondered: would it be worth creating the dynamic INPUT element a different way, maybe something like:

var input = $('<input type="hidden" name="' + name + '" value="someValue"/>');
$(input).appendTo($form);

Bingo!! We have the "Name" attribute now, and the selector works like a charm.

Summary:

If you are dynamically creating an element using jQuery and setting its "Name" attribute, Internet Explorer will render it as submitName and not Name.

Hope this helps someone!!




Saturday, March 2, 2013

Safari on iOS 6 caching $.ajax results?

If your web application serves iPhone/iPad users (on iOS 6), below is one of the issues you could see:

In one of the applications I was working on, we saw that information was not being returned from the AJAX call back to the UI. But we could see that the ASP.NET MVC application was sending the results back. It was just that the UI wouldn't refresh!!

We use a $.ajax (jQuery) call to perform the refresh operation (polling and bringing the results back to the UI), and I had always believed that a POST would never be cached. Call it over-engineering or a "user experience" optimization, but there you have it: Safari on iOS 6 was caching our results. (In our case the application returned "no data" for the first couple of polls, and Safari cached that result.)

It's been argued that this is actually a bug in iOS 6 and not a feature.

The W3 spec says:

“Responses to [POST] are not cacheable, unless the response includes appropriate Cache-Control or Expires header fields.”

But Apple is caching everything unless you say otherwise, the exact opposite of the spec. *sigh*


So you have a few options to resolve this:

1) Check the Cache-Control directive in the response headers served up by your application. It should be set to no-cache; this is what tells the client (and any intermediate proxy servers) whether the response can be cached.
2) Set Cache-Control to "no-cache" on the response and cache: false in your $.ajax request settings (see the sketch below). With this in place, iOS 6 Safari won't cache your response.
3) Change your POST data every time you poll; passing some random data (a timestamp, for example) also prevents the result from being cached (sounds more like a hack than a solution).
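
A minimal sketch of option 2, assuming a hypothetical /refresh polling endpoint:

$.ajax({
    url: '/refresh',                           // hypothetical polling endpoint
    type: 'POST',
    cache: false,                              // jQuery's _=timestamp cache-busting applies to
                                               // GET/HEAD, so for POST the header below does
                                               // the real work
    headers: { 'Cache-Control': 'no-cache' },  // tells iOS 6 Safari not to reuse a cached response
    success: function (data) {
        // refresh the UI with the new data
    }
});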

Check the post for more information. Hope this saves someone a few hours of agony.

Happy Debugging!!

Monday, January 14, 2013

No Inspectable Applications: iPhone remote debugging not working

Today I was trying to remote-debug a web app on an iPhone using the Safari browser on a Mac. When navigating to Develop -> iPhone, I kept seeing the message "No Inspectable Applications" and was unable to debug the application.

Note: I was using an iPhone with iOS 6. After much hair-pulling, I figured out it was the Private Browsing setting on the iPhone, which was switched "ON". I was able to debug the application once I turned this "OFF". You can turn it off by navigating to Settings -> Safari -> Private Browsing.

Happy Debugging!!