

  • Server Performance Prediction using ML Models - Part 2

    In the first part of this blog, we described the problem we intend to solve, the data gathering, the post-processing, and the generation of the final training data. In this second part, we look at the machine learning model we used for training and for inference on new data.

    Correlation between counters: We captured various counters for various benchmarks. A graph shows the correlation between each counter and every other counter.

    K Neighbors Regression: Given a snapshot of the system as a test data row, the K Neighbors algorithm first finds the K nearest neighbors using a distance metric such as Euclidean distance (the default), Manhattan distance, or Minkowski distance. It then averages the Y values of the K nearest neighbors and assigns the result as the predicted Y value of the test row.

    Standard normalization of counters: For K Neighbors Regression to calculate these distances in an unbiased manner, we bring all the counters to a comparable scale using standard normalization, so that every column has a standard normal distribution with mean 0 and standard deviation 1.

    Why did we use K=1? Since two snapshots with identical X values must also have identical ratios, we chose K=1: the single closest neighbor, whose input variables match the test data very closely, gives a nearly exact prediction of the IPC ratio.

    Shown below is a sample IPC-ratio prediction made using K Neighbors Regression. The prediction is for the ‘502.gcc_r_gcc-pp_3’ benchmark. The “Actual” line is present in the graph because we had already calculated the IPC ratio for ‘502.gcc_r_gcc-pp_3’; this dataframe was excluded from the training data for the K Neighbors Regression and was used as a test dataframe.
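The normalize-then-find-the-nearest-neighbor approach described above can be sketched in a few lines of plain Python. This is an illustrative sketch, not the project's actual code; the counter values in the usage below are made up.

```python
import math

def standardize(rows):
    """Rescale each feature column to mean 0, standard deviation 1."""
    stats = []
    for col in zip(*rows):
        mean = sum(col) / len(col)
        std = math.sqrt(sum((v - mean) ** 2 for v in col) / len(col)) or 1.0
        stats.append((mean, std))
    scaled = [[(v - m) / s for v, (m, s) in zip(row, stats)] for row in rows]
    return scaled, stats

def predict_ipc_ratio(train_X, train_y, test_row):
    """K=1 neighbor: return the y value of the closest training row (Euclidean)."""
    scaled, stats = standardize(train_X)
    # Scale the test row with the training-set statistics before measuring distance.
    q = [(v - m) / s for v, (m, s) in zip(test_row, stats)]
    nearest = min(range(len(scaled)), key=lambda i: math.dist(scaled[i], q))
    return train_y[nearest]

# Hypothetical counters (two features) with known IPC ratios:
train_X = [[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]]
train_y = [0.9, 1.1, 1.3]
prediction = predict_ipc_ratio(train_X, train_y, [2.1, 210.0])
```

With K=1, the prediction is simply the IPC ratio of the single training snapshot whose normalized counters lie closest to the test snapshot.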
The runtime can be calculated from the predicted IPC by assuming a particular CPU clock speed. We first calculate the total number of cycles, then the runtime, using the following formulas:

total_cycles = total_instructions / predicted_ipc
predicted_runtime = total_cycles / (2.5 * 10^9)

The formula for predicted runtime assumes that the clock speed of the processor is 2.5 GHz. The predicted IPC and runtime for the same benchmark can be seen in the following graph: it shows around a 30% improvement, which is close to the predicted value.
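The two formulas above translate directly to code; a quick sanity check (with a made-up instruction count) looks like this:

```python
CLOCK_HZ = 2.5e9  # the post assumes a 2.5 GHz processor clock

def predicted_runtime_seconds(total_instructions, predicted_ipc, clock_hz=CLOCK_HZ):
    """runtime = (instructions / IPC) / clock frequency."""
    total_cycles = total_instructions / predicted_ipc
    return total_cycles / clock_hz

# Hypothetical example: 5 billion instructions at a predicted IPC of 2.0
runtime = predicted_runtime_seconds(5e9, 2.0)
```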

  • Responsive Next.js Image Using Aspect Ratio

    One of our customers at WhileOne wanted to build cards for their website containing an image and some other content. The image should cover its container and adjust its dimensions accordingly without being cropped. While making the Next.js Image component responsive, we always faced one challenge: we need to keep the image's aspect ratio so that it looks neat and clean in the given space. We could do this by specifying the image's height and width at each breakpoint, but that is a time-consuming, trial-and-error method. So we came up with a solution to this problem: the CSS aspect-ratio property. Below is an example card; let's see it in two scenarios, with and without aspect-ratio. I'm using Tailwind CSS for styling. Fig. 1

    1. Without aspect ratio: We render the same card on a mobile screen. If we compare the image below (Fig. 2) with the first image (Fig. 1), the bottom corner of the image gets cropped on the mobile screen. Fig. 2

    2. With aspect ratio: We render the same card on a mobile screen. If we compare the image below (Fig. 3) with the first image (Fig. 1), both are rendered properly. That's the advantage of the aspect-ratio property.

    Let's see what exactly changed in the code. We are using two Next Image properties that make the image take its width and height from its parent; by default the Next Image is absolutely positioned, so its parent needs to be relatively positioned. We also need to add the aspect-ratio property to the parent. So the question arises: how can we calculate the aspect ratio of an image? It is the intrinsic width divided by the intrinsic height; 1.5 is the aspect ratio for this particular image. Fig. 3

    Conclusion: This blog intends to help you understand how aspect-ratio works with Next Image, and how it helps us build responsive images.
Here is the CodeSandbox link for this example, where you can see the code, make changes, and see the difference: https://codesandbox.io/p/sandbox/next-image-with-aspect-ratio-l6fplz?file=%2Fapp%2Fpage.tsx%3A1%2C1
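As an aside, the aspect-ratio value is just intrinsic width divided by intrinsic height. A tiny helper (the dimensions below are hypothetical, chosen to give the post's 1.5 ratio) reduces the dimensions to the `width / height` form that the CSS property accepts:

```python
from math import gcd

def css_aspect_ratio(width: int, height: int) -> str:
    """Reduce image dimensions to the 'W / H' form accepted by CSS aspect-ratio."""
    g = gcd(width, height)
    return f"{width // g} / {height // g}"

# e.g. a hypothetical 1500x1000 image reduces to "3 / 2", i.e. a ratio of 1.5
ratio = css_aspect_ratio(1500, 1000)
```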

  • AWS Lambda to generate SSH Keys

    For the past few months, my team and I at WhileOne Techsoft Pvt. Ltd. have been helping our customer set up a system through which users can be granted access to a remote server in the cloud for testing. One of our client's requirements is to generate SSH keys from the JIRA board: a custom script in JIRA generates the SSH keys, which helps our client with project automation. An SSH key pair is two cryptographically linked keys that can be used to authenticate a client to an SSH server. The private key is retained by the client and should be kept absolutely secret.

    Why use an AWS Lambda function? AWS Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying compute resources. AWS Lambda automatically runs code in response to multiple events, such as HTTP requests via Amazon API Gateway, modifications to objects in Amazon Simple Storage Service (Amazon S3) buckets, table updates in Amazon DynamoDB, and state transitions in AWS Step Functions. With AWS Lambda, there are no new languages, tools, or frameworks to learn. You can use any third-party library, even native ones. You can also package any code (frameworks, SDKs, libraries, and more) as a Lambda Layer, and manage and share it easily across multiple functions. Lambda natively supports Java, Go, PowerShell, Node.js, C#, Python, and Ruby, and provides a Runtime API that lets you author functions in additional programming languages.

    Steps to generate SSH keys:
    1. As a team, we decided to use Ruby as the language for the AWS Lambda function.
    - The AWS Lambda function uses Ruby 2.7, which supports the x86_64 and arm64 architectures.
    - I first tried the OpenSSL cipher API, which requires the "openssl" gem in Ruby, to generate random keys:
    cipher = OpenSSL::Cipher.new('AES-128-CBC')
    The line above generates key material, but the resulting key pair was not working as expected.
- OpenSSL puts a hard limit of 256 bits on key sizes, which was too limiting.
- To overcome the above problem, I generated keys with the "sshkey" gem, whose built-in SSHKey.generate method is quite easy to use and gives an accurate public/private key pair.
2. Once the keys are generated, the next task is to zip the two files. Per the client's requirement, the keys should be zipped and sent to the customer as an email attachment.
- AWS Lambda has built-in support for "tar.gz", but the requirement was ".zip" format.
- To zip the keys, I used the "zip" gem in Ruby, which is quite easy to use.
3. The next step is to create an email template and send the keys by email.
- First, I tried SES (Amazon Simple Email Service). I was able to send emails, but they always landed in junk mail, so I needed to find another way to send them.
- Ruby supports SMTP via "net/smtp". This method is quite straightforward: add your credentials, make a template, and send the email. But it supports only one attachment per email, which again was a drawback, since I also need to send some PDF documents as attachments along with the keys.
- To overcome this problem, I used the "mail" gem, which works over SMTP and also supports HTML templates for the email.
4. Since this AWS Lambda function is called from the JIRA board, I need some data from it, such as the client's name and email id. In Ruby, the "jira-ruby" gem is used to fetch information from the JIRA board. To get this information, create an API token in the JIRA board, which acts as a password; use the JIRA credentials in the Lambda function to fetch the data. In JIRA, each issue is created with a unique issue id; all the information is retrieved from the issue id, which is the parameter passed by the JIRA board.

Steps to call the Lambda function from the JIRA board:
1.
First, create an API and add the Lambda function to its POST method, adding any parameters needed. I need an issue id from the JIRA board, so I added it as a URL query-string parameter.
2. To call this API, JIRA supports webhooks. Create a JIRA webhook, add the API URL to it, and also add a JQL query specifying when the API should be called.

Limitations: The AWS Lambda function supports only 3 MB of space per function. Since I am using a lot of gems to build this functionality, I need more space.
- To resolve this issue, I split the function into two separate functions of up to 3 MB each.
- The first function fetches the information from the JIRA board, creates a temporary file, and saves it to an S3 bucket; it also creates the SSH keys and saves them in the S3 bucket.
- The second function generates the email template, fetches the keys from the S3 bucket, and sends the email to the customer.
- The main task then is how these two functions communicate with each other, since the API can invoke only one function. The AWS SDK for Ruby supports "invoke", which lets one Lambda function call another.

This was a very good way for me to get hands-on with AWS Lambda and have a soft landing in understanding it. The cost of the Lambda functions is shown in the table below. This means no IT department in the company is going to raise eyebrows over cost overheads; these costs are negligible or none at all. The same can also be done on other cloud providers: GCP with Google Cloud Functions, Azure with Azure Automation, or OCI with Oracle Functions.
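As an aside, the "zip the two key files" step (done in the post with the Ruby "zip" gem) has a direct analogue in Python's standard zipfile module; the file names below are illustrative, not the ones the post used:

```python
import zipfile

def zip_key_pair(private_key_path, public_key_path, archive_path="ssh_keys.zip"):
    """Bundle the generated key pair into a single .zip email attachment."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(private_key_path)
        zf.write(public_key_path)
    return archive_path
```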

  • Android on RiscV Part - I

    The problem statement: Our customer expressed their desire to know whether Android (AOSP) had already been ported by the community to the RiscV platform, and whether we could provide a detailed summary of the current status of AOSP compilation/build and Qemu emulation progress for RiscV.

    Introduction to AOSP: Android is an open-source operating system for mobile devices and an open-source project led by Google. The Android Open Source Project (AOSP) repository offers the information and source code needed to create custom variants of the Android OS, port devices and accessories to the Android platform, and ensure that devices meet the compatibility requirements that keep the Android ecosystem a healthy and stable environment for millions of users.

    AOSP build requirements:
    1. Follow the instructions to download the AOSP source code from this link: https://source.android.com/docs/setup/download/downloading
    2. Follow the Readme.md to configure and build AOSP from this link: https://github.com/google/android-riscv64
    3. Follow the instructions to set up Cuttlefish for riscv64 at this link: https://source.android.com/docs/setup/create/cuttlefish-use

    To build the target AOSP project, follow the steps below:
    # Start a Baremetal instance on AWS
    # Install basic dependencies on the instance
    $> sudo apt-get update && sudo apt-get install build-essential repo
    # Clone the source code to the Baremetal instance
    $> mkdir ~/aosp
    $> cd ~/aosp
    $> sudo ln -s /usr/bin/python3 /usr/bin/python
    $> git config --global user.name
    $> git config --global user.email
    $> repo init -u https://android.googlesource.com/platform/manifest
    $> repo sync
    Note: The "repo sync" command above takes some time to download the sources to the folder.
    # Once the sources are downloaded successfully, run the configuration commands below.
$> source build/envsetup.sh
$> lunch aosp_cf_riscv64_phone-trunk_staging-userdebug
Figure 2: Run the configuration command "lunch"
# Next, run the command to compile and build the AOSP sources
$> make -j
Figure 3: Run the build command "make"
# Build error: make fails with a permission error for /dev/loop*
Figure 4: Error as a result of missing loop devices and permissions
# Solution: If loop devices are not available, create them. Then run the command below to grant the user "ubuntu" permission to modify the loop devices.
$> sudo chown ubuntu /dev/loop*
# After granting permissions, and some minutes later, the AOSP build completes successfully.
Figure 5: AOSP build completing successfully after fixing loop-device permissions
# Verify the images in the directory "vsoc_riscv64" as shown below.
Figure 6: Resulting binaries of the AOSP build
This completes Part I of our blog. In the next part, Part II, we shall launch the Cuttlefish emulator and boot Android!

  • Android on RiscV Part - II

    The problem statement: Our customer expressed their desire to know whether Android (AOSP) had already been ported by the community to the RiscV platform, and whether we could provide a detailed summary of the current status of AOSP compilation/build and Qemu emulation progress for RiscV.

    What we accomplished in Part I:
    Launched a Bare Metal instance on AWS
    Downloaded the AOSP source code
    Configured and built the binaries for AOSP RiscV

    What we plan to accomplish in Part II:
    Install the Cuttlefish emulator
    Launch the emulator
    Set up a reverse tunnel from the remote terminal to the Bare Metal instance
    Connect RealVNC and view the Android display

    *Before trying to launch the emulator, install the cuttlefish package as below. After installing, reboot the Bare Metal machine.

    Follow the steps below to install Cuttlefish:
    # Install cuttlefish dependencies:
    $> sudo apt-get install -y libgtest-dev libssl-dev libxml2-dev libgoogle-glog-dev libcurl4-openssl-dev libcurl4 libjsoncpp-dev libgflags-dev cmake libfmt-dev libprotobuf-dev protobuf-compiler meson bison build-essential
    $> sudo apt install -y git devscripts config-package-dev debhelper-compat golang curl
    $> git clone https://github.com/google/android-cuttlefish
    $> cd android-cuttlefish
    $> for dir in base frontend; do cd $dir; debuild -i -us -uc -b -d; cd ..; done
    $> sudo dpkg -i ./cuttlefish-base_*_*64.deb || sudo apt-get install -f
    $> sudo dpkg -i ./cuttlefish-user_*_*64.deb || sudo apt-get install -f
    $> sudo usermod -aG kvm,cvdnetwork,render $USER
    $> sudo reboot
    # Run the command to launch the Cuttlefish emulator
    $> launch_cvd -cpus=8 -memory_mb=8192
    *The emulator launch takes considerable time to complete. Please wait until you see the message "VIRTUAL DEVICE BOOT COMPLETED" before proceeding to create a remote connection to the Baremetal instance. Please check the snapshot below.
Figure 1: Successful launch of Android RiscV on the Cuttlefish emulator
*To view the Android display, connect to the Bare Metal machine from your laptop (remote terminal). Type the command below in a console window on the laptop; here it was done inside WSL on Windows.
# Set up a reverse tunnel from the remote terminal (laptop) to the Bare Metal instance:
$> ssh -i ~/.ssh/mykey.pem -p 22 -L5901:localhost:6444 ubuntu@3.93.48.57 -Nf
Note: Replace the IP address "3.93.48.57" above with your Bare Metal IP address.
# Open RealVNC Viewer
• Open the RealVNC viewer on the remote terminal (local machine / laptop).
• Click to create a new connection.
• Name the connection "AndroidRiscV", or as you prefer.
• Set the IP address to "localhost:5901".
• Click OK to close the "New Connection" dialog.
• Double-click the connection icon to open the VNC display.
Figure 2: Android display snapshots taken from the remote terminal (laptop)
Note: Wait for the display to update; it takes considerable time.
This brings us to the end of Part II. We hope you enjoyed it and got some insights into the Android AOSP project.

  • Introducing CloudNudge: Your Partner in Cloud Cost Optimization

    Hey there, Cloud Enthusiasts! We get it—managing cloud costs can sometimes feel like trying to catch a cloud itself: elusive, ever-changing, and sometimes downright overwhelming. That’s why we created CloudNudge—a tool designed to take the guesswork out of cloud cost optimization and make your life just a little bit easier.

    Why We Built CloudNudge: The cloud is amazing. It’s flexible, scalable, and powers much of what we do today. But with great power comes great responsibility—and sometimes, great expenses. When we started talking to teams and organizations, one thing became clear: cloud costs can spiral out of control faster than you can say “provisioned instance.” That’s where CloudNudge comes in. We designed this tool to help you not only manage your cloud spending but also gain deeper insights and optimize it effectively. We wanted something that didn’t just throw numbers at you but offered actionable insights—like a friendly nudge in the right direction.

    What CloudNudge Can Do for You: So, what makes CloudNudge different? Here’s a sneak peek:
    Real-Time Monitoring: Keep an eye on your cloud costs as they happen. No more end-of-month surprises!
    Cost Forecasting: Predict future costs based on your current usage, so you can budget with confidence.
    Budget Alerts: Set your spending limits, and we’ll make sure you never exceed them. Think of it as your financial safety net.
    Savings Recommendations: We analyze your cloud usage and offer suggestions to save you money without compromising performance.
    Multi-Cloud Support: Whether you’re on AWS, Azure, Google Cloud, or a mix, we’ve got you covered.

    The Human Side of CloudNudge: But CloudNudge is more than just a tool—it’s a team of passionate individuals who understand the challenges you face. We’re constantly listening, learning, and improving so that CloudNudge evolves with your needs. We believe in transparency, simplicity, and above all, putting you in control of your cloud journey.
Our mission is to empower you with the insights you need to make informed decisions, not just for today but for the future. Join Us on This Journey We’re just getting started, and we’re excited to have you along for the ride. Whether you’re a seasoned cloud expert or just beginning to navigate the complexities of cloud management, CloudNudge is here to support you every step of the way. So, here’s our nudge to you: give CloudNudge a try. We’d love to hear your thoughts, your feedback, and ideas. After all, we’re building this together. Stay tuned for more updates, and let’s make cloud management a breeze!

  • Debugging Tool for workloads using Java

    GCeasy - We used the debugging tool GCeasy in one of our projects to generate performance reports in a particular way. GCeasy helped us visualize the performance numbers better.

    Purpose of GCeasy - The GCeasy log-analyzer portal is what we use for report generation.

    Why we use GCeasy - We use garbage-collection log analysis for the following purposes: it is a wrapper Python script that generates graphs for visual inspection; it captures the GC calls for the young generation, old generation, and metadata, and gives a visual representation of each call, time spent, and so on; and it allows us to understand memory-related performance throughout the test. The GCeasy tool provides a rich set of garbage-collection and memory-related performance metrics.

    There are two key performance indicators in garbage-collection analysis:
    1. Throughput
    2. Latency (i.e., GC pause time)

    I. Throughput -
    a. Throughput is the amount of productive work done by your application in a given period.
    b. Say your application runs for 60 minutes, and of those 60 minutes, 2 minutes are spent on GC activities. The application has then spent 3.33% of its time on GC activities (i.e., 2 / 60 * 100), which means the application throughput is 96.67% (i.e., 100 - 3.33).

    II. Latency - This is the amount of time taken by a single garbage-collection event to run.

    There is a paid version of this tool that gives more detailed analysis.

    How to enable GC logs:
    Java version <= 8: -XX:+PrintGCDetails -Xloggc:<file>
    Java version >= 9: -Xlog:gc*:file=<file>

    Steps to analyze GC logs:
    Zip the log files: zip -r gcLogFiles.zip gcLogFiles
    Sign in to https://gceasy.io/
    Upload the zip file to the GCeasy tool
    A report will be generated, allowing you to explore graphs related to garbage collection (GC)
    Below is an example of what the GCeasy dashboard steps look like.

    When to measure: Measure at different phases during the test.
GC behavior is a function of traffic, so measure at various points in the test for full disclosure. When performance degrades: a sudden drop in performance may be due to inefficient GC, memory leaks, or excessive pause times; look for increased GC pause times, higher memory usage, and higher full-GC frequency. If you have made JVM tuning changes (e.g., heap size, GC algorithm), you will want to verify their impact on garbage collection. Measure proactively to detect issues and ensure the JVM is functioning optimally: over time, memory-consumption patterns can reveal problems like memory leaks.
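The throughput arithmetic above fits in a couple of lines; the 60-minute run with 2 minutes of GC is the post's own example:

```python
def gc_throughput(total_minutes, gc_minutes):
    """Return (gc_overhead_pct, throughput_pct) for a test run."""
    overhead = gc_minutes / total_minutes * 100
    return round(overhead, 2), round(100 - overhead, 2)

# 60-minute run, 2 minutes spent in GC -> 3.33% overhead, 96.67% throughput
overhead_pct, throughput_pct = gc_throughput(60, 2)
```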

  • Top 10 Libraries you should consider along with your React Project

    Here’s an introduction to the top 10 most popular and essential libraries that work well with React:

    Vite: Vite helps you create a React app; it's an alternative to the CRA method of creating a React project. Vite uses native ES modules during development and doesn't bundle the entire app upfront. This leads to faster HMR and quicker updates in the browser compared to CRA, which relies on Webpack and therefore has slower development builds.

    React Router DOM: React is a library, not a framework, so it has no routing functionality by default; this is where React Router DOM comes into the picture, handling routing in a SPA (single-page application) efficiently.

    Emotion: Emotion provides a powerful and flexible way to write CSS in JavaScript with optimized performance. It offers styled and css APIs and can be used with both object and template-literal syntax.

    Framer Motion: Framer Motion is a powerful library that helps build simple animations, complex gesture-based interactions, drag-and-drop, and more. It makes it easy to control animation from JavaScript, which is more flexible and easier to manage than writing and handling everything in CSS alone.

    React Hook Form: React Hook Form is a lightweight, performant library for handling form inputs and validation. It's far more efficient than handling forms with component state.

    Axios: Axios is a promise-based HTTP client that simplifies making HTTP requests to interact with REST APIs. It has built-in features like automatic JSON transformation, easier error handling, and support for interceptors, making life easier.

    React Query (TanStack Query): React Query simplifies data fetching, caching, synchronizing, and updating server state. It reduces the need for Redux for server-related state.

    React Data Table (TanStack Table): TanStack Table makes life easier when dealing with tables in React. It has some amazing features like client-side pagination, filters, global search, column search, sorting, and much more.
Recharts: Recharts is a simple, declarative charting library built specifically for React. It supports a variety of chart types, such as bar, line, area, and pie charts, with customization.

Storybook: Storybook is a frontend workshop for building and testing UI components and pages in isolation, outside your app. This makes it easier to focus on building well-structured, reusable components without the distractions of routing, state, or the rest of your app. It's great for building design systems or reusable UI libraries.

  • Developing a Backend with Django for a Cloud Cost Optimization Tool

    Cloud cost optimization is critical for businesses leveraging cloud infrastructure, as it allows them to reduce spending, streamline resource use, and maintain performance. Using Django as the backend framework for a cloud cost optimization tool can expedite development while providing robust features for handling complex data and APIs. Here’s a step-by-step look into how Django is used effectively in building a backend for such a tool. 1. Why Django for Cloud Cost Optimization? Scalability : Django's ORM and middleware capabilities make it suitable for handling large datasets typical in cloud cost analysis. Speed of Development : Django’s "batteries-included" approach offers pre-built components (e.g., ORM, authentication, admin panel), allowing faster iterations. Security : Django is known for its security features, which include protection against common vulnerabilities like SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF). Extensibility : Django's modularity allows developers to integrate with third-party APIs and cloud providers, critical for cloud cost monitoring. 2. Setting Up the Django Project for Cloud Cost Optimization Initialize the Project : Start with django-admin startproject  to set up the foundational structure. Set Up Environment Variables : To keep API keys and sensitive data secure, configure environment variables using libraries like django-environ . Install Necessary Libraries : Include libraries for REST APIs (Django REST framework), cloud SDKs (e.g., AWS SDK for Python boto3), and data processing (e.g., Pandas). 3. Creating Core Models and Database Structure Design the database schema: Define models to represent cloud resources, costs, usage metrics, and alerts. Key models might include: Resource: Stores details of each cloud resource (instance ID, type, region, etc.). Usage Record: Logs data on usage patterns and time-based cost metrics. 
Optimization Suggestion: Stores actionable insights like scaling recommendations or idle-resource alerts. Integrate Django ORM : Leverage Django ORM to interact with the database, creating tables and querying cost and usage data effortlessly. 4. Building RESTful APIs with Django REST Framework (DRF) Set Up API Endpoints : Create REST API endpoints using the Django REST framework to allow data interactions. For instance: GET /api/usage  to retrieve usage and cost data. POST /api/optimise  to submit optimization configurations. GET /api/alerts  to fetch cost-saving alerts. Authentication & Permissions : Implement token-based authentication (using libraries like djangorestframework-simplejwt) to ensure secure access to data. 5. Integrating with Cloud Provider APIs Fetch Cost and Usage Data : Use SDKs (like Boto3 for AWS) to pull cost and usage reports from cloud providers, aggregating data for analysis. Store Data in Django Models : Load this data into Django models for further processing and optimization. Automate Data Collection : Schedule periodic data collection with Celery for task scheduling or with AWS Lambda functions, creating a seamless data pipeline. 6. Implementing Data Analysis and Cost Optimization Logic Analyze Resource Utilization : Develop Python scripts or use Django’s management commands to analyze usage patterns and identify underutilized resources. Generate Optimization Suggestions : Apply custom rules (e.g., flag instances running below 10% CPU utilization) or use machine learning to generate actionable insights. Present Insights via API : Store and deliver cost-saving suggestions through dedicated API endpoints, integrating them into frontend dashboards. 7. Creating Alerts and Notifications Define Alert Criteria : Set alert thresholds based on user-defined budgets or usage patterns (e.g., 80% of budget exhausted). Schedule and Trigger Alerts : Use Celery to schedule regular checks and trigger alerts. 
Send notifications via email or integrate with messaging services like Slack. 8. Optimizing Performance and Deployment Optimize Database Queries : Use Django’s select_related  and prefetch_related  to optimize ORM queries, ensuring fast data retrieval. Caching : Implement caching strategies (using Redis or Memcached) to reduce response time for frequently accessed data. Deploying on the Cloud : Deploy the Django backend on a cloud platform (like AWS, Google Cloud, or Azure) using Docker and Kubernetes for scalability and resilience. 9. Monitoring and Scaling Use Cloud Monitoring Tools : Integrate monitoring tools like AWS CloudWatch or Prometheus to observe server health and resource consumption. Scale as Demand Grows : Use Django’s compatibility with WSGI servers like Gunicorn and Nginx to handle large traffic, enabling horizontal scaling when necessary. Conclusion Using Django to develop the backend for a cloud cost optimization tool is both efficient and powerful. Django’s ecosystem enables rapid setup, secure API development, and seamless integration with cloud services. By implementing these practices, developers can build a scalable, data-driven solution that helps businesses optimize their cloud expenses effectively.
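The rule-based checks described in sections 6 and 7 above (flag instances below 10% CPU utilization; alert at 80% of budget) reduce to simple predicates. The thresholds come from the text; the dictionary shape below is an assumption for illustration, not the tool's actual schema:

```python
CPU_IDLE_THRESHOLD_PCT = 10.0  # from the text: flag instances below 10% CPU
BUDGET_ALERT_RATIO = 0.80      # from the text: alert at 80% of budget exhausted

def idle_instances(instances):
    """Return the IDs of instances running below the CPU-utilization threshold."""
    return [i["id"] for i in instances if i["avg_cpu_pct"] < CPU_IDLE_THRESHOLD_PCT]

def budget_alert(current_spend, budget):
    """True once spend crosses the alert fraction of the configured budget."""
    return current_spend >= BUDGET_ALERT_RATIO * budget

# Hypothetical fleet: one nearly idle instance, one busy instance
fleet = [{"id": "i-1", "avg_cpu_pct": 4.2}, {"id": "i-2", "avg_cpu_pct": 55.0}]
flagged = idle_instances(fleet)
```

In the Django setting described above, these predicates would typically run inside a Celery task over rows loaded from the usage models, with matches written out as Optimization Suggestion records.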

  • Performance Testing with NeoLoad: A Detailed Exploration of WebFocus Use Case

    In today’s software-driven world, ensuring the seamless performance of applications under varying workloads is a necessity. For performance testing, tools like NeoLoad  empower testers to simulate real-world conditions and optimize application behavior. In this blog, we’ll delve into the practical use of NeoLoad for WebFocus performance testing , focusing on scenarios like chart rendering, page loads, data uploads, and resource utilization. Understanding the Scope of Performance Testing In the WebFocus performance testing project, the primary focus areas included: Chart Rendering : Time required to render single and multiple charts. Page Performance : Monitoring login/logout and page rendering times. Resource Utilization : Tracking CPU and memory usage during various operations. Data Uploads : Measuring the time, CPU, and memory required for data uploads. These benchmarks were vital to ensuring optimal application performance under increasing workloads. The Process: From Recording to Optimization 1. Recording the Scenario Flow The first step was to record user interactions. For instance, in a chart-rendering scenario, the flow of loading, interacting, and rendering charts was captured using NeoLoad's recording feature . 2. Script Customization Post-recording, scripts were adjusted to ensure reusability: Dynamic Parameters : Tokens (e.g., session IDs) generated during requests were identified and correlated. For hidden tokens, developer tools were utilized to trace their origin and ensure they were passed correctly in subsequent requests. Parameterization : URLs, usernames, passwords, and IPs were parameterized to make scripts adaptable for different environments, releases, or multiple users. Loops : Requests with multiple calls were optimized by implementing loops to handle repetitions effectively. 3. Preliminary Testing Before load testing, the scripts were tested with a single user to verify functionality. 
This step ensured that all dynamic parameters and correlations were correctly handled. Load Testing with NeoLoad Once the scripts were verified, load testing was performed: Setup : A Load Generator (LG)  was configured to simulate user traffic, and the controller executed the recorded scripts. Testing Parameters : CPU and memory usage were monitored. Response times were analyzed for specific workloads. Scaling Observations : Tests assessed how the system scaled with increasing loads, identifying bottlenecks or performance degradation. Performance Analysis and Optimization When performance issues arose during testing, the following measures were taken: Resource Allocation : Increased CPU and memory for the WebFocus environment. Request and Database Optimization : Identified long-processing database queries and collaborated with developers for optimization. Heap Dump Analysis : Collected heap dumps for detailed investigation of memory-related issues. Detailed Reporting : Created detailed reports highlighting response times, resource utilization, and optimization recommendations. NeoLoad Features that Enhanced Testing Dynamic Parameter Handling : Simplified the treatment of session tokens and hidden parameters. Parameterization : Enabled reusability of scripts across environments and user scenarios. Realistic User Simulation : Simulated complex interactions such as rendering multiple charts or uploading large datasets. Resource Monitoring : Provided real-time insights into CPU and memory usage, enabling quicker bottleneck identification. Integration : Collaborated with tools like developer consoles for deeper analysis. Example Scenario: Chart Rendering Benchmarking Objective : Test the rendering performance of 10 simultaneous charts. Process : Recorded the chart rendering workflow in NeoLoad. Customized the script to handle dynamic session tokens and parameterized user inputs. 
• Ran a baseline test with a single user, followed by load testing with 100, 500, and 1,000 concurrent users.

Outcome: Identified a bottleneck at 500 users due to CPU exhaustion. Recommended increasing resources and optimizing the chart-generation queries.

Benefits of Using NeoLoad
• Efficiency: Parameterization and dynamic-parameter handling reduced scripting effort.
• Scalability: Load generators allowed easy simulation of large-scale user traffic.
• Precision: Real-time monitoring and detailed reporting enabled accurate issue identification.
• Reusability: Modular scripts streamlined testing across environments and releases.

Conclusion

NeoLoad proved instrumental in ensuring the WebFocus environment could handle real-world workloads. From recording and customizing scripts to analyzing performance under stress, NeoLoad simplified the performance benchmarking process. Its ability to simulate realistic conditions, monitor resource utilization, and provide actionable insights makes it a vital tool for any performance testing team. Whether you’re testing chart rendering, page loads, or data uploads, NeoLoad offers the tools and flexibility needed for comprehensive performance benchmarking. Start leveraging NeoLoad today to unlock your application’s full potential!

  • A Day in the Life of a Dashboard Engineer - Turning Data into Decisions

Data has immense value for modern businesses, and dashboard engineers play a crucial role in making it accessible and actionable. From designing intuitive visualizations to ensuring data accuracy, the job of a dashboard engineer is a blend of technical expertise and user-centric design. Here’s a look into a typical day in the life of a dashboard engineer.

1. Checking Overnight Reports
A dashboard engineer’s first task of the day often involves reviewing reports and dashboards to ensure they ran successfully. Any discrepancies or failures in data-refresh processes need immediate attention to prevent misleading insights.

2. Reviewing Dashboard Performance
Performance metrics such as load times, query execution speeds, and system logs are checked to ensure dashboards are running optimally. A slow or unresponsive dashboard can impact decision-making efficiency.

3. Stand-Up Meetings
Daily stand-up meetings with data analysts and product managers provide updates on ongoing projects, help prioritize tasks, and surface any roadblocks that need addressing.

Dashboard Engineering Starts With Debugging, UX Refinements, and Collaboration – Laying the Foundation for a Productive Day

1. Debugging Issues
Errors in dashboards, such as incorrect data representation or slow-loading components, require debugging. This often involves checking data pipelines, optimizing queries, and ensuring data consistency across sources.

2. Refining UI/UX
A dashboard is only as effective as its usability. Engineers spend time refining layouts, adjusting color schemes, improving responsiveness, and ensuring accessibility for all users.

Deployments, Documentation, and Planning – Closing the Loop for a Seamless Tomorrow

1. Deploying Updates
Finalizing and pushing updates to production is a critical evening task. This includes version-control management, testing changes, and ensuring seamless integration.

2.
Documentation
Maintaining clear documentation is essential for long-term maintainability. Engineers document dashboard structures, query optimizations, and troubleshooting steps to assist future developers.

3. Planning for the Next Day
Before wrapping up, engineers review task lists, prioritize upcoming requests, and prepare for the next workday’s challenges.

Tools of the Trade

A dashboard engineer relies on a variety of tools to get the job done efficiently:
• WebStorm – code editor
• SQL – writing and optimizing queries for data retrieval
• Python – automating data processing and analytics
• Git – version control for tracking changes
• GitHub Actions – streamlining the deployment process with CI/CD
• Grafana – displaying data in the form of graphs
• Prometheus – monitoring system and time-series database
• ReactJS – building the UI for the dashboard
• Flask/Django – building APIs for the dashboard

Challenges & Rewards

Common Challenges
• Data Inconsistencies: Handling discrepancies in data sources
• Performance Optimization: Balancing dashboard complexity with speed

Job Satisfaction
• Impactful Insights: Seeing how dashboards drive business decisions
• Creative Problem-Solving: Finding innovative ways to present data
• Collaboration: Working with different teams to create meaningful reports

Final Thoughts

Being a dashboard engineer is both challenging and rewarding. The role bridges the gap between raw data and actionable insights, ensuring businesses can make informed decisions efficiently. Whether troubleshooting errors or refining UI elements, every task contributes to making data more accessible and meaningful.
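The morning "checking overnight reports" routine described above can be sketched as a simple staleness check in Python: flag any dashboard whose last successful data refresh is older than an allowed threshold. The dashboard names, timestamps, and the 24-hour threshold below are hypothetical examples of what a refresh-metadata table might yield, not a real implementation.

```python
# Sketch of an overnight-report check: flag dashboards whose last successful
# refresh is older than a threshold. Data shape and values are illustrative.
from datetime import datetime, timedelta

def find_stale_dashboards(refreshes: dict, now: datetime,
                          max_age_hours: int = 24) -> list:
    """Return names of dashboards whose last refresh exceeds the allowed age."""
    cutoff = now - timedelta(hours=max_age_hours)
    return sorted(name for name, last in refreshes.items() if last < cutoff)

now = datetime(2024, 1, 2, 8, 0)  # start of the workday
last_refresh = {
    "sales_overview": datetime(2024, 1, 2, 3, 0),    # refreshed overnight: OK
    "inventory":      datetime(2023, 12, 30, 3, 0),  # three days stale: flag it
}
print(find_stale_dashboards(last_refresh, now))  # -> ['inventory']
```

In practice the same check is usually wired into an alerting tool like Prometheus or a scheduled SQL job, so failures surface before anyone reads a misleading chart.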

  • Which Cloud Provider is best for you? A pricing and performance Breakdown

Cloud computing has become the backbone of modern businesses, with AWS, Google Cloud (GCP), Microsoft Azure, and Oracle Cloud Infrastructure (OCI) leading the market. Choosing the right cloud provider depends on various factors like pricing, performance, scalability, security, and real-world use cases. In this blog, we’ll break down these aspects to help you make an informed decision, using detailed tables, graphs, and deep insights.

1. Cloud Provider: Pricing Comparison

Pricing is one of the most critical factors when selecting a cloud provider. The following tables summarize the compute, storage, and networking costs across AWS, GCP, Azure, and OCI.

Compute Pricing (On-Demand Instances)

Cloud Provider      | Instance Type     | vCPUs | RAM | Cost per Hour
AWS EC2             | t4g.medium        | 2     | 8GB | $0.0336
GCP Compute Engine  | e2-standard-2     | 2     | 8GB | $0.067
Azure VM            | B2s               | 2     | 8GB | $0.04
OCI Compute         | VM.Standard.E2.2  | 2     | 8GB | $0.025

Spot Instances: AWS, GCP, and Azure provide steep discounts (up to 90%) for spot instances, whereas OCI offers flexible preemptible instances at a lower base price.

Storage Pricing

Cloud Provider | Storage Type    | Cost per GB/Month
AWS            | S3 Standard     | $0.023
GCP            | Cloud Storage   | $0.020
Azure          | Blob Storage    | $0.0184
OCI            | Object Storage  | $0.0255

Cold Storage: AWS (Glacier), GCP (Nearline/Coldline), and Azure (Cool/Archive) provide lower-cost options for infrequently accessed data.

Networking Pricing (Egress Costs)

Cloud Provider | Free Data Egress | Additional Cost per GB
AWS            | 1GB              | $0.08 - $0.23
GCP            | 1GB              | $0.08 - $0.23
Azure          | 5GB              | $0.0875 - $0.20
OCI            | 10TB             | Free up to 10TB

Key Takeaways:
• OCI offers the most generous free data egress, making it ideal for data-heavy applications.
• AWS, GCP, and Azure have varying costs based on regions and tiers.

2. Cloud Provider: Performance Breakdown

Performance is crucial for workloads like AI, analytics, and enterprise applications. Below is a comparative breakdown.
Compute Performance

Cloud Provider | Processor Type | Max IOPS | Specialized Chips
AWS            | Intel/AMD/ARM  | 16,000   | AWS Graviton
GCP            | Intel/AMD      | 100,000  | TPUs for AI/ML
Azure          | Intel/AMD      | 20,000   | Azure AI chips
OCI            | Intel/AMD/ARM  | 35,000   | Ampere ARM

Storage Performance

Cloud Provider        | Max IOPS | Latency
AWS EBS               | 16,000   | Low
GCP Persistent Disks  | 100,000  | Very Low
Azure Managed Disks   | 20,000   | Medium
OCI Block Storage     | 35,000   | Low

Networking Performance
• AWS, GCP, and Azure have extensive global networks with low latencies.
• OCI provides a high-speed RDMA network, beneficial for HPC workloads.

3. Real-World Scenarios and Best Cloud Choices

Scenario 1: Startup with Budget Constraints
Best Choice: OCI
• Offers lower compute and storage costs.
• Free-tier outbound data transfer up to 10TB.
• Ideal for cost-conscious businesses.

Scenario 2: AI/ML Workloads
Best Choice: GCP
• Tensor Processing Units (TPUs) provide superior AI/ML acceleration.
• Competitive pricing for GPU instances.
• Recommended for AI-first companies and data science teams.

Scenario 3: Enterprise with a Multi-Cloud Strategy
Best Choice: Azure
• Seamless integration with Microsoft products (Active Directory, Office 365, etc.).
• Hybrid cloud support with Azure Arc.
• Ideal for enterprises already using Microsoft services.

Scenario 4: E-commerce with High Traffic
Best Choice: AWS
• Scalable auto-scaling groups and a robust global presence.
• Strong security and compliance.
• Best for e-commerce and global applications.

4. Security and Compliance
• AWS: Offers FedRAMP, HIPAA, and GDPR compliance; best for regulated industries.
• GCP: Strong in data security with encryption-by-default policies.
• Azure: Best for enterprises needing seamless Microsoft security integration.
• OCI: Provides isolated network virtualization for enhanced security.

Conclusion: Which Cloud is Best?

The choice depends on your business needs:
• For cost-effectiveness, OCI is the best choice.
• For AI and ML workloads, GCP stands out.
• For Microsoft-based enterprises, Azure is the best fit.
• For global scalability and reliability, AWS is the leader.
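As a closing sanity check, the on-demand compute table in section 1 can be turned into a back-of-the-envelope monthly comparison: multiply each hourly rate by roughly 730 hours per month. This is only a sketch using the figures quoted above; real bills vary by region, commitment discounts, and instance shape.

```python
# Projected monthly cost of one always-on 2 vCPU / 8GB instance per provider,
# using the hourly rates from the compute pricing table (~730 hours/month).
HOURLY_RATE = {  # $/hour
    "AWS t4g.medium": 0.0336,
    "GCP e2-standard-2": 0.067,
    "Azure B2s": 0.04,
    "OCI VM.Standard.E2.2": 0.025,
}
HOURS_PER_MONTH = 730

# Print cheapest first.
for name, rate in sorted(HOURLY_RATE.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${rate * HOURS_PER_MONTH:.2f}/month")
# OCI comes out cheapest (~$18.25/month) and GCP the priciest (~$48.91/month)
# for this particular instance shape.
```

Running the same arithmetic over your own expected instance mix is a quick first filter before digging into each provider’s official pricing calculator.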

bottom of page