Scaling, Ruby on Rails, Kubernetes, Container Orchestration
Scaling Ruby on Rails Applications Conclusion - Part 2
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
Final Analysis Considerations - Part 2
With advanced caching, the benefits are even less worth it. If you are managing a huge enterprise-level application with thousands of simultaneous users and a team to maintain it, you will benefit from advanced caching. In our case, however, the analysis showed no significant improvement compared to the basic caching scenario. If we keep in mind that advanced caching is one of the main sources of bugs in enterprise-level applications, we can infer that caching pages, partials, and query results all over the code to further reduce database access and resource usage is not worth the work those upgrades require, unless you are managing high-traffic servers.
Read more...

Scaling Ruby on Rails Applications Conclusion - Part 1
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
Final Analysis Considerations - Part 1
We have already concluded all our experiments and have thought through all of them extensively. The raw data and the Jupyter notebook that produced these experiments are available for download in Appendix A. To organize the ideas behind this study and what we have accomplished so far, we will recapitulate the most important parts and how they correlate to each other.
Read more...

Test Analysis - Advanced caching
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
Tests analysis
Comparing box plots 7.43a and 7.22, which refer to the 50th percentile of the requests, we see the same behavior and the same mean response time interval. Figures 7.43b and 7.23, as on the box plot for 50%, show the exact same time interval and outlier groups. As the reader might already expect, the same behavior happens when we are talking about 95% of the requests on plots 7.43c and 7.24, and 99% on plots 7.43d and 7.25.
Read more...

Exploratory analysis - Advanced caching
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
Exploratory analysis
2 cores and 8000MB RAM
In line with what we have been studying so far, we will focus this part of the analysis on 1 to 100 simultaneous users. Unlike the last section, where we relied on the Rails gem's automatic caching to keep things simple and bug-free, we will now use advanced caching strategies, selecting specific parts of the code to cache. This section will cache view partials and query results, compare the results with those of the last section, and hopefully discover whether the performance improvements are worth the increase in software complexity.
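
To make this concrete, here is a minimal sketch of the kind of query-result caching this section relies on, assuming a debates listing similar to Consul's; the controller, query, and cache keys below are illustrative, not Consul's actual code.

```ruby
# app/controllers/debates_controller.rb -- illustrative sketch only.
class DebatesController < ApplicationController
  def index
    # Cache the expensive "list debates" query for a short period.
    # `.to_a` forces the query so the materialized records are stored,
    # not a lazy ActiveRecord relation.
    @debates = Rails.cache.fetch("debates/index/page-#{params[:page] || 1}",
                                 expires_in: 10.minutes) do
      Debate.order(created_at: :desc).limit(30).to_a
    end
  end
end
```

On the view side, the partial caching mentioned above is done with the Rails `cache` helper: wrapping each debate in `<% cache debate do %> ... <% end %>` stores the rendered fragment and only re-renders it when the record changes.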
Read more...

Tests Analysis on Improved code
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
Tests analysis
Before we jump into a general analysis of our case scenario, we would like to use plots 7.20 and 7.21 to demonstrate why focusing on simulations with a higher number of users is not necessary for this study. First, let us look at plot 7.21: we can handle the requests pretty well, with all of the tested groups staying under the 3-second limit set by Google that we already mentioned. But if we pay attention to plot 7.20, we can spot the problem right away. Our handled-request rate peaks at 80% for 500 users; increasing all the way up to 2000 simultaneous users makes the rate drop to as low as 20%-30%, which is even lower than the rate for 1000 simultaneous users, which sits between 50% and 70%. The problem here is that even at 500 simultaneous users our acceptance ratio is already too low; we cannot afford to miss one out of every five requests.
Read more...

Exploratory analysis - Simple caching
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
2 cores and 8000MB RAM
Let us stop here for a moment. We have a 2-core cluster, only 9000MB dedicated to running the simulations, and user access ramped up over 60 seconds. The reader might remember from the last chapter that we will focus our analysis on one to one hundred simultaneous users because, after that, we start to throttle the host machine. If we were to look at the whole range up to 2000 users, we would still be trapped in the same problem as before, 80-90% of requests handled; so let us take a closer look at a smaller range of users, from 1 to 100. In figure 7.5 above, we can clearly see that our requests stay inside a range of 86% to 100% handled; what is interesting is that with 50 simultaneous users we can keep up to 94% of requests handled, keeping our application around a 5% failure acceptance ratio, but remember, we are running with 9000MB and 2 cores only. Before we scale up to 4 cores, we need to check the response time under such harsh environment conditions.
Read more...

Tests methodology and enhancements proposal
Continuing our post series about Rails app performance, this week we are going to see our test methodology and enhancements proposal.
By this time, the reader might already have guessed correctly: our proposed enhancements are related to caching. For this study, we are going to use Redis as a cache server. Following Towards Scalable and Reliable In-Memory Storage System: A Case Study with Redis, we use a similar configuration based on a five-node cluster. The idea is to simulate a Redis cluster with one master node, two slave nodes, and two replicas for consistency, so we can get much closer to a production-level environment in our study. Each of the nodes gets 1000MB of RAM available for caching, using the LRU (Least Recently Used) eviction strategy. Looking at Analysis of a Least Recently Used Cache Management Policy for Web Browsers by Vijay S. Mookerjee and Yong Tan, where they study this strategy for web browser caching, it fits our case perfectly, since we are displaying commonly accessed pages. This behavior happens a lot in our application. For example, Consul, a platform for public debates, is naturally expected to serve the page listing the most accessed debates over and over. Most users navigate between different debates and frequently come back to the same debates list page to choose a new debate to participate in. The same logic applies to polls and the legislation process.
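
As a rough idea of how this plugs into the application, here is a hedged sketch of pointing the Rails cache store at such a Redis setup; the URL, environment variable, and hostname below are placeholders, not the configuration actually used in the experiments.

```ruby
# config/environments/production.rb -- hedged sketch; the hostname and
# environment variable are placeholders for the simulated cluster.
Rails.application.configure do
  config.cache_store = :redis_cache_store, {
    # Entry point for the Redis setup (one master, two slaves, two replicas).
    url: ENV.fetch("REDIS_CACHE_URL", "redis://redis-master:6379/0"),
    # Fail open: if Redis is unreachable, log and treat it as a cache miss
    # instead of failing the whole request.
    error_handler: ->(method:, returning:, exception:) {
      Rails.logger.warn("Redis cache #{method} failed: #{exception.message}")
    }
  }
end
```

The 1000MB-per-node limit and the LRU eviction policy live on the Redis side rather than in Rails, typically via `maxmemory 1000mb` and `maxmemory-policy allkeys-lru` in each node's redis.conf.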
Read more...

Proposed Improvements on Ruby on Rails application performance test compare
Continuing our post series about Rails app performance, this week we are going to see a summary of our proposed improvements.
Strategy to decrease page load time
Since we are trying to deliver data as fast as possible to increase the web application's performance, our goal is simple: cache. We are going to implement caching in Consul at a few levels. First, we are going to cache only translations, i.e., i18n-related queries. Our second approach is more aggressive: this time we will cache an entire page to see how it affects performance, since some pages, like your website's index, are frequently accessed. Sometimes your application has to make hundreds of database accesses to gather the information required to render an important and frequently accessed page. That is a clear sign of a great fit for caching.
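
The hedged sketch below shows what these two levels could look like in Rails; the `Translation` model, controller name, and cache keys are hypothetical placeholders rather than Consul's actual implementation.

```ruby
# Level 1: cache only i18n-related lookups so translation queries stop
# hitting the database on every request (assumes a DB-backed Translation
# model, which is a placeholder here).
def cached_translation(locale, key)
  Rails.cache.fetch("i18n/#{locale}/#{key}", expires_in: 1.hour) do
    Translation.find_by(locale: locale, key: key)&.value
  end
end

# Level 2: cache an entire, frequently accessed page by storing the rendered
# HTML and serving it back on subsequent requests.
class HomeController < ApplicationController
  def index
    html = Rails.cache.fetch("pages/home/#{I18n.locale}", expires_in: 5.minutes) do
      render_to_string(:index, layout: "application")
    end
    render html: html.html_safe
  end
end
```

The trade-off the experiments measure is exactly the one visible here: the page-level cache skips both the database work and the view rendering, but any change in the underlying data stays invisible until the cache entry expires or is explicitly invalidated.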
Read more...

Exploratory analysis on Ruby on Rails application performance test compare
Continuing our post series about Rails app performance, this week we are going to see a summary of our current analysis.
Tests analysis
We now have to split this into two tasks: first, we compare the results between 16GB and 26GB of memory, separately for 4 and 8 cores, to see what changed; then we look at the jump from 4 to 8 cores at 26GB of memory.
Read more...

Exploratory analysis on Ruby on Rails application performance varying memory
Continuing our post series about Rails app performance, this week we are going to focus on varying the amount of memory of our simulated cluster.
8 cores and 16000MB RAM
So far we have been investigating cluster performance focusing on processing power only (the number of cores in the cluster), while limited to 8291MB of RAM. What if we increase the cluster memory? We will check what happens when we increase it to 16384MB and 25000MB. It is wise to remember that the server on which we perform these tests has only 26624MB of RAM, so we will leave 6000MB for the OS, allowing it to handle our tests without page faults slowing down performance.
Read more...

Exploratory analysis on Ruby on Rails application performance
Hi folks, this week we are going to take a look at our experiments running with a fixed amount of memory, 9000MB, while varying the core count, to understand the processor's impact on our cluster simulation. Let's dive in!
2 cores and 9000MB RAM
As expected, as we increase the number of simultaneous users accessing the application, the percentage of answered requests drops to as low as 20%. But let us look at it from the other side, starting with some simple math. We are talking about 2000 users accessing the application for 1 second, which means 12,000 users each minute and a total of approximately 518 million accesses during the month. That is a lot, and let's not forget that we are using only 2 nodes in our cluster.
Read more...

Scaling Ruby on Rails application
I am starting a series of posts discussing performance improvements for a Ruby on Rails application. To do so, we are going to use Consul as a sample application. In this first post, we are going to see the lab environment where the tests will be performed. Each batch of tests will be released on a weekly basis, and after all posts have been released, I will publish the raw files.
Read more...