The 1-Second Rule For Backend Development

Introduction

Everyone in the software engineering world talks about scalable, high-performing architectures. To tell you the truth, building scalable and high-performing backends is more art than science, and it takes years of experience to master its intricacies.

Does this mean that inexperienced engineers cannot build good backends? Definitely not! There are clear, measurable, easy-to-understand rules which, if followed scrupulously, allow anyone, irrespective of their level of experience, to build great backends. This article discusses one such rule.

The Challenge in Backend Development

As a backend gains users and features, parts of it inevitably slow down over time. The prime challenge in backend development is to ensure that all API requests continue to perform well as functionality and user data grow.

Why do APIs Tend To Become Slower Over Time?

DB Bloating

Over time, as the backend grows in users and functionality, more and more data is generated and stored in the database. Because of this bloating, queries that once scanned thousands of rows end up scanning millions, and they slow down accordingly.

Improper System Design

Sometimes the design of the database or the APIs is not in sync with the way the application actually grows or scales. A common symptom is that the most heavily consumed API becomes a bottleneck and drags down the performance of the entire backend.

Hardware issues

Hardware limitations can also slow down a backend, most commonly when traffic begins to exceed the provisioned server capacity.

Are Slow APIs Inevitable Then?

Two facts are worth understanding here. First, at design time it is impossible to predict the entire behaviour of the system in production. Second, as more users use the system, more data is generated that must be stored somewhere.

With good system design and infrastructure support, the slowing down of APIs can be deferred to a great extent. But eventually it will happen, and one will have to deal with it one way or another. It is better to assume that some APIs will slow down from time to time and to prepare a plan for tackling them beforehand.

What Is The Solution To Slow APIs? - Refactoring

Refactoring is the only solution for speeding up slow APIs. It is necessary to refactor and speed up slowing functionality fairly regularly; some even say that refactoring is a continuous activity ([2]).

Refactoring has a cost, but delaying or avoiding it is even costlier. The key is to do it at the right time.

The following are a few commonly used refactoring strategies:

Divide and Conquer

For a slow API request that queries a possibly bloated database, refactoring could mean breaking the request down into multiple smaller requests that are individually faster, and invoking them separately from the frontend. This strategy has a low to moderate cost.
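The idea can be sketched as follows. This is a minimal, hypothetical illustration: the handler and data names (`get_dashboard`, `fetch_profile`, `fetch_orders`) are assumptions for the example, not part of any real framework, and the in-memory returns stand in for database queries.

```python
def fetch_profile(user_id):
    # Stand-in for one small, well-indexed DB query.
    return {"user_id": user_id, "name": "Alice"}

def fetch_orders(user_id):
    # Stand-in for another independent DB query.
    return [{"order_id": 1, "total": 9.99}]

# Before: one endpoint performs every query, so its latency is the sum
# of all of them and can easily exceed the 1-second budget.
def get_dashboard(user_id):
    return {"profile": fetch_profile(user_id),
            "orders": fetch_orders(user_id)}

# After: each piece becomes its own endpoint. The frontend invokes them
# separately (often in parallel), so each request is individually fast.
def get_profile_endpoint(user_id):
    return fetch_profile(user_id)

def get_orders_endpoint(user_id):
    return fetch_orders(user_id)
```

The page the user sees is assembled from the smaller responses on the frontend; no single request carries the whole load.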

Caching

For an API request that is slow because it requires heavy computation or must fetch data from multiple sources, caching can be used to speed up its response. This strategy has a low to moderate cost.
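A minimal sketch of this strategy, assuming a simple in-process time-to-live (TTL) cache; `expensive_report` is a hypothetical stand-in for a slow, multi-source computation, and in production one would typically use a shared cache such as Redis instead of a module-level dict.

```python
import time

_cache = {}  # key -> (expiry_timestamp, value)

def cached(ttl_seconds):
    """Decorator that serves repeated calls from the cache until the TTL expires."""
    def decorator(fn):
        def wrapper(*args):
            key = (fn.__name__, args)
            now = time.monotonic()
            hit = _cache.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]                       # fast path: serve cached value
            value = fn(*args)                       # slow path: recompute
            _cache[key] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

call_count = 0  # counts how often the slow path actually runs

@cached(ttl_seconds=60)
def expensive_report(month):
    global call_count
    call_count += 1
    return {"month": month, "total": 12345}

expensive_report("2024-01")   # computes and caches
expensive_report("2024-01")   # served from cache; slow path is not re-run
```

The trade-off, as with any cache, is serving slightly stale data for the duration of the TTL in exchange for a much faster response.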

Restructuring

Sometimes refactoring requires changes at the architectural level, leading to considerable restructuring. This approach is extremely costly in time and effort because it has a system-wide impact. Ideally, if the system design is good, it should not be required.

In most cases, refactoring early requires only the divide-and-conquer and caching strategies, keeping the cost impact under control. Left untouched, slow APIs tend to get slower, start affecting other APIs, and eventually impact the entire system. Late refactoring often warrants restructuring and proves extremely costly.

What is the challenge then? Especially for newbies?

Inexperienced engineers are usually not equipped with the knowledge and experience to judge when to refactor, and this poses a huge challenge for them.

Fortunately, the 1-second rule comes to their rescue.

The 1-second rule

If one is not sure about when to refactor, simply follow this rule.

The 1-second rule states that every API request should return a response within 1 second, even under high traffic. If the response time of any API request exceeds the 1-second limit, that request should be refactored immediately.
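Applying the rule presupposes that response times are actually measured. One lightweight way to do that, sketched below under the assumption of plain Python handler functions, is a timing decorator that records every call exceeding the budget. The budget is a parameter here only so the demo runs quickly; in practice it would be 1.0 second.

```python
import functools
import time

violations = []  # (handler_name, elapsed_seconds) for calls over budget

def one_second_rule(budget=1.0):
    """Time each handler call and record any that exceeds the budget."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            if elapsed > budget:
                violations.append((fn.__name__, elapsed))  # refactoring candidate
            return result
        return wrapper
    return decorator

@one_second_rule(budget=0.05)   # tiny budget so the demo is quick
def slow_handler():
    time.sleep(0.1)             # simulate a slow API request
    return "ok"

@one_second_rule(budget=0.05)
def fast_handler():
    return "ok"

slow_handler()
fast_handler()
# violations now lists only slow_handler
```

In a real deployment the same measurement usually comes from middleware, a reverse proxy's access logs, or an APM tool rather than a hand-rolled decorator.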

What is the significance of 1 second? Why not 2 seconds or 3 seconds?

A response time of 1 second is a good benchmark to strive for, from the following two perspectives.

From the user-experience perspective: if requests take longer than 1 second, users perceive the app as slow or laggy. For a good end-user experience, every API request should respond within a second ([1]).

From the performance perspective: most cloud hosting providers deliver optimal performance for requests whose response times stay within 1 second ([4]).

Staying within the 1-second limit gives the highest bang for your buck.

Benefits of the 1 second rule

Early Warning Systems

Based on the 1-second rule, one can set up early warning systems on the server that fire alerts whenever the need for refactoring arises: when the response time of any request exceeds 1 second, an automated alarm identifies the offending request. If multiple alarms trigger simultaneously, prioritisation can be based on consumption patterns, with the most frequently consumed API requests refactored first.

Performance at Scale

If every API request of a backend responds within a second, the backend's performance becomes predictable. With predictability, resource planning and provisioning become easy, and with good resource planning one can achieve scalability. As mentioned above, most auto-scaling cloud solutions are optimised for requests that respond within the 1-second window ([4]).

Pathway For a Scalable Codebase

The 1-second rule provides a reliable mechanism for identifying the right opportunities to refactor API code, which are otherwise difficult to spot, particularly for inexperienced engineers.

It forms a natural pathway for the code to evolve into becoming scalable and maintainable.

Great User Experience

Users experience no perceptible delay as long as API response times stay within a second ([1]). A backend that follows the 1-second rule is therefore capable of providing an excellent user experience.

Minimization of Technical Debt

The 1-second rule demands timely refactoring of code, so technical debt doesn't pile up. The additional work of managing technical debt is naturally incorporated into the development process itself, as part of continuous refactoring ([3]).

Conclusion

The 1-second rule states that every API request should return a response within 1 second, even under high traffic; any request that exceeds this limit should be refactored immediately. Using the 1-second rule, one can develop high-performing, scalable, and maintainable backends. The rule helps establish a delivery methodology that tends to improve the software over time, and it minimises technical debt as the project moves ahead and passes between hands and teams.

References

[1] API Response Time: blog.hubspot.com/website/api-response-time#...

[2] Continuous Refactoring: codit.eu/blog/continuous-refactoring/?count..

[3] Technical Debt & Refactoring: refactoring.guru/refactoring/technical-debt

[4] Google App Engine Autoscaling: cloud.google.com/appengine/docs/legacy/stan..

About the author

Hrushi M is an entrepreneur by profession and a software engineer by training. He bootstrapped a software consulting company and led it as CEO for more than a decade. He is currently the developer and maintainer of superflows.dev, a framework for developing cloud-based serverless applications.