How To Rate Limit Requests In Blazor/ASP.NET Core – CloudSavvy IT

If you’re making a public API or website, you’re probably worried about performance. Rate limiting can help prevent abuse from basic DDoS attacks, and it’s pretty easy to set up for Blazor/ASP.NET Core applications.

Why Rate Limit Requests?

There are plenty of reasons to rate limit requests. Most services should probably set some kind of rate limit, because no reasonable human is going to be making 100 requests a second for ten minutes straight. By default, your application is going to respond to every request, so setting a reasonable limit is a good idea.

Of course, your cloud provider may also have DDoS protection. This will usually protect well against Layer 3 and 4 attacks targeting your server. However, you'll still want to make sure your server does everything it can to block abusive traffic on its own.

You also have the option of setting the limit much lower to limit requests on public APIs. For example, maybe a certain endpoint takes a lot of processing to respond to the request. You might want to limit this endpoint so that no single IP address can make more than a few requests every couple seconds, limiting stress on your server/database.

Setting Up Rate Limiting In ASP.NET Core

Blazor as a framework is built on top of ASP.NET Core, which handles all of the under-the-hood stuff for running an HTTP server and responding to requests. So, you’ll need to configure rate limiting for ASP.NET Core. The same steps will apply to anyone not using Blazor.

Unfortunately, rate limiting isn't a built-in feature of ASP.NET Core. However, there is a very popular NuGet package, AspNetCoreRateLimit, which does the job quite well. You can install it by right-clicking your project in Visual Studio and selecting "Manage NuGet Packages…":

Search for AspNetCoreRateLimit and install it.

There are a few ways to do rate limiting. If you’re using an API that needs keys, we recommend rate limiting based on API key, which covers all cases. For most people, rate limiting based on IP address is likely fine, and is the default recommended by AspNetCoreRateLimit.
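If you do go the API-key route, the package exposes a parallel set of client-based options alongside the IP-based ones used in the rest of this article. A minimal sketch, with the type and section names taken from the package's README (verify them against the version you install):

```csharp
// In ConfigureServices: bind the client-based options instead of
// (or alongside) the IP-based ones shown below.
services.Configure<ClientRateLimitOptions>(Configuration.GetSection("ClientRateLimiting"));
services.AddInMemoryRateLimiting();

// In Configure: enable the client-based middleware, which reads the
// client ID from the header named in your configuration.
app.UseClientRateLimiting();
```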

You'll need to register it as a service with ASP.NET Core. Services are configured in Startup.cs, inside the ConfigureServices(IServiceCollection services) method.

There are quite a few services to configure. First, call AddOptions() so settings can be loaded from your configuration file, and add Microsoft's memory cache if you haven't already. Then, bind the IpRateLimiting section from the JSON file, and add the rate limiter itself.

// at the top of Startup.cs: using AspNetCoreRateLimit;

// needed to load configuration from appsettings.json
services.AddOptions();

// needed to store rate limit counters and ip rules
services.AddMemoryCache();

// load general configuration from appsettings.json
services.Configure<IpRateLimitOptions>(Configuration.GetSection("IpRateLimiting"));

// inject counter and rules stores
services.AddInMemoryRateLimiting();

// configuration (resolvers, counter key builders)
services.AddSingleton<IRateLimitConfiguration, RateLimitConfiguration>();

Also in Startup.cs, you’ll need to configure the application builder to use IP rate limiting.

app.UseIpRateLimiting();
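The middleware should run early in the pipeline, before routing, so that blocked requests are rejected cheaply and never reach your endpoints. A sketch of where the call typically sits (the surrounding method is standard ASP.NET Core wiring; adjust to your existing Configure method):

```csharp
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // Rate limit first, before routing, so rejected requests are cut off early.
    app.UseIpRateLimiting();

    app.UseRouting();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
        endpoints.MapBlazorHub(); // if you're serving Blazor Server
    });
}
```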

Keep in mind that this uses in-memory rate limiting, which is per-instance. If you’re load balancing your application, you’ll need to use a distributed memory store like Redis, which this package also has support for.
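For example, a Redis-backed setup might look like the following sketch. The Redis cache registration comes from Microsoft's Microsoft.Extensions.Caching.StackExchangeRedis package, and AddDistributedRateLimiting from AspNetCoreRateLimit's own documentation, but verify both against the versions you install:

```csharp
// Register Redis as the IDistributedCache implementation.
// The connection string is a placeholder for your Redis endpoint.
services.AddStackExchangeRedisCache(options =>
{
    options.Configuration = "localhost:6379";
});

// Store rate limit counters in the distributed cache instead of local
// memory, so all load-balanced instances share the same counts.
services.AddDistributedRateLimiting();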

Configuring Rate Limiting

Once it’s added to ASP.NET, you’ll need to head over to your appsettings.json configuration file to set it up. The configuration looks something like the following:

"IpRateLimiting": {
    "EnableEndpointRateLimiting": false,
    "StackBlockedRequests": true,
    "RealIpHeader": "X-Real-IP",
    "ClientIdHeader": "X-ClientId",
    "HttpStatusCode": 429,
    "IpWhitelist": [ "127.0.0.1", "::1/10", "192.168.0.0/24" ],
    "EndpointWhitelist": [ "get:/api/license", "*:/api/status" ],
    "ClientWhitelist": [ "dev-id-1", "dev-id-2" ],
    "GeneralRules": [
      {
        "Endpoint": "*",
        "Period": "1s",
        "Limit": 2
      },
      {
        "Endpoint": "*",
        "Period": "15m",
        "Limit": 100
      },
      {
        "Endpoint": "*",
        "Period": "12h",
        "Limit": 1000
      },
      {
        "Endpoint": "*",
        "Period": "7d",
        "Limit": 10000
      }
    ]
  }

First off, if you plan to rate limit certain endpoints differently, you’ll want to turn on EnableEndpointRateLimiting, which is false by default.

StackBlockedRequests will make any blocked requests count towards the counter. Basically, with this off, anyone making requests over and over will be served X responses per period. With it on, a client hammering the endpoint hits the max very quickly and then won't be responded to again until the period resets.

RealIpHeader and ClientIdHeader are used when your server is behind a reverse proxy, which is a common setup. Since the requests will always come from the proxy server, the proxy sets a header with the user's actual info. You'll need to check your proxy and ensure that this header is set correctly, or else the rate limiter will treat everyone as the same IP.
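For example, if nginx is the proxy in front of your app, you'd typically set the header yourself (a sketch; the upstream address is a placeholder for wherever your app is listening):

```nginx
location / {
    # Forward the real client address under the header name the rate limiter reads.
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://localhost:5000;
}
```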

Then, there are three whitelists, one for IPs, client IDs, and endpoints. You can remove these if you don’t need them.

Then, you'll need to configure the rules themselves, each with an endpoint, a period, and a limit. A wildcard endpoint covers everything, and is the only kind of rule applied while EnableEndpointRateLimiting is set to false. With it enabled, you can define endpoints using {HTTP_VERB}:{PATH}, including wildcards, so *:/api/values will match requests with any HTTP verb (GET, POST, and so on) to /api/values.
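For example, with EnableEndpointRateLimiting turned on, you could clamp down on one expensive (hypothetical) endpoint while keeping a looser global rule:

```json
"GeneralRules": [
  {
    "Endpoint": "post:/api/search",
    "Period": "10s",
    "Limit": 3
  },
  {
    "Endpoint": "*",
    "Period": "1s",
    "Limit": 10
  }
]
```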

You’ll want to make sure that your endpoint matches a file, and not a directory. In my case, *:/download/*/* was a valid endpoint, but *:/download/*/*/ was not, because of the trailing slash.

This default config includes an IP whitelist for localhost, which you'll need to comment out if you're testing locally. Then, you should be able to test your configuration by setting the limit very low, like 5 per minute, and making a bunch of requests. You should get the error "API calls quota exceeded," which means it's working properly.

There’s a lot more that this package can do, so if you have more specific needs than this, we recommend checking out their documentation and seeing what’s possible.