
Typely API - almost complete

I wanted to keep you guys up to date with some of the things I've been working on recently, more specifically the API. It is almost ready on the server side and about 80% complete on the client app. I spent a lot of time making decisions on how to tackle this part of Typely.

Since this is a feature that will open the doors to many integrations, I decided to require a flat fee ($3) for every API token generated by subscribers. This is to prevent overloading our servers, since I don't check who is using what or how many tokens a given user generates. A free but limited plan wouldn't be of any help, because one could create as many accounts and tokens as possible and cycle through them at will.

Creating a token

I use many APIs on a daily basis, and one of the things I hate most is being required to pay for packages that I rarely use in full. To avoid this situation I brainstormed a way for users to create their own plan, with custom buckets/limits on requests, as sketched below:

  • per second
  • per minute
  • per hour
  • per day
  • per month

[Screenshot: create_plan]
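To make the bucket idea a bit more concrete, here is a rough sketch of what such a custom plan could look like as data. The field names and numbers below are purely illustrative and are not Typely's actual schema:

```lua
-- Hypothetical custom plan: one limit per bucket.
-- Field names and values are illustrative only, not Typely's real schema.
local plan = {
  requests_per_second = 10,
  requests_per_minute = 500,
  requests_per_hour   = 20000,
  requests_per_day    = 400000,
  requests_per_month  = 10000000,  -- 10 million reserved requests per month
}
```

Each generated token would presumably carry its own set of limits like this, so different tokens can get different buckets.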

Pricing everything

Creating big buckets does not mean I'm going to charge for requests that are never performed. It does, however, increase the flat fee, which starts at $3 as mentioned and grows by $1 for each 1 million requests (per month) you reserve. In addition to the dollar-per-million fee, I will also place a penalty on "requests per second": starting at 1 and capping at 100, this means the highest plan (100 per second) will also have an additional fee of $100 applied.

For example, our biggest possible plan, which is capped at 260 million requests per month (with 100 per second), will cost:

$3 (flat fee) + $100 (rps) + $260 (plan fee) = $363

So $363 is the monthly fee for the highest plan. Considering the amount of load such a user could put on our servers, I don't consider it to be that big. Of course, requests will have to be paid for separately at a price of $0.0001/request. If the plan's monthly limit goes past 100 million requests, the price drops to $0.00008/request.

A more realistic example would be a user who sends one request every second. That means about 2,600,000 requests per month, for a monthly total of roughly $260, which I consider quite fair considering how CPU-hungry Typely is.
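To put the fee model in one place, here is a small Lua sketch of the arithmetic described above. This is not code from Typely; the constants and function names are mine:

```lua
-- Sketch of the pricing rules described above (illustrative only).
local FLAT_FEE        = 3     -- base fee in dollars for every API plan/token
local PER_MILLION_FEE = 1     -- dollars per 1 million reserved requests per month
local RPS_FEE         = 1     -- dollars per unit of "requests per second" (1..100)

-- Fixed monthly fee for a plan.
local function plan_fee(reserved_millions, rps)
  return FLAT_FEE + PER_MILLION_FEE * reserved_millions + RPS_FEE * rps
end

-- Cost of the requests actually performed; the cheaper rate applies
-- when the plan reserves more than 100 million requests per month.
local function usage_cost(requests, reserved_millions)
  local rate = (reserved_millions > 100) and 0.00008 or 0.0001
  return requests * rate
end

print(plan_fee(260, 100))      -- the biggest plan from the example: $363/month fixed
print(usage_cost(2600000, 3))  -- ~2.6 million requests at $0.0001 each: about $260
```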

Tokens, one or many

I went for the "many". Being able to define limits for each individual token means that users can create multiple tokens and distribute them to different parts of their application, or resell them; I don't know, I don't care, I don't check.

[Screenshot: api_tokens]

Implementation

At the core of everything is a custom package/rate limiter that checks every request against the provided token and its limits. The rate limiter is written in Lua and loaded into a Redis database as a script.
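The script itself isn't shown in this post, but the general idea is easy to sketch. Below is a minimal fixed-window counter in Redis Lua, meant to be run with EVAL/EVALSHA; the key naming, arguments and return convention are my assumptions, not Typely's implementation:

```lua
-- Minimal fixed-window rate limiter (illustrative sketch, not Typely's script).
-- KEYS[1] = counter key, e.g. "rl:{token}:minute:{window-start}"
-- ARGV[1] = limit for this bucket
-- ARGV[2] = window length in seconds (used as the counter's TTL)
-- Returns 1 if the request is allowed, 0 if the limit was exceeded.
local count = redis.call("INCR", KEYS[1])
if count == 1 then
  -- first request in this window: make the counter expire with the window
  redis.call("EXPIRE", KEYS[1], tonumber(ARGV[2]))
end
if count > tonumber(ARGV[1]) then
  return 0
end
return 1
```

Presumably each incoming request runs one such check per bucket (second, minute, hour, day, month) and is rejected as soon as any of them fails.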

With this upgrade, I will have to move Typely to Google Cloud and manage its different components and states via Kubernetes. This will allow me to scale up or down based on demand. I know I shouldn't be thinking of scaling at this stage, but I can start low and also benefit from the elasticity that Kubernetes provides…among many other things.

Payment gateway

I wish we could use Stripe here, but we can't. Instead, I went for a similar solution: Braintree. I like the fact that my visitors don't have to leave the page for checkout and that no IPNs are in the loop. I had my WTF moments when implementing it, but it works fine now…in tests at least.

Client application

The API management app will be decoupled from the Typely editor because I don't want to bloat that one with code that doesn't make sense there. It's written in a similar manner and style and is starting to look nice. Here's a chart that shows your requests:

[Screenshot: api_chart]

I spent quite a few hours on those gradients, which point out exactly where you are hitting your plan's limits. It's nearly complete now and it looks great.

Release date?

I would say…in about a month or so. There is still a lot of testing to do on the backend, and I'm still working on the client app, but that's just glue code at this stage and it's working smoothly.