Five Tools for Building REST APIs: Notes

Ngo Nguyen Chinh

Ha Noi 2016

(*) Here are my notes after completing a course of study offered by Pluralsight

1. Collaborative Design

Introducing the Course

We'll be looking at tools that help you in the different phases of delivering your API.

In this first module, once we're done with the introductions, we'll look at collaborative

design and tools for defining the shape of your API, documenting your interface, and

even providing stub endpoints for you.

Next, we'll look at functional testing and some simple tools that let you send requests

into your API and analyze the traffic so when you're building the functionality, you can

make sure that it works as expected and meets the agreed design.

In module 3, we'll look at the most important tool in REST APIs, which is the delivery

protocol, HTTP.

We'll look at some specific areas of hosting APIs on the web to provide better

performance, reliability, and security. When you've designed, built, and hosted your API,

there are some fantastic tools for performance testing, which let you very easily run

large- scale load tests to verify the performance of your API so you can be confident

about how many clients you can support.

In module 5, we'll look at monitoring tools that run with your API in production. These

give you insight into the health of your service and will help you track down any issues

that occur.

In the last module, we'll summarize the course and will briefly look at other tools that can

help you, like API management suites.

Choosing Your Tools

All the tools that we'll cover in the course are genuinely useful and will change the way

you deliver API projects. But as with any toolset, you need to be aware when a

helpful tool becomes a dependency for your product.

Here's me and here's my product.

If I take a dependency on a tool, then I can't deliver my product without it. A lot of

the tools we'll cover are hosted services, and hosted services have a habit of being

bought and moved around or suddenly changing their payment model, or disappearing

altogether, and that could mean no more product.

When I introduce a tool into my delivery toolset, I need to be confident that I can still use

it, or something very like it, in a year or two years' time. And that's the case for all the

tools I'll recommend in this course.

There are two things I need to get that confidence, and the first is competition. A

unique service offering is great if you're the provider, but less great if you're a consumer.

You want to know that if your provider stops providing, there are alternatives that you

can switch to.

And the second is price. I'm a big fan of the free tier, which supports the community

and helps spread the word.

All the tools I recommend are either free or have a usable free tier, which you can run

with during development and into production, letting you start up an API with very low

running costs and upgrade to a paid tier if your product takes off.

Collaborative Design

So, let's get started.

For the rest of this module, we'll look at tools that help you in the design phase when

you're putting together the shape of your API.

Typically, that involves a three-way discussion between the architects or engineers

representing the API client, the API itself, and the data provider.

On smaller projects, those roles may all be part of one team or even one person, but

there's usually an agreement to reach to make sure those different interests are all

happy. The client may want data that isn't available from the provider, or the provider

may want the API to format data coming from the client. And, ultimately, everyone needs

to agree or the solution just won't work.

With REST APIs, JSON is the dominant format because it's concise, easily

understood, and widely accepted. JSON doesn't inherently have a schema, which

gives it a lot of flexibility. Different clients can use the same API and only read the fields

that they're interested in. But if you're designing the contract first, there's nothing in the

core of JSON's specification to help you describe it.

Enter the first essential tool for building REST APIs, Apiary. Apiary is a web-based tool

which lets you capture the blueprint for an API and share it with the delivery teams. It

gives you a simple interface for editing blueprints and provides a rich documentation

view for your API, which makes it very easy to understand.

Demo 1: API Blueprints in Apiary

We'll create a blueprint in Apiary now and look at the main features of the tool. You can

register with an email address or sign in with an existing GitHub account, which is what I'll do here.

When you sign up, Apiary creates a sample API for you, which supports an app for making

notes. We'll replace this and write our own API soon, but we can use it for now to see

the two views that Apiary gives us.

Firstly, we'll look at the documentation view, which gives us a nice, simple UI to navigate

our API.

The Spider Log API

We'll build our own blueprint on Apiary and show how it fits into the design process.

We're building the API for a team which is delivering a killer mobile app, Spider Log,

which lets you record any interesting spiders you see on your travels so you can look

them up later. The home screen shows a list of spiders with their picture, some text

about when they were sighted, and some tags for classification.

So, we'll need an endpoint in our API to get the spiders for a user. We've got a

wireframe of what the screen will show, so we can work out the minimum payload that

the API needs to return. We'll create a first draft of how the endpoint looks in Apiary, and

we'll use that to focus the three-way discussion.

When everyone gets together to talk around the details, it's useful to have a draft of the

contract in advance to focus the discussion.

Demo 2: First Draft in Apiary

So, in Apiary, I'll create a new API, Spider Log, which is created with the stock blueprint,

so I'll delete that to replace it with my own.

The first two lines are the format of the blueprint, so the schema for the document can

be versioned, and the hostname that we're going to use in production.

A single hash is used for the title of the blueprint, which is Spider Log API, and any text

that follows on the next line is the general description of the API. So, this is providing

resources to record users' spider sightings.

Another single hash and the keyword, Group, will let me group endpoints together. So,

this is the Spiders group, and as the blueprint grows, I could also have a user group for

account management and a chat group for sending messages. That's all we need to set

up the API details. Apiary validates the blueprint as you type, and I can turn the preview

on to see the documentation emerging.

Now we can get straight into the endpoints. Two hashes denotes an endpoint, and in the

rest of the line, you give the endpoint a friendly name and then state the relative URL in

square brackets. In HTTP, there are multiple ways to access an endpoint, so we can

define what happens when you get the endpoint with three hashes, a friendly

description, and the method name in square brackets. This is all pretty simple. We're

building up the structure of the API, and, again, we can add some descriptive text for

the endpoint in the body of the blueprint.
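As a sketch of what that gives us so far (the host name, endpoint path, and friendly names here are illustrative rather than copied from the course), the blueprint might read:

    FORMAT: 1A
    HOST: http://api.spiderlog.net

    # Spider Log API
    Provides resources to record users' spider sightings.

    # Group Spiders

    ## Spider List [/spiders]

    ### List a User's Spiders [GET]
    Returns the spiders sighted by the authenticated user.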

With our blueprint, we define an example of a request and response for each

endpoint.

So, in our API request, we'll need to know the user's identity, and to keep this simple,

we'll use basic authentication, and we'll look at making our API more secure with SSL

later in the course. We need a plus sign and the request keyword to mark the start of the

request definition, then indent, and another plus sign with the Headers keyword. Indent

again, and I can put my required header list. Apiary uses indentation heavily to

separate parts of the blueprint, and it can be fussy about using tabs and spaces,

so if you get validation warnings, that's usually the cause. I only need one header,

which is the standard HTTP authorization header, and I separate the header name and

value with a colon. Apiary documents your API by showing the usage, so it's useful to

keep the example data as realistic as possible. Basic auth uses the basic keyword, and

then a base64 encoding of the user name and password. There are plenty of web tools

that will generate base64 for you, but the instant answers feature of the DuckDuckGo

search engine is the easiest. So, I can paste that string into my blueprint, and it's clear

what type of authentication my API needs with some real sample data to illustrate it.

Now I can define my response, and in the normal flow, I'll be returning a 200 status code

with a content type of application/JSON.

In the body of the response, I need to capture the sample JSON that the API's going to

return, which I've already got in my clipboard. We're returning the spiders a user has

seen. So, for a first draft, let's start with an array, and the objects in the array will have a

timestamp for when the spider was seen, some text that the user entered, and a URL to

the image of the spider that the user uploaded.
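A sketch of how that request and response section might look in the blueprint (the field names are illustrative, and the base64 string here simply encodes user:password):

    + Request

        + Headers

                Authorization: Basic dXNlcjpwYXNzd29yZA==

    + Response 200 (application/json)

            [
                {
                    "dateSighted": "2016-04-19T10:15:00Z",
                    "description": "Garden spider on the shed window",
                    "imageUrl": "/images/spider-001.jpg"
                }
            ]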

I can check the preview, and now I'm happy with it.

I can save and invite other users to access the blueprint.

With Apiary, I can email invites to the app engineer and data provider. I can make them

editors so anyone in the team can edit the definition, but depending on the team size, it

may be better to have one or two nominated editors and make sure any changes are

discussed and agreed before the blueprint gets updated.

Payload Design

So, now we have our draft API, and we've sent it round to the team.

The app engineer can check this through and make sure the API gives them all the data

they need. And they may look and see the image is a relative URL, and they want an

absolute URL. This is the sort of thing that gets missed in first-draft contracts and might

not be picked up until the client integrates with the real API, but using Apiary with

representative sample data makes that issue obvious.

The data provider can check through and make sure all the data the API's going to

provide is really available, and the data provider may have more knowledge from

existing data sets, so they may question if the API is going to return all spiders, which

could be a very big list.

And the API designer can check through and make sure everything's there to support

the functional requirements, the actual data, and the format for representing it. The API

designer is also going to be interested in the nonfunctional requirements of the API,

making sure it can support the expected load and continue to support different clients as

the API evolves.

Apiary isn't great at capturing that three-way discussion. You can add comments to the

API, but the audit trail isn't so good, so your actual contract discussions are better done

outside of Apiary, and that could be face-to-face meetings or a Skype chat, or something

more formal like Basecamp, which records who said what.

Demo 3: Second Draft in Apiary

Let's go back to Apiary and make some changes from the feedback that we had in the

three-way session.

Firstly, the data provider says that for the Spider Log website, the average user has

thousands of sightings, so returning them all in the API isn't a good idea. Now we've

decided that we should use paging. In the request, I can capture parameters with curly

braces, so I'll add page number and size variables to the URL. The question mark

means these will be query string parameters. Before the request definition, I'll add a

Parameters block, and for each parameter, capture whether it's required, what type it

should be, a default value for the documentation, and a description. So these

parameters will let the client fetch one page of data with a given number of items in it,

and the example request will be for page one containing ten spiders.

We'll change the response to return an object, which includes a page object. That states

the page number that's returned, page size, and total number of pages. And the item

array will be inside the root object, too, so we'll call that property Spiders. The

documentation will show the JSON exactly as we have it here, and Apiary doesn't do the

formatting for me. So, I'll need to indent my array another level as it's inside another

object now.
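Pulling those changes together, a sketch of the revised endpoint (again with illustrative names and values) might be:

    ## Spider List [/spiders{?page,size}]

    ### List a User's Spiders [GET]

    + Parameters
        + page (required, number, `1`) ... The page of results to return
        + size (required, number, `10`) ... The number of spiders per page

    + Response 200 (application/json)

            {
                "page": { "number": 1, "size": 10, "totalPages": 3 },
                "spiders": [
                    {
                        "dateSighted": "2016-04-19T10:15:00Z",
                        "description": "Garden spider on the shed window",
                        "imageUrl": "/images/spider-001.jpg"
                    }
                ]
            }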

Preview that, and I can see my new paged request URL and the paged response.

Some feedback from the app engineers. They don't want a relative URL for the

images. They're expecting to use a content delivery network, and they want the full path.

That doesn't change the structure of the JSON, but it does change how the data gets

used. So, we can switch to using the CDN URLs, and now it's clear that this is an

absolute URL. I'll save the changes, and I've linked this blueprint to a GitHub repository,

so when I save, Apiary will commit the changes and push them to the origin. So, I'll add

a commit message here.

From the API designers' feedback, we want to have a version number in the request

header so we can support versioning as the API evolves. I'll use a custom header name

by convention prefixed with x, and capture the API version the client wants to work with.

And, lastly, I'll want to return response headers which support caching on the client and

in proxies. I'll add a header section to the response and capture an ETag and a cache

control header, which will enable both expiration and validation caching, which we'll look

at in more detail in the HTTP module later in the course. Because I've added a header

section to the response, I need to specify where the body begins, and I need to add

another level of indentation.
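A sketch of the endpoint with those extra headers in place (the x- header name and the header values are illustrative, not the course's exact choices):

    + Request

        + Headers

                Authorization: Basic dXNlcjpwYXNzd29yZA==
                X-SpiderLog-Version: 1

    + Response 200 (application/json)

        + Headers

                ETag: "a1b2c3"
                Cache-Control: max-age=3600

        + Body

                { "page": { ... }, "spiders": [ ... ] }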

Preview again, and here are my new request and response headers, so I'll save again,

committing the changes and pushing to GitHub.

Now we have our agreed API contract in a centralized definition, which all the team can

work from.

The Apiary Blueprint

By putting the API blueprint in the middle, we've broken the dependency between

the different concerns, and now we have an agreed contract that each party can work

from. At this point, the teams or team members can go off and do their own thing. The

data providers may have a bunch of data migration scripts to prepare. The API guys can

start building the API to deliver the agreed contract. And the app guys can start building

the home screen and plugging the UI into the API. Apiary supports this part of the

delivery too, as well as giving us documentation.

Apiary provides a stub endpoint for our API. That means the app engineers can start

making external rest calls straightaway using the Apiary stub while the actual API is

being built. That's a great feature, which means the app team isn't blocked waiting for

the API team, and they don't have to build throwaway code to mock the data. They can

start with the real JSON response straightaway.

And the stub is useful for the API team, too. As you're building the API, you could create

seed data for testing that matches the data in Apiary. Then, in your automated tests, you

can get the response from the Apiary stub and from the API implementation and make

sure that they exactly match. During the build, the tests will fail if your implementation

isn't correct, and once the endpoint is delivered, the tests will tell you if the Apiary

contract changes and your API is out of date.
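There's nothing Apiary-specific about wiring that check in. As a minimal sketch (assuming bash, cURL, and diff on the build machine, and two environment variables for the base URLs), a test step could be as simple as:

    # Fail the build if the Apiary stub and the real implementation return different JSON
    diff <(curl -s "$APIARY_STUB_URL/spiders") <(curl -s "$REAL_API_URL/spiders") || exit 1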

Demo 4: Apiary Stub and Traffic Inspector

In the documentation view, you can see that Apiary has set up an endpoint for me, which

I can use to test my API. They call it a mock, but it's a real external service, so I'd call

that a stub, but that's a side point.

So, in a new browser window, I can point to the base URL for the stub and then to the

Spiders endpoint, which I've defined, and I get my sample JSON out as the response.

So, this is a live REST API that implements my blueprint, but it just returns static data.

That's pretty useful, although it is limited. I didn't supply an authentication header

because I've gone direct from the browser, but I still get the response.

Apiary doesn't let you build logic into the stub to give different responses for

different conditions, but it does have a request tracking component, so I can switch

back to Apiary, open the Traffic Inspector, and see that my endpoint has been called,

and Apiary also validates the request to see that it meets the contract. The request is

flagged with a warning because Apiary knows this request didn't meet the contract. I can

see why the request has failed, and if this was from the app guys integrating with Apiary,

they'd know they're missing the authentication request header and the API version

header.

We'll come back to this in the next module when we look at tools for making requests to

REST APIs so we can build up the correct request and check all the details in the

response.

Apiary Features

That's our first essential tool. It enables you to very quickly shape an API, and it provides

documentation and a stub endpoint for testing all with very little effort.

I've been using Apiary for a year or so, and it's become a key tool for delivering APIs.

It supports and encourages collaborative design so you build the look and feel of your

API working with clients and providers to make sure that it meets everyone's needs. And

you can grant access to team members or make the whole API public so the

documentation is visible to everyone.

And then the tool goes on supporting you through the build process giving you an easy

way to verify that your actual API matches the contract and the client matches the

contract. So, if there's a mismatch, you can quickly track it down, and the Traffic

Inspector helps with that. GitHub integration keeps your contracts versioned.

The blueprint format is Open Source, and there are a set of tools available to integrate

with your own processes. Apiary does more than I've shown, and it's all equally easy to

use, like you can define resources independently from endpoints and reference them in

the blueprint so you don't have to keep repeating the same JSON.

And Apiary's free. Leastways, the free tier gives you all the functionality you need to be

productive.

Demo 5: Spider Log API - V1

Here's a more complete API in Apiary.

I've added a POST to the Spiders endpoint for creating a new sighting and added an

endpoint for a specific sighting with GET and PUT methods to read and update.

I've linked this blueprint to a GitHub repository, so for every change I save, Apiary

has pushed a commit to the repo. Here are the different commits where I can see the

raw text for the blueprint, and I get all the usual Git goodness like highlighting the diffs.

There are also a few switches you can play with in Apiary if you're building a public

API. You can make the whole blueprint public so anyone can view the documentation,

and it gets indexed by the search engines. And you can tweak the UI and switch to the

more fancy documentation view. When I'm looking at the documentation for an endpoint

by default, I see the raw HTTP request and response.

But I can also use Apiary to generate some client code for me.

Alternative Tools – Swagger

Alternative Tools – RAML

Module Summary

2. Testing

Testing REST APIs

This is the next module in Five Essential Tools for Building REST APIs where we'll look

at tools for testing APIs, sending HTTP requests, and checking the response so you can

test the API behaves as you expect.

REST sits on top of HTTP, the transport protocol, which specifies what kind of data is

being sent, and HTTP sits on top of TCP/IP, the network protocol which specifies how

the HTTP content is physically split up and sent on the wire.

When you're building an API, you need to be able to test your API and see what's

happening at these different levels.

In this module, we'll see tools that let you do just that. We'll start with REST Clients that

you use to make requests to your API and see the response, moving down to Web

Debuggers that let you see and modify the underlying communication, and finally with

Packet Sniffers that show you the actual network traffic involved in your API calls.

At the consumer level, HTTP is a pretty simple protocol. You build a request that

consists of some headers and a payload. You send it to a URL with a method specifying

what you want to do. The server builds a response which contains some headers and a

payload and sends it back along with a status code saying how the request went.

That's exactly what a browser does, and my favorite API testing tool, Postman, is an

extension for the Chrome browser, so it's cross-platform. It's a simple tool to use,

so we'll spend some time looking at what it can do, and then I'll highlight a couple of

alternatives and move on to more detailed tools that let you dig a bit deeper into what's

happening with your HTTP request.

Demo 1: Postman

To use Postman, you'll need a recent version of Chrome, and then you can just install

the extension from the Web Store. It runs as a packaged application in a separate

window so you get a dedicated interface for testing your APIs.

Let's start simple with a GET request to my Spider Log API running on the Apiary stub.

I've got the base URL for the stub, and I can enter the full URL, hit Send, and I get the

response. Remember, this is a static response from the stub, and Apiary doesn't do any

validation. So, I get the response even though I haven't sent any authentication

credentials.

Postman lets you parameterize the URL so I can replace the Apiary stub host with a

variable name, baseUrl, enclosed in double braces. That lets me easily switch between

environments, so in the environments drop-down, I can add a new one, call it the Apiary

Stub, and set the baseUrl variable to be the Apiary Stub URL.
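As a sketch, the parameterized request then looks something like this, with the variable resolved from whichever environment is currently selected (the path is from the Spider Log blueprint):

    GET {{baseUrl}}/spiders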

Run that again, and I get the same response.

Postman has the best security support in any of the browser-based REST clients. I

can use Basic, Digest, or OAuth, and each of these tabs lets me capture everything I

need to successfully authenticate with the correct protocol. The OAuth 2 tab even lets

me set up Postman for the callback and grab an access token, which I can add as a

header or append to the URL for my request.

The response defaults to JSON, although I haven't specified the format that I want in

the request. I should explicitly do that in case the API changes its mind about the default

format. So, I can add an Accept header and request XML instead. That response is the

same data, but as XML, and Postman gives me a nice outline pane to navigate through

it.
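As a sketch, the raw request Postman builds with that Accept header would carry lines along these lines (the host and credentials are placeholders):

    GET /spiders HTTP/1.1
    Host: <apiary-stub-host>
    Authorization: Basic dXNlcjpwYXNzd29yZA==
    Accept: application/xml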

Demo 2: Postman Collections

Postman keeps a history of your requests so you can easily run them again. But you can also save your requests into a collection.

I'll create my Spider Log collection and add a folder for the Spiders endpoint so I can

group my collection by resource. I'll add this request to my collection, and now I can run

it easily without having to track through the history.

Of course, Postman lets me flex the different HTTP methods. So, I can switch to Post

and use the same Spiders endpoint to create a new spider sighting for my user. I'll add

this post to my collection and then repeat my GET, which returns me the original Spider

in my list and the new one that I've just created.

Postman lets you export your collections so you can download the collection and

share it among the team. Or, if it's a public API, you can host the collection in

Postman's own API directory and make it available to anyone.

By publishing your collection and linking to it from your API documentation, you

can give clients a way to test your API and see exactly how to use it and what they can

expect it to do.

REST Client Alternatives

Postman is a great REST client. It's easy to use, runs on any platform that has Chrome,

and gives a nice way of building, collecting, and sharing complex HTTP requests.

It does a lot more than I've shown. It has a proxy built in, so it can intercept and modify

HTTP traffic, which lets it overwrite request headers that Chrome normally sends. And

there's a very reasonably priced upgrade pack, which adds a whole lot of testing

functionality so you can add checks like the response status, duration, and verify the

content of the response. Postman, itself, is free and unrestricted for normal requests, but

the upgrade turns it into a powerful, automated test tool. You can run whole collections

of tests and write scripts to orchestrate them. So, you can build up a collection of tests

that flex your whole API and check the health of an environment with a single click or

through the command-line tool, Newman. If Postman integrated with GitHub to

automatically commit and push collection changes, it would be just about perfect, but it

does depend on Chrome, which isn't always an option.

And there are alternatives.

HTTP Debuggers

REST or HTTP clients give you a nice way of interacting with your API and testing

the functionality from the outside, but they don't show you what's happening when

the client sends a request and gets a response.

HTTP debuggers, also called Web Debuggers, let you see the web traffic going

between your clients and the API. They show you exactly what's being sent and

received and let you modify the requests so you can tailor them to flex scenarios that

are, otherwise, difficult to test. They're a lower-level tool that's very useful when you

need to dig deeper into the detail of your API conversation.

So, if the client requests JSON

I can intercept the request, change it so we actually request XML, and then the

XML response gets sent back to the client, and we can test how it reacts with the

unexpected format.

These tools can be way more advanced than the simple REST clients we've seen, but

they give you a very good understanding of what's happening under the covers. A good

all-rounder is Burp, which is a suite of tools that's primarily used for security

testing, but it's very useful for general API testing too. And you can ignore a lot of the

advanced features and use it as a free and cross-platform HTTP debugger.

Demo 4: Burp

Burp runs as a proxy in between the client and the server, so it can record all the HTTP

traffic that it sees. It's got a no-frills UI, and it's clearly focused on function over form.

In the options tab, here are the proxy details. So, I can set Firefox up to use the proxy

running locally from Burp, which by default is on localhost port 8080, and any calls

that come from Firefox, including the RESTClient plugin, will go through the Burp proxy.

This is the Firefox RESTClient, and I'll set a GET request to my API to get the list of

spiders for a user.

This request just hangs because, by default, the Burp proxy intercepts calls and

keeps them waiting until you forward them on.

Here's my request waiting in the Burp proxy. It's in raw HTTP format, and I can see this

is a GET request to my URL using HTTP version 1.1.

Because my RESTClient is running in the browser, there are some request headers

going out which I haven't specified. Those are added by Firefox.
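A sketch of what the intercepted request looks like in raw form; everything below the request line is a header, and the browser-added values shown here (User-Agent, Accept-Language, and so on) are only examples:

    GET /spiders?page=1&size=10 HTTP/1.1
    Host: api.spiderlog.net
    User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:40.0) Gecko/20100101 Firefox/40.0
    Accept: application/json
    Accept-Language: en-GB,en;q=0.5
    Accept-Encoding: gzip, deflate
    Authorization: Basic dXNlcjpwYXNzd29yZA==
    Connection: keep-alive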

I'll forward the request, which means Burp passes it on to my server.

I've also got the proxy set up to intercept responses, so now I see the response from the

API waiting in Burp where I can forward it back to the client.

And there's my response, which the RESTClient shows in a much friendlier format.

I can use the Burp proxy to edit the request and response, so I'll send it again and

take out the user agent, which might be something I want to test if my API does content

negotiation based on the type of client. Forward that on, and I can edit the response

from the API before forwarding it back to the client.

So, if I want to test what happens when the response body is not in the expected

format, I can change the Content-Type header to XML, even though the body is

actually JSON. Forward that, and the response goes back to the Firefox RESTClient.

The response in RESTClient is the hacked version from Burp, so it looks like it should be

XML. When I try to preview it, though, I get an error because, actually, it's JSON.

I can change any parts of the request in the proxy, so I can change the destination

for the request. I can switch my Spider Log API request to go to a different host so I

can use 503.badapi.net, which always returns a 503 service unavailable response. Now

when I make the call, I get a 503, and the client is unaware that the request has been

routed to a different host by the proxy.

Burp's good for this sort of exploratory testing.

Demo 5: Burp Security Features

But its real strength is in security testing. We'll have a quick look at what Burp can do to

expose weaknesses in your API.

I'll make a post to my API to create a new Spider Log.

That's going through Burp, so I can intercept the request and response, but I won't alter

them. I'll just forward them on.

Once I have a request captured in the proxy history, I can do some good things with it.

I can send this to the Intruder, which lets me modify the request in a structured way to

look for security weaknesses.

Intruder understands request formats, and it has highlighted the property values in the JSON

payload.

Those values are potential sources of attack if a hacker can force malicious content into

your API. I can ignore the cookie, which is just used for sticky sessions, and I'll set up

the Intruder to use a list of values to inject into those JSON fields. And in the list, I'll just

add one string, which contains some malicious HTML. I need to tell Burp not to escape

the HTML.

And in this case, if the API doesn't sanitize the incoming request, then it will store a

JavaScript call to show an alert when viewed in the browser.
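So one of the generated requests would look roughly like this, with the payload dropped into a single field (the field names and host are illustrative):

    POST /spiders HTTP/1.1
    Host: api.spiderlog.net
    Content-Type: application/json
    Authorization: Basic dXNlcjpwYXNzd29yZA==

    {
        "description": "<script>alert('hacked')</script>",
        "imageUrl": "/images/spider-002.jpg"
    }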

Run the Intruder, and in the free version of Burp, there are some restrictions on what I

can do, but it goes on to make four POST requests to my API, and I can see in each

request it has inserted my malicious payload into a different field.

Each request gets a 200 response, which suggests that the API is happy with them. I'll

switch the proxy intercept off, and back in RESTClient, make a GET call to receive all

my spiders.

And I've been hacked.

That simple test has shown my API isn't validating or sanitizing the input data that

it receives, which means that it's down to the client to enforce the security.

HTTP Debugger Alternatives

Burp is well worth getting to know better. You can use it for testing with the proxy tool

that we've looked at, or you can intercept requests and responses and modify them

without changing the client. With the repeater, you can simulate requests coming from

clients with greater control than with browser-based clients because you're not tied to the

browser's HTTP stack. So, you can spoof user agents or origins. And there's much more

to testing with Burp, like configuring DNS so requests routed through the proxy can be

sent to a different host without the client knowing. The security features of Burp are

beyond the scope of this course, but it's relatively straightforward to do some attack

testing using the Spider to map the layout of your API and the Intruder to add malicious

content to your request payloads. For the basic features, Burp is free and will give you

everything you need to flex your API in more imaginative ways than you can with REST

clients.

If you're not interested in all the security features of Burp, or you just can't stand that

Java UI, there are some good alternatives that work in the same way running as proxies

between your clients and the API server.

In the Windows world, Fiddler is hugely popular. It's a free tool, which is well looked

after, easy to use, and has a good feature set.

For cross-platform work, Charles is a great choice. It's easy to get started and has all the

features you need. You can even route other traffic on your network through the Charles

proxy on your machine. So, you could test your mobile app directly from your tablet and

see the traffic on your laptop. Charles isn't free but is not expensive for the features it

offers, so if you don't get on with Burp or Fiddler, you can try out Charles with the

evaluation version.

Packet Sniffers

Browser-based REST clients and HTTP debuggers give you most of the tools you need

to test your API. But, occasionally, you need to dig deeper by one more level to see

what's actually being sent and received on the network.

Packet Sniffers monitor your network interface and keep a copy of all the TCP packets

that pass through. That includes low-level information like the handshake messages that

get sent between client and server.

It's not often that you need a trace at this level, but when you've deployed your

API and you're calling it remotely, you can get some strange problems from the

network setup, and you need a Packet Sniffer to understand exactly what's being

sent on the wire.

Wireshark is a great tool for that. It captures and shows a huge amount of detail but is

very easy to get started and focus on the information that you're interested in. It's

another free cross-platform tool.

Demo 6: Wireshark

When you run Wireshark, you need to tell it which network interface you want to

listen on. It's a low-level tool, and it gives the interfaces their system name rather than a

friendly name to help you choose. But the home screen tries to make it easy and selects

a sensible default, and I only have one interface, so I can start listening here.

When you start the trace, it captures all protocols, so you'll see a good amount of

traffic running through even though you're not making any network requests

yourself. The network protocols that we use are very chatty and regularly send out

broadcast messages and health checks.

We're not interested in any of that, so we can add a filter so that all we see is HTTP

exchanges. Because Wireshark listens on the network interface, you don't need to set

up a proxy to make calls. So, I'm configured to go direct to the network in Firefox, and I

can make a call to my remote API and see all the traffic in Wireshark.

Here's the GET request

And I can see the traffic split by layer, so in the HTTP view, I see things like the host and

the authorization header. HTTP runs on top of the TCP/IP network protocol, and a single

HTTP exchange can actually be broken down into multiple TCP packets. This is a small

request with just one packet, and TCP itself is built on IP, and I can see the IP traffic

here.

And Wireshark understands JSON so it can present the response logically. This

breakdown is interesting, but it's not all that useful. Often when you get down to this

level, you want to see the actual request and response as a full conversation. And

Wireshark does that with the Follow TCP Stream command.

Look closer, and we'll see that Firefox added an Accept-Encoding request header

telling the server it's happy with the compressed response, and the server has

honored that and sent a gzip encoded response body saving network traffic.

Supporting gzip is something that can make your end-to-end calls faster. And looking at

the packets is the best way to verify that the client and the server are actually using gzip.

Compare that to an uncompressed response.

In cURL, I can make a call to my Spider Log API on HTTP and not specify that

compressed responses are okay. So, the client sees the JSON response as expected,

and if I follow the TCP stream here in Wireshark, I can see the server has responded

with plain text and not a compressed body.
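As a sketch, the two cURL calls differ only in whether the client advertises gzip support (the host name is illustrative):

    # No Accept-Encoding header, so the body crosses the wire as plain, uncompressed JSON
    curl http://api.spiderlog.net/spiders

    # --compressed adds an Accept-Encoding header and decompresses the gzip response for you
    curl --compressed http://api.spiderlog.net/spiders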

Another good use for Wireshark is to verify encryption. If I make an HTTPS request,

then the traffic is encrypted at the transport level. GitHub has a status API that runs over

HTTPS and that ensures the conversation between client and server can't be overheard

or tampered with. Request the status in cURL, and I get a JSON response, but there's

no new HTTP traffic in Wireshark.

That's because HTTPS uses a different transport protocol. Change the filter to SSL,

and I'll see my exchange with a lot more traffic. I can still follow the TCP stream, but it's

mostly garbage, although I can see the host name in the request and some of the

certificate details, namely GitHub, in the response.

In the stream for this conversation, which was for a single HTTP request and response,

there's a lot going on. These are all part of the low-level SSL protocol where the client

and server handshake to see what protocols they support and then exchange keys and

then send encrypted traffic. If I own the SSL certificate, I can load it into Wireshark and

see the unencrypted content. But without owning or hacking the cert, I can't make much

sense of the traffic, so I can use Wireshark to verify that my HTTPS is set up

correctly.

Module Summary

3. HTTP

HTTP

This is the next module in Five Essential Tools for Building REST APIs where we'll look

at a tool which doesn't always get enough attention, HTTP.

HTTP is the communication protocol for REST APIs, and it's the most important tool to

understand if you want to build a fast, scalable, robust, and secure service.

The power and simplicity of HTTP is the reason why REST APIs have grown so

quickly to become the preferred way of connecting systems.

HTTP transports billions of gigabytes of data around the world every day,

and it has all the mechanisms you need to support the performance, reliability,

and security of your API.

Most of that traffic uses HTTP version 1.1, the spec that was released in 1997. And

some parts of the internet fall back to the original version 1.0 from 1996. So, it's a

protocol which has been fit for purpose for a very long time. And as the internet has

grown astronomically, the communication protocol hasn't needed to change to keep up

with the demand.

In this module, we're going to look at HTTP as a tool and understand which parts of it

can help you deliver better APIs. The protocol is a fair size, split between multiple RFCs,

but we're going to focus on three parts of it.

We'll start with performance and look at Caching in HTTP to see how you can

encourage clients to cache your responses and how that benefits your users and your

servers.

Then we'll look at DNS, the routing system of the internet, which directs a particular

client request to a particular server and see how you can use DNS to provide highly

available, highly scalable systems.

And, lastly, we'll look at SSL and see how encryption at the transport layer secures your

API without any change to the design or implementation.

These are parts of HTTP that are often not well understood, and that lack of knowledge

can produce designs which work but which could work more efficiently and more safely.

REST Design Using HTTP

A simple REST API design that uses HTTP but doesn't make good use of it would look

something like this -- a single entry point using HTTPS for everything and relying on load

balancing to give you scale and failover.

Every request comes to the same load balancer, and at peak times, the API servers will

have to work hard to service every request while clients could be sat waiting in a

processing queue.

With a better understanding of HTTP, that design can become much more sophisticated

without much more effort. In this example, we use multiple endpoints to segregate

types of requests. Requests for semi-static data, which is the same for every

client, are routed to a reverse proxy, which can cache responses from the API

service.

Caching isn't only useful for long-lived data. The cached responses may be set to

live for just one minute before the proxy returns to the server to refresh them. But

that means if you have a thousand requests per second for this type of data, they will all

be quickly served from the proxy cache, and your API only actually services one request

per minute from the proxy. That's one API request for every 60,000 client requests.

Transactional requests which are individual to different users use the secure endpoint,

which uses DNS to provide a second layer of load balancing. That could use the same

single data center as the original design, or it could use something much smarter to load

balance across different data centers.

That gives us much higher reliability because we can keep running even if the whole

data center goes down, and also better performance because client requests could be

routed to a server in a data center near them. This design can serve more traffic than the

original design with fewer API servers because they're only dealing with requests that

need individual responses. The general responses are all served from the proxy. And

even the proxy isn't called for all requests. Anything that could be long lived can be

cached client-side, so the request doesn't even leave the client.

HTTP 1.0 Expiration Caching

Caching is one of the best ways to improve user performance and reduce the load

on your API by encouraging clients to save a local copy of resources that they get

from the API.

The next time they need that resource, they can fetch it from the local cache rather than

returning to the API, which saves network traffic and server load.

HTTP has two very useful caching mechanisms, which date back to version 1 of the

protocol. And although they were designed for resources like HTML pages and

images, they apply just as well to JSON responses from REST APIs.

The first mechanism is Expiration Caching, which gives the best performance boost

and is the most difficult to get right. You use expiration caching when your API

responds with a resource that isn't likely to change for a reasonable period. So, the client

can get a copy from the server and save it locally, then use that local copy until it expires

when it goes back to the server for a fresh copy.

That's all transparent to the client. The code should be the same whether the resource is

fetched from the API or from cache. You may have to deliberately opt into the caching

feature in your HTTP stack, but you don't have to worry about managing the cache

yourself. That saves latency and bandwidth for the client, and it can mean far fewer

requests that the API has to deal with.

Demo 1: Expires Header

Here's a request to the original version of my API.

Version 1 returns 200 OK with no caching headers and with a spideroftheday in the

response body. So, if I repeat the request, it still makes the same call to my API. I've put

a two-second delay into the API to make it clear when the call is going back to the

server, and the client has to wait two seconds to get exactly the same response.

For version 2, I've added expiration caching. I like the practice of putting versions

into the request headers, so versioning follows the same pattern as content negotiation.

So, here I'm asking to run version 2 of the API. Now when I make the call, I get the same

response body and 200 status code, but now there's an expires header saying that my

client, which in this case is the Firefox RESTClient, can cache this response until

midnight.
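A sketch of that version 2 response, with an illustrative date and body:

    HTTP/1.1 200 OK
    Content-Type: application/json
    Expires: Wed, 20 Apr 2016 00:00:00 GMT

    { "name": "Garden Spider", "imageUrl": "/images/spider-001.jpg" }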

When I repeat that call, Firefox gets the response from the browser cache so

there's no additional API request, and the response comes back instantly because

it's read from the local cache and not from the API server with the two-second

delay. Of course, clients usually have control of their cache, and they can choose to

empty them, but that doesn't affect the outcome.

If I clear mine down using the Clear Cache plugin and then repeat the request, the HTTP

client stack doesn't find a match in the cache, so it goes back to the server, and my

response takes two seconds. But when the client gets that response, it adds it to the

cache again so the next request gets served from the cache.

HTTP 1.1 Expiration Caching

The expires header is simple and very useful but is a bit crude. It's good when you

have some behavior that justifies a fixed expiration time, but you don't always have that.

With HTTP 1.1, we have the Cache Control Header to support expiration caching,

which is much more flexible. Cache control still uses the client's cache, but it

works on a lifespan rather than a fixed expiry time, so you can return a cache control

header with a max-age value of 3,600 seconds, and that tells the client to use the cache

copy for one hour.

Unlike the expires header, it doesn't matter what time the client makes the call. When

they cache the response, they will always use the cache copy for one hour, and

when the hour has passed, they'll come back to the server.

There's a lot more to the cache control header that you can do smart things with. Where

you have proxies in the network between the client and the server, you can tell the

proxy to cache the response, too, but for a different time span. So you could have

clients caching the response for an hour but proxies caching for only 20 minutes,

meaning individual clients only come back for fresh data every hour, but proxies, which

could be serving many clients, come back three times an hour, which makes it less likely

that a proxy will give a new client stale data.

Demo 2: Cache-control Header

Version 3 of the Spider Log API uses Cache-control, so when I make this GET

request, the server gets called, and the response includes a cache control header.

The max-age value tells Firefox it can cache this response for 60 minutes, so when I

repeat the call, the response is loaded from the cache.

The client side of this works in the same way as expires. So, when I clear the cache

and repeat the GET call, the request goes back to the API.

In Firefox, I can see the cache contents from the about:cache screen, which tells me

there's a single item on disk, my spideroftheday, keyed by URL. So, this is where Firefox

will fetch the resource if that URL is requested again.

The cache control header also has an s-maxage value, the shared max-age, meaning

that intermediaries can cache the response for 20 minutes.
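So a single response header covers both cases; a sketch with those values:

    Cache-Control: max-age=3600, s-maxage=1200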

So, if I had a caching proxy between my client and the API, then even if my client cache

was empty, the response would come from the proxy and not the API. Proxy cache

responses aren't instant because it's still a network call, but they do avoid the API

compute, which in this case is that two-second delay.

HTTP Validation Caching

Expiration Caching gives you the biggest win because it saves network traffic and

API compute, so clients load more quickly, and the API has fewer requests to deal

with.

But it's also dangerous because once the response is cached, clients will use it

until it expires. And if the data has changed in the meantime, the client will never

know. Where another client gets the resource for the first time from the server, they will

get the updated version, and these two clients will be out of sync. More importantly,

expiration caching is only useful for shared resources.

For personalized resources, HTTP clients won't cache the response even if it gets

returned with caching headers. If your request has an authorization header, then

it's probably for a personalized resource, and it doesn't make sense for the

browser to cache it. Another user on the same machine could see your data if the

browser served your response from cache rather than returning to the server to fetch the

new user's response.

There are a lot of scenarios where you can use expiration caching and balance the risk

of stale data with the performance benefits, but when you can't accept the client having

out-of-date resources, or if you're working with personalized data, you can use

validation caching instead.

With validation caching, your HTTP response includes a header that indicates the state

of the resource, either an ETag, which is like a version number for the resource, or a last

modified timestamp, which identifies when the resource was changed.

If your API returns either of those headers with a GET response, the client will store the

header value, and the next time the same resource is requested, it can make a

conditional GET request sending the API back the header value of the resource that it

has in its cache.

Then the API needs to decide if the client's resource is still fresh or if it's been changed.

If it has changed, the API returns the full resource with a 200 response containing the

new validation header values.

If the resource hasn't changed and the client's cache is still current, the API sends back

a 304 Not Modified response with no body, just headers, and the client knows that it's

safe to use its own copy.
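A sketch of that exchange, with illustrative header values:

    First request:

        GET /spiders HTTP/1.1
        Authorization: Basic dXNlcjpwYXNzd29yZA==

        HTTP/1.1 200 OK
        ETag: "b63f..."
        (full spider list in the body)

    Repeat request, quoting the cached ETag:

        GET /spiders HTTP/1.1
        Authorization: Basic dXNlcjpwYXNzd29yZA==
        If-None-Match: "b63f..."

        HTTP/1.1 304 Not Modified
        (no body; the cached copy is still valid)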

Demo 3: Etag and Last-modified Headers

Back in version 2 of the Spider Log API, we're using validation caching with an ETag for

personalized resources.

When I request my own spider list, I get an ETag header in the response with a string

value. That could be any value. It just needs to represent the current state of this

resource, so when it changes, the ETag will be different.

My ETag is a GUID that starts b63f. So, I'll post a new spider to my list,

get the list again, and see that the ETag has changed, and now it starts f6b09.

Now if I add an If-None-Match header to my GET request and pass in the ETag that I

received,

I get a 304 Not Modified response, which is the API telling me that my cached copy of

the resource is valid, and I can use that so it doesn't send me the response in the body.

I'll add another spider, make my GET call again with the old ETag, and this time, I get a

200 response, which tells me that the resource has changed.

I get a new ETag for the new state, and in the response body, I get the full resource,

which will now be added to the client cache.

The principle is exactly the same for using dates with validation caching.

With version 3, I've switched to using a Last-Modified date instead of an ETag.

When I add a new spider to my list and repeat the GET, I get a new Last-Modified

timestamp. If I make a GET call and add the If-Modified-Since header, then I get a 304

response with no body telling me that my cache is up to date.

Add a new sighting and with the old Last-Modified date, I get a 200 response with the

new Spider Log and a new Last-Modified date.
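A sketch of the date-based version of the same exchange, with illustrative timestamps:

    GET /spiders HTTP/1.1
    If-Modified-Since: Tue, 19 Apr 2016 10:15:00 GMT

    HTTP/1.1 304 Not Modified          (nothing has changed; use the cached copy)

    ...or, if a sighting has been added since that time...

    HTTP/1.1 200 OK
    Last-Modified: Tue, 19 Apr 2016 14:42:00 GMT
    (full spider list in the body)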


HTTP Caching

Caching in HTTP is very simple, and it can make a huge difference to how much

load your API can handle.

Without caching every single client request has to go to the server to get its response.

With caching, clients or proxies can serve the response from their own caches so the

API will have a lot less traffic to deal with. That gives you more headroom in your API. Fewer


calls to your servers means you can serve more clients more quickly.

Any shared resources, which are not likely to change, should use expiration caching

with an Expires header if there's a business rule for when the data goes out of date, or a Cache-Control header if you just want the client to cache for a fixed period.
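
For example, a shared resource like spideroftheday might come back with headers along these lines (the values are illustrative, not what the demo API actually returns):

    curl -I https://api.spiderlog.net/spideroftheday
    # Cache-Control: public, max-age=3600     (cache for a fixed period, here one hour)
    # Expires: Wed, 16 Mar 2016 00:00:00 GMT  (or an explicit cut-off from a business rule)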

Validation caching is useful for personalized data, which isn't the same for all users

and which may be expensive to fetch in the API. If a user's list of spiders had to be

fetched from multiple data sources, transformed, sorted, and paged, that could be

compute intensive. When we've done that compute once for a user, if we send them an

ETag or a Last-Modified date with their response, then the next time we get that client

request, we don't need to do all the expensive fetching unless the data has actually


changed.

To get the most benefit from validation caching, you should keep your ETags

somewhere cheap, like a memory cache or a fast document database so it's a very

quick operation to look up the tag and tell the client if they have the latest version. And

you can update or clear the value in any operations which would mean a change to the

resource state.
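
Purely as an illustration of that idea (the course doesn't prescribe a particular store), the current tag per user could live in something like Redis:

    # Store the current ETag for a user's spider list whenever it's computed.
    redis-cli SET etag:spiders:user123 '"b63f..."'

    # On a conditional GET, one cheap lookup decides between a 304 and a full rebuild.
    redis-cli GET etag:spiders:user123

    # Any write that changes the list clears the tag so it gets regenerated.
    redis-cli DEL etag:spiders:user123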

You should also be caching inside your API to minimize the amount of time and effort

required to service each request.

DNS

The Domain Name System, DNS, is used as the address book for the internet. It

finds the IP address for a domain name so clients can route their requests to the correct

server.


When you make an HTTP request, clients first query a DNS server to find the address

for the host name, and then they can send their requests directly to that IP address.

DNS is a multi-tiered system which uses caching heavily to make sure that

requests aren't slowed down by the process of finding the server to send them to.

HTTP clients have their own hostname cache, and there are intermediate DNS servers

with their own caches.


Ultimately, there is one source of truth for a domain called the name server, and that

holds the current information.

With a reasonable understanding of DNS, you can get the most out of the infrastructure design for your API.

Let's say we start small with our Spider Log service, and we're going to host the website

and the API on the same physical kit. The website will live at www.spiderlog.net, but

when we design the API, we have a choice of base URLs to use. We could use

www.spiderlog.net/api or api.spiderlog.net. They look very similar, but they have very


different implications for our infrastructure design and our capabilities.

If we take the first option, everything is hosted at www.spiderlog.net. That's just

one domain name. The host of the URL is www.spiderlog.net, so whether you're a browser viewing /home or an app making an API call to /api/spiders, it's just one host as far as DNS is concerned. The www part is an alias defined by a CNAME entry, a canonical

name which points the Spider Log domain to a single alternative endpoint. It's like a

forwarding address. So, if I was running on Amazon AWS, my CNAME might point

www.spiderlog.net to spider-log.elasticbeanstalk.com. This design means all traffic

needs to come to one endpoint. That endpoint could be a load balancer, but it will

distribute all the traffic equally, and that means you can't scale your API and your

website independently.

But with the other option, we have one domain for web traffic and another domain

for API traffic. So, in the DNS name server, we have two CNAMEs set up, one for

www and one for API. Two CNAME aliases can point to the same location. So, if we


start small, we could have everything on a single server.

But as we grow bigger, we could change that to use load balancers and a couple of

servers for the website and multiple servers for the API. All we have to do to make that

change is set up the new infrastructure and alter the CNAME entries in DNS.


It's all transparent to the HTTP clients. They keep using the same addresses. CNAMEs

also let us segregate parts of our API.

For GETs, we could have separate CNAMEs for static and transactional resources,

which would let us put a caching proxy in front of our API servers for the calls that we

know are likely to be cache hits. Or we can use a CNAME for POSTs to separate reads

and writes so we can scale them independently. Or even provide a fast lane for our API

so VIP customers get a better experience.

Demo 4: DNS CNAMEs

Let's see how that works. I have my domain, spiderlog.net, registered through

name.com, which is where I bought the domain.

In the DNS section, I have three entries at the moment -- API, which points to an Azure

website where the API is running, image, which is my media store for images on


Amazon S3, and www, which points to my static web content also hosted on S3.

In MX Toolbox, which is a useful tool for checking CNAMEs and other DNS entries, I see

the IP address and the host for the generic S3 website domain, and that Amazon is using the UltraDNS provider.
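
You can run the same kind of check from a terminal with dig, using the hostnames from this demo:

    # Follow the CNAME for the web and API hosts.
    dig +short CNAME www.spiderlog.net
    dig +short CNAME api.spiderlog.net

    # And query the name servers for the domain as a whole.
    dig +short NS spiderlog.net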


Advanced DNS

The name servers you get from your domain name registrar don't tend to have

advanced DNS functions. They just let you do the basics. When you need to set up

high availability and disaster recovery, you'll need to move to a dedicated DNS service.

With advanced DNS systems, you can set up rules so that requests are dynamically

routed to the host which is going to give the client the best response.

You still set up a CNAME, but rather than resolving to a single static location, you

configure multiple locations and let DNS decide which one to use for each request. The

rules could be quite simple, like distributing traffic evenly between different data centers

or even different clouds. And the rules can become much more involved. You can

configure the DNS provider to ping each of your endpoints every few seconds and

resolve the next request by sending it to the endpoint which is running fastest. The pings

are also a health check, so if one endpoint becomes unavailable, the DNS server will

stop routing traffic to that endpoint until it comes back online.

Demo 5: DNS With Route53


DNS services can get very expensive. The best of them offer 100 percent uptime, but

the Route 53 service from Amazon's AWS cloud has most of the features you need and

is very reasonably priced. A few dollars a month will buy you millions of DNS lookups.

I've set this up to use weighted routing with three quarters of client requests coming to

the European data center, which has a health check set up so it only gets routed to if it's

healthy. The other API record also has a 20-second TTL, points to my American data

center, and is weighted to get 25 percent of calls. That short TTL is less efficient as it

means we don't maximize the DNS cache, and clients have to make frequent calls all the

way back to the name servers. But it means if an API server goes down, there won't be

lots of clients with cached DNS entries trying to use that unavailable server. They'll all

come back to Route 53, which knows the server is offline and will send them to another

one.
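
For reference, a weighted record like the European one could be expressed as a Route 53 change batch roughly like this; the zone ID, health check ID, and Azure hostname are placeholders, not values from the course:

    {
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "api.spiderlog.net",
          "Type": "CNAME",
          "SetIdentifier": "api-eu",
          "Weight": 75,
          "TTL": 20,
          "HealthCheckId": "<health-check-id>",
          "ResourceRecords": [{ "Value": "<eu-site>.azurewebsites.net" }]
        }
      }]
    }

Saved as api-eu-weighted.json, that can be applied with the AWS CLI:

    aws route53 change-resource-record-sets \
        --hosted-zone-id <zone-id> --change-batch file://api-eu-weighted.json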


The health checks are set up just to ping the spideroftheday endpoint. And if Route 53

gets no response or gets a non-OK response, it considers that endpoint unhealthy.

If I run some CNAME lookups for the API, some resolve to the European data center,

others resolve to the U.S. data center, so with the weighting rules, I'm spreading the load


across my server estate.


With the health checks, I get failover too, which I can prove by calling my API.

So, I make a GET request and get a 200 response. In the Azure portal, I'll stop my

website in Europe, and if I repeat my GET call quickly, I get a 403 response telling

me the site is disabled. My TTL is set to 20 seconds, and my health check is set to 10

seconds. So, in 30 seconds, Route 53 will have flagged the European site as

unavailable and will be directing all requests to the U.S., and the DNS caches will

have cleared so the next request goes back to Route 53 and is told to use the U.S.

Azure website. And my API is back up again.

You can set an even lower TTL if you need the failover to happen faster, but at this level,

we can use Route 53 to failover between different data centers or different clouds so

we're catering for exceptional situations, which may result in a few seconds' downtime

for your servers. There's much more to DNS than CNAMEs. It's actually a very highly


performant, endlessly scalable database that can be used for different sorts of network

data.

SSL

Caching and DNS are key parts of the HTTP protocol and making good use of them

gives you a lot of flexibility to provide better API performance and reliability.

The last part of the protocol that we'll look at in this course is HTTPS, the secure

transport for HTTP, which uses the same protocol but encrypts the traffic over a

secure channel between the client and the server.

We've seen using Wireshark how easy it is to reconstruct an HTTP exchange if you can

capture the TCP packets from the network.

A compromised machine could have a packet sniffer running without the user knowing it, sending any interesting-looking traffic, like a POST which contains credit card details, to someone who shouldn't be seeing it. It doesn't even need a compromised machine. The

network could be compromised with traffic being recorded and sent to a criminal gang or

a government agency.


And any HTTP exchange contains all the information they need to see who sent a

request, who sent the response, and all the content in between.

We'll briefly look at securing your API with SSL because it's something which a lot of

APIs don't do, and it's something you should consider for all your APIs. HTTPS works on

the basic security principle of a key pair.

To enable HTTPS on your API, you need a certificate on your server which contains

both public and private keys. The server sends the public key to the client, which

uses it to encrypt data, and the server uses its private key to decrypt it.

When a client first connects to an HTTPS endpoint, there's a handshake where the client

and server agree how to encrypt the traffic and exchange another key unique to their

session, which is used to encrypt and decrypt data for the life of that session.

There's a performance cost in the initial handshake, but afterwards the client and server reuse the same secure channel with fairly cheap symmetric encryption, so the overhead isn't noticeable.

Demo 6: SSL


The details for providing the server with your certificate and configuring your API so it

only accepts SSL are different for every platform.

But I will show what happens when we have SSL enabled for our API. I've bought a

certificate for api.spiderlog.net and configured the API to use it. The only change that the

client sees is the protocol, and the URL is now HTTPS. And provided the client has a full

HTTP stack available, they shouldn't need to make any other changes.

I make my request and get my response, which is the same JSON response, and I get

the same header values. But at the transport level, the exchange looks very different.

Wireshark shows me those SSL handshakes so I can see the client initiates the

connection, the server responds with the certificate and states what encryption it

supports. The client responds with a key and a preference for the encryption type, and

then the server responds agreeing to that encryption type, and now we have a secure

channel between the client and the server.
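
You can watch the same handshake from the command line with openssl, which also prints the certificate chain the server presents and the negotiated cipher:

    openssl s_client -connect api.spiderlog.net:443 -servername api.spiderlog.net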


Those handshakes add overhead to the initial call, usually about three-tenths of a

second, but it's only the first call that takes the hit.


In Wireshark, I can see the request response exchange, but when I follow the stream, it's

all garbage. That's the encrypted traffic going over TCP, which is meaningless to any

observers.

If anyone is listening to this exchange, they can see the IP addresses of the client and

server. That's part of the underlying TCP/IP protocol, so that isn't obscured by using

HTTPS. And they can estimate the size of the payload, but they can't see the URL or

any of the headers or body in the request or the response, so we get a good level of

protection from SSL for the content.

To see that, here's the GET request for a user's Spiders running over plain HTTP. With

Wireshark, I can see the traffic, see that basic authentication is being used, and


Wireshark even decodes the base 64 string and shows me the username and password.

With the same request over HTTPS, the whole exchange is encrypted, so not only can I

not see the value of the authorization field, I can't even see that basic auth is being used.

If you think your API is too insignificant for anyone to bother hacking, remember that

people use the same username and password for many services, so hacking your API

could give people a valuable set of credentials for trying with other services.

SSL & DNS

SSL can be and has been broken, but with a long enough certificate key (2,048 bits is the minimum that most authorities will issue), it would take a determined hacker to break your security.

There's an argument for encrypting all your traffic, which is a decision that Google took

when all the government snooping hit the press, but SSL certs do have a financial

cost, as well as the performance cost.

A certificate secures a particular domain, so if you're using CNAMEs to logically and

physically distribute your API traffic, then you could have multiple domains to secure,

and each one would need its own certificate.


You can get wildcard certs, but the wildcard only applies to a single level of the domain. So, if I had a cert for *.spiderlog.net, then I could secure www.spiderlog.net and api.spiderlog.net with the same cert but not static.api.spiderlog.net, because the wildcard only covers one level (something.spiderlog.net), not two (something.something.spiderlog.net). Certificates also expire, and expired certificates can bring your whole

service down as clients will no longer allow any traffic to the host. In the past, both

Amazon and Microsoft have had nasty outages on their cloud services due to expired

certificates.

So minimizing the number of certs you have to manage and managing them properly is

critical. In a highly distributed, highly available system, your SSL cert can be an

embarrassing single point of failure.
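
A quick way to keep an eye on expiry dates, again with openssl (a sketch, using the demo hostname):

    # Print the notBefore/notAfter dates for the certificate a host is serving.
    echo | openssl s_client -connect api.spiderlog.net:443 \
                  -servername api.spiderlog.net 2>/dev/null \
         | openssl x509 -noout -dates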

SSL & DNS & Caching

We've seen how SSL creates a secure channel between the client and the server and all

the traffic in the channel is encrypted. That makes things interesting for caching if you

want to use a proxy server to take load off your API servers.

For the proxy to cache an SSL response and serve it up to different clients, it

needs to decrypt the response from the source server to understand how to cache it and

then encrypt it again before sending back to the client. Some proxies can be set up to let

you do that, and you can upload your SSL cert onto the proxy so when the outbound


traffic is re-encrypted, it is done with the correct certificate. But a lot of proxies won't let

you do it, or they generate their own certs, meaning your client gets an encrypted

response that looks like it's been through a man-in-the-middle attack, which actually it

has, but the client doesn't know that it was intentional.

Personalized responses with authentication headers won't be cached anyway, which is

why a design that splits traffic to different CNAMEs uses DNS to support performance

and security.

Common resources are served unencrypted from the HTTP static host. That goes to a

reverse proxy, which serves most responses from its own cache and will refresh expired


responses from the API via the load balancer. Personal resources are served securely

from the HTTPS API host. That goes to the load balancer and then onto the API servers.

So, this setup means we segregate cacheable and non-cacheable content so the proxy

doesn't get requests that it can't fulfil itself, and the API doesn't get requests which could

be fulfilled by the proxy. It means additional entry points into your API, but the

performance and security benefits will be worth it if you can segregate your shared and

personal resources.

Module Summary

4. Performance Testing

Performance Testing

This is the next module in Five Essential Tools for Building REST APIs where we'll look

at performance testing our API with load-testing tools.

There are different flavors of performance tests, but they all have the basic principle of

sending lots and lots of requests to your API to simulate heavy client load, seeing

how your API copes, and using that information to design your production

infrastructure and plan your operations.

To generate enough load to stress your API, you need more than one machine. And not

so long ago, you had to manually set up and coordinate multiple machines to send load


at the same time with a master node to organize them and collect the results.

Now there are some excellent cloud-based, load-testing tools, which do all the hard

work for you and make it trivially easy to send thousands of requests per second to your

API and stress your servers to the max. To see just how easy this is, we'll start

straightaway with some load tests on my Spider Log API using my recommended load-

testing tool, Loader.io.

Loader has a web UI and an API for creating and running tests, so you can easily add it

to your build and deployment pipeline and include a load test as part of a nightly build.

Demo 1: loader.io


I'm logged into Loader, and I'll create a new test to flex my spideroftheday endpoint. This

API call is very simple. There's no authentication, and everyone gets the same

response.

Give my test a name, and I need to choose the load profile.

The test type specifies how the load is generated, and I can choose between a total

number of clients for the test, so the requests per second will be the total number

divided by the duration of the test, or a number of clients sending new requests

every second, or maintain a client load so Loader builds up to the total number of

clients. For APIs, you typically get load in bursts, so I find clients per second is the most

useful type of test, and I'll set it to use 100 clients. That means Loader will make 100

concurrent GET requests to the URL every second. Unless the API can respond to all

those requests within a second, it will still be servicing some when the next 100 clients

come in. So for longer responses, this means the API could be dealing with up to 200

concurrent client requests at a time. Specify how long the test should run, and we'll

start with 30 seconds, and move on to the actual client request.

I've already added my Spider Log API as the host, so I'll enter the path for the

spideroftheday resource, and I can add any HTTP headers that I need, which is where I

can request JSON. And that's my test set up. When I run this, Loader will allocate a

bunch of resources from its compute cloud and use them to generate the requests. More


clients means more of Loader's compute power, but that's all transparent to us.

For any platform, there's a limit on how much load your API can take before performance

degrades, and, ultimately, it will refuse to handle new requests. If I use enough load, I

can take down the domain in a distributed denial of service attack, so Loader won't let

me run this test until I've proved that this is my domain.


To do that, I had to upload this text file to the root of the domain, and Loader verifies that

the file exists every time I run the test.

Obviously, I can only put that text file in the root if I have access to the domain, so I

can't use Loader to DDoS somebody else's host.

I've already verified the host for my API, so I can run my test now.

Loader starts by reserving the compute for the test, which gives you enough time to

ignore the hamster diagrams, and then it starts sending load. While the test is running,


you get this nice graph showing you the total number of users and the average response

time, together with cumulative stats for the session. At the moment, the average

response time is 13 msec and falling, and we can see the number of users has leveled

at 100, meaning the API is dealing with all requests in under a second. My 30 seconds is

up, and now I get the stats. I can see my server sent 3,000 responses over 30 seconds,

which exactly matches the 100 client requests per second with an average 11-msec

response time and with no errors or timeouts.

So, we have some baseline performance stats. Loader also shows me the breakdown of

response statuses. Here, mine are all 200s, but if the API were stretched, I'd be seeing


500s, too

And I can also see the bandwidth for the requests sent by Loader clients and the

responses from my API.

Demo 2: Variables With loader.io

If you know the client will send several requests in sequence, you can set up a load test

to exactly simulate that. I'll copy my original test, which just called spideroftheday and set

up a new test to mimic when the client application loads. For that, I need to add a call to

get my spider list. This call is different because it's personalized. So I'll use HTTPS, and

I need to send an authorization header. I'll start simple and use the token that I have

saved from my Postman collection, so when this load test runs, every client will request

the spideroftheday and then the spider list for the same user. I'll run this for just 15


seconds to verify that it's working correctly.

Again, we see the stats building up, but now the average response time is the total time

for all the URLs in the test, so it's the combined time for fetching my spideroftheday


and my spider list.

At the moment, Loader doesn't let you specify that clients should make some requests in

parallel, so you can't set up tests to mimic clients which make concurrent requests to

your API. They have to be in sequence. That test is done, and the average response

time for the two URLs is ten times the response time for one URL, but it's still less than

two-tenths of a second. With the paid plans, Loader would also show me the

breakdown for each URL.


Hard coding input variables in your test can be fine if you know the path through the

code is the same for any value of that variable.

But if the variable changes the code path, then you need your test to be smarter to

simulate real traffic. If my API loads the user's spider list from storage on the first fetch

and then caches it for a few minutes, this test will be mostly hitting the cache, and we'd

see very good performance. But that doesn't give us a true representation when we have

lots of users connecting just once, because all their responses would need to be loaded

from storage, and we wouldn't be getting any cache hits.

To flex those different code paths with each client run, Loader lets you put variables into

your parameters using the same double brace format as Postman. So I can replace my


hard-coded authentication token with a variable called Token.

Now I can give Loader the URL of any publicly accessible file which contains a set of

values to use for that variable.


The file is in JSON format, and it needs to contain an array of keys, which are the

variable names configured in the test, and an array of arrays, which contain the set of

variable values for each different client to use. My JSON file has a list of auth tokens.

In the test, the first client will use the auth token ZTA, then the next one 0TK, and so on.

So, they all represent different users. These tokens are all fakes just to test the API, but

if you have scripts to generate test users, you can also generate a matching token file for

your automated load test at the same time.
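
Based on that description, the variable file might look something like this; the field names are an assumption to check against Loader's documentation, and the tokens are the truncated fakes from the demo:

    {
      "keys": ["Token"],
      "values": [
        ["ZTA..."],
        ["0TK..."]
      ]
    }

Host the file somewhere publicly accessible, such as an S3 bucket, and give Loader its URL.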

Now when I run the test, the average response times are doubled. Even with my stub

API implementation, I have to do different work for different users, so using variables like

this gives me much more realistic results.


Baseline Performance Tests

You can use Loader to run different types of load tests. You'll want to know how your

API behaves with average load, peak load, and way over peak load to give you

some baseline performance figures.

That's just a case of cloning a test that flexes your scenario and running with

increasingly higher numbers of clients. Your average expected load may be 100

requests per second, and your expected peak could be 500 requests per second. You

can run the same test configuration in Loader with different numbers of users and

compare the output.

Typically, you'll see that higher numbers of users increase the average response time because the API is working hard accepting incoming requests and dealing with in-flight

requests.

Here, there are two servers, and the API can service 100 concurrent requests with an

average response time of 1.2 seconds. But when there are 500 concurrent users, the

response time increases to 4.2 seconds.


And if you scale your infrastructure in between test runs, you'll know how much kit you

need to serve your expected load and to respond to all client requests in a reasonable

time. As you add more servers, you should see lower response times with higher loads.

Now with four servers, the API handles 100 requests with sub-second responses, and it

handles 500 concurrent requests in 1.4 seconds.

The extra servers mean performance is acceptable at peak load and very good at

average load.

Stress and Soak Tests

You also want to know what your API does when it gets much more traffic than it was

designed to handle. If you run with four times the expected peak at 2,000 requests per

second, you'll see much worse performance. The API will slow down massively, so if

your test mimics clients with a modest timeout, like 30 seconds, you'll see lots of

timeouts in the stats. And if the API's maxed out and can't cope with any more requests,


it will send 503 Service Unavailable responses.

A stress test is a good way to see how your API behaves when it's under an

unreasonable amount of pressure. You should see slow responses, timeouts, and

503s, but you shouldn't see any other errors. If your API starts throwing functional errors

under excessive load, then you have an implementation problem. If it doesn't slow down

or throw 503s, then you're either running much more kit than you actually need or you've

built the world's leanest and most efficient API. Baseline performance and stress

testing is just a case of creating test configurations in Loader and running them

repeatedly. You'll want to run them for a long period to smooth out any temporary

factors like warmup times or caches refreshing, but the one-minute runs that you can do

with the free tier in Loader will give you a good idea.

You'll also need to do soak testing running a constant moderate amount of load into

your API for a long period, 12 hours or more. This will give you confidence that once

your API is running, it will keep running, and it will find any nasty issues with


memory leaks or date changes when you run overnight. Loader supports that too.

It has its own REST API, which you can use to start tests. You define them in the web UI

in the normal way, and then you can start them through the API. So, it's trivially easy to

run soak tests for long periods by repeatedly running the same tests. You build your test

to run for one minute with a reasonable amount of load, and then you can simply write a

script to start a test run by calling the Loader API with cURL. In your script, wait 60

seconds for the run to finish, and then call again. You can do that in a loop for as long as

you like and just leave it running.

We'll do that next so you can see how the Loader API works.

Demo 3: loader.io API

I've already set up the test configuration, which will be my soak test.


It runs at 300 requests per second for one minute. And from the Webhook page, I can

see the unique ID for this test.

Loader's API documentation is a little bit thin, but it does cover what you need.

To start a new test, I need to make a PUT request to the runs collection for this test

resource.

Personally, I would have modelled that differently because the PUT isn't changing an

existing resource, so that method doesn't follow the HTTP model. You're creating a new

resource in the run collection for that test, so to me, that should be a POST, but REST

API design is a different course. I've got my API key in the request header set up in


Postman, so I can send this, and it will take a while to come back.

That's a good thing. While I'm waiting, Loader is allocating its resources, like we see in

the UI with the hamster diagram.

So, the API returns when the test has actually started running. In the UI, I can open the

test, and I see that there's a run in progress.


So the web view is always up to date, even for tests which were started through the API.

I'm seeing some alarming variations in response time here, which I'd want to look at, but

I guess it's because Loader is sending requests from one region, and Route 53 is

splitting the traffic between API servers in Europe and the U.S.

I've used that handy Postman functionality to export my requests as a cURL statement,

and to make it a long-running soak test, I just need to call the API and wait for the run to

finish in a loop. Whichever scripting shell you use, that's pretty simple. Here's how it


looks in a Bash script for Linux and Mac users--an infinite loop which writes some

output, calls the API to start the test, and then sleeps.

Because we know the length of a test is one minute, and the API doesn't return until the

test is started, we can wait for exactly 60 seconds if we want to keep a constant load. Or

you could wait for longer if you wanted to let the API cool down in between calls.
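
I can't reproduce the slide here, but a minimal version of that script might look like this; the Loader endpoint, auth header name, and IDs are assumptions to check against Loader's API documentation:

    #!/bin/bash
    # Placeholders: the test ID comes from the Loader webhook page,
    # the API key from your account settings.
    TEST_ID="<your-test-id>"
    API_KEY="<your-loaderio-api-key>"

    while true; do
      echo "$(date): starting a new one-minute test run"
      # Start a run with a PUT to the runs collection for the test, as described above.
      curl -s -X PUT -H "loaderio-auth: ${API_KEY}" \
           "https://api.loader.io/v2/tests/${TEST_ID}/run"
      echo
      # The call returns once the run has started and each run lasts one minute,
      # so sleeping for 60 seconds keeps a roughly constant load.
      sleep 60
    done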

This is a very easy way to do some really valuable testing on your API. I'll start it so we

can see it running, and all I'll get is some output on the screen every time a new one-

minute run gets kicked off.

This example is a very simple soak test, but you can easily make a more sophisticated

script running a constant load to simulate background levels and then mixing in peak

load tests, so you can actually build up a test run, which is a realistic day's traffic and

verify that your API copes as you want it to.

Load Test Alternatives

You can do more with Loader than I've shown. We've seen the testing part, where you

have a nice UI for building tests and viewing results, and the API, which lets you

automate running tests. And you can also query test results in the API. But Loader also

integrates with lots of other systems like Jenkins, GitHub, and TeamCity, and a whole


host of other services, so you can use it as part of your build process. Loader supports

Webhooks, too, so you can make POST requests to any URL when a test run has

completed and integrate Loader with your own custom tools. The free tier lets you run as

many tests as you like up to a maximum length of 60 seconds each and a maximum of

10,000 clients per test. That's a perfectly usable amount of load testing for many

projects.

But if you need more, you can upgrade to the pay monthly premium service for unlimited

tests of up to 10 minutes with up to 100,000 clients per test, and you get richer analytics.

That means you could use the free tier for the build phase of your project and then pay

for some premium usage in the run up to major releases. Loader's a great tool. If you've

never done proper load testing, it's super easy to get started and get a good idea of what

your API can do. The analytics that you get from Loader are a bit basic. You get

good overall stats, but it's not great for drilling down. It groups failed responses into

400s, 500s, or timeouts, but it doesn't show you the breakdown of 500s. So, if you do a


stress test, you don't know if they're 503s, which is understandable, or generic 500s,

which is not.

A good alternative, which isn't free but is reasonably priced, is Blitz.io.

Quantifying Performance Tests

Cloud load testing tools like Loader and Blitz make it very easy to get answers to

important questions about your API, questions like how many servers do I need to run

and what performance can clients expect?

For REST APIs, which are or always should be stateless, your performance testing

should tell you how many concurrent requests each of your API nodes can handle

without error and what sort of response times the API will give you under different

levels of load.


But the tests should also tell you whether your solution scales linearly so you know if

adding more nodes increases the number of requests that you can handle in a

predictable way.


You'll need to do a range of tests to get those answers, scaling up and down in between

tests, and realistically, you should allow a full day to do a full set of performance tests.

But for most of that day, you'll just be letting the test run and collating the results at the

end. Loader or Blitz will do all the hard work for you. The output of your tests will be a set

of statistics that most people won't be interested in. But you can use those to compile a

set of headline statistics that people will be interested in.


Your headlines will be something like Soak Test with two server nodes over 48 hours.

Average response time is consistent and less than 0.2 seconds.

Load Test to support peak load of 2,000 requests per second. We need six servers to

consistently average one-second response times.

Stress Test with 10,000 requests per second and six server nodes. Response times

slow down to 27 seconds and 1 percent of calls fail because the client times out or gets


a 503 response.

That output is invaluable because it tells you your API will keep running because it's

stable and that it's fast enough to deal with peak load and give users acceptable


performance. It also shows your API is predictable.

So you know if it's taking ten seconds to respond, then it's running on empty, and you

need to add more nodes and then the responses will be faster. But your testing could

also identify that your solution just doesn't run quickly enough and needs a period of time

spent tuning the code or the design. Making good use of HTTP as we saw in the last

module will definitely help with that. But the worst case is finding that adding more nodes

doesn't improve performance, which means there's a bottleneck somewhere in your

solution, which you need to find and fix. Because, otherwise, your API just can't scale.

Module Summary

You should consider load and performance testing to be a necessary part of API

delivery.

I've seen more than one API project where a simple round of load testing has found a

fundamental but easily fixed issue, something as simple as setting your logging levels


too high in production. Adding latency to every call is something that you might not see

with exploratory testing with Postman. But a single run with Loader will expose your API

if it's running inefficiently, and then you can decide how much tuning you need to do.

I recommend the cloud load testing tools because they're so easy to use. They make

performance testing a breeze, and they easily integrate with build servers so you can do

load testing as part of your regular builds. And I recommend Loader because it has a

huge amount of functionality, and it has that free tier. When you can run a load test

against your API sending in thousands of requests per second without needing

management support to spend money and without spending more than an hour's effort,

it's practically negligent to deploy a REST API without knowing exactly how it performs.

5. Monitoring

Monitoring

This is the next module in Five Essential Tools for Building REST APIs where we'll look

at monitoring your API with some tools that make it easy to see what's going on inside

your API and help you diagnose problems.

When you've designed and built your API, tested and load tested it, and deployed it to the production environment, then you should be able to forget all about it and focus on

what you're going to build into the next release. But the reality is you can't do that.

Hopefully, your API will live in production far longer than it spent in development, and

running full time will almost certainly uncover problems that you didn't see during the


build phase. Maybe there are bugs in scenarios that weren't tested, so users are getting

500 errors from API calls that should be working perfectly.

Or maybe a new deployment has impacted performance for everyone.

Whether you're in a dev ops role or providing backup for a dedicated support team, you

need to be able to see what's happening inside your API so you can diagnose


problems, and monitoring tools will help you find and fix issues before you start

losing customers.

There's a growing industry providing services which give you insight into what's

happening with your API. They've grown from the explosion in cloud hosting and the

demand for consistent information across multiple platforms. And because of that, they

apply just as well to on-premises or IaaS solutions as they do to hosted PaaS solutions.

These tools work by collecting information from your API instances and forwarding it to a

central location where it's analyzed and presented to you in a useful way.

In this module, we'll look at hosted services, logging, and instrumentation.

Logging Levels

Logging is something you need to do explicitly in your API. Platforms and engineers

have their own preferred logging frameworks, but they typically have a similar approach

to the log4 family, which includes log4j, log4net, and log4js.

In your code, you explicitly write log statements to say what your code is doing at

different points, and the logging framework writes those logs to different outputs, which

could be a text file or a database or a remote service. Log statements are classified by


how detailed they are, and frameworks let you specify the logging level at runtime.

Logging isn't cheap, especially if your logs are being sent off to a remote server, so

you need to get the logging level right.

In production, you should run at the minimum logging level that still gives you the assurance that your API is running smoothly. We'll look at some great tools

for capturing and analyzing logs shortly. But because remote logging is so much

more expensive than local logging, we should look at the type of information that we

want to log and how we can classify different levels of log entry.

Getting the logging level right isn't easy, and you may need a few iterations to make sure

you're logging useful information without a flood of logs that you can't make sense of,

and you're also not slowing down your API with excessive logging.

Your logging framework should let you turn the logging level up at runtime so you get

more detailed information to help you diagnose when things go wrong.


Here's some pseudo-code for my spideroftheday API call.
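
The pseudo-code itself was on a slide, so here's a rough reconstruction of the kind of thing it showed; the helper names are hypothetical, and the shell syntax just stands in for whichever logging framework you use:

    # Hypothetical handler for GET /spideroftheday, annotated with log levels.
    get_spider_of_the_day() {
      log DEBUG "spideroftheday: request received"
      log DEBUG "spideroftheday: validating request headers"
      log DEBUG "spideroftheday: checking cache for today's spider"
      if cache_has_todays_spider; then
        log DEBUG "spideroftheday: cache hit"
      else
        log INFO  "spideroftheday: cache miss, loading from store"
        if ! load_spider_from_store; then
          log ERROR "spideroftheday: store unavailable, returning 503"
          return 1
        fi
        log WARN  "spideroftheday: cold cache made this request slower than usual"
        log DEBUG "spideroftheday: cache repopulated"
      fi
      log DEBUG "spideroftheday: serializing response"
      log DEBUG "spideroftheday: returning 200"
    }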

If I run at debug level, I'll get eight or nine log entries for every call depending on the

path through the code, but when you have a dozen remote servers running this code

and you get intermittent errors, you need this level of instrumentation to diagnose the

problems.

As a very rough rule of thumb, I've found these proportions are about right for the sort of

solutions I build. Something like one log entry for every dozen or so lines of code, more if

it's complex. And the Log level will be split to about 60 percent Debug, 25 percent

Info, 10 percent Warn, and 5 percent Error.

API Logging


So our code is able to produce lots of logs, and we can configure how verbose the

logging is. But in API solutions, where should the log entries live?

You could write them to the local file store, but if you have multiple servers

running, then you need to do some work to collect and collate all the log files.


Or you could use a store which is suitable for your platform, like the syslog in Linux or

the event log in Windows.

But, again, that's one store per server, so you need to ship them somewhere to collect

them, and that option is platform specific, so you'll have different types of log if you use

different platforms in your stack.

With any local log store, you need to cap the amount of data you log and ensure

you roll logs over so you keep file sizes at a sensible level. And when you've got all

that working, you need to parse the logs that you've shipped centrally to make some

sense of them.

That's too much hard work for something that every API needs, which is where cloud-

based log stores come in. The best of them have got multiplatform support so you can

log using the framework you're already comfortable with and just configure the cloud

service as your log store. Then all the log entries get sent to one store from all your

servers, and you search and analyze them from the central dashboard. The best logging


service around right now is Loggly, which does all this and does it very well.

And we'll look at it next.

Demo 1: loggly

Loggly has excellent platform integration, and there are packages to use Loggly as the

store for the log4 family, Linux syslogs, Windows event logs, and more.

I've added something similar to the logging statements we looked at in pseudo-code to

my Azure API deployment using the Loggly log4net appender. Here's the dashboard

showing me log statements made in the last ten minutes, and there are already plenty

thanks to the Route 53 health checks that I set up in the last module, which are pinging


the spideroftheday endpoint every few seconds.

The API is running at debug log level, so I'm seeing lots of entries.

Loggly uses Kafka to process log entries, which is an ultra-scalable and efficient

message queue.

There's some latency between writing a log statement and seeing it in the dashboard,

but it's a minimal amount and is worth it for the rich view of data that Loggly gives you.

The bars show how many log entries there are for that time period. If I select a bar, I see

all the log entries in a list view, and I can expand any one entry and see the detail below

it.

With my Loggly configuration in the API, I've captured the server name and date in UTC

and in the local server time, the thread number where the log was sent, and the actual


payload for the entry.

Loggly understands JSON, so if your log payload is a JSON string, Loggly can parse it

and give you some really useful ways of navigating the data.

Loggly takes a big data approach to logging and parses all your log entries to make

searching easy.

That's one of the key things to recommend it. As the number of logs you have grows,

Loggly becomes more and more useful letting you zone in on problem areas and cut

straight to the diagnosis.

Logging Alternatives

Loggly is a great service, very easy to use, and very flexible. It has plugins for all

the major logging frameworks on all the major platforms. And if there's no integration

available for your stack, you can use their standard REST API. If you use JSON for your

log payload, then you get automatic searching and filtering by all the fields in your

JSON logs. And the dashboard gives you aggregated views, as well as letting you drill

down into specific log entries.
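
If you do end up using the plain REST API, sending an event is a single HTTP call; this sketch uses Loggly's HTTP inputs endpoint, with the customer token as a placeholder:

    # Send one JSON log event; the tag groups events for searching in the dashboard.
    curl -s -H "content-type: application/json" \
         -d '{"level":"INFO","message":"spideroftheday served from cache"}' \
         "https://logs-01.loggly.com/inputs/<customer-token>/tag/spiderlog/"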


Loggly has a free tier, which lets you do all this and store 200 MB of log data every

day, which Loggly will retain for seven days. That's a pretty generous amount. Unless

your log entries are uncommonly large, you should be able to store 300,000 JSON log

entries every day, which means you can easily run a fairly large API with multiple

servers at warn level and stay within the free tier limit.

If you upgrade to the premium service, you get more storage and longer retention

of data. You also unlock useful features like grouping servers together into known

environments, adding users to give them dashboard access, and getting alerts on

conditions that you configure like being emailed if there are too many error log entries

within one minute. Those extra features are definitely useful, but the free tier is very

usable for moderately sized systems, so Loggly has a lot to recommend it.

The main alternative to Loggly is Papertrail, which runs in a similar way.

Both Loggly and Papertrail have the same issue, which is latency. Your API sends

its logs to a set of servers across the internet, which is slower than logging locally and


will impact your own response times if you turn up the logging level to diagnose a

problem.

If you can't live with the latency, then you'll need to look at a specific logging framework

for your platform, like App Insights on Microsoft Azure or Logplex on Heroku where log

entries are written closer to your servers so the performance hit is less of an issue.

But those platform logging tools aren't typically as usable as Loggly, and if you rely on

them, you could find yourself tied in to one provider and having to rewrite your API if you

want to move to a different host.

Instrumentation

Logging is only one part of monitoring. It's the most detailed level where you explicitly

describe what's happening inside your API. Wider instrumentation monitors the health of

your API estate with generic factors like server CPU and memory utilization and

network bandwidth usage, or they can ping your endpoints and monitor the response


times.

Logging and instrumentation work together so you may see error logs that you can

track down to excessive memory use on one server. Or you may see excessive

bandwidth, which you track down in your logs to a failed cache.

Most cloud platforms provide their own basic dashboards, which monitor the stats of

your servers, but if your API spans many physical components or multiple platforms,

then you'll want a consolidated view from a third-party monitoring service. These work in a

similar way to logging services with a component on your API servers collecting data

and forwarding it to the central store.

But unlike logging where you write out the data yourself, this monitoring is all done for

you. You typically just deploy an agent on your server or add an agent component to

your solution. The agent will monitor all the stats that the service understands and

periodically forward them on. Then you see the results in a friendly dashboard, and you

have a single place to view the health of your infrastructure while your logging service

gives you a single place to view the health of your implementation.


The leader in remote monitoring is New Relic, which is massively cross-platform.

They have packages to install at server level, which monitor the health of a virtual

machine. But they also have platform packages for running alongside your API in a

hosted service like Azure websites or Elastic Beanstalk.

New Relic have various products that provide a wealth of information, and that can really

become essential when you have a large server estate to administer. With smaller

services, it's still good practice to include an instrumentation tool like this.

Infrastructure instrumentation works at a higher level than logging and gives you a

general picture of how things are running, but New Relic also provides platform

instrumentation, so the agent integrates with your technology stack. That means you

get more detailed data because the agent understands what's happening when the API

gets called, and it can monitor key events like entry and exit points and calls to external

services. If you have that level of instrumentation with no effort, then you don't need to

include that in your custom logging statements, so you can focus your logging on more

useful events.

Let's see what New Relic can tell you.

Demo 3: NewRelic

I've integrated New Relic into my Azure API, and this is how the APM dashboard looks

after my load test run. The three graphs here are showing me the average response

times, perceived speed for users, and the throughput of the API in requests per


minute.

I can customize the timeframe to display so I can focus on recent activity or get a view

over the whole of the last week.


New Relic also shows me all those stats averaged for each server together with the

CPU usage and memory.

So I can see for this run, two servers did the majority of the work, which is correct with

my weighted DNS setup in Route 53.

For an individual transaction, New Relic gives you a breakdown, which shows the

external performance but also the internal performance, so you can see which parts of


your stack are taking the time.

There's a lot more that New Relic does, but one particularly useful feature shows you

how the performance of your API is affected by downstream integrations.


The map view shows me that for a call to the Spider Log API, which averages 2.6

seconds to respond, 175 msec is spent in network traffic to Loggly.

If I had multiple downstream services, New Relic would show me all of them. And with

the map view and the external services view, I get a very clear picture of how much my

calls to external components are costing.

Diagnostics

That demo only scratched the surface of New Relic's various offerings, which have a

combined feature set that's very impressive.

The application monitoring component, APM, is the most useful one that I find for APIs.

That will tell you the performance of your API from the client perspective and the server

perspective, and it will give you the breakdown of what's happening inside your

API. The tight platform integration means APM can tell you whereabouts in your stack

the CPU time is being used, and you can even run profiling sessions in production to see

detailed breakdowns at the code level. The smart network monitoring tells you how

much time is being spent waiting on external services. You get all that in the free plan,

as well as customizable alerting to tell you when things aren't going so well.


With the pro upgrade, you get additional reporting so you can see views over periods

up to three months, customize your dashboards, and check on SLAs so you can quantify

the service level agreement that you have with your clients. You can also set up key

transactions so you can focus your dashboards and monitoring on the most important

features of your API.

New Relic has the best feature set for server and platform instrumentation, and you get

a huge amount of information from the free tier.

The main alternative is Stackify, which does a lot of what New Relic does and also

provides a centralized logging service, but it doesn't have a free tier, so you need a paid

subscription to get any level of service. Just like the logging services, remote

instrumentation has a cost in shipping data from your API to the centralized servers for

storage and analysis. If you're going to integrate something like New Relic into your API,

it's definitely worth doing a load test before and after so you know exactly what your

monitoring is costing you in terms of performance.


If you don't need the level of detail that the dedicated services give you, then the basic

stats from your platform may be enough. Azure and AWS can tell you the main details,

like CPU, memory, and network usage, and combined with your own logging, that may

be all you need.

The tools we've seen are great for giving you a background check on your API and for

digging into problems when they do occur. They're cross-platform and easy to work with,

but there's one more tool which I've put into APIs, which requires bespoke development

but is always worthwhile.

I like to include a diagnostics endpoint, which tells me the current setup of the

environment and the status of all the dependencies as a hierarchy. So at a single

glance, I can tell if there are problems with the end-to-end solution and where those


problems are.

Demo 4: Diagnostics Endpoint

Here's the diagnostics endpoint for my Spider Log API. It's a standard API call, so it

returns JSON, which includes generally useful things like the current server, date and


time in UTC and in local time to the server.

That's handy if you have some API calls filtering by date, which aren't working, and you

can see if the server is on a different time zone from the client. Things like the server

name and IP addresses are also useful for verifying your infrastructure setup. But then

the diagnostic response includes a set of checks which are specific to each API project.

Those checks verify that the API can reach its downstream dependencies, and you can put into them as much information as you're going to find useful.

So this API uses a SQL database, and the check tells me the name of the server and

database, and it also verifies that it can connect, which gives me a status of "OK."

Similarly, there are cache store checks and downstream API checks.

The health check object has nested health checks, so you can group components

together and build up a hierarchical view of the state of your service and all its dependencies. A diagnostics endpoint like this is a custom build for your API, but it's a simple thing to do, and the couple of hours it will take can easily be saved in problem-solving time later.
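
Just as a rough sketch of the idea (this is not the actual Spider Log code; the route, field names, connection details, and helper functions below are all made up for illustration), a minimal diagnostics endpoint in Python with Flask might look something like this:

    # Minimal sketch of a diagnostics endpoint; all names are hypothetical.
    from datetime import datetime, timezone
    from flask import Flask, jsonify

    app = Flask(__name__)

    def check_sql():
        # In a real API this would open a connection to the database.
        return {"name": "SQL: spiderlog-db on sql-server-01", "status": "OK"}

    def check_cache():
        # And this would ping the cache store, e.g. Redis or memcached.
        return {"name": "Cache: redis-01", "status": "OK"}

    @app.route("/diagnostics")
    def diagnostics():
        checks = [check_sql(), check_cache()]
        return jsonify({
            "server": "api-host-01",
            "utcTime": datetime.now(timezone.utc).isoformat(),
            "localTime": datetime.now().isoformat(),
            # Nested checks give the hierarchical view described above.
            "healthChecks": {
                "name": "Spider Log API",
                "status": "OK" if all(c["status"] == "OK" for c in checks) else "Failed",
                "checks": checks,
            },
        })

Each dependency gets its own check object, and the parent object rolls their statuses up, so one call shows the whole hierarchy.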

The diagnostics endpoint is a useful tool for checking that the environment is set up correctly and that the API can access all the components it needs to use. Exposing it as JSON means you can easily build up a dashboard to show the health of your API and get an early warning if a service you rely on becomes unavailable. It's

possible that your diagnostics endpoint will contain confidential information, so it needs

to have an extra level of security.

It should obviously be HTTPS, and you may want to use custom authentication, like I

have with this auth token or an obfuscated URL to reduce the chances of someone

maliciously getting access to your infrastructure details. Or you may have a public

endpoint, which just returns an overall status of red or green, so users can check that your API is running correctly, and have extra security around the more detailed data for internal users only.
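
As a small sketch of that split (the header name, token value, and routes here are made up, not taken from the demo), you could expose a public red/green summary alongside a token-protected detail view:

    # Hypothetical sketch: public summary plus token-protected detail.
    from flask import Flask, jsonify, request, abort

    app = Flask(__name__)
    DIAG_TOKEN = "change-me"  # in practice, load from configuration, not source

    @app.route("/status")
    def public_status():
        # Public endpoint: only an overall red/green indicator.
        return jsonify({"status": "green"})

    @app.route("/diagnostics")
    def detailed_diagnostics():
        # Detailed infrastructure data only for callers presenting the auth token.
        if request.headers.get("X-Diag-Token") != DIAG_TOKEN:
            abort(401)
        return jsonify({"server": "api-host-01", "healthChecks": []})  # full checks go here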

Module Summary

6. Summary

Course Summary

In this module, we'll look at how the tools we've seen fit into your API delivery process

and why it is that they make good delivery better. We'll cover one more type of tool that

doesn't quite fit the essential label, and I'll introduce the EssentialRestTools.net

reference website. Whatever process you use to deliver projects, Waterfall, Scrum,

Kanban, none of the above, there are phases to their delivery, and they're all helped by

the tools that we've seen in this course.

Typically, your delivery process will have a design phase where you work with the

stakeholders to agree what's going to be delivered, a build phase where you implement

the functionality, a test phase where you prove it does what it should, and the run phase

where you push out your new release to the live environment. Whether those phases are

all in one two-week sprint or in a six-month release program, the tools in this course help

you with them all.

Design & Build

Test & Run

API Management

So these are the types of tools that I think are essential for REST API deliveries and that will help you deliver better-quality APIs. As I said way back in module one, a lot of these

tools are cloud-based services, and they may not be around forever. So, I've set up

a companion website for this course, www.EssentialRestTools.net, which I will keep up

to date with the best tools in these categories and add any new ones that come along.

There's one more type of tool that's definitely worth looking at but doesn't really qualify

as an essential, and that's API management suites. I'll walk through those briefly so you

can see what they offer, and you can go on to investigate further if they appeal.

By definition, a REST API should provide access to a set of related resources.

For enterprises with multiple business units or smaller companies with multiple products,

you will end up with multiple APIs to manage. If that's where your API estate is, or where it's heading, then API management suites are worth a look. The offerings out

there are all slightly different.

But typically, they offer a core set of services -- a management portal, a developer

portal, and a runtime which acts as a gateway to your actual APIs.

The developer portal is for your API documentation, but it also lets clients provision

themselves by signing up and getting an API key.

The management portal is for you to configure what happens inside the

management runtime, which is where the real value comes in.

You configure your DNS so the endpoints for your API actually go to the API

management runtime. Different providers will call it a proxy or a gateway or a host, but

in effect, they all work like very smart proxies. They'll have a caching layer, which will

make use of HTTP caching so the proxy can serve cached responses and save traffic to your

APIs. They'll have throttling policies so you can control how many calls each client

can make. So, entry-level clients may be limited to 1,000 requests per day and a

maximum of 50 requests per second. They'll have policies that let you choose where

requests get serviced, so if your URL contains a version number, the proxy can direct

different versioned requests to different servers. Or you may be migrating to a new

platform, and the proxy will let you do a partial migration configured to serve some

resources from the old platform and some from the new one.
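
To make the throttling idea concrete, here's a rough sketch of the kind of per-client limit a gateway enforces. The numbers mirror the example above, but the class and its fields are hypothetical and don't represent any vendor's actual API:

    import time

    # Hypothetical per-client throttle: 1,000 requests per day, 50 per second.
    class ClientThrottle:
        def __init__(self, per_day=1000, per_second=50):
            self.per_day = per_day
            self.per_second = per_second
            self.day_count = 0
            self.day_start = time.time()
            self.second_count = 0
            self.second_start = time.time()

        def allow(self):
            now = time.time()
            if now - self.day_start >= 86400:      # reset the daily window
                self.day_start, self.day_count = now, 0
            if now - self.second_start >= 1:       # reset the one-second window
                self.second_start, self.second_count = now, 0
            if self.day_count >= self.per_day or self.second_count >= self.per_second:
                return False                       # reject: client is over quota
            self.day_count += 1
            self.second_count += 1
            return True

A gateway would keep something like this per API key and return 429 Too Many Requests whenever allow() comes back False.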

I haven't included API management as an essential tool because the majority of

API projects can be delivered without one and won't be any worse for it. But if you

expect very large client bases, or your API estate is very large, you should look into

them. Mashery and SOA Software offer two of the best products, and if you're working

in Azure, Microsoft recently added API management to their cloud suite.

None of those products has a free tier, and with the big suppliers, you'll need to engage

with actual humans to set up any sort of product trial.

But another popular tool, Apigee, does have a free tier which you can provision yourself.

Apigee takes a slightly different approach to the others, but it's a good place to start if

you think one of these tools could be useful for you.

Five Essential Tools

7. References

https://app.pluralsight.com/library/courses/five-essential-tools-building-rest-api