Shuhei Kagawa

Getting memory usage in Linux and Docker

May 27, 2017 - Linux, Docker

Recently I started monitoring a Node.js app that we have been developing at work. After a while, I found that its memory usage % was growing slowly, like 20% in 3 days. The memory usage was measured in the following Node.js code.

const os = require("os");

const total = os.totalmem();
const free = os.freemem();
const usage = ((total - free) / total) * 100;

So the numbers come straight from the OS, which was Alpine Linux on Docker in this case. Luckily I also had the memory usage of each application process recorded, but those numbers were not increasing. Then why was the OS memory usage increasing?

Buffers and cached memory

I used the top command with Shift+M (sort by memory usage) and compared the processes on a long-running server with those on a newly deployed one. The processes on each side were almost the same. The only difference was that buffers and cached Mem were high on the long-running server.

After some research, or googling, I concluded that it was not a problem. Most of the buffers and cached Mem is given up when application processes claim more memory.

Actually, the free -m command provides a row of used and free that takes buffers and cached into consideration.

$ free -m
             total  used  free  shared  buffers cached
Mem:          3950   285  3665     183       12    188
-/+ buffers/cache:    84  3866
Swap:         1896     0  1896

So, what are they actually? According to the manual of /proc/meminfo, which is a pseudo file and the data source of free, top and friends:

Buffers %lu
       Relatively temporary storage for raw disk blocks that
       shouldn't get tremendously large (20MB or so).

Cached %lu
       In-memory cache for files read from the disk (the page
       cache).  Doesn't include SwapCached.

I am still not sure what exactly Buffers contains, but it seems to hold file metadata and the like, and it is relatively trivial in size. Cached contains cached file contents, which are called the page cache. The OS keeps the page cache as long as RAM has enough free space. That was why the memory usage kept increasing even though the processes were not leaking memory.

If you are interested, What is the difference between Buffers and Cached columns in /proc/meminfo output? on Quora has more details about Buffers and Cached.

MemAvailable

So, should we use free + buffers + cached? /proc/meminfo has an even better metric called MemAvailable.

MemAvailable %lu (since Linux 3.14)
       An estimate of how much memory is available for
       starting new applications, without swapping.

$ cat /proc/meminfo
MemTotal:        4045572 kB
MemFree:         3753648 kB
MemAvailable:    3684028 kB
Buffers:           13048 kB
Cached:           193336 kB
...

Its background is explained well in the commit in the Linux kernel, but essentially it excludes non-freeable page cache and includes reclaimable slab memory. The current implementation in Linux v4.12-rc2 still looks almost the same.
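
If you want to base your monitoring metric on MemAvailable instead of os.freemem(), you can read /proc/meminfo directly. Here is a minimal sketch in Node.js, assuming Linux 3.14+ where the field exists:

const fs = require("fs");
const os = require("os");

// Read MemAvailable (in kB) from /proc/meminfo.
// Returns null on kernels older than 3.14, which lack the field.
function memAvailableKb() {
  const meminfo = fs.readFileSync("/proc/meminfo", "utf8");
  const match = meminfo.match(/^MemAvailable:\s+(\d+) kB$/m);
  return match ? parseInt(match[1], 10) : null;
}

const totalKb = os.totalmem() / 1024;
const availableKb = memAvailableKb();
if (availableKb !== null) {
  const usage = ((totalKb - availableKb) / totalKb) * 100;
  console.log(`memory usage: ${usage.toFixed(1)}%`);
}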

Some implementations of free -m have an available column. For example, on Boot2Docker:

$ free -m
       total  used  free  shared  buff/cache  available
Mem:    3950    59  3665     183         226       3597
Swap:   1896     0  1896

It is also available in AWS CloudWatch metrics via the --mem-avail flag.

Some background about Docker

Another question of mine was "Are those metrics the same in Docker?" Before diving into this question, let's check how Docker works.

According to Docker Overview: The Underlying Technology, processes in a Docker container directly run in their host OS without any virtualization, but they are isolated from the host OS and other containers in effect thanks to these Linux kernel features:

  • namespaces: Isolate PIDs, hostnames, user IDs, network accesses, IPC, etc.
  • cgroups: Limit resource usage
  • UnionFS: Isolate file system

Because of the namespaces, the ps command on the host OS lists processes in Docker containers in addition to the host's own processes, while ps inside a Docker container cannot list processes of the host OS or of other containers.

By default, Docker containers have no resource constraints. So, if you run one container on a host and don't limit its resource usage, which was my case, the container's "free memory" is the same as the host OS's "free memory".
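
If you do want a limit, you can set one per container with the --memory flag (my-app is a placeholder image name):

$ docker run --memory=512m my-app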

Memory metrics on Docker container

If you want to monitor a Docker container's memory usage from outside of the container, it's easy. You can use docker stats.

$ docker stats
CONTAINER     CPU %  MEM USAGE / LIMIT  MEM %  NET I/O     BLOCK I/O  PIDS
fc015f31d9d1  0.00%  220KiB / 3.858GiB  0.01%  1.3kB / 0B  0B / 0B    2

But if you want to get the memory usage in the container or get more detailed metrics, it gets complicated. Memory inside Linux containers describes the difficulties in details.

/proc/meminfo and sysinfo, which is used by os.totalmem() and os.freemem() of Node.js, are not isolated, so you get metrics of the host OS if you use normal utilities like top and free in a Docker container.

To get metrics specific to your Docker container, you can check pseudo files in /sys/fs/cgroup/memory/. They are not standardized according to Memory inside Linux containers though.

$ cat /sys/fs/cgroup/memory/memory.usage_in_bytes
303104
$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
9223372036854771712

memory.limit_in_bytes returns a very big number if there is no limit. In that case, you can find the host OS’s total memory with /proc/meminfo or commands that use it.
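
Putting it together, here is a rough sketch in Node.js that reads the container's own usage and limit and falls back to the host's total memory when no limit is set. I'm assuming the cgroup v1 paths shown above, which are not guaranteed everywhere:

const fs = require("fs");
const os = require("os");

const CGROUP = "/sys/fs/cgroup/memory";

function readBytes(file) {
  return parseInt(fs.readFileSync(`${CGROUP}/${file}`, "utf8"), 10);
}

const usage = readBytes("memory.usage_in_bytes");
let limit = readBytes("memory.limit_in_bytes");
if (limit > os.totalmem()) {
  // No limit is set for this container; the host's total memory is the real cap.
  limit = os.totalmem();
}
console.log(`container memory usage: ${((usage / limit) * 100).toFixed(2)}%`);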

Conclusion

It was a longer journey than I initially thought. My takeaways are:

  • Available Memory > Free Memory
  • Use MemAvailable if available (pun intended)
  • Processes in a Docker container run directly in host OS
  • Understand what you are measuring exactly, especially in a Docker container

HTTP request timeouts in JavaScript

May 13, 2017 - JavaScript, Node.js

These days I have been working on a Node.js front-end server that calls back-end APIs and renders HTML with React components. In this microservices setup, I am making sure that the server doesn't become too slow even when its dependencies have problems. So I need to set timeouts to the API calls so that the server can give up non-essential dependencies quickly and fail fast when essential dependencies are out of order.

As I started looking at timeout options carefully, I quickly found that there were many different kinds of timeouts, even in this very limited field: HTTP requests with JavaScript.

Node.js http and https

Let's start with the standard library of Node.js. The http and https packages provide a request() function, which makes an HTTP(S) request.

Timeouts on http.request()

http.request() takes a timeout option. Its documentation says:

timeout <number>: A number specifying the socket timeout in milliseconds. This will set the timeout before the socket is connected.

So what does it actually do? It internally calls net.createConnection() with its timeout option, which eventually calls socket.setTimeout() before the socket starts connecting.

There is also http.ClientRequest.setTimeout(). Its documentation says:

Once a socket is assigned to this request and is connected socket.setTimeout() will be called.

So this also calls socket.setTimeout().

Neither of them closes the connection when the socket times out; they only emit a timeout event.
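
So if you want the request to actually stop on timeout, you have to abort it yourself in the timeout listener. A minimal sketch (foo.com is a placeholder host):

const http = require("http");

const req = http.request({ host: "foo.com", path: "/bar/", timeout: 300 }, (res) => {
  res.resume(); // consume the response body
  res.on("end", () => console.log("done"));
});
req.on("timeout", () => {
  // The timeout event alone ends nothing; close the connection ourselves.
  req.abort();
});
req.on("error", (err) => console.error(err.message)); // abort() triggers an error
req.end();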

So, what does socket.setTimeout() do? Let's check.

net.Socket.setTimeout()

The documentation says:

Sets the socket to timeout after timeout milliseconds of inactivity on the socket. By default net.Socket does not have a timeout.

OK, but what exactly does "inactivity on the socket" mean? In a happy path, a TCP socket goes through the following steps:

  1. Start connecting
  2. DNS lookup is done: lookup event (Doesn't happen in HTTP Keep-Alive)
  3. Connection is made: connect event (Doesn't happen in HTTP Keep-Alive)
  4. Read data or write data

When you call socket.setTimeout(), a timeout timer is created, and it is restarted before connecting, after lookup, after connect and after each data read & write. So the timeout event is emitted in one of the following cases:

  • DNS lookup doesn't finish in the given timeout
  • TCP connection is not made in the given timeout after DNS lookup
  • No data is read or written in the given timeout after the connection, or after the previous data read or write

This might be a bit counter-intuitive. Let's say you called socket.setTimeout(300) to set the timeout to 300 ms, and it took 100 ms for the DNS lookup, 100 ms for making a connection with the remote server, 200 ms for the remote server to send response headers, 50 ms for transferring the first half of the response body and another 50 ms for the rest. While the entire request & response took more than 500 ms, the timeout event is not emitted at all.

Because the timeout timer is restarted at each step, a timeout happens only when a single step is not completed in the given time.

Then what happens if multiple steps exceed the timeout? As far as I tried, the timeout event is triggered only once.

Another concern is HTTP Keep-Alive, which reuses a socket for multiple HTTP requests. What happens if you set a timeout on a socket and the socket is reused for another HTTP request? Never mind. A timeout set in an HTTP request does not affect subsequent HTTP requests, because the timeout is cleaned up when the socket is kept alive.

HTTP Keep-Alive & TCP Keep-Alive

This is not directly related to timeouts, but I found the Keep-Alive options in http/https a bit confusing. They mix HTTP Keep-Alive and TCP Keep-Alive, which are completely different things but coincidentally have the same name. For example, the options of the http.Agent constructor have keepAlive for HTTP Keep-Alive and keepAliveMsecs for TCP Keep-Alive.

So, how are they different?

  • HTTP Keep-Alive reuses a TCP connection for multiple HTTP requests. It saves the TCP connection overhead such as DNS lookup and TCP slow start.
  • TCP Keep-Alive closes invalid connections, and it is normally handled by OS.
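
Both appear side by side in the http.Agent options. A small sketch (foo.com is a placeholder host):

const http = require("http");

const agent = new http.Agent({
  keepAlive: true, // HTTP Keep-Alive: reuse the socket across requests
  keepAliveMsecs: 1000, // TCP Keep-Alive: initial delay for TCP Keep-Alive probes
});

http.get({ host: "foo.com", path: "/bar/", agent }, (res) => res.resume());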

So?

http/https use socket.setTimeout(), whose timer is restarted at each stage of the socket lifecycle. It doesn't ensure a timeout for the overall request & response. If you want to make sure that a request either completes within a specific time or fails, you need to prepare your own timeout solution.
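
Here is a minimal sketch of such a solution, with one timer covering the whole request & response. requestWithDeadline is a made-up name, not an existing API:

const http = require("http");

function requestWithDeadline(options, msecs, callback) {
  let done = false;
  const finish = (err, body) => {
    if (done) return; // make sure the callback is called only once
    done = true;
    clearTimeout(timer);
    callback(err, body);
  };
  const req = http.request(options, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => finish(null, body));
  });
  // One timer for everything: DNS lookup, connection, headers and body.
  const timer = setTimeout(() => {
    finish(new Error("deadline exceeded"));
    req.abort(); // free the socket; the resulting error event is ignored above
  }, msecs);
  req.on("error", (err) => finish(err));
  req.end();
}

requestWithDeadline({ host: "foo.com", path: "/bar/" }, 1000, (err, body) => {
  if (err) return console.error(err.message);
  console.log(body.length);
});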

Third-party modules

request module

request is a very popular HTTP request library that supports many convenient features on top of the http/https modules. Its README says:

timeout - Integer containing the number of milliseconds to wait for a server to send response headers (and start the response body) before aborting the request.

However, as far as I checked the implementation, timeout is not applied to the timing of response headers as of v2.81.1.

Currently this module emits two types of timeout errors:

  • ESOCKETTIMEDOUT: Emitted from http.ClientRequest.setTimeout() described above, which uses socket.setTimeout().
  • ETIMEDOUT: Emitted when a connection is not established in the given timeout. It was applied to the timing of response headers before v2.76.0.

There is a GitHub issue for it, but I'm not sure if it's intended and the README is outdated, or it's a bug.

By the way, request provides a useful timing measurement feature that you can enable with the time option. It will help you define a proper timeout value.
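
For example (https://foo.com/bar/ is a placeholder URL; the exact shape of the timing data depends on the request version):

const request = require("request");

request({ url: "https://foo.com/bar/", time: true }, (err, res) => {
  if (err) return console.error(err.code);
  // With time: true, the response carries timing info such as
  // res.elapsedTime (total ms) and, in recent versions, res.timings.
  console.log(res.elapsedTime, res.timings);
});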

axios module

axios is another popular library that uses Promises. Its timeout option does what the request module's README describes: it times out if the response status code and headers don't arrive within the given time.
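
A minimal sketch (https://foo.com/bar/ is a placeholder URL):

const axios = require("axios");

// Rejects the promise if the response headers don't arrive within 1000 ms.
axios
  .get("https://foo.com/bar/", { timeout: 1000 })
  .then((res) => console.log(res.status))
  .catch((err) => console.error(err.message));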

Browser APIs

While my initial interest was server-side HTTP requests, I became curious about browser APIs as I was investigating the Node.js options.

XMLHttpRequest

XMLHttpRequest.timeout aborts a request after the given timeout and calls ontimeout event listeners. The documentation does not explain the exact timing, but I guess that it covers the time until readyState === 4, which means that the entire response body has arrived.
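
A minimal sketch (again, https://foo.com/bar/ is a placeholder URL):

const xhr = new XMLHttpRequest();
xhr.open("GET", "https://foo.com/bar/");
xhr.timeout = 1000; // in milliseconds
xhr.ontimeout = () => console.error("request timed out");
xhr.onload = () => console.log(xhr.status);
xhr.send();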

fetch()

As far as I read fetch()'s documentation on MDN, it does not have any way to specify a timeout. So we need to handle it ourselves. We can do that easily with Promise.race().

function withTimeout(msecs, promise) {
  const timeout = new Promise((resolve, reject) => {
    setTimeout(() => {
      reject(new Error("timeout"));
    }, msecs);
  });
  return Promise.race([timeout, promise]);
}

withTimeout(1000, fetch("https://foo.com/bar/"))
  .then(doSomething)
  .catch(handleError);

This kind of external approach works with any HTTP client and times out the overall request and response. However, it does not abort the underlying HTTP request, while the preceding timeout mechanisms actually abort HTTP requests and save some resources.

Conclusion

Most of the HTTP request APIs in JavaScript don't offer a timeout mechanism for the overall request and response. If you want to limit the maximum processing time for your piece of code, you have to prepare your own timeout solution. However, if your solution relies on a high-level abstraction like Promise and cannot abort the underlying TCP socket and HTTP request on timeout, it is nice to also use an existing low-level timeout mechanism like socket.setTimeout() to save some resources.

main, jsnext:main and module

Jan 5, 2017 - JavaScript

A Node module's package.json has a main property. It's the entry point of the package, which is exported when a client requires the package.

Recently, I got an issue on one of my popular GitHub repos, material-colors. It claimed that "colors.es2015.js const not supported in older browser (Safari 9)", which looked pretty obvious to me. ES2015 is a new spec. Why would older browsers support it?

I had totally forgotten about it, but colors.es2015.js was exposed as the npm package's jsnext:main. And to my surprise, it turned out that jsnext:main shouldn't have "jsnext", i.e. ES2015+ features like const, arrow functions and classes. What a contradiction!

jsnext:main

Module bundlers that utilize tree shaking to reduce bundle size, like Rollup and Webpack 2, require packages to expose ES modules with import and export. So they invented a non-standard property called jsnext:main.

However, it had a problem. If the file specified as jsnext:main contains ES2015+ features, it won't run without transpilation on browsers that don't support those features. But normally people don't transpile packages in node_modules, so many issues were created on GitHub. To solve the problem, people concluded that jsnext:main shouldn't have ES2015+ features other than import and export. What an irony.

module

Now the name jsnext:main is too confusing. At least, I was confused. People discussed a better name, and module came out as the successor of jsnext:main. And it might be standardized.

So?

I looked into a couple of popular repos, and they had both jsnext:main and module in addition to main.
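
For example, a package.json along these lines (the file names are hypothetical):

{
  "name": "my-package",
  "main": "dist/index.js",
  "module": "dist/index.es.js",
  "jsnext:main": "dist/index.es.js"
}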

At this time, it seems to be a good idea to have both of them if you want to support tree shaking. If you don't, just go with the plain old main.