Modern web infrastructure consists of many components serving various purposes, with both obvious and not-so-obvious interconnections between them. This becomes especially visible when running applications built on different software stacks, which, with the advent of microservices, happens literally at every step. External factors (third-party APIs, services, etc.) add to the general “fun”, complicating an already difficult picture.
Even if these applications are united by common architectural ideas and solutions, eliminating unusual problems in them often means making your way through yet another unfamiliar jungle, and whether such problems occur is only a matter of time. This article is dedicated to a few such examples from our recent practice.
Golang and HTTP/2
Running a benchmark that performs many HTTP requests against a web application led to unexpected results. During the benchmark, a simple Go application queries another Go application located behind ingress/openresty. With HTTP/2 enabled, some of the requests fail with code 400. To understand the reason for this behavior, we removed the far-end Go application from the chain and set up a simple location in ingress that always returns 200. The behavior did not change!
Then we decided to reproduce the scenario outside the Kubernetes environment, on a different piece of hardware. The result was a Makefile that launches two containers: one runs benchmarks against nginx, the other against Apache. Both listen for HTTP/2 with a self-signed certificate.
Let’s run the benchmarks with concurrency = 200:
1.1. Nginx:
```
Completed 0 requests
Completed 1000 requests
Completed 2000 requests
Completed 3000 requests
Completed 4000 requests
Completed 5000 requests
Completed 6000 requests
Completed 7000 requests
Completed 8000 requests
Completed 9000 requests
----- Bench results begin -----
Requests per second: 10336.09
Failed requests: 1623
----- Bench results end -----
```
1.2. Apache:
```
…
----- Bench results begin -----
Requests per second: 11427.60
Failed requests: 0
----- Bench results end -----
```
Presumably, the reason lies in the less strict implementation of HTTP/2 in Apache.
Let’s try with concurrency = 1000:
2.1. Nginx:
```
…
----- Bench results begin -----
Requests per second: 11274.92
Failed requests: 4205
----- Bench results end -----
```
2.2. Apache:
```
…
----- Bench results begin -----
Requests per second: 11211.48
Failed requests: 5
----- Bench results end -----
```
Note that the results are not reproducible on every run: some launches pass without any problems.
A search through the Go project’s issues on GitHub led us to disabling HTTP/2 in Go by default!
Interpreting the benchmark results without a deep dive into the architecture of the web servers mentioned above is quite difficult. In our specific case, it was enough to disable HTTP/2 for the service in question.
Old Symfony and Sentry
One of the projects still uses a very old version of the Symfony PHP framework (v2.3). Bundled with it come an old Raven client and a self-written PHP class, which complicates debugging a little.
After one of the services of this project was moved to Kubernetes, events suddenly stopped arriving in Sentry, which is used to track errors in the project’s application. To reproduce this behavior, we used examples from the Sentry website, taking two of the options and copying the DSN from the Sentry settings. Visually, everything worked: error messages were (seemingly) sent one after another.
JavaScript check option:
```html
<!DOCTYPE html>
<html>
<body>

<script src="https://browser.sentry-cdn.com/5.6.3/bundle.min.js" integrity="sha384-/Cqa/8kaWn7emdqIBLk3AkFMAHBk0LObErtMhO+hr52CntkaurEnihPmqYj3uJho" crossorigin="anonymous">
</script>

<h2>JavaScript in Body</h2>

<p id="demo">A Paragraph.</p>

<button type="button" onclick="myFunction()">Try it</button>

<script>
  Sentry.init({ dsn: 'http://[email protected]//12' });
  try {
    throw new Error('Caught');
  } catch (err) {
    Sentry.captureException(err);
  }
</script>

</body>
</html>
```
Similarly in Python:
```python
from sentry_sdk import init, capture_message

init("http://[email protected]//12")

capture_message("Hello World")  # Will create an event.

raise ValueError()
```
However, the events did not make it into Sentry. When a message is sent, an illusion is created that it has been delivered, because the client immediately generates a hash for the issue.
In the end, the problem turned out to be very simple: events were sent over HTTP, while the Sentry service listened only on HTTPS. A redirect from HTTP to HTTPS was in place, but the old client (the code on the Symfony side) was not able to follow redirects, which is something you don’t expect by default nowadays.