For one of our services we needed to stream logging data from a backend (available via HTTP) to the frontend (requested also via HTTP).
Initially we simply used io.Copy, but ran into a few problems:
If the backend does not provide any more data (but the stream should be kept open), io.Copy blocks on r.Read() even though the client request has already been closed. As a result, the request to the backend is not closed for a long time (maybe never).
http.ResponseWriter buffers data internally, so not all written data is immediately sent to the client.
To address this, we implemented our own streaming version of io.Copy with the following features:
Still copies data with io.Copy (so it still supports io.WriterTo and friends)
Watches for the client's request being closed using net/http.CloseNotifier and then closes the backend reader
Any copy error after closing is ignored
Data written to the net/http.ResponseWriter is automatically flushed (WriteFlusher is copied from Docker)
HttpStream streams the given reader to the given writer, using the ResponseWriter to flush written data and to watch for cancelation of the request.
The ResponseWriter must implement `net/http.CloseNotifier` and `net/http.Flusher`.
If the request is canceled, the reader will be closed.
Any error returned by `io.Copy` is ignored if the request is already canceled.
Stream continuously reads data from r and writes it to w using io.Copy.
The copy operation can be canceled by sending any bool on the cancel channel.
Any error returned by `io.Copy` is ignored if the request is already canceled.