Content encoding negotiation added to archive request. #36164
Conversation
Force-pushed from 091e283 to 59b9e7e
hmm, it looks like our vendor check is broken. I believe …
dnephin left a comment
The new dependency needs to be added to vendor.conf I will look into why it didn't fail.
The approach seems reasonable to me. A middleware might be too much because it's not necessarily safe for all cases, but moving it out to a function should at least let us re-use it.
api/server/router/container/copy.go (Outdated)
This suggests that maybe the content-type should change as well?
I think since the Content-Encoding header is being set to gzip, the Content-Type should stay application/x-tar, as it is the underlying content type after decoding:
The Content-Encoding entity header is used to compress the media-type. When present, its value indicates which encodings were applied to the entity-body. It lets the client know how to decode in order to obtain the media-type referenced by the Content-Type header.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding
api/server/router/container/copy.go (Outdated)
I think this code should go into a function so that we'll be able to re-use it later.
func writeCompressedResponse(r *http.Request, w http.ResponseWriter, body io.Reader) error {
...
}
client/container_copy.go (Outdated)
Interesting!
We have ioutils.ReadCloserWrapper from github.com/docker/docker/pkg/ioutils which does this as well without the anonymous struct. I think it would be good to use that.
Is it really safe to defer gr.Close() here? Doesn't that close the Reader before it's used? I think you would need to use a closer on ioutils.ReadCloserWrapper that closes both Readers.
Good catch on the closer. I need to set up some stuff to develop on the server side, so I hadn't really tested this.
client/container_copy.go (Outdated)
This should also be a new function: readResponsebody(resp serverResponse) (io.ReadCloser, error).
Force-pushed from 2440772 to 394116d
@dnephin I made some fixes as you suggested. On the client side, as long as the …
dnephin left a comment
Looks good.
I think it could use a test for writeCompressedResponse(), but feel free to wait until someone else has reviewed the design.
client/client.go (Outdated)
Nice find! I didn't know about this
Resolved conflicts and rebased this on master.
vdemeester left a comment
LGTM 🐸
Should we split the vendoring and the code into two separate commits? 👼
@vdemeester It is up to you guys; I have no preference on splitting up the commit.
Given that this was opened to improve performance, do we need bench tests for this? Silly question; is the compression done "streaming" or is the content read into memory first?
ping @unclejack @cpuguy83 PTAL
I'm not sure benchmarking is necessary in this case, because it would simply be testing the compression rate, which would depend on the particular data chosen for the benchmark.
The data is compressed as a stream as it is written to the client.
Rebased it on master to resolve conflicts.
client/client.go (Outdated)
This should be the default, so not needed
I specified it to be explicit that the proper behavior of the client depends on that property value.
Also, if for whatever reason the DisableCompression field is removed or renamed, the build would break and things could then be corrected.
It's up to you though, do you want me to remove the setting?
if for whatever reason the DisableCompression field is removed or renamed
I regard the Go team highly enough that I'm confident that would never happen. I think it's safe to remove.
Ok, I have removed this setting and rebased on master.
client/client.go (Outdated)
Same here
LGTM
Codecov Report
@@ Coverage Diff @@
## master #36164 +/- ##
==========================================
+ Coverage 34.64% 34.65% +<.01%
==========================================
Files 612 612
Lines 45310 45310
==========================================
+ Hits 15696 15700 +4
+ Misses 27556 27553 -3
+ Partials 2058 2057 -1
Signed-off-by: Emil Davtyan <emil2k@gmail.com>
- What I did
Enabled content encoding negotiation to speed up retrieval of container archives.
A better solution might be some sort of middleware applied to most endpoints on the server; on the client side, an Accept-Encoding header could be set either using a client's customHTTPHeaders attribute or in individual methods. I'm not really sure this is the best way to do this, but I'm posting it here to get feedback and guidance.
- How I did it
Parses the Accept-Encoding header in the handler for GET /containers/{id}/archive requests, determines the best encoding, compresses the files, and adds the appropriate Content-Encoding header.
- How to verify it
I'm not sure how to add a test for this, looking for feedback.
- Description for the changelog
Content encoding negotiation added to archive request.