The Wayback Machine - https://web.archive.org/web/20201213161847/https://github.com/humanmade/S3-Uploads/pull/296

feat: Add support for S3-compatible Object Storage #296

Open
wants to merge 1 commit into
base: master
from

Conversation

@Kocal commented Feb 12, 2019

Hi,

This feature adds support for S3-compatible Object Storage providers (like Scaleway, which is cheaper).

Currently we can configure the URL rewriting used when displaying files (e.g. when using a CDN), but we can't configure the S3 endpoint (used for uploads, removal, etc.), because the endpoint [bucket name].s3.amazonaws.com is hardcoded.

With this feature, we can configure where our files will be uploaded like this:

define('S3_UPLOADS_BUCKET_ENDPOINT_URL', 'https://s3.nl-ams.scw.cloud');

Also, it will automatically configure the URL rewriting (if it has not been configured manually).
In this case, $uploads->get_s3_url() will return https://[bucket name].s3.nl-ams.scw.cloud.
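As a plain-PHP sketch (outside WordPress) of the URL rewriting behaviour described above, the bucket name becomes a subdomain of the configured endpoint host. The function name and the example values here are hypothetical, purely for illustration:

```php
<?php
// Sketch only: derive the rewritten bucket URL from an endpoint URL
// and a bucket name, as described in the PR text above.
// example_bucket_url() and 'my-bucket' are hypothetical names.
function example_bucket_url( string $endpoint, string $bucket ): string {
	$host = parse_url( $endpoint, PHP_URL_HOST );
	return 'https://' . $bucket . '.' . $host;
}

echo example_bucket_url( 'https://s3.nl-ams.scw.cloud', 'my-bucket' );
// → https://my-bucket.s3.nl-ams.scw.cloud
```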

I added some tests, but I was not able to run them locally.

This is the behavior on a real project (already in production with our fork):

[Screenshot: WordPress media library] The file has the correct URL.

[Screenshot] The file is properly uploaded to Scaleway's Object Storage.

Thanks for considering this PR 🙂

@hm-linter bot left a comment

Linting failed (1110 errors, 84 warnings).

(1141 notices occurred in your codebase, but were on files/lines not included in this PR.)

@tristanbes (Contributor) commented Feb 12, 2019

PS: We're using this in production 👍
It's also a good candidate for v2.0.

@tristanbes (Contributor) commented Feb 15, 2019

@roborourke (Contributor) commented Feb 15, 2019

@tristanbes the endpoint can be configured with the following:

// Filter S3 Uploads params.
add_filter( 's3_uploads_s3_client_params', function ( $params ) {
	$params['endpoint'] = 'https://your.endpoint.com';
	$params['use_path_style_endpoint'] = true;
	$params['debug'] = false; // Set to true if uploads are failing.
	return $params;
} );

This is the approach we used on a Minio extension for Chassis to help with testing this plugin locally.

I'll add this filter to the readme, but I'm afraid I don't think this PR will be merged; there are other PRs for the same functionality in #208 and #268.

@tristanbes (Contributor) commented Feb 15, 2019

We know but:

1/ Why add an ugly filter to 150+ of our WordPress sites when a few lines can enable this behaviour cleanly inside your plugin?

2/ This feature has been tested and documented in this pull request

3/ It doesn't break the existing setup, nor does it introduce a BC break. It's opt-in: only those who need it will use it.

Please don't make us use a fork of your plugin :/

@roborourke (Contributor) commented Feb 15, 2019

The main reason is to avoid adding another constant where it isn't strictly necessary. Constants are always global, which makes it harder to run multiple instances of S3 Uploads; some use cases we've seen involve sharding uploads to different buckets on multisite, for example, so the filter is the most flexible solution.

The scope of this plugin is really just to support AWS S3, as it's a core part of our own infrastructure; support for other providers is a happy coincidence.

It can be made a bit cleaner by adding the filter in your own separate plugin (maybe as an mu-plugin) that can be added to those sites and that uses your constant to set the value.
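A minimal sketch of such an mu-plugin, assuming the constant name proposed in this PR (the plugin itself does not define S3_UPLOADS_BUCKET_ENDPOINT_URL; only the s3_uploads_s3_client_params filter is an existing hook):

```php
<?php
/**
 * Plugin Name: S3 Uploads custom endpoint (sketch)
 *
 * Drop into wp-content/mu-plugins/. Reads the constant name proposed
 * in this PR (an assumption, not a plugin-defined constant) and feeds
 * it into the plugin's client params filter.
 */
add_filter( 's3_uploads_s3_client_params', function ( $params ) {
	if ( defined( 'S3_UPLOADS_BUCKET_ENDPOINT_URL' ) ) {
		$params['endpoint']                = S3_UPLOADS_BUCKET_ENDPOINT_URL;
		$params['use_path_style_endpoint'] = true;
	}
	return $params;
} );
```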

@tristanbes (Contributor) commented Feb 15, 2019

It's not a happy coincidence; those providers actively work to make themselves compliant with the S3 API.

Also, it's something that is supported inside AwsClient: https://github.com/aws/aws-sdk-php/blob/master/src/AwsClient.php#L94

Your point is basically like saying "we don't use the region feature, so we don't provide a way to configure it, and we hardcoded the Arizona region inside our plugin".

😞

@roborourke (Contributor) commented Feb 15, 2019

Fair points. I'll re-raise the issue internally, but last time I did, it resulted in just adding the client params filter, so we'll see :/

For what it's worth, in my own experience setting the endpoint alone wasn't enough; I had to configure other params like path style to make it work, so adding the constant isn't necessarily a silver bullet.

@rmccue (Member) commented Feb 15, 2019

Your point is basically like saying "we don't use the region feature, so we don't provide a way to configure it, and we hardcoded the Arizona region inside our plugin"

This is basically our support policy. If we add features that we're not using, it's entirely possible that we'll break them in the future without noticing. Tests do help with this, but they can increase the effort needed for us to introduce new features in the future. Adding features increases our support and maintenance burdens, even if we don't have to write the code in the first place.

Given that the functionality here is already possible in a more generic way, I'm not sure what we really gain by adding a constant. The code in #296 (comment) basically achieves the same thing (you'll also need to filter s3_uploads_bucket_url for the same behaviour as in the PR).
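For illustration, a sketch of what that combination might look like (the two filter names are the plugin's hooks mentioned in this thread; the endpoint and bucket URL values are placeholders, and it's an assumption that s3_uploads_bucket_url receives and returns the bucket URL string):

```php
// Sketch only: pair the client params filter with s3_uploads_bucket_url
// so that rewritten file URLs also point at the custom endpoint.
// Endpoint and bucket values are placeholders.
add_filter( 's3_uploads_s3_client_params', function ( $params ) {
	$params['endpoint']                = 'https://s3.nl-ams.scw.cloud';
	$params['use_path_style_endpoint'] = true;
	return $params;
} );

add_filter( 's3_uploads_bucket_url', function ( $url ) {
	// Assumed signature: filter in, bucket URL string out.
	return 'https://my-bucket.s3.nl-ams.scw.cloud';
} );
```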

1/ Why add an ugly filter to 150+ of our WordPress sites when a few lines can enable this behaviour cleanly inside your plugin?

You'll need to define the constant on your sites anyway, so I'm not sure what you mean here. The difference between a constant and a filter isn't that large.


With that said, I'm not going to say it's a definite no here, as this functionality may be useful for our purposes as well (for example, running Minio for testing). We're continuing to discuss this internally.

@tristanbes (Contributor) commented Mar 7, 2019

Hi, is there any status update on this matter?

@homerjam commented Mar 19, 2019

FWIW, DigitalOcean Spaces is also S3-compatible; this feature seems like a good idea.

@Kocal (Author) commented Mar 19, 2019

This is basically our support policy. If we add features that we're not using, it's entirely possible that we'll break them in the future without noticing. Tests do help with this, but they can increase the effort needed for us to introduce new features in the future

Well, even though my pull request has been used in production and worked like a charm, it's certainly better to write tests for this.

But how? I wasn't even able to run my own tests, because PHPUnit is not installed locally; and even using a global PHPUnit install, all of the tests (except one 😮) are failing... The same happens on every open PR, on Travis... come on 😞

Also, since d93f035, which documents how to use the s3_uploads_s3_client_params filter to configure the endpoint for uploads, maybe we don't need this pull request anymore.

And it's true that:

add_filter( 's3_uploads_s3_client_params', function ( $params ) {
	$params['endpoint'] = 'https://your.endpoint.com';
    // ...
	return $params;
} );

is 100 times more flexible than this:

define('S3_UPLOADS_BUCKET_ENDPOINT_URL', 'https://s3.nl-ams.scw.cloud');

but with the second solution, we can easily configure both:

  • the endpoint URL for uploads
  • and the URL rewriting (if not explicitly configured)
5 participants