Using Cloud Storage
First, set up the Google Cloud CLI. The CLI allows you to create and delete buckets, and to copy, move or delete content to a bucket or between buckets.
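For example, a bucket can be created and content uploaded with gsutil; this is a minimal sketch, assuming the CLI is installed and authenticated, and the bucket and file names below are placeholders:

# Create a new bucket (bucket names must be globally unique)
gsutil mb gs://your-bucket/

# Upload the packaged content to the bucket
gsutil cp tears-of-steel.ism tears-of-steel.ismv gs://your-bucket/

# List the bucket contents to verify the upload
gsutil ls gs://your-bucket/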
The default bucket access is 'private', so content can only be accessed if requests are signed using S3 signatures, as outlined in the Apache Configuration from the Using S3 with Authentication section. Google uses the header signature method as defined by AWS.
If that is not desired, bucket or object permissions can be changed to allow different levels of access.
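For instance, a single object could be made publicly readable with gsutil, so that it no longer requires signed requests; this is a sketch, and the bucket and object names are placeholders:

# Grant read access to all users for one object
gsutil acl ch -u AllUsers:R gs://your-bucket/tears-of-steel.ism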
Following the storage proxy Installation documentation, the newly created bucket and uploaded content can be streamed by adding the UspEnableSubreq directive and defining a <Proxy> section for each remote storage server used.
<Location "/">
UspHandleIsm on
UspEnableSubreq on
IsmProxyPass https://storage.googleapis.com/your-bucket/
</Location>
<Proxy "https://storage.googleapis.com/your-bucket/">
S3SecretKey YOUR_SECRET_KEY
S3AccessKey YOUR_ACCESS_KEY
S3Region YOUR_REGION
S3UseHeaders on
ProxySet connectiontimeout=5 enablereuse=on keepalive=on retry=0 timeout=30 ttl=300
</Proxy>
Attention
It is important to use the 'User account HMAC' as described in the header signature method document and not the 'Service account HMAC'. Using the wrong type of key will result in a 403 response. This can easily be tested with the code snippet presented at the beginning of the header signature method document: in Python, for instance, the Boto library will list the available buckets if the correct access and secret keys have been used, and will otherwise return an error.
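A minimal sketch of such a check with the Boto library is shown below; the access and secret keys are placeholders, and a 403 error at this point indicates that the wrong type of HMAC key is being used:

import boto

# Connect to Google Cloud Storage using the interoperability (HMAC) credentials
conn = boto.connect_gs('YOUR_ACCESS_KEY', 'YOUR_SECRET_KEY')

# Listing the buckets succeeds with valid 'User account HMAC' keys
# and raises an error otherwise
for bucket in conn.get_all_buckets():
    print(bucket.name)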
The URL to the content then becomes the following, for instance for MPEG-DASH:
http://www.example.com/tears-of-steel.ism/.mpd
where www.example.com is the webserver running USP with the previous vhost snippet (and the tears-of-steel content stored in 'your-bucket', which is referenced by both the IsmProxyPass and Proxy directives).
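Whether the proxy setup works end to end can then be checked by requesting the client manifest, for example with curl; the hostname and content name follow the example above:

# Request the MPEG-DASH manifest through the USP webserver;
# a 200 response indicates the proxied bucket access is working
curl -v http://www.example.com/tears-of-steel.ism/.mpd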
Note
For guidelines on how to use Unified Packager with Google Cloud Storage, see How to write directly to Google Cloud Storage.