Update Azure SDK with release v16.2.1
Update Azure autorest SDK with release v10.8.1

Signed-off-by: Yu Wang <yuwa@microsoft.com>

This commit is contained in:
parent 83389a1480
commit 62797237b9
79 changed files with 12075 additions and 2797 deletions
5
vendor/github.com/Azure/azure-sdk-for-go/NOTICE
generated
vendored
Normal file
|
@@ -0,0 +1,5 @@
|
|||
Microsoft Azure-SDK-for-Go
|
||||
Copyright 2014-2017 Microsoft
|
||||
|
||||
This product includes software developed at
|
||||
the Microsoft Corporation (https://www.microsoft.com).
|
321
vendor/github.com/Azure/azure-sdk-for-go/README.md
generated
vendored
|
@@ -1,60 +1,301 @@
|
|||
# Microsoft Azure SDK for Go
|
||||
[](https://godoc.org/github.com/Azure/azure-sdk-for-go)
|
||||
[](https://travis-ci.org/Azure/azure-sdk-for-go)
|
||||
# Azure SDK for Go
|
||||
|
||||
[](https://godoc.org/github.com/Azure/azure-sdk-for-go)
|
||||
[](https://travis-ci.org/Azure/azure-sdk-for-go)
|
||||
[](https://goreportcard.com/report/github.com/Azure/azure-sdk-for-go)
|
||||
|
||||
azure-sdk-for-go provides Go packages for managing and using Azure services. It has been
|
||||
tested with Go 1.8, 1.9 and 1.10.
|
||||
|
||||
This is Microsoft Azure's core repository for hosting Go packages which offer a more convenient way of targeting Azure
|
||||
REST endpoints. Here, you'll find a mix of code generated by [Autorest](https://github.com/Azure/autorest) and hand
|
||||
maintained packages.
|
||||
To be notified about updates and changes, subscribe to the [Azure update
|
||||
feed](https://azure.microsoft.com/updates/).
|
||||
|
||||
> **NOTE:** This repository is under heavy ongoing development and should be considered a preview. Vendoring your
|
||||
dependencies is always a good idea, but it is doubly important if you're consuming this library.
|
||||
Users of the SDK may prefer to jump right in to our samples repo at
|
||||
[github.com/Azure-Samples/azure-sdk-for-go-samples][samples_repo].
|
||||
|
||||
# Installation
|
||||
- If you don't already have it, install [the Go Programming Language](https://golang.org/dl/).
|
||||
- Go get the SDK:
|
||||
### Build Details
|
||||
|
||||
```
|
||||
$ go get -u github.com/Azure/azure-sdk-for-go
|
||||
Most packages in the SDK are generated from [Azure API specs][azure_rest_specs]
|
||||
using [Azure/autorest.go][] and [Azure/autorest][]. These generated packages
|
||||
depend on the HTTP client implemented at [Azure/go-autorest][].
|
||||
|
||||
[azure_rest_specs]: https://github.com/Azure/azure-rest-api-specs
|
||||
[Azure/autorest]: https://github.com/Azure/autorest
|
||||
[Azure/autorest.go]: https://github.com/Azure/autorest.go
|
||||
[Azure/go-autorest]: https://github.com/Azure/go-autorest
|
||||
|
||||
The SDK codebase adheres to [semantic versioning](https://semver.org) and thus
|
||||
avoids breaking changes other than at major (x.0.0) releases. However,
|
||||
occasionally Azure API fixes require breaking updates within an individual
|
||||
package; these exceptions are noted in release changelogs.
|
||||
|
||||
To more reliably manage dependencies like the Azure SDK in your applications we
|
||||
recommend [golang/dep](https://github.com/golang/dep).
|
||||
|
||||
# Install and Use:
|
||||
|
||||
### Install
|
||||
|
||||
```sh
|
||||
$ go get -u github.com/Azure/azure-sdk-for-go/...
|
||||
```
|
||||
|
||||
> **IMPORTANT:** We highly suggest vendoring Azure SDK for Go as a dependency. For vendoring dependencies, Azure SDK
|
||||
for Go uses [glide](https://github.com/Masterminds/glide).
|
||||
or if you use dep, within your repo run:
|
||||
|
||||
```sh
|
||||
$ dep ensure -add github.com/Azure/azure-sdk-for-go
|
||||
```
|
||||
|
||||
If you need to install Go, follow [the official instructions](https://golang.org/dl/).
|
||||
|
||||
### Use
|
||||
|
||||
For complete examples of many scenarios see [Azure-Samples/azure-sdk-for-go-samples][samples_repo].
|
||||
|
||||
1. Import a package from the [services][services_dir] directory.
|
||||
2. Create and authenticate a client with a `New*Client` func, e.g.
|
||||
`c := compute.NewVirtualMachinesClient(...)`.
|
||||
3. Invoke API methods using the client, e.g. `c.CreateOrUpdate(...)`.
|
||||
4. Handle responses.
|
||||
|
||||
[services_dir]: https://github.com/Azure/azure-sdk-for-go/tree/master/services
|
||||
|
||||
For example, to create a new virtual network (substitute your own values for
|
||||
strings in angle brackets):
|
||||
|
||||
Note: For more on authentication and the `Authorizer` interface see [the next
|
||||
section](#authentication).
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2017-09-01/network"
|
||||
"github.com/Azure/go-autorest/autorest/azure/auth"
|
||||
"github.com/Azure/go-autorest/autorest/to"
|
||||
)
|
||||
|
||||
func main() {
|
||||
vnetClient := network.NewVirtualNetworksClient("<subscriptionID>")
|
||||
authorizer, err := auth.NewAuthorizerFromEnvironment()
|
||||
|
||||
if err == nil {
|
||||
vnetClient.Authorizer = authorizer
|
||||
}
|
||||
|
||||
vnetClient.CreateOrUpdate(context.Background(),
|
||||
"<resourceGroupName>",
|
||||
"<vnetName>",
|
||||
network.VirtualNetwork{
|
||||
Location: to.StringPtr("<azureRegion>"),
|
||||
VirtualNetworkPropertiesFormat: &network.VirtualNetworkPropertiesFormat{
|
||||
AddressSpace: &network.AddressSpace{
|
||||
AddressPrefixes: &[]string{"10.0.0.0/8"},
|
||||
},
|
||||
Subnets: &[]network.Subnet{
|
||||
{
|
||||
Name: to.StringPtr("<subnet1Name>"),
|
||||
SubnetPropertiesFormat: &network.SubnetPropertiesFormat{
|
||||
AddressPrefix: to.StringPtr("10.0.0.0/16"),
|
||||
},
|
||||
},
|
||||
{
|
||||
Name: to.StringPtr("<subnet2Name>"),
|
||||
SubnetPropertiesFormat: &network.SubnetPropertiesFormat{
|
||||
AddressPrefix: to.StringPtr("10.1.0.0/16"),
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
})
|
||||
}
|
||||
```
|
||||
|
||||
### Authentication
|
||||
|
||||
Most SDK operations require an OAuth token for authentication and authorization. These are
|
||||
made available in the Azure SDK for Go through types implementing the `Authorizer` interface.
|
||||
You can get one from Azure Active Directory using the SDK's
|
||||
[authentication](https://godoc.org/github.com/Azure/go-autorest/autorest/azure/auth) package. The `Authorizer` returned should
|
||||
be set as the authorizer for the resource client, as shown in the [previous section](#use).
|
||||
|
||||
You can get an authorizer in the following ways:
|
||||
1. From the **Environment**:
|
||||
- Use `auth.NewAuthorizerFromEnvironment()`. This call will try to get an authorizer based on the environment
|
||||
variables with different types of credentials in the following order:
|
||||
1. **Client Credentials**: Uses the AAD App Secret for auth.
|
||||
- `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
|
||||
- `AZURE_CLIENT_ID`: Specifies the app client ID to use.
|
||||
- `AZURE_CLIENT_SECRET`: Specifies the app secret to use.
|
||||
2. **Client Certificate**: Uses a certificate that was configured on the AAD Service Principal.
|
||||
- `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
|
||||
- `AZURE_CLIENT_ID`: Specifies the app client ID to use.
|
||||
- `AZURE_CERTIFICATE_PATH`: Specifies the certificate path to use.
|
||||
- `AZURE_CERTIFICATE_PASSWORD`: Specifies the certificate password to use.
|
||||
3. **Username Password**: Uses a username and a password for auth. This is not recommended; use `Device Flow` auth instead for interactive user access.
|
||||
- `AZURE_TENANT_ID`: Specifies the Tenant to which to authenticate.
|
||||
- `AZURE_CLIENT_ID`: Specifies the app client ID to use.
|
||||
- `AZURE_USERNAME`: Specifies the username to use.
|
||||
- `AZURE_PASSWORD`: Specifies the password to use.
|
||||
4. **MSI**: Only available for apps running in Azure. No configuration needed as it leverages the fact that the app is running in Azure. See [Azure Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/msi-overview).
|
||||
|
||||
- Optionally, the following environment variables can be defined:
|
||||
- `AZURE_ENVIRONMENT`: Specifies the Azure Environment to use. If not set, it defaults to `AzurePublicCloud`. (Not applicable to MSI based auth)
|
||||
- `AZURE_AD_RESOURCE`: Specifies the AAD resource ID to use. If not set, it defaults to `ResourceManagerEndpoint`, which allows management operations against Azure Resource Manager.
|
||||
|
||||
2. From an **Auth File**:
|
||||
- Create a service principal and output the file content using `az ad sp create-for-rbac --sdk-auth` from the Azure CLI. For more details, see [az ad sp](https://docs.microsoft.com/en-us/cli/azure/ad/sp).
|
||||
- Set the environment variable `AZURE_AUTH_LOCATION` to the path of the auth file.
|
||||
- Use `auth.NewAuthorizerFromFile()` to get an `Authorizer` based on the auth file.
|
||||
|
||||
3. From **Device Flow** by configuring `auth.DeviceFlowConfig` and calling the `Authorizer()` method, as sketched in the example below.
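The following is a minimal, hedged sketch of wiring an `Authorizer` obtained by any of these methods into a resource client. The resource-endpoint argument passed to `NewAuthorizerFromFile` is an assumption here, and the exact `auth` package signatures can vary between go-autorest releases, so treat this as an outline rather than the canonical API.

```go
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2017-09-01/network"
	"github.com/Azure/go-autorest/autorest/azure/auth"
)

func main() {
	vnetClient := network.NewVirtualNetworksClient("<subscriptionID>")

	// 1. From the environment: reads AZURE_TENANT_ID, AZURE_CLIENT_ID,
	//    AZURE_CLIENT_SECRET, etc., as described above.
	authorizer, err := auth.NewAuthorizerFromEnvironment()

	// 2. From an auth file: point AZURE_AUTH_LOCATION at the file produced by
	//    `az ad sp create-for-rbac --sdk-auth`, then (assumed argument: the
	//    target resource endpoint):
	//    authorizer, err := auth.NewAuthorizerFromFile("https://management.azure.com/")

	// 3. From device flow: configure an auth.DeviceFlowConfig and call its
	//    Authorizer() method (see the auth package docs for the exact fields).

	if err != nil {
		log.Fatalf("could not get an Authorizer: %v", err)
	}
	vnetClient.Authorizer = authorizer

	// The client is now ready to call management APIs, e.g. vnetClient.CreateOrUpdate(...).
}
```

Only the way the `Authorizer` is produced changes; attaching it to the client is the same as in the [Use](#use) example above.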
|
||||
|
||||
Note: To authenticate, you first need a service principal in Azure. To create one, run
|
||||
`az ad sp create-for-rbac -n "<app_name>"` in the
|
||||
[azure-cli](https://github.com/Azure/azure-cli). See
|
||||
[these docs](https://docs.microsoft.com/cli/azure/create-an-azure-service-principal-azure-cli?view=azure-cli-latest)
|
||||
for more info. Copy the new principal's ID, secret, and tenant ID for use in your app.
|
||||
|
||||
Alternatively, if your apps are running in Azure, you can now leverage the [Managed Service Identity](https://docs.microsoft.com/en-us/azure/active-directory/msi-overview).
|
||||
|
||||
# Versioning
|
||||
## SDK Versions
|
||||
The tags in this repository are based on, but do not conform to [SemVer.org's recommendations](http://semver.org/).
|
||||
For now, the "-beta" tag is an indicator that we are still in preview and still are planning on releasing some breaking
|
||||
changes.
|
||||
|
||||
In repositories that are children of this one, [storage for example](https://github.com/Azure/azure-storage-go), we
|
||||
have adopted SemVer.org's recommendations.
|
||||
azure-sdk-for-go provides at least a basic Go binding for every Azure API. To
|
||||
provide maximum flexibility to users, the SDK even includes previous versions of
|
||||
Azure APIs which are still in use. This enables us to support users of the
|
||||
most updated Azure datacenters, regional datacenters with earlier APIs, and
|
||||
even on-premises installations of Azure Stack.
|
||||
|
||||
## Azure Versions
|
||||
Azure services _mostly_ do not use SemVer based versions. Rather, they use profiles identified by dates. One will often
|
||||
see this casually referred to as an "APIVersion". At the moment, our SDK only supports the most recent profiles. In
|
||||
order to lock to an API version, one must also lock to an SDK version. However, as discussed in
|
||||
[#517](https://github.com/Azure/azure-sdk-for-go/issues/517), our objective is to reorganize and publish independent
|
||||
packages for each profile. In that way, we'll be able to have parallel support in a single SDK version for all
|
||||
APIVersions supported by Azure.
|
||||
**SDK versions** apply globally and are tracked by git
|
||||
[tags](https://github.com/Azure/azure-sdk-for-go/tags). These are in x.y.z form
|
||||
and generally adhere to [semantic versioning](https://semver.org) specifications.
|
||||
|
||||
# Documentation
|
||||
**Service API versions** are generally represented by a date string and are
|
||||
tracked by offering separate packages for each version. For example, to choose the
|
||||
latest API versions for Compute and Network, use the following imports:
|
||||
|
||||
- Azure SDK for Go Documentation is available at [GoDoc.org](http://godoc.org/github.com/Azure/azure-sdk-for-go/).
|
||||
- Azure REST APIs used by packages in this repository are documented at [Microsoft Docs, Azure REST](https://docs.microsoft.com/en-us/rest/api/).
|
||||
- Azure Services are discussed in detail at [Microsoft Docs, Azure Services](https://docs.microsoft.com/en-us/azure/#pivot=services).
|
||||
```go
|
||||
import (
|
||||
"github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2017-12-01/compute"
|
||||
"github.com/Azure/azure-sdk-for-go/services/network/mgmt/2017-09-01/network"
|
||||
)
|
||||
```
|
||||
|
||||
# License
|
||||
Occasionally service-side changes require major changes to existing versions.
|
||||
These cases are noted in the changelog.
|
||||
|
||||
This project is published under [Apache 2.0 License](LICENSE).
|
||||
All available services and versions are listed under the `services/` path in
|
||||
this repo and in [GoDoc][services_godoc]. Run `find ./services -type d
|
||||
-mindepth 3` to list all available service packages.
|
||||
|
||||
[services_godoc]: https://godoc.org/github.com/Azure/azure-sdk-for-go/services
|
||||
|
||||
# Contribute
|
||||
### Profiles
|
||||
|
||||
If you would like to become an active contributor to this project please follow the instructions provided in [Microsoft
|
||||
Azure Projects Contribution Guidelines](http://azure.github.io/guidelines/).
|
||||
Azure **API profiles** specify subsets of Azure APIs and versions. Profiles can provide:
|
||||
|
||||
* **stability** for your application by locking to specific API versions; and/or
|
||||
* **compatibility** for your application with Azure Stack and regional Azure datacenters.
|
||||
|
||||
In the Go SDK, profiles are available under the `profiles/` path and their
|
||||
component API versions are aliases to the true service package under
|
||||
`services/`. You can use them as follows:
|
||||
|
||||
```go
|
||||
import "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/compute/mgmt/compute"
|
||||
import "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/network/mgmt/network"
|
||||
import "github.com/Azure/azure-sdk-for-go/profiles/2017-03-09/storage/mgmt/storage"
|
||||
```
|
||||
|
||||
The 2017-03-09 profile is the only one currently available and is for use in
|
||||
hybrid Azure and Azure Stack environments. More profiles are under development.
|
||||
|
||||
In addition to versioned profiles, we also provide two special profiles
|
||||
`latest` and `preview`. These *always* include the most recent respective stable or
|
||||
preview API versions for each service, even when updating them to do so causes
|
||||
breaking changes. That is, these do *not* adhere to semantic versioning rules.
|
||||
|
||||
The `latest` and `preview` profiles can help you stay up to date with API
|
||||
updates as you build applications. Since they are by definition not stable,
|
||||
however, they **should not** be used in production apps. Instead, choose the
|
||||
latest specific API version (or an older one if necessary) from the `services/`
|
||||
path.
|
||||
|
||||
As an example, to automatically use the most recent Compute APIs, use one of
|
||||
the following imports:
|
||||
|
||||
```go
|
||||
import "github.com/Azure/azure-sdk-for-go/profiles/latest/compute/mgmt/compute"
|
||||
import "github.com/Azure/azure-sdk-for-go/profiles/preview/compute/mgmt/compute"
|
||||
```
|
||||
|
||||
## Inspecting and Debugging
|
||||
|
||||
All clients implement some handy hooks to help inspect the underlying requests being made to Azure.
|
||||
|
||||
- `RequestInspector`: View and manipulate the Go `http.Request` before it's sent
|
||||
- `ResponseInspector`: View the `http.Response` received
|
||||
|
||||
Here is an example of how these can be used with `net/http/httputil` to see requests and responses.
|
||||
|
||||
```go
|
||||
|
||||
vnetClient := network.NewVirtualNetworksClient("<subscriptionID>")
|
||||
vnetClient.RequestInspector = LogRequest()
|
||||
vnetClient.ResponseInspector = LogResponse()
|
||||
|
||||
...
|
||||
|
||||
func LogRequest() autorest.PrepareDecorator {
|
||||
return func(p autorest.Preparer) autorest.Preparer {
|
||||
return autorest.PreparerFunc(func(r *http.Request) (*http.Request, error) {
|
||||
r, err := p.Prepare(r)
|
||||
if err != nil {
|
||||
log.Println(err)
|
||||
}
|
||||
dump, _ := httputil.DumpRequestOut(r, true)
|
||||
log.Println(string(dump))
|
||||
return r, err
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
func LogResponse() autorest.RespondDecorator {
|
||||
return func(p autorest.Responder) autorest.Responder {
|
||||
return autorest.ResponderFunc(func(r *http.Response) error {
|
||||
err := p.Respond(r)
|
||||
if err != nil {
|
||||
log.Println(err)
|
||||
}
|
||||
dump, _ := httputil.DumpResponse(r, true)
|
||||
log.Println(string(dump))
|
||||
return err
|
||||
})
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
# Resources
|
||||
|
||||
- SDK docs are at [godoc.org](https://godoc.org/github.com/Azure/azure-sdk-for-go/).
|
||||
- SDK samples are at [Azure-Samples/azure-sdk-for-go-samples](https://github.com/Azure-Samples/azure-sdk-for-go-samples).
|
||||
- SDK notifications are published via the [Azure update feed](https://azure.microsoft.com/updates/).
|
||||
- Azure API docs are at [docs.microsoft.com/rest/api](https://docs.microsoft.com/rest/api/).
|
||||
- General Azure docs are at [docs.microsoft.com/azure](https://docs.microsoft.com/azure).
|
||||
|
||||
### Other Azure packages for Go
|
||||
|
||||
- [Azure Storage Blobs](https://azure.microsoft.com/services/storage/blobs) - [github.com/Azure/azure-storage-blob-go](https://github.com/Azure/azure-storage-blob-go)
|
||||
- [Azure Applications Insights](https://azure.microsoft.com/en-us/services/application-insights/) - [github.com/Microsoft/ApplicationInsights-Go](https://github.com/Microsoft/ApplicationInsights-Go)
|
||||
|
||||
## License
|
||||
|
||||
Apache 2.0, see [LICENSE](./LICENSE).
|
||||
|
||||
## Contribute
|
||||
|
||||
See [CONTRIBUTING.md](./CONTRIBUTING.md).
|
||||
|
||||
[samples_repo]: https://github.com/Azure-Samples/azure-sdk-for-go-samples
|
||||
|
||||
This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
|
||||
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact
|
||||
[opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
|
||||
|
|
18
vendor/github.com/Azure/azure-sdk-for-go/storage/README.md
generated
vendored
Normal file
|
@@ -0,0 +1,18 @@
|
|||
# Azure Storage SDK for Go (Preview)
|
||||
|
||||
:exclamation: IMPORTANT: This package is in maintenance only and will be deprecated in the
|
||||
future. Consider using the new package for blobs currently in preview at
|
||||
[github.com/Azure/azure-storage-blob-go](https://github.com/Azure/azure-storage-blob-go).
|
||||
New Table, Queue and File packages are also in development.
|
||||
|
||||
The `github.com/Azure/azure-sdk-for-go/storage` package is used to manage
|
||||
[Azure Storage](https://docs.microsoft.com/en-us/azure/storage/) data plane
|
||||
resources: containers, blobs, tables, and queues.
|
||||
|
||||
To manage storage *accounts* use Azure Resource Manager (ARM) via the packages
|
||||
at [github.com/Azure/azure-sdk-for-go/services/storage](https://github.com/Azure/azure-sdk-for-go/tree/master/services/storage).
|
||||
|
||||
This package also supports the [Azure Storage
|
||||
Emulator](https://azure.microsoft.com/documentation/articles/storage-use-emulator/)
|
||||
(Windows only).
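As a quick orientation for the data-plane package vendored below, here is a hedged sketch of creating a shared-key client and getting container and blob references. `GetContainerReference` appears in this vendored code; `NewBasicClient`, `GetBlobService`, `GetBlobReference` and `GetURL` are assumed from the same package, so verify the signatures against the version you pin.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// A basic client authenticates with the storage account name and key.
	client, err := storage.NewBasicClient("<accountName>", "<accountKey>")
	if err != nil {
		log.Fatal(err)
	}

	// The blob service client hands out container and blob references.
	bsc := client.GetBlobService()
	cnt := bsc.GetContainerReference("mycontainer")
	blob := cnt.GetBlobReference("myblob")

	fmt.Println(blob.GetURL())
}
```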
|
||||
|
91
vendor/github.com/Azure/azure-sdk-for-go/storage/appendblob.go
generated
vendored
Normal file
|
@@ -0,0 +1,91 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"crypto/md5"
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"time"
|
||||
)
|
||||
|
||||
// PutAppendBlob initializes an empty append blob with specified name. An
|
||||
// append blob must be created using this method before appending blocks.
|
||||
//
|
||||
// See CreateBlockBlobFromReader for more info on creating blobs.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob
|
||||
func (b *Blob) PutAppendBlob(options *PutBlobOptions) error {
|
||||
params := url.Values{}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-blob-type"] = string(BlobTypeAppend)
|
||||
headers = mergeHeaders(headers, headersFromStruct(b.Properties))
|
||||
headers = b.Container.bsc.client.addMetadataToHeaders(headers, b.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return b.respondCreation(resp, BlobTypeAppend)
|
||||
}
|
||||
|
||||
// AppendBlockOptions includes the options for an append block operation
|
||||
type AppendBlockOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
MaxSize *uint `header:"x-ms-blob-condition-maxsize"`
|
||||
AppendPosition *uint `header:"x-ms-blob-condition-appendpos"`
|
||||
IfModifiedSince *time.Time `header:"If-Modified-Since"`
|
||||
IfUnmodifiedSince *time.Time `header:"If-Unmodified-Since"`
|
||||
IfMatch string `header:"If-Match"`
|
||||
IfNoneMatch string `header:"If-None-Match"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
ContentMD5 bool
|
||||
}
|
||||
|
||||
// AppendBlock appends a block to an append blob.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Append-Block
|
||||
func (b *Blob) AppendBlock(chunk []byte, options *AppendBlockOptions) error {
|
||||
params := url.Values{"comp": {"appendblock"}}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-blob-type"] = string(BlobTypeAppend)
|
||||
headers["Content-Length"] = fmt.Sprintf("%v", len(chunk))
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
if options.ContentMD5 {
|
||||
md5sum := md5.Sum(chunk)
|
||||
headers[headerContentMD5] = base64.StdEncoding.EncodeToString(md5sum[:])
|
||||
}
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, bytes.NewReader(chunk), b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return b.respondCreation(resp, BlobTypeAppend)
|
||||
}
|
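A hedged usage sketch for the two calls above: `PutAppendBlob` initializes the empty append blob once, and `AppendBlock` then appends chunks to it. The container and blob names are placeholders, and `NewBasicClient`, `GetBlobService` and `GetBlobReference` are assumed from the wider storage package.

```go
package main

import (
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("<accountName>", "<accountKey>")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()
	blob := bsc.GetContainerReference("logs").GetBlobReference("log.txt")

	// An append blob must be initialized once before blocks are appended.
	if err := blob.PutAppendBlob(nil); err != nil {
		log.Fatal(err)
	}

	// ContentMD5 asks the SDK to send a transactional MD5 header per chunk.
	opts := &storage.AppendBlockOptions{ContentMD5: true}
	for _, line := range []string{"first line\n", "second line\n"} {
		if err := blob.AppendBlock([]byte(line), opts); err != nil {
			log.Fatal(err)
		}
	}
}
```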
67
vendor/github.com/Azure/azure-sdk-for-go/storage/authorization.go
generated
vendored
|
@@ -1,6 +1,20 @@
|
|||
// Package storage provides clients for Microsoft Azure Storage Services.
|
||||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
|
@@ -20,33 +34,39 @@ const (
|
|||
sharedKeyLiteForTable authentication = "sharedKeyLiteTable"
|
||||
|
||||
// headers
|
||||
headerAuthorization = "Authorization"
|
||||
headerContentLength = "Content-Length"
|
||||
headerDate = "Date"
|
||||
headerXmsDate = "x-ms-date"
|
||||
headerXmsVersion = "x-ms-version"
|
||||
headerContentEncoding = "Content-Encoding"
|
||||
headerContentLanguage = "Content-Language"
|
||||
headerContentType = "Content-Type"
|
||||
headerContentMD5 = "Content-MD5"
|
||||
headerIfModifiedSince = "If-Modified-Since"
|
||||
headerIfMatch = "If-Match"
|
||||
headerIfNoneMatch = "If-None-Match"
|
||||
headerIfUnmodifiedSince = "If-Unmodified-Since"
|
||||
headerRange = "Range"
|
||||
headerAcceptCharset = "Accept-Charset"
|
||||
headerAuthorization = "Authorization"
|
||||
headerContentLength = "Content-Length"
|
||||
headerDate = "Date"
|
||||
headerXmsDate = "x-ms-date"
|
||||
headerXmsVersion = "x-ms-version"
|
||||
headerContentEncoding = "Content-Encoding"
|
||||
headerContentLanguage = "Content-Language"
|
||||
headerContentType = "Content-Type"
|
||||
headerContentMD5 = "Content-MD5"
|
||||
headerIfModifiedSince = "If-Modified-Since"
|
||||
headerIfMatch = "If-Match"
|
||||
headerIfNoneMatch = "If-None-Match"
|
||||
headerIfUnmodifiedSince = "If-Unmodified-Since"
|
||||
headerRange = "Range"
|
||||
headerDataServiceVersion = "DataServiceVersion"
|
||||
headerMaxDataServiceVersion = "MaxDataServiceVersion"
|
||||
headerContentTransferEncoding = "Content-Transfer-Encoding"
|
||||
)
|
||||
|
||||
func (c *Client) addAuthorizationHeader(verb, url string, headers map[string]string, auth authentication) (map[string]string, error) {
|
||||
authHeader, err := c.getSharedKey(verb, url, headers, auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
if !c.sasClient {
|
||||
authHeader, err := c.getSharedKey(verb, url, headers, auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
headers[headerAuthorization] = authHeader
|
||||
}
|
||||
headers[headerAuthorization] = authHeader
|
||||
return headers, nil
|
||||
}
|
||||
|
||||
func (c *Client) getSharedKey(verb, url string, headers map[string]string, auth authentication) (string, error) {
|
||||
canRes, err := c.buildCanonicalizedResource(url, auth)
|
||||
canRes, err := c.buildCanonicalizedResource(url, auth, false)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
@@ -58,15 +78,18 @@ func (c *Client) getSharedKey(verb, url string, headers map[string]string, auth
|
|||
return c.createAuthorizationHeader(canString, auth), nil
|
||||
}
|
||||
|
||||
func (c *Client) buildCanonicalizedResource(uri string, auth authentication) (string, error) {
|
||||
func (c *Client) buildCanonicalizedResource(uri string, auth authentication, sas bool) (string, error) {
|
||||
errMsg := "buildCanonicalizedResource error: %s"
|
||||
u, err := url.Parse(uri)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf(errMsg, err.Error())
|
||||
}
|
||||
|
||||
cr := bytes.NewBufferString("/")
|
||||
cr.WriteString(c.getCanonicalizedAccountName())
|
||||
cr := bytes.NewBufferString("")
|
||||
if c.accountName != StorageEmulatorAccountName || !sas {
|
||||
cr.WriteString("/")
|
||||
cr.WriteString(c.getCanonicalizedAccountName())
|
||||
}
|
||||
|
||||
if len(u.Path) > 0 {
|
||||
// Any portion of the CanonicalizedResource string that is derived from
|
||||
|
|
1252
vendor/github.com/Azure/azure-sdk-for-go/storage/blob.go
generated
vendored
File diff suppressed because it is too large
174
vendor/github.com/Azure/azure-sdk-for-go/storage/blobsasuri.go
generated
vendored
Normal file
|
@@ -0,0 +1,174 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// OverrideHeaders defines overridable response headers in
|
||||
// a request using a SAS URI.
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
|
||||
type OverrideHeaders struct {
|
||||
CacheControl string
|
||||
ContentDisposition string
|
||||
ContentEncoding string
|
||||
ContentLanguage string
|
||||
ContentType string
|
||||
}
|
||||
|
||||
// BlobSASOptions are options to construct a blob SAS
|
||||
// URI.
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
|
||||
type BlobSASOptions struct {
|
||||
BlobServiceSASPermissions
|
||||
OverrideHeaders
|
||||
SASOptions
|
||||
}
|
||||
|
||||
// BlobServiceSASPermissions includes the available permissions for
|
||||
// blob service SAS URI.
|
||||
type BlobServiceSASPermissions struct {
|
||||
Read bool
|
||||
Add bool
|
||||
Create bool
|
||||
Write bool
|
||||
Delete bool
|
||||
}
|
||||
|
||||
func (p BlobServiceSASPermissions) buildString() string {
|
||||
permissions := ""
|
||||
if p.Read {
|
||||
permissions += "r"
|
||||
}
|
||||
if p.Add {
|
||||
permissions += "a"
|
||||
}
|
||||
if p.Create {
|
||||
permissions += "c"
|
||||
}
|
||||
if p.Write {
|
||||
permissions += "w"
|
||||
}
|
||||
if p.Delete {
|
||||
permissions += "d"
|
||||
}
|
||||
return permissions
|
||||
}
|
||||
|
||||
// GetSASURI creates a URL to the blob which contains the Shared
|
||||
// Access Signature with the specified options.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
|
||||
func (b *Blob) GetSASURI(options BlobSASOptions) (string, error) {
|
||||
uri := b.GetURL()
|
||||
signedResource := "b"
|
||||
canonicalizedResource, err := b.Container.bsc.client.buildCanonicalizedResource(uri, b.Container.bsc.auth, true)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
permissions := options.BlobServiceSASPermissions.buildString()
|
||||
return b.Container.bsc.client.blobAndFileSASURI(options.SASOptions, uri, permissions, canonicalizedResource, signedResource, options.OverrideHeaders)
|
||||
}
|
||||
|
||||
func (c *Client) blobAndFileSASURI(options SASOptions, uri, permissions, canonicalizedResource, signedResource string, headers OverrideHeaders) (string, error) {
|
||||
start := ""
|
||||
if options.Start != (time.Time{}) {
|
||||
start = options.Start.UTC().Format(time.RFC3339)
|
||||
}
|
||||
|
||||
expiry := options.Expiry.UTC().Format(time.RFC3339)
|
||||
|
||||
// We need to replace + with %2b first to avoid being treated as a space (which is correct for query strings, but not the path component).
|
||||
canonicalizedResource = strings.Replace(canonicalizedResource, "+", "%2b", -1)
|
||||
canonicalizedResource, err := url.QueryUnescape(canonicalizedResource)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
protocols := ""
|
||||
if options.UseHTTPS {
|
||||
protocols = "https"
|
||||
}
|
||||
stringToSign, err := blobSASStringToSign(permissions, start, expiry, canonicalizedResource, options.Identifier, options.IP, protocols, c.apiVersion, headers)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
sig := c.computeHmac256(stringToSign)
|
||||
sasParams := url.Values{
|
||||
"sv": {c.apiVersion},
|
||||
"se": {expiry},
|
||||
"sr": {signedResource},
|
||||
"sp": {permissions},
|
||||
"sig": {sig},
|
||||
}
|
||||
|
||||
if start != "" {
|
||||
sasParams.Add("st", start)
|
||||
}
|
||||
|
||||
if c.apiVersion >= "2015-04-05" {
|
||||
if protocols != "" {
|
||||
sasParams.Add("spr", protocols)
|
||||
}
|
||||
if options.IP != "" {
|
||||
sasParams.Add("sip", options.IP)
|
||||
}
|
||||
}
|
||||
|
||||
// Add override response headers
|
||||
addQueryParameter(sasParams, "rscc", headers.CacheControl)
|
||||
addQueryParameter(sasParams, "rscd", headers.ContentDisposition)
|
||||
addQueryParameter(sasParams, "rsce", headers.ContentEncoding)
|
||||
addQueryParameter(sasParams, "rscl", headers.ContentLanguage)
|
||||
addQueryParameter(sasParams, "rsct", headers.ContentType)
|
||||
|
||||
sasURL, err := url.Parse(uri)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
sasURL.RawQuery = sasParams.Encode()
|
||||
return sasURL.String(), nil
|
||||
}
|
||||
|
||||
func blobSASStringToSign(signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion string, headers OverrideHeaders) (string, error) {
|
||||
rscc := headers.CacheControl
|
||||
rscd := headers.ContentDisposition
|
||||
rsce := headers.ContentEncoding
|
||||
rscl := headers.ContentLanguage
|
||||
rsct := headers.ContentType
|
||||
|
||||
if signedVersion >= "2015-02-21" {
|
||||
canonicalizedResource = "/blob" + canonicalizedResource
|
||||
}
|
||||
|
||||
// https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx#Anchor_12
|
||||
if signedVersion >= "2015-04-05" {
|
||||
return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s", signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedIP, protocols, signedVersion, rscc, rscd, rsce, rscl, rsct), nil
|
||||
}
|
||||
|
||||
// reference: http://msdn.microsoft.com/en-us/library/azure/dn140255.aspx
|
||||
if signedVersion >= "2013-08-15" {
|
||||
return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s", signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedVersion, rscc, rscd, rsce, rscl, rsct), nil
|
||||
}
|
||||
|
||||
return "", errors.New("storage: not implemented SAS for versions earlier than 2013-08-15")
|
||||
}
|
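To illustrate `GetSASURI`, here is a hedged sketch that issues a read-only SAS link for a blob. The `SASOptions` and `BlobServiceSASPermissions` field names are taken from the code above; the `NewBasicClient`/`GetBlobService`/`GetBlobReference` calls and the container and blob names are illustrative assumptions.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("<accountName>", "<accountKey>")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()
	blob := bsc.GetContainerReference("mycontainer").GetBlobReference("myblob")

	// Read-only SAS, valid for one hour, restricted to HTTPS.
	opts := storage.BlobSASOptions{
		BlobServiceSASPermissions: storage.BlobServiceSASPermissions{Read: true},
		SASOptions: storage.SASOptions{
			Expiry:   time.Now().UTC().Add(time.Hour),
			UseHTTPS: true,
		},
	}

	sasURI, err := blob.GetSASURI(opts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(sasURI)
}
```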
114
vendor/github.com/Azure/azure-sdk-for-go/storage/blobserviceclient.go
generated
vendored
|
@@ -1,9 +1,26 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// BlobStorageClient contains operations for Microsoft Azure Blob Storage
|
||||
|
@@ -38,13 +55,28 @@ type ListContainersParameters struct {
|
|||
}
|
||||
|
||||
// GetContainerReference returns a Container object for the specified container name.
|
||||
func (b BlobStorageClient) GetContainerReference(name string) Container {
|
||||
return Container{
|
||||
bsc: &b,
|
||||
func (b *BlobStorageClient) GetContainerReference(name string) *Container {
|
||||
return &Container{
|
||||
bsc: b,
|
||||
Name: name,
|
||||
}
|
||||
}
|
||||
|
||||
// GetContainerReferenceFromSASURI returns a Container object for the specified
|
||||
// container SASURI
|
||||
func GetContainerReferenceFromSASURI(sasuri url.URL) (*Container, error) {
|
||||
path := strings.Split(sasuri.Path, "/")
|
||||
if len(path) <= 1 {
|
||||
return nil, fmt.Errorf("could not find a container in URI: %s", sasuri.String())
|
||||
}
|
||||
cli := newSASClient().GetBlobService()
|
||||
return &Container{
|
||||
bsc: &cli,
|
||||
Name: path[1],
|
||||
sasuri: sasuri,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// ListContainers returns the list of containers in a storage account along with
|
||||
// pagination token and other response details.
|
||||
//
|
||||
|
@@ -54,18 +86,53 @@ func (b BlobStorageClient) ListContainers(params ListContainersParameters) (*Con
|
|||
uri := b.client.getEndpoint(blobServiceName, "", q)
|
||||
headers := b.client.getStandardHeaders()
|
||||
|
||||
var out ContainerListResponse
|
||||
type ContainerAlias struct {
|
||||
bsc *BlobStorageClient
|
||||
Name string `xml:"Name"`
|
||||
Properties ContainerProperties `xml:"Properties"`
|
||||
Metadata BlobMetadata
|
||||
sasuri url.URL
|
||||
}
|
||||
type ContainerListResponseAlias struct {
|
||||
XMLName xml.Name `xml:"EnumerationResults"`
|
||||
Xmlns string `xml:"xmlns,attr"`
|
||||
Prefix string `xml:"Prefix"`
|
||||
Marker string `xml:"Marker"`
|
||||
NextMarker string `xml:"NextMarker"`
|
||||
MaxResults int64 `xml:"MaxResults"`
|
||||
Containers []ContainerAlias `xml:"Containers>Container"`
|
||||
}
|
||||
|
||||
var outAlias ContainerListResponseAlias
|
||||
resp, err := b.client.exec(http.MethodGet, uri, headers, nil, b.auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
err = xmlUnmarshal(resp.body, &out)
|
||||
|
||||
// assign our client to the newly created Container objects
|
||||
for i := range out.Containers {
|
||||
out.Containers[i].bsc = &b
|
||||
defer resp.Body.Close()
|
||||
err = xmlUnmarshal(resp.Body, &outAlias)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
out := ContainerListResponse{
|
||||
XMLName: outAlias.XMLName,
|
||||
Xmlns: outAlias.Xmlns,
|
||||
Prefix: outAlias.Prefix,
|
||||
Marker: outAlias.Marker,
|
||||
NextMarker: outAlias.NextMarker,
|
||||
MaxResults: outAlias.MaxResults,
|
||||
Containers: make([]Container, len(outAlias.Containers)),
|
||||
}
|
||||
for i, cnt := range outAlias.Containers {
|
||||
out.Containers[i] = Container{
|
||||
bsc: &b,
|
||||
Name: cnt.Name,
|
||||
Properties: cnt.Properties,
|
||||
Metadata: map[string]string(cnt.Metadata),
|
||||
sasuri: cnt.sasuri,
|
||||
}
|
||||
}
|
||||
|
||||
return &out, err
|
||||
}
|
||||
|
||||
|
@@ -82,11 +149,34 @@ func (p ListContainersParameters) getParameters() url.Values {
|
|||
out.Set("include", p.Include)
|
||||
}
|
||||
if p.MaxResults != 0 {
|
||||
out.Set("maxresults", fmt.Sprintf("%v", p.MaxResults))
|
||||
out.Set("maxresults", strconv.FormatUint(uint64(p.MaxResults), 10))
|
||||
}
|
||||
if p.Timeout != 0 {
|
||||
out.Set("timeout", fmt.Sprintf("%v", p.Timeout))
|
||||
out.Set("timeout", strconv.FormatUint(uint64(p.Timeout), 10))
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
func writeMetadata(h http.Header) map[string]string {
|
||||
metadata := make(map[string]string)
|
||||
for k, v := range h {
|
||||
// Can't trust CanonicalHeaderKey() to munge case
|
||||
// reliably. "_" is allowed in identifiers:
|
||||
// https://msdn.microsoft.com/en-us/library/azure/dd179414.aspx
|
||||
// https://msdn.microsoft.com/library/aa664670(VS.71).aspx
|
||||
// http://tools.ietf.org/html/rfc7230#section-3.2
|
||||
// ...but "_" is considered invalid by
|
||||
// CanonicalMIMEHeaderKey in
|
||||
// https://golang.org/src/net/textproto/reader.go?s=14615:14659#L542
|
||||
// so k can be "X-Ms-Meta-Lol" or "x-ms-meta-lol_rofl".
|
||||
k = strings.ToLower(k)
|
||||
if len(v) == 0 || !strings.HasPrefix(k, strings.ToLower(userDefinedMetadataHeaderPrefix)) {
|
||||
continue
|
||||
}
|
||||
// metadata["lol"] = content of the last X-Ms-Meta-Lol header
|
||||
k = k[len(userDefinedMetadataHeaderPrefix):]
|
||||
metadata[k] = v[len(v)-1]
|
||||
}
|
||||
return metadata
|
||||
}
|
||||
|
|
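A hedged sketch of the two entry points above: listing containers with `ListContainersParameters`, and building a `*Container` directly from a container SAS URI via `GetContainerReferenceFromSASURI`. The `Prefix` and `MaxResults` fields and the `NewBasicClient` call are assumed from the wider package, and the SAS URL is a placeholder.

```go
package main

import (
	"fmt"
	"log"
	"net/url"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("<accountName>", "<accountKey>")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()

	// List up to 100 containers whose names start with "backup".
	res, err := bsc.ListContainers(storage.ListContainersParameters{
		Prefix:     "backup",
		MaxResults: 100,
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, cnt := range res.Containers {
		fmt.Println(cnt.Name)
	}

	// Alternatively, work against a container shared through a SAS URI.
	sasURL, err := url.Parse("https://<account>.blob.core.windows.net/<container>?<sas-token>")
	if err != nil {
		log.Fatal(err)
	}
	cnt, err := storage.GetContainerReferenceFromSASURI(*sasURL)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cnt.Name)
}
```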
270
vendor/github.com/Azure/azure-sdk-for-go/storage/blockblob.go
generated
vendored
Normal file
|
@@ -0,0 +1,270 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// BlockListType is used to filter out types of blocks in a Get Blocks List call
|
||||
// for a block blob.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179400.aspx for all
|
||||
// block types.
|
||||
type BlockListType string
|
||||
|
||||
// Filters for listing blocks in block blobs
|
||||
const (
|
||||
BlockListTypeAll BlockListType = "all"
|
||||
BlockListTypeCommitted BlockListType = "committed"
|
||||
BlockListTypeUncommitted BlockListType = "uncommitted"
|
||||
)
|
||||
|
||||
// Maximum sizes (per REST API) for various concepts
|
||||
const (
|
||||
MaxBlobBlockSize = 100 * 1024 * 1024
|
||||
MaxBlobPageSize = 4 * 1024 * 1024
|
||||
)
|
||||
|
||||
// BlockStatus defines states a block for a block blob can
|
||||
// be in.
|
||||
type BlockStatus string
|
||||
|
||||
// List of statuses that can be used to refer to a block in a block list
|
||||
const (
|
||||
BlockStatusUncommitted BlockStatus = "Uncommitted"
|
||||
BlockStatusCommitted BlockStatus = "Committed"
|
||||
BlockStatusLatest BlockStatus = "Latest"
|
||||
)
|
||||
|
||||
// Block is used to create Block entities for Put Block List
|
||||
// call.
|
||||
type Block struct {
|
||||
ID string
|
||||
Status BlockStatus
|
||||
}
|
||||
|
||||
// BlockListResponse contains the response fields from Get Block List call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179400.aspx
|
||||
type BlockListResponse struct {
|
||||
XMLName xml.Name `xml:"BlockList"`
|
||||
CommittedBlocks []BlockResponse `xml:"CommittedBlocks>Block"`
|
||||
UncommittedBlocks []BlockResponse `xml:"UncommittedBlocks>Block"`
|
||||
}
|
||||
|
||||
// BlockResponse contains the block information returned
|
||||
// in the GetBlockListCall.
|
||||
type BlockResponse struct {
|
||||
Name string `xml:"Name"`
|
||||
Size int64 `xml:"Size"`
|
||||
}
|
||||
|
||||
// CreateBlockBlob initializes an empty block blob with no blocks.
|
||||
//
|
||||
// See CreateBlockBlobFromReader for more info on creating blobs.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob
|
||||
func (b *Blob) CreateBlockBlob(options *PutBlobOptions) error {
|
||||
return b.CreateBlockBlobFromReader(nil, options)
|
||||
}
|
||||
|
||||
// CreateBlockBlobFromReader initializes a block blob using data from
|
||||
// reader. Size must be the number of bytes read from reader. To
|
||||
// create an empty blob, use size==0 and reader==nil.
|
||||
//
|
||||
// Any headers set in blob.Properties or metadata in blob.Metadata
|
||||
// will be set on the blob.
|
||||
//
|
||||
// The API rejects requests with size > 256 MiB (but this limit is not
|
||||
// checked by the SDK). To write a larger blob, use CreateBlockBlob,
|
||||
// PutBlock, and PutBlockList.
|
||||
//
|
||||
// To create a blob from scratch, call container.GetBlobReference() to
|
||||
// get an empty blob, fill in blob.Properties and blob.Metadata as
|
||||
// appropriate then call this method.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob
|
||||
func (b *Blob) CreateBlockBlobFromReader(blob io.Reader, options *PutBlobOptions) error {
|
||||
params := url.Values{}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-blob-type"] = string(BlobTypeBlock)
|
||||
|
||||
headers["Content-Length"] = "0"
|
||||
var n int64
|
||||
var err error
|
||||
if blob != nil {
|
||||
type lener interface {
|
||||
Len() int
|
||||
}
|
||||
// TODO(rjeczalik): handle io.ReadSeeker, in case blob is *os.File etc.
|
||||
if l, ok := blob.(lener); ok {
|
||||
n = int64(l.Len())
|
||||
} else {
|
||||
var buf bytes.Buffer
|
||||
n, err = io.Copy(&buf, blob)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
blob = &buf
|
||||
}
|
||||
|
||||
headers["Content-Length"] = strconv.FormatInt(n, 10)
|
||||
}
|
||||
b.Properties.ContentLength = n
|
||||
|
||||
headers = mergeHeaders(headers, headersFromStruct(b.Properties))
|
||||
headers = b.Container.bsc.client.addMetadataToHeaders(headers, b.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, blob, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return b.respondCreation(resp, BlobTypeBlock)
|
||||
}
|
||||
|
||||
// PutBlockOptions includes the options for a put block operation
|
||||
type PutBlockOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
ContentMD5 string `header:"Content-MD5"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// PutBlock saves the given data chunk to the specified block blob with
|
||||
// given ID.
|
||||
//
|
||||
// The API rejects chunks larger than 100 MiB (but this limit is not
|
||||
// checked by the SDK).
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Block
|
||||
func (b *Blob) PutBlock(blockID string, chunk []byte, options *PutBlockOptions) error {
|
||||
return b.PutBlockWithLength(blockID, uint64(len(chunk)), bytes.NewReader(chunk), options)
|
||||
}
|
||||
|
||||
// PutBlockWithLength saves the given data stream of exactly specified size to
|
||||
// the block blob with given ID. It is an alternative to PutBlock where data
|
||||
// comes as a stream but the length is known in advance.
|
||||
//
|
||||
// The API rejects requests with size > 100 MiB (but this limit is not
|
||||
// checked by the SDK).
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Block
|
||||
func (b *Blob) PutBlockWithLength(blockID string, size uint64, blob io.Reader, options *PutBlockOptions) error {
|
||||
query := url.Values{
|
||||
"comp": {"block"},
|
||||
"blockid": {blockID},
|
||||
}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["Content-Length"] = fmt.Sprintf("%v", size)
|
||||
|
||||
if options != nil {
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), query)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, blob, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return b.respondCreation(resp, BlobTypeBlock)
|
||||
}
|
||||
|
||||
// PutBlockListOptions includes the options for a put block list operation
|
||||
type PutBlockListOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
IfModifiedSince *time.Time `header:"If-Modified-Since"`
|
||||
IfUnmodifiedSince *time.Time `header:"If-Unmodified-Since"`
|
||||
IfMatch string `header:"If-Match"`
|
||||
IfNoneMatch string `header:"If-None-Match"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// PutBlockList saves list of blocks to the specified block blob.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Block-List
|
||||
func (b *Blob) PutBlockList(blocks []Block, options *PutBlockListOptions) error {
|
||||
params := url.Values{"comp": {"blocklist"}}
|
||||
blockListXML := prepareBlockListRequest(blocks)
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["Content-Length"] = fmt.Sprintf("%v", len(blockListXML))
|
||||
headers = mergeHeaders(headers, headersFromStruct(b.Properties))
|
||||
headers = b.Container.bsc.client.addMetadataToHeaders(headers, b.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, strings.NewReader(blockListXML), b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// GetBlockListOptions includes the options for a get block list operation
|
||||
type GetBlockListOptions struct {
|
||||
Timeout uint
|
||||
Snapshot *time.Time
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// GetBlockList retrieves list of blocks in the specified block blob.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Get-Block-List
|
||||
func (b *Blob) GetBlockList(blockType BlockListType, options *GetBlockListOptions) (BlockListResponse, error) {
|
||||
params := url.Values{
|
||||
"comp": {"blocklist"},
|
||||
"blocklisttype": {string(blockType)},
|
||||
}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
params = addSnapshot(params, options.Snapshot)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
var out BlockListResponse
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodGet, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
return out, err
|
||||
}
|
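The block APIs above combine into the usual chunked-upload pattern: stage each chunk with `PutBlock`, then commit the set with `PutBlockList`. This is a hedged sketch; the `uploadInBlocks` helper and the fixed-width block-ID scheme are illustrative (the service expects Base64 block IDs of equal length, which the SDK does not enforce), and `NewBasicClient`/`GetBlobService`/`GetBlobReference` are assumed from the wider package.

```go
package main

import (
	"encoding/base64"
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

// uploadInBlocks stages each chunk with PutBlock and then commits the whole
// set with PutBlockList.
func uploadInBlocks(blob *storage.Blob, chunks [][]byte) error {
	var blocks []storage.Block
	for i, chunk := range chunks {
		// Fixed-width indices keep all Base64 block IDs the same length.
		id := base64.StdEncoding.EncodeToString([]byte(fmt.Sprintf("%08d", i)))
		if err := blob.PutBlock(id, chunk, nil); err != nil {
			return err
		}
		blocks = append(blocks, storage.Block{ID: id, Status: storage.BlockStatusLatest})
	}
	return blob.PutBlockList(blocks, nil)
}

func main() {
	client, err := storage.NewBasicClient("<accountName>", "<accountKey>")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()
	blob := bsc.GetContainerReference("mycontainer").GetBlobReference("big.bin")

	if err := uploadInBlocks(blob, [][]byte{[]byte("part one "), []byte("part two")}); err != nil {
		log.Fatal(err)
	}
}
```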
728
vendor/github.com/Azure/azure-sdk-for-go/storage/client.go
generated
vendored
|
@@ -1,8 +1,22 @@
|
|||
// Package storage provides clients for Microsoft Azure Storage Services.
|
||||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"bufio"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"encoding/xml"
|
||||
|
@@ -10,12 +24,18 @@ import (
|
|||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"mime"
|
||||
"mime/multipart"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"regexp"
|
||||
"runtime"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/Azure/azure-sdk-for-go/version"
|
||||
"github.com/Azure/go-autorest/autorest"
|
||||
"github.com/Azure/go-autorest/autorest/azure"
|
||||
)
|
||||
|
||||
|
@@ -26,9 +46,11 @@ const (
|
|||
|
||||
// DefaultAPIVersion is the Azure Storage API version string used when a
|
||||
// basic client is created.
|
||||
DefaultAPIVersion = "2015-04-05"
|
||||
DefaultAPIVersion = "2016-05-31"
|
||||
|
||||
defaultUseHTTPS = true
|
||||
defaultUseHTTPS = true
|
||||
defaultRetryAttempts = 5
|
||||
defaultRetryDuration = time.Second * 5
|
||||
|
||||
// StorageEmulatorAccountName is the fixed storage account used by Azure Storage Emulator
|
||||
StorageEmulatorAccountName = "devstoreaccount1"
|
||||
|
@@ -46,15 +68,79 @@ const (
|
|||
storageEmulatorQueue = "127.0.0.1:10001"
|
||||
|
||||
userAgentHeader = "User-Agent"
|
||||
|
||||
userDefinedMetadataHeaderPrefix = "x-ms-meta-"
|
||||
|
||||
connectionStringAccountName = "accountname"
|
||||
connectionStringAccountKey = "accountkey"
|
||||
connectionStringEndpointSuffix = "endpointsuffix"
|
||||
connectionStringEndpointProtocol = "defaultendpointsprotocol"
|
||||
|
||||
connectionStringBlobEndpoint = "blobendpoint"
|
||||
connectionStringFileEndpoint = "fileendpoint"
|
||||
connectionStringQueueEndpoint = "queueendpoint"
|
||||
connectionStringTableEndpoint = "tableendpoint"
|
||||
connectionStringSAS = "sharedaccesssignature"
|
||||
)
|
||||
|
||||
var (
|
||||
validStorageAccount = regexp.MustCompile("^[0-9a-z]{3,24}$")
|
||||
defaultValidStatusCodes = []int{
|
||||
http.StatusRequestTimeout, // 408
|
||||
http.StatusInternalServerError, // 500
|
||||
http.StatusBadGateway, // 502
|
||||
http.StatusServiceUnavailable, // 503
|
||||
http.StatusGatewayTimeout, // 504
|
||||
}
|
||||
)
|
||||
|
||||
// Sender sends a request
|
||||
type Sender interface {
|
||||
Send(*Client, *http.Request) (*http.Response, error)
|
||||
}
|
||||
|
||||
// DefaultSender is the default sender for the client. It implements
|
||||
// an automatic retry strategy.
|
||||
type DefaultSender struct {
|
||||
RetryAttempts int
|
||||
RetryDuration time.Duration
|
||||
ValidStatusCodes []int
|
||||
attempts int // used for testing
|
||||
}
|
||||
|
||||
// Send is the default retry strategy in the client
|
||||
func (ds *DefaultSender) Send(c *Client, req *http.Request) (resp *http.Response, err error) {
|
||||
rr := autorest.NewRetriableRequest(req)
|
||||
for attempts := 0; attempts < ds.RetryAttempts; attempts++ {
|
||||
err = rr.Prepare()
|
||||
if err != nil {
|
||||
return resp, err
|
||||
}
|
||||
resp, err = c.HTTPClient.Do(rr.Request())
|
||||
if err != nil || !autorest.ResponseHasStatusCode(resp, ds.ValidStatusCodes...) {
|
||||
return resp, err
|
||||
}
|
||||
drainRespBody(resp)
|
||||
autorest.DelayForBackoff(ds.RetryDuration, attempts, req.Cancel)
|
||||
ds.attempts = attempts
|
||||
}
|
||||
ds.attempts++
|
||||
return resp, err
|
||||
}
|
||||
|
||||
// Client is the object that needs to be constructed to perform
|
||||
// operations on the storage account.
|
||||
type Client struct {
|
||||
// HTTPClient is the http.Client used to initiate API
|
||||
// requests. If it is nil, http.DefaultClient is used.
|
||||
// requests. http.DefaultClient is used when creating a
|
||||
// client.
|
||||
HTTPClient *http.Client
|
||||
|
||||
// Sender is an interface that sends the request. Clients are
|
||||
// created with a DefaultSender. The DefaultSender has an
|
||||
// automatic retry strategy built in. The Sender can be customized.
|
||||
Sender Sender
|
||||
|
||||
accountName string
|
||||
accountKey []byte
|
||||
useHTTPS bool
|
||||
|
@ -62,17 +148,13 @@ type Client struct {
|
|||
baseURL string
|
||||
apiVersion string
|
||||
userAgent string
|
||||
}
|
||||
|
||||
type storageResponse struct {
|
||||
statusCode int
|
||||
headers http.Header
|
||||
body io.ReadCloser
|
||||
sasClient bool
|
||||
accountSASToken url.Values
|
||||
}
|
||||
|
||||
type odataResponse struct {
|
||||
storageResponse
|
||||
odata odataErrorMessage
|
||||
resp *http.Response
|
||||
odata odataErrorWrapper
|
||||
}
|
||||
|
||||
// AzureStorageServiceError contains fields of the error response from
|
||||
|
@ -85,22 +167,25 @@ type AzureStorageServiceError struct {
|
|||
QueryParameterName string `xml:"QueryParameterName"`
|
||||
QueryParameterValue string `xml:"QueryParameterValue"`
|
||||
Reason string `xml:"Reason"`
|
||||
Lang string
|
||||
StatusCode int
|
||||
RequestID string
|
||||
Date string
|
||||
APIVersion string
|
||||
}
|
||||
|
||||
type odataErrorMessageMessage struct {
|
||||
type odataErrorMessage struct {
|
||||
Lang string `json:"lang"`
|
||||
Value string `json:"value"`
|
||||
}
|
||||
|
||||
type odataErrorMessageInternal struct {
|
||||
Code string `json:"code"`
|
||||
Message odataErrorMessageMessage `json:"message"`
|
||||
type odataError struct {
|
||||
Code string `json:"code"`
|
||||
Message odataErrorMessage `json:"message"`
|
||||
}
|
||||
|
||||
type odataErrorMessage struct {
|
||||
Err odataErrorMessageInternal `json:"odata.error"`
|
||||
type odataErrorWrapper struct {
|
||||
Err odataError `json:"odata.error"`
|
||||
}
|
||||
|
||||
// UnexpectedStatusCodeError is returned when a storage service responds with neither an error
|
||||
|
@ -108,6 +193,7 @@ type odataErrorMessage struct {
|
|||
type UnexpectedStatusCodeError struct {
|
||||
allowed []int
|
||||
got int
|
||||
inner error
|
||||
}
|
||||
|
||||
func (e UnexpectedStatusCodeError) Error() string {
|
||||
|
@ -118,7 +204,7 @@ func (e UnexpectedStatusCodeError) Error() string {
|
|||
for _, v := range e.allowed {
|
||||
expected = append(expected, s(v))
|
||||
}
|
||||
return fmt.Sprintf("storage: status code from service response is %s; was expecting %s", got, strings.Join(expected, " or "))
|
||||
return fmt.Sprintf("storage: status code from service response is %s; was expecting %s. Inner error: %+v", got, strings.Join(expected, " or "), e.inner)
|
||||
}
|
||||
|
||||
// Got is the actual status code returned by Azure.
|
||||
|
@ -126,6 +212,60 @@ func (e UnexpectedStatusCodeError) Got() int {
|
|||
return e.got
|
||||
}
|
||||
|
||||
// Inner returns any inner error info.
|
||||
func (e UnexpectedStatusCodeError) Inner() error {
|
||||
return e.inner
|
||||
}
|
||||
|
||||
// NewClientFromConnectionString creates a Client from the connection string.
|
||||
func NewClientFromConnectionString(input string) (Client, error) {
|
||||
// build a map of connection string key/value pairs
|
||||
parts := map[string]string{}
|
||||
for _, pair := range strings.Split(input, ";") {
|
||||
if pair == "" {
|
||||
continue
|
||||
}
|
||||
|
||||
equalDex := strings.IndexByte(pair, '=')
|
||||
if equalDex <= 0 {
|
||||
return Client{}, fmt.Errorf("Invalid connection segment %q", pair)
|
||||
}
|
||||
|
||||
value := strings.TrimSpace(pair[equalDex+1:])
|
||||
key := strings.TrimSpace(strings.ToLower(pair[:equalDex]))
|
||||
parts[key] = value
|
||||
}
|
||||
|
||||
// TODO: validate parameter sets?
|
||||
|
||||
if parts[connectionStringAccountName] == StorageEmulatorAccountName {
|
||||
return NewEmulatorClient()
|
||||
}
|
||||
|
||||
if parts[connectionStringSAS] != "" {
|
||||
endpoint := ""
|
||||
if parts[connectionStringBlobEndpoint] != "" {
|
||||
endpoint = parts[connectionStringBlobEndpoint]
|
||||
} else if parts[connectionStringFileEndpoint] != "" {
|
||||
endpoint = parts[connectionStringFileEndpoint]
|
||||
} else if parts[connectionStringQueueEndpoint] != "" {
|
||||
endpoint = parts[connectionStringQueueEndpoint]
|
||||
} else {
|
||||
endpoint = parts[connectionStringTableEndpoint]
|
||||
}
|
||||
|
||||
return NewAccountSASClientFromEndpointToken(endpoint, parts[connectionStringSAS])
|
||||
}
|
||||
|
||||
useHTTPS := defaultUseHTTPS
|
||||
if parts[connectionStringEndpointProtocol] != "" {
|
||||
useHTTPS = parts[connectionStringEndpointProtocol] == "https"
|
||||
}
|
||||
|
||||
return NewClient(parts[connectionStringAccountName], parts[connectionStringAccountKey],
|
||||
parts[connectionStringEndpointSuffix], DefaultAPIVersion, useHTTPS)
|
||||
}
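
A short usage sketch for the connection-string constructor above; the account name, key and endpoint suffix are placeholders. As the code shows, a string whose accountname is devstoreaccount1 routes to NewEmulatorClient, and one carrying a sharedaccesssignature plus a service endpoint routes to NewAccountSASClientFromEndpointToken.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Shared-key form; keys are matched case-insensitively and pairs are split on ';'.
	cs := "DefaultEndpointsProtocol=https;" +
		"AccountName=myaccount;" +
		"AccountKey=bXlrZXk=;" +
		"EndpointSuffix=core.windows.net"

	client, err := storage.NewClientFromConnectionString(cs)
	if err != nil {
		log.Fatal(err)
	}

	bsc := client.GetBlobService()
	_ = bsc
	fmt.Println("storage client created from connection string")
}
```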
|
||||
|
||||
// NewBasicClient constructs a Client with given storage service name and
|
||||
// key.
|
||||
func NewBasicClient(accountName, accountKey string) (Client, error) {
|
||||
|
@ -153,13 +293,13 @@ func NewEmulatorClient() (Client, error) {
|
|||
// NewClient constructs a Client. This should be used if the caller wants
|
||||
// to specify whether to use HTTPS, a specific REST API version or a custom
|
||||
// storage endpoint other than Azure Public Cloud.
|
||||
func NewClient(accountName, accountKey, blobServiceBaseURL, apiVersion string, useHTTPS bool) (Client, error) {
|
||||
func NewClient(accountName, accountKey, serviceBaseURL, apiVersion string, useHTTPS bool) (Client, error) {
|
||||
var c Client
|
||||
if accountName == "" {
|
||||
return c, fmt.Errorf("azure: account name required")
|
||||
if !IsValidStorageAccount(accountName) {
|
||||
return c, fmt.Errorf("azure: account name is not valid: it must be between 3 and 24 characters, and only may contain numbers and lowercase letters: %v", accountName)
|
||||
} else if accountKey == "" {
|
||||
return c, fmt.Errorf("azure: account key required")
|
||||
} else if blobServiceBaseURL == "" {
|
||||
} else if serviceBaseURL == "" {
|
||||
return c, fmt.Errorf("azure: base storage service url required")
|
||||
}
|
||||
|
||||
|
@ -169,23 +309,114 @@ func NewClient(accountName, accountKey, blobServiceBaseURL, apiVersion string, u
|
|||
}
|
||||
|
||||
c = Client{
|
||||
HTTPClient: http.DefaultClient,
|
||||
accountName: accountName,
|
||||
accountKey: key,
|
||||
useHTTPS: useHTTPS,
|
||||
baseURL: blobServiceBaseURL,
|
||||
baseURL: serviceBaseURL,
|
||||
apiVersion: apiVersion,
|
||||
sasClient: false,
|
||||
UseSharedKeyLite: false,
|
||||
Sender: &DefaultSender{
|
||||
RetryAttempts: defaultRetryAttempts,
|
||||
ValidStatusCodes: defaultValidStatusCodes,
|
||||
RetryDuration: defaultRetryDuration,
|
||||
},
|
||||
}
|
||||
c.userAgent = c.getDefaultUserAgent()
|
||||
return c, nil
|
||||
}
|
||||
|
||||
// IsValidStorageAccount checks if the storage account name is valid.
|
||||
// See https://docs.microsoft.com/en-us/azure/storage/storage-create-storage-account
|
||||
func IsValidStorageAccount(account string) bool {
|
||||
return validStorageAccount.MatchString(account)
|
||||
}
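
A quick sketch of the validation rule above (lowercase letters and digits only, 3 to 24 characters); the sample names are arbitrary.

```go
package main

import (
	"fmt"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	names := []string{
		"mystorage01", // ok: lowercase letters and digits, 11 chars
		"My-Storage",  // rejected: uppercase letters and '-'
		"ab",          // rejected: shorter than 3 characters
	}
	for _, n := range names {
		fmt.Printf("%-12s valid=%t\n", n, storage.IsValidStorageAccount(n))
	}
}
```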
|
||||
|
||||
// NewAccountSASClient constructs a client that uses accountSAS authorization
|
||||
// for its operations.
|
||||
func NewAccountSASClient(account string, token url.Values, env azure.Environment) Client {
|
||||
c := newSASClient()
|
||||
c.accountSASToken = token
|
||||
c.accountName = account
|
||||
c.baseURL = env.StorageEndpointSuffix
|
||||
|
||||
// Get API version and protocol from token
|
||||
c.apiVersion = token.Get("sv")
|
||||
c.useHTTPS = token.Get("spr") == "https"
|
||||
return c
|
||||
}
|
||||
|
||||
// NewAccountSASClientFromEndpointToken constructs a client that uses accountSAS authorization
|
||||
// for its operations using the specified endpoint and SAS token.
|
||||
func NewAccountSASClientFromEndpointToken(endpoint string, sasToken string) (Client, error) {
|
||||
u, err := url.Parse(endpoint)
|
||||
if err != nil {
|
||||
return Client{}, err
|
||||
}
|
||||
|
||||
token, err := url.ParseQuery(sasToken)
|
||||
if err != nil {
|
||||
return Client{}, err
|
||||
}
|
||||
|
||||
// the host name will look something like this
|
||||
// - foo.blob.core.windows.net
|
||||
// "foo" is the account name
|
||||
// "core.windows.net" is the baseURL
|
||||
|
||||
// find the first dot to get account name
|
||||
i1 := strings.IndexByte(u.Host, '.')
|
||||
if i1 < 0 {
|
||||
return Client{}, fmt.Errorf("failed to find '.' in %s", u.Host)
|
||||
}
|
||||
|
||||
// now find the second dot to get the base URL
|
||||
i2 := strings.IndexByte(u.Host[i1+1:], '.')
|
||||
if i2 < 0 {
|
||||
return Client{}, fmt.Errorf("failed to find '.' in %s", u.Host[i1+1:])
|
||||
}
|
||||
|
||||
c := newSASClient()
|
||||
c.accountSASToken = token
|
||||
c.accountName = u.Host[:i1]
|
||||
c.baseURL = u.Host[i1+i2+2:]
|
||||
|
||||
// Get API version and protocol from token
|
||||
c.apiVersion = token.Get("sv")
|
||||
c.useHTTPS = token.Get("spr") == "https"
|
||||
return c, nil
|
||||
}
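
A sketch of the endpoint/token form; the endpoint and SAS token below are illustrative placeholders (the signature is redacted), chosen only to show how the host is split into the account name ("myaccount") and the base URL ("core.windows.net").

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	endpoint := "https://myaccount.blob.core.windows.net"
	// Placeholder account SAS token; sv and spr drive apiVersion and useHTTPS above.
	sasToken := "sv=2016-05-31&ss=b&srt=sco&sp=rl&se=2018-01-01T00:00:00Z&spr=https&sig=REDACTED"

	client, err := storage.NewAccountSASClientFromEndpointToken(endpoint, sasToken)
	if err != nil {
		log.Fatal(err)
	}

	blobs := client.GetBlobService()
	_ = blobs // ready for the read/list calls permitted by the token
	fmt.Println("account SAS client constructed")
}
```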
|
||||
|
||||
func newSASClient() Client {
|
||||
c := Client{
|
||||
HTTPClient: http.DefaultClient,
|
||||
apiVersion: DefaultAPIVersion,
|
||||
sasClient: true,
|
||||
Sender: &DefaultSender{
|
||||
RetryAttempts: defaultRetryAttempts,
|
||||
ValidStatusCodes: defaultValidStatusCodes,
|
||||
RetryDuration: defaultRetryDuration,
|
||||
},
|
||||
}
|
||||
c.userAgent = c.getDefaultUserAgent()
|
||||
return c
|
||||
}
|
||||
|
||||
func (c Client) isServiceSASClient() bool {
|
||||
return c.sasClient && c.accountSASToken == nil
|
||||
}
|
||||
|
||||
func (c Client) isAccountSASClient() bool {
|
||||
return c.sasClient && c.accountSASToken != nil
|
||||
}
|
||||
|
||||
func (c Client) getDefaultUserAgent() string {
|
||||
return fmt.Sprintf("Go/%s (%s-%s) Azure-SDK-For-Go/%s storage-dataplane/%s",
|
||||
return fmt.Sprintf("Go/%s (%s-%s) azure-storage-go/%s api-version/%s",
|
||||
runtime.Version(),
|
||||
runtime.GOARCH,
|
||||
runtime.GOOS,
|
||||
sdkVersion,
|
||||
version.Number,
|
||||
c.apiVersion,
|
||||
)
|
||||
}
|
||||
|
@ -210,7 +441,7 @@ func (c *Client) protectUserAgent(extraheaders map[string]string) map[string]str
|
|||
return extraheaders
|
||||
}
|
||||
|
||||
func (c Client) getBaseURL(service string) string {
|
||||
func (c Client) getBaseURL(service string) *url.URL {
|
||||
scheme := "http"
|
||||
if c.useHTTPS {
|
||||
scheme = "https"
|
||||
|
@ -229,18 +460,14 @@ func (c Client) getBaseURL(service string) string {
|
|||
host = fmt.Sprintf("%s.%s.%s", c.accountName, service, c.baseURL)
|
||||
}
|
||||
|
||||
u := &url.URL{
|
||||
return &url.URL{
|
||||
Scheme: scheme,
|
||||
Host: host}
|
||||
return u.String()
|
||||
Host: host,
|
||||
}
|
||||
}
|
||||
|
||||
func (c Client) getEndpoint(service, path string, params url.Values) string {
|
||||
u, err := url.Parse(c.getBaseURL(service))
|
||||
if err != nil {
|
||||
// really should not be happening
|
||||
panic(err)
|
||||
}
|
||||
u := c.getBaseURL(service)
|
||||
|
||||
// API doesn't accept path segments not starting with '/'
|
||||
if !strings.HasPrefix(path, "/") {
|
||||
|
@ -256,6 +483,160 @@ func (c Client) getEndpoint(service, path string, params url.Values) string {
|
|||
return u.String()
|
||||
}
|
||||
|
||||
// AccountSASTokenOptions includes options for constructing
|
||||
// an account SAS token.
|
||||
// https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-an-account-sas
|
||||
type AccountSASTokenOptions struct {
|
||||
APIVersion string
|
||||
Services Services
|
||||
ResourceTypes ResourceTypes
|
||||
Permissions Permissions
|
||||
Start time.Time
|
||||
Expiry time.Time
|
||||
IP string
|
||||
UseHTTPS bool
|
||||
}
|
||||
|
||||
// Services specify services accessible with an account SAS.
|
||||
type Services struct {
|
||||
Blob bool
|
||||
Queue bool
|
||||
Table bool
|
||||
File bool
|
||||
}
|
||||
|
||||
// ResourceTypes specify the resources accessible with an
|
||||
// account SAS.
|
||||
type ResourceTypes struct {
|
||||
Service bool
|
||||
Container bool
|
||||
Object bool
|
||||
}
|
||||
|
||||
// Permissions specifies permissions for an accountSAS.
|
||||
type Permissions struct {
|
||||
Read bool
|
||||
Write bool
|
||||
Delete bool
|
||||
List bool
|
||||
Add bool
|
||||
Create bool
|
||||
Update bool
|
||||
Process bool
|
||||
}
|
||||
|
||||
// GetAccountSASToken creates an account SAS token
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-an-account-sas
|
||||
func (c Client) GetAccountSASToken(options AccountSASTokenOptions) (url.Values, error) {
|
||||
if options.APIVersion == "" {
|
||||
options.APIVersion = c.apiVersion
|
||||
}
|
||||
|
||||
if options.APIVersion < "2015-04-05" {
|
||||
return url.Values{}, fmt.Errorf("account SAS does not support API versions prior to 2015-04-05. API version : %s", options.APIVersion)
|
||||
}
|
||||
|
||||
// build services string
|
||||
services := ""
|
||||
if options.Services.Blob {
|
||||
services += "b"
|
||||
}
|
||||
if options.Services.Queue {
|
||||
services += "q"
|
||||
}
|
||||
if options.Services.Table {
|
||||
services += "t"
|
||||
}
|
||||
if options.Services.File {
|
||||
services += "f"
|
||||
}
|
||||
|
||||
// build resources string
|
||||
resources := ""
|
||||
if options.ResourceTypes.Service {
|
||||
resources += "s"
|
||||
}
|
||||
if options.ResourceTypes.Container {
|
||||
resources += "c"
|
||||
}
|
||||
if options.ResourceTypes.Object {
|
||||
resources += "o"
|
||||
}
|
||||
|
||||
// build permissions string
|
||||
permissions := ""
|
||||
if options.Permissions.Read {
|
||||
permissions += "r"
|
||||
}
|
||||
if options.Permissions.Write {
|
||||
permissions += "w"
|
||||
}
|
||||
if options.Permissions.Delete {
|
||||
permissions += "d"
|
||||
}
|
||||
if options.Permissions.List {
|
||||
permissions += "l"
|
||||
}
|
||||
if options.Permissions.Add {
|
||||
permissions += "a"
|
||||
}
|
||||
if options.Permissions.Create {
|
||||
permissions += "c"
|
||||
}
|
||||
if options.Permissions.Update {
|
||||
permissions += "u"
|
||||
}
|
||||
if options.Permissions.Process {
|
||||
permissions += "p"
|
||||
}
|
||||
|
||||
// build start time, if exists
|
||||
start := ""
|
||||
if options.Start != (time.Time{}) {
|
||||
start = options.Start.UTC().Format(time.RFC3339)
|
||||
}
|
||||
|
||||
// build expiry time
|
||||
expiry := options.Expiry.UTC().Format(time.RFC3339)
|
||||
|
||||
protocol := "https,http"
|
||||
if options.UseHTTPS {
|
||||
protocol = "https"
|
||||
}
|
||||
|
||||
stringToSign := strings.Join([]string{
|
||||
c.accountName,
|
||||
permissions,
|
||||
services,
|
||||
resources,
|
||||
start,
|
||||
expiry,
|
||||
options.IP,
|
||||
protocol,
|
||||
options.APIVersion,
|
||||
"",
|
||||
}, "\n")
|
||||
signature := c.computeHmac256(stringToSign)
|
||||
|
||||
sasParams := url.Values{
|
||||
"sv": {options.APIVersion},
|
||||
"ss": {services},
|
||||
"srt": {resources},
|
||||
"sp": {permissions},
|
||||
"se": {expiry},
|
||||
"spr": {protocol},
|
||||
"sig": {signature},
|
||||
}
|
||||
if start != "" {
|
||||
sasParams.Add("st", start)
|
||||
}
|
||||
if options.IP != "" {
|
||||
sasParams.Add("sip", options.IP)
|
||||
}
|
||||
|
||||
return sasParams, nil
|
||||
}
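
A sketch that strings the pieces together: mint an account SAS with a shared-key client, then hand the resulting url.Values to NewAccountSASClient. Credentials are placeholders, and azure.PublicCloud comes from the vendored go-autorest azure package.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/storage"
	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	keyClient, err := storage.NewBasicClient("myaccount", "bXlrZXk=") // placeholder credentials
	if err != nil {
		log.Fatal(err)
	}

	token, err := keyClient.GetAccountSASToken(storage.AccountSASTokenOptions{
		Services:      storage.Services{Blob: true},
		ResourceTypes: storage.ResourceTypes{Container: true, Object: true},
		Permissions:   storage.Permissions{Read: true, List: true},
		Expiry:        time.Now().Add(48 * time.Hour).UTC(),
		UseHTTPS:      true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// The token is plain url.Values (sv, ss, srt, sp, se, spr, sig, ...).
	fmt.Println("sas query:", token.Encode())

	sasClient := storage.NewAccountSASClient("myaccount", token, azure.PublicCloud)
	_ = sasClient.GetBlobService()
}
```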
|
||||
|
||||
// GetBlobService returns a BlobStorageClient which can operate on the blob
|
||||
// service of the storage account.
|
||||
func (c Client) GetBlobService() BlobStorageClient {
|
||||
|
@ -320,7 +701,7 @@ func (c Client) getStandardHeaders() map[string]string {
|
|||
}
|
||||
}
|
||||
|
||||
func (c Client) exec(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*storageResponse, error) {
|
||||
func (c Client) exec(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*http.Response, error) {
|
||||
headers, err := c.addAuthorizationHeader(verb, url, headers, auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
|
@ -331,66 +712,43 @@ func (c Client) exec(verb, url string, headers map[string]string, body io.Reader
|
|||
return nil, errors.New("azure/storage: error creating request: " + err.Error())
|
||||
}
|
||||
|
||||
if clstr, ok := headers["Content-Length"]; ok {
|
||||
// content length header is being signed, but completely ignored by golang.
|
||||
// instead we have to use the ContentLength property on the request struct
|
||||
// (see https://golang.org/src/net/http/request.go?s=18140:18370#L536 and
|
||||
// https://golang.org/src/net/http/transfer.go?s=1739:2467#L49)
|
||||
req.ContentLength, err = strconv.ParseInt(clstr, 10, 64)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
// http.NewRequest() will automatically set req.ContentLength for a handful of types
|
||||
// otherwise we will handle here.
|
||||
if req.ContentLength < 1 {
|
||||
if clstr, ok := headers["Content-Length"]; ok {
|
||||
if cl, err := strconv.ParseInt(clstr, 10, 64); err == nil {
|
||||
req.ContentLength = cl
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
for k, v := range headers {
|
||||
req.Header.Add(k, v)
|
||||
req.Header[k] = append(req.Header[k], v) // Must bypass case munging present in `Add` by using map functions directly. See https://github.com/Azure/azure-sdk-for-go/issues/645
|
||||
}
|
||||
|
||||
httpClient := c.HTTPClient
|
||||
if httpClient == nil {
|
||||
httpClient = http.DefaultClient
|
||||
if c.isAccountSASClient() {
|
||||
// append the SAS token to the query params
|
||||
v := req.URL.Query()
|
||||
v = mergeParams(v, c.accountSASToken)
|
||||
req.URL.RawQuery = v.Encode()
|
||||
}
|
||||
resp, err := httpClient.Do(req)
|
||||
|
||||
resp, err := c.Sender.Send(&c, req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
statusCode := resp.StatusCode
|
||||
if statusCode >= 400 && statusCode <= 505 {
|
||||
var respBody []byte
|
||||
respBody, err = readAndCloseBody(resp.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
requestID := resp.Header.Get("x-ms-request-id")
|
||||
if len(respBody) == 0 {
|
||||
// no error in response body, might happen in HEAD requests
|
||||
err = serviceErrFromStatusCode(resp.StatusCode, resp.Status, requestID)
|
||||
} else {
|
||||
// response contains storage service error object, unmarshal
|
||||
storageErr, errIn := serviceErrFromXML(respBody, resp.StatusCode, requestID)
|
||||
if err != nil { // error unmarshaling the error response
|
||||
err = errIn
|
||||
}
|
||||
err = storageErr
|
||||
}
|
||||
return &storageResponse{
|
||||
statusCode: resp.StatusCode,
|
||||
headers: resp.Header,
|
||||
body: ioutil.NopCloser(bytes.NewReader(respBody)), /* restore the body */
|
||||
}, err
|
||||
if resp.StatusCode >= 400 && resp.StatusCode <= 505 {
|
||||
return resp, getErrorFromResponse(resp)
|
||||
}
|
||||
|
||||
return &storageResponse{
|
||||
statusCode: resp.StatusCode,
|
||||
headers: resp.Header,
|
||||
body: resp.Body}, nil
|
||||
return resp, nil
|
||||
}
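
The direct map write above sidesteps Go's automatic header canonicalization, which matters because the x-ms-* header names are part of the signed string. A standalone sketch of the difference, using only the standard library:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	h := http.Header{}

	// Header.Add canonicalizes the key via textproto:
	// "x-ms-meta-UserName" is stored as "X-Ms-Meta-Username".
	h.Add("x-ms-meta-UserName", "alice")

	// Writing to the underlying map keeps the key byte-for-byte,
	// which is what Client.exec relies on for signed headers.
	h["x-ms-meta-ProjectID"] = append(h["x-ms-meta-ProjectID"], "42")

	for k, v := range h {
		fmt.Println(k, "=", v)
	}
	// Typical output (map order varies):
	//   X-Ms-Meta-Username = [alice]
	//   x-ms-meta-ProjectID = [42]
}
```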
|
||||
|
||||
func (c Client) execInternalJSON(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*odataResponse, error) {
|
||||
func (c Client) execInternalJSONCommon(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*odataResponse, *http.Request, *http.Response, error) {
|
||||
headers, err := c.addAuthorizationHeader(verb, url, headers, auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, nil, nil, err
|
||||
}
|
||||
|
||||
req, err := http.NewRequest(verb, url, body)
|
||||
|
@ -398,42 +756,122 @@ func (c Client) execInternalJSON(verb, url string, headers map[string]string, bo
|
|||
req.Header.Add(k, v)
|
||||
}
|
||||
|
||||
httpClient := c.HTTPClient
|
||||
if httpClient == nil {
|
||||
httpClient = http.DefaultClient
|
||||
}
|
||||
|
||||
resp, err := httpClient.Do(req)
|
||||
resp, err := c.Sender.Send(&c, req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, nil, nil, err
|
||||
}
|
||||
|
||||
respToRet := &odataResponse{}
|
||||
respToRet.body = resp.Body
|
||||
respToRet.statusCode = resp.StatusCode
|
||||
respToRet.headers = resp.Header
|
||||
respToRet := &odataResponse{resp: resp}
|
||||
|
||||
statusCode := resp.StatusCode
|
||||
if statusCode >= 400 && statusCode <= 505 {
|
||||
var respBody []byte
|
||||
respBody, err = readAndCloseBody(resp.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, nil, nil, err
|
||||
}
|
||||
|
||||
requestID, date, version := getDebugHeaders(resp.Header)
|
||||
if len(respBody) == 0 {
|
||||
// no error in response body, might happen in HEAD requests
|
||||
err = serviceErrFromStatusCode(resp.StatusCode, resp.Status, resp.Header.Get("x-ms-request-id"))
|
||||
return respToRet, err
|
||||
err = serviceErrFromStatusCode(resp.StatusCode, resp.Status, requestID, date, version)
|
||||
return respToRet, req, resp, err
|
||||
}
|
||||
// try unmarshal as odata.error json
|
||||
err = json.Unmarshal(respBody, &respToRet.odata)
|
||||
return respToRet, err
|
||||
}
|
||||
|
||||
return respToRet, req, resp, err
|
||||
}
|
||||
|
||||
func (c Client) execInternalJSON(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*odataResponse, error) {
|
||||
respToRet, _, _, err := c.execInternalJSONCommon(verb, url, headers, body, auth)
|
||||
return respToRet, err
|
||||
}
|
||||
|
||||
func (c Client) execBatchOperationJSON(verb, url string, headers map[string]string, body io.Reader, auth authentication) (*odataResponse, error) {
|
||||
// execute common query, get back generated request, response etc... for more processing.
|
||||
respToRet, req, resp, err := c.execInternalJSONCommon(verb, url, headers, body, auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// return the OData in the case of executing batch commands.
|
||||
// In this case we need to read the outer batch boundary and contents.
|
||||
// Then we read the changeset information within the batch
|
||||
var respBody []byte
|
||||
respBody, err = readAndCloseBody(resp.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// outer multipart body
|
||||
_, batchHeader, err := mime.ParseMediaType(resp.Header["Content-Type"][0])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// batch details.
|
||||
batchBoundary := batchHeader["boundary"]
|
||||
batchPartBuf, changesetBoundary, err := genBatchReader(batchBoundary, respBody)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// changeset details.
|
||||
err = genChangesetReader(req, respToRet, batchPartBuf, changesetBoundary)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return respToRet, nil
|
||||
}
|
||||
|
||||
func genChangesetReader(req *http.Request, respToRet *odataResponse, batchPartBuf io.Reader, changesetBoundary string) error {
|
||||
changesetMultiReader := multipart.NewReader(batchPartBuf, changesetBoundary)
|
||||
changesetPart, err := changesetMultiReader.NextPart()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
changesetPartBufioReader := bufio.NewReader(changesetPart)
|
||||
changesetResp, err := http.ReadResponse(changesetPartBufioReader, req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if changesetResp.StatusCode != http.StatusNoContent {
|
||||
changesetBody, err := readAndCloseBody(changesetResp.Body)
|
||||
err = json.Unmarshal(changesetBody, &respToRet.odata)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
respToRet.resp = changesetResp
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func genBatchReader(batchBoundary string, respBody []byte) (io.Reader, string, error) {
|
||||
respBodyString := string(respBody)
|
||||
respBodyReader := strings.NewReader(respBodyString)
|
||||
|
||||
// reading batchresponse
|
||||
batchMultiReader := multipart.NewReader(respBodyReader, batchBoundary)
|
||||
batchPart, err := batchMultiReader.NextPart()
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
batchPartBufioReader := bufio.NewReader(batchPart)
|
||||
|
||||
_, changesetHeader, err := mime.ParseMediaType(batchPart.Header.Get("Content-Type"))
|
||||
if err != nil {
|
||||
return nil, "", err
|
||||
}
|
||||
changesetBoundary := changesetHeader["boundary"]
|
||||
return batchPartBufioReader, changesetBoundary, nil
|
||||
}
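
The two readers above peel a nested MIME body: the batch response is multipart/mixed with a boundary parameter, its first part is itself a multipart changeset, and each changeset part is a raw HTTP response consumed by http.ReadResponse. A small sketch of the first step, extracting the boundary from an illustrative Content-Type value:

```go
package main

import (
	"fmt"
	"log"
	"mime"
)

func main() {
	// Illustrative header value, shaped like a Table batch response.
	contentType := "multipart/mixed; boundary=batchresponse_a1b2c3d4"

	mediaType, params, err := mime.ParseMediaType(contentType)
	if err != nil {
		log.Fatal(err)
	}

	fmt.Println(mediaType)          // multipart/mixed
	fmt.Println(params["boundary"]) // batchresponse_a1b2c3d4
	// multipart.NewReader(body, params["boundary"]) then walks the parts,
	// exactly as genBatchReader does with the response body.
}
```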
|
||||
|
||||
func readAndCloseBody(body io.ReadCloser) ([]byte, error) {
|
||||
defer body.Close()
|
||||
out, err := ioutil.ReadAll(body)
|
||||
|
@ -443,37 +881,109 @@ func readAndCloseBody(body io.ReadCloser) ([]byte, error) {
|
|||
return out, err
|
||||
}
|
||||
|
||||
func serviceErrFromXML(body []byte, statusCode int, requestID string) (AzureStorageServiceError, error) {
|
||||
var storageErr AzureStorageServiceError
|
||||
if err := xml.Unmarshal(body, &storageErr); err != nil {
|
||||
return storageErr, err
|
||||
}
|
||||
storageErr.StatusCode = statusCode
|
||||
storageErr.RequestID = requestID
|
||||
return storageErr, nil
|
||||
// reads the response body then closes it
|
||||
func drainRespBody(resp *http.Response) {
|
||||
io.Copy(ioutil.Discard, resp.Body)
|
||||
resp.Body.Close()
|
||||
}
|
||||
|
||||
func serviceErrFromStatusCode(code int, status string, requestID string) AzureStorageServiceError {
|
||||
func serviceErrFromXML(body []byte, storageErr *AzureStorageServiceError) error {
|
||||
if err := xml.Unmarshal(body, storageErr); err != nil {
|
||||
storageErr.Message = fmt.Sprintf("Response body could not be unmarshaled: %v. Body: %v.", err, string(body))
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func serviceErrFromJSON(body []byte, storageErr *AzureStorageServiceError) error {
|
||||
odataError := odataErrorWrapper{}
|
||||
if err := json.Unmarshal(body, &odataError); err != nil {
|
||||
storageErr.Message = fmt.Sprintf("Response body could not be unmarshaled: %v. Body: %v.", err, string(body))
|
||||
return err
|
||||
}
|
||||
storageErr.Code = odataError.Err.Code
|
||||
storageErr.Message = odataError.Err.Message.Value
|
||||
storageErr.Lang = odataError.Err.Message.Lang
|
||||
return nil
|
||||
}
|
||||
|
||||
func serviceErrFromStatusCode(code int, status string, requestID, date, version string) AzureStorageServiceError {
|
||||
return AzureStorageServiceError{
|
||||
StatusCode: code,
|
||||
Code: status,
|
||||
RequestID: requestID,
|
||||
Date: date,
|
||||
APIVersion: version,
|
||||
Message: "no response body was available for error status code",
|
||||
}
|
||||
}
|
||||
|
||||
func (e AzureStorageServiceError) Error() string {
|
||||
return fmt.Sprintf("storage: service returned error: StatusCode=%d, ErrorCode=%s, ErrorMessage=%s, RequestId=%s, QueryParameterName=%s, QueryParameterValue=%s",
|
||||
e.StatusCode, e.Code, e.Message, e.RequestID, e.QueryParameterName, e.QueryParameterValue)
|
||||
return fmt.Sprintf("storage: service returned error: StatusCode=%d, ErrorCode=%s, ErrorMessage=%s, RequestInitiated=%s, RequestId=%s, API Version=%s, QueryParameterName=%s, QueryParameterValue=%s",
|
||||
e.StatusCode, e.Code, e.Message, e.Date, e.RequestID, e.APIVersion, e.QueryParameterName, e.QueryParameterValue)
|
||||
}
|
||||
|
||||
// checkRespCode returns UnexpectedStatusCodeError if the given response code is not
|
||||
// one of the allowed status codes; otherwise nil.
|
||||
func checkRespCode(respCode int, allowed []int) error {
|
||||
func checkRespCode(resp *http.Response, allowed []int) error {
|
||||
for _, v := range allowed {
|
||||
if respCode == v {
|
||||
if resp.StatusCode == v {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
return UnexpectedStatusCodeError{allowed, respCode}
|
||||
err := getErrorFromResponse(resp)
|
||||
return UnexpectedStatusCodeError{
|
||||
allowed: allowed,
|
||||
got: resp.StatusCode,
|
||||
inner: err,
|
||||
}
|
||||
}
|
||||
|
||||
func (c Client) addMetadataToHeaders(h map[string]string, metadata map[string]string) map[string]string {
|
||||
metadata = c.protectUserAgent(metadata)
|
||||
for k, v := range metadata {
|
||||
h[userDefinedMetadataHeaderPrefix+k] = v
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
||||
func getDebugHeaders(h http.Header) (requestID, date, version string) {
|
||||
requestID = h.Get("x-ms-request-id")
|
||||
version = h.Get("x-ms-version")
|
||||
date = h.Get("Date")
|
||||
return
|
||||
}
|
||||
|
||||
func getErrorFromResponse(resp *http.Response) error {
|
||||
respBody, err := readAndCloseBody(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
requestID, date, version := getDebugHeaders(resp.Header)
|
||||
if len(respBody) == 0 {
|
||||
// no error in response body, might happen in HEAD requests
|
||||
err = serviceErrFromStatusCode(resp.StatusCode, resp.Status, requestID, date, version)
|
||||
} else {
|
||||
storageErr := AzureStorageServiceError{
|
||||
StatusCode: resp.StatusCode,
|
||||
RequestID: requestID,
|
||||
Date: date,
|
||||
APIVersion: version,
|
||||
}
|
||||
// response contains storage service error object, unmarshal
|
||||
if resp.Header.Get("Content-Type") == "application/xml" {
|
||||
errIn := serviceErrFromXML(respBody, &storageErr)
|
||||
if err != nil { // error unmarshaling the error response
|
||||
err = errIn
|
||||
}
|
||||
} else {
|
||||
errIn := serviceErrFromJSON(respBody, &storageErr)
|
||||
if err != nil { // error unmarshaling the error response
|
||||
err = errIn
|
||||
}
|
||||
}
|
||||
err = storageErr
|
||||
}
|
||||
return err
|
||||
}
|
||||
38 vendor/github.com/Azure/azure-sdk-for-go/storage/commonsasuri.go generated vendored Normal file
@ -0,0 +1,38 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"net/url"
|
||||
"time"
|
||||
)
|
||||
|
||||
// SASOptions includes options used by SAS URIs for different
|
||||
// services and resources.
|
||||
type SASOptions struct {
|
||||
APIVersion string
|
||||
Start time.Time
|
||||
Expiry time.Time
|
||||
IP string
|
||||
UseHTTPS bool
|
||||
Identifier string
|
||||
}
|
||||
|
||||
func addQueryParameter(query url.Values, key, value string) url.Values {
|
||||
if value != "" {
|
||||
query.Add(key, value)
|
||||
}
|
||||
return query
|
||||
}
426 vendor/github.com/Azure/azure-sdk-for-go/storage/container.go generated vendored
@ -1,13 +1,27 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
|
@ -16,20 +30,76 @@ type Container struct {
|
|||
bsc *BlobStorageClient
|
||||
Name string `xml:"Name"`
|
||||
Properties ContainerProperties `xml:"Properties"`
|
||||
Metadata map[string]string
|
||||
sasuri url.URL
|
||||
}
|
||||
|
||||
// Client returns the HTTP client used by the Container reference.
|
||||
func (c *Container) Client() *Client {
|
||||
return &c.bsc.client
|
||||
}
|
||||
|
||||
func (c *Container) buildPath() string {
|
||||
return fmt.Sprintf("/%s", c.Name)
|
||||
}
|
||||
|
||||
// GetURL gets the canonical URL to the container.
|
||||
// This method does not create a publicly accessible URL if the container
|
||||
// is private and this method does not check if the container exists.
|
||||
func (c *Container) GetURL() string {
|
||||
container := c.Name
|
||||
if container == "" {
|
||||
container = "$root"
|
||||
}
|
||||
return c.bsc.client.getEndpoint(blobServiceName, pathForResource(container, ""), nil)
|
||||
}
|
||||
|
||||
// ContainerSASOptions are options to construct a container SAS
|
||||
// URI.
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
|
||||
type ContainerSASOptions struct {
|
||||
ContainerSASPermissions
|
||||
OverrideHeaders
|
||||
SASOptions
|
||||
}
|
||||
|
||||
// ContainerSASPermissions includes the available permissions for
|
||||
// a container SAS URI.
|
||||
type ContainerSASPermissions struct {
|
||||
BlobServiceSASPermissions
|
||||
List bool
|
||||
}
|
||||
|
||||
// GetSASURI creates an URL to the container which contains the Shared
|
||||
// Access Signature with the specified options.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
|
||||
func (c *Container) GetSASURI(options ContainerSASOptions) (string, error) {
|
||||
uri := c.GetURL()
|
||||
signedResource := "c"
|
||||
canonicalizedResource, err := c.bsc.client.buildCanonicalizedResource(uri, c.bsc.auth, true)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
// build permissions string
|
||||
permissions := options.BlobServiceSASPermissions.buildString()
|
||||
if options.List {
|
||||
permissions += "l"
|
||||
}
|
||||
|
||||
return c.bsc.client.blobAndFileSASURI(options.SASOptions, uri, permissions, canonicalizedResource, signedResource, options.OverrideHeaders)
|
||||
}
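
A sketch of issuing a read/list container SAS with the options type above. It assumes the usual GetContainerReference helper on BlobStorageClient and that the embedded BlobServiceSASPermissions carries a Read flag, both defined elsewhere in this package; credentials are placeholders.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=") // placeholder credentials
	if err != nil {
		log.Fatal(err)
	}
	cnt := client.GetBlobService().GetContainerReference("mycontainer")

	var opts storage.ContainerSASOptions
	opts.Read = true // promoted from the embedded BlobServiceSASPermissions (assumed field)
	opts.List = true
	opts.Expiry = time.Now().Add(24 * time.Hour).UTC()
	opts.UseHTTPS = true

	sasURI, err := cnt.GetSASURI(opts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(sasURI)
}
```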
|
||||
|
||||
// ContainerProperties contains various properties of a container returned from
|
||||
// various endpoints like ListContainers.
|
||||
type ContainerProperties struct {
|
||||
LastModified string `xml:"Last-Modified"`
|
||||
Etag string `xml:"Etag"`
|
||||
LeaseStatus string `xml:"LeaseStatus"`
|
||||
LeaseState string `xml:"LeaseState"`
|
||||
LeaseDuration string `xml:"LeaseDuration"`
|
||||
LastModified string `xml:"Last-Modified"`
|
||||
Etag string `xml:"Etag"`
|
||||
LeaseStatus string `xml:"LeaseStatus"`
|
||||
LeaseState string `xml:"LeaseState"`
|
||||
LeaseDuration string `xml:"LeaseDuration"`
|
||||
PublicAccess ContainerAccessType `xml:"PublicAccess"`
|
||||
}
|
||||
|
||||
// ContainerListResponse contains the response fields from
|
||||
|
@ -69,6 +139,14 @@ type BlobListResponse struct {
|
|||
Delimiter string `xml:"Delimiter"`
|
||||
}
|
||||
|
||||
// IncludeBlobDataset has options to include in a list blobs operation
|
||||
type IncludeBlobDataset struct {
|
||||
Snapshots bool
|
||||
Metadata bool
|
||||
UncommittedBlobs bool
|
||||
Copy bool
|
||||
}
|
||||
|
||||
// ListBlobsParameters defines the set of customizable
|
||||
// parameters to make a List Blobs call.
|
||||
//
|
||||
|
@ -77,9 +155,10 @@ type ListBlobsParameters struct {
|
|||
Prefix string
|
||||
Delimiter string
|
||||
Marker string
|
||||
Include string
|
||||
Include *IncludeBlobDataset
|
||||
MaxResults uint
|
||||
Timeout uint
|
||||
RequestID string
|
||||
}
|
||||
|
||||
func (p ListBlobsParameters) getParameters() url.Values {
|
||||
|
@ -94,19 +173,32 @@ func (p ListBlobsParameters) getParameters() url.Values {
|
|||
if p.Marker != "" {
|
||||
out.Set("marker", p.Marker)
|
||||
}
|
||||
if p.Include != "" {
|
||||
out.Set("include", p.Include)
|
||||
if p.Include != nil {
|
||||
include := []string{}
|
||||
include = addString(include, p.Include.Snapshots, "snapshots")
|
||||
include = addString(include, p.Include.Metadata, "metadata")
|
||||
include = addString(include, p.Include.UncommittedBlobs, "uncommittedblobs")
|
||||
include = addString(include, p.Include.Copy, "copy")
|
||||
fullInclude := strings.Join(include, ",")
|
||||
out.Set("include", fullInclude)
|
||||
}
|
||||
if p.MaxResults != 0 {
|
||||
out.Set("maxresults", fmt.Sprintf("%v", p.MaxResults))
|
||||
out.Set("maxresults", strconv.FormatUint(uint64(p.MaxResults), 10))
|
||||
}
|
||||
if p.Timeout != 0 {
|
||||
out.Set("timeout", fmt.Sprintf("%v", p.Timeout))
|
||||
out.Set("timeout", strconv.FormatUint(uint64(p.Timeout), 10))
|
||||
}
|
||||
|
||||
return out
|
||||
}
|
||||
|
||||
func addString(datasets []string, include bool, text string) []string {
|
||||
if include {
|
||||
datasets = append(datasets, text)
|
||||
}
|
||||
return datasets
|
||||
}
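
A sketch of a list call using the structured include options above; it assumes the GetContainerReference helper and placeholder credentials. With these settings getParameters() serializes to prefix=logs/&maxresults=100&include=snapshots,metadata.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=") // placeholder credentials
	if err != nil {
		log.Fatal(err)
	}
	cnt := client.GetBlobService().GetContainerReference("mycontainer")

	resp, err := cnt.ListBlobs(storage.ListBlobsParameters{
		Prefix:     "logs/",
		MaxResults: 100,
		Include: &storage.IncludeBlobDataset{
			Snapshots: true,
			Metadata:  true,
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, b := range resp.Blobs {
		fmt.Println(b.Name)
	}
}
```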
|
||||
|
||||
// ContainerAccessType defines the access level to the container from a public
|
||||
// request.
|
||||
//
|
||||
|
@ -142,123 +234,160 @@ const (
|
|||
ContainerAccessHeader string = "x-ms-blob-public-access"
|
||||
)
|
||||
|
||||
// GetBlobReference returns a Blob object for the specified blob name.
|
||||
func (c *Container) GetBlobReference(name string) *Blob {
|
||||
return &Blob{
|
||||
Container: c,
|
||||
Name: name,
|
||||
}
|
||||
}
|
||||
|
||||
// CreateContainerOptions includes the options for a create container operation
|
||||
type CreateContainerOptions struct {
|
||||
Timeout uint
|
||||
Access ContainerAccessType `header:"x-ms-blob-public-access"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// Create creates a blob container within the storage account
|
||||
// with given name and access level. Returns error if container already exists.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179468.aspx
|
||||
func (c *Container) Create() error {
|
||||
resp, err := c.create()
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-Container
|
||||
func (c *Container) Create(options *CreateContainerOptions) error {
|
||||
resp, err := c.create(options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusCreated})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// CreateIfNotExists creates a blob container if it does not exist. Returns
|
||||
// true if container is newly created or false if container already exists.
|
||||
func (c *Container) CreateIfNotExists() (bool, error) {
|
||||
resp, err := c.create()
|
||||
func (c *Container) CreateIfNotExists(options *CreateContainerOptions) (bool, error) {
|
||||
resp, err := c.create(options)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusCreated || resp.statusCode == http.StatusConflict {
|
||||
return resp.statusCode == http.StatusCreated, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusConflict {
|
||||
return resp.StatusCode == http.StatusCreated, nil
|
||||
}
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
func (c *Container) create() (*storageResponse, error) {
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), url.Values{"restype": {"container"}})
|
||||
func (c *Container) create(options *CreateContainerOptions) (*http.Response, error) {
|
||||
query := url.Values{"restype": {"container"}}
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
headers = c.bsc.client.addMetadataToHeaders(headers, c.Metadata)
|
||||
|
||||
if options != nil {
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), query)
|
||||
|
||||
return c.bsc.client.exec(http.MethodPut, uri, headers, nil, c.bsc.auth)
|
||||
}
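
A sketch of creating a container with the options struct above. ContainerAccessTypeBlob is assumed to be one of the ContainerAccessType constants defined elsewhere in this package; credentials are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=") // placeholder credentials
	if err != nil {
		log.Fatal(err)
	}
	cnt := client.GetBlobService().GetContainerReference("mycontainer")

	created, err := cnt.CreateIfNotExists(&storage.CreateContainerOptions{
		Timeout: 30,                              // seconds, appended as ?timeout=30
		Access:  storage.ContainerAccessTypeBlob, // assumed constant: public read access for blobs only
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("newly created:", created)
}
```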
|
||||
|
||||
// Exists returns true if a container with given name exists
|
||||
// on the storage account, otherwise returns false.
|
||||
func (c *Container) Exists() (bool, error) {
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), url.Values{"restype": {"container"}})
|
||||
q := url.Values{"restype": {"container"}}
|
||||
var uri string
|
||||
if c.bsc.client.isServiceSASClient() {
|
||||
q = mergeParams(q, c.sasuri.Query())
|
||||
newURI := c.sasuri
|
||||
newURI.RawQuery = q.Encode()
|
||||
uri = newURI.String()
|
||||
|
||||
} else {
|
||||
uri = c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), q)
|
||||
}
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
|
||||
resp, err := c.bsc.client.exec(http.MethodHead, uri, headers, nil, c.bsc.auth)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound {
|
||||
return resp.statusCode == http.StatusOK, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusOK, nil
|
||||
}
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
// SetPermissions sets up container permissions as per https://msdn.microsoft.com/en-us/library/azure/dd179391.aspx
|
||||
func (c *Container) SetPermissions(permissions ContainerPermissions, timeout int, leaseID string) error {
|
||||
// SetContainerPermissionOptions includes options for a set container permissions operation
|
||||
type SetContainerPermissionOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
IfModifiedSince *time.Time `header:"If-Modified-Since"`
|
||||
IfUnmodifiedSince *time.Time `header:"If-Unmodified-Since"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// SetPermissions sets up container permissions
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-Container-ACL
|
||||
func (c *Container) SetPermissions(permissions ContainerPermissions, options *SetContainerPermissionOptions) error {
|
||||
body, length, err := generateContainerACLpayload(permissions.AccessPolicies)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
params := url.Values{
|
||||
"restype": {"container"},
|
||||
"comp": {"acl"},
|
||||
}
|
||||
|
||||
if timeout > 0 {
|
||||
params.Add("timeout", strconv.Itoa(timeout))
|
||||
}
|
||||
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
if permissions.AccessType != "" {
|
||||
headers[ContainerAccessHeader] = string(permissions.AccessType)
|
||||
}
|
||||
|
||||
if leaseID != "" {
|
||||
headers[headerLeaseID] = leaseID
|
||||
}
|
||||
|
||||
body, length, err := generateContainerACLpayload(permissions.AccessPolicies)
|
||||
headers = addToHeaders(headers, ContainerAccessHeader, string(permissions.AccessType))
|
||||
headers["Content-Length"] = strconv.Itoa(length)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
|
||||
resp, err := c.bsc.client.exec(http.MethodPut, uri, headers, body, c.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusOK})
|
||||
}
|
||||
|
||||
if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
|
||||
return errors.New("Unable to set permissions")
|
||||
}
|
||||
|
||||
return nil
|
||||
// GetContainerPermissionOptions includes options for a get container permissions operation
|
||||
type GetContainerPermissionOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// GetPermissions gets the container permissions as per https://msdn.microsoft.com/en-us/library/azure/dd179469.aspx
|
||||
// If timeout is 0 then it will not be passed to Azure
|
||||
// leaseID will only be passed to Azure if populated
|
||||
func (c *Container) GetPermissions(timeout int, leaseID string) (*ContainerPermissions, error) {
|
||||
func (c *Container) GetPermissions(options *GetContainerPermissionOptions) (*ContainerPermissions, error) {
|
||||
params := url.Values{
|
||||
"restype": {"container"},
|
||||
"comp": {"acl"},
|
||||
}
|
||||
|
||||
if timeout > 0 {
|
||||
params.Add("timeout", strconv.Itoa(timeout))
|
||||
}
|
||||
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
|
||||
if leaseID != "" {
|
||||
headers[headerLeaseID] = leaseID
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
|
||||
resp, err := c.bsc.client.exec(http.MethodGet, uri, headers, nil, c.bsc.auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
|
||||
var ap AccessPolicy
|
||||
err = xmlUnmarshal(resp.body, &ap.SignedIdentifiersList)
|
||||
err = xmlUnmarshal(resp.Body, &ap.SignedIdentifiersList)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return buildAccessPolicy(ap, &resp.headers), nil
|
||||
return buildAccessPolicy(ap, &resp.Header), nil
|
||||
}
|
||||
|
||||
func buildAccessPolicy(ap AccessPolicy, headers *http.Header) *ContainerPermissions {
|
||||
|
@ -284,17 +413,26 @@ func buildAccessPolicy(ap AccessPolicy, headers *http.Header) *ContainerPermissi
|
|||
return &permissions
|
||||
}
|
||||
|
||||
// DeleteContainerOptions includes options for a delete container operation
|
||||
type DeleteContainerOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
IfModifiedSince *time.Time `header:"If-Modified-Since"`
|
||||
IfUnmodifiedSince *time.Time `header:"If-Unmodified-Since"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// Delete deletes the container with given name on the storage
|
||||
// account. If the container does not exist returns error.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179408.aspx
|
||||
func (c *Container) Delete() error {
|
||||
resp, err := c.delete()
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/delete-container
|
||||
func (c *Container) Delete(options *DeleteContainerOptions) error {
|
||||
resp, err := c.delete(options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusAccepted})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusAccepted})
|
||||
}
|
||||
|
||||
// DeleteIfExists deletes the container with given name on the storage
|
||||
|
@ -302,47 +440,142 @@ func (c *Container) Delete() error {
|
|||
// false if the container did not exist at the time of the Delete Container
|
||||
// operation.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179408.aspx
|
||||
func (c *Container) DeleteIfExists() (bool, error) {
|
||||
resp, err := c.delete()
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/delete-container
|
||||
func (c *Container) DeleteIfExists(options *DeleteContainerOptions) (bool, error) {
|
||||
resp, err := c.delete(options)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound {
|
||||
return resp.statusCode == http.StatusAccepted, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusAccepted, nil
|
||||
}
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
func (c *Container) delete() (*storageResponse, error) {
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), url.Values{"restype": {"container"}})
|
||||
func (c *Container) delete(options *DeleteContainerOptions) (*http.Response, error) {
|
||||
query := url.Values{"restype": {"container"}}
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), query)
|
||||
|
||||
return c.bsc.client.exec(http.MethodDelete, uri, headers, nil, c.bsc.auth)
|
||||
}
|
||||
|
||||
// ListBlobs returns an object that contains list of blobs in the container,
|
||||
// pagination token and other information in the response of List Blobs call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd135734.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Blobs
|
||||
func (c *Container) ListBlobs(params ListBlobsParameters) (BlobListResponse, error) {
|
||||
q := mergeParams(params.getParameters(), url.Values{
|
||||
"restype": {"container"},
|
||||
"comp": {"list"}},
|
||||
)
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), q)
|
||||
"comp": {"list"},
|
||||
})
|
||||
var uri string
|
||||
if c.bsc.client.isServiceSASClient() {
|
||||
q = mergeParams(q, c.sasuri.Query())
|
||||
newURI := c.sasuri
|
||||
newURI.RawQuery = q.Encode()
|
||||
uri = newURI.String()
|
||||
} else {
|
||||
uri = c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), q)
|
||||
}
|
||||
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
headers = addToHeaders(headers, "x-ms-client-request-id", params.RequestID)
|
||||
|
||||
var out BlobListResponse
|
||||
resp, err := c.bsc.client.exec(http.MethodGet, uri, headers, nil, c.bsc.auth)
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
|
||||
err = xmlUnmarshal(resp.body, &out)
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
for i := range out.Blobs {
|
||||
out.Blobs[i].Container = c
|
||||
}
|
||||
return out, err
|
||||
}
|
||||
|
||||
// ContainerMetadataOptions includes options for container metadata operations
|
||||
type ContainerMetadataOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// SetMetadata replaces the metadata for the specified container.
|
||||
//
|
||||
// Some keys may be converted to Camel-Case before sending. All keys
|
||||
// are returned in lower case by GetBlobMetadata. HTTP header names
|
||||
// are case-insensitive so case munging should not matter to other
|
||||
// applications either.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/set-container-metadata
|
||||
func (c *Container) SetMetadata(options *ContainerMetadataOptions) error {
|
||||
params := url.Values{
|
||||
"comp": {"metadata"},
|
||||
"restype": {"container"},
|
||||
}
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
headers = c.bsc.client.addMetadataToHeaders(headers, c.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
|
||||
resp, err := c.bsc.client.exec(http.MethodPut, uri, headers, nil, c.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusOK})
|
||||
}
|
||||
|
||||
// GetMetadata returns all user-defined metadata for the specified container.
|
||||
//
|
||||
// All metadata keys will be returned in lower case. (HTTP header
|
||||
// names are case-insensitive.)
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/get-container-metadata
|
||||
func (c *Container) GetMetadata(options *ContainerMetadataOptions) error {
|
||||
params := url.Values{
|
||||
"comp": {"metadata"},
|
||||
"restype": {"container"},
|
||||
}
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
|
||||
resp, err := c.bsc.client.exec(http.MethodGet, uri, headers, nil, c.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
if err := checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.writeMetadata(resp.Header)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Container) writeMetadata(h http.Header) {
|
||||
c.Metadata = writeMetadata(h)
|
||||
}
|
||||
|
||||
func generateContainerACLpayload(policies []ContainerAccessPolicy) (io.Reader, int, error) {
|
||||
sil := SignedIdentifiers{
|
||||
SignedIdentifiers: []SignedIdentifier{},
|
||||
|
@ -374,3 +607,34 @@ func (capd *ContainerAccessPolicy) generateContainerPermissions() (permissions s
|
|||
|
||||
return permissions
|
||||
}
|
||||
|
||||
// GetProperties updates the properties of the container.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/get-container-properties
|
||||
func (c *Container) GetProperties() error {
|
||||
params := url.Values{
|
||||
"restype": {"container"},
|
||||
}
|
||||
headers := c.bsc.client.getStandardHeaders()
|
||||
|
||||
uri := c.bsc.client.getEndpoint(blobServiceName, c.buildPath(), params)
|
||||
|
||||
resp, err := c.bsc.client.exec(http.MethodGet, uri, headers, nil, c.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
if err := checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// update properties
|
||||
c.Properties.Etag = resp.Header.Get(headerEtag)
|
||||
c.Properties.LeaseStatus = resp.Header.Get("x-ms-lease-status")
|
||||
c.Properties.LeaseState = resp.Header.Get("x-ms-lease-state")
|
||||
c.Properties.LeaseDuration = resp.Header.Get("x-ms-lease-duration")
|
||||
c.Properties.LastModified = resp.Header.Get("Last-Modified")
|
||||
c.Properties.PublicAccess = ContainerAccessType(resp.Header.Get(ContainerAccessHeader))
|
||||
|
||||
return nil
|
||||
}
|
||||
237 vendor/github.com/Azure/azure-sdk-for-go/storage/copyblob.go generated vendored Normal file
@ -0,0 +1,237 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
blobCopyStatusPending = "pending"
|
||||
blobCopyStatusSuccess = "success"
|
||||
blobCopyStatusAborted = "aborted"
|
||||
blobCopyStatusFailed = "failed"
|
||||
)
|
||||
|
||||
// CopyOptions includes the options for a copy blob operation
|
||||
type CopyOptions struct {
|
||||
Timeout uint
|
||||
Source CopyOptionsConditions
|
||||
Destiny CopyOptionsConditions
|
||||
RequestID string
|
||||
}
|
||||
|
||||
// IncrementalCopyOptions includes the options for an incremental copy blob operation
|
||||
type IncrementalCopyOptions struct {
|
||||
Timeout uint
|
||||
Destination IncrementalCopyOptionsConditions
|
||||
RequestID string
|
||||
}
|
||||
|
||||
// CopyOptionsConditions includes some conditional options in a copy blob operation
|
||||
type CopyOptionsConditions struct {
|
||||
LeaseID string
|
||||
IfModifiedSince *time.Time
|
||||
IfUnmodifiedSince *time.Time
|
||||
IfMatch string
|
||||
IfNoneMatch string
|
||||
}
|
||||
|
||||
// IncrementalCopyOptionsConditions includes some conditional options in a copy blob operation
|
||||
type IncrementalCopyOptionsConditions struct {
|
||||
IfModifiedSince *time.Time
|
||||
IfUnmodifiedSince *time.Time
|
||||
IfMatch string
|
||||
IfNoneMatch string
|
||||
}
|
||||
|
||||
// Copy starts a blob copy operation and waits for the operation to
|
||||
// complete. sourceBlob parameter must be a canonical URL to the blob (can be
|
||||
// obtained using the GetURL method.) There is no SLA on blob copy and therefore
|
||||
// this helper method works faster on smaller files.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Copy-Blob
|
||||
func (b *Blob) Copy(sourceBlob string, options *CopyOptions) error {
|
||||
copyID, err := b.StartCopy(sourceBlob, options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return b.WaitForCopy(copyID)
|
||||
}
|
||||
|
||||
// StartCopy starts a blob copy operation.
|
||||
// sourceBlob parameter must be a canonical URL to the blob (can be
|
||||
// obtained using the GetURL method.)
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Copy-Blob
|
||||
func (b *Blob) StartCopy(sourceBlob string, options *CopyOptions) (string, error) {
|
||||
params := url.Values{}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-copy-source"] = sourceBlob
|
||||
headers = b.Container.bsc.client.addMetadataToHeaders(headers, b.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = addToHeaders(headers, "x-ms-client-request-id", options.RequestID)
|
||||
// source
|
||||
headers = addToHeaders(headers, "x-ms-source-lease-id", options.Source.LeaseID)
|
||||
headers = addTimeToHeaders(headers, "x-ms-source-if-modified-since", options.Source.IfModifiedSince)
|
||||
headers = addTimeToHeaders(headers, "x-ms-source-if-unmodified-since", options.Source.IfUnmodifiedSince)
|
||||
headers = addToHeaders(headers, "x-ms-source-if-match", options.Source.IfMatch)
|
||||
headers = addToHeaders(headers, "x-ms-source-if-none-match", options.Source.IfNoneMatch)
|
||||
// destination
|
||||
headers = addToHeaders(headers, "x-ms-lease-id", options.Destiny.LeaseID)
|
||||
headers = addTimeToHeaders(headers, "x-ms-if-modified-since", options.Destiny.IfModifiedSince)
|
||||
headers = addTimeToHeaders(headers, "x-ms-if-unmodified-since", options.Destiny.IfUnmodifiedSince)
|
||||
headers = addToHeaders(headers, "x-ms-if-match", options.Destiny.IfMatch)
|
||||
headers = addToHeaders(headers, "x-ms-if-none-match", options.Destiny.IfNoneMatch)
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err := checkRespCode(resp, []int{http.StatusAccepted, http.StatusCreated}); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
copyID := resp.Header.Get("x-ms-copy-id")
|
||||
if copyID == "" {
|
||||
return "", errors.New("Got empty copy id header")
|
||||
}
|
||||
return copyID, nil
|
||||
}
|
||||
|
||||
// AbortCopyOptions includes the options for an abort blob operation
|
||||
type AbortCopyOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// AbortCopy aborts a BlobCopy which has already been triggered by the StartBlobCopy function.
|
||||
// copyID is generated from StartBlobCopy function.
|
||||
// currentLeaseID is required IF the destination blob has an active lease on it.
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Abort-Copy-Blob
|
||||
func (b *Blob) AbortCopy(copyID string, options *AbortCopyOptions) error {
|
||||
params := url.Values{
|
||||
"comp": {"copy"},
|
||||
"copyid": {copyID},
|
||||
}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-copy-action"] = "abort"
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
// WaitForCopy loops until a BlobCopy operation is completed (or fails with error)
|
||||
func (b *Blob) WaitForCopy(copyID string) error {
|
||||
for {
|
||||
err := b.GetProperties(nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if b.Properties.CopyID != copyID {
|
||||
return errBlobCopyIDMismatch
|
||||
}
|
||||
|
||||
switch b.Properties.CopyStatus {
|
||||
case blobCopyStatusSuccess:
|
||||
return nil
|
||||
case blobCopyStatusPending:
|
||||
continue
|
||||
case blobCopyStatusAborted:
|
||||
return errBlobCopyAborted
|
||||
case blobCopyStatusFailed:
|
||||
return fmt.Errorf("storage: blob copy failed. Id=%s Description=%s", b.Properties.CopyID, b.Properties.CopyStatusDescription)
|
||||
default:
|
||||
return fmt.Errorf("storage: unhandled blob copy status: '%s'", b.Properties.CopyStatus)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// IncrementalCopyBlob copies a snapshot of a source blob to the referring blob.
|
||||
// sourceBlob parameter must be a valid snapshot URL of the original blob.
|
||||
// The original blob must be public, or use a Shared Access Signature.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/incremental-copy-blob .
|
||||
func (b *Blob) IncrementalCopyBlob(sourceBlobURL string, snapshotTime time.Time, options *IncrementalCopyOptions) (string, error) {
|
||||
params := url.Values{"comp": {"incrementalcopy"}}
|
||||
|
||||
// need formatting to 7 decimal places so it's friendly to Windows and *nix
|
||||
snapshotTimeFormatted := snapshotTime.Format("2006-01-02T15:04:05.0000000Z")
|
||||
u, err := url.Parse(sourceBlobURL)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
query := u.Query()
|
||||
query.Add("snapshot", snapshotTimeFormatted)
|
||||
encodedQuery := query.Encode()
|
||||
encodedQuery = strings.Replace(encodedQuery, "%3A", ":", -1)
|
||||
u.RawQuery = encodedQuery
|
||||
snapshotURL := u.String()
|
||||
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-copy-source"] = snapshotURL
|
||||
|
||||
if options != nil {
|
||||
addTimeout(params, options.Timeout)
|
||||
headers = addToHeaders(headers, "x-ms-client-request-id", options.RequestID)
|
||||
headers = addTimeToHeaders(headers, "x-ms-if-modified-since", options.Destination.IfModifiedSince)
|
||||
headers = addTimeToHeaders(headers, "x-ms-if-unmodified-since", options.Destination.IfUnmodifiedSince)
|
||||
headers = addToHeaders(headers, "x-ms-if-match", options.Destination.IfMatch)
|
||||
headers = addToHeaders(headers, "x-ms-if-none-match", options.Destination.IfNoneMatch)
|
||||
}
|
||||
|
||||
// get URI of destination blob
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err := checkRespCode(resp, []int{http.StatusAccepted}); err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
copyID := resp.Header.Get("x-ms-copy-id")
|
||||
if copyID == "" {
|
||||
return "", errors.New("Got empty copy id header")
|
||||
}
|
||||
return copyID, nil
|
||||
}
|
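As a usage note for the copy helpers in copyblob.go above, a minimal sketch follows. Account, container, and blob names are placeholders; the container is assumed to already exist, and CreateBlockBlobFromReader and GetURL are assumed from other files in this storage package (they do not appear in this hunk).

```go
package main

import (
	"log"
	"strings"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder credentials; substitute a real account name and base64 key.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	bs := client.GetBlobService()
	cnt := bs.GetContainerReference("mycontainer") // assumed to exist already

	// Upload a small source blob so there is something to copy.
	src := cnt.GetBlobReference("source.txt")
	if err := src.CreateBlockBlobFromReader(strings.NewReader("hello"), nil); err != nil {
		log.Fatal(err)
	}

	// Copy issues StartCopy and then polls with WaitForCopy until the
	// service reports success, abort, or failure.
	dst := cnt.GetBlobReference("copy-of-source.txt")
	if err := dst.Copy(src.GetURL(), &storage.CopyOptions{Timeout: 30}); err != nil {
		log.Fatal(err)
	}
	log.Println("copy finished, status:", dst.Properties.CopyStatus)
}
```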
81
vendor/github.com/Azure/azure-sdk-for-go/storage/directory.go
generated
vendored
|
@@ -1,9 +1,24 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// Directory represents a directory on a share.
|
||||
|
@ -25,8 +40,9 @@ type DirectoryProperties struct {
|
|||
// ListDirsAndFilesParameters defines the set of customizable parameters to
|
||||
// make a List Files and Directories call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166980.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Directories-and-Files
|
||||
type ListDirsAndFilesParameters struct {
|
||||
Prefix string
|
||||
Marker string
|
||||
MaxResults uint
|
||||
Timeout uint
|
||||
|
@ -35,7 +51,7 @@ type ListDirsAndFilesParameters struct {
|
|||
// DirsAndFilesListResponse contains the response fields from
|
||||
// a List Files and Directories call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166980.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Directories-and-Files
|
||||
type DirsAndFilesListResponse struct {
|
||||
XMLName xml.Name `xml:"EnumerationResults"`
|
||||
Xmlns string `xml:"xmlns,attr"`
|
||||
|
@ -60,14 +76,15 @@ func (d *Directory) buildPath() string {
|
|||
// Create this directory in the associated share.
|
||||
// If a directory with the same name already exists, the operation fails.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166993.aspx
|
||||
func (d *Directory) Create() error {
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-Directory
|
||||
func (d *Directory) Create(options *FileRequestOptions) error {
|
||||
// if this is the root directory exit early
|
||||
if d.parent == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
headers, err := d.fsc.createResource(d.buildPath(), resourceDirectory, nil, mergeMDIntoExtraHeaders(d.Metadata, nil), []int{http.StatusCreated})
|
||||
params := prepareOptions(options)
|
||||
headers, err := d.fsc.createResource(d.buildPath(), resourceDirectory, params, mergeMDIntoExtraHeaders(d.Metadata, nil), []int{http.StatusCreated})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -80,23 +97,24 @@ func (d *Directory) Create() error {
|
|||
// directory does not exists. Returns true if the directory is newly created or
|
||||
// false if the directory already exists.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166993.aspx
|
||||
func (d *Directory) CreateIfNotExists() (bool, error) {
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-Directory
|
||||
func (d *Directory) CreateIfNotExists(options *FileRequestOptions) (bool, error) {
|
||||
// if this is the root directory exit early
|
||||
if d.parent == nil {
|
||||
return false, nil
|
||||
}
|
||||
|
||||
resp, err := d.fsc.createResourceNoClose(d.buildPath(), resourceDirectory, nil, nil)
|
||||
params := prepareOptions(options)
|
||||
resp, err := d.fsc.createResourceNoClose(d.buildPath(), resourceDirectory, params, nil)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusCreated || resp.statusCode == http.StatusConflict {
|
||||
if resp.statusCode == http.StatusCreated {
|
||||
d.updateEtagAndLastModified(resp.headers)
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusConflict {
|
||||
if resp.StatusCode == http.StatusCreated {
|
||||
d.updateEtagAndLastModified(resp.Header)
|
||||
return true, nil
|
||||
}
|
||||
|
||||
return false, d.FetchAttributes()
|
||||
return false, d.FetchAttributes(nil)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -106,20 +124,20 @@ func (d *Directory) CreateIfNotExists() (bool, error) {
|
|||
// Delete removes this directory. It must be empty in order to be deleted.
|
||||
// If the directory does not exist the operation fails.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166969.aspx
|
||||
func (d *Directory) Delete() error {
|
||||
return d.fsc.deleteResource(d.buildPath(), resourceDirectory)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-Directory
|
||||
func (d *Directory) Delete(options *FileRequestOptions) error {
|
||||
return d.fsc.deleteResource(d.buildPath(), resourceDirectory, options)
|
||||
}
|
||||
|
||||
// DeleteIfExists removes this directory if it exists.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166969.aspx
|
||||
func (d *Directory) DeleteIfExists() (bool, error) {
|
||||
resp, err := d.fsc.deleteResourceNoClose(d.buildPath(), resourceDirectory)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-Directory
|
||||
func (d *Directory) DeleteIfExists(options *FileRequestOptions) (bool, error) {
|
||||
resp, err := d.fsc.deleteResourceNoClose(d.buildPath(), resourceDirectory, options)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound {
|
||||
return resp.statusCode == http.StatusAccepted, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusAccepted, nil
|
||||
}
|
||||
}
|
||||
return false, err
|
||||
|
@ -135,8 +153,10 @@ func (d *Directory) Exists() (bool, error) {
|
|||
}
|
||||
|
||||
// FetchAttributes retrieves metadata for this directory.
|
||||
func (d *Directory) FetchAttributes() error {
|
||||
headers, err := d.fsc.getResourceHeaders(d.buildPath(), compNone, resourceDirectory, http.MethodHead)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-directory-properties
|
||||
func (d *Directory) FetchAttributes(options *FileRequestOptions) error {
|
||||
params := prepareOptions(options)
|
||||
headers, err := d.fsc.getResourceHeaders(d.buildPath(), compNone, resourceDirectory, params, http.MethodHead)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -164,13 +184,14 @@ func (d *Directory) GetFileReference(name string) *File {
|
|||
Name: name,
|
||||
parent: d,
|
||||
share: d.share,
|
||||
mutex: &sync.Mutex{},
|
||||
}
|
||||
}
|
||||
|
||||
// ListDirsAndFiles returns a list of files and directories under this directory.
|
||||
// It also contains a pagination token and other response details.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166980.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Directories-and-Files
|
||||
func (d *Directory) ListDirsAndFiles(params ListDirsAndFilesParameters) (*DirsAndFilesListResponse, error) {
|
||||
q := mergeParams(params.getParameters(), getURLInitValues(compList, resourceDirectory))
|
||||
|
||||
|
@ -179,9 +200,9 @@ func (d *Directory) ListDirsAndFiles(params ListDirsAndFilesParameters) (*DirsAn
|
|||
return nil, err
|
||||
}
|
||||
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
var out DirsAndFilesListResponse
|
||||
err = xmlUnmarshal(resp.body, &out)
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
return &out, err
|
||||
}
|
||||
|
||||
|
@ -192,9 +213,9 @@ func (d *Directory) ListDirsAndFiles(params ListDirsAndFilesParameters) (*DirsAn
|
|||
// are case-insensitive so case munging should not matter to other
|
||||
// applications either.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/mt427370.aspx
|
||||
func (d *Directory) SetMetadata() error {
|
||||
headers, err := d.fsc.setResourceHeaders(d.buildPath(), compMetadata, resourceDirectory, mergeMDIntoExtraHeaders(d.Metadata, nil))
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-Directory-Metadata
|
||||
func (d *Directory) SetMetadata(options *FileRequestOptions) error {
|
||||
headers, err := d.fsc.setResourceHeaders(d.buildPath(), compMetadata, resourceDirectory, mergeMDIntoExtraHeaders(d.Metadata, nil), options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
|
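A minimal sketch of the directory API above, now that its methods take *FileRequestOptions. Account, share, and directory names are placeholders; the share is assumed to already exist, and GetShareReference, GetRootDirectoryReference, and the Directories/Files fields of DirsAndFilesListResponse are taken from parts of the package not shown in this hunk.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder credentials; substitute a real account name and base64 key.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	fs := client.GetFileService()

	// The share is assumed to exist already.
	share := fs.GetShareReference("myshare")
	root := share.GetRootDirectoryReference()

	// Create a sub-directory, threading a per-request timeout through
	// the new *FileRequestOptions parameter.
	logs := root.GetDirectoryReference("logs")
	if _, err := logs.CreateIfNotExists(&storage.FileRequestOptions{Timeout: 30}); err != nil {
		log.Fatal(err)
	}

	// List what sits under the root directory.
	list, err := root.ListDirsAndFiles(storage.ListDirsAndFilesParameters{MaxResults: 100})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range list.Directories {
		fmt.Println("dir: ", d.Name)
	}
	for _, f := range list.Files {
		fmt.Println("file:", f.Name)
	}
}
```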
456
vendor/github.com/Azure/azure-sdk-for-go/storage/entity.go
generated
vendored
Normal file
|
@@ -0,0 +1,456 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io/ioutil"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/satori/go.uuid"
|
||||
)
|
||||
|
||||
// Annotating as secure for gas scanning
|
||||
/* #nosec */
|
||||
const (
|
||||
partitionKeyNode = "PartitionKey"
|
||||
rowKeyNode = "RowKey"
|
||||
etagErrorTemplate = "Etag didn't match: %v"
|
||||
)
|
||||
|
||||
var (
|
||||
errEmptyPayload = errors.New("Empty payload is not a valid metadata level for this operation")
|
||||
errNilPreviousResult = errors.New("The previous results page is nil")
|
||||
errNilNextLink = errors.New("There are no more pages in this query results")
|
||||
)
|
||||
|
||||
// Entity represents an entity inside an Azure table.
|
||||
type Entity struct {
|
||||
Table *Table
|
||||
PartitionKey string
|
||||
RowKey string
|
||||
TimeStamp time.Time
|
||||
OdataMetadata string
|
||||
OdataType string
|
||||
OdataID string
|
||||
OdataEtag string
|
||||
OdataEditLink string
|
||||
Properties map[string]interface{}
|
||||
}
|
||||
|
||||
// GetEntityReference returns an Entity object with the specified
|
||||
// partition key and row key.
|
||||
func (t *Table) GetEntityReference(partitionKey, rowKey string) *Entity {
|
||||
return &Entity{
|
||||
PartitionKey: partitionKey,
|
||||
RowKey: rowKey,
|
||||
Table: t,
|
||||
}
|
||||
}
|
||||
|
||||
// EntityOptions includes options for entity operations.
|
||||
type EntityOptions struct {
|
||||
Timeout uint
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// GetEntityOptions includes options for a get entity operation
|
||||
type GetEntityOptions struct {
|
||||
Select []string
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// Get gets the referenced entity. Which properties to get can be
|
||||
// specified using the select option.
|
||||
// See:
|
||||
// https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/query-entities
|
||||
// https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/querying-tables-and-entities
|
||||
func (e *Entity) Get(timeout uint, ml MetadataLevel, options *GetEntityOptions) error {
|
||||
if ml == EmptyPayload {
|
||||
return errEmptyPayload
|
||||
}
|
||||
// RowKey and PartitionKey could be lost if not included in the query
|
||||
// As those are the entity identifiers, it is best if they are not lost
|
||||
rk := e.RowKey
|
||||
pk := e.PartitionKey
|
||||
|
||||
query := url.Values{
|
||||
"timeout": {strconv.FormatUint(uint64(timeout), 10)},
|
||||
}
|
||||
headers := e.Table.tsc.client.getStandardHeaders()
|
||||
headers[headerAccept] = string(ml)
|
||||
|
||||
if options != nil {
|
||||
if len(options.Select) > 0 {
|
||||
query.Add("$select", strings.Join(options.Select, ","))
|
||||
}
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
|
||||
uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.buildPath(), query)
|
||||
resp, err := e.Table.tsc.client.exec(http.MethodGet, uri, headers, nil, e.Table.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
respBody, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = json.Unmarshal(respBody, e)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
e.PartitionKey = pk
|
||||
e.RowKey = rk
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Insert inserts the referenced entity in its table.
|
||||
// The function fails if there is an entity with the same
|
||||
// PartitionKey and RowKey in the table.
|
||||
// ml determines the level of detail of metadata in the operation response,
|
||||
// or no data at all.
|
||||
// See: https://docs.microsoft.com/rest/api/storageservices/fileservices/insert-entity
|
||||
func (e *Entity) Insert(ml MetadataLevel, options *EntityOptions) error {
|
||||
query, headers := options.getParameters()
|
||||
headers = mergeHeaders(headers, e.Table.tsc.client.getStandardHeaders())
|
||||
|
||||
body, err := json.Marshal(e)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers = addBodyRelatedHeaders(headers, len(body))
|
||||
headers = addReturnContentHeaders(headers, ml)
|
||||
|
||||
uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.Table.buildPath(), query)
|
||||
resp, err := e.Table.tsc.client.exec(http.MethodPost, uri, headers, bytes.NewReader(body), e.Table.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if ml != EmptyPayload {
|
||||
if err = checkRespCode(resp, []int{http.StatusCreated}); err != nil {
|
||||
return err
|
||||
}
|
||||
data, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err = e.UnmarshalJSON(data); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Update updates the contents of an entity. The function fails if there is no entity
|
||||
// with the same PartitionKey and RowKey in the table or if the ETag is different
|
||||
// than the one in Azure.
|
||||
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/update-entity2
|
||||
func (e *Entity) Update(force bool, options *EntityOptions) error {
|
||||
return e.updateMerge(force, http.MethodPut, options)
|
||||
}
|
||||
|
||||
// Merge merges the contents of entity specified with PartitionKey and RowKey
|
||||
// with the content specified in Properties.
|
||||
// The function fails if there is no entity with the same PartitionKey and
|
||||
// RowKey in the table or if the ETag is different than the one in Azure.
|
||||
// Read more: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/merge-entity
|
||||
func (e *Entity) Merge(force bool, options *EntityOptions) error {
|
||||
return e.updateMerge(force, "MERGE", options)
|
||||
}
|
||||
|
||||
// Delete deletes the entity.
|
||||
// The function fails if there is no entity with the same PartitionKey and
|
||||
// RowKey in the table or if the ETag is different than the one in Azure.
|
||||
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/delete-entity1
|
||||
func (e *Entity) Delete(force bool, options *EntityOptions) error {
|
||||
query, headers := options.getParameters()
|
||||
headers = mergeHeaders(headers, e.Table.tsc.client.getStandardHeaders())
|
||||
|
||||
headers = addIfMatchHeader(headers, force, e.OdataEtag)
|
||||
headers = addReturnContentHeaders(headers, EmptyPayload)
|
||||
|
||||
uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.buildPath(), query)
|
||||
resp, err := e.Table.tsc.client.exec(http.MethodDelete, uri, headers, nil, e.Table.tsc.auth)
|
||||
if err != nil {
|
||||
if resp.StatusCode == http.StatusPreconditionFailed {
|
||||
return fmt.Errorf(etagErrorTemplate, err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return e.updateTimestamp(resp.Header)
|
||||
}
|
||||
|
||||
// InsertOrReplace inserts an entity or replaces the existing one.
|
||||
// Read more: https://docs.microsoft.com/rest/api/storageservices/fileservices/insert-or-replace-entity
|
||||
func (e *Entity) InsertOrReplace(options *EntityOptions) error {
|
||||
return e.insertOr(http.MethodPut, options)
|
||||
}
|
||||
|
||||
// InsertOrMerge inserts an entity or merges the existing one.
|
||||
// Read more: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/insert-or-merge-entity
|
||||
func (e *Entity) InsertOrMerge(options *EntityOptions) error {
|
||||
return e.insertOr("MERGE", options)
|
||||
}
|
||||
|
||||
func (e *Entity) buildPath() string {
|
||||
return fmt.Sprintf("%s(PartitionKey='%s', RowKey='%s')", e.Table.buildPath(), e.PartitionKey, e.RowKey)
|
||||
}
|
||||
|
||||
// MarshalJSON is a custom marshaller for entity
|
||||
func (e *Entity) MarshalJSON() ([]byte, error) {
|
||||
completeMap := map[string]interface{}{}
|
||||
completeMap[partitionKeyNode] = e.PartitionKey
|
||||
completeMap[rowKeyNode] = e.RowKey
|
||||
for k, v := range e.Properties {
|
||||
typeKey := strings.Join([]string{k, OdataTypeSuffix}, "")
|
||||
switch t := v.(type) {
|
||||
case []byte:
|
||||
completeMap[typeKey] = OdataBinary
|
||||
completeMap[k] = t
|
||||
case time.Time:
|
||||
completeMap[typeKey] = OdataDateTime
|
||||
completeMap[k] = t.Format(time.RFC3339Nano)
|
||||
case uuid.UUID:
|
||||
completeMap[typeKey] = OdataGUID
|
||||
completeMap[k] = t.String()
|
||||
case int64:
|
||||
completeMap[typeKey] = OdataInt64
|
||||
completeMap[k] = fmt.Sprintf("%v", v)
|
||||
default:
|
||||
completeMap[k] = v
|
||||
}
|
||||
if strings.HasSuffix(k, OdataTypeSuffix) {
|
||||
if !(completeMap[k] == OdataBinary ||
|
||||
completeMap[k] == OdataDateTime ||
|
||||
completeMap[k] == OdataGUID ||
|
||||
completeMap[k] == OdataInt64) {
|
||||
return nil, fmt.Errorf("Odata.type annotation %v value is not valid", k)
|
||||
}
|
||||
valueKey := strings.TrimSuffix(k, OdataTypeSuffix)
|
||||
if _, ok := completeMap[valueKey]; !ok {
|
||||
return nil, fmt.Errorf("Odata.type annotation %v defined without value defined", k)
|
||||
}
|
||||
}
|
||||
}
|
||||
return json.Marshal(completeMap)
|
||||
}
|
||||
|
||||
// UnmarshalJSON is a custom unmarshaller for entities
|
||||
func (e *Entity) UnmarshalJSON(data []byte) error {
|
||||
errorTemplate := "Deserializing error: %v"
|
||||
|
||||
props := map[string]interface{}{}
|
||||
err := json.Unmarshal(data, &props)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// deserialize metadata
|
||||
e.OdataMetadata = stringFromMap(props, "odata.metadata")
|
||||
e.OdataType = stringFromMap(props, "odata.type")
|
||||
e.OdataID = stringFromMap(props, "odata.id")
|
||||
e.OdataEtag = stringFromMap(props, "odata.etag")
|
||||
e.OdataEditLink = stringFromMap(props, "odata.editLink")
|
||||
e.PartitionKey = stringFromMap(props, partitionKeyNode)
|
||||
e.RowKey = stringFromMap(props, rowKeyNode)
|
||||
|
||||
// deserialize timestamp
|
||||
timeStamp, ok := props["Timestamp"]
|
||||
if ok {
|
||||
str, ok := timeStamp.(string)
|
||||
if !ok {
|
||||
return fmt.Errorf(errorTemplate, "Timestamp casting error")
|
||||
}
|
||||
t, err := time.Parse(time.RFC3339Nano, str)
|
||||
if err != nil {
|
||||
return fmt.Errorf(errorTemplate, err)
|
||||
}
|
||||
e.TimeStamp = t
|
||||
}
|
||||
delete(props, "Timestamp")
|
||||
delete(props, "Timestamp@odata.type")
|
||||
|
||||
// deserialize entity (user defined fields)
|
||||
for k, v := range props {
|
||||
if strings.HasSuffix(k, OdataTypeSuffix) {
|
||||
valueKey := strings.TrimSuffix(k, OdataTypeSuffix)
|
||||
str, ok := props[valueKey].(string)
|
||||
if !ok {
|
||||
return fmt.Errorf(errorTemplate, fmt.Sprintf("%v casting error", v))
|
||||
}
|
||||
switch v {
|
||||
case OdataBinary:
|
||||
props[valueKey], err = base64.StdEncoding.DecodeString(str)
|
||||
if err != nil {
|
||||
return fmt.Errorf(errorTemplate, err)
|
||||
}
|
||||
case OdataDateTime:
|
||||
t, err := time.Parse("2006-01-02T15:04:05Z", str)
|
||||
if err != nil {
|
||||
return fmt.Errorf(errorTemplate, err)
|
||||
}
|
||||
props[valueKey] = t
|
||||
case OdataGUID:
|
||||
props[valueKey] = uuid.FromStringOrNil(str)
|
||||
case OdataInt64:
|
||||
i, err := strconv.ParseInt(str, 10, 64)
|
||||
if err != nil {
|
||||
return fmt.Errorf(errorTemplate, err)
|
||||
}
|
||||
props[valueKey] = i
|
||||
default:
|
||||
return fmt.Errorf(errorTemplate, fmt.Sprintf("%v is not supported", v))
|
||||
}
|
||||
delete(props, k)
|
||||
}
|
||||
}
|
||||
|
||||
e.Properties = props
|
||||
return nil
|
||||
}
|
||||
|
||||
func getAndDelete(props map[string]interface{}, key string) interface{} {
|
||||
if value, ok := props[key]; ok {
|
||||
delete(props, key)
|
||||
return value
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func addIfMatchHeader(h map[string]string, force bool, etag string) map[string]string {
|
||||
if force {
|
||||
h[headerIfMatch] = "*"
|
||||
} else {
|
||||
h[headerIfMatch] = etag
|
||||
}
|
||||
return h
|
||||
}
|
||||
|
||||
// updates Etag and timestamp
|
||||
func (e *Entity) updateEtagAndTimestamp(headers http.Header) error {
|
||||
e.OdataEtag = headers.Get(headerEtag)
|
||||
return e.updateTimestamp(headers)
|
||||
}
|
||||
|
||||
func (e *Entity) updateTimestamp(headers http.Header) error {
|
||||
str := headers.Get(headerDate)
|
||||
t, err := time.Parse(time.RFC1123, str)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Update timestamp error: %v", err)
|
||||
}
|
||||
e.TimeStamp = t
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *Entity) insertOr(verb string, options *EntityOptions) error {
|
||||
query, headers := options.getParameters()
|
||||
headers = mergeHeaders(headers, e.Table.tsc.client.getStandardHeaders())
|
||||
|
||||
body, err := json.Marshal(e)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers = addBodyRelatedHeaders(headers, len(body))
|
||||
headers = addReturnContentHeaders(headers, EmptyPayload)
|
||||
|
||||
uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.buildPath(), query)
|
||||
resp, err := e.Table.tsc.client.exec(verb, uri, headers, bytes.NewReader(body), e.Table.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return e.updateEtagAndTimestamp(resp.Header)
|
||||
}
|
||||
|
||||
func (e *Entity) updateMerge(force bool, verb string, options *EntityOptions) error {
|
||||
query, headers := options.getParameters()
|
||||
headers = mergeHeaders(headers, e.Table.tsc.client.getStandardHeaders())
|
||||
|
||||
body, err := json.Marshal(e)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers = addBodyRelatedHeaders(headers, len(body))
|
||||
headers = addIfMatchHeader(headers, force, e.OdataEtag)
|
||||
headers = addReturnContentHeaders(headers, EmptyPayload)
|
||||
|
||||
uri := e.Table.tsc.client.getEndpoint(tableServiceName, e.buildPath(), query)
|
||||
resp, err := e.Table.tsc.client.exec(verb, uri, headers, bytes.NewReader(body), e.Table.tsc.auth)
|
||||
if err != nil {
|
||||
if resp.StatusCode == http.StatusPreconditionFailed {
|
||||
return fmt.Errorf(etagErrorTemplate, err)
|
||||
}
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return e.updateEtagAndTimestamp(resp.Header)
|
||||
}
|
||||
|
||||
func stringFromMap(props map[string]interface{}, key string) string {
|
||||
value := getAndDelete(props, key)
|
||||
if value != nil {
|
||||
return value.(string)
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
func (options *EntityOptions) getParameters() (url.Values, map[string]string) {
|
||||
query := url.Values{}
|
||||
headers := map[string]string{}
|
||||
if options != nil {
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = headersFromStruct(*options)
|
||||
}
|
||||
return query, headers
|
||||
}
|
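To illustrate the entity API in entity.go above, a minimal sketch: the custom marshaller annotates int64 and time.Time properties with the matching OData type (OdataInt64, OdataDateTime, and so on), and Get can restrict the returned columns via GetEntityOptions.Select. Account and table names are placeholders, and GetTableReference plus the FullMetadata constant are assumed from table.go, which is not part of this hunk.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder credentials; substitute a real account name and base64 key.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	ts := client.GetTableService()
	table := ts.GetTableReference("mytable") // assumed to exist already

	// Typed values such as int64 and time.Time get OData type annotations
	// from Entity.MarshalJSON when the entity is serialized.
	entity := table.GetEntityReference("partition1", "row1")
	entity.Properties = map[string]interface{}{
		"Name":      "example",
		"Size":      int64(1024),
		"CreatedAt": time.Now().UTC(),
	}
	if err := entity.Insert(storage.FullMetadata, nil); err != nil {
		log.Fatal(err)
	}

	// Read it back, selecting only two properties.
	read := table.GetEntityReference("partition1", "row1")
	opts := &storage.GetEntityOptions{Select: []string{"Name", "Size"}}
	if err := read.Get(30, storage.FullMetadata, opts); err != nil {
		log.Fatal(err)
	}
	fmt.Println(read.Properties["Name"], read.Properties["Size"])
}
```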
252
vendor/github.com/Azure/azure-sdk-for-go/storage/file.go
generated
vendored
|
@@ -1,5 +1,19 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
|
@ -8,11 +22,16 @@ import (
|
|||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"sync"
|
||||
)
|
||||
|
||||
const fourMB = uint64(4194304)
|
||||
const oneTB = uint64(1099511627776)
|
||||
|
||||
// Export maximum range and file sizes
|
||||
const MaxRangeSize = fourMB
|
||||
const MaxFileSize = oneTB
|
||||
|
||||
// File represents a file on a share.
|
||||
type File struct {
|
||||
fsc *FileServiceClient
|
||||
|
@ -22,6 +41,7 @@ type File struct {
|
|||
Properties FileProperties `xml:"Properties"`
|
||||
share *Share
|
||||
FileCopyProperties FileCopyState
|
||||
mutex *sync.Mutex
|
||||
}
|
||||
|
||||
// FileProperties contains various properties of a file.
|
||||
|
@ -32,7 +52,7 @@ type FileProperties struct {
|
|||
Etag string
|
||||
Language string `header:"x-ms-content-language"`
|
||||
LastModified string
|
||||
Length uint64 `xml:"Content-Length"`
|
||||
Length uint64 `xml:"Content-Length" header:"x-ms-content-length"`
|
||||
MD5 string `header:"x-ms-content-md5"`
|
||||
Type string `header:"x-ms-content-type"`
|
||||
}
|
||||
|
@ -54,26 +74,22 @@ type FileStream struct {
|
|||
}
|
||||
|
||||
// FileRequestOptions will be passed to misc file operations.
|
||||
// Currently just Timeout (in seconds) but will expand.
|
||||
// Currently just Timeout (in seconds) but could expand.
|
||||
type FileRequestOptions struct {
|
||||
Timeout uint // timeout duration in seconds.
|
||||
}
|
||||
|
||||
// getParameters, construct parameters for FileRequestOptions.
|
||||
// currently only timeout, but expecting to grow as functionality fills out.
|
||||
func (p FileRequestOptions) getParameters() url.Values {
|
||||
out := url.Values{}
|
||||
|
||||
if p.Timeout != 0 {
|
||||
out.Set("timeout", fmt.Sprintf("%v", p.Timeout))
|
||||
func prepareOptions(options *FileRequestOptions) url.Values {
|
||||
params := url.Values{}
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
}
|
||||
|
||||
return out
|
||||
return params
|
||||
}
|
||||
|
||||
// FileRanges contains a list of file range information for a file.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166984.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Ranges
|
||||
type FileRanges struct {
|
||||
ContentLength uint64
|
||||
LastModified string
|
||||
|
@ -83,7 +99,7 @@ type FileRanges struct {
|
|||
|
||||
// FileRange contains range information for a file.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166984.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Ranges
|
||||
type FileRange struct {
|
||||
Start uint64 `xml:"Start"`
|
||||
End uint64 `xml:"End"`
|
||||
|
@ -100,9 +116,13 @@ func (f *File) buildPath() string {
|
|||
|
||||
// ClearRange releases the specified range of space in a file.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn194276.aspx
|
||||
func (f *File) ClearRange(fileRange FileRange) error {
|
||||
headers, err := f.modifyRange(nil, fileRange, nil)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Range
|
||||
func (f *File) ClearRange(fileRange FileRange, options *FileRequestOptions) error {
|
||||
var timeout *uint
|
||||
if options != nil {
|
||||
timeout = &options.Timeout
|
||||
}
|
||||
headers, err := f.modifyRange(nil, fileRange, timeout, nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -113,24 +133,23 @@ func (f *File) ClearRange(fileRange FileRange) error {
|
|||
|
||||
// Create creates a new file or replaces an existing one.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn194271.aspx
|
||||
func (f *File) Create(maxSize uint64) error {
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-File
|
||||
func (f *File) Create(maxSize uint64, options *FileRequestOptions) error {
|
||||
if maxSize > oneTB {
|
||||
return fmt.Errorf("max file size is 1TB")
|
||||
}
|
||||
params := prepareOptions(options)
|
||||
headers := headersFromStruct(f.Properties)
|
||||
headers["x-ms-content-length"] = strconv.FormatUint(maxSize, 10)
|
||||
headers["x-ms-type"] = "file"
|
||||
|
||||
extraHeaders := map[string]string{
|
||||
"x-ms-content-length": strconv.FormatUint(maxSize, 10),
|
||||
"x-ms-type": "file",
|
||||
}
|
||||
|
||||
headers, err := f.fsc.createResource(f.buildPath(), resourceFile, nil, mergeMDIntoExtraHeaders(f.Metadata, extraHeaders), []int{http.StatusCreated})
|
||||
outputHeaders, err := f.fsc.createResource(f.buildPath(), resourceFile, params, mergeMDIntoExtraHeaders(f.Metadata, headers), []int{http.StatusCreated})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
f.Properties.Length = maxSize
|
||||
f.updateEtagAndLastModified(headers)
|
||||
f.updateEtagAndLastModified(outputHeaders)
|
||||
return nil
|
||||
}
|
||||
|
||||
|
@ -142,70 +161,94 @@ func (f *File) CopyFile(sourceURL string, options *FileRequestOptions) error {
|
|||
"x-ms-type": "file",
|
||||
"x-ms-copy-source": sourceURL,
|
||||
}
|
||||
params := prepareOptions(options)
|
||||
|
||||
var parameters url.Values
|
||||
if options != nil {
|
||||
parameters = options.getParameters()
|
||||
}
|
||||
|
||||
headers, err := f.fsc.createResource(f.buildPath(), resourceFile, parameters, mergeMDIntoExtraHeaders(f.Metadata, extraHeaders), []int{http.StatusAccepted})
|
||||
headers, err := f.fsc.createResource(f.buildPath(), resourceFile, params, mergeMDIntoExtraHeaders(f.Metadata, extraHeaders), []int{http.StatusAccepted})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
f.updateEtagLastModifiedAndCopyHeaders(headers)
|
||||
f.updateEtagAndLastModified(headers)
|
||||
f.FileCopyProperties.ID = headers.Get("X-Ms-Copy-Id")
|
||||
f.FileCopyProperties.Status = headers.Get("X-Ms-Copy-Status")
|
||||
return nil
|
||||
}
|
||||
|
||||
// Delete immediately removes this file from the storage account.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn689085.aspx
|
||||
func (f *File) Delete() error {
|
||||
return f.fsc.deleteResource(f.buildPath(), resourceFile)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-File2
|
||||
func (f *File) Delete(options *FileRequestOptions) error {
|
||||
return f.fsc.deleteResource(f.buildPath(), resourceFile, options)
|
||||
}
|
||||
|
||||
// DeleteIfExists removes this file if it exists.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn689085.aspx
|
||||
func (f *File) DeleteIfExists() (bool, error) {
|
||||
resp, err := f.fsc.deleteResourceNoClose(f.buildPath(), resourceFile)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-File2
|
||||
func (f *File) DeleteIfExists(options *FileRequestOptions) (bool, error) {
|
||||
resp, err := f.fsc.deleteResourceNoClose(f.buildPath(), resourceFile, options)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound {
|
||||
return resp.statusCode == http.StatusAccepted, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusAccepted, nil
|
||||
}
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
// GetFileOptions includes options for a get file operation
|
||||
type GetFileOptions struct {
|
||||
Timeout uint
|
||||
GetContentMD5 bool
|
||||
}
|
||||
|
||||
// DownloadToStream operation downloads the file.
|
||||
//
|
||||
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-file
|
||||
func (f *File) DownloadToStream(options *FileRequestOptions) (io.ReadCloser, error) {
|
||||
params := prepareOptions(options)
|
||||
resp, err := f.fsc.getResourceNoClose(f.buildPath(), compNone, resourceFile, params, http.MethodGet, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
drainRespBody(resp)
|
||||
return nil, err
|
||||
}
|
||||
return resp.Body, nil
|
||||
}
|
||||
|
||||
// DownloadRangeToStream operation downloads the specified range of this file with optional MD5 hash.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-file
|
||||
func (f *File) DownloadRangeToStream(fileRange FileRange, getContentMD5 bool) (fs FileStream, err error) {
|
||||
if getContentMD5 && isRangeTooBig(fileRange) {
|
||||
return fs, fmt.Errorf("must specify a range less than or equal to 4MB when getContentMD5 is true")
|
||||
}
|
||||
|
||||
func (f *File) DownloadRangeToStream(fileRange FileRange, options *GetFileOptions) (fs FileStream, err error) {
|
||||
extraHeaders := map[string]string{
|
||||
"Range": fileRange.String(),
|
||||
}
|
||||
if getContentMD5 == true {
|
||||
extraHeaders["x-ms-range-get-content-md5"] = "true"
|
||||
params := url.Values{}
|
||||
if options != nil {
|
||||
if options.GetContentMD5 {
|
||||
if isRangeTooBig(fileRange) {
|
||||
return fs, fmt.Errorf("must specify a range less than or equal to 4MB when getContentMD5 is true")
|
||||
}
|
||||
extraHeaders["x-ms-range-get-content-md5"] = "true"
|
||||
}
|
||||
params = addTimeout(params, options.Timeout)
|
||||
}
|
||||
|
||||
resp, err := f.fsc.getResourceNoClose(f.buildPath(), compNone, resourceFile, http.MethodGet, extraHeaders)
|
||||
resp, err := f.fsc.getResourceNoClose(f.buildPath(), compNone, resourceFile, params, http.MethodGet, extraHeaders)
|
||||
if err != nil {
|
||||
return fs, err
|
||||
}
|
||||
|
||||
if err = checkRespCode(resp.statusCode, []int{http.StatusOK, http.StatusPartialContent}); err != nil {
|
||||
resp.body.Close()
|
||||
if err = checkRespCode(resp, []int{http.StatusOK, http.StatusPartialContent}); err != nil {
|
||||
drainRespBody(resp)
|
||||
return fs, err
|
||||
}
|
||||
|
||||
fs.Body = resp.body
|
||||
if getContentMD5 {
|
||||
fs.ContentMD5 = resp.headers.Get("Content-MD5")
|
||||
fs.Body = resp.Body
|
||||
if options != nil && options.GetContentMD5 {
|
||||
fs.ContentMD5 = resp.Header.Get("Content-MD5")
|
||||
}
|
||||
return fs, nil
|
||||
}
|
||||
|
@ -221,8 +264,10 @@ func (f *File) Exists() (bool, error) {
|
|||
}
|
||||
|
||||
// FetchAttributes updates metadata and properties for this file.
|
||||
func (f *File) FetchAttributes() error {
|
||||
headers, err := f.fsc.getResourceHeaders(f.buildPath(), compNone, resourceFile, http.MethodHead)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-file-properties
|
||||
func (f *File) FetchAttributes(options *FileRequestOptions) error {
|
||||
params := prepareOptions(options)
|
||||
headers, err := f.fsc.getResourceHeaders(f.buildPath(), compNone, resourceFile, params, http.MethodHead)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -242,17 +287,26 @@ func isRangeTooBig(fileRange FileRange) bool {
|
|||
return false
|
||||
}
|
||||
|
||||
// ListRangesOptions includes options for a list file ranges operation
|
||||
type ListRangesOptions struct {
|
||||
Timeout uint
|
||||
ListRange *FileRange
|
||||
}
|
||||
|
||||
// ListRanges returns the list of valid ranges for this file.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166984.aspx
|
||||
func (f *File) ListRanges(listRange *FileRange) (*FileRanges, error) {
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Ranges
|
||||
func (f *File) ListRanges(options *ListRangesOptions) (*FileRanges, error) {
|
||||
params := url.Values{"comp": {"rangelist"}}
|
||||
|
||||
// add optional range to list
|
||||
var headers map[string]string
|
||||
if listRange != nil {
|
||||
headers = make(map[string]string)
|
||||
headers["Range"] = listRange.String()
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
if options.ListRange != nil {
|
||||
headers = make(map[string]string)
|
||||
headers["Range"] = options.ListRange.String()
|
||||
}
|
||||
}
|
||||
|
||||
resp, err := f.fsc.listContent(f.buildPath(), params, headers)
|
||||
|
@ -260,25 +314,25 @@ func (f *File) ListRanges(listRange *FileRange) (*FileRanges, error) {
|
|||
return nil, err
|
||||
}
|
||||
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
var cl uint64
|
||||
cl, err = strconv.ParseUint(resp.headers.Get("x-ms-content-length"), 10, 64)
|
||||
cl, err = strconv.ParseUint(resp.Header.Get("x-ms-content-length"), 10, 64)
|
||||
if err != nil {
|
||||
ioutil.ReadAll(resp.body)
|
||||
ioutil.ReadAll(resp.Body)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var out FileRanges
|
||||
out.ContentLength = cl
|
||||
out.ETag = resp.headers.Get("ETag")
|
||||
out.LastModified = resp.headers.Get("Last-Modified")
|
||||
out.ETag = resp.Header.Get("ETag")
|
||||
out.LastModified = resp.Header.Get("Last-Modified")
|
||||
|
||||
err = xmlUnmarshal(resp.body, &out)
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
return &out, err
|
||||
}
|
||||
|
||||
// modifies a range of bytes in this file
|
||||
func (f *File) modifyRange(bytes io.Reader, fileRange FileRange, contentMD5 *string) (http.Header, error) {
|
||||
func (f *File) modifyRange(bytes io.Reader, fileRange FileRange, timeout *uint, contentMD5 *string) (http.Header, error) {
|
||||
if err := f.fsc.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -289,7 +343,12 @@ func (f *File) modifyRange(bytes io.Reader, fileRange FileRange, contentMD5 *str
|
|||
return nil, errors.New("range cannot exceed 4MB in size")
|
||||
}
|
||||
|
||||
uri := f.fsc.client.getEndpoint(fileServiceName, f.buildPath(), url.Values{"comp": {"range"}})
|
||||
params := url.Values{"comp": {"range"}}
|
||||
if timeout != nil {
|
||||
params = addTimeout(params, *timeout)
|
||||
}
|
||||
|
||||
uri := f.fsc.client.getEndpoint(fileServiceName, f.buildPath(), params)
|
||||
|
||||
// default to clear
|
||||
write := "clear"
|
||||
|
@ -316,8 +375,8 @@ func (f *File) modifyRange(bytes io.Reader, fileRange FileRange, contentMD5 *str
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return resp.headers, checkRespCode(resp.statusCode, []int{http.StatusCreated})
|
||||
defer drainRespBody(resp)
|
||||
return resp.Header, checkRespCode(resp, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// SetMetadata replaces the metadata for this file.
|
||||
|
@ -327,9 +386,9 @@ func (f *File) modifyRange(bytes io.Reader, fileRange FileRange, contentMD5 *str
|
|||
// are case-insensitive so case munging should not matter to other
|
||||
// applications either.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn689097.aspx
|
||||
func (f *File) SetMetadata() error {
|
||||
headers, err := f.fsc.setResourceHeaders(f.buildPath(), compMetadata, resourceFile, mergeMDIntoExtraHeaders(f.Metadata, nil))
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-File-Metadata
|
||||
func (f *File) SetMetadata(options *FileRequestOptions) error {
|
||||
headers, err := f.fsc.setResourceHeaders(f.buildPath(), compMetadata, resourceFile, mergeMDIntoExtraHeaders(f.Metadata, nil), options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -345,9 +404,9 @@ func (f *File) SetMetadata() error {
|
|||
// are case-insensitive so case munging should not matter to other
|
||||
// applications either.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn166975.aspx
|
||||
func (f *File) SetProperties() error {
|
||||
headers, err := f.fsc.setResourceHeaders(f.buildPath(), compProperties, resourceFile, headersFromStruct(f.Properties))
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-File-Properties
|
||||
func (f *File) SetProperties(options *FileRequestOptions) error {
|
||||
headers, err := f.fsc.setResourceHeaders(f.buildPath(), compProperties, resourceFile, headersFromStruct(f.Properties), options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -362,14 +421,6 @@ func (f *File) updateEtagAndLastModified(headers http.Header) {
|
|||
f.Properties.LastModified = headers.Get("Last-Modified")
|
||||
}
|
||||
|
||||
// updates Etag, last modified date and x-ms-copy-id
|
||||
func (f *File) updateEtagLastModifiedAndCopyHeaders(headers http.Header) {
|
||||
f.Properties.Etag = headers.Get("Etag")
|
||||
f.Properties.LastModified = headers.Get("Last-Modified")
|
||||
f.FileCopyProperties.ID = headers.Get("X-Ms-Copy-Id")
|
||||
f.FileCopyProperties.Status = headers.Get("X-Ms-Copy-Status")
|
||||
}
|
||||
|
||||
// updates file properties from the specified HTTP header
|
||||
func (f *File) updateProperties(header http.Header) {
|
||||
size, err := strconv.ParseUint(header.Get("Content-Length"), 10, 64)
|
||||
|
@ -390,23 +441,40 @@ func (f *File) updateProperties(header http.Header) {
|
|||
// This method does not create a publicly accessible URL if the file
|
||||
// is private and this method does not check if the file exists.
|
||||
func (f *File) URL() string {
|
||||
return f.fsc.client.getEndpoint(fileServiceName, f.buildPath(), url.Values{})
|
||||
return f.fsc.client.getEndpoint(fileServiceName, f.buildPath(), nil)
|
||||
}
|
||||
|
||||
// WriteRange writes a range of bytes to this file with an optional MD5 hash of the content.
|
||||
// Note that the length of bytes must match (rangeEnd - rangeStart) + 1 with a maximum size of 4MB.
|
||||
// WriteRangeOptions includes options for a write file range operation
|
||||
type WriteRangeOptions struct {
|
||||
Timeout uint
|
||||
ContentMD5 string
|
||||
}
|
||||
|
||||
// WriteRange writes a range of bytes to this file with an optional MD5 hash of the content (inside
|
||||
// options parameter). Note that the length of bytes must match (rangeEnd - rangeStart) + 1 with
|
||||
// a maximum size of 4MB.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn194276.aspx
|
||||
func (f *File) WriteRange(bytes io.Reader, fileRange FileRange, contentMD5 *string) error {
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Range
|
||||
func (f *File) WriteRange(bytes io.Reader, fileRange FileRange, options *WriteRangeOptions) error {
|
||||
if bytes == nil {
|
||||
return errors.New("bytes cannot be nil")
|
||||
}
|
||||
var timeout *uint
|
||||
var md5 *string
|
||||
if options != nil {
|
||||
timeout = &options.Timeout
|
||||
md5 = &options.ContentMD5
|
||||
}
|
||||
|
||||
headers, err := f.modifyRange(bytes, fileRange, contentMD5)
|
||||
headers, err := f.modifyRange(bytes, fileRange, timeout, md5)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// it's perfectly legal for multiple go routines to call WriteRange
|
||||
// on the same *File (e.g. concurrently writing non-overlapping ranges)
|
||||
// so we must take the file mutex before updating our properties.
|
||||
f.mutex.Lock()
|
||||
f.updateEtagAndLastModified(headers)
|
||||
f.mutex.Unlock()
|
||||
return nil
|
||||
}
|
||||
|
|
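A minimal sketch of the updated file API above: Create reserves space, WriteRange uploads a range (4MB maximum per range), and DownloadRangeToStream reads it back, optionally with a Content-MD5 when the range is small enough. Account, share, and file names are placeholders; the share is assumed to already exist, and the Body/ContentMD5 fields of FileStream are taken from the part of file.go not shown in this hunk.

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder credentials; substitute a real account name and base64 key.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	fs := client.GetFileService()
	share := fs.GetShareReference("myshare") // assumed to exist already
	file := share.GetRootDirectoryReference().GetFileReference("hello.txt")

	// Reserve space for the file, then write one range covering it.
	data := []byte("hello, azure files")
	if err := file.Create(uint64(len(data)), nil); err != nil {
		log.Fatal(err)
	}
	r := storage.FileRange{Start: 0, End: uint64(len(data)) - 1}
	if err := file.WriteRange(bytes.NewReader(data), r, nil); err != nil {
		log.Fatal(err)
	}

	// Read the same range back; GetContentMD5 is allowed because the
	// range is well under the 4MB limit.
	stream, err := file.DownloadRangeToStream(r, &storage.GetFileOptions{GetContentMD5: true})
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Body.Close()
	content, err := ioutil.ReadAll(stream.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read back %q (MD5 %s)\n", content, stream.ContentMD5)
}
```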
149
vendor/github.com/Azure/azure-sdk-for-go/storage/fileserviceclient.go
generated
vendored
|
@@ -1,11 +1,25 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strings"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// FileServiceClient contains operations for Microsoft Azure File Service.
|
||||
|
@ -17,7 +31,7 @@ type FileServiceClient struct {
|
|||
// ListSharesParameters defines the set of customizable parameters to make a
|
||||
// List Shares call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn167009.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Shares
|
||||
type ListSharesParameters struct {
|
||||
Prefix string
|
||||
Marker string
|
||||
|
@ -29,7 +43,7 @@ type ListSharesParameters struct {
|
|||
// ShareListResponse contains the response fields from
|
||||
// ListShares call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn167009.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/List-Shares
|
||||
type ShareListResponse struct {
|
||||
XMLName xml.Name `xml:"EnumerationResults"`
|
||||
Xmlns string `xml:"xmlns,attr"`
|
||||
|
@ -79,10 +93,10 @@ func (p ListSharesParameters) getParameters() url.Values {
|
|||
out.Set("include", p.Include)
|
||||
}
|
||||
if p.MaxResults != 0 {
|
||||
out.Set("maxresults", fmt.Sprintf("%v", p.MaxResults))
|
||||
out.Set("maxresults", strconv.FormatUint(uint64(p.MaxResults), 10))
|
||||
}
|
||||
if p.Timeout != 0 {
|
||||
out.Set("timeout", fmt.Sprintf("%v", p.Timeout))
|
||||
out.Set("timeout", strconv.FormatUint(uint64(p.Timeout), 10))
|
||||
}
|
||||
|
||||
return out
|
||||
|
@ -91,15 +105,16 @@ func (p ListSharesParameters) getParameters() url.Values {
|
|||
func (p ListDirsAndFilesParameters) getParameters() url.Values {
|
||||
out := url.Values{}
|
||||
|
||||
if p.Prefix != "" {
|
||||
out.Set("prefix", p.Prefix)
|
||||
}
|
||||
if p.Marker != "" {
|
||||
out.Set("marker", p.Marker)
|
||||
}
|
||||
if p.MaxResults != 0 {
|
||||
out.Set("maxresults", fmt.Sprintf("%v", p.MaxResults))
|
||||
}
|
||||
if p.Timeout != 0 {
|
||||
out.Set("timeout", fmt.Sprintf("%v", p.Timeout))
|
||||
out.Set("maxresults", strconv.FormatUint(uint64(p.MaxResults), 10))
|
||||
}
|
||||
out = addTimeout(out, p.Timeout)
|
||||
|
||||
return out
|
||||
}
|
||||
|
@ -117,9 +132,9 @@ func getURLInitValues(comp compType, res resourceType) url.Values {
|
|||
}
|
||||
|
||||
// GetShareReference returns a Share object for the specified share name.
|
||||
func (f FileServiceClient) GetShareReference(name string) Share {
|
||||
return Share{
|
||||
fsc: &f,
|
||||
func (f *FileServiceClient) GetShareReference(name string) *Share {
|
||||
return &Share{
|
||||
fsc: f,
|
||||
Name: name,
|
||||
Properties: ShareProperties{
|
||||
Quota: -1,
|
||||
|
@ -130,7 +145,7 @@ func (f FileServiceClient) GetShareReference(name string) Share {
|
|||
// ListShares returns the list of shares in a storage account along with
|
||||
// pagination token and other response details.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179352.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/list-shares
|
||||
func (f FileServiceClient) ListShares(params ListSharesParameters) (*ShareListResponse, error) {
|
||||
q := mergeParams(params.getParameters(), url.Values{"comp": {"list"}})
|
||||
|
||||
|
@ -139,8 +154,8 @@ func (f FileServiceClient) ListShares(params ListSharesParameters) (*ShareListRe
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
err = xmlUnmarshal(resp.body, &out)
|
||||
defer resp.Body.Close()
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
|
||||
// assign our client to the newly created Share objects
|
||||
for i := range out.Shares {
|
||||
|
@ -164,7 +179,7 @@ func (f *FileServiceClient) SetServiceProperties(props ServiceProperties) error
|
|||
}
|
||||
|
||||
// retrieves directory or share content
|
||||
func (f FileServiceClient) listContent(path string, params url.Values, extraHeaders map[string]string) (*storageResponse, error) {
|
||||
func (f FileServiceClient) listContent(path string, params url.Values, extraHeaders map[string]string) (*http.Response, error) {
|
||||
if err := f.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -178,8 +193,8 @@ func (f FileServiceClient) listContent(path string, params url.Values, extraHead
|
|||
return nil, err
|
||||
}
|
||||
|
||||
if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
|
||||
readAndCloseBody(resp.body)
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
drainRespBody(resp)
|
||||
return nil, err
|
||||
}
|
||||
|
||||
|
@ -197,9 +212,9 @@ func (f FileServiceClient) resourceExists(path string, res resourceType) (bool,
|
|||
|
||||
resp, err := f.client.exec(http.MethodHead, uri, headers, nil, f.auth)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound {
|
||||
return resp.statusCode == http.StatusOK, resp.headers, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusOK, resp.Header, nil
|
||||
}
|
||||
}
|
||||
return false, nil, err
|
||||
|
@ -211,12 +226,12 @@ func (f FileServiceClient) createResource(path string, res resourceType, urlPara
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return resp.headers, checkRespCode(resp.statusCode, expectedResponseCodes)
|
||||
defer drainRespBody(resp)
|
||||
return resp.Header, checkRespCode(resp, expectedResponseCodes)
|
||||
}
|
||||
|
||||
// creates a resource depending on the specified resource type, doesn't close the response body
|
||||
func (f FileServiceClient) createResourceNoClose(path string, res resourceType, urlParams url.Values, extraHeaders map[string]string) (*storageResponse, error) {
|
||||
func (f FileServiceClient) createResourceNoClose(path string, res resourceType, urlParams url.Values, extraHeaders map[string]string) (*http.Response, error) {
|
||||
if err := f.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -231,51 +246,50 @@ func (f FileServiceClient) createResourceNoClose(path string, res resourceType,
|
|||
}
|
||||
|
||||
// returns HTTP header data for the specified directory or share
|
||||
func (f FileServiceClient) getResourceHeaders(path string, comp compType, res resourceType, verb string) (http.Header, error) {
|
||||
resp, err := f.getResourceNoClose(path, comp, res, verb, nil)
|
||||
func (f FileServiceClient) getResourceHeaders(path string, comp compType, res resourceType, params url.Values, verb string) (http.Header, error) {
|
||||
resp, err := f.getResourceNoClose(path, comp, res, params, verb, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return resp.headers, nil
|
||||
return resp.Header, nil
|
||||
}
|
||||
|
||||
// gets the specified resource, doesn't close the response body
|
||||
func (f FileServiceClient) getResourceNoClose(path string, comp compType, res resourceType, verb string, extraHeaders map[string]string) (*storageResponse, error) {
|
||||
func (f FileServiceClient) getResourceNoClose(path string, comp compType, res resourceType, params url.Values, verb string, extraHeaders map[string]string) (*http.Response, error) {
|
||||
if err := f.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
params := getURLInitValues(comp, res)
|
||||
params = mergeParams(params, getURLInitValues(comp, res))
|
||||
uri := f.client.getEndpoint(fileServiceName, path, params)
|
||||
extraHeaders = f.client.protectUserAgent(extraHeaders)
|
||||
headers := mergeHeaders(f.client.getStandardHeaders(), extraHeaders)
|
||||
|
||||
return f.client.exec(verb, uri, headers, nil, f.auth)
|
||||
}
|
||||
|
||||
// deletes the resource and returns the response
|
||||
func (f FileServiceClient) deleteResource(path string, res resourceType) error {
|
||||
resp, err := f.deleteResourceNoClose(path, res)
|
||||
func (f FileServiceClient) deleteResource(path string, res resourceType, options *FileRequestOptions) error {
|
||||
resp, err := f.deleteResourceNoClose(path, res, options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusAccepted})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusAccepted})
|
||||
}
|
||||
|
||||
// deletes the resource and returns the response, doesn't close the response body
|
||||
func (f FileServiceClient) deleteResourceNoClose(path string, res resourceType) (*storageResponse, error) {
|
||||
func (f FileServiceClient) deleteResourceNoClose(path string, res resourceType, options *FileRequestOptions) (*http.Response, error) {
|
||||
if err := f.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
values := getURLInitValues(compNone, res)
|
||||
values := mergeParams(getURLInitValues(compNone, res), prepareOptions(options))
|
||||
uri := f.client.getEndpoint(fileServiceName, path, values)
|
||||
return f.client.exec(http.MethodDelete, uri, f.client.getStandardHeaders(), nil, f.auth)
|
||||
}
|
||||
|
@ -294,21 +308,13 @@ func mergeMDIntoExtraHeaders(metadata, extraHeaders map[string]string) map[strin
|
|||
return extraHeaders
|
||||
}
|
||||
|
||||
// merges extraHeaders into headers and returns headers
|
||||
func mergeHeaders(headers, extraHeaders map[string]string) map[string]string {
|
||||
for k, v := range extraHeaders {
|
||||
headers[k] = v
|
||||
}
|
||||
return headers
|
||||
}
|
||||
|
||||
// sets extra header data for the specified resource
|
||||
func (f FileServiceClient) setResourceHeaders(path string, comp compType, res resourceType, extraHeaders map[string]string) (http.Header, error) {
|
||||
func (f FileServiceClient) setResourceHeaders(path string, comp compType, res resourceType, extraHeaders map[string]string, options *FileRequestOptions) (http.Header, error) {
|
||||
if err := f.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
params := getURLInitValues(comp, res)
|
||||
params := mergeParams(getURLInitValues(comp, res), prepareOptions(options))
|
||||
uri := f.client.getEndpoint(fileServiceName, path, params)
|
||||
extraHeaders = f.client.protectUserAgent(extraHeaders)
|
||||
headers := mergeHeaders(f.client.getStandardHeaders(), extraHeaders)
|
||||
|
@ -317,52 +323,9 @@ func (f FileServiceClient) setResourceHeaders(path string, comp compType, res re
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
defer drainRespBody(resp)
|
||||
|
||||
return resp.headers, checkRespCode(resp.statusCode, []int{http.StatusOK})
|
||||
}
|
||||
|
||||
// gets metadata for the specified resource
|
||||
func (f FileServiceClient) getMetadata(path string, res resourceType) (map[string]string, error) {
|
||||
if err := f.checkForStorageEmulator(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
headers, err := f.getResourceHeaders(path, compMetadata, res, http.MethodGet)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return getMetadataFromHeaders(headers), nil
|
||||
}
|
||||
|
||||
// returns a map of custom metadata values from the specified HTTP header
|
||||
func getMetadataFromHeaders(header http.Header) map[string]string {
|
||||
metadata := make(map[string]string)
|
||||
for k, v := range header {
|
||||
// Can't trust CanonicalHeaderKey() to munge case
|
||||
// reliably. "_" is allowed in identifiers:
|
||||
// https://msdn.microsoft.com/en-us/library/azure/dd179414.aspx
|
||||
// https://msdn.microsoft.com/library/aa664670(VS.71).aspx
|
||||
// http://tools.ietf.org/html/rfc7230#section-3.2
|
||||
// ...but "_" is considered invalid by
|
||||
// CanonicalMIMEHeaderKey in
|
||||
// https://golang.org/src/net/textproto/reader.go?s=14615:14659#L542
|
||||
// so k can be "X-Ms-Meta-Foo" or "x-ms-meta-foo_bar".
|
||||
k = strings.ToLower(k)
|
||||
if len(v) == 0 || !strings.HasPrefix(k, strings.ToLower(userDefinedMetadataHeaderPrefix)) {
|
||||
continue
|
||||
}
|
||||
// metadata["foo"] = content of the last X-Ms-Meta-Foo header
|
||||
k = k[len(userDefinedMetadataHeaderPrefix):]
|
||||
metadata[k] = v[len(v)-1]
|
||||
}
|
||||
|
||||
if len(metadata) == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
return metadata
|
||||
return resp.Header, checkRespCode(resp, []int{http.StatusOK})
|
||||
}
|
||||
|
||||
//checkForStorageEmulator determines if the client is setup for use with
|
||||
|
|
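Taken together, the fileserviceclient.go changes above mean share references now come back as pointers wired to the service client, and list results expose the same objects. A rough usage sketch under stated assumptions (the account name/key and share name are placeholders, and `NewBasicClient`/`GetFileService` are assumed to keep their existing signatures in this release):

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder credentials; the key must be valid base64.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	fsc := client.GetFileService()

	// GetShareReference now returns a *Share bound to this client (see diff above).
	share := fsc.GetShareReference("myshare")
	fmt.Println("share:", share.Name)

	// List all shares in the account with default parameters.
	resp, err := fsc.ListShares(storage.ListSharesParameters{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range resp.Shares {
		fmt.Println("found share:", s.Name)
	}
}
```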
201 vendor/github.com/Azure/azure-sdk-for-go/storage/leaseblob.go generated vendored Normal file
|
@ -0,0 +1,201 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"time"
|
||||
)
|
||||
|
||||
// lease constants.
|
||||
const (
|
||||
leaseHeaderPrefix = "x-ms-lease-"
|
||||
headerLeaseID = "x-ms-lease-id"
|
||||
leaseAction = "x-ms-lease-action"
|
||||
leaseBreakPeriod = "x-ms-lease-break-period"
|
||||
leaseDuration = "x-ms-lease-duration"
|
||||
leaseProposedID = "x-ms-proposed-lease-id"
|
||||
leaseTime = "x-ms-lease-time"
|
||||
|
||||
acquireLease = "acquire"
|
||||
renewLease = "renew"
|
||||
changeLease = "change"
|
||||
releaseLease = "release"
|
||||
breakLease = "break"
|
||||
)
|
||||
|
||||
// leasePut is common PUT code for the various acquire/release/break etc functions.
|
||||
func (b *Blob) leaseCommonPut(headers map[string]string, expectedStatus int, options *LeaseOptions) (http.Header, error) {
|
||||
params := url.Values{"comp": {"lease"}}
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err := checkRespCode(resp, []int{expectedStatus}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return resp.Header, nil
|
||||
}
|
||||
|
||||
// LeaseOptions includes options for all operations regarding leasing blobs
|
||||
type LeaseOptions struct {
|
||||
Timeout uint
|
||||
Origin string `header:"Origin"`
|
||||
IfMatch string `header:"If-Match"`
|
||||
IfNoneMatch string `header:"If-None-Match"`
|
||||
IfModifiedSince *time.Time `header:"If-Modified-Since"`
|
||||
IfUnmodifiedSince *time.Time `header:"If-Unmodified-Since"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// AcquireLease creates a lease for a blob
|
||||
// returns leaseID acquired
|
||||
// In API Versions starting on 2012-02-12, the minimum leaseTimeInSeconds is 15, the maximum
|
||||
// non-infinite leaseTimeInSeconds is 60. To specify an infinite lease, provide the value -1.
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Lease-Blob
|
||||
func (b *Blob) AcquireLease(leaseTimeInSeconds int, proposedLeaseID string, options *LeaseOptions) (returnedLeaseID string, err error) {
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers[leaseAction] = acquireLease
|
||||
|
||||
if leaseTimeInSeconds == -1 {
|
||||
// Do nothing, but don't trigger the following clauses.
|
||||
} else if leaseTimeInSeconds > 60 || b.Container.bsc.client.apiVersion < "2012-02-12" {
|
||||
leaseTimeInSeconds = 60
|
||||
} else if leaseTimeInSeconds < 15 {
|
||||
leaseTimeInSeconds = 15
|
||||
}
|
||||
|
||||
headers[leaseDuration] = strconv.Itoa(leaseTimeInSeconds)
|
||||
|
||||
if proposedLeaseID != "" {
|
||||
headers[leaseProposedID] = proposedLeaseID
|
||||
}
|
||||
|
||||
respHeaders, err := b.leaseCommonPut(headers, http.StatusCreated, options)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
returnedLeaseID = respHeaders.Get(http.CanonicalHeaderKey(headerLeaseID))
|
||||
|
||||
if returnedLeaseID != "" {
|
||||
return returnedLeaseID, nil
|
||||
}
|
||||
|
||||
return "", errors.New("LeaseID not returned")
|
||||
}
|
||||
|
||||
// BreakLease breaks the lease for a blob
|
||||
// Returns the timeout remaining in the lease in seconds
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Lease-Blob
|
||||
func (b *Blob) BreakLease(options *LeaseOptions) (breakTimeout int, err error) {
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers[leaseAction] = breakLease
|
||||
return b.breakLeaseCommon(headers, options)
|
||||
}
|
||||
|
||||
// BreakLeaseWithBreakPeriod breaks the lease for a blob
|
||||
// breakPeriodInSeconds is used to determine how long until a new lease can be created.
|
||||
// Returns the timeout remaining in the lease in seconds
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Lease-Blob
|
||||
func (b *Blob) BreakLeaseWithBreakPeriod(breakPeriodInSeconds int, options *LeaseOptions) (breakTimeout int, err error) {
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers[leaseAction] = breakLease
|
||||
headers[leaseBreakPeriod] = strconv.Itoa(breakPeriodInSeconds)
|
||||
return b.breakLeaseCommon(headers, options)
|
||||
}
|
||||
|
||||
// breakLeaseCommon is common code for both version of BreakLease (with and without break period)
|
||||
func (b *Blob) breakLeaseCommon(headers map[string]string, options *LeaseOptions) (breakTimeout int, err error) {
|
||||
|
||||
respHeaders, err := b.leaseCommonPut(headers, http.StatusAccepted, options)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
||||
breakTimeoutStr := respHeaders.Get(http.CanonicalHeaderKey(leaseTime))
|
||||
if breakTimeoutStr != "" {
|
||||
breakTimeout, err = strconv.Atoi(breakTimeoutStr)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
}
|
||||
|
||||
return breakTimeout, nil
|
||||
}
|
||||
|
||||
// ChangeLease changes a lease ID for a blob
|
||||
// Returns the new LeaseID acquired
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Lease-Blob
|
||||
func (b *Blob) ChangeLease(currentLeaseID string, proposedLeaseID string, options *LeaseOptions) (newLeaseID string, err error) {
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers[leaseAction] = changeLease
|
||||
headers[headerLeaseID] = currentLeaseID
|
||||
headers[leaseProposedID] = proposedLeaseID
|
||||
|
||||
respHeaders, err := b.leaseCommonPut(headers, http.StatusOK, options)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
newLeaseID = respHeaders.Get(http.CanonicalHeaderKey(headerLeaseID))
|
||||
if newLeaseID != "" {
|
||||
return newLeaseID, nil
|
||||
}
|
||||
|
||||
return "", errors.New("LeaseID not returned")
|
||||
}
|
||||
|
||||
// ReleaseLease releases the lease for a blob
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Lease-Blob
|
||||
func (b *Blob) ReleaseLease(currentLeaseID string, options *LeaseOptions) error {
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers[leaseAction] = releaseLease
|
||||
headers[headerLeaseID] = currentLeaseID
|
||||
|
||||
_, err := b.leaseCommonPut(headers, http.StatusOK, options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// RenewLease renews the lease for a blob as per https://msdn.microsoft.com/en-us/library/azure/ee691972.aspx
|
||||
func (b *Blob) RenewLease(currentLeaseID string, options *LeaseOptions) error {
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers[leaseAction] = renewLease
|
||||
headers[headerLeaseID] = currentLeaseID
|
||||
|
||||
_, err := b.leaseCommonPut(headers, http.StatusOK, options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
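A short sketch of the lease lifecycle the new leaseblob.go exposes: acquire, renew, then release. `AcquireLease`, `RenewLease` and `ReleaseLease` come directly from the file above; the `GetBlobService`/`GetContainerReference`/`GetBlobReference` helpers and the credentials are assumptions used only to obtain a `*storage.Blob`.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()
	cnt := bsc.GetContainerReference("mycontainer")
	blob := cnt.GetBlobReference("myblob")

	// Acquire a 30-second lease and let the service choose the lease ID.
	leaseID, err := blob.AcquireLease(30, "", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("acquired lease:", leaseID)

	// Keep the lease alive once, then release it.
	if err := blob.RenewLease(leaseID, nil); err != nil {
		log.Fatal(err)
	}
	if err := blob.ReleaseLease(leaseID, nil); err != nil {
		log.Fatal(err)
	}
}
```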
|
170 vendor/github.com/Azure/azure-sdk-for-go/storage/message.go generated vendored Normal file
|
@ -0,0 +1,170 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Message represents an Azure message.
|
||||
type Message struct {
|
||||
Queue *Queue
|
||||
Text string `xml:"MessageText"`
|
||||
ID string `xml:"MessageId"`
|
||||
Insertion TimeRFC1123 `xml:"InsertionTime"`
|
||||
Expiration TimeRFC1123 `xml:"ExpirationTime"`
|
||||
PopReceipt string `xml:"PopReceipt"`
|
||||
NextVisible TimeRFC1123 `xml:"TimeNextVisible"`
|
||||
DequeueCount int `xml:"DequeueCount"`
|
||||
}
|
||||
|
||||
func (m *Message) buildPath() string {
|
||||
return fmt.Sprintf("%s/%s", m.Queue.buildPathMessages(), m.ID)
|
||||
}
|
||||
|
||||
// PutMessageOptions is the set of options that can be specified for the Put Message
// operation. A zero struct does not use any preferences for the request.
|
||||
type PutMessageOptions struct {
|
||||
Timeout uint
|
||||
VisibilityTimeout int
|
||||
MessageTTL int
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// Put operation adds a new message to the back of the message queue.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Message
|
||||
func (m *Message) Put(options *PutMessageOptions) error {
|
||||
query := url.Values{}
|
||||
headers := m.Queue.qsc.client.getStandardHeaders()
|
||||
|
||||
req := putMessageRequest{MessageText: m.Text}
|
||||
body, nn, err := xmlMarshal(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers["Content-Length"] = strconv.Itoa(nn)
|
||||
|
||||
if options != nil {
|
||||
if options.VisibilityTimeout != 0 {
|
||||
query.Set("visibilitytimeout", strconv.Itoa(options.VisibilityTimeout))
|
||||
}
|
||||
if options.MessageTTL != 0 {
|
||||
query.Set("messagettl", strconv.Itoa(options.MessageTTL))
|
||||
}
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
|
||||
uri := m.Queue.qsc.client.getEndpoint(queueServiceName, m.Queue.buildPathMessages(), query)
|
||||
resp, err := m.Queue.qsc.client.exec(http.MethodPost, uri, headers, body, m.Queue.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
err = checkRespCode(resp, []int{http.StatusCreated})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = xmlUnmarshal(resp.Body, m)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// UpdateMessageOptions is the set of options that can be specified for the Update Message
// operation. A zero struct does not use any preferences for the request.
|
||||
type UpdateMessageOptions struct {
|
||||
Timeout uint
|
||||
VisibilityTimeout int
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// Update operation updates the specified message.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Update-Message
|
||||
func (m *Message) Update(options *UpdateMessageOptions) error {
|
||||
query := url.Values{}
|
||||
if m.PopReceipt != "" {
|
||||
query.Set("popreceipt", m.PopReceipt)
|
||||
}
|
||||
|
||||
headers := m.Queue.qsc.client.getStandardHeaders()
|
||||
req := putMessageRequest{MessageText: m.Text}
|
||||
body, nn, err := xmlMarshal(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers["Content-Length"] = strconv.Itoa(nn)
|
||||
|
||||
if options != nil {
|
||||
if options.VisibilityTimeout != 0 {
|
||||
query.Set("visibilitytimeout", strconv.Itoa(options.VisibilityTimeout))
|
||||
}
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := m.Queue.qsc.client.getEndpoint(queueServiceName, m.buildPath(), query)
|
||||
|
||||
resp, err := m.Queue.qsc.client.exec(http.MethodPut, uri, headers, body, m.Queue.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
m.PopReceipt = resp.Header.Get("x-ms-popreceipt")
|
||||
nextTimeStr := resp.Header.Get("x-ms-time-next-visible")
|
||||
if nextTimeStr != "" {
|
||||
nextTime, err := time.Parse(time.RFC1123, nextTimeStr)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
m.NextVisible = TimeRFC1123(nextTime)
|
||||
}
|
||||
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
// Delete operation deletes the specified message.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179347.aspx
|
||||
func (m *Message) Delete(options *QueueServiceOptions) error {
|
||||
params := url.Values{"popreceipt": {m.PopReceipt}}
|
||||
headers := m.Queue.qsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := m.Queue.qsc.client.getEndpoint(queueServiceName, m.buildPath(), params)
|
||||
|
||||
resp, err := m.Queue.qsc.client.exec(http.MethodDelete, uri, headers, nil, m.Queue.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
type putMessageRequest struct {
|
||||
XMLName xml.Name `xml:"QueueMessage"`
|
||||
MessageText string `xml:"MessageText"`
|
||||
}
|
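The new message.go moves message operations onto a `*Message` bound to its queue. A minimal sketch of enqueueing one message with the options shown above; `GetQueueService`/`GetQueueReference` and the credentials are assumptions, while `GetMessageReference`, `Put` and `PutMessageOptions` come from this commit.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	qsc := client.GetQueueService()
	queue := qsc.GetQueueReference("myqueue")

	msg := queue.GetMessageReference("hello from the new message API")
	// Keep the message invisible for 10 seconds and let it live for an hour.
	if err := msg.Put(&storage.PutMessageOptions{VisibilityTimeout: 10, MessageTTL: 3600}); err != nil {
		log.Fatal(err)
	}
	// Put unmarshals the service response back into the message.
	fmt.Println("message id:", msg.ID)
}
```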
47 vendor/github.com/Azure/azure-sdk-for-go/storage/odata.go generated vendored Normal file
|
@ -0,0 +1,47 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
// MetadataLevel determines if operations should return a payload,
// and its level of detail.
|
||||
type MetadataLevel string
|
||||
|
||||
// These constants are meant to help with OData supported operations
|
||||
const (
|
||||
OdataTypeSuffix = "@odata.type"
|
||||
|
||||
// Types
|
||||
|
||||
OdataBinary = "Edm.Binary"
|
||||
OdataDateTime = "Edm.DateTime"
|
||||
OdataGUID = "Edm.Guid"
|
||||
OdataInt64 = "Edm.Int64"
|
||||
|
||||
// Query options
|
||||
|
||||
OdataFilter = "$filter"
|
||||
OdataOrderBy = "$orderby"
|
||||
OdataTop = "$top"
|
||||
OdataSkip = "$skip"
|
||||
OdataCount = "$count"
|
||||
OdataExpand = "$expand"
|
||||
OdataSelect = "$select"
|
||||
OdataSearch = "$search"
|
||||
|
||||
EmptyPayload MetadataLevel = ""
|
||||
NoMetadata MetadataLevel = "application/json;odata=nometadata"
|
||||
MinimalMetadata MetadataLevel = "application/json;odata=minimalmetadata"
|
||||
FullMetadata MetadataLevel = "application/json;odata=fullmetadata"
|
||||
)
|
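A rough sketch of how the exported OData constants above might be used when assembling query options for a table query. Only the constants are taken from this file; the filter expression itself is ordinary OData syntax and the field names are placeholders.

```go
package main

import (
	"fmt"
	"net/url"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	params := url.Values{}
	params.Set(storage.OdataFilter, "Age gt 30 and Department eq 'engineering'")
	params.Set(storage.OdataTop, "10")
	params.Set(storage.OdataSelect, "Name,Age")

	// Prints the encoded query string, e.g. %24filter=...&%24select=Name%2CAge&%24top=10
	fmt.Println(params.Encode())
}
```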
203 vendor/github.com/Azure/azure-sdk-for-go/storage/pageblob.go generated vendored Normal file
|
@ -0,0 +1,203 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"time"
|
||||
)
|
||||
|
||||
// GetPageRangesResponse contains the response fields from
|
||||
// Get Page Ranges call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/ee691973.aspx
|
||||
type GetPageRangesResponse struct {
|
||||
XMLName xml.Name `xml:"PageList"`
|
||||
PageList []PageRange `xml:"PageRange"`
|
||||
}
|
||||
|
||||
// PageRange contains information about a page of a page blob from
|
||||
// Get Pages Range call.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/ee691973.aspx
|
||||
type PageRange struct {
|
||||
Start int64 `xml:"Start"`
|
||||
End int64 `xml:"End"`
|
||||
}
|
||||
|
||||
var (
|
||||
errBlobCopyAborted = errors.New("storage: blob copy is aborted")
|
||||
errBlobCopyIDMismatch = errors.New("storage: blob copy id is a mismatch")
|
||||
)
|
||||
|
||||
// PutPageOptions includes the options for a put page operation
|
||||
type PutPageOptions struct {
|
||||
Timeout uint
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
IfSequenceNumberLessThanOrEqualTo *int `header:"x-ms-if-sequence-number-le"`
|
||||
IfSequenceNumberLessThan *int `header:"x-ms-if-sequence-number-lt"`
|
||||
IfSequenceNumberEqualTo *int `header:"x-ms-if-sequence-number-eq"`
|
||||
IfModifiedSince *time.Time `header:"If-Modified-Since"`
|
||||
IfUnmodifiedSince *time.Time `header:"If-Unmodified-Since"`
|
||||
IfMatch string `header:"If-Match"`
|
||||
IfNoneMatch string `header:"If-None-Match"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// WriteRange writes a range of pages to a page blob.
|
||||
// Ranges must be aligned with 512-byte boundaries and the chunk size must be
// a multiple of 512.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Page
|
||||
func (b *Blob) WriteRange(blobRange BlobRange, bytes io.Reader, options *PutPageOptions) error {
|
||||
if bytes == nil {
|
||||
return errors.New("bytes cannot be nil")
|
||||
}
|
||||
return b.modifyRange(blobRange, bytes, options)
|
||||
}
|
||||
|
||||
// ClearRange clears the given range in a page blob.
|
||||
// Ranges must be aligned with 512-byte boundaries and the chunk size must be
// a multiple of 512.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Page
|
||||
func (b *Blob) ClearRange(blobRange BlobRange, options *PutPageOptions) error {
|
||||
return b.modifyRange(blobRange, nil, options)
|
||||
}
|
||||
|
||||
func (b *Blob) modifyRange(blobRange BlobRange, bytes io.Reader, options *PutPageOptions) error {
|
||||
if blobRange.End < blobRange.Start {
|
||||
return errors.New("the value for rangeEnd must be greater than or equal to rangeStart")
|
||||
}
|
||||
if blobRange.Start%512 != 0 {
|
||||
return errors.New("the value for rangeStart must be a multiple of 512")
|
||||
}
|
||||
if blobRange.End%512 != 511 {
|
||||
return errors.New("the value for rangeEnd must be a multiple of 512 - 1")
|
||||
}
|
||||
|
||||
params := url.Values{"comp": {"page"}}
|
||||
|
||||
// default to clear
|
||||
write := "clear"
|
||||
var cl uint64
|
||||
|
||||
// if bytes is not nil then this is an update operation
|
||||
if bytes != nil {
|
||||
write = "update"
|
||||
cl = (blobRange.End - blobRange.Start) + 1
|
||||
}
|
||||
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-blob-type"] = string(BlobTypePage)
|
||||
headers["x-ms-page-write"] = write
|
||||
headers["x-ms-range"] = blobRange.String()
|
||||
headers["Content-Length"] = fmt.Sprintf("%v", cl)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, bytes, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// GetPageRangesOptions includes the options for a get page ranges operation
|
||||
type GetPageRangesOptions struct {
|
||||
Timeout uint
|
||||
Snapshot *time.Time
|
||||
PreviousSnapshot *time.Time
|
||||
Range *BlobRange
|
||||
LeaseID string `header:"x-ms-lease-id"`
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// GetPageRanges returns the list of valid page ranges for a page blob.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Get-Page-Ranges
|
||||
func (b *Blob) GetPageRanges(options *GetPageRangesOptions) (GetPageRangesResponse, error) {
|
||||
params := url.Values{"comp": {"pagelist"}}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
params = addSnapshot(params, options.Snapshot)
|
||||
if options.PreviousSnapshot != nil {
|
||||
params.Add("prevsnapshot", timeRFC3339Formatted(*options.PreviousSnapshot))
|
||||
}
|
||||
if options.Range != nil {
|
||||
headers["Range"] = options.Range.String()
|
||||
}
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
var out GetPageRangesResponse
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodGet, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return out, err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return out, err
|
||||
}
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
return out, err
|
||||
}
|
||||
|
||||
// PutPageBlob initializes an empty page blob with specified name and maximum
|
||||
// size in bytes (size must be aligned to a 512-byte boundary). A page blob must
|
||||
// be created using this method before writing pages.
|
||||
//
|
||||
// See CreateBlockBlobFromReader for more info on creating blobs.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Put-Blob
|
||||
func (b *Blob) PutPageBlob(options *PutBlobOptions) error {
|
||||
if b.Properties.ContentLength%512 != 0 {
|
||||
return errors.New("Content length must be aligned to a 512-byte boundary")
|
||||
}
|
||||
|
||||
params := url.Values{}
|
||||
headers := b.Container.bsc.client.getStandardHeaders()
|
||||
headers["x-ms-blob-type"] = string(BlobTypePage)
|
||||
headers["x-ms-blob-content-length"] = fmt.Sprintf("%v", b.Properties.ContentLength)
|
||||
headers["x-ms-blob-sequence-number"] = fmt.Sprintf("%v", b.Properties.SequenceNumber)
|
||||
headers = mergeHeaders(headers, headersFromStruct(b.Properties))
|
||||
headers = b.Container.bsc.client.addMetadataToHeaders(headers, b.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := b.Container.bsc.client.getEndpoint(blobServiceName, b.buildPath(), params)
|
||||
|
||||
resp, err := b.Container.bsc.client.exec(http.MethodPut, uri, headers, nil, b.Container.bsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return b.respondCreation(resp, BlobTypePage)
|
||||
}
|
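The page blob flow implied by the file above is: create the blob with an aligned length, write 512-byte pages, then query the valid ranges. A sketch under stated assumptions (credentials and the container/blob helpers are placeholders; `PutPageBlob`, `WriteRange`, `BlobRange` and `GetPageRanges` come from this commit):

```go
package main

import (
	"bytes"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	bsc := client.GetBlobService()
	cnt := bsc.GetContainerReference("mycontainer")
	blob := cnt.GetBlobReference("mypageblob")

	// Page blobs must be created with a 512-byte-aligned length before pages are written.
	blob.Properties.ContentLength = 1024
	if err := blob.PutPageBlob(nil); err != nil {
		log.Fatal(err)
	}

	// Write the first 512-byte page, then list the valid ranges.
	page := make([]byte, 512)
	if err := blob.WriteRange(storage.BlobRange{Start: 0, End: 511}, bytes.NewReader(page), nil); err != nil {
		log.Fatal(err)
	}
	ranges, err := blob.GetPageRanges(nil)
	if err != nil {
		log.Fatal(err)
	}
	for _, r := range ranges.PageList {
		log.Printf("valid range: %d-%d", r.Start, r.End)
	}
}
```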
603 vendor/github.com/Azure/azure-sdk-for-go/storage/queue.go generated vendored
|
@ -1,339 +1,436 @@
|
|||
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"encoding/xml"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
const (
|
||||
// casing is per Golang's http.Header canonicalizing the header names.
|
||||
approximateMessagesCountHeader = "X-Ms-Approximate-Messages-Count"
|
||||
userDefinedMetadataHeaderPrefix = "X-Ms-Meta-"
|
||||
approximateMessagesCountHeader = "X-Ms-Approximate-Messages-Count"
|
||||
)
|
||||
|
||||
func pathForQueue(queue string) string { return fmt.Sprintf("/%s", queue) }
|
||||
func pathForQueueMessages(queue string) string { return fmt.Sprintf("/%s/messages", queue) }
|
||||
func pathForMessage(queue, name string) string { return fmt.Sprintf("/%s/messages/%s", queue, name) }
|
||||
|
||||
type putMessageRequest struct {
|
||||
XMLName xml.Name `xml:"QueueMessage"`
|
||||
MessageText string `xml:"MessageText"`
|
||||
// QueueAccessPolicy represents each access policy in the queue ACL.
|
||||
type QueueAccessPolicy struct {
|
||||
ID string
|
||||
StartTime time.Time
|
||||
ExpiryTime time.Time
|
||||
CanRead bool
|
||||
CanAdd bool
|
||||
CanUpdate bool
|
||||
CanProcess bool
|
||||
}
|
||||
|
||||
// PutMessageParameters is the set of options can be specified for Put Messsage
|
||||
// operation. A zero struct does not use any preferences for the request.
|
||||
type PutMessageParameters struct {
|
||||
VisibilityTimeout int
|
||||
MessageTTL int
|
||||
// QueuePermissions represents the queue ACLs.
|
||||
type QueuePermissions struct {
|
||||
AccessPolicies []QueueAccessPolicy
|
||||
}
|
||||
|
||||
func (p PutMessageParameters) getParameters() url.Values {
|
||||
out := url.Values{}
|
||||
if p.VisibilityTimeout != 0 {
|
||||
out.Set("visibilitytimeout", strconv.Itoa(p.VisibilityTimeout))
|
||||
}
|
||||
if p.MessageTTL != 0 {
|
||||
out.Set("messagettl", strconv.Itoa(p.MessageTTL))
|
||||
}
|
||||
return out
|
||||
// SetQueuePermissionOptions includes options for a set queue permissions operation
|
||||
type SetQueuePermissionOptions struct {
|
||||
Timeout uint
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// GetMessagesParameters is the set of options can be specified for Get
|
||||
// Messsages operation. A zero struct does not use any preferences for the
|
||||
// request.
|
||||
type GetMessagesParameters struct {
|
||||
NumOfMessages int
|
||||
VisibilityTimeout int
|
||||
// Queue represents an Azure queue.
|
||||
type Queue struct {
|
||||
qsc *QueueServiceClient
|
||||
Name string
|
||||
Metadata map[string]string
|
||||
AproxMessageCount uint64
|
||||
}
|
||||
|
||||
func (p GetMessagesParameters) getParameters() url.Values {
|
||||
out := url.Values{}
|
||||
if p.NumOfMessages != 0 {
|
||||
out.Set("numofmessages", strconv.Itoa(p.NumOfMessages))
|
||||
}
|
||||
if p.VisibilityTimeout != 0 {
|
||||
out.Set("visibilitytimeout", strconv.Itoa(p.VisibilityTimeout))
|
||||
}
|
||||
return out
|
||||
func (q *Queue) buildPath() string {
|
||||
return fmt.Sprintf("/%s", q.Name)
|
||||
}
|
||||
|
||||
// PeekMessagesParameters is the set of options can be specified for Peek
|
||||
// Messsage operation. A zero struct does not use any preferences for the
|
||||
// request.
|
||||
type PeekMessagesParameters struct {
|
||||
NumOfMessages int
|
||||
func (q *Queue) buildPathMessages() string {
|
||||
return fmt.Sprintf("%s/messages", q.buildPath())
|
||||
}
|
||||
|
||||
func (p PeekMessagesParameters) getParameters() url.Values {
|
||||
out := url.Values{"peekonly": {"true"}} // Required for peek operation
|
||||
if p.NumOfMessages != 0 {
|
||||
out.Set("numofmessages", strconv.Itoa(p.NumOfMessages))
|
||||
}
|
||||
return out
|
||||
// QueueServiceOptions includes options for some queue service operations
|
||||
type QueueServiceOptions struct {
|
||||
Timeout uint
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// UpdateMessageParameters is the set of options can be specified for Update Messsage
|
||||
// operation. A zero struct does not use any preferences for the request.
|
||||
type UpdateMessageParameters struct {
|
||||
PopReceipt string
|
||||
VisibilityTimeout int
|
||||
}
|
||||
|
||||
func (p UpdateMessageParameters) getParameters() url.Values {
|
||||
out := url.Values{}
|
||||
if p.PopReceipt != "" {
|
||||
out.Set("popreceipt", p.PopReceipt)
|
||||
}
|
||||
if p.VisibilityTimeout != 0 {
|
||||
out.Set("visibilitytimeout", strconv.Itoa(p.VisibilityTimeout))
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
// GetMessagesResponse represents a response returned from Get Messages
|
||||
// operation.
|
||||
type GetMessagesResponse struct {
|
||||
XMLName xml.Name `xml:"QueueMessagesList"`
|
||||
QueueMessagesList []GetMessageResponse `xml:"QueueMessage"`
|
||||
}
|
||||
|
||||
// GetMessageResponse represents a QueueMessage object returned from Get
|
||||
// Messages operation response.
|
||||
type GetMessageResponse struct {
|
||||
MessageID string `xml:"MessageId"`
|
||||
InsertionTime string `xml:"InsertionTime"`
|
||||
ExpirationTime string `xml:"ExpirationTime"`
|
||||
PopReceipt string `xml:"PopReceipt"`
|
||||
TimeNextVisible string `xml:"TimeNextVisible"`
|
||||
DequeueCount int `xml:"DequeueCount"`
|
||||
MessageText string `xml:"MessageText"`
|
||||
}
|
||||
|
||||
// PeekMessagesResponse represents a response returned from Get Messages
|
||||
// operation.
|
||||
type PeekMessagesResponse struct {
|
||||
XMLName xml.Name `xml:"QueueMessagesList"`
|
||||
QueueMessagesList []PeekMessageResponse `xml:"QueueMessage"`
|
||||
}
|
||||
|
||||
// PeekMessageResponse represents a QueueMessage object returned from Peek
|
||||
// Messages operation response.
|
||||
type PeekMessageResponse struct {
|
||||
MessageID string `xml:"MessageId"`
|
||||
InsertionTime string `xml:"InsertionTime"`
|
||||
ExpirationTime string `xml:"ExpirationTime"`
|
||||
DequeueCount int `xml:"DequeueCount"`
|
||||
MessageText string `xml:"MessageText"`
|
||||
}
|
||||
|
||||
// QueueMetadataResponse represents user defined metadata and queue
|
||||
// properties on a specific queue.
|
||||
// Create operation creates a queue under the given account.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179384.aspx
|
||||
type QueueMetadataResponse struct {
|
||||
ApproximateMessageCount int
|
||||
UserDefinedMetadata map[string]string
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-Queue4
|
||||
func (q *Queue) Create(options *QueueServiceOptions) error {
|
||||
params := url.Values{}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
headers = q.qsc.client.addMetadataToHeaders(headers, q.Metadata)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), params)
|
||||
|
||||
resp, err := q.qsc.client.exec(http.MethodPut, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// Delete operation permanently deletes the specified queue.
|
||||
//
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-Queue3
|
||||
func (q *Queue) Delete(options *QueueServiceOptions) error {
|
||||
params := url.Values{}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), params)
|
||||
resp, err := q.qsc.client.exec(http.MethodDelete, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
// Exists returns true if a queue with given name exists.
|
||||
func (q *Queue) Exists() (bool, error) {
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), url.Values{"comp": {"metadata"}})
|
||||
resp, err := q.qsc.client.exec(http.MethodGet, uri, q.qsc.client.getStandardHeaders(), nil, q.qsc.auth)
|
||||
if resp != nil {
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusOK || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusOK, nil
|
||||
}
|
||||
err = getErrorFromResponse(resp)
|
||||
}
|
||||
return false, err
|
||||
}
|
||||
|
||||
// SetMetadata operation sets user-defined metadata on the specified queue.
|
||||
// Metadata is associated with the queue as name-value pairs.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179348.aspx
|
||||
func (c QueueServiceClient) SetMetadata(name string, metadata map[string]string) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueue(name), url.Values{"comp": []string{"metadata"}})
|
||||
metadata = c.client.protectUserAgent(metadata)
|
||||
headers := c.client.getStandardHeaders()
|
||||
for k, v := range metadata {
|
||||
headers[userDefinedMetadataHeaderPrefix+k] = v
|
||||
}
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-Queue-Metadata
|
||||
func (q *Queue) SetMetadata(options *QueueServiceOptions) error {
|
||||
params := url.Values{"comp": {"metadata"}}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
headers = q.qsc.client.addMetadataToHeaders(headers, q.Metadata)
|
||||
|
||||
resp, err := c.client.exec(http.MethodPut, uri, headers, nil, c.auth)
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), params)
|
||||
|
||||
resp, err := q.qsc.client.exec(http.MethodPut, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusNoContent})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
// GetMetadata operation retrieves user-defined metadata and queue
|
||||
// properties on the specified queue. Metadata is associated with
|
||||
// the queue as name-values pairs.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179384.aspx
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-Queue-Metadata
|
||||
//
|
||||
// Because of the way Golang's http client (and http.Header in particular)
// canonicalizes header names, the returned metadata names will always
// be all lower case.
|
||||
func (c QueueServiceClient) GetMetadata(name string) (QueueMetadataResponse, error) {
|
||||
qm := QueueMetadataResponse{}
|
||||
qm.UserDefinedMetadata = make(map[string]string)
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueue(name), url.Values{"comp": []string{"metadata"}})
|
||||
headers := c.client.getStandardHeaders()
|
||||
resp, err := c.client.exec(http.MethodGet, uri, headers, nil, c.auth)
|
||||
if err != nil {
|
||||
return qm, err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
func (q *Queue) GetMetadata(options *QueueServiceOptions) error {
|
||||
params := url.Values{"comp": {"metadata"}}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
|
||||
for k, v := range resp.headers {
|
||||
if len(v) != 1 {
|
||||
return qm, fmt.Errorf("Unexpected number of values (%d) in response header '%s'", len(v), k)
|
||||
}
|
||||
|
||||
value := v[0]
|
||||
|
||||
if k == approximateMessagesCountHeader {
|
||||
qm.ApproximateMessageCount, err = strconv.Atoi(value)
|
||||
if err != nil {
|
||||
return qm, fmt.Errorf("Unexpected value in response header '%s': '%s' ", k, value)
|
||||
}
|
||||
} else if strings.HasPrefix(k, userDefinedMetadataHeaderPrefix) {
|
||||
name := strings.TrimPrefix(k, userDefinedMetadataHeaderPrefix)
|
||||
qm.UserDefinedMetadata[strings.ToLower(name)] = value
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), url.Values{"comp": {"metadata"}})
|
||||
|
||||
resp, err := q.qsc.client.exec(http.MethodGet, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err := checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
aproxMessagesStr := resp.Header.Get(http.CanonicalHeaderKey(approximateMessagesCountHeader))
|
||||
if aproxMessagesStr != "" {
|
||||
aproxMessages, err := strconv.ParseUint(aproxMessagesStr, 10, 64)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
q.AproxMessageCount = aproxMessages
|
||||
}
|
||||
|
||||
return qm, checkRespCode(resp.statusCode, []int{http.StatusOK})
|
||||
q.Metadata = getMetadataFromHeaders(resp.Header)
|
||||
return nil
|
||||
}
|
||||
|
||||
// CreateQueue operation creates a queue under the given account.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179342.aspx
|
||||
func (c QueueServiceClient) CreateQueue(name string) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueue(name), url.Values{})
|
||||
headers := c.client.getStandardHeaders()
|
||||
resp, err := c.client.exec(http.MethodPut, uri, headers, nil, c.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
// GetMessageReference returns a message object with the specified text.
|
||||
func (q *Queue) GetMessageReference(text string) *Message {
|
||||
return &Message{
|
||||
Queue: q,
|
||||
Text: text,
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// DeleteQueue operation permanently deletes the specified queue.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179436.aspx
|
||||
func (c QueueServiceClient) DeleteQueue(name string) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueue(name), url.Values{})
|
||||
resp, err := c.client.exec(http.MethodDelete, uri, c.client.getStandardHeaders(), nil, c.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusNoContent})
|
||||
// GetMessagesOptions is the set of options that can be specified for the Get
// Messages operation. A zero struct does not use any preferences for the
// request.
|
||||
type GetMessagesOptions struct {
|
||||
Timeout uint
|
||||
NumOfMessages int
|
||||
VisibilityTimeout int
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// QueueExists returns true if a queue with given name exists.
|
||||
func (c QueueServiceClient) QueueExists(name string) (bool, error) {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueue(name), url.Values{"comp": {"metadata"}})
|
||||
resp, err := c.client.exec(http.MethodGet, uri, c.client.getStandardHeaders(), nil, c.auth)
|
||||
if resp != nil && (resp.statusCode == http.StatusOK || resp.statusCode == http.StatusNotFound) {
|
||||
return resp.statusCode == http.StatusOK, nil
|
||||
}
|
||||
|
||||
return false, err
|
||||
}
|
||||
|
||||
// PutMessage operation adds a new message to the back of the message queue.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179346.aspx
|
||||
func (c QueueServiceClient) PutMessage(queue string, message string, params PutMessageParameters) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueueMessages(queue), params.getParameters())
|
||||
req := putMessageRequest{MessageText: message}
|
||||
body, nn, err := xmlMarshal(req)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers := c.client.getStandardHeaders()
|
||||
headers["Content-Length"] = strconv.Itoa(nn)
|
||||
resp, err := c.client.exec(http.MethodPost, uri, headers, body, c.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusCreated})
|
||||
}
|
||||
|
||||
// ClearMessages operation deletes all messages from the specified queue.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179454.aspx
|
||||
func (c QueueServiceClient) ClearMessages(queue string) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueueMessages(queue), url.Values{})
|
||||
resp, err := c.client.exec(http.MethodDelete, uri, c.client.getStandardHeaders(), nil, c.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusNoContent})
|
||||
type messages struct {
|
||||
XMLName xml.Name `xml:"QueueMessagesList"`
|
||||
Messages []Message `xml:"QueueMessage"`
|
||||
}
|
||||
|
||||
// GetMessages operation retrieves one or more messages from the front of the
|
||||
// queue.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179474.aspx
|
||||
func (c QueueServiceClient) GetMessages(queue string, params GetMessagesParameters) (GetMessagesResponse, error) {
|
||||
var r GetMessagesResponse
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueueMessages(queue), params.getParameters())
|
||||
resp, err := c.client.exec(http.MethodGet, uri, c.client.getStandardHeaders(), nil, c.auth)
|
||||
if err != nil {
|
||||
return r, err
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Get-Messages
|
||||
func (q *Queue) GetMessages(options *GetMessagesOptions) ([]Message, error) {
|
||||
query := url.Values{}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
if options.NumOfMessages != 0 {
|
||||
query.Set("numofmessages", strconv.Itoa(options.NumOfMessages))
|
||||
}
|
||||
if options.VisibilityTimeout != 0 {
|
||||
query.Set("visibilitytimeout", strconv.Itoa(options.VisibilityTimeout))
|
||||
}
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
defer resp.body.Close()
|
||||
err = xmlUnmarshal(resp.body, &r)
|
||||
return r, err
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPathMessages(), query)
|
||||
|
||||
resp, err := q.qsc.client.exec(http.MethodGet, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return []Message{}, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
var out messages
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
if err != nil {
|
||||
return []Message{}, err
|
||||
}
|
||||
for i := range out.Messages {
|
||||
out.Messages[i].Queue = q
|
||||
}
|
||||
return out.Messages, err
|
||||
}
|
||||
|
||||
// PeekMessagesOptions is the set of options that can be specified for the Peek
// Messages operation. A zero struct does not use any preferences for the
// request.
|
||||
type PeekMessagesOptions struct {
|
||||
Timeout uint
|
||||
NumOfMessages int
|
||||
RequestID string `header:"x-ms-client-request-id"`
|
||||
}
|
||||
|
||||
// PeekMessages retrieves one or more messages from the front of the queue, but
|
||||
// does not alter the visibility of the message.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179472.aspx
|
||||
func (c QueueServiceClient) PeekMessages(queue string, params PeekMessagesParameters) (PeekMessagesResponse, error) {
|
||||
var r PeekMessagesResponse
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForQueueMessages(queue), params.getParameters())
|
||||
resp, err := c.client.exec(http.MethodGet, uri, c.client.getStandardHeaders(), nil, c.auth)
|
||||
if err != nil {
|
||||
return r, err
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Peek-Messages
|
||||
func (q *Queue) PeekMessages(options *PeekMessagesOptions) ([]Message, error) {
|
||||
query := url.Values{"peekonly": {"true"}} // Required for peek operation
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
if options.NumOfMessages != 0 {
|
||||
query.Set("numofmessages", strconv.Itoa(options.NumOfMessages))
|
||||
}
|
||||
query = addTimeout(query, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
defer resp.body.Close()
|
||||
err = xmlUnmarshal(resp.body, &r)
|
||||
return r, err
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPathMessages(), query)
|
||||
|
||||
resp, err := q.qsc.client.exec(http.MethodGet, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return []Message{}, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
var out messages
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
if err != nil {
|
||||
return []Message{}, err
|
||||
}
|
||||
for i := range out.Messages {
|
||||
out.Messages[i].Queue = q
|
||||
}
|
||||
return out.Messages, err
|
||||
}
|
||||
|
||||
// DeleteMessage operation deletes the specified message.
|
||||
// ClearMessages operation deletes all messages from the specified queue.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179347.aspx
|
||||
func (c QueueServiceClient) DeleteMessage(queue, messageID, popReceipt string) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForMessage(queue, messageID), url.Values{
|
||||
"popreceipt": {popReceipt}})
|
||||
resp, err := c.client.exec(http.MethodDelete, uri, c.client.getStandardHeaders(), nil, c.auth)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Clear-Messages
|
||||
func (q *Queue) ClearMessages(options *QueueServiceOptions) error {
|
||||
params := url.Values{}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPathMessages(), params)
|
||||
|
||||
resp, err := q.qsc.client.exec(http.MethodDelete, uri, headers, nil, q.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusNoContent})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
// UpdateMessage operation deletes the specified message.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/hh452234.aspx
|
||||
func (c QueueServiceClient) UpdateMessage(queue string, messageID string, message string, params UpdateMessageParameters) error {
|
||||
uri := c.client.getEndpoint(queueServiceName, pathForMessage(queue, messageID), params.getParameters())
|
||||
req := putMessageRequest{MessageText: message}
|
||||
body, nn, err := xmlMarshal(req)
|
||||
// SetPermissions sets up queue permissions
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/set-queue-acl
|
||||
func (q *Queue) SetPermissions(permissions QueuePermissions, options *SetQueuePermissionOptions) error {
|
||||
body, length, err := generateQueueACLpayload(permissions.AccessPolicies)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers := c.client.getStandardHeaders()
|
||||
headers["Content-Length"] = fmt.Sprintf("%d", nn)
|
||||
resp, err := c.client.exec(http.MethodPut, uri, headers, body, c.auth)
|
||||
|
||||
params := url.Values{
|
||||
"comp": {"acl"},
|
||||
}
|
||||
headers := q.qsc.client.getStandardHeaders()
|
||||
headers["Content-Length"] = strconv.Itoa(length)
|
||||
|
||||
if options != nil {
|
||||
params = addTimeout(params, options.Timeout)
|
||||
headers = mergeHeaders(headers, headersFromStruct(*options))
|
||||
}
|
||||
uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), params)
|
||||
resp, err := q.qsc.client.exec(http.MethodPut, uri, headers, body, q.qsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusNoContent})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
func generateQueueACLpayload(policies []QueueAccessPolicy) (io.Reader, int, error) {
	sil := SignedIdentifiers{
		SignedIdentifiers: []SignedIdentifier{},
	}
	for _, qapd := range policies {
		permission := qapd.generateQueuePermissions()
		signedIdentifier := convertAccessPolicyToXMLStructs(qapd.ID, qapd.StartTime, qapd.ExpiryTime, permission)
		sil.SignedIdentifiers = append(sil.SignedIdentifiers, signedIdentifier)
	}
	return xmlMarshal(sil)
}

func (qapd *QueueAccessPolicy) generateQueuePermissions() (permissions string) {
	// generate the permissions string (raup).
	// still want the end user API to have bool flags.
	permissions = ""

	if qapd.CanRead {
		permissions += "r"
	}

	if qapd.CanAdd {
		permissions += "a"
	}

	if qapd.CanUpdate {
		permissions += "u"
	}

	if qapd.CanProcess {
		permissions += "p"
	}

	return permissions
}

// GetQueuePermissionOptions includes options for a get queue permissions operation
type GetQueuePermissionOptions struct {
	Timeout   uint
	RequestID string `header:"x-ms-client-request-id"`
}

// GetPermissions gets the queue permissions as per https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-queue-acl
// If timeout is 0 then it will not be passed to Azure
func (q *Queue) GetPermissions(options *GetQueuePermissionOptions) (*QueuePermissions, error) {
	params := url.Values{
		"comp": {"acl"},
	}
	headers := q.qsc.client.getStandardHeaders()

	if options != nil {
		params = addTimeout(params, options.Timeout)
		headers = mergeHeaders(headers, headersFromStruct(*options))
	}
	uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), params)
	resp, err := q.qsc.client.exec(http.MethodGet, uri, headers, nil, q.qsc.auth)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var ap AccessPolicy
	err = xmlUnmarshal(resp.Body, &ap.SignedIdentifiersList)
	if err != nil {
		return nil, err
	}
	return buildQueueAccessPolicy(ap, &resp.Header), nil
}

func buildQueueAccessPolicy(ap AccessPolicy, headers *http.Header) *QueuePermissions {
	permissions := QueuePermissions{
		AccessPolicies: []QueueAccessPolicy{},
	}

	for _, policy := range ap.SignedIdentifiersList.SignedIdentifiers {
		qapd := QueueAccessPolicy{
			ID:         policy.ID,
			StartTime:  policy.AccessPolicy.StartTime,
			ExpiryTime: policy.AccessPolicy.ExpiryTime,
		}
		qapd.CanRead = updatePermissions(policy.AccessPolicy.Permission, "r")
		qapd.CanAdd = updatePermissions(policy.AccessPolicy.Permission, "a")
		qapd.CanUpdate = updatePermissions(policy.AccessPolicy.Permission, "u")
		qapd.CanProcess = updatePermissions(policy.AccessPolicy.Permission, "p")

		permissions.AccessPolicies = append(permissions.AccessPolicies, qapd)
	}
	return &permissions
}
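The ACL helpers above round-trip QueueAccessPolicy flags into the "raup" permission string. A hedged sketch of setting and reading a queue ACL with them; the queue reference is assumed to come from GetQueueReference, the time.Time fields are assumed, and the policy ID and expiry are placeholders:

```go
// setAndReadQueueACL grants 48 hours of read-only access under a named policy,
// then reads the ACL back. Assumed imports: "fmt", "time",
// "github.com/Azure/azure-sdk-for-go/storage".
func setAndReadQueueACL(q *storage.Queue) error {
	perms := storage.QueuePermissions{
		AccessPolicies: []storage.QueueAccessPolicy{{
			ID:         "read-only-policy", // placeholder identifier
			StartTime:  time.Now().UTC(),
			ExpiryTime: time.Now().UTC().Add(48 * time.Hour),
			CanRead:    true, // serialized as "r" by generateQueuePermissions
		}},
	}
	if err := q.SetPermissions(perms, nil); err != nil {
		return err
	}
	got, err := q.GetPermissions(nil)
	if err != nil {
		return err
	}
	fmt.Printf("%d access policies on queue\n", len(got.AccessPolicies))
	return nil
}
```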
146 vendor/github.com/Azure/azure-sdk-for-go/storage/queuesasuri.go generated vendored Normal file
@@ -0,0 +1,146 @@
package storage

// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import (
	"errors"
	"fmt"
	"net/url"
	"strings"
	"time"
)

// QueueSASOptions are options to construct a blob SAS
// URI.
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
type QueueSASOptions struct {
	QueueSASPermissions
	SASOptions
}

// QueueSASPermissions includes the available permissions for
// a queue SAS URI.
type QueueSASPermissions struct {
	Read    bool
	Add     bool
	Update  bool
	Process bool
}

func (q QueueSASPermissions) buildString() string {
	permissions := ""

	if q.Read {
		permissions += "r"
	}
	if q.Add {
		permissions += "a"
	}
	if q.Update {
		permissions += "u"
	}
	if q.Process {
		permissions += "p"
	}
	return permissions
}

// GetSASURI creates an URL to the specified queue which contains the Shared
// Access Signature with specified permissions and expiration time.
//
// See https://docs.microsoft.com/en-us/rest/api/storageservices/constructing-a-service-sas
func (q *Queue) GetSASURI(options QueueSASOptions) (string, error) {
	canonicalizedResource, err := q.qsc.client.buildCanonicalizedResource(q.buildPath(), q.qsc.auth, true)
	if err != nil {
		return "", err
	}

	// "The canonicalizedresouce portion of the string is a canonical path to the signed resource.
	// It must include the service name (blob, table, queue or file) for version 2015-02-21 or
	// later, the storage account name, and the resource name, and must be URL-decoded.
	// -- https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx
	// We need to replace + with %2b first to avoid being treated as a space (which is correct for query strings, but not the path component).
	canonicalizedResource = strings.Replace(canonicalizedResource, "+", "%2b", -1)
	canonicalizedResource, err = url.QueryUnescape(canonicalizedResource)
	if err != nil {
		return "", err
	}

	signedStart := ""
	if options.Start != (time.Time{}) {
		signedStart = options.Start.UTC().Format(time.RFC3339)
	}
	signedExpiry := options.Expiry.UTC().Format(time.RFC3339)

	protocols := "https,http"
	if options.UseHTTPS {
		protocols = "https"
	}

	permissions := options.QueueSASPermissions.buildString()
	stringToSign, err := queueSASStringToSign(q.qsc.client.apiVersion, canonicalizedResource, signedStart, signedExpiry, options.IP, permissions, protocols, options.Identifier)
	if err != nil {
		return "", err
	}

	sig := q.qsc.client.computeHmac256(stringToSign)
	sasParams := url.Values{
		"sv":  {q.qsc.client.apiVersion},
		"se":  {signedExpiry},
		"sp":  {permissions},
		"sig": {sig},
	}

	if q.qsc.client.apiVersion >= "2015-04-05" {
		sasParams.Add("spr", protocols)
		addQueryParameter(sasParams, "sip", options.IP)
	}

	uri := q.qsc.client.getEndpoint(queueServiceName, q.buildPath(), nil)
	sasURL, err := url.Parse(uri)
	if err != nil {
		return "", err
	}
	sasURL.RawQuery = sasParams.Encode()
	return sasURL.String(), nil
}

func queueSASStringToSign(signedVersion, canonicalizedResource, signedStart, signedExpiry, signedIP, signedPermissions, protocols, signedIdentifier string) (string, error) {

	if signedVersion >= "2015-02-21" {
		canonicalizedResource = "/queue" + canonicalizedResource
	}

	// https://msdn.microsoft.com/en-us/library/azure/dn140255.aspx#Anchor_12
	if signedVersion >= "2015-04-05" {
		return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s\n%s\n%s",
			signedPermissions,
			signedStart,
			signedExpiry,
			canonicalizedResource,
			signedIdentifier,
			signedIP,
			protocols,
			signedVersion), nil

	}

	// reference: http://msdn.microsoft.com/en-us/library/azure/dn140255.aspx
	if signedVersion >= "2013-08-15" {
		return fmt.Sprintf("%s\n%s\n%s\n%s\n%s\n%s", signedPermissions, signedStart, signedExpiry, canonicalizedResource, signedIdentifier, signedVersion), nil
	}

	return "", errors.New("storage: not implemented SAS for versions earlier than 2013-08-15")
}
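The new GetSASURI helper can be exercised roughly as follows. This is a sketch: the SASOptions field names are taken from their use inside GetSASURI above, the queue reference is assumed to come from GetQueueReference, and the validity window is an arbitrary choice.

```go
// queueSASExample builds a read/process SAS URL valid for one hour.
// Assumed imports: "time", "github.com/Azure/azure-sdk-for-go/storage".
func queueSASExample(q *storage.Queue) (string, error) {
	opts := storage.QueueSASOptions{
		QueueSASPermissions: storage.QueueSASPermissions{
			Read:    true,
			Process: true,
		},
		SASOptions: storage.SASOptions{
			Start:    time.Now().UTC(),
			Expiry:   time.Now().UTC().Add(time.Hour),
			UseHTTPS: true,
		},
	}
	// The returned URL embeds sv/se/sp/sig (and spr/sip for 2015-04-05+).
	return q.GetSASURI(opts)
}
```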
30 vendor/github.com/Azure/azure-sdk-for-go/storage/queueserviceclient.go generated vendored
@@ -1,5 +1,19 @@
package storage

// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// QueueServiceClient contains operations for Microsoft Azure Queue Storage
// Service.
type QueueServiceClient struct {
@@ -9,12 +23,20 @@ type QueueServiceClient struct {

// GetServiceProperties gets the properties of your storage account's queue service.
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-queue-service-properties
func (c *QueueServiceClient) GetServiceProperties() (*ServiceProperties, error) {
	return c.client.getServiceProperties(queueServiceName, c.auth)
func (q *QueueServiceClient) GetServiceProperties() (*ServiceProperties, error) {
	return q.client.getServiceProperties(queueServiceName, q.auth)
}

// SetServiceProperties sets the properties of your storage account's queue service.
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/set-queue-service-properties
func (c *QueueServiceClient) SetServiceProperties(props ServiceProperties) error {
	return c.client.setServiceProperties(props, queueServiceName, c.auth)
func (q *QueueServiceClient) SetServiceProperties(props ServiceProperties) error {
	return q.client.setServiceProperties(props, queueServiceName, q.auth)
}

// GetQueueReference returns a Container object for the specified queue name.
func (q *QueueServiceClient) GetQueueReference(name string) *Queue {
	return &Queue{
		qsc:  q,
		Name: name,
	}
}
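Taken together, the queue service client now exposes service-wide properties plus per-queue references. A sketch, assuming NewBasicClient from the same package and placeholder credentials:

```go
// serviceClientSketch shows the queue service entry points touched in this hunk.
// Assumed imports: "fmt", "github.com/Azure/azure-sdk-for-go/storage".
func serviceClientSketch() error {
	client, err := storage.NewBasicClient("myaccount", "mykey")
	if err != nil {
		return err
	}
	qsc := client.GetQueueService()

	// Service-level properties (logging, metrics, CORS) for the queue endpoint.
	props, err := qsc.GetServiceProperties()
	if err != nil {
		return err
	}
	fmt.Printf("service properties: %+v\n", props)

	// Per-queue operations hang off a Queue reference.
	q := qsc.GetQueueReference("myqueue")
	fmt.Println("queue reference:", q.Name)
	return nil
}
```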
94 vendor/github.com/Azure/azure-sdk-for-go/storage/share.go generated vendored
@@ -1,5 +1,19 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
|
@ -30,9 +44,15 @@ func (s *Share) buildPath() string {
|
|||
// Create this share under the associated account.
|
||||
// If a share with the same name already exists, the operation fails.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn167008.aspx
|
||||
func (s *Share) Create() error {
|
||||
headers, err := s.fsc.createResource(s.buildPath(), resourceShare, nil, mergeMDIntoExtraHeaders(s.Metadata, nil), []int{http.StatusCreated})
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-Share
|
||||
func (s *Share) Create(options *FileRequestOptions) error {
|
||||
extraheaders := map[string]string{}
|
||||
if s.Properties.Quota > 0 {
|
||||
extraheaders["x-ms-share-quota"] = strconv.Itoa(s.Properties.Quota)
|
||||
}
|
||||
|
||||
params := prepareOptions(options)
|
||||
headers, err := s.fsc.createResource(s.buildPath(), resourceShare, params, mergeMDIntoExtraHeaders(s.Metadata, extraheaders), []int{http.StatusCreated})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -45,17 +65,23 @@ func (s *Share) Create() error {
|
|||
// it does not exist. Returns true if the share is newly created or false if
|
||||
// the share already exists.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn167008.aspx
|
||||
func (s *Share) CreateIfNotExists() (bool, error) {
|
||||
resp, err := s.fsc.createResourceNoClose(s.buildPath(), resourceShare, nil, nil)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Create-Share
|
||||
func (s *Share) CreateIfNotExists(options *FileRequestOptions) (bool, error) {
|
||||
extraheaders := map[string]string{}
|
||||
if s.Properties.Quota > 0 {
|
||||
extraheaders["x-ms-share-quota"] = strconv.Itoa(s.Properties.Quota)
|
||||
}
|
||||
|
||||
params := prepareOptions(options)
|
||||
resp, err := s.fsc.createResourceNoClose(s.buildPath(), resourceShare, params, extraheaders)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusCreated || resp.statusCode == http.StatusConflict {
|
||||
if resp.statusCode == http.StatusCreated {
|
||||
s.updateEtagAndLastModified(resp.headers)
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusCreated || resp.StatusCode == http.StatusConflict {
|
||||
if resp.StatusCode == http.StatusCreated {
|
||||
s.updateEtagAndLastModified(resp.Header)
|
||||
return true, nil
|
||||
}
|
||||
return false, s.FetchAttributes()
|
||||
return false, s.FetchAttributes(nil)
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -66,20 +92,20 @@ func (s *Share) CreateIfNotExists() (bool, error) {
|
|||
// and directories contained within it are later deleted during garbage
|
||||
// collection. If the share does not exist the operation fails
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn689090.aspx
|
||||
func (s *Share) Delete() error {
|
||||
return s.fsc.deleteResource(s.buildPath(), resourceShare)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-Share
|
||||
func (s *Share) Delete(options *FileRequestOptions) error {
|
||||
return s.fsc.deleteResource(s.buildPath(), resourceShare, options)
|
||||
}
|
||||
|
||||
// DeleteIfExists operation marks this share for deletion if it exists.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dn689090.aspx
|
||||
func (s *Share) DeleteIfExists() (bool, error) {
|
||||
resp, err := s.fsc.deleteResourceNoClose(s.buildPath(), resourceShare)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Delete-Share
|
||||
func (s *Share) DeleteIfExists(options *FileRequestOptions) (bool, error) {
|
||||
resp, err := s.fsc.deleteResourceNoClose(s.buildPath(), resourceShare, options)
|
||||
if resp != nil {
|
||||
defer readAndCloseBody(resp.body)
|
||||
if resp.statusCode == http.StatusAccepted || resp.statusCode == http.StatusNotFound {
|
||||
return resp.statusCode == http.StatusAccepted, nil
|
||||
defer drainRespBody(resp)
|
||||
if resp.StatusCode == http.StatusAccepted || resp.StatusCode == http.StatusNotFound {
|
||||
return resp.StatusCode == http.StatusAccepted, nil
|
||||
}
|
||||
}
|
||||
return false, err
|
||||
|
@ -97,8 +123,10 @@ func (s *Share) Exists() (bool, error) {
|
|||
}
|
||||
|
||||
// FetchAttributes retrieves metadata and properties for this share.
|
||||
func (s *Share) FetchAttributes() error {
|
||||
headers, err := s.fsc.getResourceHeaders(s.buildPath(), compNone, resourceShare, http.MethodHead)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-share-properties
|
||||
func (s *Share) FetchAttributes(options *FileRequestOptions) error {
|
||||
params := prepareOptions(options)
|
||||
headers, err := s.fsc.getResourceHeaders(s.buildPath(), compNone, resourceShare, params, http.MethodHead)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -130,9 +158,9 @@ func (s *Share) ServiceClient() *FileServiceClient {
|
|||
// are case-insensitive so case munging should not matter to other
|
||||
// applications either.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/dd179414.aspx
|
||||
func (s *Share) SetMetadata() error {
|
||||
headers, err := s.fsc.setResourceHeaders(s.buildPath(), compMetadata, resourceShare, mergeMDIntoExtraHeaders(s.Metadata, nil))
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/set-share-metadata
|
||||
func (s *Share) SetMetadata(options *FileRequestOptions) error {
|
||||
headers, err := s.fsc.setResourceHeaders(s.buildPath(), compMetadata, resourceShare, mergeMDIntoExtraHeaders(s.Metadata, nil), options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
@ -148,15 +176,17 @@ func (s *Share) SetMetadata() error {
|
|||
// are case-insensitive so case munging should not matter to other
|
||||
// applications either.
|
||||
//
|
||||
// See https://msdn.microsoft.com/en-us/library/azure/mt427368.aspx
|
||||
func (s *Share) SetProperties() error {
|
||||
if s.Properties.Quota < 1 || s.Properties.Quota > 5120 {
|
||||
return fmt.Errorf("invalid value %v for quota, valid values are [1, 5120]", s.Properties.Quota)
|
||||
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-Share-Properties
|
||||
func (s *Share) SetProperties(options *FileRequestOptions) error {
|
||||
extraheaders := map[string]string{}
|
||||
if s.Properties.Quota > 0 {
|
||||
if s.Properties.Quota > 5120 {
|
||||
return fmt.Errorf("invalid value %v for quota, valid values are [1, 5120]", s.Properties.Quota)
|
||||
}
|
||||
extraheaders["x-ms-share-quota"] = strconv.Itoa(s.Properties.Quota)
|
||||
}
|
||||
|
||||
headers, err := s.fsc.setResourceHeaders(s.buildPath(), compProperties, resourceShare, map[string]string{
|
||||
"x-ms-share-quota": strconv.Itoa(s.Properties.Quota),
|
||||
})
|
||||
headers, err := s.fsc.setResourceHeaders(s.buildPath(), compProperties, resourceShare, extraheaders, options)
|
||||
if err != nil {
|
||||
return err
|
||||
}
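The share.go changes above thread FileRequestOptions and an optional x-ms-share-quota header through Create, CreateIfNotExists and SetProperties. A sketch of how that surfaces to callers; GetFileService and GetShareReference are assumed from elsewhere in the package, and the share name is a placeholder:

```go
// shareSketch creates a share with a quota if it does not exist, then raises the quota.
// Assumed import: "github.com/Azure/azure-sdk-for-go/storage".
func shareSketch(client storage.Client) error {
	fsc := client.GetFileService()
	share := fsc.GetShareReference("myshare")

	share.Properties.Quota = 10 // gigabytes; valid range [1, 5120] per the check above
	if _, err := share.CreateIfNotExists(nil); err != nil {
		return err
	}

	share.Properties.Quota = 20
	return share.SetProperties(nil)
}
```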
|
||||
|
|
14 vendor/github.com/Azure/azure-sdk-for-go/storage/storagepolicy.go generated vendored
@@ -1,5 +1,19 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"strings"
|
||||
"time"
|
||||
|
|
29 vendor/github.com/Azure/azure-sdk-for-go/storage/storageservice.go generated vendored
@@ -1,9 +1,23 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
// ServiceProperties represents the storage account service properties
|
||||
|
@ -63,14 +77,14 @@ func (c Client) getServiceProperties(service string, auth authentication) (*Serv
|
|||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
|
||||
if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
|
||||
if err := checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var out ServiceProperties
|
||||
err = xmlUnmarshal(resp.body, &out)
|
||||
err = xmlUnmarshal(resp.Body, &out)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
@ -106,13 +120,12 @@ func (c Client) setServiceProperties(props ServiceProperties, service string, au
|
|||
}
|
||||
|
||||
headers := c.getStandardHeaders()
|
||||
headers["Content-Length"] = fmt.Sprintf("%v", length)
|
||||
headers["Content-Length"] = strconv.Itoa(length)
|
||||
|
||||
resp, err := c.exec(http.MethodPut, uri, headers, body, auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
|
||||
return checkRespCode(resp.statusCode, []int{http.StatusAccepted})
|
||||
defer drainRespBody(resp)
|
||||
return checkRespCode(resp, []int{http.StatusAccepted})
|
||||
}
|
||||
|
|
385 vendor/github.com/Azure/azure-sdk-for-go/storage/table.go generated vendored
@@ -1,5 +1,19 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
|
@ -9,20 +23,19 @@ import (
|
|||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// AzureTable is the typedef of the Azure Table name
|
||||
type AzureTable string
|
||||
|
||||
const (
|
||||
tablesURIPath = "/Tables"
|
||||
tablesURIPath = "/Tables"
|
||||
nextTableQueryParameter = "NextTableName"
|
||||
headerNextPartitionKey = "x-ms-continuation-NextPartitionKey"
|
||||
headerNextRowKey = "x-ms-continuation-NextRowKey"
|
||||
nextPartitionKeyQueryParameter = "NextPartitionKey"
|
||||
nextRowKeyQueryParameter = "NextRowKey"
|
||||
)
|
||||
|
||||
type createTableRequest struct {
|
||||
TableName string `json:"TableName"`
|
||||
}
|
||||
|
||||
// TableAccessPolicy are used for SETTING table policies
|
||||
type TableAccessPolicy struct {
|
||||
ID string
|
||||
|
@ -34,140 +47,231 @@ type TableAccessPolicy struct {
|
|||
CanDelete bool
|
||||
}
|
||||
|
||||
func pathForTable(table AzureTable) string { return fmt.Sprintf("%s", table) }
|
||||
|
||||
func (c *TableServiceClient) getStandardHeaders() map[string]string {
|
||||
return map[string]string{
|
||||
"x-ms-version": "2015-02-21",
|
||||
"x-ms-date": currentTimeRfc1123Formatted(),
|
||||
"Accept": "application/json;odata=nometadata",
|
||||
"Accept-Charset": "UTF-8",
|
||||
"Content-Type": "application/json",
|
||||
userAgentHeader: c.client.userAgent,
|
||||
}
|
||||
// Table represents an Azure table.
|
||||
type Table struct {
|
||||
tsc *TableServiceClient
|
||||
Name string `json:"TableName"`
|
||||
OdataEditLink string `json:"odata.editLink"`
|
||||
OdataID string `json:"odata.id"`
|
||||
OdataMetadata string `json:"odata.metadata"`
|
||||
OdataType string `json:"odata.type"`
|
||||
}
|
||||
|
||||
// QueryTables returns the tables created in the
|
||||
// *TableServiceClient storage account.
|
||||
func (c *TableServiceClient) QueryTables() ([]AzureTable, error) {
|
||||
uri := c.client.getEndpoint(tableServiceName, tablesURIPath, url.Values{})
|
||||
// EntityQueryResult contains the response from
|
||||
// ExecuteQuery and ExecuteQueryNextResults functions.
|
||||
type EntityQueryResult struct {
|
||||
OdataMetadata string `json:"odata.metadata"`
|
||||
Entities []*Entity `json:"value"`
|
||||
QueryNextLink
|
||||
table *Table
|
||||
}
|
||||
|
||||
headers := c.getStandardHeaders()
|
||||
headers["Content-Length"] = "0"
|
||||
type continuationToken struct {
|
||||
NextPartitionKey string
|
||||
NextRowKey string
|
||||
}
|
||||
|
||||
resp, err := c.client.execInternalJSON(http.MethodGet, uri, headers, nil, c.auth)
|
||||
func (t *Table) buildPath() string {
|
||||
return fmt.Sprintf("/%s", t.Name)
|
||||
}
|
||||
|
||||
func (t *Table) buildSpecificPath() string {
|
||||
return fmt.Sprintf("%s('%s')", tablesURIPath, t.Name)
|
||||
}
|
||||
|
||||
// Get gets the referenced table.
|
||||
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/querying-tables-and-entities
|
||||
func (t *Table) Get(timeout uint, ml MetadataLevel) error {
|
||||
if ml == EmptyPayload {
|
||||
return errEmptyPayload
|
||||
}
|
||||
|
||||
query := url.Values{
|
||||
"timeout": {strconv.FormatUint(uint64(timeout), 10)},
|
||||
}
|
||||
headers := t.tsc.client.getStandardHeaders()
|
||||
headers[headerAccept] = string(ml)
|
||||
|
||||
uri := t.tsc.client.getEndpoint(tableServiceName, t.buildSpecificPath(), query)
|
||||
resp, err := t.tsc.client.exec(http.MethodGet, uri, headers, nil, t.tsc.auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
|
||||
if err := checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
|
||||
ioutil.ReadAll(resp.body)
|
||||
return nil, err
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
buf := new(bytes.Buffer)
|
||||
if _, err := buf.ReadFrom(resp.body); err != nil {
|
||||
return nil, err
|
||||
respBody, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
var respArray queryTablesResponse
|
||||
if err := json.Unmarshal(buf.Bytes(), &respArray); err != nil {
|
||||
return nil, err
|
||||
err = json.Unmarshal(respBody, t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
s := make([]AzureTable, len(respArray.TableName))
|
||||
for i, elem := range respArray.TableName {
|
||||
s[i] = AzureTable(elem.TableName)
|
||||
}
|
||||
|
||||
return s, nil
|
||||
return nil
|
||||
}
|
||||
|
||||
// CreateTable creates the table given the specific
|
||||
// name. This function fails if the name is not compliant
|
||||
// Create creates the referenced table.
|
||||
// This function fails if the name is not compliant
|
||||
// with the specification or the tables already exists.
|
||||
func (c *TableServiceClient) CreateTable(table AzureTable) error {
|
||||
uri := c.client.getEndpoint(tableServiceName, tablesURIPath, url.Values{})
|
||||
// ml determines the level of detail of metadata in the operation response,
|
||||
// or no data at all.
|
||||
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/create-table
|
||||
func (t *Table) Create(timeout uint, ml MetadataLevel, options *TableOptions) error {
|
||||
uri := t.tsc.client.getEndpoint(tableServiceName, tablesURIPath, url.Values{
|
||||
"timeout": {strconv.FormatUint(uint64(timeout), 10)},
|
||||
})
|
||||
|
||||
headers := c.getStandardHeaders()
|
||||
|
||||
req := createTableRequest{TableName: string(table)}
|
||||
type createTableRequest struct {
|
||||
TableName string `json:"TableName"`
|
||||
}
|
||||
req := createTableRequest{TableName: t.Name}
|
||||
buf := new(bytes.Buffer)
|
||||
|
||||
if err := json.NewEncoder(buf).Encode(req); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
headers["Content-Length"] = fmt.Sprintf("%d", buf.Len())
|
||||
|
||||
resp, err := c.client.execInternalJSON(http.MethodPost, uri, headers, buf, c.auth)
|
||||
headers := t.tsc.client.getStandardHeaders()
|
||||
headers = addReturnContentHeaders(headers, ml)
|
||||
headers = addBodyRelatedHeaders(headers, buf.Len())
|
||||
headers = options.addToHeaders(headers)
|
||||
|
||||
resp, err := t.tsc.client.exec(http.MethodPost, uri, headers, buf, t.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
defer resp.Body.Close()
|
||||
|
||||
if err := checkRespCode(resp.statusCode, []int{http.StatusCreated}); err != nil {
|
||||
return err
|
||||
if ml == EmptyPayload {
|
||||
if err := checkRespCode(resp, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if err := checkRespCode(resp, []int{http.StatusCreated}); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if ml != EmptyPayload {
|
||||
data, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = json.Unmarshal(data, t)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeleteTable deletes the table given the specific
|
||||
// name. This function fails if the table is not present.
|
||||
// Be advised: DeleteTable deletes all the entries
|
||||
// that may be present.
|
||||
func (c *TableServiceClient) DeleteTable(table AzureTable) error {
|
||||
uri := c.client.getEndpoint(tableServiceName, tablesURIPath, url.Values{})
|
||||
uri += fmt.Sprintf("('%s')", string(table))
|
||||
// Delete deletes the referenced table.
|
||||
// This function fails if the table is not present.
|
||||
// Be advised: Delete deletes all the entries that may be present.
|
||||
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/delete-table
|
||||
func (t *Table) Delete(timeout uint, options *TableOptions) error {
|
||||
uri := t.tsc.client.getEndpoint(tableServiceName, t.buildSpecificPath(), url.Values{
|
||||
"timeout": {strconv.Itoa(int(timeout))},
|
||||
})
|
||||
|
||||
headers := c.getStandardHeaders()
|
||||
|
||||
headers["Content-Length"] = "0"
|
||||
|
||||
resp, err := c.client.execInternalJSON(http.MethodDelete, uri, headers, nil, c.auth)
|
||||
headers := t.tsc.client.getStandardHeaders()
|
||||
headers = addReturnContentHeaders(headers, EmptyPayload)
|
||||
headers = options.addToHeaders(headers)
|
||||
|
||||
resp, err := t.tsc.client.exec(http.MethodDelete, uri, headers, nil, t.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
|
||||
}
|
||||
return nil
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
// SetTablePermissions sets up table ACL permissions as per REST details https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/Set-Table-ACL
|
||||
func (c *TableServiceClient) SetTablePermissions(table AzureTable, policies []TableAccessPolicy, timeout uint) (err error) {
|
||||
params := url.Values{"comp": {"acl"}}
|
||||
// QueryOptions includes options for a query entities operation.
|
||||
// Top, filter and select are OData query options.
|
||||
type QueryOptions struct {
|
||||
Top uint
|
||||
Filter string
|
||||
Select []string
|
||||
RequestID string
|
||||
}
|
||||
|
||||
if timeout > 0 {
|
||||
params.Add("timeout", fmt.Sprint(timeout))
|
||||
func (options *QueryOptions) getParameters() (url.Values, map[string]string) {
|
||||
query := url.Values{}
|
||||
headers := map[string]string{}
|
||||
if options != nil {
|
||||
if options.Top > 0 {
|
||||
query.Add(OdataTop, strconv.FormatUint(uint64(options.Top), 10))
|
||||
}
|
||||
if options.Filter != "" {
|
||||
query.Add(OdataFilter, options.Filter)
|
||||
}
|
||||
if len(options.Select) > 0 {
|
||||
query.Add(OdataSelect, strings.Join(options.Select, ","))
|
||||
}
|
||||
headers = addToHeaders(headers, "x-ms-client-request-id", options.RequestID)
|
||||
}
|
||||
return query, headers
|
||||
}
|
||||
|
||||
// QueryEntities returns the entities in the table.
|
||||
// You can use query options defined by the OData Protocol specification.
|
||||
//
|
||||
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/query-entities
|
||||
func (t *Table) QueryEntities(timeout uint, ml MetadataLevel, options *QueryOptions) (*EntityQueryResult, error) {
|
||||
if ml == EmptyPayload {
|
||||
return nil, errEmptyPayload
|
||||
}
|
||||
query, headers := options.getParameters()
|
||||
query = addTimeout(query, timeout)
|
||||
uri := t.tsc.client.getEndpoint(tableServiceName, t.buildPath(), query)
|
||||
return t.queryEntities(uri, headers, ml)
|
||||
}
|
||||
|
||||
// NextResults returns the next page of results
|
||||
// from a QueryEntities or NextResults operation.
|
||||
//
|
||||
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/query-entities
|
||||
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/query-timeout-and-pagination
|
||||
func (eqr *EntityQueryResult) NextResults(options *TableOptions) (*EntityQueryResult, error) {
|
||||
if eqr == nil {
|
||||
return nil, errNilPreviousResult
|
||||
}
|
||||
if eqr.NextLink == nil {
|
||||
return nil, errNilNextLink
|
||||
}
|
||||
headers := options.addToHeaders(map[string]string{})
|
||||
return eqr.table.queryEntities(*eqr.NextLink, headers, eqr.ml)
|
||||
}
|
||||
|
||||
// SetPermissions sets up table ACL permissions
|
||||
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/Set-Table-ACL
|
||||
func (t *Table) SetPermissions(tap []TableAccessPolicy, timeout uint, options *TableOptions) error {
|
||||
params := url.Values{"comp": {"acl"},
|
||||
"timeout": {strconv.Itoa(int(timeout))},
|
||||
}
|
||||
|
||||
uri := c.client.getEndpoint(tableServiceName, string(table), params)
|
||||
headers := c.client.getStandardHeaders()
|
||||
uri := t.tsc.client.getEndpoint(tableServiceName, t.Name, params)
|
||||
headers := t.tsc.client.getStandardHeaders()
|
||||
headers = options.addToHeaders(headers)
|
||||
|
||||
body, length, err := generateTableACLPayload(policies)
|
||||
body, length, err := generateTableACLPayload(tap)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
headers["Content-Length"] = fmt.Sprintf("%v", length)
|
||||
headers["Content-Length"] = strconv.Itoa(length)
|
||||
|
||||
resp, err := c.client.execInternalJSON(http.MethodPut, uri, headers, body, c.auth)
|
||||
resp, err := t.tsc.client.exec(http.MethodPut, uri, headers, body, t.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer readAndCloseBody(resp.body)
|
||||
defer drainRespBody(resp)
|
||||
|
||||
if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
return checkRespCode(resp, []int{http.StatusNoContent})
|
||||
}
|
||||
|
||||
func generateTableACLPayload(policies []TableAccessPolicy) (io.Reader, int, error) {
|
||||
|
@ -182,38 +286,99 @@ func generateTableACLPayload(policies []TableAccessPolicy) (io.Reader, int, erro
|
|||
return xmlMarshal(sil)
|
||||
}
|
||||
|
||||
// GetTablePermissions gets the table ACL permissions, as per REST details https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-table-acl
|
||||
func (c *TableServiceClient) GetTablePermissions(table AzureTable, timeout int) (permissionResponse []TableAccessPolicy, err error) {
|
||||
params := url.Values{"comp": {"acl"}}
|
||||
|
||||
if timeout > 0 {
|
||||
params.Add("timeout", strconv.Itoa(timeout))
|
||||
// GetPermissions gets the table ACL permissions
|
||||
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/get-table-acl
|
||||
func (t *Table) GetPermissions(timeout int, options *TableOptions) ([]TableAccessPolicy, error) {
|
||||
params := url.Values{"comp": {"acl"},
|
||||
"timeout": {strconv.Itoa(int(timeout))},
|
||||
}
|
||||
|
||||
uri := c.client.getEndpoint(tableServiceName, string(table), params)
|
||||
headers := c.client.getStandardHeaders()
|
||||
resp, err := c.client.execInternalJSON(http.MethodGet, uri, headers, nil, c.auth)
|
||||
uri := t.tsc.client.getEndpoint(tableServiceName, t.Name, params)
|
||||
headers := t.tsc.client.getStandardHeaders()
|
||||
headers = options.addToHeaders(headers)
|
||||
|
||||
resp, err := t.tsc.client.exec(http.MethodGet, uri, headers, nil, t.tsc.auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.body.Close()
|
||||
defer resp.Body.Close()
|
||||
|
||||
if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
|
||||
ioutil.ReadAll(resp.body)
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var ap AccessPolicy
|
||||
err = xmlUnmarshal(resp.body, &ap.SignedIdentifiersList)
|
||||
err = xmlUnmarshal(resp.Body, &ap.SignedIdentifiersList)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
out := updateTableAccessPolicy(ap)
|
||||
return out, nil
|
||||
return updateTableAccessPolicy(ap), nil
|
||||
}
|
||||
|
||||
func (t *Table) queryEntities(uri string, headers map[string]string, ml MetadataLevel) (*EntityQueryResult, error) {
|
||||
headers = mergeHeaders(headers, t.tsc.client.getStandardHeaders())
|
||||
if ml != EmptyPayload {
|
||||
headers[headerAccept] = string(ml)
|
||||
}
|
||||
|
||||
resp, err := t.tsc.client.exec(http.MethodGet, uri, headers, nil, t.tsc.auth)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if err = checkRespCode(resp, []int{http.StatusOK}); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
data, err := ioutil.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var entities EntityQueryResult
|
||||
err = json.Unmarshal(data, &entities)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for i := range entities.Entities {
|
||||
entities.Entities[i].Table = t
|
||||
}
|
||||
entities.table = t
|
||||
|
||||
contToken := extractContinuationTokenFromHeaders(resp.Header)
|
||||
if contToken == nil {
|
||||
entities.NextLink = nil
|
||||
} else {
|
||||
originalURI, err := url.Parse(uri)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
v := originalURI.Query()
|
||||
v.Set(nextPartitionKeyQueryParameter, contToken.NextPartitionKey)
|
||||
v.Set(nextRowKeyQueryParameter, contToken.NextRowKey)
|
||||
newURI := t.tsc.client.getEndpoint(tableServiceName, t.buildPath(), v)
|
||||
entities.NextLink = &newURI
|
||||
entities.ml = ml
|
||||
}
|
||||
|
||||
return &entities, nil
|
||||
}
|
||||
|
||||
func extractContinuationTokenFromHeaders(h http.Header) *continuationToken {
|
||||
ct := continuationToken{
|
||||
NextPartitionKey: h.Get(headerNextPartitionKey),
|
||||
NextRowKey: h.Get(headerNextRowKey),
|
||||
}
|
||||
|
||||
if ct.NextPartitionKey != "" && ct.NextRowKey != "" {
|
||||
return &ct
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func updateTableAccessPolicy(ap AccessPolicy) []TableAccessPolicy {
|
||||
out := []TableAccessPolicy{}
|
||||
taps := []TableAccessPolicy{}
|
||||
for _, policy := range ap.SignedIdentifiersList.SignedIdentifiers {
|
||||
tap := TableAccessPolicy{
|
||||
ID: policy.ID,
|
||||
|
@ -225,9 +390,9 @@ func updateTableAccessPolicy(ap AccessPolicy) []TableAccessPolicy {
|
|||
tap.CanUpdate = updatePermissions(policy.AccessPolicy.Permission, "u")
|
||||
tap.CanDelete = updatePermissions(policy.AccessPolicy.Permission, "d")
|
||||
|
||||
out = append(out, tap)
|
||||
taps = append(taps, tap)
|
||||
}
|
||||
return out
|
||||
return taps
|
||||
}
|
||||
|
||||
func generateTablePermissions(tap *TableAccessPolicy) (permissions string) {
|
||||
|
|
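The rewritten table.go replaces the AzureTable string handle with a Table type whose operations take an explicit timeout, MetadataLevel and options, and whose query results page via NextLink. A sketch, with GetTableService, GetTableReference, the MinimalMetadata level and the Entity fields assumed from elsewhere in the package; timeouts and the filter are arbitrary:

```go
// tableQuerySketch creates a table (fails if it already exists, per Create above)
// and pages through entities in partition "p1".
// Assumed imports: "fmt", "github.com/Azure/azure-sdk-for-go/storage".
func tableQuerySketch(client storage.Client) error {
	tsc := client.GetTableService()
	table := tsc.GetTableReference("mytable")

	if err := table.Create(30, storage.EmptyPayload, nil); err != nil {
		return err
	}

	result, err := table.QueryEntities(30, storage.MinimalMetadata, &storage.QueryOptions{
		Top:    100,
		Filter: "PartitionKey eq 'p1'",
	})
	if err != nil {
		return err
	}
	for {
		for _, e := range result.Entities {
			fmt.Printf("%+v\n", e)
		}
		if result.NextLink == nil {
			break // no continuation token was returned
		}
		if result, err = result.NextResults(nil); err != nil {
			return err
		}
	}
	return nil
}
```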
328 vendor/github.com/Azure/azure-sdk-for-go/storage/table_batch.go generated vendored Normal file
@@ -0,0 +1,328 @@
package storage
|
||||
|
||||
// Copyright 2017 Microsoft Corporation
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"mime/multipart"
|
||||
"net/http"
|
||||
"net/textproto"
|
||||
"sort"
|
||||
"strings"
|
||||
|
||||
"github.com/marstr/guid"
|
||||
)
|
||||
|
||||
// Operation type. Insert, Delete, Replace etc.
|
||||
type Operation int
|
||||
|
||||
// consts for batch operations.
|
||||
const (
|
||||
InsertOp = Operation(1)
|
||||
DeleteOp = Operation(2)
|
||||
ReplaceOp = Operation(3)
|
||||
MergeOp = Operation(4)
|
||||
InsertOrReplaceOp = Operation(5)
|
||||
InsertOrMergeOp = Operation(6)
|
||||
)
|
||||
|
||||
// BatchEntity used for tracking Entities to operate on and
|
||||
// whether operations (replace/merge etc) should be forced.
|
||||
// Wrapper for regular Entity with additional data specific for the entity.
|
||||
type BatchEntity struct {
|
||||
*Entity
|
||||
Force bool
|
||||
Op Operation
|
||||
}
|
||||
|
||||
// TableBatch stores all the entities that will be operated on during a batch process.
|
||||
// Entities can be inserted, replaced or deleted.
|
||||
type TableBatch struct {
|
||||
BatchEntitySlice []BatchEntity
|
||||
|
||||
// reference to table we're operating on.
|
||||
Table *Table
|
||||
}
|
||||
|
||||
// defaultChangesetHeaders for changeSets
|
||||
var defaultChangesetHeaders = map[string]string{
|
||||
"Accept": "application/json;odata=minimalmetadata",
|
||||
"Content-Type": "application/json",
|
||||
"Prefer": "return-no-content",
|
||||
}
|
||||
|
||||
// NewBatch return new TableBatch for populating.
|
||||
func (t *Table) NewBatch() *TableBatch {
|
||||
return &TableBatch{
|
||||
Table: t,
|
||||
}
|
||||
}
|
||||
|
||||
// InsertEntity adds an entity in preparation for a batch insert.
|
||||
func (t *TableBatch) InsertEntity(entity *Entity) {
|
||||
be := BatchEntity{Entity: entity, Force: false, Op: InsertOp}
|
||||
t.BatchEntitySlice = append(t.BatchEntitySlice, be)
|
||||
}
|
||||
|
||||
// InsertOrReplaceEntity adds an entity in preparation for a batch insert or replace.
|
||||
func (t *TableBatch) InsertOrReplaceEntity(entity *Entity, force bool) {
|
||||
be := BatchEntity{Entity: entity, Force: false, Op: InsertOrReplaceOp}
|
||||
t.BatchEntitySlice = append(t.BatchEntitySlice, be)
|
||||
}
|
||||
|
||||
// InsertOrReplaceEntityByForce adds an entity in preparation for a batch insert or replace. Forces regardless of ETag
|
||||
func (t *TableBatch) InsertOrReplaceEntityByForce(entity *Entity) {
|
||||
t.InsertOrReplaceEntity(entity, true)
|
||||
}
|
||||
|
||||
// InsertOrMergeEntity adds an entity in preparation for a batch insert or merge.
|
||||
func (t *TableBatch) InsertOrMergeEntity(entity *Entity, force bool) {
|
||||
be := BatchEntity{Entity: entity, Force: false, Op: InsertOrMergeOp}
|
||||
t.BatchEntitySlice = append(t.BatchEntitySlice, be)
|
||||
}
|
||||
|
||||
// InsertOrMergeEntityByForce adds an entity in preparation for a batch insert or merge. Forces regardless of ETag
|
||||
func (t *TableBatch) InsertOrMergeEntityByForce(entity *Entity) {
|
||||
t.InsertOrMergeEntity(entity, true)
|
||||
}
|
||||
|
||||
// ReplaceEntity adds an entity in preparation for a batch replace.
|
||||
func (t *TableBatch) ReplaceEntity(entity *Entity) {
|
||||
be := BatchEntity{Entity: entity, Force: false, Op: ReplaceOp}
|
||||
t.BatchEntitySlice = append(t.BatchEntitySlice, be)
|
||||
}
|
||||
|
||||
// DeleteEntity adds an entity in preparation for a batch delete
|
||||
func (t *TableBatch) DeleteEntity(entity *Entity, force bool) {
|
||||
be := BatchEntity{Entity: entity, Force: false, Op: DeleteOp}
|
||||
t.BatchEntitySlice = append(t.BatchEntitySlice, be)
|
||||
}
|
||||
|
||||
// DeleteEntityByForce adds an entity in preparation for a batch delete. Forces regardless of ETag
|
||||
func (t *TableBatch) DeleteEntityByForce(entity *Entity, force bool) {
|
||||
t.DeleteEntity(entity, true)
|
||||
}
|
||||
|
||||
// MergeEntity adds an entity in preparation for a batch merge
|
||||
func (t *TableBatch) MergeEntity(entity *Entity) {
|
||||
be := BatchEntity{Entity: entity, Force: false, Op: MergeOp}
|
||||
t.BatchEntitySlice = append(t.BatchEntitySlice, be)
|
||||
}
|
||||
|
||||
// ExecuteBatch executes many table operations in one request to Azure.
|
||||
// The operations can be combinations of Insert, Delete, Replace and Merge
|
||||
// Creates the inner changeset body (various operations, Insert, Delete etc) then creates the outer request packet that encompasses
|
||||
// the changesets.
|
||||
// As per document https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/performing-entity-group-transactions
|
||||
func (t *TableBatch) ExecuteBatch() error {
|
||||
|
||||
// Using `github.com/marstr/guid` is in response to issue #947 (https://github.com/Azure/azure-sdk-for-go/issues/947).
|
||||
id, err := guid.NewGUIDs(guid.CreationStrategyVersion1)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
changesetBoundary := fmt.Sprintf("changeset_%s", id.String())
|
||||
uri := t.Table.tsc.client.getEndpoint(tableServiceName, "$batch", nil)
|
||||
changesetBody, err := t.generateChangesetBody(changesetBoundary)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
id, err = guid.NewGUIDs(guid.CreationStrategyVersion1)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
boundary := fmt.Sprintf("batch_%s", id.String())
|
||||
body, err := generateBody(changesetBody, changesetBoundary, boundary)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
headers := t.Table.tsc.client.getStandardHeaders()
|
||||
headers[headerContentType] = fmt.Sprintf("multipart/mixed; boundary=%s", boundary)
|
||||
|
||||
resp, err := t.Table.tsc.client.execBatchOperationJSON(http.MethodPost, uri, headers, bytes.NewReader(body.Bytes()), t.Table.tsc.auth)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer drainRespBody(resp.resp)
|
||||
|
||||
if err = checkRespCode(resp.resp, []int{http.StatusAccepted}); err != nil {
|
||||
|
||||
// check which batch failed.
|
||||
operationFailedMessage := t.getFailedOperation(resp.odata.Err.Message.Value)
|
||||
requestID, date, version := getDebugHeaders(resp.resp.Header)
|
||||
return AzureStorageServiceError{
|
||||
StatusCode: resp.resp.StatusCode,
|
||||
Code: resp.odata.Err.Code,
|
||||
RequestID: requestID,
|
||||
Date: date,
|
||||
APIVersion: version,
|
||||
Message: operationFailedMessage,
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// getFailedOperation parses the original Azure error string and determines which operation failed
|
||||
// and generates appropriate message.
|
||||
func (t *TableBatch) getFailedOperation(errorMessage string) string {
|
||||
// errorMessage consists of "number:string" we just need the number.
|
||||
sp := strings.Split(errorMessage, ":")
|
||||
if len(sp) > 1 {
|
||||
msg := fmt.Sprintf("Element %s in the batch returned an unexpected response code.\n%s", sp[0], errorMessage)
|
||||
return msg
|
||||
}
|
||||
|
||||
// cant parse the message, just return the original message to client
|
||||
return errorMessage
|
||||
}
// generateBody generates the complete body for the batch request.
func generateBody(changeSetBody *bytes.Buffer, changesetBoundary string, boundary string) (*bytes.Buffer, error) {
	body := new(bytes.Buffer)
	writer := multipart.NewWriter(body)
	writer.SetBoundary(boundary)
	h := make(textproto.MIMEHeader)
	h.Set(headerContentType, fmt.Sprintf("multipart/mixed; boundary=%s\r\n", changesetBoundary))
	batchWriter, err := writer.CreatePart(h)
	if err != nil {
		return nil, err
	}
	batchWriter.Write(changeSetBody.Bytes())
	writer.Close()
	return body, nil
}

// generateChangesetBody generates the individual changesets for the various operations within the batch request.
// There is a changeset for Insert, Delete, Merge etc.
func (t *TableBatch) generateChangesetBody(changesetBoundary string) (*bytes.Buffer, error) {
	body := new(bytes.Buffer)
	writer := multipart.NewWriter(body)
	writer.SetBoundary(changesetBoundary)

	for _, be := range t.BatchEntitySlice {
		t.generateEntitySubset(&be, writer)
	}

	writer.Close()
	return body, nil
}

// generateVerb generates the HTTP request VERB required for each changeset.
func generateVerb(op Operation) (string, error) {
	switch op {
	case InsertOp:
		return http.MethodPost, nil
	case DeleteOp:
		return http.MethodDelete, nil
	case ReplaceOp, InsertOrReplaceOp:
		return http.MethodPut, nil
	case MergeOp, InsertOrMergeOp:
		return "MERGE", nil
	default:
		return "", errors.New("Unable to detect operation")
	}
}

// generateQueryPath generates the query path used within the changesets.
// For inserts it is just the table query path (table name), but for other
// operations (modifying an existing entity) the partition/row keys need to be generated.
func (t *TableBatch) generateQueryPath(op Operation, entity *Entity) string {
	if op == InsertOp {
		return entity.Table.buildPath()
	}
	return entity.buildPath()
}

// generateGenericOperationHeaders generates common headers for a given operation.
func generateGenericOperationHeaders(be *BatchEntity) map[string]string {
	retval := map[string]string{}

	for k, v := range defaultChangesetHeaders {
		retval[k] = v
	}

	if be.Op == DeleteOp || be.Op == ReplaceOp || be.Op == MergeOp {
		if be.Force || be.Entity.OdataEtag == "" {
			retval["If-Match"] = "*"
		} else {
			retval["If-Match"] = be.Entity.OdataEtag
		}
	}

	return retval
}

// generateEntitySubset generates the body payload for a particular batch entity.
func (t *TableBatch) generateEntitySubset(batchEntity *BatchEntity, writer *multipart.Writer) error {
	h := make(textproto.MIMEHeader)
	h.Set(headerContentType, "application/http")
	h.Set(headerContentTransferEncoding, "binary")

	verb, err := generateVerb(batchEntity.Op)
	if err != nil {
		return err
	}

	genericOpHeadersMap := generateGenericOperationHeaders(batchEntity)
	queryPath := t.generateQueryPath(batchEntity.Op, batchEntity.Entity)
	uri := t.Table.tsc.client.getEndpoint(tableServiceName, queryPath, nil)

	operationWriter, err := writer.CreatePart(h)
	if err != nil {
		return err
	}

	urlAndVerb := fmt.Sprintf("%s %s HTTP/1.1\r\n", verb, uri)
	operationWriter.Write([]byte(urlAndVerb))
	writeHeaders(genericOpHeadersMap, &operationWriter)
	operationWriter.Write([]byte("\r\n")) // an additional \r\n is needed per changeset, separating the "headers" and the body.

	// the delete operation doesn't need a body.
	if batchEntity.Op != DeleteOp {
		//var e Entity = batchEntity.Entity
		body, err := json.Marshal(batchEntity.Entity)
		if err != nil {
			return err
		}
		operationWriter.Write(body)
	}

	return nil
}

func writeHeaders(h map[string]string, writer *io.Writer) {
	// This way it is guaranteed the headers will be written in a sorted order
	var keys []string
	for k := range h {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	for _, k := range keys {
		(*writer).Write([]byte(fmt.Sprintf("%s: %s\r\n", k, h[k])))
	}
}
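For readers unfamiliar with the OData batch format, the nesting that generateBody and generateChangesetBody produce can be reproduced with the standard library alone. The following standalone sketch uses invented boundaries, an invented endpoint and a single dummy insert operation; it only mirrors the structure of the helpers above and does not call the package's unexported functions.

```go
package main

import (
	"bytes"
	"fmt"
	"mime/multipart"
	"net/textproto"
)

func main() {
	// Inner changeset: one part per table operation (here a single, dummy insert).
	changeset := new(bytes.Buffer)
	cw := multipart.NewWriter(changeset)
	cw.SetBoundary("changeset_8a709e2e") // hypothetical boundary

	oh := make(textproto.MIMEHeader)
	oh.Set("Content-Type", "application/http")
	oh.Set("Content-Transfer-Encoding", "binary")
	op, _ := cw.CreatePart(oh)
	op.Write([]byte("POST https://example.table.core.windows.net/people HTTP/1.1\r\n"))
	op.Write([]byte("Accept: application/json;odata=minimalmetadata\r\n\r\n"))
	op.Write([]byte(`{"PartitionKey":"pk","RowKey":"rk","Name":"test"}`))
	cw.Close()

	// Outer body: a single multipart/mixed part that wraps the whole changeset.
	body := new(bytes.Buffer)
	bw := multipart.NewWriter(body)
	bw.SetBoundary("batch_5c9e8e6a") // hypothetical boundary
	bh := make(textproto.MIMEHeader)
	bh.Set("Content-Type", fmt.Sprintf("multipart/mixed; boundary=%s", cw.Boundary()))
	part, _ := bw.CreatePart(bh)
	part.Write(changeset.Bytes())
	bw.Close()

	fmt.Println(body.String())
}
```

Printing the buffer shows a multipart/mixed envelope whose single part is itself a multipart/mixed changeset, with one nested part per table operation.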
345 vendor/github.com/Azure/azure-sdk-for-go/storage/table_entities.go generated vendored
@@ -1,345 +0,0 @@
package storage

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"reflect"
)

// Annotating as secure for gas scanning
/* #nosec */
const (
	partitionKeyNode                    = "PartitionKey"
	rowKeyNode                          = "RowKey"
	tag                                 = "table"
	tagIgnore                           = "-"
	continuationTokenPartitionKeyHeader = "X-Ms-Continuation-Nextpartitionkey"
	continuationTokenRowHeader          = "X-Ms-Continuation-Nextrowkey"
	maxTopParameter                     = 1000
)

type queryTablesResponse struct {
	TableName []struct {
		TableName string `json:"TableName"`
	} `json:"value"`
}

const (
	tableOperationTypeInsert          = iota
	tableOperationTypeUpdate          = iota
	tableOperationTypeMerge           = iota
	tableOperationTypeInsertOrReplace = iota
	tableOperationTypeInsertOrMerge   = iota
)

type tableOperation int

// TableEntity interface specifies
// the functions needed to support
// marshaling and unmarshaling into
// Azure Tables. The struct must only contain
// simple types because Azure Tables do not
// support hierarchy.
type TableEntity interface {
	PartitionKey() string
	RowKey() string
	SetPartitionKey(string) error
	SetRowKey(string) error
}

// ContinuationToken is an opaque (i.e. not useful to inspect)
// struct that Get... methods can return if there are more
// entries to be returned than the ones already
// returned. Just pass it to the same function to continue
// receiving the remaining entries.
type ContinuationToken struct {
	NextPartitionKey string
	NextRowKey       string
}

type getTableEntriesResponse struct {
	Elements []map[string]interface{} `json:"value"`
}

// QueryTableEntities queries the specified table and returns the unmarshaled
// entities of type retType.
// The top parameter limits the returned entries up to top. The maximum top
// allowed by the Azure API is 1000. In case there are more than top entries to be
// returned, the function will return a non-nil *ContinuationToken. You can call the
// same function again passing the received ContinuationToken as the previousContToken
// parameter in order to get the following entries. The query parameter
// is the OData query. To retrieve all the entries pass the empty string.
// The function returns a pointer to a TableEntity slice, the *ContinuationToken
// if there are more entries to be returned and an error in case something went
// wrong.
//
// Example:
// 		entities, cToken, err = tSvc.QueryTableEntities("table", cToken, reflect.TypeOf(entity), 20, "")
func (c *TableServiceClient) QueryTableEntities(tableName AzureTable, previousContToken *ContinuationToken, retType reflect.Type, top int, query string) ([]TableEntity, *ContinuationToken, error) {
	if top > maxTopParameter {
		return nil, nil, fmt.Errorf("top accepts at maximum %d elements. Requested %d instead", maxTopParameter, top)
	}

	uri := c.client.getEndpoint(tableServiceName, pathForTable(tableName), url.Values{})
	uri += fmt.Sprintf("?$top=%d", top)
	if query != "" {
		uri += fmt.Sprintf("&$filter=%s", url.QueryEscape(query))
	}

	if previousContToken != nil {
		uri += fmt.Sprintf("&NextPartitionKey=%s&NextRowKey=%s", previousContToken.NextPartitionKey, previousContToken.NextRowKey)
	}

	headers := c.getStandardHeaders()

	headers["Content-Length"] = "0"

	resp, err := c.client.execInternalJSON(http.MethodGet, uri, headers, nil, c.auth)

	if err != nil {
		return nil, nil, err
	}

	contToken := extractContinuationTokenFromHeaders(resp.headers)

	defer resp.body.Close()

	if err = checkRespCode(resp.statusCode, []int{http.StatusOK}); err != nil {
		return nil, contToken, err
	}

	retEntries, err := deserializeEntity(retType, resp.body)
	if err != nil {
		return nil, contToken, err
	}

	return retEntries, contToken, nil
}

// InsertEntity inserts an entity in the specified table.
// The function fails if there is an entity with the same
// PartitionKey and RowKey in the table.
func (c *TableServiceClient) InsertEntity(table AzureTable, entity TableEntity) error {
	if sc, err := c.execTable(table, entity, false, http.MethodPost); err != nil {
		return checkRespCode(sc, []int{http.StatusCreated})
	}

	return nil
}

func (c *TableServiceClient) execTable(table AzureTable, entity TableEntity, specifyKeysInURL bool, method string) (int, error) {
	uri := c.client.getEndpoint(tableServiceName, pathForTable(table), url.Values{})
	if specifyKeysInURL {
		uri += fmt.Sprintf("(PartitionKey='%s',RowKey='%s')", url.QueryEscape(entity.PartitionKey()), url.QueryEscape(entity.RowKey()))
	}

	headers := c.getStandardHeaders()

	var buf bytes.Buffer

	if err := injectPartitionAndRowKeys(entity, &buf); err != nil {
		return 0, err
	}

	headers["Content-Length"] = fmt.Sprintf("%d", buf.Len())

	resp, err := c.client.execInternalJSON(method, uri, headers, &buf, c.auth)

	if err != nil {
		return 0, err
	}

	defer resp.body.Close()

	return resp.statusCode, nil
}

// UpdateEntity updates the contents of an entity with the
// one passed as parameter. The function fails if there is no entity
// with the same PartitionKey and RowKey in the table.
func (c *TableServiceClient) UpdateEntity(table AzureTable, entity TableEntity) error {
	if sc, err := c.execTable(table, entity, true, http.MethodPut); err != nil {
		return checkRespCode(sc, []int{http.StatusNoContent})
	}
	return nil
}

// MergeEntity merges the contents of an entity with the
// one passed as parameter.
// The function fails if there is no entity
// with the same PartitionKey and RowKey in the table.
func (c *TableServiceClient) MergeEntity(table AzureTable, entity TableEntity) error {
	if sc, err := c.execTable(table, entity, true, "MERGE"); err != nil {
		return checkRespCode(sc, []int{http.StatusNoContent})
	}
	return nil
}

// DeleteEntityWithoutCheck deletes the entity matching by
// PartitionKey and RowKey. There is no check on the IfMatch
// parameter so the entity is always deleted.
// The function fails if there is no entity
// with the same PartitionKey and RowKey in the table.
func (c *TableServiceClient) DeleteEntityWithoutCheck(table AzureTable, entity TableEntity) error {
	return c.DeleteEntity(table, entity, "*")
}

// DeleteEntity deletes the entity matching by
// PartitionKey, RowKey and ifMatch field.
// The function fails if there is no entity
// with the same PartitionKey and RowKey in the table or
// the ifMatch is different.
func (c *TableServiceClient) DeleteEntity(table AzureTable, entity TableEntity, ifMatch string) error {
	uri := c.client.getEndpoint(tableServiceName, pathForTable(table), url.Values{})
	uri += fmt.Sprintf("(PartitionKey='%s',RowKey='%s')", url.QueryEscape(entity.PartitionKey()), url.QueryEscape(entity.RowKey()))

	headers := c.getStandardHeaders()

	headers["Content-Length"] = "0"
	headers["If-Match"] = ifMatch

	resp, err := c.client.execInternalJSON(http.MethodDelete, uri, headers, nil, c.auth)

	if err != nil {
		return err
	}
	defer resp.body.Close()

	if err := checkRespCode(resp.statusCode, []int{http.StatusNoContent}); err != nil {
		return err
	}

	return nil
}

// InsertOrReplaceEntity inserts an entity in the specified table
// or replaces the existing one.
func (c *TableServiceClient) InsertOrReplaceEntity(table AzureTable, entity TableEntity) error {
	if sc, err := c.execTable(table, entity, true, http.MethodPut); err != nil {
		return checkRespCode(sc, []int{http.StatusNoContent})
	}
	return nil
}

// InsertOrMergeEntity inserts an entity in the specified table
// or merges the existing one.
func (c *TableServiceClient) InsertOrMergeEntity(table AzureTable, entity TableEntity) error {
	if sc, err := c.execTable(table, entity, true, "MERGE"); err != nil {
		return checkRespCode(sc, []int{http.StatusNoContent})
	}
	return nil
}

func injectPartitionAndRowKeys(entity TableEntity, buf *bytes.Buffer) error {
	if err := json.NewEncoder(buf).Encode(entity); err != nil {
		return err
	}

	dec := make(map[string]interface{})
	if err := json.NewDecoder(buf).Decode(&dec); err != nil {
		return err
	}

	// Inject PartitionKey and RowKey
	dec[partitionKeyNode] = entity.PartitionKey()
	dec[rowKeyNode] = entity.RowKey()

	// Remove tagged fields
	// The tag is defined in the const section
	// This is useful to avoid storing the PartitionKey and RowKey twice.
	numFields := reflect.ValueOf(entity).Elem().NumField()
	for i := 0; i < numFields; i++ {
		f := reflect.ValueOf(entity).Elem().Type().Field(i)

		if f.Tag.Get(tag) == tagIgnore {
			// we must look for its JSON name in the dictionary
			// as the user can rename it using a tag
			jsonName := f.Name
			if f.Tag.Get("json") != "" {
				jsonName = f.Tag.Get("json")
			}
			delete(dec, jsonName)
		}
	}

	buf.Reset()

	if err := json.NewEncoder(buf).Encode(&dec); err != nil {
		return err
	}

	return nil
}

func deserializeEntity(retType reflect.Type, reader io.Reader) ([]TableEntity, error) {
	buf := new(bytes.Buffer)

	var ret getTableEntriesResponse
	if err := json.NewDecoder(reader).Decode(&ret); err != nil {
		return nil, err
	}

	tEntries := make([]TableEntity, len(ret.Elements))

	for i, entry := range ret.Elements {
		buf.Reset()
		if err := json.NewEncoder(buf).Encode(entry); err != nil {
			return nil, err
		}

		dec := make(map[string]interface{})
		if err := json.NewDecoder(buf).Decode(&dec); err != nil {
			return nil, err
		}

		var pKey, rKey string
		// strip pk and rk
		for key, val := range dec {
			switch key {
			case partitionKeyNode:
				pKey = val.(string)
			case rowKeyNode:
				rKey = val.(string)
			}
		}

		delete(dec, partitionKeyNode)
		delete(dec, rowKeyNode)

		buf.Reset()
		if err := json.NewEncoder(buf).Encode(dec); err != nil {
			return nil, err
		}

		// Create an empty retType instance
		tEntries[i] = reflect.New(retType.Elem()).Interface().(TableEntity)
		// Populate it with the values
		if err := json.NewDecoder(buf).Decode(&tEntries[i]); err != nil {
			return nil, err
		}

		// Reset PartitionKey and RowKey
		if err := tEntries[i].SetPartitionKey(pKey); err != nil {
			return nil, err
		}
		if err := tEntries[i].SetRowKey(rKey); err != nil {
			return nil, err
		}
	}

	return tEntries, nil
}

func extractContinuationTokenFromHeaders(h http.Header) *ContinuationToken {
	ct := ContinuationToken{h.Get(continuationTokenPartitionKeyHeader), h.Get(continuationTokenRowHeader)}

	if ct.NextPartitionKey != "" && ct.NextRowKey != "" {
		return &ct
	}
	return nil
}
192 vendor/github.com/Azure/azure-sdk-for-go/storage/tableserviceclient.go generated vendored
@@ -1,5 +1,35 @@
package storage

// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
	"strconv"
)

const (
	headerAccept          = "Accept"
	headerEtag            = "Etag"
	headerPrefer          = "Prefer"
	headerXmsContinuation = "x-ms-Continuation-NextTableName"
)

// TableServiceClient contains operations for Microsoft Azure Table Storage
// Service.
type TableServiceClient struct {

@@ -7,14 +37,168 @@ type TableServiceClient struct {
	auth authentication
}

// TableOptions includes options for some table operations
type TableOptions struct {
	RequestID string
}

func (options *TableOptions) addToHeaders(h map[string]string) map[string]string {
	if options != nil {
		h = addToHeaders(h, "x-ms-client-request-id", options.RequestID)
	}
	return h
}

// QueryNextLink includes information for getting the next page of
// results in query operations
type QueryNextLink struct {
	NextLink *string
	ml       MetadataLevel
}

// GetServiceProperties gets the properties of your storage account's table service.
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/get-table-service-properties
func (c *TableServiceClient) GetServiceProperties() (*ServiceProperties, error) {
	return c.client.getServiceProperties(tableServiceName, c.auth)
func (t *TableServiceClient) GetServiceProperties() (*ServiceProperties, error) {
	return t.client.getServiceProperties(tableServiceName, t.auth)
}

// SetServiceProperties sets the properties of your storage account's table service.
// See: https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/set-table-service-properties
func (c *TableServiceClient) SetServiceProperties(props ServiceProperties) error {
	return c.client.setServiceProperties(props, tableServiceName, c.auth)
func (t *TableServiceClient) SetServiceProperties(props ServiceProperties) error {
	return t.client.setServiceProperties(props, tableServiceName, t.auth)
}

// GetTableReference returns a Table object for the specified table name.
func (t *TableServiceClient) GetTableReference(name string) *Table {
	return &Table{
		tsc:  t,
		Name: name,
	}
}

// QueryTablesOptions includes options for some table operations
type QueryTablesOptions struct {
	Top       uint
	Filter    string
	RequestID string
}

func (options *QueryTablesOptions) getParameters() (url.Values, map[string]string) {
	query := url.Values{}
	headers := map[string]string{}
	if options != nil {
		if options.Top > 0 {
			query.Add(OdataTop, strconv.FormatUint(uint64(options.Top), 10))
		}
		if options.Filter != "" {
			query.Add(OdataFilter, options.Filter)
		}
		headers = addToHeaders(headers, "x-ms-client-request-id", options.RequestID)
	}
	return query, headers
}

// QueryTables returns the tables in the storage account.
// You can use query options defined by the OData Protocol specification.
//
// See https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/query-tables
func (t *TableServiceClient) QueryTables(ml MetadataLevel, options *QueryTablesOptions) (*TableQueryResult, error) {
	query, headers := options.getParameters()
	uri := t.client.getEndpoint(tableServiceName, tablesURIPath, query)
	return t.queryTables(uri, headers, ml)
}

// NextResults returns the next page of results
// from a QueryTables or a NextResults operation.
//
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/query-tables
// See https://docs.microsoft.com/rest/api/storageservices/fileservices/query-timeout-and-pagination
func (tqr *TableQueryResult) NextResults(options *TableOptions) (*TableQueryResult, error) {
	if tqr == nil {
		return nil, errNilPreviousResult
	}
	if tqr.NextLink == nil {
		return nil, errNilNextLink
	}
	headers := options.addToHeaders(map[string]string{})

	return tqr.tsc.queryTables(*tqr.NextLink, headers, tqr.ml)
}
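As a usage illustration of the exported query API above, a caller pages through all tables roughly as in the sketch below. It assumes the package's storage.NewBasicClient constructor, the Client.GetTableService accessor and the MinimalMetadata level, none of which appear in this hunk, and uses placeholder credentials.

```go
package main

import (
	"fmt"
	"log"

	"github.com/Azure/azure-sdk-for-go/storage"
)

func main() {
	// Placeholder account name and base64 key; NewBasicClient and
	// GetTableService are assumed from elsewhere in the storage package.
	client, err := storage.NewBasicClient("myaccount", "bXlrZXk=")
	if err != nil {
		log.Fatal(err)
	}
	ts := client.GetTableService()

	// First page: up to 50 tables, minimal OData metadata.
	result, err := ts.QueryTables(storage.MinimalMetadata, &storage.QueryTablesOptions{Top: 50})
	if err != nil {
		log.Fatal(err)
	}

	for {
		for _, table := range result.Tables {
			fmt.Println(table.Name)
		}
		if result.NextLink == nil {
			break // no continuation header, last page reached
		}
		// NextResults follows the x-ms-Continuation-NextTableName link.
		result, err = result.NextResults(nil)
		if err != nil {
			log.Fatal(err)
		}
	}
}
```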
// TableQueryResult contains the response from
// QueryTables and QueryTablesNextResults functions.
type TableQueryResult struct {
	OdataMetadata string  `json:"odata.metadata"`
	Tables        []Table `json:"value"`
	QueryNextLink
	tsc *TableServiceClient
}

func (t *TableServiceClient) queryTables(uri string, headers map[string]string, ml MetadataLevel) (*TableQueryResult, error) {
	if ml == EmptyPayload {
		return nil, errEmptyPayload
	}
	headers = mergeHeaders(headers, t.client.getStandardHeaders())
	headers[headerAccept] = string(ml)

	resp, err := t.client.exec(http.MethodGet, uri, headers, nil, t.auth)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if err := checkRespCode(resp, []int{http.StatusOK}); err != nil {
		return nil, err
	}

	respBody, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	var out TableQueryResult
	err = json.Unmarshal(respBody, &out)
	if err != nil {
		return nil, err
	}

	for i := range out.Tables {
		out.Tables[i].tsc = t
	}
	out.tsc = t

	nextLink := resp.Header.Get(http.CanonicalHeaderKey(headerXmsContinuation))
	if nextLink == "" {
		out.NextLink = nil
	} else {
		originalURI, err := url.Parse(uri)
		if err != nil {
			return nil, err
		}
		v := originalURI.Query()
		v.Set(nextTableQueryParameter, nextLink)
		newURI := t.client.getEndpoint(tableServiceName, tablesURIPath, v)
		out.NextLink = &newURI
		out.ml = ml
	}

	return &out, nil
}

func addBodyRelatedHeaders(h map[string]string, length int) map[string]string {
	h[headerContentType] = "application/json"
	h[headerContentLength] = fmt.Sprintf("%v", length)
	h[headerAcceptCharset] = "UTF-8"
	return h
}

func addReturnContentHeaders(h map[string]string, ml MetadataLevel) map[string]string {
	if ml != EmptyPayload {
		h[headerPrefer] = "return-content"
		h[headerAccept] = string(ml)
	} else {
		h[headerPrefer] = "return-no-content"
		// From API version 2015-12-11 onwards, the Accept header is required
		h[headerAccept] = string(NoMetadata)
	}
	return h
}
165 vendor/github.com/Azure/azure-sdk-for-go/storage/util.go generated vendored
@@ -1,5 +1,19 @@
package storage

// Copyright 2017 Microsoft Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

import (
	"bytes"
	"crypto/hmac"

@@ -12,9 +26,37 @@ import (
	"net/http"
	"net/url"
	"reflect"
	"strconv"
	"strings"
	"time"
)

var (
	fixedTime = time.Date(2050, time.December, 20, 21, 55, 0, 0, time.FixedZone("GMT", -6))

	accountSASOptions = AccountSASTokenOptions{
		Services: Services{
			Blob: true,
		},
		ResourceTypes: ResourceTypes{
			Service:   true,
			Container: true,
			Object:    true,
		},
		Permissions: Permissions{
			Read:    true,
			Write:   true,
			Delete:  true,
			List:    true,
			Add:     true,
			Create:  true,
			Update:  true,
			Process: true,
		},
		Expiry:   fixedTime,
		UseHTTPS: true,
	}
)

func (c Client) computeHmac256(message string) string {
	h := hmac.New(sha256.New, c.accountKey)
	h.Write([]byte(message))

@@ -29,6 +71,10 @@ func timeRfc1123Formatted(t time.Time) string {
	return t.Format(http.TimeFormat)
}

func timeRFC3339Formatted(t time.Time) string {
	return t.Format("2006-01-02T15:04:05.0000000Z")
}

func mergeParams(v1, v2 url.Values) url.Values {
	out := url.Values{}
	for k, v := range v1 {

@@ -76,10 +122,123 @@ func headersFromStruct(v interface{}) map[string]string {
	value := reflect.ValueOf(v)
	for i := 0; i < value.NumField(); i++ {
		key := value.Type().Field(i).Tag.Get("header")
		val := value.Field(i).String()
		if key != "" && val != "" {
			headers[key] = val
		if key != "" {
			reflectedValue := reflect.Indirect(value.Field(i))
			var val string
			if reflectedValue.IsValid() {
				switch reflectedValue.Type() {
				case reflect.TypeOf(fixedTime):
					val = timeRfc1123Formatted(reflectedValue.Interface().(time.Time))
				case reflect.TypeOf(uint64(0)), reflect.TypeOf(uint(0)):
					val = strconv.FormatUint(reflectedValue.Uint(), 10)
				case reflect.TypeOf(int(0)):
					val = strconv.FormatInt(reflectedValue.Int(), 10)
				default:
					val = reflectedValue.String()
				}
			}
			if val != "" {
				headers[key] = val
			}
		}
	}
	return headers
}
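To make the tag-driven conversion above concrete, here is a hypothetical illustration of what headersFromStruct returns for a small struct. The struct, its values and the helper function are invented, and since headersFromStruct is unexported, a snippet like this would have to live inside the storage package itself (for example in a _test.go file).

```go
package storage

import (
	"fmt"
	"time"
)

// exampleRequestHeaders is a hypothetical options struct, not part of the SDK.
// Only fields carrying a non-empty `header` tag and a non-empty value are emitted.
type exampleRequestHeaders struct {
	ContentLength uint      `header:"Content-Length"`
	IfModified    time.Time `header:"If-Modified-Since"`
	LeaseID       string    `header:"x-ms-lease-id"`
	internal      string    // untagged: never emitted
}

func exampleHeadersFromStruct() {
	h := headersFromStruct(exampleRequestHeaders{
		ContentLength: 1024,
		IfModified:    time.Date(2018, time.March, 1, 12, 0, 0, 0, time.UTC),
		// LeaseID left empty, so it is skipped entirely
	})
	fmt.Println(h["Content-Length"])    // 1024
	fmt.Println(h["If-Modified-Since"]) // Thu, 01 Mar 2018 12:00:00 GMT
}
```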
// merges extraHeaders into headers and returns headers
func mergeHeaders(headers, extraHeaders map[string]string) map[string]string {
	for k, v := range extraHeaders {
		headers[k] = v
	}
	return headers
}

func addToHeaders(h map[string]string, key, value string) map[string]string {
	if value != "" {
		h[key] = value
	}
	return h
}

func addTimeToHeaders(h map[string]string, key string, value *time.Time) map[string]string {
	if value != nil {
		h = addToHeaders(h, key, timeRfc1123Formatted(*value))
	}
	return h
}

func addTimeout(params url.Values, timeout uint) url.Values {
	if timeout > 0 {
		params.Add("timeout", fmt.Sprintf("%v", timeout))
	}
	return params
}

func addSnapshot(params url.Values, snapshot *time.Time) url.Values {
	if snapshot != nil {
		params.Add("snapshot", timeRFC3339Formatted(*snapshot))
	}
	return params
}

func getTimeFromHeaders(h http.Header, key string) (*time.Time, error) {
	var out time.Time
	var err error
	outStr := h.Get(key)
	if outStr != "" {
		out, err = time.Parse(time.RFC1123, outStr)
		if err != nil {
			return nil, err
		}
	}
	return &out, nil
}

// TimeRFC1123 is an alias for time.Time needed for custom Unmarshalling
type TimeRFC1123 time.Time

// UnmarshalXML is a custom unmarshaller that overrides the default time unmarshal which uses a different time layout.
func (t *TimeRFC1123) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
	var value string
	d.DecodeElement(&value, &start)
	parse, err := time.Parse(time.RFC1123, value)
	if err != nil {
		return err
	}
	*t = TimeRFC1123(parse)
	return nil
}

// MarshalXML marshals using time.RFC1123.
func (t *TimeRFC1123) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
	return e.EncodeElement(time.Time(*t).Format(time.RFC1123), start)
}
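A brief usage sketch of the exported TimeRFC1123 type: declaring a response field with this type makes encoding/xml pick up the custom unmarshaller above and accept the RFC 1123 layout the service returns. The XML fragment and wrapper struct below are illustrative only, not a real service response.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"log"
	"time"

	"github.com/Azure/azure-sdk-for-go/storage"
)

// shareProperties is a hypothetical response fragment using the SDK type.
type shareProperties struct {
	XMLName      xml.Name            `xml:"Properties"`
	LastModified storage.TimeRFC1123 `xml:"Last-Modified"`
}

func main() {
	payload := []byte(`<Properties><Last-Modified>Mon, 02 Apr 2018 15:04:05 GMT</Last-Modified></Properties>`)

	var props shareProperties
	if err := xml.Unmarshal(payload, &props); err != nil {
		log.Fatal(err)
	}
	// Convert back to time.Time for normal use.
	fmt.Println(time.Time(props.LastModified).Format(time.RFC3339))
}
```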
// returns a map of custom metadata values from the specified HTTP header
func getMetadataFromHeaders(header http.Header) map[string]string {
	metadata := make(map[string]string)
	for k, v := range header {
		// Can't trust CanonicalHeaderKey() to munge case
		// reliably. "_" is allowed in identifiers:
		// https://msdn.microsoft.com/en-us/library/azure/dd179414.aspx
		// https://msdn.microsoft.com/library/aa664670(VS.71).aspx
		// http://tools.ietf.org/html/rfc7230#section-3.2
		// ...but "_" is considered invalid by
		// CanonicalMIMEHeaderKey in
		// https://golang.org/src/net/textproto/reader.go?s=14615:14659#L542
		// so k can be "X-Ms-Meta-Lol" or "x-ms-meta-lol_rofl".
		k = strings.ToLower(k)
		if len(v) == 0 || !strings.HasPrefix(k, strings.ToLower(userDefinedMetadataHeaderPrefix)) {
			continue
		}
		// metadata["lol"] = content of the last X-Ms-Meta-Lol header
		k = k[len(userDefinedMetadataHeaderPrefix):]
		metadata[k] = v[len(v)-1]
	}

	if len(metadata) == 0 {
		return nil
	}

	return metadata
}
5 vendor/github.com/Azure/azure-sdk-for-go/storage/version.go generated vendored
@@ -1,5 +0,0 @@
package storage

var (
	sdkVersion = "0.1.0"
)
21 vendor/github.com/Azure/azure-sdk-for-go/version/version.go generated vendored Normal file
@@ -0,0 +1,21 @@
package version

// Copyright (c) Microsoft and contributors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
//
// See the License for the specific language governing permissions and
// limitations under the License.
//
// Code generated by Microsoft (R) AutoRest Code Generator.
// Changes may cause incorrect behavior and will be lost if the code is regenerated.

// Number contains the semantic version of this SDK.
const Number = "v16.2.1"