vendoring and caldav

Vincent Batts 2018-04-01 11:08:20 -04:00
parent a06aa900b7
commit fd9092e5ab
96 changed files with 14832 additions and 5 deletions

5
vendor/github.com/abbot/go-http-auth/.gitignore generated vendored Normal file

@@ -0,0 +1,5 @@
*~
*.a
*.6
*.out
_testmain.go

178
vendor/github.com/abbot/go-http-auth/LICENSE generated vendored Normal file

@@ -0,0 +1,178 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS

12
vendor/github.com/abbot/go-http-auth/Makefile generated vendored Normal file

@@ -0,0 +1,12 @@
include $(GOROOT)/src/Make.inc
TARG=auth_digest
GOFILES=\
auth.go\
digest.go\
basic.go\
misc.go\
md5crypt.go\
users.go\
include $(GOROOT)/src/Make.pkg

71
vendor/github.com/abbot/go-http-auth/README.md generated vendored Normal file

@@ -0,0 +1,71 @@
HTTP Authentication implementation in Go
========================================
This is an implementation of HTTP Basic and HTTP Digest authentication
in the Go language. It is designed as a simple wrapper for
http.HandlerFunc-style request handlers.
Features
--------
* Supports HTTP Basic and HTTP Digest authentication.
* Supports htpasswd and htdigest formatted files.
* Automatic reloading of password files.
* Pluggable interface for user/password storage.
* Supports MD5, SHA1 and BCrypt for Basic authentication password storage.
* Configurable Digest nonce cache size with expiration.
* Wrapper for legacy http handlers (http.HandlerFunc interface)
Example usage
-------------
This is a complete working example for Basic auth:

    package main

    import (
        "fmt"
        "net/http"

        auth "github.com/abbot/go-http-auth"
    )

    func Secret(user, realm string) string {
        if user == "john" {
            // password is "hello"
            return "$1$dlPL2MqE$oQmn16q49SqdmhenQuNgs1"
        }
        return ""
    }

    func handle(w http.ResponseWriter, r *auth.AuthenticatedRequest) {
        fmt.Fprintf(w, "<html><body><h1>Hello, %s!</h1></body></html>", r.Username)
    }

    func main() {
        authenticator := auth.NewBasicAuthenticator("example.com", Secret)
        http.HandleFunc("/", authenticator.Wrap(handle))
        http.ListenAndServe(":8080", nil)
    }
See more examples in the "examples" directory.
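For Digest auth the wiring has the same shape; a minimal sketch, assuming an htdigest file for realm "example.com" at a hypothetical path `./test.htdigest`:

```go
package main

import (
	"fmt"
	"net/http"

	auth "github.com/abbot/go-http-auth"
)

func handle(w http.ResponseWriter, r *auth.AuthenticatedRequest) {
	// r.Username is only set after the digest check succeeds.
	fmt.Fprintf(w, "<html><body><h1>Hello, %s!</h1></body></html>", r.Username)
}

func main() {
	// HtdigestFileProvider (users.go below) reloads the file when it
	// changes and panics on syntax errors.
	secrets := auth.HtdigestFileProvider("./test.htdigest")
	authenticator := auth.NewDigestAuthenticator("example.com", secrets)
	http.HandleFunc("/", authenticator.Wrap(handle))
	http.ListenAndServe(":8080", nil)
}
```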
Legal
-----
This module is developed under Apache 2.0 license, and can be used for
open and proprietary projects.
Copyright 2012-2013 Lev Shamardin
Licensed under the Apache License, Version 2.0 (the "License"); you
may not use this file or any other part of this project except in
compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing
permissions and limitations under the License.

109
vendor/github.com/abbot/go-http-auth/auth.go generated vendored Normal file

@@ -0,0 +1,109 @@
// Package auth is an implementation of HTTP Basic and HTTP Digest authentication.
package auth
import (
"net/http"
"golang.org/x/net/context"
)
/*
Request handlers must take AuthenticatedRequest instead of http.Request
*/
type AuthenticatedRequest struct {
http.Request
/*
Authenticated user name. Current API implies that Username is
never empty, which means that authentication is always done
before calling the request handler.
*/
Username string
}
/*
AuthenticatedHandlerFunc is like http.HandlerFunc, but takes
AuthenticatedRequest instead of http.Request
*/
type AuthenticatedHandlerFunc func(http.ResponseWriter, *AuthenticatedRequest)
/*
Authenticator wraps an AuthenticatedHandlerFunc with
authentication-checking code.
Typical Authenticator usage is something like:
authenticator := SomeAuthenticator(...)
http.HandleFunc("/", authenticator(my_handler))
Authenticator wrapper checks the user authentication and calls the
wrapped function only after authentication has succeeded. Otherwise,
it returns a handler which initiates the authentication procedure.
*/
type Authenticator func(AuthenticatedHandlerFunc) http.HandlerFunc
// Info contains authentication information for the request.
type Info struct {
// Authenticated is set to true when request was authenticated
// successfully, i.e. username and password passed in request did
// pass the check.
Authenticated bool
// Username contains a user name passed in the request when
// Authenticated is true. Its value is undefined if Authenticated
// is false.
Username string
// ResponseHeaders contains extra headers that must be set by server
// when sending back HTTP response.
ResponseHeaders http.Header
}
// UpdateHeaders updates headers with this Info's ResponseHeaders. It is
// safe to call this function on nil Info.
func (i *Info) UpdateHeaders(headers http.Header) {
if i == nil {
return
}
for k, values := range i.ResponseHeaders {
for _, v := range values {
headers.Add(k, v)
}
}
}
type key int // used for context keys
var infoKey key = 0
type AuthenticatorInterface interface {
// NewContext returns a new context carrying authentication
// information extracted from the request.
NewContext(ctx context.Context, r *http.Request) context.Context
// Wrap returns an http.HandlerFunc which wraps
// AuthenticatedHandlerFunc with this authenticator's
// authentication checks.
Wrap(AuthenticatedHandlerFunc) http.HandlerFunc
}
// FromContext returns authentication information from the context or
// nil if no such information present.
func FromContext(ctx context.Context) *Info {
info, ok := ctx.Value(infoKey).(*Info)
if !ok {
return nil
}
return info
}
// AuthUsernameHeader is the header set by JustCheck functions. It
// contains an authenticated username (if authentication was
// successful).
const AuthUsernameHeader = "X-Authenticated-Username"
func JustCheck(auth AuthenticatorInterface, wrapped http.HandlerFunc) http.HandlerFunc {
return auth.Wrap(func(w http.ResponseWriter, ar *AuthenticatedRequest) {
ar.Header.Set(AuthUsernameHeader, ar.Username)
wrapped(w, &ar.Request)
})
}
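The JustCheck helper above exists so that plain http.HandlerFunc handlers need not change their signature: the authenticated user arrives in the X-Authenticated-Username header rather than in an AuthenticatedRequest. A minimal usage sketch (the htpasswd path is hypothetical; NewBasicAuthenticator and HtpasswdFileProvider come from basic.go and users.go below):

```go
package main

import (
	"fmt"
	"net/http"

	auth "github.com/abbot/go-http-auth"
)

// legacy knows nothing about go-http-auth types.
func legacy(w http.ResponseWriter, r *http.Request) {
	user := r.Header.Get(auth.AuthUsernameHeader) // "X-Authenticated-Username"
	fmt.Fprintf(w, "hello, %s", user)
}

func main() {
	secrets := auth.HtpasswdFileProvider("./test.htpasswd")
	a := auth.NewBasicAuthenticator("example.com", secrets)
	http.HandleFunc("/", auth.JustCheck(a, legacy))
	http.ListenAndServe(":8080", nil)
}
```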

163
vendor/github.com/abbot/go-http-auth/basic.go generated vendored Normal file

@@ -0,0 +1,163 @@
package auth
import (
"bytes"
"crypto/sha1"
"crypto/subtle"
"encoding/base64"
"errors"
"net/http"
"strings"
"golang.org/x/crypto/bcrypt"
"golang.org/x/net/context"
)
type compareFunc func(hashedPassword, password []byte) error
var (
errMismatchedHashAndPassword = errors.New("mismatched hash and password")
compareFuncs = []struct {
prefix string
compare compareFunc
}{
{"", compareMD5HashAndPassword}, // default compareFunc
{"{SHA}", compareShaHashAndPassword},
// Bcrypt is complicated. According to crypt(3) from
// crypt_blowfish version 1.3 (fetched from
// http://www.openwall.com/crypt/crypt_blowfish-1.3.tar.gz), there
// are three different hash prefixes: "$2a$", used by versions up
// to 1.0.4, and "$2x$" and "$2y$", used in all later
// versions. "$2a$" has a known bug, "$2x$" was added as a
// migration path for systems with "$2a$" prefix and still has a
// bug, and only "$2y$" should be used by modern systems. The bug
// has something to do with handling of 8-bit characters. Since
// both "$2a$" and "$2x$" are deprecated, we are handling them the
// same way as "$2y$", which will yield correct results for 7-bit
// character passwords, but is wrong for 8-bit character
// passwords. You have to upgrade to "$2y$" if you want sane 8-bit
// character password support with bcrypt. To add to the mess,
// OpenBSD 5.5 introduced the "$2b$" prefix, which behaves exactly
// like "$2y$" according to the same source.
{"$2a$", bcrypt.CompareHashAndPassword},
{"$2b$", bcrypt.CompareHashAndPassword},
{"$2x$", bcrypt.CompareHashAndPassword},
{"$2y$", bcrypt.CompareHashAndPassword},
}
)
type BasicAuth struct {
Realm string
Secrets SecretProvider
// Headers used by authenticator. Set to ProxyHeaders to use with
// proxy server. When nil, NormalHeaders are used.
Headers *Headers
}
// check that BasicAuth implements AuthenticatorInterface
var _ = (AuthenticatorInterface)((*BasicAuth)(nil))
/*
Checks the username/password combination from the request. Returns
either an empty string (authentication failed) or the name of the
authenticated user.
Supports MD5, SHA1 and bcrypt password entries.
*/
func (a *BasicAuth) CheckAuth(r *http.Request) string {
s := strings.SplitN(r.Header.Get(a.Headers.V().Authorization), " ", 2)
if len(s) != 2 || s[0] != "Basic" {
return ""
}
b, err := base64.StdEncoding.DecodeString(s[1])
if err != nil {
return ""
}
pair := strings.SplitN(string(b), ":", 2)
if len(pair) != 2 {
return ""
}
user, password := pair[0], pair[1]
secret := a.Secrets(user, a.Realm)
if secret == "" {
return ""
}
compare := compareFuncs[0].compare
for _, cmp := range compareFuncs[1:] {
if strings.HasPrefix(secret, cmp.prefix) {
compare = cmp.compare
break
}
}
if compare([]byte(secret), []byte(password)) != nil {
return ""
}
return pair[0]
}
func compareShaHashAndPassword(hashedPassword, password []byte) error {
d := sha1.New()
d.Write(password)
if subtle.ConstantTimeCompare(hashedPassword[5:], []byte(base64.StdEncoding.EncodeToString(d.Sum(nil)))) != 1 {
return errMismatchedHashAndPassword
}
return nil
}
func compareMD5HashAndPassword(hashedPassword, password []byte) error {
parts := bytes.SplitN(hashedPassword, []byte("$"), 4)
if len(parts) != 4 {
return errMismatchedHashAndPassword
}
magic := []byte("$" + string(parts[1]) + "$")
salt := parts[2]
if subtle.ConstantTimeCompare(hashedPassword, MD5Crypt(password, salt, magic)) != 1 {
return errMismatchedHashAndPassword
}
return nil
}
/*
http.Handler for BasicAuth which initiates the authentication process
(or requires reauthentication).
*/
func (a *BasicAuth) RequireAuth(w http.ResponseWriter, r *http.Request) {
w.Header().Set(contentType, a.Headers.V().UnauthContentType)
w.Header().Set(a.Headers.V().Authenticate, `Basic realm="`+a.Realm+`"`)
w.WriteHeader(a.Headers.V().UnauthCode)
w.Write([]byte(a.Headers.V().UnauthResponse))
}
/*
Wrap returns an http.HandlerFunc which wraps an
AuthenticatedHandlerFunc. The wrapper checks the authentication and
either sends back the required authentication headers, or calls the
wrapped function with the authenticated username in the
AuthenticatedRequest.
*/
func (a *BasicAuth) Wrap(wrapped AuthenticatedHandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if username := a.CheckAuth(r); username == "" {
a.RequireAuth(w, r)
} else {
ar := &AuthenticatedRequest{Request: *r, Username: username}
wrapped(w, ar)
}
}
}
// NewContext returns a context carrying authentication information for the request.
func (a *BasicAuth) NewContext(ctx context.Context, r *http.Request) context.Context {
info := &Info{Username: a.CheckAuth(r), ResponseHeaders: make(http.Header)}
info.Authenticated = (info.Username != "")
if !info.Authenticated {
info.ResponseHeaders.Set(a.Headers.V().Authenticate, `Basic realm="`+a.Realm+`"`)
}
return context.WithValue(ctx, infoKey, info)
}
func NewBasicAuthenticator(realm string, secrets SecretProvider) *BasicAuth {
return &BasicAuth{Realm: realm, Secrets: secrets}
}
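CheckAuth above only needs a SecretProvider (defined in users.go below) that maps user and realm to a stored hash; the hash prefix then selects the compare function from compareFuncs. A minimal in-memory provider, sketched with the MD5-crypt entry from the README:

```go
package main

import auth "github.com/abbot/go-http-auth"

// mapSecrets adapts a plain map to the SecretProvider signature.
// Values are htpasswd-style hashes (MD5-crypt, "{SHA}" or bcrypt);
// returning "" makes CheckAuth reject the request.
func mapSecrets(users map[string]string) auth.SecretProvider {
	return func(user, realm string) string {
		return users[user]
	}
}

func main() {
	users := map[string]string{
		// password "hello", same entry as in the README example above
		"john": "$1$dlPL2MqE$oQmn16q49SqdmhenQuNgs1",
	}
	a := auth.NewBasicAuthenticator("example.com", mapSecrets(users))
	_ = a // wire a.Wrap(...) or auth.JustCheck(a, ...) into a mux as above
}
```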

274
vendor/github.com/abbot/go-http-auth/digest.go generated vendored Normal file

@@ -0,0 +1,274 @@
package auth
import (
"crypto/subtle"
"fmt"
"net/http"
"net/url"
"sort"
"strconv"
"strings"
"sync"
"time"
"golang.org/x/net/context"
)
type digest_client struct {
nc uint64
last_seen int64
}
type DigestAuth struct {
Realm string
Opaque string
Secrets SecretProvider
PlainTextSecrets bool
IgnoreNonceCount bool
// Headers used by authenticator. Set to ProxyHeaders to use with
// proxy server. When nil, NormalHeaders are used.
Headers *Headers
/*
Approximate size of the client cache. When the actual number of
tracked client nonces exceeds
ClientCacheSize+ClientCacheTolerance, ClientCacheTolerance*2
older entries are purged.
*/
ClientCacheSize int
ClientCacheTolerance int
clients map[string]*digest_client
mutex sync.Mutex
}
// check that DigestAuth implements AuthenticatorInterface
var _ = (AuthenticatorInterface)((*DigestAuth)(nil))
type digest_cache_entry struct {
nonce string
last_seen int64
}
type digest_cache []digest_cache_entry
func (c digest_cache) Less(i, j int) bool {
return c[i].last_seen < c[j].last_seen
}
func (c digest_cache) Len() int {
return len(c)
}
func (c digest_cache) Swap(i, j int) {
c[i], c[j] = c[j], c[i]
}
/*
Remove count oldest entries from DigestAuth.clients
*/
func (a *DigestAuth) Purge(count int) {
entries := make([]digest_cache_entry, 0, len(a.clients))
for nonce, client := range a.clients {
entries = append(entries, digest_cache_entry{nonce, client.last_seen})
}
cache := digest_cache(entries)
sort.Sort(cache)
for _, client := range cache[:count] {
delete(a.clients, client.nonce)
}
}
/*
http.Handler for DigestAuth which initiates the authentication process
(or requires reauthentication).
*/
func (a *DigestAuth) RequireAuth(w http.ResponseWriter, r *http.Request) {
if len(a.clients) > a.ClientCacheSize+a.ClientCacheTolerance {
a.Purge(a.ClientCacheTolerance * 2)
}
nonce := RandomKey()
a.clients[nonce] = &digest_client{nc: 0, last_seen: time.Now().UnixNano()}
w.Header().Set(contentType, a.Headers.V().UnauthContentType)
w.Header().Set(a.Headers.V().Authenticate,
fmt.Sprintf(`Digest realm="%s", nonce="%s", opaque="%s", algorithm="MD5", qop="auth"`,
a.Realm, nonce, a.Opaque))
w.WriteHeader(a.Headers.V().UnauthCode)
w.Write([]byte(a.Headers.V().UnauthResponse))
}
/*
Parse Authorization header from the http.Request. Returns a map of
auth parameters or nil if the header is not a valid parsable Digest
auth header.
*/
func DigestAuthParams(authorization string) map[string]string {
s := strings.SplitN(authorization, " ", 2)
if len(s) != 2 || s[0] != "Digest" {
return nil
}
return ParsePairs(s[1])
}
/*
Check if request contains valid authentication data. Returns a pair
of username, authinfo where username is the name of the authenticated
user or an empty string and authinfo is the contents for the optional
Authentication-Info response header.
*/
func (da *DigestAuth) CheckAuth(r *http.Request) (username string, authinfo *string) {
da.mutex.Lock()
defer da.mutex.Unlock()
username = ""
authinfo = nil
auth := DigestAuthParams(r.Header.Get(da.Headers.V().Authorization))
if auth == nil {
return "", nil
}
// RFC2617 Section 3.2.1 specifies that unset value of algorithm in
// WWW-Authenticate Response header should be treated as
// "MD5". According to section 3.2.2 the "algorithm" value in
// subsequent Request Authorization header must be set to whatever
// was supplied in the WWW-Authenticate Response header. This
// implementation always returns an algorithm in WWW-Authenticate
// header, however there seems to be broken clients in the wild
// which do not set the algorithm. Assume the unset algorithm in
// Authorization header to be equal to MD5.
if _, ok := auth["algorithm"]; !ok {
auth["algorithm"] = "MD5"
}
if da.Opaque != auth["opaque"] || auth["algorithm"] != "MD5" || auth["qop"] != "auth" {
return "", nil
}
// Check if the requested URI matches auth header
if r.RequestURI != auth["uri"] {
// We allow auth["uri"] to be a full path prefix of request-uri
// for some reason lost in history, which is probably wrong, but
// used to be like that for quite some time
// (https://tools.ietf.org/html/rfc2617#section-3.2.2 explicitly
// says that auth["uri"] is the request-uri).
//
// TODO: make an option to allow only strict checking.
switch u, err := url.Parse(auth["uri"]); {
case err != nil:
return "", nil
case r.URL == nil:
return "", nil
case len(u.Path) > len(r.URL.Path):
return "", nil
case !strings.HasPrefix(r.URL.Path, u.Path):
return "", nil
}
}
HA1 := da.Secrets(auth["username"], da.Realm)
if da.PlainTextSecrets {
HA1 = H(auth["username"] + ":" + da.Realm + ":" + HA1)
}
HA2 := H(r.Method + ":" + auth["uri"])
KD := H(strings.Join([]string{HA1, auth["nonce"], auth["nc"], auth["cnonce"], auth["qop"], HA2}, ":"))
if subtle.ConstantTimeCompare([]byte(KD), []byte(auth["response"])) != 1 {
return "", nil
}
// At this point crypto checks are completed and validated.
// Now check if the session is valid.
nc, err := strconv.ParseUint(auth["nc"], 16, 64)
if err != nil {
return "", nil
}
if client, ok := da.clients[auth["nonce"]]; !ok {
return "", nil
} else {
if client.nc != 0 && client.nc >= nc && !da.IgnoreNonceCount {
return "", nil
}
client.nc = nc
client.last_seen = time.Now().UnixNano()
}
resp_HA2 := H(":" + auth["uri"])
rspauth := H(strings.Join([]string{HA1, auth["nonce"], auth["nc"], auth["cnonce"], auth["qop"], resp_HA2}, ":"))
info := fmt.Sprintf(`qop="auth", rspauth="%s", cnonce="%s", nc="%s"`, rspauth, auth["cnonce"], auth["nc"])
return auth["username"], &info
}
/*
Default values for ClientCacheSize and ClientCacheTolerance for DigestAuth
*/
const DefaultClientCacheSize = 1000
const DefaultClientCacheTolerance = 100
/*
Wrap returns an http.HandlerFunc which wraps an
AuthenticatedHandlerFunc with this DigestAuth's authentication
checks. Secrets must return HA1 digests for the configured Realm
(or plain-text passwords when PlainTextSecrets is set).
*/
func (a *DigestAuth) Wrap(wrapped AuthenticatedHandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if username, authinfo := a.CheckAuth(r); username == "" {
a.RequireAuth(w, r)
} else {
ar := &AuthenticatedRequest{Request: *r, Username: username}
if authinfo != nil {
w.Header().Set(a.Headers.V().AuthInfo, *authinfo)
}
wrapped(w, ar)
}
}
}
/*
JustCheck returns a function which converts an http.HandlerFunc into
an http.HandlerFunc that requires authentication. The username is
passed as an extra X-Authenticated-Username header.
*/
func (a *DigestAuth) JustCheck(wrapped http.HandlerFunc) http.HandlerFunc {
return a.Wrap(func(w http.ResponseWriter, ar *AuthenticatedRequest) {
ar.Header.Set(AuthUsernameHeader, ar.Username)
wrapped(w, &ar.Request)
})
}
// NewContext returns a context carrying authentication information for the request.
func (a *DigestAuth) NewContext(ctx context.Context, r *http.Request) context.Context {
username, authinfo := a.CheckAuth(r)
info := &Info{Username: username, ResponseHeaders: make(http.Header)}
if username != "" {
info.Authenticated = true
info.ResponseHeaders.Set(a.Headers.V().AuthInfo, *authinfo)
} else {
// return back digest WWW-Authenticate header
if len(a.clients) > a.ClientCacheSize+a.ClientCacheTolerance {
a.Purge(a.ClientCacheTolerance * 2)
}
nonce := RandomKey()
a.clients[nonce] = &digest_client{nc: 0, last_seen: time.Now().UnixNano()}
info.ResponseHeaders.Set(a.Headers.V().Authenticate,
fmt.Sprintf(`Digest realm="%s", nonce="%s", opaque="%s", algorithm="MD5", qop="auth"`,
a.Realm, nonce, a.Opaque))
}
return context.WithValue(ctx, infoKey, info)
}
func NewDigestAuthenticator(realm string, secrets SecretProvider) *DigestAuth {
da := &DigestAuth{
Opaque: RandomKey(),
Realm: realm,
Secrets: secrets,
PlainTextSecrets: false,
ClientCacheSize: DefaultClientCacheSize,
ClientCacheTolerance: DefaultClientCacheTolerance,
clients: map[string]*digest_client{}}
return da
}
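The cache comments above boil down to two knobs: once more than ClientCacheSize+ClientCacheTolerance client nonces are tracked, RequireAuth purges the ClientCacheTolerance*2 oldest entries. A small sketch of tuning them after construction (the values are illustrative, not recommendations):

```go
package main

import auth "github.com/abbot/go-http-auth"

func newTunedDigestAuth() *auth.DigestAuth {
	da := auth.NewDigestAuthenticator("example.com", auth.HtdigestFileProvider("./test.htdigest"))
	da.ClientCacheSize = 5000     // default 1000; purge triggers once 5000+500 nonces are tracked
	da.ClientCacheTolerance = 500 // default 100; a purge then drops the 1000 oldest entries
	da.IgnoreNonceCount = true    // skip the nc monotonicity check in CheckAuth for broken clients
	return da
}

func main() {
	_ = newTunedDigestAuth()
}
```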

92
vendor/github.com/abbot/go-http-auth/md5crypt.go generated vendored Normal file

@@ -0,0 +1,92 @@
package auth
import "crypto/md5"
import "strings"
const itoa64 = "./0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
var md5_crypt_swaps = [16]int{12, 6, 0, 13, 7, 1, 14, 8, 2, 15, 9, 3, 5, 10, 4, 11}
type MD5Entry struct {
Magic, Salt, Hash []byte
}
func NewMD5Entry(e string) *MD5Entry {
parts := strings.SplitN(e, "$", 4)
if len(parts) != 4 {
return nil
}
return &MD5Entry{
Magic: []byte("$" + parts[1] + "$"),
Salt: []byte(parts[2]),
Hash: []byte(parts[3]),
}
}
/*
MD5 password crypt implementation
*/
func MD5Crypt(password, salt, magic []byte) []byte {
d := md5.New()
d.Write(password)
d.Write(magic)
d.Write(salt)
d2 := md5.New()
d2.Write(password)
d2.Write(salt)
d2.Write(password)
for i, mixin := 0, d2.Sum(nil); i < len(password); i++ {
d.Write([]byte{mixin[i%16]})
}
for i := len(password); i != 0; i >>= 1 {
if i&1 == 0 {
d.Write([]byte{password[0]})
} else {
d.Write([]byte{0})
}
}
final := d.Sum(nil)
for i := 0; i < 1000; i++ {
d2 := md5.New()
if i&1 == 0 {
d2.Write(final)
} else {
d2.Write(password)
}
if i%3 != 0 {
d2.Write(salt)
}
if i%7 != 0 {
d2.Write(password)
}
if i&1 == 0 {
d2.Write(password)
} else {
d2.Write(final)
}
final = d2.Sum(nil)
}
result := make([]byte, 0, 22)
v := uint(0)
bits := uint(0)
for _, i := range md5_crypt_swaps {
v |= (uint(final[i]) << bits)
for bits = bits + 8; bits > 6; bits -= 6 {
result = append(result, itoa64[v&0x3f])
v >>= 6
}
}
result = append(result, itoa64[v&0x3f])
return append(append(append(magic, salt...), '$'), result...)
}
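MD5Crypt returns the full "$magic$salt$hash" string, so verifying a password against an existing entry is a parse-plus-recompute, which is what compareMD5HashAndPassword in basic.go above does. A standalone sketch using the README's example entry:

```go
package main

import (
	"crypto/subtle"
	"fmt"

	auth "github.com/abbot/go-http-auth"
)

// checkMD5 reports whether password matches an MD5-crypt htpasswd entry.
func checkMD5(entry, password string) bool {
	e := auth.NewMD5Entry(entry)
	if e == nil {
		return false // not of the form $magic$salt$hash
	}
	crypted := auth.MD5Crypt([]byte(password), e.Salt, e.Magic)
	return subtle.ConstantTimeCompare([]byte(entry), crypted) == 1
}

func main() {
	// Entry from the README above; its password is "hello".
	fmt.Println(checkMD5("$1$dlPL2MqE$oQmn16q49SqdmhenQuNgs1", "hello"))
}
```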

141
vendor/github.com/abbot/go-http-auth/misc.go generated vendored Normal file

@@ -0,0 +1,141 @@
package auth
import (
"bytes"
"crypto/md5"
"crypto/rand"
"encoding/base64"
"fmt"
"net/http"
"strings"
)
// RandomKey returns a random 16-byte base64 alphabet string
func RandomKey() string {
k := make([]byte, 12)
for bytes := 0; bytes < len(k); {
n, err := rand.Read(k[bytes:])
if err != nil {
panic("rand.Read() failed")
}
bytes += n
}
return base64.StdEncoding.EncodeToString(k)
}
// H function for MD5 algorithm (returns a lower-case hex MD5 digest)
func H(data string) string {
digest := md5.New()
digest.Write([]byte(data))
return fmt.Sprintf("%x", digest.Sum(nil))
}
// ParseList parses a comma-separated list of values as described by
// RFC 2068 and returns list elements.
//
// Lifted from https://code.google.com/p/gorilla/source/browse/http/parser/parser.go
// which was ported from urllib2.parse_http_list, from the Python
// standard library.
func ParseList(value string) []string {
var list []string
var escape, quote bool
b := new(bytes.Buffer)
for _, r := range value {
switch {
case escape:
b.WriteRune(r)
escape = false
case quote:
if r == '\\' {
escape = true
} else {
if r == '"' {
quote = false
}
b.WriteRune(r)
}
case r == ',':
list = append(list, strings.TrimSpace(b.String()))
b.Reset()
case r == '"':
quote = true
b.WriteRune(r)
default:
b.WriteRune(r)
}
}
// Append last part.
if s := b.String(); s != "" {
list = append(list, strings.TrimSpace(s))
}
return list
}
// ParsePairs extracts key/value pairs from a comma-separated list of
// values as described by RFC 2068 and returns a map[key]value. The
// resulting values are unquoted. If a list element doesn't contain a
// "=", the key is the element itself and the value is an empty
// string.
//
// Lifted from https://code.google.com/p/gorilla/source/browse/http/parser/parser.go
func ParsePairs(value string) map[string]string {
m := make(map[string]string)
for _, pair := range ParseList(strings.TrimSpace(value)) {
if i := strings.Index(pair, "="); i < 0 {
m[pair] = ""
} else {
v := pair[i+1:]
if v[0] == '"' && v[len(v)-1] == '"' {
// Unquote it.
v = v[1 : len(v)-1]
}
m[pair[:i]] = v
}
}
return m
}
// Headers contains header and error codes used by authenticator.
type Headers struct {
Authenticate string // WWW-Authenticate
Authorization string // Authorization
AuthInfo string // Authentication-Info
UnauthCode int // 401
UnauthContentType string // text/plain
UnauthResponse string // Unauthorized.
}
// V returns NormalHeaders when h is nil, or h otherwise. This allows
// uninitialized *Headers values to be used in structs.
func (h *Headers) V() *Headers {
if h == nil {
return NormalHeaders
}
return h
}
var (
// NormalHeaders are the regular Headers used by an HTTP Server for
// request authentication.
NormalHeaders = &Headers{
Authenticate: "WWW-Authenticate",
Authorization: "Authorization",
AuthInfo: "Authentication-Info",
UnauthCode: http.StatusUnauthorized,
UnauthContentType: "text/plain",
UnauthResponse: fmt.Sprintf("%d %s\n", http.StatusUnauthorized, http.StatusText(http.StatusUnauthorized)),
}
// ProxyHeaders are Headers used by an HTTP Proxy server for proxy
// access authentication.
ProxyHeaders = &Headers{
Authenticate: "Proxy-Authenticate",
Authorization: "Proxy-Authorization",
AuthInfo: "Proxy-Authentication-Info",
UnauthCode: http.StatusProxyAuthRequired,
UnauthContentType: "text/plain",
UnauthResponse: fmt.Sprintf("%d %s\n", http.StatusProxyAuthRequired, http.StatusText(http.StatusProxyAuthRequired)),
}
)
const contentType = "Content-Type"
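ParseList and ParsePairs above are what DigestAuthParams in digest.go feeds the Authorization header through; a quick illustration of the resulting map (the header value is a trimmed-down, hypothetical example):

```go
package main

import (
	"fmt"

	auth "github.com/abbot/go-http-auth"
)

func main() {
	header := `username="john", realm="example.com", qop=auth, nc=00000001, uri="/"`
	pairs := auth.ParsePairs(header) // quoted values come back unquoted
	fmt.Println(pairs["username"], pairs["qop"], pairs["nc"], pairs["uri"])
	// prints: john auth 00000001 /
}
```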

1
vendor/github.com/abbot/go-http-auth/test.htdigest generated vendored Normal file

@@ -0,0 +1 @@
test:example.com:aa78524fceb0e50fd8ca96dd818b8cf9

4
vendor/github.com/abbot/go-http-auth/test.htpasswd generated vendored Normal file

@@ -0,0 +1,4 @@
test:{SHA}qvTGHdzF6KLavt4PO0gs2a6pQ00=
test2:$apr1$a0j62R97$mYqFkloXH0/UOaUnAiV2b0
test16:$apr1$JI4wh3am$AmhephVqLTUyAVpFQeHZC0
test3:$2y$05$ih3C91zUBSTFcAh2mQnZYuob0UOZVEf16wl/ukgjDhjvj.xgM1WwS

154
vendor/github.com/abbot/go-http-auth/users.go generated vendored Normal file

@@ -0,0 +1,154 @@
package auth
import (
"encoding/csv"
"os"
"sync"
)
/*
SecretProvider is used by authenticators. Takes user name and realm
as an argument, returns secret required for authentication (HA1 for
digest authentication, properly encrypted password for basic).
Returning an empty string means failing the authentication.
*/
type SecretProvider func(user, realm string) string
/*
Common functions for file auto-reloading
*/
type File struct {
Path string
Info os.FileInfo
/* must be set in inherited types during initialization */
Reload func()
mu sync.Mutex
}
func (f *File) ReloadIfNeeded() {
info, err := os.Stat(f.Path)
if err != nil {
panic(err)
}
f.mu.Lock()
defer f.mu.Unlock()
if f.Info == nil || f.Info.ModTime() != info.ModTime() {
f.Info = info
f.Reload()
}
}
/*
Structure used for htdigest file authentication. Users map realms to
maps of users to their HA1 digests.
*/
type HtdigestFile struct {
File
Users map[string]map[string]string
mu sync.RWMutex
}
func reload_htdigest(hf *HtdigestFile) {
r, err := os.Open(hf.Path)
if err != nil {
panic(err)
}
csv_reader := csv.NewReader(r)
csv_reader.Comma = ':'
csv_reader.Comment = '#'
csv_reader.TrimLeadingSpace = true
records, err := csv_reader.ReadAll()
if err != nil {
panic(err)
}
hf.mu.Lock()
defer hf.mu.Unlock()
hf.Users = make(map[string]map[string]string)
for _, record := range records {
_, exists := hf.Users[record[1]]
if !exists {
hf.Users[record[1]] = make(map[string]string)
}
hf.Users[record[1]][record[0]] = record[2]
}
}
/*
SecretProvider implementation based on htdigest-formatted files. Will
reload htdigest file on changes. Will panic on syntax errors in
htdigest files.
*/
func HtdigestFileProvider(filename string) SecretProvider {
hf := &HtdigestFile{File: File{Path: filename}}
hf.Reload = func() { reload_htdigest(hf) }
return func(user, realm string) string {
hf.ReloadIfNeeded()
hf.mu.RLock()
defer hf.mu.RUnlock()
_, exists := hf.Users[realm]
if !exists {
return ""
}
digest, exists := hf.Users[realm][user]
if !exists {
return ""
}
return digest
}
}
/*
Structure used for htpasswd file authentication. Users maps user names
to their salted, encrypted passwords.
*/
type HtpasswdFile struct {
File
Users map[string]string
mu sync.RWMutex
}
func reload_htpasswd(h *HtpasswdFile) {
r, err := os.Open(h.Path)
if err != nil {
panic(err)
}
csv_reader := csv.NewReader(r)
csv_reader.Comma = ':'
csv_reader.Comment = '#'
csv_reader.TrimLeadingSpace = true
records, err := csv_reader.ReadAll()
if err != nil {
panic(err)
}
h.mu.Lock()
defer h.mu.Unlock()
h.Users = make(map[string]string)
for _, record := range records {
h.Users[record[0]] = record[1]
}
}
/*
SecretProvider implementation based on htpasswd-formatted files. Will
reload htpasswd file on changes. Will panic on syntax errors in
htpasswd files. Realm argument of the SecretProvider is ignored.
*/
func HtpasswdFileProvider(filename string) SecretProvider {
h := &HtpasswdFile{File: File{Path: filename}}
h.Reload = func() { reload_htpasswd(h) }
return func(user, realm string) string {
h.ReloadIfNeeded()
h.mu.RLock()
password, exists := h.Users[user]
h.mu.RUnlock()
if !exists {
return ""
}
return password
}
}
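An htdigest entry (like the test.htdigest line above) stores HA1 = MD5(user:realm:password), which is exactly what HtdigestFileProvider hands to DigestAuth.CheckAuth. A sketch of producing such a line with the H helper from misc.go (the password here is hypothetical):

```go
package main

import (
	"fmt"

	auth "github.com/abbot/go-http-auth"
)

func main() {
	user, realm, password := "test", "example.com", "secret"
	ha1 := auth.H(user + ":" + realm + ":" + password) // lower-case hex MD5
	fmt.Printf("%s:%s:%s\n", user, realm, ha1)         // htdigest-style line
}
```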

16
vendor/github.com/beevik/etree/.travis.yml generated vendored Normal file

@@ -0,0 +1,16 @@
language: go
sudo: false
go:
- 1.4.2
- 1.5.1
- 1.6
- tip
matrix:
allow_failures:
- go: tip
script:
- go vet ./...
- go test -v ./...

8
vendor/github.com/beevik/etree/CONTRIBUTORS generated vendored Normal file

@@ -0,0 +1,8 @@
Brett Vickers (beevik)
Felix Geisendörfer (felixge)
Kamil Kisiel (kisielk)
Graham King (grahamking)
Matt Smith (ma314smith)
Michal Jemala (michaljemala)
Nicolas Piganeau (npiganeau)
Chris Brown (ccbrown)

24
vendor/github.com/beevik/etree/LICENSE generated vendored Normal file

@@ -0,0 +1,24 @@
Copyright 2015 Brett Vickers. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY COPYRIGHT HOLDER ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

203
vendor/github.com/beevik/etree/README.md generated vendored Normal file

@@ -0,0 +1,203 @@
[![Build Status](https://travis-ci.org/beevik/etree.svg?branch=master)](https://travis-ci.org/beevik/etree)
[![GoDoc](https://godoc.org/github.com/beevik/etree?status.svg)](https://godoc.org/github.com/beevik/etree)
etree
=====
The etree package is a lightweight, pure go package that expresses XML in
the form of an element tree. Its design was inspired by the Python
[ElementTree](http://docs.python.org/2/library/xml.etree.elementtree.html)
module. Some of the package's features include:
* Represents XML documents as trees of elements for easy traversal.
* Imports, serializes, modifies or creates XML documents from scratch.
* Writes and reads XML to/from files, byte slices, strings and io interfaces.
* Performs simple or complex searches with lightweight XPath-like query APIs.
* Auto-indents XML using spaces or tabs for better readability.
* Implemented in pure go; depends only on standard go libraries.
* Built on top of the go [encoding/xml](http://golang.org/pkg/encoding/xml)
package.
### Creating an XML document
The following example creates an XML document from scratch using the etree
package and outputs its indented contents to stdout.
```go
doc := etree.NewDocument()
doc.CreateProcInst("xml", `version="1.0" encoding="UTF-8"`)
doc.CreateProcInst("xml-stylesheet", `type="text/xsl" href="style.xsl"`)
people := doc.CreateElement("People")
people.CreateComment("These are all known people")
jon := people.CreateElement("Person")
jon.CreateAttr("name", "Jon")
sally := people.CreateElement("Person")
sally.CreateAttr("name", "Sally")
doc.Indent(2)
doc.WriteTo(os.Stdout)
```
Output:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="style.xsl"?>
<People>
<!--These are all known people-->
<Person name="Jon"/>
<Person name="Sally"/>
</People>
```
### Reading an XML file
Suppose you have a file on disk called `bookstore.xml` containing the
following data:
```xml
<bookstore xmlns:p="urn:schemas-books-com:prices">
<book category="COOKING">
<title lang="en">Everyday Italian</title>
<author>Giada De Laurentiis</author>
<year>2005</year>
<p:price>30.00</p:price>
</book>
<book category="CHILDREN">
<title lang="en">Harry Potter</title>
<author>J K. Rowling</author>
<year>2005</year>
<p:price>29.99</p:price>
</book>
<book category="WEB">
<title lang="en">XQuery Kick Start</title>
<author>James McGovern</author>
<author>Per Bothner</author>
<author>Kurt Cagle</author>
<author>James Linn</author>
<author>Vaidyanathan Nagarajan</author>
<year>2003</year>
<p:price>49.99</p:price>
</book>
<book category="WEB">
<title lang="en">Learning XML</title>
<author>Erik T. Ray</author>
<year>2003</year>
<p:price>39.95</p:price>
</book>
</bookstore>
```
This code reads the file's contents into an etree document.
```go
doc := etree.NewDocument()
if err := doc.ReadFromFile("bookstore.xml"); err != nil {
panic(err)
}
```
You can also read XML from a string, a byte slice, or an `io.Reader`.
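For instance, loading from an in-memory string looks like this (the XML literal is just a tiny stand-in):
```go
doc := etree.NewDocument()
if err := doc.ReadFromString(`<People><Person name="Jon"/></People>`); err != nil {
    panic(err)
}
```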
### Processing elements and attributes
This example illustrates several ways to access elements and attributes using
etree selection queries.
```go
root := doc.SelectElement("bookstore")
fmt.Println("ROOT element:", root.Tag)
for _, book := range root.SelectElements("book") {
fmt.Println("CHILD element:", book.Tag)
if title := book.SelectElement("title"); title != nil {
lang := title.SelectAttrValue("lang", "unknown")
fmt.Printf(" TITLE: %s (%s)\n", title.Text(), lang)
}
for _, attr := range book.Attr {
fmt.Printf(" ATTR: %s=%s\n", attr.Key, attr.Value)
}
}
```
Output:
```
ROOT element: bookstore
CHILD element: book
TITLE: Everyday Italian (en)
ATTR: category=COOKING
CHILD element: book
TITLE: Harry Potter (en)
ATTR: category=CHILDREN
CHILD element: book
TITLE: XQuery Kick Start (en)
ATTR: category=WEB
CHILD element: book
TITLE: Learning XML (en)
ATTR: category=WEB
```
### Path queries
This example uses etree's path functions to select all book titles that fall
into the category of 'WEB'. The double-slash prefix in the path causes the
search for book elements to occur recursively; book elements may appear at any
level of the XML hierarchy.
```go
for _, t := range doc.FindElements("//book[@category='WEB']/title") {
fmt.Println("Title:", t.Text())
}
```
Output:
```
Title: XQuery Kick Start
Title: Learning XML
```
This example finds the first book element under the root bookstore element and
outputs the tag and text of each of its child elements.
```go
for _, e := range doc.FindElements("./bookstore/book[1]/*") {
fmt.Printf("%s: %s\n", e.Tag, e.Text())
}
```
Output:
```
title: Everyday Italian
author: Giada De Laurentiis
year: 2005
price: 30.00
```
This example finds all books with a price of 49.99 and outputs their titles.
```go
path := etree.MustCompilePath("./bookstore/book[p:price='49.99']/title")
for _, e := range doc.FindElementsPath(path) {
fmt.Println(e.Text())
}
```
Output:
```
XQuery Kick Start
```
Note that this example uses the FindElementsPath function, which takes as an
argument a pre-compiled path object. Use precompiled paths when you plan to
search with the same path more than once.
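Because a compiled Path is reusable, the compile step can be hoisted out of loops; a sketch, assuming a slice of documents named docs:
```go
titlePath := etree.MustCompilePath("//book[@category='WEB']/title")
for _, d := range docs { // docs: []*etree.Document, assumed to exist
    for _, e := range d.FindElementsPath(titlePath) {
        fmt.Println("Title:", e.Text())
    }
}
```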
### Other features
These are just a few examples of the things the etree package can do. See the
[documentation](http://godoc.org/github.com/beevik/etree) for a complete
description of its capabilities.
### Contributing
This project accepts contributions. Just fork the repo and submit a pull
request!

943
vendor/github.com/beevik/etree/etree.go generated vendored Normal file

@@ -0,0 +1,943 @@
// Copyright 2015 Brett Vickers.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package etree provides XML services through an Element Tree
// abstraction.
package etree
import (
"bufio"
"bytes"
"encoding/xml"
"errors"
"io"
"os"
"strings"
)
const (
// NoIndent is used with Indent to disable all indenting.
NoIndent = -1
)
// ErrXML is returned when XML parsing fails due to incorrect formatting.
var ErrXML = errors.New("etree: invalid XML format")
// ReadSettings allow for changing the default behavior of the ReadFrom*
// methods.
type ReadSettings struct {
// CharsetReader to be passed to standard xml.Decoder. Default: nil.
CharsetReader func(charset string, input io.Reader) (io.Reader, error)
// Permissive allows input containing common mistakes such as missing tags
// or attribute values. Default: false.
Permissive bool
}
// newReadSettings creates a default ReadSettings record.
func newReadSettings() ReadSettings {
return ReadSettings{}
}
// WriteSettings allow for changing the serialization behavior of the WriteTo*
// methods.
type WriteSettings struct {
// CanonicalEndTags forces the production of XML end tags, even for
// elements that have no child elements. Default: false.
CanonicalEndTags bool
// CanonicalText forces the production of XML character references for
// text data characters &, <, and >. If false, XML character references
// are also produced for " and '. Default: false.
CanonicalText bool
// CanonicalAttrVal forces the production of XML character references for
// attribute value characters &, < and ". If false, XML character
// references are also produced for > and '. Default: false.
CanonicalAttrVal bool
}
// newWriteSettings creates a default WriteSettings record.
func newWriteSettings() WriteSettings {
return WriteSettings{
CanonicalEndTags: false,
CanonicalText: false,
CanonicalAttrVal: false,
}
}
// A Token is an empty interface that represents an Element, CharData,
// Comment, Directive, or ProcInst.
type Token interface {
Parent() *Element
dup(parent *Element) Token
setParent(parent *Element)
writeTo(w *bufio.Writer, s *WriteSettings)
}
// A Document is a container holding a complete XML hierarchy. Its embedded
// element contains zero or more children, one of which is usually the root
// element. The embedded element may include other children such as
// processing instructions or BOM CharData tokens.
type Document struct {
Element
ReadSettings ReadSettings
WriteSettings WriteSettings
}
// An Element represents an XML element, its attributes, and its child tokens.
type Element struct {
Space, Tag string // namespace and tag
Attr []Attr // key-value attribute pairs
Child []Token // child tokens (elements, comments, etc.)
parent *Element // parent element
}
// An Attr represents a key-value attribute of an XML element.
type Attr struct {
Space, Key string // The attribute's namespace and key
Value string // The attribute value string
}
// CharData represents character data within XML.
type CharData struct {
Data string
parent *Element
whitespace bool
}
// A Comment represents an XML comment.
type Comment struct {
Data string
parent *Element
}
// A Directive represents an XML directive.
type Directive struct {
Data string
parent *Element
}
// A ProcInst represents an XML processing instruction.
type ProcInst struct {
Target string
Inst string
parent *Element
}
// NewDocument creates an XML document without a root element.
func NewDocument() *Document {
return &Document{
Element{Child: make([]Token, 0)},
newReadSettings(),
newWriteSettings(),
}
}
// Copy returns a recursive, deep copy of the document.
func (d *Document) Copy() *Document {
return &Document{*(d.dup(nil).(*Element)), d.ReadSettings, d.WriteSettings}
}
// Root returns the root element of the document, or nil if there is no root
// element.
func (d *Document) Root() *Element {
for _, t := range d.Child {
if c, ok := t.(*Element); ok {
return c
}
}
return nil
}
// SetRoot replaces the document's root element with e. If the document
// already has a root when this function is called, then the document's
// original root is unbound first. If the element e is bound to another
// document (or to another element within a document), then it is unbound
// first.
func (d *Document) SetRoot(e *Element) {
if e.parent != nil {
e.parent.RemoveChild(e)
}
e.setParent(&d.Element)
for i, t := range d.Child {
if _, ok := t.(*Element); ok {
t.setParent(nil)
d.Child[i] = e
return
}
}
d.Child = append(d.Child, e)
}
// ReadFrom reads XML from the reader r into the document d. It returns the
// number of bytes read and any error encountered.
func (d *Document) ReadFrom(r io.Reader) (n int64, err error) {
return d.Element.readFrom(r, d.ReadSettings)
}
// ReadFromFile reads XML from the file named filename into the document d.
func (d *Document) ReadFromFile(filename string) error {
f, err := os.Open(filename)
if err != nil {
return err
}
defer f.Close()
_, err = d.ReadFrom(f)
return err
}
// ReadFromBytes reads XML from the byte slice b into the document d.
func (d *Document) ReadFromBytes(b []byte) error {
_, err := d.ReadFrom(bytes.NewReader(b))
return err
}
// ReadFromString reads XML from the string s into the document d.
func (d *Document) ReadFromString(s string) error {
_, err := d.ReadFrom(strings.NewReader(s))
return err
}
// WriteTo serializes an XML document into the writer w. It
// returns the number of bytes written and any error encountered.
func (d *Document) WriteTo(w io.Writer) (n int64, err error) {
cw := newCountWriter(w)
b := bufio.NewWriter(cw)
for _, c := range d.Child {
c.writeTo(b, &d.WriteSettings)
}
err, n = b.Flush(), cw.bytes
return
}
// WriteToFile serializes an XML document into the file named
// filename.
func (d *Document) WriteToFile(filename string) error {
f, err := os.Create(filename)
if err != nil {
return err
}
defer f.Close()
_, err = d.WriteTo(f)
return err
}
// WriteToBytes serializes the XML document into a slice of
// bytes.
func (d *Document) WriteToBytes() (b []byte, err error) {
var buf bytes.Buffer
if _, err = d.WriteTo(&buf); err != nil {
return
}
return buf.Bytes(), nil
}
// WriteToString serializes the XML document into a string.
func (d *Document) WriteToString() (s string, err error) {
var b []byte
if b, err = d.WriteToBytes(); err != nil {
return
}
return string(b), nil
}
type indentFunc func(depth int) string
// Indent modifies the document's element tree by inserting CharData entities
// containing carriage returns and indentation. The amount of indentation per
// depth level is given as spaces. Pass etree.NoIndent for spaces if you want
// no indentation at all.
func (d *Document) Indent(spaces int) {
var indent indentFunc
switch {
case spaces < 0:
indent = func(depth int) string { return "" }
default:
indent = func(depth int) string { return crIndent(depth*spaces, crsp) }
}
d.Element.indent(0, indent)
}
// IndentTabs modifies the document's element tree by inserting CharData
// entities containing carriage returns and tabs for indentation. One tab is
// used per indentation level.
func (d *Document) IndentTabs() {
indent := func(depth int) string { return crIndent(depth, crtab) }
d.Element.indent(0, indent)
}
// NewElement creates an unparented element with the specified tag. The tag
// may be prefixed by a namespace and a colon.
func NewElement(tag string) *Element {
space, stag := spaceDecompose(tag)
return newElement(space, stag, nil)
}
// newElement is a helper function that creates an element and binds it to
// a parent element if possible.
func newElement(space, tag string, parent *Element) *Element {
e := &Element{
Space: space,
Tag: tag,
Attr: make([]Attr, 0),
Child: make([]Token, 0),
parent: parent,
}
if parent != nil {
parent.addChild(e)
}
return e
}
// Copy creates a recursive, deep copy of the element and all its attributes
// and children. The returned element has no parent but can be parented to
// another element using AddChild, or to a document using SetRoot.
func (e *Element) Copy() *Element {
var parent *Element
return e.dup(parent).(*Element)
}
// Text returns the characters immediately following the element's
// opening tag.
func (e *Element) Text() string {
if len(e.Child) == 0 {
return ""
}
if cd, ok := e.Child[0].(*CharData); ok {
return cd.Data
}
return ""
}
// SetText replaces an element's subsidiary CharData text with a new string.
func (e *Element) SetText(text string) {
if len(e.Child) > 0 {
if cd, ok := e.Child[0].(*CharData); ok {
cd.Data = text
return
}
}
cd := newCharData(text, false, e)
copy(e.Child[1:], e.Child[0:])
e.Child[0] = cd
}
// CreateElement creates an element with the specified tag and adds it as the
// last child element of the element e. The tag may be prefixed by a namespace
// and a colon.
func (e *Element) CreateElement(tag string) *Element {
space, stag := spaceDecompose(tag)
return newElement(space, stag, e)
}
// AddChild adds the token t as the last child of element e. If token t was
// already the child of another element, it is first removed from its current
// parent element.
func (e *Element) AddChild(t Token) {
if t.Parent() != nil {
t.Parent().RemoveChild(t)
}
t.setParent(e)
e.addChild(t)
}
// InsertChild inserts the token t before e's existing child token ex. If ex
// is nil (or if ex is not a child of e), then t is added to the end of e's
// child token list. If token t was already the child of another element, it
// is first removed from its current parent element.
func (e *Element) InsertChild(ex Token, t Token) {
if t.Parent() != nil {
t.Parent().RemoveChild(t)
}
t.setParent(e)
for i, c := range e.Child {
if c == ex {
e.Child = append(e.Child, nil)
copy(e.Child[i+1:], e.Child[i:])
e.Child[i] = t
return
}
}
e.addChild(t)
}
// RemoveChild attempts to remove the token t from element e's list of
// children. If the token t is a child of e, then it is returned. Otherwise,
// nil is returned.
func (e *Element) RemoveChild(t Token) Token {
for i, c := range e.Child {
if c == t {
e.Child = append(e.Child[:i], e.Child[i+1:]...)
c.setParent(nil)
return t
}
}
return nil
}
// ReadFrom reads XML from the reader r and stores the result as a new child
// of element e.
func (e *Element) readFrom(ri io.Reader, settings ReadSettings) (n int64, err error) {
r := newCountReader(ri)
dec := xml.NewDecoder(r)
dec.CharsetReader = settings.CharsetReader
dec.Strict = !settings.Permissive
var stack stack
stack.push(e)
for {
t, err := dec.RawToken()
switch {
case err == io.EOF:
return r.bytes, nil
case err != nil:
return r.bytes, err
case stack.empty():
return r.bytes, ErrXML
}
top := stack.peek().(*Element)
switch t := t.(type) {
case xml.StartElement:
e := newElement(t.Name.Space, t.Name.Local, top)
for _, a := range t.Attr {
e.createAttr(a.Name.Space, a.Name.Local, a.Value)
}
stack.push(e)
case xml.EndElement:
stack.pop()
case xml.CharData:
data := string(t)
newCharData(data, isWhitespace(data), top)
case xml.Comment:
newComment(string(t), top)
case xml.Directive:
newDirective(string(t), top)
case xml.ProcInst:
newProcInst(t.Target, string(t.Inst), top)
}
}
}
// SelectAttr finds an element attribute matching the requested key and
// returns it if found. The key may be prefixed by a namespace and a colon.
func (e *Element) SelectAttr(key string) *Attr {
space, skey := spaceDecompose(key)
for i, a := range e.Attr {
if spaceMatch(space, a.Space) && skey == a.Key {
return &e.Attr[i]
}
}
return nil
}
// SelectAttrValue finds an element attribute matching the requested key and
// returns its value if found. The key may be prefixed by a namespace and a
// colon. If the key is not found, the dflt value is returned instead.
func (e *Element) SelectAttrValue(key, dflt string) string {
space, skey := spaceDecompose(key)
for _, a := range e.Attr {
if spaceMatch(space, a.Space) && skey == a.Key {
return a.Value
}
}
return dflt
}
// ChildElements returns all elements that are children of element e.
func (e *Element) ChildElements() []*Element {
var elements []*Element
for _, t := range e.Child {
if c, ok := t.(*Element); ok {
elements = append(elements, c)
}
}
return elements
}
// SelectElement returns the first child element with the given tag. The tag
// may be prefixed by a namespace and a colon.
func (e *Element) SelectElement(tag string) *Element {
space, stag := spaceDecompose(tag)
for _, t := range e.Child {
if c, ok := t.(*Element); ok && spaceMatch(space, c.Space) && stag == c.Tag {
return c
}
}
return nil
}
// SelectElements returns a slice of all child elements with the given tag.
// The tag may be prefixed by a namespace and a colon.
func (e *Element) SelectElements(tag string) []*Element {
space, stag := spaceDecompose(tag)
var elements []*Element
for _, t := range e.Child {
if c, ok := t.(*Element); ok && spaceMatch(space, c.Space) && stag == c.Tag {
elements = append(elements, c)
}
}
return elements
}
// FindElement returns the first element matched by the XPath-like path
// string. Panics if an invalid path string is supplied.
func (e *Element) FindElement(path string) *Element {
return e.FindElementPath(MustCompilePath(path))
}
// FindElementPath returns the first element matched by the given Path object.
func (e *Element) FindElementPath(path Path) *Element {
p := newPather()
elements := p.traverse(e, path)
switch {
case len(elements) > 0:
return elements[0]
default:
return nil
}
}
// FindElements returns a slice of elements matched by the XPath-like path
// string. Panics if an invalid path string is supplied.
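// For example, e.FindElements(".//book/title") returns the title children of
// every descendant book element.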
func (e *Element) FindElements(path string) []*Element {
return e.FindElementsPath(MustCompilePath(path))
}
// FindElementsPath returns a slice of elements matched by the Path object.
func (e *Element) FindElementsPath(path Path) []*Element {
p := newPather()
return p.traverse(e, path)
}
// indent recursively inserts proper indentation between an
// XML element's child tokens.
func (e *Element) indent(depth int, indent indentFunc) {
e.stripIndent()
n := len(e.Child)
if n == 0 {
return
}
oldChild := e.Child
e.Child = make([]Token, 0, n*2+1)
isCharData, firstNonCharData := false, true
for _, c := range oldChild {
// Insert CR+indent before child if it's not character data.
// Exceptions: when it's the first non-character-data child, or when
// the child is at root depth.
_, isCharData = c.(*CharData)
if !isCharData {
if !firstNonCharData || depth > 0 {
newCharData(indent(depth), true, e)
}
firstNonCharData = false
}
e.addChild(c)
// Recursively process child elements.
if ce, ok := c.(*Element); ok {
ce.indent(depth+1, indent)
}
}
// Insert CR+indent before the element's end tag.
if !isCharData {
if !firstNonCharData || depth > 0 {
newCharData(indent(depth-1), true, e)
}
}
}
// stripIndent removes any previously inserted indentation.
func (e *Element) stripIndent() {
// Count the number of non-indent child tokens
n := len(e.Child)
for _, c := range e.Child {
if cd, ok := c.(*CharData); ok && cd.whitespace {
n--
}
}
if n == len(e.Child) {
return
}
// Strip out indent CharData
newChild := make([]Token, n)
j := 0
for _, c := range e.Child {
if cd, ok := c.(*CharData); ok && cd.whitespace {
continue
}
newChild[j] = c
j++
}
e.Child = newChild
}
// dup duplicates the element.
func (e *Element) dup(parent *Element) Token {
ne := &Element{
Space: e.Space,
Tag: e.Tag,
Attr: make([]Attr, len(e.Attr)),
Child: make([]Token, len(e.Child)),
parent: parent,
}
for i, t := range e.Child {
ne.Child[i] = t.dup(ne)
}
for i, a := range e.Attr {
ne.Attr[i] = a
}
return ne
}
// Parent returns the element token's parent element, or nil if it has no
// parent.
func (e *Element) Parent() *Element {
return e.parent
}
// setParent replaces the element token's parent.
func (e *Element) setParent(parent *Element) {
e.parent = parent
}
// writeTo serializes the element to the writer w.
func (e *Element) writeTo(w *bufio.Writer, s *WriteSettings) {
w.WriteByte('<')
if e.Space != "" {
w.WriteString(e.Space)
w.WriteByte(':')
}
w.WriteString(e.Tag)
for _, a := range e.Attr {
w.WriteByte(' ')
a.writeTo(w, s)
}
if len(e.Child) > 0 {
w.WriteString(">")
for _, c := range e.Child {
c.writeTo(w, s)
}
w.Write([]byte{'<', '/'})
if e.Space != "" {
w.WriteString(e.Space)
w.WriteByte(':')
}
w.WriteString(e.Tag)
w.WriteByte('>')
} else {
if s.CanonicalEndTags {
w.Write([]byte{'>', '<', '/'})
if e.Space != "" {
w.WriteString(e.Space)
w.WriteByte(':')
}
w.WriteString(e.Tag)
w.WriteByte('>')
} else {
w.Write([]byte{'/', '>'})
}
}
}
// addChild adds a child token to the element e.
func (e *Element) addChild(t Token) {
e.Child = append(e.Child, t)
}
// CreateAttr creates an attribute and adds it to element e. The key may be
// prefixed by a namespace and a colon. If an attribute with the key already
// exists, its value is replaced.
func (e *Element) CreateAttr(key, value string) *Attr {
space, skey := spaceDecompose(key)
return e.createAttr(space, skey, value)
}
// createAttr is a helper function that creates attributes.
func (e *Element) createAttr(space, key, value string) *Attr {
for i, a := range e.Attr {
if space == a.Space && key == a.Key {
e.Attr[i].Value = value
return &e.Attr[i]
}
}
a := Attr{space, key, value}
e.Attr = append(e.Attr, a)
return &e.Attr[len(e.Attr)-1]
}
// RemoveAttr removes and returns the first attribute of the element whose key
// matches the given key. The key may be prefixed by a namespace and a colon.
// If no matching attribute exists, nil is returned.
func (e *Element) RemoveAttr(key string) *Attr {
space, skey := spaceDecompose(key)
for i, a := range e.Attr {
if space == a.Space && skey == a.Key {
e.Attr = append(e.Attr[0:i], e.Attr[i+1:]...)
return &a
}
}
return nil
}
var xmlReplacerNormal = strings.NewReplacer(
"&", "&amp;",
"<", "&lt;",
">", "&gt;",
"'", "&apos;",
`"`, "&quot;",
)
var xmlReplacerCanonicalText = strings.NewReplacer(
"&", "&amp;",
"<", "&lt;",
">", "&gt;",
"\r", "&#xD;",
)
var xmlReplacerCanonicalAttrVal = strings.NewReplacer(
"&", "&amp;",
"<", "&lt;",
`"`, "&quot;",
"\t", "&#x9;",
"\n", "&#xA;",
"\r", "&#xD;",
)
// writeTo serializes the attribute to the writer.
func (a *Attr) writeTo(w *bufio.Writer, s *WriteSettings) {
if a.Space != "" {
w.WriteString(a.Space)
w.WriteByte(':')
}
w.WriteString(a.Key)
w.WriteString(`="`)
var r *strings.Replacer
if s.CanonicalAttrVal {
r = xmlReplacerCanonicalAttrVal
} else {
r = xmlReplacerNormal
}
w.WriteString(r.Replace(a.Value))
w.WriteByte('"')
}
// NewCharData creates a parentless XML character data entity.
func NewCharData(data string) *CharData {
return newCharData(data, false, nil)
}
// newCharData creates an XML character data entity and binds it to a parent
// element. If parent is nil, the CharData token remains unbound.
func newCharData(data string, whitespace bool, parent *Element) *CharData {
c := &CharData{
Data: data,
whitespace: whitespace,
parent: parent,
}
if parent != nil {
parent.addChild(c)
}
return c
}
// CreateCharData creates an XML character data entity and adds it as a child
// of element e.
func (e *Element) CreateCharData(data string) *CharData {
return newCharData(data, false, e)
}
// dup duplicates the character data.
func (c *CharData) dup(parent *Element) Token {
return &CharData{
Data: c.Data,
whitespace: c.whitespace,
parent: parent,
}
}
// Parent returns the character data token's parent element, or nil if it has
// no parent.
func (c *CharData) Parent() *Element {
return c.parent
}
// setParent replaces the character data token's parent.
func (c *CharData) setParent(parent *Element) {
c.parent = parent
}
// writeTo serializes the character data entity to the writer.
func (c *CharData) writeTo(w *bufio.Writer, s *WriteSettings) {
var r *strings.Replacer
if s.CanonicalText {
r = xmlReplacerCanonicalText
} else {
r = xmlReplacerNormal
}
w.WriteString(r.Replace(c.Data))
}
// NewComment creates a parentless XML comment.
func NewComment(comment string) *Comment {
return newComment(comment, nil)
}
// newComment creates an XML comment and binds it to a parent element. If
// parent is nil, the Comment remains unbound.
func newComment(comment string, parent *Element) *Comment {
c := &Comment{
Data: comment,
parent: parent,
}
if parent != nil {
parent.addChild(c)
}
return c
}
// CreateComment creates an XML comment and adds it as a child of element e.
func (e *Element) CreateComment(comment string) *Comment {
return newComment(comment, e)
}
// dup duplicates the comment.
func (c *Comment) dup(parent *Element) Token {
return &Comment{
Data: c.Data,
parent: parent,
}
}
// Parent returns the comment token's parent element, or nil if it has no parent.
func (c *Comment) Parent() *Element {
return c.parent
}
// setParent replaces the comment token's parent.
func (c *Comment) setParent(parent *Element) {
c.parent = parent
}
// writeTo serializes the comment to the writer.
func (c *Comment) writeTo(w *bufio.Writer, s *WriteSettings) {
w.WriteString("<!--")
w.WriteString(c.Data)
w.WriteString("-->")
}
// NewDirective creates a parentless XML directive.
func NewDirective(data string) *Directive {
return newDirective(data, nil)
}
// newDirective creates an XML directive and binds it to a parent element. If
// parent is nil, the Directive remains unbound.
func newDirective(data string, parent *Element) *Directive {
d := &Directive{
Data: data,
parent: parent,
}
if parent != nil {
parent.addChild(d)
}
return d
}
// CreateDirective creates an XML directive and adds it as the last child of
// element e.
func (e *Element) CreateDirective(data string) *Directive {
return newDirective(data, e)
}
// dup duplicates the directive.
func (d *Directive) dup(parent *Element) Token {
return &Directive{
Data: d.Data,
parent: parent,
}
}
// Parent returns the directive token's parent element, or nil if it has no
// parent.
func (d *Directive) Parent() *Element {
return d.parent
}
// setParent replaces the directive token's parent.
func (d *Directive) setParent(parent *Element) {
d.parent = parent
}
// writeTo serializes the XML directive to the writer.
func (d *Directive) writeTo(w *bufio.Writer, s *WriteSettings) {
w.WriteString("<!")
w.WriteString(d.Data)
w.WriteString(">")
}
// NewProcInst creates a parentless XML processing instruction.
func NewProcInst(target, inst string) *ProcInst {
return newProcInst(target, inst, nil)
}
// newProcInst creates an XML processing instruction and binds it to a parent
// element. If parent is nil, the ProcInst remains unbound.
func newProcInst(target, inst string, parent *Element) *ProcInst {
p := &ProcInst{
Target: target,
Inst: inst,
parent: parent,
}
if parent != nil {
parent.addChild(p)
}
return p
}
// CreateProcInst creates a processing instruction and adds it as a child of
// element e.
func (e *Element) CreateProcInst(target, inst string) *ProcInst {
return newProcInst(target, inst, e)
}
// dup duplicates the procinst.
func (p *ProcInst) dup(parent *Element) Token {
return &ProcInst{
Target: p.Target,
Inst: p.Inst,
parent: parent,
}
}
// Parent returns the processing instruction token's parent element, or nil if
// it has no parent.
func (p *ProcInst) Parent() *Element {
return p.parent
}
// setParent replaces the processing instruction token's parent.
func (p *ProcInst) setParent(parent *Element) {
p.parent = parent
}
// writeTo serializes the processing instruction to the writer.
func (p *ProcInst) writeTo(w *bufio.Writer, s *WriteSettings) {
w.WriteString("<?")
w.WriteString(p.Target)
if p.Inst != "" {
w.WriteByte(' ')
w.WriteString(p.Inst)
}
w.WriteString("?>")
}

188
vendor/github.com/beevik/etree/helpers.go generated vendored Normal file
View file

@@ -0,0 +1,188 @@
// Copyright 2015 Brett Vickers.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package etree
import (
"io"
"strings"
)
// A simple stack
type stack struct {
data []interface{}
}
func (s *stack) empty() bool {
return len(s.data) == 0
}
func (s *stack) push(value interface{}) {
s.data = append(s.data, value)
}
func (s *stack) pop() interface{} {
value := s.data[len(s.data)-1]
s.data[len(s.data)-1] = nil
s.data = s.data[:len(s.data)-1]
return value
}
func (s *stack) peek() interface{} {
return s.data[len(s.data)-1]
}
// A fifo is a simple first-in-first-out queue.
type fifo struct {
data []interface{}
head, tail int
}
func (f *fifo) add(value interface{}) {
if f.len()+1 >= len(f.data) {
f.grow()
}
f.data[f.tail] = value
if f.tail++; f.tail == len(f.data) {
f.tail = 0
}
}
func (f *fifo) remove() interface{} {
value := f.data[f.head]
f.data[f.head] = nil
if f.head++; f.head == len(f.data) {
f.head = 0
}
return value
}
func (f *fifo) len() int {
if f.tail >= f.head {
return f.tail - f.head
}
return len(f.data) - f.head + f.tail
}
func (f *fifo) grow() {
c := len(f.data) * 2
if c == 0 {
c = 4
}
buf, count := make([]interface{}, c), f.len()
if f.tail >= f.head {
copy(buf[0:count], f.data[f.head:f.tail])
} else {
hindex := len(f.data) - f.head
copy(buf[0:hindex], f.data[f.head:])
copy(buf[hindex:count], f.data[:f.tail])
}
f.data, f.head, f.tail = buf, 0, count
}
// countReader implements a proxy reader that counts the number of
// bytes read from its encapsulated reader.
type countReader struct {
r io.Reader
bytes int64
}
func newCountReader(r io.Reader) *countReader {
return &countReader{r: r}
}
func (cr *countReader) Read(p []byte) (n int, err error) {
b, err := cr.r.Read(p)
cr.bytes += int64(b)
return b, err
}
// countWriter implements a proxy writer that counts the number of
// bytes written by its encapsulated writer.
type countWriter struct {
w io.Writer
bytes int64
}
func newCountWriter(w io.Writer) *countWriter {
return &countWriter{w: w}
}
func (cw *countWriter) Write(p []byte) (n int, err error) {
b, err := cw.w.Write(p)
cw.bytes += int64(b)
return b, err
}
// isWhitespace returns true if the string contains only
// whitespace characters.
func isWhitespace(s string) bool {
for i := 0; i < len(s); i++ {
if c := s[i]; c != ' ' && c != '\t' && c != '\n' && c != '\r' {
return false
}
}
return true
}
// spaceMatch returns true if namespace a is the empty string
// or if namespace a equals namespace b.
func spaceMatch(a, b string) bool {
switch {
case a == "":
return true
default:
return a == b
}
}
// spaceDecompose breaks a namespace:tag identifier at the ':'
// and returns the two parts.
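// For example, spaceDecompose("D:href") returns ("D", "href"), and
// spaceDecompose("href") returns ("", "href").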
func spaceDecompose(str string) (space, key string) {
colon := strings.IndexByte(str, ':')
if colon == -1 {
return "", str
}
return str[:colon], str[colon+1:]
}
// Strings used by crIndent
const (
crsp = "\n "
crtab = "\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t"
)
// crIndent returns a carriage return followed by n copies of the
// first non-CR character in the source string.
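// For example, crIndent(2, crsp) returns "\n" followed by two spaces.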
func crIndent(n int, source string) string {
switch {
case n < 0:
return source[:1]
case n < len(source):
return source[:n+1]
default:
return source + strings.Repeat(source[1:2], n-len(source)+1)
}
}
// nextIndex returns the index of the next occurrence of sep in s,
// starting from offset. It returns -1 if the sep string is not found.
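// For example, nextIndex("a/b/c", "/", 2) returns 3.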
func nextIndex(s, sep string, offset int) int {
switch i := strings.Index(s[offset:], sep); i {
case -1:
return -1
default:
return offset + i
}
}
// isInteger returns true if the string s contains an integer.
func isInteger(s string) bool {
for i := 0; i < len(s); i++ {
if (s[i] < '0' || s[i] > '9') && !(i == 0 && s[i] == '-') {
return false
}
}
return true
}

516
vendor/github.com/beevik/etree/path.go generated vendored Normal file
View file

@@ -0,0 +1,516 @@
// Copyright 2015 Brett Vickers.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package etree
import (
"strconv"
"strings"
)
/*
A Path is an object that represents an optimized version of an
XPath-like search string. Although path strings are XPath-like,
only the following limited syntax is supported:
. Selects the current element
.. Selects the parent of the current element
* Selects all child elements
// Selects all descendants of the current element
tag Selects all child elements with the given tag
[#] Selects the element of the given index (1-based,
negative starts from the end)
[@attrib] Selects all elements with the given attribute
[@attrib='val'] Selects all elements with the given attribute set to val
[tag] Selects all elements with a child element named tag
[tag='val'] Selects all elements with a child element named tag
and text matching val
[text()] Selects all elements with non-empty text
[text()='val'] Selects all elements whose text matches val
Examples:
Select the title elements of all descendant book elements having a
'category' attribute of 'WEB':
//book[@category='WEB']/title
Select the first book element with a title child containing the text
'Great Expectations':
.//book[title='Great Expectations'][1]
Starting from the current element, select all children of book elements
with an attribute 'language' set to 'english':
./book/*[@language='english']
Starting from the current element, select all children of book elements
containing the text 'special':
./book/*[text()='special']
Select all descendant book elements whose title element has an attribute
'language' set to 'french':
//book/title[@language='french']/..
*/
type Path struct {
segments []segment
}
// ErrPath is returned by path functions when an invalid etree path is provided.
type ErrPath string
// Error returns the string describing a path error.
func (err ErrPath) Error() string {
return "etree: " + string(err)
}
// CompilePath creates an optimized version of an XPath-like string that
// can be used to query elements in an element tree.
func CompilePath(path string) (Path, error) {
var comp compiler
segments := comp.parsePath(path)
if comp.err != ErrPath("") {
return Path{nil}, comp.err
}
return Path{segments}, nil
}
// MustCompilePath creates an optimized version of an XPath-like string that
// can be used to query elements in an element tree. Panics if an error
// occurs. Use this function to create Paths when you know the path is
// valid (i.e., if it's hard-coded).
func MustCompilePath(path string) Path {
p, err := CompilePath(path)
if err != nil {
panic(err)
}
return p
}
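// A minimal usage sketch (illustrative only; `doc` is assumed to be an
// Element or Document that already holds parsed XML):
//
//	path := MustCompilePath("//book[@category='WEB']/title")
//	for _, title := range doc.FindElementsPath(path) {
//		println(title.Text())
//	}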
// A segment is a portion of a path between "/" characters.
// It contains one selector and zero or more [filters].
type segment struct {
sel selector
filters []filter
}
func (seg *segment) apply(e *Element, p *pather) {
seg.sel.apply(e, p)
for _, f := range seg.filters {
f.apply(p)
}
}
// A selector selects XML elements for consideration by the
// path traversal.
type selector interface {
apply(e *Element, p *pather)
}
// A filter pares down a list of candidate XML elements based
// on a path filter in [brackets].
type filter interface {
apply(p *pather)
}
// A pather is a helper object that traverses an element tree using
// a Path object. It collects and deduplicates all elements matching
// the path query.
type pather struct {
queue fifo
results []*Element
inResults map[*Element]bool
candidates []*Element
scratch []*Element // used by filters
}
// A node represents an element and the remaining path segments that
// should be applied against it by the pather.
type node struct {
e *Element
segments []segment
}
func newPather() *pather {
return &pather{
results: make([]*Element, 0),
inResults: make(map[*Element]bool),
candidates: make([]*Element, 0),
scratch: make([]*Element, 0),
}
}
// traverse follows the path from the element e, collecting
// and then returning all elements that match the path's selectors
// and filters.
func (p *pather) traverse(e *Element, path Path) []*Element {
for p.queue.add(node{e, path.segments}); p.queue.len() > 0; {
p.eval(p.queue.remove().(node))
}
return p.results
}
// eval evaluates the current path node by applying the remaining
// path's selector rules against the node's element.
func (p *pather) eval(n node) {
p.candidates = p.candidates[0:0]
seg, remain := n.segments[0], n.segments[1:]
seg.apply(n.e, p)
if len(remain) == 0 {
for _, c := range p.candidates {
if in := p.inResults[c]; !in {
p.inResults[c] = true
p.results = append(p.results, c)
}
}
} else {
for _, c := range p.candidates {
p.queue.add(node{c, remain})
}
}
}
// A compiler generates a compiled path from a path string.
type compiler struct {
err ErrPath
}
// parsePath parses an XPath-like string describing a path
// through an element tree and returns a slice of segment
// descriptors.
func (c *compiler) parsePath(path string) []segment {
// If path starts or ends with //, fix it
if strings.HasPrefix(path, "//") {
path = "." + path
}
if strings.HasSuffix(path, "//") {
path = path + "*"
}
// Paths cannot be absolute
if strings.HasPrefix(path, "/") {
c.err = ErrPath("paths cannot be absolute.")
return nil
}
// Split path into segment objects
var segments []segment
for _, s := range splitPath(path) {
segments = append(segments, c.parseSegment(s))
if c.err != ErrPath("") {
break
}
}
return segments
}
func splitPath(path string) []string {
pieces := make([]string, 0)
start := 0
inquote := false
for i := 0; i+1 <= len(path); i++ {
if path[i] == '\'' {
inquote = !inquote
} else if path[i] == '/' && !inquote {
pieces = append(pieces, path[start:i])
start = i + 1
}
}
return append(pieces, path[start:])
}
// parseSegment parses a path segment between / characters.
func (c *compiler) parseSegment(path string) segment {
pieces := strings.Split(path, "[")
seg := segment{
sel: c.parseSelector(pieces[0]),
filters: make([]filter, 0),
}
for i := 1; i < len(pieces); i++ {
fpath := pieces[i]
if fpath[len(fpath)-1] != ']' {
c.err = ErrPath("path has invalid filter [brackets].")
break
}
seg.filters = append(seg.filters, c.parseFilter(fpath[:len(fpath)-1]))
}
return seg
}
// parseSelector parses a selector at the start of a path segment.
func (c *compiler) parseSelector(path string) selector {
switch path {
case ".":
return new(selectSelf)
case "..":
return new(selectParent)
case "*":
return new(selectChildren)
case "":
return new(selectDescendants)
default:
return newSelectChildrenByTag(path)
}
}
// parseFilter parses a path filter contained within [brackets].
func (c *compiler) parseFilter(path string) filter {
if len(path) == 0 {
c.err = ErrPath("path contains an empty filter expression.")
return nil
}
// Filter contains [@attr='val'], [text()='val'], or [tag='val']?
eqindex := strings.Index(path, "='")
if eqindex >= 0 {
rindex := nextIndex(path, "'", eqindex+2)
if rindex != len(path)-1 {
c.err = ErrPath("path has mismatched filter quotes.")
return nil
}
switch {
case path[0] == '@':
return newFilterAttrVal(path[1:eqindex], path[eqindex+2:rindex])
case strings.HasPrefix(path, "text()"):
return newFilterTextVal(path[eqindex+2 : rindex])
default:
return newFilterChildText(path[:eqindex], path[eqindex+2:rindex])
}
}
// Filter contains [@attr], [N], [tag] or [text()]
switch {
case path[0] == '@':
return newFilterAttr(path[1:])
case path == "text()":
return newFilterText()
case isInteger(path):
pos, _ := strconv.Atoi(path)
switch {
case pos > 0:
return newFilterPos(pos - 1)
default:
return newFilterPos(pos)
}
default:
return newFilterChild(path)
}
}
// selectSelf selects the current element into the candidate list.
type selectSelf struct{}
func (s *selectSelf) apply(e *Element, p *pather) {
p.candidates = append(p.candidates, e)
}
// selectParent selects the element's parent into the candidate list.
type selectParent struct{}
func (s *selectParent) apply(e *Element, p *pather) {
if e.parent != nil {
p.candidates = append(p.candidates, e.parent)
}
}
// selectChildren selects the element's child elements into the
// candidate list.
type selectChildren struct{}
func (s *selectChildren) apply(e *Element, p *pather) {
for _, c := range e.Child {
if c, ok := c.(*Element); ok {
p.candidates = append(p.candidates, c)
}
}
}
// selectDescendants selects all descendant child elements
// of the element into the candidate list.
type selectDescendants struct{}
func (s *selectDescendants) apply(e *Element, p *pather) {
var queue fifo
for queue.add(e); queue.len() > 0; {
e := queue.remove().(*Element)
p.candidates = append(p.candidates, e)
for _, c := range e.Child {
if c, ok := c.(*Element); ok {
queue.add(c)
}
}
}
}
// selectChildrenByTag selects into the candidate list all child
// elements of the element having the specified tag.
type selectChildrenByTag struct {
space, tag string
}
func newSelectChildrenByTag(path string) *selectChildrenByTag {
s, l := spaceDecompose(path)
return &selectChildrenByTag{s, l}
}
func (s *selectChildrenByTag) apply(e *Element, p *pather) {
for _, c := range e.Child {
if c, ok := c.(*Element); ok && spaceMatch(s.space, c.Space) && s.tag == c.Tag {
p.candidates = append(p.candidates, c)
}
}
}
// filterPos filters the candidate list, keeping only the
// candidate at the specified index.
type filterPos struct {
index int
}
func newFilterPos(pos int) *filterPos {
return &filterPos{pos}
}
func (f *filterPos) apply(p *pather) {
if f.index >= 0 {
if f.index < len(p.candidates) {
p.scratch = append(p.scratch, p.candidates[f.index])
}
} else {
if -f.index <= len(p.candidates) {
p.scratch = append(p.scratch, p.candidates[len(p.candidates)+f.index])
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}
// filterAttr filters the candidate list for elements having
// the specified attribute.
type filterAttr struct {
space, key string
}
func newFilterAttr(str string) *filterAttr {
s, l := spaceDecompose(str)
return &filterAttr{s, l}
}
func (f *filterAttr) apply(p *pather) {
for _, c := range p.candidates {
for _, a := range c.Attr {
if spaceMatch(f.space, a.Space) && f.key == a.Key {
p.scratch = append(p.scratch, c)
break
}
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}
// filterAttrVal filters the candidate list for elements having
// the specified attribute with the specified value.
type filterAttrVal struct {
space, key, val string
}
func newFilterAttrVal(str, value string) *filterAttrVal {
s, l := spaceDecompose(str)
return &filterAttrVal{s, l, value}
}
func (f *filterAttrVal) apply(p *pather) {
for _, c := range p.candidates {
for _, a := range c.Attr {
if spaceMatch(f.space, a.Space) && f.key == a.Key && f.val == a.Value {
p.scratch = append(p.scratch, c)
break
}
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}
// filterText filters the candidate list for elements having text.
type filterText struct{}
func newFilterText() *filterText {
return &filterText{}
}
func (f *filterText) apply(p *pather) {
for _, c := range p.candidates {
if c.Text() != "" {
p.scratch = append(p.scratch, c)
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}
// filterTextVal filters the candidate list for elements having
// text equal to the specified value.
type filterTextVal struct {
val string
}
func newFilterTextVal(value string) *filterTextVal {
return &filterTextVal{value}
}
func (f *filterTextVal) apply(p *pather) {
for _, c := range p.candidates {
if c.Text() == f.val {
p.scratch = append(p.scratch, c)
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}
// filterChild filters the candidate list for elements having
// a child element with the specified tag.
type filterChild struct {
space, tag string
}
func newFilterChild(str string) *filterChild {
s, l := spaceDecompose(str)
return &filterChild{s, l}
}
func (f *filterChild) apply(p *pather) {
for _, c := range p.candidates {
for _, cc := range c.Child {
if cc, ok := cc.(*Element); ok &&
spaceMatch(f.space, cc.Space) &&
f.tag == cc.Tag {
p.scratch = append(p.scratch, c)
}
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}
// filterChildText filters the candidate list for elements having
// a child element with the specified tag and text.
type filterChildText struct {
space, tag, text string
}
func newFilterChildText(str, text string) *filterChildText {
s, l := spaceDecompose(str)
return &filterChildText{s, l, text}
}
func (f *filterChildText) apply(p *pather) {
for _, c := range p.candidates {
for _, cc := range c.Child {
if cc, ok := cc.(*Element); ok &&
spaceMatch(f.space, cc.Space) &&
f.tag == cc.Tag &&
f.text == cc.Text() {
p.scratch = append(p.scratch, c)
}
}
}
p.candidates, p.scratch = p.scratch, p.candidates[0:0]
}

17
vendor/github.com/laurent22/ical-go/ical/calendar.go generated vendored Normal file
View file

@@ -0,0 +1,17 @@
package ical
type Calendar struct {
Items []CalendarEvent
}
func (this *Calendar) Serialize() string {
serializer := calSerializer{
calendar: this,
buffer: new(strBuffer),
}
return serializer.serialize()
}
func (this *Calendar) ToICS() string {
return this.Serialize()
}
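// A minimal usage sketch (illustrative only; the field values below are made up):
//
//	start := time.Date(2018, 4, 1, 10, 0, 0, 0, time.UTC)
//	end := start.Add(time.Hour)
//	cal := Calendar{Items: []CalendarEvent{{
//		Id:      "event-1",
//		Summary: "Team meeting",
//		StartAt: &start,
//		EndAt:   &end,
//	}}}
//	ics := cal.ToICS() // yields a BEGIN:VCALENDAR ... END:VCALENDAR block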

View file

@@ -0,0 +1,50 @@
package ical
import (
"time"
)
type CalendarEvent struct {
Id string
Summary string
Description string
Location string
CreatedAtUTC *time.Time
ModifiedAtUTC *time.Time
StartAt *time.Time
EndAt *time.Time
}
func (this *CalendarEvent) StartAtUTC() *time.Time {
return inUTC(this.StartAt)
}
func (this *CalendarEvent) EndAtUTC() *time.Time {
return inUTC(this.EndAt)
}
func (this *CalendarEvent) Serialize() string {
buffer := new(strBuffer)
return this.serializeWithBuffer(buffer)
}
func (this *CalendarEvent) ToICS() string {
return this.Serialize()
}
func (this *CalendarEvent) serializeWithBuffer(buffer *strBuffer) string {
serializer := calEventSerializer{
event: this,
buffer: buffer,
}
return serializer.serialize()
}
func inUTC(t *time.Time) *time.Time {
if t == nil {
return nil
}
tUTC := t.UTC()
return &tUTC
}

18
vendor/github.com/laurent22/ical-go/ical/lib.go generated vendored Normal file
View file

@@ -0,0 +1,18 @@
package ical
import (
"fmt"
"bytes"
)
type strBuffer struct {
buffer bytes.Buffer
}
func (b *strBuffer) Write(format string, elem ...interface{}) {
b.buffer.WriteString(fmt.Sprintf(format, elem...))
}
func (b *strBuffer) String() string {
return b.buffer.String()
}

168
vendor/github.com/laurent22/ical-go/ical/node.go generated vendored Normal file
View file

@@ -0,0 +1,168 @@
package ical
import (
"time"
"regexp"
"strconv"
)
type Node struct {
Name string
Value string
Type int // 1 = Object, 0 = Name/Value
Parameters map[string]string
Children []*Node
}
func (this *Node) ChildrenByName(name string) []*Node {
var output []*Node
for _, child := range this.Children {
if child.Name == name {
output = append(output, child)
}
}
return output
}
func (this *Node) ChildByName(name string) *Node {
for _, child := range this.Children {
if child.Name == name {
return child
}
}
return nil
}
func (this *Node) PropString(name string, defaultValue string) string {
for _, child := range this.Children {
if child.Name == name {
return child.Value
}
}
return defaultValue
}
func (this *Node) PropDate(name string, defaultValue time.Time) time.Time {
node := this.ChildByName(name)
if node == nil { return defaultValue }
tzid := node.Parameter("TZID", "")
var output time.Time
var err error
if tzid != "" {
// use a separate variable for the location error so that the parse error
// below is not shadowed and gets checked after the if/else block
loc, locErr := time.LoadLocation(tzid)
if locErr != nil { panic(locErr) }
output, err = time.ParseInLocation("20060102T150405", node.Value, loc)
} else {
output, err = time.Parse("20060102T150405Z", node.Value)
}
if err != nil { panic(err) }
return output
}
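// PropDuration parses the named property as an iCalendar duration, recognizing
// only the hour, minute and second components (e.g. "PT1H30M" yields 1h30m0s).
// A missing or non-matching value yields a zero duration.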
func (this *Node) PropDuration(name string) time.Duration {
durStr := this.PropString(name, "")
if durStr == "" {
return time.Duration(0)
}
durRgx := regexp.MustCompile("PT(?:([0-9]+)H)?(?:([0-9]+)M)?(?:([0-9]+)S)?")
matches := durRgx.FindStringSubmatch(durStr)
if len(matches) != 4 {
return time.Duration(0)
}
strToDuration := func(value string) time.Duration {
d := 0
if value != "" {
d, _ = strconv.Atoi(value)
}
return time.Duration(d)
}
hours := strToDuration(matches[1])
min := strToDuration(matches[2])
sec := strToDuration(matches[3])
return hours * time.Hour + min * time.Minute + sec * time.Second
}
func (this *Node) PropInt(name string, defaultValue int) int {
n := this.PropString(name, "")
if n == "" { return defaultValue }
output, err := strconv.Atoi(n)
if err != nil { panic(err) }
return output
}
func (this *Node) DigProperty(propPath... string) (string, bool) {
return this.dig("prop", propPath...)
}
func (this *Node) Parameter(name string, defaultValue string) string {
if len(this.Parameters) <= 0 { return defaultValue }
v, ok := this.Parameters[name]
if !ok { return defaultValue }
return v
}
func (this *Node) DigParameter(paramPath... string) (string, bool) {
return this.dig("param", paramPath...)
}
// Digs a value based on a given value path.
// valueType: can be "param" or "prop".
// valuePath: the path to access the value.
// Returns ("", false) when not found or (value, true) when found.
//
// Example:
// dig("param", "VCALENDAR", "VEVENT", "DTEND", "TYPE") -> It will search for "VCALENDAR" node,
// then a "VEVENT" node, then a "DTEND" note, then finally the "TYPE" param.
func (this *Node) dig(valueType string, valuePath... string) (string, bool) {
current := this
lastIndex := len(valuePath) - 1
for _, v := range valuePath[:lastIndex] {
current = current.ChildByName(v)
if current == nil {
return "", false
}
}
target := valuePath[lastIndex]
value := ""
if valueType == "param" {
value = current.Parameter(target, "")
} else if valueType == "prop" {
value = current.PropString(target, "")
}
if value == "" {
return "", false
}
return value, true
}
func (this *Node) String() string {
s := ""
if this.Type == 1 {
s += "===== " + this.Name
s += "\n"
} else {
s += this.Name
s += ":" + this.Value
s += "\n"
}
for _, child := range this.Children {
s += child.String()
}
if this.Type == 1 {
s += "===== /" + this.Name
s += "\n"
}
return s
}

106
vendor/github.com/laurent22/ical-go/ical/parsers.go generated vendored Normal file
View file

@@ -0,0 +1,106 @@
package ical
import (
"log"
"errors"
"regexp"
"strings"
)
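// ParseCalendar parses raw iCalendar data into a tree of Nodes and returns the
// root node (typically the VCALENDAR object).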
func ParseCalendar(data string) (*Node, error) {
r := regexp.MustCompile("([\r|\t| ]*\n[\r|\t| ]*)+")
lines := r.Split(strings.TrimSpace(data), -1)
node, _, err, _ := parseCalendarNode(lines, 0)
return node, err
}
func parseCalendarNode(lines []string, lineIndex int) (*Node, bool, error, int) {
line := strings.TrimSpace(lines[lineIndex])
_ = log.Println
colonIndex := strings.Index(line, ":")
if colonIndex <= 0 {
return nil, false, errors.New("Invalid value/pair: " + line), lineIndex + 1
}
name := line[0:colonIndex]
splitted := strings.Split(name, ";")
var parameters map[string]string
if len(splitted) >= 2 {
name = splitted[0]
parameters = make(map[string]string)
for i := 1; i < len(splitted); i++ {
p := strings.Split(splitted[i], "=")
if len(p) != 2 { panic("Invalid parameter format: " + name) }
parameters[p[0]] = p[1]
}
}
value := line[colonIndex+1:len(line)]
if name == "BEGIN" {
node := new(Node)
node.Name = value
node.Type = 1
lineIndex = lineIndex + 1
for {
child, finished, _, newLineIndex := parseCalendarNode(lines, lineIndex)
if finished {
return node, false, nil, newLineIndex
} else {
if child != nil {
node.Children = append(node.Children, child)
}
lineIndex = newLineIndex
}
}
} else if name == "END" {
return nil, true, nil, lineIndex + 1
} else {
node := new(Node)
node.Name = name
if name == "DESCRIPTION" || name == "SUMMARY" {
text, newLineIndex := parseTextType(lines, lineIndex)
node.Value = text
node.Parameters = parameters
return node, false, nil, newLineIndex
} else {
node.Value = value
node.Parameters = parameters
return node, false, nil, lineIndex + 1
}
}
panic("Unreachable")
}
func parseTextType(lines []string, lineIndex int) (string, int) {
line := lines[lineIndex]
colonIndex := strings.Index(line, ":")
output := strings.TrimSpace(line[colonIndex+1:len(line)])
lineIndex++
for {
line := lines[lineIndex]
if line == "" || line[0] != ' ' {
return unescapeTextType(output), lineIndex
}
output += line[1:len(line)]
lineIndex++
}
}
func escapeTextType(input string) string {
output := strings.Replace(input, "\\", "\\\\", -1)
output = strings.Replace(output, ";", "\\;", -1)
output = strings.Replace(output, ",", "\\,", -1)
output = strings.Replace(output, "\n", "\\n", -1)
return output
}
func unescapeTextType(s string) string {
s = strings.Replace(s, "\\;", ";", -1)
s = strings.Replace(s, "\\,", ",", -1)
s = strings.Replace(s, "\\n", "\n", -1)
s = strings.Replace(s, "\\\\", "\\", -1)
return s
}

View file

@@ -0,0 +1,9 @@
package ical
const (
VCALENDAR = "VCALENDAR"
VEVENT = "VEVENT"
DTSTART = "DTSTART"
DTEND = "DTEND"
DURATION = "DURATION"
)

121
vendor/github.com/laurent22/ical-go/ical/serializers.go generated vendored Normal file
View file

@@ -0,0 +1,121 @@
package ical
import (
"time"
"strings"
)
type calSerializer struct {
calendar *Calendar
buffer *strBuffer
}
func (this *calSerializer) serialize() string {
this.serializeCalendar()
return strings.TrimSpace(this.buffer.String())
}
func (this *calSerializer) serializeCalendar() {
this.begin()
this.version()
this.items()
this.end()
}
func (this *calSerializer) begin() {
this.buffer.Write("BEGIN:VCALENDAR\n")
}
func (this *calSerializer) end() {
this.buffer.Write("END:VCALENDAR\n")
}
func (this *calSerializer) version() {
this.buffer.Write("VERSION:2.0\n")
}
func (this *calSerializer) items() {
for _, item := range this.calendar.Items {
item.serializeWithBuffer(this.buffer)
}
}
type calEventSerializer struct {
event *CalendarEvent
buffer *strBuffer
}
const (
eventSerializerTimeFormat = "20060102T150405Z"
)
func (this *calEventSerializer) serialize() string {
this.serializeEvent()
return strings.TrimSpace(this.buffer.String())
}
func (this *calEventSerializer) serializeEvent() {
this.begin()
this.uid()
this.created()
this.lastModified()
this.dtstart()
this.dtend()
this.summary()
this.description()
this.location()
this.end()
}
func (this *calEventSerializer) begin() {
this.buffer.Write("BEGIN:VEVENT\n")
}
func (this *calEventSerializer) end() {
this.buffer.Write("END:VEVENT\n")
}
func (this *calEventSerializer) uid() {
this.serializeStringProp("UID", this.event.Id)
}
func (this *calEventSerializer) summary() {
this.serializeStringProp("SUMMARY", this.event.Summary)
}
func (this *calEventSerializer) description() {
this.serializeStringProp("DESCRIPTION", this.event.Description)
}
func (this *calEventSerializer) location() {
this.serializeStringProp("LOCATION", this.event.Location)
}
func (this *calEventSerializer) dtstart() {
this.serializeTimeProp("DTSTART", this.event.StartAtUTC())
}
func (this *calEventSerializer) dtend() {
this.serializeTimeProp("DTEND", this.event.EndAtUTC())
}
func (this *calEventSerializer) created() {
this.serializeTimeProp("CREATED", this.event.CreatedAtUTC)
}
func (this *calEventSerializer) lastModified() {
this.serializeTimeProp("LAST-MODIFIED", this.event.ModifiedAtUTC)
}
func (this *calEventSerializer) serializeStringProp(name, value string) {
if value != "" {
escapedValue := escapeTextType(value)
this.buffer.Write("%s:%s\n", name, escapedValue)
}
}
func (this *calEventSerializer) serializeTimeProp(name string, value *time.Time) {
if value != nil {
this.buffer.Write("%s:%s\n", name, value.Format(eventSerializerTimeFormat))
}
}

94
vendor/github.com/laurent22/ical-go/ical/todo.go generated vendored Normal file
View file

@@ -0,0 +1,94 @@
package ical
// import (
// "time"
// "strconv"
// "strings"
// )
//
// func TodoFromNode(node *Node) Todo {
// if node.Name != "VTODO" { panic("Node is not a VTODO") }
//
// var todo Todo
// todo.SetId(node.PropString("UID", ""))
// todo.SetSummary(node.PropString("SUMMARY", ""))
// todo.SetDescription(node.PropString("DESCRIPTION", ""))
// todo.SetDueDate(node.PropDate("DUE", time.Time{}))
// //todo.SetAlarmDate(this.TimestampBytesToTime(reminderDate))
// todo.SetCreatedDate(node.PropDate("CREATED", time.Time{}))
// todo.SetModifiedDate(node.PropDate("DTSTAMP", time.Time{}))
// todo.SetPriority(node.PropInt("PRIORITY", 0))
// todo.SetPercentComplete(node.PropInt("PERCENT-COMPLETE", 0))
// return todo
// }
//
// type Todo struct {
// CalendarItem
// dueDate time.Time
// }
//
// func (this *Todo) SetDueDate(v time.Time) { this.dueDate = v }
// func (this *Todo) DueDate() time.Time { return this.dueDate }
//
// func (this *Todo) ICalString(target string) string {
// s := "BEGIN:VTODO\n"
//
// if target == "macTodo" {
// status := "NEEDS-ACTION"
// if this.PercentComplete() == 100 {
// status = "COMPLETED"
// }
// s += "STATUS:" + status + "\n"
// }
//
// s += encodeDateProperty("CREATED", this.CreatedDate()) + "\n"
// s += "UID:" + this.Id() + "\n"
// s += "SUMMARY:" + escapeTextType(this.Summary()) + "\n"
// if this.PercentComplete() == 100 && !this.CompletedDate().IsZero() {
// s += encodeDateProperty("COMPLETED", this.CompletedDate()) + "\n"
// }
// s += encodeDateProperty("DTSTAMP", this.ModifiedDate()) + "\n"
// if this.Priority() != 0 {
// s += "PRIORITY:" + strconv.Itoa(this.Priority()) + "\n"
// }
// if this.PercentComplete() != 0 {
// s += "PERCENT-COMPLETE:" + strconv.Itoa(this.PercentComplete()) + "\n"
// }
// if target == "macTodo" {
// s += "SEQUENCE:" + strconv.Itoa(this.Sequence()) + "\n"
// }
// if this.Description() != "" {
// s += "DESCRIPTION:" + encodeTextType(this.Description()) + "\n"
// }
//
// s += "END:VTODO\n"
//
// return s
// }
//
// func encodeDateProperty(name string, t time.Time) string {
// var output string
// zone, _ := t.Zone()
// if zone != "UTC" && zone != "" {
// output = ";TZID=" + zone + ":" + t.Format("20060102T150405")
// } else {
// output = ":" + t.Format("20060102T150405") + "Z"
// }
// return name + output
// }
//
//
// func encodeTextType(s string) string {
// output := ""
// s = escapeTextType(s)
// lineLength := 0
// for _, c := range s {
// if lineLength + len(string(c)) > 75 {
// output += "\n "
// lineLength = 1
// }
// output += string(c)
// lineLength += len(string(c))
// }
// return output
// }

2
vendor/github.com/samedi/caldav-go/.gitignore generated vendored Normal file
View file

@@ -0,0 +1,2 @@
test-data/
vendor

69
vendor/github.com/samedi/caldav-go/CHANGELOG.md generated vendored Normal file
View file

@@ -0,0 +1,69 @@
# CHANGELOG
v3.0.0
-----------
2017-08-01 Daniel Ferraz <d.ferrazm@gmail.com>
Main change:
Add two ways to get a resource from the storage: shallow or not.
`data.GetShallowResource`: means that, if it's a collection resource, it will not include its child VEVENTs in the ICS data.
This is used throughout the places where the children don't matter.
`data.GetResource`: means that the child VEVENTs will be included in the returned ICS content data for collection resources.
This is used when sending a GET request to fetch a specific resource and expecting its full ICS data in response.
Other changes:
* Removed the need to pass the useless `writer http.ResponseWriter` parameter when calling the `caldav.HandleRequest` function.
* Added a `caldav.HandleRequestWithStorage` function that makes it easy to pass a custom storage to be used and handle the request with a single function call.
v2.0.0
-----------
2017-05-10 Daniel Ferraz <d.ferrazm@gmail.com>
All commits squashed and LICENSE updated to release as OSS in github.
Feature-wise it remains the same.
v1.0.1
-----------
2017-01-25 Daniel Ferraz <d.ferrazm@gmail.com>
Escape the contents in `<calendar-data>` and `<displayname>` in the `multistatus` XML responses. This fixes possible bugs
related to special characters (e.g. &) in the XML multistatus responses that could break the encoding.
v1.0.0
-----------
2017-01-18 Daniel Ferraz <d.ferrazm@gmail.com>
Main feature:
* Handles the `Prefer` header on PROPFIND and REPORT requests (defined in this [draft/proposal](https://tools.ietf.org/html/draft-murchison-webdav-prefer-05)). Useful to shrink down potentially big and verbose responses when the client asks for it. Ex: the current iOS calendar client uses this feature on its PROPFIND requests.
Other changes:
* Added the `handlers.Response` to allow clients of the lib to interact with the generated response before being written/sent back to the client.
* Added `GetResourcesByFilters` to the storage interface to allow filtering of resources at the storage level. Useful to provide an already filtered and smaller resource collection to the REPORT handler when dealing with a filtered REPORT request.
* Added `GetResourcesByList` to the storage interface to fetch a set of resources based on a set of paths. Useful to provide, in one call, the correct resource collection to the REPORT handler when dealing with a REPORT request for specific `hrefs`.
* Removed the useless `IsResourcePresent` function from the storage interface.
v0.1.0
-----------
2016-09-23 Daniel Ferraz <d.ferrazm@gmail.com>
This version implements:
* Allow: "GET, HEAD, PUT, DELETE, OPTIONS, PROPFIND, REPORT"
* DAV: "1, 3, calendar-access"
* Also only handles the following components: `VCALENDAR`, `VEVENT`
Currently unsupported:
* Components `VTODO`, `VJOURNAL`, `VFREEBUSY`
* `VEVENT` recurrences
* Resource locking
* User authentication

20
vendor/github.com/samedi/caldav-go/LICENSE generated vendored Normal file
View file

@@ -0,0 +1,20 @@
Copyright 2017 samedi GmbH
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NON INFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

140
vendor/github.com/samedi/caldav-go/README.md generated vendored Normal file
View file

@@ -0,0 +1,140 @@
# go CalDAV
This is a Go lib that aims to implement the CalDAV specification ([RFC4791]). It allows the quick implementation of a CalDAV server in Go. Basically, it provides the request handlers that handle the various CalDAV HTTP requests, fetch the appropriate resources, and build and return the responses.
### How to install
```
go get github.com/samedi/caldav-go
```
### Dependencies
For dependency management, `glide` is used.
```bash
# install glide (once!)
curl https://glide.sh/get | sh
# install dependencies
glide install
```
### How to use it
The easiest way to quickly implement a CalDAV server is by just using the lib's request handler. Example:
```go
package mycaldav
import (
"net/http"
"github.com/samedi/caldav-go"
)
func runServer() {
http.HandleFunc(PATH, caldav.RequestHandler)
http.ListenAndServe(PORT, nil)
}
```
With that, all the HTTP requests (GET, PUT, REPORT, PROPFIND, etc.) will be handled and responded to by the `caldav` handler. For any HTTP method not supported by the lib, a `501 Not Implemented` response will be returned.
If you want more flexibility in handling the requests, e.g., to access the generated response before it is sent back to the caller, you could do something like:
```go
package mycaldav
import (
"net/http"
"github.com/samedi/caldav-go"
)
func runServer() {
http.HandleFunc(PATH, myHandler)
http.ListenAndServe(PORT, nil)
}
func myHandler(writer http.ResponseWriter, request *http.Request) {
response := caldav.HandleRequest(request)
// ... do something with the `response` ...
// the response is written with the current `http.ResponseWriter` and ready to be sent back
response.Write(writer)
}
```
### Storage & Resources
The storage is where the caldav resources are stored. To interact with it, the caldav lib only needs a type that conforms to the `data.Storage` interface. Basically, this interface defines all the CRUD functions to work on top of the resources. With that, resources can be stored anywhere: in the filesystem, in the cloud, in a database, etc. As long as the storage in use implements all the required storage interface functions, the caldav lib will work fine.
For example, we could use the following dummy storage implementation:
```go
type DummyStorage struct{
}
func (d *DummyStorage) GetResources(rpath string, withChildren bool) ([]Resource, error) {
return []Resource{}, nil
}
func (d *DummyStorage) GetResourcesByFilters(rpath string, filters *ResourceFilter) ([]Resource, error) {
return []Resource{}, nil
}
func (d *DummyStorage) GetResourcesByList(rpaths []string) ([]Resource, error) {
return []Resource{}, nil
}
func (d *DummyStorage) GetResource(rpath string) (*Resource, bool, error) {
return nil, false, nil
}
func (d *DummyStorage) GetShallowResource(rpath string) (*Resource, bool, error) {
return nil, false, nil
}
func (d *DummyStorage) CreateResource(rpath, content string) (*Resource, error) {
return nil, nil
}
func (d *DummyStorage) UpdateResource(rpath, content string) (*Resource, error) {
return nil, nil
}
func (d *DummyStorage) DeleteResource(rpath string) error {
return nil
}
```
Then we just need to tell the caldav lib to use our dummy storage:
```go
dummyStg := new(DummyStorage)
caldav.SetupStorage(dummyStg)
```
All the CRUD operations on resources will then be forwarded to our dummy storage.
The default storage used (if none is explicitly set) is the `data.FileStorage` which deals with resources as files in the File System.
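For instance, sticking with the file storage but setting it explicitly would look something like this (a minimal sketch; it assumes the `data.FileStorage` type mentioned above can be used with its zero value):
```go
import "github.com/samedi/caldav-go/data"
fileStg := new(data.FileStorage)
caldav.SetupStorage(fileStg)
```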
The resources can be of two types: collection and non-collection. A collection resource is basically a resource that has children resources, but does not have any data content. A non-collection resource is a resource that does not have children, but has data. In the case of a file storage, collections correspond to directories and non-collection to plain files. The data of a caldav resource is all the info that shows up in the calendar client, in the [iCalendar](https://en.wikipedia.org/wiki/ICalendar) format.
### Features
Please check the **CHANGELOG** to see specific features that are currently implemented.
### Contributing and testing
Everyone is welcome to contribute. Please raise an issue or pull request accordingly.
To run the tests:
```
./test.sh
```
### License
MIT License.
[RFC4791]: https://tools.ietf.org/html/rfc4791

14
vendor/github.com/samedi/caldav-go/config.go generated vendored Normal file
View file

@@ -0,0 +1,14 @@
package caldav
import (
"github.com/samedi/caldav-go/data"
"github.com/samedi/caldav-go/global"
)
func SetupStorage(stg data.Storage) {
global.Storage = stg
}
func SetupUser(username string) {
global.User = &data.CalUser{username}
}

362
vendor/github.com/samedi/caldav-go/data/filters.go generated vendored Normal file
View file

@@ -0,0 +1,362 @@
package data
import (
"log"
"time"
"strings"
"errors"
"github.com/beevik/etree"
"github.com/samedi/caldav-go/lib"
)
// ================ FILTERS ==================
// Filters are a set of rules used to retrieve a range of resources. They are used primarily
// on REPORT requests and are described in detail in RFC4791#7.8.
const (
TAG_FILTER = "filter"
TAG_COMP_FILTER = "comp-filter"
TAG_PROP_FILTER = "prop-filter"
TAG_PARAM_FILTER = "param-filter"
TAG_TIME_RANGE = "time-range"
TAG_TEXT_MATCH = "text-match"
TAG_IS_NOT_DEFINED = "is-not-defined"
// from the RFC, the time range `start` and `end` attributes MUST be in UTC and in this specific format
FILTER_TIME_FORMAT = "20060102T150405Z"
)
type ResourceFilter struct {
name string
text string
attrs map[string]string
children []ResourceFilter // collection of child filters.
etreeElem *etree.Element // holds the parsed XML node/tag as an `etree` element.
}
// ParseResourceFilters creates a new filter object from an XML string.
func ParseResourceFilters(xml string) (*ResourceFilter, error) {
doc := etree.NewDocument()
if err := doc.ReadFromString(xml); err != nil {
log.Printf("ERROR: Could not parse filter from XML string. XML:\n%s", xml)
return new(ResourceFilter), err
}
// Right now we're searching for a <filter> tag to initialize the filter struct from it.
// It SHOULD be a valid XML CALDAV:filter tag (RFC4791#9.7). We're not checking namespaces yet.
// TODO: check for XML namespaces and restrict it to accept only CALDAV:filter tag.
elem := doc.FindElement("//" + TAG_FILTER)
if elem == nil {
log.Printf("WARNING: The filter XML should contain a <%s> element. XML:\n%s", TAG_FILTER, xml)
return new(ResourceFilter), errors.New("invalid XML filter")
}
filter := newFilterFromEtreeElem(elem)
return &filter, nil
}
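// A hypothetical example of the XML this function accepts (a CALDAV:filter
// element as it appears in a REPORT request body):
//
//	filterXML := `
//	  <C:filter xmlns:C="urn:ietf:params:xml:ns:caldav">
//	    <C:comp-filter name="VCALENDAR">
//	      <C:comp-filter name="VEVENT">
//	        <C:time-range start="20160901T000000Z" end="20160930T235959Z"/>
//	      </C:comp-filter>
//	    </C:comp-filter>
//	  </C:filter>`
//	filter, err := ParseResourceFilters(filterXML)
//	// filter.Match(res) then reports whether resource `res` satisfies the filter.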
func newFilterFromEtreeElem(elem *etree.Element) ResourceFilter {
// init filter from etree element
filter := ResourceFilter{
name: elem.Tag,
text: strings.TrimSpace(elem.Text()),
etreeElem: elem,
attrs: make(map[string]string),
}
// set attributes
for _, attr := range elem.Attr {
filter.attrs[attr.Key] = attr.Value
}
return filter
}
func (f *ResourceFilter) Attr(attrName string) string {
return f.attrs[attrName]
}
func (f *ResourceFilter) TimeAttr(attrName string) *time.Time {
t, err := time.Parse(FILTER_TIME_FORMAT, f.attrs[attrName])
if err != nil {
return nil
}
return &t
}
// GetTimeRangeFilter checks if the current filter has a child "time-range" filter and
// returns it (wrapped in a `ResourceFilter` type). It returns nil if the current filter does
// not contain any "time-range" filter.
func (f *ResourceFilter) GetTimeRangeFilter() *ResourceFilter {
return f.findChild(TAG_TIME_RANGE, true)
}
func (f *ResourceFilter) Match(target ResourceInterface) bool {
if f.name == TAG_FILTER {
return f.rootFilterMatch(target)
}
return false
}
func (f *ResourceFilter) rootFilterMatch(target ResourceInterface) bool {
if f.isEmpty() {
return false
}
return f.rootChildrenMatch(target)
}
// checks if all the root's child filters match the target resource
func (f *ResourceFilter) rootChildrenMatch(target ResourceInterface) bool {
scope := []string{}
for _, child := range f.getChildren() {
// root filters only accept comp filters as children
if child.name != TAG_COMP_FILTER || !child.compMatch(target, scope) {
return false
}
}
return true
}
// See RFC4791-9.7.1.
func (f *ResourceFilter) compMatch(target ResourceInterface, scope []string) bool {
targetComp := target.ComponentName()
compName := f.attrs["name"]
if f.isEmpty() {
// Point #1 of RFC4791#9.7.1
return compName == targetComp
} else if f.contains(TAG_IS_NOT_DEFINED) {
// Point #2 of RFC4791#9.7.1
return compName != targetComp
} else {
// check each child of the current filter if they all match.
childrenScope := append(scope, compName)
return f.compChildrenMatch(target, childrenScope)
}
}
// checks if all the comp's child filters match the target resource
func (f *ResourceFilter) compChildrenMatch(target ResourceInterface, scope []string) bool {
for _, child := range f.getChildren() {
var match bool
switch child.name {
case TAG_TIME_RANGE:
// Point #3 of RFC4791#9.7.1
match = child.timeRangeMatch(target)
case TAG_PROP_FILTER:
// Point #4 of RFC4791#9.7.1
match = child.propMatch(target, scope)
case TAG_COMP_FILTER:
// Point #4 of RFC4791#9.7.1
match = child.compMatch(target, scope)
}
if !match {
return false
}
}
return true
}
// See RFC4791-9.9
func (f *ResourceFilter) timeRangeMatch(target ResourceInterface) bool {
startAttr := f.attrs["start"]
endAttr := f.attrs["end"]
// at least one of the two MUST be present
if startAttr == "" && endAttr == "" {
// if both of them are missing, return false
return false
} else if startAttr == "" {
// if missing only the `start`, set it open ended to the left
startAttr = "00010101T000000Z"
} else if endAttr == "" {
// if missing only the `end`, set it open ended to the right
endAttr = "99991231T235959Z"
}
// The logic below is only applicable for VEVENT components. So
// we return false if the resource is not a VEVENT component.
if target.ComponentName() != lib.VEVENT {
return false
}
rangeStart, err := time.Parse(FILTER_TIME_FORMAT, startAttr)
if err != nil {
log.Printf("ERROR: Could not parse start time in time-range filter.\nError: %s.\nStart attr: %s", err, startAttr)
return false
}
rangeEnd, err := time.Parse(FILTER_TIME_FORMAT, endAttr)
if err != nil {
log.Printf("ERROR: Could not parse end time in time-range filter.\nError: %s.\nEnd attr: %s", err, endAttr)
return false
}
// the following logic is inferred from the rules table for VEVENT components,
// described in RFC4791-9.9.
overlapRange := func(dtStart, dtEnd, rangeStart, rangeEnd time.Time) bool {
if dtStart.Equal(dtEnd) {
// Lines 3 and 4 of the table deal when the DTSTART and DTEND dates are equals.
// In this case we use the rule: (start <= DTSTART && end > DTSTART)
return (rangeStart.Before(dtStart) || rangeStart.Equal(dtStart)) && rangeEnd.After(dtStart)
} else {
// Lines 1, 2 and 6 of the table deal when the DTSTART and DTEND dates are different.
// In this case we use the rule: (start < DTEND && end > DTSTART)
return rangeStart.Before(dtEnd) && rangeEnd.After(dtStart)
}
}
// first we check each of the target recurrences (if any).
for _, recurrence := range target.Recurrences() {
// if any of them overlap the filter range, we return true right away
if overlapRange(recurrence.StartTime, recurrence.EndTime, rangeStart, rangeEnd) {
return true
}
}
// if none of the recurrences match, we just return if the actual
// resource's `start` and `end` times match the filter range
return overlapRange(target.StartTimeUTC(), target.EndTimeUTC(), rangeStart, rangeEnd)
}
// See RFC4791-9.7.2.
func (f *ResourceFilter) propMatch(target ResourceInterface, scope []string) bool {
propName := f.attrs["name"]
propPath := append(scope, propName)
if f.isEmpty() {
// Point #1 of RFC4791#9.7.2
return target.HasProperty(propPath...)
} else if f.contains(TAG_IS_NOT_DEFINED) {
// Point #2 of RFC4791#9.7.2
return !target.HasProperty(propPath...)
} else {
// check whether all children of the current filter match.
return f.propChildrenMatch(target, propPath)
}
}
// checks if all the prop's child filters match the target resource
func (f *ResourceFilter) propChildrenMatch(target ResourceInterface, propPath []string) bool {
for _, child := range f.getChildren() {
var match bool
switch child.name {
case TAG_TIME_RANGE:
// Point #3 of RFC4791#9.7.2
// TODO: this point is not very clear on how to match a time range against properties,
// so we return `false` for now.
match = false
case TAG_TEXT_MATCH:
// Point #4 of RFC4791#9.7.2
propText := target.GetPropertyValue(propPath...)
match = child.textMatch(propText)
case TAG_PARAM_FILTER:
// Point #4 of RFC4791#9.7.2
match = child.paramMatch(target, propPath)
}
if !match {
return false
}
}
return true
}
// See RFC4791-9.7.3
func (f *ResourceFilter) paramMatch(target ResourceInterface, parentPropPath []string) bool {
paramName := f.attrs["name"]
paramPath := append(parentPropPath, paramName)
if f.isEmpty() {
// Point #1 of RFC4791#9.7.3
return target.HasPropertyParam(paramPath...)
} else if f.contains(TAG_IS_NOT_DEFINED) {
// Point #2 of RFC4791#9.7.3
return !target.HasPropertyParam(paramPath...)
} else {
child := f.getChildren()[0]
// a param filter can also have a single nested text-match filter
if child.name == TAG_TEXT_MATCH {
paramValue := target.GetPropertyParamValue(paramPath...)
return child.textMatch(paramValue)
}
}
return false
}
// See RFC4791-9.7.5
func (f *ResourceFilter) textMatch(targetText string) bool {
// TODO: collations are not being considered/supported yet.
// Texts are lower-cased so the match is case-insensitive, roughly following the "i;ascii-casemap" collation.
targetText = strings.ToLower(targetText)
expectedSubstr := strings.ToLower(f.text)
match := strings.Contains(targetText, expectedSubstr)
if f.attrs["negate-condition"] == "yes" {
return !match
}
return match
}
func (f *ResourceFilter) isEmpty() bool {
return len(f.getChildren()) == 0 && f.text == ""
}
func (f *ResourceFilter) contains(filterName string) bool {
return f.findChild(filterName, false) != nil
}
func (f *ResourceFilter) findChild(filterName string, dig bool) *ResourceFilter {
for _, child := range f.getChildren() {
if child.name == filterName {
return &child
}
if !dig {
continue
}
dugChild := child.findChild(filterName, true)
if dugChild != nil {
return dugChild
}
}
return nil
}
// lazy evaluation of the child filters
func (f *ResourceFilter) getChildren() []ResourceFilter {
if f.children == nil {
f.children = []ResourceFilter{}
for _, childElem := range f.etreeElem.ChildElements() {
childFilter := newFilterFromEtreeElem(childElem)
f.children = append(f.children, childFilter)
}
}
return f.children
}
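// Editorial note: the sketch below is illustrative and not part of the vendored library.
// It combines ParseResourceFilters (assumed, as used by the report handler, to return
// (*ResourceFilter, error)) with Match to check a hypothetical CALDAV:filter snippet against
// a resource; namespace prefixes are omitted, just as the report handler omits them when it
// rebuilds the filter XML.
func exampleFilterMatch(target *Resource) bool {
	filterXML := `<filter><comp-filter name="VCALENDAR"><comp-filter name="VEVENT"/></comp-filter></filter>`
	filter, err := ParseResourceFilters(filterXML)
	if err != nil {
		return false
	}
	return filter.Match(target)
}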

277
vendor/github.com/samedi/caldav-go/data/resource.go generated vendored Normal file
View file

@ -0,0 +1,277 @@
package data
import (
"os"
"fmt"
"log"
"time"
"strings"
"strconv"
"io/ioutil"
"github.com/laurent22/ical-go/ical"
"github.com/samedi/caldav-go/lib"
"github.com/samedi/caldav-go/files"
)
type ResourceInterface interface {
ComponentName() string
StartTimeUTC() time.Time
EndTimeUTC() time.Time
Recurrences() []ResourceRecurrence
HasProperty(propPath... string) bool
GetPropertyValue(propPath... string) string
HasPropertyParam(paramName... string) bool
GetPropertyParamValue(paramName... string) string
}
type ResourceAdapter interface {
IsCollection() bool
CalculateEtag() string
GetContent() string
GetContentSize() int64
GetModTime() time.Time
}
type ResourceRecurrence struct {
StartTime time.Time
EndTime time.Time
}
type Resource struct {
Name string
Path string
pathSplit []string
adapter ResourceAdapter
emptyTime time.Time
}
func NewResource(resPath string, adp ResourceAdapter) Resource {
pClean := lib.ToSlashPath(resPath)
pSplit := strings.Split(strings.Trim(pClean, "/"), "/")
return Resource {
Name: pSplit[len(pSplit) - 1],
Path: pClean,
pathSplit: pSplit,
adapter: adp,
}
}
func (r *Resource) IsCollection() bool {
return r.adapter.IsCollection()
}
func (r *Resource) IsPrincipal() bool {
return len(r.pathSplit) <= 1
}
func (r *Resource) ComponentName() string {
if r.IsCollection() {
return lib.VCALENDAR
} else {
return lib.VEVENT
}
}
func (r *Resource) StartTimeUTC() time.Time {
vevent := r.icalVEVENT()
dtstart := vevent.PropDate(ical.DTSTART, r.emptyTime)
if dtstart == r.emptyTime {
log.Printf("WARNING: The property DTSTART was not found in the resource's ical data.\nResource path: %s", r.Path)
return r.emptyTime
}
return dtstart.UTC()
}
func (r *Resource) EndTimeUTC() time.Time {
vevent := r.icalVEVENT()
dtend := vevent.PropDate(ical.DTEND, r.emptyTime)
// when the DTEND property is not present, we just add the DURATION (if any) to the DTSTART
if dtend == r.emptyTime {
duration := vevent.PropDuration(ical.DURATION)
dtend = r.StartTimeUTC().Add(duration)
}
return dtend.UTC()
}
func (r *Resource) Recurrences() []ResourceRecurrence {
// TODO: Implement. This server does not support ical recurrences yet. We just return an empty array.
return []ResourceRecurrence{}
}
func (r *Resource) HasProperty(propPath... string) bool {
return r.GetPropertyValue(propPath...) != ""
}
func (r *Resource) GetPropertyValue(propPath... string) string {
if propPath[0] == ical.VCALENDAR {
propPath = propPath[1:]
}
prop, _ := r.icalendar().DigProperty(propPath...)
return prop
}
func (r *Resource) HasPropertyParam(paramPath... string) bool {
return r.GetPropertyParamValue(paramPath...) != ""
}
func (r *Resource) GetPropertyParamValue(paramPath... string) string {
if paramPath[0] == ical.VCALENDAR {
paramPath = paramPath[1:]
}
param, _ := r.icalendar().DigParameter(paramPath...)
return param
}
func (r *Resource) GetEtag() (string, bool) {
if r.IsCollection() {
return "", false
}
return r.adapter.CalculateEtag(), true
}
func (r *Resource) GetContentType() (string, bool) {
if r.IsCollection() {
return "text/calendar", true
} else {
return "text/calendar; component=vcalendar", true
}
}
func (r *Resource) GetDisplayName() (string, bool) {
return r.Name, true
}
func (r *Resource) GetContentData() (string, bool) {
data := r.adapter.GetContent()
found := data != ""
return data, found
}
func (r *Resource) GetContentLength() (string, bool) {
// If it's a collection, it does not have any content, so mark it as not found
if r.IsCollection() {
return "", false
}
contentSize := r.adapter.GetContentSize()
return strconv.FormatInt(contentSize, 10), true
}
func (r *Resource) GetLastModified(format string) (string, bool) {
return r.adapter.GetModTime().Format(format), true
}
func (r *Resource) GetOwner() (string, bool) {
var owner string
if len(r.pathSplit) > 1 {
owner = r.pathSplit[0]
} else {
owner = ""
}
return owner, true
}
func (r *Resource) GetOwnerPath() (string, bool) {
owner, _ := r.GetOwner()
if owner != "" {
return fmt.Sprintf("/%s/", owner), true
} else {
return "", false
}
}
// TODO: memoize
func (r *Resource) icalVEVENT() *ical.Node {
vevent := r.icalendar().ChildByName(ical.VEVENT)
// if nil, log it and return an empty vevent
if vevent == nil {
log.Printf("WARNING: The resource's ical data is missing the VEVENT property.\nResource path: %s", r.Path)
return &ical.Node{
Name: ical.VEVENT,
}
}
return vevent
}
// TODO: memoize
func (r *Resource) icalendar() *ical.Node {
data, found := r.GetContentData()
if !found {
log.Printf("WARNING: The resource's ical data does not have any data.\nResource path: %s", r.Path)
return &ical.Node{
Name: ical.VCALENDAR,
}
}
icalNode, err := ical.ParseCalendar(data)
if err != nil {
log.Printf("ERROR: Could not parse the resource's ical data.\nError: %s.\nResource path: %s", err, r.Path)
return &ical.Node{
Name: ical.VCALENDAR,
}
}
return icalNode
}
type FileResourceAdapter struct {
finfo os.FileInfo
resourcePath string
}
func (adp *FileResourceAdapter) IsCollection() bool {
return adp.finfo.IsDir()
}
func (adp *FileResourceAdapter) GetContent() string {
if adp.IsCollection() {
return ""
}
data, err := ioutil.ReadFile(files.AbsPath(adp.resourcePath))
if err != nil {
log.Printf("ERROR: Could not read file content for the resource.\nError: %s.\nResource path: %s.", err, adp.resourcePath)
return ""
}
return string(data)
}
func (adp *FileResourceAdapter) GetContentSize() int64 {
return adp.finfo.Size()
}
func (adp *FileResourceAdapter) CalculateEtag() string {
// returns ETag as the concatenated hex values of a file's
// modification time and size. This is not a reliable synchronization
// mechanism for directories, so for collections we return empty.
if adp.IsCollection() {
return ""
}
fi := adp.finfo
return fmt.Sprintf(`"%x%x"`, fi.ModTime().UnixNano(), fi.Size())
}
func (adp *FileResourceAdapter) GetModTime() time.Time {
return adp.finfo.ModTime()
}
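// Editorial note: the sketch below is illustrative and not part of the vendored library.
// It builds a Resource backed by a FileResourceAdapter for a hypothetical path and reads a
// property through the parsed VCALENDAR tree.
func exampleFileResource() {
	rpath := "user/calendar/event.ics" // hypothetical resource path
	finfo, err := os.Stat(files.AbsPath(rpath))
	if err != nil {
		log.Printf("example: could not stat %s: %s", rpath, err)
		return
	}
	res := NewResource(rpath, &FileResourceAdapter{finfo, rpath})
	// property paths start at VCALENDAR, e.g. VCALENDAR -> VEVENT -> SUMMARY
	if res.HasProperty(ical.VCALENDAR, ical.VEVENT, "SUMMARY") {
		log.Printf("example: SUMMARY = %s", res.GetPropertyValue(ical.VCALENDAR, ical.VEVENT, "SUMMARY"))
	}
}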

217
vendor/github.com/samedi/caldav-go/data/storage.go generated vendored Normal file
View file

@ -0,0 +1,217 @@
package data
import (
"os"
"log"
"io/ioutil"
"github.com/samedi/caldav-go/errs"
"github.com/samedi/caldav-go/files"
)
// Storage is responsible for the CRUD operations on the CalDAV resources.
type Storage interface {
// GetResources gets a list of resources based on a given `rpath`. The
// `rpath` is the path to the original resource that's being requested. The resulting list
// must contain that original resource, in addition to any other resources. It also receives a
// `withChildren` flag saying whether the result must also include all of the original resource's
// children (if the original is a collection resource). If `true`, the result will have the requested resource plus its children.
// If `false`, it will have only the requested original resource (from the `rpath` path).
// It returns an error if anything goes wrong or if no resource can be found at the `rpath` path.
GetResources(rpath string, withChildren bool) ([]Resource, error)
// GetResourcesByList fetches a list of resources by path from the storage.
// This method fetches all the `rpaths` and returns an array of the resources found.
// No "not found" error is returned if one of the resources cannot be found.
// Errors are returned only if anything other than "not found" happens.
GetResourcesByList(rpaths []string) ([]Resource, error)
// GetResourcesByFilters returns the filtered children of a target collection resource.
// The target collection resource is the one pointed by the `rpath` parameter. All of its children
// will be checked against a set of `filters` and the matching ones are returned. The result
// contains only the filtered children and does NOT include the target resource. If the target resource
// is not a collection, an empty array is returned as the result.
GetResourcesByFilters(rpath string, filters *ResourceFilter) ([]Resource, error)
// GetResource gets the requested resource based on a given `rpath` path. It returns the resource (if found) or
// nil (if not found). Also returns a flag specifying if the resource was found or not.
GetResource(rpath string) (*Resource, bool, error)
// GetShallowResource has the same behaviour as `storage.GetResource`. The only difference is that, for collection resources,
// it does not return the children in the collection's `storage.Resource` struct (hence the name shallow). This is done
// for optimization reasons, as this function is used in places where the collection's children are not important.
GetShallowResource(rpath string) (*Resource, bool, error)
// CreateResource creates a new resource on the `rpath` path with a given `content`.
CreateResource(rpath, content string) (*Resource, error)
// UpdateResource updates a resource on the `rpath` path with a given `content`.
UpdateResource(rpath, content string) (*Resource, error)
// DeleteResource deletes a resource on the `rpath` path.
DeleteResource(rpath string) error
}
// FileStorage is the storage that deals with resources as files in the file system. So, a collection resource
// is treated as a folder/directory and its child resources are the files it contains. On the other hand, non-collection
// resources are just plain files.
type FileStorage struct {
}
func (fs *FileStorage) GetResources(rpath string, withChildren bool) ([]Resource, error) {
result := []Resource{}
// tries to open the file by the given path
f, e := fs.openResourceFile(rpath, os.O_RDONLY)
if e != nil {
return nil, e
}
// close the file handle once the resources have been collected
defer f.Close()
// add it as a resource to the result list
finfo, _ := f.Stat()
resource := NewResource(rpath, &FileResourceAdapter{finfo, rpath})
result = append(result, resource)
// if the file is a dir, add its children to the result list
if withChildren && finfo.IsDir() {
dirFiles, _ := f.Readdir(0)
for _, finfo := range dirFiles {
childPath := files.JoinPaths(rpath, finfo.Name())
resource = NewResource(childPath, &FileResourceAdapter{finfo, childPath})
result = append(result, resource)
}
}
return result, nil
}
func (fs *FileStorage) GetResourcesByFilters(rpath string, filters *ResourceFilter) ([]Resource, error) {
result := []Resource{}
childPaths := fs.getDirectoryChildPaths(rpath)
for _, path := range childPaths {
resource, _, err := fs.GetShallowResource(path)
if err != nil {
// if we can't find this resource, something weird went wrong, but not that serious, so we log it and continue
log.Printf("WARNING: returned error when trying to get resource with path %s from collection with path %s. Error: %s", path, rpath, err)
continue
}
// only add it if the resource matches the filters
if filters == nil || filters.Match(resource) {
result = append(result, *resource)
}
}
return result, nil
}
func (fs *FileStorage) GetResourcesByList(rpaths []string) ([]Resource, error) {
results := []Resource{}
for _, rpath := range rpaths {
resource, found, err := fs.GetShallowResource(rpath)
if err != nil && err != errs.ResourceNotFoundError {
return nil, err
}
if found {
results = append(results, *resource)
}
}
return results, nil
}
func (fs *FileStorage) GetResource(rpath string) (*Resource, bool, error) {
// For simplicity we just return the shallow resource.
return fs.GetShallowResource(rpath)
}
func (fs *FileStorage) GetShallowResource(rpath string) (*Resource, bool, error) {
resources, err := fs.GetResources(rpath, false)
if err != nil {
return nil, false, err
}
if len(resources) == 0 {
return nil, false, errs.ResourceNotFoundError
}
res := resources[0]
return &res, true, nil
}
func (fs *FileStorage) CreateResource(rpath, content string) (*Resource, error) {
rAbsPath := files.AbsPath(rpath)
if fs.isResourcePresent(rAbsPath) {
return nil, errs.ResourceAlreadyExistsError
}
// create parent directories (if needed)
if err := os.MkdirAll(files.DirPath(rAbsPath), os.ModePerm); err != nil {
return nil, err
}
// create file/resource and write content
f, err := os.Create(rAbsPath)
if err != nil {
return nil, err
}
defer f.Close()
f.WriteString(content)
finfo, _ := f.Stat()
res := NewResource(rpath, &FileResourceAdapter{finfo, rpath})
return &res, nil
}
func (fs *FileStorage) UpdateResource(rpath, content string) (*Resource, error) {
f, e := fs.openResourceFile(rpath, os.O_RDWR)
if e != nil {
return nil, e
}
defer f.Close()
// update content
f.Truncate(0)
f.WriteString(content)
finfo, _ := f.Stat()
res := NewResource(rpath, &FileResourceAdapter{finfo, rpath})
return &res, nil
}
func (fs *FileStorage) DeleteResource(rpath string) error {
err := os.Remove(files.AbsPath(rpath))
return err
}
func (fs *FileStorage) isResourcePresent(rpath string) bool {
_, found, _ := fs.GetShallowResource(rpath)
return found
}
func (fs *FileStorage) openResourceFile(filepath string, mode int) (*os.File, error) {
f, e := os.OpenFile(files.AbsPath(filepath), mode, 0666)
if e != nil {
if os.IsNotExist(e) {
return nil, errs.ResourceNotFoundError
}
return nil, e
}
return f, nil
}
func (fs *FileStorage) getDirectoryChildPaths(dirpath string) []string {
content, err := ioutil.ReadDir(files.AbsPath(dirpath))
if err != nil {
log.Printf("ERROR: Could not read resource as file directory.\nError: %s.\nResource path: %s.", err, dirpath)
return nil
}
result := []string{}
for _, file := range content {
fpath := files.JoinPaths(dirpath, file.Name())
result = append(result, fpath)
}
return result
}
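// Editorial note: the sketch below is illustrative and not part of the vendored library.
// It shows a typical create/read cycle against the FileStorage; the resource path and the
// iCal payload are hypothetical.
func exampleFileStorageUsage() {
	stg := new(FileStorage)
	rpath := "user/calendar/event.ics"
	content := "BEGIN:VCALENDAR\nBEGIN:VEVENT\nEND:VEVENT\nEND:VCALENDAR"
	if _, err := stg.CreateResource(rpath, content); err != nil {
		log.Printf("example: create failed: %s", err)
		return
	}
	// fetch it back without children (a non-collection resource has none anyway)
	if res, found, err := stg.GetShallowResource(rpath); err == nil && found {
		log.Printf("example: fetched %s", res.Path)
	}
}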

5
vendor/github.com/samedi/caldav-go/data/user.go generated vendored Normal file
View file

@ -0,0 +1,5 @@
package data
type CalUser struct {
Name string
}

12
vendor/github.com/samedi/caldav-go/errs/errors.go generated vendored Normal file
View file

@ -0,0 +1,12 @@
package errs
import (
"errors"
)
var (
ResourceNotFoundError = errors.New("caldav: resource not found")
ResourceAlreadyExistsError = errors.New("caldav: resource already exists")
UnauthorizedError = errors.New("caldav: unauthorized. credentials needed.")
ForbiddenError = errors.New("caldav: forbidden operation.")
)

30
vendor/github.com/samedi/caldav-go/files/paths.go generated vendored Normal file
View file

@ -0,0 +1,30 @@
package files
import (
"strings"
"path/filepath"
"github.com/samedi/caldav-go/lib"
)
const (
Separator = string(filepath.Separator)
)
func AbsPath(path string) string {
path = strings.Trim(path, "/")
absPath, _ := filepath.Abs(path)
return absPath
}
func DirPath(path string) string {
return filepath.Dir(path)
}
func JoinPaths(paths ...string) string {
return filepath.Join(paths...)
}
func ToSlashPath(path string) string {
return lib.ToSlashPath(path)
}

10
vendor/github.com/samedi/caldav-go/glide.lock generated vendored Normal file
View file

@ -0,0 +1,10 @@
hash: 2796726e69757f4af1a13f6ebd056ebc626d712051aa213875bb03f5bdc1ebfd
updated: 2017-01-18T11:43:23.127761353+01:00
imports:
- name: github.com/beevik/etree
version: 4cd0dd976db869f817248477718071a28e978df0
- name: github.com/laurent22/ical-go
version: 4811ac5553eae5fed7cd5d7a9024727f1311b2a2
subpackages:
- ical
testImports: []

7
vendor/github.com/samedi/caldav-go/glide.yaml generated vendored Normal file
View file

@ -0,0 +1,7 @@
package: github.com/samedi/caldav-go
import:
- package: github.com/beevik/etree
- package: github.com/laurent22/ical-go
version: ~0.1.0
subpackages:
- ical

12
vendor/github.com/samedi/caldav-go/global/global.go generated vendored Normal file
View file

@ -0,0 +1,12 @@
package global
// This file defines accessible variables used to setup the caldav server.
import (
"github.com/samedi/caldav-go/data"
)
// The global storage used in the CRUD operations of resources. Default storage is the `FileStorage`.
var Storage data.Storage = new(data.FileStorage)
// Current caldav user. It is used to keep the info of the current user that is interacting with the calendar.
var User *data.CalUser

28
vendor/github.com/samedi/caldav-go/handler.go generated vendored Normal file
View file

@ -0,0 +1,28 @@
package caldav
import (
"net/http"
"github.com/samedi/caldav-go/data"
"github.com/samedi/caldav-go/handlers"
)
// RequestHandler handles the given CALDAV request and writes the response right away. This function is meant to be
// used by passing it directly as the handler func to the `http` lib. Example: http.HandleFunc("/", caldav.RequestHandler).
func RequestHandler(writer http.ResponseWriter, request *http.Request) {
response := HandleRequest(request)
response.Write(writer)
}
// HandleRequest handles the given CALDAV request and returns the response. Useful when the caller
// wants to do something else with the response before writing it to the response stream.
func HandleRequest(request *http.Request) *handlers.Response {
handler := handlers.NewHandler(request)
return handler.Handle()
}
// HandleRequestWithStorage handles the request the same way as `HandleRequest` does, but first
// sets the given storage that will be used throughout the request handling flow.
func HandleRequestWithStorage(request *http.Request, stg data.Storage) *handlers.Response {
SetupStorage(stg)
return HandleRequest(request)
}
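// Editorial note: the sketch below is illustrative and not part of the vendored library.
// It wires the handler into the standard net/http server, as the RequestHandler doc comment
// suggests; the listen address is hypothetical. External callers would reference the handler
// as caldav.RequestHandler.
func exampleServe() {
	http.HandleFunc("/", RequestHandler)
	http.ListenAndServe(":8000", nil)
}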

24
vendor/github.com/samedi/caldav-go/handlers/builder.go generated vendored Normal file
View file

@ -0,0 +1,24 @@
package handlers
import (
"net/http"
)
type handlerInterface interface {
Handle() *Response
}
func NewHandler(request *http.Request) handlerInterface {
response := NewResponse()
switch request.Method {
case "GET": return getHandler{request, response, false}
case "HEAD": return getHandler{request, response, true}
case "PUT": return putHandler{request, response}
case "DELETE": return deleteHandler{request, response}
case "PROPFIND": return propfindHandler{request, response}
case "OPTIONS": return optionsHandler{response}
case "REPORT": return reportHandler{request, response}
default: return notImplementedHandler{response}
}
}

40
vendor/github.com/samedi/caldav-go/handlers/delete.go generated vendored Normal file
View file

@ -0,0 +1,40 @@
package handlers
import (
"net/http"
"github.com/samedi/caldav-go/global"
)
type deleteHandler struct {
request *http.Request
response *Response
}
func (dh deleteHandler) Handle() *Response {
precond := requestPreconditions{dh.request}
// get the event from the storage
resource, _, err := global.Storage.GetShallowResource(dh.request.URL.Path)
if err != nil {
return dh.response.SetError(err)
}
// TODO: Handle delete on collections
if resource.IsCollection() {
return dh.response.Set(http.StatusMethodNotAllowed, "")
}
// check ETag pre-condition
resourceEtag, _ := resource.GetEtag()
if !precond.IfMatch(resourceEtag) {
return dh.response.Set(http.StatusPreconditionFailed, "")
}
// delete event after pre-condition passed
err = global.Storage.DeleteResource(resource.Path)
if err != nil {
return dh.response.SetError(err)
}
return dh.response.Set(http.StatusNoContent, "")
}

37
vendor/github.com/samedi/caldav-go/handlers/get.go generated vendored Normal file
View file

@ -0,0 +1,37 @@
package handlers
import (
"net/http"
"github.com/samedi/caldav-go/global"
)
type getHandler struct {
request *http.Request
response *Response
onlyHeaders bool
}
func (gh getHandler) Handle() *Response {
resource, _, err := global.Storage.GetResource(gh.request.URL.Path)
if err != nil {
return gh.response.SetError(err)
}
var response string
if gh.onlyHeaders {
response = ""
} else {
response, _ = resource.GetContentData()
}
etag, _ := resource.GetEtag()
lastm, _ := resource.GetLastModified(http.TimeFormat)
ctype, _ := resource.GetContentType()
gh.response.SetHeader("ETag", etag).
SetHeader("Last-Modified", lastm).
SetHeader("Content-Type", ctype).
Set(http.StatusOK, response)
return gh.response
}

27
vendor/github.com/samedi/caldav-go/handlers/headers.go generated vendored Normal file
View file

@ -0,0 +1,27 @@
package handlers
import (
"net/http"
)
const (
HD_DEPTH = "Depth"
HD_DEPTH_DEEP = "1"
HD_PREFER = "Prefer"
HD_PREFER_MINIMAL = "return=minimal"
HD_PREFERENCE_APPLIED = "Preference-Applied"
)
type headers struct {
http.Header
}
func (this headers) IsDeep() bool {
depth := this.Get(HD_DEPTH)
return (depth == HD_DEPTH_DEEP)
}
func (this headers) IsMinimal() bool {
prefer := this.Get(HD_PREFER)
return (prefer == HD_PREFER_MINIMAL)
}
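// Editorial note: the sketch below is illustrative and not part of the vendored library.
// A request carrying `Depth: 1` is treated as deep and `Prefer: return=minimal` as minimal.
func exampleHeaders(req *http.Request) (deep, minimal bool) {
	h := headers{req.Header}
	return h.IsDeep(), h.IsMinimal()
}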

View file

@ -0,0 +1,207 @@
package handlers
import (
"fmt"
"net/http"
"encoding/xml"
"github.com/samedi/caldav-go/lib"
"github.com/samedi/caldav-go/global"
"github.com/samedi/caldav-go/data"
"github.com/samedi/caldav-go/ixml"
)
// Wraps a multistatus response. It contains the set of `Responses`
// that will serve to build the final XML. Multistatus responses are
// used by the REPORT and PROPFIND methods.
type multistatusResp struct {
// The set of multistatus responses used to build each of the <DAV:response> nodes.
Responses []msResponse
// Flag indicating whether the XML should be minimal or not
// [defined in the draft https://tools.ietf.org/html/draft-murchison-webdav-prefer-05]
Minimal bool
}
type msResponse struct {
Href string
Found bool
Propstats msPropstats
}
type msPropstats map[int]msProps
// Adds a msProp to the map with the key being the prop status.
func (stats msPropstats) Add(prop msProp) {
stats[prop.Status] = append(stats[prop.Status], prop)
}
func (stats msPropstats) Clone() msPropstats {
clone := make(msPropstats)
for k, v := range stats {
clone[k] = v
}
return clone
}
type msProps []msProp
type msProp struct {
Tag xml.Name
Content string
Contents []string
Status int
}
// Function that processes all the required props for a given resource.
// ## Params
// resource: the target calendar resource.
// reqprops: set of required props that must be processed for the resource.
// ## Returns
// The set of props (msProp) processed. Each prop is mapped to a HTTP status code.
// So if a prop is found and processed ok, it'll be mapped to 200. If it's not found,
// it'll be mapped to 404, and so on.
func (ms *multistatusResp) Propstats(resource *data.Resource, reqprops []xml.Name) msPropstats {
if resource == nil {
return nil
}
result := make(msPropstats)
for _, ptag := range reqprops {
pvalue := msProp{
Tag: ptag,
Status: http.StatusOK,
}
pfound := false
switch ptag {
case ixml.CALENDAR_DATA_TG:
pvalue.Content, pfound = resource.GetContentData()
if pfound {
pvalue.Content = ixml.EscapeText(pvalue.Content)
}
case ixml.GET_ETAG_TG:
pvalue.Content, pfound = resource.GetEtag()
case ixml.GET_CONTENT_TYPE_TG:
pvalue.Content, pfound = resource.GetContentType()
case ixml.GET_CONTENT_LENGTH_TG:
pvalue.Content, pfound = resource.GetContentLength()
case ixml.DISPLAY_NAME_TG:
pvalue.Content, pfound = resource.GetDisplayName()
if pfound {
pvalue.Content = ixml.EscapeText(pvalue.Content)
}
case ixml.GET_LAST_MODIFIED_TG:
pvalue.Content, pfound = resource.GetLastModified(http.TimeFormat)
case ixml.OWNER_TG:
pvalue.Content, pfound = resource.GetOwnerPath()
case ixml.GET_CTAG_TG:
pvalue.Content, pfound = resource.GetEtag()
case ixml.PRINCIPAL_URL_TG,
ixml.PRINCIPAL_COLLECTION_SET_TG,
ixml.CALENDAR_USER_ADDRESS_SET_TG,
ixml.CALENDAR_HOME_SET_TG:
pvalue.Content, pfound = ixml.HrefTag(resource.Path), true
case ixml.RESOURCE_TYPE_TG:
if resource.IsCollection() {
pvalue.Content, pfound = ixml.Tag(ixml.COLLECTION_TG, "") + ixml.Tag(ixml.CALENDAR_TG, ""), true
if resource.IsPrincipal() {
pvalue.Content += ixml.Tag(ixml.PRINCIPAL_TG, "")
}
} else {
// resourcetype must be returned empty for non-collection elements
pvalue.Content, pfound = "", true
}
case ixml.CURRENT_USER_PRINCIPAL_TG:
if global.User != nil {
path := fmt.Sprintf("/%s/", global.User.Name)
pvalue.Content, pfound = ixml.HrefTag(path), true
}
case ixml.SUPPORTED_CALENDAR_COMPONENT_SET_TG:
if resource.IsCollection() {
for _, component := range supportedComponents {
// TODO: use ixml somehow to build the below tag
compTag := fmt.Sprintf(`<C:comp name="%s"/>`, component)
pvalue.Contents = append(pvalue.Contents, compTag)
}
pfound = true
}
}
if !pfound {
pvalue.Status = http.StatusNotFound
}
result.Add(pvalue)
}
return result
}
// Adds a new `msResponse` to the `Responses` array.
func (ms *multistatusResp) AddResponse(href string, found bool, propstats msPropstats) {
ms.Responses = append(ms.Responses, msResponse{
Href: href,
Found: found,
Propstats: propstats,
})
}
func (ms *multistatusResp) ToXML() string {
// init multistatus
var bf lib.StringBuffer
bf.Write(`<?xml version="1.0" encoding="UTF-8"?>`)
bf.Write(`<D:multistatus %s>`, ixml.Namespaces())
// iterate over event hrefs and build multistatus XML on the fly
for _, response := range ms.Responses {
bf.Write("<D:response>")
bf.Write(ixml.HrefTag(response.Href))
if response.Found {
propstats := response.Propstats.Clone()
if ms.Minimal {
delete(propstats, http.StatusNotFound)
if len(propstats) == 0 {
bf.Write("<D:propstat>")
bf.Write("<D:prop/>")
bf.Write(ixml.StatusTag(http.StatusOK))
bf.Write("</D:propstat>")
bf.Write("</D:response>")
continue
}
}
for status, props := range propstats {
bf.Write("<D:propstat>")
bf.Write("<D:prop>")
for _, prop := range props {
bf.Write(ms.propToXML(prop))
}
bf.Write("</D:prop>")
bf.Write(ixml.StatusTag(status))
bf.Write("</D:propstat>")
}
} else {
// if the resource was not found, set the status to 404
bf.Write(ixml.StatusTag(http.StatusNotFound))
}
bf.Write("</D:response>")
}
bf.Write("</D:multistatus>")
return bf.String()
}
func (ms *multistatusResp) propToXML(prop msProp) string {
for _, content := range prop.Contents {
prop.Content += content
}
xmlString := ixml.Tag(prop.Tag, prop.Content)
return xmlString
}

View file

@ -0,0 +1,13 @@
package handlers
import (
"net/http"
)
type notImplementedHandler struct {
response *Response
}
func (h notImplementedHandler) Handle() *Response {
return h.response.Set(http.StatusNotImplemented, "")
}

23
vendor/github.com/samedi/caldav-go/handlers/options.go generated vendored Normal file
View file

@ -0,0 +1,23 @@
package handlers
import (
"net/http"
)
type optionsHandler struct {
response *Response
}
// Returns the allowed methods and the DAV features implemented by the current server.
// For more information about the values and format read RFC4918 Sections 10.1 and 18.
func (oh optionsHandler) Handle() *Response {
// Set the DAV compliance header:
// 1: Server supports all the requirements specified in RFC2518
// 3: Server supports all the revisions specified in RFC4918
// calendar-access: Server supports all the extensions specified in RFC4791
oh.response.SetHeader("DAV", "1, 3, calendar-access").
SetHeader("Allow", "GET, HEAD, PUT, DELETE, OPTIONS, PROPFIND, REPORT").
Set(http.StatusOK, "")
return oh.response
}

View file

@ -0,0 +1,23 @@
package handlers
import (
"net/http"
)
type requestPreconditions struct {
request *http.Request
}
func (p *requestPreconditions) IfMatch(etag string) bool {
etagMatch := p.request.Header["If-Match"]
return len(etagMatch) == 0 || etagMatch[0] == "*" || etagMatch[0] == etag
}
func (p *requestPreconditions) IfMatchPresent() bool {
return len(p.request.Header["If-Match"]) != 0
}
func (p *requestPreconditions) IfNoneMatch(value string) bool {
valueMatch := p.request.Header["If-None-Match"]
return len(valueMatch) == 1 && valueMatch[0] == value
}
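// Editorial note: the sketch below is illustrative and not part of the vendored library.
// IfMatch succeeds when the If-Match header is absent, is "*", or equals the resource's ETag,
// which is how the PUT and DELETE handlers guard their writes.
func examplePrecondition(req *http.Request, currentEtag string) bool {
	precond := requestPreconditions{req}
	return precond.IfMatch(currentEtag)
}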

View file

@ -0,0 +1,49 @@
package handlers
import (
"net/http"
"encoding/xml"
"github.com/samedi/caldav-go/global"
)
type propfindHandler struct {
request *http.Request
response *Response
}
func (ph propfindHandler) Handle() *Response {
requestBody := readRequestBody(ph.request)
header := headers{ph.request.Header}
// get the target resources based on the request URL
resources, err := global.Storage.GetResources(ph.request.URL.Path, header.IsDeep())
if err != nil {
return ph.response.SetError(err)
}
// read body string to xml struct
type XMLProp2 struct {
Tags []xml.Name `xml:",any"`
}
type XMLRoot2 struct {
XMLName xml.Name
Prop XMLProp2 `xml:"DAV: prop"`
}
var requestXML XMLRoot2
xml.Unmarshal([]byte(requestBody), &requestXML)
multistatus := &multistatusResp{
Minimal: header.IsMinimal(),
}
// for each href, build the multistatus responses
for _, resource := range resources {
propstats := multistatus.Propstats(&resource, requestXML.Prop.Tags)
multistatus.AddResponse(resource.Path, true, propstats)
}
if multistatus.Minimal {
ph.response.SetHeader(HD_PREFERENCE_APPLIED, HD_PREFER_MINIMAL)
}
return ph.response.Set(207, multistatus.ToXML())
}
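// Editorial note: the constant below is illustrative and not part of the vendored library.
// It is the kind of PROPFIND body the unmarshalling above expects: a DAV: propfind element
// whose prop children become the requested tags.
const examplePropfindBody = `<?xml version="1.0" encoding="UTF-8"?>
<D:propfind xmlns:D="DAV:">
  <D:prop>
    <D:getetag/>
    <D:getcontenttype/>
    <D:resourcetype/>
  </D:prop>
</D:propfind>`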

65
vendor/github.com/samedi/caldav-go/handlers/put.go generated vendored Normal file
View file

@ -0,0 +1,65 @@
package handlers
import (
"net/http"
"github.com/samedi/caldav-go/errs"
"github.com/samedi/caldav-go/global"
)
type putHandler struct {
request *http.Request
response *Response
}
func (ph putHandler) Handle() *Response {
requestBody := readRequestBody(ph.request)
precond := requestPreconditions{ph.request}
success := false
// check if resource exists
resourcePath := ph.request.URL.Path
resource, found, err := global.Storage.GetShallowResource(resourcePath)
if err != nil && err != errs.ResourceNotFoundError {
return ph.response.SetError(err)
}
// PUT is allowed in 2 cases:
//
// 1. Item NOT FOUND and there is NO ETAG match header: CREATE a new item
if !found && !precond.IfMatchPresent() {
// create new event resource
resource, err = global.Storage.CreateResource(resourcePath, requestBody)
if err != nil {
return ph.response.SetError(err)
}
success = true
}
if found {
// TODO: Handle PUT on collections
if resource.IsCollection() {
return ph.response.Set(http.StatusPreconditionFailed, "")
}
// 2. Item exists, the resource etag is verified and there's no IF-NONE-MATCH=* header: UPDATE the item
resourceEtag, _ := resource.GetEtag()
if found && precond.IfMatch(resourceEtag) && !precond.IfNoneMatch("*") {
// update resource
resource, err = global.Storage.UpdateResource(resourcePath, requestBody)
if err != nil {
return ph.response.SetError(err)
}
success = true
}
}
if !success {
return ph.response.Set(http.StatusPreconditionFailed, "")
}
resourceEtag, _ := resource.GetEtag()
return ph.response.SetHeader("ETag", resourceEtag).
Set(http.StatusCreated, "")
}

168
vendor/github.com/samedi/caldav-go/handlers/report.go generated vendored Normal file
View file

@ -0,0 +1,168 @@
package handlers
import (
"fmt"
"strings"
"net/http"
"encoding/xml"
"github.com/samedi/caldav-go/data"
"github.com/samedi/caldav-go/ixml"
"github.com/samedi/caldav-go/global"
)
type reportHandler struct{
request *http.Request
response *Response
}
// See more at RFC4791#section-7.1
func (rh reportHandler) Handle() *Response {
requestBody := readRequestBody(rh.request)
header := headers{rh.request.Header}
urlResource, found, err := global.Storage.GetShallowResource(rh.request.URL.Path)
if !found {
return rh.response.Set(http.StatusNotFound, "")
} else if err != nil {
return rh.response.SetError(err)
}
// read body string to xml struct
var requestXML reportRootXML
xml.Unmarshal([]byte(requestBody), &requestXML)
// The resources to be reported are fetched depending on the type of the request. If it is
// a `calendar-multiget`, the resources are fetched based on a set of `hrefs` in the request body.
// If it is a `calendar-query`, the resources are calculated based on a set of filters in the request.
var resourcesToReport []reportRes
switch requestXML.XMLName {
case ixml.CALENDAR_MULTIGET_TG:
resourcesToReport, err = rh.fetchResourcesByList(urlResource, requestXML.Hrefs)
case ixml.CALENDAR_QUERY_TG:
resourcesToReport, err = rh.fetchResourcesByFilters(urlResource, requestXML.Filters)
default:
return rh.response.Set(http.StatusPreconditionFailed, "")
}
if err != nil {
return rh.response.SetError(err)
}
multistatus := &multistatusResp{
Minimal: header.IsMinimal(),
}
// for each href, build the multistatus responses
for _, r := range resourcesToReport {
propstats := multistatus.Propstats(r.resource, requestXML.Prop.Tags)
multistatus.AddResponse(r.href, r.found, propstats)
}
if multistatus.Minimal {
rh.response.SetHeader(HD_PREFERENCE_APPLIED, HD_PREFER_MINIMAL)
}
return rh.response.Set(207, multistatus.ToXML())
}
type reportPropXML struct {
Tags []xml.Name `xml:",any"`
}
type reportRootXML struct {
XMLName xml.Name
Prop reportPropXML `xml:"DAV: prop"`
Hrefs []string `xml:"DAV: href"`
Filters reportFilterXML `xml:"urn:ietf:params:xml:ns:caldav filter"`
}
type reportFilterXML struct {
XMLName xml.Name
InnerContent string `xml:",innerxml"`
}
func (this reportFilterXML) toString() string {
return fmt.Sprintf("<%s>%s</%s>", this.XMLName.Local, this.InnerContent, this.XMLName.Local)
}
// Wraps a resource that has to be reported, either fetched by filters or by a list.
// Basically it contains the original requested `href`, the actual `resource` (can be nil)
// and whether the `resource` was `found` or not
type reportRes struct {
href string
resource *data.Resource
found bool
}
// The resources are fetched based on the origin resource and a set of filters.
// If the origin resource is a collection, the filters are checked against each of the collection's resources
// to see if they match. Only the collection's resources that match the filters are returned; the ones that do not
// match the filters will not appear in the response result.
// If the origin resource is not a collection, the function just returns it and ignores any filter processing.
// [See RFC4791#section-7.8]
func (rh reportHandler) fetchResourcesByFilters(origin *data.Resource, filtersXML reportFilterXML) ([]reportRes, error) {
// The list of resources that has to be reported back in the response.
reps := []reportRes{}
if origin.IsCollection() {
filters, _ := data.ParseResourceFilters(filtersXML.toString())
resources, err := global.Storage.GetResourcesByFilters(origin.Path, filters)
if err != nil {
return reps, err
}
for _, resource := range resources {
reps = append(reps, reportRes{resource.Path, &resource, true})
}
} else {
// the origin resource is not a collection, so returns just that as the result
reps = append(reps, reportRes{origin.Path, origin, true})
}
return reps, nil
}
// The hrefs can come from (1) the request URL or (2) from the request body itself.
// If the origin resource from the URL points to a collection (2), we will check the request body
// to get the requested `hrefs` (resource paths). Each requested href has to be related to the collection.
// The ones that are not related are simply ignored.
// If the resource from the URL is NOT a collection (1) we process the report only for this resource
// and ignore any other requested hrefs that might be present in the request body.
// [See RFC4791#section-7.9]
func (rh reportHandler) fetchResourcesByList(origin *data.Resource, requestedPaths []string) ([]reportRes, error) {
reps := []reportRes{}
if origin.IsCollection() {
resources, err := global.Storage.GetResourcesByList(requestedPaths)
if err != nil {
return reps, err
}
// we put all the resources found in a map path -> resource.
// this will be used later to query which requested resource was found
// or not and build the response
resourcesMap := make(map[string]*data.Resource)
for _, resource := range resources {
r := resource
resourcesMap[resource.Path] = &r
}
for _, requestedPath := range requestedPaths {
// if the requested path does not belong to the origin collection, skip
// ('belonging' means that the path's prefix is the same as the collection path)
if !strings.HasPrefix(requestedPath, origin.Path) {
continue
}
resource, found := resourcesMap[requestedPath]
reps = append(reps, reportRes{requestedPath, resource, found})
}
} else {
reps = append(reps, reportRes{origin.Path, origin, true})
}
return reps, nil
}

View file

@ -0,0 +1,65 @@
package handlers
import (
"io"
"net/http"
"github.com/samedi/caldav-go/errs"
)
type Response struct {
Status int
Header http.Header
Body string
Error error
}
func NewResponse() *Response {
return &Response{
Header: make(http.Header),
}
}
func (this *Response) Set(status int, body string) *Response {
this.Status = status
this.Body = body
return this
}
func (this *Response) SetHeader(key, value string) *Response {
this.Header.Set(key, value)
return this
}
func (this *Response) SetError(err error) *Response {
this.Error = err
switch err {
case errs.ResourceNotFoundError:
this.Status = http.StatusNotFound
case errs.UnauthorizedError:
this.Status = http.StatusUnauthorized
case errs.ForbiddenError:
this.Status = http.StatusForbidden
default:
this.Status = http.StatusInternalServerError
}
return this
}
func (this *Response) Write(writer http.ResponseWriter) {
if this.Error == errs.UnauthorizedError {
this.SetHeader("WWW-Authenticate", `Basic realm="Restricted"`)
}
for key, values := range this.Header {
for _, value := range values {
// use Add so that multiple values for the same header key are all written
writer.Header().Add(key, value)
}
}
writer.WriteHeader(this.Status)
io.WriteString(writer, this.Body)
}
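// Editorial note: the sketch below is illustrative and not part of the vendored library.
// A handler fills in a Response and then flushes it to the http.ResponseWriter.
func exampleResponseUsage(writer http.ResponseWriter) {
	resp := NewResponse()
	resp.SetHeader("Content-Type", "text/calendar").
		Set(http.StatusOK, "BEGIN:VCALENDAR\nEND:VCALENDAR")
	resp.Write(writer)
}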

22
vendor/github.com/samedi/caldav-go/handlers/shared.go generated vendored Normal file
View file

@ -0,0 +1,22 @@
package handlers
import (
"net/http"
"io/ioutil"
"bytes"
"github.com/samedi/caldav-go/lib"
)
// Supported ICal components on this server.
var supportedComponents = []string{lib.VCALENDAR, lib.VEVENT}
// This function reads the request body and restores its content, so that
// the request body can be read a second time.
func readRequestBody(request *http.Request) string {
// Read the content
body, _ := ioutil.ReadAll(request.Body)
// Restore the io.ReadCloser to its original state
request.Body = ioutil.NopCloser(bytes.NewBuffer(body))
// Use the content
return string(body)
}

93
vendor/github.com/samedi/caldav-go/ixml/ixml.go generated vendored Normal file
View file

@ -0,0 +1,93 @@
package ixml
import (
"fmt"
"bytes"
"net/http"
"encoding/xml"
"github.com/samedi/caldav-go/lib"
)
const (
DAV_NS = "DAV:"
CALDAV_NS = "urn:ietf:params:xml:ns:caldav"
CALSERV_NS = "http://calendarserver.org/ns/"
)
var NS_PREFIXES = map[string]string{
DAV_NS: "D",
CALDAV_NS: "C",
CALSERV_NS: "CS",
}
var (
CALENDAR_TG = xml.Name{CALDAV_NS, "calendar"}
CALENDAR_DATA_TG = xml.Name{CALDAV_NS, "calendar-data"}
CALENDAR_HOME_SET_TG = xml.Name{CALDAV_NS, "calendar-home-set"}
CALENDAR_QUERY_TG = xml.Name{CALDAV_NS, "calendar-query"}
CALENDAR_MULTIGET_TG = xml.Name{CALDAV_NS, "calendar-multiget"}
CALENDAR_USER_ADDRESS_SET_TG = xml.Name{CALDAV_NS, "calendar-user-address-set"}
COLLECTION_TG = xml.Name{DAV_NS, "collection"}
CURRENT_USER_PRINCIPAL_TG = xml.Name{DAV_NS, "current-user-principal"}
DISPLAY_NAME_TG = xml.Name{DAV_NS, "displayname"}
GET_CONTENT_LENGTH_TG = xml.Name{DAV_NS, "getcontentlength"}
GET_CONTENT_TYPE_TG = xml.Name{DAV_NS, "getcontenttype"}
GET_CTAG_TG = xml.Name{CALSERV_NS, "getctag"}
GET_ETAG_TG = xml.Name{DAV_NS, "getetag"}
GET_LAST_MODIFIED_TG = xml.Name{DAV_NS, "getlastmodified"}
HREF_TG = xml.Name{DAV_NS, "href"}
OWNER_TG = xml.Name{DAV_NS, "owner"}
PRINCIPAL_TG = xml.Name{DAV_NS, "principal"}
PRINCIPAL_COLLECTION_SET_TG = xml.Name{DAV_NS, "principal-collection-set"}
PRINCIPAL_URL_TG = xml.Name{DAV_NS, "principal-URL"}
RESOURCE_TYPE_TG = xml.Name{DAV_NS, "resourcetype"}
STATUS_TG = xml.Name{DAV_NS, "status"}
SUPPORTED_CALENDAR_COMPONENT_SET_TG = xml.Name{CALDAV_NS, "supported-calendar-component-set"}
)
func Namespaces() string {
bf := new(lib.StringBuffer)
bf.Write(`xmlns:%s="%s" `, NS_PREFIXES[DAV_NS], DAV_NS)
bf.Write(`xmlns:%s="%s" `, NS_PREFIXES[CALDAV_NS], CALDAV_NS)
bf.Write(`xmlns:%s="%s"`, NS_PREFIXES[CALSERV_NS], CALSERV_NS)
return bf.String()
}
// Tag returns an XML tag as a string based on the given tag name and content. It
// takes the namespace into consideration and also whether the content is empty or not.
func Tag(xmlName xml.Name, content string) string {
name := xmlName.Local
ns := NS_PREFIXES[xmlName.Space]
if ns != "" {
ns = ns + ":"
}
if content != "" {
return fmt.Sprintf("<%s%s>%s</%s%s>", ns, name, content, ns, name)
} else {
return fmt.Sprintf("<%s%s/>", ns, name)
}
}
// HrefTag returns a DAV <D:href> tag with the given href path.
func HrefTag(href string) (tag string) {
return Tag(HREF_TG, href)
}
// StatusTag returns a DAV <D:status> tag with the given HTTP status. The
// status is translated into a label, e.g.: HTTP/1.1 404 Not Found.
func StatusTag(status int) string {
statusText := fmt.Sprintf("HTTP/1.1 %d %s", status, http.StatusText(status))
return Tag(STATUS_TG, statusText)
}
// EscapeText escapes any special character in the given text and returns the result.
func EscapeText(text string) string {
buffer := bytes.NewBufferString("")
xml.EscapeText(buffer, []byte(text))
return buffer.String()
}
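// Editorial note: the sketch below is illustrative and not part of the vendored library.
// It records the strings produced by the tag helpers above.
func exampleTags() {
	_ = Tag(DISPLAY_NAME_TG, "My Calendar") // <D:displayname>My Calendar</D:displayname>
	_ = HrefTag("/user/calendar/")          // <D:href>/user/calendar/</D:href>
	_ = StatusTag(404)                      // <D:status>HTTP/1.1 404 Not Found</D:status>
}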

8
vendor/github.com/samedi/caldav-go/lib/components.go generated vendored Normal file
View file

@ -0,0 +1,8 @@
package lib
const (
VCALENDAR = "VCALENDAR"
VEVENT = "VEVENT"
VJOURNAL = "VJOURNAL"
VTODO = "VTODO"
)

10
vendor/github.com/samedi/caldav-go/lib/paths.go generated vendored Normal file
View file

@ -0,0 +1,10 @@
package lib
import (
"path/filepath"
)
func ToSlashPath(path string) string {
cleanPath := filepath.Clean(path)
return filepath.ToSlash(cleanPath)
}

18
vendor/github.com/samedi/caldav-go/lib/strbuff.go generated vendored Normal file
View file

@ -0,0 +1,18 @@
package lib
import (
"fmt"
"bytes"
)
type StringBuffer struct {
buffer bytes.Buffer
}
func (b *StringBuffer) Write(format string, elem ...interface{}) {
b.buffer.WriteString(fmt.Sprintf(format, elem...))
}
func (b *StringBuffer) String() string {
return b.buffer.String()
}

4
vendor/github.com/samedi/caldav-go/test.sh generated vendored Executable file
View file

@ -0,0 +1,4 @@
#!/usr/bin/env bash
go test -race ./...
rm -rf test-data

5
vendor/github.com/samedi/caldav-go/version.go generated vendored Normal file
View file

@ -0,0 +1,5 @@
package caldav
const (
VERSION = "3.0.0"
)