Mirror of https://github.com/vbatts/freezing-octo-hipster.git (synced 2025-06-04 02:52:28 +00:00)

go*: one go module for the repo, no more nested

Signed-off-by: Vincent Batts <vbatts@hashbangbash.com>

parent: a788e9ac95, commit: 4ab3be9bc6

632 changed files with 153930 additions and 133148 deletions
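For context on what "one go module for the repo, no more nested" implies in practice: the nested per-directory go.mod files are dropped in favour of a single module at the repository root, and `go mod vendor` regenerates the vendor/ tree, which is why the github.com/BurntSushi/toml files below all appear as newly added vendored files. A minimal sketch of such a root module file (not the actual file from this commit; the module path is taken from the mirror URL above and the go directive is a guess):

```go
// go.mod (illustrative sketch only, not taken from this commit)
module github.com/vbatts/freezing-octo-hipster

go 1.16
```

Dependencies such as github.com/BurntSushi/toml are then recorded with `go mod tidy` and copied into vendor/ by `go mod vendor`.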
vendor/github.com/BurntSushi/toml/.gitignore: 2 lines added (generated, vendored, normal file)
@@ -0,0 +1,2 @@
/toml.test
/toml-test
vendor/github.com/BurntSushi/toml/COPYING: 21 lines added (generated, vendored, normal file)
@@ -0,0 +1,21 @@
The MIT License (MIT)

Copyright (c) 2013 TOML authors

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
vendor/github.com/BurntSushi/toml/README.md: 120 lines added (generated, vendored, normal file)
@@ -0,0 +1,120 @@
TOML stands for Tom's Obvious, Minimal Language. This Go package provides a
reflection interface similar to Go's standard library `json` and `xml` packages.

Compatible with TOML version [v1.0.0](https://toml.io/en/v1.0.0).

Documentation: https://godocs.io/github.com/BurntSushi/toml

See the [releases page](https://github.com/BurntSushi/toml/releases) for a
changelog; this information is also in the git tag annotations (e.g. `git show
v0.4.0`).

This library requires Go 1.13 or newer; add it to your go.mod with:

    % go get github.com/BurntSushi/toml@latest

It also comes with a TOML validator CLI tool:

    % go install github.com/BurntSushi/toml/cmd/tomlv@latest
    % tomlv some-toml-file.toml

### Examples
For the simplest example, consider some TOML file as just a list of keys and
values:

```toml
Age = 25
Cats = [ "Cauchy", "Plato" ]
Pi = 3.14
Perfection = [ 6, 28, 496, 8128 ]
DOB = 1987-07-05T05:45:00Z
```

Which can be decoded with:

```go
type Config struct {
	Age        int
	Cats       []string
	Pi         float64
	Perfection []int
	DOB        time.Time
}

var conf Config
_, err := toml.Decode(tomlData, &conf)
```

You can also use struct tags if your struct field name doesn't map to a TOML
key value directly:

```toml
some_key_NAME = "wat"
```

```go
type TOML struct {
	ObscureKey string `toml:"some_key_NAME"`
}
```

Beware that, like other decoders, **only exported fields** are considered when
encoding and decoding; private fields are silently ignored.

### Using the `Marshaler` and `encoding.TextUnmarshaler` interfaces
Here's an example that automatically parses values in a `mail.Address`:

```toml
contacts = [
	"Donald Duck <donald@duckburg.com>",
	"Scrooge McDuck <scrooge@duckburg.com>",
]
```

Can be decoded with:

```go
// Create address type which satisfies the encoding.TextUnmarshaler interface.
type address struct {
	*mail.Address
}

func (a *address) UnmarshalText(text []byte) error {
	var err error
	a.Address, err = mail.ParseAddress(string(text))
	return err
}

// Decode it.
func decode() {
	blob := `
		contacts = [
			"Donald Duck <donald@duckburg.com>",
			"Scrooge McDuck <scrooge@duckburg.com>",
		]
	`

	var contacts struct {
		Contacts []address
	}

	_, err := toml.Decode(blob, &contacts)
	if err != nil {
		log.Fatal(err)
	}

	for _, c := range contacts.Contacts {
		fmt.Printf("%#v\n", c.Address)
	}

	// Output:
	// &mail.Address{Name:"Donald Duck", Address:"donald@duckburg.com"}
	// &mail.Address{Name:"Scrooge McDuck", Address:"scrooge@duckburg.com"}
}
```

To target TOML specifically you can implement the `UnmarshalTOML` interface in
a similar way.

### More complex usage
See the [`_example/`](/_example) directory for a more complex example.
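Editorial aside (not part of the vendored README.md): the closing note above says `UnmarshalTOML` can be implemented "in a similar way" without showing it. A minimal sketch under that reading, against the `Unmarshaler` interface declared in decode.go below (`UnmarshalTOML(interface{}) error`); the `Dish` type and its keys are invented for illustration:

```go
package recipes

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// Dish is a hypothetical type, not part of this repository.
type Dish struct {
	Name   string
	Serves int
}

// Compile-time check that *Dish satisfies the Unmarshaler interface from decode.go.
var _ toml.Unmarshaler = (*Dish)(nil)

// UnmarshalTOML receives the already-parsed TOML value: tables arrive as
// map[string]interface{} and integers as int64 (see the type switches in
// decode.go below).
func (d *Dish) UnmarshalTOML(data interface{}) error {
	m, ok := data.(map[string]interface{})
	if !ok {
		return fmt.Errorf("dish: expected a TOML table, got %T", data)
	}
	if name, ok := m["name"].(string); ok {
		d.Name = name
	}
	if serves, ok := m["serves"].(int64); ok {
		d.Serves = int(serves)
	}
	return nil
}
```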
vendor/github.com/BurntSushi/toml/decode.go: 602 lines added (generated, vendored, normal file)
@@ -0,0 +1,602 @@
package toml
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"math"
|
||||
"os"
|
||||
"reflect"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
// Unmarshaler is the interface implemented by objects that can unmarshal a
|
||||
// TOML description of themselves.
|
||||
type Unmarshaler interface {
|
||||
UnmarshalTOML(interface{}) error
|
||||
}
|
||||
|
||||
// Unmarshal decodes the contents of data in TOML format into a pointer v.
|
||||
//
|
||||
// See [Decoder] for a description of the decoding process.
|
||||
func Unmarshal(data []byte, v interface{}) error {
|
||||
_, err := NewDecoder(bytes.NewReader(data)).Decode(v)
|
||||
return err
|
||||
}
|
||||
|
||||
// Decode the TOML data in to the pointer v.
|
||||
//
|
||||
// See [Decoder] for a description of the decoding process.
|
||||
func Decode(data string, v interface{}) (MetaData, error) {
|
||||
return NewDecoder(strings.NewReader(data)).Decode(v)
|
||||
}
|
||||
|
||||
// DecodeFile reads the contents of a file and decodes it with [Decode].
|
||||
func DecodeFile(path string, v interface{}) (MetaData, error) {
|
||||
fp, err := os.Open(path)
|
||||
if err != nil {
|
||||
return MetaData{}, err
|
||||
}
|
||||
defer fp.Close()
|
||||
return NewDecoder(fp).Decode(v)
|
||||
}
|
||||
|
||||
// Primitive is a TOML value that hasn't been decoded into a Go value.
|
||||
//
|
||||
// This type can be used for any value, which will cause decoding to be delayed.
|
||||
// You can use [PrimitiveDecode] to "manually" decode these values.
|
||||
//
|
||||
// NOTE: The underlying representation of a `Primitive` value is subject to
|
||||
// change. Do not rely on it.
|
||||
//
|
||||
// NOTE: Primitive values are still parsed, so using them will only avoid the
|
||||
// overhead of reflection. They can be useful when you don't know the exact type
|
||||
// of TOML data until runtime.
|
||||
type Primitive struct {
|
||||
undecoded interface{}
|
||||
context Key
|
||||
}
|
||||
|
||||
// The significand precision for float32 and float64 is 24 and 53 bits; this is
|
||||
// the range a natural number can be stored in a float without loss of data.
|
||||
const (
|
||||
maxSafeFloat32Int = 16777215 // 2^24-1
|
||||
maxSafeFloat64Int = int64(9007199254740991) // 2^53-1
|
||||
)
|
||||
|
||||
// Decoder decodes TOML data.
|
||||
//
|
||||
// TOML tables correspond to Go structs or maps; they can be used
|
||||
// interchangeably, but structs offer better type safety.
|
||||
//
|
||||
// TOML table arrays correspond to either a slice of structs or a slice of maps.
|
||||
//
|
||||
// TOML datetimes correspond to [time.Time]. Local datetimes are parsed in the
|
||||
// local timezone.
|
||||
//
|
||||
// [time.Duration] types are treated as nanoseconds if the TOML value is an
|
||||
// integer, or they're parsed with time.ParseDuration() if they're strings.
|
||||
//
|
||||
// All other TOML types (float, string, int, bool and array) correspond to the
|
||||
// obvious Go types.
|
||||
//
|
||||
// An exception to the above rules is if a type implements the TextUnmarshaler
|
||||
// interface, in which case any primitive TOML value (floats, strings, integers,
|
||||
// booleans, datetimes) will be converted to a []byte and given to the value's
|
||||
// UnmarshalText method. See the Unmarshaler example for a demonstration with
|
||||
// email addresses.
|
||||
//
|
||||
// # Key mapping
|
||||
//
|
||||
// TOML keys can map to either keys in a Go map or field names in a Go struct.
|
||||
// The special `toml` struct tag can be used to map TOML keys to struct fields
|
||||
// that don't match the key name exactly (see the example). A case insensitive
|
||||
// match to struct names will be tried if an exact match can't be found.
|
||||
//
|
||||
// The mapping between TOML values and Go values is loose. That is, there may
|
||||
// exist TOML values that cannot be placed into your representation, and there
|
||||
// may be parts of your representation that do not correspond to TOML values.
|
||||
// This loose mapping can be made stricter by using the IsDefined and/or
|
||||
// Undecoded methods on the MetaData returned.
|
||||
//
|
||||
// This decoder does not handle cyclic types. Decode will not terminate if a
|
||||
// cyclic type is passed.
|
||||
type Decoder struct {
|
||||
r io.Reader
|
||||
}
|
||||
|
||||
// NewDecoder creates a new Decoder.
|
||||
func NewDecoder(r io.Reader) *Decoder {
|
||||
return &Decoder{r: r}
|
||||
}
|
||||
|
||||
var (
|
||||
unmarshalToml = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
|
||||
unmarshalText = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
|
||||
primitiveType = reflect.TypeOf((*Primitive)(nil)).Elem()
|
||||
)
|
||||
|
||||
// Decode TOML data in to the pointer `v`.
|
||||
func (dec *Decoder) Decode(v interface{}) (MetaData, error) {
|
||||
rv := reflect.ValueOf(v)
|
||||
if rv.Kind() != reflect.Ptr {
|
||||
s := "%q"
|
||||
if reflect.TypeOf(v) == nil {
|
||||
s = "%v"
|
||||
}
|
||||
|
||||
return MetaData{}, fmt.Errorf("toml: cannot decode to non-pointer "+s, reflect.TypeOf(v))
|
||||
}
|
||||
if rv.IsNil() {
|
||||
return MetaData{}, fmt.Errorf("toml: cannot decode to nil value of %q", reflect.TypeOf(v))
|
||||
}
|
||||
|
||||
// Check if this is a supported type: struct, map, interface{}, or something
|
||||
// that implements UnmarshalTOML or UnmarshalText.
|
||||
rv = indirect(rv)
|
||||
rt := rv.Type()
|
||||
if rv.Kind() != reflect.Struct && rv.Kind() != reflect.Map &&
|
||||
!(rv.Kind() == reflect.Interface && rv.NumMethod() == 0) &&
|
||||
!rt.Implements(unmarshalToml) && !rt.Implements(unmarshalText) {
|
||||
return MetaData{}, fmt.Errorf("toml: cannot decode to type %s", rt)
|
||||
}
|
||||
|
||||
// TODO: parser should read from io.Reader? Or at the very least, make it
|
||||
// read from []byte rather than string
|
||||
data, err := ioutil.ReadAll(dec.r)
|
||||
if err != nil {
|
||||
return MetaData{}, err
|
||||
}
|
||||
|
||||
p, err := parse(string(data))
|
||||
if err != nil {
|
||||
return MetaData{}, err
|
||||
}
|
||||
|
||||
md := MetaData{
|
||||
mapping: p.mapping,
|
||||
keyInfo: p.keyInfo,
|
||||
keys: p.ordered,
|
||||
decoded: make(map[string]struct{}, len(p.ordered)),
|
||||
context: nil,
|
||||
data: data,
|
||||
}
|
||||
return md, md.unify(p.mapping, rv)
|
||||
}
|
||||
|
||||
// PrimitiveDecode is just like the other Decode* functions, except it decodes a
|
||||
// TOML value that has already been parsed. Valid primitive values can *only* be
|
||||
// obtained from values filled by the decoder functions, including this method.
|
||||
// (i.e., v may contain more [Primitive] values.)
|
||||
//
|
||||
// Meta data for primitive values is included in the meta data returned by the
|
||||
// Decode* functions with one exception: keys returned by the Undecoded method
|
||||
// will only reflect keys that were decoded. Namely, any keys hidden behind a
|
||||
// Primitive will be considered undecoded. Executing this method will update the
|
||||
// undecoded keys in the meta data. (See the example.)
|
||||
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
|
||||
md.context = primValue.context
|
||||
defer func() { md.context = nil }()
|
||||
return md.unify(primValue.undecoded, rvalue(v))
|
||||
}
|
||||
|
||||
// unify performs a sort of type unification based on the structure of `rv`,
|
||||
// which is the client representation.
|
||||
//
|
||||
// Any type mismatch produces an error. Finding a type that we don't know
|
||||
// how to handle produces an unsupported type error.
|
||||
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
|
||||
// Special case. Look for a `Primitive` value.
|
||||
// TODO: #76 would make this superfluous after implemented.
|
||||
if rv.Type() == primitiveType {
|
||||
// Save the undecoded data and the key context into the primitive
|
||||
// value.
|
||||
context := make(Key, len(md.context))
|
||||
copy(context, md.context)
|
||||
rv.Set(reflect.ValueOf(Primitive{
|
||||
undecoded: data,
|
||||
context: context,
|
||||
}))
|
||||
return nil
|
||||
}
|
||||
|
||||
rvi := rv.Interface()
|
||||
if v, ok := rvi.(Unmarshaler); ok {
|
||||
return v.UnmarshalTOML(data)
|
||||
}
|
||||
if v, ok := rvi.(encoding.TextUnmarshaler); ok {
|
||||
return md.unifyText(data, v)
|
||||
}
|
||||
|
||||
// TODO:
|
||||
// The behavior here is incorrect whenever a Go type satisfies the
|
||||
// encoding.TextUnmarshaler interface but also corresponds to a TOML hash or
|
||||
// array. In particular, the unmarshaler should only be applied to primitive
|
||||
// TOML values. But at this point, it will be applied to all kinds of values
|
||||
// and produce an incorrect error whenever those values are hashes or arrays
|
||||
// (including arrays of tables).
|
||||
|
||||
k := rv.Kind()
|
||||
|
||||
if k >= reflect.Int && k <= reflect.Uint64 {
|
||||
return md.unifyInt(data, rv)
|
||||
}
|
||||
switch k {
|
||||
case reflect.Ptr:
|
||||
elem := reflect.New(rv.Type().Elem())
|
||||
err := md.unify(data, reflect.Indirect(elem))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
rv.Set(elem)
|
||||
return nil
|
||||
case reflect.Struct:
|
||||
return md.unifyStruct(data, rv)
|
||||
case reflect.Map:
|
||||
return md.unifyMap(data, rv)
|
||||
case reflect.Array:
|
||||
return md.unifyArray(data, rv)
|
||||
case reflect.Slice:
|
||||
return md.unifySlice(data, rv)
|
||||
case reflect.String:
|
||||
return md.unifyString(data, rv)
|
||||
case reflect.Bool:
|
||||
return md.unifyBool(data, rv)
|
||||
case reflect.Interface:
|
||||
if rv.NumMethod() > 0 { /// Only empty interfaces are supported.
|
||||
return md.e("unsupported type %s", rv.Type())
|
||||
}
|
||||
return md.unifyAnything(data, rv)
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return md.unifyFloat64(data, rv)
|
||||
}
|
||||
return md.e("unsupported type %s", rv.Kind())
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
|
||||
tmap, ok := mapping.(map[string]interface{})
|
||||
if !ok {
|
||||
if mapping == nil {
|
||||
return nil
|
||||
}
|
||||
return md.e("type mismatch for %s: expected table but found %T",
|
||||
rv.Type().String(), mapping)
|
||||
}
|
||||
|
||||
for key, datum := range tmap {
|
||||
var f *field
|
||||
fields := cachedTypeFields(rv.Type())
|
||||
for i := range fields {
|
||||
ff := &fields[i]
|
||||
if ff.name == key {
|
||||
f = ff
|
||||
break
|
||||
}
|
||||
if f == nil && strings.EqualFold(ff.name, key) {
|
||||
f = ff
|
||||
}
|
||||
}
|
||||
if f != nil {
|
||||
subv := rv
|
||||
for _, i := range f.index {
|
||||
subv = indirect(subv.Field(i))
|
||||
}
|
||||
|
||||
if isUnifiable(subv) {
|
||||
md.decoded[md.context.add(key).String()] = struct{}{}
|
||||
md.context = append(md.context, key)
|
||||
|
||||
err := md.unify(datum, subv)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
md.context = md.context[0 : len(md.context)-1]
|
||||
} else if f.name != "" {
|
||||
return md.e("cannot write unexported field %s.%s", rv.Type().String(), f.name)
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
|
||||
keyType := rv.Type().Key().Kind()
|
||||
if keyType != reflect.String && keyType != reflect.Interface {
|
||||
return fmt.Errorf("toml: cannot decode to a map with non-string key type (%s in %q)",
|
||||
keyType, rv.Type())
|
||||
}
|
||||
|
||||
tmap, ok := mapping.(map[string]interface{})
|
||||
if !ok {
|
||||
if tmap == nil {
|
||||
return nil
|
||||
}
|
||||
return md.badtype("map", mapping)
|
||||
}
|
||||
if rv.IsNil() {
|
||||
rv.Set(reflect.MakeMap(rv.Type()))
|
||||
}
|
||||
for k, v := range tmap {
|
||||
md.decoded[md.context.add(k).String()] = struct{}{}
|
||||
md.context = append(md.context, k)
|
||||
|
||||
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
|
||||
|
||||
err := md.unify(v, indirect(rvval))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
md.context = md.context[0 : len(md.context)-1]
|
||||
|
||||
rvkey := indirect(reflect.New(rv.Type().Key()))
|
||||
|
||||
switch keyType {
|
||||
case reflect.Interface:
|
||||
rvkey.Set(reflect.ValueOf(k))
|
||||
case reflect.String:
|
||||
rvkey.SetString(k)
|
||||
}
|
||||
|
||||
rv.SetMapIndex(rvkey, rvval)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
|
||||
datav := reflect.ValueOf(data)
|
||||
if datav.Kind() != reflect.Slice {
|
||||
if !datav.IsValid() {
|
||||
return nil
|
||||
}
|
||||
return md.badtype("slice", data)
|
||||
}
|
||||
if l := datav.Len(); l != rv.Len() {
|
||||
return md.e("expected array length %d; got TOML array of length %d", rv.Len(), l)
|
||||
}
|
||||
return md.unifySliceArray(datav, rv)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
|
||||
datav := reflect.ValueOf(data)
|
||||
if datav.Kind() != reflect.Slice {
|
||||
if !datav.IsValid() {
|
||||
return nil
|
||||
}
|
||||
return md.badtype("slice", data)
|
||||
}
|
||||
n := datav.Len()
|
||||
if rv.IsNil() || rv.Cap() < n {
|
||||
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
|
||||
}
|
||||
rv.SetLen(n)
|
||||
return md.unifySliceArray(datav, rv)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
|
||||
l := data.Len()
|
||||
for i := 0; i < l; i++ {
|
||||
err := md.unify(data.Index(i).Interface(), indirect(rv.Index(i)))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
|
||||
_, ok := rv.Interface().(json.Number)
|
||||
if ok {
|
||||
if i, ok := data.(int64); ok {
|
||||
rv.SetString(strconv.FormatInt(i, 10))
|
||||
} else if f, ok := data.(float64); ok {
|
||||
rv.SetString(strconv.FormatFloat(f, 'f', -1, 64))
|
||||
} else {
|
||||
return md.badtype("string", data)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
if s, ok := data.(string); ok {
|
||||
rv.SetString(s)
|
||||
return nil
|
||||
}
|
||||
return md.badtype("string", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
|
||||
rvk := rv.Kind()
|
||||
|
||||
if num, ok := data.(float64); ok {
|
||||
switch rvk {
|
||||
case reflect.Float32:
|
||||
if num < -math.MaxFloat32 || num > math.MaxFloat32 {
|
||||
return md.parseErr(errParseRange{i: num, size: rvk.String()})
|
||||
}
|
||||
fallthrough
|
||||
case reflect.Float64:
|
||||
rv.SetFloat(num)
|
||||
default:
|
||||
panic("bug")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
if num, ok := data.(int64); ok {
|
||||
if (rvk == reflect.Float32 && (num < -maxSafeFloat32Int || num > maxSafeFloat32Int)) ||
|
||||
(rvk == reflect.Float64 && (num < -maxSafeFloat64Int || num > maxSafeFloat64Int)) {
|
||||
return md.parseErr(errParseRange{i: num, size: rvk.String()})
|
||||
}
|
||||
rv.SetFloat(float64(num))
|
||||
return nil
|
||||
}
|
||||
|
||||
return md.badtype("float", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
|
||||
_, ok := rv.Interface().(time.Duration)
|
||||
if ok {
|
||||
// Parse as string duration, and fall back to regular integer parsing
|
||||
// (as nanosecond) if this is not a string.
|
||||
if s, ok := data.(string); ok {
|
||||
dur, err := time.ParseDuration(s)
|
||||
if err != nil {
|
||||
return md.parseErr(errParseDuration{s})
|
||||
}
|
||||
rv.SetInt(int64(dur))
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
num, ok := data.(int64)
|
||||
if !ok {
|
||||
return md.badtype("integer", data)
|
||||
}
|
||||
|
||||
rvk := rv.Kind()
|
||||
switch {
|
||||
case rvk >= reflect.Int && rvk <= reflect.Int64:
|
||||
if (rvk == reflect.Int8 && (num < math.MinInt8 || num > math.MaxInt8)) ||
|
||||
(rvk == reflect.Int16 && (num < math.MinInt16 || num > math.MaxInt16)) ||
|
||||
(rvk == reflect.Int32 && (num < math.MinInt32 || num > math.MaxInt32)) {
|
||||
return md.parseErr(errParseRange{i: num, size: rvk.String()})
|
||||
}
|
||||
rv.SetInt(num)
|
||||
case rvk >= reflect.Uint && rvk <= reflect.Uint64:
|
||||
unum := uint64(num)
|
||||
if rvk == reflect.Uint8 && (num < 0 || unum > math.MaxUint8) ||
|
||||
rvk == reflect.Uint16 && (num < 0 || unum > math.MaxUint16) ||
|
||||
rvk == reflect.Uint32 && (num < 0 || unum > math.MaxUint32) {
|
||||
return md.parseErr(errParseRange{i: num, size: rvk.String()})
|
||||
}
|
||||
rv.SetUint(unum)
|
||||
default:
|
||||
panic("unreachable")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
|
||||
if b, ok := data.(bool); ok {
|
||||
rv.SetBool(b)
|
||||
return nil
|
||||
}
|
||||
return md.badtype("boolean", data)
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
|
||||
rv.Set(reflect.ValueOf(data))
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) unifyText(data interface{}, v encoding.TextUnmarshaler) error {
|
||||
var s string
|
||||
switch sdata := data.(type) {
|
||||
case Marshaler:
|
||||
text, err := sdata.MarshalTOML()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s = string(text)
|
||||
case encoding.TextMarshaler:
|
||||
text, err := sdata.MarshalText()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
s = string(text)
|
||||
case fmt.Stringer:
|
||||
s = sdata.String()
|
||||
case string:
|
||||
s = sdata
|
||||
case bool:
|
||||
s = fmt.Sprintf("%v", sdata)
|
||||
case int64:
|
||||
s = fmt.Sprintf("%d", sdata)
|
||||
case float64:
|
||||
s = fmt.Sprintf("%f", sdata)
|
||||
default:
|
||||
return md.badtype("primitive (string-like)", data)
|
||||
}
|
||||
if err := v.UnmarshalText([]byte(s)); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (md *MetaData) badtype(dst string, data interface{}) error {
|
||||
return md.e("incompatible types: TOML value has type %T; destination has type %s", data, dst)
|
||||
}
|
||||
|
||||
func (md *MetaData) parseErr(err error) error {
|
||||
k := md.context.String()
|
||||
return ParseError{
|
||||
LastKey: k,
|
||||
Position: md.keyInfo[k].pos,
|
||||
Line: md.keyInfo[k].pos.Line,
|
||||
err: err,
|
||||
input: string(md.data),
|
||||
}
|
||||
}
|
||||
|
||||
func (md *MetaData) e(format string, args ...interface{}) error {
|
||||
f := "toml: "
|
||||
if len(md.context) > 0 {
|
||||
f = fmt.Sprintf("toml: (last key %q): ", md.context)
|
||||
p := md.keyInfo[md.context.String()].pos
|
||||
if p.Line > 0 {
|
||||
f = fmt.Sprintf("toml: line %d (last key %q): ", p.Line, md.context)
|
||||
}
|
||||
}
|
||||
return fmt.Errorf(f+format, args...)
|
||||
}
|
||||
|
||||
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
|
||||
func rvalue(v interface{}) reflect.Value {
|
||||
return indirect(reflect.ValueOf(v))
|
||||
}
|
||||
|
||||
// indirect returns the value pointed to by a pointer.
|
||||
//
|
||||
// Pointers are followed until the value is not a pointer. New values are
|
||||
// allocated for each nil pointer.
|
||||
//
|
||||
// An exception to this rule is if the value satisfies an interface of interest
|
||||
// to us (like encoding.TextUnmarshaler).
|
||||
func indirect(v reflect.Value) reflect.Value {
|
||||
if v.Kind() != reflect.Ptr {
|
||||
if v.CanSet() {
|
||||
pv := v.Addr()
|
||||
pvi := pv.Interface()
|
||||
if _, ok := pvi.(encoding.TextUnmarshaler); ok {
|
||||
return pv
|
||||
}
|
||||
if _, ok := pvi.(Unmarshaler); ok {
|
||||
return pv
|
||||
}
|
||||
}
|
||||
return v
|
||||
}
|
||||
if v.IsNil() {
|
||||
v.Set(reflect.New(v.Type().Elem()))
|
||||
}
|
||||
return indirect(reflect.Indirect(v))
|
||||
}
|
||||
|
||||
func isUnifiable(rv reflect.Value) bool {
|
||||
if rv.CanSet() {
|
||||
return true
|
||||
}
|
||||
rvi := rv.Interface()
|
||||
if _, ok := rvi.(encoding.TextUnmarshaler); ok {
|
||||
return true
|
||||
}
|
||||
if _, ok := rvi.(Unmarshaler); ok {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
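Editorial aside before the next file: the Decoder documentation above states that time.Duration fields accept either a TOML integer (taken as nanoseconds) or a string run through time.ParseDuration, which is what unifyInt implements. A small sketch of both spellings (the Timeouts struct is invented for illustration):

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/BurntSushi/toml"
)

// Timeouts is an illustrative struct, not something from this repository.
type Timeouts struct {
	Read  time.Duration
	Write time.Duration
}

func main() {
	blob := `
Read  = "1m30s"      # string: parsed with time.ParseDuration
Write = 5000000000   # integer: interpreted as nanoseconds (5s)
`
	var t Timeouts
	if _, err := toml.Decode(blob, &t); err != nil {
		log.Fatal(err)
	}
	fmt.Println(t.Read, t.Write) // 1m30s 5s
}
```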
vendor/github.com/BurntSushi/toml/decode_go116.go: 19 lines added (generated, vendored, normal file)
@@ -0,0 +1,19 @@
//go:build go1.16
// +build go1.16

package toml

import (
	"io/fs"
)

// DecodeFS reads the contents of a file from [fs.FS] and decodes it with
// [Decode].
func DecodeFS(fsys fs.FS, path string, v interface{}) (MetaData, error) {
	fp, err := fsys.Open(path)
	if err != nil {
		return MetaData{}, err
	}
	defer fp.Close()
	return NewDecoder(fp).Decode(v)
}
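A quick usage sketch for the DecodeFS function added above (editorial aside, not part of the vendored code): because it accepts any fs.FS, it pairs naturally with an embedded filesystem on Go 1.16+. The config.toml name and ServerConfig shape are assumptions for illustration:

```go
package main

import (
	"embed"
	"log"

	"github.com/BurntSushi/toml"
)

//go:embed config.toml
var configFS embed.FS // assumes a config.toml next to this source file

// ServerConfig is an illustrative shape, not from this repository.
type ServerConfig struct {
	Addr    string
	Workers int
}

func main() {
	var cfg ServerConfig
	// embed.FS satisfies fs.FS, so DecodeFS reads the embedded file directly.
	if _, err := toml.DecodeFS(configFS, "config.toml", &cfg); err != nil {
		log.Fatal(err)
	}
	log.Printf("listening on %s with %d workers", cfg.Addr, cfg.Workers)
}
```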
vendor/github.com/BurntSushi/toml/deprecated.go: 29 lines added (generated, vendored, normal file)
@@ -0,0 +1,29 @@
package toml

import (
	"encoding"
	"io"
)

// TextMarshaler is an alias for encoding.TextMarshaler.
//
// Deprecated: use encoding.TextMarshaler
type TextMarshaler encoding.TextMarshaler

// TextUnmarshaler is an alias for encoding.TextUnmarshaler.
//
// Deprecated: use encoding.TextUnmarshaler
type TextUnmarshaler encoding.TextUnmarshaler

// PrimitiveDecode is an alias for MetaData.PrimitiveDecode().
//
// Deprecated: use MetaData.PrimitiveDecode.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
	md := MetaData{decoded: make(map[string]struct{})}
	return md.unify(primValue.undecoded, rvalue(v))
}

// DecodeReader is an alias for NewDecoder(r).Decode(v).
//
// Deprecated: use NewDecoder(reader).Decode(&value).
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) { return NewDecoder(r).Decode(v) }
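The Deprecated notes above amount to mechanical renames; a tiny sketch of the suggested DecodeReader migration (editorial aside; the reader contents are placeholders):

```go
package main

import (
	"log"
	"strings"

	"github.com/BurntSushi/toml"
)

func main() {
	r := strings.NewReader(`answer = 42`)
	var cfg map[string]interface{}

	// Deprecated spelling: _, err := toml.DecodeReader(r, &cfg)
	// Recommended spelling per the Deprecated notes above:
	if _, err := toml.NewDecoder(r).Decode(&cfg); err != nil {
		log.Fatal(err)
	}
	log.Println(cfg["answer"]) // 42
}
```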
vendor/github.com/BurntSushi/toml/doc.go: 11 lines added (generated, vendored, normal file)
@@ -0,0 +1,11 @@
// Package toml implements decoding and encoding of TOML files.
//
// This package supports TOML v1.0.0, as specified at https://toml.io
//
// There is also support for delaying decoding with the Primitive type, and
// querying the set of keys in a TOML document with the MetaData type.
//
// The github.com/BurntSushi/toml/cmd/tomlv package implements a TOML validator,
// and can be used to verify if a TOML document is valid. It can also be used to
// print the type of each key.
package toml
vendor/github.com/BurntSushi/toml/encode.go: 759 lines added (generated, vendored, normal file)
@@ -0,0 +1,759 @@
package toml
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"encoding"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"reflect"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/BurntSushi/toml/internal"
|
||||
)
|
||||
|
||||
type tomlEncodeError struct{ error }
|
||||
|
||||
var (
|
||||
errArrayNilElement = errors.New("toml: cannot encode array with nil element")
|
||||
errNonString = errors.New("toml: cannot encode a map with non-string key type")
|
||||
errNoKey = errors.New("toml: top-level values must be Go maps or structs")
|
||||
errAnything = errors.New("") // used in testing
|
||||
)
|
||||
|
||||
var dblQuotedReplacer = strings.NewReplacer(
|
||||
"\"", "\\\"",
|
||||
"\\", "\\\\",
|
||||
"\x00", `\u0000`,
|
||||
"\x01", `\u0001`,
|
||||
"\x02", `\u0002`,
|
||||
"\x03", `\u0003`,
|
||||
"\x04", `\u0004`,
|
||||
"\x05", `\u0005`,
|
||||
"\x06", `\u0006`,
|
||||
"\x07", `\u0007`,
|
||||
"\b", `\b`,
|
||||
"\t", `\t`,
|
||||
"\n", `\n`,
|
||||
"\x0b", `\u000b`,
|
||||
"\f", `\f`,
|
||||
"\r", `\r`,
|
||||
"\x0e", `\u000e`,
|
||||
"\x0f", `\u000f`,
|
||||
"\x10", `\u0010`,
|
||||
"\x11", `\u0011`,
|
||||
"\x12", `\u0012`,
|
||||
"\x13", `\u0013`,
|
||||
"\x14", `\u0014`,
|
||||
"\x15", `\u0015`,
|
||||
"\x16", `\u0016`,
|
||||
"\x17", `\u0017`,
|
||||
"\x18", `\u0018`,
|
||||
"\x19", `\u0019`,
|
||||
"\x1a", `\u001a`,
|
||||
"\x1b", `\u001b`,
|
||||
"\x1c", `\u001c`,
|
||||
"\x1d", `\u001d`,
|
||||
"\x1e", `\u001e`,
|
||||
"\x1f", `\u001f`,
|
||||
"\x7f", `\u007f`,
|
||||
)
|
||||
|
||||
var (
|
||||
marshalToml = reflect.TypeOf((*Marshaler)(nil)).Elem()
|
||||
marshalText = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
|
||||
timeType = reflect.TypeOf((*time.Time)(nil)).Elem()
|
||||
)
|
||||
|
||||
// Marshaler is the interface implemented by types that can marshal themselves
|
||||
// into valid TOML.
|
||||
type Marshaler interface {
|
||||
MarshalTOML() ([]byte, error)
|
||||
}
|
||||
|
||||
// Encoder encodes a Go value to a TOML document.
|
||||
//
|
||||
// The mapping between Go values and TOML values should be precisely the same as
|
||||
// for [Decode].
|
||||
//
|
||||
// time.Time is encoded as a RFC 3339 string, and time.Duration as its string
|
||||
// representation.
|
||||
//
|
||||
// The [Marshaler] and [encoding.TextMarshaler] interfaces are supported to
|
||||
// encode the value as custom TOML.
|
||||
//
|
||||
// If you want to write arbitrary binary data then you will need to use
|
||||
// something like base64 since TOML does not have any binary types.
|
||||
//
|
||||
// When encoding TOML hashes (Go maps or structs), keys without any sub-hashes
|
||||
// are encoded first.
|
||||
//
|
||||
// Go maps will be sorted alphabetically by key for deterministic output.
|
||||
//
|
||||
// The toml struct tag can be used to provide the key name; if omitted the
|
||||
// struct field name will be used. If the "omitempty" option is present the
|
||||
// following value will be skipped:
|
||||
//
|
||||
// - arrays, slices, maps, and string with len of 0
|
||||
// - struct with all zero values
|
||||
// - bool false
|
||||
//
|
||||
// If omitzero is given all int and float types with a value of 0 will be
|
||||
// skipped.
|
||||
//
|
||||
// Encoding Go values without a corresponding TOML representation will return an
|
||||
// error. Examples of this includes maps with non-string keys, slices with nil
|
||||
// elements, embedded non-struct types, and nested slices containing maps or
|
||||
// structs. (e.g. [][]map[string]string is not allowed but []map[string]string
|
||||
// is okay, as is []map[string][]string).
|
||||
//
|
||||
// NOTE: only exported keys are encoded due to the use of reflection. Unexported
|
||||
// keys are silently discarded.
|
||||
type Encoder struct {
|
||||
// String to use for a single indentation level; default is two spaces.
|
||||
Indent string
|
||||
|
||||
w *bufio.Writer
|
||||
hasWritten bool // written any output to w yet?
|
||||
}
|
||||
|
||||
// NewEncoder creates a new Encoder.
|
||||
func NewEncoder(w io.Writer) *Encoder {
|
||||
return &Encoder{
|
||||
w: bufio.NewWriter(w),
|
||||
Indent: " ",
|
||||
}
|
||||
}
|
||||
|
||||
// Encode writes a TOML representation of the Go value to the [Encoder]'s writer.
|
||||
//
|
||||
// An error is returned if the value given cannot be encoded to a valid TOML
|
||||
// document.
|
||||
func (enc *Encoder) Encode(v interface{}) error {
|
||||
rv := eindirect(reflect.ValueOf(v))
|
||||
err := enc.safeEncode(Key([]string{}), rv)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return enc.w.Flush()
|
||||
}
|
||||
|
||||
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
if terr, ok := r.(tomlEncodeError); ok {
|
||||
err = terr.error
|
||||
return
|
||||
}
|
||||
panic(r)
|
||||
}
|
||||
}()
|
||||
enc.encode(key, rv)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (enc *Encoder) encode(key Key, rv reflect.Value) {
|
||||
// If we can marshal the type to text, then we use that. This prevents the
|
||||
// encoder for handling these types as generic structs (or whatever the
|
||||
// underlying type of a TextMarshaler is).
|
||||
switch {
|
||||
case isMarshaler(rv):
|
||||
enc.writeKeyValue(key, rv, false)
|
||||
return
|
||||
case rv.Type() == primitiveType: // TODO: #76 would make this superfluous after implemented.
|
||||
enc.encode(key, reflect.ValueOf(rv.Interface().(Primitive).undecoded))
|
||||
return
|
||||
}
|
||||
|
||||
k := rv.Kind()
|
||||
switch k {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
|
||||
reflect.Int64,
|
||||
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
|
||||
reflect.Uint64,
|
||||
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
|
||||
enc.writeKeyValue(key, rv, false)
|
||||
case reflect.Array, reflect.Slice:
|
||||
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
|
||||
enc.eArrayOfTables(key, rv)
|
||||
} else {
|
||||
enc.writeKeyValue(key, rv, false)
|
||||
}
|
||||
case reflect.Interface:
|
||||
if rv.IsNil() {
|
||||
return
|
||||
}
|
||||
enc.encode(key, rv.Elem())
|
||||
case reflect.Map:
|
||||
if rv.IsNil() {
|
||||
return
|
||||
}
|
||||
enc.eTable(key, rv)
|
||||
case reflect.Ptr:
|
||||
if rv.IsNil() {
|
||||
return
|
||||
}
|
||||
enc.encode(key, rv.Elem())
|
||||
case reflect.Struct:
|
||||
enc.eTable(key, rv)
|
||||
default:
|
||||
encPanic(fmt.Errorf("unsupported type for key '%s': %s", key, k))
|
||||
}
|
||||
}
|
||||
|
||||
// eElement encodes any value that can be an array element.
|
||||
func (enc *Encoder) eElement(rv reflect.Value) {
|
||||
switch v := rv.Interface().(type) {
|
||||
case time.Time: // Using TextMarshaler adds extra quotes, which we don't want.
|
||||
format := time.RFC3339Nano
|
||||
switch v.Location() {
|
||||
case internal.LocalDatetime:
|
||||
format = "2006-01-02T15:04:05.999999999"
|
||||
case internal.LocalDate:
|
||||
format = "2006-01-02"
|
||||
case internal.LocalTime:
|
||||
format = "15:04:05.999999999"
|
||||
}
|
||||
switch v.Location() {
|
||||
default:
|
||||
enc.wf(v.Format(format))
|
||||
case internal.LocalDatetime, internal.LocalDate, internal.LocalTime:
|
||||
enc.wf(v.In(time.UTC).Format(format))
|
||||
}
|
||||
return
|
||||
case Marshaler:
|
||||
s, err := v.MarshalTOML()
|
||||
if err != nil {
|
||||
encPanic(err)
|
||||
}
|
||||
if s == nil {
|
||||
encPanic(errors.New("MarshalTOML returned nil and no error"))
|
||||
}
|
||||
enc.w.Write(s)
|
||||
return
|
||||
case encoding.TextMarshaler:
|
||||
s, err := v.MarshalText()
|
||||
if err != nil {
|
||||
encPanic(err)
|
||||
}
|
||||
if s == nil {
|
||||
encPanic(errors.New("MarshalText returned nil and no error"))
|
||||
}
|
||||
enc.writeQuoted(string(s))
|
||||
return
|
||||
case time.Duration:
|
||||
enc.writeQuoted(v.String())
|
||||
return
|
||||
case json.Number:
|
||||
n, _ := rv.Interface().(json.Number)
|
||||
|
||||
if n == "" { /// Useful zero value.
|
||||
enc.w.WriteByte('0')
|
||||
return
|
||||
} else if v, err := n.Int64(); err == nil {
|
||||
enc.eElement(reflect.ValueOf(v))
|
||||
return
|
||||
} else if v, err := n.Float64(); err == nil {
|
||||
enc.eElement(reflect.ValueOf(v))
|
||||
return
|
||||
}
|
||||
encPanic(fmt.Errorf("unable to convert %q to int64 or float64", n))
|
||||
}
|
||||
|
||||
switch rv.Kind() {
|
||||
case reflect.Ptr:
|
||||
enc.eElement(rv.Elem())
|
||||
return
|
||||
case reflect.String:
|
||||
enc.writeQuoted(rv.String())
|
||||
case reflect.Bool:
|
||||
enc.wf(strconv.FormatBool(rv.Bool()))
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
enc.wf(strconv.FormatInt(rv.Int(), 10))
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
enc.wf(strconv.FormatUint(rv.Uint(), 10))
|
||||
case reflect.Float32:
|
||||
f := rv.Float()
|
||||
if math.IsNaN(f) {
|
||||
enc.wf("nan")
|
||||
} else if math.IsInf(f, 0) {
|
||||
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
|
||||
} else {
|
||||
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 32)))
|
||||
}
|
||||
case reflect.Float64:
|
||||
f := rv.Float()
|
||||
if math.IsNaN(f) {
|
||||
enc.wf("nan")
|
||||
} else if math.IsInf(f, 0) {
|
||||
enc.wf("%cinf", map[bool]byte{true: '-', false: '+'}[math.Signbit(f)])
|
||||
} else {
|
||||
enc.wf(floatAddDecimal(strconv.FormatFloat(f, 'f', -1, 64)))
|
||||
}
|
||||
case reflect.Array, reflect.Slice:
|
||||
enc.eArrayOrSliceElement(rv)
|
||||
case reflect.Struct:
|
||||
enc.eStruct(nil, rv, true)
|
||||
case reflect.Map:
|
||||
enc.eMap(nil, rv, true)
|
||||
case reflect.Interface:
|
||||
enc.eElement(rv.Elem())
|
||||
default:
|
||||
encPanic(fmt.Errorf("unexpected type: %T", rv.Interface()))
|
||||
}
|
||||
}
|
||||
|
||||
// By the TOML spec, all floats must have a decimal with at least one number on
|
||||
// either side.
|
||||
func floatAddDecimal(fstr string) string {
|
||||
if !strings.Contains(fstr, ".") {
|
||||
return fstr + ".0"
|
||||
}
|
||||
return fstr
|
||||
}
|
||||
|
||||
func (enc *Encoder) writeQuoted(s string) {
|
||||
enc.wf("\"%s\"", dblQuotedReplacer.Replace(s))
|
||||
}
|
||||
|
||||
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
|
||||
length := rv.Len()
|
||||
enc.wf("[")
|
||||
for i := 0; i < length; i++ {
|
||||
elem := eindirect(rv.Index(i))
|
||||
enc.eElement(elem)
|
||||
if i != length-1 {
|
||||
enc.wf(", ")
|
||||
}
|
||||
}
|
||||
enc.wf("]")
|
||||
}
|
||||
|
||||
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
|
||||
if len(key) == 0 {
|
||||
encPanic(errNoKey)
|
||||
}
|
||||
for i := 0; i < rv.Len(); i++ {
|
||||
trv := eindirect(rv.Index(i))
|
||||
if isNil(trv) {
|
||||
continue
|
||||
}
|
||||
enc.newline()
|
||||
enc.wf("%s[[%s]]", enc.indentStr(key), key)
|
||||
enc.newline()
|
||||
enc.eMapOrStruct(key, trv, false)
|
||||
}
|
||||
}
|
||||
|
||||
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
|
||||
if len(key) == 1 {
|
||||
// Output an extra newline between top-level tables.
|
||||
// (The newline isn't written if nothing else has been written though.)
|
||||
enc.newline()
|
||||
}
|
||||
if len(key) > 0 {
|
||||
enc.wf("%s[%s]", enc.indentStr(key), key)
|
||||
enc.newline()
|
||||
}
|
||||
enc.eMapOrStruct(key, rv, false)
|
||||
}
|
||||
|
||||
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value, inline bool) {
|
||||
switch rv.Kind() {
|
||||
case reflect.Map:
|
||||
enc.eMap(key, rv, inline)
|
||||
case reflect.Struct:
|
||||
enc.eStruct(key, rv, inline)
|
||||
default:
|
||||
// Should never happen?
|
||||
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
|
||||
}
|
||||
}
|
||||
|
||||
func (enc *Encoder) eMap(key Key, rv reflect.Value, inline bool) {
|
||||
rt := rv.Type()
|
||||
if rt.Key().Kind() != reflect.String {
|
||||
encPanic(errNonString)
|
||||
}
|
||||
|
||||
// Sort keys so that we have deterministic output. And write keys directly
|
||||
// underneath this key first, before writing sub-structs or sub-maps.
|
||||
var mapKeysDirect, mapKeysSub []string
|
||||
for _, mapKey := range rv.MapKeys() {
|
||||
k := mapKey.String()
|
||||
if typeIsTable(tomlTypeOfGo(eindirect(rv.MapIndex(mapKey)))) {
|
||||
mapKeysSub = append(mapKeysSub, k)
|
||||
} else {
|
||||
mapKeysDirect = append(mapKeysDirect, k)
|
||||
}
|
||||
}
|
||||
|
||||
var writeMapKeys = func(mapKeys []string, trailC bool) {
|
||||
sort.Strings(mapKeys)
|
||||
for i, mapKey := range mapKeys {
|
||||
val := eindirect(rv.MapIndex(reflect.ValueOf(mapKey)))
|
||||
if isNil(val) {
|
||||
continue
|
||||
}
|
||||
|
||||
if inline {
|
||||
enc.writeKeyValue(Key{mapKey}, val, true)
|
||||
if trailC || i != len(mapKeys)-1 {
|
||||
enc.wf(", ")
|
||||
}
|
||||
} else {
|
||||
enc.encode(key.add(mapKey), val)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if inline {
|
||||
enc.wf("{")
|
||||
}
|
||||
writeMapKeys(mapKeysDirect, len(mapKeysSub) > 0)
|
||||
writeMapKeys(mapKeysSub, false)
|
||||
if inline {
|
||||
enc.wf("}")
|
||||
}
|
||||
}
|
||||
|
||||
const is32Bit = (32 << (^uint(0) >> 63)) == 32
|
||||
|
||||
func pointerTo(t reflect.Type) reflect.Type {
|
||||
if t.Kind() == reflect.Ptr {
|
||||
return pointerTo(t.Elem())
|
||||
}
|
||||
return t
|
||||
}
|
||||
|
||||
func (enc *Encoder) eStruct(key Key, rv reflect.Value, inline bool) {
|
||||
// Write keys for fields directly under this key first, because if we write
|
||||
// a field that creates a new table then all keys under it will be in that
|
||||
// table (not the one we're writing here).
|
||||
//
|
||||
// Fields is a [][]int: for fieldsDirect this always has one entry (the
|
||||
// struct index). For fieldsSub it contains two entries: the parent field
|
||||
// index from tv, and the field indexes for the fields of the sub.
|
||||
var (
|
||||
rt = rv.Type()
|
||||
fieldsDirect, fieldsSub [][]int
|
||||
addFields func(rt reflect.Type, rv reflect.Value, start []int)
|
||||
)
|
||||
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
|
||||
for i := 0; i < rt.NumField(); i++ {
|
||||
f := rt.Field(i)
|
||||
isEmbed := f.Anonymous && pointerTo(f.Type).Kind() == reflect.Struct
|
||||
if f.PkgPath != "" && !isEmbed { /// Skip unexported fields.
|
||||
continue
|
||||
}
|
||||
opts := getOptions(f.Tag)
|
||||
if opts.skip {
|
||||
continue
|
||||
}
|
||||
|
||||
frv := eindirect(rv.Field(i))
|
||||
|
||||
if is32Bit {
|
||||
// Copy so it works correct on 32bit archs; not clear why this
|
||||
// is needed. See #314, and https://www.reddit.com/r/golang/comments/pnx8v4
|
||||
// This also works fine on 64bit, but 32bit archs are somewhat
|
||||
// rare and this is a wee bit faster.
|
||||
copyStart := make([]int, len(start))
|
||||
copy(copyStart, start)
|
||||
start = copyStart
|
||||
}
|
||||
|
||||
// Treat anonymous struct fields with tag names as though they are
|
||||
// not anonymous, like encoding/json does.
|
||||
//
|
||||
// Non-struct anonymous fields use the normal encoding logic.
|
||||
if isEmbed {
|
||||
if getOptions(f.Tag).name == "" && frv.Kind() == reflect.Struct {
|
||||
addFields(frv.Type(), frv, append(start, f.Index...))
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
if typeIsTable(tomlTypeOfGo(frv)) {
|
||||
fieldsSub = append(fieldsSub, append(start, f.Index...))
|
||||
} else {
|
||||
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
|
||||
}
|
||||
}
|
||||
}
|
||||
addFields(rt, rv, nil)
|
||||
|
||||
writeFields := func(fields [][]int) {
|
||||
for _, fieldIndex := range fields {
|
||||
fieldType := rt.FieldByIndex(fieldIndex)
|
||||
fieldVal := rv.FieldByIndex(fieldIndex)
|
||||
|
||||
opts := getOptions(fieldType.Tag)
|
||||
if opts.skip {
|
||||
continue
|
||||
}
|
||||
if opts.omitempty && isEmpty(fieldVal) {
|
||||
continue
|
||||
}
|
||||
|
||||
fieldVal = eindirect(fieldVal)
|
||||
|
||||
if isNil(fieldVal) { /// Don't write anything for nil fields.
|
||||
continue
|
||||
}
|
||||
|
||||
keyName := fieldType.Name
|
||||
if opts.name != "" {
|
||||
keyName = opts.name
|
||||
}
|
||||
|
||||
if opts.omitzero && isZero(fieldVal) {
|
||||
continue
|
||||
}
|
||||
|
||||
if inline {
|
||||
enc.writeKeyValue(Key{keyName}, fieldVal, true)
|
||||
if fieldIndex[0] != len(fields)-1 {
|
||||
enc.wf(", ")
|
||||
}
|
||||
} else {
|
||||
enc.encode(key.add(keyName), fieldVal)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if inline {
|
||||
enc.wf("{")
|
||||
}
|
||||
writeFields(fieldsDirect)
|
||||
writeFields(fieldsSub)
|
||||
if inline {
|
||||
enc.wf("}")
|
||||
}
|
||||
}
|
||||
|
||||
// tomlTypeOfGo returns the TOML type name of the Go value's type.
|
||||
//
|
||||
// It is used to determine whether the types of array elements are mixed (which
|
||||
// is forbidden). If the Go value is nil, then it is illegal for it to be an
|
||||
// array element, and valueIsNil is returned as true.
|
||||
//
|
||||
// The type may be `nil`, which means no concrete TOML type could be found.
|
||||
func tomlTypeOfGo(rv reflect.Value) tomlType {
|
||||
if isNil(rv) || !rv.IsValid() {
|
||||
return nil
|
||||
}
|
||||
|
||||
if rv.Kind() == reflect.Struct {
|
||||
if rv.Type() == timeType {
|
||||
return tomlDatetime
|
||||
}
|
||||
if isMarshaler(rv) {
|
||||
return tomlString
|
||||
}
|
||||
return tomlHash
|
||||
}
|
||||
|
||||
if isMarshaler(rv) {
|
||||
return tomlString
|
||||
}
|
||||
|
||||
switch rv.Kind() {
|
||||
case reflect.Bool:
|
||||
return tomlBool
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
|
||||
reflect.Int64,
|
||||
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
|
||||
reflect.Uint64:
|
||||
return tomlInteger
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return tomlFloat
|
||||
case reflect.Array, reflect.Slice:
|
||||
if isTableArray(rv) {
|
||||
return tomlArrayHash
|
||||
}
|
||||
return tomlArray
|
||||
case reflect.Ptr, reflect.Interface:
|
||||
return tomlTypeOfGo(rv.Elem())
|
||||
case reflect.String:
|
||||
return tomlString
|
||||
case reflect.Map:
|
||||
return tomlHash
|
||||
default:
|
||||
encPanic(errors.New("unsupported type: " + rv.Kind().String()))
|
||||
panic("unreachable")
|
||||
}
|
||||
}
|
||||
|
||||
func isMarshaler(rv reflect.Value) bool {
|
||||
return rv.Type().Implements(marshalText) || rv.Type().Implements(marshalToml)
|
||||
}
|
||||
|
||||
// isTableArray reports if all entries in the array or slice are a table.
|
||||
func isTableArray(arr reflect.Value) bool {
|
||||
if isNil(arr) || !arr.IsValid() || arr.Len() == 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
ret := true
|
||||
for i := 0; i < arr.Len(); i++ {
|
||||
tt := tomlTypeOfGo(eindirect(arr.Index(i)))
|
||||
// Don't allow nil.
|
||||
if tt == nil {
|
||||
encPanic(errArrayNilElement)
|
||||
}
|
||||
|
||||
if ret && !typeEqual(tomlHash, tt) {
|
||||
ret = false
|
||||
}
|
||||
}
|
||||
return ret
|
||||
}
|
||||
|
||||
type tagOptions struct {
|
||||
skip bool // "-"
|
||||
name string
|
||||
omitempty bool
|
||||
omitzero bool
|
||||
}
|
||||
|
||||
func getOptions(tag reflect.StructTag) tagOptions {
|
||||
t := tag.Get("toml")
|
||||
if t == "-" {
|
||||
return tagOptions{skip: true}
|
||||
}
|
||||
var opts tagOptions
|
||||
parts := strings.Split(t, ",")
|
||||
opts.name = parts[0]
|
||||
for _, s := range parts[1:] {
|
||||
switch s {
|
||||
case "omitempty":
|
||||
opts.omitempty = true
|
||||
case "omitzero":
|
||||
opts.omitzero = true
|
||||
}
|
||||
}
|
||||
return opts
|
||||
}
|
||||
|
||||
func isZero(rv reflect.Value) bool {
|
||||
switch rv.Kind() {
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
return rv.Int() == 0
|
||||
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
|
||||
return rv.Uint() == 0
|
||||
case reflect.Float32, reflect.Float64:
|
||||
return rv.Float() == 0.0
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func isEmpty(rv reflect.Value) bool {
|
||||
switch rv.Kind() {
|
||||
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
|
||||
return rv.Len() == 0
|
||||
case reflect.Struct:
|
||||
if rv.Type().Comparable() {
|
||||
return reflect.Zero(rv.Type()).Interface() == rv.Interface()
|
||||
}
|
||||
// Need to also check if all the fields are empty, otherwise something
|
||||
// like this with uncomparable types will always return true:
|
||||
//
|
||||
// type a struct{ field b }
|
||||
// type b struct{ s []string }
|
||||
// s := a{field: b{s: []string{"AAA"}}}
|
||||
for i := 0; i < rv.NumField(); i++ {
|
||||
if !isEmpty(rv.Field(i)) {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
case reflect.Bool:
|
||||
return !rv.Bool()
|
||||
case reflect.Ptr:
|
||||
return rv.IsNil()
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (enc *Encoder) newline() {
|
||||
if enc.hasWritten {
|
||||
enc.wf("\n")
|
||||
}
|
||||
}
|
||||
|
||||
// Write a key/value pair:
|
||||
//
|
||||
// key = <any value>
|
||||
//
|
||||
// This is also used for "k = v" in inline tables; so something like this will
|
||||
// be written in three calls:
|
||||
//
|
||||
// ┌───────────────────┐
|
||||
// │ ┌───┐ ┌────┐│
|
||||
// v v v v vv
|
||||
// key = {k = 1, k2 = 2}
|
||||
func (enc *Encoder) writeKeyValue(key Key, val reflect.Value, inline bool) {
|
||||
/// Marshaler used on top-level document; call eElement() to just call
|
||||
/// Marshal{TOML,Text}.
|
||||
if len(key) == 0 {
|
||||
enc.eElement(val)
|
||||
return
|
||||
}
|
||||
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
|
||||
enc.eElement(val)
|
||||
if !inline {
|
||||
enc.newline()
|
||||
}
|
||||
}
|
||||
|
||||
func (enc *Encoder) wf(format string, v ...interface{}) {
|
||||
_, err := fmt.Fprintf(enc.w, format, v...)
|
||||
if err != nil {
|
||||
encPanic(err)
|
||||
}
|
||||
enc.hasWritten = true
|
||||
}
|
||||
|
||||
func (enc *Encoder) indentStr(key Key) string {
|
||||
return strings.Repeat(enc.Indent, len(key)-1)
|
||||
}
|
||||
|
||||
func encPanic(err error) {
|
||||
panic(tomlEncodeError{err})
|
||||
}
|
||||
|
||||
// Resolve any level of pointers to the actual value (e.g. **string → string).
|
||||
func eindirect(v reflect.Value) reflect.Value {
|
||||
if v.Kind() != reflect.Ptr && v.Kind() != reflect.Interface {
|
||||
if isMarshaler(v) {
|
||||
return v
|
||||
}
|
||||
if v.CanAddr() { /// Special case for marshalers; see #358.
|
||||
if pv := v.Addr(); isMarshaler(pv) {
|
||||
return pv
|
||||
}
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
if v.IsNil() {
|
||||
return v
|
||||
}
|
||||
|
||||
return eindirect(v.Elem())
|
||||
}
|
||||
|
||||
func isNil(rv reflect.Value) bool {
|
||||
switch rv.Kind() {
|
||||
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
|
||||
return rv.IsNil()
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
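Editorial aside before error.go: the Encoder documentation and the getOptions/isEmpty/isZero helpers above describe how the `toml` struct-tag options interact; a short sketch of them in one struct (the Profile type is invented for illustration):

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

// Profile is an illustrative type, not from this repository.
type Profile struct {
	Name     string `toml:"name"`
	Nickname string `toml:"nickname,omitempty"` // skipped while the string is empty
	Score    int    `toml:"score,omitzero"`     // skipped while the int is 0
	hidden   string // unexported: silently ignored by the encoder
}

func main() {
	p := Profile{Name: "gopher", hidden: "never written"}
	// Only `name = "gopher"` is emitted: nickname is empty and score is zero.
	if err := toml.NewEncoder(os.Stdout).Encode(p); err != nil {
		log.Fatal(err)
	}
}
```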
vendor/github.com/BurntSushi/toml/error.go: 279 lines added (generated, vendored, normal file)
@@ -0,0 +1,279 @@
package toml
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// ParseError is returned when there is an error parsing the TOML syntax such as
|
||||
// invalid syntax, duplicate keys, etc.
|
||||
//
|
||||
// In addition to the error message itself, you can also print detailed location
|
||||
// information with context by using [ErrorWithPosition]:
|
||||
//
|
||||
// toml: error: Key 'fruit' was already created and cannot be used as an array.
|
||||
//
|
||||
// At line 4, column 2-7:
|
||||
//
|
||||
// 2 | fruit = []
|
||||
// 3 |
|
||||
// 4 | [[fruit]] # Not allowed
|
||||
// ^^^^^
|
||||
//
|
||||
// [ErrorWithUsage] can be used to print the above with some more detailed usage
|
||||
// guidance:
|
||||
//
|
||||
// toml: error: newlines not allowed within inline tables
|
||||
//
|
||||
// At line 1, column 18:
|
||||
//
|
||||
// 1 | x = [{ key = 42 #
|
||||
// ^
|
||||
//
|
||||
// Error help:
|
||||
//
|
||||
// Inline tables must always be on a single line:
|
||||
//
|
||||
// table = {key = 42, second = 43}
|
||||
//
|
||||
// It is invalid to split them over multiple lines like so:
|
||||
//
|
||||
// # INVALID
|
||||
// table = {
|
||||
// key = 42,
|
||||
// second = 43
|
||||
// }
|
||||
//
|
||||
// Use regular tables for this:
|
||||
//
|
||||
// [table]
|
||||
// key = 42
|
||||
// second = 43
|
||||
type ParseError struct {
|
||||
Message string // Short technical message.
|
||||
Usage string // Longer message with usage guidance; may be blank.
|
||||
Position Position // Position of the error
|
||||
LastKey string // Last parsed key, may be blank.
|
||||
|
||||
// Line the error occurred.
|
||||
//
|
||||
// Deprecated: use [Position].
|
||||
Line int
|
||||
|
||||
err error
|
||||
input string
|
||||
}
|
||||
|
||||
// Position of an error.
|
||||
type Position struct {
|
||||
Line int // Line number, starting at 1.
|
||||
Start int // Start of error, as byte offset starting at 0.
|
||||
Len int // Length in bytes.
|
||||
}
|
||||
|
||||
func (pe ParseError) Error() string {
|
||||
msg := pe.Message
|
||||
if msg == "" { // Error from errorf()
|
||||
msg = pe.err.Error()
|
||||
}
|
||||
|
||||
if pe.LastKey == "" {
|
||||
return fmt.Sprintf("toml: line %d: %s", pe.Position.Line, msg)
|
||||
}
|
||||
return fmt.Sprintf("toml: line %d (last key %q): %s",
|
||||
pe.Position.Line, pe.LastKey, msg)
|
||||
}
|
||||
|
||||
// ErrorWithPosition returns the error with detailed location context.
|
||||
//
|
||||
// See the documentation on [ParseError].
|
||||
func (pe ParseError) ErrorWithPosition() string {
|
||||
if pe.input == "" { // Should never happen, but just in case.
|
||||
return pe.Error()
|
||||
}
|
||||
|
||||
var (
|
||||
lines = strings.Split(pe.input, "\n")
|
||||
col = pe.column(lines)
|
||||
b = new(strings.Builder)
|
||||
)
|
||||
|
||||
msg := pe.Message
|
||||
if msg == "" {
|
||||
msg = pe.err.Error()
|
||||
}
|
||||
|
||||
// TODO: don't show control characters as literals? This may not show up
|
||||
// well everywhere.
|
||||
|
||||
if pe.Position.Len == 1 {
|
||||
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d:\n\n",
|
||||
msg, pe.Position.Line, col+1)
|
||||
} else {
|
||||
fmt.Fprintf(b, "toml: error: %s\n\nAt line %d, column %d-%d:\n\n",
|
||||
msg, pe.Position.Line, col, col+pe.Position.Len)
|
||||
}
|
||||
if pe.Position.Line > 2 {
|
||||
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-2, lines[pe.Position.Line-3])
|
||||
}
|
||||
if pe.Position.Line > 1 {
|
||||
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line-1, lines[pe.Position.Line-2])
|
||||
}
|
||||
fmt.Fprintf(b, "% 7d | %s\n", pe.Position.Line, lines[pe.Position.Line-1])
|
||||
fmt.Fprintf(b, "% 10s%s%s\n", "", strings.Repeat(" ", col), strings.Repeat("^", pe.Position.Len))
|
||||
return b.String()
|
||||
}
|
||||
|
||||
// ErrorWithUsage returns the error with detailed location context and usage
|
||||
// guidance.
|
||||
//
|
||||
// See the documentation on [ParseError].
|
||||
func (pe ParseError) ErrorWithUsage() string {
|
||||
m := pe.ErrorWithPosition()
|
||||
if u, ok := pe.err.(interface{ Usage() string }); ok && u.Usage() != "" {
|
||||
lines := strings.Split(strings.TrimSpace(u.Usage()), "\n")
|
||||
for i := range lines {
|
||||
if lines[i] != "" {
|
||||
lines[i] = " " + lines[i]
|
||||
}
|
||||
}
|
||||
return m + "Error help:\n\n" + strings.Join(lines, "\n") + "\n"
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
func (pe ParseError) column(lines []string) int {
|
||||
var pos, col int
|
||||
for i := range lines {
|
||||
ll := len(lines[i]) + 1 // +1 for the removed newline
|
||||
if pos+ll >= pe.Position.Start {
|
||||
col = pe.Position.Start - pos
|
||||
if col < 0 { // Should never happen, but just in case.
|
||||
col = 0
|
||||
}
|
||||
break
|
||||
}
|
||||
pos += ll
|
||||
}
|
||||
|
||||
return col
|
||||
}
|
||||
|
||||
type (
|
||||
errLexControl struct{ r rune }
|
||||
errLexEscape struct{ r rune }
|
||||
errLexUTF8 struct{ b byte }
|
||||
errLexInvalidNum struct{ v string }
|
||||
errLexInvalidDate struct{ v string }
|
||||
errLexInlineTableNL struct{}
|
||||
errLexStringNL struct{}
|
||||
errParseRange struct {
|
||||
i interface{} // int or float
|
||||
size string // "int64", "uint16", etc.
|
||||
}
|
||||
errParseDuration struct{ d string }
|
||||
)
|
||||
|
||||
func (e errLexControl) Error() string {
|
||||
return fmt.Sprintf("TOML files cannot contain control characters: '0x%02x'", e.r)
|
||||
}
|
||||
func (e errLexControl) Usage() string { return "" }
|
||||
|
||||
func (e errLexEscape) Error() string { return fmt.Sprintf(`invalid escape in string '\%c'`, e.r) }
|
||||
func (e errLexEscape) Usage() string { return usageEscape }
|
||||
func (e errLexUTF8) Error() string { return fmt.Sprintf("invalid UTF-8 byte: 0x%02x", e.b) }
|
||||
func (e errLexUTF8) Usage() string { return "" }
|
||||
func (e errLexInvalidNum) Error() string { return fmt.Sprintf("invalid number: %q", e.v) }
|
||||
func (e errLexInvalidNum) Usage() string { return "" }
|
||||
func (e errLexInvalidDate) Error() string { return fmt.Sprintf("invalid date: %q", e.v) }
|
||||
func (e errLexInvalidDate) Usage() string { return "" }
|
||||
func (e errLexInlineTableNL) Error() string { return "newlines not allowed within inline tables" }
|
||||
func (e errLexInlineTableNL) Usage() string { return usageInlineNewline }
|
||||
func (e errLexStringNL) Error() string { return "strings cannot contain newlines" }
|
||||
func (e errLexStringNL) Usage() string { return usageStringNewline }
|
||||
func (e errParseRange) Error() string { return fmt.Sprintf("%v is out of range for %s", e.i, e.size) }
|
||||
func (e errParseRange) Usage() string { return usageIntOverflow }
|
||||
func (e errParseDuration) Error() string { return fmt.Sprintf("invalid duration: %q", e.d) }
|
||||
func (e errParseDuration) Usage() string { return usageDuration }
|
||||
|
||||
const usageEscape = `
|
||||
A '\' inside a "-delimited string is interpreted as an escape character.
|
||||
|
||||
The following escape sequences are supported:
|
||||
\b, \t, \n, \f, \r, \", \\, \uXXXX, and \UXXXXXXXX
|
||||
|
||||
To prevent a '\' from being recognized as an escape character, use either:
|
||||
|
||||
- a ' or '''-delimited string; escape characters aren't processed in them; or
|
||||
- write two backslashes to get a single backslash: '\\'.
|
||||
|
||||
If you're trying to add a Windows path (e.g. "C:\Users\martin") then using '/'
|
||||
instead of '\' will usually also work: "C:/Users/martin".
|
||||
`
|
||||
|
||||
const usageInlineNewline = `
|
||||
Inline tables must always be on a single line:
|
||||
|
||||
table = {key = 42, second = 43}
|
||||
|
||||
It is invalid to split them over multiple lines like so:
|
||||
|
||||
# INVALID
|
||||
table = {
|
||||
key = 42,
|
||||
second = 43
|
||||
}
|
||||
|
||||
Use regular tables for this:
|
||||
|
||||
[table]
|
||||
key = 42
|
||||
second = 43
|
||||
`
|
||||
|
||||
const usageStringNewline = `
|
||||
Strings must always be on a single line, and cannot span more than one line:
|
||||
|
||||
# INVALID
|
||||
string = "Hello,
|
||||
world!"
|
||||
|
||||
Instead use """ or ''' to split strings over multiple lines:
|
||||
|
||||
string = """Hello,
|
||||
world!"""
|
||||
`
|
||||
|
||||
const usageIntOverflow = `
|
||||
This number is too large; this may be an error in the TOML, but it can also be a
|
||||
bug in the program that uses an integer type that is too small.
|
||||
|
||||
The maximum and minimum values are:
|
||||
|
||||
size │ lowest │ highest
|
||||
───────┼────────────────┼──────────
|
||||
int8 │ -128 │ 127
|
||||
int16 │ -32,768 │ 32,767
|
||||
int32 │ -2,147,483,648 │ 2,147,483,647
|
||||
int64 │ -9.2 × 10¹⁷ │ 9.2 × 10¹⁷
|
||||
uint8 │ 0 │ 255
|
||||
uint16 │ 0 │ 65,535
|
||||
uint32 │ 0 │ 4,294,967,295
|
||||
uint64 │ 0 │ 1.8 × 10¹⁸
|
||||
|
||||
int refers to int32 on 32-bit systems and int64 on 64-bit systems.
|
||||
`
|
||||
|
||||
const usageDuration = `
|
||||
A duration must be given as "number<unit>", without any spaces. Valid units are:
|
||||
|
||||
ns nanoseconds (billionth of a second)
|
||||
us, µs microseconds (millionth of a second)
|
||||
ms           milliseconds (thousandth of a second)
|
||||
s seconds
|
||||
m minutes
|
||||
h hours
|
||||
|
||||
You can combine multiple units; for example "5m10s" for 5 minutes and 10
|
||||
seconds.
|
||||
`
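
The duration error and usage text above relate to decoding TOML strings into time.Duration values. A small, hedged sketch of what that looks like from the caller's side (the Timeout field and the document are invented for the example):

```go
package main

import (
	"fmt"
	"time"

	"github.com/BurntSushi/toml"
)

func main() {
	var c struct {
		Timeout time.Duration // decoded from a string such as "5m10s"
	}
	if _, err := toml.Decode(`timeout = "5m10s"`, &c); err != nil {
		fmt.Println(err) // a bad value such as "5 m" would surface the duration usage text
		return
	}
	fmt.Println(c.Timeout) // 5m10s
}
```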
|
3
vendor/github.com/BurntSushi/toml/go.mod
generated
vendored
Normal file
|
@ -0,0 +1,3 @@
|
|||
module github.com/BurntSushi/toml
|
||||
|
||||
go 1.16
|
36
vendor/github.com/BurntSushi/toml/internal/tz.go
generated
vendored
Normal file
|
@ -0,0 +1,36 @@
|
|||
package internal
|
||||
|
||||
import "time"
|
||||
|
||||
// Timezones used for local datetime, date, and time TOML types.
|
||||
//
|
||||
// The exact way times and dates without a timezone should be interpreted is not
|
||||
// well-defined in the TOML specification and left to the implementation. These
|
||||
// default to the current local timezone offset of the computer, but this can be
|
||||
// changed by changing these variables before decoding.
|
||||
//
|
||||
// TODO:
|
||||
// Ideally we'd like to offer people the ability to configure the used timezone
|
||||
// by setting Decoder.Timezone and Encoder.Timezone; however, this is a bit
|
||||
// tricky: the reason we use three different variables for this is to support
|
||||
// round-tripping – without these specific TZ names we wouldn't know which
|
||||
// format to use.
|
||||
//
|
||||
// There isn't a good way to encode this right now though, and passing this sort
|
||||
// of information also ties in to various related issues such as string format
|
||||
// encoding, encoding of comments, etc.
|
||||
//
|
||||
// So, for the time being, just put this in internal until we can write a good
|
||||
// comprehensive API for doing all of this.
|
||||
//
|
||||
// The reason they're exported is that they're referred to from e.g.
|
||||
// internal/tag.
|
||||
//
|
||||
// Note that this behaviour is valid according to the TOML spec as the exact
|
||||
// behaviour is left up to implementations.
|
||||
var (
|
||||
localOffset = func() int { _, o := time.Now().Zone(); return o }()
|
||||
LocalDatetime = time.FixedZone("datetime-local", localOffset)
|
||||
LocalDate = time.FixedZone("date-local", localOffset)
|
||||
LocalTime = time.FixedZone("time-local", localOffset)
|
||||
)
|
1283
vendor/github.com/BurntSushi/toml/lex.go
generated
vendored
Normal file
File diff suppressed because it is too large
121
vendor/github.com/BurntSushi/toml/meta.go
generated
vendored
Normal file
|
@ -0,0 +1,121 @@
|
|||
package toml
|
||||
|
||||
import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
// MetaData allows access to meta information about TOML data that's not
|
||||
// accessible otherwise.
|
||||
//
|
||||
// It allows checking if a key is defined in the TOML data, whether any keys
|
||||
// were undecoded, and the TOML type of a key.
|
||||
type MetaData struct {
|
||||
context Key // Used only during decoding.
|
||||
|
||||
keyInfo map[string]keyInfo
|
||||
mapping map[string]interface{}
|
||||
keys []Key
|
||||
decoded map[string]struct{}
|
||||
data []byte // Input file; for errors.
|
||||
}
|
||||
|
||||
// IsDefined reports if the key exists in the TOML data.
|
||||
//
|
||||
// The key should be specified hierarchically, for example to access the TOML
|
||||
// key "a.b.c" you would use IsDefined("a", "b", "c"). Keys are case sensitive.
|
||||
//
|
||||
// Returns false for an empty key.
|
||||
func (md *MetaData) IsDefined(key ...string) bool {
|
||||
if len(key) == 0 {
|
||||
return false
|
||||
}
|
||||
|
||||
var (
|
||||
hash map[string]interface{}
|
||||
ok bool
|
||||
hashOrVal interface{} = md.mapping
|
||||
)
|
||||
for _, k := range key {
|
||||
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
|
||||
return false
|
||||
}
|
||||
if hashOrVal, ok = hash[k]; !ok {
|
||||
return false
|
||||
}
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// Type returns a string representation of the type of the key specified.
|
||||
//
|
||||
// Type will return the empty string if given an empty key or a key that does
|
||||
// not exist. Keys are case sensitive.
|
||||
func (md *MetaData) Type(key ...string) string {
|
||||
if ki, ok := md.keyInfo[Key(key).String()]; ok {
|
||||
return ki.tomlType.typeString()
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Keys returns a slice of every key in the TOML data, including key groups.
|
||||
//
|
||||
// Each key is itself a slice, where the first element is the top of the
|
||||
// hierarchy and the last is the most specific. The list will have the same
|
||||
// order as the keys appeared in the TOML data.
|
||||
//
|
||||
// All keys returned are non-empty.
|
||||
func (md *MetaData) Keys() []Key {
|
||||
return md.keys
|
||||
}
|
||||
|
||||
// Undecoded returns all keys that have not been decoded in the order in which
|
||||
// they appear in the original TOML document.
|
||||
//
|
||||
// This includes keys that haven't been decoded because of a [Primitive] value.
|
||||
// Once the Primitive value is decoded, the keys will be considered decoded.
|
||||
//
|
||||
// Also note that decoding into an empty interface will result in no decoding,
|
||||
// and so no keys will be considered decoded.
|
||||
//
|
||||
// In this sense, the Undecoded keys correspond to keys in the TOML document
|
||||
// that do not have a concrete type in your representation.
|
||||
func (md *MetaData) Undecoded() []Key {
|
||||
undecoded := make([]Key, 0, len(md.keys))
|
||||
for _, key := range md.keys {
|
||||
if _, ok := md.decoded[key.String()]; !ok {
|
||||
undecoded = append(undecoded, key)
|
||||
}
|
||||
}
|
||||
return undecoded
|
||||
}
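
A short sketch of how IsDefined and Undecoded are typically used together after decoding; the struct and keys are illustrative only:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var cfg struct{ Name string }
	doc := "name = \"demo\"\nextra = 1\n"

	md, err := toml.Decode(doc, &cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(md.IsDefined("name")) // true
	fmt.Println(md.Undecoded())       // [extra]: keys with no matching struct field
}
```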
|
||||
|
||||
// Key represents any TOML key, including key groups. Use [MetaData.Keys] to get
|
||||
// values of this type.
|
||||
type Key []string
|
||||
|
||||
func (k Key) String() string {
|
||||
ss := make([]string, len(k))
|
||||
for i := range k {
|
||||
ss[i] = k.maybeQuoted(i)
|
||||
}
|
||||
return strings.Join(ss, ".")
|
||||
}
|
||||
|
||||
func (k Key) maybeQuoted(i int) string {
|
||||
if k[i] == "" {
|
||||
return `""`
|
||||
}
|
||||
for _, c := range k[i] {
|
||||
if !isBareKeyChar(c, false) {
|
||||
return `"` + dblQuotedReplacer.Replace(k[i]) + `"`
|
||||
}
|
||||
}
|
||||
return k[i]
|
||||
}
|
||||
|
||||
func (k Key) add(piece string) Key {
|
||||
newKey := make(Key, len(k)+1)
|
||||
copy(newKey, k)
|
||||
newKey[len(k)] = piece
|
||||
return newKey
|
||||
}
|
811
vendor/github.com/BurntSushi/toml/parse.go
generated
vendored
Normal file
|
@ -0,0 +1,811 @@
|
|||
package toml
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/BurntSushi/toml/internal"
|
||||
)
|
||||
|
||||
type parser struct {
|
||||
lx *lexer
|
||||
context Key // Full key for the current hash in scope.
|
||||
currentKey string // Base key name for everything except hashes.
|
||||
pos Position // Current position in the TOML file.
|
||||
tomlNext bool
|
||||
|
||||
ordered []Key // List of keys in the order that they appear in the TOML data.
|
||||
|
||||
keyInfo map[string]keyInfo // Map keyname → info about the TOML key.
|
||||
mapping map[string]interface{} // Map keyname → key value.
|
||||
implicits map[string]struct{} // Record implicit keys (e.g. "key.group.names").
|
||||
}
|
||||
|
||||
type keyInfo struct {
|
||||
pos Position
|
||||
tomlType tomlType
|
||||
}
|
||||
|
||||
func parse(data string) (p *parser, err error) {
|
||||
_, tomlNext := os.LookupEnv("BURNTSUSHI_TOML_110")
|
||||
|
||||
defer func() {
|
||||
if r := recover(); r != nil {
|
||||
if pErr, ok := r.(ParseError); ok {
|
||||
pErr.input = data
|
||||
err = pErr
|
||||
return
|
||||
}
|
||||
panic(r)
|
||||
}
|
||||
}()
|
||||
|
||||
// Read over BOM; do this here as the lexer calls utf8.DecodeRuneInString()
|
||||
// which mangles stuff. UTF-16 BOM isn't strictly valid, but some tools add
|
||||
// it anyway.
|
||||
if strings.HasPrefix(data, "\xff\xfe") || strings.HasPrefix(data, "\xfe\xff") { // UTF-16
|
||||
data = data[2:]
|
||||
} else if strings.HasPrefix(data, "\xef\xbb\xbf") { // UTF-8
|
||||
data = data[3:]
|
||||
}
|
||||
|
||||
// Examine first few bytes for NULL bytes; this probably means it's a UTF-16
|
||||
// file (second byte in surrogate pair being NULL). Again, do this here to
|
||||
// avoid having to deal with UTF-8/16 stuff in the lexer.
|
||||
ex := 6
|
||||
if len(data) < 6 {
|
||||
ex = len(data)
|
||||
}
|
||||
if i := strings.IndexRune(data[:ex], 0); i > -1 {
|
||||
return nil, ParseError{
|
||||
Message: "files cannot contain NULL bytes; probably using UTF-16; TOML files must be UTF-8",
|
||||
Position: Position{Line: 1, Start: i, Len: 1},
|
||||
Line: 1,
|
||||
input: data,
|
||||
}
|
||||
}
|
||||
|
||||
p = &parser{
|
||||
keyInfo: make(map[string]keyInfo),
|
||||
mapping: make(map[string]interface{}),
|
||||
lx: lex(data, tomlNext),
|
||||
ordered: make([]Key, 0),
|
||||
implicits: make(map[string]struct{}),
|
||||
tomlNext: tomlNext,
|
||||
}
|
||||
for {
|
||||
item := p.next()
|
||||
if item.typ == itemEOF {
|
||||
break
|
||||
}
|
||||
p.topLevel(item)
|
||||
}
|
||||
|
||||
return p, nil
|
||||
}
|
||||
|
||||
func (p *parser) panicErr(it item, err error) {
|
||||
panic(ParseError{
|
||||
err: err,
|
||||
Position: it.pos,
|
||||
Line: it.pos.Line,
|
||||
LastKey: p.current(),
|
||||
})
|
||||
}
|
||||
|
||||
func (p *parser) panicItemf(it item, format string, v ...interface{}) {
|
||||
panic(ParseError{
|
||||
Message: fmt.Sprintf(format, v...),
|
||||
Position: it.pos,
|
||||
Line: it.pos.Line,
|
||||
LastKey: p.current(),
|
||||
})
|
||||
}
|
||||
|
||||
func (p *parser) panicf(format string, v ...interface{}) {
|
||||
panic(ParseError{
|
||||
Message: fmt.Sprintf(format, v...),
|
||||
Position: p.pos,
|
||||
Line: p.pos.Line,
|
||||
LastKey: p.current(),
|
||||
})
|
||||
}
|
||||
|
||||
func (p *parser) next() item {
|
||||
it := p.lx.nextItem()
|
||||
//fmt.Printf("ITEM %-18s line %-3d │ %q\n", it.typ, it.pos.Line, it.val)
|
||||
if it.typ == itemError {
|
||||
if it.err != nil {
|
||||
panic(ParseError{
|
||||
Position: it.pos,
|
||||
Line: it.pos.Line,
|
||||
LastKey: p.current(),
|
||||
err: it.err,
|
||||
})
|
||||
}
|
||||
|
||||
p.panicItemf(it, "%s", it.val)
|
||||
}
|
||||
return it
|
||||
}
|
||||
|
||||
func (p *parser) nextPos() item {
|
||||
it := p.next()
|
||||
p.pos = it.pos
|
||||
return it
|
||||
}
|
||||
|
||||
func (p *parser) bug(format string, v ...interface{}) {
|
||||
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
|
||||
}
|
||||
|
||||
func (p *parser) expect(typ itemType) item {
|
||||
it := p.next()
|
||||
p.assertEqual(typ, it.typ)
|
||||
return it
|
||||
}
|
||||
|
||||
func (p *parser) assertEqual(expected, got itemType) {
|
||||
if expected != got {
|
||||
p.bug("Expected '%s' but got '%s'.", expected, got)
|
||||
}
|
||||
}
|
||||
|
||||
func (p *parser) topLevel(item item) {
|
||||
switch item.typ {
|
||||
case itemCommentStart: // # ..
|
||||
p.expect(itemText)
|
||||
case itemTableStart: // [ .. ]
|
||||
name := p.nextPos()
|
||||
|
||||
var key Key
|
||||
for ; name.typ != itemTableEnd && name.typ != itemEOF; name = p.next() {
|
||||
key = append(key, p.keyString(name))
|
||||
}
|
||||
p.assertEqual(itemTableEnd, name.typ)
|
||||
|
||||
p.addContext(key, false)
|
||||
p.setType("", tomlHash, item.pos)
|
||||
p.ordered = append(p.ordered, key)
|
||||
case itemArrayTableStart: // [[ .. ]]
|
||||
name := p.nextPos()
|
||||
|
||||
var key Key
|
||||
for ; name.typ != itemArrayTableEnd && name.typ != itemEOF; name = p.next() {
|
||||
key = append(key, p.keyString(name))
|
||||
}
|
||||
p.assertEqual(itemArrayTableEnd, name.typ)
|
||||
|
||||
p.addContext(key, true)
|
||||
p.setType("", tomlArrayHash, item.pos)
|
||||
p.ordered = append(p.ordered, key)
|
||||
case itemKeyStart: // key = ..
|
||||
outerContext := p.context
|
||||
/// Read all the key parts (e.g. 'a' and 'b' in 'a.b')
|
||||
k := p.nextPos()
|
||||
var key Key
|
||||
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
|
||||
key = append(key, p.keyString(k))
|
||||
}
|
||||
p.assertEqual(itemKeyEnd, k.typ)
|
||||
|
||||
/// The current key is the last part.
|
||||
p.currentKey = key[len(key)-1]
|
||||
|
||||
/// All the other parts (if any) are the context; need to set each part
|
||||
/// as implicit.
|
||||
context := key[:len(key)-1]
|
||||
for i := range context {
|
||||
p.addImplicitContext(append(p.context, context[i:i+1]...))
|
||||
}
|
||||
p.ordered = append(p.ordered, p.context.add(p.currentKey))
|
||||
|
||||
/// Set value.
|
||||
vItem := p.next()
|
||||
val, typ := p.value(vItem, false)
|
||||
p.set(p.currentKey, val, typ, vItem.pos)
|
||||
|
||||
/// Remove the context we added (preserving any context from [tbl] lines).
|
||||
p.context = outerContext
|
||||
p.currentKey = ""
|
||||
default:
|
||||
p.bug("Unexpected type at top level: %s", item.typ)
|
||||
}
|
||||
}
|
||||
|
||||
// Gets a string for a key (or part of a key in a table name).
|
||||
func (p *parser) keyString(it item) string {
|
||||
switch it.typ {
|
||||
case itemText:
|
||||
return it.val
|
||||
case itemString, itemMultilineString,
|
||||
itemRawString, itemRawMultilineString:
|
||||
s, _ := p.value(it, false)
|
||||
return s.(string)
|
||||
default:
|
||||
p.bug("Unexpected key type: %s", it.typ)
|
||||
}
|
||||
panic("unreachable")
|
||||
}
|
||||
|
||||
var datetimeRepl = strings.NewReplacer(
|
||||
"z", "Z",
|
||||
"t", "T",
|
||||
" ", "T")
|
||||
|
||||
// value translates an expected value from the lexer into a Go value wrapped
|
||||
// as an empty interface.
|
||||
func (p *parser) value(it item, parentIsArray bool) (interface{}, tomlType) {
|
||||
switch it.typ {
|
||||
case itemString:
|
||||
return p.replaceEscapes(it, it.val), p.typeOfPrimitive(it)
|
||||
case itemMultilineString:
|
||||
return p.replaceEscapes(it, p.stripEscapedNewlines(stripFirstNewline(it.val))), p.typeOfPrimitive(it)
|
||||
case itemRawString:
|
||||
return it.val, p.typeOfPrimitive(it)
|
||||
case itemRawMultilineString:
|
||||
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
|
||||
case itemInteger:
|
||||
return p.valueInteger(it)
|
||||
case itemFloat:
|
||||
return p.valueFloat(it)
|
||||
case itemBool:
|
||||
switch it.val {
|
||||
case "true":
|
||||
return true, p.typeOfPrimitive(it)
|
||||
case "false":
|
||||
return false, p.typeOfPrimitive(it)
|
||||
default:
|
||||
p.bug("Expected boolean value, but got '%s'.", it.val)
|
||||
}
|
||||
case itemDatetime:
|
||||
return p.valueDatetime(it)
|
||||
case itemArray:
|
||||
return p.valueArray(it)
|
||||
case itemInlineTableStart:
|
||||
return p.valueInlineTable(it, parentIsArray)
|
||||
default:
|
||||
p.bug("Unexpected value type: %s", it.typ)
|
||||
}
|
||||
panic("unreachable")
|
||||
}
|
||||
|
||||
func (p *parser) valueInteger(it item) (interface{}, tomlType) {
|
||||
if !numUnderscoresOK(it.val) {
|
||||
p.panicItemf(it, "Invalid integer %q: underscores must be surrounded by digits", it.val)
|
||||
}
|
||||
if numHasLeadingZero(it.val) {
|
||||
p.panicItemf(it, "Invalid integer %q: cannot have leading zeroes", it.val)
|
||||
}
|
||||
|
||||
num, err := strconv.ParseInt(it.val, 0, 64)
|
||||
if err != nil {
|
||||
// Distinguish integer values. Normally, it'd be a bug if the lexer
|
||||
// provides an invalid integer, but it's possible that the number is
|
||||
// out of range of valid values (which the lexer cannot determine).
|
||||
// So mark the former as a bug but the latter as a legitimate user
|
||||
// error.
|
||||
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
|
||||
p.panicErr(it, errParseRange{i: it.val, size: "int64"})
|
||||
} else {
|
||||
p.bug("Expected integer value, but got '%s'.", it.val)
|
||||
}
|
||||
}
|
||||
return num, p.typeOfPrimitive(it)
|
||||
}
|
||||
|
||||
func (p *parser) valueFloat(it item) (interface{}, tomlType) {
|
||||
parts := strings.FieldsFunc(it.val, func(r rune) bool {
|
||||
switch r {
|
||||
case '.', 'e', 'E':
|
||||
return true
|
||||
}
|
||||
return false
|
||||
})
|
||||
for _, part := range parts {
|
||||
if !numUnderscoresOK(part) {
|
||||
p.panicItemf(it, "Invalid float %q: underscores must be surrounded by digits", it.val)
|
||||
}
|
||||
}
|
||||
if len(parts) > 0 && numHasLeadingZero(parts[0]) {
|
||||
p.panicItemf(it, "Invalid float %q: cannot have leading zeroes", it.val)
|
||||
}
|
||||
if !numPeriodsOK(it.val) {
|
||||
// As a special case, numbers like '123.' or '1.e2',
|
||||
// which are valid as far as Go/strconv are concerned,
|
||||
// must be rejected because TOML says that a fractional
|
||||
// part consists of '.' followed by 1+ digits.
|
||||
p.panicItemf(it, "Invalid float %q: '.' must be followed by one or more digits", it.val)
|
||||
}
|
||||
val := strings.Replace(it.val, "_", "", -1)
|
||||
if val == "+nan" || val == "-nan" { // Go doesn't support this, but TOML spec does.
|
||||
val = "nan"
|
||||
}
|
||||
num, err := strconv.ParseFloat(val, 64)
|
||||
if err != nil {
|
||||
if e, ok := err.(*strconv.NumError); ok && e.Err == strconv.ErrRange {
|
||||
p.panicErr(it, errParseRange{i: it.val, size: "float64"})
|
||||
} else {
|
||||
p.panicItemf(it, "Invalid float value: %q", it.val)
|
||||
}
|
||||
}
|
||||
return num, p.typeOfPrimitive(it)
|
||||
}
|
||||
|
||||
var dtTypes = []struct {
|
||||
fmt string
|
||||
zone *time.Location
|
||||
next bool
|
||||
}{
|
||||
{time.RFC3339Nano, time.Local, false},
|
||||
{"2006-01-02T15:04:05.999999999", internal.LocalDatetime, false},
|
||||
{"2006-01-02", internal.LocalDate, false},
|
||||
{"15:04:05.999999999", internal.LocalTime, false},
|
||||
|
||||
// tomlNext
|
||||
{"2006-01-02T15:04Z07:00", time.Local, true},
|
||||
{"2006-01-02T15:04", internal.LocalDatetime, true},
|
||||
{"15:04", internal.LocalTime, true},
|
||||
}
|
||||
|
||||
func (p *parser) valueDatetime(it item) (interface{}, tomlType) {
|
||||
it.val = datetimeRepl.Replace(it.val)
|
||||
var (
|
||||
t time.Time
|
||||
ok bool
|
||||
err error
|
||||
)
|
||||
for _, dt := range dtTypes {
|
||||
if dt.next && !p.tomlNext {
|
||||
continue
|
||||
}
|
||||
t, err = time.ParseInLocation(dt.fmt, it.val, dt.zone)
|
||||
if err == nil {
|
||||
ok = true
|
||||
break
|
||||
}
|
||||
}
|
||||
if !ok {
|
||||
p.panicItemf(it, "Invalid TOML Datetime: %q.", it.val)
|
||||
}
|
||||
return t, p.typeOfPrimitive(it)
|
||||
}
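
To make the table of datetime layouts above concrete, here is a small sketch of decoding an offset datetime and a local date; the field names and document are invented, and the "date-local" zone name comes from the internal timezone variables described earlier.

```go
package main

import (
	"fmt"
	"time"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct {
		Exact time.Time // offset datetime (RFC 3339)
		Day   time.Time // local date, attached to the "date-local" zone
	}
	doc := "exact = 2006-05-04T03:02:01Z\nday = 2006-05-04\n"
	if _, err := toml.Decode(doc, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.Exact.UTC())    // 2006-05-04 03:02:01 +0000 UTC
	fmt.Println(v.Day.Location()) // date-local
}
```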
|
||||
|
||||
func (p *parser) valueArray(it item) (interface{}, tomlType) {
|
||||
p.setType(p.currentKey, tomlArray, it.pos)
|
||||
|
||||
var (
|
||||
types []tomlType
|
||||
|
||||
// Initialize to a non-nil empty slice. This makes it consistent with
|
||||
// how S = [] decodes into a non-nil slice inside something like struct
|
||||
// { S []string }. See #338
|
||||
array = []interface{}{}
|
||||
)
|
||||
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
|
||||
if it.typ == itemCommentStart {
|
||||
p.expect(itemText)
|
||||
continue
|
||||
}
|
||||
|
||||
val, typ := p.value(it, true)
|
||||
array = append(array, val)
|
||||
types = append(types, typ)
|
||||
|
||||
// XXX: types isn't used here, we need it to record the accurate type
|
||||
// information.
|
||||
//
|
||||
// Not entirely sure how to best store this; could use "key[0]",
|
||||
// "key[1]" notation, or maybe store it on the Array type?
|
||||
_ = types
|
||||
}
|
||||
return array, tomlArray
|
||||
}
|
||||
|
||||
func (p *parser) valueInlineTable(it item, parentIsArray bool) (interface{}, tomlType) {
|
||||
var (
|
||||
hash = make(map[string]interface{})
|
||||
outerContext = p.context
|
||||
outerKey = p.currentKey
|
||||
)
|
||||
|
||||
p.context = append(p.context, p.currentKey)
|
||||
prevContext := p.context
|
||||
p.currentKey = ""
|
||||
|
||||
p.addImplicit(p.context)
|
||||
p.addContext(p.context, parentIsArray)
|
||||
|
||||
/// Loop over all table key/value pairs.
|
||||
for it := p.next(); it.typ != itemInlineTableEnd; it = p.next() {
|
||||
if it.typ == itemCommentStart {
|
||||
p.expect(itemText)
|
||||
continue
|
||||
}
|
||||
|
||||
/// Read all key parts.
|
||||
k := p.nextPos()
|
||||
var key Key
|
||||
for ; k.typ != itemKeyEnd && k.typ != itemEOF; k = p.next() {
|
||||
key = append(key, p.keyString(k))
|
||||
}
|
||||
p.assertEqual(itemKeyEnd, k.typ)
|
||||
|
||||
/// The current key is the last part.
|
||||
p.currentKey = key[len(key)-1]
|
||||
|
||||
/// All the other parts (if any) are the context; need to set each part
|
||||
/// as implicit.
|
||||
context := key[:len(key)-1]
|
||||
for i := range context {
|
||||
p.addImplicitContext(append(p.context, context[i:i+1]...))
|
||||
}
|
||||
p.ordered = append(p.ordered, p.context.add(p.currentKey))
|
||||
|
||||
/// Set the value.
|
||||
val, typ := p.value(p.next(), false)
|
||||
p.set(p.currentKey, val, typ, it.pos)
|
||||
hash[p.currentKey] = val
|
||||
|
||||
/// Restore context.
|
||||
p.context = prevContext
|
||||
}
|
||||
p.context = outerContext
|
||||
p.currentKey = outerKey
|
||||
return hash, tomlHash
|
||||
}
|
||||
|
||||
// numHasLeadingZero checks if this number has leading zeroes, allowing for '0',
|
||||
// +/- signs, and base prefixes.
|
||||
func numHasLeadingZero(s string) bool {
|
||||
if len(s) > 1 && s[0] == '0' && !(s[1] == 'b' || s[1] == 'o' || s[1] == 'x') { // Allow 0b, 0o, 0x
|
||||
return true
|
||||
}
|
||||
if len(s) > 2 && (s[0] == '-' || s[0] == '+') && s[1] == '0' {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// numUnderscoresOK checks whether each underscore in s is surrounded by
|
||||
// characters that are not underscores.
|
||||
func numUnderscoresOK(s string) bool {
|
||||
switch s {
|
||||
case "nan", "+nan", "-nan", "inf", "-inf", "+inf":
|
||||
return true
|
||||
}
|
||||
accept := false
|
||||
for _, r := range s {
|
||||
if r == '_' {
|
||||
if !accept {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// isHexadecimal is a superset of all the permissible characters
|
||||
// surrounding an underscore.
|
||||
accept = isHexadecimal(r)
|
||||
}
|
||||
return accept
|
||||
}
|
||||
|
||||
// numPeriodsOK checks whether every period in s is followed by a digit.
|
||||
func numPeriodsOK(s string) bool {
|
||||
period := false
|
||||
for _, r := range s {
|
||||
if period && !isDigit(r) {
|
||||
return false
|
||||
}
|
||||
period = r == '.'
|
||||
}
|
||||
return !period
|
||||
}
|
||||
|
||||
// Set the current context of the parser, where the context is either a hash or
|
||||
// an array of hashes, depending on the value of the `array` parameter.
|
||||
//
|
||||
// Establishing the context also makes sure that the key isn't a duplicate, and
|
||||
// will create implicit hashes automatically.
|
||||
func (p *parser) addContext(key Key, array bool) {
|
||||
var ok bool
|
||||
|
||||
// Always start at the top level and drill down for our context.
|
||||
hashContext := p.mapping
|
||||
keyContext := make(Key, 0)
|
||||
|
||||
// We only need implicit hashes for key[0:-1]
|
||||
for _, k := range key[0 : len(key)-1] {
|
||||
_, ok = hashContext[k]
|
||||
keyContext = append(keyContext, k)
|
||||
|
||||
// No key? Make an implicit hash and move on.
|
||||
if !ok {
|
||||
p.addImplicit(keyContext)
|
||||
hashContext[k] = make(map[string]interface{})
|
||||
}
|
||||
|
||||
// If the hash context is actually an array of tables, then set
|
||||
// the hash context to the last element in that array.
|
||||
//
|
||||
// Otherwise, it better be a table, since this MUST be a key group (by
|
||||
// virtue of it not being the last element in a key).
|
||||
switch t := hashContext[k].(type) {
|
||||
case []map[string]interface{}:
|
||||
hashContext = t[len(t)-1]
|
||||
case map[string]interface{}:
|
||||
hashContext = t
|
||||
default:
|
||||
p.panicf("Key '%s' was already created as a hash.", keyContext)
|
||||
}
|
||||
}
|
||||
|
||||
p.context = keyContext
|
||||
if array {
|
||||
// If this is the first element for this array, then allocate a new
|
||||
// list of tables for it.
|
||||
k := key[len(key)-1]
|
||||
if _, ok := hashContext[k]; !ok {
|
||||
hashContext[k] = make([]map[string]interface{}, 0, 4)
|
||||
}
|
||||
|
||||
// Add a new table. But make sure the key hasn't already been used
|
||||
// for something else.
|
||||
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
|
||||
hashContext[k] = append(hash, make(map[string]interface{}))
|
||||
} else {
|
||||
p.panicf("Key '%s' was already created and cannot be used as an array.", key)
|
||||
}
|
||||
} else {
|
||||
p.setValue(key[len(key)-1], make(map[string]interface{}))
|
||||
}
|
||||
p.context = append(p.context, key[len(key)-1])
|
||||
}
|
||||
|
||||
// set calls setValue and setType.
|
||||
func (p *parser) set(key string, val interface{}, typ tomlType, pos Position) {
|
||||
p.setValue(key, val)
|
||||
p.setType(key, typ, pos)
|
||||
}
|
||||
|
||||
// setValue sets the given key to the given value in the current context.
|
||||
// It will make sure that the key hasn't already been defined, and accounts for
|
||||
// implicit key groups.
|
||||
func (p *parser) setValue(key string, value interface{}) {
|
||||
var (
|
||||
tmpHash interface{}
|
||||
ok bool
|
||||
hash = p.mapping
|
||||
keyContext Key
|
||||
)
|
||||
for _, k := range p.context {
|
||||
keyContext = append(keyContext, k)
|
||||
if tmpHash, ok = hash[k]; !ok {
|
||||
p.bug("Context for key '%s' has not been established.", keyContext)
|
||||
}
|
||||
switch t := tmpHash.(type) {
|
||||
case []map[string]interface{}:
|
||||
// The context is a table of hashes. Pick the most recent table
|
||||
// defined as the current hash.
|
||||
hash = t[len(t)-1]
|
||||
case map[string]interface{}:
|
||||
hash = t
|
||||
default:
|
||||
p.panicf("Key '%s' has already been defined.", keyContext)
|
||||
}
|
||||
}
|
||||
keyContext = append(keyContext, key)
|
||||
|
||||
if _, ok := hash[key]; ok {
|
||||
// Normally redefining keys isn't allowed, but the key could have been
|
||||
// defined implicitly and it's allowed to be redefined concretely. (See
|
||||
// the `valid/implicit-and-explicit-after.toml` in toml-test)
|
||||
//
|
||||
// But we have to make sure to stop marking it as an implicit. (So that
|
||||
// another redefinition provokes an error.)
|
||||
//
|
||||
// Note that since it has already been defined (as a hash), we don't
|
||||
// want to overwrite it. So our business is done.
|
||||
if p.isArray(keyContext) {
|
||||
p.removeImplicit(keyContext)
|
||||
hash[key] = value
|
||||
return
|
||||
}
|
||||
if p.isImplicit(keyContext) {
|
||||
p.removeImplicit(keyContext)
|
||||
return
|
||||
}
|
||||
|
||||
// Otherwise, we have a concrete key trying to override a previous
|
||||
// key, which is *always* wrong.
|
||||
p.panicf("Key '%s' has already been defined.", keyContext)
|
||||
}
|
||||
|
||||
hash[key] = value
|
||||
}
|
||||
|
||||
// setType sets the type of a particular value at a given key. It should be
|
||||
// called immediately AFTER setValue.
|
||||
//
|
||||
// Note that if `key` is empty, then the type given will be applied to the
|
||||
// current context (which is either a table or an array of tables).
|
||||
func (p *parser) setType(key string, typ tomlType, pos Position) {
|
||||
keyContext := make(Key, 0, len(p.context)+1)
|
||||
keyContext = append(keyContext, p.context...)
|
||||
if len(key) > 0 { // allow type setting for hashes
|
||||
keyContext = append(keyContext, key)
|
||||
}
|
||||
// Special case to make empty keys ("" = 1) work.
|
||||
// Without it it will set "" rather than `""`.
|
||||
// TODO: why is this needed? And why is this only needed here?
|
||||
if len(keyContext) == 0 {
|
||||
keyContext = Key{""}
|
||||
}
|
||||
p.keyInfo[keyContext.String()] = keyInfo{tomlType: typ, pos: pos}
|
||||
}
|
||||
|
||||
// Implicit keys need to be created when tables are implied in "a.b.c.d = 1" and
|
||||
// "[a.b.c]" (the "a", "b", and "c" hashes are never created explicitly).
|
||||
func (p *parser) addImplicit(key Key) { p.implicits[key.String()] = struct{}{} }
|
||||
func (p *parser) removeImplicit(key Key) { delete(p.implicits, key.String()) }
|
||||
func (p *parser) isImplicit(key Key) bool { _, ok := p.implicits[key.String()]; return ok }
|
||||
func (p *parser) isArray(key Key) bool { return p.keyInfo[key.String()].tomlType == tomlArray }
|
||||
func (p *parser) addImplicitContext(key Key) { p.addImplicit(key); p.addContext(key, false) }
|
||||
|
||||
// current returns the full key name of the current context.
|
||||
func (p *parser) current() string {
|
||||
if len(p.currentKey) == 0 {
|
||||
return p.context.String()
|
||||
}
|
||||
if len(p.context) == 0 {
|
||||
return p.currentKey
|
||||
}
|
||||
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
|
||||
}
|
||||
|
||||
func stripFirstNewline(s string) string {
|
||||
if len(s) > 0 && s[0] == '\n' {
|
||||
return s[1:]
|
||||
}
|
||||
if len(s) > 1 && s[0] == '\r' && s[1] == '\n' {
|
||||
return s[2:]
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// stripEscapedNewlines removes whitespace after line-ending backslashes in
|
||||
// multiline strings.
|
||||
//
|
||||
// A line-ending backslash is an unescaped \ followed only by whitespace until
|
||||
// the next newline. After a line-ending backslash, all whitespace is removed
|
||||
// until the next non-whitespace character.
|
||||
func (p *parser) stripEscapedNewlines(s string) string {
|
||||
var b strings.Builder
|
||||
var i int
|
||||
for {
|
||||
ix := strings.Index(s[i:], `\`)
|
||||
if ix < 0 {
|
||||
b.WriteString(s)
|
||||
return b.String()
|
||||
}
|
||||
i += ix
|
||||
|
||||
if len(s) > i+1 && s[i+1] == '\\' {
|
||||
// Escaped backslash.
|
||||
i += 2
|
||||
continue
|
||||
}
|
||||
// Scan until the next non-whitespace.
|
||||
j := i + 1
|
||||
whitespaceLoop:
|
||||
for ; j < len(s); j++ {
|
||||
switch s[j] {
|
||||
case ' ', '\t', '\r', '\n':
|
||||
default:
|
||||
break whitespaceLoop
|
||||
}
|
||||
}
|
||||
if j == i+1 {
|
||||
// Not a whitespace escape.
|
||||
i++
|
||||
continue
|
||||
}
|
||||
if !strings.Contains(s[i:j], "\n") {
|
||||
// This is not a line-ending backslash.
|
||||
// (It's a bad escape sequence, but we can let
|
||||
// replaceEscapes catch it.)
|
||||
i++
|
||||
continue
|
||||
}
|
||||
b.WriteString(s[:i])
|
||||
s = s[j:]
|
||||
i = 0
|
||||
}
|
||||
}
|
||||
|
||||
func (p *parser) replaceEscapes(it item, str string) string {
|
||||
replaced := make([]rune, 0, len(str))
|
||||
s := []byte(str)
|
||||
r := 0
|
||||
for r < len(s) {
|
||||
if s[r] != '\\' {
|
||||
c, size := utf8.DecodeRune(s[r:])
|
||||
r += size
|
||||
replaced = append(replaced, c)
|
||||
continue
|
||||
}
|
||||
r += 1
|
||||
if r >= len(s) {
|
||||
p.bug("Escape sequence at end of string.")
|
||||
return ""
|
||||
}
|
||||
switch s[r] {
|
||||
default:
|
||||
p.bug("Expected valid escape code after \\, but got %q.", s[r])
|
||||
case ' ', '\t':
|
||||
p.panicItemf(it, "invalid escape: '\\%c'", s[r])
|
||||
case 'b':
|
||||
replaced = append(replaced, rune(0x0008))
|
||||
r += 1
|
||||
case 't':
|
||||
replaced = append(replaced, rune(0x0009))
|
||||
r += 1
|
||||
case 'n':
|
||||
replaced = append(replaced, rune(0x000A))
|
||||
r += 1
|
||||
case 'f':
|
||||
replaced = append(replaced, rune(0x000C))
|
||||
r += 1
|
||||
case 'r':
|
||||
replaced = append(replaced, rune(0x000D))
|
||||
r += 1
|
||||
case 'e':
|
||||
if p.tomlNext {
|
||||
replaced = append(replaced, rune(0x001B))
|
||||
r += 1
|
||||
}
|
||||
case '"':
|
||||
replaced = append(replaced, rune(0x0022))
|
||||
r += 1
|
||||
case '\\':
|
||||
replaced = append(replaced, rune(0x005C))
|
||||
r += 1
|
||||
case 'x':
|
||||
if p.tomlNext {
|
||||
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+3])
|
||||
replaced = append(replaced, escaped)
|
||||
r += 3
|
||||
}
|
||||
case 'u':
|
||||
// At this point, we know we have a Unicode escape of the form
|
||||
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
|
||||
// for us.)
|
||||
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+5])
|
||||
replaced = append(replaced, escaped)
|
||||
r += 5
|
||||
case 'U':
|
||||
// At this point, we know we have a Unicode escape of the form
|
||||
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
|
||||
// for us.)
|
||||
escaped := p.asciiEscapeToUnicode(it, s[r+1:r+9])
|
||||
replaced = append(replaced, escaped)
|
||||
r += 9
|
||||
}
|
||||
}
|
||||
return string(replaced)
|
||||
}
|
||||
|
||||
func (p *parser) asciiEscapeToUnicode(it item, bs []byte) rune {
|
||||
s := string(bs)
|
||||
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
|
||||
if err != nil {
|
||||
p.bug("Could not parse '%s' as a hexadecimal number, but the lexer claims it's OK: %s", s, err)
|
||||
}
|
||||
if !utf8.ValidRune(rune(hex)) {
|
||||
p.panicItemf(it, "Escaped character '\\u%s' is not valid UTF-8.", s)
|
||||
}
|
||||
return rune(hex)
|
||||
}
|
242
vendor/github.com/BurntSushi/toml/type_fields.go
generated
vendored
Normal file
|
@ -0,0 +1,242 @@
|
|||
package toml
|
||||
|
||||
// Struct field handling is adapted from code in encoding/json:
|
||||
//
|
||||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the Go distribution.
|
||||
|
||||
import (
|
||||
"reflect"
|
||||
"sort"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// A field represents a single field found in a struct.
|
||||
type field struct {
|
||||
name string // the name of the field (`toml` tag included)
|
||||
tag bool // whether field has a `toml` tag
|
||||
index []int // represents the depth of an anonymous field
|
||||
typ reflect.Type // the type of the field
|
||||
}
|
||||
|
||||
// byName sorts field by name, breaking ties with depth,
|
||||
// then breaking ties with "name came from toml tag", then
|
||||
// breaking ties with index sequence.
|
||||
type byName []field
|
||||
|
||||
func (x byName) Len() int { return len(x) }
|
||||
|
||||
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
|
||||
|
||||
func (x byName) Less(i, j int) bool {
|
||||
if x[i].name != x[j].name {
|
||||
return x[i].name < x[j].name
|
||||
}
|
||||
if len(x[i].index) != len(x[j].index) {
|
||||
return len(x[i].index) < len(x[j].index)
|
||||
}
|
||||
if x[i].tag != x[j].tag {
|
||||
return x[i].tag
|
||||
}
|
||||
return byIndex(x).Less(i, j)
|
||||
}
|
||||
|
||||
// byIndex sorts field by index sequence.
|
||||
type byIndex []field
|
||||
|
||||
func (x byIndex) Len() int { return len(x) }
|
||||
|
||||
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
|
||||
|
||||
func (x byIndex) Less(i, j int) bool {
|
||||
for k, xik := range x[i].index {
|
||||
if k >= len(x[j].index) {
|
||||
return false
|
||||
}
|
||||
if xik != x[j].index[k] {
|
||||
return xik < x[j].index[k]
|
||||
}
|
||||
}
|
||||
return len(x[i].index) < len(x[j].index)
|
||||
}
|
||||
|
||||
// typeFields returns a list of fields that TOML should recognize for the given
|
||||
// type. The algorithm is breadth-first search over the set of structs to
|
||||
// include - the top struct and then any reachable anonymous structs.
|
||||
func typeFields(t reflect.Type) []field {
|
||||
// Anonymous fields to explore at the current level and the next.
|
||||
current := []field{}
|
||||
next := []field{{typ: t}}
|
||||
|
||||
// Count of queued names for current level and the next.
|
||||
var count map[reflect.Type]int
|
||||
var nextCount map[reflect.Type]int
|
||||
|
||||
// Types already visited at an earlier level.
|
||||
visited := map[reflect.Type]bool{}
|
||||
|
||||
// Fields found.
|
||||
var fields []field
|
||||
|
||||
for len(next) > 0 {
|
||||
current, next = next, current[:0]
|
||||
count, nextCount = nextCount, map[reflect.Type]int{}
|
||||
|
||||
for _, f := range current {
|
||||
if visited[f.typ] {
|
||||
continue
|
||||
}
|
||||
visited[f.typ] = true
|
||||
|
||||
// Scan f.typ for fields to include.
|
||||
for i := 0; i < f.typ.NumField(); i++ {
|
||||
sf := f.typ.Field(i)
|
||||
if sf.PkgPath != "" && !sf.Anonymous { // unexported
|
||||
continue
|
||||
}
|
||||
opts := getOptions(sf.Tag)
|
||||
if opts.skip {
|
||||
continue
|
||||
}
|
||||
index := make([]int, len(f.index)+1)
|
||||
copy(index, f.index)
|
||||
index[len(f.index)] = i
|
||||
|
||||
ft := sf.Type
|
||||
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
|
||||
// Follow pointer.
|
||||
ft = ft.Elem()
|
||||
}
|
||||
|
||||
// Record found field and index sequence.
|
||||
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
|
||||
tagged := opts.name != ""
|
||||
name := opts.name
|
||||
if name == "" {
|
||||
name = sf.Name
|
||||
}
|
||||
fields = append(fields, field{name, tagged, index, ft})
|
||||
if count[f.typ] > 1 {
|
||||
// If there were multiple instances, add a second,
|
||||
// so that the annihilation code will see a duplicate.
|
||||
// It only cares about the distinction between 1 or 2,
|
||||
// so don't bother generating any more copies.
|
||||
fields = append(fields, fields[len(fields)-1])
|
||||
}
|
||||
continue
|
||||
}
|
||||
|
||||
// Record new anonymous struct to explore in next round.
|
||||
nextCount[ft]++
|
||||
if nextCount[ft] == 1 {
|
||||
f := field{name: ft.Name(), index: index, typ: ft}
|
||||
next = append(next, f)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
sort.Sort(byName(fields))
|
||||
|
||||
// Delete all fields that are hidden by the Go rules for embedded fields,
|
||||
// except that fields with TOML tags are promoted.
|
||||
|
||||
// The fields are sorted in primary order of name, secondary order
|
||||
// of field index length. Loop over names; for each name, delete
|
||||
// hidden fields by choosing the one dominant field that survives.
|
||||
out := fields[:0]
|
||||
for advance, i := 0, 0; i < len(fields); i += advance {
|
||||
// One iteration per name.
|
||||
// Find the sequence of fields with the name of this first field.
|
||||
fi := fields[i]
|
||||
name := fi.name
|
||||
for advance = 1; i+advance < len(fields); advance++ {
|
||||
fj := fields[i+advance]
|
||||
if fj.name != name {
|
||||
break
|
||||
}
|
||||
}
|
||||
if advance == 1 { // Only one field with this name
|
||||
out = append(out, fi)
|
||||
continue
|
||||
}
|
||||
dominant, ok := dominantField(fields[i : i+advance])
|
||||
if ok {
|
||||
out = append(out, dominant)
|
||||
}
|
||||
}
|
||||
|
||||
fields = out
|
||||
sort.Sort(byIndex(fields))
|
||||
|
||||
return fields
|
||||
}
|
||||
|
||||
// dominantField looks through the fields, all of which are known to
|
||||
// have the same name, to find the single field that dominates the
|
||||
// others using Go's embedding rules, modified by the presence of
|
||||
// TOML tags. If there are multiple top-level fields, the boolean
|
||||
// will be false: This condition is an error in Go and we skip all
|
||||
// the fields.
|
||||
func dominantField(fields []field) (field, bool) {
|
||||
// The fields are sorted in increasing index-length order. The winner
|
||||
// must therefore be one with the shortest index length. Drop all
|
||||
// longer entries, which is easy: just truncate the slice.
|
||||
length := len(fields[0].index)
|
||||
tagged := -1 // Index of first tagged field.
|
||||
for i, f := range fields {
|
||||
if len(f.index) > length {
|
||||
fields = fields[:i]
|
||||
break
|
||||
}
|
||||
if f.tag {
|
||||
if tagged >= 0 {
|
||||
// Multiple tagged fields at the same level: conflict.
|
||||
// Return no field.
|
||||
return field{}, false
|
||||
}
|
||||
tagged = i
|
||||
}
|
||||
}
|
||||
if tagged >= 0 {
|
||||
return fields[tagged], true
|
||||
}
|
||||
// All remaining fields have the same length. If there's more than one,
|
||||
// we have a conflict (two fields named "X" at the same level) and we
|
||||
// return no field.
|
||||
if len(fields) > 1 {
|
||||
return field{}, false
|
||||
}
|
||||
return fields[0], true
|
||||
}
|
||||
|
||||
var fieldCache struct {
|
||||
sync.RWMutex
|
||||
m map[reflect.Type][]field
|
||||
}
|
||||
|
||||
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
|
||||
func cachedTypeFields(t reflect.Type) []field {
|
||||
fieldCache.RLock()
|
||||
f := fieldCache.m[t]
|
||||
fieldCache.RUnlock()
|
||||
if f != nil {
|
||||
return f
|
||||
}
|
||||
|
||||
// Compute fields without lock.
|
||||
// Might duplicate effort but won't hold other computations back.
|
||||
f = typeFields(t)
|
||||
if f == nil {
|
||||
f = []field{}
|
||||
}
|
||||
|
||||
fieldCache.Lock()
|
||||
if fieldCache.m == nil {
|
||||
fieldCache.m = map[reflect.Type][]field{}
|
||||
}
|
||||
fieldCache.m[t] = f
|
||||
fieldCache.Unlock()
|
||||
return f
|
||||
}
|
70
vendor/github.com/BurntSushi/toml/type_toml.go
generated
vendored
Normal file
|
@ -0,0 +1,70 @@
|
|||
package toml
|
||||
|
||||
// tomlType represents any Go type that corresponds to a TOML type.
|
||||
// While the first draft of the TOML spec has a simplistic type system that
|
||||
// probably doesn't need this level of sophistication, we seem to be moving
|
||||
// toward adding real composite types.
|
||||
type tomlType interface {
|
||||
typeString() string
|
||||
}
|
||||
|
||||
// typeEqual accepts any two types and returns true if they are equal.
|
||||
func typeEqual(t1, t2 tomlType) bool {
|
||||
if t1 == nil || t2 == nil {
|
||||
return false
|
||||
}
|
||||
return t1.typeString() == t2.typeString()
|
||||
}
|
||||
|
||||
func typeIsTable(t tomlType) bool {
|
||||
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
|
||||
}
|
||||
|
||||
type tomlBaseType string
|
||||
|
||||
func (btype tomlBaseType) typeString() string {
|
||||
return string(btype)
|
||||
}
|
||||
|
||||
func (btype tomlBaseType) String() string {
|
||||
return btype.typeString()
|
||||
}
|
||||
|
||||
var (
|
||||
tomlInteger tomlBaseType = "Integer"
|
||||
tomlFloat tomlBaseType = "Float"
|
||||
tomlDatetime tomlBaseType = "Datetime"
|
||||
tomlString tomlBaseType = "String"
|
||||
tomlBool tomlBaseType = "Bool"
|
||||
tomlArray tomlBaseType = "Array"
|
||||
tomlHash tomlBaseType = "Hash"
|
||||
tomlArrayHash tomlBaseType = "ArrayHash"
|
||||
)
|
||||
|
||||
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
|
||||
// Primitive values are: Integer, Float, Datetime, String and Bool.
|
||||
//
|
||||
// Passing a lexer item other than the following will cause a BUG message
|
||||
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
|
||||
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
|
||||
switch lexItem.typ {
|
||||
case itemInteger:
|
||||
return tomlInteger
|
||||
case itemFloat:
|
||||
return tomlFloat
|
||||
case itemDatetime:
|
||||
return tomlDatetime
|
||||
case itemString:
|
||||
return tomlString
|
||||
case itemMultilineString:
|
||||
return tomlString
|
||||
case itemRawString:
|
||||
return tomlString
|
||||
case itemRawMultilineString:
|
||||
return tomlString
|
||||
case itemBool:
|
||||
return tomlBool
|
||||
}
|
||||
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
|
||||
panic("unreachable")
|
||||
}
|
28
vendor/github.com/cyphar/filepath-securejoin/LICENSE
generated
vendored
Normal file
|
@ -0,0 +1,28 @@
|
|||
Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
|
||||
Copyright (C) 2017 SUSE LLC. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
79
vendor/github.com/cyphar/filepath-securejoin/README.md
generated
vendored
Normal file
|
@ -0,0 +1,79 @@
|
|||
## `filepath-securejoin` ##
|
||||
|
||||
[](https://github.com/cyphar/filepath-securejoin/actions/workflows/ci.yml)
|
||||
|
||||
An implementation of `SecureJoin`, a [candidate for inclusion in the Go
|
||||
standard library][go#20126]. The purpose of this function is to be a "secure"
|
||||
alternative to `filepath.Join`, and in particular it provides certain
|
||||
guarantees that are not provided by `filepath.Join`.
|
||||
|
||||
> **NOTE**: This code is *only* safe if you are not at risk of other processes
|
||||
> modifying path components after you've used `SecureJoin`. If it is possible
|
||||
> for a malicious process to modify path components of the resolved path, then
|
||||
> you will be vulnerable to some fairly trivial TOCTOU race conditions. [There
|
||||
> are some Linux kernel patches I'm working on which might allow for a better
|
||||
> solution.][lwn-obeneath]
|
||||
>
|
||||
> In addition, with a slightly modified API it might be possible to use
|
||||
> `O_PATH` and verify that the opened path is actually the resolved one -- but
|
||||
> I have not done that yet. I might add it in the future as a helper function
|
||||
> to help users verify the path (we can't just return `/proc/self/fd/<foo>`
|
||||
> because that doesn't always work transparently for all users).
|
||||
|
||||
This is the function prototype:
|
||||
|
||||
```go
|
||||
func SecureJoin(root, unsafePath string) (string, error)
|
||||
```
|
||||
|
||||
This library **guarantees** the following:
|
||||
|
||||
* If no error is set, the resulting string **must** be a child path of
|
||||
`root` and will not contain any symlink path components (they will all be
|
||||
expanded).
|
||||
|
||||
* When expanding symlinks, all symlink path components **must** be resolved
|
||||
relative to the provided root. In particular, this can be considered a
|
||||
userspace implementation of how `chroot(2)` operates on file paths. Note that
|
||||
these symlinks will **not** be expanded lexically (`filepath.Clean` is not
|
||||
called on the input before processing).
|
||||
|
||||
* Non-existent path components are unaffected by `SecureJoin` (similar to
|
||||
`filepath.EvalSymlinks`'s semantics).
|
||||
|
||||
* The returned path will always be `filepath.Clean`ed and thus not contain any
|
||||
`..` components.
|
||||
|
||||
A (trivial) implementation of this function on GNU/Linux systems could be done
|
||||
with the following (note that this requires root privileges and is far more
|
||||
opaque than the implementation in this library, and also requires that
|
||||
`readlink` is inside the `root` path):
|
||||
|
||||
```go
|
||||
package securejoin
|
||||
|
||||
import (
|
||||
"os/exec"
|
||||
"path/filepath"
|
||||
)
|
||||
|
||||
func SecureJoin(root, unsafePath string) (string, error) {
|
||||
unsafePath = string(filepath.Separator) + unsafePath
|
||||
cmd := exec.Command("chroot", root,
|
||||
"readlink", "--canonicalize-missing", "--no-newline", unsafePath)
|
||||
output, err := cmd.CombinedOutput()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
expanded := string(output)
|
||||
return filepath.Join(root, expanded), nil
|
||||
}
|
||||
```
|
||||
|
||||
[lwn-obeneath]: https://lwn.net/Articles/767547/
|
||||
[go#20126]: https://github.com/golang/go/issues/20126
|
||||
|
||||
### License ###
|
||||
|
||||
The license of this project is the same as Go, which is a BSD 3-clause license
|
||||
available in the `LICENSE` file.
|
1
vendor/github.com/cyphar/filepath-securejoin/VERSION
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
0.2.4
|
3
vendor/github.com/cyphar/filepath-securejoin/go.mod
generated
vendored
Normal file
|
@ -0,0 +1,3 @@
|
|||
module github.com/cyphar/filepath-securejoin
|
||||
|
||||
go 1.13
|
125
vendor/github.com/cyphar/filepath-securejoin/join.go
generated
vendored
Normal file
|
@ -0,0 +1,125 @@
|
|||
// Copyright (C) 2014-2015 Docker Inc & Go Authors. All rights reserved.
|
||||
// Copyright (C) 2017 SUSE LLC. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package securejoin is an implementation of the hopefully-soon-to-be-included
|
||||
// SecureJoin helper that is meant to be part of the "path/filepath" package.
|
||||
// The purpose of this project is to provide a PoC implementation to make the
|
||||
// SecureJoin proposal (https://github.com/golang/go/issues/20126) more
|
||||
// tangible.
|
||||
package securejoin
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
// IsNotExist tells you if err is an error that implies that the path accessed
// does not exist (or that one of its path components doesn't exist). This is
// effectively a broader version of os.IsNotExist.
|
||||
func IsNotExist(err error) bool {
|
||||
// Check that it's not actually an ENOTDIR, which in some cases is a more
|
||||
// convoluted case of ENOENT (usually involving weird paths).
|
||||
return errors.Is(err, os.ErrNotExist) || errors.Is(err, syscall.ENOTDIR) || errors.Is(err, syscall.ENOENT)
|
||||
}
|
||||
|
||||
// SecureJoinVFS joins the two given path components (similar to Join) except
|
||||
// that the returned path is guaranteed to be scoped inside the provided root
|
||||
// path (when evaluated). Any symbolic links in the path are evaluated with the
|
||||
// given root treated as the root of the filesystem, similar to a chroot. The
|
||||
// filesystem state is evaluated through the given VFS interface (if nil, the
|
||||
// standard os.* family of functions are used).
|
||||
//
|
||||
// Note that the guarantees provided by this function only apply if the path
|
||||
// components in the returned string are not modified (in other words are not
|
||||
// replaced with symlinks on the filesystem) after this function has returned.
|
||||
// Such a symlink race is necessarily out-of-scope of SecureJoin.
|
||||
//
|
||||
// Volume names in unsafePath are always discarded, regardless if they are
|
||||
// provided via direct input or when evaluating symlinks. Therefore:
|
||||
//
|
||||
// "C:\Temp" + "D:\path\to\file.txt" results in "C:\Temp\path\to\file.txt"
|
||||
func SecureJoinVFS(root, unsafePath string, vfs VFS) (string, error) {
|
||||
// Use the os.* VFS implementation if none was specified.
|
||||
if vfs == nil {
|
||||
vfs = osVFS{}
|
||||
}
|
||||
|
||||
unsafePath = filepath.FromSlash(unsafePath)
|
||||
var path bytes.Buffer
|
||||
n := 0
|
||||
for unsafePath != "" {
|
||||
if n > 255 {
|
||||
return "", &os.PathError{Op: "SecureJoin", Path: root + string(filepath.Separator) + unsafePath, Err: syscall.ELOOP}
|
||||
}
|
||||
|
||||
if v := filepath.VolumeName(unsafePath); v != "" {
|
||||
unsafePath = unsafePath[len(v):]
|
||||
}
|
||||
|
||||
// Next path component, p.
|
||||
i := strings.IndexRune(unsafePath, filepath.Separator)
|
||||
var p string
|
||||
if i == -1 {
|
||||
p, unsafePath = unsafePath, ""
|
||||
} else {
|
||||
p, unsafePath = unsafePath[:i], unsafePath[i+1:]
|
||||
}
|
||||
|
||||
// Create a cleaned path, using the lexical semantics of /../a, to
|
||||
// create a "scoped" path component which can safely be joined to fullP
|
||||
// for evaluation. At this point, path.String() doesn't contain any
|
||||
// symlink components.
|
||||
cleanP := filepath.Clean(string(filepath.Separator) + path.String() + p)
|
||||
if cleanP == string(filepath.Separator) {
|
||||
path.Reset()
|
||||
continue
|
||||
}
|
||||
fullP := filepath.Clean(root + cleanP)
|
||||
|
||||
// Figure out whether the path is a symlink.
|
||||
fi, err := vfs.Lstat(fullP)
|
||||
if err != nil && !IsNotExist(err) {
|
||||
return "", err
|
||||
}
|
||||
// Treat non-existent path components the same as non-symlinks (we
|
||||
// can't do any better here).
|
||||
if IsNotExist(err) || fi.Mode()&os.ModeSymlink == 0 {
|
||||
path.WriteString(p)
|
||||
path.WriteRune(filepath.Separator)
|
||||
continue
|
||||
}
|
||||
|
||||
// Only increment when we actually dereference a link.
|
||||
n++
|
||||
|
||||
// It's a symlink, expand it by prepending it to the yet-unparsed path.
|
||||
dest, err := vfs.Readlink(fullP)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
// Absolute symlinks reset any work we've already done.
|
||||
if filepath.IsAbs(dest) {
|
||||
path.Reset()
|
||||
}
|
||||
unsafePath = dest + string(filepath.Separator) + unsafePath
|
||||
}
|
||||
|
||||
// We have to clean path.String() here because it may contain '..'
|
||||
// components that are entirely lexical, but would be misleading otherwise.
|
||||
// And finally do a final clean to ensure that root is also lexically
|
||||
// clean.
|
||||
fullP := filepath.Clean(string(filepath.Separator) + path.String())
|
||||
return filepath.Clean(root + fullP), nil
|
||||
}
|
||||
|
||||
// SecureJoin is a wrapper around SecureJoinVFS that just uses the os.* library
|
||||
// of functions as the VFS. If in doubt, use this function over SecureJoinVFS.
|
||||
func SecureJoin(root, unsafePath string) (string, error) {
|
||||
return SecureJoinVFS(root, unsafePath, nil)
|
||||
}
|
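The doc comment above spells out SecureJoin's guarantee: the returned path is always lexically inside root, with symlinks resolved as if root were the filesystem root. A minimal usage sketch of the two functions added above; the root directory and the hostile input are made-up values for illustration:

```go
package main

import (
	"fmt"
	"os"

	securejoin "github.com/cyphar/filepath-securejoin"
)

func main() {
	// Assumed example values: root is the directory we must stay inside,
	// unsafePath is untrusted input that tries to escape it.
	root := "/var/lib/myapp/rootfs"
	unsafePath := "../../etc/passwd"

	// ".." components are clamped at root, so the result here is
	// /var/lib/myapp/rootfs/etc/passwd (absent symlinks created after the
	// call, which the doc comment notes are out of scope).
	safe, err := securejoin.SecureJoin(root, unsafePath)
	if err != nil {
		fmt.Fprintln(os.Stderr, "join failed:", err)
		return
	}
	fmt.Println(safe)
}
```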
41
vendor/github.com/cyphar/filepath-securejoin/vfs.go
generated
vendored
Normal file
|
@ -0,0 +1,41 @@
|
|||
// Copyright (C) 2017 SUSE LLC. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package securejoin
|
||||
|
||||
import "os"
|
||||
|
||||
// In future this should be moved into a separate package, because now there
|
||||
// are several projects (umoci and go-mtree) that are using this sort of
|
||||
// interface.
|
||||
|
||||
// VFS is the minimal interface necessary to use SecureJoinVFS. A nil VFS is
|
||||
// equivalent to using the standard os.* family of functions. This is mainly
|
||||
// used for the purposes of mock testing, but also can be used to otherwise use
|
||||
// SecureJoin with a VFS-like system.
|
||||
type VFS interface {
|
||||
// Lstat returns a FileInfo describing the named file. If the file is a
|
||||
// symbolic link, the returned FileInfo describes the symbolic link. Lstat
|
||||
// makes no attempt to follow the link. These semantics are identical to
|
||||
// os.Lstat.
|
||||
Lstat(name string) (os.FileInfo, error)
|
||||
|
||||
// Readlink returns the destination of the named symbolic link. These
|
||||
// semantics are identical to os.Readlink.
|
||||
Readlink(name string) (string, error)
|
||||
}
|
||||
|
||||
// osVFS is the "nil" VFS, in that it just passes everything through to the os
|
||||
// module.
|
||||
type osVFS struct{}
|
||||
|
||||
// Lstat returns a FileInfo describing the named file. If the file is a
|
||||
// symbolic link, the returned FileInfo describes the symbolic link. Lstat
|
||||
// makes no attempt to follow the link. These semantics are identical to
|
||||
// os.Lstat.
|
||||
func (o osVFS) Lstat(name string) (os.FileInfo, error) { return os.Lstat(name) }
|
||||
|
||||
// Readlink returns the destination of the named symbolic link. These
|
||||
// semantics are identical to os.Readlink.
|
||||
func (o osVFS) Readlink(name string) (string, error) { return os.Readlink(name) }
|
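Because VFS has only the two methods declared above, it is easy to wrap for instrumentation or tests. A sketch of a logging wrapper passed to SecureJoinVFS; the wrapper type and the example paths are illustrative, not part of the package:

```go
package main

import (
	"fmt"
	"os"

	securejoin "github.com/cyphar/filepath-securejoin"
)

// loggingVFS delegates to the real filesystem but records every path that
// SecureJoinVFS inspects while resolving symlinks.
type loggingVFS struct{}

func (loggingVFS) Lstat(name string) (os.FileInfo, error) {
	fmt.Println("lstat:", name)
	return os.Lstat(name)
}

func (loggingVFS) Readlink(name string) (string, error) {
	fmt.Println("readlink:", name)
	return os.Readlink(name)
}

func main() {
	// Same semantics as SecureJoin, but every Lstat/Readlink goes through
	// the wrapper, which can help when debugging path resolution.
	safe, err := securejoin.SecureJoinVFS("/tmp/root", "a/b/../c", loggingVFS{})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("resolved to:", safe)
}
```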
1
vendor/github.com/go-git/gcfg/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1 @@
|
|||
coverage.out
|
28
vendor/github.com/go-git/gcfg/LICENSE
generated
vendored
Normal file
|
@ -0,0 +1,28 @@
|
|||
Copyright (c) 2012 Péter Surányi. Portions Copyright (c) 2009 The Go
|
||||
Authors. All rights reserved.
|
||||
|
||||
Redistribution and use in source and binary forms, with or without
|
||||
modification, are permitted provided that the following conditions are
|
||||
met:
|
||||
|
||||
* Redistributions of source code must retain the above copyright
|
||||
notice, this list of conditions and the following disclaimer.
|
||||
* Redistributions in binary form must reproduce the above
|
||||
copyright notice, this list of conditions and the following disclaimer
|
||||
in the documentation and/or other materials provided with the
|
||||
distribution.
|
||||
* Neither the name of Google Inc. nor the names of its
|
||||
contributors may be used to endorse or promote products derived from
|
||||
this software without specific prior written permission.
|
||||
|
||||
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
17
vendor/github.com/go-git/gcfg/Makefile
generated
vendored
Normal file
|
@ -0,0 +1,17 @@
|
|||
# General
|
||||
WORKDIR = $(PWD)
|
||||
|
||||
# Go parameters
|
||||
GOCMD = go
|
||||
GOTEST = $(GOCMD) test
|
||||
|
||||
# Coverage
|
||||
COVERAGE_REPORT = coverage.out
|
||||
COVERAGE_MODE = count
|
||||
|
||||
test:
|
||||
$(GOTEST) ./...
|
||||
|
||||
test-coverage:
|
||||
echo "" > $(COVERAGE_REPORT); \
|
||||
$(GOTEST) -coverprofile=$(COVERAGE_REPORT) -coverpkg=./... -covermode=$(COVERAGE_MODE) ./...
|
4
vendor/github.com/go-git/gcfg/README
generated
vendored
Normal file
|
@ -0,0 +1,4 @@
|
|||
Gcfg reads INI-style configuration files into Go structs;
|
||||
supports user-defined types and subsections.
|
||||
|
||||
Package docs: https://godoc.org/gopkg.in/gcfg.v1
|
145
vendor/github.com/go-git/gcfg/doc.go
generated
vendored
Normal file
|
@ -0,0 +1,145 @@
|
|||
// Package gcfg reads "INI-style" text-based configuration files with
|
||||
// "name=value" pairs grouped into sections (gcfg files).
|
||||
//
|
||||
// This package is still a work in progress; see the sections below for planned
|
||||
// changes.
|
||||
//
|
||||
// Syntax
|
||||
//
|
||||
// The syntax is based on that used by git config:
|
||||
// http://git-scm.com/docs/git-config#_syntax .
|
||||
// There are some (planned) differences compared to the git config format:
|
||||
// - improve data portability:
|
||||
// - must be encoded in UTF-8 (for now) and must not contain the 0 byte
|
||||
// - the include directive and "path" type are not supported
|
||||
// (path type may be implementable as a user-defined type)
|
||||
// - internationalization
|
||||
// - section and variable names can contain unicode letters, unicode digits
|
||||
// (as defined in http://golang.org/ref/spec#Characters ) and hyphens
|
||||
// (U+002D), starting with a unicode letter
|
||||
// - disallow potentially ambiguous or misleading definitions:
|
||||
// - `[sec.sub]` format is not allowed (deprecated in gitconfig)
|
||||
// - `[sec ""]` is not allowed
|
||||
// - use `[sec]` for section name "sec" and empty subsection name
|
||||
// - (planned) within a single file, definitions must be contiguous for each:
|
||||
// - section: '[secA]' -> '[secB]' -> '[secA]' is an error
|
||||
// - subsection: '[sec "A"]' -> '[sec "B"]' -> '[sec "A"]' is an error
|
||||
// - multivalued variable: 'multi=a' -> 'other=x' -> 'multi=b' is an error
|
||||
//
|
||||
// Data structure
|
||||
//
|
||||
// The functions in this package read values into a user-defined struct.
|
||||
// Each section corresponds to a struct field in the config struct, and each
|
||||
// variable in a section corresponds to a data field in the section struct.
|
||||
// The mapping of each section or variable name to fields is done either based
|
||||
// on the "gcfg" struct tag or by matching the name of the section or variable,
|
||||
// ignoring case. In the latter case, hyphens '-' in section and variable names
|
||||
// correspond to underscores '_' in field names.
|
||||
// Fields must be exported; to use a section or variable name starting with a
|
||||
// letter that is neither upper- or lower-case, prefix the field name with 'X'.
|
||||
// (See https://code.google.com/p/go/issues/detail?id=5763#c4 .)
|
||||
//
|
||||
// For sections with subsections, the corresponding field in config must be a
|
||||
// map, rather than a struct, with string keys and pointer-to-struct values.
|
||||
// Values for subsection variables are stored in the map with the subsection
|
||||
// name used as the map key.
|
||||
// (Note that unlike section and variable names, subsection names are case
|
||||
// sensitive.)
|
||||
// When using a map, and there is a section with the same section name but
|
||||
// without a subsection name, its values are stored with the empty string used
|
||||
// as the key.
|
||||
// It is possible to provide default values for subsections in the section
|
||||
// "default-<sectionname>" (or by setting values in the corresponding struct
|
||||
// field "Default_<sectionname>").
|
||||
//
|
||||
// The functions in this package panic if config is not a pointer to a struct,
|
||||
// or when a field is not of a suitable type (either a struct or a map with
|
||||
// string keys and pointer-to-struct values).
|
||||
//
|
||||
// Parsing of values
|
||||
//
|
||||
// The section structs in the config struct may contain single-valued or
|
||||
// multi-valued variables. Variables of unnamed slice type (that is, a type
|
||||
// starting with `[]`) are treated as multi-value; all others (including named
|
||||
// slice types) are treated as single-valued variables.
|
||||
//
|
||||
// Single-valued variables are handled based on the type as follows.
|
||||
// Unnamed pointer types (that is, types starting with `*`) are dereferenced,
|
||||
// and if necessary, a new instance is allocated.
|
||||
//
|
||||
// For types implementing the encoding.TextUnmarshaler interface, the
|
||||
// UnmarshalText method is used to set the value. Implementing this method is
|
||||
// the recommended way for parsing user-defined types.
|
||||
//
|
||||
// For fields of string kind, the value string is assigned to the field, after
|
||||
// unquoting and unescaping as needed.
|
||||
// For fields of bool kind, the field is set to true if the value is "true",
|
||||
// "yes", "on" or "1", and set to false if the value is "false", "no", "off" or
|
||||
// "0", ignoring case. In addition, single-valued bool fields can be specified
|
||||
// with a "blank" value (variable name without equals sign and value); in such
|
||||
// case the value is set to true.
|
||||
//
|
||||
// Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as
|
||||
// decimal or hexadecimal (if having '0x' prefix). (This is to prevent
|
||||
// unintuitively handling zero-padded numbers as octal.) Other types having
|
||||
// [u]int* as the underlying type, such as os.FileMode and uintptr, allow
|
||||
// decimal, hexadecimal, or octal values.
|
||||
// Parsing mode for integer types can be overridden using the struct tag option
|
||||
// ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters
|
||||
// (each standing for decimal, hexadecimal, and octal, respectively.)
|
||||
//
|
||||
// All other types are parsed using fmt.Sscanf with the "%v" verb.
|
||||
//
|
||||
// For multi-valued variables, each individual value is parsed as above and
|
||||
// appended to the slice. If the first value is specified as a "blank" value
|
||||
// (variable name without equals sign and value), a new slice is allocated;
|
||||
// that is, any values previously set in the slice will be ignored.
|
||||
//
|
||||
// The types subpackage provides helpers for parsing "enum-like" and integer
|
||||
// types.
|
||||
//
|
||||
// Error handling
|
||||
//
|
||||
// There are 3 types of errors:
|
||||
//
|
||||
// - programmer errors / panics:
|
||||
// - invalid configuration structure
|
||||
// - data errors:
|
||||
// - fatal errors:
|
||||
// - invalid configuration syntax
|
||||
// - warnings:
|
||||
// - data that doesn't belong to any part of the config structure
|
||||
//
|
||||
// Programmer errors trigger panics. These should be fixed by the programmer
|
||||
// before releasing code that uses gcfg.
|
||||
//
|
||||
// Data errors cause gcfg to return a non-nil error value. This includes the
|
||||
// case when there are extra unknown key-value definitions in the configuration
|
||||
// data (extra data).
|
||||
// However, on some occasions it is desirable to be able to proceed in
|
||||
// situations when the only data error is that of extra data.
|
||||
// These errors are handled at a different (warning) priority and can be
|
||||
// filtered out programmatically. To ignore extra data warnings, wrap the
|
||||
// gcfg.Read*Into invocation into a call to gcfg.FatalOnly.
|
||||
//
|
||||
// TODO
|
||||
//
|
||||
// The following is a list of changes under consideration:
|
||||
// - documentation
|
||||
// - self-contained syntax documentation
|
||||
// - more practical examples
|
||||
// - move TODOs to issue tracker (eventually)
|
||||
// - syntax
|
||||
// - reconsider valid escape sequences
|
||||
// (gitconfig doesn't support \r in value, \t in subsection name, etc.)
|
||||
// - reading / parsing gcfg files
|
||||
// - define internal representation structure
|
||||
// - support multiple inputs (readers, strings, files)
|
||||
// - support declaring encoding (?)
|
||||
// - support varying fields sets for subsections (?)
|
||||
// - writing gcfg files
|
||||
// - error handling
|
||||
// - make error context accessible programmatically?
|
||||
// - limit input size?
|
||||
//
|
||||
package gcfg // import "github.com/go-git/gcfg"
|
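The package comment above covers sections, subsections, multi-valued variables, and blank bool values; a small sketch tying those rules to a concrete struct (the config layout and values are made up for this example):

```go
package main

import (
	"fmt"
	"log"

	"github.com/go-git/gcfg"
)

// Each section maps to a struct field; a section with subsections maps to a
// map of pointers to structs, and an unnamed slice type is multi-valued.
type Config struct {
	Core struct {
		Editor  string
		Verbose bool     // "verbose" with no '=' sets this to true
		Include []string // repeated "include" keys append here
	}
	Remote map[string]*struct {
		URL string
	}
}

const data = `
[core]
editor = vim
verbose
include = a.cfg
include = b.cfg

[remote "origin"]
url = https://example.com/repo.git
`

func main() {
	var cfg Config
	if err := gcfg.ReadStringInto(&cfg, data); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Core.Editor, cfg.Core.Verbose, cfg.Core.Include)
	fmt.Println(cfg.Remote["origin"].URL)
}
```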
41
vendor/github.com/go-git/gcfg/errors.go
generated
vendored
Normal file
|
@ -0,0 +1,41 @@
|
|||
package gcfg
|
||||
|
||||
import (
|
||||
"gopkg.in/warnings.v0"
|
||||
)
|
||||
|
||||
// FatalOnly filters the results of a Read*Into invocation and returns only
|
||||
// fatal errors. That is, errors (warnings) indicating data for unknown
|
||||
// sections / variables is ignored. Example invocation:
|
||||
//
|
||||
// err := gcfg.FatalOnly(gcfg.ReadFileInto(&cfg, configFile))
|
||||
// if err != nil {
|
||||
// ...
|
||||
//
|
||||
func FatalOnly(err error) error {
|
||||
return warnings.FatalOnly(err)
|
||||
}
|
||||
|
||||
func isFatal(err error) bool {
|
||||
_, ok := err.(extraData)
|
||||
return !ok
|
||||
}
|
||||
|
||||
type extraData struct {
|
||||
section string
|
||||
subsection *string
|
||||
variable *string
|
||||
}
|
||||
|
||||
func (e extraData) Error() string {
|
||||
s := "can't store data at section \"" + e.section + "\""
|
||||
if e.subsection != nil {
|
||||
s += ", subsection \"" + *e.subsection + "\""
|
||||
}
|
||||
if e.variable != nil {
|
||||
s += ", variable \"" + *e.variable + "\""
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
var _ error = extraData{}
|
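FatalOnly is the hook for the warning/fatal split described in doc.go: extraData warnings (unknown sections or variables) are filtered out, while syntax and I/O errors still surface. A short sketch; the file name and struct layout are assumptions for illustration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/go-git/gcfg"
)

type Config struct {
	Core struct{ Editor string }
}

func main() {
	var cfg Config
	// A plain ReadFileInto would also report keys that Config does not
	// declare; FatalOnly drops those "extra data" warnings and keeps only
	// fatal errors such as bad syntax or a missing file.
	if err := gcfg.FatalOnly(gcfg.ReadFileInto(&cfg, "app.gcfg")); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Core.Editor)
}
```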
8
vendor/github.com/go-git/gcfg/go.mod
generated
vendored
Normal file
|
@ -0,0 +1,8 @@
|
|||
module github.com/go-git/gcfg
|
||||
|
||||
go 1.13
|
||||
|
||||
require (
|
||||
github.com/pkg/errors v0.9.1
|
||||
gopkg.in/warnings.v0 v0.1.2
|
||||
)
|
4
vendor/github.com/go-git/gcfg/go.sum
generated
vendored
Normal file
|
@ -0,0 +1,4 @@
|
|||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
gopkg.in/warnings.v0 v0.1.2 h1:wFXVbFY8DY5/xOe1ECiWdKCzZlxgshcYVNkBHstARME=
|
||||
gopkg.in/warnings.v0 v0.1.2/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
|
273
vendor/github.com/go-git/gcfg/read.go
generated
vendored
Normal file
|
@ -0,0 +1,273 @@
|
|||
package gcfg
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"strings"
|
||||
|
||||
"gopkg.in/warnings.v0"
|
||||
|
||||
"github.com/go-git/gcfg/scanner"
|
||||
"github.com/go-git/gcfg/token"
|
||||
)
|
||||
|
||||
var unescape = map[rune]rune{'\\': '\\', '"': '"', 'n': '\n', 't': '\t', 'b': '\b', '\n': '\n'}
|
||||
|
||||
// no error: invalid literals should be caught by scanner
|
||||
func unquote(s string) string {
|
||||
u, q, esc := make([]rune, 0, len(s)), false, false
|
||||
for _, c := range s {
|
||||
if esc {
|
||||
uc, ok := unescape[c]
|
||||
switch {
|
||||
case ok:
|
||||
u = append(u, uc)
|
||||
fallthrough
|
||||
case !q && c == '\n':
|
||||
esc = false
|
||||
continue
|
||||
}
|
||||
panic("invalid escape sequence")
|
||||
}
|
||||
switch c {
|
||||
case '"':
|
||||
q = !q
|
||||
case '\\':
|
||||
esc = true
|
||||
default:
|
||||
u = append(u, c)
|
||||
}
|
||||
}
|
||||
if q {
|
||||
panic("missing end quote")
|
||||
}
|
||||
if esc {
|
||||
panic("invalid escape sequence")
|
||||
}
|
||||
return string(u)
|
||||
}
|
||||
|
||||
func read(c *warnings.Collector, callback func(string, string, string, string, bool) error,
|
||||
fset *token.FileSet, file *token.File, src []byte) error {
|
||||
//
|
||||
var s scanner.Scanner
|
||||
var errs scanner.ErrorList
|
||||
s.Init(file, src, func(p token.Position, m string) { errs.Add(p, m) }, 0)
|
||||
sect, sectsub := "", ""
|
||||
pos, tok, lit := s.Scan()
|
||||
errfn := func(msg string) error {
|
||||
return fmt.Errorf("%s: %s", fset.Position(pos), msg)
|
||||
}
|
||||
for {
|
||||
if errs.Len() > 0 {
|
||||
if err := c.Collect(errs.Err()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
switch tok {
|
||||
case token.EOF:
|
||||
return nil
|
||||
case token.EOL, token.COMMENT:
|
||||
pos, tok, lit = s.Scan()
|
||||
case token.LBRACK:
|
||||
pos, tok, lit = s.Scan()
|
||||
if errs.Len() > 0 {
|
||||
if err := c.Collect(errs.Err()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if tok != token.IDENT {
|
||||
if err := c.Collect(errfn("expected section name")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
sect, sectsub = lit, ""
|
||||
pos, tok, lit = s.Scan()
|
||||
if errs.Len() > 0 {
|
||||
if err := c.Collect(errs.Err()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if tok == token.STRING {
|
||||
sectsub = unquote(lit)
|
||||
if sectsub == "" {
|
||||
if err := c.Collect(errfn("empty subsection name")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
pos, tok, lit = s.Scan()
|
||||
if errs.Len() > 0 {
|
||||
if err := c.Collect(errs.Err()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
if tok != token.RBRACK {
|
||||
if sectsub == "" {
|
||||
if err := c.Collect(errfn("expected subsection name or right bracket")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err := c.Collect(errfn("expected right bracket")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
pos, tok, lit = s.Scan()
|
||||
if tok != token.EOL && tok != token.EOF && tok != token.COMMENT {
|
||||
if err := c.Collect(errfn("expected EOL, EOF, or comment")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// If a section/subsection header was found, ensure a
|
||||
// container object is created, even if there are no
|
||||
// variables further down.
|
||||
err := c.Collect(callback(sect, sectsub, "", "", true))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
case token.IDENT:
|
||||
if sect == "" {
|
||||
if err := c.Collect(errfn("expected section header")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
n := lit
|
||||
pos, tok, lit = s.Scan()
|
||||
if errs.Len() > 0 {
|
||||
return errs.Err()
|
||||
}
|
||||
blank, v := tok == token.EOF || tok == token.EOL || tok == token.COMMENT, ""
|
||||
if !blank {
|
||||
if tok != token.ASSIGN {
|
||||
if err := c.Collect(errfn("expected '='")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
pos, tok, lit = s.Scan()
|
||||
if errs.Len() > 0 {
|
||||
if err := c.Collect(errs.Err()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if tok != token.STRING {
|
||||
if err := c.Collect(errfn("expected value")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
v = unquote(lit)
|
||||
pos, tok, lit = s.Scan()
|
||||
if errs.Len() > 0 {
|
||||
if err := c.Collect(errs.Err()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if tok != token.EOL && tok != token.EOF && tok != token.COMMENT {
|
||||
if err := c.Collect(errfn("expected EOL, EOF, or comment")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
err := c.Collect(callback(sect, sectsub, n, v, blank))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
default:
|
||||
if sect == "" {
|
||||
if err := c.Collect(errfn("expected section header")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if err := c.Collect(errfn("expected section header or variable declaration")); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
panic("never reached")
|
||||
}
|
||||
|
||||
func readInto(config interface{}, fset *token.FileSet, file *token.File,
|
||||
src []byte) error {
|
||||
//
|
||||
c := warnings.NewCollector(isFatal)
|
||||
firstPassCallback := func(s string, ss string, k string, v string, bv bool) error {
|
||||
return set(c, config, s, ss, k, v, bv, false)
|
||||
}
|
||||
err := read(c, firstPassCallback, fset, file, src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
secondPassCallback := func(s string, ss string, k string, v string, bv bool) error {
|
||||
return set(c, config, s, ss, k, v, bv, true)
|
||||
}
|
||||
err = read(c, secondPassCallback, fset, file, src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return c.Done()
|
||||
}
|
||||
|
||||
// ReadWithCallback reads gcfg formatted data from reader and calls
|
||||
// callback with each section and option found.
|
||||
//
|
||||
// Callback is called with section, subsection, option key, option value
|
||||
// and blank value flag as arguments.
|
||||
//
|
||||
// When a section is found, callback is called with empty subsection, option key
|
||||
// and option value.
|
||||
//
|
||||
// When a subsection is found, callback is called with empty option key and
|
||||
// option value.
|
||||
//
|
||||
// If the blank value flag is true, it means that the value was not set for an option
|
||||
// (as opposed to set to empty string).
|
||||
//
|
||||
// If callback returns an error, ReadWithCallback terminates with an error too.
|
||||
func ReadWithCallback(reader io.Reader, callback func(string, string, string, string, bool) error) error {
|
||||
src, err := io.ReadAll(reader)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fset := token.NewFileSet()
|
||||
file := fset.AddFile("", fset.Base(), len(src))
|
||||
c := warnings.NewCollector(isFatal)
|
||||
|
||||
return read(c, callback, fset, file, src)
|
||||
}
|
||||
|
||||
// ReadInto reads gcfg formatted data from reader and sets the values into the
|
||||
// corresponding fields in config.
|
||||
func ReadInto(config interface{}, reader io.Reader) error {
|
||||
src, err := io.ReadAll(reader)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fset := token.NewFileSet()
|
||||
file := fset.AddFile("", fset.Base(), len(src))
|
||||
return readInto(config, fset, file, src)
|
||||
}
|
||||
|
||||
// ReadStringInto reads gcfg formatted data from str and sets the values into
|
||||
// the corresponding fields in config.
|
||||
func ReadStringInto(config interface{}, str string) error {
|
||||
r := strings.NewReader(str)
|
||||
return ReadInto(config, r)
|
||||
}
|
||||
|
||||
// ReadFileInto reads gcfg formatted data from the file filename and sets the
|
||||
// values into the corresponding fields in config.
|
||||
func ReadFileInto(config interface{}, filename string) error {
|
||||
f, err := os.Open(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
src, err := io.ReadAll(f)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
fset := token.NewFileSet()
|
||||
file := fset.AddFile(filename, fset.Base(), len(src))
|
||||
return readInto(config, fset, file, src)
|
||||
}
|
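ReadWithCallback, defined above, is the lowest-level entry point: the callback sees every section header and every key/value pair, with an empty key signalling a header. A sketch that simply prints what the parser reports; the sample input is made up:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/go-git/gcfg"
)

func main() {
	src := "[user]\nname = alice\n[user \"work\"]\nname = bob\n"

	err := gcfg.ReadWithCallback(strings.NewReader(src),
		func(section, subsection, key, value string, blank bool) error {
			// An empty key means a section/subsection header was found;
			// blank reports values given without an '=' sign.
			fmt.Printf("sect=%q sub=%q key=%q val=%q blank=%v\n",
				section, subsection, key, value, blank)
			return nil
		})
	if err != nil {
		log.Fatal(err)
	}
}
```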
121
vendor/github.com/go-git/gcfg/scanner/errors.go
generated
vendored
Normal file
|
@ -0,0 +1,121 @@
|
|||
// Copyright 2009 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package scanner
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"sort"
|
||||
)
|
||||
|
||||
import (
|
||||
"github.com/go-git/gcfg/token"
|
||||
)
|
||||
|
||||
// In an ErrorList, an error is represented by an *Error.
|
||||
// The position Pos, if valid, points to the beginning of
|
||||
// the offending token, and the error condition is described
|
||||
// by Msg.
|
||||
//
|
||||
type Error struct {
|
||||
Pos token.Position
|
||||
Msg string
|
||||
}
|
||||
|
||||
// Error implements the error interface.
|
||||
func (e Error) Error() string {
|
||||
if e.Pos.Filename != "" || e.Pos.IsValid() {
|
||||
// don't print "<unknown position>"
|
||||
// TODO(gri) reconsider the semantics of Position.IsValid
|
||||
return e.Pos.String() + ": " + e.Msg
|
||||
}
|
||||
return e.Msg
|
||||
}
|
||||
|
||||
// ErrorList is a list of *Errors.
|
||||
// The zero value for an ErrorList is an empty ErrorList ready to use.
|
||||
//
|
||||
type ErrorList []*Error
|
||||
|
||||
// Add adds an Error with given position and error message to an ErrorList.
|
||||
func (p *ErrorList) Add(pos token.Position, msg string) {
|
||||
*p = append(*p, &Error{pos, msg})
|
||||
}
|
||||
|
||||
// Reset resets an ErrorList to no errors.
|
||||
func (p *ErrorList) Reset() { *p = (*p)[0:0] }
|
||||
|
||||
// ErrorList implements the sort Interface.
|
||||
func (p ErrorList) Len() int { return len(p) }
|
||||
func (p ErrorList) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
|
||||
|
||||
func (p ErrorList) Less(i, j int) bool {
|
||||
e := &p[i].Pos
|
||||
f := &p[j].Pos
|
||||
if e.Filename < f.Filename {
|
||||
return true
|
||||
}
|
||||
if e.Filename == f.Filename {
|
||||
return e.Offset < f.Offset
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// Sort sorts an ErrorList. *Error entries are sorted by position,
|
||||
// other errors are sorted by error message, and before any *Error
|
||||
// entry.
|
||||
//
|
||||
func (p ErrorList) Sort() {
|
||||
sort.Sort(p)
|
||||
}
|
||||
|
||||
// RemoveMultiples sorts an ErrorList and removes all but the first error per line.
|
||||
func (p *ErrorList) RemoveMultiples() {
|
||||
sort.Sort(p)
|
||||
var last token.Position // initial last.Line is != any legal error line
|
||||
i := 0
|
||||
for _, e := range *p {
|
||||
if e.Pos.Filename != last.Filename || e.Pos.Line != last.Line {
|
||||
last = e.Pos
|
||||
(*p)[i] = e
|
||||
i++
|
||||
}
|
||||
}
|
||||
(*p) = (*p)[0:i]
|
||||
}
|
||||
|
||||
// An ErrorList implements the error interface.
|
||||
func (p ErrorList) Error() string {
|
||||
switch len(p) {
|
||||
case 0:
|
||||
return "no errors"
|
||||
case 1:
|
||||
return p[0].Error()
|
||||
}
|
||||
return fmt.Sprintf("%s (and %d more errors)", p[0], len(p)-1)
|
||||
}
|
||||
|
||||
// Err returns an error equivalent to this error list.
|
||||
// If the list is empty, Err returns nil.
|
||||
func (p ErrorList) Err() error {
|
||||
if len(p) == 0 {
|
||||
return nil
|
||||
}
|
||||
return p
|
||||
}
|
||||
|
||||
// PrintError is a utility function that prints a list of errors to w,
|
||||
// one error per line, if the err parameter is an ErrorList. Otherwise
|
||||
// it prints the err string.
|
||||
//
|
||||
func PrintError(w io.Writer, err error) {
|
||||
if list, ok := err.(ErrorList); ok {
|
||||
for _, e := range list {
|
||||
fmt.Fprintf(w, "%s\n", e)
|
||||
}
|
||||
} else if err != nil {
|
||||
fmt.Fprintf(w, "%s\n", err)
|
||||
}
|
||||
}
|
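ErrorList collects positioned errors from the scanner, and PrintError is the convenience printer defined above. A sketch of wiring the two to a Scanner; the deliberately broken input is an assumption for illustration:

```go
package main

import (
	"os"

	"github.com/go-git/gcfg/scanner"
	"github.com/go-git/gcfg/token"
)

func main() {
	// The NUL byte is illegal in gcfg input and is reported twice: once
	// while reading the character and once as an ILLEGAL token.
	src := []byte("name = value\nbad \x00 byte\n")

	fset := token.NewFileSet()
	file := fset.AddFile("broken.gcfg", fset.Base(), len(src))

	var errs scanner.ErrorList
	var s scanner.Scanner
	s.Init(file, src, errs.Add, 0) // errs.Add matches the ErrorHandler signature

	// Drain the token stream; syntax errors land in errs via the handler.
	for {
		if _, tok, _ := s.Scan(); tok == token.EOF {
			break
		}
	}

	errs.Sort()
	scanner.PrintError(os.Stderr, errs.Err())
}
```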
334
vendor/github.com/go-git/gcfg/scanner/scanner.go
generated
vendored
Normal file
|
@ -0,0 +1,334 @@
|
|||
// Copyright 2009 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package scanner implements a scanner for gcfg configuration text.
|
||||
// It takes a []byte as source which can then be tokenized
|
||||
// through repeated calls to the Scan method.
|
||||
//
|
||||
// Note that the API for the scanner package may change to accommodate new
|
||||
// features or implementation changes in gcfg.
|
||||
package scanner
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"path/filepath"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
|
||||
"github.com/go-git/gcfg/token"
|
||||
)
|
||||
|
||||
// An ErrorHandler may be provided to Scanner.Init. If a syntax error is
|
||||
// encountered and a handler was installed, the handler is called with a
|
||||
// position and an error message. The position points to the beginning of
|
||||
// the offending token.
|
||||
type ErrorHandler func(pos token.Position, msg string)
|
||||
|
||||
// A Scanner holds the scanner's internal state while processing
|
||||
// a given text. It can be allocated as part of another data
|
||||
// structure but must be initialized via Init before use.
|
||||
type Scanner struct {
|
||||
// immutable state
|
||||
file *token.File // source file handle
|
||||
dir string // directory portion of file.Name()
|
||||
src []byte // source
|
||||
err ErrorHandler // error reporting; or nil
|
||||
mode Mode // scanning mode
|
||||
|
||||
// scanning state
|
||||
ch rune // current character
|
||||
offset int // character offset
|
||||
rdOffset int // reading offset (position after current character)
|
||||
lineOffset int // current line offset
|
||||
nextVal bool // next token is expected to be a value
|
||||
|
||||
// public state - ok to modify
|
||||
ErrorCount int // number of errors encountered
|
||||
}
|
||||
|
||||
// Read the next Unicode char into s.ch.
|
||||
// s.ch < 0 means end-of-file.
|
||||
func (s *Scanner) next() {
|
||||
if s.rdOffset < len(s.src) {
|
||||
s.offset = s.rdOffset
|
||||
if s.ch == '\n' {
|
||||
s.lineOffset = s.offset
|
||||
s.file.AddLine(s.offset)
|
||||
}
|
||||
r, w := rune(s.src[s.rdOffset]), 1
|
||||
switch {
|
||||
case r == 0:
|
||||
s.error(s.offset, "illegal character NUL")
|
||||
case r >= 0x80:
|
||||
// not ASCII
|
||||
r, w = utf8.DecodeRune(s.src[s.rdOffset:])
|
||||
if r == utf8.RuneError && w == 1 {
|
||||
s.error(s.offset, "illegal UTF-8 encoding")
|
||||
}
|
||||
}
|
||||
s.rdOffset += w
|
||||
s.ch = r
|
||||
} else {
|
||||
s.offset = len(s.src)
|
||||
if s.ch == '\n' {
|
||||
s.lineOffset = s.offset
|
||||
s.file.AddLine(s.offset)
|
||||
}
|
||||
s.ch = -1 // eof
|
||||
}
|
||||
}
|
||||
|
||||
// A mode value is a set of flags (or 0).
|
||||
// They control scanner behavior.
|
||||
type Mode uint
|
||||
|
||||
const (
|
||||
ScanComments Mode = 1 << iota // return comments as COMMENT tokens
|
||||
)
|
||||
|
||||
// Init prepares the scanner s to tokenize the text src by setting the
|
||||
// scanner at the beginning of src. The scanner uses the file set file
|
||||
// for position information and it adds line information for each line.
|
||||
// It is ok to re-use the same file when re-scanning the same file as
|
||||
// line information which is already present is ignored. Init causes a
|
||||
// panic if the file size does not match the src size.
|
||||
//
|
||||
// Calls to Scan will invoke the error handler err if they encounter a
|
||||
// syntax error and err is not nil. Also, for each error encountered,
|
||||
// the Scanner field ErrorCount is incremented by one. The mode parameter
|
||||
// determines how comments are handled.
|
||||
//
|
||||
// Note that Init may call err if there is an error in the first character
|
||||
// of the file.
|
||||
func (s *Scanner) Init(file *token.File, src []byte, err ErrorHandler, mode Mode) {
|
||||
// Explicitly initialize all fields since a scanner may be reused.
|
||||
if file.Size() != len(src) {
|
||||
panic(fmt.Sprintf("file size (%d) does not match src len (%d)", file.Size(), len(src)))
|
||||
}
|
||||
s.file = file
|
||||
s.dir, _ = filepath.Split(file.Name())
|
||||
s.src = src
|
||||
s.err = err
|
||||
s.mode = mode
|
||||
|
||||
s.ch = ' '
|
||||
s.offset = 0
|
||||
s.rdOffset = 0
|
||||
s.lineOffset = 0
|
||||
s.ErrorCount = 0
|
||||
s.nextVal = false
|
||||
|
||||
s.next()
|
||||
}
|
||||
|
||||
func (s *Scanner) error(offs int, msg string) {
|
||||
if s.err != nil {
|
||||
s.err(s.file.Position(s.file.Pos(offs)), msg)
|
||||
}
|
||||
s.ErrorCount++
|
||||
}
|
||||
|
||||
func (s *Scanner) scanComment() string {
|
||||
// initial [;#] already consumed
|
||||
offs := s.offset - 1 // position of initial [;#]
|
||||
|
||||
for s.ch != '\n' && s.ch >= 0 {
|
||||
s.next()
|
||||
}
|
||||
return string(s.src[offs:s.offset])
|
||||
}
|
||||
|
||||
func isLetter(ch rune) bool {
|
||||
return 'a' <= ch && ch <= 'z' || 'A' <= ch && ch <= 'Z' || ch >= 0x80 && unicode.IsLetter(ch)
|
||||
}
|
||||
|
||||
func isDigit(ch rune) bool {
|
||||
return '0' <= ch && ch <= '9' || ch >= 0x80 && unicode.IsDigit(ch)
|
||||
}
|
||||
|
||||
func (s *Scanner) scanIdentifier() string {
|
||||
offs := s.offset
|
||||
for isLetter(s.ch) || isDigit(s.ch) || s.ch == '-' {
|
||||
s.next()
|
||||
}
|
||||
return string(s.src[offs:s.offset])
|
||||
}
|
||||
|
||||
// val indicates whether we are scanning a value (vs. a header)
|
||||
func (s *Scanner) scanEscape(val bool) {
|
||||
offs := s.offset
|
||||
ch := s.ch
|
||||
s.next() // always make progress
|
||||
switch ch {
|
||||
case '\\', '"', '\n':
|
||||
// ok
|
||||
case 'n', 't', 'b':
|
||||
if val {
|
||||
break // ok
|
||||
}
|
||||
fallthrough
|
||||
default:
|
||||
s.error(offs, "unknown escape sequence")
|
||||
}
|
||||
}
|
||||
|
||||
func (s *Scanner) scanString() string {
|
||||
// '"' opening already consumed
|
||||
offs := s.offset - 1
|
||||
|
||||
for s.ch != '"' {
|
||||
ch := s.ch
|
||||
s.next()
|
||||
if ch == '\n' || ch < 0 {
|
||||
s.error(offs, "string not terminated")
|
||||
break
|
||||
}
|
||||
if ch == '\\' {
|
||||
s.scanEscape(false)
|
||||
}
|
||||
}
|
||||
|
||||
s.next()
|
||||
|
||||
return string(s.src[offs:s.offset])
|
||||
}
|
||||
|
||||
func stripCR(b []byte) []byte {
|
||||
c := make([]byte, len(b))
|
||||
i := 0
|
||||
for _, ch := range b {
|
||||
if ch != '\r' {
|
||||
c[i] = ch
|
||||
i++
|
||||
}
|
||||
}
|
||||
return c[:i]
|
||||
}
|
||||
|
||||
func (s *Scanner) scanValString() string {
|
||||
offs := s.offset
|
||||
|
||||
hasCR := false
|
||||
end := offs
|
||||
inQuote := false
|
||||
loop:
|
||||
for inQuote || s.ch >= 0 && s.ch != '\n' && s.ch != ';' && s.ch != '#' {
|
||||
ch := s.ch
|
||||
s.next()
|
||||
switch {
|
||||
case inQuote && ch == '\\':
|
||||
s.scanEscape(true)
|
||||
case !inQuote && ch == '\\':
|
||||
if s.ch == '\r' {
|
||||
hasCR = true
|
||||
s.next()
|
||||
}
|
||||
if s.ch != '\n' {
|
||||
s.scanEscape(true)
|
||||
} else {
|
||||
s.next()
|
||||
}
|
||||
case ch == '"':
|
||||
inQuote = !inQuote
|
||||
case ch == '\r':
|
||||
hasCR = true
|
||||
case ch < 0 || inQuote && ch == '\n':
|
||||
s.error(offs, "string not terminated")
|
||||
break loop
|
||||
}
|
||||
if inQuote || !isWhiteSpace(ch) {
|
||||
end = s.offset
|
||||
}
|
||||
}
|
||||
|
||||
lit := s.src[offs:end]
|
||||
if hasCR {
|
||||
lit = stripCR(lit)
|
||||
}
|
||||
|
||||
return string(lit)
|
||||
}
|
||||
|
||||
func isWhiteSpace(ch rune) bool {
|
||||
return ch == ' ' || ch == '\t' || ch == '\r'
|
||||
}
|
||||
|
||||
func (s *Scanner) skipWhitespace() {
|
||||
for isWhiteSpace(s.ch) {
|
||||
s.next()
|
||||
}
|
||||
}
|
||||
|
||||
// Scan scans the next token and returns the token position, the token,
|
||||
// and its literal string if applicable. The source end is indicated by
|
||||
// token.EOF.
|
||||
//
|
||||
// If the returned token is a literal (token.IDENT, token.STRING) or
|
||||
// token.COMMENT, the literal string has the corresponding value.
|
||||
//
|
||||
// If the returned token is token.ILLEGAL, the literal string is the
|
||||
// offending character.
|
||||
//
|
||||
// In all other cases, Scan returns an empty literal string.
|
||||
//
|
||||
// For more tolerant parsing, Scan will return a valid token if
|
||||
// possible even if a syntax error was encountered. Thus, even
|
||||
// if the resulting token sequence contains no illegal tokens,
|
||||
// a client may not assume that no error occurred. Instead it
|
||||
// must check the scanner's ErrorCount or the number of calls
|
||||
// of the error handler, if there was one installed.
|
||||
//
|
||||
// Scan adds line information to the file added to the file
|
||||
// set with Init. Token positions are relative to that file
|
||||
// and thus relative to the file set.
|
||||
func (s *Scanner) Scan() (pos token.Pos, tok token.Token, lit string) {
|
||||
scanAgain:
|
||||
s.skipWhitespace()
|
||||
|
||||
// current token start
|
||||
pos = s.file.Pos(s.offset)
|
||||
|
||||
// determine token value
|
||||
switch ch := s.ch; {
|
||||
case s.nextVal:
|
||||
lit = s.scanValString()
|
||||
tok = token.STRING
|
||||
s.nextVal = false
|
||||
case isLetter(ch):
|
||||
lit = s.scanIdentifier()
|
||||
tok = token.IDENT
|
||||
default:
|
||||
s.next() // always make progress
|
||||
switch ch {
|
||||
case -1:
|
||||
tok = token.EOF
|
||||
case '\n':
|
||||
tok = token.EOL
|
||||
case '"':
|
||||
tok = token.STRING
|
||||
lit = s.scanString()
|
||||
case '[':
|
||||
tok = token.LBRACK
|
||||
case ']':
|
||||
tok = token.RBRACK
|
||||
case ';', '#':
|
||||
// comment
|
||||
lit = s.scanComment()
|
||||
if s.mode&ScanComments == 0 {
|
||||
// skip comment
|
||||
goto scanAgain
|
||||
}
|
||||
tok = token.COMMENT
|
||||
case '=':
|
||||
tok = token.ASSIGN
|
||||
s.nextVal = true
|
||||
default:
|
||||
s.error(s.file.Offset(pos), fmt.Sprintf("illegal character %#U", ch))
|
||||
tok = token.ILLEGAL
|
||||
lit = string(ch)
|
||||
}
|
||||
}
|
||||
|
||||
return
|
||||
}
|
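Scan, documented above, drives the whole tokenizer; a small loop showing the token stream for a two-line input (the file name and contents are placeholders):

```go
package main

import (
	"fmt"

	"github.com/go-git/gcfg/scanner"
	"github.com/go-git/gcfg/token"
)

func main() {
	src := []byte("[core]\neditor = vim ; trailing comment\n")

	fset := token.NewFileSet()
	file := fset.AddFile("example.gcfg", fset.Base(), len(src))

	var s scanner.Scanner
	// Mode 0 skips comments; pass scanner.ScanComments to receive them
	// as COMMENT tokens instead.
	s.Init(file, src, func(pos token.Position, msg string) {
		fmt.Printf("error at %s: %s\n", pos, msg)
	}, 0)

	for {
		pos, tok, lit := s.Scan()
		if tok == token.EOF {
			break
		}
		fmt.Printf("%-14s %-3v %q\n", fset.Position(pos), tok, lit)
	}
}
```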
334
vendor/github.com/go-git/gcfg/set.go
generated
vendored
Normal file
|
@ -0,0 +1,334 @@
|
|||
package gcfg
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding"
|
||||
"encoding/gob"
|
||||
"fmt"
|
||||
"math/big"
|
||||
"reflect"
|
||||
"strings"
|
||||
"unicode"
|
||||
"unicode/utf8"
|
||||
|
||||
"gopkg.in/warnings.v0"
|
||||
|
||||
"github.com/go-git/gcfg/types"
|
||||
)
|
||||
|
||||
type tag struct {
|
||||
ident string
|
||||
intMode string
|
||||
}
|
||||
|
||||
func newTag(ts string) tag {
|
||||
t := tag{}
|
||||
s := strings.Split(ts, ",")
|
||||
t.ident = s[0]
|
||||
for _, tse := range s[1:] {
|
||||
if strings.HasPrefix(tse, "int=") {
|
||||
t.intMode = tse[len("int="):]
|
||||
}
|
||||
}
|
||||
return t
|
||||
}
|
||||
|
||||
func fieldFold(v reflect.Value, name string) (reflect.Value, tag) {
|
||||
var n string
|
||||
r0, _ := utf8.DecodeRuneInString(name)
|
||||
if unicode.IsLetter(r0) && !unicode.IsLower(r0) && !unicode.IsUpper(r0) {
|
||||
n = "X"
|
||||
}
|
||||
n += strings.Replace(name, "-", "_", -1)
|
||||
f, ok := v.Type().FieldByNameFunc(func(fieldName string) bool {
|
||||
if !v.FieldByName(fieldName).CanSet() {
|
||||
return false
|
||||
}
|
||||
f, _ := v.Type().FieldByName(fieldName)
|
||||
t := newTag(f.Tag.Get("gcfg"))
|
||||
if t.ident != "" {
|
||||
return strings.EqualFold(t.ident, name)
|
||||
}
|
||||
return strings.EqualFold(n, fieldName)
|
||||
})
|
||||
if !ok {
|
||||
return reflect.Value{}, tag{}
|
||||
}
|
||||
return v.FieldByName(f.Name), newTag(f.Tag.Get("gcfg"))
|
||||
}
|
||||
|
||||
type setter func(destp interface{}, blank bool, val string, t tag) error
|
||||
|
||||
var errUnsupportedType = fmt.Errorf("unsupported type")
|
||||
var errBlankUnsupported = fmt.Errorf("blank value not supported for type")
|
||||
|
||||
var setters = []setter{
|
||||
typeSetter, textUnmarshalerSetter, kindSetter, scanSetter,
|
||||
}
|
||||
|
||||
func textUnmarshalerSetter(d interface{}, blank bool, val string, t tag) error {
|
||||
dtu, ok := d.(encoding.TextUnmarshaler)
|
||||
if !ok {
|
||||
return errUnsupportedType
|
||||
}
|
||||
if blank {
|
||||
return errBlankUnsupported
|
||||
}
|
||||
return dtu.UnmarshalText([]byte(val))
|
||||
}
|
||||
|
||||
func boolSetter(d interface{}, blank bool, val string, t tag) error {
|
||||
if blank {
|
||||
reflect.ValueOf(d).Elem().Set(reflect.ValueOf(true))
|
||||
return nil
|
||||
}
|
||||
b, err := types.ParseBool(val)
|
||||
if err == nil {
|
||||
reflect.ValueOf(d).Elem().Set(reflect.ValueOf(b))
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
func intMode(mode string) types.IntMode {
|
||||
var m types.IntMode
|
||||
if strings.ContainsAny(mode, "dD") {
|
||||
m |= types.Dec
|
||||
}
|
||||
if strings.ContainsAny(mode, "hH") {
|
||||
m |= types.Hex
|
||||
}
|
||||
if strings.ContainsAny(mode, "oO") {
|
||||
m |= types.Oct
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
var typeModes = map[reflect.Type]types.IntMode{
|
||||
reflect.TypeOf(int(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(int8(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(int16(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(int32(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(int64(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(uint(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(uint8(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(uint16(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(uint32(0)): types.Dec | types.Hex,
|
||||
reflect.TypeOf(uint64(0)): types.Dec | types.Hex,
|
||||
// use default mode (allow dec/hex/oct) for uintptr type
|
||||
reflect.TypeOf(big.Int{}): types.Dec | types.Hex,
|
||||
}
|
||||
|
||||
func intModeDefault(t reflect.Type) types.IntMode {
|
||||
m, ok := typeModes[t]
|
||||
if !ok {
|
||||
m = types.Dec | types.Hex | types.Oct
|
||||
}
|
||||
return m
|
||||
}
|
||||
|
||||
func intSetter(d interface{}, blank bool, val string, t tag) error {
|
||||
if blank {
|
||||
return errBlankUnsupported
|
||||
}
|
||||
mode := intMode(t.intMode)
|
||||
if mode == 0 {
|
||||
mode = intModeDefault(reflect.TypeOf(d).Elem())
|
||||
}
|
||||
return types.ParseInt(d, val, mode)
|
||||
}
|
||||
|
||||
func stringSetter(d interface{}, blank bool, val string, t tag) error {
|
||||
if blank {
|
||||
return errBlankUnsupported
|
||||
}
|
||||
dsp, ok := d.(*string)
|
||||
if !ok {
|
||||
return errUnsupportedType
|
||||
}
|
||||
*dsp = val
|
||||
return nil
|
||||
}
|
||||
|
||||
var kindSetters = map[reflect.Kind]setter{
|
||||
reflect.String: stringSetter,
|
||||
reflect.Bool: boolSetter,
|
||||
reflect.Int: intSetter,
|
||||
reflect.Int8: intSetter,
|
||||
reflect.Int16: intSetter,
|
||||
reflect.Int32: intSetter,
|
||||
reflect.Int64: intSetter,
|
||||
reflect.Uint: intSetter,
|
||||
reflect.Uint8: intSetter,
|
||||
reflect.Uint16: intSetter,
|
||||
reflect.Uint32: intSetter,
|
||||
reflect.Uint64: intSetter,
|
||||
reflect.Uintptr: intSetter,
|
||||
}
|
||||
|
||||
var typeSetters = map[reflect.Type]setter{
|
||||
reflect.TypeOf(big.Int{}): intSetter,
|
||||
}
|
||||
|
||||
func typeSetter(d interface{}, blank bool, val string, tt tag) error {
|
||||
t := reflect.ValueOf(d).Type().Elem()
|
||||
setter, ok := typeSetters[t]
|
||||
if !ok {
|
||||
return errUnsupportedType
|
||||
}
|
||||
return setter(d, blank, val, tt)
|
||||
}
|
||||
|
||||
func kindSetter(d interface{}, blank bool, val string, tt tag) error {
|
||||
k := reflect.ValueOf(d).Type().Elem().Kind()
|
||||
setter, ok := kindSetters[k]
|
||||
if !ok {
|
||||
return errUnsupportedType
|
||||
}
|
||||
return setter(d, blank, val, tt)
|
||||
}
|
||||
|
||||
func scanSetter(d interface{}, blank bool, val string, tt tag) error {
|
||||
if blank {
|
||||
return errBlankUnsupported
|
||||
}
|
||||
return types.ScanFully(d, val, 'v')
|
||||
}
|
||||
|
||||
func newValue(c *warnings.Collector, sect string, vCfg reflect.Value,
|
||||
vType reflect.Type) (reflect.Value, error) {
|
||||
//
|
||||
pv := reflect.New(vType)
|
||||
dfltName := "default-" + sect
|
||||
dfltField, _ := fieldFold(vCfg, dfltName)
|
||||
var err error
|
||||
if dfltField.IsValid() {
|
||||
b := bytes.NewBuffer(nil)
|
||||
ge := gob.NewEncoder(b)
|
||||
if err = c.Collect(ge.EncodeValue(dfltField)); err != nil {
|
||||
return pv, err
|
||||
}
|
||||
gd := gob.NewDecoder(bytes.NewReader(b.Bytes()))
|
||||
if err = c.Collect(gd.DecodeValue(pv.Elem())); err != nil {
|
||||
return pv, err
|
||||
}
|
||||
}
|
||||
return pv, nil
|
||||
}
|
||||
|
||||
func set(c *warnings.Collector, cfg interface{}, sect, sub, name string,
|
||||
value string, blankValue bool, subsectPass bool) error {
|
||||
//
|
||||
vPCfg := reflect.ValueOf(cfg)
|
||||
if vPCfg.Kind() != reflect.Ptr || vPCfg.Elem().Kind() != reflect.Struct {
|
||||
panic(fmt.Errorf("config must be a pointer to a struct"))
|
||||
}
|
||||
vCfg := vPCfg.Elem()
|
||||
vSect, _ := fieldFold(vCfg, sect)
|
||||
if !vSect.IsValid() {
|
||||
err := extraData{section: sect}
|
||||
return c.Collect(err)
|
||||
}
|
||||
isSubsect := vSect.Kind() == reflect.Map
|
||||
if subsectPass != isSubsect {
|
||||
return nil
|
||||
}
|
||||
if isSubsect {
|
||||
vst := vSect.Type()
|
||||
if vst.Key().Kind() != reflect.String ||
|
||||
vst.Elem().Kind() != reflect.Ptr ||
|
||||
vst.Elem().Elem().Kind() != reflect.Struct {
|
||||
panic(fmt.Errorf("map field for section must have string keys and "+
|
||||
" pointer-to-struct values: section %q", sect))
|
||||
}
|
||||
if vSect.IsNil() {
|
||||
vSect.Set(reflect.MakeMap(vst))
|
||||
}
|
||||
k := reflect.ValueOf(sub)
|
||||
pv := vSect.MapIndex(k)
|
||||
if !pv.IsValid() {
|
||||
vType := vSect.Type().Elem().Elem()
|
||||
var err error
|
||||
if pv, err = newValue(c, sect, vCfg, vType); err != nil {
|
||||
return err
|
||||
}
|
||||
vSect.SetMapIndex(k, pv)
|
||||
}
|
||||
vSect = pv.Elem()
|
||||
} else if vSect.Kind() != reflect.Struct {
|
||||
panic(fmt.Errorf("field for section must be a map or a struct: "+
|
||||
"section %q", sect))
|
||||
} else if sub != "" {
|
||||
err := extraData{section: sect, subsection: &sub}
|
||||
return c.Collect(err)
|
||||
}
|
||||
// Empty name is a special value, meaning that only the
|
||||
// section/subsection object is to be created, with no values set.
|
||||
if name == "" {
|
||||
return nil
|
||||
}
|
||||
vVar, t := fieldFold(vSect, name)
|
||||
if !vVar.IsValid() {
|
||||
var err error
|
||||
if isSubsect {
|
||||
err = extraData{section: sect, subsection: &sub, variable: &name}
|
||||
} else {
|
||||
err = extraData{section: sect, variable: &name}
|
||||
}
|
||||
return c.Collect(err)
|
||||
}
|
||||
// vVal is either single-valued var, or newly allocated value within multi-valued var
|
||||
var vVal reflect.Value
|
||||
// multi-value if unnamed slice type
|
||||
isMulti := vVar.Type().Name() == "" && vVar.Kind() == reflect.Slice ||
|
||||
vVar.Type().Name() == "" && vVar.Kind() == reflect.Ptr && vVar.Type().Elem().Name() == "" && vVar.Type().Elem().Kind() == reflect.Slice
|
||||
if isMulti && vVar.Kind() == reflect.Ptr {
|
||||
if vVar.IsNil() {
|
||||
vVar.Set(reflect.New(vVar.Type().Elem()))
|
||||
}
|
||||
vVar = vVar.Elem()
|
||||
}
|
||||
if isMulti && blankValue {
|
||||
vVar.Set(reflect.Zero(vVar.Type()))
|
||||
return nil
|
||||
}
|
||||
if isMulti {
|
||||
vVal = reflect.New(vVar.Type().Elem()).Elem()
|
||||
} else {
|
||||
vVal = vVar
|
||||
}
|
||||
isDeref := vVal.Type().Name() == "" && vVal.Type().Kind() == reflect.Ptr
|
||||
isNew := isDeref && vVal.IsNil()
|
||||
// vAddr is address of value to set (dereferenced & allocated as needed)
|
||||
var vAddr reflect.Value
|
||||
switch {
|
||||
case isNew:
|
||||
vAddr = reflect.New(vVal.Type().Elem())
|
||||
case isDeref && !isNew:
|
||||
vAddr = vVal
|
||||
default:
|
||||
vAddr = vVal.Addr()
|
||||
}
|
||||
vAddrI := vAddr.Interface()
|
||||
err, ok := error(nil), false
|
||||
for _, s := range setters {
|
||||
err = s(vAddrI, blankValue, value, t)
|
||||
if err == nil {
|
||||
ok = true
|
||||
break
|
||||
}
|
||||
if err != errUnsupportedType {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if !ok {
|
||||
// in case all setters returned errUnsupportedType
|
||||
return err
|
||||
}
|
||||
if isNew { // set reference if it was dereferenced and newly allocated
|
||||
vVal.Set(vAddr)
|
||||
}
|
||||
if isMulti { // append if multi-valued
|
||||
vVar.Set(reflect.Append(vVar, vVal))
|
||||
}
|
||||
return nil
|
||||
}
|
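fieldFold and the tag type above are what map gcfg variable names onto struct fields; from the caller's side this is the `gcfg` struct tag plus the hyphen-to-underscore rule from doc.go. A sketch with made-up field names and values:

```go
package main

import (
	"fmt"
	"log"

	"github.com/go-git/gcfg"
)

type Config struct {
	Server struct {
		// Matched by name, ignoring case; '-' in the file corresponds to
		// '_' in the field name.
		Listen_address string
		// The first tag element overrides the name match; ",int=d" limits
		// intSetter to plain decimal parsing for this field.
		Port int `gcfg:"port,int=d"`
	}
}

func main() {
	var cfg Config
	src := "[server]\nlisten-address = 0.0.0.0\nport = 8080\n"
	if err := gcfg.ReadStringInto(&cfg, src); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Server.Listen_address, cfg.Server.Port)
}
```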
435
vendor/github.com/go-git/gcfg/token/position.go
generated
vendored
Normal file
|
@ -0,0 +1,435 @@
|
|||
// Copyright 2010 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// TODO(gri) consider making this a separate package outside the go directory.
|
||||
|
||||
package token
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sort"
|
||||
"sync"
|
||||
)
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// Positions
|
||||
|
||||
// Position describes an arbitrary source position
|
||||
// including the file, line, and column location.
|
||||
// A Position is valid if the line number is > 0.
|
||||
//
|
||||
type Position struct {
|
||||
Filename string // filename, if any
|
||||
Offset int // offset, starting at 0
|
||||
Line int // line number, starting at 1
|
||||
Column int // column number, starting at 1 (character count)
|
||||
}
|
||||
|
||||
// IsValid returns true if the position is valid.
|
||||
func (pos *Position) IsValid() bool { return pos.Line > 0 }
|
||||
|
||||
// String returns a string in one of several forms:
|
||||
//
|
||||
// file:line:column valid position with file name
|
||||
// line:column valid position without file name
|
||||
// file invalid position with file name
|
||||
// - invalid position without file name
|
||||
//
|
||||
func (pos Position) String() string {
|
||||
s := pos.Filename
|
||||
if pos.IsValid() {
|
||||
if s != "" {
|
||||
s += ":"
|
||||
}
|
||||
s += fmt.Sprintf("%d:%d", pos.Line, pos.Column)
|
||||
}
|
||||
if s == "" {
|
||||
s = "-"
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// Pos is a compact encoding of a source position within a file set.
|
||||
// It can be converted into a Position for a more convenient, but much
|
||||
// larger, representation.
|
||||
//
|
||||
// The Pos value for a given file is a number in the range [base, base+size],
|
||||
// where base and size are specified when adding the file to the file set via
|
||||
// AddFile.
|
||||
//
|
||||
// To create the Pos value for a specific source offset, first add
|
||||
// the respective file to the current file set (via FileSet.AddFile)
|
||||
// and then call File.Pos(offset) for that file. Given a Pos value p
|
||||
// for a specific file set fset, the corresponding Position value is
|
||||
// obtained by calling fset.Position(p).
|
||||
//
|
||||
// Pos values can be compared directly with the usual comparison operators:
|
||||
// If two Pos values p and q are in the same file, comparing p and q is
|
||||
// equivalent to comparing the respective source file offsets. If p and q
|
||||
// are in different files, p < q is true if the file implied by p was added
|
||||
// to the respective file set before the file implied by q.
|
||||
//
|
||||
type Pos int
|
||||
|
||||
// The zero value for Pos is NoPos; there is no file and line information
|
||||
// associated with it, and NoPos.IsValid() is false. NoPos is always
|
||||
// smaller than any other Pos value. The corresponding Position value
|
||||
// for NoPos is the zero value for Position.
|
||||
//
|
||||
const NoPos Pos = 0
|
||||
|
||||
// IsValid returns true if the position is valid.
|
||||
func (p Pos) IsValid() bool {
|
||||
return p != NoPos
|
||||
}
|
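As described above, Pos is the compact encoding and Position the expanded one; a round-trip sketch using the FileSet/File API from this package (file name, contents, and offset are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/go-git/gcfg/token"
)

func main() {
	src := []byte("[a]\nx = 1\n")

	fset := token.NewFileSet()
	f := fset.AddFile("demo.gcfg", fset.Base(), len(src))
	f.SetLinesForContent(src) // record line starts so columns resolve

	// Offset 4 is the 'x' on line 2; File.Pos encodes it as a Pos, and
	// FileSet.Position expands it back into file:line:column form.
	p := f.Pos(4)
	fmt.Println(fset.Position(p)) // demo.gcfg:2:1
}
```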
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// File
|
||||
|
||||
// A File is a handle for a file belonging to a FileSet.
|
||||
// A File has a name, size, and line offset table.
|
||||
//
|
||||
type File struct {
|
||||
set *FileSet
|
||||
name string // file name as provided to AddFile
|
||||
base int // Pos value range for this file is [base...base+size]
|
||||
size int // file size as provided to AddFile
|
||||
|
||||
// lines and infos are protected by set.mutex
|
||||
lines []int
|
||||
infos []lineInfo
|
||||
}
|
||||
|
||||
// Name returns the file name of file f as registered with AddFile.
|
||||
func (f *File) Name() string {
|
||||
return f.name
|
||||
}
|
||||
|
||||
// Base returns the base offset of file f as registered with AddFile.
|
||||
func (f *File) Base() int {
|
||||
return f.base
|
||||
}
|
||||
|
||||
// Size returns the size of file f as registered with AddFile.
|
||||
func (f *File) Size() int {
|
||||
return f.size
|
||||
}
|
||||
|
||||
// LineCount returns the number of lines in file f.
|
||||
func (f *File) LineCount() int {
|
||||
f.set.mutex.RLock()
|
||||
n := len(f.lines)
|
||||
f.set.mutex.RUnlock()
|
||||
return n
|
||||
}
|
||||
|
||||
// AddLine adds the line offset for a new line.
|
||||
// The line offset must be larger than the offset for the previous line
|
||||
// and smaller than the file size; otherwise the line offset is ignored.
|
||||
//
|
||||
func (f *File) AddLine(offset int) {
|
||||
f.set.mutex.Lock()
|
||||
if i := len(f.lines); (i == 0 || f.lines[i-1] < offset) && offset < f.size {
|
||||
f.lines = append(f.lines, offset)
|
||||
}
|
||||
f.set.mutex.Unlock()
|
||||
}
|
||||
|
||||
// SetLines sets the line offsets for a file and returns true if successful.
|
||||
// The line offsets are the offsets of the first character of each line;
|
||||
// for instance for the content "ab\nc\n" the line offsets are {0, 3}.
|
||||
// An empty file has an empty line offset table.
|
||||
// Each line offset must be larger than the offset for the previous line
|
||||
// and smaller than the file size; otherwise SetLines fails and returns
|
||||
// false.
|
||||
//
|
||||
func (f *File) SetLines(lines []int) bool {
|
||||
// verify validity of lines table
|
||||
size := f.size
|
||||
for i, offset := range lines {
|
||||
if i > 0 && offset <= lines[i-1] || size <= offset {
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// set lines table
|
||||
f.set.mutex.Lock()
|
||||
f.lines = lines
|
||||
f.set.mutex.Unlock()
|
||||
return true
|
||||
}
|
||||
|
||||
// SetLinesForContent sets the line offsets for the given file content.
|
||||
func (f *File) SetLinesForContent(content []byte) {
|
||||
var lines []int
|
||||
line := 0
|
||||
for offset, b := range content {
|
||||
if line >= 0 {
|
||||
lines = append(lines, line)
|
||||
}
|
||||
line = -1
|
||||
if b == '\n' {
|
||||
line = offset + 1
|
||||
}
|
||||
}
|
||||
|
||||
// set lines table
|
||||
f.set.mutex.Lock()
|
||||
f.lines = lines
|
||||
f.set.mutex.Unlock()
|
||||
}
|
||||
|
||||
// A lineInfo object describes alternative file and line number
|
||||
// information (such as provided via a //line comment in a .go
|
||||
// file) for a given file offset.
|
||||
type lineInfo struct {
|
||||
// fields are exported to make them accessible to gob
|
||||
Offset int
|
||||
Filename string
|
||||
Line int
|
||||
}
|
||||
|
||||
// AddLineInfo adds alternative file and line number information for
|
||||
// a given file offset. The offset must be larger than the offset for
|
||||
// the previously added alternative line info and smaller than the
|
||||
// file size; otherwise the information is ignored.
|
||||
//
|
||||
// AddLineInfo is typically used to register alternative position
|
||||
// information for //line filename:line comments in source files.
|
||||
//
|
||||
func (f *File) AddLineInfo(offset int, filename string, line int) {
|
||||
f.set.mutex.Lock()
|
||||
if i := len(f.infos); i == 0 || f.infos[i-1].Offset < offset && offset < f.size {
|
||||
f.infos = append(f.infos, lineInfo{offset, filename, line})
|
||||
}
|
||||
f.set.mutex.Unlock()
|
||||
}
|
||||
|
||||
// Pos returns the Pos value for the given file offset;
|
||||
// the offset must be <= f.Size().
|
||||
// f.Pos(f.Offset(p)) == p.
|
||||
//
|
||||
func (f *File) Pos(offset int) Pos {
|
||||
if offset > f.size {
|
||||
panic("illegal file offset")
|
||||
}
|
||||
return Pos(f.base + offset)
|
||||
}
|
||||
|
||||
// Offset returns the offset for the given file position p;
|
||||
// p must be a valid Pos value in that file.
|
||||
// f.Offset(f.Pos(offset)) == offset.
|
||||
//
|
||||
func (f *File) Offset(p Pos) int {
|
||||
if int(p) < f.base || int(p) > f.base+f.size {
|
||||
panic("illegal Pos value")
|
||||
}
|
||||
return int(p) - f.base
|
||||
}
|
||||
|
||||
// Line returns the line number for the given file position p;
|
||||
// p must be a Pos value in that file or NoPos.
|
||||
//
|
||||
func (f *File) Line(p Pos) int {
|
||||
// TODO(gri) this can be implemented much more efficiently
|
||||
return f.Position(p).Line
|
||||
}
|
||||
|
||||
func searchLineInfos(a []lineInfo, x int) int {
|
||||
return sort.Search(len(a), func(i int) bool { return a[i].Offset > x }) - 1
|
||||
}
|
||||
|
||||
// info returns the file name, line, and column number for a file offset.
|
||||
func (f *File) info(offset int) (filename string, line, column int) {
|
||||
filename = f.name
|
||||
if i := searchInts(f.lines, offset); i >= 0 {
|
||||
line, column = i+1, offset-f.lines[i]+1
|
||||
}
|
||||
if len(f.infos) > 0 {
|
||||
// almost no files have extra line infos
|
||||
if i := searchLineInfos(f.infos, offset); i >= 0 {
|
||||
alt := &f.infos[i]
|
||||
filename = alt.Filename
|
||||
if i := searchInts(f.lines, alt.Offset); i >= 0 {
|
||||
line += alt.Line - i - 1
|
||||
}
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (f *File) position(p Pos) (pos Position) {
|
||||
offset := int(p) - f.base
|
||||
pos.Offset = offset
|
||||
pos.Filename, pos.Line, pos.Column = f.info(offset)
|
||||
return
|
||||
}
|
||||
|
||||
// Position returns the Position value for the given file position p;
|
||||
// p must be a Pos value in that file or NoPos.
|
||||
//
|
||||
func (f *File) Position(p Pos) (pos Position) {
|
||||
if p != NoPos {
|
||||
if int(p) < f.base || int(p) > f.base+f.size {
|
||||
panic("illegal Pos value")
|
||||
}
|
||||
pos = f.position(p)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// FileSet
|
||||
|
||||
// A FileSet represents a set of source files.
|
||||
// Methods of file sets are synchronized; multiple goroutines
|
||||
// may invoke them concurrently.
|
||||
//
|
||||
type FileSet struct {
|
||||
mutex sync.RWMutex // protects the file set
|
||||
base int // base offset for the next file
|
||||
files []*File // list of files in the order added to the set
|
||||
last *File // cache of last file looked up
|
||||
}
|
||||
|
||||
// NewFileSet creates a new file set.
|
||||
func NewFileSet() *FileSet {
|
||||
s := new(FileSet)
|
||||
s.base = 1 // 0 == NoPos
|
||||
return s
|
||||
}
|
||||
|
||||
// Base returns the minimum base offset that must be provided to
|
||||
// AddFile when adding the next file.
|
||||
//
|
||||
func (s *FileSet) Base() int {
|
||||
s.mutex.RLock()
|
||||
b := s.base
|
||||
s.mutex.RUnlock()
|
||||
return b
|
||||
|
||||
}
|
||||
|
||||
// AddFile adds a new file with a given filename, base offset, and file size
|
||||
// to the file set s and returns the file. Multiple files may have the same
|
||||
// name. The base offset must not be smaller than the FileSet's Base(), and
|
||||
// size must not be negative.
|
||||
//
|
||||
// Adding the file will set the file set's Base() value to base + size + 1
|
||||
// as the minimum base value for the next file. The following relationship
|
||||
// exists between a Pos value p for a given file offset offs:
|
||||
//
|
||||
// int(p) = base + offs
|
||||
//
|
||||
// with offs in the range [0, size] and thus p in the range [base, base+size].
|
||||
// For convenience, File.Pos may be used to create file-specific position
|
||||
// values from a file offset.
|
||||
//
|
||||
func (s *FileSet) AddFile(filename string, base, size int) *File {
|
||||
s.mutex.Lock()
|
||||
defer s.mutex.Unlock()
|
||||
if base < s.base || size < 0 {
|
||||
panic("illegal base or size")
|
||||
}
|
||||
// base >= s.base && size >= 0
|
||||
f := &File{s, filename, base, size, []int{0}, nil}
|
||||
base += size + 1 // +1 because EOF also has a position
|
||||
if base < 0 {
|
||||
panic("token.Pos offset overflow (> 2G of source code in file set)")
|
||||
}
|
||||
// add the file to the file set
|
||||
s.base = base
|
||||
s.files = append(s.files, f)
|
||||
s.last = f
|
||||
return f
|
||||
}
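// Illustrative usage (editor's addition, not part of the vendored source):
// register a file, record its line offsets, and convert a byte offset into a
// Pos and then a Position.
//
//	src := []byte("[core]\nname = value\n")
//	fset := NewFileSet()
//	f := fset.AddFile("config.gcfg", fset.Base(), len(src))
//	f.SetLinesForContent(src)
//	p := f.Pos(7)           // Pos for byte offset 7, the start of line 2
//	pos := fset.Position(p) // Position{Filename: "config.gcfg", Offset: 7, Line: 2, Column: 1}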
|
||||
|
||||
// Iterate calls f for the files in the file set in the order they were added
|
||||
// until f returns false.
|
||||
//
|
||||
func (s *FileSet) Iterate(f func(*File) bool) {
|
||||
for i := 0; ; i++ {
|
||||
var file *File
|
||||
s.mutex.RLock()
|
||||
if i < len(s.files) {
|
||||
file = s.files[i]
|
||||
}
|
||||
s.mutex.RUnlock()
|
||||
if file == nil || !f(file) {
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func searchFiles(a []*File, x int) int {
|
||||
return sort.Search(len(a), func(i int) bool { return a[i].base > x }) - 1
|
||||
}
|
||||
|
||||
func (s *FileSet) file(p Pos) *File {
|
||||
// common case: p is in last file
|
||||
if f := s.last; f != nil && f.base <= int(p) && int(p) <= f.base+f.size {
|
||||
return f
|
||||
}
|
||||
// p is not in last file - search all files
|
||||
if i := searchFiles(s.files, int(p)); i >= 0 {
|
||||
f := s.files[i]
|
||||
// f.base <= int(p) by definition of searchFiles
|
||||
if int(p) <= f.base+f.size {
|
||||
s.last = f
|
||||
return f
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// File returns the file that contains the position p.
|
||||
// If no such file is found (for instance for p == NoPos),
|
||||
// the result is nil.
|
||||
//
|
||||
func (s *FileSet) File(p Pos) (f *File) {
|
||||
if p != NoPos {
|
||||
s.mutex.RLock()
|
||||
f = s.file(p)
|
||||
s.mutex.RUnlock()
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// Position converts a Pos in the fileset into a general Position.
|
||||
func (s *FileSet) Position(p Pos) (pos Position) {
|
||||
if p != NoPos {
|
||||
s.mutex.RLock()
|
||||
if f := s.file(p); f != nil {
|
||||
pos = f.position(p)
|
||||
}
|
||||
s.mutex.RUnlock()
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// Helper functions
|
||||
|
||||
func searchInts(a []int, x int) int {
|
||||
// This function body is a manually inlined version of:
|
||||
//
|
||||
// return sort.Search(len(a), func(i int) bool { return a[i] > x }) - 1
|
||||
//
|
||||
// With better compiler optimizations, this may not be needed in the
|
||||
// future, but at the moment this change improves the go/printer
|
||||
// benchmark performance by ~30%. This has a direct impact on the
|
||||
// speed of gofmt and thus seems worthwhile (2011-04-29).
|
||||
// TODO(gri): Remove this when compilers have caught up.
|
||||
i, j := 0, len(a)
|
||||
for i < j {
|
||||
h := i + (j-i)/2 // avoid overflow when computing h
|
||||
// i ≤ h < j
|
||||
if a[h] <= x {
|
||||
i = h + 1
|
||||
} else {
|
||||
j = h
|
||||
}
|
||||
}
|
||||
return i - 1
|
||||
}
|
56
vendor/github.com/go-git/gcfg/token/serialize.go
generated
vendored
Normal file
56
vendor/github.com/go-git/gcfg/token/serialize.go
generated
vendored
Normal file
|
@ -0,0 +1,56 @@
|
|||
// Copyright 2011 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package token
|
||||
|
||||
type serializedFile struct {
|
||||
// fields correspond 1:1 to fields with same (lower-case) name in File
|
||||
Name string
|
||||
Base int
|
||||
Size int
|
||||
Lines []int
|
||||
Infos []lineInfo
|
||||
}
|
||||
|
||||
type serializedFileSet struct {
|
||||
Base int
|
||||
Files []serializedFile
|
||||
}
|
||||
|
||||
// Read calls decode to deserialize a file set into s; s must not be nil.
|
||||
func (s *FileSet) Read(decode func(interface{}) error) error {
|
||||
var ss serializedFileSet
|
||||
if err := decode(&ss); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
s.mutex.Lock()
|
||||
s.base = ss.Base
|
||||
files := make([]*File, len(ss.Files))
|
||||
for i := 0; i < len(ss.Files); i++ {
|
||||
f := &ss.Files[i]
|
||||
files[i] = &File{s, f.Name, f.Base, f.Size, f.Lines, f.Infos}
|
||||
}
|
||||
s.files = files
|
||||
s.last = nil
|
||||
s.mutex.Unlock()
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Write calls encode to serialize the file set s.
|
||||
func (s *FileSet) Write(encode func(interface{}) error) error {
|
||||
var ss serializedFileSet
|
||||
|
||||
s.mutex.Lock()
|
||||
ss.Base = s.base
|
||||
files := make([]serializedFile, len(s.files))
|
||||
for i, f := range s.files {
|
||||
files[i] = serializedFile{f.name, f.base, f.size, f.lines, f.infos}
|
||||
}
|
||||
ss.Files = files
|
||||
s.mutex.Unlock()
|
||||
|
||||
return encode(ss)
|
||||
}
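// Illustrative round-trip (editor's addition, not part of the vendored
// source): the encode/decode callbacks are typically gob, which is why the
// lineInfo fields are exported.
//
//	var buf bytes.Buffer
//	if err := fset.Write(gob.NewEncoder(&buf).Encode); err != nil {
//		// handle error
//	}
//	restored := NewFileSet()
//	if err := restored.Read(gob.NewDecoder(&buf).Decode); err != nil {
//		// handle error
//	}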
|
83
vendor/github.com/go-git/gcfg/token/token.go
generated
vendored
Normal file
83
vendor/github.com/go-git/gcfg/token/token.go
generated
vendored
Normal file
|
@ -0,0 +1,83 @@
|
|||
// Copyright 2009 The Go Authors. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
// Package token defines constants representing the lexical tokens of the gcfg
|
||||
// configuration syntax and basic operations on tokens (printing, predicates).
|
||||
//
|
||||
// Note that the API for the token package may change to accommodate new
|
||||
// features or implementation changes in gcfg.
|
||||
//
|
||||
package token
|
||||
|
||||
import "strconv"
|
||||
|
||||
// Token is the set of lexical tokens of the gcfg configuration syntax.
|
||||
type Token int
|
||||
|
||||
// The list of tokens.
|
||||
const (
|
||||
// Special tokens
|
||||
ILLEGAL Token = iota
|
||||
EOF
|
||||
COMMENT
|
||||
|
||||
literal_beg
|
||||
// Identifiers and basic type literals
|
||||
// (these tokens stand for classes of literals)
|
||||
IDENT // section-name, variable-name
|
||||
STRING // "subsection-name", variable value
|
||||
literal_end
|
||||
|
||||
operator_beg
|
||||
// Operators and delimiters
|
||||
ASSIGN // =
|
||||
LBRACK // [
|
||||
RBRACK // ]
|
||||
EOL // \n
|
||||
operator_end
|
||||
)
|
||||
|
||||
var tokens = [...]string{
|
||||
ILLEGAL: "ILLEGAL",
|
||||
|
||||
EOF: "EOF",
|
||||
COMMENT: "COMMENT",
|
||||
|
||||
IDENT: "IDENT",
|
||||
STRING: "STRING",
|
||||
|
||||
ASSIGN: "=",
|
||||
LBRACK: "[",
|
||||
RBRACK: "]",
|
||||
EOL: "\n",
|
||||
}
|
||||
|
||||
// String returns the string corresponding to the token tok.
|
||||
// For operators and delimiters, the string is the actual token character
|
||||
// sequence (e.g., for the token ASSIGN, the string is "="). For all other
|
||||
// tokens the string corresponds to the token constant name (e.g. for the
|
||||
// token IDENT, the string is "IDENT").
|
||||
//
|
||||
func (tok Token) String() string {
|
||||
s := ""
|
||||
if 0 <= tok && tok < Token(len(tokens)) {
|
||||
s = tokens[tok]
|
||||
}
|
||||
if s == "" {
|
||||
s = "token(" + strconv.Itoa(int(tok)) + ")"
|
||||
}
|
||||
return s
|
||||
}
|
||||
|
||||
// Predicates
|
||||
|
||||
// IsLiteral returns true for tokens corresponding to identifiers
|
||||
// and basic type literals; it returns false otherwise.
|
||||
//
|
||||
func (tok Token) IsLiteral() bool { return literal_beg < tok && tok < literal_end }
|
||||
|
||||
// IsOperator returns true for tokens corresponding to operators and
|
||||
// delimiters; it returns false otherwise.
|
||||
//
|
||||
func (tok Token) IsOperator() bool { return operator_beg < tok && tok < operator_end }
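// Illustrative usage (editor's addition, not part of the vendored source):
//
//	tok := IDENT
//	_ = tok.String()        // "IDENT"
//	_ = tok.IsLiteral()     // true
//	_ = ASSIGN.String()     // "="
//	_ = ASSIGN.IsOperator() // true
//	_ = Token(42).String()  // "token(42)" for out-of-range values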
|
23
vendor/github.com/go-git/gcfg/types/bool.go
generated
vendored
Normal file
23
vendor/github.com/go-git/gcfg/types/bool.go
generated
vendored
Normal file
|
@ -0,0 +1,23 @@
|
|||
package types
|
||||
|
||||
// BoolValues defines the name and value mappings for ParseBool.
|
||||
var BoolValues = map[string]interface{}{
|
||||
"true": true, "yes": true, "on": true, "1": true,
|
||||
"false": false, "no": false, "off": false, "0": false,
|
||||
}
|
||||
|
||||
var boolParser = func() *EnumParser {
|
||||
ep := &EnumParser{}
|
||||
ep.AddVals(BoolValues)
|
||||
return ep
|
||||
}()
|
||||
|
||||
// ParseBool parses bool values according to the definitions in BoolValues.
|
||||
// Parsing is case-insensitive.
|
||||
func ParseBool(s string) (bool, error) {
|
||||
v, err := boolParser.Parse(s)
|
||||
if err != nil {
|
||||
return false, err
|
||||
}
|
||||
return v.(bool), nil
|
||||
}
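// Illustrative usage (editor's addition, not part of the vendored source):
// matching is case-insensitive, so all of the following return (true, nil).
//
//	ParseBool("yes")
//	ParseBool("ON")
//	ParseBool("1")
//
// An unrecognized value such as "maybe" returns an error.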
|
4
vendor/github.com/go-git/gcfg/types/doc.go
generated
vendored
Normal file
4
vendor/github.com/go-git/gcfg/types/doc.go
generated
vendored
Normal file
|
@ -0,0 +1,4 @@
|
|||
// Package types defines helpers for type conversions.
|
||||
//
|
||||
// The API for this package is not finalized yet.
|
||||
package types
|
44
vendor/github.com/go-git/gcfg/types/enum.go
generated
vendored
Normal file
44
vendor/github.com/go-git/gcfg/types/enum.go
generated
vendored
Normal file
|
@ -0,0 +1,44 @@
|
|||
package types
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"reflect"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// EnumParser parses "enum" values, i.e. it maps a predefined set of strings to
|
||||
// predefined values.
|
||||
type EnumParser struct {
|
||||
Type string // type name; if not set, use type of first value added
|
||||
CaseMatch bool // if true, matching of strings is case-sensitive
|
||||
// PrefixMatch bool
|
||||
vals map[string]interface{}
|
||||
}
|
||||
|
||||
// AddVals adds strings and values to an EnumParser.
|
||||
func (ep *EnumParser) AddVals(vals map[string]interface{}) {
|
||||
if ep.vals == nil {
|
||||
ep.vals = make(map[string]interface{})
|
||||
}
|
||||
for k, v := range vals {
|
||||
if ep.Type == "" {
|
||||
ep.Type = reflect.TypeOf(v).Name()
|
||||
}
|
||||
if !ep.CaseMatch {
|
||||
k = strings.ToLower(k)
|
||||
}
|
||||
ep.vals[k] = v
|
||||
}
|
||||
}
|
||||
|
||||
// Parse parses the string and returns the value or an error.
|
||||
func (ep EnumParser) Parse(s string) (interface{}, error) {
|
||||
if !ep.CaseMatch {
|
||||
s = strings.ToLower(s)
|
||||
}
|
||||
v, ok := ep.vals[s]
|
||||
if !ok {
|
||||
return false, fmt.Errorf("failed to parse %s %#q", ep.Type, s)
|
||||
}
|
||||
return v, nil
|
||||
}
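// Illustrative usage (editor's addition, not part of the vendored source;
// the "color" enum is hypothetical):
//
//	ep := &EnumParser{Type: "color"}
//	ep.AddVals(map[string]interface{}{"red": 1, "green": 2})
//	v, err := ep.Parse("RED") // v == 1, err == nil; CaseMatch is false, so matching is case-insensitive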
|
86
vendor/github.com/go-git/gcfg/types/int.go
generated
vendored
Normal file
86
vendor/github.com/go-git/gcfg/types/int.go
generated
vendored
Normal file
|
@ -0,0 +1,86 @@
|
|||
package types
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// An IntMode is a mode for parsing integer values, representing a set of
|
||||
// accepted bases.
|
||||
type IntMode uint8
|
||||
|
||||
// IntMode values for ParseInt; can be combined using binary or.
|
||||
const (
|
||||
Dec IntMode = 1 << iota
|
||||
Hex
|
||||
Oct
|
||||
)
|
||||
|
||||
// String returns a string representation of IntMode; e.g. `IntMode(Dec|Hex)`.
|
||||
func (m IntMode) String() string {
|
||||
var modes []string
|
||||
if m&Dec != 0 {
|
||||
modes = append(modes, "Dec")
|
||||
}
|
||||
if m&Hex != 0 {
|
||||
modes = append(modes, "Hex")
|
||||
}
|
||||
if m&Oct != 0 {
|
||||
modes = append(modes, "Oct")
|
||||
}
|
||||
return "IntMode(" + strings.Join(modes, "|") + ")"
|
||||
}
|
||||
|
||||
var errIntAmbig = fmt.Errorf("ambiguous integer value; must include '0' prefix")
|
||||
|
||||
func prefix0(val string) bool {
|
||||
return strings.HasPrefix(val, "0") || strings.HasPrefix(val, "-0")
|
||||
}
|
||||
|
||||
func prefix0x(val string) bool {
|
||||
return strings.HasPrefix(val, "0x") || strings.HasPrefix(val, "-0x")
|
||||
}
|
||||
|
||||
// ParseInt parses val using mode into intptr, which must be a pointer to an
|
||||
// integer kind type. Non-decimal values require the prefix `0` or `0x` in the cases
|
||||
// when mode permits ambiguity of base; otherwise the prefix can be omitted.
|
||||
func ParseInt(intptr interface{}, val string, mode IntMode) error {
|
||||
val = strings.TrimSpace(val)
|
||||
verb := byte(0)
|
||||
switch mode {
|
||||
case Dec:
|
||||
verb = 'd'
|
||||
case Dec + Hex:
|
||||
if prefix0x(val) {
|
||||
verb = 'v'
|
||||
} else {
|
||||
verb = 'd'
|
||||
}
|
||||
case Dec + Oct:
|
||||
if prefix0(val) && !prefix0x(val) {
|
||||
verb = 'v'
|
||||
} else {
|
||||
verb = 'd'
|
||||
}
|
||||
case Dec + Hex + Oct:
|
||||
verb = 'v'
|
||||
case Hex:
|
||||
if prefix0x(val) {
|
||||
verb = 'v'
|
||||
} else {
|
||||
verb = 'x'
|
||||
}
|
||||
case Oct:
|
||||
verb = 'o'
|
||||
case Hex + Oct:
|
||||
if prefix0(val) {
|
||||
verb = 'v'
|
||||
} else {
|
||||
return errIntAmbig
|
||||
}
|
||||
}
|
||||
if verb == 0 {
|
||||
panic("unsupported mode")
|
||||
}
|
||||
return ScanFully(intptr, val, verb)
|
||||
}
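// Illustrative usage (editor's addition, not part of the vendored source):
//
//	var n int
//	_ = ParseInt(&n, "42", Dec)        // n == 42
//	_ = ParseInt(&n, "0x1f", Dec|Hex)  // n == 31; the 0x prefix selects hex
//	err := ParseInt(&n, "1f", Hex|Oct) // errIntAmbig: no prefix to disambiguate the base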
|
23
vendor/github.com/go-git/gcfg/types/scan.go
generated
vendored
Normal file
23
vendor/github.com/go-git/gcfg/types/scan.go
generated
vendored
Normal file
|
@ -0,0 +1,23 @@
|
|||
package types
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"reflect"
|
||||
)
|
||||
|
||||
// ScanFully uses fmt.Sscanf with verb to fully scan val into ptr.
|
||||
func ScanFully(ptr interface{}, val string, verb byte) error {
|
||||
t := reflect.ValueOf(ptr).Elem().Type()
|
||||
// attempt to read extra bytes to make sure the value is consumed
|
||||
var b []byte
|
||||
n, err := fmt.Sscanf(val, "%"+string(verb)+"%s", ptr, &b)
|
||||
switch {
|
||||
case n < 1 || n == 1 && err != io.EOF:
|
||||
return fmt.Errorf("failed to parse %q as %v: %v", val, t, err)
|
||||
case n > 1:
|
||||
return fmt.Errorf("failed to parse %q as %v: extra characters %q", val, t, string(b))
|
||||
}
|
||||
// n == 1 && err == io.EOF
|
||||
return nil
|
||||
}
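// Illustrative usage (editor's addition, not part of the vendored source):
//
//	var n int
//	_ = ScanFully(&n, "42", 'd')       // n == 42, nil error
//	err := ScanFully(&n, "42abc", 'd') // error: extra characters "abc"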
|
4
vendor/github.com/go-git/go-billy/v5/.gitignore
generated
vendored
Normal file
4
vendor/github.com/go-git/go-billy/v5/.gitignore
generated
vendored
Normal file
|
@ -0,0 +1,4 @@
|
|||
/coverage.txt
|
||||
/vendor
|
||||
Gopkg.lock
|
||||
Gopkg.toml
|
201
vendor/github.com/go-git/go-billy/v5/LICENSE
generated
vendored
Normal file
201
vendor/github.com/go-git/go-billy/v5/LICENSE
generated
vendored
Normal file
|
@ -0,0 +1,201 @@
|
|||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright 2017 Sourced Technologies S.L.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
11
vendor/github.com/go-git/go-billy/v5/Makefile
generated
vendored
Normal file
11
vendor/github.com/go-git/go-billy/v5/Makefile
generated
vendored
Normal file
|
@ -0,0 +1,11 @@
|
|||
# Go parameters
|
||||
GOCMD = go
|
||||
GOTEST = $(GOCMD) test
|
||||
|
||||
.PHONY: test
|
||||
test:
|
||||
$(GOTEST) -race ./...
|
||||
|
||||
test-coverage:
|
||||
echo "" > $(COVERAGE_REPORT); \
|
||||
$(GOTEST) -coverprofile=$(COVERAGE_REPORT) -coverpkg=./... -covermode=$(COVERAGE_MODE) ./...
|
73
vendor/github.com/go-git/go-billy/v5/README.md
generated
vendored
Normal file
73
vendor/github.com/go-git/go-billy/v5/README.md
generated
vendored
Normal file
|
@ -0,0 +1,73 @@
|
|||
# go-billy [](https://pkg.go.dev/github.com/go-git/go-billy/v5) [](https://github.com/go-git/go-billy/actions?query=workflow%3ATest)
|
||||
|
||||
The missing interface filesystem abstraction for Go.
|
||||
Billy implements an interface based on the `os` standard library, allowing you to develop applications without depending on the underlying storage. This makes it virtually free to implement mocks and testing over filesystem operations.
|
||||
|
||||
Billy was born as part of [go-git/go-git](https://github.com/go-git/go-git) project.
|
||||
|
||||
## Installation
|
||||
|
||||
```go
|
||||
import "github.com/go-git/go-billy/v5" // with go modules enabled (GO111MODULE=on or outside GOPATH)
|
||||
import "github.com/go-git/go-billy" // with go modules disabled
|
||||
```
|
||||
|
||||
## Usage
|
||||
|
||||
Billy exposes filesystems using the
|
||||
[`Filesystem` interface](https://pkg.go.dev/github.com/go-git/go-billy/v5?tab=doc#Filesystem).
|
||||
Each filesystem implementation gives you a `New` method, whose arguments depend on
|
||||
the implementation itself, that returns a new `Filesystem`.
|
||||
|
||||
The following example caches in memory all readable files in a directory from any
|
||||
billy filesystem implementation.
|
||||
|
||||
```go
|
||||
func LoadToMemory(origin billy.Filesystem, path string) (*memory.Memory, error) {
|
||||
memory := memory.New()
|
||||
|
||||
files, err := origin.ReadDir("/")
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, file := range files {
|
||||
if file.IsDir() {
|
||||
continue
|
||||
}
|
||||
|
||||
src, err := origin.Open(file.Name())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
dst, err := memory.Create(file.Name())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if _, err = io.Copy(dst, src); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := dst.Close(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
if err := src.Close(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return memory, nil
|
||||
}
|
||||
```
|
||||
|
||||
## Why billy?
|
||||
|
||||
The library billy deals with storage systems and Billy is the name of a well-known IKEA
|
||||
bookcase. That's it.
|
||||
|
||||
## License
|
||||
|
||||
Apache License Version 2.0, see [LICENSE](LICENSE)
|
202
vendor/github.com/go-git/go-billy/v5/fs.go
generated
vendored
Normal file
202
vendor/github.com/go-git/go-billy/v5/fs.go
generated
vendored
Normal file
|
@ -0,0 +1,202 @@
|
|||
package billy
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
"os"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrReadOnly = errors.New("read-only filesystem")
|
||||
ErrNotSupported = errors.New("feature not supported")
|
||||
ErrCrossedBoundary = errors.New("chroot boundary crossed")
|
||||
)
|
||||
|
||||
// Capability holds the supported features of a billy filesystem. This does
|
||||
// not mean that the capability has to be supported by the underlying storage.
|
||||
// For example, a billy filesystem may support WriteCapability but the
|
||||
// storage may be mounted in read-only mode.
|
||||
type Capability uint64
|
||||
|
||||
const (
|
||||
// WriteCapability means that the fs is writable.
|
||||
WriteCapability Capability = 1 << iota
|
||||
// ReadCapability means that the fs is readable.
|
||||
ReadCapability
|
||||
// ReadAndWriteCapability is the ability to open a file in read and write mode.
|
||||
ReadAndWriteCapability
|
||||
// SeekCapability means it is able to move position inside the file.
|
||||
SeekCapability
|
||||
// TruncateCapability means that a file can be truncated.
|
||||
TruncateCapability
|
||||
// LockCapability is the ability to lock a file.
|
||||
LockCapability
|
||||
|
||||
// DefaultCapabilities lists all capable features supported by filesystems
|
||||
// without Capability interface. This list should not be changed until a
|
||||
// major version is released.
|
||||
DefaultCapabilities Capability = WriteCapability | ReadCapability |
|
||||
ReadAndWriteCapability | SeekCapability | TruncateCapability |
|
||||
LockCapability
|
||||
|
||||
// AllCapabilities lists all capable features.
|
||||
AllCapabilities Capability = WriteCapability | ReadCapability |
|
||||
ReadAndWriteCapability | SeekCapability | TruncateCapability |
|
||||
LockCapability
|
||||
)
|
||||
|
||||
// Filesystem abstracts the operations in a storage-agnostic interface.
|
||||
// Each method implementation mimics the behavior of the equivalent functions
|
||||
// at the os package from the standard library.
|
||||
type Filesystem interface {
|
||||
Basic
|
||||
TempFile
|
||||
Dir
|
||||
Symlink
|
||||
Chroot
|
||||
}
|
||||
|
||||
// Basic abstracts the basic operations in a storage-agnostic interface;
|
||||
// it is the core interface that the other interfaces in this package extend.
|
||||
type Basic interface {
|
||||
// Create creates the named file with mode 0666 (before umask), truncating
|
||||
// it if it already exists. If successful, methods on the returned File can
|
||||
// be used for I/O; the associated file descriptor has mode O_RDWR.
|
||||
Create(filename string) (File, error)
|
||||
// Open opens the named file for reading. If successful, methods on the
|
||||
// returned file can be used for reading; the associated file descriptor has
|
||||
// mode O_RDONLY.
|
||||
Open(filename string) (File, error)
|
||||
// OpenFile is the generalized open call; most users will use Open or Create
|
||||
// instead. It opens the named file with specified flag (O_RDONLY etc.) and
|
||||
// perm (0666 etc.), if applicable. If successful, methods on the returned
|
||||
// File can be used for I/O.
|
||||
OpenFile(filename string, flag int, perm os.FileMode) (File, error)
|
||||
// Stat returns a FileInfo describing the named file.
|
||||
Stat(filename string) (os.FileInfo, error)
|
||||
// Rename renames (moves) oldpath to newpath. If newpath already exists and
|
||||
// is not a directory, Rename replaces it. OS-specific restrictions may
|
||||
// apply when oldpath and newpath are in different directories.
|
||||
Rename(oldpath, newpath string) error
|
||||
// Remove removes the named file or directory.
|
||||
Remove(filename string) error
|
||||
// Join joins any number of path elements into a single path, adding a
|
||||
// Separator if necessary. Join calls filepath.Clean on the result; in
|
||||
// particular, all empty strings are ignored. On Windows, the result is a
|
||||
// UNC path if and only if the first path element is a UNC path.
|
||||
Join(elem ...string) string
|
||||
}
|
||||
|
||||
type TempFile interface {
|
||||
// TempFile creates a new temporary file in the directory dir with a name
|
||||
// beginning with prefix, opens the file for reading and writing, and
|
||||
// returns the resulting *os.File. If dir is the empty string, TempFile
|
||||
// uses the default directory for temporary files (see os.TempDir).
|
||||
// Multiple programs calling TempFile simultaneously will not choose the
|
||||
// same file. The caller can use f.Name() to find the pathname of the file.
|
||||
// It is the caller's responsibility to remove the file when no longer
|
||||
// needed.
|
||||
TempFile(dir, prefix string) (File, error)
|
||||
}
|
||||
|
||||
// Dir abstracts the dir-related operations in a storage-agnostic interface as
|
||||
// an extension to the Basic interface.
|
||||
type Dir interface {
|
||||
// ReadDir reads the directory named by dirname and returns a list of
|
||||
// directory entries sorted by filename.
|
||||
ReadDir(path string) ([]os.FileInfo, error)
|
||||
// MkdirAll creates a directory named path, along with any necessary
|
||||
// parents, and returns nil, or else returns an error. The permission bits
|
||||
// perm are used for all directories that MkdirAll creates. If path is
|
||||
// already a directory, MkdirAll does nothing and returns nil.
|
||||
MkdirAll(filename string, perm os.FileMode) error
|
||||
}
|
||||
|
||||
// Symlink abstracts the symlink-related operations in a storage-agnostic
|
||||
// interface as an extension to the Basic interface.
|
||||
type Symlink interface {
|
||||
// Lstat returns a FileInfo describing the named file. If the file is a
|
||||
// symbolic link, the returned FileInfo describes the symbolic link. Lstat
|
||||
// makes no attempt to follow the link.
|
||||
Lstat(filename string) (os.FileInfo, error)
|
||||
// Symlink creates a symbolic-link from link to target. target may be an
|
||||
// absolute or relative path, and need not refer to an existing node.
|
||||
// Parent directories of link are created as necessary.
|
||||
Symlink(target, link string) error
|
||||
// Readlink returns the target path of link.
|
||||
Readlink(link string) (string, error)
|
||||
}
|
||||
|
||||
// Change abstracts the FileInfo change-related operations in a storage-agnostic
|
||||
// interface as an extension to the Basic interface.
|
||||
type Change interface {
|
||||
// Chmod changes the mode of the named file to mode. If the file is a
|
||||
// symbolic link, it changes the mode of the link's target.
|
||||
Chmod(name string, mode os.FileMode) error
|
||||
// Lchown changes the numeric uid and gid of the named file. If the file is
|
||||
// a symbolic link, it changes the uid and gid of the link itself.
|
||||
Lchown(name string, uid, gid int) error
|
||||
// Chown changes the numeric uid and gid of the named file. If the file is a
|
||||
// symbolic link, it changes the uid and gid of the link's target.
|
||||
Chown(name string, uid, gid int) error
|
||||
// Chtimes changes the access and modification times of the named file,
|
||||
// similar to the Unix utime() or utimes() functions.
|
||||
//
|
||||
// The underlying filesystem may truncate or round the values to a less
|
||||
// precise time unit.
|
||||
Chtimes(name string, atime time.Time, mtime time.Time) error
|
||||
}
|
||||
|
||||
// Chroot abstracts the chroot-related operations in a storage-agnostic interface
|
||||
// as an extension to the Basic interface.
|
||||
type Chroot interface {
|
||||
// Chroot returns a new filesystem from the same type where the new root is
|
||||
// the given path. Files outside of the designated directory tree cannot be
|
||||
// accessed.
|
||||
Chroot(path string) (Filesystem, error)
|
||||
// Root returns the root path of the filesystem.
|
||||
Root() string
|
||||
}
|
||||
|
||||
// File represents a file; it is a subset of os.File.
|
||||
type File interface {
|
||||
// Name returns the name of the file as presented to Open.
|
||||
Name() string
|
||||
io.Writer
|
||||
io.Reader
|
||||
io.ReaderAt
|
||||
io.Seeker
|
||||
io.Closer
|
||||
// Lock locks the file like e.g. flock. It protects against access from
|
||||
// other processes.
|
||||
Lock() error
|
||||
// Unlock unlocks the file.
|
||||
Unlock() error
|
||||
// Truncate the file.
|
||||
Truncate(size int64) error
|
||||
}
|
||||
|
||||
// Capable interface can return the available features of a filesystem.
|
||||
type Capable interface {
|
||||
// Capabilities returns the capabilities of a filesystem in bit flags.
|
||||
Capabilities() Capability
|
||||
}
|
||||
|
||||
// Capabilities returns the features supported by a filesystem. If the FS
|
||||
// does not implement Capable interface it returns all features.
|
||||
func Capabilities(fs Basic) Capability {
|
||||
capable, ok := fs.(Capable)
|
||||
if !ok {
|
||||
return DefaultCapabilities
|
||||
}
|
||||
|
||||
return capable.Capabilities()
|
||||
}
|
||||
|
||||
// CapabilityCheck tests the filesystem for the provided capabilities and
|
||||
// returns true in case it supports all of them.
|
||||
func CapabilityCheck(fs Basic, capabilities Capability) bool {
|
||||
fsCaps := Capabilities(fs)
|
||||
return fsCaps&capabilities == capabilities
|
||||
}
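// Illustrative usage (editor's addition, not part of the vendored source):
// guard a code path that needs write access on a filesystem of unknown
// capabilities.
//
//	if !CapabilityCheck(fs, WriteCapability|TruncateCapability) {
//		return ErrNotSupported
//	}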
|
21
vendor/github.com/go-git/go-billy/v5/go.mod
generated
vendored
Normal file
21
vendor/github.com/go-git/go-billy/v5/go.mod
generated
vendored
Normal file
|
@ -0,0 +1,21 @@
|
|||
module github.com/go-git/go-billy/v5
|
||||
|
||||
// go-git supports the last 3 stable Go versions.
|
||||
go 1.19
|
||||
|
||||
require (
|
||||
github.com/cyphar/filepath-securejoin v0.2.4
|
||||
github.com/onsi/gomega v1.27.10
|
||||
golang.org/x/sys v0.12.0
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c
|
||||
)
|
||||
|
||||
require (
|
||||
github.com/google/go-cmp v0.5.9 // indirect
|
||||
github.com/kr/pretty v0.3.1 // indirect
|
||||
github.com/kr/text v0.2.0 // indirect
|
||||
github.com/rogpeppe/go-internal v1.11.0 // indirect
|
||||
golang.org/x/net v0.15.0 // indirect
|
||||
golang.org/x/text v0.13.0 // indirect
|
||||
gopkg.in/yaml.v3 v3.0.1 // indirect
|
||||
)
|
34
vendor/github.com/go-git/go-billy/v5/go.sum
generated
vendored
Normal file
34
vendor/github.com/go-git/go-billy/v5/go.sum
generated
vendored
Normal file
|
@ -0,0 +1,34 @@
|
|||
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
|
||||
github.com/cyphar/filepath-securejoin v0.2.4 h1:Ugdm7cg7i6ZK6x3xDF1oEu1nfkyfH53EtKeQYTC3kyg=
|
||||
github.com/cyphar/filepath-securejoin v0.2.4/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
|
||||
github.com/go-logr/logr v1.2.4 h1:g01GSCwiDw2xSZfjJ2/T9M+S6pFdcNtFYsp+Y43HYDQ=
|
||||
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
|
||||
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
|
||||
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
|
||||
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 h1:yAJXTCF9TqKcTiHJAE8dj7HMvPfh66eeA2JYW7eFpSE=
|
||||
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
|
||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||
github.com/onsi/ginkgo/v2 v2.11.0 h1:WgqUCUt/lT6yXoQ8Wef0fsNn5cAuMK7+KT9UFRz2tcU=
|
||||
github.com/onsi/gomega v1.27.10 h1:naR28SdDFlqrG6kScpT8VWpu1xWY5nJRCF3XaYyBjhI=
|
||||
github.com/onsi/gomega v1.27.10/go.mod h1:RsS8tutOdbdgzbPtzzATp12yT7kM5I5aElG3evPbQ0M=
|
||||
github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
|
||||
github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
|
||||
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
|
||||
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
|
||||
golang.org/x/net v0.15.0 h1:ugBLEUaxABaB5AJqW9enI0ACdci2RUd4eP51NTBvuJ8=
|
||||
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
|
||||
golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
|
||||
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
|
||||
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
|
||||
golang.org/x/tools v0.9.3 h1:Gn1I8+64MsuTb/HpH+LmQtNas23LhUVr3rYZ0eKuaMM=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
242
vendor/github.com/go-git/go-billy/v5/helper/chroot/chroot.go
generated
vendored
Normal file
242
vendor/github.com/go-git/go-billy/v5/helper/chroot/chroot.go
generated
vendored
Normal file
|
@ -0,0 +1,242 @@
|
|||
package chroot
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
"github.com/go-git/go-billy/v5/helper/polyfill"
|
||||
)
|
||||
|
||||
// ChrootHelper is a helper to implement billy.Chroot.
|
||||
type ChrootHelper struct {
|
||||
underlying billy.Filesystem
|
||||
base string
|
||||
}
|
||||
|
||||
// New creates a new filesystem wrapping up the given 'fs'.
|
||||
// The created filesystem has its base in the given directory of the
|
||||
// underlying filesystem.
|
||||
func New(fs billy.Basic, base string) billy.Filesystem {
|
||||
return &ChrootHelper{
|
||||
underlying: polyfill.New(fs),
|
||||
base: base,
|
||||
}
|
||||
}
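// Illustrative usage from a client package (editor's addition; someBasicFS is
// a hypothetical billy.Basic value): every path is resolved under /app, and
// paths that escape the base fail with billy.ErrCrossedBoundary.
//
//	fs := chroot.New(someBasicFS, "/app")
//	f, _ := fs.Create("config.yml")    // backed by /app/config.yml in someBasicFS
//	_, err := fs.Open("../etc/passwd") // err == billy.ErrCrossedBoundary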
|
||||
|
||||
func (fs *ChrootHelper) underlyingPath(filename string) (string, error) {
|
||||
if isCrossBoundaries(filename) {
|
||||
return "", billy.ErrCrossedBoundary
|
||||
}
|
||||
|
||||
return fs.Join(fs.Root(), filename), nil
|
||||
}
|
||||
|
||||
func isCrossBoundaries(path string) bool {
|
||||
path = filepath.ToSlash(path)
|
||||
path = filepath.Clean(path)
|
||||
|
||||
return strings.HasPrefix(path, ".."+string(filepath.Separator))
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Create(filename string) (billy.File, error) {
|
||||
fullpath, err := fs.underlyingPath(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
f, err := fs.underlying.Create(fullpath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return newFile(fs, f, filename), nil
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Open(filename string) (billy.File, error) {
|
||||
fullpath, err := fs.underlyingPath(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
f, err := fs.underlying.Open(fullpath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return newFile(fs, f, filename), nil
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) OpenFile(filename string, flag int, mode os.FileMode) (billy.File, error) {
|
||||
fullpath, err := fs.underlyingPath(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
f, err := fs.underlying.OpenFile(fullpath, flag, mode)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return newFile(fs, f, filename), nil
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Stat(filename string) (os.FileInfo, error) {
|
||||
fullpath, err := fs.underlyingPath(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return fs.underlying.Stat(fullpath)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Rename(from, to string) error {
|
||||
var err error
|
||||
from, err = fs.underlyingPath(from)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
to, err = fs.underlyingPath(to)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return fs.underlying.Rename(from, to)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Remove(path string) error {
|
||||
fullpath, err := fs.underlyingPath(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return fs.underlying.Remove(fullpath)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Join(elem ...string) string {
|
||||
return fs.underlying.Join(elem...)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) TempFile(dir, prefix string) (billy.File, error) {
|
||||
fullpath, err := fs.underlyingPath(dir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
f, err := fs.underlying.(billy.TempFile).TempFile(fullpath, prefix)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return newFile(fs, f, fs.Join(dir, filepath.Base(f.Name()))), nil
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) ReadDir(path string) ([]os.FileInfo, error) {
|
||||
fullpath, err := fs.underlyingPath(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return fs.underlying.(billy.Dir).ReadDir(fullpath)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) MkdirAll(filename string, perm os.FileMode) error {
|
||||
fullpath, err := fs.underlyingPath(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return fs.underlying.(billy.Dir).MkdirAll(fullpath, perm)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Lstat(filename string) (os.FileInfo, error) {
|
||||
fullpath, err := fs.underlyingPath(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return fs.underlying.(billy.Symlink).Lstat(fullpath)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Symlink(target, link string) error {
|
||||
target = filepath.FromSlash(target)
|
||||
|
||||
// only rewrite target if it's already absolute
|
||||
if filepath.IsAbs(target) || strings.HasPrefix(target, string(filepath.Separator)) {
|
||||
target = fs.Join(fs.Root(), target)
|
||||
target = filepath.Clean(filepath.FromSlash(target))
|
||||
}
|
||||
|
||||
link, err := fs.underlyingPath(link)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return fs.underlying.(billy.Symlink).Symlink(target, link)
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Readlink(link string) (string, error) {
|
||||
fullpath, err := fs.underlyingPath(link)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
target, err := fs.underlying.(billy.Symlink).Readlink(fullpath)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
if !filepath.IsAbs(target) && !strings.HasPrefix(target, string(filepath.Separator)) {
|
||||
return target, nil
|
||||
}
|
||||
|
||||
target, err = filepath.Rel(fs.base, target)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return string(os.PathSeparator) + target, nil
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Chroot(path string) (billy.Filesystem, error) {
|
||||
fullpath, err := fs.underlyingPath(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return New(fs.underlying, fullpath), nil
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Root() string {
|
||||
return fs.base
|
||||
}
|
||||
|
||||
func (fs *ChrootHelper) Underlying() billy.Basic {
|
||||
return fs.underlying
|
||||
}
|
||||
|
||||
// Capabilities implements the Capable interface.
|
||||
func (fs *ChrootHelper) Capabilities() billy.Capability {
|
||||
return billy.Capabilities(fs.underlying)
|
||||
}
|
||||
|
||||
type file struct {
|
||||
billy.File
|
||||
name string
|
||||
}
|
||||
|
||||
func newFile(fs billy.Filesystem, f billy.File, filename string) billy.File {
|
||||
filename = fs.Join(fs.Root(), filename)
|
||||
filename, _ = filepath.Rel(fs.Root(), filename)
|
||||
|
||||
return &file{
|
||||
File: f,
|
||||
name: filename,
|
||||
}
|
||||
}
|
||||
|
||||
func (f *file) Name() string {
|
||||
return f.name
|
||||
}
|
105
vendor/github.com/go-git/go-billy/v5/helper/polyfill/polyfill.go
generated
vendored
Normal file
105
vendor/github.com/go-git/go-billy/v5/helper/polyfill/polyfill.go
generated
vendored
Normal file
|
@ -0,0 +1,105 @@
|
|||
package polyfill
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
)
|
||||
|
||||
// Polyfill is a helper that implements all missing methods of billy.Filesystem.
|
||||
type Polyfill struct {
|
||||
billy.Basic
|
||||
c capabilities
|
||||
}
|
||||
|
||||
type capabilities struct{ tempfile, dir, symlink, chroot bool }
|
||||
|
||||
// New creates a new filesystem wrapping up 'fs'; it intercepts all the calls
|
||||
// made and returns an error if fs doesn't implement the corresponding billy interface.
|
||||
func New(fs billy.Basic) billy.Filesystem {
|
||||
if original, ok := fs.(billy.Filesystem); ok {
|
||||
return original
|
||||
}
|
||||
|
||||
h := &Polyfill{Basic: fs}
|
||||
|
||||
_, h.c.tempfile = h.Basic.(billy.TempFile)
|
||||
_, h.c.dir = h.Basic.(billy.Dir)
|
||||
_, h.c.symlink = h.Basic.(billy.Symlink)
|
||||
_, h.c.chroot = h.Basic.(billy.Chroot)
|
||||
return h
|
||||
}
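// Illustrative behaviour (editor's addition; basicOnlyFS is a hypothetical
// type implementing only billy.Basic): the wrapper satisfies billy.Filesystem,
// and calls backed by unimplemented optional interfaces fail at runtime.
//
//	fs := New(basicOnlyFS)
//	_, err := fs.ReadDir("/") // err == billy.ErrNotSupported
//	root := fs.Root()         // string(filepath.Separator) when Chroot is not implemented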
|
||||
|
||||
func (h *Polyfill) TempFile(dir, prefix string) (billy.File, error) {
|
||||
if !h.c.tempfile {
|
||||
return nil, billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.TempFile).TempFile(dir, prefix)
|
||||
}
|
||||
|
||||
func (h *Polyfill) ReadDir(path string) ([]os.FileInfo, error) {
|
||||
if !h.c.dir {
|
||||
return nil, billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Dir).ReadDir(path)
|
||||
}
|
||||
|
||||
func (h *Polyfill) MkdirAll(filename string, perm os.FileMode) error {
|
||||
if !h.c.dir {
|
||||
return billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Dir).MkdirAll(filename, perm)
|
||||
}
|
||||
|
||||
func (h *Polyfill) Symlink(target, link string) error {
|
||||
if !h.c.symlink {
|
||||
return billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Symlink).Symlink(target, link)
|
||||
}
|
||||
|
||||
func (h *Polyfill) Readlink(link string) (string, error) {
|
||||
if !h.c.symlink {
|
||||
return "", billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Symlink).Readlink(link)
|
||||
}
|
||||
|
||||
func (h *Polyfill) Lstat(path string) (os.FileInfo, error) {
|
||||
if !h.c.symlink {
|
||||
return nil, billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Symlink).Lstat(path)
|
||||
}
|
||||
|
||||
func (h *Polyfill) Chroot(path string) (billy.Filesystem, error) {
|
||||
if !h.c.chroot {
|
||||
return nil, billy.ErrNotSupported
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Chroot).Chroot(path)
|
||||
}
|
||||
|
||||
func (h *Polyfill) Root() string {
|
||||
if !h.c.chroot {
|
||||
return string(filepath.Separator)
|
||||
}
|
||||
|
||||
return h.Basic.(billy.Chroot).Root()
|
||||
}
|
||||
|
||||
func (h *Polyfill) Underlying() billy.Basic {
|
||||
return h.Basic
|
||||
}
|
||||
|
||||
// Capabilities implements the Capable interface.
|
||||
func (h *Polyfill) Capabilities() billy.Capability {
|
||||
return billy.Capabilities(h.Basic)
|
||||
}
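
To illustrate the polyfill behaviour, here is a small sketch that wraps a hypothetical Basic-only type (the `basicOnly` struct and its stubbed errors are assumptions for the example; the method set matches the billy.Basic interface used throughout this file):

```go
package main

import (
	"errors"
	"fmt"
	"os"

	"github.com/go-git/go-billy/v5"
	"github.com/go-git/go-billy/v5/helper/polyfill"
)

// basicOnly is a hypothetical billy.Basic that implements nothing beyond the
// Basic interface, so every optional capability should be rejected.
type basicOnly struct{}

func (basicOnly) Create(filename string) (billy.File, error) { return nil, os.ErrPermission }
func (basicOnly) Open(filename string) (billy.File, error)   { return nil, os.ErrNotExist }
func (basicOnly) OpenFile(filename string, flag int, perm os.FileMode) (billy.File, error) {
	return nil, os.ErrNotExist
}
func (basicOnly) Stat(filename string) (os.FileInfo, error) { return nil, os.ErrNotExist }
func (basicOnly) Rename(from, to string) error              { return nil }
func (basicOnly) Remove(filename string) error              { return nil }
func (basicOnly) Join(elem ...string) string                { return "" }

func main() {
	fs := polyfill.New(basicOnly{})

	// The wrapped value satisfies billy.Filesystem, but optional methods fail
	// with ErrNotSupported because basicOnly does not implement them.
	_, err := fs.ReadDir("/")
	fmt.Println(errors.Is(err, billy.ErrNotSupported)) // true
}
```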
|
410
vendor/github.com/go-git/go-billy/v5/memfs/memory.go
generated
vendored
Normal file
410
vendor/github.com/go-git/go-billy/v5/memfs/memory.go
generated
vendored
Normal file
|
@ -0,0 +1,410 @@
|
|||
// Package memfs provides a billy filesystem based on memory.
|
||||
package memfs // import "github.com/go-git/go-billy/v5/memfs"
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
"github.com/go-git/go-billy/v5/helper/chroot"
|
||||
"github.com/go-git/go-billy/v5/util"
|
||||
)
|
||||
|
||||
const separator = filepath.Separator
|
||||
|
||||
// Memory is a convenient filesystem implementation based on in-memory files.
|
||||
type Memory struct {
|
||||
s *storage
|
||||
|
||||
tempCount int
|
||||
}
|
||||
|
||||
// New returns a new Memory filesystem.
|
||||
func New() billy.Filesystem {
|
||||
fs := &Memory{s: newStorage()}
|
||||
return chroot.New(fs, string(separator))
|
||||
}
|
||||
|
||||
func (fs *Memory) Create(filename string) (billy.File, error) {
|
||||
return fs.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0666)
|
||||
}
|
||||
|
||||
func (fs *Memory) Open(filename string) (billy.File, error) {
|
||||
return fs.OpenFile(filename, os.O_RDONLY, 0)
|
||||
}
|
||||
|
||||
func (fs *Memory) OpenFile(filename string, flag int, perm os.FileMode) (billy.File, error) {
|
||||
f, has := fs.s.Get(filename)
|
||||
if !has {
|
||||
if !isCreate(flag) {
|
||||
return nil, os.ErrNotExist
|
||||
}
|
||||
|
||||
var err error
|
||||
f, err = fs.s.New(filename, perm, flag)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
} else {
|
||||
if isExclusive(flag) {
|
||||
return nil, os.ErrExist
|
||||
}
|
||||
|
||||
if target, isLink := fs.resolveLink(filename, f); isLink {
|
||||
return fs.OpenFile(target, flag, perm)
|
||||
}
|
||||
}
|
||||
|
||||
if f.mode.IsDir() {
|
||||
return nil, fmt.Errorf("cannot open directory: %s", filename)
|
||||
}
|
||||
|
||||
return f.Duplicate(filename, perm, flag), nil
|
||||
}
|
||||
|
||||
var errNotLink = errors.New("not a link")
|
||||
|
||||
func (fs *Memory) resolveLink(fullpath string, f *file) (target string, isLink bool) {
|
||||
if !isSymlink(f.mode) {
|
||||
return fullpath, false
|
||||
}
|
||||
|
||||
target = string(f.content.bytes)
|
||||
if !isAbs(target) {
|
||||
target = fs.Join(filepath.Dir(fullpath), target)
|
||||
}
|
||||
|
||||
return target, true
|
||||
}
|
||||
|
||||
// On Windows, IsAbs only treats a path as absolute if it starts with a
|
||||
// volume (e.g. `C:\`), but in this in-memory implementation
|
||||
// any path starting with `separator` is also considered absolute.
|
||||
func isAbs(path string) bool {
|
||||
return filepath.IsAbs(path) || strings.HasPrefix(path, string(separator))
|
||||
}
|
||||
|
||||
func (fs *Memory) Stat(filename string) (os.FileInfo, error) {
|
||||
f, has := fs.s.Get(filename)
|
||||
if !has {
|
||||
return nil, os.ErrNotExist
|
||||
}
|
||||
|
||||
fi, _ := f.Stat()
|
||||
|
||||
var err error
|
||||
if target, isLink := fs.resolveLink(filename, f); isLink {
|
||||
fi, err = fs.Stat(target)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
// the name of the file should always be the name of the stated file, so we
|
||||
// overwrite the Stat returned from the storage with it, since the
|
||||
// filename may belong to a link.
|
||||
fi.(*fileInfo).name = filepath.Base(filename)
|
||||
return fi, nil
|
||||
}
|
||||
|
||||
func (fs *Memory) Lstat(filename string) (os.FileInfo, error) {
|
||||
f, has := fs.s.Get(filename)
|
||||
if !has {
|
||||
return nil, os.ErrNotExist
|
||||
}
|
||||
|
||||
return f.Stat()
|
||||
}
|
||||
|
||||
type ByName []os.FileInfo
|
||||
|
||||
func (a ByName) Len() int { return len(a) }
|
||||
func (a ByName) Less(i, j int) bool { return a[i].Name() < a[j].Name() }
|
||||
func (a ByName) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
|
||||
|
||||
func (fs *Memory) ReadDir(path string) ([]os.FileInfo, error) {
|
||||
if f, has := fs.s.Get(path); has {
|
||||
if target, isLink := fs.resolveLink(path, f); isLink {
|
||||
return fs.ReadDir(target)
|
||||
}
|
||||
}
|
||||
|
||||
var entries []os.FileInfo
|
||||
for _, f := range fs.s.Children(path) {
|
||||
fi, _ := f.Stat()
|
||||
entries = append(entries, fi)
|
||||
}
|
||||
|
||||
sort.Sort(ByName(entries))
|
||||
|
||||
return entries, nil
|
||||
}
|
||||
|
||||
func (fs *Memory) MkdirAll(path string, perm os.FileMode) error {
|
||||
_, err := fs.s.New(path, perm|os.ModeDir, 0)
|
||||
return err
|
||||
}
|
||||
|
||||
func (fs *Memory) TempFile(dir, prefix string) (billy.File, error) {
|
||||
return util.TempFile(fs, dir, prefix)
|
||||
}
|
||||
|
||||
func (fs *Memory) getTempFilename(dir, prefix string) string {
|
||||
fs.tempCount++
|
||||
filename := fmt.Sprintf("%s_%d_%d", prefix, fs.tempCount, time.Now().UnixNano())
|
||||
return fs.Join(dir, filename)
|
||||
}
|
||||
|
||||
func (fs *Memory) Rename(from, to string) error {
|
||||
return fs.s.Rename(from, to)
|
||||
}
|
||||
|
||||
func (fs *Memory) Remove(filename string) error {
|
||||
return fs.s.Remove(filename)
|
||||
}
|
||||
|
||||
func (fs *Memory) Join(elem ...string) string {
|
||||
return filepath.Join(elem...)
|
||||
}
|
||||
|
||||
func (fs *Memory) Symlink(target, link string) error {
|
||||
_, err := fs.Stat(link)
|
||||
if err == nil {
|
||||
return os.ErrExist
|
||||
}
|
||||
|
||||
if !os.IsNotExist(err) {
|
||||
return err
|
||||
}
|
||||
|
||||
return util.WriteFile(fs, link, []byte(target), 0777|os.ModeSymlink)
|
||||
}
|
||||
|
||||
func (fs *Memory) Readlink(link string) (string, error) {
|
||||
f, has := fs.s.Get(link)
|
||||
if !has {
|
||||
return "", os.ErrNotExist
|
||||
}
|
||||
|
||||
if !isSymlink(f.mode) {
|
||||
return "", &os.PathError{
|
||||
Op: "readlink",
|
||||
Path: link,
|
||||
Err: fmt.Errorf("not a symlink"),
|
||||
}
|
||||
}
|
||||
|
||||
return string(f.content.bytes), nil
|
||||
}
|
||||
|
||||
// Capabilities implements the Capable interface.
|
||||
func (fs *Memory) Capabilities() billy.Capability {
|
||||
return billy.WriteCapability |
|
||||
billy.ReadCapability |
|
||||
billy.ReadAndWriteCapability |
|
||||
billy.SeekCapability |
|
||||
billy.TruncateCapability
|
||||
}
|
||||
|
||||
type file struct {
|
||||
name string
|
||||
content *content
|
||||
position int64
|
||||
flag int
|
||||
mode os.FileMode
|
||||
|
||||
isClosed bool
|
||||
}
|
||||
|
||||
func (f *file) Name() string {
|
||||
return f.name
|
||||
}
|
||||
|
||||
func (f *file) Read(b []byte) (int, error) {
|
||||
n, err := f.ReadAt(b, f.position)
|
||||
f.position += int64(n)
|
||||
|
||||
if err == io.EOF && n != 0 {
|
||||
err = nil
|
||||
}
|
||||
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (f *file) ReadAt(b []byte, off int64) (int, error) {
|
||||
if f.isClosed {
|
||||
return 0, os.ErrClosed
|
||||
}
|
||||
|
||||
if !isReadAndWrite(f.flag) && !isReadOnly(f.flag) {
|
||||
return 0, errors.New("read not supported")
|
||||
}
|
||||
|
||||
n, err := f.content.ReadAt(b, off)
|
||||
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (f *file) Seek(offset int64, whence int) (int64, error) {
|
||||
if f.isClosed {
|
||||
return 0, os.ErrClosed
|
||||
}
|
||||
|
||||
switch whence {
|
||||
case io.SeekCurrent:
|
||||
f.position += offset
|
||||
case io.SeekStart:
|
||||
f.position = offset
|
||||
case io.SeekEnd:
|
||||
f.position = int64(f.content.Len()) + offset
|
||||
}
|
||||
|
||||
return f.position, nil
|
||||
}
|
||||
|
||||
func (f *file) Write(p []byte) (int, error) {
|
||||
if f.isClosed {
|
||||
return 0, os.ErrClosed
|
||||
}
|
||||
|
||||
if !isReadAndWrite(f.flag) && !isWriteOnly(f.flag) {
|
||||
return 0, errors.New("write not supported")
|
||||
}
|
||||
|
||||
n, err := f.content.WriteAt(p, f.position)
|
||||
f.position += int64(n)
|
||||
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (f *file) Close() error {
|
||||
if f.isClosed {
|
||||
return os.ErrClosed
|
||||
}
|
||||
|
||||
f.isClosed = true
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *file) Truncate(size int64) error {
|
||||
if size < int64(len(f.content.bytes)) {
|
||||
f.content.bytes = f.content.bytes[:size]
|
||||
} else if more := int(size) - len(f.content.bytes); more > 0 {
|
||||
f.content.bytes = append(f.content.bytes, make([]byte, more)...)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *file) Duplicate(filename string, mode os.FileMode, flag int) billy.File {
|
||||
new := &file{
|
||||
name: filename,
|
||||
content: f.content,
|
||||
mode: mode,
|
||||
flag: flag,
|
||||
}
|
||||
|
||||
if isTruncate(flag) {
|
||||
new.content.Truncate()
|
||||
}
|
||||
|
||||
if isAppend(flag) {
|
||||
new.position = int64(new.content.Len())
|
||||
}
|
||||
|
||||
return new
|
||||
}
|
||||
|
||||
func (f *file) Stat() (os.FileInfo, error) {
|
||||
return &fileInfo{
|
||||
name: f.Name(),
|
||||
mode: f.mode,
|
||||
size: f.content.Len(),
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Lock is a no-op in memfs.
|
||||
func (f *file) Lock() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Unlock is a no-op in memfs.
|
||||
func (f *file) Unlock() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
type fileInfo struct {
|
||||
name string
|
||||
size int
|
||||
mode os.FileMode
|
||||
}
|
||||
|
||||
func (fi *fileInfo) Name() string {
|
||||
return fi.name
|
||||
}
|
||||
|
||||
func (fi *fileInfo) Size() int64 {
|
||||
return int64(fi.size)
|
||||
}
|
||||
|
||||
func (fi *fileInfo) Mode() os.FileMode {
|
||||
return fi.mode
|
||||
}
|
||||
|
||||
func (*fileInfo) ModTime() time.Time {
|
||||
return time.Now()
|
||||
}
|
||||
|
||||
func (fi *fileInfo) IsDir() bool {
|
||||
return fi.mode.IsDir()
|
||||
}
|
||||
|
||||
func (*fileInfo) Sys() interface{} {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *content) Truncate() {
|
||||
c.bytes = make([]byte, 0)
|
||||
}
|
||||
|
||||
func (c *content) Len() int {
|
||||
return len(c.bytes)
|
||||
}
|
||||
|
||||
func isCreate(flag int) bool {
|
||||
return flag&os.O_CREATE != 0
|
||||
}
|
||||
|
||||
func isExclusive(flag int) bool {
|
||||
return flag&os.O_EXCL != 0
|
||||
}
|
||||
|
||||
func isAppend(flag int) bool {
|
||||
return flag&os.O_APPEND != 0
|
||||
}
|
||||
|
||||
func isTruncate(flag int) bool {
|
||||
return flag&os.O_TRUNC != 0
|
||||
}
|
||||
|
||||
func isReadAndWrite(flag int) bool {
|
||||
return flag&os.O_RDWR != 0
|
||||
}
|
||||
|
||||
func isReadOnly(flag int) bool {
|
||||
return flag == os.O_RDONLY
|
||||
}
|
||||
|
||||
func isWriteOnly(flag int) bool {
|
||||
return flag&os.O_WRONLY != 0
|
||||
}
|
||||
|
||||
func isSymlink(m os.FileMode) bool {
|
||||
return m&os.ModeSymlink != 0
|
||||
}
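
A short usage sketch of the memfs API shown above, assuming the go-billy v5 import paths (the file and link names are illustrative; output shown for POSIX-style separators):

```go
package main

import (
	"fmt"

	"github.com/go-git/go-billy/v5/memfs"
	"github.com/go-git/go-billy/v5/util"
)

func main() {
	fs := memfs.New()

	// Files (and their parent directories) are created on demand.
	if err := util.WriteFile(fs, "/docs/readme.txt", []byte("hello"), 0644); err != nil {
		panic(err)
	}

	// Symlinks are stored as files whose content is the target path.
	if err := fs.Symlink("/docs/readme.txt", "/link"); err != nil {
		panic(err)
	}
	target, _ := fs.Readlink("/link")

	data, _ := util.ReadFile(fs, "/link") // Open follows the link
	fmt.Println(target, string(data))     // "/docs/readme.txt hello"
}
```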
|
238
vendor/github.com/go-git/go-billy/v5/memfs/storage.go
generated
vendored
Normal file
238
vendor/github.com/go-git/go-billy/v5/memfs/storage.go
generated
vendored
Normal file
|
@ -0,0 +1,238 @@
|
|||
package memfs
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sync"
|
||||
)
|
||||
|
||||
type storage struct {
|
||||
files map[string]*file
|
||||
children map[string]map[string]*file
|
||||
}
|
||||
|
||||
func newStorage() *storage {
|
||||
return &storage{
|
||||
files: make(map[string]*file, 0),
|
||||
children: make(map[string]map[string]*file, 0),
|
||||
}
|
||||
}
|
||||
|
||||
func (s *storage) Has(path string) bool {
|
||||
path = clean(path)
|
||||
|
||||
_, ok := s.files[path]
|
||||
return ok
|
||||
}
|
||||
|
||||
func (s *storage) New(path string, mode os.FileMode, flag int) (*file, error) {
|
||||
path = clean(path)
|
||||
if s.Has(path) {
|
||||
if !s.MustGet(path).mode.IsDir() {
|
||||
return nil, fmt.Errorf("file already exists %q", path)
|
||||
}
|
||||
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
name := filepath.Base(path)
|
||||
|
||||
f := &file{
|
||||
name: name,
|
||||
content: &content{name: name},
|
||||
mode: mode,
|
||||
flag: flag,
|
||||
}
|
||||
|
||||
s.files[path] = f
|
||||
s.createParent(path, mode, f)
|
||||
return f, nil
|
||||
}
|
||||
|
||||
func (s *storage) createParent(path string, mode os.FileMode, f *file) error {
|
||||
base := filepath.Dir(path)
|
||||
base = clean(base)
|
||||
if f.Name() == string(separator) {
|
||||
return nil
|
||||
}
|
||||
|
||||
if _, err := s.New(base, mode.Perm()|os.ModeDir, 0); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if _, ok := s.children[base]; !ok {
|
||||
s.children[base] = make(map[string]*file, 0)
|
||||
}
|
||||
|
||||
s.children[base][f.Name()] = f
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *storage) Children(path string) []*file {
|
||||
path = clean(path)
|
||||
|
||||
l := make([]*file, 0)
|
||||
for _, f := range s.children[path] {
|
||||
l = append(l, f)
|
||||
}
|
||||
|
||||
return l
|
||||
}
|
||||
|
||||
func (s *storage) MustGet(path string) *file {
|
||||
f, ok := s.Get(path)
|
||||
if !ok {
|
||||
panic(fmt.Errorf("couldn't find %q", path))
|
||||
}
|
||||
|
||||
return f
|
||||
}
|
||||
|
||||
func (s *storage) Get(path string) (*file, bool) {
|
||||
path = clean(path)
|
||||
if !s.Has(path) {
|
||||
return nil, false
|
||||
}
|
||||
|
||||
file, ok := s.files[path]
|
||||
return file, ok
|
||||
}
|
||||
|
||||
func (s *storage) Rename(from, to string) error {
|
||||
from = clean(from)
|
||||
to = clean(to)
|
||||
|
||||
if !s.Has(from) {
|
||||
return os.ErrNotExist
|
||||
}
|
||||
|
||||
move := [][2]string{{from, to}}
|
||||
|
||||
for pathFrom := range s.files {
|
||||
if pathFrom == from || !filepath.HasPrefix(pathFrom, from) {
|
||||
continue
|
||||
}
|
||||
|
||||
rel, _ := filepath.Rel(from, pathFrom)
|
||||
pathTo := filepath.Join(to, rel)
|
||||
|
||||
move = append(move, [2]string{pathFrom, pathTo})
|
||||
}
|
||||
|
||||
for _, ops := range move {
|
||||
from := ops[0]
|
||||
to := ops[1]
|
||||
|
||||
if err := s.move(from, to); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *storage) move(from, to string) error {
|
||||
s.files[to] = s.files[from]
|
||||
s.files[to].name = filepath.Base(to)
|
||||
s.children[to] = s.children[from]
|
||||
|
||||
defer func() {
|
||||
delete(s.children, from)
|
||||
delete(s.files, from)
|
||||
delete(s.children[filepath.Dir(from)], filepath.Base(from))
|
||||
}()
|
||||
|
||||
return s.createParent(to, 0644, s.files[to])
|
||||
}
|
||||
|
||||
func (s *storage) Remove(path string) error {
|
||||
path = clean(path)
|
||||
|
||||
f, has := s.Get(path)
|
||||
if !has {
|
||||
return os.ErrNotExist
|
||||
}
|
||||
|
||||
if f.mode.IsDir() && len(s.children[path]) != 0 {
|
||||
return fmt.Errorf("dir: %s contains files", path)
|
||||
}
|
||||
|
||||
base, file := filepath.Split(path)
|
||||
base = filepath.Clean(base)
|
||||
|
||||
delete(s.children[base], file)
|
||||
delete(s.files, path)
|
||||
return nil
|
||||
}
|
||||
|
||||
func clean(path string) string {
|
||||
return filepath.Clean(filepath.FromSlash(path))
|
||||
}
|
||||
|
||||
type content struct {
|
||||
name string
|
||||
bytes []byte
|
||||
|
||||
m sync.RWMutex
|
||||
}
|
||||
|
||||
func (c *content) WriteAt(p []byte, off int64) (int, error) {
|
||||
if off < 0 {
|
||||
return 0, &os.PathError{
|
||||
Op: "writeat",
|
||||
Path: c.name,
|
||||
Err: errors.New("negative offset"),
|
||||
}
|
||||
}
|
||||
|
||||
c.m.Lock()
|
||||
prev := len(c.bytes)
|
||||
|
||||
diff := int(off) - prev
|
||||
if diff > 0 {
|
||||
c.bytes = append(c.bytes, make([]byte, diff)...)
|
||||
}
|
||||
|
||||
c.bytes = append(c.bytes[:off], p...)
|
||||
if len(c.bytes) < prev {
|
||||
c.bytes = c.bytes[:prev]
|
||||
}
|
||||
c.m.Unlock()
|
||||
|
||||
return len(p), nil
|
||||
}
|
||||
|
||||
func (c *content) ReadAt(b []byte, off int64) (n int, err error) {
|
||||
if off < 0 {
|
||||
return 0, &os.PathError{
|
||||
Op: "readat",
|
||||
Path: c.name,
|
||||
Err: errors.New("negative offset"),
|
||||
}
|
||||
}
|
||||
|
||||
c.m.RLock()
|
||||
size := int64(len(c.bytes))
|
||||
if off >= size {
|
||||
c.m.RUnlock()
|
||||
return 0, io.EOF
|
||||
}
|
||||
|
||||
l := int64(len(b))
|
||||
if off+l > size {
|
||||
l = size - off
|
||||
}
|
||||
|
||||
btr := c.bytes[off : off+l]
|
||||
n = copy(b, btr)
|
||||
|
||||
if len(btr) < len(b) {
|
||||
err = io.EOF
|
||||
}
|
||||
c.m.RUnlock()
|
||||
|
||||
return
|
||||
}
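
The WriteAt/ReadAt pair above gives memfs files sparse-write semantics: writing past the current end zero-fills the gap. A small sketch of that behaviour through the public file API (a sketch, assuming memfs as above):

```go
package main

import (
	"fmt"
	"io"

	"github.com/go-git/go-billy/v5/memfs"
)

func main() {
	fs := memfs.New()
	f, err := fs.Create("/sparse.bin")
	if err != nil {
		panic(err)
	}

	// Seek past the end and write: content.WriteAt pads the gap with zeros.
	if _, err := f.Seek(4, io.SeekStart); err != nil {
		panic(err)
	}
	if _, err := f.Write([]byte{0xff}); err != nil {
		panic(err)
	}

	fi, _ := fs.Stat("/sparse.bin")
	fmt.Println(fi.Size()) // 5: four zero bytes followed by 0xff
}
```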
|
127
vendor/github.com/go-git/go-billy/v5/osfs/os.go
generated
vendored
Normal file
127
vendor/github.com/go-git/go-billy/v5/osfs/os.go
generated
vendored
Normal file
|
@ -0,0 +1,127 @@
|
|||
//go:build !js
|
||||
// +build !js
|
||||
|
||||
// Package osfs provides a billy filesystem for the OS.
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io/fs"
|
||||
"os"
|
||||
"sync"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultDirectoryMode = 0o755
|
||||
defaultCreateMode = 0o666
|
||||
)
|
||||
|
||||
// Default Filesystem representing the root of the os filesystem.
|
||||
var Default = &ChrootOS{}
|
||||
|
||||
// New returns a new OS filesystem.
|
||||
// By default paths are deduplicated, but still enforced
|
||||
// under baseDir. For more info refer to WithDeduplicatePath.
|
||||
func New(baseDir string, opts ...Option) billy.Filesystem {
|
||||
o := &options{
|
||||
deduplicatePath: true,
|
||||
}
|
||||
for _, opt := range opts {
|
||||
opt(o)
|
||||
}
|
||||
|
||||
if o.Type == BoundOSFS {
|
||||
return newBoundOS(baseDir, o.deduplicatePath)
|
||||
}
|
||||
|
||||
return newChrootOS(baseDir)
|
||||
}
|
||||
|
||||
// WithBoundOS returns the option of using a Bound filesystem OS.
|
||||
func WithBoundOS() Option {
|
||||
return func(o *options) {
|
||||
o.Type = BoundOSFS
|
||||
}
|
||||
}
|
||||
|
||||
// WithChrootOS returns the option of using a Chroot filesystem OS.
|
||||
func WithChrootOS() Option {
|
||||
return func(o *options) {
|
||||
o.Type = ChrootOSFS
|
||||
}
|
||||
}
|
||||
|
||||
// WithDeduplicatePath toggles the deduplication of the base dir in the path.
|
||||
// This occurs when absolute links are being used.
|
||||
// Assuming base dir /base/dir and an absolute symlink /base/dir/target:
|
||||
//
|
||||
// With DeduplicatePath (default): /base/dir/target
|
||||
// Without DeduplicatePath: /base/dir/base/dir/target
|
||||
//
|
||||
// This option is only used by the BoundOS OS type.
|
||||
func WithDeduplicatePath(enabled bool) Option {
|
||||
return func(o *options) {
|
||||
o.deduplicatePath = enabled
|
||||
}
|
||||
}
|
||||
|
||||
type options struct {
|
||||
Type
|
||||
deduplicatePath bool
|
||||
}
|
||||
|
||||
type Type int
|
||||
|
||||
const (
|
||||
ChrootOSFS Type = iota
|
||||
BoundOSFS
|
||||
)
|
||||
|
||||
func readDir(dir string) ([]os.FileInfo, error) {
|
||||
entries, err := os.ReadDir(dir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
infos := make([]fs.FileInfo, 0, len(entries))
|
||||
for _, entry := range entries {
|
||||
fi, err := entry.Info()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
infos = append(infos, fi)
|
||||
}
|
||||
return infos, nil
|
||||
}
|
||||
|
||||
func tempFile(dir, prefix string) (billy.File, error) {
|
||||
f, err := os.CreateTemp(dir, prefix)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &file{File: f}, nil
|
||||
}
|
||||
|
||||
func openFile(fn string, flag int, perm os.FileMode, createDir func(string) error) (billy.File, error) {
|
||||
if flag&os.O_CREATE != 0 {
|
||||
if createDir == nil {
|
||||
return nil, fmt.Errorf("createDir func cannot be nil if file needs to be opened in create mode")
|
||||
}
|
||||
if err := createDir(fn); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
f, err := os.OpenFile(fn, flag, perm)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return &file{File: f}, err
|
||||
}
|
||||
|
||||
// file is a wrapper for an os.File which adds support for file locking.
|
||||
type file struct {
|
||||
*os.File
|
||||
m sync.Mutex
|
||||
}
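
A minimal sketch of the constructor and options defined above (the base dir is illustrative; no files are touched, only the resulting roots are printed):

```go
package main

import (
	"fmt"

	"github.com/go-git/go-billy/v5/osfs"
)

func main() {
	// Default behaviour: a ChrootOS rooted at the base dir, where paths are
	// translated so that "/" means the base dir.
	legacy := osfs.New("/tmp/example")

	// Opt in to the newer BoundOS, which keeps real OS paths but refuses to
	// operate outside the base dir.
	bound := osfs.New("/tmp/example", osfs.WithBoundOS())

	fmt.Println(legacy.Root(), bound.Root()) // both report /tmp/example
}
```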
|
261
vendor/github.com/go-git/go-billy/v5/osfs/os_bound.go
generated
vendored
Normal file
261
vendor/github.com/go-git/go-billy/v5/osfs/os_bound.go
generated
vendored
Normal file
|
@ -0,0 +1,261 @@
|
|||
//go:build !js
|
||||
// +build !js
|
||||
|
||||
/*
|
||||
Copyright 2022 The Flux authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
securejoin "github.com/cyphar/filepath-securejoin"
|
||||
"github.com/go-git/go-billy/v5"
|
||||
)
|
||||
|
||||
// BoundOS is a fs implementation based on the OS filesystem which is bound to
|
||||
// a base dir.
|
||||
// Prefer this fs implementation over ChrootOS.
|
||||
//
|
||||
// Behaviours of note:
|
||||
// 1. Read and write operations can only be directed to files which descend
|
||||
// from the base dir.
|
||||
// 2. Symlinks don't have their targets modified, and therefore can point
|
||||
// to locations outside the base dir or to non-existent paths.
|
||||
// 3. Readlink and Lstat ensure that the link file is located within the base
|
||||
// dir, evaluating any symlinks that file or base dir may contain.
|
||||
type BoundOS struct {
|
||||
baseDir string
|
||||
deduplicatePath bool
|
||||
}
|
||||
|
||||
func newBoundOS(d string, deduplicatePath bool) billy.Filesystem {
|
||||
return &BoundOS{baseDir: d, deduplicatePath: deduplicatePath}
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Create(filename string) (billy.File, error) {
|
||||
return fs.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, defaultCreateMode)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) OpenFile(filename string, flag int, perm os.FileMode) (billy.File, error) {
|
||||
fn, err := fs.abs(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return openFile(fn, flag, perm, fs.createDir)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) ReadDir(path string) ([]os.FileInfo, error) {
|
||||
dir, err := fs.abs(path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return readDir(dir)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Rename(from, to string) error {
|
||||
f, err := fs.abs(from)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
t, err := fs.abs(to)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// MkdirAll for target name.
|
||||
if err := fs.createDir(t); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return os.Rename(f, t)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) MkdirAll(path string, perm os.FileMode) error {
|
||||
dir, err := fs.abs(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return os.MkdirAll(dir, perm)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Open(filename string) (billy.File, error) {
|
||||
return fs.OpenFile(filename, os.O_RDONLY, 0)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Stat(filename string) (os.FileInfo, error) {
|
||||
filename, err := fs.abs(filename)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return os.Stat(filename)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Remove(filename string) error {
|
||||
fn, err := fs.abs(filename)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return os.Remove(fn)
|
||||
}
|
||||
|
||||
// TempFile creates a temporary file. If dir is empty, the file
|
||||
// will be created within the OS Temporary dir. If dir is provided
|
||||
// it must descend from the current base dir.
|
||||
func (fs *BoundOS) TempFile(dir, prefix string) (billy.File, error) {
|
||||
if dir != "" {
|
||||
var err error
|
||||
dir, err = fs.abs(dir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
return tempFile(dir, prefix)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Join(elem ...string) string {
|
||||
return filepath.Join(elem...)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) RemoveAll(path string) error {
|
||||
dir, err := fs.abs(path)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return os.RemoveAll(dir)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Symlink(target, link string) error {
|
||||
ln, err := fs.abs(link)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// MkdirAll for containing dir.
|
||||
if err := fs.createDir(ln); err != nil {
|
||||
return err
|
||||
}
|
||||
return os.Symlink(target, ln)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Lstat(filename string) (os.FileInfo, error) {
|
||||
filename = filepath.Clean(filename)
|
||||
if !filepath.IsAbs(filename) {
|
||||
filename = filepath.Join(fs.baseDir, filename)
|
||||
}
|
||||
if ok, err := fs.insideBaseDirEval(filename); !ok {
|
||||
return nil, err
|
||||
}
|
||||
return os.Lstat(filename)
|
||||
}
|
||||
|
||||
func (fs *BoundOS) Readlink(link string) (string, error) {
|
||||
if !filepath.IsAbs(link) {
|
||||
link = filepath.Clean(filepath.Join(fs.baseDir, link))
|
||||
}
|
||||
if ok, err := fs.insideBaseDirEval(link); !ok {
|
||||
return "", err
|
||||
}
|
||||
return os.Readlink(link)
|
||||
}
|
||||
|
||||
// Chroot returns a new OS filesystem, with the base dir set to the
|
||||
// result of joining the provided path with the underlying base dir.
|
||||
func (fs *BoundOS) Chroot(path string) (billy.Filesystem, error) {
|
||||
joined, err := securejoin.SecureJoin(fs.baseDir, path)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return New(joined), nil
|
||||
}
|
||||
|
||||
// Root returns the current base dir of the billy.Filesystem.
|
||||
// This is required in order for this implementation to be a drop-in
|
||||
// replacement for other upstream implementations (e.g. memory and osfs).
|
||||
func (fs *BoundOS) Root() string {
|
||||
return fs.baseDir
|
||||
}
|
||||
|
||||
func (fs *BoundOS) createDir(fullpath string) error {
|
||||
dir := filepath.Dir(fullpath)
|
||||
if dir != "." {
|
||||
if err := os.MkdirAll(dir, defaultDirectoryMode); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// abs transforms filename to an absolute path, taking into account the base dir.
|
||||
// Relative paths won't be allowed to ascend the base dir, so `../file` will become
|
||||
// `/working-dir/file`.
|
||||
//
|
||||
// Note that if filename is a symlink, the returned address will be the target of the
|
||||
// symlink.
|
||||
func (fs *BoundOS) abs(filename string) (string, error) {
|
||||
if filename == fs.baseDir {
|
||||
filename = string(filepath.Separator)
|
||||
}
|
||||
|
||||
path, err := securejoin.SecureJoin(fs.baseDir, filename)
|
||||
if err != nil {
|
||||
return "", nil
|
||||
}
|
||||
|
||||
if fs.deduplicatePath {
|
||||
vol := filepath.VolumeName(fs.baseDir)
|
||||
dup := filepath.Join(fs.baseDir, fs.baseDir[len(vol):])
|
||||
if strings.HasPrefix(path, dup+string(filepath.Separator)) {
|
||||
return fs.abs(path[len(dup):])
|
||||
}
|
||||
}
|
||||
return path, nil
|
||||
}
|
||||
|
||||
// insideBaseDir checks whether filename is located within
|
||||
// the fs.baseDir.
|
||||
func (fs *BoundOS) insideBaseDir(filename string) (bool, error) {
|
||||
if filename == fs.baseDir {
|
||||
return true, nil
|
||||
}
|
||||
if !strings.HasPrefix(filename, fs.baseDir+string(filepath.Separator)) {
|
||||
return false, fmt.Errorf("path outside base dir")
|
||||
}
|
||||
return true, nil
|
||||
}
|
||||
|
||||
// insideBaseDirEval checks whether filename is contained within
|
||||
// a dir that is within the fs.baseDir, by first evaluating any symlinks
|
||||
// that either filename or fs.baseDir may contain.
|
||||
func (fs *BoundOS) insideBaseDirEval(filename string) (bool, error) {
|
||||
dir, err := filepath.EvalSymlinks(filepath.Dir(filename))
|
||||
if dir == "" || os.IsNotExist(err) {
|
||||
dir = filepath.Dir(filename)
|
||||
}
|
||||
wd, err := filepath.EvalSymlinks(fs.baseDir)
|
||||
if wd == "" || os.IsNotExist(err) {
|
||||
wd = fs.baseDir
|
||||
}
|
||||
if filename != wd && dir != wd && !strings.HasPrefix(dir, wd+string(filepath.Separator)) {
|
||||
return false, fmt.Errorf("path outside base dir")
|
||||
}
|
||||
return true, nil
|
||||
}
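
Because abs() secure-joins every path against the base dir, relative paths cannot escape it. A sketch of that containment (assumes a writable temp dir; the file name is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/go-git/go-billy/v5/osfs"
)

func main() {
	base, err := os.MkdirTemp("", "bound-example")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(base)

	fs := osfs.New(base, osfs.WithBoundOS())

	// "../" components are resolved inside the base dir, not above it.
	f, err := fs.Create("../../escape.txt")
	if err != nil {
		panic(err)
	}
	f.Close()

	_, err = os.Stat(filepath.Join(base, "escape.txt"))
	fmt.Println(err == nil) // true: the file stayed inside the base dir
}
```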
|
112
vendor/github.com/go-git/go-billy/v5/osfs/os_chroot.go
generated
vendored
Normal file
112
vendor/github.com/go-git/go-billy/v5/osfs/os_chroot.go
generated
vendored
Normal file
|
@ -0,0 +1,112 @@
|
|||
//go:build !js
|
||||
// +build !js
|
||||
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
"github.com/go-git/go-billy/v5/helper/chroot"
|
||||
)
|
||||
|
||||
// ChrootOS is a legacy filesystem based on a "soft chroot" of the os filesystem.
|
||||
// Although this is still the default os filesystem, consider using BoundOS instead.
|
||||
//
|
||||
// Behaviours of note:
|
||||
// 1. A "soft chroot" translates the base dir to "/" for the purposes of the
|
||||
// fs abstraction.
|
||||
// 2. Symlink targets may be modified to be kept within the chroot bounds.
|
||||
// 3. Some file modes do not pass through the fs abstraction.
|
||||
// 4. The combination of 1 and 2 may cause go-git to think that a Git repository
|
||||
// is dirty, when in fact it isn't.
|
||||
type ChrootOS struct{}
|
||||
|
||||
func newChrootOS(baseDir string) billy.Filesystem {
|
||||
return chroot.New(&ChrootOS{}, baseDir)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Create(filename string) (billy.File, error) {
|
||||
return fs.OpenFile(filename, os.O_RDWR|os.O_CREATE|os.O_TRUNC, defaultCreateMode)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) OpenFile(filename string, flag int, perm os.FileMode) (billy.File, error) {
|
||||
return openFile(filename, flag, perm, fs.createDir)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) createDir(fullpath string) error {
|
||||
dir := filepath.Dir(fullpath)
|
||||
if dir != "." {
|
||||
if err := os.MkdirAll(dir, defaultDirectoryMode); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) ReadDir(dir string) ([]os.FileInfo, error) {
|
||||
return readDir(dir)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Rename(from, to string) error {
|
||||
if err := fs.createDir(to); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return rename(from, to)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) MkdirAll(path string, perm os.FileMode) error {
|
||||
return os.MkdirAll(path, defaultDirectoryMode)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Open(filename string) (billy.File, error) {
|
||||
return fs.OpenFile(filename, os.O_RDONLY, 0)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Stat(filename string) (os.FileInfo, error) {
|
||||
return os.Stat(filename)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Remove(filename string) error {
|
||||
return os.Remove(filename)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) TempFile(dir, prefix string) (billy.File, error) {
|
||||
if err := fs.createDir(dir + string(os.PathSeparator)); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return tempFile(dir, prefix)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Join(elem ...string) string {
|
||||
return filepath.Join(elem...)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) RemoveAll(path string) error {
|
||||
return os.RemoveAll(filepath.Clean(path))
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Lstat(filename string) (os.FileInfo, error) {
|
||||
return os.Lstat(filepath.Clean(filename))
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Symlink(target, link string) error {
|
||||
if err := fs.createDir(link); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return os.Symlink(target, link)
|
||||
}
|
||||
|
||||
func (fs *ChrootOS) Readlink(link string) (string, error) {
|
||||
return os.Readlink(link)
|
||||
}
|
||||
|
||||
// Capabilities implements the Capable interface.
|
||||
func (fs *ChrootOS) Capabilities() billy.Capability {
|
||||
return billy.DefaultCapabilities
|
||||
}
|
25
vendor/github.com/go-git/go-billy/v5/osfs/os_js.go
generated
vendored
Normal file
25
vendor/github.com/go-git/go-billy/v5/osfs/os_js.go
generated
vendored
Normal file
|
@ -0,0 +1,25 @@
|
|||
//go:build js
|
||||
// +build js
|
||||
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"github.com/go-git/go-billy/v5"
|
||||
"github.com/go-git/go-billy/v5/helper/chroot"
|
||||
"github.com/go-git/go-billy/v5/memfs"
|
||||
)
|
||||
|
||||
// globalMemFs is the global memory fs
|
||||
var globalMemFs = memfs.New()
|
||||
|
||||
// Default Filesystem representing the root of in-memory filesystem for a
|
||||
// js/wasm environment.
|
||||
var Default = memfs.New()
|
||||
|
||||
// New returns a new OS filesystem.
|
||||
func New(baseDir string, _ ...Option) billy.Filesystem {
|
||||
return chroot.New(Default, Default.Join("/", baseDir))
|
||||
}
|
||||
|
||||
type options struct {
|
||||
}
|
3
vendor/github.com/go-git/go-billy/v5/osfs/os_options.go
generated
vendored
Normal file
3
vendor/github.com/go-git/go-billy/v5/osfs/os_options.go
generated
vendored
Normal file
|
@ -0,0 +1,3 @@
|
|||
package osfs
|
||||
|
||||
type Option func(*options)
|
91
vendor/github.com/go-git/go-billy/v5/osfs/os_plan9.go
generated
vendored
Normal file
91
vendor/github.com/go-git/go-billy/v5/osfs/os_plan9.go
generated
vendored
Normal file
|
@ -0,0 +1,91 @@
|
|||
//go:build plan9
|
||||
// +build plan9
|
||||
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"syscall"
|
||||
)
|
||||
|
||||
func (f *file) Lock() error {
|
||||
// Plan 9 uses a mode bit instead of explicit lock/unlock syscalls.
|
||||
//
|
||||
// Per http://man.cat-v.org/plan_9/5/stat: “Exclusive use files may be open
|
||||
// for I/O by only one fid at a time across all clients of the server. If a
|
||||
// second open is attempted, it draws an error.”
|
||||
//
|
||||
// There is no obvious way to implement this function using the exclusive use bit.
|
||||
// See https://golang.org/src/cmd/go/internal/lockedfile/lockedfile_plan9.go
|
||||
// for how file locking is done by the go tool on Plan 9.
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *file) Unlock() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func rename(from, to string) error {
|
||||
// If from and to are in different directories, copy the file
|
||||
// since Plan 9 does not support cross-directory rename.
|
||||
if filepath.Dir(from) != filepath.Dir(to) {
|
||||
fi, err := os.Stat(from)
|
||||
if err != nil {
|
||||
return &os.LinkError{"rename", from, to, err}
|
||||
}
|
||||
if fi.Mode().IsDir() {
|
||||
return &os.LinkError{"rename", from, to, syscall.EISDIR}
|
||||
}
|
||||
fromFile, err := os.Open(from)
|
||||
if err != nil {
|
||||
return &os.LinkError{"rename", from, to, err}
|
||||
}
|
||||
toFile, err := os.OpenFile(to, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, fi.Mode())
|
||||
if err != nil {
|
||||
return &os.LinkError{"rename", from, to, err}
|
||||
}
|
||||
_, err = io.Copy(toFile, fromFile)
|
||||
if err != nil {
|
||||
return &os.LinkError{"rename", from, to, err}
|
||||
}
|
||||
|
||||
// Copy mtime and mode from original file.
|
||||
// We need only one syscall if we avoid os.Chmod and os.Chtimes.
|
||||
dir := fi.Sys().(*syscall.Dir)
|
||||
var d syscall.Dir
|
||||
d.Null()
|
||||
d.Mtime = dir.Mtime
|
||||
d.Mode = dir.Mode
|
||||
if err = dirwstat(to, &d); err != nil {
|
||||
return &os.LinkError{"rename", from, to, err}
|
||||
}
|
||||
|
||||
// Remove original file.
|
||||
err = os.Remove(from)
|
||||
if err != nil {
|
||||
return &os.LinkError{"rename", from, to, err}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
return os.Rename(from, to)
|
||||
}
|
||||
|
||||
func dirwstat(name string, d *syscall.Dir) error {
|
||||
var buf [syscall.STATFIXLEN]byte
|
||||
|
||||
n, err := d.Marshal(buf[:])
|
||||
if err != nil {
|
||||
return &os.PathError{"dirwstat", name, err}
|
||||
}
|
||||
if err = syscall.Wstat(name, buf[:n]); err != nil {
|
||||
return &os.PathError{"dirwstat", name, err}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func umask(new int) func() {
|
||||
return func() {
|
||||
}
|
||||
}
|
38
vendor/github.com/go-git/go-billy/v5/osfs/os_posix.go
generated
vendored
Normal file
38
vendor/github.com/go-git/go-billy/v5/osfs/os_posix.go
generated
vendored
Normal file
|
@ -0,0 +1,38 @@
|
|||
//go:build !plan9 && !windows && !js
|
||||
// +build !plan9,!windows,!js
|
||||
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"os"
|
||||
"syscall"
|
||||
|
||||
"golang.org/x/sys/unix"
|
||||
)
|
||||
|
||||
func (f *file) Lock() error {
|
||||
f.m.Lock()
|
||||
defer f.m.Unlock()
|
||||
|
||||
return unix.Flock(int(f.File.Fd()), unix.LOCK_EX)
|
||||
}
|
||||
|
||||
func (f *file) Unlock() error {
|
||||
f.m.Lock()
|
||||
defer f.m.Unlock()
|
||||
|
||||
return unix.Flock(int(f.File.Fd()), unix.LOCK_UN)
|
||||
}
|
||||
|
||||
func rename(from, to string) error {
|
||||
return os.Rename(from, to)
|
||||
}
|
||||
|
||||
// umask sets umask to a new value, and returns a func which allows the
|
||||
// caller to reset it back to what it was originally.
|
||||
func umask(new int) func() {
|
||||
old := syscall.Umask(new)
|
||||
return func() {
|
||||
syscall.Umask(old)
|
||||
}
|
||||
}
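
On this build the Lock/Unlock methods above map to flock(2); other platform files in this commit provide their own implementations behind the same billy.File methods. A hedged usage sketch (the lock file name is illustrative):

```go
package main

import (
	"os"

	"github.com/go-git/go-billy/v5/osfs"
)

func main() {
	dir, err := os.MkdirTemp("", "lock-example")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	fs := osfs.New(dir)
	f, err := fs.Create("state.lock")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// billy.File exposes Lock/Unlock; on POSIX builds they use flock.
	if err := f.Lock(); err != nil {
		panic(err)
	}
	defer f.Unlock()

	// ... critical section ...
}
```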
|
58
vendor/github.com/go-git/go-billy/v5/osfs/os_windows.go
generated
vendored
Normal file
58
vendor/github.com/go-git/go-billy/v5/osfs/os_windows.go
generated
vendored
Normal file
|
@ -0,0 +1,58 @@
|
|||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package osfs
|
||||
|
||||
import (
|
||||
"os"
|
||||
"runtime"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
var (
|
||||
kernel32DLL = windows.NewLazySystemDLL("kernel32.dll")
|
||||
lockFileExProc = kernel32DLL.NewProc("LockFileEx")
|
||||
unlockFileProc = kernel32DLL.NewProc("UnlockFile")
|
||||
)
|
||||
|
||||
const (
|
||||
lockfileExclusiveLock = 0x2
|
||||
)
|
||||
|
||||
func (f *file) Lock() error {
|
||||
f.m.Lock()
|
||||
defer f.m.Unlock()
|
||||
|
||||
var overlapped windows.Overlapped
|
||||
// err is always non-nil as per sys/windows semantics.
|
||||
ret, _, err := lockFileExProc.Call(f.File.Fd(), lockfileExclusiveLock, 0, 0xFFFFFFFF, 0,
|
||||
uintptr(unsafe.Pointer(&overlapped)))
|
||||
runtime.KeepAlive(&overlapped)
|
||||
if ret == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *file) Unlock() error {
|
||||
f.m.Lock()
|
||||
defer f.m.Unlock()
|
||||
|
||||
// err is always non-nil as per sys/windows semantics.
|
||||
ret, _, err := unlockFileProc.Call(f.File.Fd(), 0, 0, 0xFFFFFFFF, 0)
|
||||
if ret == 0 {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func rename(from, to string) error {
|
||||
return os.Rename(from, to)
|
||||
}
|
||||
|
||||
func umask(new int) func() {
|
||||
return func() {
|
||||
}
|
||||
}
|
111
vendor/github.com/go-git/go-billy/v5/util/glob.go
generated
vendored
Normal file
111
vendor/github.com/go-git/go-billy/v5/util/glob.go
generated
vendored
Normal file
|
@ -0,0 +1,111 @@
|
|||
package util
|
||||
|
||||
import (
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strings"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
)
|
||||
|
||||
// Glob returns the names of all files matching pattern or nil
|
||||
// if there is no matching file. The syntax of patterns is the same
|
||||
// as in Match. The pattern may describe hierarchical names such as
|
||||
// /usr/*/bin/ed (assuming the Separator is '/').
|
||||
//
|
||||
// Glob ignores file system errors such as I/O errors reading directories.
|
||||
// The only possible returned error is ErrBadPattern, when pattern
|
||||
// is malformed.
|
||||
//
|
||||
// Function originally from https://golang.org/src/path/filepath/match_test.go
|
||||
func Glob(fs billy.Filesystem, pattern string) (matches []string, err error) {
|
||||
if !hasMeta(pattern) {
|
||||
if _, err = fs.Lstat(pattern); err != nil {
|
||||
return nil, nil
|
||||
}
|
||||
return []string{pattern}, nil
|
||||
}
|
||||
|
||||
dir, file := filepath.Split(pattern)
|
||||
// Prevent infinite recursion. See issue 15879.
|
||||
if dir == pattern {
|
||||
return nil, filepath.ErrBadPattern
|
||||
}
|
||||
|
||||
var m []string
|
||||
m, err = Glob(fs, cleanGlobPath(dir))
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
for _, d := range m {
|
||||
matches, err = glob(fs, d, file, matches)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// cleanGlobPath prepares path for glob matching.
|
||||
func cleanGlobPath(path string) string {
|
||||
switch path {
|
||||
case "":
|
||||
return "."
|
||||
case string(filepath.Separator):
|
||||
// do nothing to the path
|
||||
return path
|
||||
default:
|
||||
return path[0 : len(path)-1] // chop off trailing separator
|
||||
}
|
||||
}
|
||||
|
||||
// glob searches for files matching pattern in the directory dir
|
||||
// and appends them to matches. If the directory cannot be
|
||||
// opened, it returns the existing matches. New matches are
|
||||
// added in lexicographical order.
|
||||
func glob(fs billy.Filesystem, dir, pattern string, matches []string) (m []string, e error) {
|
||||
m = matches
|
||||
fi, err := fs.Stat(dir)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
|
||||
if !fi.IsDir() {
|
||||
return
|
||||
}
|
||||
|
||||
names, _ := readdirnames(fs, dir)
|
||||
sort.Strings(names)
|
||||
|
||||
for _, n := range names {
|
||||
matched, err := filepath.Match(pattern, n)
|
||||
if err != nil {
|
||||
return m, err
|
||||
}
|
||||
if matched {
|
||||
m = append(m, filepath.Join(dir, n))
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// hasMeta reports whether path contains any of the magic characters
|
||||
// recognized by Match.
|
||||
func hasMeta(path string) bool {
|
||||
// TODO(niemeyer): Should other magic characters be added here?
|
||||
return strings.ContainsAny(path, "*?[")
|
||||
}
|
||||
|
||||
func readdirnames(fs billy.Filesystem, dir string) ([]string, error) {
|
||||
files, err := fs.ReadDir(dir)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var names []string
|
||||
for _, file := range files {
|
||||
names = append(names, file.Name())
|
||||
}
|
||||
|
||||
return names, nil
|
||||
}
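
A small sketch of Glob over an in-memory filesystem (file names are illustrative; output shown for POSIX-style separators):

```go
package main

import (
	"fmt"

	"github.com/go-git/go-billy/v5/memfs"
	"github.com/go-git/go-billy/v5/util"
)

func main() {
	fs := memfs.New()
	for _, name := range []string{"/a/1.txt", "/a/2.txt", "/a/3.md"} {
		if err := util.WriteFile(fs, name, nil, 0644); err != nil {
			panic(err)
		}
	}

	matches, err := util.Glob(fs, "/a/*.txt")
	if err != nil {
		panic(err)
	}
	fmt.Println(matches) // [/a/1.txt /a/2.txt]
}
```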
|
282
vendor/github.com/go-git/go-billy/v5/util/util.go
generated
vendored
Normal file
282
vendor/github.com/go-git/go-billy/v5/util/util.go
generated
vendored
Normal file
|
@ -0,0 +1,282 @@
|
|||
package util
|
||||
|
||||
import (
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
)
|
||||
|
||||
// RemoveAll removes path and any children it contains. It removes everything it
|
||||
// can but returns the first error it encounters. If the path does not exist,
|
||||
// RemoveAll returns nil (no error).
|
||||
func RemoveAll(fs billy.Basic, path string) error {
|
||||
fs, path = getUnderlyingAndPath(fs, path)
|
||||
|
||||
if r, ok := fs.(removerAll); ok {
|
||||
return r.RemoveAll(path)
|
||||
}
|
||||
|
||||
return removeAll(fs, path)
|
||||
}
|
||||
|
||||
type removerAll interface {
|
||||
RemoveAll(string) error
|
||||
}
|
||||
|
||||
func removeAll(fs billy.Basic, path string) error {
|
||||
// This implementation is adapted from os.RemoveAll.
|
||||
|
||||
// Simple case: if Remove works, we're done.
|
||||
err := fs.Remove(path)
|
||||
if err == nil || os.IsNotExist(err) {
|
||||
return nil
|
||||
}
|
||||
|
||||
// Otherwise, is this a directory we need to recurse into?
|
||||
dir, serr := fs.Stat(path)
|
||||
if serr != nil {
|
||||
if os.IsNotExist(serr) {
|
||||
return nil
|
||||
}
|
||||
|
||||
return serr
|
||||
}
|
||||
|
||||
if !dir.IsDir() {
|
||||
// Not a directory; return the error from Remove.
|
||||
return err
|
||||
}
|
||||
|
||||
dirfs, ok := fs.(billy.Dir)
|
||||
if !ok {
|
||||
return billy.ErrNotSupported
|
||||
}
|
||||
|
||||
// Directory.
|
||||
fis, err := dirfs.ReadDir(path)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
// Race. It was deleted between the Lstat and Open.
|
||||
// Return nil per RemoveAll's docs.
|
||||
return nil
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// Remove contents & return first error.
|
||||
err = nil
|
||||
for _, fi := range fis {
|
||||
cpath := fs.Join(path, fi.Name())
|
||||
err1 := removeAll(fs, cpath)
|
||||
if err == nil {
|
||||
err = err1
|
||||
}
|
||||
}
|
||||
|
||||
// Remove directory.
|
||||
err1 := fs.Remove(path)
|
||||
if err1 == nil || os.IsNotExist(err1) {
|
||||
return nil
|
||||
}
|
||||
|
||||
if err == nil {
|
||||
err = err1
|
||||
}
|
||||
|
||||
return err
|
||||
|
||||
}
|
||||
|
||||
// WriteFile writes data to a file named by filename in the given filesystem.
|
||||
// If the file does not exist, WriteFile creates it with permissions perm;
|
||||
// otherwise WriteFile truncates it before writing.
|
||||
func WriteFile(fs billy.Basic, filename string, data []byte, perm os.FileMode) error {
|
||||
f, err := fs.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, perm)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
n, err := f.Write(data)
|
||||
if err == nil && n < len(data) {
|
||||
err = io.ErrShortWrite
|
||||
}
|
||||
|
||||
if err1 := f.Close(); err == nil {
|
||||
err = err1
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
// Random number state.
|
||||
// We generate random temporary file names so that there's a good
|
||||
// chance the file doesn't exist yet - keeps the number of tries in
|
||||
// TempFile to a minimum.
|
||||
var rand uint32
|
||||
var randmu sync.Mutex
|
||||
|
||||
func reseed() uint32 {
|
||||
return uint32(time.Now().UnixNano() + int64(os.Getpid()))
|
||||
}
|
||||
|
||||
func nextSuffix() string {
|
||||
randmu.Lock()
|
||||
r := rand
|
||||
if r == 0 {
|
||||
r = reseed()
|
||||
}
|
||||
r = r*1664525 + 1013904223 // constants from Numerical Recipes
|
||||
rand = r
|
||||
randmu.Unlock()
|
||||
return strconv.Itoa(int(1e9 + r%1e9))[1:]
|
||||
}
|
||||
|
||||
// TempFile creates a new temporary file in the directory dir with a name
|
||||
// beginning with prefix, opens the file for reading and writing, and returns
|
||||
// the resulting *os.File. If dir is the empty string, TempFile uses the default
|
||||
// directory for temporary files (see os.TempDir). Multiple programs calling
|
||||
// TempFile simultaneously will not choose the same file. The caller can use
|
||||
// f.Name() to find the pathname of the file. It is the caller's responsibility
|
||||
// to remove the file when no longer needed.
|
||||
func TempFile(fs billy.Basic, dir, prefix string) (f billy.File, err error) {
|
||||
// This implementation is based on stdlib ioutil.TempFile.
|
||||
if dir == "" {
|
||||
dir = getTempDir(fs)
|
||||
}
|
||||
|
||||
nconflict := 0
|
||||
for i := 0; i < 10000; i++ {
|
||||
name := filepath.Join(dir, prefix+nextSuffix())
|
||||
f, err = fs.OpenFile(name, os.O_RDWR|os.O_CREATE|os.O_EXCL, 0600)
|
||||
if os.IsExist(err) {
|
||||
if nconflict++; nconflict > 10 {
|
||||
randmu.Lock()
|
||||
rand = reseed()
|
||||
randmu.Unlock()
|
||||
}
|
||||
continue
|
||||
}
|
||||
break
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// TempDir creates a new temporary directory in the directory dir
|
||||
// with a name beginning with prefix and returns the path of the
|
||||
// new directory. If dir is the empty string, TempDir uses the
|
||||
// default directory for temporary files (see os.TempDir).
|
||||
// Multiple programs calling TempDir simultaneously
|
||||
// will not choose the same directory. It is the caller's responsibility
|
||||
// to remove the directory when no longer needed.
|
||||
func TempDir(fs billy.Dir, dir, prefix string) (name string, err error) {
|
||||
// This implementation is based on stdlib ioutil.TempDir
|
||||
|
||||
if dir == "" {
|
||||
dir = getTempDir(fs.(billy.Basic))
|
||||
}
|
||||
|
||||
nconflict := 0
|
||||
for i := 0; i < 10000; i++ {
|
||||
try := filepath.Join(dir, prefix+nextSuffix())
|
||||
err = fs.MkdirAll(try, 0700)
|
||||
if os.IsExist(err) {
|
||||
if nconflict++; nconflict > 10 {
|
||||
randmu.Lock()
|
||||
rand = reseed()
|
||||
randmu.Unlock()
|
||||
}
|
||||
continue
|
||||
}
|
||||
if os.IsNotExist(err) {
|
||||
if _, err := os.Stat(dir); os.IsNotExist(err) {
|
||||
return "", err
|
||||
}
|
||||
}
|
||||
if err == nil {
|
||||
name = try
|
||||
}
|
||||
break
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func getTempDir(fs billy.Basic) string {
|
||||
ch, ok := fs.(billy.Chroot)
|
||||
if !ok || ch.Root() == "" || ch.Root() == "/" || ch.Root() == string(filepath.Separator) {
|
||||
return os.TempDir()
|
||||
}
|
||||
|
||||
return ".tmp"
|
||||
}
|
||||
|
||||
type underlying interface {
|
||||
Underlying() billy.Basic
|
||||
}
|
||||
|
||||
func getUnderlyingAndPath(fs billy.Basic, path string) (billy.Basic, string) {
|
||||
u, ok := fs.(underlying)
|
||||
if !ok {
|
||||
return fs, path
|
||||
}
|
||||
if ch, ok := fs.(billy.Chroot); ok {
|
||||
path = fs.Join(ch.Root(), path)
|
||||
}
|
||||
|
||||
return u.Underlying(), path
|
||||
}
|
||||
|
||||
// ReadFile reads the named file and returns the contents from the given filesystem.
|
||||
// A successful call returns err == nil, not err == EOF.
|
||||
// Because ReadFile reads the whole file, it does not treat an EOF from Read
|
||||
// as an error to be reported.
|
||||
func ReadFile(fs billy.Basic, name string) ([]byte, error) {
|
||||
f, err := fs.Open(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
defer f.Close()
|
||||
|
||||
var size int
|
||||
if info, err := fs.Stat(name); err == nil {
|
||||
size64 := info.Size()
|
||||
if int64(int(size64)) == size64 {
|
||||
size = int(size64)
|
||||
}
|
||||
}
|
||||
|
||||
size++ // one byte for final read at EOF
|
||||
// If a file claims a small size, read at least 512 bytes.
|
||||
// In particular, files in Linux's /proc claim size 0 but
|
||||
// then do not work right if read in small pieces,
|
||||
// so an initial read of 1 byte would not work correctly.
|
||||
|
||||
if size < 512 {
|
||||
size = 512
|
||||
}
|
||||
|
||||
data := make([]byte, 0, size)
|
||||
for {
|
||||
if len(data) >= cap(data) {
|
||||
d := append(data[:cap(data)], 0)
|
||||
data = d[:len(data)]
|
||||
}
|
||||
|
||||
n, err := f.Read(data[len(data):cap(data)])
|
||||
data = data[:len(data)+n]
|
||||
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
err = nil
|
||||
}
|
||||
|
||||
return data, err
|
||||
}
|
||||
}
|
||||
}
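
A sketch exercising TempFile and RemoveAll against memfs (directory and prefix names are illustrative; the temp file suffix is random):

```go
package main

import (
	"fmt"

	"github.com/go-git/go-billy/v5/memfs"
	"github.com/go-git/go-billy/v5/util"
)

func main() {
	fs := memfs.New()

	f, err := util.TempFile(fs, "/work", "blob-")
	if err != nil {
		panic(err)
	}
	fmt.Println("created", f.Name()) // unique name under /work with the "blob-" prefix
	f.Close()

	// RemoveAll recurses through billy.Dir when the fs has no native RemoveAll.
	if err := util.RemoveAll(fs, "/work"); err != nil {
		panic(err)
	}
	_, err = fs.Stat("/work")
	fmt.Println(err != nil) // true: the directory is gone
}
```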
|
72
vendor/github.com/go-git/go-billy/v5/util/walk.go
generated
vendored
Normal file
72
vendor/github.com/go-git/go-billy/v5/util/walk.go
generated
vendored
Normal file
|
@ -0,0 +1,72 @@
|
|||
package util
|
||||
|
||||
import (
|
||||
"os"
|
||||
"path/filepath"
|
||||
|
||||
"github.com/go-git/go-billy/v5"
|
||||
)
|
||||
|
||||
// walk recursively descends path, calling walkFn
|
||||
// adapted from https://golang.org/src/path/filepath/path.go
|
||||
func walk(fs billy.Filesystem, path string, info os.FileInfo, walkFn filepath.WalkFunc) error {
|
||||
if !info.IsDir() {
|
||||
return walkFn(path, info, nil)
|
||||
}
|
||||
|
||||
names, err := readdirnames(fs, path)
|
||||
err1 := walkFn(path, info, err)
|
||||
// If err != nil, walk can't walk into this directory.
|
||||
// err1 != nil means walkFn wants walk to skip this directory or stop walking.
|
||||
// Therefore, if one of err and err1 isn't nil, walk will return.
|
||||
if err != nil || err1 != nil {
|
||||
// The caller's behavior is controlled by the return value, which is decided
|
||||
// by walkFn. walkFn may ignore err and return nil.
|
||||
// If walkFn returns SkipDir, it will be handled by the caller.
|
||||
// So walk should return whatever walkFn returns.
|
||||
return err1
|
||||
}
|
||||
|
||||
for _, name := range names {
|
||||
filename := filepath.Join(path, name)
|
||||
fileInfo, err := fs.Lstat(filename)
|
||||
if err != nil {
|
||||
if err := walkFn(filename, fileInfo, err); err != nil && err != filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
err = walk(fs, filename, fileInfo, walkFn)
|
||||
if err != nil {
|
||||
if !fileInfo.IsDir() || err != filepath.SkipDir {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Walk walks the file tree rooted at root, calling fn for each file or
|
||||
// directory in the tree, including root. All errors that arise visiting files
|
||||
// and directories are filtered by fn: see the WalkFunc documentation for
|
||||
// details.
|
||||
//
|
||||
// The files are walked in lexical order, which makes the output deterministic
|
||||
// but requires Walk to read an entire directory into memory before proceeding
|
||||
// to walk that directory. Walk does not follow symbolic links.
|
||||
//
|
||||
// Function adapted from https://github.com/golang/go/blob/3b770f2ccb1fa6fecc22ea822a19447b10b70c5c/src/path/filepath/path.go#L500
|
||||
func Walk(fs billy.Filesystem, root string, walkFn filepath.WalkFunc) error {
|
||||
info, err := fs.Lstat(root)
|
||||
if err != nil {
|
||||
err = walkFn(root, nil, err)
|
||||
} else {
|
||||
err = walk(fs, root, info, walkFn)
|
||||
}
|
||||
|
||||
if err == filepath.SkipDir {
|
||||
return nil
|
||||
}
|
||||
|
||||
return err
|
||||
}
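
A sketch of Walk over memfs, printing every entry under a directory in lexical order (paths are illustrative):

```go
package main

import (
	"fmt"
	"os"

	"github.com/go-git/go-billy/v5/memfs"
	"github.com/go-git/go-billy/v5/util"
)

func main() {
	fs := memfs.New()
	for _, name := range []string{"/a/x.txt", "/a/b/y.txt"} {
		if err := util.WriteFile(fs, name, []byte("data"), 0644); err != nil {
			panic(err)
		}
	}

	// Visit every entry under /a, directories first within each level.
	err := util.Walk(fs, "/a", func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		fmt.Println(path)
		return nil
	})
	if err != nil {
		panic(err)
	}
	// Prints: /a, /a/b, /a/b/y.txt, /a/x.txt
}
```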
|
201
vendor/github.com/go-git/go-git/v5/LICENSE
generated
vendored
Normal file
201
vendor/github.com/go-git/go-git/v5/LICENSE
generated
vendored
Normal file
|
@ -0,0 +1,201 @@
|
|||
Apache License
|
||||
Version 2.0, January 2004
|
||||
http://www.apache.org/licenses/
|
||||
|
||||
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
|
||||
|
||||
1. Definitions.
|
||||
|
||||
"License" shall mean the terms and conditions for use, reproduction,
|
||||
and distribution as defined by Sections 1 through 9 of this document.
|
||||
|
||||
"Licensor" shall mean the copyright owner or entity authorized by
|
||||
the copyright owner that is granting the License.
|
||||
|
||||
"Legal Entity" shall mean the union of the acting entity and all
|
||||
other entities that control, are controlled by, or are under common
|
||||
control with that entity. For the purposes of this definition,
|
||||
"control" means (i) the power, direct or indirect, to cause the
|
||||
direction or management of such entity, whether by contract or
|
||||
otherwise, or (ii) ownership of fifty percent (50%) or more of the
|
||||
outstanding shares, or (iii) beneficial ownership of such entity.
|
||||
|
||||
"You" (or "Your") shall mean an individual or Legal Entity
|
||||
exercising permissions granted by this License.
|
||||
|
||||
"Source" form shall mean the preferred form for making modifications,
|
||||
including but not limited to software source code, documentation
|
||||
source, and configuration files.
|
||||
|
||||
"Object" form shall mean any form resulting from mechanical
|
||||
transformation or translation of a Source form, including but
|
||||
not limited to compiled object code, generated documentation,
|
||||
and conversions to other media types.
|
||||
|
||||
"Work" shall mean the work of authorship, whether in Source or
|
||||
Object form, made available under the License, as indicated by a
|
||||
copyright notice that is included in or attached to the work
|
||||
(an example is provided in the Appendix below).
|
||||
|
||||
"Derivative Works" shall mean any work, whether in Source or Object
|
||||
form, that is based on (or derived from) the Work and for which the
|
||||
editorial revisions, annotations, elaborations, or other modifications
|
||||
represent, as a whole, an original work of authorship. For the purposes
|
||||
of this License, Derivative Works shall not include works that remain
|
||||
separable from, or merely link (or bind by name) to the interfaces of,
|
||||
the Work and Derivative Works thereof.
|
||||
|
||||
"Contribution" shall mean any work of authorship, including
|
||||
the original version of the Work and any modifications or additions
|
||||
to that Work or Derivative Works thereof, that is intentionally
|
||||
submitted to Licensor for inclusion in the Work by the copyright owner
|
||||
or by an individual or Legal Entity authorized to submit on behalf of
|
||||
the copyright owner. For the purposes of this definition, "submitted"
|
||||
means any form of electronic, verbal, or written communication sent
|
||||
to the Licensor or its representatives, including but not limited to
|
||||
communication on electronic mailing lists, source code control systems,
|
||||
and issue tracking systems that are managed by, or on behalf of, the
|
||||
Licensor for the purpose of discussing and improving the Work, but
|
||||
excluding communication that is conspicuously marked or otherwise
|
||||
designated in writing by the copyright owner as "Not a Contribution."
|
||||
|
||||
"Contributor" shall mean Licensor and any individual or Legal Entity
|
||||
on behalf of whom a Contribution has been received by Licensor and
|
||||
subsequently incorporated within the Work.
|
||||
|
||||
2. Grant of Copyright License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
copyright license to reproduce, prepare Derivative Works of,
|
||||
publicly display, publicly perform, sublicense, and distribute the
|
||||
Work and such Derivative Works in Source or Object form.
|
||||
|
||||
3. Grant of Patent License. Subject to the terms and conditions of
|
||||
this License, each Contributor hereby grants to You a perpetual,
|
||||
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
|
||||
(except as stated in this section) patent license to make, have made,
|
||||
use, offer to sell, sell, import, and otherwise transfer the Work,
|
||||
where such license applies only to those patent claims licensable
|
||||
by such Contributor that are necessarily infringed by their
|
||||
Contribution(s) alone or by combination of their Contribution(s)
|
||||
with the Work to which such Contribution(s) was submitted. If You
|
||||
institute patent litigation against any entity (including a
|
||||
cross-claim or counterclaim in a lawsuit) alleging that the Work
|
||||
or a Contribution incorporated within the Work constitutes direct
|
||||
or contributory patent infringement, then any patent licenses
|
||||
granted to You under this License for that Work shall terminate
|
||||
as of the date such litigation is filed.
|
||||
|
||||
4. Redistribution. You may reproduce and distribute copies of the
|
||||
Work or Derivative Works thereof in any medium, with or without
|
||||
modifications, and in Source or Object form, provided that You
|
||||
meet the following conditions:
|
||||
|
||||
(a) You must give any other recipients of the Work or
|
||||
Derivative Works a copy of this License; and
|
||||
|
||||
(b) You must cause any modified files to carry prominent notices
|
||||
stating that You changed the files; and
|
||||
|
||||
(c) You must retain, in the Source form of any Derivative Works
|
||||
that You distribute, all copyright, patent, trademark, and
|
||||
attribution notices from the Source form of the Work,
|
||||
excluding those notices that do not pertain to any part of
|
||||
the Derivative Works; and
|
||||
|
||||
(d) If the Work includes a "NOTICE" text file as part of its
|
||||
distribution, then any Derivative Works that You distribute must
|
||||
include a readable copy of the attribution notices contained
|
||||
within such NOTICE file, excluding those notices that do not
|
||||
pertain to any part of the Derivative Works, in at least one
|
||||
of the following places: within a NOTICE text file distributed
|
||||
as part of the Derivative Works; within the Source form or
|
||||
documentation, if provided along with the Derivative Works; or,
|
||||
within a display generated by the Derivative Works, if and
|
||||
wherever such third-party notices normally appear. The contents
|
||||
of the NOTICE file are for informational purposes only and
|
||||
do not modify the License. You may add Your own attribution
|
||||
notices within Derivative Works that You distribute, alongside
|
||||
or as an addendum to the NOTICE text from the Work, provided
|
||||
that such additional attribution notices cannot be construed
|
||||
as modifying the License.
|
||||
|
||||
You may add Your own copyright statement to Your modifications and
|
||||
may provide additional or different license terms and conditions
|
||||
for use, reproduction, or distribution of Your modifications, or
|
||||
for any such Derivative Works as a whole, provided Your use,
|
||||
reproduction, and distribution of the Work otherwise complies with
|
||||
the conditions stated in this License.
|
||||
|
||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
||||
any Contribution intentionally submitted for inclusion in the Work
|
||||
by You to the Licensor shall be under the terms and conditions of
|
||||
this License, without any additional terms or conditions.
|
||||
Notwithstanding the above, nothing herein shall supersede or modify
|
||||
the terms of any separate license agreement you may have executed
|
||||
with Licensor regarding such Contributions.
|
||||
|
||||
6. Trademarks. This License does not grant permission to use the trade
|
||||
names, trademarks, service marks, or product names of the Licensor,
|
||||
except as required for reasonable and customary use in describing the
|
||||
origin of the Work and reproducing the content of the NOTICE file.
|
||||
|
||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
||||
agreed to in writing, Licensor provides the Work (and each
|
||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
||||
implied, including, without limitation, any warranties or conditions
|
||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
||||
appropriateness of using or redistributing the Work and assume any
|
||||
risks associated with Your exercise of permissions under this License.
|
||||
|
||||
8. Limitation of Liability. In no event and under no legal theory,
|
||||
whether in tort (including negligence), contract, or otherwise,
|
||||
unless required by applicable law (such as deliberate and grossly
|
||||
negligent acts) or agreed to in writing, shall any Contributor be
|
||||
liable to You for damages, including any direct, indirect, special,
|
||||
incidental, or consequential damages of any character arising as a
|
||||
result of this License or out of the use or inability to use the
|
||||
Work (including but not limited to damages for loss of goodwill,
|
||||
work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses), even if such Contributor
|
||||
has been advised of the possibility of such damages.
|
||||
|
||||
9. Accepting Warranty or Additional Liability. While redistributing
|
||||
the Work or Derivative Works thereof, You may choose to offer,
|
||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
||||
or other liability obligations and/or rights consistent with this
|
||||
License. However, in accepting such obligations, You may act only
|
||||
on Your own behalf and on Your sole responsibility, not on behalf
|
||||
of any other Contributor, and only if You agree to indemnify,
|
||||
defend, and hold each Contributor harmless for any liability
|
||||
incurred by, or claims asserted against, such Contributor by reason
|
||||
of your accepting any such warranty or additional liability.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
APPENDIX: How to apply the Apache License to your work.
|
||||
|
||||
To apply the Apache License to your work, attach the following
|
||||
boilerplate notice, with the fields enclosed by brackets "{}"
|
||||
replaced with your own identifying information. (Don't include
|
||||
the brackets!) The text should be enclosed in the appropriate
|
||||
comment syntax for the file format. We also recommend that a
|
||||
file or class name and description of purpose be included on the
|
||||
same "printed page" as the copyright notice for easier
|
||||
identification within third-party archives.
|
||||
|
||||
Copyright 2018 Sourced Technologies, S.L.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
123
vendor/github.com/go-git/go-git/v5/config/branch.go
generated
vendored
Normal file
|
@ -0,0 +1,123 @@
|
|||
package config
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"strings"
|
||||
|
||||
"github.com/go-git/go-git/v5/plumbing"
|
||||
format "github.com/go-git/go-git/v5/plumbing/format/config"
|
||||
)
|
||||
|
||||
var (
|
||||
errBranchEmptyName = errors.New("branch config: empty name")
|
||||
errBranchInvalidMerge = errors.New("branch config: invalid merge")
|
||||
errBranchInvalidRebase = errors.New("branch config: rebase must be one of 'true' or 'interactive'")
|
||||
)
|
||||
|
||||
// Branch contains information on the
|
||||
// local branches and which remote to track
|
||||
type Branch struct {
|
||||
// Name of branch
|
||||
Name string
|
||||
// Remote name of remote to track
|
||||
Remote string
|
||||
// Merge is the local refspec for the branch
|
||||
Merge plumbing.ReferenceName
|
||||
// Rebase instead of merge when pulling. Valid values are
|
||||
// "true" and "interactive". "false" is undocumented and
|
||||
// typically represented by the non-existence of this field
|
||||
Rebase string
|
||||
// Description explains what the branch is for.
|
||||
// Multi-line explanations may be used.
|
||||
//
|
||||
// Original git command to edit:
|
||||
// git branch --edit-description
|
||||
Description string
|
||||
|
||||
raw *format.Subsection
|
||||
}
|
||||
|
||||
// Validate validates fields of branch
|
||||
func (b *Branch) Validate() error {
|
||||
if b.Name == "" {
|
||||
return errBranchEmptyName
|
||||
}
|
||||
|
||||
if b.Merge != "" && !b.Merge.IsBranch() {
|
||||
return errBranchInvalidMerge
|
||||
}
|
||||
|
||||
if b.Rebase != "" &&
|
||||
b.Rebase != "true" &&
|
||||
b.Rebase != "interactive" &&
|
||||
b.Rebase != "false" {
|
||||
return errBranchInvalidRebase
|
||||
}
|
||||
|
||||
return plumbing.NewBranchReferenceName(b.Name).Validate()
|
||||
}
|
||||
|
||||
func (b *Branch) marshal() *format.Subsection {
|
||||
if b.raw == nil {
|
||||
b.raw = &format.Subsection{}
|
||||
}
|
||||
|
||||
b.raw.Name = b.Name
|
||||
|
||||
if b.Remote == "" {
|
||||
b.raw.RemoveOption(remoteSection)
|
||||
} else {
|
||||
b.raw.SetOption(remoteSection, b.Remote)
|
||||
}
|
||||
|
||||
if b.Merge == "" {
|
||||
b.raw.RemoveOption(mergeKey)
|
||||
} else {
|
||||
b.raw.SetOption(mergeKey, string(b.Merge))
|
||||
}
|
||||
|
||||
if b.Rebase == "" {
|
||||
b.raw.RemoveOption(rebaseKey)
|
||||
} else {
|
||||
b.raw.SetOption(rebaseKey, b.Rebase)
|
||||
}
|
||||
|
||||
if b.Description == "" {
|
||||
b.raw.RemoveOption(descriptionKey)
|
||||
} else {
|
||||
desc := quoteDescription(b.Description)
|
||||
b.raw.SetOption(descriptionKey, desc)
|
||||
}
|
||||
|
||||
return b.raw
|
||||
}
|
||||
|
||||
// hack to trigger conditional quoting in the
|
||||
// plumbing/format/config/Encoder.encodeOptions
|
||||
//
|
||||
// Current Encoder implementation uses Go %q format if value contains a backslash character,
|
||||
// which is not consistent with reference git implementation.
|
||||
// git just replaces newline characters with \n, while Encoder prints them directly.
|
||||
// Until value quoting fix, we should escape description value by replacing newline characters with \n.
|
||||
func quoteDescription(desc string) string {
|
||||
return strings.ReplaceAll(desc, "\n", `\n`)
|
||||
}
|
||||
|
||||
func (b *Branch) unmarshal(s *format.Subsection) error {
|
||||
b.raw = s
|
||||
|
||||
b.Name = b.raw.Name
|
||||
b.Remote = b.raw.Options.Get(remoteSection)
|
||||
b.Merge = plumbing.ReferenceName(b.raw.Options.Get(mergeKey))
|
||||
b.Rebase = b.raw.Options.Get(rebaseKey)
|
||||
b.Description = unquoteDescription(b.raw.Options.Get(descriptionKey))
|
||||
|
||||
return b.Validate()
|
||||
}
|
||||
|
||||
// hack to enable conditional quoting in the
|
||||
// plumbing/format/config/Encoder.encodeOptions
|
||||
// goto quoteDescription for details.
|
||||
func unquoteDescription(desc string) string {
|
||||
return strings.ReplaceAll(desc, `\n`, "\n")
|
||||
}
|
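As a quick illustration of the validation rules encoded in branch.go above (non-empty name, a refs/heads merge target, and a restricted rebase value), the following sketch builds a Branch by hand and validates it; the branch and remote names are made up.

```go
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/config"
	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	b := &config.Branch{
		Name:   "feature",
		Remote: "origin",
		Merge:  plumbing.ReferenceName("refs/heads/feature"),
		Rebase: "interactive",
	}

	// Validate rejects an empty name, a merge target outside refs/heads/*,
	// and rebase values other than "true", "interactive" or "false".
	if err := b.Validate(); err != nil {
		fmt.Println("invalid branch config:", err)
		return
	}
	fmt.Println("branch config is valid")
}
```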
696
vendor/github.com/go-git/go-git/v5/config/config.go
generated
vendored
Normal file
|
@ -0,0 +1,696 @@
|
|||
// Package config contains the abstraction of multiple config files
|
||||
package config
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strconv"
|
||||
|
||||
"github.com/go-git/go-billy/v5/osfs"
|
||||
"github.com/go-git/go-git/v5/internal/url"
|
||||
"github.com/go-git/go-git/v5/plumbing"
|
||||
format "github.com/go-git/go-git/v5/plumbing/format/config"
|
||||
)
|
||||
|
||||
const (
|
||||
// DefaultFetchRefSpec is the default refspec used for fetch.
|
||||
DefaultFetchRefSpec = "+refs/heads/*:refs/remotes/%s/*"
|
||||
// DefaultPushRefSpec is the default refspec used for push.
|
||||
DefaultPushRefSpec = "refs/heads/*:refs/heads/*"
|
||||
)
|
||||
|
||||
// ConfigStorer generic storage of Config object
|
||||
type ConfigStorer interface {
|
||||
Config() (*Config, error)
|
||||
SetConfig(*Config) error
|
||||
}
|
||||
|
||||
var (
|
||||
ErrInvalid = errors.New("config invalid key in remote or branch")
|
||||
ErrRemoteConfigNotFound = errors.New("remote config not found")
|
||||
ErrRemoteConfigEmptyURL = errors.New("remote config: empty URL")
|
||||
ErrRemoteConfigEmptyName = errors.New("remote config: empty name")
|
||||
)
|
||||
|
||||
// Scope defines the scope of a config file, such as local, global or system.
|
||||
type Scope int
|
||||
|
||||
// Available ConfigScope's
|
||||
const (
|
||||
LocalScope Scope = iota
|
||||
GlobalScope
|
||||
SystemScope
|
||||
)
|
||||
|
||||
// Config contains the repository configuration
|
||||
// https://www.kernel.org/pub/software/scm/git/docs/git-config.html#FILES
|
||||
type Config struct {
|
||||
Core struct {
|
||||
// IsBare if true this repository is assumed to be bare and has no
|
||||
// working directory associated with it.
|
||||
IsBare bool
|
||||
// Worktree is the path to the root of the working tree.
|
||||
Worktree string
|
||||
// CommentChar is the character indicating the start of a
|
||||
// comment for commands like commit and tag
|
||||
CommentChar string
|
||||
// RepositoryFormatVersion identifies the repository format and layout version.
|
||||
RepositoryFormatVersion format.RepositoryFormatVersion
|
||||
}
|
||||
|
||||
User struct {
|
||||
// Name is the personal name of the author and the committer of a commit.
|
||||
Name string
|
||||
// Email is the email of the author and the committer of a commit.
|
||||
Email string
|
||||
}
|
||||
|
||||
Author struct {
|
||||
// Name is the personal name of the author of a commit.
|
||||
Name string
|
||||
// Email is the email of the author of a commit.
|
||||
Email string
|
||||
}
|
||||
|
||||
Committer struct {
|
||||
// Name is the personal name of the committer of a commit.
|
||||
Name string
|
||||
// Email is the email of the committer of a commit.
|
||||
Email string
|
||||
}
|
||||
|
||||
Pack struct {
|
||||
// Window controls the size of the sliding window for delta
|
||||
// compression. The default is 10. A value of 0 turns off
|
||||
// delta compression entirely.
|
||||
Window uint
|
||||
}
|
||||
|
||||
Init struct {
|
||||
// DefaultBranch Allows overriding the default branch name
|
||||
// e.g. when initializing a new repository or when cloning
|
||||
// an empty repository.
|
||||
DefaultBranch string
|
||||
}
|
||||
|
||||
Extensions struct {
|
||||
// ObjectFormat specifies the hash algorithm to use. The
|
||||
// acceptable values are sha1 and sha256. If not specified,
|
||||
// sha1 is assumed. It is an error to specify this key unless
|
||||
// core.repositoryFormatVersion is 1.
|
||||
//
|
||||
// This setting must not be changed after repository initialization
|
||||
// (e.g. clone or init).
|
||||
ObjectFormat format.ObjectFormat
|
||||
}
|
||||
|
||||
// Remotes list of repository remotes, the key of the map is the name
|
||||
// of the remote, should equal to RemoteConfig.Name.
|
||||
Remotes map[string]*RemoteConfig
|
||||
// Submodules list of repository submodules, the key of the map is the name
|
||||
// of the submodule, should equal to Submodule.Name.
|
||||
Submodules map[string]*Submodule
|
||||
// Branches list of branches, the key is the branch name and should
|
||||
// equal Branch.Name
|
||||
Branches map[string]*Branch
|
||||
// URLs list of url rewrite rules, if repo url starts with URL.InsteadOf value, it will be replaced with the
|
||||
// key instead.
|
||||
URLs map[string]*URL
|
||||
// Raw contains the raw information of a config file. The main goal is
|
||||
// preserve the parsed information from the original format, to avoid
|
||||
// dropping unsupported fields.
|
||||
Raw *format.Config
|
||||
}
|
||||
|
||||
// NewConfig returns a new empty Config.
|
||||
func NewConfig() *Config {
|
||||
config := &Config{
|
||||
Remotes: make(map[string]*RemoteConfig),
|
||||
Submodules: make(map[string]*Submodule),
|
||||
Branches: make(map[string]*Branch),
|
||||
URLs: make(map[string]*URL),
|
||||
Raw: format.New(),
|
||||
}
|
||||
|
||||
config.Pack.Window = DefaultPackWindow
|
||||
|
||||
return config
|
||||
}
|
||||
|
||||
// ReadConfig reads a config file from a io.Reader.
|
||||
func ReadConfig(r io.Reader) (*Config, error) {
|
||||
b, err := io.ReadAll(r)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
cfg := NewConfig()
|
||||
if err = cfg.Unmarshal(b); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return cfg, nil
|
||||
}
|
||||
|
||||
// LoadConfig loads a config file from a given scope. The returned Config,
|
||||
// contains exclusively information from the given scope. If it couldn't find a
|
||||
// config file to the given scope, an empty one is returned.
|
||||
func LoadConfig(scope Scope) (*Config, error) {
|
||||
if scope == LocalScope {
|
||||
return nil, fmt.Errorf("LocalScope should be read from the a ConfigStorer")
|
||||
}
|
||||
|
||||
files, err := Paths(scope)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
for _, file := range files {
|
||||
f, err := osfs.Default.Open(file)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
continue
|
||||
}
|
||||
|
||||
return nil, err
|
||||
}
|
||||
|
||||
defer f.Close()
|
||||
return ReadConfig(f)
|
||||
}
|
||||
|
||||
return NewConfig(), nil
|
||||
}
|
||||
|
||||
// Paths returns the config file location for a given scope.
|
||||
func Paths(scope Scope) ([]string, error) {
|
||||
var files []string
|
||||
switch scope {
|
||||
case GlobalScope:
|
||||
xdg := os.Getenv("XDG_CONFIG_HOME")
|
||||
if xdg != "" {
|
||||
files = append(files, filepath.Join(xdg, "git/config"))
|
||||
}
|
||||
|
||||
home, err := os.UserHomeDir()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
files = append(files,
|
||||
filepath.Join(home, ".gitconfig"),
|
||||
filepath.Join(home, ".config/git/config"),
|
||||
)
|
||||
case SystemScope:
|
||||
files = append(files, "/etc/gitconfig")
|
||||
}
|
||||
|
||||
return files, nil
|
||||
}
|
||||
|
||||
// Validate validates the fields and sets the default values.
|
||||
func (c *Config) Validate() error {
|
||||
for name, r := range c.Remotes {
|
||||
if r.Name != name {
|
||||
return ErrInvalid
|
||||
}
|
||||
|
||||
if err := r.Validate(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
for name, b := range c.Branches {
|
||||
if b.Name != name {
|
||||
return ErrInvalid
|
||||
}
|
||||
|
||||
if err := b.Validate(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
const (
|
||||
remoteSection = "remote"
|
||||
submoduleSection = "submodule"
|
||||
branchSection = "branch"
|
||||
coreSection = "core"
|
||||
packSection = "pack"
|
||||
userSection = "user"
|
||||
authorSection = "author"
|
||||
committerSection = "committer"
|
||||
initSection = "init"
|
||||
urlSection = "url"
|
||||
extensionsSection = "extensions"
|
||||
fetchKey = "fetch"
|
||||
urlKey = "url"
|
||||
bareKey = "bare"
|
||||
worktreeKey = "worktree"
|
||||
commentCharKey = "commentChar"
|
||||
windowKey = "window"
|
||||
mergeKey = "merge"
|
||||
rebaseKey = "rebase"
|
||||
nameKey = "name"
|
||||
emailKey = "email"
|
||||
descriptionKey = "description"
|
||||
defaultBranchKey = "defaultBranch"
|
||||
repositoryFormatVersionKey = "repositoryformatversion"
|
||||
objectFormat = "objectformat"
|
||||
mirrorKey = "mirror"
|
||||
|
||||
// DefaultPackWindow holds the number of previous objects used to
|
||||
// generate deltas. The value 10 is the same used by git command.
|
||||
DefaultPackWindow = uint(10)
|
||||
)
|
||||
|
||||
// Unmarshal parses a git-config file and stores it.
|
||||
func (c *Config) Unmarshal(b []byte) error {
|
||||
r := bytes.NewBuffer(b)
|
||||
d := format.NewDecoder(r)
|
||||
|
||||
c.Raw = format.New()
|
||||
if err := d.Decode(c.Raw); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.unmarshalCore()
|
||||
c.unmarshalUser()
|
||||
c.unmarshalInit()
|
||||
if err := c.unmarshalPack(); err != nil {
|
||||
return err
|
||||
}
|
||||
unmarshalSubmodules(c.Raw, c.Submodules)
|
||||
|
||||
if err := c.unmarshalBranches(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := c.unmarshalURLs(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return c.unmarshalRemotes()
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalCore() {
|
||||
s := c.Raw.Section(coreSection)
|
||||
if s.Options.Get(bareKey) == "true" {
|
||||
c.Core.IsBare = true
|
||||
}
|
||||
|
||||
c.Core.Worktree = s.Options.Get(worktreeKey)
|
||||
c.Core.CommentChar = s.Options.Get(commentCharKey)
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalUser() {
|
||||
s := c.Raw.Section(userSection)
|
||||
c.User.Name = s.Options.Get(nameKey)
|
||||
c.User.Email = s.Options.Get(emailKey)
|
||||
|
||||
s = c.Raw.Section(authorSection)
|
||||
c.Author.Name = s.Options.Get(nameKey)
|
||||
c.Author.Email = s.Options.Get(emailKey)
|
||||
|
||||
s = c.Raw.Section(committerSection)
|
||||
c.Committer.Name = s.Options.Get(nameKey)
|
||||
c.Committer.Email = s.Options.Get(emailKey)
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalPack() error {
|
||||
s := c.Raw.Section(packSection)
|
||||
window := s.Options.Get(windowKey)
|
||||
if window == "" {
|
||||
c.Pack.Window = DefaultPackWindow
|
||||
} else {
|
||||
winUint, err := strconv.ParseUint(window, 10, 32)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
c.Pack.Window = uint(winUint)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalRemotes() error {
|
||||
s := c.Raw.Section(remoteSection)
|
||||
for _, sub := range s.Subsections {
|
||||
r := &RemoteConfig{}
|
||||
if err := r.unmarshal(sub); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.Remotes[r.Name] = r
|
||||
}
|
||||
|
||||
// Apply insteadOf url rules
|
||||
for _, r := range c.Remotes {
|
||||
r.applyURLRules(c.URLs)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalURLs() error {
|
||||
s := c.Raw.Section(urlSection)
|
||||
for _, sub := range s.Subsections {
|
||||
r := &URL{}
|
||||
if err := r.unmarshal(sub); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.URLs[r.Name] = r
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func unmarshalSubmodules(fc *format.Config, submodules map[string]*Submodule) {
|
||||
s := fc.Section(submoduleSection)
|
||||
for _, sub := range s.Subsections {
|
||||
m := &Submodule{}
|
||||
m.unmarshal(sub)
|
||||
|
||||
if m.Validate() == ErrModuleBadPath {
|
||||
continue
|
||||
}
|
||||
|
||||
submodules[m.Name] = m
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalBranches() error {
|
||||
bs := c.Raw.Section(branchSection)
|
||||
for _, sub := range bs.Subsections {
|
||||
b := &Branch{}
|
||||
|
||||
if err := b.unmarshal(sub); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
c.Branches[b.Name] = b
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *Config) unmarshalInit() {
|
||||
s := c.Raw.Section(initSection)
|
||||
c.Init.DefaultBranch = s.Options.Get(defaultBranchKey)
|
||||
}
|
||||
|
||||
// Marshal returns Config encoded as a git-config file.
|
||||
func (c *Config) Marshal() ([]byte, error) {
|
||||
c.marshalCore()
|
||||
c.marshalExtensions()
|
||||
c.marshalUser()
|
||||
c.marshalPack()
|
||||
c.marshalRemotes()
|
||||
c.marshalSubmodules()
|
||||
c.marshalBranches()
|
||||
c.marshalURLs()
|
||||
c.marshalInit()
|
||||
|
||||
buf := bytes.NewBuffer(nil)
|
||||
if err := format.NewEncoder(buf).Encode(c.Raw); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
func (c *Config) marshalCore() {
|
||||
s := c.Raw.Section(coreSection)
|
||||
s.SetOption(bareKey, fmt.Sprintf("%t", c.Core.IsBare))
|
||||
if string(c.Core.RepositoryFormatVersion) != "" {
|
||||
s.SetOption(repositoryFormatVersionKey, string(c.Core.RepositoryFormatVersion))
|
||||
}
|
||||
|
||||
if c.Core.Worktree != "" {
|
||||
s.SetOption(worktreeKey, c.Core.Worktree)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) marshalExtensions() {
|
||||
// Extensions are only supported on Version 1, therefore
|
||||
// ignore them otherwise.
|
||||
if c.Core.RepositoryFormatVersion == format.Version_1 {
|
||||
s := c.Raw.Section(extensionsSection)
|
||||
s.SetOption(objectFormat, string(c.Extensions.ObjectFormat))
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) marshalUser() {
|
||||
s := c.Raw.Section(userSection)
|
||||
if c.User.Name != "" {
|
||||
s.SetOption(nameKey, c.User.Name)
|
||||
}
|
||||
|
||||
if c.User.Email != "" {
|
||||
s.SetOption(emailKey, c.User.Email)
|
||||
}
|
||||
|
||||
s = c.Raw.Section(authorSection)
|
||||
if c.Author.Name != "" {
|
||||
s.SetOption(nameKey, c.Author.Name)
|
||||
}
|
||||
|
||||
if c.Author.Email != "" {
|
||||
s.SetOption(emailKey, c.Author.Email)
|
||||
}
|
||||
|
||||
s = c.Raw.Section(committerSection)
|
||||
if c.Committer.Name != "" {
|
||||
s.SetOption(nameKey, c.Committer.Name)
|
||||
}
|
||||
|
||||
if c.Committer.Email != "" {
|
||||
s.SetOption(emailKey, c.Committer.Email)
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) marshalPack() {
|
||||
s := c.Raw.Section(packSection)
|
||||
if c.Pack.Window != DefaultPackWindow {
|
||||
s.SetOption(windowKey, fmt.Sprintf("%d", c.Pack.Window))
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) marshalRemotes() {
|
||||
s := c.Raw.Section(remoteSection)
|
||||
newSubsections := make(format.Subsections, 0, len(c.Remotes))
|
||||
added := make(map[string]bool)
|
||||
for _, subsection := range s.Subsections {
|
||||
if remote, ok := c.Remotes[subsection.Name]; ok {
|
||||
newSubsections = append(newSubsections, remote.marshal())
|
||||
added[subsection.Name] = true
|
||||
}
|
||||
}
|
||||
|
||||
remoteNames := make([]string, 0, len(c.Remotes))
|
||||
for name := range c.Remotes {
|
||||
remoteNames = append(remoteNames, name)
|
||||
}
|
||||
|
||||
sort.Strings(remoteNames)
|
||||
|
||||
for _, name := range remoteNames {
|
||||
if !added[name] {
|
||||
newSubsections = append(newSubsections, c.Remotes[name].marshal())
|
||||
}
|
||||
}
|
||||
|
||||
s.Subsections = newSubsections
|
||||
}
|
||||
|
||||
func (c *Config) marshalSubmodules() {
|
||||
s := c.Raw.Section(submoduleSection)
|
||||
s.Subsections = make(format.Subsections, len(c.Submodules))
|
||||
|
||||
var i int
|
||||
for _, r := range c.Submodules {
|
||||
section := r.marshal()
|
||||
// the submodule section at config is a subset of the .gitmodule file
|
||||
// we should remove the non-valid options for the config file.
|
||||
section.RemoveOption(pathKey)
|
||||
s.Subsections[i] = section
|
||||
i++
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) marshalBranches() {
|
||||
s := c.Raw.Section(branchSection)
|
||||
newSubsections := make(format.Subsections, 0, len(c.Branches))
|
||||
added := make(map[string]bool)
|
||||
for _, subsection := range s.Subsections {
|
||||
if branch, ok := c.Branches[subsection.Name]; ok {
|
||||
newSubsections = append(newSubsections, branch.marshal())
|
||||
added[subsection.Name] = true
|
||||
}
|
||||
}
|
||||
|
||||
branchNames := make([]string, 0, len(c.Branches))
|
||||
for name := range c.Branches {
|
||||
branchNames = append(branchNames, name)
|
||||
}
|
||||
|
||||
sort.Strings(branchNames)
|
||||
|
||||
for _, name := range branchNames {
|
||||
if !added[name] {
|
||||
newSubsections = append(newSubsections, c.Branches[name].marshal())
|
||||
}
|
||||
}
|
||||
|
||||
s.Subsections = newSubsections
|
||||
}
|
||||
|
||||
func (c *Config) marshalURLs() {
|
||||
s := c.Raw.Section(urlSection)
|
||||
s.Subsections = make(format.Subsections, len(c.URLs))
|
||||
|
||||
var i int
|
||||
for _, r := range c.URLs {
|
||||
section := r.marshal()
|
||||
// the submodule section at config is a subset of the .gitmodule file
|
||||
// we should remove the non-valid options for the config file.
|
||||
s.Subsections[i] = section
|
||||
i++
|
||||
}
|
||||
}
|
||||
|
||||
func (c *Config) marshalInit() {
|
||||
s := c.Raw.Section(initSection)
|
||||
if c.Init.DefaultBranch != "" {
|
||||
s.SetOption(defaultBranchKey, c.Init.DefaultBranch)
|
||||
}
|
||||
}
|
||||
|
||||
// RemoteConfig contains the configuration for a given remote repository.
|
||||
type RemoteConfig struct {
|
||||
// Name of the remote
|
||||
Name string
|
||||
// URLs the URLs of a remote repository. It must be non-empty. Fetch will
|
||||
// always use the first URL, while push will use all of them.
|
||||
URLs []string
|
||||
// Mirror indicates that the repository is a mirror of remote.
|
||||
Mirror bool
|
||||
|
||||
// insteadOfRulesApplied have urls been modified
|
||||
insteadOfRulesApplied bool
|
||||
// originalURLs are the urls before applying insteadOf rules
|
||||
originalURLs []string
|
||||
|
||||
// Fetch the default set of "refspec" for fetch operation
|
||||
Fetch []RefSpec
|
||||
|
||||
// raw representation of the subsection, filled by marshal or unmarshal are
|
||||
// called
|
||||
raw *format.Subsection
|
||||
}
|
||||
|
||||
// Validate validates the fields and sets the default values.
|
||||
func (c *RemoteConfig) Validate() error {
|
||||
if c.Name == "" {
|
||||
return ErrRemoteConfigEmptyName
|
||||
}
|
||||
|
||||
if len(c.URLs) == 0 {
|
||||
return ErrRemoteConfigEmptyURL
|
||||
}
|
||||
|
||||
for _, r := range c.Fetch {
|
||||
if err := r.Validate(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(c.Fetch) == 0 {
|
||||
c.Fetch = []RefSpec{RefSpec(fmt.Sprintf(DefaultFetchRefSpec, c.Name))}
|
||||
}
|
||||
|
||||
return plumbing.NewRemoteHEADReferenceName(c.Name).Validate()
|
||||
}
|
||||
|
||||
func (c *RemoteConfig) unmarshal(s *format.Subsection) error {
|
||||
c.raw = s
|
||||
|
||||
fetch := []RefSpec{}
|
||||
for _, f := range c.raw.Options.GetAll(fetchKey) {
|
||||
rs := RefSpec(f)
|
||||
if err := rs.Validate(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
fetch = append(fetch, rs)
|
||||
}
|
||||
|
||||
c.Name = c.raw.Name
|
||||
c.URLs = append([]string(nil), c.raw.Options.GetAll(urlKey)...)
|
||||
c.Fetch = fetch
|
||||
c.Mirror = c.raw.Options.Get(mirrorKey) == "true"
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (c *RemoteConfig) marshal() *format.Subsection {
|
||||
if c.raw == nil {
|
||||
c.raw = &format.Subsection{}
|
||||
}
|
||||
|
||||
c.raw.Name = c.Name
|
||||
if len(c.URLs) == 0 {
|
||||
c.raw.RemoveOption(urlKey)
|
||||
} else {
|
||||
urls := c.URLs
|
||||
if c.insteadOfRulesApplied {
|
||||
urls = c.originalURLs
|
||||
}
|
||||
|
||||
c.raw.SetOption(urlKey, urls...)
|
||||
}
|
||||
|
||||
if len(c.Fetch) == 0 {
|
||||
c.raw.RemoveOption(fetchKey)
|
||||
} else {
|
||||
var values []string
|
||||
for _, rs := range c.Fetch {
|
||||
values = append(values, rs.String())
|
||||
}
|
||||
|
||||
c.raw.SetOption(fetchKey, values...)
|
||||
}
|
||||
|
||||
if c.Mirror {
|
||||
c.raw.SetOption(mirrorKey, strconv.FormatBool(c.Mirror))
|
||||
}
|
||||
|
||||
return c.raw
|
||||
}
|
||||
|
||||
func (c *RemoteConfig) IsFirstURLLocal() bool {
|
||||
return url.IsLocalEndpoint(c.URLs[0])
|
||||
}
|
||||
|
||||
func (c *RemoteConfig) applyURLRules(urlRules map[string]*URL) {
|
||||
// save original urls
|
||||
originalURLs := make([]string, len(c.URLs))
|
||||
copy(originalURLs, c.URLs)
|
||||
|
||||
for i, url := range c.URLs {
|
||||
if matchingURLRule := findLongestInsteadOfMatch(url, urlRules); matchingURLRule != nil {
|
||||
c.URLs[i] = matchingURLRule.ApplyInsteadOf(c.URLs[i])
|
||||
c.insteadOfRulesApplied = true
|
||||
}
|
||||
}
|
||||
|
||||
if c.insteadOfRulesApplied {
|
||||
c.originalURLs = originalURLs
|
||||
}
|
||||
}
|
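The config.go file above defines both the decode path (ReadConfig/Unmarshal) and the encode path (Marshal), with Raw preserving sections it does not model. The sketch below round-trips a small configuration through those two calls; the sample config text and URL are illustrative.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/go-git/go-git/v5/config"
)

func main() {
	raw := `[core]
	bare = false
[remote "origin"]
	url = https://example.com/repo.git
	fetch = +refs/heads/*:refs/remotes/origin/*
`

	cfg, err := config.ReadConfig(strings.NewReader(raw))
	if err != nil {
		panic(err)
	}

	// Remotes are keyed by their name, matching RemoteConfig.Name.
	fmt.Println(cfg.Remotes["origin"].URLs[0]) // https://example.com/repo.git

	// Marshal re-encodes the configuration, keeping unmodelled sections via Raw.
	out, err := cfg.Marshal()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```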
139
vendor/github.com/go-git/go-git/v5/config/modules.go
generated
vendored
Normal file
|
@ -0,0 +1,139 @@
|
|||
package config
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"regexp"
|
||||
|
||||
format "github.com/go-git/go-git/v5/plumbing/format/config"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrModuleEmptyURL = errors.New("module config: empty URL")
|
||||
ErrModuleEmptyPath = errors.New("module config: empty path")
|
||||
ErrModuleBadPath = errors.New("submodule has an invalid path")
|
||||
)
|
||||
|
||||
var (
|
||||
// Matches module paths with dotdot ".." components.
|
||||
dotdotPath = regexp.MustCompile(`(^|[/\\])\.\.([/\\]|$)`)
|
||||
)
|
||||
|
||||
// Modules defines the submodules properties, represents a .gitmodules file
|
||||
// https://www.kernel.org/pub/software/scm/git/docs/gitmodules.html
|
||||
type Modules struct {
|
||||
// Submodules is a map of submodules being the key the name of the submodule.
|
||||
Submodules map[string]*Submodule
|
||||
|
||||
raw *format.Config
|
||||
}
|
||||
|
||||
// NewModules returns a new empty Modules
|
||||
func NewModules() *Modules {
|
||||
return &Modules{
|
||||
Submodules: make(map[string]*Submodule),
|
||||
raw: format.New(),
|
||||
}
|
||||
}
|
||||
|
||||
const (
|
||||
pathKey = "path"
|
||||
branchKey = "branch"
|
||||
)
|
||||
|
||||
// Unmarshal parses a git-config file and stores it.
|
||||
func (m *Modules) Unmarshal(b []byte) error {
|
||||
r := bytes.NewBuffer(b)
|
||||
d := format.NewDecoder(r)
|
||||
|
||||
m.raw = format.New()
|
||||
if err := d.Decode(m.raw); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
unmarshalSubmodules(m.raw, m.Submodules)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Marshal returns Modules encoded as a git-config file.
|
||||
func (m *Modules) Marshal() ([]byte, error) {
|
||||
s := m.raw.Section(submoduleSection)
|
||||
s.Subsections = make(format.Subsections, len(m.Submodules))
|
||||
|
||||
var i int
|
||||
for _, r := range m.Submodules {
|
||||
s.Subsections[i] = r.marshal()
|
||||
i++
|
||||
}
|
||||
|
||||
buf := bytes.NewBuffer(nil)
|
||||
if err := format.NewEncoder(buf).Encode(m.raw); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
// Submodule defines a submodule.
|
||||
type Submodule struct {
|
||||
// Name module name
|
||||
Name string
|
||||
// Path defines the path, relative to the top-level directory of the Git
|
||||
// working tree.
|
||||
Path string
|
||||
// URL defines a URL from which the submodule repository can be cloned.
|
||||
URL string
|
||||
// Branch is a remote branch name for tracking updates in the upstream
|
||||
// submodule. Optional value.
|
||||
Branch string
|
||||
|
||||
// raw representation of the subsection, filled by marshal or unmarshal are
|
||||
// called.
|
||||
raw *format.Subsection
|
||||
}
|
||||
|
||||
// Validate validates the fields and sets the default values.
|
||||
func (m *Submodule) Validate() error {
|
||||
if m.Path == "" {
|
||||
return ErrModuleEmptyPath
|
||||
}
|
||||
|
||||
if m.URL == "" {
|
||||
return ErrModuleEmptyURL
|
||||
}
|
||||
|
||||
if dotdotPath.MatchString(m.Path) {
|
||||
return ErrModuleBadPath
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *Submodule) unmarshal(s *format.Subsection) {
|
||||
m.raw = s
|
||||
|
||||
m.Name = m.raw.Name
|
||||
m.Path = m.raw.Option(pathKey)
|
||||
m.URL = m.raw.Option(urlKey)
|
||||
m.Branch = m.raw.Option(branchKey)
|
||||
}
|
||||
|
||||
func (m *Submodule) marshal() *format.Subsection {
|
||||
if m.raw == nil {
|
||||
m.raw = &format.Subsection{}
|
||||
}
|
||||
|
||||
m.raw.Name = m.Name
|
||||
if m.raw.Name == "" {
|
||||
m.raw.Name = m.Path
|
||||
}
|
||||
|
||||
m.raw.SetOption(pathKey, m.Path)
|
||||
m.raw.SetOption(urlKey, m.URL)
|
||||
|
||||
if m.Branch != "" {
|
||||
m.raw.SetOption(branchKey, m.Branch)
|
||||
}
|
||||
|
||||
return m.raw
|
||||
}
|
155
vendor/github.com/go-git/go-git/v5/config/refspec.go
generated
vendored
Normal file
|
@ -0,0 +1,155 @@
|
|||
package config
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"strings"
|
||||
|
||||
"github.com/go-git/go-git/v5/plumbing"
|
||||
)
|
||||
|
||||
const (
|
||||
refSpecWildcard = "*"
|
||||
refSpecForce = "+"
|
||||
refSpecSeparator = ":"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrRefSpecMalformedSeparator = errors.New("malformed refspec, separators are wrong")
|
||||
ErrRefSpecMalformedWildcard = errors.New("malformed refspec, mismatched number of wildcards")
|
||||
)
|
||||
|
||||
// RefSpec is a mapping from local branches to remote references.
|
||||
// The format of the refspec is an optional +, followed by <src>:<dst>, where
|
||||
// <src> is the pattern for references on the remote side and <dst> is where
|
||||
// those references will be written locally. The + tells Git to update the
|
||||
// reference even if it isn’t a fast-forward.
|
||||
// eg.: "+refs/heads/*:refs/remotes/origin/*"
|
||||
//
|
||||
// https://git-scm.com/book/en/v2/Git-Internals-The-Refspec
|
||||
type RefSpec string
|
||||
|
||||
// Validate validates the RefSpec
|
||||
func (s RefSpec) Validate() error {
|
||||
spec := string(s)
|
||||
if strings.Count(spec, refSpecSeparator) != 1 {
|
||||
return ErrRefSpecMalformedSeparator
|
||||
}
|
||||
|
||||
sep := strings.Index(spec, refSpecSeparator)
|
||||
if sep == len(spec)-1 {
|
||||
return ErrRefSpecMalformedSeparator
|
||||
}
|
||||
|
||||
ws := strings.Count(spec[0:sep], refSpecWildcard)
|
||||
wd := strings.Count(spec[sep+1:], refSpecWildcard)
|
||||
if ws == wd && ws < 2 && wd < 2 {
|
||||
return nil
|
||||
}
|
||||
|
||||
return ErrRefSpecMalformedWildcard
|
||||
}
|
||||
|
||||
// IsForceUpdate returns if update is allowed in non fast-forward merges.
|
||||
func (s RefSpec) IsForceUpdate() bool {
|
||||
return s[0] == refSpecForce[0]
|
||||
}
|
||||
|
||||
// IsDelete returns true if the refspec indicates a delete (empty src).
|
||||
func (s RefSpec) IsDelete() bool {
|
||||
return s[0] == refSpecSeparator[0]
|
||||
}
|
||||
|
||||
// IsExactSHA1 returns true if the source is a SHA1 hash.
|
||||
func (s RefSpec) IsExactSHA1() bool {
|
||||
return plumbing.IsHash(s.Src())
|
||||
}
|
||||
|
||||
// Src returns the src side.
|
||||
func (s RefSpec) Src() string {
|
||||
spec := string(s)
|
||||
|
||||
var start int
|
||||
if s.IsForceUpdate() {
|
||||
start = 1
|
||||
} else {
|
||||
start = 0
|
||||
}
|
||||
|
||||
end := strings.Index(spec, refSpecSeparator)
|
||||
return spec[start:end]
|
||||
}
|
||||
|
||||
// Match match the given plumbing.ReferenceName against the source.
|
||||
func (s RefSpec) Match(n plumbing.ReferenceName) bool {
|
||||
if !s.IsWildcard() {
|
||||
return s.matchExact(n)
|
||||
}
|
||||
|
||||
return s.matchGlob(n)
|
||||
}
|
||||
|
||||
// IsWildcard returns true if the RefSpec contains a wildcard.
|
||||
func (s RefSpec) IsWildcard() bool {
|
||||
return strings.Contains(string(s), refSpecWildcard)
|
||||
}
|
||||
|
||||
func (s RefSpec) matchExact(n plumbing.ReferenceName) bool {
|
||||
return s.Src() == n.String()
|
||||
}
|
||||
|
||||
func (s RefSpec) matchGlob(n plumbing.ReferenceName) bool {
|
||||
src := s.Src()
|
||||
name := n.String()
|
||||
wildcard := strings.Index(src, refSpecWildcard)
|
||||
|
||||
var prefix, suffix string
|
||||
prefix = src[0:wildcard]
|
||||
if len(src) > wildcard+1 {
|
||||
suffix = src[wildcard+1:]
|
||||
}
|
||||
|
||||
return len(name) >= len(prefix)+len(suffix) &&
|
||||
strings.HasPrefix(name, prefix) &&
|
||||
strings.HasSuffix(name, suffix)
|
||||
}
|
||||
|
||||
// Dst returns the destination for the given remote reference.
|
||||
func (s RefSpec) Dst(n plumbing.ReferenceName) plumbing.ReferenceName {
|
||||
spec := string(s)
|
||||
start := strings.Index(spec, refSpecSeparator) + 1
|
||||
dst := spec[start:]
|
||||
src := s.Src()
|
||||
|
||||
if !s.IsWildcard() {
|
||||
return plumbing.ReferenceName(dst)
|
||||
}
|
||||
|
||||
name := n.String()
|
||||
ws := strings.Index(src, refSpecWildcard)
|
||||
wd := strings.Index(dst, refSpecWildcard)
|
||||
match := name[ws : len(name)-(len(src)-(ws+1))]
|
||||
|
||||
return plumbing.ReferenceName(dst[0:wd] + match + dst[wd+1:])
|
||||
}
|
||||
|
||||
func (s RefSpec) Reverse() RefSpec {
|
||||
spec := string(s)
|
||||
separator := strings.Index(spec, refSpecSeparator)
|
||||
|
||||
return RefSpec(spec[separator+1:] + refSpecSeparator + spec[:separator])
|
||||
}
|
||||
|
||||
func (s RefSpec) String() string {
|
||||
return string(s)
|
||||
}
|
||||
|
||||
// MatchAny returns true if any of the RefSpec match with the given ReferenceName.
|
||||
func MatchAny(l []RefSpec, n plumbing.ReferenceName) bool {
|
||||
for _, r := range l {
|
||||
if r.Match(n) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
81
vendor/github.com/go-git/go-git/v5/config/url.go
generated
vendored
Normal file
|
@ -0,0 +1,81 @@
package config

import (
	"errors"
	"strings"

	format "github.com/go-git/go-git/v5/plumbing/format/config"
)

var (
	errURLEmptyInsteadOf = errors.New("url config: empty insteadOf")
)

// URL defines a URL rewrite rule.
type URL struct {
	// Name is the new base URL.
	Name string
	// Any URL that starts with this value will be rewritten to start, instead, with <base>.
	// When more than one insteadOf strings match a given URL, the longest match is used.
	InsteadOf string

	// raw representation of the subsection, filled when marshal or unmarshal
	// are called.
	raw *format.Subsection
}

// Validate validates the fields of the URL rule.
func (b *URL) Validate() error {
	if b.InsteadOf == "" {
		return errURLEmptyInsteadOf
	}

	return nil
}

const (
	insteadOfKey = "insteadOf"
)

func (u *URL) unmarshal(s *format.Subsection) error {
	u.raw = s

	u.Name = s.Name
	u.InsteadOf = u.raw.Option(insteadOfKey)
	return nil
}

func (u *URL) marshal() *format.Subsection {
	if u.raw == nil {
		u.raw = &format.Subsection{}
	}

	u.raw.Name = u.Name
	u.raw.SetOption(insteadOfKey, u.InsteadOf)

	return u.raw
}

func findLongestInsteadOfMatch(remoteURL string, urls map[string]*URL) *URL {
	var longestMatch *URL
	for _, u := range urls {
		if !strings.HasPrefix(remoteURL, u.InsteadOf) {
			continue
		}

		// according to the spec, if there is more than one match, take the longest
		if longestMatch == nil || len(longestMatch.InsteadOf) < len(u.InsteadOf) {
			longestMatch = u
		}
	}

	return longestMatch
}

func (u *URL) ApplyInsteadOf(url string) string {
	if !strings.HasPrefix(url, u.InsteadOf) {
		return url
	}

	return u.Name + url[len(u.InsteadOf):]
}
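A minimal sketch of the insteadOf rewriting implemented above, mirroring git's `url.<base>.insteadOf` behavior; the GitHub and GitLab URLs are made up.

```go
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/config"
)

func main() {
	// Rewrite HTTPS GitHub URLs to the SSH form.
	rule := &config.URL{
		Name:      "git@github.com:",
		InsteadOf: "https://github.com/",
	}
	if err := rule.Validate(); err != nil {
		panic(err)
	}

	fmt.Println(rule.ApplyInsteadOf("https://github.com/example/repo.git"))
	// git@github.com:example/repo.git

	// URLs that don't share the prefix pass through untouched.
	fmt.Println(rule.ApplyInsteadOf("https://gitlab.com/example/repo.git"))
}
```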
39
vendor/github.com/go-git/go-git/v5/internal/url/url.go
generated
vendored
Normal file
|
@ -0,0 +1,39 @@
package url

import (
	"regexp"
)

var (
	isSchemeRegExp = regexp.MustCompile(`^[^:]+://`)

	// Ref: https://github.com/git/git/blob/master/Documentation/urls.txt#L37
	scpLikeUrlRegExp = regexp.MustCompile(`^(?:(?P<user>[^@]+)@)?(?P<host>[^:\s]+):(?:(?P<port>[0-9]{1,5}):)?(?P<path>[^\\].*)$`)
)

// MatchesScheme returns true if the given string matches a URL-like
// format scheme.
func MatchesScheme(url string) bool {
	return isSchemeRegExp.MatchString(url)
}

// MatchesScpLike returns true if the given string matches an SCP-like
// format scheme.
func MatchesScpLike(url string) bool {
	return scpLikeUrlRegExp.MatchString(url)
}

// FindScpLikeComponents returns the user, host, port and path of the
// given SCP-like URL.
func FindScpLikeComponents(url string) (user, host, port, path string) {
	m := scpLikeUrlRegExp.FindStringSubmatch(url)
	return m[1], m[2], m[3], m[4]
}

// IsLocalEndpoint returns true if the given URL string specifies a
// local file endpoint. For example, on a Linux machine,
// `/home/user/src/go-git` would match as a local endpoint, but
// `https://github.com/src-d/go-git` would not.
func IsLocalEndpoint(url string) bool {
	return !MatchesScheme(url) && !MatchesScpLike(url)
}
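Because this package is internal to the module, the sketch below is written as a test that would sit inside the package itself; the SCP-like URL is made up and the test is purely illustrative of the matchers above.

```go
package url

import "testing"

// Illustrates the SCP-like matcher and component extraction defined above.
func TestFindScpLikeComponentsSketch(t *testing.T) {
	u := "git@github.com:example/repo.git"

	if !MatchesScpLike(u) {
		t.Fatalf("expected %q to match the SCP-like form", u)
	}

	user, host, port, path := FindScpLikeComponents(u)
	if user != "git" || host != "github.com" || port != "" || path != "example/repo.git" {
		t.Fatalf("unexpected components: %q %q %q %q", user, host, port, path)
	}

	// Plain filesystem paths match neither scheme, so they count as local.
	if !IsLocalEndpoint("/home/user/src/go-git") {
		t.Fatal("plain paths should be treated as local endpoints")
	}
}
```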
35
vendor/github.com/go-git/go-git/v5/plumbing/error.go
generated
vendored
Normal file
|
@ -0,0 +1,35 @@
package plumbing

import "fmt"

type PermanentError struct {
	Err error
}

func NewPermanentError(err error) *PermanentError {
	if err == nil {
		return nil
	}

	return &PermanentError{Err: err}
}

func (e *PermanentError) Error() string {
	return fmt.Sprintf("permanent client error: %s", e.Err.Error())
}

type UnexpectedError struct {
	Err error
}

func NewUnexpectedError(err error) *UnexpectedError {
	if err == nil {
		return nil
	}

	return &UnexpectedError{Err: err}
}

func (e *UnexpectedError) Error() string {
	return fmt.Sprintf("unexpected client error: %s", e.Err.Error())
}
109
vendor/github.com/go-git/go-git/v5/plumbing/format/config/common.go
generated
vendored
Normal file
|
@ -0,0 +1,109 @@
|
|||
package config
|
||||
|
||||
// New creates a new config instance.
|
||||
func New() *Config {
|
||||
return &Config{}
|
||||
}
|
||||
|
||||
// Config contains all the sections, comments and includes from a config file.
|
||||
type Config struct {
|
||||
Comment *Comment
|
||||
Sections Sections
|
||||
Includes Includes
|
||||
}
|
||||
|
||||
// Includes is a list of Includes in a config file.
|
||||
type Includes []*Include
|
||||
|
||||
// Include is a reference to an included config file.
|
||||
type Include struct {
|
||||
Path string
|
||||
Config *Config
|
||||
}
|
||||
|
||||
// Comment string without the prefix '#' or ';'.
|
||||
type Comment string
|
||||
|
||||
const (
|
||||
// NoSubsection token is passed to Config.Section and Config.SetSection to
|
||||
// represent the absence of a section.
|
||||
NoSubsection = ""
|
||||
)
|
||||
|
||||
// Section returns a existing section with the given name or creates a new one.
|
||||
func (c *Config) Section(name string) *Section {
|
||||
for i := len(c.Sections) - 1; i >= 0; i-- {
|
||||
s := c.Sections[i]
|
||||
if s.IsName(name) {
|
||||
return s
|
||||
}
|
||||
}
|
||||
|
||||
s := &Section{Name: name}
|
||||
c.Sections = append(c.Sections, s)
|
||||
return s
|
||||
}
|
||||
|
||||
// HasSection checks if the Config has a section with the specified name.
|
||||
func (c *Config) HasSection(name string) bool {
|
||||
for _, s := range c.Sections {
|
||||
if s.IsName(name) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// RemoveSection removes a section from a config file.
|
||||
func (c *Config) RemoveSection(name string) *Config {
|
||||
result := Sections{}
|
||||
for _, s := range c.Sections {
|
||||
if !s.IsName(name) {
|
||||
result = append(result, s)
|
||||
}
|
||||
}
|
||||
|
||||
c.Sections = result
|
||||
return c
|
||||
}
|
||||
|
||||
// RemoveSubsection remove a subsection from a config file.
|
||||
func (c *Config) RemoveSubsection(section string, subsection string) *Config {
|
||||
for _, s := range c.Sections {
|
||||
if s.IsName(section) {
|
||||
result := Subsections{}
|
||||
for _, ss := range s.Subsections {
|
||||
if !ss.IsName(subsection) {
|
||||
result = append(result, ss)
|
||||
}
|
||||
}
|
||||
s.Subsections = result
|
||||
}
|
||||
}
|
||||
|
||||
return c
|
||||
}
|
||||
|
||||
// AddOption adds an option to a given section and subsection. Use the
|
||||
// NoSubsection constant for the subsection argument if no subsection is wanted.
|
||||
func (c *Config) AddOption(section string, subsection string, key string, value string) *Config {
|
||||
if subsection == "" {
|
||||
c.Section(section).AddOption(key, value)
|
||||
} else {
|
||||
c.Section(section).Subsection(subsection).AddOption(key, value)
|
||||
}
|
||||
|
||||
return c
|
||||
}
|
||||
|
||||
// SetOption sets an option to a given section and subsection. Use the
|
||||
// NoSubsection constant for the subsection argument if no subsection is wanted.
|
||||
func (c *Config) SetOption(section string, subsection string, key string, value string) *Config {
|
||||
if subsection == "" {
|
||||
c.Section(section).SetOption(key, value)
|
||||
} else {
|
||||
c.Section(section).Subsection(subsection).SetOption(key, value)
|
||||
}
|
||||
|
||||
return c
|
||||
}
|
37
vendor/github.com/go-git/go-git/v5/plumbing/format/config/decoder.go
generated
vendored
Normal file
|
@ -0,0 +1,37 @@
package config

import (
	"io"

	"github.com/go-git/gcfg"
)

// A Decoder reads and decodes config files from an input stream.
type Decoder struct {
	io.Reader
}

// NewDecoder returns a new decoder that reads from r.
func NewDecoder(r io.Reader) *Decoder {
	return &Decoder{r}
}

// Decode reads the whole config from its input and stores it in the
// value pointed to by config.
func (d *Decoder) Decode(config *Config) error {
	cb := func(s string, ss string, k string, v string, bv bool) error {
		if ss == "" && k == "" {
			config.Section(s)
			return nil
		}

		if ss != "" && k == "" {
			config.Section(s).Subsection(ss)
			return nil
		}

		config.AddOption(s, ss, k, v)
		return nil
	}
	return gcfg.ReadWithCallback(d, cb)
}
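A small sketch that decodes raw git-config text with the Decoder above and reads values back through the Config, Section and Subsection helpers of this format package; the config text is illustrative.

```go
package main

import (
	"fmt"
	"strings"

	format "github.com/go-git/go-git/v5/plumbing/format/config"
)

func main() {
	raw := `[core]
	bare = true
[remote "origin"]
	url = https://example.com/repo.git
`

	cfg := format.New()
	if err := format.NewDecoder(strings.NewReader(raw)).Decode(cfg); err != nil {
		panic(err)
	}

	// Sections and subsections are created on demand by the decoder callback.
	fmt.Println(cfg.Section("core").Options.Get("bare"))
	fmt.Println(cfg.Section("remote").Subsection("origin").Options.Get("url"))
}
```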
122
vendor/github.com/go-git/go-git/v5/plumbing/format/config/doc.go
generated
vendored
Normal file
|
@ -0,0 +1,122 @@
|
|||
// Package config implements encoding and decoding of git config files.
|
||||
//
|
||||
// Configuration File
|
||||
// ------------------
|
||||
//
|
||||
// The Git configuration file contains a number of variables that affect
|
||||
// the Git commands' behavior. The `.git/config` file in each repository
|
||||
// is used to store the configuration for that repository, and
|
||||
// `$HOME/.gitconfig` is used to store a per-user configuration as
|
||||
// fallback values for the `.git/config` file. The file `/etc/gitconfig`
|
||||
// can be used to store a system-wide default configuration.
|
||||
//
|
||||
// The configuration variables are used by both the Git plumbing
|
||||
// and the porcelains. The variables are divided into sections, wherein
|
||||
// the fully qualified variable name of the variable itself is the last
|
||||
// dot-separated segment and the section name is everything before the last
|
||||
// dot. The variable names are case-insensitive, allow only alphanumeric
|
||||
// characters and `-`, and must start with an alphabetic character. Some
|
||||
// variables may appear multiple times; we say then that the variable is
|
||||
// multivalued.
|
||||
//
|
||||
// Syntax
|
||||
// ~~~~~~
|
||||
//
|
||||
// The syntax is fairly flexible and permissive; whitespaces are mostly
|
||||
// ignored. The '#' and ';' characters begin comments to the end of line,
|
||||
// blank lines are ignored.
|
||||
//
|
||||
// The file consists of sections and variables. A section begins with
|
||||
// the name of the section in square brackets and continues until the next
|
||||
// section begins. Section names are case-insensitive. Only alphanumeric
|
||||
// characters, `-` and `.` are allowed in section names. Each variable
|
||||
// must belong to some section, which means that there must be a section
|
||||
// header before the first setting of a variable.
|
||||
//
|
||||
// Sections can be further divided into subsections. To begin a subsection
|
||||
// put its name in double quotes, separated by space from the section name,
|
||||
// in the section header, like in the example below:
|
||||
//
|
||||
// --------
|
||||
// [section "subsection"]
|
||||
//
|
||||
// --------
|
||||
//
|
||||
// Subsection names are case sensitive and can contain any characters except
|
||||
// newline (doublequote `"` and backslash can be included by escaping them
|
||||
// as `\"` and `\\`, respectively). Section headers cannot span multiple
|
||||
// lines. Variables may belong directly to a section or to a given subsection.
|
||||
// You can have `[section]` if you have `[section "subsection"]`, but you
|
||||
// don't need to.
|
||||
//
|
||||
// There is also a deprecated `[section.subsection]` syntax. With this
|
||||
// syntax, the subsection name is converted to lower-case and is also
|
||||
// compared case sensitively. These subsection names follow the same
|
||||
// restrictions as section names.
|
||||
//
|
||||
// All the other lines (and the remainder of the line after the section
|
||||
// header) are recognized as setting variables, in the form
|
||||
// 'name = value' (or just 'name', which is a short-hand to say that
|
||||
// the variable is the boolean "true").
|
||||
// The variable names are case-insensitive, allow only alphanumeric characters
|
||||
// and `-`, and must start with an alphabetic character.
|
||||
//
|
||||
// A line that defines a value can be continued to the next line by
|
||||
// ending it with a `\`; the backslash and the end-of-line are
|
||||
// stripped. Leading whitespaces after 'name =', the remainder of the
|
||||
// line after the first comment character '#' or ';', and trailing
|
||||
// whitespaces of the line are discarded unless they are enclosed in
|
||||
// double quotes. Internal whitespaces within the value are retained
|
||||
// verbatim.
|
||||
//
|
||||
// Inside double quotes, double quote `"` and backslash `\` characters
|
||||
// must be escaped: use `\"` for `"` and `\\` for `\`.
|
||||
//
|
||||
// The following escape sequences (beside `\"` and `\\`) are recognized:
|
||||
// `\n` for newline character (NL), `\t` for horizontal tabulation (HT, TAB)
|
||||
// and `\b` for backspace (BS). Other char escape sequences (including octal
|
||||
// escape sequences) are invalid.
|
||||
//
|
||||
// Includes
|
||||
// ~~~~~~~~
|
||||
//
|
||||
// You can include one config file from another by setting the special
|
||||
// `include.path` variable to the name of the file to be included. The
|
||||
// variable takes a pathname as its value, and is subject to tilde
|
||||
// expansion.
|
||||
//
|
||||
// The included file is expanded immediately, as if its contents had been
|
||||
// found at the location of the include directive. If the value of the
|
||||
// `include.path` variable is a relative path, the path is considered to be
|
||||
// relative to the configuration file in which the include directive was
|
||||
// found. See below for examples.
|
||||
//
|
||||
//
|
||||
// Example
|
||||
// ~~~~~~~
|
||||
//
|
||||
// # Core variables
|
||||
// [core]
|
||||
// ; Don't trust file modes
|
||||
// filemode = false
|
||||
//
|
||||
// # Our diff algorithm
|
||||
// [diff]
|
||||
// external = /usr/local/bin/diff-wrapper
|
||||
// renames = true
|
||||
//
|
||||
// [branch "devel"]
|
||||
// remote = origin
|
||||
// merge = refs/heads/devel
|
||||
//
|
||||
// # Proxy settings
|
||||
// [core]
|
||||
// gitProxy="ssh" for "kernel.org"
|
||||
// gitProxy=default-proxy ; for the rest
|
||||
//
|
||||
// [include]
|
||||
// path = /path/to/foo.inc ; include by absolute path
|
||||
// path = foo ; expand "foo" relative to the current file
|
||||
// path = ~/foo ; expand "foo" in your `$HOME` directory
|
||||
//
|
||||
package config
|
82
vendor/github.com/go-git/go-git/v5/plumbing/format/config/encoder.go
generated
vendored
Normal file
@ -0,0 +1,82 @@
package config
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"io"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// An Encoder writes config files to an output stream.
|
||||
type Encoder struct {
|
||||
w io.Writer
|
||||
}
|
||||
|
||||
var (
|
||||
subsectionReplacer = strings.NewReplacer(`"`, `\"`, `\`, `\\`)
|
||||
valueReplacer = strings.NewReplacer(`"`, `\"`, `\`, `\\`, "\n", `\n`, "\t", `\t`, "\b", `\b`)
|
||||
)
|
||||
// NewEncoder returns a new encoder that writes to w.
|
||||
func NewEncoder(w io.Writer) *Encoder {
|
||||
return &Encoder{w}
|
||||
}
|
||||
|
||||
// Encode writes the config in git config format to the stream of the encoder.
|
||||
func (e *Encoder) Encode(cfg *Config) error {
|
||||
for _, s := range cfg.Sections {
|
||||
if err := e.encodeSection(s); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *Encoder) encodeSection(s *Section) error {
|
||||
if len(s.Options) > 0 {
|
||||
if err := e.printf("[%s]\n", s.Name); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if err := e.encodeOptions(s.Options); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
for _, ss := range s.Subsections {
|
||||
if err := e.encodeSubsection(s.Name, ss); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *Encoder) encodeSubsection(sectionName string, s *Subsection) error {
|
||||
if err := e.printf("[%s \"%s\"]\n", sectionName, subsectionReplacer.Replace(s.Name)); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return e.encodeOptions(s.Options)
|
||||
}
|
||||
|
||||
func (e *Encoder) encodeOptions(opts Options) error {
|
||||
for _, o := range opts {
|
||||
var value string
|
||||
if strings.ContainsAny(o.Value, "#;\"\t\n\\") || strings.HasPrefix(o.Value, " ") || strings.HasSuffix(o.Value, " ") {
|
||||
value = `"`+valueReplacer.Replace(o.Value)+`"`
|
||||
} else {
|
||||
value = o.Value
|
||||
}
|
||||
|
||||
if err := e.printf("\t%s = %s\n", o.Key, value); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e *Encoder) printf(msg string, args ...interface{}) error {
|
||||
_, err := fmt.Fprintf(e.w, msg, args...)
|
||||
return err
|
||||
}
|
53
vendor/github.com/go-git/go-git/v5/plumbing/format/config/format.go
generated
vendored
Normal file
@ -0,0 +1,53 @@
package config
|
||||
|
||||
// RepositoryFormatVersion represents the repository format version,
|
||||
// as defined at:
|
||||
//
|
||||
// https://git-scm.com/docs/repository-version
|
||||
type RepositoryFormatVersion string
|
||||
|
||||
const (
|
||||
// Version_0 is the format defined by the initial version of git,
|
||||
// including but not limited to the format of the repository
|
||||
// directory, the repository configuration file, and the object
|
||||
// and ref storage.
|
||||
//
|
||||
// Specifying the complete behavior of git is beyond the scope
|
||||
// of this document.
|
||||
Version_0 = "0"
|
||||
|
||||
// Version_1 is identical to version 0, with the following exceptions:
|
||||
//
|
||||
// 1. When reading the core.repositoryformatversion variable, a git
|
||||
// implementation which supports version 1 MUST also read any
|
||||
// configuration keys found in the extensions section of the
|
||||
// configuration file.
|
||||
//
|
||||
// 2. If a version-1 repository specifies any extensions.* keys that
|
||||
// the running git has not implemented, the operation MUST NOT proceed.
|
||||
// Similarly, if the value of any known key is not understood by the
|
||||
// implementation, the operation MUST NOT proceed.
|
||||
//
|
||||
// Note that if no extensions are specified in the config file, then
|
||||
// core.repositoryformatversion SHOULD be set to 0 (setting it to 1 provides
|
||||
// no benefit, and makes the repository incompatible with older
|
||||
// implementations of git).
|
||||
Version_1 = "1"
|
||||
|
||||
// DefaultRepositoryFormatVersion holds the default repository format version.
|
||||
DefaultRepositoryFormatVersion = Version_0
|
||||
)
|
||||
|
||||
// ObjectFormat defines the object format.
|
||||
type ObjectFormat string
|
||||
|
||||
const (
|
||||
// SHA1 represents the object format used for SHA1.
|
||||
SHA1 ObjectFormat = "sha1"
|
||||
|
||||
// SHA256 represents the object format used for SHA256.
|
||||
SHA256 ObjectFormat = "sha256"
|
||||
|
||||
// DefaultObjectFormat holds the default object format.
|
||||
DefaultObjectFormat = SHA1
|
||||
)
|
127
vendor/github.com/go-git/go-git/v5/plumbing/format/config/option.go
generated
vendored
Normal file
@ -0,0 +1,127 @@
package config
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Option defines a key/value entity in a config file.
|
||||
type Option struct {
|
||||
// Key, preserving its original case.
// Use IsKey instead to compare keys regardless of case.
|
||||
Key string
|
||||
// Original value as string; it may not be normalized.
|
||||
Value string
|
||||
}
|
||||
|
||||
type Options []*Option
|
||||
|
||||
// IsKey returns true if the given key matches
|
||||
// this option's key in a case-insensitive comparison.
|
||||
func (o *Option) IsKey(key string) bool {
|
||||
return strings.EqualFold(o.Key, key)
|
||||
}
|
||||
|
||||
func (opts Options) GoString() string {
|
||||
var strs []string
|
||||
for _, opt := range opts {
|
||||
strs = append(strs, fmt.Sprintf("%#v", opt))
|
||||
}
|
||||
|
||||
return strings.Join(strs, ", ")
|
||||
}
|
||||
|
||||
// Get gets the value for the given key if set,
|
||||
// otherwise it returns the empty string.
|
||||
//
|
||||
// Note that there is no difference between an unset key and a key set to the empty string.
|
||||
//
|
||||
// This matches git behaviour since git v1.8.1-rc1,
|
||||
// if there are multiple definitions of a key, the
|
||||
// last one wins.
|
||||
//
|
||||
// See: http://article.gmane.org/gmane.linux.kernel/1407184
|
||||
//
|
||||
// In order to get all possible values for the same key,
|
||||
// use GetAll.
|
||||
func (opts Options) Get(key string) string {
|
||||
for i := len(opts) - 1; i >= 0; i-- {
|
||||
o := opts[i]
|
||||
if o.IsKey(key) {
|
||||
return o.Value
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Has checks if an Option exist with the given key.
|
||||
func (opts Options) Has(key string) bool {
|
||||
for _, o := range opts {
|
||||
if o.IsKey(key) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// GetAll returns all possible values for the same key.
|
||||
func (opts Options) GetAll(key string) []string {
|
||||
result := []string{}
|
||||
for _, o := range opts {
|
||||
if o.IsKey(key) {
|
||||
result = append(result, o.Value)
|
||||
}
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func (opts Options) withoutOption(key string) Options {
|
||||
result := Options{}
|
||||
for _, o := range opts {
|
||||
if !o.IsKey(key) {
|
||||
result = append(result, o)
|
||||
}
|
||||
}
|
||||
return result
|
||||
}
|
||||
|
||||
func (opts Options) withAddedOption(key string, value string) Options {
|
||||
return append(opts, &Option{key, value})
|
||||
}
|
||||
|
||||
func (opts Options) withSettedOption(key string, values ...string) Options {
|
||||
var result Options
|
||||
var added []string
|
||||
for _, o := range opts {
|
||||
if !o.IsKey(key) {
|
||||
result = append(result, o)
|
||||
continue
|
||||
}
|
||||
|
||||
if contains(values, o.Value) {
|
||||
added = append(added, o.Value)
|
||||
result = append(result, o)
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
for _, value := range values {
|
||||
if contains(added, value) {
|
||||
continue
|
||||
}
|
||||
|
||||
result = result.withAddedOption(key, value)
|
||||
}
|
||||
|
||||
return result
|
||||
}
|
||||
|
||||
func contains(haystack []string, needle string) bool {
|
||||
for _, s := range haystack {
|
||||
if s == needle {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
181
vendor/github.com/go-git/go-git/v5/plumbing/format/config/section.go
generated
vendored
Normal file
@ -0,0 +1,181 @@
package config
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// Section is the representation of a section inside git configuration files.
|
||||
// Each Section contains Options that are used by both the Git plumbing
|
||||
// and the porcelains.
|
||||
// Sections can be further divided into subsections. To begin a subsection
|
||||
// put its name in double quotes, separated by space from the section name,
|
||||
// in the section header, like in the example below:
|
||||
//
|
||||
// [section "subsection"]
|
||||
//
|
||||
// All the other lines (and the remainder of the line after the section header)
|
||||
// are recognized as option variables, in the form "name = value" (or just name,
|
||||
// which is a short-hand to say that the variable is the boolean "true").
|
||||
// The variable names are case-insensitive, allow only alphanumeric characters
|
||||
// and -, and must start with an alphabetic character:
|
||||
//
|
||||
// [section "subsection1"]
|
||||
// option1 = value1
|
||||
// option2
|
||||
// [section "subsection2"]
|
||||
// option3 = value2
|
||||
//
|
||||
type Section struct {
|
||||
Name string
|
||||
Options Options
|
||||
Subsections Subsections
|
||||
}
|
||||
|
||||
type Subsection struct {
|
||||
Name string
|
||||
Options Options
|
||||
}
|
||||
|
||||
type Sections []*Section
|
||||
|
||||
func (s Sections) GoString() string {
|
||||
var strs []string
|
||||
for _, ss := range s {
|
||||
strs = append(strs, fmt.Sprintf("%#v", ss))
|
||||
}
|
||||
|
||||
return strings.Join(strs, ", ")
|
||||
}
|
||||
|
||||
type Subsections []*Subsection
|
||||
|
||||
func (s Subsections) GoString() string {
|
||||
var strs []string
|
||||
for _, ss := range s {
|
||||
strs = append(strs, fmt.Sprintf("%#v", ss))
|
||||
}
|
||||
|
||||
return strings.Join(strs, ", ")
|
||||
}
|
||||
|
||||
// IsName checks if the provided name is equal to the Section name, case-insensitively.
|
||||
func (s *Section) IsName(name string) bool {
|
||||
return strings.EqualFold(s.Name, name)
|
||||
}
|
||||
|
||||
// Subsection returns a Subsection from the specified Section. If the
|
||||
// Subsection does not exist, a new one is created and added to the Section.
|
||||
func (s *Section) Subsection(name string) *Subsection {
|
||||
for i := len(s.Subsections) - 1; i >= 0; i-- {
|
||||
ss := s.Subsections[i]
|
||||
if ss.IsName(name) {
|
||||
return ss
|
||||
}
|
||||
}
|
||||
|
||||
ss := &Subsection{Name: name}
|
||||
s.Subsections = append(s.Subsections, ss)
|
||||
return ss
|
||||
}
|
||||
|
||||
// HasSubsection checks if the Section has a Subsection with the specified name.
|
||||
func (s *Section) HasSubsection(name string) bool {
|
||||
for _, ss := range s.Subsections {
|
||||
if ss.IsName(name) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
return false
|
||||
}
|
||||
|
||||
// RemoveSubsection removes a subsection from a Section.
|
||||
func (s *Section) RemoveSubsection(name string) *Section {
|
||||
result := Subsections{}
|
||||
for _, s := range s.Subsections {
|
||||
if !s.IsName(name) {
|
||||
result = append(result, s)
|
||||
}
|
||||
}
|
||||
|
||||
s.Subsections = result
|
||||
return s
|
||||
}
|
||||
|
||||
// Option returns the value for the specified key. Empty string is returned if
|
||||
// the key does not exist.
|
||||
func (s *Section) Option(key string) string {
|
||||
return s.Options.Get(key)
|
||||
}
|
||||
|
||||
// OptionAll returns all possible values for an option with the specified key.
|
||||
// If the option does not exist, an empty slice will be returned.
|
||||
func (s *Section) OptionAll(key string) []string {
|
||||
return s.Options.GetAll(key)
|
||||
}
|
||||
|
||||
// HasOption checks if the Section has an Option with the given key.
|
||||
func (s *Section) HasOption(key string) bool {
|
||||
return s.Options.Has(key)
|
||||
}
|
||||
|
||||
// AddOption adds a new Option to the Section. The updated Section is returned.
|
||||
func (s *Section) AddOption(key string, value string) *Section {
|
||||
s.Options = s.Options.withAddedOption(key, value)
|
||||
return s
|
||||
}
|
||||
|
||||
// SetOption adds a new Option to the Section. If the option already exists, it is replaced.
|
||||
// The updated Section is returned.
|
||||
func (s *Section) SetOption(key string, value string) *Section {
|
||||
s.Options = s.Options.withSettedOption(key, value)
|
||||
return s
|
||||
}
|
||||
|
||||
// Remove an option with the specified key. The updated Section is returned.
|
||||
func (s *Section) RemoveOption(key string) *Section {
|
||||
s.Options = s.Options.withoutOption(key)
|
||||
return s
|
||||
}
|
||||
|
||||
// IsName checks if the name of the subsection is exactly the specified name.
|
||||
func (s *Subsection) IsName(name string) bool {
|
||||
return s.Name == name
|
||||
}
|
||||
|
||||
// Option returns the value of the option with the specified key. If the option
// does not exist, an empty string will be returned.
|
||||
func (s *Subsection) Option(key string) string {
|
||||
return s.Options.Get(key)
|
||||
}
|
||||
|
||||
// OptionAll returns all possible values for an option with the specified key.
|
||||
// If the option does not exist, an empty slice will be returned.
|
||||
func (s *Subsection) OptionAll(key string) []string {
|
||||
return s.Options.GetAll(key)
|
||||
}
|
||||
|
||||
// HasOption checks if the Subsection has an Option with the given key.
|
||||
func (s *Subsection) HasOption(key string) bool {
|
||||
return s.Options.Has(key)
|
||||
}
|
||||
|
||||
// AddOption adds a new Option to the Subsection. The updated Subsection is returned.
|
||||
func (s *Subsection) AddOption(key string, value string) *Subsection {
|
||||
s.Options = s.Options.withAddedOption(key, value)
|
||||
return s
|
||||
}
|
||||
|
||||
// SetOption adds a new Option to the Subsection. If the option already exists, it is replaced.
|
||||
// The updated Subsection is returned.
|
||||
func (s *Subsection) SetOption(key string, value ...string) *Subsection {
|
||||
s.Options = s.Options.withSettedOption(key, value...)
|
||||
return s
|
||||
}
|
||||
|
||||
// RemoveOption removes the option with the specified key. The updated Subsection is returned.
|
||||
func (s *Subsection) RemoveOption(key string) *Subsection {
|
||||
s.Options = s.Options.withoutOption(key)
|
||||
return s
|
||||
}
|
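As an illustrative aside (not part of the vendored diff): a minimal sketch of wiring the format/config pieces added above together, assuming the exported Config type (defined in this package outside the hunks shown here) has a usable zero value whose Section accessor creates missing sections, as the decoder callback implies.

```go
package main

import (
	"fmt"
	"strings"

	"github.com/go-git/go-git/v5/plumbing/format/config"
)

func main() {
	raw := `[core]
	filemode = false
[branch "devel"]
	remote = origin
`

	// Decode the raw text into a Config using the vendored decoder.
	cfg := &config.Config{} // assumption: the zero value is usable
	if err := config.NewDecoder(strings.NewReader(raw)).Decode(cfg); err != nil {
		panic(err)
	}

	// Read options back through the Section/Subsection accessors shown in the diff.
	fmt.Println(cfg.Section("core").Option("filemode"))                    // false
	fmt.Println(cfg.Section("branch").Subsection("devel").Option("remote")) // origin
}
```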
84
vendor/github.com/go-git/go-git/v5/plumbing/hash.go
generated
vendored
Normal file
@ -0,0 +1,84 @@
package plumbing
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/hex"
|
||||
"sort"
|
||||
"strconv"
|
||||
|
||||
"github.com/go-git/go-git/v5/plumbing/hash"
|
||||
)
|
||||
|
||||
// Hash SHA1 hashed content
|
||||
type Hash [hash.Size]byte
|
||||
|
||||
// ZeroHash is Hash with value zero
|
||||
var ZeroHash Hash
|
||||
|
||||
// ComputeHash computes the hash for a given ObjectType and content
|
||||
func ComputeHash(t ObjectType, content []byte) Hash {
|
||||
h := NewHasher(t, int64(len(content)))
|
||||
h.Write(content)
|
||||
return h.Sum()
|
||||
}
|
||||
|
||||
// NewHash returns a new Hash from a hexadecimal hash representation
|
||||
func NewHash(s string) Hash {
|
||||
b, _ := hex.DecodeString(s)
|
||||
|
||||
var h Hash
|
||||
copy(h[:], b)
|
||||
|
||||
return h
|
||||
}
|
||||
|
||||
func (h Hash) IsZero() bool {
|
||||
var empty Hash
|
||||
return h == empty
|
||||
}
|
||||
|
||||
func (h Hash) String() string {
|
||||
return hex.EncodeToString(h[:])
|
||||
}
|
||||
|
||||
type Hasher struct {
|
||||
hash.Hash
|
||||
}
|
||||
|
||||
func NewHasher(t ObjectType, size int64) Hasher {
|
||||
h := Hasher{hash.New(hash.CryptoType)}
|
||||
h.Write(t.Bytes())
|
||||
h.Write([]byte(" "))
|
||||
h.Write([]byte(strconv.FormatInt(size, 10)))
|
||||
h.Write([]byte{0})
|
||||
return h
|
||||
}
|
||||
|
||||
func (h Hasher) Sum() (hash Hash) {
|
||||
copy(hash[:], h.Hash.Sum(nil))
|
||||
return
|
||||
}
|
||||
|
||||
// HashesSort sorts a slice of Hashes in increasing order.
|
||||
func HashesSort(a []Hash) {
|
||||
sort.Sort(HashSlice(a))
|
||||
}
|
||||
|
||||
// HashSlice attaches the methods of sort.Interface to []Hash, sorting in
|
||||
// increasing order.
|
||||
type HashSlice []Hash
|
||||
|
||||
func (p HashSlice) Len() int { return len(p) }
|
||||
func (p HashSlice) Less(i, j int) bool { return bytes.Compare(p[i][:], p[j][:]) < 0 }
|
||||
func (p HashSlice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
|
||||
|
||||
// IsHash returns true if the given string is a valid hash.
|
||||
func IsHash(s string) bool {
|
||||
switch len(s) {
|
||||
case hash.HexSize:
|
||||
_, err := hex.DecodeString(s)
|
||||
return err == nil
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
60
vendor/github.com/go-git/go-git/v5/plumbing/hash/hash.go
generated
vendored
Normal file
@ -0,0 +1,60 @@
// Package hash provides a way for managing the
|
||||
// underlying hash implementations used across go-git.
|
||||
package hash
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"fmt"
|
||||
"hash"
|
||||
|
||||
"github.com/pjbgf/sha1cd"
|
||||
)
|
||||
|
||||
// algos is a map of hash algorithms.
|
||||
var algos = map[crypto.Hash]func() hash.Hash{}
|
||||
|
||||
func init() {
|
||||
reset()
|
||||
}
|
||||
|
||||
// reset resets the default algos value. Can be used after running tests
|
||||
// that register new algorithms to avoid side effects.
|
||||
func reset() {
|
||||
algos[crypto.SHA1] = sha1cd.New
|
||||
algos[crypto.SHA256] = crypto.SHA256.New
|
||||
}
|
||||
|
||||
// RegisterHash allows for the hash algorithm used to be overridden.
|
||||
// This ensures the hash selection for go-git must be explicit, when
|
||||
// overriding the default value.
|
||||
func RegisterHash(h crypto.Hash, f func() hash.Hash) error {
|
||||
if f == nil {
|
||||
return fmt.Errorf("cannot register hash: f is nil")
|
||||
}
|
||||
|
||||
switch h {
|
||||
case crypto.SHA1:
|
||||
algos[h] = f
|
||||
case crypto.SHA256:
|
||||
algos[h] = f
|
||||
default:
|
||||
return fmt.Errorf("unsupported hash function: %v", h)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Hash is the same as hash.Hash. This allows consumers
|
||||
// to avoid having to import this package alongside "hash".
|
||||
type Hash interface {
|
||||
hash.Hash
|
||||
}
|
||||
|
||||
// New returns a new Hash for the given hash function.
|
||||
// It panics if the hash function is not registered.
|
||||
func New(h crypto.Hash) Hash {
|
||||
hh, ok := algos[h]
|
||||
if !ok {
|
||||
panic(fmt.Sprintf("hash algorithm not registered: %v", h))
|
||||
}
|
||||
return hh()
|
||||
}
|
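Again an illustrative aside rather than vendored content: a short sketch of overriding the default collision-detecting SHA-1 (sha1cd) via RegisterHash and constructing a hasher with New, using only the standard library's crypto/sha1 as the replacement.

```go
package main

import (
	"crypto"
	"crypto/sha1"
	"fmt"

	githash "github.com/go-git/go-git/v5/plumbing/hash"
)

func main() {
	// Swap the registered SHA-1 constructor for the standard library one,
	// e.g. to trade collision detection for speed.
	if err := githash.RegisterHash(crypto.SHA1, sha1.New); err != nil {
		panic(err)
	}

	// New returns whatever constructor is registered for the algorithm.
	h := githash.New(crypto.SHA1)
	h.Write([]byte("hello"))
	fmt.Printf("%x\n", h.Sum(nil))
}
```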
15
vendor/github.com/go-git/go-git/v5/plumbing/hash/hash_sha1.go
generated
vendored
Normal file

@ -0,0 +1,15 @@
//go:build !sha256
// +build !sha256

package hash

import "crypto"

const (
	// CryptoType defines what hash algorithm is being used.
	CryptoType = crypto.SHA1
	// Size defines the number of bytes the hash yields.
	Size = 20
	// HexSize defines the string size of the hash when represented in hexadecimal.
	HexSize = 40
)
15
vendor/github.com/go-git/go-git/v5/plumbing/hash/hash_sha256.go
generated
vendored
Normal file

@ -0,0 +1,15 @@
//go:build sha256
// +build sha256

package hash

import "crypto"

const (
	// CryptoType defines what hash algorithm is being used.
	CryptoType = crypto.SHA256
	// Size defines the number of bytes the hash yields.
	Size = 32
	// HexSize defines the string size of the hash when represented in hexadecimal.
	HexSize = 64
)
72
vendor/github.com/go-git/go-git/v5/plumbing/memory.go
generated
vendored
Normal file
@ -0,0 +1,72 @@
package plumbing
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"io"
|
||||
)
|
||||
|
||||
// MemoryObject is an in-memory Object implementation
|
||||
type MemoryObject struct {
|
||||
t ObjectType
|
||||
h Hash
|
||||
cont []byte
|
||||
sz int64
|
||||
}
|
||||
|
||||
// Hash returns the object Hash, the hash is calculated on-the-fly the first
|
||||
// time it's called, in all subsequent calls the same Hash is returned even
|
||||
// if the type or the content have changed. The Hash is only generated if the
|
||||
// size of the content is exactly the object size.
|
||||
func (o *MemoryObject) Hash() Hash {
|
||||
if o.h == ZeroHash && int64(len(o.cont)) == o.sz {
|
||||
o.h = ComputeHash(o.t, o.cont)
|
||||
}
|
||||
|
||||
return o.h
|
||||
}
|
||||
|
||||
// Type returns the ObjectType
|
||||
func (o *MemoryObject) Type() ObjectType { return o.t }
|
||||
|
||||
// SetType sets the ObjectType
|
||||
func (o *MemoryObject) SetType(t ObjectType) { o.t = t }
|
||||
|
||||
// Size returns the size of the object
|
||||
func (o *MemoryObject) Size() int64 { return o.sz }
|
||||
|
||||
// SetSize sets the object size; content of the given size should be written
// afterwards
|
||||
func (o *MemoryObject) SetSize(s int64) { o.sz = s }
|
||||
|
||||
// Reader returns an io.ReadCloser used to read the object's content.
|
||||
//
|
||||
// For a MemoryObject, this reader is seekable.
|
||||
func (o *MemoryObject) Reader() (io.ReadCloser, error) {
|
||||
return nopCloser{bytes.NewReader(o.cont)}, nil
|
||||
}
|
||||
|
||||
// Writer returns a ObjectWriter used to write the object's content.
|
||||
func (o *MemoryObject) Writer() (io.WriteCloser, error) {
|
||||
return o, nil
|
||||
}
|
||||
|
||||
func (o *MemoryObject) Write(p []byte) (n int, err error) {
|
||||
o.cont = append(o.cont, p...)
|
||||
o.sz = int64(len(o.cont))
|
||||
|
||||
return len(p), nil
|
||||
}
|
||||
|
||||
// Close releases any resources consumed by the object when it is acting as a
|
||||
// ObjectWriter.
|
||||
func (o *MemoryObject) Close() error { return nil }
|
||||
|
||||
// nopCloser exposes the extra methods of bytes.Reader while nopping Close().
|
||||
//
|
||||
// This allows clients to attempt seeking in a cached Blob's Reader.
|
||||
type nopCloser struct {
|
||||
*bytes.Reader
|
||||
}
|
||||
|
||||
// Close does nothing.
|
||||
func (nc nopCloser) Close() error { return nil }
|
111
vendor/github.com/go-git/go-git/v5/plumbing/object.go
generated
vendored
Normal file
@ -0,0 +1,111 @@
// Package plumbing implements the core interfaces and structs used by go-git
|
||||
package plumbing
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
)
|
||||
|
||||
var (
|
||||
ErrObjectNotFound = errors.New("object not found")
|
||||
// ErrInvalidType is returned when an invalid object type is provided.
|
||||
ErrInvalidType = errors.New("invalid object type")
|
||||
)
|
||||
|
||||
// EncodedObject is a generic representation of any git object
|
||||
type EncodedObject interface {
|
||||
Hash() Hash
|
||||
Type() ObjectType
|
||||
SetType(ObjectType)
|
||||
Size() int64
|
||||
SetSize(int64)
|
||||
Reader() (io.ReadCloser, error)
|
||||
Writer() (io.WriteCloser, error)
|
||||
}
|
||||
|
||||
// DeltaObject is an EncodedObject representing a delta.
|
||||
type DeltaObject interface {
|
||||
EncodedObject
|
||||
// BaseHash returns the hash of the object used as base for this delta.
|
||||
BaseHash() Hash
|
||||
// ActualHash returns the hash of the object after applying the delta.
|
||||
ActualHash() Hash
|
||||
// Size returns the size of the object after applying the delta.
|
||||
ActualSize() int64
|
||||
}
|
||||
|
||||
// ObjectType internal object type
|
||||
// Integer values from 0 to 7 map to those exposed by git.
|
||||
// AnyObject is used to represent any from 0 to 7.
|
||||
type ObjectType int8
|
||||
|
||||
const (
|
||||
InvalidObject ObjectType = 0
|
||||
CommitObject ObjectType = 1
|
||||
TreeObject ObjectType = 2
|
||||
BlobObject ObjectType = 3
|
||||
TagObject ObjectType = 4
|
||||
// 5 reserved for future expansion
|
||||
OFSDeltaObject ObjectType = 6
|
||||
REFDeltaObject ObjectType = 7
|
||||
|
||||
AnyObject ObjectType = -127
|
||||
)
|
||||
|
||||
func (t ObjectType) String() string {
|
||||
switch t {
|
||||
case CommitObject:
|
||||
return "commit"
|
||||
case TreeObject:
|
||||
return "tree"
|
||||
case BlobObject:
|
||||
return "blob"
|
||||
case TagObject:
|
||||
return "tag"
|
||||
case OFSDeltaObject:
|
||||
return "ofs-delta"
|
||||
case REFDeltaObject:
|
||||
return "ref-delta"
|
||||
case AnyObject:
|
||||
return "any"
|
||||
default:
|
||||
return "unknown"
|
||||
}
|
||||
}
|
||||
|
||||
func (t ObjectType) Bytes() []byte {
|
||||
return []byte(t.String())
|
||||
}
|
||||
|
||||
// Valid returns true if t is a valid ObjectType.
|
||||
func (t ObjectType) Valid() bool {
|
||||
return t >= CommitObject && t <= REFDeltaObject
|
||||
}
|
||||
|
||||
// IsDelta returns true for any ObjectType that represents a delta (i.e.
|
||||
// REFDeltaObject or OFSDeltaObject).
|
||||
func (t ObjectType) IsDelta() bool {
|
||||
return t == REFDeltaObject || t == OFSDeltaObject
|
||||
}
|
||||
|
||||
// ParseObjectType parses a string representation of ObjectType. It returns an
|
||||
// error on parse failure.
|
||||
func ParseObjectType(value string) (typ ObjectType, err error) {
|
||||
switch value {
|
||||
case "commit":
|
||||
typ = CommitObject
|
||||
case "tree":
|
||||
typ = TreeObject
|
||||
case "blob":
|
||||
typ = BlobObject
|
||||
case "tag":
|
||||
typ = TagObject
|
||||
case "ofs-delta":
|
||||
typ = OFSDeltaObject
|
||||
case "ref-delta":
|
||||
typ = REFDeltaObject
|
||||
default:
|
||||
err = ErrInvalidType
|
||||
}
|
||||
return
|
||||
}
|
315
vendor/github.com/go-git/go-git/v5/plumbing/reference.go
generated
vendored
Normal file
@ -0,0 +1,315 @@
package plumbing
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"regexp"
|
||||
"strings"
|
||||
)
|
||||
|
||||
const (
|
||||
refPrefix = "refs/"
|
||||
refHeadPrefix = refPrefix + "heads/"
|
||||
refTagPrefix = refPrefix + "tags/"
|
||||
refRemotePrefix = refPrefix + "remotes/"
|
||||
refNotePrefix = refPrefix + "notes/"
|
||||
symrefPrefix = "ref: "
|
||||
)
|
||||
|
||||
// RefRevParseRules are a set of rules to parse references into short names, or expand into a full reference.
|
||||
// These are the same rules as used by git in shorten_unambiguous_ref and expand_ref.
|
||||
// See: https://github.com/git/git/blob/e0aaa1b6532cfce93d87af9bc813fb2e7a7ce9d7/refs.c#L417
|
||||
var RefRevParseRules = []string{
|
||||
"%s",
|
||||
"refs/%s",
|
||||
"refs/tags/%s",
|
||||
"refs/heads/%s",
|
||||
"refs/remotes/%s",
|
||||
"refs/remotes/%s/HEAD",
|
||||
}
|
||||
|
||||
var (
|
||||
ErrReferenceNotFound = errors.New("reference not found")
|
||||
|
||||
// ErrInvalidReferenceName is returned when a reference name is invalid.
|
||||
ErrInvalidReferenceName = errors.New("invalid reference name")
|
||||
)
|
||||
|
||||
// ReferenceType defines the type of a reference
|
||||
type ReferenceType int8
|
||||
|
||||
const (
|
||||
InvalidReference ReferenceType = 0
|
||||
HashReference ReferenceType = 1
|
||||
SymbolicReference ReferenceType = 2
|
||||
)
|
||||
|
||||
func (r ReferenceType) String() string {
|
||||
switch r {
|
||||
case InvalidReference:
|
||||
return "invalid-reference"
|
||||
case HashReference:
|
||||
return "hash-reference"
|
||||
case SymbolicReference:
|
||||
return "symbolic-reference"
|
||||
}
|
||||
|
||||
return ""
|
||||
}
|
||||
|
||||
// ReferenceName is the name of a reference
|
||||
type ReferenceName string
|
||||
|
||||
// NewBranchReferenceName returns a reference name describing a branch based on
|
||||
// its short name.
|
||||
func NewBranchReferenceName(name string) ReferenceName {
|
||||
return ReferenceName(refHeadPrefix + name)
|
||||
}
|
||||
|
||||
// NewNoteReferenceName returns a reference name describing a note based on its
|
||||
// short name.
|
||||
func NewNoteReferenceName(name string) ReferenceName {
|
||||
return ReferenceName(refNotePrefix + name)
|
||||
}
|
||||
|
||||
// NewRemoteReferenceName returns a reference name describing a remote branch
|
||||
// based on its short name and the remote name.
|
||||
func NewRemoteReferenceName(remote, name string) ReferenceName {
|
||||
return ReferenceName(refRemotePrefix + fmt.Sprintf("%s/%s", remote, name))
|
||||
}
|
||||
|
||||
// NewRemoteHEADReferenceName returns a reference name describing the HEAD
|
||||
// branch of a remote.
|
||||
func NewRemoteHEADReferenceName(remote string) ReferenceName {
|
||||
return ReferenceName(refRemotePrefix + fmt.Sprintf("%s/%s", remote, HEAD))
|
||||
}
|
||||
|
||||
// NewTagReferenceName returns a reference name describing a tag based on its
// short name.
|
||||
func NewTagReferenceName(name string) ReferenceName {
|
||||
return ReferenceName(refTagPrefix + name)
|
||||
}
|
||||
|
||||
// IsBranch checks if a reference is a branch
|
||||
func (r ReferenceName) IsBranch() bool {
|
||||
return strings.HasPrefix(string(r), refHeadPrefix)
|
||||
}
|
||||
|
||||
// IsNote checks if a reference is a note
|
||||
func (r ReferenceName) IsNote() bool {
|
||||
return strings.HasPrefix(string(r), refNotePrefix)
|
||||
}
|
||||
|
||||
// IsRemote checks if a reference is a remote
|
||||
func (r ReferenceName) IsRemote() bool {
|
||||
return strings.HasPrefix(string(r), refRemotePrefix)
|
||||
}
|
||||
|
||||
// IsTag checks if a reference is a tag
|
||||
func (r ReferenceName) IsTag() bool {
|
||||
return strings.HasPrefix(string(r), refTagPrefix)
|
||||
}
|
||||
|
||||
func (r ReferenceName) String() string {
|
||||
return string(r)
|
||||
}
|
||||
|
||||
// Short returns the short name of a ReferenceName
|
||||
func (r ReferenceName) Short() string {
|
||||
s := string(r)
|
||||
res := s
|
||||
for _, format := range RefRevParseRules[1:] {
|
||||
_, err := fmt.Sscanf(s, format, &res)
|
||||
if err == nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
|
||||
return res
|
||||
}
|
||||
|
||||
var (
|
||||
ctrlSeqs = regexp.MustCompile(`[\000-\037\177]`)
|
||||
)
|
||||
|
||||
// Validate validates a reference name.
|
||||
// This follows the git-check-ref-format rules.
|
||||
// See https://git-scm.com/docs/git-check-ref-format
|
||||
//
|
||||
// It is important to note that this function does not check if the reference
|
||||
// exists in the repository.
|
||||
// It only checks if the reference name is valid.
|
||||
// This functions does not support the --refspec-pattern, --normalize, and
|
||||
// --allow-onelevel options.
|
||||
//
|
||||
// Git imposes the following rules on how references are named:
|
||||
//
|
||||
// 1. They can include slash / for hierarchical (directory) grouping, but no
|
||||
// slash-separated component can begin with a dot . or end with the
|
||||
// sequence .lock.
|
||||
// 2. They must contain at least one /. This enforces the presence of a
|
||||
// category like heads/, tags/ etc. but the actual names are not
|
||||
// restricted. If the --allow-onelevel option is used, this rule is
|
||||
// waived.
|
||||
// 3. They cannot have two consecutive dots .. anywhere.
|
||||
// 4. They cannot have ASCII control characters (i.e. bytes whose values are
|
||||
// lower than \040, or \177 DEL), space, tilde ~, caret ^, or colon :
|
||||
// anywhere.
|
||||
// 5. They cannot have question-mark ?, asterisk *, or open bracket [
|
||||
// anywhere. See the --refspec-pattern option below for an exception to this
|
||||
// rule.
|
||||
// 6. They cannot begin or end with a slash / or contain multiple consecutive
|
||||
// slashes (see the --normalize option below for an exception to this rule).
|
||||
// 7. They cannot end with a dot ..
|
||||
// 8. They cannot contain a sequence @{.
|
||||
// 9. They cannot be the single character @.
|
||||
// 10. They cannot contain a \.
|
||||
func (r ReferenceName) Validate() error {
|
||||
s := string(r)
|
||||
if len(s) == 0 {
|
||||
return ErrInvalidReferenceName
|
||||
}
|
||||
|
||||
// HEAD is a special case
|
||||
if r == HEAD {
|
||||
return nil
|
||||
}
|
||||
|
||||
// rule 7
|
||||
if strings.HasSuffix(s, ".") {
|
||||
return ErrInvalidReferenceName
|
||||
}
|
||||
|
||||
// rule 2
|
||||
parts := strings.Split(s, "/")
|
||||
if len(parts) < 2 {
|
||||
return ErrInvalidReferenceName
|
||||
}
|
||||
|
||||
isBranch := r.IsBranch()
|
||||
isTag := r.IsTag()
|
||||
for _, part := range parts {
|
||||
// rule 6
|
||||
if len(part) == 0 {
|
||||
return ErrInvalidReferenceName
|
||||
}
|
||||
|
||||
if strings.HasPrefix(part, ".") || // rule 1
|
||||
strings.Contains(part, "..") || // rule 3
|
||||
ctrlSeqs.MatchString(part) || // rule 4
|
||||
strings.ContainsAny(part, "~^:?*[ \t\n") || // rule 4 & 5
|
||||
strings.Contains(part, "@{") || // rule 8
|
||||
part == "@" || // rule 9
|
||||
strings.Contains(part, "\\") || // rule 10
|
||||
strings.HasSuffix(part, ".lock") { // rule 1
|
||||
return ErrInvalidReferenceName
|
||||
}
|
||||
|
||||
if (isBranch || isTag) && strings.HasPrefix(part, "-") { // branches & tags can't start with -
|
||||
return ErrInvalidReferenceName
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
const (
|
||||
HEAD ReferenceName = "HEAD"
|
||||
Master ReferenceName = "refs/heads/master"
|
||||
Main ReferenceName = "refs/heads/main"
|
||||
)
|
||||
|
||||
// Reference is a representation of git reference
|
||||
type Reference struct {
|
||||
t ReferenceType
|
||||
n ReferenceName
|
||||
h Hash
|
||||
target ReferenceName
|
||||
}
|
||||
|
||||
// NewReferenceFromStrings creates a reference from name and target as string;
// the resulting reference can be a SymbolicReference or a HashReference based
// on the target provided
|
||||
func NewReferenceFromStrings(name, target string) *Reference {
|
||||
n := ReferenceName(name)
|
||||
|
||||
if strings.HasPrefix(target, symrefPrefix) {
|
||||
target := ReferenceName(target[len(symrefPrefix):])
|
||||
return NewSymbolicReference(n, target)
|
||||
}
|
||||
|
||||
return NewHashReference(n, NewHash(target))
|
||||
}
|
||||
|
||||
// NewSymbolicReference creates a new SymbolicReference reference
|
||||
func NewSymbolicReference(n, target ReferenceName) *Reference {
|
||||
return &Reference{
|
||||
t: SymbolicReference,
|
||||
n: n,
|
||||
target: target,
|
||||
}
|
||||
}
|
||||
|
||||
// NewHashReference creates a new HashReference reference
|
||||
func NewHashReference(n ReferenceName, h Hash) *Reference {
|
||||
return &Reference{
|
||||
t: HashReference,
|
||||
n: n,
|
||||
h: h,
|
||||
}
|
||||
}
|
||||
|
||||
// Type returns the type of a reference
|
||||
func (r *Reference) Type() ReferenceType {
|
||||
return r.t
|
||||
}
|
||||
|
||||
// Name returns the name of a reference
|
||||
func (r *Reference) Name() ReferenceName {
|
||||
return r.n
|
||||
}
|
||||
|
||||
// Hash returns the hash of a hash reference
|
||||
func (r *Reference) Hash() Hash {
|
||||
return r.h
|
||||
}
|
||||
|
||||
// Target returns the target of a symbolic reference
|
||||
func (r *Reference) Target() ReferenceName {
|
||||
return r.target
|
||||
}
|
||||
|
||||
// Strings dumps a reference as a [2]string
|
||||
func (r *Reference) Strings() [2]string {
|
||||
var o [2]string
|
||||
o[0] = r.Name().String()
|
||||
|
||||
switch r.Type() {
|
||||
case HashReference:
|
||||
o[1] = r.Hash().String()
|
||||
case SymbolicReference:
|
||||
o[1] = symrefPrefix + r.Target().String()
|
||||
}
|
||||
|
||||
return o
|
||||
}
|
||||
|
||||
func (r *Reference) String() string {
|
||||
ref := ""
|
||||
switch r.Type() {
|
||||
case HashReference:
|
||||
ref = r.Hash().String()
|
||||
case SymbolicReference:
|
||||
ref = symrefPrefix + r.Target().String()
|
||||
default:
|
||||
return ""
|
||||
}
|
||||
|
||||
name := r.Name().String()
|
||||
var v strings.Builder
|
||||
v.Grow(len(ref) + len(name) + 1)
|
||||
v.WriteString(ref)
|
||||
v.WriteString(" ")
|
||||
v.WriteString(name)
|
||||
return v.String()
|
||||
}
|
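For orientation only (not in the diff): a sketch exercising the reference helpers declared above, using NewBranchReferenceName, Short, and Validate exactly as they appear in reference.go.

```go
package main

import (
	"fmt"

	"github.com/go-git/go-git/v5/plumbing"
)

func main() {
	// Build a fully qualified branch reference name from a short name.
	ref := plumbing.NewBranchReferenceName("devel")
	fmt.Println(ref)                          // refs/heads/devel
	fmt.Println(ref.Short())                  // devel
	fmt.Println(ref.IsBranch(), ref.IsTag())  // true false

	// Validate follows the git-check-ref-format rules; ".." violates rule 3.
	if err := plumbing.ReferenceName("refs/heads/bad..name").Validate(); err != nil {
		fmt.Println("invalid:", err)
	}
}
```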
11
vendor/github.com/go-git/go-git/v5/plumbing/revision.go
generated
vendored
Normal file

@ -0,0 +1,11 @@
package plumbing

// Revision represents a git revision.
// To get more details about git revisions,
// please check the git manual page:
// https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html
type Revision string

func (r Revision) String() string {
	return string(r)
}
21
vendor/github.com/luksen/maildir/LICENSE.txt
generated
vendored
Normal file
@ -0,0 +1,21 @@
The MIT License (MIT)
|
||||
|
||||
Copyright (c) 2014 Lukas Senger
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a copy
|
||||
of this software and associated documentation files (the "Software"), to deal
|
||||
in the Software without restriction, including without limitation the rights
|
||||
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
|
||||
copies of the Software, and to permit persons to whom the Software is
|
||||
furnished to do so, subject to the following conditions:
|
||||
|
||||
The above copyright notice and this permission notice shall be included in
|
||||
all copies or substantial portions of the Software.
|
||||
|
||||
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
|
||||
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
|
||||
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
|
||||
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
|
||||
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
|
||||
THE SOFTWARE.
|
9
vendor/github.com/luksen/maildir/README.mkdn
generated
vendored
Normal file

@ -0,0 +1,9 @@
The maildir package provides an interface to mailboxes in the Maildir format.

Documentation can be found online here:
https://pkg.go.dev/github.com/luksen/maildir

This is what I used as a reference:
- https://en.wikipedia.org/wiki/Maildir
- http://www.qmail.org/qmail-manual-html/man5/maildir.html
- http://cr.yp.to/proto/maildir.html
3
vendor/github.com/luksen/maildir/go.mod
generated
vendored
Normal file

@ -0,0 +1,3 @@
module github.com/luksen/maildir

go 1.15
520
vendor/github.com/luksen/maildir/maildir.go
generated
vendored
Normal file
@ -0,0 +1,520 @@
// The maildir package provides an interface to mailboxes in the Maildir format.
|
||||
package maildir
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"crypto/rand"
|
||||
"encoding/hex"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/mail"
|
||||
"net/textproto"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"sort"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync/atomic"
|
||||
"time"
|
||||
)
|
||||
|
||||
// The Separator separates a message's unique key from its flags in the filename.
|
||||
// This should only be changed on operating systems where the colon isn't
|
||||
// allowed in filenames.
|
||||
var Separator rune = ':'
|
||||
|
||||
var id int64 = 10000
|
||||
|
||||
// CreateMode holds the permissions used when creating a directory.
|
||||
const CreateMode = 0700
|
||||
|
||||
// A KeyError occurs when a key matches more or less than one message.
|
||||
type KeyError struct {
|
||||
Key string // the (invalid) key
|
||||
N int // number of matches (!= 1)
|
||||
}
|
||||
|
||||
func (e *KeyError) Error() string {
|
||||
return "maildir: key " + e.Key + " matches " + strconv.Itoa(e.N) + " files."
|
||||
}
|
||||
|
||||
// A FlagError occurs when a non-standard info section is encountered.
|
||||
type FlagError struct {
|
||||
Info string // the encountered info section
|
||||
Experimental bool // info section starts with 1
|
||||
}
|
||||
|
||||
func (e *FlagError) Error() string {
|
||||
if e.Experimental {
|
||||
return "maildir: experimental info section encountered: " + e.Info[2:]
|
||||
}
|
||||
return "maildir: bad info section encountered: " + e.Info
|
||||
}
|
||||
|
||||
// A Dir represents a single directory in a Maildir mailbox.
|
||||
type Dir string
|
||||
|
||||
// Unseen moves messages from new to cur and returns their keys.
|
||||
// This means the messages are now known to the application. To find out whether
|
||||
// a user has seen a message, use Flags().
|
||||
func (d Dir) Unseen() ([]string, error) {
|
||||
f, err := os.Open(filepath.Join(string(d), "new"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer f.Close()
|
||||
names, err := f.Readdirnames(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var keys []string
|
||||
for _, n := range names {
|
||||
if n[0] != '.' {
|
||||
split := strings.FieldsFunc(n, func(r rune) bool {
|
||||
return r == Separator
|
||||
})
|
||||
key := split[0]
|
||||
info := "2,"
|
||||
// Messages in new shouldn't have an info section, but
// we handle one anyway, in case some other program didn't
// follow the spec.
|
||||
if len(split) > 1 {
|
||||
info = split[1]
|
||||
}
|
||||
keys = append(keys, key)
|
||||
err = os.Rename(filepath.Join(string(d), "new", n),
|
||||
filepath.Join(string(d), "cur", key+string(Separator)+info))
|
||||
}
|
||||
}
|
||||
return keys, err
|
||||
}
|
||||
|
||||
// UnseenCount returns the number of messages in new without looking at them.
|
||||
func (d Dir) UnseenCount() (int, error) {
|
||||
f, err := os.Open(filepath.Join(string(d), "new"))
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
defer f.Close()
|
||||
names, err := f.Readdirnames(0)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
c := 0
|
||||
for _, n := range names {
|
||||
if n[0] != '.' {
|
||||
c += 1
|
||||
}
|
||||
}
|
||||
return c, nil
|
||||
}
|
||||
|
||||
// Keys returns a slice of valid keys to access messages by. This only returns
|
||||
// keys for messages in cur. Use Unseen to access messages in new. All keys,
|
||||
// whether returned here or by Unseen, point to messages in cur.
|
||||
func (d Dir) Keys() ([]string, error) {
|
||||
f, err := os.Open(filepath.Join(string(d), "cur"))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer f.Close()
|
||||
names, err := f.Readdirnames(0)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
var keys []string
|
||||
for _, n := range names {
|
||||
if n[0] != '.' {
|
||||
split := strings.FieldsFunc(n, func(r rune) bool {
|
||||
return r == Separator
|
||||
})
|
||||
keys = append(keys, split[0])
|
||||
}
|
||||
}
|
||||
return keys, nil
|
||||
}
|
||||
|
||||
func fileExists(filename string) bool {
|
||||
finfo, err := os.Stat(filename)
|
||||
return err == nil && finfo.Mode().IsRegular()
|
||||
}
|
||||
|
||||
var suffices []string = []string{
|
||||
"",
|
||||
"S",
|
||||
"D",
|
||||
"F",
|
||||
"P",
|
||||
"R",
|
||||
"T",
|
||||
"DF",
|
||||
"DP",
|
||||
"DR",
|
||||
"DS",
|
||||
"DT",
|
||||
"FP",
|
||||
"FR",
|
||||
"FS",
|
||||
"FT",
|
||||
"PR",
|
||||
"PS",
|
||||
"PT",
|
||||
"RS",
|
||||
"RT",
|
||||
"ST",
|
||||
"DFP",
|
||||
"DFR",
|
||||
"DFS",
|
||||
"DFT",
|
||||
"DPR",
|
||||
"DPS",
|
||||
"DPT",
|
||||
"DRS",
|
||||
"DRT",
|
||||
"DST",
|
||||
"FPR",
|
||||
"FPS",
|
||||
"FPT",
|
||||
"FRS",
|
||||
"FRT",
|
||||
"FST",
|
||||
"PRS",
|
||||
"PRT",
|
||||
"PST",
|
||||
"RST",
|
||||
"DFPR",
|
||||
"DFPS",
|
||||
"DFPT",
|
||||
"DFRS",
|
||||
"DFRT",
|
||||
"DFST",
|
||||
"DPRS",
|
||||
"DPRT",
|
||||
"DPST",
|
||||
"DRST",
|
||||
"FPRS",
|
||||
"FPRT",
|
||||
"FPST",
|
||||
"FRST",
|
||||
"PRST",
|
||||
"DFPRS",
|
||||
"DFPRT",
|
||||
"DFPST",
|
||||
"DFRST",
|
||||
"DPRST",
|
||||
"FPRST",
|
||||
"DFPRST"}
|
||||
|
||||
// quick check for existence of legal (per DJB) file names for a key
|
||||
func (d Dir) quickFilename(key string) string {
|
||||
// "cur" files must have :info suffix
|
||||
filePrefix := filepath.Join(string(d), "cur", key+string(Separator)+"2,")
|
||||
for _, suffix := range suffices {
|
||||
if filename := filePrefix + suffix; fileExists(filename) {
|
||||
return filename
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// Filename returns the path to the file corresponding to the key.
|
||||
func (d Dir) Filename(key string) (string, error) {
|
||||
if matchedFile := d.quickFilename(key); matchedFile != "" {
|
||||
return matchedFile, nil
|
||||
}
|
||||
dirPath := filepath.Join(string(d), "cur")
|
||||
f, err := os.Open(dirPath)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
names, err := f.Readdirnames(-1)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
for _, name := range names {
|
||||
if strings.HasPrefix(name, key) {
|
||||
return filepath.Join(dirPath, name), nil
|
||||
}
|
||||
}
|
||||
return "", &KeyError{key, 0}
|
||||
}
|
||||
|
||||
// Header returns the corresponding mail header to a key.
|
||||
func (d Dir) Header(key string) (header mail.Header, err error) {
|
||||
filename, err := d.Filename(key)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
file, err := os.Open(filename)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
defer file.Close()
|
||||
tp := textproto.NewReader(bufio.NewReader(file))
|
||||
hdr, err := tp.ReadMIMEHeader()
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
header = mail.Header(hdr)
|
||||
return
|
||||
}
|
||||
|
||||
// Message returns a Message by key.
|
||||
func (d Dir) Message(key string) (*mail.Message, error) {
|
||||
filename, err := d.Filename(key)
|
||||
if err != nil {
|
||||
return &mail.Message{}, err
|
||||
}
|
||||
r, err := os.Open(filename)
|
||||
if err != nil {
|
||||
return &mail.Message{}, err
|
||||
}
|
||||
defer r.Close()
|
||||
buf := new(bytes.Buffer)
|
||||
_, err = io.Copy(buf, r)
|
||||
if err != nil {
|
||||
return &mail.Message{}, err
|
||||
}
|
||||
msg, err := mail.ReadMessage(buf)
|
||||
if err != nil {
|
||||
return msg, err
|
||||
}
|
||||
return msg, nil
|
||||
}
|
||||
|
||||
type runeSlice []rune
|
||||
|
||||
func (s runeSlice) Len() int { return len(s) }
|
||||
func (s runeSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
|
||||
func (s runeSlice) Less(i, j int) bool { return s[i] < s[j] }
|
||||
|
||||
// Flags returns the flags for a message sorted in ascending order.
|
||||
// See the documentation of SetFlags for details.
|
||||
func (d Dir) Flags(key string) (string, error) {
|
||||
filename, err := d.Filename(key)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
split := strings.FieldsFunc(filename, func(r rune) bool {
|
||||
return r == Separator
|
||||
})
|
||||
switch {
|
||||
case len(split[1]) < 2,
|
||||
split[1][1] != ',':
|
||||
return "", &FlagError{split[1], false}
|
||||
case split[1][0] == '1':
|
||||
return "", &FlagError{split[1], true}
|
||||
case split[1][0] != '2':
|
||||
return "", &FlagError{split[1], false}
|
||||
}
|
||||
rs := runeSlice(split[1][2:])
|
||||
sort.Sort(rs)
|
||||
return string(rs), nil
|
||||
}
|
||||
|
||||
// SetFlags appends an info section to the filename according to the given flags.
|
||||
// This function removes duplicates and sorts the flags, but doesn't check
|
||||
// whether they conform with the Maildir specification.
|
||||
//
|
||||
// The following flags are listed in the specification
|
||||
// (http://cr.yp.to/proto/maildir.html):
|
||||
//
|
||||
// Flag "P" (passed): the user has resent/forwarded/bounced this message to someone else.
|
||||
// Flag "R" (replied): the user has replied to this message.
|
||||
// Flag "S" (seen): the user has viewed this message, though perhaps he didn't read all the way through it.
|
||||
// Flag "T" (trashed): the user has moved this message to the trash; the trash will be emptied by a later user action.
|
||||
// Flag "D" (draft): the user considers this message a draft; toggled at user discretion.
|
||||
// Flag "F" (flagged): user-defined flag; toggled at user discretion.
|
||||
//
|
||||
// Using only these standard flags will improve message retrieval speed.
|
||||
func (d Dir) SetFlags(key string, flags string) error {
|
||||
info := "2,"
|
||||
rs := runeSlice(flags)
|
||||
sort.Sort(rs)
|
||||
for _, r := range rs {
|
||||
if []rune(info)[len(info)-1] != r {
|
||||
info += string(r)
|
||||
}
|
||||
}
|
||||
return d.SetInfo(key, info)
|
||||
}
|
||||
|
||||
// Set the info part of the filename.
|
||||
// Only use this if you plan on using a non-standard info part.
|
||||
func (d Dir) SetInfo(key, info string) error {
|
||||
filename, err := d.Filename(key)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.Rename(filename, filepath.Join(string(d), "cur", key+
|
||||
string(Separator)+info))
|
||||
return err
|
||||
}
|
||||
|
||||
// Key exposes the internal unique key generation. This function is deprecated.
|
||||
func Key() (string, error) {
|
||||
fmt.Println("maildir: Key() is deprecated without replacement. See https://github.com/luksen/maildir/issues/5 for details.")
|
||||
return generateKey()
|
||||
}
|
||||
|
||||
// generateKey generates a new unique key as described in the Maildir
|
||||
// specification. For the third part of the key (delivery identifier) it uses
|
||||
// an internal counter, the process id and a cryptographical random number to
|
||||
// ensure uniqueness among messages delivered in the same second.
|
||||
func generateKey() (string, error) {
|
||||
var key string
|
||||
key += strconv.FormatInt(time.Now().Unix(), 10)
|
||||
key += "."
|
||||
host, err := os.Hostname()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
host = strings.Replace(host, "/", "\057", -1)
|
||||
host = strings.Replace(host, string(Separator), "\072", -1)
|
||||
key += host
|
||||
key += "."
|
||||
key += strconv.FormatInt(int64(os.Getpid()), 10)
|
||||
key += strconv.FormatInt(id, 10)
|
||||
atomic.AddInt64(&id, 1)
|
||||
bs := make([]byte, 10)
|
||||
_, err = io.ReadFull(rand.Reader, bs)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
key += hex.EncodeToString(bs)
|
||||
return key, nil
|
||||
}
|
||||
|
||||
// Create creates the directory structure for a Maildir.
|
||||
// If the main directory already exists, it tries to create the subdirectories
|
||||
// in there. If an error occurs while creating one of the subdirectories, this
|
||||
// function may leave a partially created directory structure.
|
||||
func (d Dir) Create() error {
|
||||
err := os.Mkdir(string(d), os.ModeDir|CreateMode)
|
||||
if err != nil && !os.IsExist(err) {
|
||||
return err
|
||||
}
|
||||
err = os.Mkdir(filepath.Join(string(d), "tmp"), os.ModeDir|CreateMode)
|
||||
if err != nil && !os.IsExist(err) {
|
||||
return err
|
||||
}
|
||||
err = os.Mkdir(filepath.Join(string(d), "new"), os.ModeDir|CreateMode)
|
||||
if err != nil && !os.IsExist(err) {
|
||||
return err
|
||||
}
|
||||
err = os.Mkdir(filepath.Join(string(d), "cur"), os.ModeDir|CreateMode)
|
||||
if err != nil && !os.IsExist(err) {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Delivery represents an ongoing message delivery to the mailbox.
|
||||
// It implements the WriteCloser interface. On closing the underlying file is
|
||||
// moved/relinked to new.
|
||||
type Delivery struct {
|
||||
file *os.File
|
||||
d Dir
|
||||
key string
|
||||
}
|
||||
|
||||
// NewDelivery creates a new Delivery.
|
||||
func (d Dir) NewDelivery() (*Delivery, error) {
|
||||
key, err := generateKey()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
del := &Delivery{}
|
||||
time.AfterFunc(24*time.Hour, func() { del.Abort() })
|
||||
file, err := os.Create(filepath.Join(string(d), "tmp", key))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
del.file = file
|
||||
del.d = d
|
||||
del.key = key
|
||||
return del, nil
|
||||
}
|
||||
|
||||
func (d *Delivery) Write(p []byte) (int, error) {
|
||||
return d.file.Write(p)
|
||||
}
|
||||
|
||||
// Close closes the underlying file and moves it to new.
|
||||
func (d *Delivery) Close() error {
|
||||
err := d.file.Close()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.Link(filepath.Join(string(d.d), "tmp", d.key),
|
||||
filepath.Join(string(d.d), "new", d.key))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.Remove(filepath.Join(string(d.d), "tmp", d.key))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Abort closes the underlying file and removes it completely.
|
||||
func (d *Delivery) Abort() error {
|
||||
err := d.file.Close()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.Remove(filepath.Join(string(d.d), "tmp", d.key))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Move moves a message from this Maildir to another.
|
||||
func (d Dir) Move(target Dir, key string) error {
|
||||
path, err := d.Filename(key)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return os.Rename(path, filepath.Join(string(target), "cur", filepath.Base(path)))
|
||||
}
|
||||
|
||||
// Purge removes the actual file behind this message.
|
||||
func (d Dir) Purge(key string) error {
|
||||
f, err := d.Filename(key)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return os.Remove(f)
|
||||
}
|
||||
|
||||
// Clean removes old files from tmp and should be run periodically.
|
||||
// This does not use access time but modification time for portability reasons.
|
||||
func (d Dir) Clean() error {
|
||||
f, err := os.Open(filepath.Join(string(d), "tmp"))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer f.Close()
|
||||
names, err := f.Readdirnames(0)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
now := time.Now()
|
||||
for _, n := range names {
|
||||
fi, err := os.Stat(filepath.Join(string(d), "tmp", n))
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
if now.Sub(fi.ModTime()).Hours() > 36 {
|
||||
err = os.Remove(filepath.Join(string(d), "tmp", n))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
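For orientation, here is a minimal sketch of how the delivery API in this vendored file is typically driven end to end. The mailbox path and message bytes are placeholders, and error handling is shortened; the import path matches the upstream project referenced in the deprecation notice above.

```go
package main

import (
	"log"

	"github.com/luksen/maildir"
)

func main() {
	// Hypothetical mailbox location; Create() builds tmp/, new/ and cur/.
	d := maildir.Dir("/tmp/example-maildir")
	if err := d.Create(); err != nil {
		log.Fatal(err)
	}

	// A Delivery writes into tmp/ and, on Close(), links the file into new/.
	del, err := d.NewDelivery()
	if err != nil {
		log.Fatal(err)
	}
	if _, err := del.Write([]byte("Subject: hello\r\n\r\nbody\r\n")); err != nil {
		del.Abort() // on failure, drop the partial file from tmp/
		log.Fatal(err)
	}
	if err := del.Close(); err != nil {
		log.Fatal(err)
	}

	// tmp/ can be swept periodically; Clean() removes entries older than 36 hours.
	if err := d.Clean(); err != nil {
		log.Fatal(err)
	}
}
```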
vendor/github.com/mattn/go-gtk/LICENSE (26 lines, generated, vendored, Normal file)
@@ -0,0 +1,26 @@
Copyright (c) 2019 Yasuhiro Matsumoto

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
vendor/github.com/mattn/go-gtk/glib/glib.go (1052 lines, generated, vendored, Normal file)
File diff suppressed because it is too large.
vendor/github.com/mattn/go-gtk/glib/glib.go.h (218 lines, generated, vendored, Normal file)
@@ -0,0 +1,218 @@
#ifndef GO_GLIB_H
#define GO_GLIB_H

#include <glib.h>
#include <glib-object.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
static GList* to_list(void* l) {
	return (GList*)l;
}
static GSList* to_slist(void* sl) {
	return (GSList*)sl;
}
static GError* to_error(void* err) {
	return (GError*)err;
}
static inline GObject* to_GObject(void* o) { return G_OBJECT(o); }

static inline gchar* to_gcharptr(const char* s) { return (gchar*)s; }

static inline char* to_charptr(const gchar* s) { return (char*)s; }

static inline char* to_charptr_from_gpointer(gpointer s) { return (char*)s; }

static inline char* to_charptr_voidp(const void* s) { return (char*)s; }

static inline void free_string(char* s) { free(s); }

static inline gchar* next_string(gchar* s) { return (gchar*)(s + strlen(s) + 1); }

static gboolean _g_utf8_validate(void* str, int len, void* ppbar) {
	return g_utf8_validate((const gchar*)str, (gssize)len, (const gchar**)ppbar);
}

static gchar* _g_locale_to_utf8(void* opsysstring, int len, int* bytes_read, int* bytes_written, GError** error) {
	return g_locale_to_utf8((const gchar*)opsysstring, (gssize)len, (gsize*)bytes_read, (gsize*)bytes_written, error);
}

static gchar* _g_locale_from_utf8(char* utf8string, int len, int* bytes_read, int* bytes_written, GError** error) {
	return g_locale_from_utf8((const gchar*)utf8string, (gssize)len, (gsize*)bytes_read, (gsize*)bytes_written, error);
}
static void _g_object_set_ptr(gpointer object, const gchar *property_name, void* value) {
	g_object_set(object, property_name, value, NULL);
}
static void _g_object_set_addr(gpointer object, const gchar *property_name, void* value) {
	g_object_set(object, property_name, *(gpointer**)value, NULL);
}
//static void _g_object_get(gpointer object, const gchar *property_name, void* value) {
//	g_object_get(object, property_name, value, NULL);
//}

static void g_value_init_int(GValue* gv) { g_value_init(gv, G_TYPE_INT); }
static void g_value_init_string(GValue* gv) { g_value_init(gv, G_TYPE_STRING); }

static GValue* init_gvalue_string_type() {
	GValue* gv = g_new0(GValue, 1);
	g_value_init(gv, G_TYPE_STRING);
	return gv;
}
static GValue* init_gvalue_string(gchar* val) {
	GValue* gv = init_gvalue_string_type();
	g_value_set_string(gv, val);
	return gv;
}

static GValue* init_gvalue_int_type() {
	GValue* gv = g_new0(GValue, 1);
	g_value_init(gv, G_TYPE_INT);
	return gv;
}
static GValue* init_gvalue_int(gint val) {
	GValue* gv = init_gvalue_int_type();
	g_value_set_int(gv, val);
	return gv;
}

static GValue* init_gvalue_uint(guint val) { GValue* gv = g_new0(GValue, 1); g_value_init(gv, G_TYPE_UINT); g_value_set_uint(gv, val); return gv; }
static GValue* init_gvalue_int64(gint64 val) { GValue* gv = g_new0(GValue, 1); g_value_init(gv, G_TYPE_INT64); g_value_set_int64(gv, val); return gv; }
static GValue* init_gvalue_double(gdouble val) { GValue* gv = g_new0(GValue, 1); g_value_init(gv, G_TYPE_DOUBLE); g_value_set_double(gv, val); return gv; }
static GValue* init_gvalue_byte(guchar val) { GValue* gv = g_new0(GValue, 1); g_value_init(gv, G_TYPE_UCHAR); g_value_set_uchar(gv, val); return gv; }
static GValue* init_gvalue_bool(gboolean val) { GValue* gv = g_new0(GValue, 1); g_value_init(gv, G_TYPE_BOOLEAN); g_value_set_boolean(gv, val); return gv; }
//static GValue* init_gvalue_pointer(gpointer val) { GValue* gv = g_new0(GValue, 1); g_value_init(gv, G_TYPE_POINTER); g_value_set_pointer(gv, val); return gv; }

typedef struct {
	char *name;
	int func_no;
	void* target;
	uintptr_t* args;
	int args_no;
	void* ret;
	gulong id;
} callback_info;

static uintptr_t callback_info_get_arg(callback_info* cbi, int idx) {
	return cbi->args[idx];
}
extern void _go_glib_callback(callback_info* cbi);
static uintptr_t _glib_callback(void *data, ...) {
	va_list ap;
	callback_info *cbi = (callback_info*) data;

	int i;
	cbi->args = (uintptr_t*)malloc(sizeof(uintptr_t)*cbi->args_no);
	va_start(ap, data);
	for (i = 0; i < cbi->args_no; i++) {
		cbi->args[i] = va_arg(ap, uintptr_t);
	}
	va_end(ap);

	_go_glib_callback(cbi);

	free(cbi->args);

	return *(uintptr_t*)(&cbi->ret);
}

static void free_callback_info(gpointer data, GClosure *closure) {
	g_slice_free(callback_info, data);
}

static callback_info* _g_signal_connect(void* obj, gchar* name, int func_no, int flags) {
	GSignalQuery query;
	callback_info* cbi;
	guint signal_id = g_signal_lookup(name, G_OBJECT_TYPE(obj));
	g_signal_query(signal_id, &query);
	cbi = g_slice_new0(callback_info);
	cbi->name = g_strdup(name);
	cbi->func_no = func_no;
	cbi->args = NULL;
	cbi->target = obj;
	cbi->args_no = query.n_params;
	if (((GConnectFlags)flags) == G_CONNECT_AFTER)
		cbi->id = g_signal_connect_data((gpointer)obj, name, G_CALLBACK(_glib_callback), (gpointer)cbi, free_callback_info, G_CONNECT_AFTER);
	else
		cbi->id = g_signal_connect_data((gpointer)obj, name, G_CALLBACK(_glib_callback), (gpointer)cbi, free_callback_info, G_CONNECT_SWAPPED);
	return cbi;
}

static void _g_signal_emit_by_name(gpointer instance, const gchar *detailed_signal) {
	g_signal_emit_by_name(instance, detailed_signal);
}

static gulong _g_signal_callback_id(callback_info* cbi) {
	return cbi->id;
}

typedef struct {
	int func_no;
	gboolean ret;
	guint id;
} sourcefunc_info;

extern void _go_glib_sourcefunc(sourcefunc_info* sfi);
static gboolean _go_sourcefunc(void *data) {
	sourcefunc_info *sfi = (sourcefunc_info*) data;

	_go_glib_sourcefunc(sfi);

	return sfi->ret;
}

static void free_sourcefunc_info(gpointer data) {
	g_slice_free(sourcefunc_info, data);
}

static sourcefunc_info* _g_idle_add(int func_no) {
	sourcefunc_info* sfi;
	sfi = g_slice_new0(sourcefunc_info);
	sfi->func_no = func_no;
	sfi->id = g_idle_add_full(G_PRIORITY_DEFAULT_IDLE, _go_sourcefunc, sfi, free_sourcefunc_info);
	return sfi;
}

static sourcefunc_info* _g_timeout_add(guint interval, int func_no) {
	sourcefunc_info* sfi;
	sfi = g_slice_new0(sourcefunc_info);
	sfi->func_no = func_no;
	sfi->id = g_timeout_add_full(G_PRIORITY_DEFAULT, interval, _go_sourcefunc, sfi, free_sourcefunc_info);
	return sfi;
}

static void _g_thread_init(GThreadFunctions *vtable) {
#if !GLIB_CHECK_VERSION(2,32,0)
#ifdef G_THREADS_ENABLED
	g_thread_init(vtable);
#endif
#endif
}

#if !GLIB_CHECK_VERSION(2,28,0)
static const gchar *
g_get_user_runtime_dir (void)
{
#ifndef G_OS_WIN32
  static const gchar *runtime_dir;
  static gsize initialised;

  if (g_once_init_enter (&initialised)) {
    runtime_dir = g_strdup (getenv ("XDG_RUNTIME_DIR"));
    if (runtime_dir == NULL)
      g_warning ("XDG_RUNTIME_DIR variable not set. Falling back to XDG cache dir.");
    g_once_init_leave (&initialised, 1);
  }

  if (runtime_dir)
    return runtime_dir;

  /* Both fallback for UNIX and the default
   * in Windows: use the user cache directory.
   */
#endif

  return g_get_user_cache_dir ();
}
#endif

#endif
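The two `extern` declarations above (`_go_glib_callback` and `_go_glib_sourcefunc`) are satisfied on the Go side of the cgo boundary in glib.go, whose diff is suppressed above because of its size. Purely as a hedged illustration of that pattern, and not the actual glib.go contents, an exported Go function with a matching signature would look roughly like this; the dispatch table and its name are assumptions:

```go
package glib

/*
#include "glib.go.h"
*/
import "C"

// Hypothetical registry mapping func_no values to Go closures; the real
// glib.go keeps its own table, which is not shown in this diff.
var callbacks []func(args []uintptr)

//export _go_glib_callback
func _go_glib_callback(cbi *C.callback_info) {
	// Copy the variadic arguments that _glib_callback packed into cbi->args,
	// then dispatch to the Go closure registered under cbi->func_no.
	args := make([]uintptr, int(cbi.args_no))
	for i := range args {
		args[i] = uintptr(C.callback_info_get_arg(cbi, C.int(i)))
	}
	callbacks[int(cbi.func_no)](args)
}
```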
vendor/github.com/mqu/go-notify/LICENSE (20 lines, generated, vendored, Normal file)
@@ -0,0 +1,20 @@
The MIT License (MIT)

Copyright (c) 2012 LENORMAND Frank

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
vendor/github.com/mqu/go-notify/README (21 lines, generated, vendored, Normal file)
@@ -0,0 +1,21 @@
This is a fork of https://github.com/lenormf/go-notify with some minor corrections.

Install:

# install the libnotify development libraries,
sudo apt-get install libnotify-dev # debian style

# install go-notify
go get github.com/mqu/go-notify

C dependencies: libnotify (sudo apt-get install libnotify-dev)

Go dependencies: github.com/mattn/go-gtk/glib

This package provides Go bindings for the C library libnotify.
Although this package provides full retro-compatibility with the regular C
library, it also provides OOP-like functions for the NotifyNotification object.

The notify package depends on mattn's glib wrapper for the Go language.

See example for usage.
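The README points at an example without including one in this diff; here is a minimal, hedged sketch of that usage, built only from the functions defined in notify.go below. The application name, notification text and timeout are placeholder values.

```go
package main

import (
	"log"

	"github.com/mqu/go-notify"
)

func main() {
	// notify_init must succeed before any notification is created.
	if !notify.Init("example-app") {
		log.Fatal("could not initialise libnotify")
	}
	defer notify.UnInit()

	// NotificationNew(title, text, image); an empty image name means no icon.
	n := notify.NotificationNew("Hello", "Sent via go-notify", "")
	n.SetTimeout(3000)                         // milliseconds
	n.SetUrgency(notify.NOTIFY_URGENCY_NORMAL) // low / normal / critical

	// Show returns a *glib.Error wrapping any GError set by libnotify.
	n.Show()
}
```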
vendor/github.com/mqu/go-notify/notify.go (272 lines, generated, vendored, Normal file)
@@ -0,0 +1,272 @@
/*
 * notify.go for go-notify
 * by lenorm_f
 */

package notify

/*
#cgo pkg-config: libnotify

#include <stdlib.h>
#include <libnotify/notify.h>
*/
import "C"
import "unsafe"
import glib "github.com/mattn/go-gtk/glib"

/*
 * Exported Types
 */
type NotifyNotification struct {
	_notification *C.NotifyNotification
}

const (
	NOTIFY_URGENCY_LOW      = 0
	NOTIFY_URGENCY_NORMAL   = 1
	NOTIFY_URGENCY_CRITICAL = 2
)

type NotifyUrgency int
type NotifyActionCallback func(*NotifyNotification, string, interface{})

/*
 * Private Functions
 */
func new_notify_notification(cnotif *C.NotifyNotification) *NotifyNotification {
	return &NotifyNotification{cnotif}
}

/*
 * Exported Functions
 */
// Pure Functions
func Init(app_name string) bool {
	papp_name := C.CString(app_name)
	defer C.free(unsafe.Pointer(papp_name))

	return bool(C.notify_init(papp_name) != 0)
}

func UnInit() {
	C.notify_uninit()
}

func IsInitted() bool {
	return C.notify_is_initted() != 0
}

func GetAppName() string {
	return C.GoString(C.notify_get_app_name())
}

func GetServerCaps() *glib.List {
	var gcaps *C.GList

	gcaps = C.notify_get_server_caps()

	return glib.ListFromNative(unsafe.Pointer(gcaps))
}

func GetServerInfo(name, vendor, version, spec_version *string) bool {
	var cname *C.char
	var cvendor *C.char
	var cversion *C.char
	var cspec_version *C.char

	ret := C.notify_get_server_info(&cname,
		&cvendor,
		&cversion,
		&cspec_version) != 0
	*name = C.GoString(cname)
	*vendor = C.GoString(cvendor)
	*version = C.GoString(cversion)
	*spec_version = C.GoString(cspec_version)

	return ret
}

func NotificationNew(title, text, image string) *NotifyNotification {
	ptitle := C.CString(title)
	ptext := C.CString(text)
	pimage := C.CString(image)
	ntext := C.g_utf8_normalize((*C.gchar)(ptext), -1, C.G_NORMALIZE_DEFAULT)
	defer func() {
		C.free(unsafe.Pointer(ptitle))
		C.free(unsafe.Pointer(ptext))
		C.free(unsafe.Pointer(pimage))

		if ntext != nil {
			C.free(unsafe.Pointer(ntext))
		}
	}()

	return new_notify_notification(C.notify_notification_new(ptitle, (*C.char)(ntext), pimage))
}

func NotificationUpdate(notif *NotifyNotification, summary, body, icon string) bool {
	psummary := C.CString(summary)
	pbody := C.CString(body)
	picon := C.CString(icon)
	defer func() {
		C.free(unsafe.Pointer(psummary))
		C.free(unsafe.Pointer(pbody))
		C.free(unsafe.Pointer(picon))
	}()

	return C.notify_notification_update(notif._notification, psummary, pbody, picon) != 0
}

func NotificationShow(notif *NotifyNotification) *glib.Error {
	var err *C.GError
	C.notify_notification_show(notif._notification, &err)

	return glib.ErrorFromNative(unsafe.Pointer(err))
}

func NotificationSetTimeout(notif *NotifyNotification, timeout int32) {
	C.notify_notification_set_timeout(notif._notification, C.gint(timeout))
}

func NotificationSetCategory(notif *NotifyNotification, category string) {
	pcategory := C.CString(category)
	defer C.free(unsafe.Pointer(pcategory))

	C.notify_notification_set_category(notif._notification, pcategory)
}

func NotificationSetUrgency(notif *NotifyNotification, urgency NotifyUrgency) {
	C.notify_notification_set_urgency(notif._notification, C.NotifyUrgency(urgency))
}

func NotificationSetHintInt32(notif *NotifyNotification, key string, value int32) {
	pkey := C.CString(key)
	defer C.free(unsafe.Pointer(pkey))

	C.notify_notification_set_hint_int32(notif._notification, pkey, C.gint(value))
}

func NotificationSetHintDouble(notif *NotifyNotification, key string, value float64) {
	pkey := C.CString(key)
	defer C.free(unsafe.Pointer(pkey))

	C.notify_notification_set_hint_double(notif._notification, pkey, C.gdouble(value))
}

func NotificationSetHintString(notif *NotifyNotification, key string, value string) {
	pkey := C.CString(key)
	pvalue := C.CString(value)
	defer func() {
		C.free(unsafe.Pointer(pkey))
		C.free(unsafe.Pointer(pvalue))
	}()

	C.notify_notification_set_hint_string(notif._notification, pkey, pvalue)
}

func NotificationSetHintByte(notif *NotifyNotification, key string, value byte) {
	pkey := C.CString(key)
	defer C.free(unsafe.Pointer(pkey))

	C.notify_notification_set_hint_byte(notif._notification, pkey, C.guchar(value))
}

// FIXME: implement
func NotificationSetHintByteArray(notif *NotifyNotification, key string, value []byte, len uint32) {
	pkey := C.CString(key)
	defer C.free(unsafe.Pointer(pkey))

	// C.notify_notification_set_hint_byte_array(notif._notification, pkey, (*C.guchar)(value), C.gsize(len))
}

func NotificationSetHint(notif *NotifyNotification, key string, value interface{}) {
	switch value.(type) {
	case int32:
		NotificationSetHintInt32(notif, key, value.(int32))
	case float64:
		NotificationSetHintDouble(notif, key, value.(float64))
	case string:
		NotificationSetHintString(notif, key, value.(string))
	case byte:
		NotificationSetHintByte(notif, key, value.(byte))
	}
}

func NotificationClearHints(notif *NotifyNotification) {
	C.notify_notification_clear_hints(notif._notification)
}

// FIXME: the C function is supposed to be allowing the user to pass another function than free
func NotificationAddAction(notif *NotifyNotification, action, label string, callback NotifyActionCallback, user_data interface{}) {
	// C.notify_notification_add_action(notif._notification, paction, plabel, (C.NotifyActionCallback)(callback), user_data, C.free)
}

func NotificationClearActions(notif *NotifyNotification) {
	C.notify_notification_clear_actions(notif._notification)
}

func NotificationClose(notif *NotifyNotification) *glib.Error {
	var err *C.GError

	C.notify_notification_close(notif._notification, &err)

	return glib.ErrorFromNative(unsafe.Pointer(err))
}

// Member Functions
func (this *NotifyNotification) Update(summary, body, icon string) bool {
	return NotificationUpdate(this, summary, body, icon)
}

func (this *NotifyNotification) Show() *glib.Error {
	return NotificationShow(this)
}

func (this *NotifyNotification) SetTimeout(timeout int32) {
	NotificationSetTimeout(this, timeout)
}

func (this *NotifyNotification) SetCategory(category string) {
	NotificationSetCategory(this, category)
}

func (this *NotifyNotification) SetUrgency(urgency NotifyUrgency) {
	NotificationSetUrgency(this, urgency)
}

func (this *NotifyNotification) SetHintInt32(key string, value int32) {
	NotificationSetHintInt32(this, key, value)
}

func (this *NotifyNotification) SetHintDouble(key string, value float64) {
	NotificationSetHintDouble(this, key, value)
}

func (this *NotifyNotification) SetHintString(key string, value string) {
	NotificationSetHintString(this, key, value)
}

func (this *NotifyNotification) SetHintByte(key string, value byte) {
	NotificationSetHintByte(this, key, value)
}

func (this *NotifyNotification) SetHintByteArray(key string, value []byte, len uint32) {
	NotificationSetHintByteArray(this, key, value, len)
}

func (this *NotifyNotification) SetHint(key string, value interface{}) {
	NotificationSetHint(this, key, value)
}

func (this *NotifyNotification) ClearHints() {
	NotificationClearHints(this)
}

func (this *NotifyNotification) AddAction(action, label string, callback NotifyActionCallback, user_data interface{}) {
	NotificationAddAction(this, action, label, callback, user_data)
}

func (this *NotifyNotification) ClearActions() {
	NotificationClearActions(this)
}

func (this *NotifyNotification) Close() *glib.Error {
	return NotificationClose(this)
}
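NotificationSetHint above dispatches on the value's dynamic type; only int32, float64, string and byte have cases, so any other type is silently dropped. A small hedged sketch of what that means for callers of the SetHint member function; the hint keys are illustrative only, not a list of what notification daemons actually accept:

```go
package main

import "github.com/mqu/go-notify"

func main() {
	notify.Init("hint-demo")
	defer notify.UnInit()

	n := notify.NotificationNew("Demo", "hint dispatch", "")

	// Each call goes through NotificationSetHint's type switch:
	n.SetHint("value", int32(75))   // -> NotificationSetHintInt32
	n.SetHint("progress", 0.75)     // untyped float -> float64 -> SetHintDouble
	n.SetHint("category", "volume") // -> NotificationSetHintString
	n.SetHint("urgency", byte(2))   // -> NotificationSetHintByte
	n.SetHint("transient", true)    // bool has no case: silently ignored

	n.Show()
}
```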
vendor/github.com/pjbgf/sha1cd/Dockerfile.arm (23 lines, generated, vendored, Normal file)
@@ -0,0 +1,23 @@
FROM golang:1.20@sha256:2edf6aab2d57644f3fe7407132a0d1770846867465a39c2083770cf62734b05d

ENV GOOS=linux
ENV GOARCH=arm
ENV CGO_ENABLED=1
ENV CC=arm-linux-gnueabihf-gcc
ENV PATH="/go/bin/${GOOS}_${GOARCH}:${PATH}"
ENV PKG_CONFIG_PATH=/usr/lib/arm-linux-gnueabihf/pkgconfig

RUN dpkg --add-architecture armhf \
    && apt update \
    && apt install -y --no-install-recommends \
        upx \
        gcc-arm-linux-gnueabihf \
        libc6-dev-armhf-cross \
        pkg-config \
    && rm -rf /var/lib/apt/lists/*

COPY . /src/workdir

WORKDIR /src/workdir

RUN go build ./...
vendor/github.com/pjbgf/sha1cd/Dockerfile.arm64 (23 lines, generated, vendored, Normal file)
@@ -0,0 +1,23 @@
FROM golang:1.20@sha256:2edf6aab2d57644f3fe7407132a0d1770846867465a39c2083770cf62734b05d

ENV GOOS=linux
ENV GOARCH=arm64
ENV CGO_ENABLED=1
ENV CC=aarch64-linux-gnu-gcc
ENV PATH="/go/bin/${GOOS}_${GOARCH}:${PATH}"
ENV PKG_CONFIG_PATH=/usr/lib/aarch64-linux-gnu/pkgconfig

# install build & runtime dependencies
RUN dpkg --add-architecture arm64 \
    && apt update \
    && apt install -y --no-install-recommends \
        gcc-aarch64-linux-gnu \
        libc6-dev-arm64-cross \
        pkg-config \
    && rm -rf /var/lib/apt/lists/*

COPY . /src/workdir

WORKDIR /src/workdir

RUN go build ./...
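Both Dockerfiles only prepare a cgo cross-compilation environment (CGO_ENABLED, CC, GOARCH, the armhf/aarch64 toolchains and pkg-config paths) and then run `go build ./...` over the sha1cd sources copied into /src/workdir. For orientation, sha1cd is meant to be used like crypto/sha1; the sketch below assumes the package exposes a `New()` constructor returning a `hash.Hash`, which is not shown in this diff:

```go
package main

import (
	"fmt"

	"github.com/pjbgf/sha1cd" // assumed API: New() hash.Hash, mirroring crypto/sha1
)

func main() {
	h := sha1cd.New()
	h.Write([]byte("hello world"))
	// Sum appends the digest; collision detection runs internally while hashing.
	fmt.Printf("%x\n", h.Sum(nil))
}
```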
vendor/github.com/pjbgf/sha1cd/LICENSE (201 lines, generated, vendored, Normal file)
@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Some files were not shown because too many files have changed in this diff.