add vendor

ginuerzh 2018-12-26 20:24:41 +08:00
parent bb1afcc437
commit 02f1d099c4
1153 changed files with 381523 additions and 12 deletions


@@ -1,21 +1,13 @@
-FROM golang:1-alpine as builder
-RUN apk add --no-cache musl-dev git gcc
-ADD . /data
-WORKDIR /data
-ENV GO111MODULE=on
-RUN cd cmd/gost && go build
+FROM golang:alpine as builder
+ADD . /go/src/github.com/ginuerzh/gost/
+RUN go install github.com/ginuerzh/gost/cmd/gost
 FROM alpine:latest
 WORKDIR /bin/
-COPY --from=builder /data/cmd/gost/gost .
-RUN /bin/gost -V
+COPY --from=builder /go/bin/gost .
 ENTRYPOINT ["/bin/gost"]


@@ -0,0 +1,2 @@
/examples/dummy-client/dummy-client
/examples/dummy-server/dummy-server


@@ -0,0 +1,121 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.


@@ -0,0 +1,27 @@
goptlib is a library for writing Tor pluggable transports in Go.
https://spec.torproject.org/pt-spec
https://gitweb.torproject.org/torspec.git/tree/proposals/196-transport-control-ports.txt
https://gitweb.torproject.org/torspec.git/tree/proposals/217-ext-orport-auth.txt
https://gitweb.torproject.org/torspec.git/tree/proposals/232-pluggable-transports-through-proxy.txt
To download a copy of the library into $GOPATH:
go get git.torproject.org/pluggable-transports/goptlib.git
See the included example programs for examples of how to use the
library. To build them, enter their directory and run "go build".
examples/dummy-client/dummy-client.go
examples/dummy-server/dummy-server.go
The recommended way to start writing a new transport plugin is to copy
dummy-client or dummy-server and make changes to it.
There is browseable documentation here:
https://godoc.org/git.torproject.org/pluggable-transports/goptlib.git
Report bugs to the tor-dev@lists.torproject.org mailing list or to the
bug tracker at https://trac.torproject.org/projects/tor.
To the extent possible under law, the authors have dedicated all
copyright and related and neighboring rights to this software to the
public domain worldwide. This software is distributed without any
warranty. See COPYING.
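
Before the library source below, a compressed sketch of the client-side lifecycle it manages (an illustration, not part of the vendored README; the transport name "demo" is a placeholder, and the complete client and server samples appear in the pt package documentation further down):

package main

import (
    "os"

    pt "git.torproject.org/pluggable-transports/goptlib.git"
)

func main() {
    // 1. Negotiate with tor via the TOR_PT_* environment variables and stdout.
    info, err := pt.ClientSetup(nil)
    if err != nil {
        os.Exit(1)
    }
    // 2. Open one local SOCKS listener per supported transport and announce it.
    for _, name := range info.MethodNames {
        if name != "demo" {
            pt.CmethodError(name, "no such method")
            continue
        }
        ln, err := pt.ListenSocks("tcp", "127.0.0.1:0")
        if err != nil {
            pt.CmethodError(name, err.Error())
            continue
        }
        pt.Cmethod(name, ln.Version(), ln.Addr())
        // 3. Accept connections on ln and carry them over the obfuscated
        //    transport (see the dummy-client example referenced above).
    }
    pt.CmethodsDone()
    // 4. Keep running; a real plugin exits when tor closes stdin or signals it.
    select {}
}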


@@ -0,0 +1,219 @@
package pt
import (
"bytes"
"fmt"
"sort"
"strings"
)
// Key-value mappings for the representation of client and server options.
// Args maps a string key to a list of values. It is similar to url.Values.
type Args map[string][]string
// Get the first value associated with the given key. If there are any values
// associated with the key, value holds the first of them and ok is set to
// true. If there are no values for the given key, value is "" and ok is false.
// If you need access to multiple values, use the map directly.
func (args Args) Get(key string) (value string, ok bool) {
if args == nil {
return "", false
}
vals, ok := args[key]
if !ok || len(vals) == 0 {
return "", false
}
return vals[0], true
}
// Append value to the list of values for key.
func (args Args) Add(key, value string) {
args[key] = append(args[key], value)
}
// Return the index of the next unescaped byte in s that is in the term set, or
// else the length of the string if no terminators appear. Additionally return
// the unescaped string up to the returned index.
func indexUnescaped(s string, term []byte) (int, string, error) {
var i int
unesc := make([]byte, 0)
for i = 0; i < len(s); i++ {
b := s[i]
// A terminator byte?
if bytes.IndexByte(term, b) != -1 {
break
}
if b == '\\' {
i++
if i >= len(s) {
return 0, "", fmt.Errorf("nothing following final escape in %q", s)
}
b = s[i]
}
unesc = append(unesc, b)
}
return i, string(unesc), nil
}
// Parse a name-value mapping as from an encoded SOCKS username/password.
//
// "First the '<Key>=<Value>' formatted arguments MUST be escaped, such that all
// backslash, equal sign, and semicolon characters are escaped with a
// backslash."
func parseClientParameters(s string) (args Args, err error) {
args = make(Args)
if len(s) == 0 {
return
}
i := 0
for {
var key, value string
var offset, begin int
begin = i
// Read the key.
offset, key, err = indexUnescaped(s[i:], []byte{'=', ';'})
if err != nil {
return
}
i += offset
// End of string or no equals sign?
if i >= len(s) || s[i] != '=' {
err = fmt.Errorf("no equals sign in %q", s[begin:i])
return
}
// Skip the equals sign.
i++
// Read the value.
offset, value, err = indexUnescaped(s[i:], []byte{';'})
if err != nil {
return
}
i += offset
if len(key) == 0 {
err = fmt.Errorf("empty key in %q", s[begin:i])
return
}
args.Add(key, value)
if i >= len(s) {
break
}
// Skip the semicolon.
i++
}
return args, nil
}
// Parse a transport-name-value mapping as from TOR_PT_SERVER_TRANSPORT_OPTIONS.
//
// "...a semicolon-separated list of <key>:<value> pairs, where <key> is a PT
// name and <value> is a k=v string value with options that are to be passed to
// the transport. Colons, semicolons, equal signs and backslashes must be
// escaped with a backslash."
// Example: scramblesuit:key=banana;automata:rule=110;automata:depth=3
func parseServerTransportOptions(s string) (opts map[string]Args, err error) {
opts = make(map[string]Args)
if len(s) == 0 {
return
}
i := 0
for {
var methodName, key, value string
var offset, begin int
begin = i
// Read the method name.
offset, methodName, err = indexUnescaped(s[i:], []byte{':', '=', ';'})
if err != nil {
return
}
i += offset
// End of string or no colon?
if i >= len(s) || s[i] != ':' {
err = fmt.Errorf("no colon in %q", s[begin:i])
return
}
// Skip the colon.
i++
// Read the key.
offset, key, err = indexUnescaped(s[i:], []byte{'=', ';'})
if err != nil {
return
}
i += offset
// End of string or no equals sign?
if i >= len(s) || s[i] != '=' {
err = fmt.Errorf("no equals sign in %q", s[begin:i])
return
}
// Skip the equals sign.
i++
// Read the value.
offset, value, err = indexUnescaped(s[i:], []byte{';'})
if err != nil {
return
}
i += offset
if len(methodName) == 0 {
err = fmt.Errorf("empty method name in %q", s[begin:i])
return
}
if len(key) == 0 {
err = fmt.Errorf("empty key in %q", s[begin:i])
return
}
if opts[methodName] == nil {
opts[methodName] = make(Args)
}
opts[methodName].Add(key, value)
if i >= len(s) {
break
}
// Skip the semicolon.
i++
}
return opts, nil
}
// Escape backslashes and all the bytes that are in set.
func backslashEscape(s string, set []byte) string {
var buf bytes.Buffer
for _, b := range []byte(s) {
if b == '\\' || bytes.IndexByte(set, b) != -1 {
buf.WriteByte('\\')
}
buf.WriteByte(b)
}
return buf.String()
}
// Encode a name-value mapping so that it is suitable to go in the ARGS option
// of an SMETHOD line. The output is sorted by key. The "ARGS:" prefix is not
// added.
//
// "Equal signs and commas [and backslashes] MUST be escaped with a backslash."
func encodeSmethodArgs(args Args) string {
if args == nil {
return ""
}
keys := make([]string, 0, len(args))
for key := range args {
keys = append(keys, key)
}
sort.Strings(keys)
escape := func(s string) string {
return backslashEscape(s, []byte{'=', ','})
}
var pairs []string
for _, key := range keys {
for _, value := range args[key] {
pairs = append(pairs, escape(key)+"="+escape(value))
}
}
return strings.Join(pairs, ",")
}
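
To make the encodings handled above concrete, a small illustration (not part of the vendored file; the keys and addresses are invented, with "shared-secret" borrowed from the SmethodArgs example later in this diff):

package main

import (
    "fmt"

    pt "git.torproject.org/pluggable-transports/goptlib.git"
)

func main() {
    // Client side: tor encodes per-connection options into the SOCKS
    // username/password as backslash-escaped "key=value" pairs separated
    // by ';', e.g. "shared-secret=foo;addr=198.51.100.3:8080". goptlib
    // parses that string and exposes it as SocksConn.Req.Args.
    //
    // Server side: TOR_PT_SERVER_TRANSPORT_OPTIONS looks like
    // "scramblesuit:key=banana;automata:rule=110;automata:depth=3", and the
    // per-transport Args end up in Bindaddr.Options.
    args := pt.Args{}
    args.Add("shared-secret", "foo")
    args.Add("addr", "198.51.100.3:8080")

    if secret, ok := args.Get("shared-secret"); ok {
        fmt.Println("shared-secret =", secret)
    }
    // Multiple values per key are allowed; access them through the map.
    fmt.Println(args["addr"])
}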


@@ -0,0 +1,949 @@
// Package pt implements the Tor pluggable transports specification.
//
// Sample client usage:
// var ptInfo pt.ClientInfo
// ...
// func handler(conn *pt.SocksConn) error {
// defer conn.Close()
// remote, err := net.Dial("tcp", conn.Req.Target)
// if err != nil {
// conn.Reject()
// return err
// }
// defer remote.Close()
// err = conn.Grant(remote.RemoteAddr().(*net.TCPAddr))
// if err != nil {
// return err
// }
// // do something with conn and remote.
// return nil
// }
// func acceptLoop(ln *pt.SocksListener) error {
// defer ln.Close()
// for {
// conn, err := ln.AcceptSocks()
// if err != nil {
// if e, ok := err.(net.Error); ok && e.Temporary() {
// continue
// }
// return err
// }
// go handler(conn)
// }
// return nil
// }
// ...
// func main() {
// var err error
// ptInfo, err = pt.ClientSetup(nil)
// if err != nil {
// os.Exit(1)
// }
// if ptInfo.ProxyURL != nil {
// // you need to interpret the proxy URL yourself
// // call pt.ProxyDone instead if it's a type you understand
// pt.ProxyError(fmt.Sprintf("proxy %s is not supported", ptInfo.ProxyURL))
// os.Exit(1)
// }
// for _, methodName := range ptInfo.MethodNames {
// switch methodName {
// case "foo":
// ln, err := pt.ListenSocks("tcp", "127.0.0.1:0")
// if err != nil {
// pt.CmethodError(methodName, err.Error())
// break
// }
// go acceptLoop(ln)
// pt.Cmethod(methodName, ln.Version(), ln.Addr())
// default:
// pt.CmethodError(methodName, "no such method")
// }
// }
// pt.CmethodsDone()
// }
//
// Sample server usage:
// var ptInfo pt.ServerInfo
// ...
// func handler(conn net.Conn) error {
// defer conn.Close()
// or, err := pt.DialOr(&ptInfo, conn.RemoteAddr().String(), "foo")
// if err != nil {
// return err
// }
// defer or.Close()
// // do something with or and conn
// return nil
// }
// func acceptLoop(ln net.Listener) error {
// defer ln.Close()
// for {
// conn, err := ln.Accept()
// if err != nil {
// if e, ok := err.(net.Error); ok && e.Temporary() {
// continue
// }
// return err
// }
// go handler(conn)
// }
// return nil
// }
// ...
// func main() {
// var err error
// ptInfo, err = pt.ServerSetup(nil)
// if err != nil {
// os.Exit(1)
// }
// for _, bindaddr := range ptInfo.Bindaddrs {
// switch bindaddr.MethodName {
// case "foo":
// ln, err := net.ListenTCP("tcp", bindaddr.Addr)
// if err != nil {
// pt.SmethodError(bindaddr.MethodName, err.Error())
// break
// }
// go acceptLoop(ln)
// pt.Smethod(bindaddr.MethodName, ln.Addr())
// default:
// pt.SmethodError(bindaddr.MethodName, "no such method")
// }
// }
// pt.SmethodsDone()
// }
//
// Some additional care is needed to handle signals and shutdown properly. See
// the example programs dummy-client and dummy-server.
//
// Tor pluggable transports specification:
// https://spec.torproject.org/pt-spec
//
// Extended ORPort:
// https://gitweb.torproject.org/torspec.git/tree/proposals/196-transport-control-ports.txt
//
// Extended ORPort Authentication:
// https://gitweb.torproject.org/torspec.git/tree/proposals/217-ext-orport-auth.txt
//
// Pluggable Transport through SOCKS proxy:
// https://gitweb.torproject.org/torspec.git/tree/proposals/232-pluggable-transports-through-proxy.txt
//
// The package implements a SOCKS5 server sufficient for a Tor client transport
// plugin.
//
// https://www.ietf.org/rfc/rfc1928.txt
// https://www.ietf.org/rfc/rfc1929.txt
package pt
import (
"bytes"
"crypto/hmac"
"crypto/rand"
"crypto/sha256"
"crypto/subtle"
"encoding/binary"
"fmt"
"io"
"net"
"net/url"
"os"
"strconv"
"strings"
"time"
)
// This type wraps a Write method and calls Sync after each Write.
type syncWriter struct {
*os.File
}
// Call File.Write and then Sync. An error is returned if either operation
// returns an error.
func (w syncWriter) Write(p []byte) (n int, err error) {
n, err = w.File.Write(p)
if err != nil {
return
}
err = w.Sync()
return
}
// Writer to which pluggable transports negotiation messages are written. It
// defaults to a Writer that writes to os.Stdout and calls Sync after each
// write.
//
// You may, for example, log pluggable transports messages by defining a Writer
// that logs what is written to it:
// type logWriteWrapper struct {
// io.Writer
// }
//
// func (w logWriteWrapper) Write(p []byte) (int, error) {
// log.Print(string(p))
// return w.Writer.Write(p)
// }
// and then redefining Stdout:
// pt.Stdout = logWriteWrapper{pt.Stdout}
var Stdout io.Writer = syncWriter{os.Stdout}
// Represents an error that can happen during negotiation, for example
// ENV-ERROR. When an error occurs, we print it to stdout and also pass it up
// the return chain.
type ptErr struct {
Keyword string
Args []string
}
// Implements the error interface.
func (err *ptErr) Error() string {
return formatline(err.Keyword, err.Args...)
}
func getenv(key string) string {
return os.Getenv(key)
}
// Returns an ENV-ERROR if the environment variable isn't set.
func getenvRequired(key string) (string, error) {
value := os.Getenv(key)
if value == "" {
return "", envError(fmt.Sprintf("no %s environment variable", key))
}
return value, nil
}
// Returns true iff keyword contains only bytes allowed in a PT→Tor output line
// keyword.
// <KeywordChar> ::= <any US-ASCII alphanumeric, dash, and underscore>
func keywordIsSafe(keyword string) bool {
for _, b := range []byte(keyword) {
switch {
case '0' <= b && b <= '9':
continue
case 'A' <= b && b <= 'Z':
continue
case 'a' <= b && b <= 'z':
continue
case b == '-' || b == '_':
continue
default:
return false
}
}
return true
}
// Returns true iff arg contains only bytes allowed in a PT→Tor output line arg.
// <ArgChar> ::= <any US-ASCII character but NUL or NL>
func argIsSafe(arg string) bool {
for _, b := range []byte(arg) {
if b >= '\x80' || b == '\x00' || b == '\n' {
return false
}
}
return true
}
func formatline(keyword string, v ...string) string {
var buf bytes.Buffer
if !keywordIsSafe(keyword) {
panic(fmt.Sprintf("keyword %q contains forbidden bytes", keyword))
}
buf.WriteString(keyword)
for _, x := range v {
if !argIsSafe(x) {
panic(fmt.Sprintf("arg %q contains forbidden bytes", x))
}
buf.WriteString(" " + x)
}
return buf.String()
}
// Print a pluggable transports protocol line to Stdout. The line consists of a
// keyword followed by any number of space-separated arg strings. Panics if
// there are forbidden bytes in the keyword or the args (pt-spec.txt 2.2.1).
func line(keyword string, v ...string) {
fmt.Fprintln(Stdout, formatline(keyword, v...))
}
// Emit and return the given error as a ptErr.
func doError(keyword string, v ...string) *ptErr {
line(keyword, v...)
return &ptErr{keyword, v}
}
// Emit an ENV-ERROR line with explanation text. Returns a representation of the
// error.
func envError(msg string) error {
return doError("ENV-ERROR", msg)
}
// Emit a VERSION-ERROR line with explanation text. Returns a representation of
// the error.
func versionError(msg string) error {
return doError("VERSION-ERROR", msg)
}
// Emit a CMETHOD-ERROR line with explanation text. Returns a representation of
// the error.
func CmethodError(methodName, msg string) error {
return doError("CMETHOD-ERROR", methodName, msg)
}
// Emit an SMETHOD-ERROR line with explanation text. Returns a representation of
// the error.
func SmethodError(methodName, msg string) error {
return doError("SMETHOD-ERROR", methodName, msg)
}
// Emit a PROXY-ERROR line with explanation text. Returns a representation of
// the error.
func ProxyError(msg string) error {
return doError("PROXY-ERROR", msg)
}
// Emit a CMETHOD line. socks must be "socks4" or "socks5". Call this once for
// each listening client SOCKS port.
func Cmethod(name string, socks string, addr net.Addr) {
line("CMETHOD", name, socks, addr.String())
}
// Emit a CMETHODS DONE line. Call this after opening all client listeners.
func CmethodsDone() {
line("CMETHODS", "DONE")
}
// Emit an SMETHOD line. Call this once for each listening server port.
func Smethod(name string, addr net.Addr) {
line("SMETHOD", name, addr.String())
}
// Emit an SMETHOD line with an ARGS option. args is a name-value mapping that
// will be added to the server's extrainfo document.
//
// This is an example of how to check for a required option:
// secret, ok := bindaddr.Options.Get("shared-secret")
// if ok {
// args := pt.Args{}
// args.Add("shared-secret", secret)
// pt.SmethodArgs(bindaddr.MethodName, ln.Addr(), args)
// } else {
// pt.SmethodError(bindaddr.MethodName, "need a shared-secret option")
// }
// Or, if you just want to echo back the options provided by Tor from the
// TransportServerOptions configuration,
// pt.SmethodArgs(bindaddr.MethodName, ln.Addr(), bindaddr.Options)
func SmethodArgs(name string, addr net.Addr, args Args) {
line("SMETHOD", name, addr.String(), "ARGS:"+encodeSmethodArgs(args))
}
// Emit an SMETHODS DONE line. Call this after opening all server listeners.
func SmethodsDone() {
line("SMETHODS", "DONE")
}
// Emit a PROXY DONE line. Call this after parsing ClientInfo.ProxyURL.
func ProxyDone() {
fmt.Fprintf(Stdout, "PROXY DONE\n")
}
// Get a pluggable transports version offered by Tor and understood by us, if
// any. The only version we understand is "1". This function reads the
// environment variable TOR_PT_MANAGED_TRANSPORT_VER.
func getManagedTransportVer() (string, error) {
const transportVersion = "1"
managedTransportVer, err := getenvRequired("TOR_PT_MANAGED_TRANSPORT_VER")
if err != nil {
return "", err
}
for _, offered := range strings.Split(managedTransportVer, ",") {
if offered == transportVersion {
return offered, nil
}
}
return "", versionError("no-version")
}
// Return the directory name in the TOR_PT_STATE_LOCATION environment variable,
// creating it if it doesn't exist. Returns non-nil error if
// TOR_PT_STATE_LOCATION is not set or if there is an error creating the
// directory.
func MakeStateDir() (string, error) {
dir, err := getenvRequired("TOR_PT_STATE_LOCATION")
if err != nil {
return "", err
}
err = os.MkdirAll(dir, 0700)
return dir, err
}
// Get the list of method names requested by Tor. This function reads the
// environment variable TOR_PT_CLIENT_TRANSPORTS.
func getClientTransports() ([]string, error) {
clientTransports, err := getenvRequired("TOR_PT_CLIENT_TRANSPORTS")
if err != nil {
return nil, err
}
return strings.Split(clientTransports, ","), nil
}
// Get the upstream proxy URL. Returns nil if no proxy is requested. The
// function ensures that the Scheme and Host fields are set; i.e., that the URL
// is absolute. It additionally checks that the Host field contains both a host
// and a port part. This function reads the environment variable TOR_PT_PROXY.
//
// This function doesn't check that the scheme is one of Tor's supported proxy
// schemes; that is, one of "http", "socks5", or "socks4a". The caller must be
// able to handle any returned scheme (which may be by calling ProxyError if
// it doesn't know how to handle the scheme).
func getProxyURL() (*url.URL, error) {
rawurl := os.Getenv("TOR_PT_PROXY")
if rawurl == "" {
return nil, nil
}
u, err := url.Parse(rawurl)
if err != nil {
return nil, err
}
if u.Scheme == "" {
return nil, fmt.Errorf("missing scheme")
}
if u.Host == "" {
return nil, fmt.Errorf("missing authority")
}
host, port, err := net.SplitHostPort(u.Host)
if err != nil {
return nil, err
}
if host == "" {
return nil, fmt.Errorf("missing host")
}
if port == "" {
return nil, fmt.Errorf("missing port")
}
return u, nil
}
// This structure is returned by ClientSetup. It consists of a list of method
// names and the upstream proxy URL, if any.
type ClientInfo struct {
MethodNames []string
ProxyURL *url.URL
}
// Check the client pluggable transports environment, emitting an error message
// and returning a non-nil error if any error is encountered. Returns a
// ClientInfo struct.
//
// If your program needs to know whether to call ClientSetup or ServerSetup
// (i.e., if the same program can be run as either a client or a server), check
// whether the TOR_PT_CLIENT_TRANSPORTS environment variable is set:
// if os.Getenv("TOR_PT_CLIENT_TRANSPORTS") != "" {
// // Client mode; call pt.ClientSetup.
// } else {
// // Server mode; call pt.ServerSetup.
// }
//
// Always pass nil for the unused single parameter. In the past, the parameter
// was a list of transport names to use in case Tor requested "*". That feature
// was never implemented and has been removed from the pluggable transports
// specification.
// https://bugs.torproject.org/15612
func ClientSetup(_ []string) (info ClientInfo, err error) {
ver, err := getManagedTransportVer()
if err != nil {
return
}
line("VERSION", ver)
info.MethodNames, err = getClientTransports()
if err != nil {
return
}
info.ProxyURL, err = getProxyURL()
if err != nil {
return
}
return info, nil
}
// A combination of a method name and an address, as extracted from
// TOR_PT_SERVER_BINDADDR.
type Bindaddr struct {
MethodName string
Addr *net.TCPAddr
// Options from TOR_PT_SERVER_TRANSPORT_OPTIONS that pertain to this
// transport.
Options Args
}
func parsePort(portStr string) (int, error) {
port, err := strconv.ParseUint(portStr, 10, 16)
return int(port), err
}
// Resolve an address string into a net.TCPAddr. We are a bit more strict than
// net.ResolveTCPAddr; we don't allow an empty host or port, and the host part
// must be a literal IP address.
func resolveAddr(addrStr string) (*net.TCPAddr, error) {
ipStr, portStr, err := net.SplitHostPort(addrStr)
if err != nil {
// Before the fixing of bug #7011, tor doesn't put brackets around IPv6
// addresses. Split after the last colon, assuming it is a port
// separator, and try adding the brackets.
// https://bugs.torproject.org/7011
parts := strings.Split(addrStr, ":")
if len(parts) <= 2 {
return nil, err
}
addrStr := "[" + strings.Join(parts[:len(parts)-1], ":") + "]:" + parts[len(parts)-1]
ipStr, portStr, err = net.SplitHostPort(addrStr)
}
if err != nil {
return nil, err
}
if ipStr == "" {
return nil, net.InvalidAddrError(fmt.Sprintf("address string %q lacks a host part", addrStr))
}
if portStr == "" {
return nil, net.InvalidAddrError(fmt.Sprintf("address string %q lacks a port part", addrStr))
}
ip := net.ParseIP(ipStr)
if ip == nil {
return nil, net.InvalidAddrError(fmt.Sprintf("not an IP string: %q", ipStr))
}
port, err := parsePort(portStr)
if err != nil {
return nil, err
}
return &net.TCPAddr{IP: ip, Port: port}, nil
}
// Return a new slice, the members of which are those members of addrs having a
// MethodName in methodNames.
func filterBindaddrs(addrs []Bindaddr, methodNames []string) []Bindaddr {
var result []Bindaddr
for _, ba := range addrs {
for _, methodName := range methodNames {
if ba.MethodName == methodName {
result = append(result, ba)
break
}
}
}
return result
}
// Return an array of Bindaddrs, being the contents of TOR_PT_SERVER_BINDADDR
// with keys filtered by TOR_PT_SERVER_TRANSPORTS. Transport-specific options
// from TOR_PT_SERVER_TRANSPORT_OPTIONS are assigned to the Options member.
func getServerBindaddrs() ([]Bindaddr, error) {
var result []Bindaddr
// Parse the list of server transport options.
serverTransportOptions := getenv("TOR_PT_SERVER_TRANSPORT_OPTIONS")
optionsMap, err := parseServerTransportOptions(serverTransportOptions)
if err != nil {
return nil, envError(fmt.Sprintf("TOR_PT_SERVER_TRANSPORT_OPTIONS: %q: %s", serverTransportOptions, err.Error()))
}
// Get the list of all requested bindaddrs.
serverBindaddr, err := getenvRequired("TOR_PT_SERVER_BINDADDR")
if err != nil {
return nil, err
}
seenMethods := make(map[string]bool)
for _, spec := range strings.Split(serverBindaddr, ",") {
var bindaddr Bindaddr
parts := strings.SplitN(spec, "-", 2)
if len(parts) != 2 {
return nil, envError(fmt.Sprintf("TOR_PT_SERVER_BINDADDR: %q: doesn't contain \"-\"", spec))
}
bindaddr.MethodName = parts[0]
// Check for duplicate method names: "Applications MUST NOT set
// more than one <address>:<port> pair per PT name."
if seenMethods[bindaddr.MethodName] {
return nil, envError(fmt.Sprintf("TOR_PT_SERVER_BINDADDR: %q: duplicate method name %q", spec, bindaddr.MethodName))
}
seenMethods[bindaddr.MethodName] = true
addr, err := resolveAddr(parts[1])
if err != nil {
return nil, envError(fmt.Sprintf("TOR_PT_SERVER_BINDADDR: %q: %s", spec, err.Error()))
}
bindaddr.Addr = addr
bindaddr.Options = optionsMap[bindaddr.MethodName]
result = append(result, bindaddr)
}
// Filter by TOR_PT_SERVER_TRANSPORTS.
serverTransports, err := getenvRequired("TOR_PT_SERVER_TRANSPORTS")
if err != nil {
return nil, err
}
result = filterBindaddrs(result, strings.Split(serverTransports, ","))
return result, nil
}
func readAuthCookie(f io.Reader) ([]byte, error) {
authCookieHeader := []byte("! Extended ORPort Auth Cookie !\x0a")
buf := make([]byte, 64)
n, err := io.ReadFull(f, buf)
if err != nil {
return nil, err
}
// Check that the file ends here.
n, err = f.Read(make([]byte, 1))
if n != 0 {
return nil, fmt.Errorf("file is longer than 64 bytes")
} else if err != io.EOF {
return nil, fmt.Errorf("did not find EOF at end of file")
}
header := buf[0:32]
cookie := buf[32:64]
if subtle.ConstantTimeCompare(header, authCookieHeader) != 1 {
return nil, fmt.Errorf("missing auth cookie header")
}
return cookie, nil
}
// Read and validate the contents of an auth cookie file. Returns the 32-byte
// cookie. See section 4.2.1.2 of 217-ext-orport-auth.txt.
func readAuthCookieFile(filename string) ([]byte, error) {
f, err := os.Open(filename)
if err != nil {
return nil, err
}
defer f.Close()
return readAuthCookie(f)
}
// This structure is returned by ServerSetup. It consists of a list of
// Bindaddrs, an address for the ORPort, an address for the extended ORPort (if
// any), and an authentication cookie (if any).
type ServerInfo struct {
Bindaddrs []Bindaddr
OrAddr *net.TCPAddr
ExtendedOrAddr *net.TCPAddr
AuthCookiePath string
}
// Check the server pluggable transports environment, emitting an error message
// and returning a non-nil error if any error is encountered. Resolves the
// various requested bind addresses, the server ORPort and extended ORPort, and
// reads the auth cookie file. Returns a ServerInfo struct.
//
// If your program needs to know whether to call ClientSetup or ServerSetup
// (i.e., if the same program can be run as either a client or a server), check
// whether the TOR_PT_CLIENT_TRANSPORTS environment variable is set:
// if os.Getenv("TOR_PT_CLIENT_TRANSPORTS") != "" {
// // Client mode; call pt.ClientSetup.
// } else {
// // Server mode; call pt.ServerSetup.
// }
//
// Always pass nil for the unused single parameter. In the past, the parameter
// was a list of transport names to use in case Tor requested "*". That feature
// was never implemented and has been removed from the pluggable transports
// specification.
// https://bugs.torproject.org/15612
func ServerSetup(_ []string) (info ServerInfo, err error) {
ver, err := getManagedTransportVer()
if err != nil {
return
}
line("VERSION", ver)
info.Bindaddrs, err = getServerBindaddrs()
if err != nil {
return
}
orPort := getenv("TOR_PT_ORPORT")
if orPort != "" {
info.OrAddr, err = resolveAddr(orPort)
if err != nil {
err = envError(fmt.Sprintf("cannot resolve TOR_PT_ORPORT %q: %s", orPort, err.Error()))
return
}
}
info.AuthCookiePath = getenv("TOR_PT_AUTH_COOKIE_FILE")
extendedOrPort := getenv("TOR_PT_EXTENDED_SERVER_PORT")
if extendedOrPort != "" {
if info.AuthCookiePath == "" {
err = envError("need TOR_PT_AUTH_COOKIE_FILE environment variable with TOR_PT_EXTENDED_SERVER_PORT")
return
}
info.ExtendedOrAddr, err = resolveAddr(extendedOrPort)
if err != nil {
err = envError(fmt.Sprintf("cannot resolve TOR_PT_EXTENDED_SERVER_PORT %q: %s", extendedOrPort, err.Error()))
return
}
}
// Need either OrAddr or ExtendedOrAddr.
if info.OrAddr == nil && info.ExtendedOrAddr == nil {
err = envError("need TOR_PT_ORPORT or TOR_PT_EXTENDED_SERVER_PORT environment variable")
return
}
return info, nil
}
// See 217-ext-orport-auth.txt section 4.2.1.3.
func computeServerHash(authCookie, clientNonce, serverNonce []byte) []byte {
h := hmac.New(sha256.New, authCookie)
io.WriteString(h, "ExtORPort authentication server-to-client hash")
h.Write(clientNonce)
h.Write(serverNonce)
return h.Sum([]byte{})
}
// See 217-ext-orport-auth.txt section 4.2.1.3.
func computeClientHash(authCookie, clientNonce, serverNonce []byte) []byte {
h := hmac.New(sha256.New, authCookie)
io.WriteString(h, "ExtORPort authentication client-to-server hash")
h.Write(clientNonce)
h.Write(serverNonce)
return h.Sum([]byte{})
}
func extOrPortAuthenticate(s io.ReadWriter, info *ServerInfo) error {
// Read auth types. 217-ext-orport-auth.txt section 4.1.
var authTypes [256]bool
var count int
for count = 0; count < 256; count++ {
buf := make([]byte, 1)
_, err := io.ReadFull(s, buf)
if err != nil {
return err
}
b := buf[0]
if b == 0 {
break
}
authTypes[b] = true
}
if count >= 256 {
return fmt.Errorf("read 256 auth types without seeing \\x00")
}
// We support only type 1, SAFE_COOKIE.
if !authTypes[1] {
return fmt.Errorf("server didn't offer auth type 1")
}
_, err := s.Write([]byte{1})
if err != nil {
return err
}
clientNonce := make([]byte, 32)
clientHash := make([]byte, 32)
serverNonce := make([]byte, 32)
serverHash := make([]byte, 32)
_, err = io.ReadFull(rand.Reader, clientNonce)
if err != nil {
return err
}
_, err = s.Write(clientNonce)
if err != nil {
return err
}
_, err = io.ReadFull(s, serverHash)
if err != nil {
return err
}
_, err = io.ReadFull(s, serverNonce)
if err != nil {
return err
}
// Work around tor bug #15240 where the auth cookie is generated after
// pluggable transports are launched, leading to a stale cookie getting
// cached forever if it is only read once as part of ServerSetup.
// https://bugs.torproject.org/15240
authCookie, err := readAuthCookieFile(info.AuthCookiePath)
if err != nil {
return fmt.Errorf("error reading TOR_PT_AUTH_COOKIE_FILE %q: %s", info.AuthCookiePath, err.Error())
}
expectedServerHash := computeServerHash(authCookie, clientNonce, serverNonce)
if subtle.ConstantTimeCompare(serverHash, expectedServerHash) != 1 {
return fmt.Errorf("mismatch in server hash")
}
clientHash = computeClientHash(authCookie, clientNonce, serverNonce)
_, err = s.Write(clientHash)
if err != nil {
return err
}
status := make([]byte, 1)
_, err = io.ReadFull(s, status)
if err != nil {
return err
}
if status[0] != 1 {
return fmt.Errorf("server rejected authentication")
}
return nil
}
// See section 3.1.1 of 196-transport-control-ports.txt.
const (
extOrCmdDone = 0x0000
extOrCmdUserAddr = 0x0001
extOrCmdTransport = 0x0002
extOrCmdOkay = 0x1000
extOrCmdDeny = 0x1001
)
func extOrPortSendCommand(s io.Writer, cmd uint16, body []byte) error {
var buf bytes.Buffer
if len(body) > 65535 {
return fmt.Errorf("body length %d exceeds maximum of 65535", len(body))
}
err := binary.Write(&buf, binary.BigEndian, cmd)
if err != nil {
return err
}
err = binary.Write(&buf, binary.BigEndian, uint16(len(body)))
if err != nil {
return err
}
err = binary.Write(&buf, binary.BigEndian, body)
if err != nil {
return err
}
_, err = s.Write(buf.Bytes())
if err != nil {
return err
}
return nil
}
// Send a USERADDR command on s. See section 3.1.2.1 of
// 196-transport-control-ports.txt.
func extOrPortSendUserAddr(s io.Writer, addr string) error {
return extOrPortSendCommand(s, extOrCmdUserAddr, []byte(addr))
}
// Send a TRANSPORT command on s. See section 3.1.2.2 of
// 196-transport-control-ports.txt.
func extOrPortSendTransport(s io.Writer, methodName string) error {
return extOrPortSendCommand(s, extOrCmdTransport, []byte(methodName))
}
// Send a DONE command on s. See section 3.1 of 196-transport-control-ports.txt.
func extOrPortSendDone(s io.Writer) error {
return extOrPortSendCommand(s, extOrCmdDone, []byte{})
}
func extOrPortRecvCommand(s io.Reader) (cmd uint16, body []byte, err error) {
var bodyLen uint16
data := make([]byte, 4)
_, err = io.ReadFull(s, data)
if err != nil {
return
}
buf := bytes.NewBuffer(data)
err = binary.Read(buf, binary.BigEndian, &cmd)
if err != nil {
return
}
err = binary.Read(buf, binary.BigEndian, &bodyLen)
if err != nil {
return
}
body = make([]byte, bodyLen)
_, err = io.ReadFull(s, body)
if err != nil {
return
}
return cmd, body, err
}
// Send USERADDR and TRANSPORT commands followed by a DONE command. Wait for an
// OKAY or DENY response command from the server. If addr or methodName is "",
// the corresponding command is not sent. Returns nil if and only if OKAY is
// received.
func extOrPortSetup(s io.ReadWriter, addr, methodName string) error {
var err error
if addr != "" {
err = extOrPortSendUserAddr(s, addr)
if err != nil {
return err
}
}
if methodName != "" {
err = extOrPortSendTransport(s, methodName)
if err != nil {
return err
}
}
err = extOrPortSendDone(s)
if err != nil {
return err
}
cmd, _, err := extOrPortRecvCommand(s)
if err != nil {
return err
}
if cmd == extOrCmdDeny {
return fmt.Errorf("server returned DENY after our USERADDR and DONE")
} else if cmd != extOrCmdOkay {
return fmt.Errorf("server returned unknown command 0x%04x after our USERADDR and DONE", cmd)
}
return nil
}
// Dial info.ExtendedOrAddr if defined, or else info.OrAddr, and return an open
// *net.TCPConn. If connecting to the extended OR port, extended OR port
// authentication à la 217-ext-orport-auth.txt is done before returning; an
// error is returned if authentication fails.
//
// The addr and methodName arguments are put in USERADDR and TRANSPORT ExtOrPort
// commands, respectively. If either is "", the corresponding command is not
// sent.
func DialOr(info *ServerInfo, addr, methodName string) (*net.TCPConn, error) {
if info.ExtendedOrAddr == nil || info.AuthCookiePath == "" {
return net.DialTCP("tcp", nil, info.OrAddr)
}
s, err := net.DialTCP("tcp", nil, info.ExtendedOrAddr)
if err != nil {
return nil, err
}
s.SetDeadline(time.Now().Add(5 * time.Second))
err = extOrPortAuthenticate(s, info)
if err != nil {
s.Close()
return nil, err
}
err = extOrPortSetup(s, addr, methodName)
if err != nil {
s.Close()
return nil, err
}
s.SetDeadline(time.Time{})
return s, nil
}
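
As a quick way to see what ServerSetup extracts from the environment, a standalone sketch (not part of the vendored file; the addresses and the "demo" transport are invented stand-ins for what tor would normally provide):

package main

import (
    "fmt"
    "os"

    pt "git.torproject.org/pluggable-transports/goptlib.git"
)

func main() {
    // Variables a tor server would normally set before launching the plugin.
    os.Setenv("TOR_PT_MANAGED_TRANSPORT_VER", "1")
    os.Setenv("TOR_PT_ORPORT", "127.0.0.1:9001")
    os.Setenv("TOR_PT_SERVER_TRANSPORTS", "demo")
    os.Setenv("TOR_PT_SERVER_BINDADDR", "demo-127.0.0.1:4430")
    os.Setenv("TOR_PT_SERVER_TRANSPORT_OPTIONS", "demo:shared-secret=foo")

    // Prints "VERSION 1" (or an ENV-ERROR line) on stdout, the channel
    // reserved for talking to tor; our own output goes to stderr.
    info, err := pt.ServerSetup(nil)
    if err != nil {
        os.Exit(1)
    }
    for _, ba := range info.Bindaddrs {
        secret, _ := ba.Options.Get("shared-secret")
        fmt.Fprintf(os.Stderr, "%s listens on %s (shared-secret=%q)\n",
            ba.MethodName, ba.Addr, secret)
    }
}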


@@ -0,0 +1,507 @@
package pt
import (
"bufio"
"fmt"
"io"
"net"
"time"
)
const (
socksVersion = 0x05
socksAuthNoneRequired = 0x00
socksAuthUsernamePassword = 0x02
socksAuthNoAcceptableMethods = 0xff
socksCmdConnect = 0x01
socksRsv = 0x00
socksAtypeV4 = 0x01
socksAtypeDomainName = 0x03
socksAtypeV6 = 0x04
socksAuthRFC1929Ver = 0x01
socksAuthRFC1929Success = 0x00
socksAuthRFC1929Fail = 0x01
socksRepSucceeded = 0x00
// "general SOCKS server failure"
SocksRepGeneralFailure = 0x01
// "connection not allowed by ruleset"
SocksRepConnectionNotAllowed = 0x02
// "Network unreachable"
SocksRepNetworkUnreachable = 0x03
// "Host unreachable"
SocksRepHostUnreachable = 0x04
// "Connection refused"
SocksRepConnectionRefused = 0x05
// "TTL expired"
SocksRepTTLExpired = 0x06
// "Command not supported"
SocksRepCommandNotSupported = 0x07
// "Address type not supported"
SocksRepAddressNotSupported = 0x08
)
// Put a sanity timeout on how long we wait for a SOCKS request.
const socksRequestTimeout = 5 * time.Second
// SocksRequest describes a SOCKS request.
type SocksRequest struct {
// The endpoint requested by the client as a "host:port" string.
Target string
// The userid string sent by the client.
Username string
// The password string sent by the client.
Password string
// The parsed contents of Username as a key-value mapping.
Args Args
}
// SocksConn encapsulates a net.Conn and information associated with a SOCKS request.
type SocksConn struct {
net.Conn
Req SocksRequest
}
// Send a message to the proxy client that access to the given address is
// granted. Addr is ignored, and "0.0.0.0:0" is always sent back for
// BND.ADDR/BND.PORT in the SOCKS response.
func (conn *SocksConn) Grant(addr *net.TCPAddr) error {
return sendSocks5ResponseGranted(conn)
}
// Send a message to the proxy client that access was rejected or failed. This
// sends back a "General Failure" error code. RejectReason should be used if
// more specific error reporting is desired.
func (conn *SocksConn) Reject() error {
return conn.RejectReason(SocksRepGeneralFailure)
}
// Send a message to the proxy client that access was rejected, with the
// specific error code indicating the reason behind the rejection.
func (conn *SocksConn) RejectReason(reason byte) error {
return sendSocks5ResponseRejected(conn, reason)
}
// SocksListener wraps a net.Listener in order to read a SOCKS request on Accept.
//
// func handleConn(conn *pt.SocksConn) error {
// defer conn.Close()
// remote, err := net.Dial("tcp", conn.Req.Target)
// if err != nil {
// conn.Reject()
// return err
// }
// defer remote.Close()
// err = conn.Grant(remote.RemoteAddr().(*net.TCPAddr))
// if err != nil {
// return err
// }
// // do something with conn and remote
// return nil
// }
// ...
// ln, err := pt.ListenSocks("tcp", "127.0.0.1:0")
// if err != nil {
// panic(err.Error())
// }
// for {
// conn, err := ln.AcceptSocks()
// if err != nil {
// log.Printf("accept error: %s", err)
// if e, ok := err.(net.Error); ok && e.Temporary() {
// continue
// }
// break
// }
// go handleConn(conn)
// }
type SocksListener struct {
net.Listener
}
// Open a net.Listener according to network and laddr, and return it as a
// SocksListener.
func ListenSocks(network, laddr string) (*SocksListener, error) {
ln, err := net.Listen(network, laddr)
if err != nil {
return nil, err
}
return NewSocksListener(ln), nil
}
// Create a new SocksListener wrapping the given net.Listener.
func NewSocksListener(ln net.Listener) *SocksListener {
return &SocksListener{ln}
}
// Accept is the same as AcceptSocks, except that it returns a generic net.Conn.
// It is present for the sake of satisfying the net.Listener interface.
func (ln *SocksListener) Accept() (net.Conn, error) {
return ln.AcceptSocks()
}
// Call Accept on the wrapped net.Listener, do SOCKS negotiation, and return a
// SocksConn. After accepting, you must call either conn.Grant or conn.Reject
// (presumably after trying to connect to conn.Req.Target).
//
// Errors returned by AcceptSocks may be temporary (for example, EOF while
// reading the request, or a badly formatted userid string), or permanent (e.g.,
// the underlying socket is closed). You can determine whether an error is
// temporary and take appropriate action with a type conversion to net.Error.
// For example:
//
// for {
// conn, err := ln.AcceptSocks()
// if err != nil {
// if e, ok := err.(net.Error); ok && e.Temporary() {
// log.Printf("temporary accept error; trying again: %s", err)
// continue
// }
// log.Printf("permanent accept error; giving up: %s", err)
// break
// }
// go handleConn(conn)
// }
func (ln *SocksListener) AcceptSocks() (*SocksConn, error) {
retry:
c, err := ln.Listener.Accept()
if err != nil {
return nil, err
}
conn := new(SocksConn)
conn.Conn = c
err = conn.SetDeadline(time.Now().Add(socksRequestTimeout))
if err != nil {
conn.Close()
goto retry
}
conn.Req, err = socks5Handshake(conn)
if err != nil {
conn.Close()
goto retry
}
err = conn.SetDeadline(time.Time{})
if err != nil {
conn.Close()
goto retry
}
return conn, nil
}
// Returns "socks5", suitable to be included in a call to Cmethod.
func (ln *SocksListener) Version() string {
return "socks5"
}
// socks5Handshake conducts the SOCKS5 handshake up to the point where the
// client command is read and the proxy must open the outgoing connection.
// Returns a SocksRequest.
func socks5Handshake(s io.ReadWriter) (req SocksRequest, err error) {
rw := bufio.NewReadWriter(bufio.NewReader(s), bufio.NewWriter(s))
// Negotiate the authentication method.
var method byte
if method, err = socksNegotiateAuth(rw); err != nil {
return
}
// Authenticate the client.
if err = socksAuthenticate(rw, method, &req); err != nil {
return
}
// Read the command.
err = socksReadCommand(rw, &req)
return
}
// socksNegotiateAuth negotiates the authentication method and returns the
// selected method as a byte. On negotiation failures an error is returned.
func socksNegotiateAuth(rw *bufio.ReadWriter) (method byte, err error) {
// Validate the version.
if err = socksReadByteVerify(rw, "version", socksVersion); err != nil {
return
}
// Read the number of methods.
var nmethods byte
if nmethods, err = socksReadByte(rw); err != nil {
return
}
// Read the methods.
var methods []byte
if methods, err = socksReadBytes(rw, int(nmethods)); err != nil {
return
}
// Pick the most "suitable" method.
method = socksAuthNoAcceptableMethods
for _, m := range methods {
switch m {
case socksAuthNoneRequired:
// Pick Username/Password over None if the client happens to
// send both.
if method == socksAuthNoAcceptableMethods {
method = m
}
case socksAuthUsernamePassword:
method = m
}
}
// Send the negotiated method.
var msg [2]byte
msg[0] = socksVersion
msg[1] = method
if _, err = rw.Writer.Write(msg[:]); err != nil {
return
}
if err = socksFlushBuffers(rw); err != nil {
return
}
return
}
// socksAuthenticate authenticates the client via the chosen authentication
// mechanism.
func socksAuthenticate(rw *bufio.ReadWriter, method byte, req *SocksRequest) (err error) {
switch method {
case socksAuthNoneRequired:
// Straight into reading the connect.
case socksAuthUsernamePassword:
if err = socksAuthRFC1929(rw, req); err != nil {
return
}
case socksAuthNoAcceptableMethods:
err = fmt.Errorf("SOCKS method select had no compatible methods")
return
default:
err = fmt.Errorf("SOCKS method select picked a unsupported method 0x%02x", method)
return
}
if err = socksFlushBuffers(rw); err != nil {
return
}
return
}
// socksAuthRFC1929 authenticates the client via RFC 1929 username/password
// auth. As a design decision any valid username/password is accepted as this
// field is primarily used as an out-of-band argument passing mechanism for
// pluggable transports.
func socksAuthRFC1929(rw *bufio.ReadWriter, req *SocksRequest) (err error) {
sendErrResp := func() {
// Swallow the write/flush error here, we are going to close the
// connection and the original failure is more useful.
resp := []byte{socksAuthRFC1929Ver, socksAuthRFC1929Fail}
rw.Write(resp[:])
socksFlushBuffers(rw)
}
// Validate the fixed parts of the command message.
if err = socksReadByteVerify(rw, "auth version", socksAuthRFC1929Ver); err != nil {
sendErrResp()
return
}
// Read the username.
var ulen byte
if ulen, err = socksReadByte(rw); err != nil {
return
}
if ulen < 1 {
sendErrResp()
err = fmt.Errorf("RFC1929 username with 0 length")
return
}
var uname []byte
if uname, err = socksReadBytes(rw, int(ulen)); err != nil {
return
}
req.Username = string(uname)
// Read the password.
var plen byte
if plen, err = socksReadByte(rw); err != nil {
return
}
if plen < 1 {
sendErrResp()
err = fmt.Errorf("RFC1929 password with 0 length")
return
}
var passwd []byte
if passwd, err = socksReadBytes(rw, int(plen)); err != nil {
return
}
if !(plen == 1 && passwd[0] == 0x00) {
// tor will set the password to 'NUL' if there are no arguments.
req.Password = string(passwd)
}
// Mash the username/password together and parse it as a pluggable
// transport argument string.
if req.Args, err = parseClientParameters(req.Username + req.Password); err != nil {
sendErrResp()
} else {
resp := []byte{socksAuthRFC1929Ver, socksAuthRFC1929Success}
_, err = rw.Write(resp[:])
}
return
}
// socksReadCommand reads a SOCKS5 client command and parses out the relevant
// fields into a SocksRequest. Only CMD_CONNECT is supported.
func socksReadCommand(rw *bufio.ReadWriter, req *SocksRequest) (err error) {
sendErrResp := func(reason byte) {
// Swallow errors that occur when writing/flushing the response,
// connection will be closed anyway.
sendSocks5ResponseRejected(rw, reason)
socksFlushBuffers(rw)
}
// Validate the fixed parts of the command message.
if err = socksReadByteVerify(rw, "version", socksVersion); err != nil {
sendErrResp(SocksRepGeneralFailure)
return
}
if err = socksReadByteVerify(rw, "command", socksCmdConnect); err != nil {
sendErrResp(SocksRepCommandNotSupported)
return
}
if err = socksReadByteVerify(rw, "reserved", socksRsv); err != nil {
sendErrResp(SocksRepGeneralFailure)
return
}
// Read the destination address/port.
// XXX: This should probably eventually send socks 5 error messages instead
// of rudely closing connections on invalid addresses.
var atype byte
if atype, err = socksReadByte(rw); err != nil {
return
}
var host string
switch atype {
case socksAtypeV4:
var addr []byte
if addr, err = socksReadBytes(rw, net.IPv4len); err != nil {
return
}
host = net.IPv4(addr[0], addr[1], addr[2], addr[3]).String()
case socksAtypeDomainName:
var alen byte
if alen, err = socksReadByte(rw); err != nil {
return
}
if alen == 0 {
err = fmt.Errorf("SOCKS request had domain name with 0 length")
return
}
var addr []byte
if addr, err = socksReadBytes(rw, int(alen)); err != nil {
return
}
host = string(addr)
case socksAtypeV6:
var rawAddr []byte
if rawAddr, err = socksReadBytes(rw, net.IPv6len); err != nil {
return
}
addr := make(net.IP, net.IPv6len)
copy(addr[:], rawAddr[:])
host = fmt.Sprintf("[%s]", addr.String())
default:
sendErrResp(SocksRepAddressNotSupported)
err = fmt.Errorf("SOCKS request had unsupported address type 0x%02x", atype)
return
}
var rawPort []byte
if rawPort, err = socksReadBytes(rw, 2); err != nil {
return
}
port := int(rawPort[0])<<8 | int(rawPort[1])<<0
if err = socksFlushBuffers(rw); err != nil {
return
}
req.Target = fmt.Sprintf("%s:%d", host, port)
return
}
// Send a SOCKS5 response with the given code. BND.ADDR/BND.PORT is always the
// IPv4 address/port "0.0.0.0:0".
func sendSocks5Response(w io.Writer, code byte) error {
resp := make([]byte, 4+4+2)
resp[0] = socksVersion
resp[1] = code
resp[2] = socksRsv
resp[3] = socksAtypeV4
// BND.ADDR/BND.PORT should be the address and port that the outgoing
// connection is bound to on the proxy, but Tor does not use this
// information, so all zeroes are sent.
_, err := w.Write(resp[:])
return err
}
// Send a SOCKS5 response code 0x00.
func sendSocks5ResponseGranted(w io.Writer) error {
return sendSocks5Response(w, socksRepSucceeded)
}
// Send a SOCKS5 response with the provided failure reason.
func sendSocks5ResponseRejected(w io.Writer, reason byte) error {
return sendSocks5Response(w, reason)
}
func socksFlushBuffers(rw *bufio.ReadWriter) error {
if err := rw.Writer.Flush(); err != nil {
return err
}
if rw.Reader.Buffered() > 0 {
return fmt.Errorf("%d bytes left after SOCKS message", rw.Reader.Buffered())
}
return nil
}
func socksReadByte(rw *bufio.ReadWriter) (byte, error) {
return rw.Reader.ReadByte()
}
func socksReadBytes(rw *bufio.ReadWriter, n int) ([]byte, error) {
ret := make([]byte, n)
if _, err := io.ReadFull(rw.Reader, ret); err != nil {
return nil, err
}
return ret, nil
}
func socksReadByteVerify(rw *bufio.ReadWriter, descr string, expected byte) error {
val, err := socksReadByte(rw)
if err != nil {
return err
}
if val != expected {
return fmt.Errorf("SOCKS message field %s was 0x%02x, not 0x%02x", descr, val, expected)
}
return nil
}
var _ net.Listener = (*SocksListener)(nil)
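
The RFC 1929 handling above is what lets per-connection transport options ride inside an ordinary SOCKS5 login. A hand-rolled client could exercise it as sketched below (not part of the vendored file; it uses golang.org/x/net/proxy purely for illustration and assumes a plugin listener at 127.0.0.1:40000):

package main

import (
    "log"

    "golang.org/x/net/proxy"
)

func main() {
    // The "username" carries the escaped key=value arguments; tor sets the
    // password to a single NUL byte when there is nothing more to send,
    // which the server side above strips before parsing.
    auth := &proxy.Auth{
        User:     "shared-secret=foo",
        Password: "\x00",
    }
    dialer, err := proxy.SOCKS5("tcp", "127.0.0.1:40000", auth, proxy.Direct)
    if err != nil {
        log.Fatal(err)
    }
    // The plugin sees Req.Target = "203.0.113.5:443" and
    // Req.Args.Get("shared-secret") = "foo".
    conn, err := dialer.Dial("tcp", "203.0.113.5:443")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
}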


@@ -0,0 +1,55 @@
Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
==============================================================================
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@@ -0,0 +1,101 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package csrand implements the math/rand interface over crypto/rand, along
// with some utility functions for common random number/byte related tasks.
//
// Not all of the convenience routines are replicated, only those that are
// immediately useful. The Rand variable provides access to the full math/rand
// API.
package csrand
import (
cryptRand "crypto/rand"
"encoding/binary"
"fmt"
"io"
"math/rand"
)
var (
csRandSourceInstance csRandSource
// Rand is a math/rand instance backed by crypto/rand CSPRNG.
Rand = rand.New(csRandSourceInstance)
)
type csRandSource struct {
// This does not keep any state as it is backed by crypto/rand.
}
func (r csRandSource) Int63() int64 {
var src [8]byte
if err := Bytes(src[:]); err != nil {
panic(err)
}
val := binary.BigEndian.Uint64(src[:])
val &= (1<<63 - 1)
return int64(val)
}
func (r csRandSource) Seed(seed int64) {
// No-op.
}
// Intn returns, as an int, a pseudo random number in [0, n).
func Intn(n int) int {
return Rand.Intn(n)
}
// Float64 returns, as a float64, a pseudo random number in [0.0, 1.0).
func Float64() float64 {
return Rand.Float64()
}
// IntRange returns a uniformly distributed int in [min, max].
func IntRange(min, max int) int {
if max < min {
panic(fmt.Sprintf("IntRange: min > max (%d, %d)", min, max))
}
r := (max + 1) - min
ret := Rand.Intn(r)
return ret + min
}
// Bytes fills the slice with random data.
func Bytes(buf []byte) error {
if _, err := io.ReadFull(cryptRand.Reader, buf); err != nil {
return err
}
return nil
}
// Reader is an alias of crypto/rand's Reader.
var Reader = cryptRand.Reader
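// A minimal usage sketch (illustrative only, not part of the upstream
// sources): drawing CSPRNG-backed values from the helpers above. The
// surrounding main() is an assumption for demonstration purposes.
package main

import (
	"fmt"

	"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
)

func main() {
	// Fill a buffer with output from crypto/rand.
	var key [16]byte
	if err := csrand.Bytes(key[:]); err != nil {
		panic(err)
	}

	n := csrand.Intn(10)           // uniform int in [0, 10)
	d := csrand.IntRange(100, 200) // uniform int in [100, 200]
	f := csrand.Float64()          // uniform float64 in [0.0, 1.0)
	fmt.Printf("%x %d %d %f\n", key, n, d, f)
}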

View File

@ -0,0 +1,149 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package drbg implements a minimalistic DRBG based off SipHash-2-4 in OFB
// mode.
package drbg
import (
"encoding/binary"
"encoding/hex"
"fmt"
"hash"
"github.com/dchest/siphash"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
)
// Size is the length of the HashDrbg output.
const Size = siphash.Size
// SeedLength is the length of the HashDrbg seed.
const SeedLength = 16 + Size
// Seed is the initial state for a HashDrbg. It consists of a SipHash-2-4
// key, and 8 bytes of initial data.
type Seed [SeedLength]byte
// Bytes returns a pointer to the raw HashDrbg seed.
func (seed *Seed) Bytes() *[SeedLength]byte {
return (*[SeedLength]byte)(seed)
}
// Hex returns the hexadecimal representation of the seed.
func (seed *Seed) Hex() string {
return hex.EncodeToString(seed.Bytes()[:])
}
// NewSeed returns a Seed initialized with the runtime CSPRNG.
func NewSeed() (seed *Seed, err error) {
seed = new(Seed)
if err = csrand.Bytes(seed.Bytes()[:]); err != nil {
return nil, err
}
return
}
// SeedFromBytes creates a Seed from the raw bytes, truncating to SeedLength as
// appropriate.
func SeedFromBytes(src []byte) (seed *Seed, err error) {
if len(src) < SeedLength {
return nil, InvalidSeedLengthError(len(src))
}
seed = new(Seed)
copy(seed.Bytes()[:], src)
return
}
// SeedFromHex creates a Seed from the hexadecimal representation, truncating to
// SeedLength as appropriate.
func SeedFromHex(encoded string) (seed *Seed, err error) {
var raw []byte
if raw, err = hex.DecodeString(encoded); err != nil {
return nil, err
}
return SeedFromBytes(raw)
}
// InvalidSeedLengthError is the error returned when the seed provided to the
// DRBG is an invalid length.
type InvalidSeedLengthError int
func (e InvalidSeedLengthError) Error() string {
return fmt.Sprintf("invalid seed length: %d", int(e))
}
// HashDrbg is a CSDRBG based off of SipHash-2-4 in OFB mode.
type HashDrbg struct {
sip hash.Hash64
ofb [Size]byte
}
// NewHashDrbg makes a HashDrbg instance based off an optional seed. The seed
// is truncated to SeedLength.
func NewHashDrbg(seed *Seed) (*HashDrbg, error) {
drbg := new(HashDrbg)
if seed == nil {
var err error
if seed, err = NewSeed(); err != nil {
return nil, err
}
}
drbg.sip = siphash.New(seed.Bytes()[:16])
copy(drbg.ofb[:], seed.Bytes()[16:])
return drbg, nil
}
// Int63 returns a uniformly distributed random integer [0, 1 << 63).
func (drbg *HashDrbg) Int63() int64 {
block := drbg.NextBlock()
ret := binary.BigEndian.Uint64(block)
ret &= (1<<63 - 1)
return int64(ret)
}
// Seed does nothing, call NewHashDrbg if you want to reseed.
func (drbg *HashDrbg) Seed(seed int64) {
// No-op.
}
// NextBlock returns the next 8 byte DRBG block.
func (drbg *HashDrbg) NextBlock() []byte {
drbg.sip.Write(drbg.ofb[:])
copy(drbg.ofb[:], drbg.sip.Sum(nil))
ret := make([]byte, Size)
copy(ret, drbg.ofb[:])
return ret
}
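// A minimal usage sketch (illustrative only, not part of the upstream
// sources): seeding a HashDrbg and using it both directly and as a
// math/rand source, as the probdist package does further down.
package main

import (
	"fmt"
	"math/rand"

	"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
)

func main() {
	// Fresh seed from the runtime CSPRNG (SipHash-2-4 key + initial OFB state).
	seed, err := drbg.NewSeed()
	if err != nil {
		panic(err)
	}

	d, err := drbg.NewHashDrbg(seed)
	if err != nil {
		panic(err)
	}

	// 8-byte DRBG blocks, e.g. usable as length-obfuscation masks.
	fmt.Printf("block: %x\n", d.NextBlock())

	// *HashDrbg provides Int63/Seed, so it satisfies rand.Source and can
	// back a deterministic math/rand instance.
	rng := rand.New(d)
	fmt.Println(rng.Intn(100))
}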

View File

@ -0,0 +1,433 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package ntor implements the Tor Project's ntor handshake as defined in
// proposal 216 "Improved circuit-creation key exchange". It also supports
// using Elligator to transform the Curve25519 public keys sent over the wire
// to a form that is indistinguishable from random strings.
//
// Before using this package, it is strongly recommended that the specification
// is read and understood.
package ntor
import (
"bytes"
"crypto/hmac"
"crypto/sha256"
"crypto/subtle"
"encoding/hex"
"fmt"
"io"
"golang.org/x/crypto/curve25519"
"golang.org/x/crypto/hkdf"
"github.com/agl/ed25519/extra25519"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
)
const (
// PublicKeyLength is the length of a Curve25519 public key.
PublicKeyLength = 32
// RepresentativeLength is the length of an Elligator representative.
RepresentativeLength = 32
// PrivateKeyLength is the length of a Curve25519 private key.
PrivateKeyLength = 32
// SharedSecretLength is the length of a Curve25519 shared secret.
SharedSecretLength = 32
// NodeIDLength is the length of a ntor node identifier.
NodeIDLength = 20
// KeySeedLength is the length of the derived KEY_SEED.
KeySeedLength = sha256.Size
// AuthLength is the length of the derived AUTH.
AuthLength = sha256.Size
)
var protoID = []byte("ntor-curve25519-sha256-1")
var tMac = append(protoID, []byte(":mac")...)
var tKey = append(protoID, []byte(":key_extract")...)
var tVerify = append(protoID, []byte(":key_verify")...)
var mExpand = append(protoID, []byte(":key_expand")...)
// PublicKeyLengthError is the error returned when the public key being
// imported is an invalid length.
type PublicKeyLengthError int
func (e PublicKeyLengthError) Error() string {
return fmt.Sprintf("ntor: Invalid Curve25519 public key length: %d",
int(e))
}
// PrivateKeyLengthError is the error returned when the private key being
// imported is an invalid length.
type PrivateKeyLengthError int
func (e PrivateKeyLengthError) Error() string {
return fmt.Sprintf("ntor: Invalid Curve25519 private key length: %d",
int(e))
}
// NodeIDLengthError is the error returned when the node ID being imported is
// an invalid length.
type NodeIDLengthError int
func (e NodeIDLengthError) Error() string {
return fmt.Sprintf("ntor: Invalid NodeID length: %d", int(e))
}
// KeySeed is the key material that results from a handshake (KEY_SEED).
type KeySeed [KeySeedLength]byte
// Bytes returns a pointer to the raw key material.
func (key_seed *KeySeed) Bytes() *[KeySeedLength]byte {
return (*[KeySeedLength]byte)(key_seed)
}
// Auth is the verifier that results from a handshake (AUTH).
type Auth [AuthLength]byte
// Bytes returns a pointer to the raw auth.
func (auth *Auth) Bytes() *[AuthLength]byte {
return (*[AuthLength]byte)(auth)
}
// NodeID is a ntor node identifier.
type NodeID [NodeIDLength]byte
// NewNodeID creates a NodeID from the raw bytes.
func NewNodeID(raw []byte) (*NodeID, error) {
if len(raw) != NodeIDLength {
return nil, NodeIDLengthError(len(raw))
}
nodeID := new(NodeID)
copy(nodeID[:], raw)
return nodeID, nil
}
// NodeIDFromHex creates a new NodeID from the hexadecimal representation.
func NodeIDFromHex(encoded string) (*NodeID, error) {
raw, err := hex.DecodeString(encoded)
if err != nil {
return nil, err
}
return NewNodeID(raw)
}
// Bytes returns a pointer to the raw NodeID.
func (id *NodeID) Bytes() *[NodeIDLength]byte {
return (*[NodeIDLength]byte)(id)
}
// Hex returns the hexadecimal representation of the NodeID.
func (id *NodeID) Hex() string {
return hex.EncodeToString(id[:])
}
// PublicKey is a Curve25519 public key in little-endian byte order.
type PublicKey [PublicKeyLength]byte
// Bytes returns a pointer to the raw Curve25519 public key.
func (public *PublicKey) Bytes() *[PublicKeyLength]byte {
return (*[PublicKeyLength]byte)(public)
}
// Hex returns the hexadecimal representation of the Curve25519 public key.
func (public *PublicKey) Hex() string {
return hex.EncodeToString(public.Bytes()[:])
}
// NewPublicKey creates a PublicKey from the raw bytes.
func NewPublicKey(raw []byte) (*PublicKey, error) {
if len(raw) != PublicKeyLength {
return nil, PublicKeyLengthError(len(raw))
}
pubKey := new(PublicKey)
copy(pubKey[:], raw)
return pubKey, nil
}
// PublicKeyFromHex returns a PublicKey from the hexadecimal representation.
func PublicKeyFromHex(encoded string) (*PublicKey, error) {
raw, err := hex.DecodeString(encoded)
if err != nil {
return nil, err
}
return NewPublicKey(raw)
}
// Representative is an Elligator representative of a Curve25519 public key
// in little-endian byte order.
type Representative [RepresentativeLength]byte
// Bytes returns a pointer to the raw Elligator representative.
func (repr *Representative) Bytes() *[RepresentativeLength]byte {
return (*[RepresentativeLength]byte)(repr)
}
// ToPublic converts an Elligator representative to a Curve25519 public key.
func (repr *Representative) ToPublic() *PublicKey {
pub := new(PublicKey)
extra25519.RepresentativeToPublicKey(pub.Bytes(), repr.Bytes())
return pub
}
// PrivateKey is a Curve25519 private key in little-endian byte order.
type PrivateKey [PrivateKeyLength]byte
// Bytes returns a pointer to the raw Curve25519 private key.
func (private *PrivateKey) Bytes() *[PrivateKeyLength]byte {
return (*[PrivateKeyLength]byte)(private)
}
// Hex returns the hexadecimal representation of the Curve25519 private key.
func (private *PrivateKey) Hex() string {
return hex.EncodeToString(private.Bytes()[:])
}
// Keypair is a Curve25519 keypair with an optional Elligator representative.
// As only certain Curve25519 keys can be obfuscated with Elligator, the
// representative must be generated along with the keypair.
type Keypair struct {
public *PublicKey
private *PrivateKey
representative *Representative
}
// Public returns the Curve25519 public key belonging to the Keypair.
func (keypair *Keypair) Public() *PublicKey {
return keypair.public
}
// Private returns the Curve25519 private key belonging to the Keypair.
func (keypair *Keypair) Private() *PrivateKey {
return keypair.private
}
// Representative returns the Elligator representative of the public key
// belonging to the Keypair.
func (keypair *Keypair) Representative() *Representative {
return keypair.representative
}
// HasElligator returns true if the Keypair has an Elligator representative.
func (keypair *Keypair) HasElligator() bool {
return nil != keypair.representative
}
// NewKeypair generates a new Curve25519 keypair, and optionally also generates
// an Elligator representative of the public key.
func NewKeypair(elligator bool) (*Keypair, error) {
keypair := new(Keypair)
keypair.private = new(PrivateKey)
keypair.public = new(PublicKey)
if elligator {
keypair.representative = new(Representative)
}
for {
// Generate a Curve25519 private key. Like everyone who does this,
// run the CSPRNG output through SHA256 for extra tinfoil hattery.
priv := keypair.private.Bytes()[:]
if err := csrand.Bytes(priv); err != nil {
return nil, err
}
digest := sha256.Sum256(priv)
digest[0] &= 248
digest[31] &= 127
digest[31] |= 64
copy(priv, digest[:])
if elligator {
// Apply the Elligator transform. This fails ~50% of the time.
if !extra25519.ScalarBaseMult(keypair.public.Bytes(),
keypair.representative.Bytes(),
keypair.private.Bytes()) {
continue
}
} else {
// Generate the corresponding Curve25519 public key.
curve25519.ScalarBaseMult(keypair.public.Bytes(),
keypair.private.Bytes())
}
return keypair, nil
}
}
// KeypairFromHex returns a Keypair from the hexadecimal representation of the
// private key.
func KeypairFromHex(encoded string) (*Keypair, error) {
raw, err := hex.DecodeString(encoded)
if err != nil {
return nil, err
}
if len(raw) != PrivateKeyLength {
return nil, PrivateKeyLengthError(len(raw))
}
keypair := new(Keypair)
keypair.private = new(PrivateKey)
keypair.public = new(PublicKey)
copy(keypair.private[:], raw)
curve25519.ScalarBaseMult(keypair.public.Bytes(),
keypair.private.Bytes())
return keypair, nil
}
// ServerHandshake does the server side of a ntor handshake and returns status,
// KEY_SEED, and AUTH. If status is not true, the handshake MUST be aborted.
func ServerHandshake(clientPublic *PublicKey, serverKeypair *Keypair, idKeypair *Keypair, id *NodeID) (ok bool, keySeed *KeySeed, auth *Auth) {
var notOk int
var secretInput bytes.Buffer
// Server side uses EXP(X,y) | EXP(X,b)
var exp [SharedSecretLength]byte
curve25519.ScalarMult(&exp, serverKeypair.private.Bytes(),
clientPublic.Bytes())
notOk |= constantTimeIsZero(exp[:])
secretInput.Write(exp[:])
curve25519.ScalarMult(&exp, idKeypair.private.Bytes(),
clientPublic.Bytes())
notOk |= constantTimeIsZero(exp[:])
secretInput.Write(exp[:])
keySeed, auth = ntorCommon(secretInput, id, idKeypair.public,
clientPublic, serverKeypair.public)
return notOk == 0, keySeed, auth
}
// ClientHandshake does the client side of a ntor handshake and returns
// status, KEY_SEED, and AUTH. If status is not true or AUTH does not match
// the value received from the server, the handshake MUST be aborted.
func ClientHandshake(clientKeypair *Keypair, serverPublic *PublicKey, idPublic *PublicKey, id *NodeID) (ok bool, keySeed *KeySeed, auth *Auth) {
var notOk int
var secretInput bytes.Buffer
// Client side uses EXP(Y,x) | EXP(B,x)
var exp [SharedSecretLength]byte
curve25519.ScalarMult(&exp, clientKeypair.private.Bytes(),
serverPublic.Bytes())
notOk |= constantTimeIsZero(exp[:])
secretInput.Write(exp[:])
curve25519.ScalarMult(&exp, clientKeypair.private.Bytes(),
idPublic.Bytes())
notOk |= constantTimeIsZero(exp[:])
secretInput.Write(exp[:])
keySeed, auth = ntorCommon(secretInput, id, idPublic,
clientKeypair.public, serverPublic)
return notOk == 0, keySeed, auth
}
// CompareAuth does a constant time compare of a Auth and a byte slice
// (presumably received over a network).
func CompareAuth(auth1 *Auth, auth2 []byte) bool {
auth1Bytes := auth1.Bytes()
return hmac.Equal(auth1Bytes[:], auth2)
}
func ntorCommon(secretInput bytes.Buffer, id *NodeID, b *PublicKey, x *PublicKey, y *PublicKey) (*KeySeed, *Auth) {
keySeed := new(KeySeed)
auth := new(Auth)
// secret_input/auth_input use this common bit, build it once.
suffix := bytes.NewBuffer(b.Bytes()[:])
suffix.Write(b.Bytes()[:])
suffix.Write(x.Bytes()[:])
suffix.Write(y.Bytes()[:])
suffix.Write(protoID)
suffix.Write(id[:])
// At this point secret_input has the 2 exponents, concatenated, append the
// client/server common suffix.
secretInput.Write(suffix.Bytes())
// KEY_SEED = H(secret_input, t_key)
h := hmac.New(sha256.New, tKey)
h.Write(secretInput.Bytes())
tmp := h.Sum(nil)
copy(keySeed[:], tmp)
// verify = H(secret_input, t_verify)
h = hmac.New(sha256.New, tVerify)
h.Write(secretInput.Bytes())
verify := h.Sum(nil)
// auth_input = verify | ID | B | Y | X | PROTOID | "Server"
authInput := bytes.NewBuffer(verify)
authInput.Write(suffix.Bytes())
authInput.Write([]byte("Server"))
h = hmac.New(sha256.New, tMac)
h.Write(authInput.Bytes())
tmp = h.Sum(nil)
copy(auth[:], tmp)
return keySeed, auth
}
func constantTimeIsZero(x []byte) int {
var ret byte
for _, v := range x {
ret |= v
}
return subtle.ConstantTimeByteEq(ret, 0)
}
// Kdf extracts and expands KEY_SEED via HKDF-SHA256 and returns `okm_len` bytes
// of key material.
func Kdf(keySeed []byte, okmLen int) []byte {
kdf := hkdf.New(sha256.New, keySeed, tKey, mExpand)
okm := make([]byte, okmLen)
n, err := io.ReadFull(kdf, okm)
if err != nil {
panic(fmt.Sprintf("BUG: Failed HKDF: %s", err.Error()))
} else if n != len(okm) {
panic(fmt.Sprintf("BUG: Got truncated HKDF output: %d", n))
}
return okm
}
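// A minimal usage sketch (illustrative only, not part of the upstream
// sources): a ntor handshake round trip using the API above. The all-zero
// node ID is a placeholder and error handling is elided for brevity.
package main

import (
	"bytes"
	"fmt"

	"git.torproject.org/pluggable-transports/obfs4.git/common/ntor"
)

func main() {
	// Long-term server identity keypair and node identifier.
	idKeypair, _ := ntor.NewKeypair(false)
	nodeID, _ := ntor.NewNodeID(make([]byte, ntor.NodeIDLength))

	// Ephemeral session keypairs; Elligator makes the public keys sent over
	// the wire indistinguishable from random strings.
	clientKeypair, _ := ntor.NewKeypair(true)
	serverKeypair, _ := ntor.NewKeypair(true)

	// Server side: derive KEY_SEED and AUTH from the client's public key.
	okS, serverSeed, serverAuth := ntor.ServerHandshake(
		clientKeypair.Public(), serverKeypair, idKeypair, nodeID)

	// Client side: derive KEY_SEED and verify the server's AUTH.
	okC, clientSeed, clientAuth := ntor.ClientHandshake(
		clientKeypair, serverKeypair.Public(), idKeypair.Public(), nodeID)

	if !okS || !okC || !ntor.CompareAuth(clientAuth, serverAuth.Bytes()[:]) {
		panic("ntor: handshake failed")
	}

	// Both sides now share KEY_SEED; Kdf expands it into link key material.
	fmt.Println(bytes.Equal(serverSeed.Bytes()[:], clientSeed.Bytes()[:]))
	fmt.Println(len(ntor.Kdf(clientSeed.Bytes()[:], 64)))
}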

View File

@ -0,0 +1,245 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package probdist implements a weighted probability distribution suitable for
// protocol parameterization. To allow for easy reproduction of a given
// distribution, the drbg package is used as the random number source.
package probdist
import (
"bytes"
"container/list"
"fmt"
"math/rand"
"sync"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
)
const (
minValues = 1
maxValues = 100
)
// WeightedDist is a weighted distribution.
type WeightedDist struct {
sync.Mutex
minValue int
maxValue int
biased bool
values []int
weights []float64
alias []int
prob []float64
}
// New creates a weighted distribution of values ranging from min to max
// based on a HashDrbg initialized with seed. Optionally, bias the weight
// generation to match the ScrambleSuit non-uniform distribution from
// obfsproxy.
func New(seed *drbg.Seed, min, max int, biased bool) (w *WeightedDist) {
w = &WeightedDist{minValue: min, maxValue: max, biased: biased}
if max <= min {
panic(fmt.Sprintf("wDist.Reset(): min >= max (%d, %d)", min, max))
}
w.Reset(seed)
return
}
// genValues creates a slice containing a random number of random values
// that when scaled by adding minValue will fall into [min, max].
func (w *WeightedDist) genValues(rng *rand.Rand) {
nValues := (w.maxValue + 1) - w.minValue
values := rng.Perm(nValues)
if nValues < minValues {
nValues = minValues
}
if nValues > maxValues {
nValues = maxValues
}
nValues = rng.Intn(nValues) + 1
w.values = values[:nValues]
}
// genBiasedWeights generates a non-uniform weight list, similar to the
// ScrambleSuit prob_dist module.
func (w *WeightedDist) genBiasedWeights(rng *rand.Rand) {
w.weights = make([]float64, len(w.values))
culmProb := 0.0
for i := range w.weights {
p := (1.0 - culmProb) * rng.Float64()
w.weights[i] = p
culmProb += p
}
}
// genUniformWeights generates a uniform weight list.
func (w *WeightedDist) genUniformWeights(rng *rand.Rand) {
w.weights = make([]float64, len(w.values))
for i := range w.weights {
w.weights[i] = rng.Float64()
}
}
// genTables calculates the alias and prob tables used for Vose's Alias method.
// Algorithm taken from http://www.keithschwarz.com/darts-dice-coins/
func (w *WeightedDist) genTables() {
n := len(w.weights)
var sum float64
for _, weight := range w.weights {
sum += weight
}
// Create arrays $Alias$ and $Prob$, each of size $n$.
alias := make([]int, n)
prob := make([]float64, n)
// Create two worklists, $Small$ and $Large$.
small := list.New()
large := list.New()
scaled := make([]float64, n)
for i, weight := range w.weights {
// Multiply each probability by $n$.
p_i := weight * float64(n) / sum
scaled[i] = p_i
// For each scaled probability $p_i$:
if scaled[i] < 1.0 {
// If $p_i < 1$, add $i$ to $Small$.
small.PushBack(i)
} else {
// Otherwise ($p_i \ge 1$), add $i$ to $Large$.
large.PushBack(i)
}
}
// While $Small$ and $Large$ are not empty: ($Large$ might be emptied first)
for small.Len() > 0 && large.Len() > 0 {
// Remove the first element from $Small$; call it $l$.
l := small.Remove(small.Front()).(int)
// Remove the first element from $Large$; call it $g$.
g := large.Remove(large.Front()).(int)
// Set $Prob[l] = p_l$.
prob[l] = scaled[l]
// Set $Alias[l] = g$.
alias[l] = g
// Set $p_g := (p_g + p_l) - 1$. (This is a more numerically stable option.)
scaled[g] = (scaled[g] + scaled[l]) - 1.0
if scaled[g] < 1.0 {
// If $p_g < 1$, add $g$ to $Small$.
small.PushBack(g)
} else {
// Otherwise ($p_g \ge 1$), add $g$ to $Large$.
large.PushBack(g)
}
}
// While $Large$ is not empty:
for large.Len() > 0 {
// Remove the first element from $Large$; call it $g$.
g := large.Remove(large.Front()).(int)
// Set $Prob[g] = 1$.
prob[g] = 1.0
}
// While $Small$ is not empty: This is only possible due to numerical instability.
for small.Len() > 0 {
// Remove the first element from $Small$; call it $l$.
l := small.Remove(small.Front()).(int)
// Set $Prob[l] = 1$.
prob[l] = 1.0
}
w.prob = prob
w.alias = alias
}
// Reset generates a new distribution with the same min/max based on a new
// seed.
func (w *WeightedDist) Reset(seed *drbg.Seed) {
// Initialize the deterministic random number generator.
drbg, _ := drbg.NewHashDrbg(seed)
rng := rand.New(drbg)
w.Lock()
defer w.Unlock()
w.genValues(rng)
if w.biased {
w.genBiasedWeights(rng)
} else {
w.genUniformWeights(rng)
}
w.genTables()
}
// Sample generates a random value according to the distribution.
func (w *WeightedDist) Sample() int {
var idx int
w.Lock()
defer w.Unlock()
// Generate a fair die roll from an $n$-sided die; call the side $i$.
i := csrand.Intn(len(w.values))
// Flip a biased coin that comes up heads with probability $Prob[i]$.
if csrand.Float64() <= w.prob[i] {
// If the coin comes up "heads," return $i$.
idx = i
} else {
// Otherwise, return $Alias[i]$.
idx = w.alias[i]
}
return w.minValue + w.values[idx]
}
// String returns a dump of the distribution table.
func (w *WeightedDist) String() string {
var buf bytes.Buffer
buf.WriteString("[ ")
for i, v := range w.values {
p := w.weights[i]
if p > 0.01 { // Squelch tiny probabilities.
buf.WriteString(fmt.Sprintf("%d: %f ", v, p))
}
}
buf.WriteString("]")
return buf.String()
}
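// A minimal usage sketch (illustrative only, not part of the upstream
// sources): building a biased weighted distribution over [0, 100] and
// sampling from it.
package main

import (
	"fmt"

	"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
	"git.torproject.org/pluggable-transports/obfs4.git/common/probdist"
)

func main() {
	seed, err := drbg.NewSeed()
	if err != nil {
		panic(err)
	}

	// biased=true requests the ScrambleSuit-style non-uniform weights.
	dist := probdist.New(seed, 0, 100, true)

	for i := 0; i < 5; i++ {
		fmt.Println(dist.Sample())
	}
	fmt.Println(dist) // dump of the non-negligible weights
}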

View File

@ -0,0 +1,147 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package replayfilter implements a generic replay detection filter with a
// caller-specifiable time-to-live. It only detects whether a given byte sequence
// has been seen before based on the SipHash-2-4 digest of the sequence.
// Collisions are treated as positive matches, though the probability of this
// happening is negligible.
package replayfilter
import (
"container/list"
"encoding/binary"
"sync"
"time"
"github.com/dchest/siphash"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
)
// maxFilterSize is the maximum capacity of a replay filter. This value is
// more of a safeguard to prevent runaway filter growth, and is sized to be
// several orders of magnitude greater than the number of connections a busy
// bridge sees in one day, so in practice it should never be reached.
const maxFilterSize = 100 * 1024
type entry struct {
digest uint64
firstSeen time.Time
element *list.Element
}
// ReplayFilter is a simple filter designed only to detect if a given byte
// sequence has been seen before.
type ReplayFilter struct {
sync.Mutex
filter map[uint64]*entry
fifo *list.List
key [2]uint64
ttl time.Duration
}
// New creates a new ReplayFilter instance.
func New(ttl time.Duration) (filter *ReplayFilter, err error) {
// Initialize the SipHash-2-4 instance with a random key.
var key [16]byte
if err = csrand.Bytes(key[:]); err != nil {
return
}
filter = new(ReplayFilter)
filter.filter = make(map[uint64]*entry)
filter.fifo = list.New()
filter.key[0] = binary.BigEndian.Uint64(key[0:8])
filter.key[1] = binary.BigEndian.Uint64(key[8:16])
filter.ttl = ttl
return
}
// TestAndSet queries the filter for a given byte sequence, inserts the
// sequence, and returns whether it was present before the insertion operation.
func (f *ReplayFilter) TestAndSet(now time.Time, buf []byte) bool {
digest := siphash.Hash(f.key[0], f.key[1], buf)
f.Lock()
defer f.Unlock()
f.compactFilter(now)
if e := f.filter[digest]; e != nil {
// Hit. Just return.
return true
}
// Miss. Add a new entry.
e := new(entry)
e.digest = digest
e.firstSeen = now
e.element = f.fifo.PushBack(e)
f.filter[digest] = e
return false
}
func (f *ReplayFilter) compactFilter(now time.Time) {
e := f.fifo.Front()
for e != nil {
ent, _ := e.Value.(*entry)
// If the filter is not full, only purge entries that exceed the TTL,
// otherwise purge at least one entry, then revert to TTL based
// compaction.
if f.fifo.Len() < maxFilterSize && f.ttl > 0 {
deltaT := now.Sub(ent.firstSeen)
if deltaT < 0 {
// Aeeeeeee, the system time jumped backwards, potentially by
// a lot. This will eventually self-correct, but "eventually"
// could be a long time. As much as this sucks, jettison the
// entire filter.
f.reset()
return
} else if deltaT < f.ttl {
return
}
}
// Remove the eldest entry.
eNext := e.Next()
delete(f.filter, ent.digest)
f.fifo.Remove(ent.element)
ent.element = nil
e = eNext
}
}
func (f *ReplayFilter) reset() {
f.filter = make(map[uint64]*entry)
f.fifo = list.New()
}
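// A minimal usage sketch (illustrative only, not part of the upstream
// sources): detecting a replayed byte sequence with the filter above.
package main

import (
	"fmt"
	"time"

	"git.torproject.org/pluggable-transports/obfs4.git/common/replayfilter"
)

func main() {
	// Entries expire after one hour (or earlier if the filter fills up).
	filter, err := replayfilter.New(1 * time.Hour)
	if err != nil {
		panic(err)
	}

	mac := []byte("example handshake MAC")
	fmt.Println(filter.TestAndSet(time.Now(), mac)) // false: first sighting
	fmt.Println(filter.TestAndSet(time.Now(), mac)) // true: replay detected
}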

View File

@ -0,0 +1,90 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package base provides the common interface that each supported transport
// protocol must implement.
package base
import (
"net"
"git.torproject.org/pluggable-transports/goptlib.git"
)
// DialFunc is the function signature used to create the outgoing TCP/IP
// connection on behalf of a ClientFactory.
type DialFunc func(string, string) (net.Conn, error)
// ClientFactory is the interface that defines the factory for creating
// pluggable transport protocol client instances.
type ClientFactory interface {
// Transport returns the Transport instance that this ClientFactory belongs
// to.
Transport() Transport
// ParseArgs parses the supplied arguments into an internal representation
// for use with Dial. This routine is called before the outgoing
// TCP/IP connection is created, so that work such as keypair generation
// can be kept hidden from third parties.
ParseArgs(args *pt.Args) (interface{}, error)
// Dial creates an outbound net.Conn, and does whatever is required
// (eg: handshaking) to get the connection to the point where it is
// ready to relay data.
Dial(network, address string, dialFn DialFunc, args interface{}) (net.Conn, error)
}
// ServerFactory is the interface that defines the factory for creating
// pluggable transport protocol server instances. As the arguments are the
// property of the factory, validation is done at factory creation time.
type ServerFactory interface {
// Transport returns the Transport instance that this ServerFactory belongs
// to.
Transport() Transport
// Args returns the Args required on the client side to handshake with
// server connections created by this factory.
Args() *pt.Args
// WrapConn wraps the provided net.Conn with a transport protocol
// implementation, and does whatever is required (eg: handshaking) to get
// the connection to a point where it is ready to relay data.
WrapConn(conn net.Conn) (net.Conn, error)
}
// Transport is an interface that defines a pluggable transport protocol.
type Transport interface {
// Name returns the name of the transport protocol. It MUST be a valid C
// identifier.
Name() string
// ClientFactory returns a ClientFactory instance for this transport
// protocol.
ClientFactory(stateDir string) (ClientFactory, error)
// ServerFactory returns a ServerFactory instance for this transport
// protocol. This can fail if the provided arguments are invalid.
ServerFactory(stateDir string, args *pt.Args) (ServerFactory, error)
}
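// A minimal usage sketch (illustrative only, not part of the upstream
// sources): how a caller might drive a base.ClientFactory. The transport
// argument is any base.Transport implementation, and the state directory
// passed to ClientFactory is a placeholder.
package example

import (
	"net"

	"git.torproject.org/pluggable-transports/goptlib.git"
	"git.torproject.org/pluggable-transports/obfs4.git/transports/base"
)

// dialWithTransport builds a client factory for t and opens one transport
// connection to addr using the standard net dialer.
func dialWithTransport(t base.Transport, addr string, args *pt.Args) (net.Conn, error) {
	cf, err := t.ClientFactory("/var/lib/pt-state") // placeholder state dir
	if err != nil {
		return nil, err
	}

	// Parse the per-connection arguments before any traffic is generated,
	// so work such as keypair generation stays invisible to observers.
	parsed, err := cf.ParseArgs(args)
	if err != nil {
		return nil, err
	}

	// The factory handshakes over the dialed connection and returns a
	// net.Conn that is ready to relay data.
	return cf.Dial("tcp", addr, net.Dial, parsed)
}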

View File

@ -0,0 +1,306 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
//
// Package framing implements the obfs4 link framing and cryptography.
//
// The Encoder/Decoder shared secret format is:
// uint8_t[32] NaCl secretbox key
// uint8_t[16] NaCl Nonce prefix
// uint8_t[16] SipHash-2-4 key (used to obfuscate length)
// uint8_t[8] SipHash-2-4 IV
//
// The frame format is:
// uint16_t length (obfuscated, big endian)
// NaCl secretbox (Poly1305/XSalsa20) containing:
// uint8_t[16] tag (Part of the secretbox construct)
// uint8_t[] payload
//
// The length field is the length of the NaCl secretbox XORed with the
// truncated SipHash-2-4 digest run in OFB mode.
//
// Initialize K, IV[0] with values from the shared secret.
// On each packet, IV[n] = H(K, IV[n - 1])
// mask[n] = IV[n][0:2]
// obfsLen = length ^ mask[n]
//
// The NaCl secretbox (Poly1305/XSalsa20) nonce format is:
// uint8_t[16] prefix (Fixed)
// uint64_t counter (Big endian)
//
// The counter is initialized to 1, and is incremented on each frame. Since
// the protocol is designed to be used over a reliable medium, the nonce is not
// transmitted over the wire as both sides of the conversation know the prefix
// and the initial counter value. It is imperative that the counter does not
// wrap, and sessions MUST terminate before 2^64 frames are sent.
//
package framing
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"io"
"golang.org/x/crypto/nacl/secretbox"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
)
const (
// MaximumSegmentLength is the length of the largest possible segment
// including overhead.
MaximumSegmentLength = 1500 - (40 + 12)
// FrameOverhead is the length of the framing overhead.
FrameOverhead = lengthLength + secretbox.Overhead
// MaximumFramePayloadLength is the length of the maximum allowed payload
// per frame.
MaximumFramePayloadLength = MaximumSegmentLength - FrameOverhead
// KeyLength is the length of the Encoder/Decoder secret key.
KeyLength = keyLength + noncePrefixLength + drbg.SeedLength
maxFrameLength = MaximumSegmentLength - lengthLength
minFrameLength = FrameOverhead - lengthLength
keyLength = 32
noncePrefixLength = 16
nonceCounterLength = 8
nonceLength = noncePrefixLength + nonceCounterLength
lengthLength = 2
)
// ErrAgain is the error returned when Decoder.Decode() requires more data to
// continue.
var ErrAgain = errors.New("framing: More data needed to decode")
// ErrTagMismatch is the error returned when Decoder.Decode() fails to
// authenticate a frame.
var ErrTagMismatch = errors.New("framing: Poly1305 tag mismatch")
// ErrNonceCounterWrapped is the error returned when the NaCl secretbox
// nonce's counter wraps (FATAL).
var ErrNonceCounterWrapped = errors.New("framing: Nonce counter wrapped")
// InvalidPayloadLengthError is the error returned when Encoder.Encode()
// rejects the payload length.
type InvalidPayloadLengthError int
func (e InvalidPayloadLengthError) Error() string {
return fmt.Sprintf("framing: Invalid payload length: %d", int(e))
}
type boxNonce struct {
prefix [noncePrefixLength]byte
counter uint64
}
func (nonce *boxNonce) init(prefix []byte) {
if noncePrefixLength != len(prefix) {
panic(fmt.Sprintf("BUG: Nonce prefix length invalid: %d", len(prefix)))
}
copy(nonce.prefix[:], prefix)
nonce.counter = 1
}
func (nonce boxNonce) bytes(out *[nonceLength]byte) error {
// The security guarantee of Poly1305 is broken if a nonce is ever reused
// for a given key. Detect this by checking for counter wraparound since
// we start each counter at 1. If it ever happens that more than 2^64 - 1
// frames are transmitted over a given connection, support for rekeying
// will be necessary, but that's unlikely to happen.
if nonce.counter == 0 {
return ErrNonceCounterWrapped
}
copy(out[:], nonce.prefix[:])
binary.BigEndian.PutUint64(out[noncePrefixLength:], nonce.counter)
return nil
}
// Encoder is a frame encoder instance.
type Encoder struct {
key [keyLength]byte
nonce boxNonce
drbg *drbg.HashDrbg
}
// NewEncoder creates a new Encoder instance. It must be supplied a slice
// containing exactly KeyLength bytes of keying material.
func NewEncoder(key []byte) *Encoder {
if len(key) != KeyLength {
panic(fmt.Sprintf("BUG: Invalid encoder key length: %d", len(key)))
}
encoder := new(Encoder)
copy(encoder.key[:], key[0:keyLength])
encoder.nonce.init(key[keyLength : keyLength+noncePrefixLength])
seed, err := drbg.SeedFromBytes(key[keyLength+noncePrefixLength:])
if err != nil {
panic(fmt.Sprintf("BUG: Failed to initialize DRBG: %s", err))
}
encoder.drbg, _ = drbg.NewHashDrbg(seed)
return encoder
}
// Encode encodes a single frame worth of payload and returns the encoded
// length. InvalidPayloadLengthError is recoverable; all other errors MUST be
// treated as fatal and the session aborted.
func (encoder *Encoder) Encode(frame, payload []byte) (n int, err error) {
payloadLen := len(payload)
if MaximumFramePayloadLength < payloadLen {
return 0, InvalidPayloadLengthError(payloadLen)
}
if len(frame) < payloadLen+FrameOverhead {
return 0, io.ErrShortBuffer
}
// Generate a new nonce.
var nonce [nonceLength]byte
if err = encoder.nonce.bytes(&nonce); err != nil {
return 0, err
}
encoder.nonce.counter++
// Encrypt and MAC payload.
box := secretbox.Seal(frame[:lengthLength], payload, &nonce, &encoder.key)
// Obfuscate the length.
length := uint16(len(box) - lengthLength)
lengthMask := encoder.drbg.NextBlock()
length ^= binary.BigEndian.Uint16(lengthMask)
binary.BigEndian.PutUint16(frame[:2], length)
// Return the frame.
return len(box), nil
}
// Decoder is a frame decoder instance.
type Decoder struct {
key [keyLength]byte
nonce boxNonce
drbg *drbg.HashDrbg
nextNonce [nonceLength]byte
nextLength uint16
nextLengthInvalid bool
}
// NewDecoder creates a new Decoder instance. It must be supplied a slice
// containing exactly KeyLength bytes of keying material.
func NewDecoder(key []byte) *Decoder {
if len(key) != KeyLength {
panic(fmt.Sprintf("BUG: Invalid decoder key length: %d", len(key)))
}
decoder := new(Decoder)
copy(decoder.key[:], key[0:keyLength])
decoder.nonce.init(key[keyLength : keyLength+noncePrefixLength])
seed, err := drbg.SeedFromBytes(key[keyLength+noncePrefixLength:])
if err != nil {
panic(fmt.Sprintf("BUG: Failed to initialize DRBG: %s", err))
}
decoder.drbg, _ = drbg.NewHashDrbg(seed)
return decoder
}
// Decode decodes a stream of data and returns the length if any. ErrAgain is
// a temporary failure; all other errors MUST be treated as fatal and the
// session aborted.
func (decoder *Decoder) Decode(data []byte, frames *bytes.Buffer) (int, error) {
// A length of 0 indicates that we do not know how big the next frame is
// going to be.
if decoder.nextLength == 0 {
// Attempt to pull out the next frame length.
if lengthLength > frames.Len() {
return 0, ErrAgain
}
// Remove the length field from the buffer.
var obfsLen [lengthLength]byte
_, err := io.ReadFull(frames, obfsLen[:])
if err != nil {
return 0, err
}
// Derive the nonce the peer used.
if err = decoder.nonce.bytes(&decoder.nextNonce); err != nil {
return 0, err
}
// Deobfuscate the length field.
length := binary.BigEndian.Uint16(obfsLen[:])
lengthMask := decoder.drbg.NextBlock()
length ^= binary.BigEndian.Uint16(lengthMask)
if maxFrameLength < length || minFrameLength > length {
// Per "Plaintext Recovery Attacks Against SSH" by
// Martin R. Albrecht, Kenneth G. Paterson and Gaven J. Watson,
// there is a class of attacks against protocols that use similar
// sorts of framing schemes.
//
// While obfs4 should not allow plaintext recovery (CBC mode is
// not used), attempt to mitigate out of bound frame length errors
// by pretending that the length was a random valid range as per
// the countermeasure suggested by Denis Bider in section 6 of the
// paper.
decoder.nextLengthInvalid = true
length = uint16(csrand.IntRange(minFrameLength, maxFrameLength))
}
decoder.nextLength = length
}
if int(decoder.nextLength) > frames.Len() {
return 0, ErrAgain
}
// Unseal the frame.
var box [maxFrameLength]byte
n, err := io.ReadFull(frames, box[:decoder.nextLength])
if err != nil {
return 0, err
}
out, ok := secretbox.Open(data[:0], box[:n], &decoder.nextNonce, &decoder.key)
if !ok || decoder.nextLengthInvalid {
// When a random length is used (on length error) the tag should always
// mismatch, but be paranoid.
return 0, ErrTagMismatch
}
// Clean up and prepare for the next frame.
decoder.nextLength = 0
decoder.nonce.counter++
return len(out), nil
}
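// A minimal usage sketch (illustrative only, not part of the upstream
// sources): a single-frame Encode/Decode round trip. Both ends must share
// the same KeyLength bytes of keying material; here it is freshly random.
package main

import (
	"bytes"
	"fmt"

	"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
	"git.torproject.org/pluggable-transports/obfs4.git/transports/obfs4/framing"
)

func main() {
	// Shared secret: secretbox key | nonce prefix | SipHash-2-4 DRBG seed.
	key := make([]byte, framing.KeyLength)
	if err := csrand.Bytes(key); err != nil {
		panic(err)
	}

	encoder := framing.NewEncoder(key)
	decoder := framing.NewDecoder(key)

	// Encode one payload into a wire frame (obfuscated length + secretbox).
	payload := []byte("example payload")
	var frame [framing.MaximumSegmentLength]byte
	n, err := encoder.Encode(frame[:], payload)
	if err != nil {
		panic(err)
	}

	// Decode from a buffer of received bytes back into the plaintext.
	recv := bytes.NewBuffer(frame[:n])
	var out [framing.MaximumFramePayloadLength]byte
	m, err := decoder.Decode(out[:], recv)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out[:m])) // "example payload"
}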

View File

@ -0,0 +1,424 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
package obfs4
import (
"bytes"
"crypto/hmac"
"crypto/sha256"
"encoding/hex"
"errors"
"fmt"
"hash"
"strconv"
"time"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
"git.torproject.org/pluggable-transports/obfs4.git/common/ntor"
"git.torproject.org/pluggable-transports/obfs4.git/common/replayfilter"
"git.torproject.org/pluggable-transports/obfs4.git/transports/obfs4/framing"
)
const (
maxHandshakeLength = 8192
clientMinPadLength = (serverMinHandshakeLength + inlineSeedFrameLength) -
clientMinHandshakeLength
clientMaxPadLength = maxHandshakeLength - clientMinHandshakeLength
clientMinHandshakeLength = ntor.RepresentativeLength + markLength + macLength
serverMinPadLength = 0
serverMaxPadLength = maxHandshakeLength - (serverMinHandshakeLength +
inlineSeedFrameLength)
serverMinHandshakeLength = ntor.RepresentativeLength + ntor.AuthLength +
markLength + macLength
markLength = sha256.Size / 2
macLength = sha256.Size / 2
inlineSeedFrameLength = framing.FrameOverhead + packetOverhead + seedPacketPayloadLength
)
// ErrMarkNotFoundYet is the error returned when the obfs4 handshake is
// incomplete and requires more data to continue. This error is non-fatal and
// is the equivalent of EAGAIN/EWOULDBLOCK.
var ErrMarkNotFoundYet = errors.New("handshake: M_[C,S] not found yet")
// ErrInvalidHandshake is the error returned when the obfs4 handshake fails due
// to the peer not sending the correct mark. This error is fatal and the
// connection MUST be dropped.
var ErrInvalidHandshake = errors.New("handshake: Failed to find M_[C,S]")
// ErrReplayedHandshake is the error returned when the obfs4 handshake fails
// due to it being replayed. This error is fatal and the connection MUST be
// dropped.
var ErrReplayedHandshake = errors.New("handshake: Replay detected")
// ErrNtorFailed is the error returned when the ntor handshake fails. This
// error is fatal and the connection MUST be dropped.
var ErrNtorFailed = errors.New("handshake: ntor handshake failure")
// InvalidMacError is the error returned when the handshake MACs do not match.
// This error is fatal and the connection MUST be dropped.
type InvalidMacError struct {
Derived []byte
Received []byte
}
func (e *InvalidMacError) Error() string {
return fmt.Sprintf("handshake: MAC mismatch: Dervied: %s Received: %s.",
hex.EncodeToString(e.Derived), hex.EncodeToString(e.Received))
}
// InvalidAuthError is the error returned when the ntor AUTH tags do not match.
// This error is fatal and the connection MUST be dropped.
type InvalidAuthError struct {
Derived *ntor.Auth
Received *ntor.Auth
}
func (e *InvalidAuthError) Error() string {
return fmt.Sprintf("handshake: ntor AUTH mismatch: Derived: %s Received:%s.",
hex.EncodeToString(e.Derived.Bytes()[:]),
hex.EncodeToString(e.Received.Bytes()[:]))
}
type clientHandshake struct {
keypair *ntor.Keypair
nodeID *ntor.NodeID
serverIdentity *ntor.PublicKey
epochHour []byte
padLen int
mac hash.Hash
serverRepresentative *ntor.Representative
serverAuth *ntor.Auth
serverMark []byte
}
func newClientHandshake(nodeID *ntor.NodeID, serverIdentity *ntor.PublicKey, sessionKey *ntor.Keypair) *clientHandshake {
hs := new(clientHandshake)
hs.keypair = sessionKey
hs.nodeID = nodeID
hs.serverIdentity = serverIdentity
hs.padLen = csrand.IntRange(clientMinPadLength, clientMaxPadLength)
hs.mac = hmac.New(sha256.New, append(hs.serverIdentity.Bytes()[:], hs.nodeID.Bytes()[:]...))
return hs
}
func (hs *clientHandshake) generateHandshake() ([]byte, error) {
var buf bytes.Buffer
hs.mac.Reset()
hs.mac.Write(hs.keypair.Representative().Bytes()[:])
mark := hs.mac.Sum(nil)[:markLength]
// The client handshake is X | P_C | M_C | MAC(X | P_C | M_C | E) where:
// * X is the client's ephemeral Curve25519 public key representative.
// * P_C is [clientMinPadLength,clientMaxPadLength] bytes of random padding.
// * M_C is HMAC-SHA256-128(serverIdentity | NodeID, X)
// * MAC is HMAC-SHA256-128(serverIdentity | NodeID, X .... E)
// * E is the string representation of the number of hours since the UNIX
// epoch.
// Generate the padding
pad, err := makePad(hs.padLen)
if err != nil {
return nil, err
}
// Write X, P_C, M_C.
buf.Write(hs.keypair.Representative().Bytes()[:])
buf.Write(pad)
buf.Write(mark)
// Calculate and write the MAC.
hs.mac.Reset()
hs.mac.Write(buf.Bytes())
hs.epochHour = []byte(strconv.FormatInt(getEpochHour(), 10))
hs.mac.Write(hs.epochHour)
buf.Write(hs.mac.Sum(nil)[:macLength])
return buf.Bytes(), nil
}
func (hs *clientHandshake) parseServerHandshake(resp []byte) (int, []byte, error) {
// No point in examining the data unless the minimum plausible response has
// been received.
if serverMinHandshakeLength > len(resp) {
return 0, nil, ErrMarkNotFoundYet
}
if hs.serverRepresentative == nil || hs.serverAuth == nil {
// Pull out the representative/AUTH. (XXX: Add ctors to ntor)
hs.serverRepresentative = new(ntor.Representative)
copy(hs.serverRepresentative.Bytes()[:], resp[0:ntor.RepresentativeLength])
hs.serverAuth = new(ntor.Auth)
copy(hs.serverAuth.Bytes()[:], resp[ntor.RepresentativeLength:])
// Derive the mark.
hs.mac.Reset()
hs.mac.Write(hs.serverRepresentative.Bytes()[:])
hs.serverMark = hs.mac.Sum(nil)[:markLength]
}
// Attempt to find the mark + MAC.
pos := findMarkMac(hs.serverMark, resp, ntor.RepresentativeLength+ntor.AuthLength+serverMinPadLength,
maxHandshakeLength, false)
if pos == -1 {
if len(resp) >= maxHandshakeLength {
return 0, nil, ErrInvalidHandshake
}
return 0, nil, ErrMarkNotFoundYet
}
// Validate the MAC.
hs.mac.Reset()
hs.mac.Write(resp[:pos+markLength])
hs.mac.Write(hs.epochHour)
macCmp := hs.mac.Sum(nil)[:macLength]
macRx := resp[pos+markLength : pos+markLength+macLength]
if !hmac.Equal(macCmp, macRx) {
return 0, nil, &InvalidMacError{macCmp, macRx}
}
// Complete the handshake.
serverPublic := hs.serverRepresentative.ToPublic()
ok, seed, auth := ntor.ClientHandshake(hs.keypair, serverPublic,
hs.serverIdentity, hs.nodeID)
if !ok {
return 0, nil, ErrNtorFailed
}
if !ntor.CompareAuth(auth, hs.serverAuth.Bytes()[:]) {
return 0, nil, &InvalidAuthError{auth, hs.serverAuth}
}
return pos + markLength + macLength, seed.Bytes()[:], nil
}
type serverHandshake struct {
keypair *ntor.Keypair
nodeID *ntor.NodeID
serverIdentity *ntor.Keypair
epochHour []byte
serverAuth *ntor.Auth
padLen int
mac hash.Hash
clientRepresentative *ntor.Representative
clientMark []byte
}
func newServerHandshake(nodeID *ntor.NodeID, serverIdentity *ntor.Keypair, sessionKey *ntor.Keypair) *serverHandshake {
hs := new(serverHandshake)
hs.keypair = sessionKey
hs.nodeID = nodeID
hs.serverIdentity = serverIdentity
hs.padLen = csrand.IntRange(serverMinPadLength, serverMaxPadLength)
hs.mac = hmac.New(sha256.New, append(hs.serverIdentity.Public().Bytes()[:], hs.nodeID.Bytes()[:]...))
return hs
}
func (hs *serverHandshake) parseClientHandshake(filter *replayfilter.ReplayFilter, resp []byte) ([]byte, error) {
// No point in examining the data unless the minimum plausible response has
// been received.
if clientMinHandshakeLength > len(resp) {
return nil, ErrMarkNotFoundYet
}
if hs.clientRepresentative == nil {
// Pull out the representative/AUTH. (XXX: Add ctors to ntor)
hs.clientRepresentative = new(ntor.Representative)
copy(hs.clientRepresentative.Bytes()[:], resp[0:ntor.RepresentativeLength])
// Derive the mark.
hs.mac.Reset()
hs.mac.Write(hs.clientRepresentative.Bytes()[:])
hs.clientMark = hs.mac.Sum(nil)[:markLength]
}
// Attempt to find the mark + MAC.
pos := findMarkMac(hs.clientMark, resp, ntor.RepresentativeLength+clientMinPadLength,
maxHandshakeLength, true)
if pos == -1 {
if len(resp) >= maxHandshakeLength {
return nil, ErrInvalidHandshake
}
return nil, ErrMarkNotFoundYet
}
// Validate the MAC.
macFound := false
for _, off := range []int64{0, -1, 1} {
// Allow epoch to be off by up to an hour in either direction.
epochHour := []byte(strconv.FormatInt(getEpochHour()+int64(off), 10))
hs.mac.Reset()
hs.mac.Write(resp[:pos+markLength])
hs.mac.Write(epochHour)
macCmp := hs.mac.Sum(nil)[:macLength]
macRx := resp[pos+markLength : pos+markLength+macLength]
if hmac.Equal(macCmp, macRx) {
// Ensure that this handshake has not been seen previously.
if filter.TestAndSet(time.Now(), macRx) {
// The client either happened to generate exactly the same
// session key and padding, or someone is replaying a previous
// handshake. In either case, fuck them.
return nil, ErrReplayedHandshake
}
macFound = true
hs.epochHour = epochHour
// We could break out here, but in the name of reducing timing
// variation, evaluate all 3 MACs.
}
}
if !macFound {
// This probably should be an InvalidMacError, but conveying the 3 MACs
// that would be accepted is annoying, so just return a generic fatal
// failure.
return nil, ErrInvalidHandshake
}
// The client should never send trailing garbage.
if len(resp) != pos+markLength+macLength {
return nil, ErrInvalidHandshake
}
clientPublic := hs.clientRepresentative.ToPublic()
ok, seed, auth := ntor.ServerHandshake(clientPublic, hs.keypair,
hs.serverIdentity, hs.nodeID)
if !ok {
return nil, ErrNtorFailed
}
hs.serverAuth = auth
return seed.Bytes()[:], nil
}
func (hs *serverHandshake) generateHandshake() ([]byte, error) {
var buf bytes.Buffer
hs.mac.Reset()
hs.mac.Write(hs.keypair.Representative().Bytes()[:])
mark := hs.mac.Sum(nil)[:markLength]
// The server handshake is Y | AUTH | P_S | M_S | MAC(Y | AUTH | P_S | M_S | E) where:
// * Y is the server's ephemeral Curve25519 public key representative.
// * AUTH is the ntor handshake AUTH value.
// * P_S is [serverMinPadLength,serverMaxPadLength] bytes of random padding.
// * M_S is HMAC-SHA256-128(serverIdentity | NodeID, Y)
// * MAC is HMAC-SHA256-128(serverIdentity | NodeID, Y .... E)
// * E is the string representation of the number of hours since the UNIX
// epoch.
// Generate the padding
pad, err := makePad(hs.padLen)
if err != nil {
return nil, err
}
// Write Y, AUTH, P_S, M_S.
buf.Write(hs.keypair.Representative().Bytes()[:])
buf.Write(hs.serverAuth.Bytes()[:])
buf.Write(pad)
buf.Write(mark)
// Calculate and write the MAC.
hs.mac.Reset()
hs.mac.Write(buf.Bytes())
hs.mac.Write(hs.epochHour) // Set in hs.parseClientHandshake()
buf.Write(hs.mac.Sum(nil)[:macLength])
return buf.Bytes(), nil
}
// getEpochHour returns the number of hours since the UNIX epoch.
func getEpochHour() int64 {
return time.Now().Unix() / 3600
}
func findMarkMac(mark, buf []byte, startPos, maxPos int, fromTail bool) (pos int) {
if len(mark) != markLength {
panic(fmt.Sprintf("BUG: Invalid mark length: %d", len(mark)))
}
endPos := len(buf)
if startPos > len(buf) {
return -1
}
if endPos > maxPos {
endPos = maxPos
}
if endPos-startPos < markLength+macLength {
return -1
}
if fromTail {
// The server can optimize the search process by only examining the
// tail of the buffer. The client can't send valid data past M_C |
// MAC_C as it does not have the server's public key yet.
pos = endPos - (markLength + macLength)
if !hmac.Equal(buf[pos:pos+markLength], mark) {
return -1
}
return
}
// The client has to actually do a substring search since the server can
// and will send payload trailing the response.
//
// XXX: bytes.Index() uses a naive search, which kind of sucks.
pos = bytes.Index(buf[startPos:endPos], mark)
if pos == -1 {
return -1
}
// Ensure that there is enough trailing data for the MAC.
if startPos+pos+markLength+macLength > endPos {
return -1
}
// Return the index relative to the start of the slice.
pos += startPos
return
}
func makePad(padLen int) ([]byte, error) {
pad := make([]byte, padLen)
if err := csrand.Bytes(pad); err != nil {
return nil, err
}
return pad, nil
}
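// A minimal sketch (illustrative only, not part of the upstream sources) of
// the mark/MAC construction described in generateHandshake above. The
// serverIdentity, nodeID, representative and padding values are placeholders
// standing in for the real ntor material.
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
	"strconv"
	"time"
)

func main() {
	serverIdentity := make([]byte, 32) // placeholder for B
	nodeID := make([]byte, 20)         // placeholder for NodeID
	repr := make([]byte, 32)           // placeholder for X's representative
	pad := make([]byte, 64)            // placeholder for P_C

	// Both the mark and the MAC are keyed with serverIdentity | NodeID and
	// truncated to half of a SHA-256 digest (16 bytes).
	mac := hmac.New(sha256.New, append(serverIdentity, nodeID...))

	// M_C = HMAC-SHA256-128(serverIdentity | NodeID, X)
	mac.Write(repr)
	mark := mac.Sum(nil)[:sha256.Size/2]

	// MAC = HMAC-SHA256-128(serverIdentity | NodeID, X | P_C | M_C | E),
	// where E is the number of hours since the UNIX epoch as a decimal string.
	epochHour := []byte(strconv.FormatInt(time.Now().Unix()/3600, 10))
	mac.Reset()
	mac.Write(repr)
	mac.Write(pad)
	mac.Write(mark)
	mac.Write(epochHour)
	fmt.Printf("M_C: %x\nMAC: %x\n", mark, mac.Sum(nil)[:sha256.Size/2])
}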

View File

@ -0,0 +1,647 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
// Package obfs4 provides an implementation of the Tor Project's obfs4
// obfuscation protocol.
package obfs4
import (
"bytes"
"crypto/sha256"
"flag"
"fmt"
"math/rand"
"net"
"strconv"
"syscall"
"time"
"git.torproject.org/pluggable-transports/goptlib.git"
"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
"git.torproject.org/pluggable-transports/obfs4.git/common/ntor"
"git.torproject.org/pluggable-transports/obfs4.git/common/probdist"
"git.torproject.org/pluggable-transports/obfs4.git/common/replayfilter"
"git.torproject.org/pluggable-transports/obfs4.git/transports/base"
"git.torproject.org/pluggable-transports/obfs4.git/transports/obfs4/framing"
)
const (
transportName = "obfs4"
nodeIDArg = "node-id"
publicKeyArg = "public-key"
privateKeyArg = "private-key"
seedArg = "drbg-seed"
iatArg = "iat-mode"
certArg = "cert"
biasCmdArg = "obfs4-distBias"
seedLength = drbg.SeedLength
headerLength = framing.FrameOverhead + packetOverhead
clientHandshakeTimeout = time.Duration(60) * time.Second
serverHandshakeTimeout = time.Duration(30) * time.Second
replayTTL = time.Duration(3) * time.Hour
maxIATDelay = 100
maxCloseDelayBytes = maxHandshakeLength
maxCloseDelay = 60
)
const (
iatNone = iota
iatEnabled
iatParanoid
)
// biasedDist controls whether the probability table will be ScrambleSuit style
// or uniformly distributed.
var biasedDist bool
type obfs4ClientArgs struct {
nodeID *ntor.NodeID
publicKey *ntor.PublicKey
sessionKey *ntor.Keypair
iatMode int
}
// Transport is the obfs4 implementation of the base.Transport interface.
type Transport struct{}
// Name returns the name of the obfs4 transport protocol.
func (t *Transport) Name() string {
return transportName
}
// ClientFactory returns a new obfs4ClientFactory instance.
func (t *Transport) ClientFactory(stateDir string) (base.ClientFactory, error) {
cf := &obfs4ClientFactory{transport: t}
return cf, nil
}
// ServerFactory returns a new obfs4ServerFactory instance.
func (t *Transport) ServerFactory(stateDir string, args *pt.Args) (base.ServerFactory, error) {
st, err := serverStateFromArgs(stateDir, args)
if err != nil {
return nil, err
}
var iatSeed *drbg.Seed
if st.iatMode != iatNone {
iatSeedSrc := sha256.Sum256(st.drbgSeed.Bytes()[:])
var err error
iatSeed, err = drbg.SeedFromBytes(iatSeedSrc[:])
if err != nil {
return nil, err
}
}
// Store the arguments that should appear in our descriptor for the clients.
ptArgs := pt.Args{}
ptArgs.Add(certArg, st.cert.String())
ptArgs.Add(iatArg, strconv.Itoa(st.iatMode))
// Initialize the replay filter.
filter, err := replayfilter.New(replayTTL)
if err != nil {
return nil, err
}
// Initialize the close thresholds for failed connections.
drbg, err := drbg.NewHashDrbg(st.drbgSeed)
if err != nil {
return nil, err
}
rng := rand.New(drbg)
sf := &obfs4ServerFactory{t, &ptArgs, st.nodeID, st.identityKey, st.drbgSeed, iatSeed, st.iatMode, filter, rng.Intn(maxCloseDelayBytes), rng.Intn(maxCloseDelay)}
return sf, nil
}
type obfs4ClientFactory struct {
transport base.Transport
}
func (cf *obfs4ClientFactory) Transport() base.Transport {
return cf.transport
}
func (cf *obfs4ClientFactory) ParseArgs(args *pt.Args) (interface{}, error) {
var nodeID *ntor.NodeID
var publicKey *ntor.PublicKey
// The "new" (version >= 0.0.3) bridge lines use a unified "cert" argument
// for the Node ID and Public Key.
certStr, ok := args.Get(certArg)
if ok {
cert, err := serverCertFromString(certStr)
if err != nil {
return nil, err
}
nodeID, publicKey = cert.unpack()
} else {
// The "old" style (version <= 0.0.2) bridge lines use separate Node ID
// and Public Key arguments in Base16 encoding and are a UX disaster.
nodeIDStr, ok := args.Get(nodeIDArg)
if !ok {
return nil, fmt.Errorf("missing argument '%s'", nodeIDArg)
}
var err error
if nodeID, err = ntor.NodeIDFromHex(nodeIDStr); err != nil {
return nil, err
}
publicKeyStr, ok := args.Get(publicKeyArg)
if !ok {
return nil, fmt.Errorf("missing argument '%s'", publicKeyArg)
}
if publicKey, err = ntor.PublicKeyFromHex(publicKeyStr); err != nil {
return nil, err
}
}
// IAT config is common across the two bridge line formats.
iatStr, ok := args.Get(iatArg)
if !ok {
return nil, fmt.Errorf("missing argument '%s'", iatArg)
}
iatMode, err := strconv.Atoi(iatStr)
if err != nil || iatMode < iatNone || iatMode > iatParanoid {
return nil, fmt.Errorf("invalid iat-mode '%d'", iatMode)
}
// Generate the session key pair before connecting to hide the Elligator2
// rejection sampling from network observers.
sessionKey, err := ntor.NewKeypair(true)
if err != nil {
return nil, err
}
return &obfs4ClientArgs{nodeID, publicKey, sessionKey, iatMode}, nil
}
func (cf *obfs4ClientFactory) Dial(network, addr string, dialFn base.DialFunc, args interface{}) (net.Conn, error) {
// Validate args before bothering to open connection.
ca, ok := args.(*obfs4ClientArgs)
if !ok {
return nil, fmt.Errorf("invalid argument type for args")
}
conn, err := dialFn(network, addr)
if err != nil {
return nil, err
}
dialConn := conn
if conn, err = newObfs4ClientConn(conn, ca); err != nil {
dialConn.Close()
return nil, err
}
return conn, nil
}
type obfs4ServerFactory struct {
transport base.Transport
args *pt.Args
nodeID *ntor.NodeID
identityKey *ntor.Keypair
lenSeed *drbg.Seed
iatSeed *drbg.Seed
iatMode int
replayFilter *replayfilter.ReplayFilter
closeDelayBytes int
closeDelay int
}
func (sf *obfs4ServerFactory) Transport() base.Transport {
return sf.transport
}
func (sf *obfs4ServerFactory) Args() *pt.Args {
return sf.args
}
func (sf *obfs4ServerFactory) WrapConn(conn net.Conn) (net.Conn, error) {
// Not much point in having a separate newObfs4ServerConn routine when
// wrapping requires using values from the factory instance.
// Generate the session keypair *before* consuming data from the peer, to
// attempt to mask the rejection sampling due to use of Elligator2. This
// might be futile, but the timing differential isn't very large on modern
// hardware, and there are far easier statistical attacks that can be
// mounted as a distinguisher.
sessionKey, err := ntor.NewKeypair(true)
if err != nil {
return nil, err
}
lenDist := probdist.New(sf.lenSeed, 0, framing.MaximumSegmentLength, biasedDist)
var iatDist *probdist.WeightedDist
if sf.iatSeed != nil {
iatDist = probdist.New(sf.iatSeed, 0, maxIATDelay, biasedDist)
}
c := &obfs4Conn{conn, true, lenDist, iatDist, sf.iatMode, bytes.NewBuffer(nil), bytes.NewBuffer(nil), make([]byte, consumeReadSize), nil, nil}
startTime := time.Now()
if err = c.serverHandshake(sf, sessionKey); err != nil {
c.closeAfterDelay(sf, startTime)
return nil, err
}
return c, nil
}
type obfs4Conn struct {
net.Conn
isServer bool
lenDist *probdist.WeightedDist
iatDist *probdist.WeightedDist
iatMode int
receiveBuffer *bytes.Buffer
receiveDecodedBuffer *bytes.Buffer
readBuffer []byte
encoder *framing.Encoder
decoder *framing.Decoder
}
func newObfs4ClientConn(conn net.Conn, args *obfs4ClientArgs) (c *obfs4Conn, err error) {
// Generate the initial protocol polymorphism distribution(s).
var seed *drbg.Seed
if seed, err = drbg.NewSeed(); err != nil {
return
}
lenDist := probdist.New(seed, 0, framing.MaximumSegmentLength, biasedDist)
var iatDist *probdist.WeightedDist
if args.iatMode != iatNone {
var iatSeed *drbg.Seed
iatSeedSrc := sha256.Sum256(seed.Bytes()[:])
if iatSeed, err = drbg.SeedFromBytes(iatSeedSrc[:]); err != nil {
return
}
iatDist = probdist.New(iatSeed, 0, maxIATDelay, biasedDist)
}
// Allocate the client structure.
c = &obfs4Conn{conn, false, lenDist, iatDist, args.iatMode, bytes.NewBuffer(nil), bytes.NewBuffer(nil), make([]byte, consumeReadSize), nil, nil}
// Start the handshake timeout.
deadline := time.Now().Add(clientHandshakeTimeout)
if err = conn.SetDeadline(deadline); err != nil {
return nil, err
}
if err = c.clientHandshake(args.nodeID, args.publicKey, args.sessionKey); err != nil {
return nil, err
}
// Stop the handshake timeout.
if err = conn.SetDeadline(time.Time{}); err != nil {
return nil, err
}
return
}
func (conn *obfs4Conn) clientHandshake(nodeID *ntor.NodeID, peerIdentityKey *ntor.PublicKey, sessionKey *ntor.Keypair) error {
if conn.isServer {
return fmt.Errorf("clientHandshake called on server connection")
}
// Generate and send the client handshake.
hs := newClientHandshake(nodeID, peerIdentityKey, sessionKey)
blob, err := hs.generateHandshake()
if err != nil {
return err
}
if _, err = conn.Conn.Write(blob); err != nil {
return err
}
// Consume the server handshake.
var hsBuf [maxHandshakeLength]byte
for {
n, err := conn.Conn.Read(hsBuf[:])
if err != nil {
// The Read() could have returned data and an error, but there is
// no point in continuing on an EOF or whatever.
return err
}
conn.receiveBuffer.Write(hsBuf[:n])
n, seed, err := hs.parseServerHandshake(conn.receiveBuffer.Bytes())
if err == ErrMarkNotFoundYet {
continue
} else if err != nil {
return err
}
_ = conn.receiveBuffer.Next(n)
// Use the derived key material to initialize the link crypto.
okm := ntor.Kdf(seed, framing.KeyLength*2)
conn.encoder = framing.NewEncoder(okm[:framing.KeyLength])
conn.decoder = framing.NewDecoder(okm[framing.KeyLength:])
return nil
}
}
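To make the key schedule concrete: both peers derive 2*framing.KeyLength bytes of key material from the shared ntor seed and slice it into mirrored halves, so the client's encoder key is the server's decoder key and vice versa (compare the assignments above with those in serverHandshake below). The sketch only demonstrates the slicing; the key length is a placeholder and the bytes do not come from a real KDF.
package main

import (
    "bytes"
    "fmt"
)

const keyLength = 64 // placeholder; stands in for framing.KeyLength

func main() {
    // Pretend this came from ntor.Kdf(seed, keyLength*2) on both peers.
    okm := bytes.Repeat([]byte{0x42}, keyLength*2)

    clientEncKey, clientDecKey := okm[:keyLength], okm[keyLength:]
    serverEncKey, serverDecKey := okm[keyLength:], okm[:keyLength]

    fmt.Println("client encoder == server decoder:", bytes.Equal(clientEncKey, serverDecKey))
    fmt.Println("client decoder == server encoder:", bytes.Equal(clientDecKey, serverEncKey))
}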
func (conn *obfs4Conn) serverHandshake(sf *obfs4ServerFactory, sessionKey *ntor.Keypair) error {
if !conn.isServer {
return fmt.Errorf("serverHandshake called on client connection")
}
// Generate the server handshake, and arm the base timeout.
hs := newServerHandshake(sf.nodeID, sf.identityKey, sessionKey)
if err := conn.Conn.SetDeadline(time.Now().Add(serverHandshakeTimeout)); err != nil {
return err
}
// Consume the client handshake.
var hsBuf [maxHandshakeLength]byte
for {
n, err := conn.Conn.Read(hsBuf[:])
if err != nil {
// The Read() could have returned data and an error, but there is
// no point in continuing on an EOF or whatever.
return err
}
conn.receiveBuffer.Write(hsBuf[:n])
seed, err := hs.parseClientHandshake(sf.replayFilter, conn.receiveBuffer.Bytes())
if err == ErrMarkNotFoundYet {
continue
} else if err != nil {
return err
}
conn.receiveBuffer.Reset()
if err := conn.Conn.SetDeadline(time.Time{}); err != nil {
return err
}
// Use the derived key material to initialize the link crypto.
okm := ntor.Kdf(seed, framing.KeyLength*2)
conn.encoder = framing.NewEncoder(okm[framing.KeyLength:])
conn.decoder = framing.NewDecoder(okm[:framing.KeyLength])
break
}
// Since the current and only implementation always sends a PRNG seed for
// the length obfuscation, this makes the amount of data received from the
// server inconsistent with the length sent from the client.
//
// Rebalance this by tweaking the client minimum padding/server maximum
// padding, and sending the PRNG seed unpadded (As in, treat the PRNG seed
// as part of the server response). See inlineSeedFrameLength in
// handshake_ntor.go.
// Generate/send the response.
blob, err := hs.generateHandshake()
if err != nil {
return err
}
var frameBuf bytes.Buffer
if _, err = frameBuf.Write(blob); err != nil {
return err
}
// Send the PRNG seed as the first packet.
if err := conn.makePacket(&frameBuf, packetTypePrngSeed, sf.lenSeed.Bytes()[:], 0); err != nil {
return err
}
if _, err = conn.Conn.Write(frameBuf.Bytes()); err != nil {
return err
}
return nil
}
func (conn *obfs4Conn) Read(b []byte) (n int, err error) {
// If there is no payload from the previous Read() calls, consume data off
// the network. Not all data received is guaranteed to be usable payload,
// so do this in a loop till data is present or an error occurs.
for conn.receiveDecodedBuffer.Len() == 0 {
err = conn.readPackets()
if err == framing.ErrAgain {
// Don't propagate this back up the call stack if we happen to break
// out of the loop.
err = nil
continue
} else if err != nil {
break
}
}
// Even if err is set, attempt to do the read anyway so that all decoded
// data gets relayed before the connection is torn down.
if conn.receiveDecodedBuffer.Len() > 0 {
var berr error
n, berr = conn.receiveDecodedBuffer.Read(b)
if err == nil {
// Only propagate berr if there are no more important (fatal)
// errors from the network/crypto/packet processing.
err = berr
}
}
return
}
func (conn *obfs4Conn) Write(b []byte) (n int, err error) {
chopBuf := bytes.NewBuffer(b)
var payload [maxPacketPayloadLength]byte
var frameBuf bytes.Buffer
// Chop the pending data into payload frames.
for chopBuf.Len() > 0 {
// Send maximum sized frames.
rdLen := 0
rdLen, err = chopBuf.Read(payload[:])
if err != nil {
return 0, err
} else if rdLen == 0 {
panic(fmt.Sprintf("BUG: Write(), chopping length was 0"))
}
n += rdLen
err = conn.makePacket(&frameBuf, packetTypePayload, payload[:rdLen], 0)
if err != nil {
return 0, err
}
}
if conn.iatMode != iatParanoid {
// For non-paranoid IAT, pad once per burst. Paranoid IAT handles
// things differently.
if err = conn.padBurst(&frameBuf, conn.lenDist.Sample()); err != nil {
return 0, err
}
}
// Write the pending data onto the network. Partial writes are fatal,
// because the frame encoder state is advanced, and the code doesn't keep
// frameBuf around. In theory, write timeouts and whatnot could be
// supported if this wasn't the case, but that complicates the code.
if conn.iatMode != iatNone {
var iatFrame [framing.MaximumSegmentLength]byte
for frameBuf.Len() > 0 {
iatWrLen := 0
switch conn.iatMode {
case iatEnabled:
// Standard (ScrambleSuit-style) IAT obfuscation optimizes for
// bulk transport and will write ~MTU sized frames when
// possible.
iatWrLen, err = frameBuf.Read(iatFrame[:])
case iatParanoid:
// Paranoid IAT obfuscation throws performance out of the
// window and will sample the length distribution every time a
// write is scheduled.
targetLen := conn.lenDist.Sample()
if frameBuf.Len() < targetLen {
// There's not enough data buffered for the target write,
// so padding must be inserted.
if err = conn.padBurst(&frameBuf, targetLen); err != nil {
return 0, err
}
if frameBuf.Len() != targetLen {
// Ugh, padding came out to a value that required more
// than one frame, this is relatively unlikely so just
// resample since there's enough data to ensure that
// the next sample will be written.
continue
}
}
iatWrLen, err = frameBuf.Read(iatFrame[:targetLen])
}
if err != nil {
return 0, err
} else if iatWrLen == 0 {
panic(fmt.Sprintf("BUG: Write(), iat length was 0"))
}
// Calculate the delay. The delay resolution is 100 usec, leading
// to a maximum delay of 10 msec.
iatDelta := time.Duration(conn.iatDist.Sample() * 100)
// Write then sleep.
_, err = conn.Conn.Write(iatFrame[:iatWrLen])
if err != nil {
return 0, err
}
time.Sleep(iatDelta * time.Microsecond)
}
} else {
_, err = conn.Conn.Write(frameBuf.Bytes())
}
return
}
func (conn *obfs4Conn) SetDeadline(t time.Time) error {
return syscall.ENOTSUP
}
func (conn *obfs4Conn) SetWriteDeadline(t time.Time) error {
return syscall.ENOTSUP
}
func (conn *obfs4Conn) closeAfterDelay(sf *obfs4ServerFactory, startTime time.Time) {
// I-it's not like I w-wanna handshake with you or anything. B-b-baka!
defer conn.Conn.Close()
delay := time.Duration(sf.closeDelay)*time.Second + serverHandshakeTimeout
deadline := startTime.Add(delay)
if time.Now().After(deadline) {
return
}
if err := conn.Conn.SetReadDeadline(deadline); err != nil {
return
}
// Consume and discard data on this connection until either the specified
// interval passes or a certain size has been reached.
discarded := 0
var buf [framing.MaximumSegmentLength]byte
for discarded < int(sf.closeDelayBytes) {
n, err := conn.Conn.Read(buf[:])
if err != nil {
return
}
discarded += n
}
}
func (conn *obfs4Conn) padBurst(burst *bytes.Buffer, toPadTo int) (err error) {
tailLen := burst.Len() % framing.MaximumSegmentLength
padLen := 0
if toPadTo >= tailLen {
padLen = toPadTo - tailLen
} else {
padLen = (framing.MaximumSegmentLength - tailLen) + toPadTo
}
if padLen > headerLength {
err = conn.makePacket(burst, packetTypePayload, []byte{},
uint16(padLen-headerLength))
if err != nil {
return
}
} else if padLen > 0 {
err = conn.makePacket(burst, packetTypePayload, []byte{},
maxPacketPayloadLength)
if err != nil {
return
}
err = conn.makePacket(burst, packetTypePayload, []byte{},
uint16(padLen))
if err != nil {
return
}
}
return
}
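A worked example of the length calculation in padBurst, using a placeholder segment size in place of framing.MaximumSegmentLength; it reproduces only the arithmetic, not the packet emission that follows it.
package main

import "fmt"

const segLen = 1448 // placeholder for framing.MaximumSegmentLength

// padTarget mirrors the length calculation in padBurst: it returns how many
// bytes of padding are needed so the final (possibly partial) segment of the
// burst comes out to toPadTo bytes on the wire.
func padTarget(burstLen, toPadTo int) int {
    tailLen := burstLen % segLen
    if toPadTo >= tailLen {
        return toPadTo - tailLen
    }
    // The target is smaller than what is already queued in the tail segment:
    // round the tail up to a full segment, then add a toPadTo-sized tail.
    return (segLen - tailLen) + toPadTo
}

func main() {
    fmt.Println(padTarget(100, 600))  // 500: pad the 100-byte tail up to 600
    fmt.Println(padTarget(1200, 300)) // 548: finish the 1200-byte tail, then add a 300-byte tail
}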
func init() {
flag.BoolVar(&biasedDist, biasCmdArg, false, "Enable obfs4 using ScrambleSuit style table generation")
}
var _ base.ClientFactory = (*obfs4ClientFactory)(nil)
var _ base.ServerFactory = (*obfs4ServerFactory)(nil)
var _ base.Transport = (*Transport)(nil)
var _ net.Conn = (*obfs4Conn)(nil)
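For orientation, a hedged sketch of how a caller might wire these pieces together (Transport, ClientFactory, ParseArgs, Dial). It assumes base.DialFunc is satisfied by net.Dial and that goptlib's pt.Args behaves as used elsewhere in this file; the cert value is a placeholder and will not validate, so substitute a real bridge cert before running.
package main

import (
    "log"
    "net"

    pt "git.torproject.org/pluggable-transports/goptlib.git"
    "git.torproject.org/pluggable-transports/obfs4.git/transports/obfs4"
)

func main() {
    t := &obfs4.Transport{}
    cf, err := t.ClientFactory("")
    if err != nil {
        log.Fatal(err)
    }

    args := pt.Args{}
    args.Add("cert", "PLACEHOLDER-CERT") // hypothetical value; use a real bridge cert
    args.Add("iat-mode", "0")

    parsed, err := cf.ParseArgs(&args)
    if err != nil {
        log.Fatal(err) // the placeholder cert fails here by design
    }

    conn, err := cf.Dial("tcp", "192.0.2.1:443", net.Dial, parsed)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
}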

View File

@ -0,0 +1,175 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
package obfs4
import (
"crypto/sha256"
"encoding/binary"
"fmt"
"io"
"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
"git.torproject.org/pluggable-transports/obfs4.git/transports/obfs4/framing"
)
const (
packetOverhead = 2 + 1
maxPacketPayloadLength = framing.MaximumFramePayloadLength - packetOverhead
maxPacketPaddingLength = maxPacketPayloadLength
seedPacketPayloadLength = seedLength
consumeReadSize = framing.MaximumSegmentLength * 16
)
const (
packetTypePayload = iota
packetTypePrngSeed
)
// InvalidPacketLengthError is the error returned when decodePacket detects an
// invalid packet length.
type InvalidPacketLengthError int
func (e InvalidPacketLengthError) Error() string {
return fmt.Sprintf("packet: Invalid packet length: %d", int(e))
}
// InvalidPayloadLengthError is the error returned when decodePacket rejects the
// payload length.
type InvalidPayloadLengthError int
func (e InvalidPayloadLengthError) Error() string {
return fmt.Sprintf("packet: Invalid payload length: %d", int(e))
}
var zeroPadBytes [maxPacketPaddingLength]byte
func (conn *obfs4Conn) makePacket(w io.Writer, pktType uint8, data []byte, padLen uint16) error {
var pkt [framing.MaximumFramePayloadLength]byte
if len(data)+int(padLen) > maxPacketPayloadLength {
panic(fmt.Sprintf("BUG: makePacket() len(data) + padLen > maxPacketPayloadLength: %d + %d > %d",
len(data), padLen, maxPacketPayloadLength))
}
// Packets are:
// uint8_t type packetTypePayload (0x00)
// uint16_t length Length of the payload (Big Endian).
// uint8_t[] payload Data payload.
// uint8_t[] padding Padding.
pkt[0] = pktType
binary.BigEndian.PutUint16(pkt[1:], uint16(len(data)))
if len(data) > 0 {
copy(pkt[3:], data[:])
}
copy(pkt[3+len(data):], zeroPadBytes[:padLen])
pktLen := packetOverhead + len(data) + int(padLen)
// Encode the packet in an AEAD frame.
var frame [framing.MaximumSegmentLength]byte
frameLen, err := conn.encoder.Encode(frame[:], pkt[:pktLen])
if err != nil {
// All encoder errors are fatal.
return err
}
wrLen, err := w.Write(frame[:frameLen])
if err != nil {
return err
} else if wrLen < frameLen {
return io.ErrShortWrite
}
return nil
}
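A standalone illustration of the plaintext packet layout documented above (type byte, big-endian payload length, payload, zero padding), before it is wrapped in an AEAD frame; the payload and padding length are arbitrary.
package main

import (
    "encoding/binary"
    "fmt"
)

func main() {
    const packetTypePayload = 0x00
    payload := []byte("hello")
    padLen := 3

    // Build: uint8 type | uint16 length (big endian) | payload | padding.
    pkt := make([]byte, 3+len(payload)+padLen)
    pkt[0] = packetTypePayload
    binary.BigEndian.PutUint16(pkt[1:], uint16(len(payload)))
    copy(pkt[3:], payload)
    // The remaining padLen bytes are already zero.

    // Parse it back, as readPackets does after decrypting a frame.
    pktType := pkt[0]
    payloadLen := binary.BigEndian.Uint16(pkt[1:])
    fmt.Printf("type=%#x len=%d payload=%q padding=%d bytes\n",
        pktType, payloadLen, pkt[3:3+payloadLen], len(pkt)-3-int(payloadLen))
}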
func (conn *obfs4Conn) readPackets() (err error) {
// Attempt to read off the network.
rdLen, rdErr := conn.Conn.Read(conn.readBuffer)
conn.receiveBuffer.Write(conn.readBuffer[:rdLen])
var decoded [framing.MaximumFramePayloadLength]byte
for conn.receiveBuffer.Len() > 0 {
// Decrypt an AEAD frame.
decLen := 0
decLen, err = conn.decoder.Decode(decoded[:], conn.receiveBuffer)
if err == framing.ErrAgain {
break
} else if err != nil {
break
} else if decLen < packetOverhead {
err = InvalidPacketLengthError(decLen)
break
}
// Decode the packet.
pkt := decoded[0:decLen]
pktType := pkt[0]
payloadLen := binary.BigEndian.Uint16(pkt[1:])
if int(payloadLen) > len(pkt)-packetOverhead {
err = InvalidPayloadLengthError(int(payloadLen))
break
}
payload := pkt[3 : 3+payloadLen]
switch pktType {
case packetTypePayload:
if payloadLen > 0 {
conn.receiveDecodedBuffer.Write(payload)
}
case packetTypePrngSeed:
// Only regenerate the distribution if we are the client.
if len(payload) == seedPacketPayloadLength && !conn.isServer {
var seed *drbg.Seed
seed, err = drbg.SeedFromBytes(payload)
if err != nil {
break
}
conn.lenDist.Reset(seed)
if conn.iatDist != nil {
iatSeedSrc := sha256.Sum256(seed.Bytes()[:])
iatSeed, err := drbg.SeedFromBytes(iatSeedSrc[:])
if err != nil {
break
}
conn.iatDist.Reset(iatSeed)
}
}
default:
// Ignore unknown packet types.
}
}
// Read errors (all fatal) take priority over various frame processing
// errors.
if rdErr != nil {
return rdErr
}
return
}

View File

@ -0,0 +1,260 @@
/*
* Copyright (c) 2014, Yawning Angel <yawning at torproject dot org>
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are met:
*
* * Redistributions of source code must retain the above copyright notice,
* this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright notice,
* this list of conditions and the following disclaimer in the documentation
* and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
* LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
* CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
* SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
* INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
* CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*/
package obfs4
import (
"encoding/base64"
"encoding/json"
"fmt"
"io/ioutil"
"os"
"path"
"strconv"
"strings"
"git.torproject.org/pluggable-transports/goptlib.git"
"git.torproject.org/pluggable-transports/obfs4.git/common/csrand"
"git.torproject.org/pluggable-transports/obfs4.git/common/drbg"
"git.torproject.org/pluggable-transports/obfs4.git/common/ntor"
)
const (
stateFile = "obfs4_state.json"
bridgeFile = "obfs4_bridgeline.txt"
certSuffix = "=="
certLength = ntor.NodeIDLength + ntor.PublicKeyLength
)
type jsonServerState struct {
NodeID string `json:"node-id"`
PrivateKey string `json:"private-key"`
PublicKey string `json:"public-key"`
DrbgSeed string `json:"drbg-seed"`
IATMode int `json:"iat-mode"`
}
type obfs4ServerCert struct {
raw []byte
}
func (cert *obfs4ServerCert) String() string {
return strings.TrimSuffix(base64.StdEncoding.EncodeToString(cert.raw), certSuffix)
}
func (cert *obfs4ServerCert) unpack() (*ntor.NodeID, *ntor.PublicKey) {
if len(cert.raw) != certLength {
panic(fmt.Sprintf("cert length %d is invalid", len(cert.raw)))
}
nodeID, _ := ntor.NewNodeID(cert.raw[:ntor.NodeIDLength])
pubKey, _ := ntor.NewPublicKey(cert.raw[ntor.NodeIDLength:])
return nodeID, pubKey
}
func serverCertFromString(encoded string) (*obfs4ServerCert, error) {
decoded, err := base64.StdEncoding.DecodeString(encoded + certSuffix)
if err != nil {
return nil, fmt.Errorf("failed to decode cert: %s", err)
}
if len(decoded) != certLength {
return nil, fmt.Errorf("cert length %d is invalid", len(decoded))
}
return &obfs4ServerCert{raw: decoded}, nil
}
func serverCertFromState(st *obfs4ServerState) *obfs4ServerCert {
cert := new(obfs4ServerCert)
cert.raw = append(st.nodeID.Bytes()[:], st.identityKey.Public().Bytes()[:]...)
return cert
}
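To illustrate the cert encoding: the raw cert is NodeID || public key (assuming the usual 20-byte node ID and 32-byte Curve25519 key, 52 bytes total), and since 52 mod 3 == 1 the standard base64 form always ends in "==", which String() strips and serverCertFromString() re-appends. A sketch with placeholder bytes:
package main

import (
    "encoding/base64"
    "fmt"
    "strings"
)

func main() {
    nodeID := make([]byte, 20) // placeholder NodeID bytes
    pubKey := make([]byte, 32) // placeholder Curve25519 public key bytes
    raw := append(nodeID, pubKey...)

    encoded := base64.StdEncoding.EncodeToString(raw)
    certStr := strings.TrimSuffix(encoded, "==") // what appears in the bridge line

    // Decoding reverses the process by re-appending the suffix.
    decoded, err := base64.StdEncoding.DecodeString(certStr + "==")
    fmt.Println(len(raw), len(certStr), len(decoded), err == nil)
}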
type obfs4ServerState struct {
nodeID *ntor.NodeID
identityKey *ntor.Keypair
drbgSeed *drbg.Seed
iatMode int
cert *obfs4ServerCert
}
func (st *obfs4ServerState) clientString() string {
return fmt.Sprintf("%s=%s %s=%d", certArg, st.cert, iatArg, st.iatMode)
}
func serverStateFromArgs(stateDir string, args *pt.Args) (*obfs4ServerState, error) {
var js jsonServerState
var nodeIDOk, privKeyOk, seedOk bool
js.NodeID, nodeIDOk = args.Get(nodeIDArg)
js.PrivateKey, privKeyOk = args.Get(privateKeyArg)
js.DrbgSeed, seedOk = args.Get(seedArg)
iatStr, iatOk := args.Get(iatArg)
// Either a private key, node id, and seed are ALL specified, or
// they should be loaded from the state file.
if !privKeyOk && !nodeIDOk && !seedOk {
if err := jsonServerStateFromFile(stateDir, &js); err != nil {
return nil, err
}
} else if !privKeyOk {
return nil, fmt.Errorf("missing argument '%s'", privateKeyArg)
} else if !nodeIDOk {
return nil, fmt.Errorf("missing argument '%s'", nodeIDArg)
} else if !seedOk {
return nil, fmt.Errorf("missing argument '%s'", seedArg)
}
// The IAT mode should be independently configurable.
if iatOk {
// If the IAT mode is specified, attempt to parse and apply it
// as an override.
iatMode, err := strconv.Atoi(iatStr)
if err != nil {
return nil, fmt.Errorf("malformed iat-mode '%s'", iatStr)
}
js.IATMode = iatMode
}
return serverStateFromJSONServerState(stateDir, &js)
}
func serverStateFromJSONServerState(stateDir string, js *jsonServerState) (*obfs4ServerState, error) {
var err error
st := new(obfs4ServerState)
if st.nodeID, err = ntor.NodeIDFromHex(js.NodeID); err != nil {
return nil, err
}
if st.identityKey, err = ntor.KeypairFromHex(js.PrivateKey); err != nil {
return nil, err
}
if st.drbgSeed, err = drbg.SeedFromHex(js.DrbgSeed); err != nil {
return nil, err
}
if js.IATMode < iatNone || js.IATMode > iatParanoid {
return nil, fmt.Errorf("invalid iat-mode '%d'", js.IATMode)
}
st.iatMode = js.IATMode
st.cert = serverCertFromState(st)
// Generate a human readable summary of the configured endpoint.
if err = newBridgeFile(stateDir, st); err != nil {
return nil, err
}
// Write back the possibly updated server state.
return st, writeJSONServerState(stateDir, js)
}
func jsonServerStateFromFile(stateDir string, js *jsonServerState) error {
fPath := path.Join(stateDir, stateFile)
f, err := ioutil.ReadFile(fPath)
if err != nil {
if os.IsNotExist(err) {
if err = newJSONServerState(stateDir, js); err == nil {
return nil
}
}
return err
}
if err := json.Unmarshal(f, js); err != nil {
return fmt.Errorf("failed to load statefile '%s': %s", fPath, err)
}
return nil
}
func newJSONServerState(stateDir string, js *jsonServerState) (err error) {
// Generate everything a server needs, using the cryptographic PRNG.
var st obfs4ServerState
rawID := make([]byte, ntor.NodeIDLength)
if err = csrand.Bytes(rawID); err != nil {
return
}
if st.nodeID, err = ntor.NewNodeID(rawID); err != nil {
return
}
if st.identityKey, err = ntor.NewKeypair(false); err != nil {
return
}
if st.drbgSeed, err = drbg.NewSeed(); err != nil {
return
}
st.iatMode = iatNone
// Encode it into JSON format and write the state file.
js.NodeID = st.nodeID.Hex()
js.PrivateKey = st.identityKey.Private().Hex()
js.PublicKey = st.identityKey.Public().Hex()
js.DrbgSeed = st.drbgSeed.Hex()
js.IATMode = st.iatMode
return writeJSONServerState(stateDir, js)
}
func writeJSONServerState(stateDir string, js *jsonServerState) error {
var err error
var encoded []byte
if encoded, err = json.Marshal(js); err != nil {
return err
}
if err = ioutil.WriteFile(path.Join(stateDir, stateFile), encoded, 0600); err != nil {
return err
}
return nil
}
func newBridgeFile(stateDir string, st *obfs4ServerState) error {
const prefix = "# obfs4 torrc client bridge line\n" +
"#\n" +
"# This file is an automatically generated bridge line based on\n" +
"# the current obfs4proxy configuration. EDITING IT WILL HAVE\n" +
"# NO EFFECT.\n" +
"#\n" +
"# Before distributing this Bridge, edit the placeholder fields\n" +
"# to contain the actual values:\n" +
"# <IP ADDRESS> - The public IP address of your obfs4 bridge.\n" +
"# <PORT> - The TCP/IP port of your obfs4 bridge.\n" +
"# <FINGERPRINT> - The bridge's fingerprint.\n\n"
bridgeLine := fmt.Sprintf("Bridge obfs4 <IP ADDRESS>:<PORT> <FINGERPRINT> %s\n",
st.clientString())
tmp := []byte(prefix + bridgeLine)
if err := ioutil.WriteFile(path.Join(stateDir, bridgeFile), tmp, 0600); err != nil {
return err
}
return nil
}

122
vendor/github.com/Yawning/chacha20/LICENSE generated vendored Normal file
View File

@ -0,0 +1,122 @@
Creative Commons Legal Code
CC0 1.0 Universal
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS
INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES
REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS
PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED
HEREUNDER.
Statement of Purpose
The laws of most jurisdictions throughout the world automatically confer
exclusive Copyright and Related Rights (defined below) upon the creator
and subsequent owner(s) (each and all, an "owner") of an original work of
authorship and/or a database (each, a "Work").
Certain owners wish to permanently relinquish those rights to a Work for
the purpose of contributing to a commons of creative, cultural and
scientific works ("Commons") that the public can reliably and without fear
of later claims of infringement build upon, modify, incorporate in other
works, reuse and redistribute as freely as possible in any form whatsoever
and for any purposes, including without limitation commercial purposes.
These owners may contribute to the Commons to promote the ideal of a free
culture and the further production of creative, cultural and scientific
works, or to gain reputation or greater distribution for their Work in
part through the use and efforts of others.
For these and/or other purposes and motivations, and without any
expectation of additional consideration or compensation, the person
associating CC0 with a Work (the "Affirmer"), to the extent that he or she
is an owner of Copyright and Related Rights in the Work, voluntarily
elects to apply CC0 to the Work and publicly distribute the Work under its
terms, with knowledge of his or her Copyright and Related Rights in the
Work and the meaning and intended legal effect of CC0 on those rights.
1. Copyright and Related Rights. A Work made available under CC0 may be
protected by copyright and related or neighboring rights ("Copyright and
Related Rights"). Copyright and Related Rights include, but are not
limited to, the following:
i. the right to reproduce, adapt, distribute, perform, display,
communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or
likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work,
subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data
in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the
European Parliament and of the Council of 11 March 1996 on the legal
protection of databases, and under any national implementation
thereof, including any amended or successor version of such
directive); and
vii. other similar, equivalent or corresponding rights throughout the
world based on applicable law or treaty, and any national
implementations thereof.
2. Waiver. To the greatest extent permitted by, but not in contravention
of, applicable law, Affirmer hereby overtly, fully, permanently,
irrevocably and unconditionally waives, abandons, and surrenders all of
Affirmer's Copyright and Related Rights and associated claims and causes
of action, whether now known or unknown (including existing as well as
future claims and causes of action), in the Work (i) in all territories
worldwide, (ii) for the maximum duration provided by applicable law or
treaty (including future time extensions), (iii) in any current or future
medium and for any number of copies, and (iv) for any purpose whatsoever,
including without limitation commercial, advertising or promotional
purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each
member of the public at large and to the detriment of Affirmer's heirs and
successors, fully intending that such Waiver shall not be subject to
revocation, rescission, cancellation, termination, or any other legal or
equitable action to disrupt the quiet enjoyment of the Work by the public
as contemplated by Affirmer's express Statement of Purpose.
3. Public License Fallback. Should any part of the Waiver for any reason
be judged legally invalid or ineffective under applicable law, then the
Waiver shall be preserved to the maximum extent permitted taking into
account Affirmer's express Statement of Purpose. In addition, to the
extent the Waiver is so judged Affirmer hereby grants to each affected
person a royalty-free, non transferable, non sublicensable, non exclusive,
irrevocable and unconditional license to exercise Affirmer's Copyright and
Related Rights in the Work (i) in all territories worldwide, (ii) for the
maximum duration provided by applicable law or treaty (including future
time extensions), (iii) in any current or future medium and for any number
of copies, and (iv) for any purpose whatsoever, including without
limitation commercial, advertising or promotional purposes (the
"License"). The License shall be deemed effective as of the date CC0 was
applied by Affirmer to the Work. Should any part of the License for any
reason be judged legally invalid or ineffective under applicable law, such
partial invalidity or ineffectiveness shall not invalidate the remainder
of the License, and in such case Affirmer hereby affirms that he or she
will not (i) exercise any of his or her remaining Copyright and Related
Rights in the Work or (ii) assert any associated claims and causes of
action with respect to the Work, in either case contrary to Affirmer's
express Statement of Purpose.
4. Limitations and Disclaimers.
a. No trademark or patent rights held by Affirmer are waived, abandoned,
surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or
warranties of any kind concerning the Work, express, implied,
statutory or otherwise, including without limitation warranties of
title, merchantability, fitness for a particular purpose, non
infringement, or the absence of latent or other defects, accuracy, or
the present or absence of errors, whether or not discoverable, all to
the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons
that may apply to the Work or any use thereof, including without
limitation any person's Copyright and Related Rights in the Work.
Further, Affirmer disclaims responsibility for obtaining any necessary
consents, permissions or other rights required for any use of the
Work.
d. Affirmer understands and acknowledges that Creative Commons is not a
party to this document and has no duty or obligation with respect to
this CC0 or use of the Work.

14
vendor/github.com/Yawning/chacha20/README.md generated vendored Normal file
View File

@ -0,0 +1,14 @@
### chacha20 - ChaCha20
#### Yawning Angel (yawning at schwanenlied dot me)
Yet another Go ChaCha20 implementation. Everything else I found was slow,
didn't support all the variants I need to use, or relied on cgo to go fast.
Features:
* 20 round, 256 bit key only. Everything else is pointless and stupid.
* IETF 96 bit nonce variant.
* XChaCha 24 byte nonce variant.
* SSE2 and AVX2 support on amd64 targets.
* Incremental encrypt/decrypt support, unlike golang.org/x/crypto/salsa20.
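A minimal usage sketch, not taken from this package's documentation: construct a Cipher with NewCipher and stream-XOR a buffer with XORKeyStream. The all-zero key and nonce are placeholders; real callers must supply a secret key and a unique nonce.
package main

import (
    "fmt"

    "github.com/Yawning/chacha20"
)

func main() {
    key := make([]byte, chacha20.KeySize)     // placeholder: all-zero key
    nonce := make([]byte, chacha20.NonceSize) // placeholder: all-zero 8-byte nonce

    c, err := chacha20.NewCipher(key, nonce)
    if err != nil {
        panic(err)
    }
    defer c.Reset() // wipe key material when done

    msg := []byte("attack at dawn")
    ct := make([]byte, len(msg))
    c.XORKeyStream(ct, msg)
    fmt.Printf("%x\n", ct)
}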

273
vendor/github.com/Yawning/chacha20/chacha20.go generated vendored Normal file
View File

@ -0,0 +1,273 @@
// chacha20.go - A ChaCha stream cipher implementation.
//
// To the extent possible under law, Yawning Angel has waived all copyright
// and related or neighboring rights to chacha20, using the Creative
// Commons "CC0" public domain dedication. See LICENSE or
// <http://creativecommons.org/publicdomain/zero/1.0/> for full details.
package chacha20
import (
"crypto/cipher"
"encoding/binary"
"errors"
"math"
"runtime"
)
const (
// KeySize is the ChaCha20 key size in bytes.
KeySize = 32
// NonceSize is the ChaCha20 nonce size in bytes.
NonceSize = 8
// INonceSize is the IETF ChaCha20 nonce size in bytes.
INonceSize = 12
// XNonceSize is the XChaCha20 nonce size in bytes.
XNonceSize = 24
// HNonceSize is the HChaCha20 nonce size in bytes.
HNonceSize = 16
// BlockSize is the ChaCha20 block size in bytes.
BlockSize = 64
stateSize = 16
chachaRounds = 20
// The constant "expand 32-byte k" as little endian uint32s.
sigma0 = uint32(0x61707865)
sigma1 = uint32(0x3320646e)
sigma2 = uint32(0x79622d32)
sigma3 = uint32(0x6b206574)
)
var (
// ErrInvalidKey is the error returned when the key is invalid.
ErrInvalidKey = errors.New("key length must be KeySize bytes")
// ErrInvalidNonce is the error returned when the nonce is invalid.
ErrInvalidNonce = errors.New("nonce length must be NonceSize/INonceSize/XNonceSize bytes")
// ErrInvalidCounter is the error returned when the counter is invalid.
ErrInvalidCounter = errors.New("block counter is invalid (out of range)")
useUnsafe = false
usingVectors = false
blocksFn = blocksRef
)
// A Cipher is an instance of ChaCha20/XChaCha20 using a particular key and
// nonce.
type Cipher struct {
state [stateSize]uint32
buf [BlockSize]byte
off int
ietf bool
}
// Reset zeros the key data so that it will no longer appear in the process's
// memory.
func (c *Cipher) Reset() {
for i := range c.state {
c.state[i] = 0
}
for i := range c.buf {
c.buf[i] = 0
}
}
// XORKeyStream sets dst to the result of XORing src with the key stream. Dst
// and src may be the same slice but otherwise should not overlap.
func (c *Cipher) XORKeyStream(dst, src []byte) {
if len(dst) < len(src) {
src = src[:len(dst)]
}
for remaining := len(src); remaining > 0; {
// Process multiple blocks at once.
if c.off == BlockSize {
nrBlocks := remaining / BlockSize
directBytes := nrBlocks * BlockSize
if nrBlocks > 0 {
blocksFn(&c.state, src, dst, nrBlocks, c.ietf)
remaining -= directBytes
if remaining == 0 {
return
}
dst = dst[directBytes:]
src = src[directBytes:]
}
// If there's a partial block, generate 1 block of keystream into
// the internal buffer.
blocksFn(&c.state, nil, c.buf[:], 1, c.ietf)
c.off = 0
}
// Process partial blocks from the buffered keystream.
toXor := BlockSize - c.off
if remaining < toXor {
toXor = remaining
}
if toXor > 0 {
for i, v := range src[:toXor] {
dst[i] = v ^ c.buf[c.off+i]
}
dst = dst[toXor:]
src = src[toXor:]
remaining -= toXor
c.off += toXor
}
}
}
// KeyStream sets dst to the raw keystream.
func (c *Cipher) KeyStream(dst []byte) {
for remaining := len(dst); remaining > 0; {
// Process multiple blocks at once.
if c.off == BlockSize {
nrBlocks := remaining / BlockSize
directBytes := nrBlocks * BlockSize
if nrBlocks > 0 {
blocksFn(&c.state, nil, dst, nrBlocks, c.ietf)
remaining -= directBytes
if remaining == 0 {
return
}
dst = dst[directBytes:]
}
// If there's a partial block, generate 1 block of keystream into
// the internal buffer.
blocksFn(&c.state, nil, c.buf[:], 1, c.ietf)
c.off = 0
}
// Process partial blocks from the buffered keystream.
toCopy := BlockSize - c.off
if remaining < toCopy {
toCopy = remaining
}
if toCopy > 0 {
copy(dst[:toCopy], c.buf[c.off:c.off+toCopy])
dst = dst[toCopy:]
remaining -= toCopy
c.off += toCopy
}
}
}
// ReKey reinitializes the ChaCha20/XChaCha20 instance with the provided key
// and nonce.
func (c *Cipher) ReKey(key, nonce []byte) error {
if len(key) != KeySize {
return ErrInvalidKey
}
switch len(nonce) {
case NonceSize:
case INonceSize:
case XNonceSize:
var subkey [KeySize]byte
var subnonce [HNonceSize]byte
copy(subnonce[:], nonce[0:16])
HChaCha(key, &subnonce, &subkey)
key = subkey[:]
nonce = nonce[16:24]
defer func() {
for i := range subkey {
subkey[i] = 0
}
}()
default:
return ErrInvalidNonce
}
c.Reset()
c.state[0] = sigma0
c.state[1] = sigma1
c.state[2] = sigma2
c.state[3] = sigma3
c.state[4] = binary.LittleEndian.Uint32(key[0:4])
c.state[5] = binary.LittleEndian.Uint32(key[4:8])
c.state[6] = binary.LittleEndian.Uint32(key[8:12])
c.state[7] = binary.LittleEndian.Uint32(key[12:16])
c.state[8] = binary.LittleEndian.Uint32(key[16:20])
c.state[9] = binary.LittleEndian.Uint32(key[20:24])
c.state[10] = binary.LittleEndian.Uint32(key[24:28])
c.state[11] = binary.LittleEndian.Uint32(key[28:32])
c.state[12] = 0
if len(nonce) == INonceSize {
c.state[13] = binary.LittleEndian.Uint32(nonce[0:4])
c.state[14] = binary.LittleEndian.Uint32(nonce[4:8])
c.state[15] = binary.LittleEndian.Uint32(nonce[8:12])
c.ietf = true
} else {
c.state[13] = 0
c.state[14] = binary.LittleEndian.Uint32(nonce[0:4])
c.state[15] = binary.LittleEndian.Uint32(nonce[4:8])
c.ietf = false
}
c.off = BlockSize
return nil
}
// Seek sets the block counter to a given offset.
func (c *Cipher) Seek(blockCounter uint64) error {
if c.ietf {
if blockCounter > math.MaxUint32 {
return ErrInvalidCounter
}
c.state[12] = uint32(blockCounter)
} else {
c.state[12] = uint32(blockCounter)
c.state[13] = uint32(blockCounter >> 32)
}
c.off = BlockSize
return nil
}
// NewCipher returns a new ChaCha20/XChaCha20 instance.
func NewCipher(key, nonce []byte) (*Cipher, error) {
c := new(Cipher)
if err := c.ReKey(key, nonce); err != nil {
return nil, err
}
return c, nil
}
// HChaCha is the HChaCha20 hash function used to make XChaCha.
func HChaCha(key []byte, nonce *[HNonceSize]byte, out *[32]byte) {
var x [stateSize]uint32 // Last 4 slots unused, sigma hardcoded.
x[0] = binary.LittleEndian.Uint32(key[0:4])
x[1] = binary.LittleEndian.Uint32(key[4:8])
x[2] = binary.LittleEndian.Uint32(key[8:12])
x[3] = binary.LittleEndian.Uint32(key[12:16])
x[4] = binary.LittleEndian.Uint32(key[16:20])
x[5] = binary.LittleEndian.Uint32(key[20:24])
x[6] = binary.LittleEndian.Uint32(key[24:28])
x[7] = binary.LittleEndian.Uint32(key[28:32])
x[8] = binary.LittleEndian.Uint32(nonce[0:4])
x[9] = binary.LittleEndian.Uint32(nonce[4:8])
x[10] = binary.LittleEndian.Uint32(nonce[8:12])
x[11] = binary.LittleEndian.Uint32(nonce[12:16])
hChaChaRef(&x, out)
}
func init() {
switch runtime.GOARCH {
case "386", "amd64":
// Abuse unsafe to skip calling binary.LittleEndian.PutUint32
// in the critical path. This is a big boost on systems that are
// little endian and not overly picky about alignment.
useUnsafe = true
}
}
var _ cipher.Stream = (*Cipher)(nil)
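A second hedged sketch covering the XChaCha20 variant and random access: passing a 24-byte nonce to NewCipher (or ReKey) selects XChaCha20, deriving the subkey via HChaCha, and Seek repositions the keystream at a 64-byte block boundary. Values are placeholders.
package main

import (
    "fmt"

    "github.com/Yawning/chacha20"
)

func main() {
    key := make([]byte, chacha20.KeySize)
    xNonce := make([]byte, chacha20.XNonceSize) // 24-byte nonce selects XChaCha20

    c, err := chacha20.NewCipher(key, xNonce)
    if err != nil {
        panic(err)
    }

    // Jump straight to block 10 of the keystream (byte offset 10*BlockSize).
    if err := c.Seek(10); err != nil {
        panic(err)
    }
    out := make([]byte, chacha20.BlockSize)
    c.KeyStream(out)
    fmt.Printf("%x\n", out[:8])
}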

95
vendor/github.com/Yawning/chacha20/chacha20_amd64.go generated vendored Normal file
View File

@ -0,0 +1,95 @@
// chacha20_amd64.go - AMD64 optimized chacha20.
//
// To the extent possible under law, Yawning Angel has waived all copyright
// and related or neighboring rights to chacha20, using the Creative
// Commons "CC0" public domain dedication. See LICENSE or
// <http://creativecommons.org/publicdomain/zero/1.0/> for full details.
// +build amd64,!gccgo,!appengine
package chacha20
import (
"math"
)
var usingAVX2 = false
func blocksAmd64SSE2(x *uint32, inp, outp *byte, nrBlocks uint)
func blocksAmd64AVX2(x *uint32, inp, outp *byte, nrBlocks uint)
func cpuidAmd64(cpuidParams *uint32)
func xgetbv0Amd64(xcrVec *uint32)
func blocksAmd64(x *[stateSize]uint32, in []byte, out []byte, nrBlocks int, isIetf bool) {
// Probably unneeded, but stating this explicitly simplifies the assembly.
if nrBlocks == 0 {
return
}
if isIetf {
var totalBlocks uint64
totalBlocks = uint64(x[12]) + uint64(nrBlocks)
if totalBlocks > math.MaxUint32 {
panic("chacha20: Exceeded keystream per nonce limit")
}
}
if in == nil {
for i := range out {
out[i] = 0
}
in = out
}
// Pointless to call the AVX2 code for just a single block, since half of
// the output gets discarded...
if usingAVX2 && nrBlocks > 1 {
blocksAmd64AVX2(&x[0], &in[0], &out[0], uint(nrBlocks))
} else {
blocksAmd64SSE2(&x[0], &in[0], &out[0], uint(nrBlocks))
}
}
func supportsAVX2() bool {
// https://software.intel.com/en-us/articles/how-to-detect-new-instruction-support-in-the-4th-generation-intel-core-processor-family
const (
osXsaveBit = 1 << 27
avx2Bit = 1 << 5
)
// Check to see if CPUID actually supports the leaf that indicates AVX2.
// CPUID.(EAX=0H, ECX=0H) >= 7
regs := [4]uint32{0x00}
cpuidAmd64(&regs[0])
if regs[0] < 7 {
return false
}
// Check to see if the OS knows how to save/restore XMM/YMM state.
// CPUID.(EAX=01H, ECX=0H):ECX.OSXSAVE[bit 27]==1
regs = [4]uint32{0x01}
cpuidAmd64(&regs[0])
if regs[2]&osXsaveBit == 0 {
return false
}
xcrRegs := [2]uint32{}
xgetbv0Amd64(&xcrRegs[0])
if xcrRegs[0]&6 != 6 {
return false
}
// Check for AVX2 support.
// CPUID.(EAX=07H, ECX=0H):EBX.AVX2[bit 5]==1
regs = [4]uint32{0x07}
cpuidAmd64(&regs[0])
return regs[1]&avx2Bit != 0
}
func init() {
blocksFn = blocksAmd64
usingVectors = true
usingAVX2 = supportsAVX2()
}

1295
vendor/github.com/Yawning/chacha20/chacha20_amd64.py generated vendored Normal file

File diff suppressed because it is too large

1180
vendor/github.com/Yawning/chacha20/chacha20_amd64.s generated vendored Normal file

File diff suppressed because it is too large

394
vendor/github.com/Yawning/chacha20/chacha20_ref.go generated vendored Normal file
View File

@ -0,0 +1,394 @@
// chacha20_ref.go - Reference ChaCha20.
//
// To the extent possible under law, Yawning Angel has waived all copyright
// and related or neighboring rights to chacha20, using the Creative
// Commons "CC0" public domain dedication. See LICENSE or
// <http://creativecommons.org/publicdomain/zero/1.0/> for full details.
// +build !go1.9
package chacha20
import (
"encoding/binary"
"math"
"unsafe"
)
func blocksRef(x *[stateSize]uint32, in []byte, out []byte, nrBlocks int, isIetf bool) {
if isIetf {
var totalBlocks uint64
totalBlocks = uint64(x[12]) + uint64(nrBlocks)
if totalBlocks > math.MaxUint32 {
panic("chacha20: Exceeded keystream per nonce limit")
}
}
// This routine ignores x[0]...x[3] in favor of the const values since it's
// ever so slightly faster.
for n := 0; n < nrBlocks; n++ {
x0, x1, x2, x3 := sigma0, sigma1, sigma2, sigma3
x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15 := x[4], x[5], x[6], x[7], x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]
for i := chachaRounds; i > 0; i -= 2 {
// quarterround(x, 0, 4, 8, 12)
x0 += x4
x12 ^= x0
x12 = (x12 << 16) | (x12 >> 16)
x8 += x12
x4 ^= x8
x4 = (x4 << 12) | (x4 >> 20)
x0 += x4
x12 ^= x0
x12 = (x12 << 8) | (x12 >> 24)
x8 += x12
x4 ^= x8
x4 = (x4 << 7) | (x4 >> 25)
// quarterround(x, 1, 5, 9, 13)
x1 += x5
x13 ^= x1
x13 = (x13 << 16) | (x13 >> 16)
x9 += x13
x5 ^= x9
x5 = (x5 << 12) | (x5 >> 20)
x1 += x5
x13 ^= x1
x13 = (x13 << 8) | (x13 >> 24)
x9 += x13
x5 ^= x9
x5 = (x5 << 7) | (x5 >> 25)
// quarterround(x, 2, 6, 10, 14)
x2 += x6
x14 ^= x2
x14 = (x14 << 16) | (x14 >> 16)
x10 += x14
x6 ^= x10
x6 = (x6 << 12) | (x6 >> 20)
x2 += x6
x14 ^= x2
x14 = (x14 << 8) | (x14 >> 24)
x10 += x14
x6 ^= x10
x6 = (x6 << 7) | (x6 >> 25)
// quarterround(x, 3, 7, 11, 15)
x3 += x7
x15 ^= x3
x15 = (x15 << 16) | (x15 >> 16)
x11 += x15
x7 ^= x11
x7 = (x7 << 12) | (x7 >> 20)
x3 += x7
x15 ^= x3
x15 = (x15 << 8) | (x15 >> 24)
x11 += x15
x7 ^= x11
x7 = (x7 << 7) | (x7 >> 25)
// quarterround(x, 0, 5, 10, 15)
x0 += x5
x15 ^= x0
x15 = (x15 << 16) | (x15 >> 16)
x10 += x15
x5 ^= x10
x5 = (x5 << 12) | (x5 >> 20)
x0 += x5
x15 ^= x0
x15 = (x15 << 8) | (x15 >> 24)
x10 += x15
x5 ^= x10
x5 = (x5 << 7) | (x5 >> 25)
// quarterround(x, 1, 6, 11, 12)
x1 += x6
x12 ^= x1
x12 = (x12 << 16) | (x12 >> 16)
x11 += x12
x6 ^= x11
x6 = (x6 << 12) | (x6 >> 20)
x1 += x6
x12 ^= x1
x12 = (x12 << 8) | (x12 >> 24)
x11 += x12
x6 ^= x11
x6 = (x6 << 7) | (x6 >> 25)
// quarterround(x, 2, 7, 8, 13)
x2 += x7
x13 ^= x2
x13 = (x13 << 16) | (x13 >> 16)
x8 += x13
x7 ^= x8
x7 = (x7 << 12) | (x7 >> 20)
x2 += x7
x13 ^= x2
x13 = (x13 << 8) | (x13 >> 24)
x8 += x13
x7 ^= x8
x7 = (x7 << 7) | (x7 >> 25)
// quarterround(x, 3, 4, 9, 14)
x3 += x4
x14 ^= x3
x14 = (x14 << 16) | (x14 >> 16)
x9 += x14
x4 ^= x9
x4 = (x4 << 12) | (x4 >> 20)
x3 += x4
x14 ^= x3
x14 = (x14 << 8) | (x14 >> 24)
x9 += x14
x4 ^= x9
x4 = (x4 << 7) | (x4 >> 25)
}
// On amd64 at least, this is a rather big boost.
if useUnsafe {
if in != nil {
inArr := (*[16]uint32)(unsafe.Pointer(&in[n*BlockSize]))
outArr := (*[16]uint32)(unsafe.Pointer(&out[n*BlockSize]))
outArr[0] = inArr[0] ^ (x0 + sigma0)
outArr[1] = inArr[1] ^ (x1 + sigma1)
outArr[2] = inArr[2] ^ (x2 + sigma2)
outArr[3] = inArr[3] ^ (x3 + sigma3)
outArr[4] = inArr[4] ^ (x4 + x[4])
outArr[5] = inArr[5] ^ (x5 + x[5])
outArr[6] = inArr[6] ^ (x6 + x[6])
outArr[7] = inArr[7] ^ (x7 + x[7])
outArr[8] = inArr[8] ^ (x8 + x[8])
outArr[9] = inArr[9] ^ (x9 + x[9])
outArr[10] = inArr[10] ^ (x10 + x[10])
outArr[11] = inArr[11] ^ (x11 + x[11])
outArr[12] = inArr[12] ^ (x12 + x[12])
outArr[13] = inArr[13] ^ (x13 + x[13])
outArr[14] = inArr[14] ^ (x14 + x[14])
outArr[15] = inArr[15] ^ (x15 + x[15])
} else {
outArr := (*[16]uint32)(unsafe.Pointer(&out[n*BlockSize]))
outArr[0] = x0 + sigma0
outArr[1] = x1 + sigma1
outArr[2] = x2 + sigma2
outArr[3] = x3 + sigma3
outArr[4] = x4 + x[4]
outArr[5] = x5 + x[5]
outArr[6] = x6 + x[6]
outArr[7] = x7 + x[7]
outArr[8] = x8 + x[8]
outArr[9] = x9 + x[9]
outArr[10] = x10 + x[10]
outArr[11] = x11 + x[11]
outArr[12] = x12 + x[12]
outArr[13] = x13 + x[13]
outArr[14] = x14 + x[14]
outArr[15] = x15 + x[15]
}
} else {
// Slow path, either the architecture cares about alignment, or is not little endian.
x0 += sigma0
x1 += sigma1
x2 += sigma2
x3 += sigma3
x4 += x[4]
x5 += x[5]
x6 += x[6]
x7 += x[7]
x8 += x[8]
x9 += x[9]
x10 += x[10]
x11 += x[11]
x12 += x[12]
x13 += x[13]
x14 += x[14]
x15 += x[15]
if in != nil {
binary.LittleEndian.PutUint32(out[0:4], binary.LittleEndian.Uint32(in[0:4])^x0)
binary.LittleEndian.PutUint32(out[4:8], binary.LittleEndian.Uint32(in[4:8])^x1)
binary.LittleEndian.PutUint32(out[8:12], binary.LittleEndian.Uint32(in[8:12])^x2)
binary.LittleEndian.PutUint32(out[12:16], binary.LittleEndian.Uint32(in[12:16])^x3)
binary.LittleEndian.PutUint32(out[16:20], binary.LittleEndian.Uint32(in[16:20])^x4)
binary.LittleEndian.PutUint32(out[20:24], binary.LittleEndian.Uint32(in[20:24])^x5)
binary.LittleEndian.PutUint32(out[24:28], binary.LittleEndian.Uint32(in[24:28])^x6)
binary.LittleEndian.PutUint32(out[28:32], binary.LittleEndian.Uint32(in[28:32])^x7)
binary.LittleEndian.PutUint32(out[32:36], binary.LittleEndian.Uint32(in[32:36])^x8)
binary.LittleEndian.PutUint32(out[36:40], binary.LittleEndian.Uint32(in[36:40])^x9)
binary.LittleEndian.PutUint32(out[40:44], binary.LittleEndian.Uint32(in[40:44])^x10)
binary.LittleEndian.PutUint32(out[44:48], binary.LittleEndian.Uint32(in[44:48])^x11)
binary.LittleEndian.PutUint32(out[48:52], binary.LittleEndian.Uint32(in[48:52])^x12)
binary.LittleEndian.PutUint32(out[52:56], binary.LittleEndian.Uint32(in[52:56])^x13)
binary.LittleEndian.PutUint32(out[56:60], binary.LittleEndian.Uint32(in[56:60])^x14)
binary.LittleEndian.PutUint32(out[60:64], binary.LittleEndian.Uint32(in[60:64])^x15)
in = in[BlockSize:]
} else {
binary.LittleEndian.PutUint32(out[0:4], x0)
binary.LittleEndian.PutUint32(out[4:8], x1)
binary.LittleEndian.PutUint32(out[8:12], x2)
binary.LittleEndian.PutUint32(out[12:16], x3)
binary.LittleEndian.PutUint32(out[16:20], x4)
binary.LittleEndian.PutUint32(out[20:24], x5)
binary.LittleEndian.PutUint32(out[24:28], x6)
binary.LittleEndian.PutUint32(out[28:32], x7)
binary.LittleEndian.PutUint32(out[32:36], x8)
binary.LittleEndian.PutUint32(out[36:40], x9)
binary.LittleEndian.PutUint32(out[40:44], x10)
binary.LittleEndian.PutUint32(out[44:48], x11)
binary.LittleEndian.PutUint32(out[48:52], x12)
binary.LittleEndian.PutUint32(out[52:56], x13)
binary.LittleEndian.PutUint32(out[56:60], x14)
binary.LittleEndian.PutUint32(out[60:64], x15)
}
out = out[BlockSize:]
}
// Stopping at 2^70 bytes per nonce is the user's responsibility.
ctr := uint64(x[13])<<32 | uint64(x[12])
ctr++
x[12] = uint32(ctr)
x[13] = uint32(ctr >> 32)
}
}
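The unrolled arithmetic above is the standard ChaCha quarter-round applied down the columns and then along the diagonals of the 4x4 state. For reference only (this helper is not part of the vendored file, which keeps the operation inlined for speed), the same primitive as a compact function:
// quarterRound is the ChaCha ARX primitive the unrolled loop above repeats:
// add, xor, rotate by 16/12/8/7.
func quarterRound(a, b, c, d uint32) (uint32, uint32, uint32, uint32) {
    a += b
    d ^= a
    d = (d << 16) | (d >> 16)
    c += d
    b ^= c
    b = (b << 12) | (b >> 20)
    a += b
    d ^= a
    d = (d << 8) | (d >> 24)
    c += d
    b ^= c
    b = (b << 7) | (b >> 25)
    return a, b, c, d
}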
func hChaChaRef(x *[stateSize]uint32, out *[32]byte) {
x0, x1, x2, x3 := sigma0, sigma1, sigma2, sigma3
x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15 := x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9], x[10], x[11]
for i := chachaRounds; i > 0; i -= 2 {
// quarterround(x, 0, 4, 8, 12)
x0 += x4
x12 ^= x0
x12 = (x12 << 16) | (x12 >> 16)
x8 += x12
x4 ^= x8
x4 = (x4 << 12) | (x4 >> 20)
x0 += x4
x12 ^= x0
x12 = (x12 << 8) | (x12 >> 24)
x8 += x12
x4 ^= x8
x4 = (x4 << 7) | (x4 >> 25)
// quarterround(x, 1, 5, 9, 13)
x1 += x5
x13 ^= x1
x13 = (x13 << 16) | (x13 >> 16)
x9 += x13
x5 ^= x9
x5 = (x5 << 12) | (x5 >> 20)
x1 += x5
x13 ^= x1
x13 = (x13 << 8) | (x13 >> 24)
x9 += x13
x5 ^= x9
x5 = (x5 << 7) | (x5 >> 25)
// quarterround(x, 2, 6, 10, 14)
x2 += x6
x14 ^= x2
x14 = (x14 << 16) | (x14 >> 16)
x10 += x14
x6 ^= x10
x6 = (x6 << 12) | (x6 >> 20)
x2 += x6
x14 ^= x2
x14 = (x14 << 8) | (x14 >> 24)
x10 += x14
x6 ^= x10
x6 = (x6 << 7) | (x6 >> 25)
// quarterround(x, 3, 7, 11, 15)
x3 += x7
x15 ^= x3
x15 = (x15 << 16) | (x15 >> 16)
x11 += x15
x7 ^= x11
x7 = (x7 << 12) | (x7 >> 20)
x3 += x7
x15 ^= x3
x15 = (x15 << 8) | (x15 >> 24)
x11 += x15
x7 ^= x11
x7 = (x7 << 7) | (x7 >> 25)
// quarterround(x, 0, 5, 10, 15)
x0 += x5
x15 ^= x0
x15 = (x15 << 16) | (x15 >> 16)
x10 += x15
x5 ^= x10
x5 = (x5 << 12) | (x5 >> 20)
x0 += x5
x15 ^= x0
x15 = (x15 << 8) | (x15 >> 24)
x10 += x15
x5 ^= x10
x5 = (x5 << 7) | (x5 >> 25)
// quarterround(x, 1, 6, 11, 12)
x1 += x6
x12 ^= x1
x12 = (x12 << 16) | (x12 >> 16)
x11 += x12
x6 ^= x11
x6 = (x6 << 12) | (x6 >> 20)
x1 += x6
x12 ^= x1
x12 = (x12 << 8) | (x12 >> 24)
x11 += x12
x6 ^= x11
x6 = (x6 << 7) | (x6 >> 25)
// quarterround(x, 2, 7, 8, 13)
x2 += x7
x13 ^= x2
x13 = (x13 << 16) | (x13 >> 16)
x8 += x13
x7 ^= x8
x7 = (x7 << 12) | (x7 >> 20)
x2 += x7
x13 ^= x2
x13 = (x13 << 8) | (x13 >> 24)
x8 += x13
x7 ^= x8
x7 = (x7 << 7) | (x7 >> 25)
// quarterround(x, 3, 4, 9, 14)
x3 += x4
x14 ^= x3
x14 = (x14 << 16) | (x14 >> 16)
x9 += x14
x4 ^= x9
x4 = (x4 << 12) | (x4 >> 20)
x3 += x4
x14 ^= x3
x14 = (x14 << 8) | (x14 >> 24)
x9 += x14
x4 ^= x9
x4 = (x4 << 7) | (x4 >> 25)
}
// HChaCha returns x0...x3 | x12...x15, which corresponds to the
// indexes of the ChaCha constant and the indexes of the IV.
if useUnsafe {
outArr := (*[16]uint32)(unsafe.Pointer(&out[0]))
outArr[0] = x0
outArr[1] = x1
outArr[2] = x2
outArr[3] = x3
outArr[4] = x12
outArr[5] = x13
outArr[6] = x14
outArr[7] = x15
} else {
binary.LittleEndian.PutUint32(out[0:4], x0)
binary.LittleEndian.PutUint32(out[4:8], x1)
binary.LittleEndian.PutUint32(out[8:12], x2)
binary.LittleEndian.PutUint32(out[12:16], x3)
binary.LittleEndian.PutUint32(out[16:20], x12)
binary.LittleEndian.PutUint32(out[20:24], x13)
binary.LittleEndian.PutUint32(out[24:28], x14)
binary.LittleEndian.PutUint32(out[28:32], x15)
}
return
}

395
vendor/github.com/Yawning/chacha20/chacha20_ref_go19.go generated vendored Normal file
View File

@ -0,0 +1,395 @@
// chacha20_ref.go - Reference ChaCha20.
//
// To the extent possible under law, Yawning Angel has waived all copyright
// and related or neighboring rights to chacha20, using the Creative
// Commons "CC0" public domain dedication. See LICENSE or
// <http://creativecommons.org/publicdomain/zero/1.0/> for full details.
// +build go1.9
package chacha20
import (
"encoding/binary"
"math"
"math/bits"
"unsafe"
)
func blocksRef(x *[stateSize]uint32, in []byte, out []byte, nrBlocks int, isIetf bool) {
if isIetf {
var totalBlocks uint64
totalBlocks = uint64(x[12]) + uint64(nrBlocks)
if totalBlocks > math.MaxUint32 {
panic("chacha20: Exceeded keystream per nonce limit")
}
}
// This routine ignores x[0]...x[3] in favor of the const values since it's
// ever so slightly faster.
for n := 0; n < nrBlocks; n++ {
x0, x1, x2, x3 := sigma0, sigma1, sigma2, sigma3
x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15 := x[4], x[5], x[6], x[7], x[8], x[9], x[10], x[11], x[12], x[13], x[14], x[15]
for i := chachaRounds; i > 0; i -= 2 {
// quarterround(x, 0, 4, 8, 12)
x0 += x4
x12 ^= x0
x12 = bits.RotateLeft32(x12, 16)
x8 += x12
x4 ^= x8
x4 = bits.RotateLeft32(x4, 12)
x0 += x4
x12 ^= x0
x12 = bits.RotateLeft32(x12, 8)
x8 += x12
x4 ^= x8
x4 = bits.RotateLeft32(x4, 7)
// quarterround(x, 1, 5, 9, 13)
x1 += x5
x13 ^= x1
x13 = bits.RotateLeft32(x13, 16)
x9 += x13
x5 ^= x9
x5 = bits.RotateLeft32(x5, 12)
x1 += x5
x13 ^= x1
x13 = bits.RotateLeft32(x13, 8)
x9 += x13
x5 ^= x9
x5 = bits.RotateLeft32(x5, 7)
// quarterround(x, 2, 6, 10, 14)
x2 += x6
x14 ^= x2
x14 = bits.RotateLeft32(x14, 16)
x10 += x14
x6 ^= x10
x6 = bits.RotateLeft32(x6, 12)
x2 += x6
x14 ^= x2
x14 = bits.RotateLeft32(x14, 8)
x10 += x14
x6 ^= x10
x6 = bits.RotateLeft32(x6, 7)
// quarterround(x, 3, 7, 11, 15)
x3 += x7
x15 ^= x3
x15 = bits.RotateLeft32(x15, 16)
x11 += x15
x7 ^= x11
x7 = bits.RotateLeft32(x7, 12)
x3 += x7
x15 ^= x3
x15 = bits.RotateLeft32(x15, 8)
x11 += x15
x7 ^= x11
x7 = bits.RotateLeft32(x7, 7)
// quarterround(x, 0, 5, 10, 15)
x0 += x5
x15 ^= x0
x15 = bits.RotateLeft32(x15, 16)
x10 += x15
x5 ^= x10
x5 = bits.RotateLeft32(x5, 12)
x0 += x5
x15 ^= x0
x15 = bits.RotateLeft32(x15, 8)
x10 += x15
x5 ^= x10
x5 = bits.RotateLeft32(x5, 7)
// quarterround(x, 1, 6, 11, 12)
x1 += x6
x12 ^= x1
x12 = bits.RotateLeft32(x12, 16)
x11 += x12
x6 ^= x11
x6 = bits.RotateLeft32(x6, 12)
x1 += x6
x12 ^= x1
x12 = bits.RotateLeft32(x12, 8)
x11 += x12
x6 ^= x11
x6 = bits.RotateLeft32(x6, 7)
// quarterround(x, 2, 7, 8, 13)
x2 += x7
x13 ^= x2
x13 = bits.RotateLeft32(x13, 16)
x8 += x13
x7 ^= x8
x7 = bits.RotateLeft32(x7, 12)
x2 += x7
x13 ^= x2
x13 = bits.RotateLeft32(x13, 8)
x8 += x13
x7 ^= x8
x7 = bits.RotateLeft32(x7, 7)
// quarterround(x, 3, 4, 9, 14)
x3 += x4
x14 ^= x3
x14 = bits.RotateLeft32(x14, 16)
x9 += x14
x4 ^= x9
x4 = bits.RotateLeft32(x4, 12)
x3 += x4
x14 ^= x3
x14 = bits.RotateLeft32(x14, 8)
x9 += x14
x4 ^= x9
x4 = bits.RotateLeft32(x4, 7)
}
// On amd64 at least, this is a rather big boost.
if useUnsafe {
if in != nil {
inArr := (*[16]uint32)(unsafe.Pointer(&in[n*BlockSize]))
outArr := (*[16]uint32)(unsafe.Pointer(&out[n*BlockSize]))
outArr[0] = inArr[0] ^ (x0 + sigma0)
outArr[1] = inArr[1] ^ (x1 + sigma1)
outArr[2] = inArr[2] ^ (x2 + sigma2)
outArr[3] = inArr[3] ^ (x3 + sigma3)
outArr[4] = inArr[4] ^ (x4 + x[4])
outArr[5] = inArr[5] ^ (x5 + x[5])
outArr[6] = inArr[6] ^ (x6 + x[6])
outArr[7] = inArr[7] ^ (x7 + x[7])
outArr[8] = inArr[8] ^ (x8 + x[8])
outArr[9] = inArr[9] ^ (x9 + x[9])
outArr[10] = inArr[10] ^ (x10 + x[10])
outArr[11] = inArr[11] ^ (x11 + x[11])
outArr[12] = inArr[12] ^ (x12 + x[12])
outArr[13] = inArr[13] ^ (x13 + x[13])
outArr[14] = inArr[14] ^ (x14 + x[14])
outArr[15] = inArr[15] ^ (x15 + x[15])
} else {
outArr := (*[16]uint32)(unsafe.Pointer(&out[n*BlockSize]))
outArr[0] = x0 + sigma0
outArr[1] = x1 + sigma1
outArr[2] = x2 + sigma2
outArr[3] = x3 + sigma3
outArr[4] = x4 + x[4]
outArr[5] = x5 + x[5]
outArr[6] = x6 + x[6]
outArr[7] = x7 + x[7]
outArr[8] = x8 + x[8]
outArr[9] = x9 + x[9]
outArr[10] = x10 + x[10]
outArr[11] = x11 + x[11]
outArr[12] = x12 + x[12]
outArr[13] = x13 + x[13]
outArr[14] = x14 + x[14]
outArr[15] = x15 + x[15]
}
} else {
// Slow path: either the architecture cares about alignment, or it is not little endian.
x0 += sigma0
x1 += sigma1
x2 += sigma2
x3 += sigma3
x4 += x[4]
x5 += x[5]
x6 += x[6]
x7 += x[7]
x8 += x[8]
x9 += x[9]
x10 += x[10]
x11 += x[11]
x12 += x[12]
x13 += x[13]
x14 += x[14]
x15 += x[15]
if in != nil {
binary.LittleEndian.PutUint32(out[0:4], binary.LittleEndian.Uint32(in[0:4])^x0)
binary.LittleEndian.PutUint32(out[4:8], binary.LittleEndian.Uint32(in[4:8])^x1)
binary.LittleEndian.PutUint32(out[8:12], binary.LittleEndian.Uint32(in[8:12])^x2)
binary.LittleEndian.PutUint32(out[12:16], binary.LittleEndian.Uint32(in[12:16])^x3)
binary.LittleEndian.PutUint32(out[16:20], binary.LittleEndian.Uint32(in[16:20])^x4)
binary.LittleEndian.PutUint32(out[20:24], binary.LittleEndian.Uint32(in[20:24])^x5)
binary.LittleEndian.PutUint32(out[24:28], binary.LittleEndian.Uint32(in[24:28])^x6)
binary.LittleEndian.PutUint32(out[28:32], binary.LittleEndian.Uint32(in[28:32])^x7)
binary.LittleEndian.PutUint32(out[32:36], binary.LittleEndian.Uint32(in[32:36])^x8)
binary.LittleEndian.PutUint32(out[36:40], binary.LittleEndian.Uint32(in[36:40])^x9)
binary.LittleEndian.PutUint32(out[40:44], binary.LittleEndian.Uint32(in[40:44])^x10)
binary.LittleEndian.PutUint32(out[44:48], binary.LittleEndian.Uint32(in[44:48])^x11)
binary.LittleEndian.PutUint32(out[48:52], binary.LittleEndian.Uint32(in[48:52])^x12)
binary.LittleEndian.PutUint32(out[52:56], binary.LittleEndian.Uint32(in[52:56])^x13)
binary.LittleEndian.PutUint32(out[56:60], binary.LittleEndian.Uint32(in[56:60])^x14)
binary.LittleEndian.PutUint32(out[60:64], binary.LittleEndian.Uint32(in[60:64])^x15)
in = in[BlockSize:]
} else {
binary.LittleEndian.PutUint32(out[0:4], x0)
binary.LittleEndian.PutUint32(out[4:8], x1)
binary.LittleEndian.PutUint32(out[8:12], x2)
binary.LittleEndian.PutUint32(out[12:16], x3)
binary.LittleEndian.PutUint32(out[16:20], x4)
binary.LittleEndian.PutUint32(out[20:24], x5)
binary.LittleEndian.PutUint32(out[24:28], x6)
binary.LittleEndian.PutUint32(out[28:32], x7)
binary.LittleEndian.PutUint32(out[32:36], x8)
binary.LittleEndian.PutUint32(out[36:40], x9)
binary.LittleEndian.PutUint32(out[40:44], x10)
binary.LittleEndian.PutUint32(out[44:48], x11)
binary.LittleEndian.PutUint32(out[48:52], x12)
binary.LittleEndian.PutUint32(out[52:56], x13)
binary.LittleEndian.PutUint32(out[56:60], x14)
binary.LittleEndian.PutUint32(out[60:64], x15)
}
out = out[BlockSize:]
}
// Stopping at 2^70 bytes per nonce is the user's responsibility.
ctr := uint64(x[13])<<32 | uint64(x[12])
ctr++
x[12] = uint32(ctr)
x[13] = uint32(ctr >> 32)
}
}
func hChaChaRef(x *[stateSize]uint32, out *[32]byte) {
x0, x1, x2, x3 := sigma0, sigma1, sigma2, sigma3
x4, x5, x6, x7, x8, x9, x10, x11, x12, x13, x14, x15 := x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7], x[8], x[9], x[10], x[11]
for i := chachaRounds; i > 0; i -= 2 {
// quarterround(x, 0, 4, 8, 12)
x0 += x4
x12 ^= x0
x12 = bits.RotateLeft32(x12, 16)
x8 += x12
x4 ^= x8
x4 = bits.RotateLeft32(x4, 12)
x0 += x4
x12 ^= x0
x12 = bits.RotateLeft32(x12, 8)
x8 += x12
x4 ^= x8
x4 = bits.RotateLeft32(x4, 7)
// quarterround(x, 1, 5, 9, 13)
x1 += x5
x13 ^= x1
x13 = bits.RotateLeft32(x13, 16)
x9 += x13
x5 ^= x9
x5 = bits.RotateLeft32(x5, 12)
x1 += x5
x13 ^= x1
x13 = bits.RotateLeft32(x13, 8)
x9 += x13
x5 ^= x9
x5 = bits.RotateLeft32(x5, 7)
// quarterround(x, 2, 6, 10, 14)
x2 += x6
x14 ^= x2
x14 = bits.RotateLeft32(x14, 16)
x10 += x14
x6 ^= x10
x6 = bits.RotateLeft32(x6, 12)
x2 += x6
x14 ^= x2
x14 = bits.RotateLeft32(x14, 8)
x10 += x14
x6 ^= x10
x6 = bits.RotateLeft32(x6, 7)
// quarterround(x, 3, 7, 11, 15)
x3 += x7
x15 ^= x3
x15 = bits.RotateLeft32(x15, 16)
x11 += x15
x7 ^= x11
x7 = bits.RotateLeft32(x7, 12)
x3 += x7
x15 ^= x3
x15 = bits.RotateLeft32(x15, 8)
x11 += x15
x7 ^= x11
x7 = bits.RotateLeft32(x7, 7)
// quarterround(x, 0, 5, 10, 15)
x0 += x5
x15 ^= x0
x15 = bits.RotateLeft32(x15, 16)
x10 += x15
x5 ^= x10
x5 = bits.RotateLeft32(x5, 12)
x0 += x5
x15 ^= x0
x15 = bits.RotateLeft32(x15, 8)
x10 += x15
x5 ^= x10
x5 = bits.RotateLeft32(x5, 7)
// quarterround(x, 1, 6, 11, 12)
x1 += x6
x12 ^= x1
x12 = bits.RotateLeft32(x12, 16)
x11 += x12
x6 ^= x11
x6 = bits.RotateLeft32(x6, 12)
x1 += x6
x12 ^= x1
x12 = bits.RotateLeft32(x12, 8)
x11 += x12
x6 ^= x11
x6 = bits.RotateLeft32(x6, 7)
// quarterround(x, 2, 7, 8, 13)
x2 += x7
x13 ^= x2
x13 = bits.RotateLeft32(x13, 16)
x8 += x13
x7 ^= x8
x7 = bits.RotateLeft32(x7, 12)
x2 += x7
x13 ^= x2
x13 = bits.RotateLeft32(x13, 8)
x8 += x13
x7 ^= x8
x7 = bits.RotateLeft32(x7, 7)
// quarterround(x, 3, 4, 9, 14)
x3 += x4
x14 ^= x3
x14 = bits.RotateLeft32(x14, 16)
x9 += x14
x4 ^= x9
x4 = bits.RotateLeft32(x4, 12)
x3 += x4
x14 ^= x3
x14 = bits.RotateLeft32(x14, 8)
x9 += x14
x4 ^= x9
x4 = bits.RotateLeft32(x4, 7)
}
// HChaCha returns x0...x3 | x12...x15, which corresponds to the
// indexes of the ChaCha constant and the indexes of the IV.
if useUnsafe {
outArr := (*[16]uint32)(unsafe.Pointer(&out[0]))
outArr[0] = x0
outArr[1] = x1
outArr[2] = x2
outArr[3] = x3
outArr[4] = x12
outArr[5] = x13
outArr[6] = x14
outArr[7] = x15
} else {
binary.LittleEndian.PutUint32(out[0:4], x0)
binary.LittleEndian.PutUint32(out[4:8], x1)
binary.LittleEndian.PutUint32(out[8:12], x2)
binary.LittleEndian.PutUint32(out[12:16], x3)
binary.LittleEndian.PutUint32(out[16:20], x12)
binary.LittleEndian.PutUint32(out[20:24], x13)
binary.LittleEndian.PutUint32(out[24:28], x14)
binary.LittleEndian.PutUint32(out[28:32], x15)
}
return
}
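// quarterRound is an illustrative reading aid added alongside the vendored
// code; it is not part of the upstream package, which inlines the quarter
// round for speed. The indices a, b, c, d follow the "quarterround(x, a, b, c, d)"
// comments above; the pre-go1.9 file spells out the same rotations as
// shift-and-or pairs.
func quarterRound(s *[16]uint32, a, b, c, d int) {
	s[a] += s[b]
	s[d] ^= s[a]
	s[d] = bits.RotateLeft32(s[d], 16)
	s[c] += s[d]
	s[b] ^= s[c]
	s[b] = bits.RotateLeft32(s[b], 12)
	s[a] += s[b]
	s[d] ^= s[a]
	s[d] = bits.RotateLeft32(s[d], 8)
	s[c] += s[d]
	s[b] ^= s[c]
	s[b] = bits.RotateLeft32(s[b], 7)
}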

27
vendor/github.com/agl/ed25519/LICENSE generated vendored Normal file
View File

@ -0,0 +1,27 @@
Copyright (c) 2012 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

1411
vendor/github.com/agl/ed25519/edwards25519/const.go generated vendored Normal file

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

340
vendor/github.com/agl/ed25519/extra25519/extra25519.go generated vendored Normal file
View File

@ -0,0 +1,340 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package extra25519
import (
"crypto/sha512"
"github.com/agl/ed25519/edwards25519"
)
// PrivateKeyToCurve25519 converts an ed25519 private key into a corresponding
// curve25519 private key such that the resulting curve25519 public key will
// equal the result from PublicKeyToCurve25519.
func PrivateKeyToCurve25519(curve25519Private *[32]byte, privateKey *[64]byte) {
h := sha512.New()
h.Write(privateKey[:32])
digest := h.Sum(nil)
digest[0] &= 248
digest[31] &= 127
digest[31] |= 64
copy(curve25519Private[:], digest)
}
func edwardsToMontgomeryX(outX, y *edwards25519.FieldElement) {
// We only need the x-coordinate of the curve25519 point, which I'll
// call u. The isomorphism is u=(y+1)/(1-y); since y=Y/Z, this gives
// u=(Y+Z)/(Z-Y). We know that Z=1, thus u=(Y+1)/(1-Y).
var oneMinusY edwards25519.FieldElement
edwards25519.FeOne(&oneMinusY)
edwards25519.FeSub(&oneMinusY, &oneMinusY, y)
edwards25519.FeInvert(&oneMinusY, &oneMinusY)
edwards25519.FeOne(outX)
edwards25519.FeAdd(outX, outX, y)
edwards25519.FeMul(outX, outX, &oneMinusY)
}
// PublicKeyToCurve25519 converts an Ed25519 public key into the curve25519
// public key that would be generated from the same private key.
func PublicKeyToCurve25519(curve25519Public *[32]byte, publicKey *[32]byte) bool {
var A edwards25519.ExtendedGroupElement
if !A.FromBytes(publicKey) {
return false
}
// A.Z = 1 as a postcondition of FromBytes.
var x edwards25519.FieldElement
edwardsToMontgomeryX(&x, &A.Y)
edwards25519.FeToBytes(curve25519Public, &x)
return true
}
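// exampleConvertKeyPair is an illustrative sketch added next to the vendored
// code; it is not part of upstream agl/ed25519. It shows the intended pairing
// of the two conversions above: an ed25519 keypair maps to the matching
// curve25519 keypair, and ok reports whether the public key was a valid
// ed25519 point encoding.
func exampleConvertKeyPair(edPublic *[32]byte, edPrivate *[64]byte) (curvePublic, curvePrivate [32]byte, ok bool) {
	PrivateKeyToCurve25519(&curvePrivate, edPrivate)
	ok = PublicKeyToCurve25519(&curvePublic, edPublic)
	return
}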
// sqrtMinusAPlus2 is sqrt(-(486662+2))
var sqrtMinusAPlus2 = edwards25519.FieldElement{
-12222970, -8312128, -11511410, 9067497, -15300785, -241793, 25456130, 14121551, -12187136, 3972024,
}
// sqrtMinusHalf is sqrt(-1/2)
var sqrtMinusHalf = edwards25519.FieldElement{
-17256545, 3971863, 28865457, -1750208, 27359696, -16640980, 12573105, 1002827, -163343, 11073975,
}
// halfQMinus1Bytes is (2^255-20)/2 expressed in little endian form.
var halfQMinus1Bytes = [32]byte{
0xf6, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0x3f,
}
// feBytesLE returns one if a < b and zero otherwise.
func feBytesLE(a, b *[32]byte) int32 {
equalSoFar := int32(-1)
greater := int32(0)
for i := uint(31); i < 32; i-- {
x := int32(a[i])
y := int32(b[i])
greater = (^equalSoFar & greater) | (equalSoFar & ((x - y) >> 31))
equalSoFar = equalSoFar & (((x ^ y) - 1) >> 31)
}
return int32(^equalSoFar & 1 & greater)
}
// ScalarBaseMult computes a curve25519 public key from a private key and also
// a uniform representative for that public key. Note that this function will
// fail and return false for about half of private keys.
// See http://elligator.cr.yp.to/elligator-20130828.pdf.
func ScalarBaseMult(publicKey, representative, privateKey *[32]byte) bool {
var maskedPrivateKey [32]byte
copy(maskedPrivateKey[:], privateKey[:])
maskedPrivateKey[0] &= 248
maskedPrivateKey[31] &= 127
maskedPrivateKey[31] |= 64
var A edwards25519.ExtendedGroupElement
edwards25519.GeScalarMultBase(&A, &maskedPrivateKey)
var inv1 edwards25519.FieldElement
edwards25519.FeSub(&inv1, &A.Z, &A.Y)
edwards25519.FeMul(&inv1, &inv1, &A.X)
edwards25519.FeInvert(&inv1, &inv1)
var t0, u edwards25519.FieldElement
edwards25519.FeMul(&u, &inv1, &A.X)
edwards25519.FeAdd(&t0, &A.Y, &A.Z)
edwards25519.FeMul(&u, &u, &t0)
var v edwards25519.FieldElement
edwards25519.FeMul(&v, &t0, &inv1)
edwards25519.FeMul(&v, &v, &A.Z)
edwards25519.FeMul(&v, &v, &sqrtMinusAPlus2)
var b edwards25519.FieldElement
edwards25519.FeAdd(&b, &u, &edwards25519.A)
var c, b3, b7, b8 edwards25519.FieldElement
edwards25519.FeSquare(&b3, &b) // 2
edwards25519.FeMul(&b3, &b3, &b) // 3
edwards25519.FeSquare(&c, &b3) // 6
edwards25519.FeMul(&b7, &c, &b) // 7
edwards25519.FeMul(&b8, &b7, &b) // 8
edwards25519.FeMul(&c, &b7, &u)
q58(&c, &c)
var chi edwards25519.FieldElement
edwards25519.FeSquare(&chi, &c)
edwards25519.FeSquare(&chi, &chi)
edwards25519.FeSquare(&t0, &u)
edwards25519.FeMul(&chi, &chi, &t0)
edwards25519.FeSquare(&t0, &b7) // 14
edwards25519.FeMul(&chi, &chi, &t0)
edwards25519.FeNeg(&chi, &chi)
var chiBytes [32]byte
edwards25519.FeToBytes(&chiBytes, &chi)
// chi[1] is either 0 or 0xff
if chiBytes[1] == 0xff {
return false
}
// Calculate r1 = sqrt(-u/(2*(u+A)))
var r1 edwards25519.FieldElement
edwards25519.FeMul(&r1, &c, &u)
edwards25519.FeMul(&r1, &r1, &b3)
edwards25519.FeMul(&r1, &r1, &sqrtMinusHalf)
var maybeSqrtM1 edwards25519.FieldElement
edwards25519.FeSquare(&t0, &r1)
edwards25519.FeMul(&t0, &t0, &b)
edwards25519.FeAdd(&t0, &t0, &t0)
edwards25519.FeAdd(&t0, &t0, &u)
edwards25519.FeOne(&maybeSqrtM1)
edwards25519.FeCMove(&maybeSqrtM1, &edwards25519.SqrtM1, edwards25519.FeIsNonZero(&t0))
edwards25519.FeMul(&r1, &r1, &maybeSqrtM1)
// Calculate r = sqrt(-(u+A)/(2u))
var r edwards25519.FieldElement
edwards25519.FeSquare(&t0, &c) // 2
edwards25519.FeMul(&t0, &t0, &c) // 3
edwards25519.FeSquare(&t0, &t0) // 6
edwards25519.FeMul(&r, &t0, &c) // 7
edwards25519.FeSquare(&t0, &u) // 2
edwards25519.FeMul(&t0, &t0, &u) // 3
edwards25519.FeMul(&r, &r, &t0)
edwards25519.FeSquare(&t0, &b8) // 16
edwards25519.FeMul(&t0, &t0, &b8) // 24
edwards25519.FeMul(&t0, &t0, &b) // 25
edwards25519.FeMul(&r, &r, &t0)
edwards25519.FeMul(&r, &r, &sqrtMinusHalf)
edwards25519.FeSquare(&t0, &r)
edwards25519.FeMul(&t0, &t0, &u)
edwards25519.FeAdd(&t0, &t0, &t0)
edwards25519.FeAdd(&t0, &t0, &b)
edwards25519.FeOne(&maybeSqrtM1)
edwards25519.FeCMove(&maybeSqrtM1, &edwards25519.SqrtM1, edwards25519.FeIsNonZero(&t0))
edwards25519.FeMul(&r, &r, &maybeSqrtM1)
var vBytes [32]byte
edwards25519.FeToBytes(&vBytes, &v)
vInSquareRootImage := feBytesLE(&vBytes, &halfQMinus1Bytes)
edwards25519.FeCMove(&r, &r1, vInSquareRootImage)
edwards25519.FeToBytes(publicKey, &u)
edwards25519.FeToBytes(representative, &r)
return true
}
// q58 calculates out = z^((p-5)/8).
func q58(out, z *edwards25519.FieldElement) {
var t1, t2, t3 edwards25519.FieldElement
var i int
edwards25519.FeSquare(&t1, z) // 2^1
edwards25519.FeMul(&t1, &t1, z) // 2^1 + 2^0
edwards25519.FeSquare(&t1, &t1) // 2^2 + 2^1
edwards25519.FeSquare(&t2, &t1) // 2^3 + 2^2
edwards25519.FeSquare(&t2, &t2) // 2^4 + 2^3
edwards25519.FeMul(&t2, &t2, &t1) // 4,3,2,1
edwards25519.FeMul(&t1, &t2, z) // 4..0
edwards25519.FeSquare(&t2, &t1) // 5..1
for i = 1; i < 5; i++ { // 9,8,7,6,5
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t1, &t2, &t1) // 9,8,7,6,5,4,3,2,1,0
edwards25519.FeSquare(&t2, &t1) // 10..1
for i = 1; i < 10; i++ { // 19..10
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t2, &t2, &t1) // 19..0
edwards25519.FeSquare(&t3, &t2) // 20..1
for i = 1; i < 20; i++ { // 39..20
edwards25519.FeSquare(&t3, &t3)
}
edwards25519.FeMul(&t2, &t3, &t2) // 39..0
edwards25519.FeSquare(&t2, &t2) // 40..1
for i = 1; i < 10; i++ { // 49..10
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t1, &t2, &t1) // 49..0
edwards25519.FeSquare(&t2, &t1) // 50..1
for i = 1; i < 50; i++ { // 99..50
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t2, &t2, &t1) // 99..0
edwards25519.FeSquare(&t3, &t2) // 100..1
for i = 1; i < 100; i++ { // 199..100
edwards25519.FeSquare(&t3, &t3)
}
edwards25519.FeMul(&t2, &t3, &t2) // 199..0
edwards25519.FeSquare(&t2, &t2) // 200..1
for i = 1; i < 50; i++ { // 249..50
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t1, &t2, &t1) // 249..0
edwards25519.FeSquare(&t1, &t1) // 250..1
edwards25519.FeSquare(&t1, &t1) // 251..2
edwards25519.FeMul(out, &t1, z) // 251..2,0
}
// chi calculates out = z^((p-1)/2). The result is either 1, 0, or -1 depending
// on whether z is a non-zero square, zero, or a non-square.
func chi(out, z *edwards25519.FieldElement) {
var t0, t1, t2, t3 edwards25519.FieldElement
var i int
edwards25519.FeSquare(&t0, z) // 2^1
edwards25519.FeMul(&t1, &t0, z) // 2^1 + 2^0
edwards25519.FeSquare(&t0, &t1) // 2^2 + 2^1
edwards25519.FeSquare(&t2, &t0) // 2^3 + 2^2
edwards25519.FeSquare(&t2, &t2) // 4,3
edwards25519.FeMul(&t2, &t2, &t0) // 4,3,2,1
edwards25519.FeMul(&t1, &t2, z) // 4..0
edwards25519.FeSquare(&t2, &t1) // 5..1
for i = 1; i < 5; i++ { // 9,8,7,6,5
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t1, &t2, &t1) // 9,8,7,6,5,4,3,2,1,0
edwards25519.FeSquare(&t2, &t1) // 10..1
for i = 1; i < 10; i++ { // 19..10
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t2, &t2, &t1) // 19..0
edwards25519.FeSquare(&t3, &t2) // 20..1
for i = 1; i < 20; i++ { // 39..20
edwards25519.FeSquare(&t3, &t3)
}
edwards25519.FeMul(&t2, &t3, &t2) // 39..0
edwards25519.FeSquare(&t2, &t2) // 40..1
for i = 1; i < 10; i++ { // 49..10
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t1, &t2, &t1) // 49..0
edwards25519.FeSquare(&t2, &t1) // 50..1
for i = 1; i < 50; i++ { // 99..50
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t2, &t2, &t1) // 99..0
edwards25519.FeSquare(&t3, &t2) // 100..1
for i = 1; i < 100; i++ { // 199..100
edwards25519.FeSquare(&t3, &t3)
}
edwards25519.FeMul(&t2, &t3, &t2) // 199..0
edwards25519.FeSquare(&t2, &t2) // 200..1
for i = 1; i < 50; i++ { // 249..50
edwards25519.FeSquare(&t2, &t2)
}
edwards25519.FeMul(&t1, &t2, &t1) // 249..0
edwards25519.FeSquare(&t1, &t1) // 250..1
for i = 1; i < 4; i++ { // 253..4
edwards25519.FeSquare(&t1, &t1)
}
edwards25519.FeMul(out, &t1, &t0) // 253..4,2,1
}
// RepresentativeToPublicKey converts a uniform representative value for a
// curve25519 public key, as produced by ScalarBaseMult, to a curve25519 public
// key.
func RepresentativeToPublicKey(publicKey, representative *[32]byte) {
var rr2, v, e edwards25519.FieldElement
edwards25519.FeFromBytes(&rr2, representative)
edwards25519.FeSquare2(&rr2, &rr2)
rr2[0]++
edwards25519.FeInvert(&rr2, &rr2)
edwards25519.FeMul(&v, &edwards25519.A, &rr2)
edwards25519.FeNeg(&v, &v)
var v2, v3 edwards25519.FieldElement
edwards25519.FeSquare(&v2, &v)
edwards25519.FeMul(&v3, &v, &v2)
edwards25519.FeAdd(&e, &v3, &v)
edwards25519.FeMul(&v2, &v2, &edwards25519.A)
edwards25519.FeAdd(&e, &v2, &e)
chi(&e, &e)
var eBytes [32]byte
edwards25519.FeToBytes(&eBytes, &e)
// eBytes[1] is either 0 (for e = 1) or 0xff (for e = -1)
eIsMinus1 := int32(eBytes[1]) & 1
var negV edwards25519.FieldElement
edwards25519.FeNeg(&negV, &v)
edwards25519.FeCMove(&v, &negV, eIsMinus1)
edwards25519.FeZero(&v2)
edwards25519.FeCMove(&v2, &edwards25519.A, eIsMinus1)
edwards25519.FeSub(&v, &v, &v2)
edwards25519.FeToBytes(publicKey, &v)
}
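// exampleElligatorRoundTrip is an illustrative sketch, not part of upstream
// agl/ed25519. ScalarBaseMult rejects roughly half of all private keys, so a
// caller keeps drawing fresh random keys (nextKey is an assumed source of
// randomness) until it succeeds; RepresentativeToPublicKey then recovers the
// same public key from the uniform representative alone.
func exampleElligatorRoundTrip(nextKey func() [32]byte) (publicKey, representative, recovered [32]byte) {
	for {
		privateKey := nextKey()
		if ScalarBaseMult(&publicKey, &representative, &privateKey) {
			break
		}
	}
	RepresentativeToPublicKey(&recovered, &representative)
	return
}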

21
vendor/github.com/bifurcation/mint/LICENSE.md generated vendored Normal file
View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2016 Richard Barnes
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

94
vendor/github.com/bifurcation/mint/README.md generated vendored Normal file
View File

@ -0,0 +1,94 @@
![A lock with a mint leaf](https://ipv.sx/mint/mint.svg)
mint - A Minimal TLS 1.3 stack
==============================
[![Build Status](https://circleci.com/gh/bifurcation/mint.svg)](https://circleci.com/gh/bifurcation/mint)
This project is primarily a learning effort for me to understand the [TLS
1.3](http://tlswg.github.io/tls13-spec/) protocol. The goal is to arrive at a
pretty complete implementation of TLS 1.3, with minimal, elegant code that
demonstrates how things work. Testing is a priority to ensure correctness, but
otherwise, the quality of the software engineering might not be at a level where
it makes sense to integrate this with other libraries. Backward compatibility
is not an objective.
We borrow liberally from the [Go TLS
library](https://golang.org/pkg/crypto/tls/), especially where TLS 1.3 aligns
with earlier TLS versions. However, unnecessary parts will be ruthlessly cut
off.
## DTLS Support
Mint has partial support for DTLS, but that support is not yet complete
and may still contain serious defects.
## Quickstart
Installation is the same as for any other Go package:
```
go get github.com/bifurcation/mint
```
The API is pretty much the same as for the TLS module, with `Dial` and `Listen`
methods wrapping the underlying socket APIs.
```
conn, err := mint.Dial("tcp", "localhost:4430", &mint.Config{...})
...
listener, err := mint.Listen("tcp", "localhost:4430", &mint.Config{...})
```
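As a rough, non-authoritative sketch (the address, `ServerName`, and error handling are placeholders rather than upstream documentation), a complete client program built on these calls might look like this, since a mint `Conn` behaves like a `net.Conn`:
```
package main

import "github.com/bifurcation/mint"

func main() {
	// Dial performs the TLS 1.3 handshake as part of connection setup.
	conn, err := mint.Dial("tcp", "localhost:4430", &mint.Config{ServerName: "localhost"})
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Application data flows through the usual net.Conn methods.
	if _, err := conn.Write([]byte("hello")); err != nil {
		panic(err)
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	_ = buf[:n]
}
```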
Documentation is available on
[godoc.org](https://godoc.org/github.com/bifurcation/mint)
## Interoperability testing
The `mint-client` and `mint-server` executables are included to make it easy to
do basic interoperability tests with other TLS 1.3 implementations. The steps
for testing against NSS are as follows.
```
# Install mint
go get github.com/bifurcation/mint
# Environment for NSS (you'll probably want a new directory)
NSS_ROOT=<whereever you want to put NSS>
mkdir $NSS_ROOT
cd $NSS_ROOT
export USE_64=1
export ENABLE_TLS_1_3=1
export HOST=localhost
export DOMSUF=localhost
# Build NSS
hg clone https://hg.mozilla.org/projects/nss
hg clone https://hg.mozilla.org/projects/nspr
cd nss
make nss_build_all
export PLATFORM=`cat $NSS_ROOT/dist/latest`
export DYLD_LIBRARY_PATH=$NSS_ROOT/dist/$PLATFORM/lib
export LD_LIBRARY_PATH=$NSS_ROOT/dist/$PLATFORM/lib
# Run NSS tests (this creates data for the server to use)
cd tests/ssl_gtests
./ssl_gtests.sh
# Test with client=mint server=NSS
cd $NSS_ROOT
./dist/$PLATFORM/bin/selfserv -d tests_results/security/$HOST.1/ssl_gtests/ -n rsa -p 4430
# if you get `NSS_Init failed.`, check the path above, particularly around $HOST
# ...
go run $GOPATH/src/github.com/bifurcation/mint/bin/mint-client/main.go
# Test with client=NSS server=mint
go run $GOPATH/src/github.com/bifurcation/mint/bin/mint-server/main.go
# ...
cd $NSS_ROOT
dist/$PLATFORM/bin/tstclnt -d tests_results/security/$HOST/ssl_gtests/ -V tls1.3:tls1.3 -h 127.0.0.1 -p 4430 -o
```

101
vendor/github.com/bifurcation/mint/alert.go generated vendored Normal file
View File

@ -0,0 +1,101 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package mint
import "strconv"
type Alert uint8
const (
// alert level
AlertLevelWarning = 1
AlertLevelError = 2
)
const (
AlertCloseNotify Alert = 0
AlertUnexpectedMessage Alert = 10
AlertBadRecordMAC Alert = 20
AlertDecryptionFailed Alert = 21
AlertRecordOverflow Alert = 22
AlertDecompressionFailure Alert = 30
AlertHandshakeFailure Alert = 40
AlertBadCertificate Alert = 42
AlertUnsupportedCertificate Alert = 43
AlertCertificateRevoked Alert = 44
AlertCertificateExpired Alert = 45
AlertCertificateUnknown Alert = 46
AlertIllegalParameter Alert = 47
AlertUnknownCA Alert = 48
AlertAccessDenied Alert = 49
AlertDecodeError Alert = 50
AlertDecryptError Alert = 51
AlertProtocolVersion Alert = 70
AlertInsufficientSecurity Alert = 71
AlertInternalError Alert = 80
AlertInappropriateFallback Alert = 86
AlertUserCanceled Alert = 90
AlertNoRenegotiation Alert = 100
AlertMissingExtension Alert = 109
AlertUnsupportedExtension Alert = 110
AlertCertificateUnobtainable Alert = 111
AlertUnrecognizedName Alert = 112
AlertBadCertificateStatsResponse Alert = 113
AlertBadCertificateHashValue Alert = 114
AlertUnknownPSKIdentity Alert = 115
AlertNoApplicationProtocol Alert = 120
AlertStatelessRetry Alert = 253
AlertWouldBlock Alert = 254
AlertNoAlert Alert = 255
)
var alertText = map[Alert]string{
AlertCloseNotify: "close notify",
AlertUnexpectedMessage: "unexpected message",
AlertBadRecordMAC: "bad record MAC",
AlertDecryptionFailed: "decryption failed",
AlertRecordOverflow: "record overflow",
AlertDecompressionFailure: "decompression failure",
AlertHandshakeFailure: "handshake failure",
AlertBadCertificate: "bad certificate",
AlertUnsupportedCertificate: "unsupported certificate",
AlertCertificateRevoked: "revoked certificate",
AlertCertificateExpired: "expired certificate",
AlertCertificateUnknown: "unknown certificate",
AlertIllegalParameter: "illegal parameter",
AlertUnknownCA: "unknown certificate authority",
AlertAccessDenied: "access denied",
AlertDecodeError: "error decoding message",
AlertDecryptError: "error decrypting message",
AlertProtocolVersion: "protocol version not supported",
AlertInsufficientSecurity: "insufficient security level",
AlertInternalError: "internal error",
AlertInappropriateFallback: "inappropriate fallback",
AlertUserCanceled: "user canceled",
AlertMissingExtension: "missing extension",
AlertUnsupportedExtension: "unsupported extension",
AlertCertificateUnobtainable: "certificate unobtainable",
AlertUnrecognizedName: "unrecognized name",
AlertBadCertificateStatsResponse: "bad certificate status response",
AlertBadCertificateHashValue: "bad certificate hash value",
AlertUnknownPSKIdentity: "unknown PSK identity",
AlertNoApplicationProtocol: "no application protocol",
AlertNoRenegotiation: "no renegotiation",
AlertStatelessRetry: "stateless retry",
AlertWouldBlock: "would have blocked",
AlertNoAlert: "no alert",
}
func (e Alert) String() string {
s, ok := alertText[e]
if ok {
return s
}
return "alert(" + strconv.Itoa(int(e)) + ")"
}
func (e Alert) Error() string {
return e.String()
}

File diff suppressed because it is too large Load Diff

266
vendor/github.com/bifurcation/mint/common.go generated vendored Normal file
View File

@ -0,0 +1,266 @@
package mint
import (
"fmt"
"strconv"
)
const (
tls13Version uint16 = 0x0304
tls12Version uint16 = 0x0303
tls10Version uint16 = 0x0301
dtls12WireVersion uint16 = 0xfefd
)
var (
// Flags for some minor compat issues
allowWrongVersionNumber = true
allowPKCS1 = true
)
// enum {...} ContentType;
type RecordType byte
const (
RecordTypeAlert RecordType = 21
RecordTypeHandshake RecordType = 22
RecordTypeApplicationData RecordType = 23
RecordTypeAck RecordType = 25
)
// enum {...} HandshakeType;
type HandshakeType byte
const (
// Omitted: *_RESERVED
HandshakeTypeClientHello HandshakeType = 1
HandshakeTypeServerHello HandshakeType = 2
HandshakeTypeNewSessionTicket HandshakeType = 4
HandshakeTypeEndOfEarlyData HandshakeType = 5
HandshakeTypeHelloRetryRequest HandshakeType = 6
HandshakeTypeEncryptedExtensions HandshakeType = 8
HandshakeTypeCertificate HandshakeType = 11
HandshakeTypeCertificateRequest HandshakeType = 13
HandshakeTypeCertificateVerify HandshakeType = 15
HandshakeTypeServerConfiguration HandshakeType = 17
HandshakeTypeFinished HandshakeType = 20
HandshakeTypeKeyUpdate HandshakeType = 24
HandshakeTypeMessageHash HandshakeType = 254
)
var hrrRandomSentinel = [32]byte{
0xcf, 0x21, 0xad, 0x74, 0xe5, 0x9a, 0x61, 0x11,
0xbe, 0x1d, 0x8c, 0x02, 0x1e, 0x65, 0xb8, 0x91,
0xc2, 0xa2, 0x11, 0x16, 0x7a, 0xbb, 0x8c, 0x5e,
0x07, 0x9e, 0x09, 0xe2, 0xc8, 0xa8, 0x33, 0x9c,
}
// uint8 CipherSuite[2];
type CipherSuite uint16
const (
// XXX: Actually TLS_NULL_WITH_NULL_NULL, but we need a way to label the zero
// value for this type so that we can detect when a field is set.
CIPHER_SUITE_UNKNOWN CipherSuite = 0x0000
TLS_AES_128_GCM_SHA256 CipherSuite = 0x1301
TLS_AES_256_GCM_SHA384 CipherSuite = 0x1302
TLS_CHACHA20_POLY1305_SHA256 CipherSuite = 0x1303
TLS_AES_128_CCM_SHA256 CipherSuite = 0x1304
TLS_AES_256_CCM_8_SHA256 CipherSuite = 0x1305
)
func (c CipherSuite) String() string {
switch c {
case CIPHER_SUITE_UNKNOWN:
return "unknown"
case TLS_AES_128_GCM_SHA256:
return "TLS_AES_128_GCM_SHA256"
case TLS_AES_256_GCM_SHA384:
return "TLS_AES_256_GCM_SHA384"
case TLS_CHACHA20_POLY1305_SHA256:
return "TLS_CHACHA20_POLY1305_SHA256"
case TLS_AES_128_CCM_SHA256:
return "TLS_AES_128_CCM_SHA256"
case TLS_AES_256_CCM_8_SHA256:
return "TLS_AES_256_CCM_8_SHA256"
}
// cannot use %x here, since it calls String(), leading to infinite recursion
return fmt.Sprintf("invalid CipherSuite value: 0x%s", strconv.FormatUint(uint64(c), 16))
}
// enum {...} SignatureScheme
type SignatureScheme uint16
const (
// RSASSA-PKCS1-v1_5 algorithms
RSA_PKCS1_SHA1 SignatureScheme = 0x0201
RSA_PKCS1_SHA256 SignatureScheme = 0x0401
RSA_PKCS1_SHA384 SignatureScheme = 0x0501
RSA_PKCS1_SHA512 SignatureScheme = 0x0601
// ECDSA algorithms
ECDSA_P256_SHA256 SignatureScheme = 0x0403
ECDSA_P384_SHA384 SignatureScheme = 0x0503
ECDSA_P521_SHA512 SignatureScheme = 0x0603
// RSASSA-PSS algorithms
RSA_PSS_SHA256 SignatureScheme = 0x0804
RSA_PSS_SHA384 SignatureScheme = 0x0805
RSA_PSS_SHA512 SignatureScheme = 0x0806
// EdDSA algorithms
Ed25519 SignatureScheme = 0x0807
Ed448 SignatureScheme = 0x0808
)
// enum {...} ExtensionType
type ExtensionType uint16
const (
ExtensionTypeServerName ExtensionType = 0
ExtensionTypeSupportedGroups ExtensionType = 10
ExtensionTypeSignatureAlgorithms ExtensionType = 13
ExtensionTypeALPN ExtensionType = 16
ExtensionTypeKeyShare ExtensionType = 51
ExtensionTypePreSharedKey ExtensionType = 41
ExtensionTypeEarlyData ExtensionType = 42
ExtensionTypeSupportedVersions ExtensionType = 43
ExtensionTypeCookie ExtensionType = 44
ExtensionTypePSKKeyExchangeModes ExtensionType = 45
ExtensionTypeTicketEarlyDataInfo ExtensionType = 46
)
// enum {...} NamedGroup
type NamedGroup uint16
const (
// Elliptic Curve Groups.
P256 NamedGroup = 23
P384 NamedGroup = 24
P521 NamedGroup = 25
// ECDH functions.
X25519 NamedGroup = 29
X448 NamedGroup = 30
// Finite field groups.
FFDHE2048 NamedGroup = 256
FFDHE3072 NamedGroup = 257
FFDHE4096 NamedGroup = 258
FFDHE6144 NamedGroup = 259
FFDHE8192 NamedGroup = 260
)
// enum {...} PskKeyExchangeMode;
type PSKKeyExchangeMode uint8
const (
PSKModeKE PSKKeyExchangeMode = 0
PSKModeDHEKE PSKKeyExchangeMode = 1
)
// enum {
// update_not_requested(0), update_requested(1), (255)
// } KeyUpdateRequest;
type KeyUpdateRequest uint8
const (
KeyUpdateNotRequested KeyUpdateRequest = 0
KeyUpdateRequested KeyUpdateRequest = 1
)
type State uint8
const (
StateInit = 0
// states valid for the client
StateClientStart State = iota
StateClientWaitSH
StateClientWaitEE
StateClientWaitCert
StateClientWaitCV
StateClientWaitFinished
StateClientWaitCertCR
StateClientConnected
// states valid for the server
StateServerStart State = iota
StateServerRecvdCH
StateServerNegotiated
StateServerReadPastEarlyData
StateServerWaitEOED
StateServerWaitFlight2
StateServerWaitCert
StateServerWaitCV
StateServerWaitFinished
StateServerConnected
)
func (s State) String() string {
switch s {
case StateClientStart:
return "Client START"
case StateClientWaitSH:
return "Client WAIT_SH"
case StateClientWaitEE:
return "Client WAIT_EE"
case StateClientWaitCert:
return "Client WAIT_CERT"
case StateClientWaitCV:
return "Client WAIT_CV"
case StateClientWaitFinished:
return "Client WAIT_FINISHED"
case StateClientWaitCertCR:
return "Client WAIT_CERT_CR"
case StateClientConnected:
return "Client CONNECTED"
case StateServerStart:
return "Server START"
case StateServerRecvdCH:
return "Server RECVD_CH"
case StateServerNegotiated:
return "Server NEGOTIATED"
case StateServerReadPastEarlyData:
return "Server READ_PAST_EARLY_DATA"
case StateServerWaitEOED:
return "Server WAIT_EOED"
case StateServerWaitFlight2:
return "Server WAIT_FLIGHT2"
case StateServerWaitCert:
return "Server WAIT_CERT"
case StateServerWaitCV:
return "Server WAIT_CV"
case StateServerWaitFinished:
return "Server WAIT_FINISHED"
case StateServerConnected:
return "Server CONNECTED"
default:
return fmt.Sprintf("unknown state: %d", s)
}
}
// Epochs for DTLS (also used for key phase labelling)
type Epoch uint16
const (
EpochClear Epoch = 0
EpochEarlyData Epoch = 1
EpochHandshakeData Epoch = 2
EpochApplicationData Epoch = 3
EpochUpdate Epoch = 4
)
func (e Epoch) label() string {
switch e {
case EpochClear:
return "clear"
case EpochEarlyData:
return "early data"
case EpochHandshakeData:
return "handshake"
case EpochApplicationData:
return "application data"
}
return "Application data (updated)"
}
func assert(b bool) {
if !b {
panic("Assertion failed")
}
}

921
vendor/github.com/bifurcation/mint/conn.go generated vendored Normal file
View File

@ -0,0 +1,921 @@
package mint
import (
"crypto"
"crypto/x509"
"encoding/hex"
"errors"
"fmt"
"io"
"net"
"reflect"
"sync"
"time"
)
type Certificate struct {
Chain []*x509.Certificate
PrivateKey crypto.Signer
}
type PreSharedKey struct {
CipherSuite CipherSuite
IsResumption bool
Identity []byte
Key []byte
NextProto string
ReceivedAt time.Time
ExpiresAt time.Time
TicketAgeAdd uint32
}
type PreSharedKeyCache interface {
Get(string) (PreSharedKey, bool)
Put(string, PreSharedKey)
Size() int
}
// A CookieHandler can be used to give the application more fine-grained control over Cookies.
// Generate receives the Conn as an argument, so the CookieHandler can decide on a per-connection basis whether to send the cookie, and it can offload state to the client by encoding that state into the cookie.
// When the client echoes the Cookie, Validate is called. The application can then recover the state from the cookie.
type CookieHandler interface {
// Generate returns a byte string that is sent as part of a cookie to the client in the HelloRetryRequest.
// If Generate returns nil, mint will not send a HelloRetryRequest.
Generate(*Conn) ([]byte, error)
// Validate is called when receiving a ClientHello containing a Cookie.
// If validation fails, the handshake is aborted.
Validate(*Conn, []byte) bool
}
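// staticCookieHandler is an illustrative sketch of the CookieHandler contract
// above; it is not part of upstream mint. Generate supplies the application
// bytes folded into the HelloRetryRequest cookie, and Validate checks what the
// client echoes back. A real handler would bind the cookie to connection state
// and compare it in constant time.
type staticCookieHandler struct {
	token []byte
}

var _ CookieHandler = staticCookieHandler{}

func (h staticCookieHandler) Generate(*Conn) ([]byte, error) {
	return h.token, nil
}

func (h staticCookieHandler) Validate(_ *Conn, data []byte) bool {
	if len(data) != len(h.token) {
		return false
	}
	for i := range data {
		if data[i] != h.token[i] {
			return false
		}
	}
	return true
}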
type PSKMapCache map[string]PreSharedKey
func (cache PSKMapCache) Get(key string) (psk PreSharedKey, ok bool) {
psk, ok = cache[key]
return
}
func (cache *PSKMapCache) Put(key string, psk PreSharedKey) {
(*cache)[key] = psk
}
func (cache PSKMapCache) Size() int {
return len(cache)
}
// Config is the struct used to pass configuration settings to a TLS client or
// server instance. The settings for client and server are pretty different,
// but we just throw them all in here.
type Config struct {
// Client fields
ServerName string
// Server fields
SendSessionTickets bool
TicketLifetime uint32
TicketLen int
EarlyDataLifetime uint32
AllowEarlyData bool
// Require the client to echo a cookie.
RequireCookie bool
// A CookieHandler can be used to set and validate a cookie.
// The cookie returned by the CookieHandler will be part of the cookie sent on the wire, and encoded using the CookieProtector.
// If no CookieHandler is set, mint will always send a cookie.
// The CookieHandler can be used to decide on a per-connection basis, if a cookie should be sent.
CookieHandler CookieHandler
// The CookieProtector is used to encrypt / decrypt cookies.
// It should make sure that the Cookie cannot be read and tampered with by the client.
// If non-blocking mode is used, and cookies are required, this field has to be set.
// In blocking mode, a default cookie protector is used if this field is not set.
CookieProtector CookieProtector
// The ExtensionHandler is used to add custom extensions.
ExtensionHandler AppExtensionHandler
RequireClientAuth bool
// Time returns the current time as the number of seconds since the epoch.
// If Time is nil, TLS uses time.Now.
Time func() time.Time
// RootCAs defines the set of root certificate authorities
// that clients use when verifying server certificates.
// If RootCAs is nil, TLS uses the host's root CA set.
RootCAs *x509.CertPool
// InsecureSkipVerify controls whether a client verifies the
// server's certificate chain and host name.
// If InsecureSkipVerify is true, TLS accepts any certificate
// presented by the server and any host name in that certificate.
// In this mode, TLS is susceptible to man-in-the-middle attacks.
// This should be used only for testing.
InsecureSkipVerify bool
// Shared fields
Certificates []*Certificate
// VerifyPeerCertificate, if not nil, is called after normal
// certificate verification by either a TLS client or server. It
// receives the raw ASN.1 certificates provided by the peer and also
// any verified chains that normal processing found. If it returns a
// non-nil error, the handshake is aborted and that error results.
//
// If normal verification fails then the handshake will abort before
// considering this callback. If normal verification is disabled by
// setting InsecureSkipVerify then this callback will be considered but
// the verifiedChains argument will always be nil.
VerifyPeerCertificate func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error
CipherSuites []CipherSuite
Groups []NamedGroup
SignatureSchemes []SignatureScheme
NextProtos []string
PSKs PreSharedKeyCache
PSKModes []PSKKeyExchangeMode
NonBlocking bool
UseDTLS bool
// The same config object can be shared among different connections, so it
// needs its own mutex
mutex sync.RWMutex
}
// Clone returns a shallow clone of c. It is safe to clone a Config that is
// being used concurrently by a TLS client or server.
func (c *Config) Clone() *Config {
c.mutex.Lock()
defer c.mutex.Unlock()
return &Config{
ServerName: c.ServerName,
SendSessionTickets: c.SendSessionTickets,
TicketLifetime: c.TicketLifetime,
TicketLen: c.TicketLen,
EarlyDataLifetime: c.EarlyDataLifetime,
AllowEarlyData: c.AllowEarlyData,
RequireCookie: c.RequireCookie,
CookieHandler: c.CookieHandler,
CookieProtector: c.CookieProtector,
ExtensionHandler: c.ExtensionHandler,
RequireClientAuth: c.RequireClientAuth,
Time: c.Time,
RootCAs: c.RootCAs,
InsecureSkipVerify: c.InsecureSkipVerify,
Certificates: c.Certificates,
VerifyPeerCertificate: c.VerifyPeerCertificate,
CipherSuites: c.CipherSuites,
Groups: c.Groups,
SignatureSchemes: c.SignatureSchemes,
NextProtos: c.NextProtos,
PSKs: c.PSKs,
PSKModes: c.PSKModes,
NonBlocking: c.NonBlocking,
UseDTLS: c.UseDTLS,
}
}
func (c *Config) Init(isClient bool) error {
c.mutex.Lock()
defer c.mutex.Unlock()
// Set defaults
if len(c.CipherSuites) == 0 {
c.CipherSuites = defaultSupportedCipherSuites
}
if len(c.Groups) == 0 {
c.Groups = defaultSupportedGroups
}
if len(c.SignatureSchemes) == 0 {
c.SignatureSchemes = defaultSignatureSchemes
}
if c.TicketLen == 0 {
c.TicketLen = defaultTicketLen
}
if !reflect.ValueOf(c.PSKs).IsValid() {
c.PSKs = &PSKMapCache{}
}
if len(c.PSKModes) == 0 {
c.PSKModes = defaultPSKModes
}
return nil
}
func (c *Config) ValidForServer() bool {
return (reflect.ValueOf(c.PSKs).IsValid() && c.PSKs.Size() > 0) ||
(len(c.Certificates) > 0 &&
len(c.Certificates[0].Chain) > 0 &&
c.Certificates[0].PrivateKey != nil)
}
func (c *Config) ValidForClient() bool {
return len(c.ServerName) > 0
}
func (c *Config) time() time.Time {
t := c.Time
if t == nil {
t = time.Now
}
return t()
}
var (
defaultSupportedCipherSuites = []CipherSuite{
TLS_AES_128_GCM_SHA256,
TLS_AES_256_GCM_SHA384,
}
defaultSupportedGroups = []NamedGroup{
P256,
P384,
FFDHE2048,
X25519,
}
defaultSignatureSchemes = []SignatureScheme{
RSA_PSS_SHA256,
RSA_PSS_SHA384,
RSA_PSS_SHA512,
ECDSA_P256_SHA256,
ECDSA_P384_SHA384,
ECDSA_P521_SHA512,
}
defaultTicketLen = 16
defaultPSKModes = []PSKKeyExchangeMode{
PSKModeKE,
PSKModeDHEKE,
}
)
type ConnectionState struct {
HandshakeState State
CipherSuite CipherSuiteParams // cipher suite in use (TLS_RSA_WITH_RC4_128_SHA, ...)
PeerCertificates []*x509.Certificate // certificate chain presented by remote peer
VerifiedChains [][]*x509.Certificate // verified chains built from PeerCertificates
NextProto string // Selected ALPN proto
UsingPSK bool // Are we using PSK.
UsingEarlyData bool // Did we negotiate 0-RTT.
}
// Conn implements the net.Conn interface, as with "crypto/tls"
// * Read, Write, and Close are provided locally
// * LocalAddr, RemoteAddr, and Set*Deadline are forwarded to the inner Conn
type Conn struct {
config *Config
conn net.Conn
isClient bool
state stateConnected
hState HandshakeState
handshakeMutex sync.Mutex
handshakeAlert Alert
handshakeComplete bool
readBuffer []byte
in, out *RecordLayer
hsCtx *HandshakeContext
}
func NewConn(conn net.Conn, config *Config, isClient bool) *Conn {
c := &Conn{conn: conn, config: config, isClient: isClient, hsCtx: &HandshakeContext{}}
if !config.UseDTLS {
c.in = NewRecordLayerTLS(c.conn, directionRead)
c.out = NewRecordLayerTLS(c.conn, directionWrite)
c.hsCtx.hIn = NewHandshakeLayerTLS(c.hsCtx, c.in)
c.hsCtx.hOut = NewHandshakeLayerTLS(c.hsCtx, c.out)
} else {
c.in = NewRecordLayerDTLS(c.conn, directionRead)
c.out = NewRecordLayerDTLS(c.conn, directionWrite)
c.hsCtx.hIn = NewHandshakeLayerDTLS(c.hsCtx, c.in)
c.hsCtx.hOut = NewHandshakeLayerDTLS(c.hsCtx, c.out)
c.hsCtx.timeoutMS = initialTimeout
c.hsCtx.timers = newTimerSet()
c.hsCtx.waitingNextFlight = true
}
c.in.label = c.label()
c.out.label = c.label()
c.hsCtx.hIn.nonblocking = c.config.NonBlocking
return c
}
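// exampleServeConn is an illustrative sketch, not part of upstream mint: a
// server wraps an accepted net.Conn, drives the handshake explicitly, and then
// reads application data. cfg is assumed to already carry a certificate chain
// (or PSKs) that makes it valid for server use.
func exampleServeConn(raw net.Conn, cfg *Config) error {
	c := NewConn(raw, cfg, false)
	if alert := c.Handshake(); alert != AlertNoAlert {
		return alert
	}
	buf := make([]byte, 1024)
	_, err := c.Read(buf)
	return err
}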
// consumeRecord reads a single record and dispatches it based on its content type.
func (c *Conn) consumeRecord() error {
pt, err := c.in.ReadRecord()
if pt == nil {
logf(logTypeIO, "extendBuffer returns error %v", err)
return err
}
switch pt.contentType {
case RecordTypeHandshake:
logf(logTypeHandshake, "Received post-handshake message")
// We do not support fragmentation of post-handshake handshake messages.
// TODO: Factor this more elegantly; coalesce with handshakeLayer.ReadMessage()
start := 0
headerLen := handshakeHeaderLenTLS
if c.config.UseDTLS {
headerLen = handshakeHeaderLenDTLS
}
for start < len(pt.fragment) {
if len(pt.fragment[start:]) < headerLen {
return fmt.Errorf("Post-handshake handshake message too short for header")
}
hm := &HandshakeMessage{}
hm.msgType = HandshakeType(pt.fragment[start])
hmLen := (int(pt.fragment[start+1]) << 16) + (int(pt.fragment[start+2]) << 8) + int(pt.fragment[start+3])
if len(pt.fragment[start+headerLen:]) < hmLen {
return fmt.Errorf("Post-handshake handshake message too short for body")
}
hm.body = pt.fragment[start+headerLen : start+headerLen+hmLen]
// XXX: If we want to support more advanced cases, e.g., post-handshake
// authentication, we'll need to allow transitions other than
// Connected -> Connected
state, actions, alert := c.state.ProcessMessage(hm)
if alert != AlertNoAlert {
logf(logTypeHandshake, "Error in state transition: %v", alert)
c.sendAlert(alert)
return io.EOF
}
for _, action := range actions {
alert = c.takeAction(action)
if alert != AlertNoAlert {
logf(logTypeHandshake, "Error during handshake actions: %v", alert)
c.sendAlert(alert)
return io.EOF
}
}
var connected bool
c.state, connected = state.(stateConnected)
if !connected {
logf(logTypeHandshake, "Disconnected after state transition: %v", alert)
c.sendAlert(alert)
return io.EOF
}
start += headerLen + hmLen
}
case RecordTypeAlert:
logf(logTypeIO, "extended buffer (for alert): [%d] %x", len(c.readBuffer), c.readBuffer)
if len(pt.fragment) != 2 {
c.sendAlert(AlertUnexpectedMessage)
return io.EOF
}
if Alert(pt.fragment[1]) == AlertCloseNotify {
return io.EOF
}
switch pt.fragment[0] {
case AlertLevelWarning:
// drop on the floor
case AlertLevelError:
return Alert(pt.fragment[1])
default:
c.sendAlert(AlertUnexpectedMessage)
return io.EOF
}
case RecordTypeAck:
if !c.hsCtx.hIn.datagram {
logf(logTypeHandshake, "Received ACK in TLS mode")
return AlertUnexpectedMessage
}
return c.hsCtx.processAck(pt.fragment)
case RecordTypeApplicationData:
c.readBuffer = append(c.readBuffer, pt.fragment...)
logf(logTypeIO, "extended buffer: [%d] %x", len(c.readBuffer), c.readBuffer)
}
return err
}
func readPartial(in *[]byte, buffer []byte) int {
logf(logTypeIO, "conn.Read input buffer now has len %d", len((*in)))
read := copy(buffer, *in)
*in = (*in)[read:]
logf(logTypeVerbose, "Returning %v", string(buffer))
return read
}
// Read application data up to the size of buffer. Handshake and alert records
// are consumed by the Conn object directly.
func (c *Conn) Read(buffer []byte) (int, error) {
if _, connected := c.hState.(stateConnected); !connected {
// Clients can't call Read prior to handshake completion.
if c.isClient {
return 0, errors.New("Read called before the handshake completed")
}
// Neither can servers that don't allow early data.
if !c.config.AllowEarlyData {
return 0, errors.New("Read called before the handshake completed")
}
// If there's no early data, then return WouldBlock
if len(c.hsCtx.earlyData) == 0 {
return 0, AlertWouldBlock
}
return readPartial(&c.hsCtx.earlyData, buffer), nil
}
// The handshake is now connected.
logf(logTypeHandshake, "conn.Read with buffer = %d", len(buffer))
if alert := c.Handshake(); alert != AlertNoAlert {
return 0, alert
}
if len(buffer) == 0 {
return 0, nil
}
// Run our timers.
if c.config.UseDTLS {
if err := c.hsCtx.timers.check(time.Now()); err != nil {
return 0, AlertInternalError
}
}
// Lock the input channel
c.in.Lock()
defer c.in.Unlock()
for len(c.readBuffer) == 0 {
err := c.consumeRecord()
// err can be nil if consumeRecord processed a non app-data
// record.
if err != nil {
if c.config.NonBlocking || err != AlertWouldBlock {
logf(logTypeIO, "conn.Read returns err=%v", err)
return 0, err
}
}
}
return readPartial(&c.readBuffer, buffer), nil
}
// Write application data
func (c *Conn) Write(buffer []byte) (int, error) {
// Lock the output channel
c.out.Lock()
defer c.out.Unlock()
if !c.Writable() {
return 0, errors.New("Write called before the handshake completed (and early data not in use)")
}
// Send full-size fragments
var start int
sent := 0
for start = 0; len(buffer)-start >= maxFragmentLen; start += maxFragmentLen {
err := c.out.WriteRecord(&TLSPlaintext{
contentType: RecordTypeApplicationData,
fragment: buffer[start : start+maxFragmentLen],
})
if err != nil {
return sent, err
}
sent += maxFragmentLen
}
// Send a final partial fragment if necessary
if start < len(buffer) {
err := c.out.WriteRecord(&TLSPlaintext{
contentType: RecordTypeApplicationData,
fragment: buffer[start:],
})
if err != nil {
return sent, err
}
sent += len(buffer[start:])
}
return sent, nil
}
// sendAlert sends a TLS alert message.
// c.out.Mutex <= L.
func (c *Conn) sendAlert(err Alert) error {
c.handshakeMutex.Lock()
defer c.handshakeMutex.Unlock()
var level int
switch err {
case AlertNoRenegotiation, AlertCloseNotify:
level = AlertLevelWarning
default:
level = AlertLevelError
}
buf := []byte{byte(err), byte(level)}
c.out.WriteRecord(&TLSPlaintext{
contentType: RecordTypeAlert,
fragment: buf,
})
// close_notify and end_of_early_data are not actually errors
if level == AlertLevelWarning {
return &net.OpError{Op: "local error", Err: err}
}
return c.Close()
}
// Close closes the connection.
func (c *Conn) Close() error {
// XXX crypto/tls has an interlock with Write here. Do we need that?
return c.conn.Close()
}
// LocalAddr returns the local network address.
func (c *Conn) LocalAddr() net.Addr {
return c.conn.LocalAddr()
}
// RemoteAddr returns the remote network address.
func (c *Conn) RemoteAddr() net.Addr {
return c.conn.RemoteAddr()
}
// SetDeadline sets the read and write deadlines associated with the connection.
// A zero value for t means Read and Write will not time out.
// After a Write has timed out, the TLS state is corrupt and all future writes will return the same error.
func (c *Conn) SetDeadline(t time.Time) error {
return c.conn.SetDeadline(t)
}
// SetReadDeadline sets the read deadline on the underlying connection.
// A zero value for t means Read will not time out.
func (c *Conn) SetReadDeadline(t time.Time) error {
return c.conn.SetReadDeadline(t)
}
// SetWriteDeadline sets the write deadline on the underlying connection.
// A zero value for t means Write will not time out.
// After a Write has timed out, the TLS state is corrupt and all future writes will return the same error.
func (c *Conn) SetWriteDeadline(t time.Time) error {
return c.conn.SetWriteDeadline(t)
}
func (c *Conn) takeAction(actionGeneric HandshakeAction) Alert {
label := "[server]"
if c.isClient {
label = "[client]"
}
switch action := actionGeneric.(type) {
case QueueHandshakeMessage:
logf(logTypeHandshake, "%s queuing handshake message type=%v", label, action.Message.msgType)
err := c.hsCtx.hOut.QueueMessage(action.Message)
if err != nil {
logf(logTypeHandshake, "%s Error writing handshake message: %v", label, err)
return AlertInternalError
}
case SendQueuedHandshake:
_, err := c.hsCtx.hOut.SendQueuedMessages()
if err != nil {
logf(logTypeHandshake, "%s Error writing handshake message: %v", label, err)
return AlertInternalError
}
if c.config.UseDTLS {
c.hsCtx.timers.start(retransmitTimerLabel,
c.hsCtx.handshakeRetransmit,
c.hsCtx.timeoutMS)
}
case RekeyIn:
logf(logTypeHandshake, "%s Rekeying in to %s: %+v", label, action.epoch.label(), action.KeySet)
// Check that we don't have an input data in the handshake frame parser.
if len(c.hsCtx.hIn.frame.remainder) > 0 {
logf(logTypeHandshake, "%s Rekey with data still in handshake buffers", label)
return AlertDecodeError
}
err := c.in.Rekey(action.epoch, action.KeySet.cipher, action.KeySet.key, action.KeySet.iv)
if err != nil {
logf(logTypeHandshake, "%s Unable to rekey inbound: %v", label, err)
return AlertInternalError
}
case RekeyOut:
logf(logTypeHandshake, "%s Rekeying out to %s: %+v", label, action.epoch.label(), action.KeySet)
err := c.out.Rekey(action.epoch, action.KeySet.cipher, action.KeySet.key, action.KeySet.iv)
if err != nil {
logf(logTypeHandshake, "%s Unable to rekey outbound: %v", label, err)
return AlertInternalError
}
case ResetOut:
logf(logTypeHandshake, "%s Rekeying out to %s seq=%v", label, EpochClear, action.seq)
c.out.ResetClear(action.seq)
case StorePSK:
logf(logTypeHandshake, "%s Storing new session ticket with identity [%x]", label, action.PSK.Identity)
if c.isClient {
// Clients look up PSKs based on server name
c.config.PSKs.Put(c.config.ServerName, action.PSK)
} else {
// Servers look them up based on the identity in the extension
c.config.PSKs.Put(hex.EncodeToString(action.PSK.Identity), action.PSK)
}
default:
logf(logTypeHandshake, "%s Unknown action type", label)
assert(false)
return AlertInternalError
}
return AlertNoAlert
}
func (c *Conn) HandshakeSetup() Alert {
var state HandshakeState
var actions []HandshakeAction
var alert Alert
if err := c.config.Init(c.isClient); err != nil {
logf(logTypeHandshake, "Error initializing config: %v", err)
return AlertInternalError
}
opts := ConnectionOptions{
ServerName: c.config.ServerName,
NextProtos: c.config.NextProtos,
}
if c.isClient {
state, actions, alert = clientStateStart{Config: c.config, Opts: opts, hsCtx: c.hsCtx}.Next(nil)
if alert != AlertNoAlert {
logf(logTypeHandshake, "Error initializing client state: %v", alert)
return alert
}
for _, action := range actions {
alert = c.takeAction(action)
if alert != AlertNoAlert {
logf(logTypeHandshake, "Error during handshake actions: %v", alert)
return alert
}
}
} else {
if c.config.RequireCookie && c.config.CookieProtector == nil {
logf(logTypeHandshake, "RequireCookie set, but no CookieProtector provided. Using default cookie protector. Stateless Retry not possible.")
if c.config.NonBlocking {
logf(logTypeHandshake, "Not possible in non-blocking mode.")
return AlertInternalError
}
var err error
c.config.CookieProtector, err = NewDefaultCookieProtector()
if err != nil {
logf(logTypeHandshake, "Error initializing cookie source: %v", alert)
return AlertInternalError
}
}
state = serverStateStart{Config: c.config, conn: c, hsCtx: c.hsCtx}
}
c.hState = state
return AlertNoAlert
}
type handshakeMessageReader interface {
ReadMessage() (*HandshakeMessage, Alert)
}
type handshakeMessageReaderImpl struct {
hsCtx *HandshakeContext
}
var _ handshakeMessageReader = &handshakeMessageReaderImpl{}
func (r *handshakeMessageReaderImpl) ReadMessage() (*HandshakeMessage, Alert) {
var hm *HandshakeMessage
var err error
for {
hm, err = r.hsCtx.hIn.ReadMessage()
if err == AlertWouldBlock {
return nil, AlertWouldBlock
}
if err != nil {
logf(logTypeHandshake, "Error reading message: %v", err)
return nil, AlertCloseNotify
}
if hm != nil {
break
}
}
return hm, AlertNoAlert
}
// Handshake causes a TLS handshake on the connection. The `isClient` member
// determines whether a client or server handshake is performed. If a
// handshake has already been performed, then its result will be returned.
func (c *Conn) Handshake() Alert {
label := "[server]"
if c.isClient {
label = "[client]"
}
// TODO Lock handshakeMutex
// TODO Remove CloseNotify hack
if c.handshakeAlert != AlertNoAlert && c.handshakeAlert != AlertCloseNotify {
logf(logTypeHandshake, "Pre-existing handshake error: %v", c.handshakeAlert)
return c.handshakeAlert
}
if c.handshakeComplete {
return AlertNoAlert
}
if c.hState == nil {
logf(logTypeHandshake, "%s First time through handshake (or after stateless retry), setting up", label)
alert := c.HandshakeSetup()
if alert != AlertNoAlert || (c.isClient && c.config.NonBlocking) {
return alert
}
}
logf(logTypeHandshake, "(Re-)entering handshake, state=%v", c.hState)
state := c.hState
_, connected := state.(stateConnected)
hmr := &handshakeMessageReaderImpl{hsCtx: c.hsCtx}
for !connected {
var alert Alert
var actions []HandshakeAction
// Advance the state machine
state, actions, alert = state.Next(hmr)
if alert == AlertWouldBlock {
logf(logTypeHandshake, "%s Would block reading message: %s", label, alert)
// If we blocked, then run our timers to see if any have expired.
if c.hsCtx.hIn.datagram {
if err := c.hsCtx.timers.check(time.Now()); err != nil {
return AlertInternalError
}
}
return AlertWouldBlock
}
if alert == AlertCloseNotify {
logf(logTypeHandshake, "%s Error reading message: %s", label, alert)
c.sendAlert(AlertCloseNotify)
return AlertCloseNotify
}
if alert != AlertNoAlert && alert != AlertStatelessRetry {
logf(logTypeHandshake, "Error in state transition: %v", alert)
return alert
}
for index, action := range actions {
logf(logTypeHandshake, "%s taking next action (%d)", label, index)
if alert := c.takeAction(action); alert != AlertNoAlert {
logf(logTypeHandshake, "Error during handshake actions: %v", alert)
c.sendAlert(alert)
return alert
}
}
c.hState = state
logf(logTypeHandshake, "state is now %s", c.GetHsState())
_, connected = state.(stateConnected)
if connected {
c.state = state.(stateConnected)
c.handshakeComplete = true
if !c.isClient {
// Send NewSessionTicket if configured to
if c.config.SendSessionTickets {
actions, alert := c.state.NewSessionTicket(
c.config.TicketLen,
c.config.TicketLifetime,
c.config.EarlyDataLifetime)
for _, action := range actions {
alert = c.takeAction(action)
if alert != AlertNoAlert {
logf(logTypeHandshake, "Error during handshake actions: %v", alert)
c.sendAlert(alert)
return alert
}
}
}
// If there is early data, move it into the main buffer
if c.hsCtx.earlyData != nil {
c.readBuffer = c.hsCtx.earlyData
c.hsCtx.earlyData = nil
}
} else {
assert(c.hsCtx.earlyData == nil)
}
}
if c.config.NonBlocking {
if alert == AlertStatelessRetry {
return AlertStatelessRetry
}
return AlertNoAlert
}
}
return AlertNoAlert
}
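// SendKeyUpdate sends a KeyUpdate message and rekeys the outbound direction.
// If requestUpdate is true, the peer is asked to update its keys in turn.
// It may only be called once the handshake has completed.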
func (c *Conn) SendKeyUpdate(requestUpdate bool) error {
if !c.handshakeComplete {
return fmt.Errorf("Cannot update keys until after handshake")
}
request := KeyUpdateNotRequested
if requestUpdate {
request = KeyUpdateRequested
}
// Create the key update and update state
actions, alert := c.state.KeyUpdate(request)
if alert != AlertNoAlert {
c.sendAlert(alert)
return fmt.Errorf("Alert while generating key update: %v", alert)
}
// Take actions (send key update and rekey)
for _, action := range actions {
alert = c.takeAction(action)
if alert != AlertNoAlert {
c.sendAlert(alert)
return fmt.Errorf("Alert during key update actions: %v", alert)
}
}
return nil
}
func (c *Conn) GetHsState() State {
if c.hState == nil {
return StateInit
}
return c.hState.State()
}
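// ComputeExporter derives keyLength bytes of exporter keying material from
// the exporter secret using the given label and context. It fails if the
// handshake has not completed yet.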
func (c *Conn) ComputeExporter(label string, context []byte, keyLength int) ([]byte, error) {
_, connected := c.hState.(stateConnected)
if !connected {
return nil, fmt.Errorf("Cannot compute exporter when state is not connected")
}
if c.state.exporterSecret == nil {
return nil, fmt.Errorf("Internal error: no exporter secret")
}
h0 := c.state.cryptoParams.Hash.New().Sum(nil)
tmpSecret := deriveSecret(c.state.cryptoParams, c.state.exporterSecret, label, h0)
hc := c.state.cryptoParams.Hash.New().Sum(context)
return HkdfExpandLabel(c.state.cryptoParams.Hash, tmpSecret, "exporter", hc, keyLength), nil
}
func (c *Conn) ConnectionState() ConnectionState {
state := ConnectionState{
HandshakeState: c.GetHsState(),
}
if c.handshakeComplete {
state.CipherSuite = cipherSuiteMap[c.state.Params.CipherSuite]
state.NextProto = c.state.Params.NextProto
state.VerifiedChains = c.state.verifiedChains
state.PeerCertificates = c.state.peerCertificates
state.UsingPSK = c.state.Params.UsingPSK
state.UsingEarlyData = c.state.Params.UsingEarlyData
}
return state
}
func (c *Conn) Writable() bool {
// If we're connected, we're writable.
if _, connected := c.hState.(stateConnected); connected {
return true
}
// If we're a client in 0-RTT, then we're writable.
if c.isClient && c.out.cipher.epoch == EpochEarlyData {
return true
}
return false
}
func (c *Conn) label() string {
if c.isClient {
return "client"
}
return "server"
}

86
vendor/github.com/bifurcation/mint/cookie-protector.go generated vendored Normal file
View File

@ -0,0 +1,86 @@
package mint
import (
"crypto/aes"
"crypto/cipher"
"crypto/rand"
"crypto/sha256"
"fmt"
"io"
"golang.org/x/crypto/hkdf"
)
// CookieProtector is used to create and verify a cookie
type CookieProtector interface {
// NewToken creates a new token
NewToken([]byte) ([]byte, error)
// DecodeToken decodes a token
DecodeToken([]byte) ([]byte, error)
}
const cookieSecretSize = 32
const cookieNonceSize = 32
// DefaultCookieProtector is a simple implementation of the CookieProtector interface.
type DefaultCookieProtector struct {
secret []byte
}
var _ CookieProtector = &DefaultCookieProtector{}
// NewDefaultCookieProtector creates a DefaultCookieProtector with a fresh random secret.
func NewDefaultCookieProtector() (CookieProtector, error) {
secret := make([]byte, cookieSecretSize)
if _, err := rand.Read(secret); err != nil {
return nil, err
}
return &DefaultCookieProtector{secret: secret}, nil
}
// NewToken encodes data into a new token.
func (s *DefaultCookieProtector) NewToken(data []byte) ([]byte, error) {
nonce := make([]byte, cookieNonceSize)
if _, err := rand.Read(nonce); err != nil {
return nil, err
}
aead, aeadNonce, err := s.createAEAD(nonce)
if err != nil {
return nil, err
}
return append(nonce, aead.Seal(nil, aeadNonce, data, nil)...), nil
}
// DecodeToken decodes a token.
func (s *DefaultCookieProtector) DecodeToken(p []byte) ([]byte, error) {
if len(p) < cookieNonceSize {
return nil, fmt.Errorf("Token too short: %d", len(p))
}
nonce := p[:cookieNonceSize]
aead, aeadNonce, err := s.createAEAD(nonce)
if err != nil {
return nil, err
}
return aead.Open(nil, aeadNonce, p[cookieNonceSize:], nil)
}
func (s *DefaultCookieProtector) createAEAD(nonce []byte) (cipher.AEAD, []byte, error) {
h := hkdf.New(sha256.New, s.secret, nonce, []byte("mint cookie source"))
key := make([]byte, 32) // use a 32 byte key, in order to select AES-256
if _, err := io.ReadFull(h, key); err != nil {
return nil, nil, err
}
aeadNonce := make([]byte, 12)
if _, err := io.ReadFull(h, aeadNonce); err != nil {
return nil, nil, err
}
c, err := aes.NewCipher(key)
if err != nil {
return nil, nil, err
}
aead, err := cipher.NewGCM(c)
if err != nil {
return nil, nil, err
}
return aead, aeadNonce, nil
}
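// Illustrative sketch (not part of mint): a NewToken/DecodeToken round trip
// with the default protector. Tokens embed a fresh random nonce, so two
// tokens for the same data differ, but both decode back to the input bytes.
// The function name and payload below are made up for the example.
func exampleCookieRoundTrip() ([]byte, error) {
cp, err := NewDefaultCookieProtector()
if err != nil {
return nil, err
}
token, err := cp.NewToken([]byte("client address + timestamp"))
if err != nil {
return nil, err
}
return cp.DecodeToken(token) // returns the original payload
}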

667
vendor/github.com/bifurcation/mint/crypto.go generated vendored Normal file
View File

@ -0,0 +1,667 @@
package mint
import (
"bytes"
"crypto"
"crypto/aes"
"crypto/cipher"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/hmac"
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"crypto/x509/pkix"
"encoding/asn1"
"fmt"
"math/big"
"time"
"golang.org/x/crypto/curve25519"
// Blank includes to ensure hash support
_ "crypto/sha1"
_ "crypto/sha256"
_ "crypto/sha512"
)
var prng = rand.Reader
type aeadFactory func(key []byte) (cipher.AEAD, error)
type CipherSuiteParams struct {
Suite CipherSuite
Cipher aeadFactory // Cipher factory
Hash crypto.Hash // Hash function
KeyLen int // Key length in octets
IvLen int // IV length in octets
}
type signatureAlgorithm uint8
const (
signatureAlgorithmUnknown = iota
signatureAlgorithmRSA_PKCS1
signatureAlgorithmRSA_PSS
signatureAlgorithmECDSA
)
var (
hashMap = map[SignatureScheme]crypto.Hash{
RSA_PKCS1_SHA1: crypto.SHA1,
RSA_PKCS1_SHA256: crypto.SHA256,
RSA_PKCS1_SHA384: crypto.SHA384,
RSA_PKCS1_SHA512: crypto.SHA512,
ECDSA_P256_SHA256: crypto.SHA256,
ECDSA_P384_SHA384: crypto.SHA384,
ECDSA_P521_SHA512: crypto.SHA512,
RSA_PSS_SHA256: crypto.SHA256,
RSA_PSS_SHA384: crypto.SHA384,
RSA_PSS_SHA512: crypto.SHA512,
}
sigMap = map[SignatureScheme]signatureAlgorithm{
RSA_PKCS1_SHA1: signatureAlgorithmRSA_PKCS1,
RSA_PKCS1_SHA256: signatureAlgorithmRSA_PKCS1,
RSA_PKCS1_SHA384: signatureAlgorithmRSA_PKCS1,
RSA_PKCS1_SHA512: signatureAlgorithmRSA_PKCS1,
ECDSA_P256_SHA256: signatureAlgorithmECDSA,
ECDSA_P384_SHA384: signatureAlgorithmECDSA,
ECDSA_P521_SHA512: signatureAlgorithmECDSA,
RSA_PSS_SHA256: signatureAlgorithmRSA_PSS,
RSA_PSS_SHA384: signatureAlgorithmRSA_PSS,
RSA_PSS_SHA512: signatureAlgorithmRSA_PSS,
}
curveMap = map[SignatureScheme]NamedGroup{
ECDSA_P256_SHA256: P256,
ECDSA_P384_SHA384: P384,
ECDSA_P521_SHA512: P521,
}
newAESGCM = func(key []byte) (cipher.AEAD, error) {
block, err := aes.NewCipher(key)
if err != nil {
return nil, err
}
// TLS always uses 12-byte nonces
return cipher.NewGCMWithNonceSize(block, 12)
}
cipherSuiteMap = map[CipherSuite]CipherSuiteParams{
TLS_AES_128_GCM_SHA256: {
Suite: TLS_AES_128_GCM_SHA256,
Cipher: newAESGCM,
Hash: crypto.SHA256,
KeyLen: 16,
IvLen: 12,
},
TLS_AES_256_GCM_SHA384: {
Suite: TLS_AES_256_GCM_SHA384,
Cipher: newAESGCM,
Hash: crypto.SHA384,
KeyLen: 32,
IvLen: 12,
},
}
x509AlgMap = map[SignatureScheme]x509.SignatureAlgorithm{
RSA_PKCS1_SHA1: x509.SHA1WithRSA,
RSA_PKCS1_SHA256: x509.SHA256WithRSA,
RSA_PKCS1_SHA384: x509.SHA384WithRSA,
RSA_PKCS1_SHA512: x509.SHA512WithRSA,
ECDSA_P256_SHA256: x509.ECDSAWithSHA256,
ECDSA_P384_SHA384: x509.ECDSAWithSHA384,
ECDSA_P521_SHA512: x509.ECDSAWithSHA512,
}
defaultRSAKeySize = 2048
)
func curveFromNamedGroup(group NamedGroup) (crv elliptic.Curve) {
switch group {
case P256:
crv = elliptic.P256()
case P384:
crv = elliptic.P384()
case P521:
crv = elliptic.P521()
}
return
}
func namedGroupFromECDSAKey(key *ecdsa.PublicKey) (g NamedGroup) {
switch key.Curve.Params().Name {
case elliptic.P256().Params().Name:
g = P256
case elliptic.P384().Params().Name:
g = P384
case elliptic.P521().Params().Name:
g = P521
}
return
}
func keyExchangeSizeFromNamedGroup(group NamedGroup) (size int) {
size = 0
switch group {
case X25519:
size = 32
case P256:
size = 65
case P384:
size = 97
case P521:
size = 133
case FFDHE2048:
size = 256
case FFDHE3072:
size = 384
case FFDHE4096:
size = 512
case FFDHE6144:
size = 768
case FFDHE8192:
size = 1024
}
return
}
func primeFromNamedGroup(group NamedGroup) (p *big.Int) {
switch group {
case FFDHE2048:
p = finiteFieldPrime2048
case FFDHE3072:
p = finiteFieldPrime3072
case FFDHE4096:
p = finiteFieldPrime4096
case FFDHE6144:
p = finiteFieldPrime6144
case FFDHE8192:
p = finiteFieldPrime8192
}
return
}
func schemeValidForKey(alg SignatureScheme, key crypto.Signer) bool {
sigType := sigMap[alg]
switch key.(type) {
case *rsa.PrivateKey:
return sigType == signatureAlgorithmRSA_PKCS1 || sigType == signatureAlgorithmRSA_PSS
case *ecdsa.PrivateKey:
return sigType == signatureAlgorithmECDSA
default:
return false
}
}
func ffdheKeyShareFromPrime(p *big.Int) (priv, pub *big.Int, err error) {
primeLen := len(p.Bytes())
for {
// g = 2 for all ffdhe groups
priv, err = rand.Int(prng, p)
if err != nil {
return
}
pub = big.NewInt(0)
pub.Exp(big.NewInt(2), priv, p)
if len(pub.Bytes()) == primeLen {
return
}
}
}
func newKeyShare(group NamedGroup) (pub []byte, priv []byte, err error) {
switch group {
case P256, P384, P521:
var x, y *big.Int
crv := curveFromNamedGroup(group)
priv, x, y, err = elliptic.GenerateKey(crv, prng)
if err != nil {
return
}
pub = elliptic.Marshal(crv, x, y)
return
case FFDHE2048, FFDHE3072, FFDHE4096, FFDHE6144, FFDHE8192:
p := primeFromNamedGroup(group)
x, X, err2 := ffdheKeyShareFromPrime(p)
if err2 != nil {
err = err2
return
}
priv = x.Bytes()
pubBytes := X.Bytes()
numBytes := keyExchangeSizeFromNamedGroup(group)
pub = make([]byte, numBytes)
copy(pub[numBytes-len(pubBytes):], pubBytes)
return
case X25519:
var private, public [32]byte
_, err = prng.Read(private[:])
if err != nil {
return
}
curve25519.ScalarBaseMult(&public, &private)
priv = private[:]
pub = public[:]
return
default:
return nil, nil, fmt.Errorf("tls.newkeyshare: Unsupported group %v", group)
}
}
func keyAgreement(group NamedGroup, pub []byte, priv []byte) ([]byte, error) {
switch group {
case P256, P384, P521:
if len(pub) != keyExchangeSizeFromNamedGroup(group) {
return nil, fmt.Errorf("tls.keyagreement: Wrong public key size")
}
crv := curveFromNamedGroup(group)
pubX, pubY := elliptic.Unmarshal(crv, pub)
x, _ := crv.Params().ScalarMult(pubX, pubY, priv)
xBytes := x.Bytes()
numBytes := len(crv.Params().P.Bytes())
ret := make([]byte, numBytes)
copy(ret[numBytes-len(xBytes):], xBytes)
return ret, nil
case FFDHE2048, FFDHE3072, FFDHE4096, FFDHE6144, FFDHE8192:
numBytes := keyExchangeSizeFromNamedGroup(group)
if len(pub) != numBytes {
return nil, fmt.Errorf("tls.keyagreement: Wrong public key size")
}
p := primeFromNamedGroup(group)
x := big.NewInt(0).SetBytes(priv)
Y := big.NewInt(0).SetBytes(pub)
ZBytes := big.NewInt(0).Exp(Y, x, p).Bytes()
ret := make([]byte, numBytes)
copy(ret[numBytes-len(ZBytes):], ZBytes)
return ret, nil
case X25519:
if len(pub) != keyExchangeSizeFromNamedGroup(group) {
return nil, fmt.Errorf("tls.keyagreement: Wrong public key size")
}
var private, public, ret [32]byte
copy(private[:], priv)
copy(public[:], pub)
curve25519.ScalarMult(&ret, &private, &public)
return ret[:], nil
default:
return nil, fmt.Errorf("tls.keyagreement: Unsupported group %v", group)
}
}
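// Illustrative sketch (not part of mint): both sides of an X25519 exchange.
// Each side generates a share with newKeyShare and derives the same secret
// by passing the peer's public value to keyAgreement. The function name is
// made up for the example.
func exampleKeyAgreement() (bool, error) {
clientPub, clientPriv, err := newKeyShare(X25519)
if err != nil {
return false, err
}
serverPub, serverPriv, err := newKeyShare(X25519)
if err != nil {
return false, err
}
clientSecret, err := keyAgreement(X25519, serverPub, clientPriv)
if err != nil {
return false, err
}
serverSecret, err := keyAgreement(X25519, clientPub, serverPriv)
if err != nil {
return false, err
}
return bytes.Equal(clientSecret, serverSecret), nil // true
}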
func newSigningKey(sig SignatureScheme) (crypto.Signer, error) {
switch sig {
case RSA_PKCS1_SHA1, RSA_PKCS1_SHA256,
RSA_PKCS1_SHA384, RSA_PKCS1_SHA512,
RSA_PSS_SHA256, RSA_PSS_SHA384,
RSA_PSS_SHA512:
return rsa.GenerateKey(prng, defaultRSAKeySize)
case ECDSA_P256_SHA256:
return ecdsa.GenerateKey(elliptic.P256(), prng)
case ECDSA_P384_SHA384:
return ecdsa.GenerateKey(elliptic.P384(), prng)
case ECDSA_P521_SHA512:
return ecdsa.GenerateKey(elliptic.P521(), prng)
default:
return nil, fmt.Errorf("tls.newsigningkey: Unsupported signature algorithm [%04x]", sig)
}
}
// XXX(rlb): Copied from crypto/x509
type ecdsaSignature struct {
R, S *big.Int
}
func sign(alg SignatureScheme, privateKey crypto.Signer, sigInput []byte) ([]byte, error) {
var opts crypto.SignerOpts
hash := hashMap[alg]
if hash == crypto.SHA1 {
return nil, fmt.Errorf("tls.crypt.sign: Use of SHA-1 is forbidden")
}
sigType := sigMap[alg]
var realInput []byte
switch key := privateKey.(type) {
case *rsa.PrivateKey:
switch {
case allowPKCS1 && sigType == signatureAlgorithmRSA_PKCS1:
logf(logTypeCrypto, "signing with PKCS1, hashSize=[%d]", hash.Size())
opts = hash
case !allowPKCS1 && sigType == signatureAlgorithmRSA_PKCS1:
fallthrough
case sigType == signatureAlgorithmRSA_PSS:
logf(logTypeCrypto, "signing with PSS, hashSize=[%d]", hash.Size())
opts = &rsa.PSSOptions{SaltLength: hash.Size(), Hash: hash}
default:
return nil, fmt.Errorf("tls.crypto.sign: Unsupported algorithm for RSA key")
}
h := hash.New()
h.Write(sigInput)
realInput = h.Sum(nil)
case *ecdsa.PrivateKey:
if sigType != signatureAlgorithmECDSA {
return nil, fmt.Errorf("tls.crypto.sign: Unsupported algorithm for ECDSA key")
}
algGroup := curveMap[alg]
keyGroup := namedGroupFromECDSAKey(key.Public().(*ecdsa.PublicKey))
if algGroup != keyGroup {
return nil, fmt.Errorf("tls.crypto.sign: Unsupported hash/curve combination")
}
h := hash.New()
h.Write(sigInput)
realInput = h.Sum(nil)
default:
return nil, fmt.Errorf("tls.crypto.sign: Unsupported private key type")
}
sig, err := privateKey.Sign(prng, realInput, opts)
logf(logTypeCrypto, "signature: %x", sig)
return sig, err
}
func verify(alg SignatureScheme, publicKey crypto.PublicKey, sigInput []byte, sig []byte) error {
hash := hashMap[alg]
if hash == crypto.SHA1 {
return fmt.Errorf("tls.crypt.sign: Use of SHA-1 is forbidden")
}
sigType := sigMap[alg]
switch pub := publicKey.(type) {
case *rsa.PublicKey:
switch {
case allowPKCS1 && sigType == signatureAlgorithmRSA_PKCS1:
logf(logTypeCrypto, "verifying with PKCS1, hashSize=[%d]", hash.Size())
h := hash.New()
h.Write(sigInput)
realInput := h.Sum(nil)
return rsa.VerifyPKCS1v15(pub, hash, realInput, sig)
case !allowPKCS1 && sigType == signatureAlgorithmRSA_PKCS1:
fallthrough
case sigType == signatureAlgorithmRSA_PSS:
logf(logTypeCrypto, "verifying with PSS, hashSize=[%d]", hash.Size())
opts := &rsa.PSSOptions{SaltLength: hash.Size(), Hash: hash}
h := hash.New()
h.Write(sigInput)
realInput := h.Sum(nil)
return rsa.VerifyPSS(pub, hash, realInput, sig, opts)
default:
return fmt.Errorf("tls.verify: Unsupported algorithm for RSA key")
}
case *ecdsa.PublicKey:
if sigType != signatureAlgorithmECDSA {
return fmt.Errorf("tls.verify: Unsupported algorithm for ECDSA key")
}
if curveMap[alg] != namedGroupFromECDSAKey(pub) {
return fmt.Errorf("tls.verify: Unsupported curve for ECDSA key")
}
ecdsaSig := new(ecdsaSignature)
if rest, err := asn1.Unmarshal(sig, ecdsaSig); err != nil {
return err
} else if len(rest) != 0 {
return fmt.Errorf("tls.verify: trailing data after ECDSA signature")
}
if ecdsaSig.R.Sign() <= 0 || ecdsaSig.S.Sign() <= 0 {
return fmt.Errorf("tls.verify: ECDSA signature contained zero or negative values")
}
h := hash.New()
h.Write(sigInput)
realInput := h.Sum(nil)
if !ecdsa.Verify(pub, realInput, ecdsaSig.R, ecdsaSig.S) {
return fmt.Errorf("tls.verify: ECDSA verification failure")
}
return nil
default:
return fmt.Errorf("tls.verify: Unsupported key type")
}
}
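// Illustrative sketch (not part of mint): sign an input with a fresh ECDSA
// P-256 key and check it with verify. The same SignatureScheme selects the
// hash and curve on both sides. The function name is made up for the example.
func exampleSignVerify(input []byte) error {
key, err := newSigningKey(ECDSA_P256_SHA256)
if err != nil {
return err
}
sig, err := sign(ECDSA_P256_SHA256, key, input)
if err != nil {
return err
}
return verify(ECDSA_P256_SHA256, key.Public(), input, sig)
}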
// 0
// |
// v
// PSK -> HKDF-Extract = Early Secret
// |
// +-----> Derive-Secret(.,
// | "ext binder" |
// | "res binder",
// | "")
// | = binder_key
// |
// +-----> Derive-Secret(., "c e traffic",
// | ClientHello)
// | = client_early_traffic_secret
// |
// +-----> Derive-Secret(., "e exp master",
// | ClientHello)
// | = early_exporter_master_secret
// v
// Derive-Secret(., "derived", "")
// |
// v
// (EC)DHE -> HKDF-Extract = Handshake Secret
// |
// +-----> Derive-Secret(., "c hs traffic",
// | ClientHello...ServerHello)
// | = client_handshake_traffic_secret
// |
// +-----> Derive-Secret(., "s hs traffic",
// | ClientHello...ServerHello)
// | = server_handshake_traffic_secret
// v
// Derive-Secret(., "derived", "")
// |
// v
// 0 -> HKDF-Extract = Master Secret
// |
// +-----> Derive-Secret(., "c ap traffic",
// | ClientHello...server Finished)
// | = client_application_traffic_secret_0
// |
// +-----> Derive-Secret(., "s ap traffic",
// | ClientHello...server Finished)
// | = server_application_traffic_secret_0
// |
// +-----> Derive-Secret(., "exp master",
// | ClientHello...server Finished)
// | = exporter_master_secret
// |
// +-----> Derive-Secret(., "res master",
// ClientHello...client Finished)
// = resumption_master_secret
// From RFC 5869
// PRK = HMAC-Hash(salt, IKM)
func HkdfExtract(hash crypto.Hash, saltIn, input []byte) []byte {
salt := saltIn
// if [salt is] not provided, it is set to a string of HashLen zeros
if salt == nil {
salt = bytes.Repeat([]byte{0}, hash.Size())
}
h := hmac.New(hash.New, salt)
h.Write(input)
out := h.Sum(nil)
logf(logTypeCrypto, "HKDF Extract:\n")
logf(logTypeCrypto, "Salt [%d]: %x\n", len(salt), salt)
logf(logTypeCrypto, "Input [%d]: %x\n", len(input), input)
logf(logTypeCrypto, "Output [%d]: %x\n", len(out), out)
return out
}
const (
labelExternalBinder = "ext binder"
labelResumptionBinder = "res binder"
labelEarlyTrafficSecret = "c e traffic"
labelEarlyExporterSecret = "e exp master"
labelClientHandshakeTrafficSecret = "c hs traffic"
labelServerHandshakeTrafficSecret = "s hs traffic"
labelClientApplicationTrafficSecret = "c ap traffic"
labelServerApplicationTrafficSecret = "s ap traffic"
labelExporterSecret = "exp master"
labelResumptionSecret = "res master"
labelDerived = "derived"
labelFinished = "finished"
labelResumption = "resumption"
)
// struct HkdfLabel {
// uint16 length;
// opaque label<9..255>;
// opaque hash_value<0..255>;
// };
func hkdfEncodeLabel(labelIn string, hashValue []byte, outLen int) []byte {
label := "tls13 " + labelIn
labelLen := len(label)
hashLen := len(hashValue)
hkdfLabel := make([]byte, 2+1+labelLen+1+hashLen)
hkdfLabel[0] = byte(outLen >> 8)
hkdfLabel[1] = byte(outLen)
hkdfLabel[2] = byte(labelLen)
copy(hkdfLabel[3:3+labelLen], []byte(label))
hkdfLabel[3+labelLen] = byte(hashLen)
copy(hkdfLabel[3+labelLen+1:], hashValue)
return hkdfLabel
}
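// Illustrative sketch (not part of mint): hkdfEncodeLabel prepends "tls13 "
// to the label, so a 16-byte expansion of label "key" with an empty hash
// value encodes as 00 10 (output length), 09 (label length), the bytes of
// "tls13 key", then 00 (hash value length):
//   00 10 09 74 6c 73 31 33 20 6b 65 79 00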
func HkdfExpand(hash crypto.Hash, prk, info []byte, outLen int) []byte {
out := []byte{}
T := []byte{}
i := byte(1)
for len(out) < outLen {
block := append(T, info...)
block = append(block, i)
h := hmac.New(hash.New, prk)
h.Write(block)
T = h.Sum(nil)
out = append(out, T...)
i++
}
return out[:outLen]
}
func HkdfExpandLabel(hash crypto.Hash, secret []byte, label string, hashValue []byte, outLen int) []byte {
info := hkdfEncodeLabel(label, hashValue, outLen)
derived := HkdfExpand(hash, secret, info, outLen)
logf(logTypeCrypto, "HKDF Expand: label=[tls13 ] + '%s',requested length=%d\n", label, outLen)
logf(logTypeCrypto, "PRK [%d]: %x\n", len(secret), secret)
logf(logTypeCrypto, "Hash [%d]: %x\n", len(hashValue), hashValue)
logf(logTypeCrypto, "Info [%d]: %x\n", len(info), info)
logf(logTypeCrypto, "Derived key [%d]: %x\n", len(derived), derived)
return derived
}
func deriveSecret(params CipherSuiteParams, secret []byte, label string, messageHash []byte) []byte {
return HkdfExpandLabel(params.Hash, secret, label, messageHash, params.Hash.Size())
}
func computeFinishedData(params CipherSuiteParams, baseKey []byte, input []byte) []byte {
macKey := HkdfExpandLabel(params.Hash, baseKey, labelFinished, []byte{}, params.Hash.Size())
mac := hmac.New(params.Hash.New, macKey)
mac.Write(input)
return mac.Sum(nil)
}
type keySet struct {
cipher aeadFactory
key []byte
iv []byte
}
func makeTrafficKeys(params CipherSuiteParams, secret []byte) keySet {
logf(logTypeCrypto, "making traffic keys: secret=%x", secret)
return keySet{
cipher: params.Cipher,
key: HkdfExpandLabel(params.Hash, secret, "key", []byte{}, params.KeyLen),
iv: HkdfExpandLabel(params.Hash, secret, "iv", []byte{}, params.IvLen),
}
}
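// Illustrative sketch (not part of mint): one path through the schedule in
// the diagram above for TLS_AES_128_GCM_SHA256 without a PSK. The Early
// Secret is extracted from a zero key, "derived" steps down the chain, the
// (EC)DHE output is mixed in, and the client handshake traffic secret is
// expanded into record-protection keys. transcriptHash stands in for
// Hash(ClientHello...ServerHello); the function name is made up.
func exampleKeySchedule(dheSecret, transcriptHash []byte) keySet {
params := cipherSuiteMap[TLS_AES_128_GCM_SHA256]
zero := bytes.Repeat([]byte{0}, params.Hash.Size())
h0 := params.Hash.New().Sum(nil) // hash of the empty transcript
earlySecret := HkdfExtract(params.Hash, zero, zero)
preHandshake := deriveSecret(params, earlySecret, labelDerived, h0)
handshakeSecret := HkdfExtract(params.Hash, preHandshake, dheSecret)
clientHS := deriveSecret(params, handshakeSecret, labelClientHandshakeTrafficSecret, transcriptHash)
return makeTrafficKeys(params, clientHS)
}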
func MakeNewSelfSignedCert(name string, alg SignatureScheme) (crypto.Signer, *x509.Certificate, error) {
priv, err := newSigningKey(alg)
if err != nil {
return nil, nil, err
}
cert, err := newSelfSigned(name, alg, priv)
if err != nil {
return nil, nil, err
}
return priv, cert, nil
}
func newSelfSigned(name string, alg SignatureScheme, priv crypto.Signer) (*x509.Certificate, error) {
sigAlg, ok := x509AlgMap[alg]
if !ok {
return nil, fmt.Errorf("tls.selfsigned: Unknown signature algorithm [%04x]", alg)
}
if len(name) == 0 {
return nil, fmt.Errorf("tls.selfsigned: No name provided")
}
serial, err := rand.Int(rand.Reader, big.NewInt(0xA0A0A0A0))
if err != nil {
return nil, err
}
template := &x509.Certificate{
SerialNumber: serial,
NotBefore: time.Now(),
NotAfter: time.Now().AddDate(0, 0, 1),
SignatureAlgorithm: sigAlg,
Subject: pkix.Name{CommonName: name},
DNSNames: []string{name},
KeyUsage: x509.KeyUsageDigitalSignature | x509.KeyUsageKeyAgreement | x509.KeyUsageKeyEncipherment,
ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
}
der, err := x509.CreateCertificate(prng, template, template, priv.Public(), priv)
if err != nil {
return nil, err
}
// It is safe to ignore the error here because we're parsing known-good data
cert, _ := x509.ParseCertificate(der)
return cert, nil
}
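// Illustrative sketch (not part of mint): a throwaway credential for tests,
// valid for one day and bound to the given name in both the Subject CN and
// the DNS SAN, as constructed by newSelfSigned above. The function name is
// made up for the example.
func exampleSelfSigned() (crypto.Signer, *x509.Certificate, error) {
return MakeNewSelfSignedCert("example.com", ECDSA_P256_SHA256)
}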

222
vendor/github.com/bifurcation/mint/dtls.go generated vendored Normal file
View File

@ -0,0 +1,222 @@
package mint
import (
"fmt"
"github.com/bifurcation/mint/syntax"
"time"
)
const (
initialMtu = 1200
initialTimeout = 100
)
// labels for timers
const (
retransmitTimerLabel = "handshake retransmit"
ackTimerLabel = "ack timer"
)
type SentHandshakeFragment struct {
seq uint32
offset int
fragLength int
record uint64
acked bool
}
type DtlsAck struct {
RecordNumbers []uint64 `tls:"head=2"`
}
func wireVersion(h *HandshakeLayer) uint16 {
if h.datagram {
return dtls12WireVersion
}
return tls12Version
}
func dtlsConvertVersion(version uint16) uint16 {
if version == tls12Version {
return dtls12WireVersion
}
if version == tls10Version {
return 0xfeff
}
panic(fmt.Sprintf("Internal error, unexpected version=%d", version))
}
// TODO(ekr@rtfm.com): Move these to state-machine.go
func (h *HandshakeContext) handshakeRetransmit() error {
if _, err := h.hOut.SendQueuedMessages(); err != nil {
return err
}
h.timers.start(retransmitTimerLabel,
h.handshakeRetransmit,
h.timeoutMS)
// TODO(ekr@rtfm.com): Back off timer
return nil
}
func (h *HandshakeContext) sendAck() error {
toack := h.hIn.recvdRecords
count := (initialMtu - 2) / 8 // TODO(ekr@rtfm.com): Current MTU
if len(toack) > count {
toack = toack[:count]
}
logf(logTypeHandshake, "Sending ACK: [%x]", toack)
ack := &DtlsAck{toack}
body, err := syntax.Marshal(&ack)
if err != nil {
return err
}
err = h.hOut.conn.WriteRecord(&TLSPlaintext{
contentType: RecordTypeAck,
fragment: body,
})
if err != nil {
return err
}
return nil
}
func (h *HandshakeContext) processAck(data []byte) error {
// Cancel the retransmit timer because we will be resending
// and possibly re-arming later.
h.timers.cancel(retransmitTimerLabel)
ack := &DtlsAck{}
read, err := syntax.Unmarshal(data, &ack)
if err != nil {
return err
}
if len(data) != read {
return fmt.Errorf("Invalid encoding: Extra data not consumed")
}
logf(logTypeHandshake, "ACK: [%x]", ack.RecordNumbers)
for _, r := range ack.RecordNumbers {
for _, m := range h.sentFragments {
if r == m.record {
logf(logTypeHandshake, "Marking %v %v(%v) as acked",
m.seq, m.offset, m.fragLength)
m.acked = true
}
}
}
count, err := h.hOut.SendQueuedMessages()
if err != nil {
return err
}
if count == 0 {
logf(logTypeHandshake, "All messages ACKed")
h.hOut.ClearQueuedMessages()
return nil
}
// Reset the timer
h.timers.start(retransmitTimerLabel,
h.handshakeRetransmit,
h.timeoutMS)
return nil
}
func (c *Conn) GetDTLSTimeout() (bool, time.Duration) {
return c.hsCtx.timers.remaining()
}
func (h *HandshakeContext) receivedHandshakeMessage() {
logf(logTypeHandshake, "%p Received handshake, waiting for start of flight = %v", h, h.waitingNextFlight)
// This just enables tests.
if h.hIn == nil {
return
}
if !h.hIn.datagram {
return
}
if h.waitingNextFlight {
logf(logTypeHandshake, "Received the start of the flight")
// Clear the outgoing DTLS queue and terminate the retransmit timer
h.hOut.ClearQueuedMessages()
h.timers.cancel(retransmitTimerLabel)
// OK, we're not waiting any more.
h.waitingNextFlight = false
}
// Now pre-emptively arm the ACK timer if it's not armed already.
// We'll automatically dis-arm it at the end of the handshake.
if h.timers.getTimer(ackTimerLabel) == nil {
h.timers.start(ackTimerLabel, h.sendAck, h.timeoutMS/4)
}
}
func (h *HandshakeContext) receivedEndOfFlight() {
logf(logTypeHandshake, "%p Received the end of the flight", h)
if !h.hIn.datagram {
return
}
// Empty incoming queue
h.hIn.queued = nil
// Note that we are waiting for the next flight.
h.waitingNextFlight = true
// Clear the ACK queue.
h.hIn.recvdRecords = nil
// Disarm the ACK timer
h.timers.cancel(ackTimerLabel)
}
func (h *HandshakeContext) receivedFinalFlight() {
logf(logTypeHandshake, "%p Received final flight", h)
if !h.hIn.datagram {
return
}
// Disarm the ACK timer
h.timers.cancel(ackTimerLabel)
// But send an ACK immediately.
h.sendAck()
}
func (h *HandshakeContext) fragmentAcked(seq uint32, offset int, fraglen int) bool {
logf(logTypeHandshake, "Looking to see if fragment %v %v(%v) was acked", seq, offset, fraglen)
for _, f := range h.sentFragments {
if !f.acked {
continue
}
if f.seq != seq {
continue
}
if f.offset > offset {
continue
}
// At this point, we know that the stored fragment starts
// at or before what we want to send, so check where the end
// is.
if f.offset+f.fragLength < offset+fraglen {
continue
}
return true
}
return false
}

626
vendor/github.com/bifurcation/mint/extensions.go generated vendored Normal file
View File

@ -0,0 +1,626 @@
package mint
import (
"bytes"
"fmt"
"github.com/bifurcation/mint/syntax"
)
type ExtensionBody interface {
Type() ExtensionType
Marshal() ([]byte, error)
Unmarshal(data []byte) (int, error)
}
// struct {
// ExtensionType extension_type;
// opaque extension_data<0..2^16-1>;
// } Extension;
type Extension struct {
ExtensionType ExtensionType
ExtensionData []byte `tls:"head=2"`
}
func (ext Extension) Marshal() ([]byte, error) {
return syntax.Marshal(ext)
}
func (ext *Extension) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, ext)
}
type ExtensionList []Extension
type extensionListInner struct {
List []Extension `tls:"head=2"`
}
func (el ExtensionList) Marshal() ([]byte, error) {
return syntax.Marshal(extensionListInner{el})
}
func (el *ExtensionList) Unmarshal(data []byte) (int, error) {
var list extensionListInner
read, err := syntax.Unmarshal(data, &list)
if err != nil {
return 0, err
}
*el = list.List
return read, nil
}
func (el *ExtensionList) Add(src ExtensionBody) error {
data, err := src.Marshal()
if err != nil {
return err
}
if el == nil {
el = new(ExtensionList)
}
// If one already exists with this type, replace it
for i := range *el {
if (*el)[i].ExtensionType == src.Type() {
(*el)[i].ExtensionData = data
return nil
}
}
// Otherwise append
*el = append(*el, Extension{
ExtensionType: src.Type(),
ExtensionData: data,
})
return nil
}
func (el ExtensionList) Parse(dsts []ExtensionBody) (map[ExtensionType]bool, error) {
found := make(map[ExtensionType]bool)
for _, dst := range dsts {
for _, ext := range el {
if ext.ExtensionType == dst.Type() {
if found[dst.Type()] {
return nil, fmt.Errorf("Duplicate extension of type [%v]", dst.Type())
}
err := safeUnmarshal(dst, ext.ExtensionData)
if err != nil {
return nil, err
}
found[dst.Type()] = true
}
}
}
return found, nil
}
func (el ExtensionList) Find(dst ExtensionBody) (bool, error) {
for _, ext := range el {
if ext.ExtensionType == dst.Type() {
err := safeUnmarshal(dst, ext.ExtensionData)
if err != nil {
return true, err
}
return true, nil
}
}
return false, nil
}
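// Illustrative sketch (not part of mint): build an extension list for a
// ClientHello-like message and recover a typed body with Find. Add replaces
// an existing entry of the same type instead of appending a duplicate. The
// function name is made up for the example.
func exampleExtensionList() (string, error) {
var el ExtensionList
if err := el.Add(ServerNameExtension("example.com")); err != nil {
return "", err
}
if err := el.Add(&ALPNExtension{Protocols: []string{"h2"}}); err != nil {
return "", err
}
var sni ServerNameExtension
if _, err := el.Find(&sni); err != nil {
return "", err
}
return string(sni), nil // "example.com"
}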
// struct {
// NameType name_type;
// select (name_type) {
// case host_name: HostName;
// } name;
// } ServerName;
//
// enum {
// host_name(0), (255)
// } NameType;
//
// opaque HostName<1..2^16-1>;
//
// struct {
// ServerName server_name_list<1..2^16-1>
// } ServerNameList;
//
// But we only care about the case where there's a single DNS hostname. We
// will never create anything else, and we return an error if we receive
// something else.
//
// Wire format: listLen (2 bytes) | NameType (1) | nameLen (2) | name
type ServerNameExtension string
type serverNameInner struct {
NameType uint8
HostName []byte `tls:"head=2,min=1"`
}
type serverNameListInner struct {
ServerNameList []serverNameInner `tls:"head=2,min=1"`
}
func (sni ServerNameExtension) Type() ExtensionType {
return ExtensionTypeServerName
}
func (sni ServerNameExtension) Marshal() ([]byte, error) {
list := serverNameListInner{
ServerNameList: []serverNameInner{{
NameType: 0x00, // host_name
HostName: []byte(sni),
}},
}
return syntax.Marshal(list)
}
func (sni *ServerNameExtension) Unmarshal(data []byte) (int, error) {
var list serverNameListInner
read, err := syntax.Unmarshal(data, &list)
if err != nil {
return 0, err
}
// Syntax requires at least one entry
// Entries beyond the first are ignored
if nameType := list.ServerNameList[0].NameType; nameType != 0x00 {
return 0, fmt.Errorf("tls.servername: Unsupported name type [%x]", nameType)
}
*sni = ServerNameExtension(list.ServerNameList[0].HostName)
return read, nil
}
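// Illustrative sketch (not part of mint): ServerNameExtension("hi").Marshal()
// produces 00 05 00 00 02 68 69: a 5-byte list holding one entry with
// name_type host_name(0), a 2-byte name length, and the name bytes.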
// struct {
// NamedGroup group;
// opaque key_exchange<1..2^16-1>;
// } KeyShareEntry;
//
// struct {
// select (Handshake.msg_type) {
// case client_hello:
// KeyShareEntry client_shares<0..2^16-1>;
//
// case hello_retry_request:
// NamedGroup selected_group;
//
// case server_hello:
// KeyShareEntry server_share;
// };
// } KeyShare;
type KeyShareEntry struct {
Group NamedGroup
KeyExchange []byte `tls:"head=2,min=1"`
}
func (kse KeyShareEntry) SizeValid() bool {
return len(kse.KeyExchange) == keyExchangeSizeFromNamedGroup(kse.Group)
}
type KeyShareExtension struct {
HandshakeType HandshakeType
SelectedGroup NamedGroup
Shares []KeyShareEntry
}
type KeyShareClientHelloInner struct {
ClientShares []KeyShareEntry `tls:"head=2,min=0"`
}
type KeyShareHelloRetryInner struct {
SelectedGroup NamedGroup
}
type KeyShareServerHelloInner struct {
ServerShare KeyShareEntry
}
func (ks KeyShareExtension) Type() ExtensionType {
return ExtensionTypeKeyShare
}
func (ks KeyShareExtension) Marshal() ([]byte, error) {
switch ks.HandshakeType {
case HandshakeTypeClientHello:
for _, share := range ks.Shares {
if !share.SizeValid() {
return nil, fmt.Errorf("tls.keyshare: Key share has wrong size for group")
}
}
return syntax.Marshal(KeyShareClientHelloInner{ks.Shares})
case HandshakeTypeHelloRetryRequest:
if len(ks.Shares) > 0 {
return nil, fmt.Errorf("tls.keyshare: Key shares not allowed for HelloRetryRequest")
}
return syntax.Marshal(KeyShareHelloRetryInner{ks.SelectedGroup})
case HandshakeTypeServerHello:
if len(ks.Shares) != 1 {
return nil, fmt.Errorf("tls.keyshare: Server must send exactly one key share")
}
if !ks.Shares[0].SizeValid() {
return nil, fmt.Errorf("tls.keyshare: Key share has wrong size for group")
}
return syntax.Marshal(KeyShareServerHelloInner{ks.Shares[0]})
default:
return nil, fmt.Errorf("tls.keyshare: Handshake type not allowed")
}
}
func (ks *KeyShareExtension) Unmarshal(data []byte) (int, error) {
switch ks.HandshakeType {
case HandshakeTypeClientHello:
var inner KeyShareClientHelloInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
for _, share := range inner.ClientShares {
if !share.SizeValid() {
return 0, fmt.Errorf("tls.keyshare: Key share has wrong size for group")
}
}
ks.Shares = inner.ClientShares
return read, nil
case HandshakeTypeHelloRetryRequest:
var inner KeyShareHelloRetryInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
ks.SelectedGroup = inner.SelectedGroup
return read, nil
case HandshakeTypeServerHello:
var inner KeyShareServerHelloInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
if !inner.ServerShare.SizeValid() {
return 0, fmt.Errorf("tls.keyshare: Key share has wrong size for group")
}
ks.Shares = []KeyShareEntry{inner.ServerShare}
return read, nil
default:
return 0, fmt.Errorf("tls.keyshare: Handshake type not allowed")
}
}
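// Illustrative sketch (not part of mint): a ClientHello key_share carrying a
// single X25519 share generated with newKeyShare from crypto.go. The
// HandshakeType field selects which of the three wire encodings Marshal uses.
// The function name is made up for the example.
func exampleKeyShareExtension() (*KeyShareExtension, error) {
pub, _, err := newKeyShare(X25519)
if err != nil {
return nil, err
}
return &KeyShareExtension{
HandshakeType: HandshakeTypeClientHello,
Shares:        []KeyShareEntry{{Group: X25519, KeyExchange: pub}},
}, nil
}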
// struct {
// NamedGroup named_group_list<2..2^16-1>;
// } NamedGroupList;
type SupportedGroupsExtension struct {
Groups []NamedGroup `tls:"head=2,min=2"`
}
func (sg SupportedGroupsExtension) Type() ExtensionType {
return ExtensionTypeSupportedGroups
}
func (sg SupportedGroupsExtension) Marshal() ([]byte, error) {
return syntax.Marshal(sg)
}
func (sg *SupportedGroupsExtension) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, sg)
}
// struct {
// SignatureScheme supported_signature_algorithms<2..2^16-2>;
// } SignatureSchemeList
type SignatureAlgorithmsExtension struct {
Algorithms []SignatureScheme `tls:"head=2,min=2"`
}
func (sa SignatureAlgorithmsExtension) Type() ExtensionType {
return ExtensionTypeSignatureAlgorithms
}
func (sa SignatureAlgorithmsExtension) Marshal() ([]byte, error) {
return syntax.Marshal(sa)
}
func (sa *SignatureAlgorithmsExtension) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, sa)
}
// struct {
// opaque identity<1..2^16-1>;
// uint32 obfuscated_ticket_age;
// } PskIdentity;
//
// opaque PskBinderEntry<32..255>;
//
// struct {
// select (Handshake.msg_type) {
// case client_hello:
// PskIdentity identities<7..2^16-1>;
// PskBinderEntry binders<33..2^16-1>;
//
// case server_hello:
// uint16 selected_identity;
// };
//
// } PreSharedKeyExtension;
type PSKIdentity struct {
Identity []byte `tls:"head=2,min=1"`
ObfuscatedTicketAge uint32
}
type PSKBinderEntry struct {
Binder []byte `tls:"head=1,min=32"`
}
type PreSharedKeyExtension struct {
HandshakeType HandshakeType
Identities []PSKIdentity
Binders []PSKBinderEntry
SelectedIdentity uint16
}
type preSharedKeyClientInner struct {
Identities []PSKIdentity `tls:"head=2,min=7"`
Binders []PSKBinderEntry `tls:"head=2,min=33"`
}
type preSharedKeyServerInner struct {
SelectedIdentity uint16
}
func (psk PreSharedKeyExtension) Type() ExtensionType {
return ExtensionTypePreSharedKey
}
func (psk PreSharedKeyExtension) Marshal() ([]byte, error) {
switch psk.HandshakeType {
case HandshakeTypeClientHello:
return syntax.Marshal(preSharedKeyClientInner{
Identities: psk.Identities,
Binders: psk.Binders,
})
case HandshakeTypeServerHello:
if len(psk.Identities) > 0 || len(psk.Binders) > 0 {
return nil, fmt.Errorf("tls.presharedkey: Server can only provide an index")
}
return syntax.Marshal(preSharedKeyServerInner{psk.SelectedIdentity})
default:
return nil, fmt.Errorf("tls.presharedkey: Handshake type not supported")
}
}
func (psk *PreSharedKeyExtension) Unmarshal(data []byte) (int, error) {
switch psk.HandshakeType {
case HandshakeTypeClientHello:
var inner preSharedKeyClientInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
if len(inner.Identities) != len(inner.Binders) {
return 0, fmt.Errorf("Lengths of identities and binders not equal")
}
psk.Identities = inner.Identities
psk.Binders = inner.Binders
return read, nil
case HandshakeTypeServerHello:
var inner preSharedKeyServerInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
psk.SelectedIdentity = inner.SelectedIdentity
return read, nil
default:
return 0, fmt.Errorf("tls.presharedkey: Handshake type not supported")
}
}
func (psk PreSharedKeyExtension) HasIdentity(id []byte) ([]byte, bool) {
for i, localID := range psk.Identities {
if bytes.Equal(localID.Identity, id) {
return psk.Binders[i].Binder, true
}
}
return nil, false
}
// enum { psk_ke(0), psk_dhe_ke(1), (255) } PskKeyExchangeMode;
//
// struct {
// PskKeyExchangeMode ke_modes<1..255>;
// } PskKeyExchangeModes;
type PSKKeyExchangeModesExtension struct {
KEModes []PSKKeyExchangeMode `tls:"head=1,min=1"`
}
func (pkem PSKKeyExchangeModesExtension) Type() ExtensionType {
return ExtensionTypePSKKeyExchangeModes
}
func (pkem PSKKeyExchangeModesExtension) Marshal() ([]byte, error) {
return syntax.Marshal(pkem)
}
func (pkem *PSKKeyExchangeModesExtension) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, pkem)
}
// struct {
// } EarlyDataIndication;
type EarlyDataExtension struct{}
func (ed EarlyDataExtension) Type() ExtensionType {
return ExtensionTypeEarlyData
}
func (ed EarlyDataExtension) Marshal() ([]byte, error) {
return []byte{}, nil
}
func (ed *EarlyDataExtension) Unmarshal(data []byte) (int, error) {
return 0, nil
}
// struct {
// uint32 max_early_data_size;
// } TicketEarlyDataInfo;
type TicketEarlyDataInfoExtension struct {
MaxEarlyDataSize uint32
}
func (tedi TicketEarlyDataInfoExtension) Type() ExtensionType {
return ExtensionTypeTicketEarlyDataInfo
}
func (tedi TicketEarlyDataInfoExtension) Marshal() ([]byte, error) {
return syntax.Marshal(tedi)
}
func (tedi *TicketEarlyDataInfoExtension) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, tedi)
}
// opaque ProtocolName<1..2^8-1>;
//
// struct {
// ProtocolName protocol_name_list<2..2^16-1>
// } ProtocolNameList;
type ALPNExtension struct {
Protocols []string
}
type protocolNameInner struct {
Name []byte `tls:"head=1,min=1"`
}
type alpnExtensionInner struct {
Protocols []protocolNameInner `tls:"head=2,min=2"`
}
func (alpn ALPNExtension) Type() ExtensionType {
return ExtensionTypeALPN
}
func (alpn ALPNExtension) Marshal() ([]byte, error) {
protocols := make([]protocolNameInner, len(alpn.Protocols))
for i, protocol := range alpn.Protocols {
protocols[i] = protocolNameInner{[]byte(protocol)}
}
return syntax.Marshal(alpnExtensionInner{protocols})
}
func (alpn *ALPNExtension) Unmarshal(data []byte) (int, error) {
var inner alpnExtensionInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
alpn.Protocols = make([]string, len(inner.Protocols))
for i, protocol := range inner.Protocols {
alpn.Protocols[i] = string(protocol.Name)
}
return read, nil
}
// struct {
// ProtocolVersion versions<2..254>;
// } SupportedVersions;
type SupportedVersionsExtension struct {
HandshakeType HandshakeType
Versions []uint16
}
type SupportedVersionsClientHelloInner struct {
Versions []uint16 `tls:"head=1,min=2,max=254"`
}
type SupportedVersionsServerHelloInner struct {
Version uint16
}
func (sv SupportedVersionsExtension) Type() ExtensionType {
return ExtensionTypeSupportedVersions
}
func (sv SupportedVersionsExtension) Marshal() ([]byte, error) {
switch sv.HandshakeType {
case HandshakeTypeClientHello:
return syntax.Marshal(SupportedVersionsClientHelloInner{sv.Versions})
case HandshakeTypeServerHello, HandshakeTypeHelloRetryRequest:
return syntax.Marshal(SupportedVersionsServerHelloInner{sv.Versions[0]})
default:
return nil, fmt.Errorf("tls.supported_versions: Handshake type not allowed")
}
}
func (sv *SupportedVersionsExtension) Unmarshal(data []byte) (int, error) {
switch sv.HandshakeType {
case HandshakeTypeClientHello:
var inner SupportedVersionsClientHelloInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
sv.Versions = inner.Versions
return read, nil
case HandshakeTypeServerHello, HandshakeTypeHelloRetryRequest:
var inner SupportedVersionsServerHelloInner
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
sv.Versions = []uint16{inner.Version}
return read, nil
default:
return 0, fmt.Errorf("tls.supported_versions: Handshake type not allowed")
}
}
// struct {
// opaque cookie<1..2^16-1>;
// } Cookie;
type CookieExtension struct {
Cookie []byte `tls:"head=2,min=1"`
}
func (c CookieExtension) Type() ExtensionType {
return ExtensionTypeCookie
}
func (c CookieExtension) Marshal() ([]byte, error) {
return syntax.Marshal(c)
}
func (c *CookieExtension) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, c)
}

147
vendor/github.com/bifurcation/mint/ffdhe.go generated vendored Normal file
View File

@ -0,0 +1,147 @@
package mint
import (
"encoding/hex"
"math/big"
)
var (
finiteFieldPrime2048hex = "FFFFFFFFFFFFFFFFADF85458A2BB4A9AAFDC5620273D3CF1" +
"D8B9C583CE2D3695A9E13641146433FBCC939DCE249B3EF9" +
"7D2FE363630C75D8F681B202AEC4617AD3DF1ED5D5FD6561" +
"2433F51F5F066ED0856365553DED1AF3B557135E7F57C935" +
"984F0C70E0E68B77E2A689DAF3EFE8721DF158A136ADE735" +
"30ACCA4F483A797ABC0AB182B324FB61D108A94BB2C8E3FB" +
"B96ADAB760D7F4681D4F42A3DE394DF4AE56EDE76372BB19" +
"0B07A7C8EE0A6D709E02FCE1CDF7E2ECC03404CD28342F61" +
"9172FE9CE98583FF8E4F1232EEF28183C3FE3B1B4C6FAD73" +
"3BB5FCBC2EC22005C58EF1837D1683B2C6F34A26C1B2EFFA" +
"886B423861285C97FFFFFFFFFFFFFFFF"
finiteFieldPrime2048bytes, _ = hex.DecodeString(finiteFieldPrime2048hex)
finiteFieldPrime2048 = big.NewInt(0).SetBytes(finiteFieldPrime2048bytes)
finiteFieldPrime3072hex = "FFFFFFFFFFFFFFFFADF85458A2BB4A9AAFDC5620273D3CF1" +
"D8B9C583CE2D3695A9E13641146433FBCC939DCE249B3EF9" +
"7D2FE363630C75D8F681B202AEC4617AD3DF1ED5D5FD6561" +
"2433F51F5F066ED0856365553DED1AF3B557135E7F57C935" +
"984F0C70E0E68B77E2A689DAF3EFE8721DF158A136ADE735" +
"30ACCA4F483A797ABC0AB182B324FB61D108A94BB2C8E3FB" +
"B96ADAB760D7F4681D4F42A3DE394DF4AE56EDE76372BB19" +
"0B07A7C8EE0A6D709E02FCE1CDF7E2ECC03404CD28342F61" +
"9172FE9CE98583FF8E4F1232EEF28183C3FE3B1B4C6FAD73" +
"3BB5FCBC2EC22005C58EF1837D1683B2C6F34A26C1B2EFFA" +
"886B4238611FCFDCDE355B3B6519035BBC34F4DEF99C0238" +
"61B46FC9D6E6C9077AD91D2691F7F7EE598CB0FAC186D91C" +
"AEFE130985139270B4130C93BC437944F4FD4452E2D74DD3" +
"64F2E21E71F54BFF5CAE82AB9C9DF69EE86D2BC522363A0D" +
"ABC521979B0DEADA1DBF9A42D5C4484E0ABCD06BFA53DDEF" +
"3C1B20EE3FD59D7C25E41D2B66C62E37FFFFFFFFFFFFFFFF"
finiteFieldPrime3072bytes, _ = hex.DecodeString(finiteFieldPrime3072hex)
finiteFieldPrime3072 = big.NewInt(0).SetBytes(finiteFieldPrime3072bytes)
finiteFieldPrime4096hex = "FFFFFFFFFFFFFFFFADF85458A2BB4A9AAFDC5620273D3CF1" +
"D8B9C583CE2D3695A9E13641146433FBCC939DCE249B3EF9" +
"7D2FE363630C75D8F681B202AEC4617AD3DF1ED5D5FD6561" +
"2433F51F5F066ED0856365553DED1AF3B557135E7F57C935" +
"984F0C70E0E68B77E2A689DAF3EFE8721DF158A136ADE735" +
"30ACCA4F483A797ABC0AB182B324FB61D108A94BB2C8E3FB" +
"B96ADAB760D7F4681D4F42A3DE394DF4AE56EDE76372BB19" +
"0B07A7C8EE0A6D709E02FCE1CDF7E2ECC03404CD28342F61" +
"9172FE9CE98583FF8E4F1232EEF28183C3FE3B1B4C6FAD73" +
"3BB5FCBC2EC22005C58EF1837D1683B2C6F34A26C1B2EFFA" +
"886B4238611FCFDCDE355B3B6519035BBC34F4DEF99C0238" +
"61B46FC9D6E6C9077AD91D2691F7F7EE598CB0FAC186D91C" +
"AEFE130985139270B4130C93BC437944F4FD4452E2D74DD3" +
"64F2E21E71F54BFF5CAE82AB9C9DF69EE86D2BC522363A0D" +
"ABC521979B0DEADA1DBF9A42D5C4484E0ABCD06BFA53DDEF" +
"3C1B20EE3FD59D7C25E41D2B669E1EF16E6F52C3164DF4FB" +
"7930E9E4E58857B6AC7D5F42D69F6D187763CF1D55034004" +
"87F55BA57E31CC7A7135C886EFB4318AED6A1E012D9E6832" +
"A907600A918130C46DC778F971AD0038092999A333CB8B7A" +
"1A1DB93D7140003C2A4ECEA9F98D0ACC0A8291CDCEC97DCF" +
"8EC9B55A7F88A46B4DB5A851F44182E1C68A007E5E655F6A" +
"FFFFFFFFFFFFFFFF"
finiteFieldPrime4096bytes, _ = hex.DecodeString(finiteFieldPrime4096hex)
finiteFieldPrime4096 = big.NewInt(0).SetBytes(finiteFieldPrime4096bytes)
finiteFieldPrime6144hex = "FFFFFFFFFFFFFFFFADF85458A2BB4A9AAFDC5620273D3CF1" +
"D8B9C583CE2D3695A9E13641146433FBCC939DCE249B3EF9" +
"7D2FE363630C75D8F681B202AEC4617AD3DF1ED5D5FD6561" +
"2433F51F5F066ED0856365553DED1AF3B557135E7F57C935" +
"984F0C70E0E68B77E2A689DAF3EFE8721DF158A136ADE735" +
"30ACCA4F483A797ABC0AB182B324FB61D108A94BB2C8E3FB" +
"B96ADAB760D7F4681D4F42A3DE394DF4AE56EDE76372BB19" +
"0B07A7C8EE0A6D709E02FCE1CDF7E2ECC03404CD28342F61" +
"9172FE9CE98583FF8E4F1232EEF28183C3FE3B1B4C6FAD73" +
"3BB5FCBC2EC22005C58EF1837D1683B2C6F34A26C1B2EFFA" +
"886B4238611FCFDCDE355B3B6519035BBC34F4DEF99C0238" +
"61B46FC9D6E6C9077AD91D2691F7F7EE598CB0FAC186D91C" +
"AEFE130985139270B4130C93BC437944F4FD4452E2D74DD3" +
"64F2E21E71F54BFF5CAE82AB9C9DF69EE86D2BC522363A0D" +
"ABC521979B0DEADA1DBF9A42D5C4484E0ABCD06BFA53DDEF" +
"3C1B20EE3FD59D7C25E41D2B669E1EF16E6F52C3164DF4FB" +
"7930E9E4E58857B6AC7D5F42D69F6D187763CF1D55034004" +
"87F55BA57E31CC7A7135C886EFB4318AED6A1E012D9E6832" +
"A907600A918130C46DC778F971AD0038092999A333CB8B7A" +
"1A1DB93D7140003C2A4ECEA9F98D0ACC0A8291CDCEC97DCF" +
"8EC9B55A7F88A46B4DB5A851F44182E1C68A007E5E0DD902" +
"0BFD64B645036C7A4E677D2C38532A3A23BA4442CAF53EA6" +
"3BB454329B7624C8917BDD64B1C0FD4CB38E8C334C701C3A" +
"CDAD0657FCCFEC719B1F5C3E4E46041F388147FB4CFDB477" +
"A52471F7A9A96910B855322EDB6340D8A00EF092350511E3" +
"0ABEC1FFF9E3A26E7FB29F8C183023C3587E38DA0077D9B4" +
"763E4E4B94B2BBC194C6651E77CAF992EEAAC0232A281BF6" +
"B3A739C1226116820AE8DB5847A67CBEF9C9091B462D538C" +
"D72B03746AE77F5E62292C311562A846505DC82DB854338A" +
"E49F5235C95B91178CCF2DD5CACEF403EC9D1810C6272B04" +
"5B3B71F9DC6B80D63FDD4A8E9ADB1E6962A69526D43161C1" +
"A41D570D7938DAD4A40E329CD0E40E65FFFFFFFFFFFFFFFF"
finiteFieldPrime6144bytes, _ = hex.DecodeString(finiteFieldPrime6144hex)
finiteFieldPrime6144 = big.NewInt(0).SetBytes(finiteFieldPrime6144bytes)
finiteFieldPrime8192hex = "FFFFFFFFFFFFFFFFADF85458A2BB4A9AAFDC5620273D3CF1" +
"D8B9C583CE2D3695A9E13641146433FBCC939DCE249B3EF9" +
"7D2FE363630C75D8F681B202AEC4617AD3DF1ED5D5FD6561" +
"2433F51F5F066ED0856365553DED1AF3B557135E7F57C935" +
"984F0C70E0E68B77E2A689DAF3EFE8721DF158A136ADE735" +
"30ACCA4F483A797ABC0AB182B324FB61D108A94BB2C8E3FB" +
"B96ADAB760D7F4681D4F42A3DE394DF4AE56EDE76372BB19" +
"0B07A7C8EE0A6D709E02FCE1CDF7E2ECC03404CD28342F61" +
"9172FE9CE98583FF8E4F1232EEF28183C3FE3B1B4C6FAD73" +
"3BB5FCBC2EC22005C58EF1837D1683B2C6F34A26C1B2EFFA" +
"886B4238611FCFDCDE355B3B6519035BBC34F4DEF99C0238" +
"61B46FC9D6E6C9077AD91D2691F7F7EE598CB0FAC186D91C" +
"AEFE130985139270B4130C93BC437944F4FD4452E2D74DD3" +
"64F2E21E71F54BFF5CAE82AB9C9DF69EE86D2BC522363A0D" +
"ABC521979B0DEADA1DBF9A42D5C4484E0ABCD06BFA53DDEF" +
"3C1B20EE3FD59D7C25E41D2B669E1EF16E6F52C3164DF4FB" +
"7930E9E4E58857B6AC7D5F42D69F6D187763CF1D55034004" +
"87F55BA57E31CC7A7135C886EFB4318AED6A1E012D9E6832" +
"A907600A918130C46DC778F971AD0038092999A333CB8B7A" +
"1A1DB93D7140003C2A4ECEA9F98D0ACC0A8291CDCEC97DCF" +
"8EC9B55A7F88A46B4DB5A851F44182E1C68A007E5E0DD902" +
"0BFD64B645036C7A4E677D2C38532A3A23BA4442CAF53EA6" +
"3BB454329B7624C8917BDD64B1C0FD4CB38E8C334C701C3A" +
"CDAD0657FCCFEC719B1F5C3E4E46041F388147FB4CFDB477" +
"A52471F7A9A96910B855322EDB6340D8A00EF092350511E3" +
"0ABEC1FFF9E3A26E7FB29F8C183023C3587E38DA0077D9B4" +
"763E4E4B94B2BBC194C6651E77CAF992EEAAC0232A281BF6" +
"B3A739C1226116820AE8DB5847A67CBEF9C9091B462D538C" +
"D72B03746AE77F5E62292C311562A846505DC82DB854338A" +
"E49F5235C95B91178CCF2DD5CACEF403EC9D1810C6272B04" +
"5B3B71F9DC6B80D63FDD4A8E9ADB1E6962A69526D43161C1" +
"A41D570D7938DAD4A40E329CCFF46AAA36AD004CF600C838" +
"1E425A31D951AE64FDB23FCEC9509D43687FEB69EDD1CC5E" +
"0B8CC3BDF64B10EF86B63142A3AB8829555B2F747C932665" +
"CB2C0F1CC01BD70229388839D2AF05E454504AC78B758282" +
"2846C0BA35C35F5C59160CC046FD8251541FC68C9C86B022" +
"BB7099876A460E7451A8A93109703FEE1C217E6C3826E52C" +
"51AA691E0E423CFC99E9E31650C1217B624816CDAD9A95F9" +
"D5B8019488D9C0A0A1FE3075A577E23183F81D4A3F2FA457" +
"1EFC8CE0BA8A4FE8B6855DFE72B0A66EDED2FBABFBE58A30" +
"FAFABE1C5D71A87E2F741EF8C1FE86FEA6BBFDE530677F0D" +
"97D11D49F7A8443D0822E506A9F4614E011E2A94838FF88C" +
"D68C8BB7C5C6424CFFFFFFFFFFFFFFFF"
finiteFieldPrime8192bytes, _ = hex.DecodeString(finiteFieldPrime8192hex)
finiteFieldPrime8192 = big.NewInt(0).SetBytes(finiteFieldPrime8192bytes)
)

98
vendor/github.com/bifurcation/mint/frame-reader.go generated vendored Normal file
View File

@ -0,0 +1,98 @@
// Read a generic "framed" packet consisting of a header and a
// This is used for both TLS Records and TLS Handshake Messages
package mint
type framing interface {
headerLen() int
defaultReadLen() int
frameLen(hdr []byte) (int, error)
}
const (
kFrameReaderHdr = 0
kFrameReaderBody = 1
)
type frameNextAction func(f *frameReader) error
type frameReader struct {
details framing
state uint8
header []byte
body []byte
working []byte
writeOffset int
remainder []byte
}
func newFrameReader(d framing) *frameReader {
hdr := make([]byte, d.headerLen())
return &frameReader{
d,
kFrameReaderHdr,
hdr,
nil,
hdr,
0,
nil,
}
}
func dup(a []byte) []byte {
r := make([]byte, len(a))
copy(r, a)
return r
}
func (f *frameReader) needed() int {
tmp := (len(f.working) - f.writeOffset) - len(f.remainder)
if tmp < 0 {
return 0
}
return tmp
}
func (f *frameReader) addChunk(in []byte) {
// Append to the buffer.
logf(logTypeFrameReader, "Appending %v", len(in))
f.remainder = append(f.remainder, in...)
}
func (f *frameReader) process() (hdr []byte, body []byte, err error) {
for f.needed() == 0 {
logf(logTypeFrameReader, "%v bytes needed for next block", len(f.working)-f.writeOffset)
// Fill out our working block
copied := copy(f.working[f.writeOffset:], f.remainder)
f.remainder = f.remainder[copied:]
f.writeOffset += copied
if f.writeOffset < len(f.working) {
logf(logTypeVerbose, "Read would have blocked 1")
return nil, nil, AlertWouldBlock
}
// Reset the write offset, because we are now full.
f.writeOffset = 0
// We have read a full frame
if f.state == kFrameReaderBody {
logf(logTypeFrameReader, "Returning frame hdr=%#x len=%d buffered=%d", f.header, len(f.body), len(f.remainder))
f.state = kFrameReaderHdr
f.working = f.header
return dup(f.header), dup(f.body), nil
}
// We have read the header
bodyLen, err := f.details.frameLen(f.header)
if err != nil {
return nil, nil, err
}
logf(logTypeFrameReader, "Processed header, body len = %v", bodyLen)
f.body = make([]byte, bodyLen)
f.working = f.body
f.writeOffset = 0
f.state = kFrameReaderBody
}
logf(logTypeVerbose, "Read would have blocked 2")
return nil, nil, AlertWouldBlock
}
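// Illustrative sketch (not part of mint): a toy framing with a one-byte
// length header, fed to a frameReader in two chunks. process returns
// AlertWouldBlock until a complete header and body have been buffered.
// The type and function names are made up for the example.
type exampleFraming struct{}
func (exampleFraming) headerLen() int      { return 1 }
func (exampleFraming) defaultReadLen() int { return 16 }
func (exampleFraming) frameLen(hdr []byte) (int, error) { return int(hdr[0]), nil }
func exampleFrameReader() ([]byte, []byte, error) {
r := newFrameReader(exampleFraming{})
r.addChunk([]byte{3, 'a'}) // header plus a partial body
if _, _, err := r.process(); err != AlertWouldBlock {
return nil, nil, err // unexpected: the frame cannot be complete yet
}
r.addChunk([]byte{'b', 'c'}) // rest of the body
return r.process() // hdr=[03], body="abc"
}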

551
vendor/github.com/bifurcation/mint/handshake-layer.go generated vendored Normal file
View File

@ -0,0 +1,551 @@
package mint
import (
"fmt"
"io"
"net"
)
const (
handshakeHeaderLenTLS = 4 // handshake message header length
handshakeHeaderLenDTLS = 12 // handshake message header length
maxHandshakeMessageLen = 1 << 24 // max handshake message length
)
// struct {
// HandshakeType msg_type; /* handshake type */
// uint24 length; /* bytes in message */
// select (HandshakeType) {
// ...
// } body;
// } Handshake;
//
// We do the select{...} part in a different layer, so we treat the
// actual message body as opaque:
//
// struct {
// HandshakeType msg_type;
// opaque msg<0..2^24-1>
// } Handshake;
//
type HandshakeMessage struct {
msgType HandshakeType
seq uint32
body []byte
datagram bool
offset uint32 // Used for DTLS
length uint32
cipher *cipherState
}
// Note: This could be done with the `syntax` module, using the simplified
// syntax as discussed above. However, since this is so simple, there's not
// much benefit to doing so.
// When datagram is set, we marshal this as a whole DTLS record.
func (hm *HandshakeMessage) Marshal() []byte {
if hm == nil {
return []byte{}
}
fragLen := len(hm.body)
var data []byte
if hm.datagram {
data = make([]byte, handshakeHeaderLenDTLS+fragLen)
} else {
data = make([]byte, handshakeHeaderLenTLS+fragLen)
}
tmp := data
tmp = encodeUint(uint64(hm.msgType), 1, tmp)
tmp = encodeUint(uint64(hm.length), 3, tmp)
if hm.datagram {
tmp = encodeUint(uint64(hm.seq), 2, tmp)
tmp = encodeUint(uint64(hm.offset), 3, tmp)
tmp = encodeUint(uint64(fragLen), 3, tmp)
}
copy(tmp, hm.body)
return data
}
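// Illustrative sketch (not part of mint): on a stream (TLS) connection a
// Finished message with a 32-byte body marshals to the 4-byte header
// 14 00 00 20 followed by the body. With datagram set, the 2-byte message
// seq and the 3-byte fragment offset and length follow the type and length,
// giving the 12-byte DTLS header.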
func (hm HandshakeMessage) ToBody() (HandshakeMessageBody, error) {
logf(logTypeHandshake, "HandshakeMessage.toBody [%d] [%x]", hm.msgType, hm.body)
var body HandshakeMessageBody
switch hm.msgType {
case HandshakeTypeClientHello:
body = new(ClientHelloBody)
case HandshakeTypeServerHello:
body = new(ServerHelloBody)
case HandshakeTypeEncryptedExtensions:
body = new(EncryptedExtensionsBody)
case HandshakeTypeCertificate:
body = new(CertificateBody)
case HandshakeTypeCertificateRequest:
body = new(CertificateRequestBody)
case HandshakeTypeCertificateVerify:
body = new(CertificateVerifyBody)
case HandshakeTypeFinished:
body = &FinishedBody{VerifyDataLen: len(hm.body)}
case HandshakeTypeNewSessionTicket:
body = new(NewSessionTicketBody)
case HandshakeTypeKeyUpdate:
body = new(KeyUpdateBody)
case HandshakeTypeEndOfEarlyData:
body = new(EndOfEarlyDataBody)
default:
return body, fmt.Errorf("tls.handshakemessage: Unsupported body type")
}
err := safeUnmarshal(body, hm.body)
return body, err
}
func (h *HandshakeLayer) HandshakeMessageFromBody(body HandshakeMessageBody) (*HandshakeMessage, error) {
data, err := body.Marshal()
if err != nil {
return nil, err
}
m := &HandshakeMessage{
msgType: body.Type(),
body: data,
seq: h.msgSeq,
datagram: h.datagram,
length: uint32(len(data)),
}
h.msgSeq++
return m, nil
}
type HandshakeLayer struct {
ctx *HandshakeContext // The handshake we are attached to
nonblocking bool // Should we operate in nonblocking mode
conn *RecordLayer // Used for reading/writing records
frame *frameReader // The buffered frame reader
datagram bool // Is this DTLS?
msgSeq uint32 // The DTLS message sequence number
queued []*HandshakeMessage // In/out queue
sent []*HandshakeMessage // Sent messages for DTLS
recvdRecords []uint64 // Records we have received.
maxFragmentLen int
}
type handshakeLayerFrameDetails struct {
datagram bool
}
func (d handshakeLayerFrameDetails) headerLen() int {
if d.datagram {
return handshakeHeaderLenDTLS
}
return handshakeHeaderLenTLS
}
func (d handshakeLayerFrameDetails) defaultReadLen() int {
return d.headerLen() + maxFragmentLen
}
func (d handshakeLayerFrameDetails) frameLen(hdr []byte) (int, error) {
logf(logTypeIO, "Header=%x", hdr)
// The length of this fragment (as opposed to the message)
// is always the last three bytes for both TLS and DTLS
val, _ := decodeUint(hdr[len(hdr)-3:], 3)
return int(val), nil
}
func NewHandshakeLayerTLS(c *HandshakeContext, r *RecordLayer) *HandshakeLayer {
h := HandshakeLayer{}
h.ctx = c
h.conn = r
h.datagram = false
h.frame = newFrameReader(&handshakeLayerFrameDetails{false})
h.maxFragmentLen = maxFragmentLen
return &h
}
func NewHandshakeLayerDTLS(c *HandshakeContext, r *RecordLayer) *HandshakeLayer {
h := HandshakeLayer{}
h.ctx = c
h.conn = r
h.datagram = true
h.frame = newFrameReader(&handshakeLayerFrameDetails{true})
h.maxFragmentLen = initialMtu // Not quite right
return &h
}
func (h *HandshakeLayer) readRecord() error {
logf(logTypeVerbose, "Trying to read record")
pt, err := h.conn.readRecordAnyEpoch()
if err != nil {
return err
}
switch pt.contentType {
case RecordTypeHandshake, RecordTypeAlert, RecordTypeAck:
default:
return fmt.Errorf("tls.handshakelayer: Unexpected record type %d", pt.contentType)
}
if pt.contentType == RecordTypeAck {
if !h.datagram {
return fmt.Errorf("tls.handshakelayer: can't have ACK with TLS")
}
logf(logTypeIO, "read ACK")
return h.ctx.processAck(pt.fragment)
}
if pt.contentType == RecordTypeAlert {
logf(logTypeIO, "read alert %v", pt.fragment[1])
if len(pt.fragment) < 2 {
h.sendAlert(AlertUnexpectedMessage)
return io.EOF
}
return Alert(pt.fragment[1])
}
assert(h.ctx.hIn.conn != nil)
if pt.epoch != h.ctx.hIn.conn.cipher.epoch {
// This is out of order but we're dropping it.
// TODO(ekr@rtfm.com): If server, need to retransmit Finished.
if pt.epoch == EpochClear || pt.epoch == EpochHandshakeData {
return nil
}
// Anything else shouldn't happen.
return AlertIllegalParameter
}
h.recvdRecords = append(h.recvdRecords, pt.seq)
h.frame.addChunk(pt.fragment)
return nil
}
// sendAlert sends a TLS alert message.
func (h *HandshakeLayer) sendAlert(err Alert) error {
tmp := make([]byte, 2)
tmp[0] = AlertLevelError
tmp[1] = byte(err)
h.conn.WriteRecord(&TLSPlaintext{
contentType: RecordTypeAlert,
fragment: tmp},
)
// closeNotify is a special case in that it isn't an error:
if err != AlertCloseNotify {
return &net.OpError{Op: "local error", Err: err}
}
return nil
}
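// noteMessageDelivered advances the expected message sequence number past
// seq and drops queued fragments belonging to messages at or before seq.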
func (h *HandshakeLayer) noteMessageDelivered(seq uint32) {
h.msgSeq = seq + 1
var i int
var m *HandshakeMessage
for i, m = range h.queued {
if m.seq > seq {
break
}
}
h.queued = h.queued[i:]
}
func (h *HandshakeLayer) newFragmentReceived(hm *HandshakeMessage) (*HandshakeMessage, error) {
if hm.seq < h.msgSeq {
return nil, nil
}
// TODO(ekr@rtfm.com): Send an ACK immediately if we got something
// out of order.
h.ctx.receivedHandshakeMessage()
if hm.seq == h.msgSeq && hm.offset == 0 && hm.length == uint32(len(hm.body)) {
// TODO(ekr@rtfm.com): Check the length?
// This is complete.
h.noteMessageDelivered(hm.seq)
return hm, nil
}
// Now insert sorted.
var i int
for i = 0; i < len(h.queued); i++ {
f := h.queued[i]
if hm.seq < f.seq {
break
}
if hm.offset < f.offset {
break
}
}
tmp := make([]*HandshakeMessage, 0, len(h.queued)+1)
tmp = append(tmp, h.queued[:i]...)
tmp = append(tmp, hm)
tmp = append(tmp, h.queued[i:]...)
h.queued = tmp
return h.checkMessageAvailable()
}
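// checkMessageAvailable returns the next expected message if the fragments
// queued so far can be assembled into a complete message, or nil otherwise.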
func (h *HandshakeLayer) checkMessageAvailable() (*HandshakeMessage, error) {
if len(h.queued) == 0 {
return nil, nil
}
hm := h.queued[0]
if hm.seq != h.msgSeq {
return nil, nil
}
if hm.seq == h.msgSeq && hm.offset == 0 && hm.length == uint32(len(hm.body)) {
// TODO(ekr@rtfm.com): Check the length?
// This is complete.
h.noteMessageDelivered(hm.seq)
return hm, nil
}
// OK, this at least might complete the message.
end := uint32(0)
buf := make([]byte, hm.length)
for _, f := range h.queued {
// Out of fragments
if f.seq > hm.seq {
break
}
if f.length != uint32(len(buf)) {
return nil, fmt.Errorf("Mismatched DTLS length")
}
if f.offset > end {
break
}
if f.offset+uint32(len(f.body)) > end {
// OK, this is adding something we don't know about
copy(buf[f.offset:], f.body)
end = f.offset + uint32(len(f.body))
if end == hm.length {
h2 := *hm
h2.offset = 0
h2.body = buf
h.noteMessageDelivered(hm.seq)
return &h2, nil
}
}
}
return nil, nil
}
func (h *HandshakeLayer) ReadMessage() (*HandshakeMessage, error) {
var hdr, body []byte
var err error
hm, err := h.checkMessageAvailable()
if err != nil {
return nil, err
}
if hm != nil {
return hm, nil
}
for {
logf(logTypeVerbose, "ReadMessage() buffered=%v", len(h.frame.remainder))
if h.frame.needed() > 0 {
logf(logTypeVerbose, "Trying to read a new record")
err = h.readRecord()
if err != nil && (h.nonblocking || err != AlertWouldBlock) {
return nil, err
}
}
hdr, body, err = h.frame.process()
if err == nil {
break
}
if err != nil && (h.nonblocking || err != AlertWouldBlock) {
return nil, err
}
}
logf(logTypeHandshake, "read handshake message")
hm = &HandshakeMessage{}
hm.msgType = HandshakeType(hdr[0])
hm.datagram = h.datagram
hm.body = make([]byte, len(body))
copy(hm.body, body)
logf(logTypeHandshake, "Read message with type: %v", hm.msgType)
if h.datagram {
tmp, hdr := decodeUint(hdr[1:], 3)
hm.length = uint32(tmp)
tmp, hdr = decodeUint(hdr, 2)
hm.seq = uint32(tmp)
tmp, hdr = decodeUint(hdr, 3)
hm.offset = uint32(tmp)
return h.newFragmentReceived(hm)
}
hm.length = uint32(len(body))
return hm, nil
}
func (h *HandshakeLayer) QueueMessage(hm *HandshakeMessage) error {
hm.cipher = h.conn.cipher
h.queued = append(h.queued, hm)
return nil
}
func (h *HandshakeLayer) SendQueuedMessages() (int, error) {
logf(logTypeHandshake, "Sending outgoing messages")
count, err := h.WriteMessages(h.queued)
if !h.datagram {
h.ClearQueuedMessages()
}
return count, err
}
func (h *HandshakeLayer) ClearQueuedMessages() {
logf(logTypeHandshake, "Clearing outgoing hs message queue")
h.queued = nil
}
func (h *HandshakeLayer) writeFragment(hm *HandshakeMessage, start int, room int) (bool, int, error) {
var buf []byte
// Figure out if we're going to want the full header or just
// the body
hdrlen := 0
if hm.datagram {
hdrlen = handshakeHeaderLenDTLS
} else if start == 0 {
hdrlen = handshakeHeaderLenTLS
}
// Compute the amount of body we can fit in
room -= hdrlen
if room == 0 {
// This works because we are doing one record per
// message
panic("Too short max fragment len")
}
bodylen := len(hm.body) - start
if bodylen > room {
bodylen = room
}
body := hm.body[start : start+bodylen]
// Now see if this chunk has been ACKed. This doesn't produce ideal
// retransmission but is simple.
if h.ctx.fragmentAcked(hm.seq, start, bodylen) {
logf(logTypeHandshake, "Fragment %v %v(%v) already acked. Skipping", hm.seq, start, bodylen)
return false, start + bodylen, nil
}
// Encode the data.
if hdrlen > 0 {
hm2 := *hm
hm2.offset = uint32(start)
hm2.body = body
buf = hm2.Marshal()
hm = &hm2
} else {
buf = body
}
if h.datagram {
// Remember that we sent this.
h.ctx.sentFragments = append(h.ctx.sentFragments, &SentHandshakeFragment{
hm.seq,
start,
len(body),
h.conn.cipher.combineSeq(true),
false,
})
}
return true, start + bodylen, h.conn.writeRecordWithPadding(
&TLSPlaintext{
contentType: RecordTypeHandshake,
fragment: buf,
},
hm.cipher, 0)
}
func (h *HandshakeLayer) WriteMessage(hm *HandshakeMessage) (int, error) {
start := int(0)
if len(hm.body) > maxHandshakeMessageLen {
return 0, fmt.Errorf("Tried to write a handshake message that's too long")
}
written := 0
wrote := false
// Always make one pass through to allow EOED (which is empty).
for {
var err error
wrote, start, err = h.writeFragment(hm, start, h.maxFragmentLen)
if err != nil {
return 0, err
}
if wrote {
written++
}
if start >= len(hm.body) {
break
}
}
return written, nil
}
func (h *HandshakeLayer) WriteMessages(hms []*HandshakeMessage) (int, error) {
written := 0
for _, hm := range hms {
logf(logTypeHandshake, "WriteMessage [%d] %x", hm.msgType, hm.body)
wrote, err := h.WriteMessage(hm)
if err != nil {
return 0, err
}
written += wrote
}
return written, nil
}
func encodeUint(v uint64, size int, out []byte) []byte {
for i := size - 1; i >= 0; i-- {
out[i] = byte(v & 0xff)
v >>= 8
}
return out[size:]
}
func decodeUint(in []byte, size int) (uint64, []byte) {
val := uint64(0)
for i := 0; i < size; i++ {
val <<= 8
val += uint64(in[i])
}
return val, in[size:]
}
type marshalledPDU interface {
Marshal() ([]byte, error)
Unmarshal(data []byte) (int, error)
}
func safeUnmarshal(pdu marshalledPDU, data []byte) error {
read, err := pdu.Unmarshal(data)
if err != nil {
return err
}
if len(data) != read {
return fmt.Errorf("Invalid encoding: Extra data not consumed")
}
return nil
}

View File

@ -0,0 +1,481 @@
package mint
import (
"bytes"
"crypto"
"crypto/x509"
"encoding/binary"
"fmt"
"github.com/bifurcation/mint/syntax"
)
type HandshakeMessageBody interface {
Type() HandshakeType
Marshal() ([]byte, error)
Unmarshal(data []byte) (int, error)
}
// struct {
// ProtocolVersion legacy_version = 0x0303; /* TLS v1.2 */
// Random random;
// opaque legacy_session_id<0..32>;
// CipherSuite cipher_suites<2..2^16-2>;
// opaque legacy_compression_methods<1..2^8-1>;
// Extension extensions<0..2^16-1>;
// } ClientHello;
type ClientHelloBody struct {
LegacyVersion uint16
Random [32]byte
LegacySessionID []byte
CipherSuites []CipherSuite
Extensions ExtensionList
}
type clientHelloBodyInnerTLS struct {
LegacyVersion uint16
Random [32]byte
LegacySessionID []byte `tls:"head=1,max=32"`
CipherSuites []CipherSuite `tls:"head=2,min=2"`
LegacyCompressionMethods []byte `tls:"head=1,min=1"`
Extensions []Extension `tls:"head=2"`
}
type clientHelloBodyInnerDTLS struct {
LegacyVersion uint16
Random [32]byte
LegacySessionID []byte `tls:"head=1,max=32"`
EmptyCookie uint8
CipherSuites []CipherSuite `tls:"head=2,min=2"`
LegacyCompressionMethods []byte `tls:"head=1,min=1"`
Extensions []Extension `tls:"head=2"`
}
func (ch ClientHelloBody) Type() HandshakeType {
return HandshakeTypeClientHello
}
func (ch ClientHelloBody) Marshal() ([]byte, error) {
if ch.LegacyVersion == tls12Version {
return syntax.Marshal(clientHelloBodyInnerTLS{
LegacyVersion: ch.LegacyVersion,
Random: ch.Random,
LegacySessionID: []byte{},
CipherSuites: ch.CipherSuites,
LegacyCompressionMethods: []byte{0},
Extensions: ch.Extensions,
})
} else {
return syntax.Marshal(clientHelloBodyInnerDTLS{
LegacyVersion: ch.LegacyVersion,
Random: ch.Random,
LegacySessionID: []byte{},
CipherSuites: ch.CipherSuites,
LegacyCompressionMethods: []byte{0},
Extensions: ch.Extensions,
})
}
}
func (ch *ClientHelloBody) Unmarshal(data []byte) (int, error) {
var read int
var err error
// Note that ch.LegacyVersion might be 0 (unset), in which case we parse
// as TLS. That makes the tests easier.
if ch.LegacyVersion != dtls12WireVersion {
var inner clientHelloBodyInnerTLS
read, err = syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
if len(inner.LegacyCompressionMethods) != 1 || inner.LegacyCompressionMethods[0] != 0 {
return 0, fmt.Errorf("tls.clienthello: Invalid compression method")
}
ch.LegacyVersion = inner.LegacyVersion
ch.Random = inner.Random
ch.LegacySessionID = inner.LegacySessionID
ch.CipherSuites = inner.CipherSuites
ch.Extensions = inner.Extensions
} else {
var inner clientHelloBodyInnerDTLS
read, err = syntax.Unmarshal(data, &inner)
if err != nil {
return 0, err
}
if inner.EmptyCookie != 0 {
return 0, fmt.Errorf("tls.clienthello: Invalid cookie")
}
if len(inner.LegacyCompressionMethods) != 1 || inner.LegacyCompressionMethods[0] != 0 {
return 0, fmt.Errorf("tls.clienthello: Invalid compression method")
}
ch.LegacyVersion = inner.LegacyVersion
ch.Random = inner.Random
ch.LegacySessionID = inner.LegacySessionID
ch.CipherSuites = inner.CipherSuites
ch.Extensions = inner.Extensions
}
return read, nil
}
// TODO: File a spec bug to clarify this
func (ch ClientHelloBody) Truncated() ([]byte, error) {
if len(ch.Extensions) == 0 {
return nil, fmt.Errorf("tls.clienthello.truncate: No extensions")
}
pskExt := ch.Extensions[len(ch.Extensions)-1]
if pskExt.ExtensionType != ExtensionTypePreSharedKey {
return nil, fmt.Errorf("tls.clienthello.truncate: Last extension is not PSK")
}
body, err := ch.Marshal()
if err != nil {
return nil, err
}
chm := &HandshakeMessage{
msgType: ch.Type(),
body: body,
length: uint32(len(body)),
}
chData := chm.Marshal()
psk := PreSharedKeyExtension{
HandshakeType: HandshakeTypeClientHello,
}
_, err = psk.Unmarshal(pskExt.ExtensionData)
if err != nil {
return nil, err
}
// Marshal just the binders so that we know how much to truncate
binders := struct {
Binders []PSKBinderEntry `tls:"head=2,min=33"`
}{Binders: psk.Binders}
binderData, _ := syntax.Marshal(binders)
binderLen := len(binderData)
chLen := len(chData)
return chData[:chLen-binderLen], nil
}
// struct {
// ProtocolVersion legacy_version = 0x0303; /* TLS v1.2 */
// Random random;
// opaque legacy_session_id_echo<0..32>;
// CipherSuite cipher_suite;
// uint8 legacy_compression_method = 0;
// Extension extensions<6..2^16-1>;
// } ServerHello;
type ServerHelloBody struct {
Version uint16
Random [32]byte
LegacySessionID []byte `tls:"head=1,max=32"`
CipherSuite CipherSuite
LegacyCompressionMethod uint8
Extensions ExtensionList `tls:"head=2"`
}
func (sh ServerHelloBody) Type() HandshakeType {
return HandshakeTypeServerHello
}
func (sh ServerHelloBody) Marshal() ([]byte, error) {
return syntax.Marshal(sh)
}
func (sh *ServerHelloBody) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, sh)
}
// struct {
// opaque verify_data[verify_data_length];
// } Finished;
//
// verifyDataLen is not a field in the TLS struct, but we add it here so
// that calling code can tell us how much data to expect when we marshal /
// unmarshal. (We could add this to the marshal/unmarshal methods, but let's
// try to keep the signature consistent for now.)
//
// For similar reasons, we don't use the `syntax` module here, because this
// struct doesn't map well to standard TLS presentation language concepts.
//
// TODO: File a spec bug
type FinishedBody struct {
VerifyDataLen int
VerifyData []byte
}
func (fin FinishedBody) Type() HandshakeType {
return HandshakeTypeFinished
}
func (fin FinishedBody) Marshal() ([]byte, error) {
if len(fin.VerifyData) != fin.VerifyDataLen {
return nil, fmt.Errorf("tls.finished: data length mismatch")
}
body := make([]byte, len(fin.VerifyData))
copy(body, fin.VerifyData)
return body, nil
}
func (fin *FinishedBody) Unmarshal(data []byte) (int, error) {
if len(data) < fin.VerifyDataLen {
return 0, fmt.Errorf("tls.finished: Malformed finished; too short")
}
fin.VerifyData = make([]byte, fin.VerifyDataLen)
copy(fin.VerifyData, data[:fin.VerifyDataLen])
return fin.VerifyDataLen, nil
}
// struct {
// Extension extensions<0..2^16-1>;
// } EncryptedExtensions;
//
// Marshal() and Unmarshal() are handled by ExtensionList
type EncryptedExtensionsBody struct {
Extensions ExtensionList `tls:"head=2"`
}
func (ee EncryptedExtensionsBody) Type() HandshakeType {
return HandshakeTypeEncryptedExtensions
}
func (ee EncryptedExtensionsBody) Marshal() ([]byte, error) {
return syntax.Marshal(ee)
}
func (ee *EncryptedExtensionsBody) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, ee)
}
// opaque ASN1Cert<1..2^24-1>;
//
// struct {
// ASN1Cert cert_data;
// Extension extensions<0..2^16-1>
// } CertificateEntry;
//
// struct {
// opaque certificate_request_context<0..2^8-1>;
// CertificateEntry certificate_list<0..2^24-1>;
// } Certificate;
type CertificateEntry struct {
CertData *x509.Certificate
Extensions ExtensionList
}
type CertificateBody struct {
CertificateRequestContext []byte
CertificateList []CertificateEntry
}
type certificateEntryInner struct {
CertData []byte `tls:"head=3,min=1"`
Extensions ExtensionList `tls:"head=2"`
}
type certificateBodyInner struct {
CertificateRequestContext []byte `tls:"head=1"`
CertificateList []certificateEntryInner `tls:"head=3"`
}
func (c CertificateBody) Type() HandshakeType {
return HandshakeTypeCertificate
}
func (c CertificateBody) Marshal() ([]byte, error) {
inner := certificateBodyInner{
CertificateRequestContext: c.CertificateRequestContext,
CertificateList: make([]certificateEntryInner, len(c.CertificateList)),
}
for i, entry := range c.CertificateList {
inner.CertificateList[i] = certificateEntryInner{
CertData: entry.CertData.Raw,
Extensions: entry.Extensions,
}
}
return syntax.Marshal(inner)
}
func (c *CertificateBody) Unmarshal(data []byte) (int, error) {
inner := certificateBodyInner{}
read, err := syntax.Unmarshal(data, &inner)
if err != nil {
return read, err
}
c.CertificateRequestContext = inner.CertificateRequestContext
c.CertificateList = make([]CertificateEntry, len(inner.CertificateList))
for i, entry := range inner.CertificateList {
c.CertificateList[i].CertData, err = x509.ParseCertificate(entry.CertData)
if err != nil {
return 0, fmt.Errorf("tls:certificate: Certificate failed to parse: %v", err)
}
c.CertificateList[i].Extensions = entry.Extensions
}
return read, nil
}
// struct {
// SignatureScheme algorithm;
// opaque signature<0..2^16-1>;
// } CertificateVerify;
type CertificateVerifyBody struct {
Algorithm SignatureScheme
Signature []byte `tls:"head=2"`
}
func (cv CertificateVerifyBody) Type() HandshakeType {
return HandshakeTypeCertificateVerify
}
func (cv CertificateVerifyBody) Marshal() ([]byte, error) {
return syntax.Marshal(cv)
}
func (cv *CertificateVerifyBody) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, cv)
}
func (cv *CertificateVerifyBody) EncodeSignatureInput(data []byte) []byte {
// TODO: Change context for client auth
// TODO: Put this in a const
const context = "TLS 1.3, server CertificateVerify"
sigInput := bytes.Repeat([]byte{0x20}, 64)
sigInput = append(sigInput, []byte(context)...)
sigInput = append(sigInput, []byte{0}...)
sigInput = append(sigInput, data...)
return sigInput
}
func (cv *CertificateVerifyBody) Sign(privateKey crypto.Signer, handshakeHash []byte) (err error) {
sigInput := cv.EncodeSignatureInput(handshakeHash)
cv.Signature, err = sign(cv.Algorithm, privateKey, sigInput)
logf(logTypeHandshake, "Signed: alg=[%04x] sigInput=[%x], sig=[%x]", cv.Algorithm, sigInput, cv.Signature)
return
}
func (cv *CertificateVerifyBody) Verify(publicKey crypto.PublicKey, handshakeHash []byte) error {
sigInput := cv.EncodeSignatureInput(handshakeHash)
logf(logTypeHandshake, "About to verify: alg=[%04x] sigInput=[%x], sig=[%x]", cv.Algorithm, sigInput, cv.Signature)
return verify(cv.Algorithm, publicKey, sigInput, cv.Signature)
}
// struct {
// opaque certificate_request_context<0..2^8-1>;
// Extension extensions<2..2^16-1>;
// } CertificateRequest;
type CertificateRequestBody struct {
CertificateRequestContext []byte `tls:"head=1"`
Extensions ExtensionList `tls:"head=2"`
}
func (cr CertificateRequestBody) Type() HandshakeType {
return HandshakeTypeCertificateRequest
}
func (cr CertificateRequestBody) Marshal() ([]byte, error) {
return syntax.Marshal(cr)
}
func (cr *CertificateRequestBody) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, cr)
}
// struct {
// uint32 ticket_lifetime;
// uint32 ticket_age_add;
// opaque ticket_nonce<1..255>;
// opaque ticket<1..2^16-1>;
// Extension extensions<0..2^16-2>;
// } NewSessionTicket;
type NewSessionTicketBody struct {
TicketLifetime uint32
TicketAgeAdd uint32
TicketNonce []byte `tls:"head=1"`
Ticket []byte `tls:"head=2,min=1"`
Extensions ExtensionList `tls:"head=2"`
}
const ticketNonceLen = 16
func NewSessionTicket(ticketLen int, ticketLifetime uint32) (*NewSessionTicketBody, error) {
buf := make([]byte, 4+ticketNonceLen+ticketLen)
_, err := prng.Read(buf)
if err != nil {
return nil, err
}
tkt := &NewSessionTicketBody{
TicketLifetime: ticketLifetime,
TicketAgeAdd: binary.BigEndian.Uint32(buf[:4]),
TicketNonce: buf[4 : 4+ticketNonceLen],
Ticket: buf[4+ticketNonceLen:],
}
return tkt, err
}
func (tkt NewSessionTicketBody) Type() HandshakeType {
return HandshakeTypeNewSessionTicket
}
func (tkt NewSessionTicketBody) Marshal() ([]byte, error) {
return syntax.Marshal(tkt)
}
func (tkt *NewSessionTicketBody) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, tkt)
}
// enum {
// update_not_requested(0), update_requested(1), (255)
// } KeyUpdateRequest;
//
// struct {
// KeyUpdateRequest request_update;
// } KeyUpdate;
type KeyUpdateBody struct {
KeyUpdateRequest KeyUpdateRequest
}
func (ku KeyUpdateBody) Type() HandshakeType {
return HandshakeTypeKeyUpdate
}
func (ku KeyUpdateBody) Marshal() ([]byte, error) {
return syntax.Marshal(ku)
}
func (ku *KeyUpdateBody) Unmarshal(data []byte) (int, error) {
return syntax.Unmarshal(data, ku)
}
// struct {} EndOfEarlyData;
type EndOfEarlyDataBody struct{}
func (eoed EndOfEarlyDataBody) Type() HandshakeType {
return HandshakeTypeEndOfEarlyData
}
func (eoed EndOfEarlyDataBody) Marshal() ([]byte, error) {
return []byte{}, nil
}
func (eoed *EndOfEarlyDataBody) Unmarshal(data []byte) (int, error) {
return 0, nil
}

55
vendor/github.com/bifurcation/mint/log.go generated vendored Normal file
View File

@ -0,0 +1,55 @@
package mint
import (
"fmt"
"log"
"os"
"strings"
)
// We use this environment variable to control logging. It should be a
// comma-separated list of log tags (see below) or "*" to enable all logging.
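// For example, MINT_LOG=handshake,io enables only those tags, while
// MINT_LOG=* enables everything.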
const logConfigVar = "MINT_LOG"
// Pre-defined log types
const (
logTypeCrypto = "crypto"
logTypeHandshake = "handshake"
logTypeNegotiation = "negotiation"
logTypeIO = "io"
logTypeFrameReader = "frame"
logTypeVerbose = "verbose"
)
var (
logFunction = log.Printf
logAll = false
logSettings = map[string]bool{}
)
func init() {
parseLogEnv(os.Environ())
}
func parseLogEnv(env []string) {
for _, stmt := range env {
if strings.HasPrefix(stmt, logConfigVar+"=") {
val := stmt[len(logConfigVar)+1:]
if val == "*" {
logAll = true
} else {
for _, t := range strings.Split(val, ",") {
logSettings[t] = true
}
}
}
}
}
func logf(tag string, format string, args ...interface{}) {
if logAll || logSettings[tag] {
fullFormat := fmt.Sprintf("[%s] %s", tag, format)
logFunction(fullFormat, args...)
}
}

101
vendor/github.com/bifurcation/mint/mint.svg generated vendored Normal file

File diff suppressed because one or more lines are too long


218
vendor/github.com/bifurcation/mint/negotiation.go generated vendored Normal file
View File

@ -0,0 +1,218 @@
package mint
import (
"bytes"
"encoding/hex"
"fmt"
"time"
)
func VersionNegotiation(offered, supported []uint16) (bool, uint16) {
for _, offeredVersion := range offered {
for _, tls13Version := range supported {
logf(logTypeHandshake, "[server] version offered by client [%04x] <> [%04x]", offeredVersion, tls13Version)
if offeredVersion == tls13Version {
// XXX: Should probably be highest supported version, but for now, we
// only support one version, so it doesn't really matter.
return true, offeredVersion
}
}
}
return false, 0
}
func DHNegotiation(keyShares []KeyShareEntry, groups []NamedGroup) (bool, NamedGroup, []byte, []byte) {
for _, share := range keyShares {
for _, group := range groups {
if group != share.Group {
continue
}
pub, priv, err := newKeyShare(share.Group)
if err != nil {
// If we encounter an error, just keep looking
continue
}
dhSecret, err := keyAgreement(share.Group, share.KeyExchange, priv)
if err != nil {
// If we encounter an error, just keep looking
continue
}
return true, group, pub, dhSecret
}
}
return false, 0, nil, nil
}
const (
ticketAgeTolerance uint32 = 5 * 1000 // five seconds in milliseconds
)
func PSKNegotiation(identities []PSKIdentity, binders []PSKBinderEntry, context []byte, psks PreSharedKeyCache) (bool, int, *PreSharedKey, CipherSuiteParams, error) {
logf(logTypeNegotiation, "Negotiating PSK offered=[%d] supported=[%d]", len(identities), psks.Size())
for i, id := range identities {
identityHex := hex.EncodeToString(id.Identity)
psk, ok := psks.Get(identityHex)
if !ok {
logf(logTypeNegotiation, "No PSK for identity %x", identityHex)
continue
}
// For resumption, make sure the ticket age is correct
if psk.IsResumption {
extTicketAge := id.ObfuscatedTicketAge - psk.TicketAgeAdd
knownTicketAge := uint32(time.Since(psk.ReceivedAt) / time.Millisecond)
ticketAgeDelta := knownTicketAge - extTicketAge
if knownTicketAge < extTicketAge {
ticketAgeDelta = extTicketAge - knownTicketAge
}
if ticketAgeDelta > ticketAgeTolerance {
logf(logTypeNegotiation, "WARNING potential replay [%x]", psk.Identity)
logf(logTypeNegotiation, "Ticket age exceeds tolerance |%d - %d| = [%d] > [%d]",
extTicketAge, knownTicketAge, ticketAgeDelta, ticketAgeTolerance)
return false, 0, nil, CipherSuiteParams{}, fmt.Errorf("WARNING Potential replay for identity %x", psk.Identity)
}
}
params, ok := cipherSuiteMap[psk.CipherSuite]
if !ok {
err := fmt.Errorf("tls.cryptoinit: Unsupported ciphersuite from PSK [%04x]", psk.CipherSuite)
return false, 0, nil, CipherSuiteParams{}, err
}
// Compute binder
binderLabel := labelExternalBinder
if psk.IsResumption {
binderLabel = labelResumptionBinder
}
h0 := params.Hash.New().Sum(nil)
zero := bytes.Repeat([]byte{0}, params.Hash.Size())
earlySecret := HkdfExtract(params.Hash, zero, psk.Key)
binderKey := deriveSecret(params, earlySecret, binderLabel, h0)
// context = ClientHello[truncated]
// context = ClientHello1 + HelloRetryRequest + ClientHello2[truncated]
ctxHash := params.Hash.New()
ctxHash.Write(context)
binder := computeFinishedData(params, binderKey, ctxHash.Sum(nil))
if !bytes.Equal(binder, binders[i].Binder) {
logf(logTypeNegotiation, "Binder check failed for identity %x; [%x] != [%x]", psk.Identity, binder, binders[i].Binder)
return false, 0, nil, CipherSuiteParams{}, fmt.Errorf("Binder check failed identity %x", psk.Identity)
}
logf(logTypeNegotiation, "Using PSK with identity %x", psk.Identity)
return true, i, &psk, params, nil
}
logf(logTypeNegotiation, "Failed to find a usable PSK")
return false, 0, nil, CipherSuiteParams{}, nil
}
func PSKModeNegotiation(canDoDH, canDoPSK bool, modes []PSKKeyExchangeMode) (bool, bool) {
logf(logTypeNegotiation, "Negotiating PSK modes [%v] [%v] [%+v]", canDoDH, canDoPSK, modes)
dhAllowed := false
dhRequired := true
for _, mode := range modes {
dhAllowed = dhAllowed || (mode == PSKModeDHEKE)
dhRequired = dhRequired && (mode == PSKModeDHEKE)
}
// Use PSK if we can meet DH requirement and modes were provided
usingPSK := canDoPSK && (!dhRequired || canDoDH) && (len(modes) > 0)
// Use DH if allowed
usingDH := canDoDH && (dhAllowed || !usingPSK)
logf(logTypeNegotiation, "Results of PSK mode negotiation: usingDH=[%v] usingPSK=[%v]", usingDH, usingPSK)
return usingDH, usingPSK
}
func CertificateSelection(serverName *string, signatureSchemes []SignatureScheme, certs []*Certificate) (*Certificate, SignatureScheme, error) {
// Select for server name if provided
candidates := certs
if serverName != nil {
candidatesByName := []*Certificate{}
for _, cert := range certs {
for _, name := range cert.Chain[0].DNSNames {
if len(*serverName) > 0 && name == *serverName {
candidatesByName = append(candidatesByName, cert)
}
}
}
if len(candidatesByName) == 0 {
return nil, 0, fmt.Errorf("No certificates available for server name: %s", *serverName)
}
candidates = candidatesByName
}
// Select for signature scheme
for _, cert := range candidates {
for _, scheme := range signatureSchemes {
if !schemeValidForKey(scheme, cert.PrivateKey) {
continue
}
return cert, scheme, nil
}
}
return nil, 0, fmt.Errorf("No certificates compatible with signature schemes")
}
func EarlyDataNegotiation(usingPSK, gotEarlyData, allowEarlyData bool) (using bool, rejected bool) {
using = gotEarlyData && usingPSK && allowEarlyData
rejected = gotEarlyData && !using
logf(logTypeNegotiation, "Early data negotiation (%v, %v, %v) => %v, %v", usingPSK, gotEarlyData, allowEarlyData, using, rejected)
return
}
func CipherSuiteNegotiation(psk *PreSharedKey, offered, supported []CipherSuite) (CipherSuite, error) {
for _, s1 := range offered {
if psk != nil {
if s1 == psk.CipherSuite {
return s1, nil
}
continue
}
for _, s2 := range supported {
if s1 == s2 {
return s1, nil
}
}
}
return 0, fmt.Errorf("No overlap between offered and supproted ciphersuites (psk? [%v])", psk != nil)
}
func ALPNNegotiation(psk *PreSharedKey, offered, supported []string) (string, error) {
for _, p1 := range offered {
if psk != nil {
if p1 != psk.NextProto {
continue
}
}
for _, p2 := range supported {
if p1 == p2 {
return p1, nil
}
}
}
// If the client offers ALPN on resumption, it must match the earlier one
var err error
if psk != nil && psk.IsResumption && (len(offered) > 0) {
err = fmt.Errorf("ALPN for PSK not provided")
}
return "", err
}

465
vendor/github.com/bifurcation/mint/record-layer.go generated vendored Normal file
View File

@ -0,0 +1,465 @@
package mint
import (
"crypto/cipher"
"fmt"
"io"
"sync"
)
const (
sequenceNumberLen = 8 // sequence number length
recordHeaderLenTLS = 5 // record header length (TLS)
recordHeaderLenDTLS = 13 // record header length (DTLS)
maxFragmentLen = 1 << 14 // max number of bytes in a record
)
type DecryptError string
func (err DecryptError) Error() string {
return string(err)
}
type direction uint8
const (
directionWrite = direction(1)
directionRead = direction(2)
)
// struct {
// ContentType type;
// ProtocolVersion record_version [0301 for CH, 0303 for others]
// uint16 length;
// opaque fragment[TLSPlaintext.length];
// } TLSPlaintext;
type TLSPlaintext struct {
// Omitted: record_version (static)
// Omitted: length (computed from fragment)
contentType RecordType
epoch Epoch
seq uint64
fragment []byte
}
type cipherState struct {
epoch Epoch // DTLS epoch
ivLength int // Length of the seq and nonce fields
seq uint64 // Zero-padded sequence number
iv []byte // Buffer for the IV
cipher cipher.AEAD // AEAD cipher
}
type RecordLayer struct {
sync.Mutex
label string
direction direction
version uint16 // The current version number
conn io.ReadWriter // The underlying connection
frame *frameReader // The buffered frame reader
nextData []byte // The next record to send
cachedRecord *TLSPlaintext // Last record read, cached to enable "peek"
cachedError error // Error on the last record read
cipher *cipherState
readCiphers map[Epoch]*cipherState
datagram bool
}
type recordLayerFrameDetails struct {
datagram bool
}
func (d recordLayerFrameDetails) headerLen() int {
if d.datagram {
return recordHeaderLenDTLS
}
return recordHeaderLenTLS
}
func (d recordLayerFrameDetails) defaultReadLen() int {
return d.headerLen() + maxFragmentLen
}
func (d recordLayerFrameDetails) frameLen(hdr []byte) (int, error) {
return (int(hdr[d.headerLen()-2]) << 8) | int(hdr[d.headerLen()-1]), nil
}
func newCipherStateNull() *cipherState {
return &cipherState{EpochClear, 0, 0, nil, nil}
}
func newCipherStateAead(epoch Epoch, factory aeadFactory, key []byte, iv []byte) (*cipherState, error) {
cipher, err := factory(key)
if err != nil {
return nil, err
}
return &cipherState{epoch, len(iv), 0, iv, cipher}, nil
}
func NewRecordLayerTLS(conn io.ReadWriter, dir direction) *RecordLayer {
r := RecordLayer{}
r.label = ""
r.direction = dir
r.conn = conn
r.frame = newFrameReader(recordLayerFrameDetails{false})
r.cipher = newCipherStateNull()
r.version = tls10Version
return &r
}
func NewRecordLayerDTLS(conn io.ReadWriter, dir direction) *RecordLayer {
r := RecordLayer{}
r.label = ""
r.direction = dir
r.conn = conn
r.frame = newFrameReader(recordLayerFrameDetails{true})
r.cipher = newCipherStateNull()
r.readCiphers = make(map[Epoch]*cipherState, 0)
r.readCiphers[0] = r.cipher
r.datagram = true
return &r
}
func (r *RecordLayer) SetVersion(v uint16) {
r.version = v
}
func (r *RecordLayer) ResetClear(seq uint64) {
r.cipher = newCipherStateNull()
r.cipher.seq = seq
}
func (r *RecordLayer) Rekey(epoch Epoch, factory aeadFactory, key []byte, iv []byte) error {
cipher, err := newCipherStateAead(epoch, factory, key, iv)
if err != nil {
return err
}
r.cipher = cipher
if r.datagram && r.direction == directionRead {
r.readCiphers[epoch] = cipher
}
return nil
}
// TODO(ekr@rtfm.com): This is never used, which is a bug.
func (r *RecordLayer) DiscardReadKey(epoch Epoch) {
if !r.datagram {
return
}
_, ok := r.readCiphers[epoch]
assert(ok)
delete(r.readCiphers, epoch)
}
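// combineSeq returns the plain sequence number for TLS; for DTLS it folds
// the 16-bit epoch into the top 16 bits of the 64-bit value.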
func (c *cipherState) combineSeq(datagram bool) uint64 {
seq := c.seq
if datagram {
seq |= uint64(c.epoch) << 48
}
return seq
}
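// computeNonce XORs the big-endian sequence number into the low-order bytes
// of the per-record IV (the TLS 1.3 nonce construction).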
func (c *cipherState) computeNonce(seq uint64) []byte {
nonce := make([]byte, len(c.iv))
copy(nonce, c.iv)
s := seq
offset := len(c.iv)
for i := 0; i < 8; i++ {
nonce[(offset-i)-1] ^= byte(s & 0xff)
s >>= 8
}
logf(logTypeCrypto, "Computing nonce for sequence # %x -> %x", seq, nonce)
return nonce
}
func (c *cipherState) incrementSequenceNumber() {
if c.seq >= (1<<48 - 1) {
// Not allowed to let sequence number wrap.
// Instead, must renegotiate before it does.
// Not likely enough to bother. This is the
// DTLS limit.
panic("TLS: sequence number wraparound")
}
c.seq++
}
func (c *cipherState) overhead() int {
if c.cipher == nil {
return 0
}
return c.cipher.Overhead()
}
func (r *RecordLayer) encrypt(cipher *cipherState, seq uint64, header []byte, pt *TLSPlaintext, padLen int) []byte {
assert(r.direction == directionWrite)
logf(logTypeIO, "%s Encrypt seq=[%x]", r.label, seq)
// Expand the fragment to hold contentType, padding, and overhead
originalLen := len(pt.fragment)
plaintextLen := originalLen + 1 + padLen
ciphertextLen := plaintextLen + cipher.overhead()
ciphertext := make([]byte, ciphertextLen)
copy(ciphertext, pt.fragment)
ciphertext[originalLen] = byte(pt.contentType)
for i := 1; i <= padLen; i++ {
ciphertext[originalLen+i] = 0
}
// Encrypt the fragment
payload := ciphertext[:plaintextLen]
cipher.cipher.Seal(payload[:0], cipher.computeNonce(seq), payload, header)
return ciphertext
}
func (r *RecordLayer) decrypt(seq uint64, header []byte, pt *TLSPlaintext) (*TLSPlaintext, int, error) {
assert(r.direction == directionRead)
logf(logTypeIO, "%s Decrypt seq=[%x]", r.label, seq)
if len(pt.fragment) < r.cipher.overhead() {
msg := fmt.Sprintf("tls.record.decrypt: Record too short [%d] < [%d]", len(pt.fragment), r.cipher.overhead())
return nil, 0, DecryptError(msg)
}
decryptLen := len(pt.fragment) - r.cipher.overhead()
out := &TLSPlaintext{
contentType: pt.contentType,
fragment: make([]byte, decryptLen),
}
// Decrypt
_, err := r.cipher.cipher.Open(out.fragment[:0], r.cipher.computeNonce(seq), pt.fragment, header)
if err != nil {
logf(logTypeIO, "%s AEAD decryption failure [%x]", r.label, pt)
return nil, 0, DecryptError("tls.record.decrypt: AEAD decrypt failed")
}
// Find the padding boundary
padLen := 0
for ; padLen < decryptLen+1 && out.fragment[decryptLen-padLen-1] == 0; padLen++ {
}
// Transfer the content type
newLen := decryptLen - padLen - 1
out.contentType = RecordType(out.fragment[newLen])
// Truncate the message to remove contentType, padding, overhead
out.fragment = out.fragment[:newLen]
out.seq = seq
return out, padLen, nil
}
func (r *RecordLayer) PeekRecordType(block bool) (RecordType, error) {
var pt *TLSPlaintext
var err error
for {
pt, err = r.nextRecord(false)
if err == nil {
break
}
if !block || err != AlertWouldBlock {
return 0, err
}
}
return pt.contentType, nil
}
func (r *RecordLayer) ReadRecord() (*TLSPlaintext, error) {
pt, err := r.nextRecord(false)
// Consume the cached record if there was one
r.cachedRecord = nil
r.cachedError = nil
return pt, err
}
func (r *RecordLayer) readRecordAnyEpoch() (*TLSPlaintext, error) {
pt, err := r.nextRecord(true)
// Consume the cached record if there was one
r.cachedRecord = nil
r.cachedError = nil
return pt, err
}
func (r *RecordLayer) nextRecord(allowOldEpoch bool) (*TLSPlaintext, error) {
cipher := r.cipher
if r.cachedRecord != nil {
logf(logTypeIO, "%s Returning cached record", r.label)
return r.cachedRecord, r.cachedError
}
// Loop until one of three things happens:
//
// 1. We get a frame
// 2. We try to read off the socket and get nothing, in which case
// return AlertWouldBlock
// 3. We get an error.
var err error
err = AlertWouldBlock
var header, body []byte
for err != nil {
if r.frame.needed() > 0 {
buf := make([]byte, r.frame.details.headerLen()+maxFragmentLen)
n, err := r.conn.Read(buf)
if err != nil {
logf(logTypeIO, "%s Error reading, %v", r.label, err)
return nil, err
}
if n == 0 {
return nil, AlertWouldBlock
}
logf(logTypeIO, "%s Read %v bytes", r.label, n)
buf = buf[:n]
r.frame.addChunk(buf)
}
header, body, err = r.frame.process()
// Loop around on AlertWouldBlock to see if some
// data is now available.
if err != nil && err != AlertWouldBlock {
return nil, err
}
}
pt := &TLSPlaintext{}
// Validate content type
switch RecordType(header[0]) {
default:
return nil, fmt.Errorf("tls.record: Unknown content type %02x", header[0])
case RecordTypeAlert, RecordTypeHandshake, RecordTypeApplicationData, RecordTypeAck:
pt.contentType = RecordType(header[0])
}
// Validate version
if !allowWrongVersionNumber && (header[1] != 0x03 || header[2] != 0x01) {
return nil, fmt.Errorf("tls.record: Invalid version %02x%02x", header[1], header[2])
}
// Validate size < max
size := (int(header[len(header)-2]) << 8) + int(header[len(header)-1])
if size > maxFragmentLen+256 {
return nil, fmt.Errorf("tls.record: Ciphertext size too big")
}
pt.fragment = make([]byte, size)
copy(pt.fragment, body)
// TODO(ekr@rtfm.com): Enforce that for epoch > 0, the content type is app data.
// Attempt to decrypt fragment
seq := cipher.seq
if r.datagram {
// TODO(ekr@rtfm.com): Handle duplicates.
seq, _ = decodeUint(header[3:11], 8)
epoch := Epoch(seq >> 48)
// Look up the cipher suite from the epoch
c, ok := r.readCiphers[epoch]
if !ok {
logf(logTypeIO, "%s Message from unknown epoch: [%v]", r.label, epoch)
return nil, AlertWouldBlock
}
if epoch != cipher.epoch {
logf(logTypeIO, "%s Message from non-current epoch: [%v != %v] out-of-epoch reads=%v", r.label, epoch,
cipher.epoch, allowOldEpoch)
if !allowOldEpoch {
return nil, AlertWouldBlock
}
cipher = c
}
}
if cipher.cipher != nil {
logf(logTypeIO, "%s RecordLayer.ReadRecord epoch=[%s] seq=[%x] [%d] ciphertext=[%x]", r.label, cipher.epoch.label(), seq, pt.contentType, pt.fragment)
pt, _, err = r.decrypt(seq, header, pt)
if err != nil {
logf(logTypeIO, "%s Decryption failed", r.label)
return nil, err
}
}
pt.epoch = cipher.epoch
// Check that plaintext length is not too long
if len(pt.fragment) > maxFragmentLen {
return nil, fmt.Errorf("tls.record: Plaintext size too big")
}
logf(logTypeIO, "%s RecordLayer.ReadRecord [%d] [%x]", r.label, pt.contentType, pt.fragment)
r.cachedRecord = pt
cipher.incrementSequenceNumber()
return pt, nil
}
func (r *RecordLayer) WriteRecord(pt *TLSPlaintext) error {
return r.writeRecordWithPadding(pt, r.cipher, 0)
}
func (r *RecordLayer) WriteRecordWithPadding(pt *TLSPlaintext, padLen int) error {
return r.writeRecordWithPadding(pt, r.cipher, padLen)
}
func (r *RecordLayer) writeRecordWithPadding(pt *TLSPlaintext, cipher *cipherState, padLen int) error {
seq := cipher.combineSeq(r.datagram)
length := len(pt.fragment)
var contentType RecordType
if cipher.cipher != nil {
length += 1 + padLen + cipher.cipher.Overhead()
contentType = RecordTypeApplicationData
} else {
contentType = pt.contentType
}
var header []byte
if !r.datagram {
header = []byte{byte(contentType),
byte(r.version >> 8), byte(r.version & 0xff),
byte(length >> 8), byte(length)}
} else {
header = make([]byte, 13)
version := dtlsConvertVersion(r.version)
copy(header, []byte{byte(contentType),
byte(version >> 8), byte(version & 0xff),
})
encodeUint(seq, 8, header[3:])
encodeUint(uint64(length), 2, header[11:])
}
var ciphertext []byte
if cipher.cipher != nil {
logf(logTypeIO, "%s RecordLayer.WriteRecord epoch=[%s] seq=[%x] [%d] plaintext=[%x]", r.label, cipher.epoch.label(), cipher.seq, pt.contentType, pt.fragment)
ciphertext = r.encrypt(cipher, seq, header, pt, padLen)
} else {
if padLen > 0 {
return fmt.Errorf("tls.record: Padding can only be done on encrypted records")
}
ciphertext = pt.fragment
}
if len(ciphertext) > maxFragmentLen {
return fmt.Errorf("tls.record: Record size too big")
}
record := append(header, ciphertext...)
logf(logTypeIO, "%s RecordLayer.WriteRecord epoch=[%s] seq=[%x] [%d] ciphertext=[%x]", r.label, cipher.epoch.label(), cipher.seq, contentType, ciphertext)
cipher.incrementSequenceNumber()
_, err := r.conn.Write(record)
return err
}

File diff suppressed because it is too large

247
vendor/github.com/bifurcation/mint/state-machine.go generated vendored Normal file
View File

@ -0,0 +1,247 @@
package mint
import (
"crypto/x509"
"time"
)
// Marker interface for actions that an implementation should take based on
// state transitions.
type HandshakeAction interface{}
type QueueHandshakeMessage struct {
Message *HandshakeMessage
}
type SendQueuedHandshake struct{}
type SendEarlyData struct{}
type RekeyIn struct {
epoch Epoch
KeySet keySet
}
type RekeyOut struct {
epoch Epoch
KeySet keySet
}
type ResetOut struct {
seq uint64
}
type StorePSK struct {
PSK PreSharedKey
}
type HandshakeState interface {
Next(handshakeMessageReader) (HandshakeState, []HandshakeAction, Alert)
State() State
}
type AppExtensionHandler interface {
Send(hs HandshakeType, el *ExtensionList) error
Receive(hs HandshakeType, el *ExtensionList) error
}
// ConnectionOptions objects represent per-connection settings for a client
// initiating a connection
type ConnectionOptions struct {
ServerName string
NextProtos []string
}
// ConnectionParameters objects represent the parameters negotiated for a
// connection.
type ConnectionParameters struct {
UsingPSK bool
UsingDH bool
ClientSendingEarlyData bool
UsingEarlyData bool
RejectedEarlyData bool
UsingClientAuth bool
CipherSuite CipherSuite
ServerName string
NextProto string
}
// Working state for the handshake.
type HandshakeContext struct {
timeoutMS uint32
timers *timerSet
recvdRecords []uint64
sentFragments []*SentHandshakeFragment
hIn, hOut *HandshakeLayer
waitingNextFlight bool
earlyData []byte
}
func (hc *HandshakeContext) SetVersion(version uint16) {
if hc.hIn.conn != nil {
hc.hIn.conn.SetVersion(version)
}
if hc.hOut.conn != nil {
hc.hOut.conn.SetVersion(version)
}
}
// stateConnected is symmetric between client and server
type stateConnected struct {
Params ConnectionParameters
hsCtx *HandshakeContext
isClient bool
cryptoParams CipherSuiteParams
resumptionSecret []byte
clientTrafficSecret []byte
serverTrafficSecret []byte
exporterSecret []byte
peerCertificates []*x509.Certificate
verifiedChains [][]*x509.Certificate
}
var _ HandshakeState = &stateConnected{}
func (state stateConnected) State() State {
if state.isClient {
return StateClientConnected
}
return StateServerConnected
}
func (state *stateConnected) KeyUpdate(request KeyUpdateRequest) ([]HandshakeAction, Alert) {
var trafficKeys keySet
if state.isClient {
state.clientTrafficSecret = HkdfExpandLabel(state.cryptoParams.Hash, state.clientTrafficSecret,
labelClientApplicationTrafficSecret, []byte{}, state.cryptoParams.Hash.Size())
trafficKeys = makeTrafficKeys(state.cryptoParams, state.clientTrafficSecret)
} else {
state.serverTrafficSecret = HkdfExpandLabel(state.cryptoParams.Hash, state.serverTrafficSecret,
labelServerApplicationTrafficSecret, []byte{}, state.cryptoParams.Hash.Size())
trafficKeys = makeTrafficKeys(state.cryptoParams, state.serverTrafficSecret)
}
kum, err := state.hsCtx.hOut.HandshakeMessageFromBody(&KeyUpdateBody{KeyUpdateRequest: request})
if err != nil {
logf(logTypeHandshake, "[StateConnected] Error marshaling key update message: %v", err)
return nil, AlertInternalError
}
toSend := []HandshakeAction{
QueueHandshakeMessage{kum},
SendQueuedHandshake{},
RekeyOut{epoch: EpochUpdate, KeySet: trafficKeys},
}
return toSend, AlertNoAlert
}
func (state *stateConnected) NewSessionTicket(length int, lifetime, earlyDataLifetime uint32) ([]HandshakeAction, Alert) {
tkt, err := NewSessionTicket(length, lifetime)
if err != nil {
logf(logTypeHandshake, "[StateConnected] Error generating NewSessionTicket: %v", err)
return nil, AlertInternalError
}
err = tkt.Extensions.Add(&TicketEarlyDataInfoExtension{earlyDataLifetime})
if err != nil {
logf(logTypeHandshake, "[StateConnected] Error adding extension to NewSessionTicket: %v", err)
return nil, AlertInternalError
}
resumptionKey := HkdfExpandLabel(state.cryptoParams.Hash, state.resumptionSecret,
labelResumption, tkt.TicketNonce, state.cryptoParams.Hash.Size())
newPSK := PreSharedKey{
CipherSuite: state.cryptoParams.Suite,
IsResumption: true,
Identity: tkt.Ticket,
Key: resumptionKey,
NextProto: state.Params.NextProto,
ReceivedAt: time.Now(),
ExpiresAt: time.Now().Add(time.Duration(tkt.TicketLifetime) * time.Second),
TicketAgeAdd: tkt.TicketAgeAdd,
}
tktm, err := state.hsCtx.hOut.HandshakeMessageFromBody(tkt)
if err != nil {
logf(logTypeHandshake, "[StateConnected] Error marshaling NewSessionTicket: %v", err)
return nil, AlertInternalError
}
toSend := []HandshakeAction{
StorePSK{newPSK},
QueueHandshakeMessage{tktm},
SendQueuedHandshake{},
}
return toSend, AlertNoAlert
}
// Next does nothing for this state.
func (state stateConnected) Next(hr handshakeMessageReader) (HandshakeState, []HandshakeAction, Alert) {
return state, nil, AlertNoAlert
}
func (state stateConnected) ProcessMessage(hm *HandshakeMessage) (HandshakeState, []HandshakeAction, Alert) {
if hm == nil {
logf(logTypeHandshake, "[StateConnected] Unexpected message")
return nil, nil, AlertUnexpectedMessage
}
bodyGeneric, err := hm.ToBody()
if err != nil {
logf(logTypeHandshake, "[StateConnected] Error decoding message: %v", err)
return nil, nil, AlertDecodeError
}
switch body := bodyGeneric.(type) {
case *KeyUpdateBody:
var trafficKeys keySet
if !state.isClient {
state.clientTrafficSecret = HkdfExpandLabel(state.cryptoParams.Hash, state.clientTrafficSecret,
labelClientApplicationTrafficSecret, []byte{}, state.cryptoParams.Hash.Size())
trafficKeys = makeTrafficKeys(state.cryptoParams, state.clientTrafficSecret)
} else {
state.serverTrafficSecret = HkdfExpandLabel(state.cryptoParams.Hash, state.serverTrafficSecret,
labelServerApplicationTrafficSecret, []byte{}, state.cryptoParams.Hash.Size())
trafficKeys = makeTrafficKeys(state.cryptoParams, state.serverTrafficSecret)
}
toSend := []HandshakeAction{RekeyIn{epoch: EpochUpdate, KeySet: trafficKeys}}
// If requested, roll outbound keys and send a KeyUpdate
if body.KeyUpdateRequest == KeyUpdateRequested {
logf(logTypeHandshake, "Received key update, update requested", body.KeyUpdateRequest)
moreToSend, alert := state.KeyUpdate(KeyUpdateNotRequested)
if alert != AlertNoAlert {
return nil, nil, alert
}
toSend = append(toSend, moreToSend...)
}
return state, toSend, AlertNoAlert
case *NewSessionTicketBody:
// XXX: Allow NewSessionTicket in both directions?
if !state.isClient {
return nil, nil, AlertUnexpectedMessage
}
resumptionKey := HkdfExpandLabel(state.cryptoParams.Hash, state.resumptionSecret,
labelResumption, body.TicketNonce, state.cryptoParams.Hash.Size())
psk := PreSharedKey{
CipherSuite: state.cryptoParams.Suite,
IsResumption: true,
Identity: body.Ticket,
Key: resumptionKey,
NextProto: state.Params.NextProto,
ReceivedAt: time.Now(),
ExpiresAt: time.Now().Add(time.Duration(body.TicketLifetime) * time.Second),
TicketAgeAdd: body.TicketAgeAdd,
}
toSend := []HandshakeAction{StorePSK{psk}}
return state, toSend, AlertNoAlert
}
logf(logTypeHandshake, "[StateConnected] Unexpected message type %v", hm.msgType)
return nil, nil, AlertUnexpectedMessage
}

84
vendor/github.com/bifurcation/mint/syntax/README.md generated vendored Normal file
View File

@ -0,0 +1,84 @@
TLS Syntax
==========
TLS defines [its own syntax](https://tlswg.github.io/tls13-spec/#rfc.section.3)
for describing structures used in that protocol. To facilitate experimentation
with TLS in Go, this module maps that syntax to the Go structure syntax, taking
advantage of Go's type annotations to encode non-type information carried in the
TLS presentation format.
For example, in the TLS specification, a ClientHello message has the following
structure:
~~~~~
uint16 ProtocolVersion;
opaque Random[32];
uint8 CipherSuite[2];
enum { ... (65535)} ExtensionType;
struct {
ExtensionType extension_type;
opaque extension_data<0..2^16-1>;
} Extension;
struct {
ProtocolVersion legacy_version = 0x0303; /* TLS v1.2 */
Random random;
opaque legacy_session_id<0..32>;
CipherSuite cipher_suites<2..2^16-2>;
opaque legacy_compression_methods<1..2^8-1>;
Extension extensions<0..2^16-1>;
} ClientHello;
~~~~~
This maps to the following Go type definitions:
~~~~~
type protocolVersion uint16
type random [32]byte
type cipherSuite uint16 // or [2]byte
type extensionType uint16
type extension struct {
ExtensionType extensionType
ExtensionData []byte `tls:"head=2"`
}
type clientHello struct {
LegacyVersion protocolVersion
Random random
LegacySessionID []byte `tls:"head=1,max=32"`
CipherSuites []cipherSuite `tls:"head=2,min=2"`
LegacyCompressionMethods []byte `tls:"head=1,min=1"`
Extensions []extension `tls:"head=2"`
}
~~~~~
Then you can just declare, marshal, and unmarshal structs just like you would
with, say JSON.
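For instance, here is a minimal sketch of a round trip through the `clientHello`
type defined above (the field values are placeholders chosen only for
illustration):
~~~~~
ch := clientHello{
    LegacyVersion:            0x0303,
    CipherSuites:             []cipherSuite{0x1301},
    LegacyCompressionMethods: []byte{0},
}

data, err := syntax.Marshal(ch) // TLS wire encoding of ch
if err != nil {
    panic(err)
}

var decoded clientHello
read, err := syntax.Unmarshal(data, &decoded)
if err != nil {
    panic(err)
}
fmt.Printf("consumed %d of %d bytes\n", read, len(data))
~~~~~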
The available annotations right now are all related to vectors:
* `head`: The number of bytes of length to use as a "header"
* `min`: The minimum length of the vector, in bytes
* `max`: The maximum length of the vector, in bytes
## Not supported
* The `select()` syntax for creating alternate versions of the same struct (see,
e.g., the KeyShare extension)
* The backreference syntax for array lengths or select parameters, as in `opaque
fragment[TLSPlaintext.length]`. Note, however, that in cases where the length
immediately precedes the array, these can be reframed as vectors with
appropriate sizes.
QUIC Extensions Syntax
======================
The `syntax` module also supports some minor extensions to allow implementing QUIC; a short sketch follows the list below.
* The `varint` annotation describes a QUIC-style varint
* `head=none` means no header, i.e., the bytes are encoded directly on the wire.
On reading, the decoder will consume all available data.
* `head=varint` means to encode the header as a varint
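As a rough sketch (the struct and field names below are invented for
illustration; the tag spellings follow the descriptions above), these
annotations can be combined like so:
~~~~~
type quicExample struct {
    StreamID uint64 `tls:"varint"`      // encoded/decoded as a QUIC varint
    Token    []byte `tls:"head=varint"` // length prefix encoded as a varint
    Payload  []byte `tls:"head=none"`   // no length prefix; reads the rest of the buffer
}
~~~~~
Because a `head=none` field consumes all remaining data when decoding, it only
makes sense as the last field of a struct.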

344
vendor/github.com/bifurcation/mint/syntax/decode.go generated vendored Normal file
View File

@ -0,0 +1,344 @@
package syntax
import (
"bytes"
"fmt"
"reflect"
"runtime"
)
func Unmarshal(data []byte, v interface{}) (int, error) {
// Check for well-formedness.
// Avoids filling out half a data structure
// before discovering a TLS syntax error.
d := decodeState{}
d.Write(data)
return d.unmarshal(v)
}
// Unmarshaler is the interface implemented by types that can
// unmarshal a TLS description of themselves. Note that unlike the
// JSON unmarshaler interface, it is not known a priori how much of
// the input data will be consumed. So the Unmarshaler must state
// how much of the input data it consumed.
type Unmarshaler interface {
UnmarshalTLS([]byte) (int, error)
}
// These are the options that can be specified in the struct tag. Right now,
// all of them apply to variable-length vectors and nothing else
type decOpts struct {
head uint // length of length in bytes
min uint // minimum size in bytes
max uint // maximum size in bytes
varint bool // whether to decode as a varint
}
type decodeState struct {
bytes.Buffer
}
func (d *decodeState) unmarshal(v interface{}) (read int, err error) {
defer func() {
if r := recover(); r != nil {
if _, ok := r.(runtime.Error); ok {
panic(r)
}
if s, ok := r.(string); ok {
panic(s)
}
err = r.(error)
}
}()
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr || rv.IsNil() {
return 0, fmt.Errorf("Invalid unmarshal target (non-pointer or nil)")
}
read = d.value(rv)
return read, nil
}
func (e *decodeState) value(v reflect.Value) int {
return valueDecoder(v)(e, v, decOpts{})
}
type decoderFunc func(e *decodeState, v reflect.Value, opts decOpts) int
func valueDecoder(v reflect.Value) decoderFunc {
return typeDecoder(v.Type().Elem())
}
func typeDecoder(t reflect.Type) decoderFunc {
// Note: Omits the caching / wait-group things that encoding/json uses
return newTypeDecoder(t)
}
var (
unmarshalerType = reflect.TypeOf(new(Unmarshaler)).Elem()
)
func newTypeDecoder(t reflect.Type) decoderFunc {
if t.Kind() != reflect.Ptr && reflect.PtrTo(t).Implements(unmarshalerType) {
return unmarshalerDecoder
}
switch t.Kind() {
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return uintDecoder
case reflect.Array:
return newArrayDecoder(t)
case reflect.Slice:
return newSliceDecoder(t)
case reflect.Struct:
return newStructDecoder(t)
case reflect.Ptr:
return newPointerDecoder(t)
default:
panic(fmt.Errorf("Unsupported type (%s)", t))
}
}
///// Specific decoders below
func unmarshalerDecoder(d *decodeState, v reflect.Value, opts decOpts) int {
um, ok := v.Interface().(Unmarshaler)
if !ok {
panic(fmt.Errorf("Non-Unmarshaler passed to unmarshalerEncoder"))
}
read, err := um.UnmarshalTLS(d.Bytes())
if err != nil {
panic(err)
}
if read > d.Len() {
panic(fmt.Errorf("Invalid return value from UnmarshalTLS"))
}
d.Next(read)
return read
}
//////////
func uintDecoder(d *decodeState, v reflect.Value, opts decOpts) int {
if opts.varint {
return varintDecoder(d, v, opts)
}
uintLen := int(v.Elem().Type().Size())
buf := d.Next(uintLen)
if len(buf) != uintLen {
panic(fmt.Errorf("Insufficient data to read uint"))
}
return setUintFromBuffer(v, buf)
}
func varintDecoder(d *decodeState, v reflect.Value, opts decOpts) int {
l, val := readVarint(d)
uintLen := int(v.Elem().Type().Size())
if uintLen < l {
panic(fmt.Errorf("Uint too small to fit varint: %d < %d", uintLen, l))
}
v.Elem().SetUint(val)
return l
}
func readVarint(d *decodeState) (int, uint64) {
// Read the first octet and decide the size of the presented varint
first := d.Next(1)
if len(first) != 1 {
panic(fmt.Errorf("Insufficient data to read varint length"))
}
twoBits := uint(first[0] >> 6)
varintLen := 1 << twoBits
rest := d.Next(varintLen - 1)
if len(rest) != varintLen-1 {
panic(fmt.Errorf("Insufficient data to read varint"))
}
buf := append(first, rest...)
buf[0] &= 0x3f
return len(buf), decodeUintFromBuffer(buf)
}
func decodeUintFromBuffer(buf []byte) uint64 {
val := uint64(0)
for _, b := range buf {
val = (val << 8) + uint64(b)
}
return val
}
func setUintFromBuffer(v reflect.Value, buf []byte) int {
v.Elem().SetUint(decodeUintFromBuffer(buf))
return len(buf)
}
//////////
type arrayDecoder struct {
elemDec decoderFunc
}
func (ad *arrayDecoder) decode(d *decodeState, v reflect.Value, opts decOpts) int {
n := v.Elem().Type().Len()
read := 0
for i := 0; i < n; i += 1 {
read += ad.elemDec(d, v.Elem().Index(i).Addr(), opts)
}
return read
}
func newArrayDecoder(t reflect.Type) decoderFunc {
dec := &arrayDecoder{typeDecoder(t.Elem())}
return dec.decode
}
//////////
type sliceDecoder struct {
elementType reflect.Type
elementDec decoderFunc
}
func (sd *sliceDecoder) decode(d *decodeState, v reflect.Value, opts decOpts) int {
var length uint64
var read int
var data []byte
if opts.head == 0 {
panic(fmt.Errorf("Cannot decode a slice without a header length"))
}
// If the caller indicated there is no header, then read everything from the buffer
if opts.head == headValueNoHead {
for {
chunk := d.Next(1024)
data = append(data, chunk...)
if len(chunk) != 1024 {
break
}
}
length = uint64(len(data))
if opts.max > 0 && length > uint64(opts.max) {
panic(fmt.Errorf("Length of vector exceeds declared max"))
}
if length < uint64(opts.min) {
panic(fmt.Errorf("Length of vector below declared min"))
}
} else {
if opts.head != headValueVarint {
lengthBytes := d.Next(int(opts.head))
if len(lengthBytes) != int(opts.head) {
panic(fmt.Errorf("Not enough data to read header"))
}
read = len(lengthBytes)
length = decodeUintFromBuffer(lengthBytes)
} else {
read, length = readVarint(d)
}
if opts.max > 0 && length > uint64(opts.max) {
panic(fmt.Errorf("Length of vector exceeds declared max"))
}
if length < uint64(opts.min) {
panic(fmt.Errorf("Length of vector below declared min"))
}
data = d.Next(int(length))
if len(data) != int(length) {
panic(fmt.Errorf("Available data less than declared length [%d < %d]", len(data), length))
}
}
elemBuf := &decodeState{}
elemBuf.Write(data)
elems := []reflect.Value{}
for elemBuf.Len() > 0 {
elem := reflect.New(sd.elementType)
read += sd.elementDec(elemBuf, elem, opts)
elems = append(elems, elem)
}
v.Elem().Set(reflect.MakeSlice(v.Elem().Type(), len(elems), len(elems)))
for i := 0; i < len(elems); i += 1 {
v.Elem().Index(i).Set(elems[i].Elem())
}
return read
}
func newSliceDecoder(t reflect.Type) decoderFunc {
dec := &sliceDecoder{
elementType: t.Elem(),
elementDec: typeDecoder(t.Elem()),
}
return dec.decode
}
//////////
type structDecoder struct {
fieldOpts []decOpts
fieldDecs []decoderFunc
}
func (sd *structDecoder) decode(d *decodeState, v reflect.Value, opts decOpts) int {
read := 0
for i := range sd.fieldDecs {
read += sd.fieldDecs[i](d, v.Elem().Field(i).Addr(), sd.fieldOpts[i])
}
return read
}
func newStructDecoder(t reflect.Type) decoderFunc {
n := t.NumField()
sd := structDecoder{
fieldOpts: make([]decOpts, n),
fieldDecs: make([]decoderFunc, n),
}
for i := 0; i < n; i += 1 {
f := t.Field(i)
tag := f.Tag.Get("tls")
tagOpts := parseTag(tag)
sd.fieldOpts[i] = decOpts{
head: tagOpts["head"],
max: tagOpts["max"],
min: tagOpts["min"],
varint: tagOpts[varintOption] > 0,
}
sd.fieldDecs[i] = typeDecoder(f.Type)
}
return sd.decode
}
//////////
type pointerDecoder struct {
base decoderFunc
}
func (pd *pointerDecoder) decode(d *decodeState, v reflect.Value, opts decOpts) int {
v.Elem().Set(reflect.New(v.Elem().Type().Elem()))
return pd.base(d, v.Elem(), opts)
}
func newPointerDecoder(t reflect.Type) decoderFunc {
baseDecoder := typeDecoder(t.Elem())
pd := pointerDecoder{base: baseDecoder}
return pd.decode
}

276
vendor/github.com/bifurcation/mint/syntax/encode.go generated vendored Normal file
View File

@ -0,0 +1,276 @@
package syntax
import (
"bytes"
"fmt"
"reflect"
"runtime"
)
func Marshal(v interface{}) ([]byte, error) {
e := &encodeState{}
err := e.marshal(v, encOpts{})
if err != nil {
return nil, err
}
return e.Bytes(), nil
}
// Marshaler is the interface implemented by types that
// have a defined TLS encoding.
type Marshaler interface {
MarshalTLS() ([]byte, error)
}
// These are the options that can be specified in the struct tag. Right now,
// all of them apply to variable-length vectors and nothing else
type encOpts struct {
head uint // length of length in bytes
min uint // minimum size in bytes
max uint // maximum size in bytes
varint bool // whether to encode as a varint
}
type encodeState struct {
bytes.Buffer
}
func (e *encodeState) marshal(v interface{}, opts encOpts) (err error) {
defer func() {
if r := recover(); r != nil {
if _, ok := r.(runtime.Error); ok {
panic(r)
}
if s, ok := r.(string); ok {
panic(s)
}
err = r.(error)
}
}()
e.reflectValue(reflect.ValueOf(v), opts)
return nil
}
func (e *encodeState) reflectValue(v reflect.Value, opts encOpts) {
valueEncoder(v)(e, v, opts)
}
type encoderFunc func(e *encodeState, v reflect.Value, opts encOpts)
func valueEncoder(v reflect.Value) encoderFunc {
if !v.IsValid() {
panic(fmt.Errorf("Cannot encode an invalid value"))
}
return typeEncoder(v.Type())
}
func typeEncoder(t reflect.Type) encoderFunc {
// Note: Omits the caching / wait-group things that encoding/json uses
return newTypeEncoder(t)
}
var (
marshalerType = reflect.TypeOf(new(Marshaler)).Elem()
)
func newTypeEncoder(t reflect.Type) encoderFunc {
if t.Implements(marshalerType) {
return marshalerEncoder
}
switch t.Kind() {
case reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return uintEncoder
case reflect.Array:
return newArrayEncoder(t)
case reflect.Slice:
return newSliceEncoder(t)
case reflect.Struct:
return newStructEncoder(t)
case reflect.Ptr:
return newPointerEncoder(t)
default:
panic(fmt.Errorf("Unsupported type (%s)", t))
}
}
///// Specific encoders below
func marshalerEncoder(e *encodeState, v reflect.Value, opts encOpts) {
if v.Kind() == reflect.Ptr && v.IsNil() {
panic(fmt.Errorf("Cannot encode nil pointer"))
}
m, ok := v.Interface().(Marshaler)
if !ok {
panic(fmt.Errorf("Non-Marshaler passed to marshalerEncoder"))
}
b, err := m.MarshalTLS()
if err == nil {
_, err = e.Write(b)
}
if err != nil {
panic(err)
}
}
//////////
func uintEncoder(e *encodeState, v reflect.Value, opts encOpts) {
if opts.varint {
varintEncoder(e, v, opts)
return
}
writeUint(e, v.Uint(), int(v.Type().Size()))
}
func varintEncoder(e *encodeState, v reflect.Value, opts encOpts) {
writeVarint(e, v.Uint())
}
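// writeVarint writes u as a variable-length integer: 1, 2, 4, or 8 bytes, with
// the top two bits of the first byte carrying the length (the scheme used by
// QUIC), which leaves at most 62 bits for the value itself.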
func writeVarint(e *encodeState, u uint64) {
if (u >> 62) > 0 {
panic(fmt.Errorf("uint value is too big for varint"))
}
var varintLen int
for _, len := range []uint{1, 2, 4, 8} {
if u < (uint64(1) << (8*len - 2)) {
varintLen = int(len)
break
}
}
twoBits := map[int]uint64{1: 0x00, 2: 0x01, 4: 0x02, 8: 0x03}[varintLen]
shift := uint(8*varintLen - 2)
writeUint(e, u|(twoBits<<shift), varintLen)
}
func writeUint(e *encodeState, u uint64, len int) {
data := make([]byte, len)
for i := 0; i < len; i += 1 {
data[i] = byte(u >> uint(8*(len-i-1)))
}
e.Write(data)
}
//////////
type arrayEncoder struct {
elemEnc encoderFunc
}
func (ae *arrayEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
n := v.Len()
for i := 0; i < n; i += 1 {
ae.elemEnc(e, v.Index(i), opts)
}
}
func newArrayEncoder(t reflect.Type) encoderFunc {
enc := &arrayEncoder{typeEncoder(t.Elem())}
return enc.encode
}
//////////
type sliceEncoder struct {
ae *arrayEncoder
}
func (se *sliceEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
arrayState := &encodeState{}
se.ae.encode(arrayState, v, opts)
n := uint(arrayState.Len())
if opts.head == 0 {
panic(fmt.Errorf("Cannot encode a slice without a header length"))
}
if opts.max > 0 && n > opts.max {
panic(fmt.Errorf("Encoded length more than max [%d > %d]", n, opts.max))
}
if n < opts.min {
panic(fmt.Errorf("Encoded length less than min [%d < %d]", n, opts.min))
}
switch opts.head {
case headValueNoHead:
// None.
case headValueVarint:
writeVarint(e, uint64(n))
default:
if n>>(8*opts.head) > 0 {
panic(fmt.Errorf("Encoded length too long for header length [%d, %d]", n, opts.head))
}
writeUint(e, uint64(n), int(opts.head))
}
e.Write(arrayState.Bytes())
}
func newSliceEncoder(t reflect.Type) encoderFunc {
enc := &sliceEncoder{&arrayEncoder{typeEncoder(t.Elem())}}
return enc.encode
}
//////////
type structEncoder struct {
fieldOpts []encOpts
fieldEncs []encoderFunc
}
func (se *structEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
for i := range se.fieldEncs {
se.fieldEncs[i](e, v.Field(i), se.fieldOpts[i])
}
}
func newStructEncoder(t reflect.Type) encoderFunc {
n := t.NumField()
se := structEncoder{
fieldOpts: make([]encOpts, n),
fieldEncs: make([]encoderFunc, n),
}
for i := 0; i < n; i += 1 {
f := t.Field(i)
tag := f.Tag.Get("tls")
tagOpts := parseTag(tag)
se.fieldOpts[i] = encOpts{
head: tagOpts["head"],
max: tagOpts["max"],
min: tagOpts["min"],
varint: tagOpts[varintOption] > 0,
}
se.fieldEncs[i] = typeEncoder(f.Type)
}
return se.encode
}
//////////
type pointerEncoder struct {
base encoderFunc
}
func (pe pointerEncoder) encode(e *encodeState, v reflect.Value, opts encOpts) {
if v.IsNil() {
panic(fmt.Errorf("Cannot marshal a struct containing a nil pointer"))
}
pe.base(e, v.Elem(), opts)
}
func newPointerEncoder(t reflect.Type) encoderFunc {
baseEncoder := typeEncoder(t.Elem())
pe := pointerEncoder{base: baseEncoder}
return pe.encode
}

50
vendor/github.com/bifurcation/mint/syntax/tags.go generated vendored Normal file
View File

@ -0,0 +1,50 @@
package syntax
import (
"strconv"
"strings"
)
// `tls:"head=2,min=2,max=255,varint"`
type tagOptions map[string]uint
var (
varintOption = "varint"
headOptionNone = "none"
headOptionVarint = "varint"
headValueNoHead = uint(255)
headValueVarint = uint(254)
)
// parseTag parses a struct field's "tls" tag as a comma-separated list of
// name=value pairs, where the values MUST be unsigned integers, or in
// the special case of head, "none" or "varint"
func parseTag(tag string) tagOptions {
opts := tagOptions{}
for _, token := range strings.Split(tag, ",") {
if token == varintOption {
opts[varintOption] = 1
continue
}
parts := strings.Split(token, "=")
if len(parts[0]) == 0 {
continue
}
if len(parts) == 1 {
continue
}
if parts[0] == "head" && parts[1] == headOptionNone {
opts[parts[0]] = headValueNoHead
} else if parts[0] == "head" && parts[1] == headOptionVarint {
opts[parts[0]] = headValueVarint
} else if val, err := strconv.Atoi(parts[1]); err == nil && val >= 0 {
opts[parts[0]] = uint(val)
}
}
return opts
}

122
vendor/github.com/bifurcation/mint/timer.go generated vendored Normal file
View File

@ -0,0 +1,122 @@
package mint
import (
"time"
)
// This is a simple timer implementation. Timers are stored in a sorted
// list.
// TODO(ekr@rtfm.com): Add a way to uncouple these from the system
// clock.
type timerCb func() error
type timer struct {
label string
cb timerCb
deadline time.Time
duration uint32
}
type timerSet struct {
ts []*timer
}
func newTimerSet() *timerSet {
return &timerSet{}
}
func (ts *timerSet) start(label string, cb timerCb, delayMs uint32) *timer {
now := time.Now()
t := timer{
label,
cb,
now.Add(time.Millisecond * time.Duration(delayMs)),
delayMs,
}
logf(logTypeHandshake, "Timer %s set [%v -> %v]", t.label, now, t.deadline)
var i int
ntimers := len(ts.ts)
for i = 0; i < ntimers; i++ {
if t.deadline.Before(ts.ts[i].deadline) {
break
}
}
tmp := make([]*timer, 0, ntimers+1)
tmp = append(tmp, ts.ts[:i]...)
tmp = append(tmp, &t)
tmp = append(tmp, ts.ts[i:]...)
ts.ts = tmp
return &t
}
// TODO(ekr@rtfm.com): optimize this now that the list is sorted.
// We should be able to do just one list manipulation, as long
// as we're careful about how we handle inserts during callbacks.
func (ts *timerSet) check(now time.Time) error {
for i, t := range ts.ts {
if now.After(t.deadline) {
ts.ts = append(ts.ts[:i], ts.ts[i+1:]...) // remove the expired timer at index i
if t.cb != nil {
logf(logTypeHandshake, "Timer %s expired [%v > %v]", t.label, now, t.deadline)
cb := t.cb
t.cb = nil
err := cb()
if err != nil {
return err
}
}
} else {
break
}
}
return nil
}
// Returns whether any timer is still pending, and the time remaining until the next one fires.
func (ts *timerSet) remaining() (bool, time.Duration) {
for _, t := range ts.ts {
if t.cb != nil {
return true, time.Until(t.deadline)
}
}
return false, time.Duration(0)
}
func (ts *timerSet) cancel(label string) {
for _, t := range ts.ts {
if t.label == label {
t.cancel()
}
}
}
func (ts *timerSet) getTimer(label string) *timer {
for _, t := range ts.ts {
if t.label == label && t.cb != nil {
return t
}
}
return nil
}
func (ts *timerSet) getAllTimers() []string {
var ret []string
for _, t := range ts.ts {
if t.cb != nil {
ret = append(ret, t.label)
}
}
return ret
}
func (t *timer) cancel() {
logf(logTypeHandshake, "Timer %s cancelled", t.label)
t.cb = nil
t.label = ""
}

179
vendor/github.com/bifurcation/mint/tls.go generated vendored Normal file
View File

@ -0,0 +1,179 @@
package mint
// XXX(rlb): This file is borrowed pretty much wholesale from crypto/tls
import (
"errors"
"net"
"strings"
"time"
)
// Server returns a new TLS server side connection
// using conn as the underlying transport.
// The configuration config must be non-nil and must include
// at least one certificate or else set GetCertificate.
func Server(conn net.Conn, config *Config) *Conn {
return NewConn(conn, config, false)
}
// Client returns a new TLS client side connection
// using conn as the underlying transport.
// The config cannot be nil: users must set either ServerName or
// InsecureSkipVerify in the config.
func Client(conn net.Conn, config *Config) *Conn {
return NewConn(conn, config, true)
}
// A listener implements a network listener (net.Listener) for TLS connections.
type Listener struct {
net.Listener
config *Config
}
// Accept waits for and returns the next incoming TLS connection.
// The returned connection c is a *Conn.
func (l *Listener) Accept() (c net.Conn, err error) {
c, err = l.Listener.Accept()
if err != nil {
return
}
server := Server(c, l.config)
err = server.Handshake()
if err == AlertNoAlert {
err = nil
}
c = server
return
}
// NewListener creates a Listener which accepts connections from an inner
// Listener and wraps each connection with Server.
// The configuration config must be non-nil and must include
// at least one certificate or else set GetCertificate.
func NewListener(inner net.Listener, config *Config) (net.Listener, error) {
if config != nil && config.NonBlocking {
return nil, errors.New("listening not possible in non-blocking mode")
}
l := new(Listener)
l.Listener = inner
l.config = config
return l, nil
}
// Listen creates a TLS listener accepting connections on the
// given network address using net.Listen.
// The configuration config must be non-nil and must include
// at least one certificate or else set GetCertificate.
func Listen(network, laddr string, config *Config) (net.Listener, error) {
if config == nil || !config.ValidForServer() {
return nil, errors.New("tls: neither Certificates nor GetCertificate set in Config")
}
l, err := net.Listen(network, laddr)
if err != nil {
return nil, err
}
return NewListener(l, config)
}
type TimeoutError struct{}
func (TimeoutError) Error() string { return "tls: DialWithDialer timed out" }
func (TimeoutError) Timeout() bool { return true }
func (TimeoutError) Temporary() bool { return true }
// DialWithDialer connects to the given network address using dialer.Dial and
// then initiates a TLS handshake, returning the resulting TLS connection. Any
// timeout or deadline given in the dialer apply to connection and TLS
// handshake as a whole.
//
// DialWithDialer interprets a nil configuration as equivalent to the zero
// configuration; see the documentation of Config for the defaults.
func DialWithDialer(dialer *net.Dialer, network, addr string, config *Config) (*Conn, error) {
if config != nil && config.NonBlocking {
return nil, errors.New("dialing not possible in non-blocking mode")
}
// We want the Timeout and Deadline values from dialer to cover the
// whole process: TCP connection and TLS handshake. This means that we
// also need to start our own timers now.
timeout := dialer.Timeout
if !dialer.Deadline.IsZero() {
deadlineTimeout := dialer.Deadline.Sub(time.Now())
if timeout == 0 || deadlineTimeout < timeout {
timeout = deadlineTimeout
}
}
var errChannel chan error
if timeout != 0 {
errChannel = make(chan error, 2)
time.AfterFunc(timeout, func() {
errChannel <- TimeoutError{}
})
}
rawConn, err := dialer.Dial(network, addr)
if err != nil {
return nil, err
}
colonPos := strings.LastIndex(addr, ":")
if colonPos == -1 {
colonPos = len(addr)
}
hostname := addr[:colonPos]
if config == nil {
config = &Config{}
} else {
config = config.Clone()
}
// If no ServerName is set, infer the ServerName
// from the hostname we're connecting to.
if config.ServerName == "" {
config.ServerName = hostname
}
// Set up DTLS as needed.
config.UseDTLS = (network == "udp")
conn := Client(rawConn, config)
if timeout == 0 {
err = conn.Handshake()
if err == AlertNoAlert {
err = nil
}
} else {
go func() {
errChannel <- conn.Handshake()
}()
err = <-errChannel
if err == AlertNoAlert {
err = nil
}
}
if err != nil {
rawConn.Close()
return nil, err
}
return conn, nil
}
// Dial connects to the given network address using net.Dial
// and then initiates a TLS handshake, returning the resulting
// TLS connection.
// Dial interprets a nil configuration as equivalent to
// the zero configuration; see the documentation of Config
// for the defaults.
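//
// A minimal sketch (the address is illustrative; with a nil config the
// ServerName is inferred from the address):
//
//	conn, err := mint.Dial("tcp", "example.com:443", nil)
//	if err != nil {
//		// handle error
//	}
//	defer conn.Close()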
func Dial(network, addr string, config *Config) (*Conn, error) {
return DialWithDialer(new(net.Dialer), network, addr, config)
}

22
vendor/github.com/cheekybits/genny/LICENSE generated vendored Normal file
View File

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2014 cheekybits
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

2
vendor/github.com/cheekybits/genny/generic/doc.go generated vendored Normal file
View File

@ -0,0 +1,2 @@
// Package generic contains the generic marker types.
package generic

13
vendor/github.com/cheekybits/genny/generic/generic.go generated vendored Normal file
View File

@ -0,0 +1,13 @@
package generic
// Type is the placeholder type that indicates a generic value.
// When genny is executed, variables of this type will be replaced with
// references to the specific types.
// var GenericType generic.Type
type Type interface{}
// Number is the placeholder type that indicates a generic numerical value.
// When genny is executed, variables of this type will be replaced with
// references to the specific types.
// var GenericType generic.Number
type Number float64

10
vendor/github.com/dchest/siphash/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,10 @@
language: go
go:
- 1.4
- 1.5
- 1.7
- 1.8
- 1.9
- 1.10
- tip

69
vendor/github.com/dchest/siphash/README.md generated vendored Normal file
View File

@ -0,0 +1,69 @@
SipHash (Go)
============
[![Build Status](https://travis-ci.org/dchest/siphash.svg)](https://travis-ci.org/dchest/siphash)
Go implementation of SipHash-2-4, a fast short-input PRF created by
Jean-Philippe Aumasson and Daniel J. Bernstein (http://131002.net/siphash/).
## Installation
$ go get github.com/dchest/siphash
## Usage
import "github.com/dchest/siphash"
There are two ways to use this package.
The slower one is to use the standard hash.Hash64 interface:
h := siphash.New(key)
h.Write([]byte("Hello"))
sum := h.Sum(nil) // returns 8-byte []byte
or
sum64 := h.Sum64() // returns uint64
The faster one is to use the Hash() function, which takes two uint64 parts of
a 16-byte key and a byte slice, and returns a uint64 hash:
sum64 := siphash.Hash(key0, key1, []byte("Hello"))
The keys and output are little-endian.
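Putting the two together, a minimal sketch (the all-zero key is for
illustration only; both lines print the same value because New derives k0 and
k1 from the key in little-endian order):

package main

import (
	"encoding/binary"
	"fmt"

	"github.com/dchest/siphash"
)

func main() {
	key := make([]byte, 16) // all-zero 16-byte key, for illustration only

	h := siphash.New(key)
	h.Write([]byte("Hello"))
	fmt.Printf("%016x\n", h.Sum64())

	k0 := binary.LittleEndian.Uint64(key[0:8])
	k1 := binary.LittleEndian.Uint64(key[8:16])
	fmt.Printf("%016x\n", siphash.Hash(k0, k1, []byte("Hello")))
}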
## Functions
### func Hash(k0, k1 uint64, p []byte) uint64
Hash returns the 64-bit SipHash-2-4 of the given byte slice with two
64-bit parts of 128-bit key: k0 and k1.
### func Hash128(k0, k1 uint64, p []byte) (uint64, uint64)
Hash128 returns the 128-bit SipHash-2-4 of the given byte slice with two
64-bit parts of 128-bit key: k0 and k1.
Note that 128-bit SipHash is considered experimental by SipHash authors at this time.
### func New(key []byte) hash.Hash64
New returns a new hash.Hash64 computing SipHash-2-4 with 16-byte key.
### func New128(key []byte) hash.Hash
New128 returns a new hash.Hash computing SipHash-2-4 with 16-byte key and 16-byte output.
Note that 16-byte output is considered experimental by SipHash authors at this time.
## Public domain dedication
Written by Dmitry Chestnykh and Damian Gryski.
To the extent possible under law, the authors have dedicated all copyright
and related and neighboring rights to this software to the public domain
worldwide. This software is distributed without any warranty.
http://creativecommons.org/publicdomain/zero/1.0/

148
vendor/github.com/dchest/siphash/blocks.go generated vendored Normal file
View File

@ -0,0 +1,148 @@
// +build !arm,!amd64 appengine gccgo
package siphash
func once(d *digest) {
blocks(d, d.x[:])
}
func finalize(d *digest) uint64 {
d0 := *d
once(&d0)
v0, v1, v2, v3 := d0.v0, d0.v1, d0.v2, d0.v3
v2 ^= 0xff
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 3.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 4.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
return v0 ^ v1 ^ v2 ^ v3
}
func blocks(d *digest, p []uint8) {
v0, v1, v2, v3 := d.v0, d.v1, d.v2, d.v3
for len(p) >= BlockSize {
m := uint64(p[0]) | uint64(p[1])<<8 | uint64(p[2])<<16 | uint64(p[3])<<24 |
uint64(p[4])<<32 | uint64(p[5])<<40 | uint64(p[6])<<48 | uint64(p[7])<<56
v3 ^= m
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
v0 ^= m
p = p[BlockSize:]
}
d.v0, d.v1, d.v2, d.v3 = v0, v1, v2, v3
}

86
vendor/github.com/dchest/siphash/blocks_amd64.s generated vendored Normal file
View File

@ -0,0 +1,86 @@
// +build amd64,!appengine,!gccgo
#define ROUND(v0, v1, v2, v3) \
ADDQ v1, v0; \
RORQ $51, v1; \
ADDQ v3, v2; \
XORQ v0, v1; \
RORQ $48, v3; \
RORQ $32, v0; \
XORQ v2, v3; \
ADDQ v1, v2; \
ADDQ v3, v0; \
RORQ $43, v3; \
RORQ $47, v1; \
XORQ v0, v3; \
XORQ v2, v1; \
RORQ $32, v2
// blocks(d *digest, data []uint8)
TEXT ·blocks(SB),4,$0-32
MOVQ d+0(FP), BX
MOVQ 0(BX), R9 // R9 = v0
MOVQ 8(BX), R10 // R10 = v1
MOVQ 16(BX), R11 // R11 = v2
MOVQ 24(BX), R12 // R12 = v3
MOVQ p_base+8(FP), DI // DI = *uint64
MOVQ p_len+16(FP), SI // SI = nblocks
XORL DX, DX // DX = index (0)
SHRQ $3, SI // SI /= 8
body:
CMPQ DX, SI
JGE end
MOVQ 0(DI)(DX*8), CX // CX = m
XORQ CX, R12
ROUND(R9, R10, R11, R12)
ROUND(R9, R10, R11, R12)
XORQ CX, R9
ADDQ $1, DX
JMP body
end:
MOVQ R9, 0(BX)
MOVQ R10, 8(BX)
MOVQ R11, 16(BX)
MOVQ R12, 24(BX)
RET
// once(d *digest)
TEXT ·once(SB),4,$0-8
MOVQ d+0(FP), BX
MOVQ 0(BX), R9 // R9 = v0
MOVQ 8(BX), R10 // R10 = v1
MOVQ 16(BX), R11 // R11 = v2
MOVQ 24(BX), R12 // R12 = v3
MOVQ 48(BX), CX // CX = d.x[:]
XORQ CX, R12
ROUND(R9, R10, R11, R12)
ROUND(R9, R10, R11, R12)
XORQ CX, R9
MOVQ R9, 0(BX)
MOVQ R10, 8(BX)
MOVQ R11, 16(BX)
MOVQ R12, 24(BX)
RET
// finalize(d *digest) uint64
TEXT ·finalize(SB),4,$0-16
MOVQ d+0(FP), BX
MOVQ 0(BX), R9 // R9 = v0
MOVQ 8(BX), R10 // R10 = v1
MOVQ 16(BX), R11 // R11 = v2
MOVQ 24(BX), R12 // R12 = v3
MOVQ 48(BX), CX // CX = d.x[:]
XORQ CX, R12
ROUND(R9, R10, R11, R12)
ROUND(R9, R10, R11, R12)
XORQ CX, R9
NOTB R11
ROUND(R9, R10, R11, R12)
ROUND(R9, R10, R11, R12)
ROUND(R9, R10, R11, R12)
ROUND(R9, R10, R11, R12)
XORQ R12, R11
XORQ R10, R9
XORQ R11, R9
MOVQ R9, ret+8(FP)
RET

144
vendor/github.com/dchest/siphash/blocks_arm.s generated vendored Normal file
View File

@ -0,0 +1,144 @@
#include "textflag.h"
#define ROUND()\
ADD.S R2,R0,R0;\
ADC R3,R1,R1;\
EOR R2<<13,R0,R8;\
EOR R3>>19,R8,R8;\
EOR R2>>19,R1,R11;\
EOR R3<<13,R11,R11;\
ADD.S R6,R4,R4;\
ADC R7,R5,R5;\
EOR R6<<16,R4,R2;\
EOR R7>>16,R2,R2;\
EOR R6>>16,R5,R3;\
EOR R7<<16,R3,R3;\
ADD.S R2,R1,R1;\
ADC R3,R0,R0;\
EOR R2<<21,R1,R6;\
EOR R3>>11,R6,R6;\
EOR R2>>11,R0,R7;\
EOR R3<<21,R7,R7;\
ADD.S R8,R4,R4;\
ADC R11,R5,R5;\
EOR R8<<17,R4,R2;\
EOR R11>>15,R2,R2;\
EOR R8>>15,R5,R3;\
EOR R11<<17,R3,R3;\
ADD.S R2,R1,R1;\
ADC R3,R0,R0;\
EOR R2<<13,R1,R8;\
EOR R3>>19,R8,R8;\
EOR R2>>19,R0,R11;\
EOR R3<<13,R11,R11;\
ADD.S R6,R5,R5;\
ADC R7,R4,R4;\
EOR R6<<16,R5,R2;\
EOR R7>>16,R2,R2;\
EOR R6>>16,R4,R3;\
EOR R7<<16,R3,R3;\
ADD.S R2,R0,R0;\
ADC R3,R1,R1;\
EOR R2<<21,R0,R6;\
EOR R3>>11,R6,R6;\
EOR R2>>11,R1,R7;\
EOR R3<<21,R7,R7;\
ADD.S R8,R5,R5;\
ADC R11,R4,R4;\
EOR R8<<17,R5,R2;\
EOR R11>>15,R2,R2;\
EOR R8>>15,R4,R3;\
EOR R11<<17,R3,R3;
// once(d *digest)
TEXT ·once(SB),NOSPLIT,$4-4
MOVW d+0(FP),R8
MOVM.IA (R8),[R0,R1,R2,R3,R4,R5,R6,R7]
MOVW 48(R8),R12
MOVW 52(R8),R14
EOR R12,R6,R6
EOR R14,R7,R7
ROUND()
EOR R12,R0,R0
EOR R14,R1,R1
MOVW d+0(FP),R8
MOVM.IA [R0,R1,R2,R3,R4,R5,R6,R7],(R8)
RET
// finalize(d *digest) uint64
TEXT ·finalize(SB),NOSPLIT,$4-12
MOVW d+0(FP),R8
MOVM.IA (R8),[R0,R1,R2,R3,R4,R5,R6,R7]
MOVW 48(R8),R12
MOVW 52(R8),R14
EOR R12,R6,R6
EOR R14,R7,R7
ROUND()
EOR R12,R0,R0
EOR R14,R1,R1
EOR $255,R4
ROUND()
ROUND()
EOR R2,R0,R0
EOR R3,R1,R1
EOR R6,R4,R4
EOR R7,R5,R5
EOR R4,R0,R0
EOR R5,R1,R1
MOVW R0,ret_lo+4(FP)
MOVW R1,ret_hi+8(FP)
RET
// blocks(d *digest, data []uint8)
TEXT ·blocks(SB),NOSPLIT,$8-16
MOVW R9,sav-8(SP)
MOVW d+0(FP),R8
MOVM.IA (R8),[R0,R1,R2,R3,R4,R5,R6,R7]
MOVW p+4(FP),R9
MOVW p_len+8(FP),R11
ADD R9,R11,R11
MOVW R11,endp-4(SP)
AND.S $3,R9,R8
BNE blocksunaligned
blocksloop:
MOVM.IA.W (R9),[R12,R14]
EOR R12,R6,R6
EOR R14,R7,R7
ROUND()
EOR R12,R0,R0
EOR R14,R1,R1
MOVW endp-4(SP),R11
CMP R11,R9
BLO blocksloop
MOVW d+0(FP),R8
MOVM.IA [R0,R1,R2,R3,R4,R5,R6,R7],(R8)
MOVW sav-8(SP),R9
RET
blocksunaligned:
MOVBU (R9),R12
MOVBU 1(R9),R11
ORR R11<<8,R12,R12
MOVBU 2(R9),R11
ORR R11<<16,R12,R12
MOVBU 3(R9),R11
ORR R11<<24,R12,R12
MOVBU 4(R9),R14
MOVBU 5(R9),R11
ORR R11<<8,R14,R14
MOVBU 6(R9),R11
ORR R11<<16,R14,R14
MOVBU 7(R9),R11
ORR R11<<24,R14,R14
ADD $8,R9,R9
EOR R12,R6,R6
EOR R14,R7,R7
ROUND()
EOR R12,R0,R0
EOR R14,R1,R1
MOVW endp-4(SP),R11
CMP R11,R9
BLO blocksunaligned
MOVW d+0(FP),R8
MOVM.IA [R0,R1,R2,R3,R4,R5,R6,R7],(R8)
MOVW sav-8(SP),R9
RET

21
vendor/github.com/dchest/siphash/blocks_asm.go generated vendored Normal file
View File

@ -0,0 +1,21 @@
// +build arm amd64,!appengine,!gccgo
// Written in 2012 by Dmitry Chestnykh.
//
// To the extent possible under law, the author has dedicated all copyright
// and related and neighboring rights to this software to the public domain
// worldwide. This software is distributed without any warranty.
// http://creativecommons.org/publicdomain/zero/1.0/
// This file contains a function definition for use with assembly implementations of Hash()
package siphash
//go:noescape
func blocks(d *digest, p []uint8)
//go:noescape
func finalize(d *digest) uint64
//go:noescape
func once(d *digest)

216
vendor/github.com/dchest/siphash/hash.go generated vendored Normal file
View File

@ -0,0 +1,216 @@
// +build !arm,!amd64 appengine gccgo
// Written in 2012 by Dmitry Chestnykh.
//
// To the extent possible under law, the author has dedicated all copyright
// and related and neighboring rights to this software to the public domain
// worldwide. This software is distributed without any warranty.
// http://creativecommons.org/publicdomain/zero/1.0/
package siphash
// Hash returns the 64-bit SipHash-2-4 of the given byte slice with two 64-bit
// parts of 128-bit key: k0 and k1.
func Hash(k0, k1 uint64, p []byte) uint64 {
// Initialization.
v0 := k0 ^ 0x736f6d6570736575
v1 := k1 ^ 0x646f72616e646f6d
v2 := k0 ^ 0x6c7967656e657261
v3 := k1 ^ 0x7465646279746573
t := uint64(len(p)) << 56
// Compression.
for len(p) >= BlockSize {
m := uint64(p[0]) | uint64(p[1])<<8 | uint64(p[2])<<16 | uint64(p[3])<<24 |
uint64(p[4])<<32 | uint64(p[5])<<40 | uint64(p[6])<<48 | uint64(p[7])<<56
v3 ^= m
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
v0 ^= m
p = p[BlockSize:]
}
// Compress last block.
switch len(p) {
case 7:
t |= uint64(p[6]) << 48
fallthrough
case 6:
t |= uint64(p[5]) << 40
fallthrough
case 5:
t |= uint64(p[4]) << 32
fallthrough
case 4:
t |= uint64(p[3]) << 24
fallthrough
case 3:
t |= uint64(p[2]) << 16
fallthrough
case 2:
t |= uint64(p[1]) << 8
fallthrough
case 1:
t |= uint64(p[0])
}
v3 ^= t
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
v0 ^= t
// Finalization.
v2 ^= 0xff
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 3.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 4.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
return v0 ^ v1 ^ v2 ^ v3
}

302
vendor/github.com/dchest/siphash/hash128.go generated vendored Normal file
View File

@ -0,0 +1,302 @@
// +build !arm,!amd64 appengine gccgo
// Written in 2012 by Dmitry Chestnykh.
// Modifications 2014 for 128-bit hash function by Damian Gryski.
//
// To the extent possible under law, the authors have dedicated all copyright
// and related and neighboring rights to this software to the public domain
// worldwide. This software is distributed without any warranty.
// http://creativecommons.org/publicdomain/zero/1.0/
package siphash
// Hash returns the 128-bit SipHash-2-4 of the given byte slice with two 64-bit
// parts of 128-bit key: k0 and k1.
//
// Note that 128-bit SipHash is considered experimental by SipHash authors at this time.
func Hash128(k0, k1 uint64, p []byte) (uint64, uint64) {
// Initialization.
v0 := k0 ^ 0x736f6d6570736575
v1 := k1 ^ 0x646f72616e646f6d
v2 := k0 ^ 0x6c7967656e657261
v3 := k1 ^ 0x7465646279746573
t := uint64(len(p)) << 56
v1 ^= 0xee
// Compression.
for len(p) >= BlockSize {
m := uint64(p[0]) | uint64(p[1])<<8 | uint64(p[2])<<16 | uint64(p[3])<<24 |
uint64(p[4])<<32 | uint64(p[5])<<40 | uint64(p[6])<<48 | uint64(p[7])<<56
v3 ^= m
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
v0 ^= m
p = p[BlockSize:]
}
// Compress last block.
switch len(p) {
case 7:
t |= uint64(p[6]) << 48
fallthrough
case 6:
t |= uint64(p[5]) << 40
fallthrough
case 5:
t |= uint64(p[4]) << 32
fallthrough
case 4:
t |= uint64(p[3]) << 24
fallthrough
case 3:
t |= uint64(p[2]) << 16
fallthrough
case 2:
t |= uint64(p[1]) << 8
fallthrough
case 1:
t |= uint64(p[0])
}
v3 ^= t
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
v0 ^= t
// Finalization.
v2 ^= 0xee
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 3.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 4.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
r0 := v0 ^ v1 ^ v2 ^ v3
v1 ^= 0xdd
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 3.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 4.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
r1 := v0 ^ v1 ^ v2 ^ v3
return r0, r1
}

292
vendor/github.com/dchest/siphash/hash128_amd64.s generated vendored Normal file
View File

@ -0,0 +1,292 @@
// +build amd64,!appengine,!gccgo
// This is a translation of the gcc output of FloodyBerry's pure-C public
// domain siphash implementation at https://github.com/floodyberry/siphash
// This assembly code has been modified from the 64-bit output to the experimental 128-bit output.
// SI = v0
// AX = v1
// CX = v2
// DX = v3
// func Hash128(k0, k1 uint64, b []byte) (r0 uint64, r1 uint64)
TEXT ·Hash128(SB),4,$0-56
MOVQ k0+0(FP),CX
MOVQ $0x736F6D6570736575,R9
MOVQ k1+8(FP),DI
MOVQ $0x6C7967656E657261,BX
MOVQ $0x646F72616E646F6D,AX
MOVQ b_len+24(FP),DX
XORQ $0xEE,AX
MOVQ DX,R11
MOVQ DX,R10
XORQ CX,R9
XORQ CX,BX
MOVQ $0x7465646279746573,CX
XORQ DI,AX
XORQ DI,CX
SHLQ $0x38,R11
XORQ DI,DI
MOVQ b_base+16(FP),SI
ANDQ $0xFFFFFFFFFFFFFFF8,R10
JE afterLoop
XCHGQ AX,AX
loopBody:
MOVQ 0(SI)(DI*1),R8
ADDQ AX,R9
RORQ $0x33,AX
XORQ R9,AX
RORQ $0x20,R9
ADDQ $0x8,DI
XORQ R8,CX
ADDQ CX,BX
RORQ $0x30,CX
XORQ BX,CX
ADDQ AX,BX
RORQ $0x2F,AX
ADDQ CX,R9
RORQ $0x2B,CX
XORQ BX,AX
XORQ R9,CX
RORQ $0x20,BX
ADDQ AX,R9
ADDQ CX,BX
RORQ $0x33,AX
RORQ $0x30,CX
XORQ R9,AX
XORQ BX,CX
RORQ $0x20,R9
ADDQ AX,BX
ADDQ CX,R9
RORQ $0x2F,AX
RORQ $0x2B,CX
XORQ BX,AX
RORQ $0x20,BX
XORQ R9,CX
XORQ R8,R9
CMPQ R10,DI
JA loopBody
afterLoop:
SUBQ R10,DX
CMPQ DX,$0x7
JA afterSwitch
// no support for jump tables
CMPQ DX,$0x7
JE sw7
CMPQ DX,$0x6
JE sw6
CMPQ DX,$0x5
JE sw5
CMPQ DX,$0x4
JE sw4
CMPQ DX,$0x3
JE sw3
CMPQ DX,$0x2
JE sw2
CMPQ DX,$0x1
JE sw1
JMP afterSwitch
sw7: MOVBQZX 6(SI)(DI*1),DX
SHLQ $0x30,DX
ORQ DX,R11
sw6: MOVBQZX 0x5(SI)(DI*1),DX
SHLQ $0x28,DX
ORQ DX,R11
sw5: MOVBQZX 0x4(SI)(DI*1),DX
SHLQ $0x20,DX
ORQ DX,R11
sw4: MOVBQZX 0x3(SI)(DI*1),DX
SHLQ $0x18,DX
ORQ DX,R11
sw3: MOVBQZX 0x2(SI)(DI*1),DX
SHLQ $0x10,DX
ORQ DX,R11
sw2: MOVBQZX 0x1(SI)(DI*1),DX
SHLQ $0x8,DX
ORQ DX,R11
sw1: MOVBQZX 0(SI)(DI*1),DX
ORQ DX,R11
afterSwitch:
LEAQ (AX)(R9*1),SI
XORQ R11,CX
RORQ $0x33,AX
ADDQ CX,BX
MOVQ CX,DX
XORQ SI,AX
RORQ $0x30,DX
RORQ $0x20,SI
LEAQ 0(BX)(AX*1),CX
XORQ BX,DX
RORQ $0x2F,AX
ADDQ DX,SI
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
XORQ SI,AX
RORQ $0x30,DX
RORQ $0x20,SI
XORQ CX,DX
ADDQ AX,CX
RORQ $0x2F,AX
ADDQ DX,SI
XORQ CX,AX
RORQ $0x2B,DX
RORQ $0x20,CX
XORQ SI,DX
XORQ R11,SI
XORB $0xEE,CL
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
RORQ $0x30,DX
XORQ SI,AX
XORQ CX,DX
RORQ $0x20,SI
ADDQ AX,CX
ADDQ DX,SI
RORQ $0x2F,AX
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
ADDQ DX,CX
RORQ $0x33,AX
RORQ $0x30,DX
XORQ SI,AX
RORQ $0x20,SI
XORQ CX,DX
ADDQ AX,CX
RORQ $0x2F,AX
ADDQ DX,SI
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
ADDQ DX,CX
RORQ $0x33,AX
RORQ $0x30,DX
XORQ CX,DX
XORQ SI,AX
RORQ $0x20,SI
ADDQ DX,SI
ADDQ AX,CX
RORQ $0x2F,AX
XORQ CX,AX
RORQ $0x2B,DX
RORQ $0x20,CX
XORQ SI,DX
// gcc optimized the tail end of this function differently. However,
// we need to preserve out registers to carry out the second stage of
// the finalization. This is a duplicate of an earlier finalization
// round.
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
RORQ $0x30,DX
XORQ SI,AX
XORQ CX,DX
RORQ $0x20,SI
ADDQ AX,CX
ADDQ DX,SI
RORQ $0x2F,AX
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
// Stuff the result into BX instead of AX as gcc had done
MOVQ SI,BX
XORQ AX,BX
XORQ DX,BX
XORQ CX,BX
MOVQ BX,ret+40(FP)
// Start the second finalization round
XORB $0xDD,AL
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
RORQ $0x30,DX
XORQ SI,AX
XORQ CX,DX
RORQ $0x20,SI
ADDQ AX,CX
ADDQ DX,SI
RORQ $0x2F,AX
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
ADDQ DX,CX
RORQ $0x33,AX
RORQ $0x30,DX
XORQ SI,AX
RORQ $0x20,SI
XORQ CX,DX
ADDQ AX,CX
RORQ $0x2F,AX
ADDQ DX,SI
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
ADDQ DX,CX
RORQ $0x33,AX
RORQ $0x30,DX
XORQ CX,DX
XORQ SI,AX
RORQ $0x20,SI
ADDQ DX,SI
ADDQ AX,CX
RORQ $0x2F,AX
XORQ CX,AX
RORQ $0x2B,DX
RORQ $0x20,CX
XORQ SI,DX
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
RORQ $0x30,DX
XORQ SI,AX
XORQ CX,DX
RORQ $0x20,SI
ADDQ AX,CX
ADDQ DX,SI
RORQ $0x2F,AX
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
MOVQ SI,BX
XORQ AX,BX
XORQ DX,BX
XORQ CX,BX
MOVQ BX,ret1+48(FP)
RET

201
vendor/github.com/dchest/siphash/hash_amd64.s generated vendored Normal file
View File

@ -0,0 +1,201 @@
// +build amd64,!appengine,!gccgo
// This is a translation of the gcc output of FloodyBerry's pure-C public
// domain siphash implementation at https://github.com/floodyberry/siphash
// func Hash(k0, k1 uint64, b []byte) uint64
TEXT ·Hash(SB),4,$0-48
MOVQ k0+0(FP),CX
MOVQ $0x736F6D6570736575,R9
MOVQ k1+8(FP),DI
MOVQ $0x6C7967656E657261,BX
MOVQ $0x646F72616E646F6D,AX
MOVQ b_len+24(FP),DX
MOVQ DX,R11
MOVQ DX,R10
XORQ CX,R9
XORQ CX,BX
MOVQ $0x7465646279746573,CX
XORQ DI,AX
XORQ DI,CX
SHLQ $0x38,R11
XORQ DI,DI
MOVQ b_base+16(FP),SI
ANDQ $0xFFFFFFFFFFFFFFF8,R10
JE afterLoop
XCHGQ AX,AX
loopBody:
MOVQ 0(SI)(DI*1),R8
ADDQ AX,R9
RORQ $0x33,AX
XORQ R9,AX
RORQ $0x20,R9
ADDQ $0x8,DI
XORQ R8,CX
ADDQ CX,BX
RORQ $0x30,CX
XORQ BX,CX
ADDQ AX,BX
RORQ $0x2F,AX
ADDQ CX,R9
RORQ $0x2B,CX
XORQ BX,AX
XORQ R9,CX
RORQ $0x20,BX
ADDQ AX,R9
ADDQ CX,BX
RORQ $0x33,AX
RORQ $0x30,CX
XORQ R9,AX
XORQ BX,CX
RORQ $0x20,R9
ADDQ AX,BX
ADDQ CX,R9
RORQ $0x2F,AX
RORQ $0x2B,CX
XORQ BX,AX
RORQ $0x20,BX
XORQ R9,CX
XORQ R8,R9
CMPQ R10,DI
JA loopBody
afterLoop:
SUBQ R10,DX
CMPQ DX,$0x7
JA afterSwitch
// no support for jump tables
CMPQ DX,$0x7
JE sw7
CMPQ DX,$0x6
JE sw6
CMPQ DX,$0x5
JE sw5
CMPQ DX,$0x4
JE sw4
CMPQ DX,$0x3
JE sw3
CMPQ DX,$0x2
JE sw2
CMPQ DX,$0x1
JE sw1
JMP afterSwitch
sw7: MOVBQZX 6(SI)(DI*1),DX
SHLQ $0x30,DX
ORQ DX,R11
sw6: MOVBQZX 0x5(SI)(DI*1),DX
SHLQ $0x28,DX
ORQ DX,R11
sw5: MOVBQZX 0x4(SI)(DI*1),DX
SHLQ $0x20,DX
ORQ DX,R11
sw4: MOVBQZX 0x3(SI)(DI*1),DX
SHLQ $0x18,DX
ORQ DX,R11
sw3: MOVBQZX 0x2(SI)(DI*1),DX
SHLQ $0x10,DX
ORQ DX,R11
sw2: MOVBQZX 0x1(SI)(DI*1),DX
SHLQ $0x8,DX
ORQ DX,R11
sw1: MOVBQZX 0(SI)(DI*1),DX
ORQ DX,R11
afterSwitch:
LEAQ (AX)(R9*1),SI
XORQ R11,CX
RORQ $0x33,AX
ADDQ CX,BX
MOVQ CX,DX
XORQ SI,AX
RORQ $0x30,DX
RORQ $0x20,SI
LEAQ 0(BX)(AX*1),CX
XORQ BX,DX
RORQ $0x2F,AX
ADDQ DX,SI
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
XORQ SI,AX
RORQ $0x30,DX
RORQ $0x20,SI
XORQ CX,DX
ADDQ AX,CX
RORQ $0x2F,AX
ADDQ DX,SI
XORQ CX,AX
RORQ $0x2B,DX
RORQ $0x20,CX
XORQ SI,DX
XORQ R11,SI
XORB $0xFF,CL
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
RORQ $0x30,DX
XORQ SI,AX
XORQ CX,DX
RORQ $0x20,SI
ADDQ AX,CX
ADDQ DX,SI
RORQ $0x2F,AX
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
ADDQ DX,CX
RORQ $0x33,AX
RORQ $0x30,DX
XORQ SI,AX
RORQ $0x20,SI
XORQ CX,DX
ADDQ AX,CX
RORQ $0x2F,AX
ADDQ DX,SI
RORQ $0x2B,DX
XORQ CX,AX
XORQ SI,DX
RORQ $0x20,CX
ADDQ AX,SI
ADDQ DX,CX
RORQ $0x33,AX
RORQ $0x30,DX
XORQ CX,DX
XORQ SI,AX
RORQ $0x20,SI
ADDQ DX,SI
ADDQ AX,CX
RORQ $0x2F,AX
XORQ CX,AX
RORQ $0x2B,DX
RORQ $0x20,CX
XORQ SI,DX
ADDQ AX,SI
RORQ $0x33,AX
ADDQ DX,CX
XORQ SI,AX
RORQ $0x30,DX
XORQ CX,DX
ADDQ AX,CX
RORQ $0x2F,AX
XORQ CX,AX
RORQ $0x2B,DX
RORQ $0x20,CX
XORQ DX,AX
XORQ CX,AX
MOVQ AX,ret+40(FP)
RET

27
vendor/github.com/dchest/siphash/hash_arm.go generated vendored Normal file
View File

@ -0,0 +1,27 @@
// +build arm
package siphash
// NB: The ARM implementation forgoes extra speed for Hash()
// and Hash128() by simply reusing the same blocks() implementation
// in assembly used by the streaming hash.
func Hash(k0, k1 uint64, p []byte) uint64 {
var d digest
d.size = Size
d.k0 = k0
d.k1 = k1
d.Reset()
d.Write(p)
return d.Sum64()
}
func Hash128(k0, k1 uint64, p []byte) (uint64, uint64) {
var d digest
d.size = Size128
d.k0 = k0
d.k1 = k1
d.Reset()
d.Write(p)
return d.sum128()
}

24
vendor/github.com/dchest/siphash/hash_asm.go generated vendored Normal file
View File

@ -0,0 +1,24 @@
// +build amd64,!appengine,!gccgo
// Written in 2012 by Dmitry Chestnykh.
//
// To the extent possible under law, the author has dedicated all copyright
// and related and neighboring rights to this software to the public domain
// worldwide. This software is distributed without any warranty.
// http://creativecommons.org/publicdomain/zero/1.0/
// This file contains a function definition for use with assembly implementations of Hash()
package siphash
//go:noescape
// Hash returns the 64-bit SipHash-2-4 of the given byte slice with two 64-bit
// parts of 128-bit key: k0 and k1.
func Hash(k0, k1 uint64, b []byte) uint64
//go:noescape
// Hash128 returns the 128-bit SipHash-2-4 of the given byte slice with two
// 64-bit parts of 128-bit key: k0 and k1.
func Hash128(k0, k1 uint64, b []byte) (uint64, uint64)

318
vendor/github.com/dchest/siphash/siphash.go generated vendored Normal file
View File

@ -0,0 +1,318 @@
// Written in 2012-2014 by Dmitry Chestnykh.
//
// To the extent possible under law, the author has dedicated all copyright
// and related and neighboring rights to this software to the public domain
// worldwide. This software is distributed without any warranty.
// http://creativecommons.org/publicdomain/zero/1.0/
// Package siphash implements SipHash-2-4, a fast short-input PRF
// created by Jean-Philippe Aumasson and Daniel J. Bernstein.
package siphash
import "hash"
const (
// BlockSize is the block size of hash algorithm in bytes.
BlockSize = 8
// Size is the size of hash output in bytes.
Size = 8
// Size128 is the size of 128-bit hash output in bytes.
Size128 = 16
)
type digest struct {
v0, v1, v2, v3 uint64 // state
k0, k1 uint64 // two parts of key
x [8]byte // buffer for unprocessed bytes
nx int // number of bytes in buffer x
size int // output size in bytes (8 or 16)
t uint8 // message bytes counter (mod 256)
}
// newDigest returns a new digest with the given output size in bytes (must be 8 or 16).
func newDigest(size int, key []byte) *digest {
if size != Size && size != Size128 {
panic("size must be 8 or 16")
}
d := new(digest)
d.k0 = uint64(key[0]) | uint64(key[1])<<8 | uint64(key[2])<<16 | uint64(key[3])<<24 |
uint64(key[4])<<32 | uint64(key[5])<<40 | uint64(key[6])<<48 | uint64(key[7])<<56
d.k1 = uint64(key[8]) | uint64(key[9])<<8 | uint64(key[10])<<16 | uint64(key[11])<<24 |
uint64(key[12])<<32 | uint64(key[13])<<40 | uint64(key[14])<<48 | uint64(key[15])<<56
d.size = size
d.Reset()
return d
}
// New returns a new hash.Hash64 computing SipHash-2-4 with 16-byte key and 8-byte output.
func New(key []byte) hash.Hash64 {
return newDigest(Size, key)
}
// New128 returns a new hash.Hash computing SipHash-2-4 with 16-byte key and 16-byte output.
//
// Note that 16-byte output is considered experimental by SipHash authors at this time.
func New128(key []byte) hash.Hash {
return newDigest(Size128, key)
}
func (d *digest) Reset() {
d.v0 = d.k0 ^ 0x736f6d6570736575
d.v1 = d.k1 ^ 0x646f72616e646f6d
d.v2 = d.k0 ^ 0x6c7967656e657261
d.v3 = d.k1 ^ 0x7465646279746573
d.t = 0
d.nx = 0
if d.size == Size128 {
d.v1 ^= 0xee
}
}
func (d *digest) Size() int { return d.size }
func (d *digest) BlockSize() int { return BlockSize }
func (d *digest) Write(p []byte) (nn int, err error) {
nn = len(p)
d.t += uint8(nn)
if d.nx > 0 {
n := len(p)
if n > BlockSize-d.nx {
n = BlockSize - d.nx
}
d.nx += copy(d.x[d.nx:], p)
if d.nx == BlockSize {
once(d)
d.nx = 0
}
p = p[n:]
}
if len(p) >= BlockSize {
n := len(p) &^ (BlockSize - 1)
blocks(d, p[:n])
p = p[n:]
}
if len(p) > 0 {
d.nx = copy(d.x[:], p)
}
return
}
func (d *digest) Sum64() uint64 {
for i := d.nx; i < BlockSize-1; i++ {
d.x[i] = 0
}
d.x[7] = d.t
return finalize(d)
}
func (d0 *digest) sum128() (r0, r1 uint64) {
// Make a copy of d0 so that caller can keep writing and summing.
d := *d0
for i := d.nx; i < BlockSize-1; i++ {
d.x[i] = 0
}
d.x[7] = d.t
blocks(&d, d.x[:])
v0, v1, v2, v3 := d.v0, d.v1, d.v2, d.v3
v2 ^= 0xee
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 3.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 4.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
r0 = v0 ^ v1 ^ v2 ^ v3
v1 ^= 0xdd
// Round 1.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 2.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 3.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
// Round 4.
v0 += v1
v1 = v1<<13 | v1>>(64-13)
v1 ^= v0
v0 = v0<<32 | v0>>(64-32)
v2 += v3
v3 = v3<<16 | v3>>(64-16)
v3 ^= v2
v0 += v3
v3 = v3<<21 | v3>>(64-21)
v3 ^= v0
v2 += v1
v1 = v1<<17 | v1>>(64-17)
v1 ^= v2
v2 = v2<<32 | v2>>(64-32)
r1 = v0 ^ v1 ^ v2 ^ v3
return r0, r1
}
func (d *digest) Sum(in []byte) []byte {
if d.size == Size {
r := d.Sum64()
in = append(in,
byte(r),
byte(r>>8),
byte(r>>16),
byte(r>>24),
byte(r>>32),
byte(r>>40),
byte(r>>48),
byte(r>>56))
} else {
r0, r1 := d.sum128()
in = append(in,
byte(r0),
byte(r0>>8),
byte(r0>>16),
byte(r0>>24),
byte(r0>>32),
byte(r0>>40),
byte(r0>>48),
byte(r0>>56),
byte(r1),
byte(r1>>8),
byte(r1>>16),
byte(r1>>24),
byte(r1>>32),
byte(r1>>40),
byte(r1>>48),
byte(r1>>56))
}
return in
}

24
vendor/github.com/ginuerzh/gosocks4/.gitignore generated vendored Normal file
View File

@ -0,0 +1,24 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test
*.prof

21
vendor/github.com/ginuerzh/gosocks4/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2017 ginuerzh
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

2
vendor/github.com/ginuerzh/gosocks4/README.md generated vendored Normal file
View File

@ -0,0 +1,2 @@
# gosocks4
golang and SOCKS4(a)

252
vendor/github.com/ginuerzh/gosocks4/socks4.go generated vendored Normal file
View File

@ -0,0 +1,252 @@
// SOCKS Protocol Version 4(a)
// https://www.openssh.com/txt/socks4.protocol
// https://www.openssh.com/txt/socks4a.protocol
package gosocks4
import (
"bufio"
"encoding/binary"
"errors"
"fmt"
"io"
"net"
"strconv"
)
const (
Ver4 = 4
)
const (
CmdConnect uint8 = 1
CmdBind = 2
)
const (
AddrIPv4 = 0
AddrDomain = 1
)
const (
Granted = 90
Failed = 91
Rejected = 92
RejectedUserid = 93
)
var (
ErrBadVersion = errors.New("Bad version")
ErrBadFormat = errors.New("Bad format")
ErrBadAddrType = errors.New("Bad address type")
ErrShortDataLength = errors.New("Short data length")
ErrBadCmd = errors.New("Bad Command")
)
type Addr struct {
Type int
Host string
Port uint16
}
func (addr *Addr) Decode(b []byte) error {
if len(b) < 6 {
return ErrShortDataLength
}
addr.Port = binary.BigEndian.Uint16(b[0:2])
addr.Host = net.IP(b[2 : 2+net.IPv4len]).String()
if b[2]|b[3]|b[4] == 0 && b[5] != 0 {
addr.Type = AddrDomain
}
return nil
}
func (addr *Addr) Encode(b []byte) error {
if len(b) < 6 {
return ErrShortDataLength
}
binary.BigEndian.PutUint16(b[0:2], addr.Port)
switch addr.Type {
case AddrIPv4:
ip4 := net.ParseIP(addr.Host).To4()
if ip4 == nil {
return ErrBadAddrType
}
copy(b[2:], ip4)
case AddrDomain:
ip4 := net.IPv4(0, 0, 0, 1)
copy(b[2:], ip4.To4())
default:
return ErrBadAddrType
}
return nil
}
func (addr *Addr) String() string {
return net.JoinHostPort(addr.Host, strconv.Itoa(int(addr.Port)))
}
/*
+----+----+----+----+----+----+----+----+----+----+....+----+
| VN | CD | DSTPORT | DSTIP | USERID |NULL|
+----+----+----+----+----+----+----+----+----+----+....+----+
1 1 2 4 variable 1
*/
type Request struct {
Cmd uint8
Addr *Addr
Userid []byte
}
func NewRequest(cmd uint8, addr *Addr, userid []byte) *Request {
return &Request{
Cmd: cmd,
Addr: addr,
Userid: userid,
}
}
func ReadRequest(r io.Reader) (*Request, error) {
br := bufio.NewReader(r)
b, err := br.Peek(8)
if err != nil {
return nil, err
}
if b[0] != Ver4 {
return nil, ErrBadVersion
}
request := &Request{
Cmd: b[1],
}
addr := &Addr{}
if err := addr.Decode(b[2:8]); err != nil {
return nil, err
}
request.Addr = addr
if _, err := br.Discard(8); err != nil {
return nil, err
}
b, err = br.ReadBytes(0)
if err != nil {
return nil, err
}
request.Userid = b[:len(b)-1]
if request.Addr.Type == AddrDomain {
b, err = br.ReadBytes(0)
if err != nil {
return nil, err
}
request.Addr.Host = string(b[:len(b)-1])
}
return request, nil
}
func (r *Request) Write(w io.Writer) (err error) {
bw := bufio.NewWriter(w)
bw.Write([]byte{Ver4, r.Cmd})
if r.Addr == nil {
return ErrBadAddrType
}
var b [6]byte
if err = r.Addr.Encode(b[:]); err != nil {
return
}
bw.Write(b[:])
if len(r.Userid) > 0 {
bw.Write(r.Userid)
}
bw.WriteByte(0)
if r.Addr.Type == AddrDomain {
bw.WriteString(r.Addr.Host)
bw.WriteByte(0)
}
return bw.Flush()
}
func (r *Request) String() string {
addr := r.Addr
if addr == nil {
addr = &Addr{}
}
return fmt.Sprintf("%d %d %s", Ver4, r.Cmd, addr.String())
}
/*
+----+----+----+----+----+----+----+----+
| VN | CD | DSTPORT | DSTIP |
+----+----+----+----+----+----+----+----+
1 1 2 4
*/
type Reply struct {
Code uint8
Addr *Addr
}
func NewReply(code uint8, addr *Addr) *Reply {
return &Reply{
Code: code,
Addr: addr,
}
}
func ReadReply(r io.Reader) (*Reply, error) {
var b [8]byte
_, err := io.ReadFull(r, b[:])
if err != nil {
return nil, err
}
if b[0] != 0 {
return nil, ErrBadVersion
}
reply := &Reply{
Code: b[1],
}
reply.Addr = &Addr{}
if err := reply.Addr.Decode(b[2:]); err != nil {
return nil, err
}
return reply, nil
}
func (r *Reply) Write(w io.Writer) (err error) {
var b [8]byte
b[1] = r.Code
if r.Addr != nil {
if err = r.Addr.Encode(b[2:]); err != nil {
return
}
}
_, err = w.Write(b[:])
return
}
func (r *Reply) String() string {
addr := r.Addr
if addr == nil {
addr = &Addr{}
}
return fmt.Sprintf("0 %d %s", r.Code, addr.String())
}
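// Usage sketch (illustrative only; conn is assumed to be an established
// net.Conn to a SOCKS4 server):
//
//	req := gosocks4.NewRequest(gosocks4.CmdConnect,
//		&gosocks4.Addr{Type: gosocks4.AddrIPv4, Host: "66.102.7.99", Port: 80},
//		[]byte("Fred"))
//	if err := req.Write(conn); err != nil {
//		// handle error
//	}
//	rep, err := gosocks4.ReadReply(conn)
//	if err == nil && rep.Code == gosocks4.Granted {
//		// conn now relays traffic to 66.102.7.99:80
//	}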

152
vendor/github.com/ginuerzh/gosocks4/socks4.protocol.txt generated vendored Normal file
View File

@ -0,0 +1,152 @@
SOCKS: A protocol for TCP proxy across firewalls
Ying-Da Lee
Principal Member Technical Staff
NEC Systems Laboratory, CSTC
ylee@syl.dl.nec.com
SOCKS was originally developed by David Koblas and subsequently modified
and extended by me to its current running version -- version 4. It is a
protocol that relays TCP sessions at a firewall host to allow application
users transparent access across the firewall. Because the protocol is
independent of application protocols, it can be (and has been) used for
many different services, such as telnet, ftp, finger, whois, gopher, WWW,
etc. Access control can be applied at the beginning of each TCP session;
thereafter the server simply relays the data between the client and the
application server, incurring minimum processing overhead. Since SOCKS
never has to know anything about the application protocol, it should also
be easy for it to accommodate applications which use encryption to protect
their traffic from nosey snoopers.
Two operations are defined: CONNECT and BIND.
1) CONNECT
The client connects to the SOCKS server and sends a CONNECT request when
it wants to establish a connection to an application server. The client
includes in the request packet the IP address and the port number of the
destination host, and userid, in the following format.
+----+----+----+----+----+----+----+----+----+----+....+----+
| VN | CD | DSTPORT | DSTIP | USERID |NULL|
+----+----+----+----+----+----+----+----+----+----+....+----+
# of bytes: 1 1 2 4 variable 1
VN is the SOCKS protocol version number and should be 4. CD is the
SOCKS command code and should be 1 for CONNECT request. NULL is a byte
of all zero bits.
The SOCKS server checks to see whether such a request should be granted
based on any combination of source IP address, destination IP address,
destination port number, the userid, and information it may obtain by
consulting IDENT, cf. RFC 1413. If the request is granted, the SOCKS
server makes a connection to the specified port of the destination host.
A reply packet is sent to the client when this connection is established,
or when the request is rejected or the operation fails.
+----+----+----+----+----+----+----+----+
| VN | CD | DSTPORT | DSTIP |
+----+----+----+----+----+----+----+----+
# of bytes: 1 1 2 4
VN is the version of the reply code and should be 0. CD is the result
code with one of the following values:
90: request granted
91: request rejected or failed
92: request rejected because SOCKS server cannot connect to
identd on the client
93: request rejected because the client program and identd
report different user-ids
The remaining fields are ignored.
The SOCKS server closes its connection immediately after notifying
the client of a failed or rejected request. For a successful request,
the SOCKS server gets ready to relay traffic in both directions. This
enables the client to do I/O on its connection as if it were directly
connected to the application server.
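For example, a CONNECT request for destination 66.102.7.99, port 80, with
USERID "Fred" is the 13-byte packet 04 01 00 50 42 66 07 63 46 72 65 64 00
(hex); a granted reply begins 00 5A, followed by six bytes that the client
ignores.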
2) BIND
The client connects to the SOCKS server and sends a BIND request when
it wants to prepare for an inbound connection from an application server.
This should only happen after a primary connection to the application
server has been established with a CONNECT. Typically, this is part of
the sequence of actions:
-bind(): obtain a socket
-getsockname(): get the IP address and port number of the socket
-listen(): ready to accept call from the application server
-use the primary connection to inform the application server of
the IP address and the port number that it should connect to.
-accept(): accept a connection from the application server
The purpose of SOCKS BIND operation is to support such a sequence
but using a socket on the SOCKS server rather than on the client.
The client includes in the request packet the IP address of the
application server, the destination port used in the primary connection,
and the userid.
+----+----+----+----+----+----+----+----+----+----+....+----+
| VN | CD | DSTPORT | DSTIP | USERID |NULL|
+----+----+----+----+----+----+----+----+----+----+....+----+
# of bytes: 1 1 2 4 variable 1
VN is again 4 for the SOCKS protocol version number. CD must be 2 to
indicate BIND request.
The SOCKS server uses the client information to decide whether the
request is to be granted. The reply it sends back to the client has
the same format as the reply for CONNECT request, i.e.,
+----+----+----+----+----+----+----+----+
| VN | CD | DSTPORT | DSTIP |
+----+----+----+----+----+----+----+----+
# of bytes: 1 1 2 4
VN is the version of the reply code and should be 0. CD is the result
code with one of the following values:
90: request granted
91: request rejected or failed
92: request rejected because SOCKS server cannot connect to
identd on the client
93: request rejected because the client program and identd
report different user-ids.
However, for a granted request (CD is 90), the DSTPORT and DSTIP fields
are meaningful. In that case, the SOCKS server obtains a socket to wait
for an incoming connection and sends the port number and the IP address
of that socket to the client in DSTPORT and DSTIP, respectively. If the
DSTIP in the reply is 0 (the value of constant INADDR_ANY), then the
client should replace it by the IP address of the SOCKS server to which
the client is connected. (This happens if the SOCKS server is not a
multi-homed host.) In the typical scenario, these two numbers are
made available to the application client program via the result of the
subsequent getsockname() call. The application protocol must provide a
way for these two pieces of information to be sent from the client to
the application server so that it can initiate the connection, which
connects it to the SOCKS server rather than directly to the application
client as it normally would.
The SOCKS server sends a second reply packet to the client when the
anticipated connection from the application server is established.
The SOCKS server checks the IP address of the originating host against
the value of DSTIP specified in the client's BIND request. If a mismatch
is found, the CD field in the second reply is set to 91 and the SOCKS
server closes both connections. If the two match, CD in the second
reply is set to 90 and the SOCKS server gets ready to relay the traffic
on its two connections. From then on the client does I/O on its connection
to the SOCKS server as if it were directly connected to the application
server.
For both CONNECT and BIND operations, the server sets a time limit
(2 minutes in current CSTC implementation) for the establishment of its
connection with the application server. If the connection is still not
established when the time limit expires, the server closes its connection
to the client and gives up.

View File

@ -0,0 +1,39 @@
SOCKS 4A: A Simple Extension to SOCKS 4 Protocol
Ying-Da Lee
yingda@best.com or yingda@esd.sgi.com
Please read SOCKS4.protocol first for a description of the version 4
protocol. This extension is intended to allow the use of SOCKS on hosts
which are not capable of resolving all domain names.
In version 4, the client sends the following packet to the SOCKS server
to request a CONNECT or a BIND operation:
+----+----+----+----+----+----+----+----+----+----+....+----+
| VN | CD | DSTPORT | DSTIP | USERID |NULL|
+----+----+----+----+----+----+----+----+----+----+....+----+
# of bytes: 1 1 2 4 variable 1
VN is the SOCKS protocol version number and should be 4. CD is the
SOCKS command code and should be 1 for CONNECT or 2 for BIND. NULL
is a byte of all zero bits.
For version 4A, if the client cannot resolve the destination host's
domain name to find its IP address, it should set the first three bytes
of DSTIP to NULL and the last byte to a non-zero value. (This corresponds
to IP address 0.0.0.x, with x nonzero. As decreed by IANA -- The
Internet Assigned Numbers Authority -- such an address is inadmissible
as a destination IP address and thus should never occur if the client
can resolve the domain name.) Following the NULL byte terminating
USERID, the client must send the destination domain name and terminate
it with another NULL byte. This is used for both CONNECT and BIND requests.
A server using protocol 4A must check the DSTIP in the request packet.
If it represents address 0.0.0.x with nonzero x, the server must read
in the domain name that the client sends in the packet. The server
should resolve the domain name and make connection to the destination
host if it can.
SOCKSified sockd may pass domain names that it cannot resolve to
the next-hop SOCKS server.
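A sketch of a 4A CONNECT request for an unresolvable host (illustrative only; the helper below is not part of the vendored code):
```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// socks4aConnect builds a version 4A CONNECT request: DSTIP is set to
// 0.0.0.1 and the destination domain name follows the userid.
func socks4aConnect(host string, port uint16, userid string) []byte {
	buf := &bytes.Buffer{}
	buf.WriteByte(4) // VN
	buf.WriteByte(1) // CD: CONNECT
	binary.Write(buf, binary.BigEndian, port)
	buf.Write([]byte{0, 0, 0, 1}) // 0.0.0.x with x nonzero signals "domain name follows"
	buf.WriteString(userid)
	buf.WriteByte(0) // NULL after USERID
	buf.WriteString(host)
	buf.WriteByte(0) // NULL after the domain name
	return buf.Bytes()
}

func main() {
	fmt.Printf("% x\n", socks4aConnect("example.org", 80, "alice"))
}
```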

23
vendor/github.com/ginuerzh/gosocks5/.gitignore generated vendored Normal file
View File

@ -0,0 +1,23 @@
# Compiled Object files, Static and Dynamic libs (Shared Objects)
*.o
*.a
*.so
# Folders
_obj
_test
# Architecture specific extensions/prefixes
*.[568vq]
[568vq].out
*.cgo1.go
*.cgo2.c
_cgo_defun.c
_cgo_gotypes.go
_cgo_export.*
_testmain.go
*.exe
*.test

21
vendor/github.com/ginuerzh/gosocks5/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 郑锐
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

4
vendor/github.com/ginuerzh/gosocks5/README.md generated vendored Normal file
View File

@ -0,0 +1,4 @@
gosocks5
========
A SOCKS version 5 protocol implementation in Go.

171
vendor/github.com/ginuerzh/gosocks5/conn.go generated vendored Normal file
View File

@ -0,0 +1,171 @@
package gosocks5
import (
"io"
//"log"
"net"
"sync"
"time"
)
type Selector interface {
// return supported methods
Methods() []uint8
// select method
Select(methods ...uint8) (method uint8)
// on method selected
OnSelected(method uint8, conn net.Conn) (net.Conn, error)
}
type Conn struct {
c net.Conn
selector Selector
method uint8
isClient bool
handshaked bool
handshakeMutex sync.Mutex
handshakeErr error
}
func ClientConn(conn net.Conn, selector Selector) *Conn {
return &Conn{
c: conn,
selector: selector,
isClient: true,
}
}
func ServerConn(conn net.Conn, selector Selector) *Conn {
return &Conn{
c: conn,
selector: selector,
}
}
func (conn *Conn) Handleshake() error {
conn.handshakeMutex.Lock()
defer conn.handshakeMutex.Unlock()
if err := conn.handshakeErr; err != nil {
return err
}
if conn.handshaked {
return nil
}
if conn.isClient {
conn.handshakeErr = conn.clientHandshake()
} else {
conn.handshakeErr = conn.serverHandshake()
}
return conn.handshakeErr
}
func (conn *Conn) clientHandshake() error {
var methods []uint8
var nm int
if conn.selector != nil {
methods = conn.selector.Methods()
}
nm = len(methods)
if nm == 0 {
nm = 1
}
b := make([]byte, 2+nm)
b[0] = Ver5
b[1] = uint8(nm)
copy(b[2:], methods)
if _, err := conn.c.Write(b); err != nil {
return err
}
if _, err := io.ReadFull(conn.c, b[:2]); err != nil {
return err
}
if b[0] != Ver5 {
return ErrBadVersion
}
if conn.selector != nil {
c, err := conn.selector.OnSelected(b[1], conn.c)
if err != nil {
return err
}
conn.c = c
}
conn.method = b[1]
//log.Println("method:", conn.method)
conn.handshaked = true
return nil
}
func (conn *Conn) serverHandshake() error {
methods, err := ReadMethods(conn.c)
if err != nil {
return err
}
method := MethodNoAuth
if conn.selector != nil {
method = conn.selector.Select(methods...)
}
if _, err := conn.c.Write([]byte{Ver5, method}); err != nil {
return err
}
if conn.selector != nil {
c, err := conn.selector.OnSelected(method, conn.c)
if err != nil {
return err
}
conn.c = c
}
conn.method = method
//log.Println("method:", method)
conn.handshaked = true
return nil
}
func (conn *Conn) Read(b []byte) (n int, err error) {
if err = conn.Handleshake(); err != nil {
return
}
return conn.c.Read(b)
}
func (conn *Conn) Write(b []byte) (n int, err error) {
if err = conn.Handleshake(); err != nil {
return
}
return conn.c.Write(b)
}
func (conn *Conn) Close() error {
return conn.c.Close()
}
func (conn *Conn) LocalAddr() net.Addr {
return conn.c.LocalAddr()
}
func (conn *Conn) RemoteAddr() net.Addr {
return conn.c.RemoteAddr()
}
func (conn *Conn) SetDeadline(t time.Time) error {
return conn.c.SetDeadline(t)
}
func (conn *Conn) SetReadDeadline(t time.Time) error {
return conn.c.SetReadDeadline(t)
}
func (conn *Conn) SetWriteDeadline(t time.Time) error {
return conn.c.SetWriteDeadline(t)
}
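For orientation, here is a minimal sketch of a Selector that offers only NO AUTHENTICATION and of wiring it into ClientConn; the server address is a placeholder and the snippet is not part of the vendored sources:
```go
package main

import (
	"log"
	"net"

	"github.com/ginuerzh/gosocks5"
)

// noAuthSelector offers and accepts only the NO AUTHENTICATION REQUIRED method.
type noAuthSelector struct{}

func (noAuthSelector) Methods() []uint8 { return []uint8{gosocks5.MethodNoAuth} }

func (noAuthSelector) Select(methods ...uint8) uint8 {
	for _, m := range methods {
		if m == gosocks5.MethodNoAuth {
			return m
		}
	}
	return gosocks5.MethodNoAcceptable
}

// OnSelected leaves the connection untouched; an authenticating selector
// could wrap conn here after its subnegotiation.
func (noAuthSelector) OnSelected(method uint8, conn net.Conn) (net.Conn, error) {
	return conn, nil
}

func main() {
	c, err := net.Dial("tcp", "127.0.0.1:1080") // assumed local SOCKS5 server
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	sc := gosocks5.ClientConn(c, noAuthSelector{})
	if err := sc.Handleshake(); err != nil { // method name as spelled in conn.go
		log.Fatal(err)
	}
	log.Println("method negotiated")
}
```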

507
vendor/github.com/ginuerzh/gosocks5/rfc1928.txt generated vendored Normal file
View File

@ -0,0 +1,507 @@
Network Working Group M. Leech
Request for Comments: 1928 Bell-Northern Research Ltd
Category: Standards Track M. Ganis
International Business Machines
Y. Lee
NEC Systems Laboratory
R. Kuris
Unify Corporation
D. Koblas
Independent Consultant
L. Jones
Hewlett-Packard Company
March 1996
SOCKS Protocol Version 5
Status of this Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
Acknowledgments
This memo describes a protocol that is an evolution of the previous
version of the protocol, version 4 [1]. This new protocol stems from
active discussions and prototype implementations. The key
contributors are: Marcus Leech: Bell-Northern Research, David Koblas:
Independent Consultant, Ying-Da Lee: NEC Systems Laboratory, LaMont
Jones: Hewlett-Packard Company, Ron Kuris: Unify Corporation, Matt
Ganis: International Business Machines.
1. Introduction
The use of network firewalls, systems that effectively isolate an
organization's internal network structure from an exterior network,
such as the INTERNET, is becoming increasingly popular. These
firewall systems typically act as application-layer gateways between
networks, usually offering controlled TELNET, FTP, and SMTP access.
With the emergence of more sophisticated application layer protocols
designed to facilitate global information discovery, there exists a
need to provide a general framework for these protocols to
transparently and securely traverse a firewall.
Leech, et al Standards Track [Page 1]
RFC 1928 SOCKS Protocol Version 5 March 1996
There exists, also, a need for strong authentication of such
traversal in as fine-grained a manner as is practical. This
requirement stems from the realization that client-server
relationships emerge between the networks of various organizations,
and that such relationships need to be controlled and often strongly
authenticated.
The protocol described here is designed to provide a framework for
client-server applications in both the TCP and UDP domains to
conveniently and securely use the services of a network firewall.
The protocol is conceptually a "shim-layer" between the application
layer and the transport layer, and as such does not provide network-
layer gateway services, such as forwarding of ICMP messages.
2. Existing practice
There currently exists a protocol, SOCKS Version 4, that provides for
unsecured firewall traversal for TCP-based client-server
applications, including TELNET, FTP and the popular information-
discovery protocols such as HTTP, WAIS and GOPHER.
This new protocol extends the SOCKS Version 4 model to include UDP,
and extends the framework to include provisions for generalized
strong authentication schemes, and extends the addressing scheme to
encompass domain-name and V6 IP addresses.
The implementation of the SOCKS protocol typically involves the
recompilation or relinking of TCP-based client applications to use
the appropriate encapsulation routines in the SOCKS library.
Note:
Unless otherwise noted, the decimal numbers appearing in packet-
format diagrams represent the length of the corresponding field, in
octets. Where a given octet must take on a specific value, the
syntax X'hh' is used to denote the value of the single octet in that
field. When the word 'Variable' is used, it indicates that the
corresponding field has a variable length defined either by an
associated (one or two octet) length field, or by a data type field.
3. Procedure for TCP-based clients
When a TCP-based client wishes to establish a connection to an object
that is reachable only via a firewall (such determination is left up
to the implementation), it must open a TCP connection to the
appropriate SOCKS port on the SOCKS server system. The SOCKS service
is conventionally located on TCP port 1080. If the connection
request succeeds, the client enters a negotiation for the
Leech, et al Standards Track [Page 2]
RFC 1928 SOCKS Protocol Version 5 March 1996
authentication method to be used, authenticates with the chosen
method, then sends a relay request. The SOCKS server evaluates the
request, and either establishes the appropriate connection or denies
it.
Unless otherwise noted, the decimal numbers appearing in packet-
format diagrams represent the length of the corresponding field, in
octets. Where a given octet must take on a specific value, the
syntax X'hh' is used to denote the value of the single octet in that
field. When the word 'Variable' is used, it indicates that the
corresponding field has a variable length defined either by an
associated (one or two octet) length field, or by a data type field.
The client connects to the server, and sends a version
identifier/method selection message:
+----+----------+----------+
|VER | NMETHODS | METHODS |
+----+----------+----------+
| 1 | 1 | 1 to 255 |
+----+----------+----------+
The VER field is set to X'05' for this version of the protocol. The
NMETHODS field contains the number of method identifier octets that
appear in the METHODS field.
The server selects from one of the methods given in METHODS, and
sends a METHOD selection message:
+----+--------+
|VER | METHOD |
+----+--------+
| 1 | 1 |
+----+--------+
If the selected METHOD is X'FF', none of the methods listed by the
client are acceptable, and the client MUST close the connection.
The values currently defined for METHOD are:
o X'00' NO AUTHENTICATION REQUIRED
o X'01' GSSAPI
o X'02' USERNAME/PASSWORD
o X'03' to X'7F' IANA ASSIGNED
o X'80' to X'FE' RESERVED FOR PRIVATE METHODS
o X'FF' NO ACCEPTABLE METHODS
The client and server then enter a method-specific sub-negotiation.
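A bare-bones sketch of this exchange from the client side, offering only NO AUTHENTICATION (illustrative; the server address is a placeholder):
```go
package main

import (
	"fmt"
	"io"
	"net"
)

// negotiateNoAuth sends VER=5, NMETHODS=1, METHODS=[X'00'] and reads the
// server's two-octet method selection message.
func negotiateNoAuth(conn net.Conn) (byte, error) {
	if _, err := conn.Write([]byte{0x05, 0x01, 0x00}); err != nil {
		return 0, err
	}
	var resp [2]byte
	if _, err := io.ReadFull(conn, resp[:]); err != nil {
		return 0, err
	}
	if resp[0] != 0x05 {
		return 0, fmt.Errorf("bad version %#x", resp[0])
	}
	if resp[1] == 0xFF {
		return 0, fmt.Errorf("no acceptable methods")
	}
	return resp[1], nil
}

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:1080") // assumed SOCKS server
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println(negotiateNoAuth(conn))
}
```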
Leech, et al Standards Track [Page 3]
RFC 1928 SOCKS Protocol Version 5 March 1996
Descriptions of the method-dependent sub-negotiations appear in
separate memos.
Developers of new METHOD support for this protocol should contact
IANA for a METHOD number. The ASSIGNED NUMBERS document should be
referred to for a current list of METHOD numbers and their
corresponding protocols.
Compliant implementations MUST support GSSAPI and SHOULD support
USERNAME/PASSWORD authentication methods.
4. Requests
Once the method-dependent subnegotiation has completed, the client
sends the request details. If the negotiated method includes
encapsulation for purposes of integrity checking and/or
confidentiality, these requests MUST be encapsulated in the method-
dependent encapsulation.
The SOCKS request is formed as follows:
+----+-----+-------+------+----------+----------+
|VER | CMD | RSV | ATYP | DST.ADDR | DST.PORT |
+----+-----+-------+------+----------+----------+
| 1 | 1 | X'00' | 1 | Variable | 2 |
+----+-----+-------+------+----------+----------+
Where:
o VER protocol version: X'05'
o CMD
o CONNECT X'01'
o BIND X'02'
o UDP ASSOCIATE X'03'
o RSV RESERVED
o ATYP address type of following address
o IP V4 address: X'01'
o DOMAINNAME: X'03'
o IP V6 address: X'04'
o DST.ADDR desired destination address
o DST.PORT desired destination port in network octet
order
The SOCKS server will typically evaluate the request based on source
and destination addresses, and return one or more reply messages, as
appropriate for the request type.
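Using the gosocks5 package vendored later in this commit, and assuming the method negotiation has already happened on conn, a CONNECT request can be written roughly like this (the target address is a placeholder):
```go
package main

import (
	"log"
	"net"

	"github.com/ginuerzh/gosocks5"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:1080") // assumed SOCKS5 server, negotiation already done
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	addr, err := gosocks5.NewAddr("example.org:443") // becomes ATYP/DST.ADDR/DST.PORT
	if err != nil {
		log.Fatal(err)
	}
	req := gosocks5.NewRequest(gosocks5.CmdConnect, addr)
	if err := req.Write(conn); err != nil { // writes VER, CMD, RSV, ATYP, DST.ADDR, DST.PORT
		log.Fatal(err)
	}
}
```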
Leech, et al Standards Track [Page 4]
RFC 1928 SOCKS Protocol Version 5 March 1996
5. Addressing
In an address field (DST.ADDR, BND.ADDR), the ATYP field specifies
the type of address contained within the field:
o X'01'
the address is a version-4 IP address, with a length of 4 octets
o X'03'
the address field contains a fully-qualified domain name. The first
octet of the address field contains the number of octets of name that
follow; there is no terminating NUL octet.
o X'04'
the address is a version-6 IP address, with a length of 16 octets.
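The Addr type in the vendored gosocks5 package mirrors this ATYP encoding; NewAddr picks the address type from the host string, as this small sketch shows:
```go
package main

import (
	"fmt"

	"github.com/ginuerzh/gosocks5"
)

func main() {
	for _, s := range []string{"192.0.2.7:80", "[2001:db8::1]:80", "example.org:80"} {
		a, err := gosocks5.NewAddr(s)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// Type is AddrIPv4 (1), AddrDomain (3) or AddrIPv6 (4), matching ATYP above.
		fmt.Println(a.Type, a)
	}
}
```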
6. Replies
The SOCKS request information is sent by the client as soon as it has
established a connection to the SOCKS server, and completed the
authentication negotiations. The server evaluates the request, and
returns a reply formed as follows:
+----+-----+-------+------+----------+----------+
|VER | REP | RSV | ATYP | BND.ADDR | BND.PORT |
+----+-----+-------+------+----------+----------+
| 1 | 1 | X'00' | 1 | Variable | 2 |
+----+-----+-------+------+----------+----------+
Where:
o VER protocol version: X'05'
o REP Reply field:
o X'00' succeeded
o X'01' general SOCKS server failure
o X'02' connection not allowed by ruleset
o X'03' Network unreachable
o X'04' Host unreachable
o X'05' Connection refused
o X'06' TTL expired
o X'07' Command not supported
o X'08' Address type not supported
o X'09' to X'FF' unassigned
o RSV RESERVED
o ATYP address type of following address
Leech, et al Standards Track [Page 5]
RFC 1928 SOCKS Protocol Version 5 March 1996
o IP V4 address: X'01'
o DOMAINNAME: X'03'
o IP V6 address: X'04'
o BND.ADDR server bound address
o BND.PORT server bound port in network octet order
Fields marked RESERVED (RSV) must be set to X'00'.
If the chosen method includes encapsulation for purposes of
authentication, integrity and/or confidentiality, the replies are
encapsulated in the method-dependent encapsulation.
CONNECT
In the reply to a CONNECT, BND.PORT contains the port number that the
server assigned to connect to the target host, while BND.ADDR
contains the associated IP address. The supplied BND.ADDR is often
different from the IP address that the client uses to reach the SOCKS
server, since such servers are often multi-homed. It is expected
that the SOCKS server will use DST.ADDR and DST.PORT, and the
client-side source address and port in evaluating the CONNECT
request.
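With the vendored gosocks5 types, checking the reply to a CONNECT looks roughly like this; the sketch assumes the negotiation and request have already been sent on conn:
```go
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/ginuerzh/gosocks5"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:1080") // assumed: negotiation and CONNECT request already done
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rep, err := gosocks5.ReadReply(conn)
	if err != nil {
		log.Fatal(err)
	}
	if rep.Rep != gosocks5.Succeeded { // any REP other than X'00' is a failure
		log.Fatalf("request failed: %d", rep.Rep)
	}
	fmt.Println("bound address:", rep.Addr) // BND.ADDR and BND.PORT
}
```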
BIND
The BIND request is used in protocols which require the client to
accept connections from the server. FTP is a well-known example,
which uses the primary client-to-server connection for commands and
status reports, but may use a server-to-client connection for
transferring data on demand (e.g. LS, GET, PUT).
It is expected that the client side of an application protocol will
use the BIND request only to establish secondary connections after a
primary connection is established using CONNECT. It is expected that
a SOCKS server will use DST.ADDR and DST.PORT in evaluating the BIND
request.
Two replies are sent from the SOCKS server to the client during a
BIND operation. The first is sent after the server creates and binds
a new socket. The BND.PORT field contains the port number that the
SOCKS server assigned to listen for an incoming connection. The
BND.ADDR field contains the associated IP address. The client will
typically use these pieces of information to notify (via the primary
or control connection) the application server of the rendezvous
address. The second reply occurs only after the anticipated incoming
connection succeeds or fails.
Leech, et al Standards Track [Page 6]
RFC 1928 SOCKS Protocol Version 5 March 1996
In the second reply, the BND.PORT and BND.ADDR fields contain the
address and port number of the connecting host.
UDP ASSOCIATE
The UDP ASSOCIATE request is used to establish an association within
the UDP relay process to handle UDP datagrams. The DST.ADDR and
DST.PORT fields contain the address and port that the client expects
to use to send UDP datagrams on for the association. The server MAY
use this information to limit access to the association. If the
client is not in possession of the information at the time of the UDP
ASSOCIATE, the client MUST use a port number and address of all
zeros.
A UDP association terminates when the TCP connection that the UDP
ASSOCIATE request arrived on terminates.
In the reply to a UDP ASSOCIATE request, the BND.PORT and BND.ADDR
fields indicate the port number/address where the client MUST send
UDP request messages to be relayed.
Reply Processing
When a reply (REP value other than X'00') indicates a failure, the
SOCKS server MUST terminate the TCP connection shortly after sending
the reply. This must be no more than 10 seconds after detecting the
condition that caused a failure.
If the reply code (REP value of X'00') indicates a success, and the
request was either a BIND or a CONNECT, the client may now start
passing data. If the selected authentication method supports
encapsulation for the purposes of integrity, authentication and/or
confidentiality, the data are encapsulated using the method-dependent
encapsulation. Similarly, when data arrives at the SOCKS server for
the client, the server MUST encapsulate the data as appropriate for
the authentication method in use.
7. Procedure for UDP-based clients
A UDP-based client MUST send its datagrams to the UDP relay server at
the UDP port indicated by BND.PORT in the reply to the UDP ASSOCIATE
request. If the selected authentication method provides
encapsulation for the purposes of authenticity, integrity, and/or
confidentiality, the datagram MUST be encapsulated using the
appropriate encapsulation. Each UDP datagram carries a UDP request
header with it:
Leech, et al Standards Track [Page 7]
RFC 1928 SOCKS Protocol Version 5 March 1996
+----+------+------+----------+----------+----------+
|RSV | FRAG | ATYP | DST.ADDR | DST.PORT | DATA |
+----+------+------+----------+----------+----------+
| 2 | 1 | 1 | Variable | 2 | Variable |
+----+------+------+----------+----------+----------+
The fields in the UDP request header are:
o RSV Reserved X'0000'
o FRAG Current fragment number
o ATYP address type of following addresses:
o IP V4 address: X'01'
o DOMAINNAME: X'03'
o IP V6 address: X'04'
o DST.ADDR desired destination address
o DST.PORT desired destination port
o DATA user data
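The vendored gosocks5 package models this header with its UDPHeader and UDPDatagram types; a sketch of wrapping one payload (destination and payload are placeholders):
```go
package main

import (
	"bytes"
	"fmt"
	"log"

	"github.com/ginuerzh/gosocks5"
)

func main() {
	dst, err := gosocks5.NewAddr("example.org:53") // DST.ADDR / DST.PORT
	if err != nil {
		log.Fatal(err)
	}
	// RSV = 0 and FRAG = 0: a standalone, unfragmented datagram.
	h := gosocks5.NewUDPHeader(0, 0, dst)
	d := gosocks5.NewUDPDatagram(h, []byte("payload"))

	buf := &bytes.Buffer{}
	if err := d.Write(buf); err != nil {
		log.Fatal(err)
	}
	// buf now holds the encapsulated datagram, ready to send to the relay's BND.PORT.
	fmt.Printf("% x\n", buf.Bytes())
}
```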
When a UDP relay server decides to relay a UDP datagram, it does so
silently, without any notification to the requesting client.
Similarly, it will drop datagrams it cannot or will not relay. When
a UDP relay server receives a reply datagram from a remote host, it
MUST encapsulate that datagram using the above UDP request header,
and any authentication-method-dependent encapsulation.
The UDP relay server MUST acquire from the SOCKS server the expected
IP address of the client that will send datagrams to the BND.PORT
given in the reply to UDP ASSOCIATE. It MUST drop any datagrams
arriving from any source IP address other than the one recorded for
the particular association.
The FRAG field indicates whether or not this datagram is one of a
number of fragments. If implemented, the high-order bit indicates
end-of-fragment sequence, while a value of X'00' indicates that this
datagram is standalone. Values between 1 and 127 indicate the
fragment position within a fragment sequence. Each receiver will
have a REASSEMBLY QUEUE and a REASSEMBLY TIMER associated with these
fragments. The reassembly queue must be reinitialized and the
associated fragments abandoned whenever the REASSEMBLY TIMER expires,
or a new datagram arrives carrying a FRAG field whose value is less
than the highest FRAG value processed for this fragment sequence.
The reassembly timer MUST be no less than 5 seconds. It is
recommended that fragmentation be avoided by applications wherever
possible.
Implementation of fragmentation is optional; an implementation that
does not support fragmentation MUST drop any datagram whose FRAG
field is other than X'00'.
Leech, et al Standards Track [Page 8]
RFC 1928 SOCKS Protocol Version 5 March 1996
The programming interface for a SOCKS-aware UDP MUST report an
available buffer space for UDP datagrams that is smaller than the
actual space provided by the operating system:
o if ATYP is X'01' - 10+method_dependent octets smaller
o if ATYP is X'03' - 262+method_dependent octets smaller
o if ATYP is X'04' - 20+method_dependent octets smaller
8. Security Considerations
This document describes a protocol for the application-layer
traversal of IP network firewalls. The security of such traversal is
highly dependent on the particular authentication and encapsulation
methods provided in a particular implementation, and selected during
negotiation between SOCKS client and SOCKS server.
Careful consideration should be given by the administrator to the
selection of authentication methods.
9. References
[1] Koblas, D., "SOCKS", Proceedings: 1992 Usenix Security Symposium.
Author's Address
Marcus Leech
Bell-Northern Research Ltd
P.O. Box 3511, Stn. C,
Ottawa, ON
CANADA K1Y 4H7
Phone: (613) 763-9145
EMail: mleech@bnr.ca
Leech, et al Standards Track [Page 9]

115
vendor/github.com/ginuerzh/gosocks5/rfc1929.txt generated vendored Normal file
View File

@ -0,0 +1,115 @@
Network Working Group M. Leech
Request for Comments: 1929 Bell-Northern Research Ltd
Category: Standards Track March 1996
Username/Password Authentication for SOCKS V5
Status of this Memo
This document specifies an Internet standards track protocol for the
Internet community, and requests discussion and suggestions for
improvements. Please refer to the current edition of the "Internet
Official Protocol Standards" (STD 1) for the standardization state
and status of this protocol. Distribution of this memo is unlimited.
1. Introduction
The protocol specification for SOCKS Version 5 specifies a
generalized framework for the use of arbitrary authentication
protocols in the initial socks connection setup. This document
describes one of those protocols, as it fits into the SOCKS Version 5
authentication "subnegotiation".
Note:
Unless otherwise noted, the decimal numbers appearing in packet-
format diagrams represent the length of the corresponding field, in
octets. Where a given octet must take on a specific value, the
syntax X'hh' is used to denote the value of the single octet in that
field. When the word 'Variable' is used, it indicates that the
corresponding field has a variable length defined either by an
associated (one or two octet) length field, or by a data type field.
2. Initial negotiation
Once the SOCKS V5 server has started, and the client has selected the
Username/Password Authentication protocol, the Username/Password
subnegotiation begins. This begins with the client producing a
Username/Password request:
+----+------+----------+------+----------+
|VER | ULEN | UNAME | PLEN | PASSWD |
+----+------+----------+------+----------+
| 1 | 1 | 1 to 255 | 1 | 1 to 255 |
+----+------+----------+------+----------+
Leech Standards Track [Page 1]
RFC 1929 Username Authentication for SOCKS V5 March 1996
The VER field contains the current version of the subnegotiation,
which is X'01'. The ULEN field contains the length of the UNAME field
that follows. The UNAME field contains the username as known to the
source operating system. The PLEN field contains the length of the
PASSWD field that follows. The PASSWD field contains the password
associated with the given UNAME.
The server verifies the supplied UNAME and PASSWD, and sends the
following response:
+----+--------+
|VER | STATUS |
+----+--------+
| 1 | 1 |
+----+--------+
A STATUS field of X'00' indicates success. If the server returns a
`failure' (STATUS value other than X'00') status, it MUST close the
connection.
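The vendored gosocks5 package implements this subnegotiation directly. A client-side sketch, assuming the USERNAME/PASSWORD method has already been selected and with placeholder credentials:
```go
package main

import (
	"log"
	"net"

	"github.com/ginuerzh/gosocks5"
)

func main() {
	conn, err := net.Dial("tcp", "127.0.0.1:1080") // assumed: method X'02' already negotiated
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	req := gosocks5.NewUserPassRequest(gosocks5.UserPassVer, "alice", "secret")
	if err := req.Write(conn); err != nil {
		log.Fatal(err)
	}
	res, err := gosocks5.ReadUserPassResponse(conn)
	if err != nil {
		log.Fatal(err)
	}
	if res.Status != 0 { // STATUS other than X'00': failure, server will close the connection
		log.Fatalf("authentication failed: status %d", res.Status)
	}
	log.Println("authenticated")
}
```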
3. Security Considerations
This document describes a subnegotiation that provides authentication
services to the SOCKS protocol. Since the request carries the
password in cleartext, this subnegotiation is not recommended for
environments where "sniffing" is possible and practical.
4. Author's Address
Marcus Leech
Bell-Northern Research Ltd
P.O. Box 3511, Station C
Ottawa, ON
CANADA K1Y 4H7
Phone: +1 613 763 9145
EMail: mleech@bnr.ca
Leech Standards Track [Page 2]

700
vendor/github.com/ginuerzh/gosocks5/socks5.go generated vendored Normal file
View File

@ -0,0 +1,700 @@
// SOCKS Protocol Version 5
// http://tools.ietf.org/html/rfc1928
// http://tools.ietf.org/html/rfc1929
package gosocks5
import (
"bytes"
"encoding/binary"
"errors"
"fmt"
"io"
"io/ioutil"
"net"
"strconv"
"sync"
)
const (
Ver5 = 5
UserPassVer = 1
)
const (
MethodNoAuth uint8 = iota
MethodGSSAPI
MethodUserPass
// X'03' to X'7F' IANA ASSIGNED
// X'80' to X'FE' RESERVED FOR PRIVATE METHODS
MethodNoAcceptable = 0xFF
)
const (
CmdConnect uint8 = 1
CmdBind = 2
CmdUdp = 3
)
const (
AddrIPv4 uint8 = 1
AddrDomain = 3
AddrIPv6 = 4
)
const (
Succeeded uint8 = iota
Failure
NotAllowed
NetUnreachable
HostUnreachable
ConnRefused
TTLExpired
CmdUnsupported
AddrUnsupported
)
var (
ErrBadVersion = errors.New("Bad version")
ErrBadFormat = errors.New("Bad format")
ErrBadAddrType = errors.New("Bad address type")
ErrShortBuffer = errors.New("Short buffer")
ErrBadMethod = errors.New("Bad method")
ErrAuthFailure = errors.New("Auth failure")
)
// buffer pools
var (
sPool = sync.Pool{
New: func() interface{} {
return make([]byte, 576)
},
} // small buff pool
lPool = sync.Pool{
New: func() interface{} {
return make([]byte, 64*1024+262)
},
} // large buff pool for udp
)
/*
Method selection
+----+----------+----------+
|VER | NMETHODS | METHODS |
+----+----------+----------+
| 1 | 1 | 1 to 255 |
+----+----------+----------+
*/
func ReadMethods(r io.Reader) ([]uint8, error) {
//b := make([]byte, 257)
b := sPool.Get().([]byte)
defer sPool.Put(b)
n, err := io.ReadAtLeast(r, b, 2)
if err != nil {
return nil, err
}
if b[0] != Ver5 {
return nil, ErrBadVersion
}
if b[1] == 0 {
return nil, ErrBadMethod
}
length := 2 + int(b[1])
if n < length {
if _, err := io.ReadFull(r, b[n:length]); err != nil {
return nil, err
}
}
methods := make([]byte, int(b[1]))
copy(methods, b[2:length])
return methods, nil
}
func WriteMethod(method uint8, w io.Writer) error {
_, err := w.Write([]byte{Ver5, method})
return err
}
/*
Username/Password authentication request
+----+------+----------+------+----------+
|VER | ULEN | UNAME | PLEN | PASSWD |
+----+------+----------+------+----------+
| 1 | 1 | 1 to 255 | 1 | 1 to 255 |
+----+------+----------+------+----------+
*/
type UserPassRequest struct {
Version byte
Username string
Password string
}
func NewUserPassRequest(ver byte, u, p string) *UserPassRequest {
return &UserPassRequest{
Version: ver,
Username: u,
Password: p,
}
}
func ReadUserPassRequest(r io.Reader) (*UserPassRequest, error) {
// b := make([]byte, 513)
b := sPool.Get().([]byte)
defer sPool.Put(b)
n, err := io.ReadAtLeast(r, b, 2)
if err != nil {
return nil, err
}
if b[0] != UserPassVer {
return nil, ErrBadVersion
}
req := &UserPassRequest{
Version: b[0],
}
ulen := int(b[1])
length := ulen + 3
if n < length {
if _, err := io.ReadFull(r, b[n:length]); err != nil {
return nil, err
}
n = length
}
req.Username = string(b[2 : 2+ulen])
plen := int(b[length-1])
length += plen
if n < length {
if _, err := io.ReadFull(r, b[n:length]); err != nil {
return nil, err
}
}
req.Password = string(b[3+ulen : length])
return req, nil
}
func (req *UserPassRequest) Write(w io.Writer) error {
// b := make([]byte, 513)
b := sPool.Get().([]byte)
defer sPool.Put(b)
b[0] = req.Version
ulen := len(req.Username)
b[1] = byte(ulen)
length := 2 + ulen
copy(b[2:length], req.Username)
plen := len(req.Password)
b[length] = byte(plen)
length++
copy(b[length:length+plen], req.Password)
length += plen
_, err := w.Write(b[:length])
return err
}
func (req *UserPassRequest) String() string {
return fmt.Sprintf("%d %s:%s",
req.Version, req.Username, req.Password)
}
/*
Username/Password authentication response
+----+--------+
|VER | STATUS |
+----+--------+
| 1 | 1 |
+----+--------+
*/
type UserPassResponse struct {
Version byte
Status byte
}
func NewUserPassResponse(ver, status byte) *UserPassResponse {
return &UserPassResponse{
Version: ver,
Status: status,
}
}
func ReadUserPassResponse(r io.Reader) (*UserPassResponse, error) {
// b := make([]byte, 2)
b := sPool.Get().([]byte)
defer sPool.Put(b)
if _, err := io.ReadFull(r, b[:2]); err != nil {
return nil, err
}
if b[0] != UserPassVer {
return nil, ErrBadVersion
}
res := &UserPassResponse{
Version: b[0],
Status: b[1],
}
return res, nil
}
func (res *UserPassResponse) Write(w io.Writer) error {
_, err := w.Write([]byte{res.Version, res.Status})
return err
}
func (res *UserPassResponse) String() string {
return fmt.Sprintf("%d %d",
res.Version, res.Status)
}
/*
Address
+------+----------+----------+
| ATYP | ADDR | PORT |
+------+----------+----------+
| 1 | Variable | 2 |
+------+----------+----------+
*/
type Addr struct {
Type uint8
Host string
Port uint16
}
func NewAddr(sa string) (addr *Addr, err error) {
host, sport, err := net.SplitHostPort(sa)
if err != nil {
return nil, err
}
port, err := strconv.Atoi(sport)
if err != nil {
return nil, err
}
addr = &Addr{
Type: AddrDomain,
Host: host,
Port: uint16(port),
}
if ip := net.ParseIP(host); ip != nil {
if ip.To4() != nil {
addr.Type = AddrIPv4
} else {
addr.Type = AddrIPv6
}
}
return
}
func (addr *Addr) Decode(b []byte) error {
addr.Type = b[0]
pos := 1
switch addr.Type {
case AddrIPv4:
addr.Host = net.IP(b[pos : pos+net.IPv4len]).String()
pos += net.IPv4len
case AddrIPv6:
addr.Host = net.IP(b[pos : pos+net.IPv6len]).String()
pos += net.IPv6len
case AddrDomain:
addrlen := int(b[pos])
pos++
addr.Host = string(b[pos : pos+addrlen])
pos += addrlen
default:
return ErrBadAddrType
}
addr.Port = binary.BigEndian.Uint16(b[pos:])
return nil
}
func (addr *Addr) Encode(b []byte) (int, error) {
b[0] = addr.Type
pos := 1
switch addr.Type {
case AddrIPv4:
ip4 := net.ParseIP(addr.Host).To4()
if ip4 == nil {
ip4 = net.IPv4zero.To4()
}
pos += copy(b[pos:], ip4)
case AddrDomain:
b[pos] = byte(len(addr.Host))
pos++
pos += copy(b[pos:], []byte(addr.Host))
case AddrIPv6:
ip16 := net.ParseIP(addr.Host).To16()
if ip16 == nil {
ip16 = net.IPv6zero.To16()
}
pos += copy(b[pos:], ip16)
default:
b[0] = AddrIPv4
copy(b[pos:pos+4], net.IPv4zero.To4())
pos += 4
}
binary.BigEndian.PutUint16(b[pos:], addr.Port)
pos += 2
return pos, nil
}
func (addr *Addr) Length() (n int) {
switch addr.Type {
case AddrIPv4:
n = 10
case AddrIPv6:
n = 22
case AddrDomain:
n = 7 + len(addr.Host)
default:
n = 10
}
return
}
func (addr *Addr) String() string {
return net.JoinHostPort(addr.Host, strconv.Itoa(int(addr.Port)))
}
/*
The SOCKSv5 request
+----+-----+-------+------+----------+----------+
|VER | CMD | RSV | ATYP | DST.ADDR | DST.PORT |
+----+-----+-------+------+----------+----------+
| 1 | 1 | X'00' | 1 | Variable | 2 |
+----+-----+-------+------+----------+----------+
*/
type Request struct {
Cmd uint8
Addr *Addr
}
func NewRequest(cmd uint8, addr *Addr) *Request {
return &Request{
Cmd: cmd,
Addr: addr,
}
}
func ReadRequest(r io.Reader) (*Request, error) {
// b := make([]byte, 262)
b := sPool.Get().([]byte)
defer sPool.Put(b)
n, err := io.ReadAtLeast(r, b, 5)
if err != nil {
return nil, err
}
if b[0] != Ver5 {
return nil, ErrBadVersion
}
request := &Request{
Cmd: b[1],
}
atype := b[3]
length := 0
switch atype {
case AddrIPv4:
length = 10
case AddrIPv6:
length = 22
case AddrDomain:
length = 7 + int(b[4])
default:
return nil, ErrBadAddrType
}
if n < length {
if _, err := io.ReadFull(r, b[n:length]); err != nil {
return nil, err
}
}
addr := new(Addr)
if err := addr.Decode(b[3:length]); err != nil {
return nil, err
}
request.Addr = addr
return request, nil
}
func (r *Request) Write(w io.Writer) (err error) {
//b := make([]byte, 262)
b := sPool.Get().([]byte)
defer sPool.Put(b)
b[0] = Ver5
b[1] = r.Cmd
b[2] = 0 //rsv
b[3] = AddrIPv4 // default
addr := r.Addr
if addr == nil {
addr = &Addr{}
}
n, _ := addr.Encode(b[3:])
length := 3 + n
_, err = w.Write(b[:length])
return
}
func (r *Request) String() string {
addr := r.Addr
if addr == nil {
addr = &Addr{}
}
return fmt.Sprintf("5 %d 0 %d %s",
r.Cmd, addr.Type, addr.String())
}
/*
The SOCKSv5 reply
+----+-----+-------+------+----------+----------+
|VER | REP | RSV | ATYP | BND.ADDR | BND.PORT |
+----+-----+-------+------+----------+----------+
| 1 | 1 | X'00' | 1 | Variable | 2 |
+----+-----+-------+------+----------+----------+
*/
type Reply struct {
Rep uint8
Addr *Addr
}
func NewReply(rep uint8, addr *Addr) *Reply {
return &Reply{
Rep: rep,
Addr: addr,
}
}
func ReadReply(r io.Reader) (*Reply, error) {
// b := make([]byte, 262)
b := sPool.Get().([]byte)
defer sPool.Put(b)
n, err := io.ReadAtLeast(r, b, 5)
if err != nil {
return nil, err
}
if b[0] != Ver5 {
return nil, ErrBadVersion
}
reply := &Reply{
Rep: b[1],
}
atype := b[3]
length := 0
switch atype {
case AddrIPv4:
length = 10
case AddrIPv6:
length = 22
case AddrDomain:
length = 7 + int(b[4])
default:
return nil, ErrBadAddrType
}
if n < length {
if _, err := io.ReadFull(r, b[n:length]); err != nil {
return nil, err
}
}
addr := new(Addr)
if err := addr.Decode(b[3:length]); err != nil {
return nil, err
}
reply.Addr = addr
return reply, nil
}
func (r *Reply) Write(w io.Writer) (err error) {
// b := make([]byte, 262)
b := sPool.Get().([]byte)
defer sPool.Put(b)
b[0] = Ver5
b[1] = r.Rep
b[2] = 0 //rsv
b[3] = AddrIPv4 // default
length := 10
b[4], b[5], b[6], b[7], b[8], b[9] = 0, 0, 0, 0, 0, 0 // reset address field
if r.Addr != nil {
n, _ := r.Addr.Encode(b[3:])
length = 3 + n
}
_, err = w.Write(b[:length])
return
}
func (r *Reply) String() string {
addr := r.Addr
if addr == nil {
addr = &Addr{}
}
return fmt.Sprintf("5 %d 0 %d %s",
r.Rep, addr.Type, addr.String())
}
/*
UDP request
+----+------+------+----------+----------+----------+
|RSV | FRAG | ATYP | DST.ADDR | DST.PORT | DATA |
+----+------+------+----------+----------+----------+
| 2 | 1 | 1 | Variable | 2 | Variable |
+----+------+------+----------+----------+----------+
*/
type UDPHeader struct {
Rsv uint16
Frag uint8
Addr *Addr
}
func NewUDPHeader(rsv uint16, frag uint8, addr *Addr) *UDPHeader {
return &UDPHeader{
Rsv: rsv,
Frag: frag,
Addr: addr,
}
}
func (h *UDPHeader) Write(w io.Writer) error {
b := sPool.Get().([]byte)
defer sPool.Put(b)
binary.BigEndian.PutUint16(b[:2], h.Rsv)
b[2] = h.Frag
addr := h.Addr
if addr == nil {
addr = &Addr{}
}
length, _ := addr.Encode(b[3:])
_, err := w.Write(b[:3+length])
return err
}
func (h *UDPHeader) String() string {
return fmt.Sprintf("%d %d %d %s",
h.Rsv, h.Frag, h.Addr.Type, h.Addr.String())
}
type UDPDatagram struct {
Header *UDPHeader
Data []byte
}
func NewUDPDatagram(header *UDPHeader, data []byte) *UDPDatagram {
return &UDPDatagram{
Header: header,
Data: data,
}
}
func ReadUDPDatagram(r io.Reader) (*UDPDatagram, error) {
b := lPool.Get().([]byte)
defer lPool.Put(b)
// When r is a stream (such as a TCP connection), we may read more data than required,
// and we would not know how to hand the excess back. So we use io.ReadFull instead of
// io.ReadAtLeast to make sure that no redundant data is read and then discarded.
n, err := io.ReadFull(r, b[:5])
if err != nil {
return nil, err
}
header := &UDPHeader{
Rsv: binary.BigEndian.Uint16(b[:2]),
Frag: b[2],
}
atype := b[3]
hlen := 0
switch atype {
case AddrIPv4:
hlen = 10
case AddrIPv6:
hlen = 22
case AddrDomain:
hlen = 7 + int(b[4])
default:
return nil, ErrBadAddrType
}
dlen := int(header.Rsv)
if dlen == 0 { // standard SOCKS5 UDP datagram
extra, err := ioutil.ReadAll(r) // we assume no redundant data
if err != nil {
return nil, err
}
copy(b[n:], extra)
n += len(extra) // total length
dlen = n - hlen // data length
} else { // extended feature, for UDP over TCP, using reserved field as data length
if _, err := io.ReadFull(r, b[n:hlen+dlen]); err != nil {
return nil, err
}
n = hlen + dlen
}
header.Addr = new(Addr)
if err := header.Addr.Decode(b[3:hlen]); err != nil {
return nil, err
}
data := make([]byte, dlen)
copy(data, b[hlen:n])
d := &UDPDatagram{
Header: header,
Data: data,
}
return d, nil
}
func (d *UDPDatagram) Write(w io.Writer) error {
h := d.Header
if h == nil {
h = &UDPHeader{}
}
buf := bytes.Buffer{}
if err := h.Write(&buf); err != nil {
return err
}
if _, err := buf.Write(d.Data); err != nil {
return err
}
_, err := buf.WriteTo(w)
return err
}

14
vendor/github.com/ginuerzh/tls-dissector/.gitignore generated vendored Normal file
View File

@ -0,0 +1,14 @@
# Binaries for programs and plugins
*.exe
*.dll
*.so
*.dylib
# Test binary, build with `go test -c`
*.test
# Output of the go coverage tool, specifically when used with LiteIDE
*.out
# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
.glide/

21
vendor/github.com/ginuerzh/tls-dissector/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
MIT License
Copyright (c) 2017 ginuerzh
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

2
vendor/github.com/ginuerzh/tls-dissector/README.md generated vendored Normal file
View File

@ -0,0 +1,2 @@
# tls-dissector
Dissect TLS handshake messages

85
vendor/github.com/ginuerzh/tls-dissector/extension.go generated vendored Normal file
View File

@ -0,0 +1,85 @@
package dissector
import (
"bytes"
"encoding/binary"
"io"
)
const (
ExtServerName uint16 = 0x0000
)
type Extension interface {
Type() uint16
Bytes() []byte
}
func ReadExtension(r io.Reader) (ext Extension, err error) {
b := make([]byte, 4)
if _, err = io.ReadFull(r, b); err != nil {
return
}
bb := make([]byte, int(binary.BigEndian.Uint16(b[2:4])))
if _, err = io.ReadFull(r, bb); err != nil {
return nil, err
}
t := binary.BigEndian.Uint16(b[:2])
switch t {
case ExtServerName:
ext = &ServerNameExtension{
NameType: bb[2],
Name: string(bb[5:]),
}
default:
ext = &unknownExtension{
raw: append(b, bb...),
}
}
return
}
type unknownExtension struct {
raw []byte
}
func NewExtension(t uint16, data []byte) Extension {
ext := &unknownExtension{
raw: make([]byte, 2+2+len(data)),
}
binary.BigEndian.PutUint16(ext.raw[:2], t)
binary.BigEndian.PutUint16(ext.raw[2:4], uint16(len(data)))
copy(ext.raw[4:], data)
return ext
}
func (ext *unknownExtension) Type() uint16 {
return binary.BigEndian.Uint16(ext.raw)
}
func (ext *unknownExtension) Bytes() []byte {
return ext.raw
}
type ServerNameExtension struct {
NameType uint8
Name string
}
func (ext *ServerNameExtension) Type() uint16 {
return ExtServerName
}
func (ext *ServerNameExtension) Bytes() []byte {
buf := &bytes.Buffer{}
binary.Write(buf, binary.BigEndian, ExtServerName)
binary.Write(buf, binary.BigEndian, uint16(2+1+2+len(ext.Name)))
binary.Write(buf, binary.BigEndian, uint16(1+2+len(ext.Name)))
buf.WriteByte(ext.NameType)
binary.Write(buf, binary.BigEndian, uint16(len(ext.Name)))
buf.WriteString(ext.Name)
return buf.Bytes()
}

238
vendor/github.com/ginuerzh/tls-dissector/handshake.go generated vendored Normal file
View File

@ -0,0 +1,238 @@
package dissector
import (
"bytes"
"crypto/tls"
"encoding/binary"
"fmt"
"io"
)
const (
handshakeHeaderLen = 4
)
const (
HelloRequest = 0
ClientHello = 1
ServerHello = 2
)
type Random struct {
Time uint32
Opaque [28]byte
}
type CipherSuite uint16
type CompressionMethod uint8
type ClientHelloHandshake struct {
Version Version
Random Random
SessionID []byte
CipherSuites []CipherSuite
CompressionMethods []CompressionMethod
Extensions []Extension
}
func (h *ClientHelloHandshake) Encode() (data []byte, err error) {
buf := new(bytes.Buffer)
if _, err = h.WriteTo(buf); err != nil {
return
}
data = buf.Bytes()
return
}
func (h *ClientHelloHandshake) Decode(data []byte) (err error) {
_, err = h.ReadFrom(bytes.NewReader(data))
return
}
func (h *ClientHelloHandshake) ReadFrom(r io.Reader) (n int64, err error) {
b := make([]byte, handshakeHeaderLen)
nn, err := io.ReadFull(r, b)
n += int64(nn)
if err != nil {
return
}
if b[0] != ClientHello {
err = ErrBadType
return
}
length := int(b[1])<<16 | int(b[2])<<8 | int(b[3])
if length < 34 { // length of version + random
err = fmt.Errorf("bad length, need at least 34 bytes, got %d", length)
return
}
b = make([]byte, length)
nn, err = io.ReadFull(r, b)
n += int64(nn)
if err != nil {
return
}
h.Version = Version(binary.BigEndian.Uint16(b[:2]))
if h.Version < tls.VersionTLS12 {
err = fmt.Errorf("bad version: only TLSv1.2 is supported")
return
}
pos := 2
h.Random.Time = binary.BigEndian.Uint32(b[pos : pos+4])
pos += 4
copy(h.Random.Opaque[:], b[pos:pos+28])
pos += 28
nn, err = h.readSession(b[pos:])
if err != nil {
return
}
pos += nn
nn, err = h.readCipherSuites(b[pos:])
if err != nil {
return
}
pos += nn
nn, err = h.readCompressionMethods(b[pos:])
if err != nil {
return
}
pos += nn
nn, err = h.readExtensions(b[pos:])
if err != nil {
return
}
// pos += nn
return
}
func (h *ClientHelloHandshake) readSession(b []byte) (n int, err error) {
if len(b) == 0 {
err = fmt.Errorf("bad length: data too short for session")
return
}
nlen := int(b[0])
n++
if len(b) < n+nlen {
err = fmt.Errorf("bad length: malformed data for session")
}
if nlen > 0 && n+nlen <= len(b) {
h.SessionID = make([]byte, nlen)
copy(h.SessionID, b[n:n+nlen])
n += nlen
}
return
}
func (h *ClientHelloHandshake) readCipherSuites(b []byte) (n int, err error) {
if len(b) < 2 {
err = fmt.Errorf("bad length: data too short for cipher suites")
return
}
nlen := int(binary.BigEndian.Uint16(b[:2]))
n += 2
if len(b) < n+nlen {
err = fmt.Errorf("bad length: malformed data for cipher suites")
return // without this return the loop below would index past the buffer
}
for i := 0; i < nlen/2; i++ {
h.CipherSuites = append(h.CipherSuites, CipherSuite(binary.BigEndian.Uint16(b[n:n+2])))
n += 2
}
return
}
func (h *ClientHelloHandshake) readCompressionMethods(b []byte) (n int, err error) {
if len(b) == 0 {
err = fmt.Errorf("bad length: data too short for compression methods")
return
}
nlen := int(b[0])
n++
if len(b) < n+nlen {
err = fmt.Errorf("bad length: malformed data for compression methods")
return // without this return the loop below would index past the buffer
}
for i := 0; i < nlen; i++ {
h.CompressionMethods = append(h.CompressionMethods, CompressionMethod(b[n]))
n++
}
return
}
func (h *ClientHelloHandshake) readExtensions(b []byte) (n int, err error) {
if len(b) < 2 {
err = fmt.Errorf("bad length: data too short for extensions")
return
}
nlen := int(binary.BigEndian.Uint16(b[:2]))
n += 2
if len(b) < n+nlen {
err = fmt.Errorf("bad length: malformed data for extensions")
return
}
br := bytes.NewReader(b[n:])
for br.Len() > 0 {
cn := br.Len()
var ext Extension
ext, err = ReadExtension(br)
if err != nil {
return
}
h.Extensions = append(h.Extensions, ext)
n += (cn - br.Len())
}
return
}
func (h *ClientHelloHandshake) WriteTo(w io.Writer) (n int64, err error) {
buf := &bytes.Buffer{}
buf.WriteByte(ClientHello)
buf.Write([]byte{0, 0, 0}) // placeholder for payload length
binary.Write(buf, binary.BigEndian, h.Version)
pos := 6
binary.Write(buf, binary.BigEndian, h.Random.Time)
buf.Write(h.Random.Opaque[:])
pos += 32
buf.WriteByte(byte(len(h.SessionID)))
buf.Write(h.SessionID)
pos += (1 + len(h.SessionID))
binary.Write(buf, binary.BigEndian, uint16(len(h.CipherSuites)*2))
for _, cs := range h.CipherSuites {
binary.Write(buf, binary.BigEndian, cs)
}
pos += (2 + len(h.CipherSuites)*2)
buf.WriteByte(byte(len(h.CompressionMethods)))
for _, cm := range h.CompressionMethods {
buf.WriteByte(byte(cm))
}
pos += (1 + len(h.CompressionMethods))
buf.Write([]byte{0, 0}) // placeholder for extensions length
extLen := 0
for _, ext := range h.Extensions {
nn, _ := buf.Write(ext.Bytes())
extLen += nn
}
b := buf.Bytes()
plen := len(b) - handshakeHeaderLen
b[1], b[2], b[3] = byte((plen>>16)&0xFF), byte((plen>>8)&0xFF), byte(plen&0xFF) // payload length
b[pos], b[pos+1] = byte((extLen>>8)&0xFF), byte(extLen&0xFF) // extensions length
nn, err := w.Write(b)
n = int64(nn)
return
}

61
vendor/github.com/ginuerzh/tls-dissector/record.go generated vendored Normal file
View File

@ -0,0 +1,61 @@
package dissector
import (
"bytes"
"encoding/binary"
"errors"
"io"
)
const (
RecordHeaderLen = 5
)
const (
Handshake = 0x16
)
var (
ErrBadType = errors.New("bad type")
)
type Version uint16
type Record struct {
Type uint8
Version Version
Opaque []byte
}
func ReadRecord(r io.Reader) (*Record, error) {
record := &Record{}
if _, err := record.ReadFrom(r); err != nil {
return nil, err
}
return record, nil
}
func (rec *Record) ReadFrom(r io.Reader) (n int64, err error) {
b := make([]byte, RecordHeaderLen)
nn, err := io.ReadFull(r, b)
n += int64(nn)
if err != nil {
return
}
rec.Type = b[0]
rec.Version = Version(binary.BigEndian.Uint16(b[1:3]))
length := int(binary.BigEndian.Uint16(b[3:5]))
rec.Opaque = make([]byte, length)
nn, err = io.ReadFull(r, rec.Opaque)
n += int64(nn)
return
}
func (rec *Record) WriteTo(w io.Writer) (n int64, err error) {
buf := &bytes.Buffer{}
buf.WriteByte(rec.Type)
binary.Write(buf, binary.BigEndian, rec.Version)
binary.Write(buf, binary.BigEndian, uint16(len(rec.Opaque)))
buf.Write(rec.Opaque)
return buf.WriteTo(w)
}
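Putting record.go, handshake.go and extension.go together, a sketch of peeking at the SNI carried in a client's first TLS record (the listen address is a placeholder and error handling is shortened):
```go
package main

import (
	"fmt"
	"log"
	"net"

	dissector "github.com/ginuerzh/tls-dissector"
)

// serverName extracts the server_name extension from the ClientHello
// carried in the first TLS record read from conn.
func serverName(conn net.Conn) (string, error) {
	record, err := dissector.ReadRecord(conn)
	if err != nil {
		return "", err
	}
	if record.Type != dissector.Handshake {
		return "", dissector.ErrBadType
	}
	hello := &dissector.ClientHelloHandshake{}
	if err := hello.Decode(record.Opaque); err != nil {
		return "", err
	}
	for _, ext := range hello.Extensions {
		if ext.Type() == dissector.ExtServerName {
			return ext.(*dissector.ServerNameExtension).Name, nil
		}
	}
	return "", fmt.Errorf("no server_name extension")
}

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:8443") // placeholder address
	if err != nil {
		log.Fatal(err)
	}
	conn, err := ln.Accept()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	fmt.Println(serverName(conn))
}
```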

5827
vendor/github.com/ginuerzh/tls-dissector/rfc5246.txt generated vendored Normal file

File diff suppressed because it is too large Load Diff

19
vendor/github.com/go-log/log/LICENSE generated vendored Normal file
View File

@ -0,0 +1,19 @@
Copyright (c) 2017 Go Log
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

63
vendor/github.com/go-log/log/README.md generated vendored Normal file
View File

@ -0,0 +1,63 @@
# Log [![GoDoc](https://godoc.org/github.com/go-log/log?status.svg)](https://godoc.org/github.com/go-log/log)
Log is a logging interface for Go. That's it. Pass around the interface.
## Rationale
Users want to standardise logging. Sometimes libraries log. We leave the underlying logging implementation to the user
while allowing libraries to log by simply expecting something that satisfies the Logger interface. This leaves
the user free to pre-configure structure, output, etc.
## Interface
The interface is minimalistic on purpose
```go
type Logger interface {
Log(v ...interface{})
Logf(format string, v ...interface{})
}
```
## Example
Here's a logger that uses logrus and logs with predefined fields.
```go
import (
"github.com/go-log/log"
"github.com/sirupsen/logrus"
)
type logrusLogger struct {
*logrus.Entry
}
func (l *logrusLogger) Log(v ...interface{}) {
l.Entry.Print(v...)
}
func (l *logrusLogger) Logf(format string, v ...interface{}) {
l.Entry.Printf(format, v...)
}
func WithFields(f logrus.Fields) log.Logger {
return &logrusLogger{logrus.WithFields(f)}
}
```
The `WithFields` func returns a struct that satisfies the Logger interface.
Pre-configure a logger using WithFields and pass it as an option to a library.
```go
import "github.com/lib/foo"
l := mylogger.WithFields(logrus.Fields{
"library": "github.com/lib/foo",
})
f := foo.New(
foo.WithLogger(l),
)
```

30
vendor/github.com/go-log/log/log.go generated vendored Normal file
View File

@ -0,0 +1,30 @@
// Package log provides a log interface
package log
// Logger is a generic logging interface
type Logger interface {
Log(v ...interface{})
Logf(format string, v ...interface{})
}
var (
// The global default logger
DefaultLogger Logger = &noOpLogger{}
)
// noOpLogger is used as a placeholder for the default logger
type noOpLogger struct{}
func (n *noOpLogger) Log(v ...interface{}) {}
func (n *noOpLogger) Logf(format string, v ...interface{}) {}
// Log logs using the default logger
func Log(v ...interface{}) {
DefaultLogger.Log(v...)
}
// Logf logs formatted using the default logger
func Logf(format string, v ...interface{}) {
DefaultLogger.Logf(format, v...)
}
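A minimal sketch of routing the package-level helpers through the standard library logger; the adapter type below is illustrative and not part of go-log:
```go
package main

import (
	stdlog "log"
	"os"

	"github.com/go-log/log"
)

// stdLogger adapts the standard library *log.Logger to the go-log Logger interface.
type stdLogger struct{ l *stdlog.Logger }

func (s stdLogger) Log(v ...interface{})                 { s.l.Print(v...) }
func (s stdLogger) Logf(format string, v ...interface{}) { s.l.Printf(format, v...) }

func main() {
	log.DefaultLogger = stdLogger{stdlog.New(os.Stderr, "app ", stdlog.LstdFlags)}
	log.Log("hello")                // routed through DefaultLogger
	log.Logf("pid=%d", os.Getpid()) // formatted variant
}
```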

8
vendor/github.com/gobwas/glob/.gitignore generated vendored Normal file
View File

@ -0,0 +1,8 @@
glob.iml
.idea
*.cpu
*.mem
*.test
*.dot
*.png
*.svg

9
vendor/github.com/gobwas/glob/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,9 @@
sudo: false
language: go
go:
- 1.5.3
script:
- go test -v ./...

21
vendor/github.com/gobwas/glob/LICENSE generated vendored Normal file
View File

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2016 Sergey Kamardin
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

26
vendor/github.com/gobwas/glob/bench.sh generated vendored Normal file
View File

@ -0,0 +1,26 @@
#! /bin/bash
bench() {
filename="/tmp/$1-$2.bench"
if test -e "${filename}";
then
echo "Already exists ${filename}"
else
backup=`git rev-parse --abbrev-ref HEAD`
git checkout $1
echo -n "Creating ${filename}... "
go test ./... -run=NONE -bench=$2 > "${filename}" -benchmem
echo "OK"
git checkout ${backup}
sleep 5
fi
}
to=$1
current=`git rev-parse --abbrev-ref HEAD`
bench ${to} $2
bench ${current} $2
benchcmp $3 "/tmp/${to}-$2.bench" "/tmp/${current}-$2.bench"

Some files were not shown because too many files have changed in this diff.