initial commit

This commit is contained in:
Florian Maury 2018-01-23 21:25:00 +01:00
commit 3d855e8b1e
38 changed files with 5515 additions and 0 deletions

9
LICENCE Normal file

@ -0,0 +1,9 @@
Copyright 2017- ANSSI
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

10
Makefile Normal file

@ -0,0 +1,10 @@
.PHONY: all
all: transdep webserver
transdep: transdep.go
go build transdep.go
webserver: webserver.go
go build webserver.go

307
README.md Normal file

@ -0,0 +1,307 @@
# Transdep
Transdep is a utility to discover single points of failure (SPOF) in DNS dependency graphs that lead to an inability to
resolve domain names.
The DNS dependency graph is a notion introduced by Venugopalan Ramasubramanian and Emin Gün Sirer in
[Perils of Transitive Trust in the Domain Name System][1].
The types of single points of failure currently detected are:
- domain names (which can be availability SPOFs if DNSSEC is incorrectly configured);
- IP addresses of name servers;
- longest network prefixes that may generally be announced over the Internet (/24 over IPv4 and /48 over IPv6);
- ASN of the AS announcing the IP addresses of name servers.
The ``transdep`` utility is the CLI version of the tool. The ``webserver`` utility spawns a REST/JSON webservice.
Endpoints are described below.
[1]: https://www.cs.cornell.edu/people/egs/papers/dnssurvey.pdf
## Licence
Transdep is licenced under the 2-clause BSD licence.
## Installation
Transdep uses the following external libraries:
- https://github.com/miekg/dns
- https://github.com/awalterschulze/gographviz
- https://github.com/hashicorp/golang-lru
- https://github.com/deckarep/golang-set
- https://github.com/hashicorp/go-immutable-radix
You may install them using the go get command or whichever other method you prefer:
```bash
$ go get github.com/miekg/dns
$ go get github.com/awalterschulze/gographviz
$ go get github.com/hashicorp/golang-lru
$ go get github.com/deckarep/golang-set
$ go get github.com/hashicorp/go-immutable-radix
```
You may then use the Makefile to compile the Transdep tools:
```bash
$ make all
```
## Usage
### CLI
The ``transdep`` utility can be used to analyze the dependencies of a single domain name, of multiple names, or a saved
dependency graph.
#### Analysis Target Types
To analyze a single name, the ``-domain`` option is to be used:
```bash
./transdep -domain www.example.net
```
To analyze multiple domain names, you must provide a file containing one domain name per line, using the
``-file`` option:
```bash
./transdep -file <(echo -ne "example.com\nexample.net")
./transdep -file /tmp/list_of_domain_names
```
If you saved a dependency graph into a file (generated using the ``-graph`` option), you may analyze it by
loading the graph with the ``-load`` option:
```bash
./transdep -domain example.net -graph > /tmp/example_net_graph.json
./transdep -load /tmp/example_net_graph.json
```
#### Analysis Nature
Transdep can analyze a domain based on multiple criteria.
All analysis types consider that IP addresses and announcing network prefixes may be SPOF.
By default, SPOF discovery is conducted while considering that all names may break, including non-DNSSEC protected
domain names. This is used to analyze SPOFs in the event of misconfigurations, zone truncation and all other types of
zone corruption that may render a zone impossible to resolve.
If the analysis must be constrained to only consider that DNSSEC protected names may break, the ``-dnssec`` option must
be added to the command line:
```bash
./transdep -domain www.example.com -dnssec
```
By default, the SPOF discovery considers that resolvers are connected to both IPv4 and IPv6 networks. This means that if
an IPv4 address is unavailable, this unavailability may be compensated by a server available over IPv6.
In some scenarios, this is unacceptable, because the IPv4 resolvers and the IPv6 resolvers are separate servers. Also,
one of these two networks might be unavailable (temporarily or permanently). To represent these situations, the
``-break4`` (resp. ``-break6``) option simulates that all IPv4 (resp. IPv6) addresses are always considered unavailable
when analyzing the SPOF potential of an IP address in the other network type:
```bash
./transdep -domain www.x-cli.eu -break4
www.x-cli.eu:this domain name requires some IPv4 addresses to be resolved properly
./transdep -domain www.example.com -break4
www.example.com.:Name:example.com.
www.example.com.:IP:2001:500:8d::53
www.example.com.:Name:iana-servers.net.
www.example.com.:Name:.
www.example.com.:Name:net.
www.example.com.:Name:com.
```
In the previous example, `www.x-cli.eu.` cannot be resolved by IPv6-only resolvers (because some names or delegations do
not have IPv6 addresses).
For `www.example.com`, the result shows that during that run, `Transdep` detected that when using a resolver that
has only access to the IPv6 network at the time of resolution, the name `www.example.com` might not be resolvable if
the IP address ``2001:500:8d::53`` is unavailable.
The ``-all`` option instructs `Transdep` to analyze the requested domain name(s) using all possible combinations of
the previous options: with and without ``-dnssec``, combined with no break option, ``-break4``, or ``-break6``.
```bash
./transdep -domain www.x-cli.eu -all
AllNames:www.x-cli.eu.:Name:x-cli.eu.
AllNames:www.x-cli.eu.:Name:.
AllNames:www.x-cli.eu.:Name:eu.
DNSSEC:www.x-cli.eu.:Name:.
DNSSEC:www.x-cli.eu.:Name:eu.
AllNamesNo4:www.x-cli.eu.:this domain name requires some IPv4 addresses to be resolved properly
DNSSECNo4:www.x-cli.eu.:this domain name requires some IPv4 addresses to be resolved properly
AllNamesNo6:www.x-cli.eu.:Name:eu.
AllNamesNo6:www.x-cli.eu.:Name:x-cli.eu.
AllNamesNo6:www.x-cli.eu.:Name:.
DNSSECNo6:www.x-cli.eu.:Name:.
DNSSECNo6:www.x-cli.eu.:Name:eu.
```
`Transdep` may also consider an analysis criterion based on the ASN of the AS announcing the network prefixes covering
the IP addresses of the name servers. The association between an ASN and an IP address is made using a file whose
format is as follows:
- one association per line;
- each line contains an ASN and an announced network prefix.
Here is an example of such a file:
```
64501 192.0.2.0/24
64501 198.51.100.0/24
64502 203.0.113.0/24
64502 2001:db8::/32
```
Such a file can be generated from an MRT dump file (bviews) such as the ones made available by the [RIS project][2],
using ANSSI's [`mabo`][3] tool with the ``prefixes`` sub-command.
The ASN-prefix file is provided to `Transdep` using the ``-mabo`` option:
```bash
./mabo prefixes bview.20171013.0800.gz > prefixes-20171013.txt
./transdep -domain www.example.com -mabo prefixes-20171013.txt
```
[2]: https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris
[3]: https://github.com/ANSSI-FR/mabo
#### Output Types
`Transdep` can generate several types of documents. By default, it generates a CSV containing the discovered SPOFs for
the requested analysis.
If the ``-all`` option is provided, the format is ``AnalysisType:DomainName:TypeOfSPOF:SPOFReference``, where
``AnalysisType`` indicates one of the following combinations:
* ``AllNames``: default options (no ``-dnssec``, no ``-break4``, no ``-break6``);
* ``AllNamesNo4``: default options except that ``-break4`` is specified;
* ``AllNamesNo6``: default options except that ``-break6`` is specified;
* ``DNSSEC``: default options except that ``-dnssec`` is specified;
* ``DNSSECNo4``: ``-dnssec`` and ``-break4`` options are specified;
* ``DNSSECNo6``: ``-dnssec`` and ``-break6`` options are specified.
If the ``-all`` option is not specified, the format is ``DomainName:TypeOfSPOF:SPOFReference``.
In both formats, ``DomainName`` indicates the domain name that is analyzed.
``TypeOfSPOF`` can take one of the following values:
* ``Name``: the next field specifies a domain name that must be resolvable for ``DomainName`` to be resolvable.
* ``IP``: the next field specifies an IP address that must be available and not hijacked for ``DomainName`` to be
resolvable.
* ``Prefix``: the next field specifies a network prefix that must be available and not hijacked for ``DomainName`` to
be resolvable.
* ``ASN``: the next field specifies an AS number whose whole network must not be totally broken for ``DomainName`` to
be resolvable.
``TypeOfSPOF`` may also take the special value ``Cycle``. ``Cycle`` indicates that there is a circular dependency
somewhere in the graph, or an overly long CNAME chain (for some definition of "overly long").
Having ``Cycle`` as a dependency means that the name cannot be resolved at all using an RFC-compliant resolver at the
time of resolution.
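For example, to keep only the IP-address SPOFs from the default (non-``-all``) CSV output, one may filter on the second
field. This is a minimal shell sketch; the name list path is hypothetical:
```bash
# Lines look like DomainName:TypeOfSPOF:SPOFReference; keep only the IP entries
./transdep -file /tmp/list_of_domain_names | grep ':IP:'
```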
The ``-graph`` output option generates an output that can be later loaded for analysis using the
``-load`` option, described above.
The ``-dot`` output option generates a DOT file output. This output may be passed to any Graphviz interpreter for graph
drawing. The generated DOT file highlights the domain names and IP addresses that are SPOFs by coloring the nodes in red.
```bash
./transdep -domain www.x-cli.eu -dot | dot -T pdf -o /tmp/graph_x-cli.eu.pdf
```
#### Caches
`Transdep` maintains several caches in order to limit the number of requests to name servers during the discovery of the
dependency graph. There are in-memory caches, using LRU lists and goroutines, and on-disk caches for long-term caching
and for storing values overflowing from the in-memory LRU lists.
In-memory cache sizes are controlled with the ``-nrlrusize``, ``-zcflrusize`` and ``-dflrusize`` options. The first two
options are associated with lists that contain data that is cached on disk when the LRU lists are overflowing.
The on-disk cache is leveraged whenever possible and the entry is reinstated in the LRU list upon usage. Thus, an entry
is either in memory or on disk and is never lost unless the cache directory is flushed manually. The third option is
associated with an LRU list whose entries may be very large. These entries are synthesized from the entries of the other
caches, and thus are not stored on disk when the list is overflowing.
If your computer swaps or consumes too much memory while running `Transdep`, you should try to lower these values,
starting with the ``-dflrusize`` value. If your computer spends too much time in "disk I/O wait" and you have
some RAM capacity available, you may try to increase the first two options.
On-disk caches consist of a lot of very small JSON files. Please monitor the number of remaining inodes and adapt your
inode table accordingly.
On-disk caches are stored in the directory designated by the `TMPDIR` environment variable or by the `-cachedir`
command-line option. The default value is ``/tmp``.
With the current implementation, `Transdep` caches never expire. If you need to flush the cache, you may change the
cache directory, keeping the previous one while starting fresh. You may also delete the `nameresolver` and `zonecut`
directories that are present in the designated cache directory.
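As an illustration, the following run uses smaller in-memory caches and a dedicated cache directory; the sizes and the
directory are arbitrary example values, not recommended defaults:
```bash
./transdep -file /tmp/list_of_domain_names -nrlrusize 1000 -zcflrusize 1000 -dflrusize 100 -cachedir /var/cache/transdep
```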
#### Root Zone Hint File
You may specify a root zone hint file with the `-hints` option. If left unspecified, `Transdep` uses a hard-coded list
of root servers when querying the root zone for delegations.
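For instance, assuming a standard root hints file has been downloaded beforehand (the path below is just an example):
```bash
curl -o /tmp/named.root https://www.internic.net/domain/named.root
./transdep -domain www.example.com -hints /tmp/named.root
```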
#### DNS Violations
Using an RFC-compliant implementation prevents you from resolving many domain names. Thus, some degree of DNS violation
tolerance was implemented in `Transdep`, with much grumbling.
By default, `Transdep` will consider `rcode 3` status on non-terminal nodes equivalent to `rcode 0` answers with
`ancount=0`. You may reinstate RFC8020 compliance with the `-rfc8020` option.
Some devices are also unable to answer non-A/AAAA queries and always return `rcode 2` answers for any other qtype,
including NS or DS. By default, `Transdep` considers these servers as broken, but you may use the `-servfail` option to
instruct `Transdep` to treat these answers as `rcode 0` answers with `ancount=0`. This may lead `Transdep` to return
incorrect results in some instances.
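For instance, to reinstate RFC8020 compliance while still tolerating the SERVFAIL-on-non-A/AAAA behaviour described
above (both options are assumed here to be plain boolean switches):
```bash
./transdep -domain www.example.com -rfc8020 -servfail
```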
#### Script Friendliness
If you don't care about the nature of the errors that may arise during the analysis of a domain name, or if you want
an output that is easily parsable, you may use the `-script` option to return errors as the constant ``-ERROR-``.
`Transdep` will return an error if any name that is part of the dependency graph cannot be resolved at the time of
dependency graph discovery. Doing otherwise might have led to incorrect results from partial dependency graph discovery.
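A minimal sketch combining ``-file`` and ``-script`` and discarding the failed names (the list path is hypothetical and
the exact error line layout may differ):
```bash
./transdep -file /tmp/list_of_domain_names -script | grep -v -- '-ERROR-'
```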
#### Concurrency
You may use the `-jobs` option to adapt the number of domain names whose dependency graphs are discovered simultaneously.
The higher this value, the more you will harass the name servers. You will want to keep it relatively low,
to prevent blacklisting of your IP address and false measurements.
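For example, to analyze a list of names while keeping the concurrency low (the value is arbitrary):
```bash
./transdep -file /tmp/list_of_domain_names -jobs 5
```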
### Web Service
The web service uses the ``webserver`` binary.
The ``-bind`` and ``-port`` options can be used to specify, respectively, the address and the port on which the web
server should listen. By default, the service is available at `http://127.0.0.1:5000`.
The ``-nrlrusize``, ``-zcflrusize``, ``-dflrusize``, ``-jobs``, ``-hints`` and ``-cachedir`` options have the same usage
as for the `Transdep` CLI utility.
The web server exposes several endpoints:
* ``/allnames`` is the endpoint corresponding to the default behaviour of the `transdep` CLI utility.
* ``/dnssec`` is the endpoint corresponding to the ``-dnssec`` option of the `transdep` CLI utility.
* ``/break4`` is the endpoint corresponding to the ``-break4`` option of the `transdep` CLI utility.
* ``/break6`` is the endpoint corresponding to the ``-break6`` option of the `transdep` CLI utility.
Combining ``-dnssec`` with ``-break4`` or ``-break6`` is not possible with the web server.
Each endpoint takes a ``domain`` parameter as part of the query string, to specify which domain name is to be analyzed.
Endpoints may also receive ``rfc8020`` and ``servfail`` query string parameters to indicate which DNS violations are
tolerated for this analysis. If these options are not specified, `rcode 3` answers on non-terminal nodes are treated as
`rcode 0` answers with `ancount=0` and `rcode 2` answers are considered as broken name servers.
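As an illustration, a query to the default endpoint could look like the following; the `1` values passed to the
toleration flags are an assumption, as the expected values are not documented here:
```bash
curl 'http://127.0.0.1:5000/allnames?domain=www.example.com'
curl 'http://127.0.0.1:5000/allnames?domain=www.example.com&rfc8020=1&servfail=1'
```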
When launched from a console, the `webserver` utility outputs a URL to query to gracefully stop the service. Gracefully
shutting down the service is strongly advised to prevent on-disk cache corruption or incompleteness.
```bash
$ ./webserver &
[1] 7416
To stop the server, send a query to http://127.0.0.1:5000/stop?secret=5942985ebdc9102663130752c1d21f23
$ curl http://127.0.0.1:5000/stop?secret=5942985ebdc9102663130752c1d21f23
Stopping.
Stopping the finder: OK
$
```

173
dependency/finder.go Normal file

@ -0,0 +1,173 @@
// Package dependency contains the DNS dependency finder.
// Its purpose is to provide a request channel and to build the dependency graph of a requested domain name.
package dependency
import (
"fmt"
"github.com/hashicorp/golang-lru"
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/nameresolver"
"github.com/ANSSI-FR/transdep/zonecut"
"github.com/ANSSI-FR/transdep/messages/dependency"
msg_nameresolver "github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/hashicorp/go-immutable-radix"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/errors"
)
// REQ_CHAN_CAPACITY indicates the maximum number of requests that can be queued to a dependency finder instance,
// before the write call is blocking.
const REQ_CHAN_CAPACITY = 10
// Finder is a worker pool maintainer for the construction of dependency trees of domain names.
type Finder struct {
// workerPool LRU keys are requestTopic instances and values are *worker.
// Its use is to have at most one worker per domain (and type of request (resolveName/includeIP)) and spool matching
// requests to that worker.
workerPool *lru.Cache
// reqs is the channel that feeds new requests to the finder.
reqs chan *dependency.Request
// closedReqChan is true when the reqs channel has been closed. This prevents double-close or writes to a closed chan
closedReqChan bool
// cacheDir is the path to the root directory for on-disk cache.
cacheRootDir string
// joinChan is used for goroutine synchronization so that the owner of a finder instance does not exit before
// this finder is done cleaning up after itself.
joinChan chan bool
// nameResolver is the instance of Name Resolver that is started by this Finder. Its handler is passed to this
// finder workers.
nameResolver *nameresolver.Finder
// zoneCutFinder is the instance of Zone Cut Finder that is started by this Finder. Its handler is passed to this
// finder workers.
zoneCutFinder *zonecut.Finder
// tree is the reference to a radix tree containing a view of the prefixes announced with BGP over the Internet.
// This is used to fill IPNode instances with their corresponding ASN number, at the time of query.
tree *iradix.Tree
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
/* NewFinder initializes a new dependency finder struct instance.
transdepConf is the configuration of the current Transdep run, including the LRU sizes, the root directory of the
on-disk cache (created if it does not already exist) and the root hints file.
tree is a radix tree containing a view of the prefixes announced with BGP over the Internet; it is used to associate
IP addresses with their ASN.
*/
func NewFinder(transdepConf *tools.TransdepConfig, tree *iradix.Tree) *Finder {
df := new(Finder)
var err error
df.workerPool, err = lru.NewWithEvict(transdepConf.LRUSizes.DependencyFinder, cleanupWorker)
if err != nil {
return nil
}
df.reqs = make(chan *dependency.Request, REQ_CHAN_CAPACITY)
df.closedReqChan = false
df.joinChan = make(chan bool, 1)
df.tree = tree
df.config = transdepConf
// Trick using late binding to have circular declaration of zonecut finder and name resolver handlers
var nrHandler func(request *msg_nameresolver.Request) *errors.ErrorStack
df.zoneCutFinder = zonecut.NewFinder(func(req *msg_nameresolver.Request) *errors.ErrorStack {return nrHandler(req)}, transdepConf)
df.nameResolver = nameresolver.NewFinder(df.zoneCutFinder.Handle, transdepConf)
nrHandler = df.nameResolver.Handle
df.start()
return df
}
// cleanupWorker is the callback called by the LRU when an entry is evicted.
// value is the worker instance stored within the evicted entry.
func cleanupWorker(_, value interface{}) {
wrk := value.(*worker)
wrk.stop()
}
/*spool finds an already existing worker for the spooled request or creates a new worker and adds it to the LRU. It
then feeds the request to that worker.
req is the request to be forwarded to the appropriate worker. If no existing worker can handle that request, a new one
is created and added to the list of workers
*/
func (df *Finder) spool(req *dependency.Request) {
var wrk *worker
key := req.Topic()
if val, ok := df.workerPool.Get(key); ok {
wrk = val.(*worker)
} else {
wrk = newWorker(req, df.Handle, df.zoneCutFinder.Handle, df.nameResolver.Handle, df.config, df.tree)
df.workerPool.Add(key, wrk)
}
wrk.handle(req)
}
// Handle is the function called to submit new requests.
// Caller may call req.Result() after calling Handle(req) to get the result of that Handle call.
// This method returns an error if the Finder is stopped.
func (df *Finder) Handle(req *dependency.Request) *errors.ErrorStack {
if df.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("Handle: dependency finder request channel already closed"))
}
df.reqs <- req
return nil
}
// start handles new requests, detects dependency cycles or else spools the requests for processing.
// When no more requests are expected, start cleans up all workers.
// start spawns its own goroutine and returns immediately.
func (df *Finder) start() {
go func() {
for req := range df.reqs {
if req.DetectCycle() {
//Detect dependency loops
g := graph.NewRelationshipNode(fmt.Sprintf("start: cycle detected on %s", req.Name), graph.AND_REL)
g.AddChild(new(graph.Cycle))
req.SetResult(g, nil)
} else if req.Depth() > nameresolver.MAX_CNAME_CHAIN {
// Detect long CNAME chain (incremented only when an alias is drawing in a new dependency graph)
g := graph.NewRelationshipNode(fmt.Sprintf("start: overly long CNAME chain detected %s", req.Name), graph.AND_REL)
g.AddChild(new(graph.Cycle))
req.SetResult(g, nil)
} else {
df.spool(req)
}
}
// Cleanup workers
for _, key := range df.workerPool.Keys() {
val, _ := df.workerPool.Peek(key)
wrk := val.(*worker)
wrk.stop()
}
df.joinChan <- true
}()
}
// Stop signals that no more requests are expected.
// This function must be called for proper memory and cache management. Thus, it is advised to defer a call to this
// function as soon as a Finder is instantiated with NewFinder().
func (df *Finder) Stop() bool {
if df.closedReqChan {
// This if prevents double closes
return false
}
close(df.reqs)
df.closedReqChan = true
// wait for the start() goroutine to terminate
_ = <- df.joinChan
close(df.joinChan)
// Cleanup other tools
df.nameResolver.Stop()
df.zoneCutFinder.Stop()
return true
}

315
dependency/worker.go Normal file

@ -0,0 +1,315 @@
package dependency
import (
"fmt"
"github.com/hashicorp/go-immutable-radix"
"github.com/miekg/dns"
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/messages/dependency"
"github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/ANSSI-FR/transdep/messages/zonecut"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/tools/radix"
"github.com/ANSSI-FR/transdep/errors"
)
const WORKER_CHAN_CAPACITY = 10
// worker represents a handler of requests for a specific requestTopic.
// It retrieves the relevant information, caches it in memory and serves it until stop() is called.
type worker struct {
// req is the request that is handled by this worker
req *dependency.Request
// reqs is a channel of requests with identical requestTopic as the original request
reqs chan *dependency.Request
// joinChan is used by stop() to wait for the completion of the start() goroutine
joinChan chan bool
// closedReqChan is used to prevent double-close during stop()
closedReqChan bool
// tree is the reference to a radix tree containing a view of the prefixes announced with BGP over the Internet.
// This is used to fill IPNode instances with their corresponding ASN number, at the time of query.
tree *iradix.Tree
// depHandler is the handler used to fetch the dependency tree of a dependency of the current requestTopic
depHandler func(*dependency.Request) *errors.ErrorStack
// zcHandler is used to get the delegation info of some name that is part of the dependency tree of the current requestTopic
zcHandler func(request *zonecut.Request) *errors.ErrorStack
// nrHandler is used to get the IP addresses or Alias associated to a name that is part of the dependency tree of the current requestTopic
nrHandler func(*nameresolver.Request) *errors.ErrorStack
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
/* newWorker instantiates and returns a new worker.
It builds the worker struct, and starts the routine in charge of building the dependency tree of the
requested topic and serving the answer to subsequent requests.
req is the first request that triggered the instantiation of that worker
depHandler is a function that can be called to have another dependency graph resolved (probably to integrate it to the current one)
zcHandler is a function that can be called to obtain the zone cut of a requested name
nrHandler is a function that can be called to obtain the IP address or Alias of a name
*/
func newWorker(req *dependency.Request, depHandler func(*dependency.Request) *errors.ErrorStack, zcHandler func(request *zonecut.Request) *errors.ErrorStack, nrHandler func(*nameresolver.Request) *errors.ErrorStack, conf *tools.TransdepConfig, tree *iradix.Tree) *worker {
w := new(worker)
w.req = req
w.reqs = make(chan *dependency.Request, WORKER_CHAN_CAPACITY)
w.closedReqChan = false
w.joinChan = make(chan bool, 1)
w.config = conf
w.tree = tree
w.depHandler = depHandler
w.zcHandler = zcHandler
w.nrHandler = nrHandler
w.start()
return w
}
/* handle is the function called to submit a new request to that worker.
Caller may call req.Result() after this function returns to get the result for this request.
This method returns an error if the worker is stopped or if the submitted request does not match the request usually
handled by this worker.
*/
func (w *worker) handle(req *dependency.Request) *errors.ErrorStack {
if w.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("handle: dependency worker channel for %s is already closed", w.req.Name()))
} else if !w.req.Equal(req) {
return errors.NewErrorStack(fmt.Errorf("handle: invalid request; the submitted request (%s) does not match the requests handled by this worker (%s)", req.Name(), w.req.Name()))
}
w.reqs <- req
return nil
}
// resolveRoot is a trick used to simplify the circular dependency of the root-zone, which is self-sufficient by definition.
func (w *worker) resolveRoot() graph.Node {
g := graph.NewRelationshipNode("resolveRoot: dependency graph of the root zone", graph.AND_REL)
g.AddChild(graph.NewDomainNameNode(".", true))
return g
}
/*getParentGraph is a helper function which gets the dependency graph of the parent domain.
This function submits a new dependency request for the parent domain and waits for the result.
Consequently, this function triggers a recursive search of the parent domain dependency tree until the root-zone
dependency tree is reached. In other words, for "toto.fr", this function triggers a search for the dependency tree of
"fr.", which will recursively trigger a search for the dependency tree of ".".
*/
func (w *worker) getParentGraph() (graph.Node, *errors.ErrorStack) {
nxtLblPos, end := dns.NextLabel(w.req.Name(), 1)
shrtndName := "."
if !end {
shrtndName = w.req.Name()[nxtLblPos:]
}
// resolveName and includeIP are set to false, because this name does not depend on the IP addresses set at the
// parent domain, and we are not compatible with DNAME.
req := dependency.NewRequestWithContext(shrtndName, false, false, w.req, 0)
w.depHandler(req)
res, err := req.Result()
if err != nil {
err.Push(fmt.Errorf("getParentGraph: error during resolution of parent graph %s of %s", shrtndName, w.req.Name()))
return nil, err
}
return res, nil
}
// resolveSelf returns the graph of the current requestTopic
func (w *worker) resolveSelf() (graph.Node, *errors.ErrorStack) {
g := graph.NewRelationshipNode(fmt.Sprintf("Dependency graph of exact name %s", w.req.Name()), graph.AND_REL)
// First, we resolve the current name, to get its IP addresses or the indication that it is an alias
nr := nameresolver.NewRequest(w.req.Name(), w.req.Exceptions())
w.nrHandler(nr)
var ne *nameresolver.Entry
ne, err := nr.Result()
if err != nil {
err.Push(fmt.Errorf("resolveSelf: error while getting the exact resolution of %s", w.req.Name()))
return nil, err
}
if ne.CNAMETarget() != "" {
if !w.req.FollowAlias() {
return nil, errors.NewErrorStack(fmt.Errorf("resolveSelf: alias detected (%s) but alias is not requested to be added to the graph of %s", ne.CNAMETarget(), w.req.Name()))
}
// the following line is commented because we might not want to add to the dependency graph the name of the node
// that contains an alias that draws in a complete dependency graph, because this name is not really important
// per se wrt dependency graphs.
g.AddChild(graph.NewAliasNode(ne.CNAMETarget(), ne.Owner()))
// We reuse the FollowAlias and IncludeIP value of the current requestTopic because if we are resolving a
// name for a NS, we will want the IP address and to follow CNAMEs, even though this is an illegal configuration.
// Depth is incremented so that overly long chains can be detected
depReq := dependency.NewRequestWithContext(ne.CNAMETarget(), w.req.FollowAlias(), w.req.IncludeIP(), w.req, w.req.Depth()+1)
w.depHandler(depReq)
aliasGraph, err := depReq.Result()
if err != nil {
err.Push(fmt.Errorf("resolveSelf: error while getting the dependency graph of alias %s", ne.CNAMETarget()))
return nil, err
}
g.AddChild(aliasGraph)
} else if w.req.IncludeIP() {
gIP := graph.NewRelationshipNode(fmt.Sprintf("IPs of %s", ne.Owner()), graph.OR_REL)
g.AddChild(gIP)
for _, addr := range ne.Addrs() {
asn, err := radix.GetASNFor(w.tree, addr)
if err != nil {
asn = 0
}
gIP.AddChild(graph.NewIPNodeWithName(addr.String(), ne.Owner(), asn))
}
}
return g, nil
}
// getDelegationGraph gets the graph relative to the delegation info of the current name. The graph is empty if the
// request topic is not a zone apex.
func (w *worker) getDelegationGraph() (graph.Node, *errors.ErrorStack) {
g := graph.NewRelationshipNode(fmt.Sprintf("Dependency graph for %s delegation", w.req.Name()), graph.AND_REL)
// Get the graph for the current zone. First, we get the delegation info for this zone, and we add it.
req := zonecut.NewRequest(w.req.Name(), w.req.Exceptions())
w.zcHandler(req)
entry, err := req.Result()
if err != nil {
var returnErr bool
switch typedErr := err.OriginalError().(type) {
case *errors.TimeoutError:
returnErr = true
case *errors.NXDomainError:
returnErr = w.req.Exceptions().RFC8020
case *errors.ServfailError:
returnErr = !w.req.Exceptions().AcceptServFailAsNoData
case *errors.NoNameServerError:
returnErr = false
default:
_ = typedErr
returnErr = true
}
if returnErr {
err.Push(fmt.Errorf("getDelegationGraph: error while getting the zone cut of %s", w.req.Name()))
return nil, err
}
err = nil
entry = nil
}
// If entry is nil, then we are at a non-terminal node, so we have no other dependencies (except aliases)
if entry != nil {
g.AddChild(graph.NewDomainNameNode(entry.Domain(), entry.DNSSEC()))
nameSrvsGraph := graph.NewRelationshipNode(fmt.Sprintf("Graph of NameSrvInfo of %s", w.req.Name()), graph.OR_REL)
g.AddChild(nameSrvsGraph)
for _, nameSrv := range entry.NameServers() {
nsGraph := graph.NewRelationshipNode(fmt.Sprintf("Graph of NS %s from NameSrvInfo of %s", nameSrv.Name(), w.req.Name()), graph.AND_REL)
nameSrvsGraph.AddChild(nsGraph)
// If there are glues
if len(nameSrv.Addrs()) > 0 {
nsAddrGraph := graph.NewRelationshipNode(fmt.Sprintf("IPs of %s", nameSrv.Name()), graph.OR_REL)
nsGraph.AddChild(nsAddrGraph)
for _, ip := range nameSrv.Addrs() {
asn, err := radix.GetASNFor(w.tree, ip)
if err != nil {
asn = 0
}
nsAddrGraph.AddChild(graph.NewIPNodeWithName(ip.String(), nameSrv.Name(), asn))
}
} else {
// The NS is out-of-bailiwick and does not contain glues; thus we ask for the dependency graph of
// this NS name
req := dependency.NewRequestWithContext(nameSrv.Name(), true, true, w.req, 0)
w.depHandler(req)
NSGraph, err := req.Result()
if err != nil {
err.Push(fmt.Errorf("getDelegationGraph: error while getting the dependency graph of NS %s", nameSrv.Name()))
return nil, err
}
nsGraph.AddChild(NSGraph)
}
}
}
return g, nil
}
// resolve orchestrates the resolution of the worker request topic and returns it
func (w *worker) resolve() (graph.Node, *errors.ErrorStack) {
// Shortcut if this is the root zone, because we don't want to have to handle the circular dependency of the root-zone
if w.req.Name() == "." {
g := w.resolveRoot()
return g, nil
}
// The graph of a name is the graph of the parent name + the graph of the name in itself (including its eventual
// delegation info and its eventual alias/IP address)
g := graph.NewRelationshipNode(fmt.Sprintf("Dependency graph for %s", w.req.Name()), graph.AND_REL)
// Get Graph of the parent zone
res, err := w.getParentGraph()
if err != nil {
err.Push(fmt.Errorf("resolve: error while getting the parent graph of %s", w.req.Name()))
return nil, err
}
g.AddChild(res)
// Get Graph of the delegation of the request topic
graphDelegRes, err := w.getDelegationGraph()
if err != nil {
err.Push(fmt.Errorf("resolve: error while getting the delegation graph of %s", w.req.Name()))
return nil, err
}
g.AddChild(graphDelegRes)
// If the request topic is interesting in itself (for instance, because it is the name used in a NS record and that
// name is out-of-bailiwick), we resolve its graph and add it
if w.req.ResolveTargetName() {
res, err := w.resolveSelf()
if err != nil {
err.Push(fmt.Errorf("resolve: error while resolving %s", w.req.Name()))
return nil, err
}
g.AddChild(res)
}
return g, nil
}
// start launches a goroutine in charge of resolving the request topic, and then serving the result of this resolution
// to subsequent requests on the same request topic
func (w *worker) start() {
go func() {
result, err := w.resolve()
if err != nil {
result = nil
err.Push(fmt.Errorf("start: error while resolving dependency graph of %s", w.req.Name()))
}
for req := range w.reqs {
req.SetResult(result, err)
}
w.joinChan <- true
}()
}
// stop is to be called during the cleanup of the worker. It shuts down the goroutine started by start() and waits for
// it to actually end.
func (w *worker) stop() bool {
if w.closedReqChan {
return false
}
close(w.reqs)
w.closedReqChan = true
<-w.joinChan
close(w.joinChan)
return true
}

156
errors/dns.go Normal file

@ -0,0 +1,156 @@
package errors
import (
"encoding/json"
"fmt"
"github.com/miekg/dns"
"net"
)
const (
UDP_TRANSPORT = 17
TCP_TRANSPORT = 6
)
var PROTO_TO_STR = map[int]string{
TCP_TRANSPORT: "TCP",
UDP_TRANSPORT: "UDP",
}
var STR_TO_PROTO = map[string]int{
"": UDP_TRANSPORT,
"TCP": TCP_TRANSPORT,
"tcp": TCP_TRANSPORT,
"UDP": UDP_TRANSPORT,
"udp": UDP_TRANSPORT,
}
type serializedServfailError struct {
Type string `json:"type"`
Qname string `json:"qname"`
Qtype string `json:"qtype"`
Addr string `json:"ip"`
Proto string `json:"protocol"`
}
type ServfailError struct {
qname string
qtype uint16
addr net.IP
proto int
}
func NewServfailError(qname string, qtype uint16, addr net.IP, proto int) *ServfailError {
se := new(ServfailError)
se.qname = qname
se.qtype = qtype
se.addr = addr
se.proto = proto
return se
}
func (se *ServfailError) MarshalJSON() ([]byte, error) {
sse := new(serializedServfailError)
sse.Type = dns.RcodeToString[dns.RcodeServerFailure]
sse.Qname = se.qname
sse.Qtype = dns.TypeToString[se.qtype]
sse.Addr = se.addr.String()
sse.Proto = PROTO_TO_STR[se.proto]
return json.Marshal(sse)
}
func (se *ServfailError) UnmarshalJSON(bstr []byte) error {
sse := new(serializedServfailError)
if err := json.Unmarshal(bstr, sse); err != nil {
return err
}
se.qname = sse.Qname
se.qtype = dns.StringToType[sse.Qtype]
se.addr = net.ParseIP(sse.Addr)
se.proto = STR_TO_PROTO[sse.Proto]
return nil
}
func (se *ServfailError) Error() string {
return fmt.Sprintf("received a SERVFAIL while trying to query %s %s? from %s with %s", se.qname, dns.TypeToString[se.qtype], se.addr.String(), PROTO_TO_STR[se.proto])
}
type serializedNXDomainError struct {
Type string `json:"type"`
Qname string `json:"qname"`
Qtype string `json:"qtype"`
Addr string `json:"ip"`
Proto string `json:"protocol"`
}
type NXDomainError struct {
qname string
qtype uint16
addr net.IP
proto int
}
func NewNXDomainError(qname string, qtype uint16, addr net.IP, proto int) *NXDomainError {
nx := new(NXDomainError)
nx.qname = qname
nx.qtype = qtype
nx.addr = addr
nx.proto = proto
return nx
}
func (nx *NXDomainError) Error() string {
return fmt.Sprintf("received a NXDomain while trying to query %s %s? from %s with %s", nx.qname, dns.TypeToString[nx.qtype], nx.addr.String(), PROTO_TO_STR[nx.proto])
}
func (nx *NXDomainError) MarshalJSON() ([]byte, error) {
snx := new(serializedNXDomainError)
snx.Type = dns.RcodeToString[dns.RcodeNameError]
snx.Qname = nx.qname
snx.Qtype = dns.TypeToString[nx.qtype]
snx.Addr = nx.addr.String()
snx.Proto = PROTO_TO_STR[nx.proto]
return json.Marshal(snx)
}
func (nx *NXDomainError) UnmarshalJSON(bstr []byte) error {
snx := new(serializedNXDomainError)
if err := json.Unmarshal(bstr, snx); err != nil {
return err
}
nx.qname = snx.Qname
nx.qtype = dns.StringToType[snx.Qtype]
nx.addr = net.ParseIP(snx.Addr)
nx.proto = STR_TO_PROTO[snx.Proto]
return nil
}
type serializedNoNameError struct {
Name string `json:"name"`
}
type NoNameServerError struct {
name string
}
func (ne *NoNameServerError) MarshalJSON() ([]byte, error) {
sne := new(serializedNoNameError)
sne.Name = ne.name
return json.Marshal(sne)
}
func (ne *NoNameServerError) UnmarshalJSON(bstr []byte) error {
sne := new(serializedNoNameError)
if err := json.Unmarshal(bstr, sne); err != nil {
return err
}
ne.name = sne.Name
return nil
}
func NewNoNameServerError(name string) *NoNameServerError {
return &NoNameServerError{name}
}
func (ne *NoNameServerError) Error() string {
return fmt.Sprintf("%s has no nameservers", ne.name)
}

119
errors/stack.go Normal file

@ -0,0 +1,119 @@
package errors
import (
"strings"
"encoding/json"
"errors"
"github.com/miekg/dns"
"net"
"fmt"
)
type ErrorStack struct {
errors []error
}
func NewErrorStack(err error) *ErrorStack {
s := new(ErrorStack)
s.Push(err)
return s
}
func (es *ErrorStack) Copy() *ErrorStack {
newStack := new(ErrorStack)
for _, err := range es.errors {
// provision for when an error type will require a deepcopy
switch typedErr := err.(type) {
/* case *NXDomainError:
newStack.errors = append(newStack.errors, err)
case *ServfailError:
newStack.errors = append(newStack.errors, err)
case *NoNameServerError:
newStack.errors = append(newStack.errors, err)
case *TimeoutError:
newStack.errors = append(newStack.errors, err)*/
default:
_ = typedErr
newStack.errors = append(newStack.errors, err)
}
}
return newStack
}
func (es *ErrorStack) MarshalJSON() ([]byte, error) {
var ses []interface{}
for _, err := range es.errors {
switch typedErr := err.(type) {
case *NXDomainError:
ses = append(ses, typedErr)
case *ServfailError:
ses = append(ses, typedErr)
case *NoNameServerError:
ses = append(ses, typedErr)
default:
ses = append(ses, typedErr.Error())
}
}
return json.Marshal(ses)
}
func (es *ErrorStack) UnmarshalJSON(bstr []byte) error {
var ses []interface{}
if err := json.Unmarshal(bstr, &ses) ; err != nil {
return err
}
for _, err := range ses {
switch typedErr := err.(type) {
case string:
es.errors = append(es.errors, errors.New(typedErr))
case map[string]interface{}:
if typeVal, ok := typedErr["type"] ; ok {
if typeVal.(string) == dns.RcodeToString[dns.RcodeServerFailure] {
es.errors = append(es.errors, NewServfailError(typedErr["qname"].(string), dns.StringToType[typedErr["qtype"].(string)], net.ParseIP(typedErr["ip"].(string)), STR_TO_PROTO[typedErr["protocol"].(string)]))
} else if typeVal.(string) == dns.RcodeToString[dns.RcodeNameError] {
es.errors = append(es.errors, NewNXDomainError(typedErr["qname"].(string), dns.StringToType[typedErr["qtype"].(string)], net.ParseIP(typedErr["ip"].(string)), STR_TO_PROTO[typedErr["protocol"].(string)]))
} else {
panic(fmt.Sprintf("missing case: type unknown: %s", typeVal))
}
} else if name, ok := typedErr["name"] ; ok {
es.errors = append(es.errors, NewNoNameServerError(name.(string)))
}
default:
panic("missing case: not a string nor a map?")
}
}
return nil
}
func (es *ErrorStack) Push(err error) {
es.errors = append(es.errors, err)
}
func (es *ErrorStack) OriginalError() error {
if len(es.errors) > 0 {
return es.errors[0]
}
return nil
}
func (es *ErrorStack) LatestError() error {
if len(es.errors) > 0 {
return es.errors[len(es.errors)-1]
}
return nil
}
func (es *ErrorStack) Error() string {
errCount := len(es.errors)
l := make([]string, errCount)
// List the error messages from the most recent error to the original one
for i, err := range es.errors {
l[errCount-1-i] = err.Error()
}
return strings.Join(l, ", ")
}

19
errors/timeout.go Normal file

@ -0,0 +1,19 @@
package errors
import "fmt"
type TimeoutError struct {
operation string
requestTopic string
}
func NewTimeoutError(operation, topic string) *TimeoutError {
te := new(TimeoutError)
te.operation = operation
te.requestTopic = topic
return te
}
func (te *TimeoutError) Error() string {
return fmt.Sprintf("timeout while performing \"%s\" on \"%s\"", te.operation, te.requestTopic)
}

107
graph/aliasName.go Normal file

@ -0,0 +1,107 @@
package graph
import (
"crypto/sha256"
"encoding/json"
"github.com/miekg/dns"
"strings"
)
/* serializedAliasNode is a proxy struct used to serialize an Alias node into JSON.
The AliasNode struct is not directly used because the Go json module requires that attributes must be exported for it
to work, and AliasNode struct attributes have no other reason for being exported.
*/
type serializedAliasNode struct {
Target string `json:"target"`
Source string `json:"source"`
}
// AliasNode represents a CNAME in the dependency graph of a name.
type AliasNode struct {
// target is the right-hand name of the CNAME RR
target string
// source is the owner name of the CNAME RR
source string
// parentNode is a reference to the parent node in the dependency graph. This is used to visit the graph from leafs
// to root
parentNode Node
}
/* NewAliasNode returns a new instance of AliasNode after initializing it.
target is the right-hand name of the CNAME RR
source is the owner name of the CNAME RR
*/
func NewAliasNode(target, source string) *AliasNode {
n := new(AliasNode)
n.target = strings.ToLower(dns.Fqdn(target))
n.source = strings.ToLower(dns.Fqdn(source))
return n
}
// Implements json.Marshaler
func (n *AliasNode) MarshalJSON() ([]byte, error) {
sn := new(serializedAliasNode)
sn.Target = n.target
sn.Source = n.source
return json.Marshal(sn)
}
// Implements json.Unmarshaler
func (n *AliasNode) UnmarshalJSON(bstr []byte) error {
sn := new(serializedAliasNode)
err := json.Unmarshal(bstr, sn)
if err != nil {
return err
}
n.target = sn.Target
n.source = sn.Source
return nil
}
func (n *AliasNode) Target() string {
return n.target
}
func (n *AliasNode) Source() string {
return n.source
}
func (n *AliasNode) String() string {
jsonbstr, err := json.Marshal(n)
if err != nil {
return ""
}
return string(jsonbstr)
}
func (n *AliasNode) deepcopy() Node {
nn := new(AliasNode)
nn.target = n.target
nn.source = n.source
nn.parentNode = n.parentNode
return nn
}
func (n *AliasNode) setParent(g Node) {
n.parentNode = g
}
func (n *AliasNode) parent() Node {
return n.parentNode
}
// similar compares two LeafNodes and returns true if the o LeafNode is also an AliasNode and the targets are the same.
func (n *AliasNode) similar(o LeafNode) bool {
otherDomain, ok := o.(*AliasNode)
// It is safe to use == here to compare domain names b/c NewAliasNode performs canonicalization of the domain names
return ok && n.target == otherDomain.target //&& n.source == otherDomain.source
}
func (n *AliasNode) hash() [8]byte {
var ret [8]byte
h := sha256.Sum256([]byte(n.target + n.source))
copy(ret[:], h[:8])
return ret
}

478
graph/analysis.go Normal file

@ -0,0 +1,478 @@
package graph
import (
"fmt"
"github.com/hashicorp/go-immutable-radix"
"github.com/deckarep/golang-set"
"net"
"github.com/ANSSI-FR/transdep/tools"
)
/* simplifyRelWithCycle recursively visits the tree and bubbles up Cycle instances in AND Relations or removes them if
they are in OR Relations.
It also simplifies relation nodes with only one child by bubbling up the child.
This function returns true if the children list of the receiver was modified.
*/
func (rn *RelationshipNode) simplifyRelWithCycle() bool {
// newChildren is the list of children of the receiver after this function actions.
var newChildren []Node
modif := false
childrenToAnalyze := rn.children[:]
Outerloop:
for len(childrenToAnalyze) != 0 {
// mergedChildren will contain nodes contained in a child relation node which, itself, only has one child.
// For instance, if a node A has a child B, and B only child is C, then B is suppressed from A's children
// and C is added to mergedChildren.
var mergedChildren []Node
Innerloop:
for _, chld := range childrenToAnalyze {
if dg, ok := chld.(*RelationshipNode); ok {
// If the child node is a relationship, visit the child recursively
modif = dg.simplifyRelWithCycle() || modif
// Now, if the child, after the recursive visit only has one child, bubble up that child
if len(dg.children) == 1 {
mergedChildren = append(mergedChildren, dg.children[0])
modif = true
// We continue, because this child node will not be added back to the children of the receiver
continue Innerloop
}
}
if _, ok := chld.(*Cycle); ok {
// Implicit: if the relation is not an AND, it is a OR. In OR relations, Cycles are a neutral element,
// like a 1 in a multiplicative expression.
if rn.relation == AND_REL && len(rn.children) > 1 {
// If the considered child is a Cycle and the receiver is an AND relation, then the receiver
// evaluation is summarized by this Cycle (because a Cycle in a AND relation is like a 0 in a
// multiplicative expression), so we just set the receiver's only child to a Cycle and don't process
// the remaining children.
newChildren = []Node{new(Cycle)}
modif = true
break Outerloop
}
}
// This node is not a Cycle, so we add it back as a child the receiver
newChildren = append(newChildren, chld)
}
// If we have bubbled up some grand-children nodes, we need to analyse them as children of the receiver
childrenToAnalyze = mergedChildren
}
rn.children = newChildren
return modif
}
/* auxSimplifyGraph recursively visits the graph and simplifies it. Simplification is done by merging relation
nodes when the receiver and one of its child relation node have the same relation type. Child relation nodes are like
parenthesis in a mathematical expression: 1 + (2*3 + 4) is equivalent to 1 + 2*3 + 4 and 2 * (3 * 4) is equivalent
to 2 * 3 * 4. Simplifying the graph that way reduces the depth of the graph and accelerates future visits.
This function returns true if the graph/tree below the receiver was altered
*/
func (rn *RelationshipNode) auxSimplifyGraph() bool {
var newChildren []Node
modif := false
// TODO I don't think I need to actually duplicate this
childrenToAnalyze := make([]Node, len(rn.children))
copy(childrenToAnalyze, rn.children)
for len(childrenToAnalyze) > 0 {
var mergedChildren []Node
for _, chldGraphNode := range childrenToAnalyze {
if chld, ok := chldGraphNode.(*RelationshipNode); ok {
if chld.relation == rn.relation {
// If the receiver's child currently considered is a RelationshipNode with the relation type as the
// receiver, then, add the children of this child node to the list of nodes that will be considered
// as children of the receiver.
mergedChildren = append(mergedChildren, chld.children...)
modif = true
} else {
// The child RelationshipNode node has a different relation type
// (AND containing an OR, or an OR containing an AND).
newChildren = append(newChildren, chldGraphNode)
}
} else {
// This child node is a LeafNode
newChildren = append(newChildren, chldGraphNode)
}
}
// TODO I don't think I need to actually duplicate this
childrenToAnalyze = make([]Node, len(mergedChildren))
copy(childrenToAnalyze, mergedChildren)
}
// TODO I don't think I need to actually duplicate this
rn.children = make([]Node, len(newChildren))
copy(rn.children, newChildren)
// Once the receiver simplified, we apply this function on all remaining children relation nodes
for _, chldGraphNode := range rn.children {
if chld, ok := chldGraphNode.(*RelationshipNode); ok {
modif = chld.auxSimplifyGraph() || modif
}
}
return modif
}
// SimplifyGraph creates a copy of the tree under the receiver and simplifies the tree under the copy, by repeatedly
// applying auxSimplifyGraph and simplifyRelWithCycle until the tree is stable.
// The copy is then returned.
func (rn *RelationshipNode) SimplifyGraph() *RelationshipNode {
ng, ok := rn.deepcopy().(*RelationshipNode)
if !ok {
return nil
}
modif := true
for modif {
modif = false
modif = ng.auxSimplifyGraph() || modif
modif = ng.simplifyRelWithCycle() || modif
}
return ng
}
// buildLeafNodeInventory visits the tree under the receiver and returns the list of the LeafNodes. This list is built
// by visiting the tree recursively.
func (rn *RelationshipNode) buildLeafNodeInventory() []LeafNode {
l := make([]LeafNode, 0)
for _, absChld := range rn.children {
switch chld := absChld.(type) {
case *RelationshipNode:
l2 := chld.buildLeafNodeInventory()
l = append(l, l2...)
case LeafNode:
l = append(l, chld)
}
}
return l
}
// getSiblingsUsingSimilarity returns the leaf nodes of the inventory that are considered unavailable at the same time
// as leafNode, according to node similarity and to the breakV4/breakV6/DNSSECOnly flags.
func getSiblingsUsingSimilarity(leafNode LeafNode, inventory []LeafNode, breakV4, breakV6, DNSSECOnly bool) []LeafNode {
// siblings are leafNode that are considered unavailable during the analysis of leafNode
// Are considered unavailable other nodes that are similar to leafNode (similarity being defined by the similar()
// implementation of the leafNode underlying type. Are never considered unavailable unsigned names when DNSSECOnly
// is true as well as alias names. Alias names are always ignored because they are never the actual source of an
// unavailability; either the zone that contains the alias is unavailable or the zone containing the target of the
// alias is unavailable.
// IPv4 addresses are always considered unavailable if breakV4 is true. The same applies for IPv6 addresses w.r.t.
// breakV6.
var siblings []LeafNode
for _, node := range inventory {
toIgnore := false
toAdd := false
switch n := node.(type) {
case *DomainNameNode:
if DNSSECOnly && !n.DNSSECProtected() {
toIgnore = true
}
case *AliasNode:
toIgnore = true
case *IPNode:
isV4 := n.IsV4()
if (breakV4 && isV4) || (breakV6 && !isV4) {
toAdd = true
}
case *Cycle:
toAdd = true
}
if toAdd || (!toIgnore && leafNode.similar(node)) {
siblings = append(siblings, node)
}
}
return siblings
}
/* testNodeCriticity returns true if the tree under the receiver cannot be resolved when all nodes of the siblings list
are unavailable at the same time, i.e. when the tested leafNode is critical. External factors are embedded in the
siblings list (see the getSiblings* helpers), including whether the IPv4 network or the IPv6 network is available and
whether we consider that only DNSSEC-protected zones may break (e.g. in case of invalid/expired record signatures, or
DS/DNSKEY algorithm mismatches) versus all zones (e.g. truncated zone, corrupted data, etc.).
siblings is the list of all leafNodes that are considered broken simultaneously.
*/
func (rn *RelationshipNode) testNodeCriticity(siblings []LeafNode) bool {
// The following loops purpose is to bubble up the unavailability markers of the leafNode. If an unavailable node
// is a child of an AND relationship, the whole relationship is unavailable. If an unavailable node is a child of
// an OR relationship, the whole relationship is unavailable if all of its children are unavailable.
// The algorithm terminates if the tree root is added to new unavailable node list or if there a no more
// unavailability markers that may bubble up.
// Since multiple "and" branches may have bubbling unavailability markers, "and"s bubble up only once, so that it
// does not mess up with the "or" count. "And"s bubbles up only once by marking it as "already bubbled". This is
// done by inserting it in the andSet. The number of children of an Or relationship that have bubbled up an
// unavailability marker is stored in the orSet variable.
orSet := make(map[*RelationshipNode]int)
andSet := make(map[*RelationshipNode]bool)
var unavailableNodes []Node
for _, n := range siblings {
unavailableNodes = append(unavailableNodes, n)
}
for len(unavailableNodes) > 0 {
nodesToHandle := unavailableNodes
unavailableNodes = []Node{}
for _, node := range nodesToHandle {
parent := node.parent()
if parent == nil {
// if "node" is the root node
return true
}
n := parent.(*RelationshipNode)
if n.relation == AND_REL {
if _, ok := andSet[n]; !ok {
andSet[n] = true
unavailableNodes = append(unavailableNodes, n)
}
} else {
if v, ok := orSet[n]; ok {
orSet[n] = v + 1
} else {
orSet[n] = 1
}
if len(n.children) == orSet[n] {
unavailableNodes = append(unavailableNodes, n)
}
}
}
}
return false
}
// getSiblingsByPrefixCloseness returns the leaf nodes of the inventory whose IP addresses fall within the same
// announced prefix as n (plus Cycle nodes). If n is not an IPNode, an empty list is returned.
func getSiblingsByPrefixCloseness(n LeafNode, inventory []LeafNode) []LeafNode {
if ipn, ok := n.(*IPNode) ; ok {
return getSiblingsUsingFilteringFun(inventory, ipn.similarPrefix)
}
return []LeafNode{}
}
// findMandatoryNodesUsingPrefixCloseness finds the nodes that are SPOFs when all IP addresses covered by the same
// announced prefix are considered unavailable together.
func (rn *RelationshipNode) findMandatoryNodesUsingPrefixCloseness(inventory []LeafNode)(mandatoryNodes, optionalNodes mapset.Set) {
return rn.findMandatoryNodes(inventory, getSiblingsByPrefixCloseness)
}
// findMandatoryNodesUsingSimilarity finds the nodes that are SPOFs when similar nodes are considered unavailable
// together, under the breakV4/breakV6/DNSSECOnly assumptions.
func (rn *RelationshipNode) findMandatoryNodesUsingSimilarity(inventory []LeafNode, breakV4, breakV6, DNSSECOnly bool) (mandatoryNodes, optionalNodes mapset.Set) {
getSiblingsFun := func(n LeafNode, inv []LeafNode) []LeafNode {
return getSiblingsUsingSimilarity(n, inv, breakV4, breakV6, DNSSECOnly)
}
return rn.findMandatoryNodes(inventory, getSiblingsFun)
}
// getSiblingsUsingFilteringFun returns the leaf nodes of the inventory that match customFilterFun, plus all Cycle nodes.
func getSiblingsUsingFilteringFun(inventory []LeafNode, customFilterFun func(lf LeafNode) bool) []LeafNode {
var siblings []LeafNode
for _, lf := range inventory {
if _, ok := lf.(*Cycle) ; ok || customFilterFun(lf) {
siblings = append(siblings, lf)
}
}
return siblings
}
// getSiblingsByASN returns the leaf nodes of the inventory whose IP addresses are announced by the same AS as n
// (plus Cycle nodes). If n is not an IPNode, an empty list is returned.
func getSiblingsByASN(n LeafNode, inventory []LeafNode) []LeafNode {
if ipn, ok := n.(*IPNode) ; ok {
return getSiblingsUsingFilteringFun(inventory, ipn.similarASN)
}
return []LeafNode{}
}
// findMandatoryNodesUsingASN finds the nodes that are SPOFs when all IP addresses announced by the same AS are
// considered unavailable together.
func (rn *RelationshipNode) findMandatoryNodesUsingASN(inventory []LeafNode) (mandatoryNodes, optionalNodes mapset.Set) {
return rn.findMandatoryNodes(inventory, getSiblingsByASN)
}
// findMandatoryNodes tests every leaf node of the inventory and splits the nodes into the set of mandatory (SPOF) nodes
// and the set of optional nodes. getSiblingsFun builds, for each tested node, the list of nodes considered unavailable
// at the same time as that node.
func (rn *RelationshipNode) findMandatoryNodes(inventory []LeafNode, getSiblingsFun func(LeafNode, []LeafNode) []LeafNode) (mandatoryNodes, optionalNodes mapset.Set) {
mandatoryNodesSet := make(map[[8]byte]LeafNode)
optionalNodesSet := make(map[[8]byte]LeafNode)
for _, leafNode := range inventory {
// We use a hash of the leafNode to "uniquely" identify nodes. This is because several leafNode instances have
// different memory addresses, while still representing the same node (at the semantic level).
h := leafNode.hash()
// Test whether this node was already evaluated
if _, ok := mandatoryNodesSet[h]; ok {
continue
}
if _, ok := optionalNodesSet[h]; ok {
continue
}
// Build the list of nodes that are considered unavailable at the same time as this node, then test whether their
// simultaneous unavailability prevents the resolution.
siblings := getSiblingsFun(leafNode, inventory)
if rn.testNodeCriticity(siblings) {
mandatoryNodesSet[h] = leafNode
} else {
optionalNodesSet[h] = leafNode
}
}
mandatoryNodes = mapset.NewThreadUnsafeSet()
optionalNodes = mapset.NewThreadUnsafeSet()
// Convert the map into a list of the map values
for _, v := range mandatoryNodesSet {
mandatoryNodes.Add(v)
}
// Convert the map into a list of the map values
for _, v := range optionalNodesSet {
optionalNodes.Add(v)
}
return mandatoryNodes, optionalNodes
}
// convertToListOfLeafNodes converts a set of nodes into a slice of LeafNode instances.
func convertToListOfLeafNodes(s mapset.Set) []LeafNode {
var l []LeafNode
for _, v := range s.ToSlice() {
l = append(l, v.(LeafNode))
}
return l
}
// analyse starts the analysis of the tree under the receiver and returns the list of mandatory nodes
func (rn *RelationshipNode) analyse(breakV4, breakV6, DNSSECOnly bool, tree *iradix.Tree) []CriticalNode {
// A copy of the receiver's tree is performed because we will alter the nodes by simplifying the graph and setting
// the parent of the nodes, and we don't want to be "destructive" in any way
ng := rn.SimplifyGraph()
ng.setParentNodes()
inventory := ng.buildLeafNodeInventory()
var criticalNodes []CriticalNode
mandatoryNodes, _ := ng.findMandatoryNodesUsingSimilarity(inventory, breakV4, breakV6, DNSSECOnly)
for _, node := range convertToListOfLeafNodes(mandatoryNodes) {
switch typedNode := node.(type) {
case *DomainNameNode:
criticalNodes = append(criticalNodes, CriticalName{typedNode.Domain()})
case *IPNode:
criticalNodes = append(criticalNodes, CriticalIP{net.ParseIP(typedNode.IP())})
case *Cycle:
criticalNodes = append(criticalNodes, &Cycle{})
}
}
mandatoryNodes, _ = ng.findMandatoryNodesUsingASN(inventory)
asnSet := make(map[int]bool)
for _, node := range convertToListOfLeafNodes(mandatoryNodes) {
if typedNode, ok := node.(*IPNode) ; ok {
asnSet[typedNode.ASN()] = true
}
}
for asn := range asnSet {
criticalNodes = append(criticalNodes, CriticalASN{asn})
}
mandatoryNodes, _ = ng.findMandatoryNodesUsingPrefixCloseness(inventory)
prefixSet := make(map[string]bool)
for _, node := range convertToListOfLeafNodes(mandatoryNodes) {
if typedNode, ok := node.(*IPNode) ; ok {
prefixSet[typedNode.Prefix()] = true
}
}
for prefix := range prefixSet {
criticalNodes = append(criticalNodes, CriticalPrefix{net.ParseIP(prefix)})
}
return criticalNodes
}
// Analyse is the exported version of analyse. It starts the analysis of the tree under the receiver and returns the
// list of mandatory nodes.
// IPv4 and IPv6 addresses have normal availability markers (no breakV4/breakV6)
func (rn *RelationshipNode) Analyse(DNSSECOnly bool, tree *iradix.Tree) []CriticalNode {
return rn.analyse(false, false, DNSSECOnly, tree)
}
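// An illustrative way of consuming the result (assuming "g" is a *RelationshipNode and "tree" is the ASN radix tree
// built by the caller, e.g. with BuildRadixTree):
//
//	for _, cn := range g.Analyse(false, tree) {
//		switch n := cn.(type) {
//		case CriticalName:
//			fmt.Println("SPOF domain name:", n.Name)
//		case CriticalIP:
//			fmt.Println("SPOF address:", n.IP)
//		case CriticalASN:
//			fmt.Println("SPOF AS:", n.ASN)
//		case CriticalPrefix:
//			fmt.Println("SPOF prefix:", n.Prefix)
//		case *Cycle:
//			fmt.Println("dependency cycle detected")
//		}
//	}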
// AnalyseWithoutV4 runs the same type of analysis as "Analyse" except all IPv4 addresses are marked as unavailable.
// This may reveal that some IPv6 are actually SPOFs when IPv4 addresses are not available.
// AnalyseWithoutV4 may either return the list of mandatory leafNodes or an error if the name cannot be resolved without
// IPv4 address participation.
func (rn *RelationshipNode) AnalyseWithoutV4(DNSSECOnly bool, tree *iradix.Tree) ([]CriticalNode, error) {
l := rn.analyse(true, false, DNSSECOnly, tree)
for _, e := range l {
if node, ok := e.(CriticalIP); ok {
if node.IP.To4() != nil {
return []CriticalNode{}, fmt.Errorf("this domain name requires some IPv4 addresses to be resolved properly")
}
}
}
return l, nil
}
// AnalyseWithoutV6 runs the same type of analysis as "Analyse" except all IPv6 addresses are marked as unavailable.
// This may reveal that some IPv4 are actually SPOFs when IPv6 addresses are not available.
// AnalyseWithoutV6 may either return the list of mandatory leafNodes or an error if the name cannot be resolved without
// IPv6 address participation.
func (rn *RelationshipNode) AnalyseWithoutV6(DNSSECOnly bool, tree *iradix.Tree) ([]CriticalNode, error) {
l := rn.analyse(false, true, DNSSECOnly, tree)
for _, e := range l {
if node, ok := e.(CriticalIP); ok {
if node.IP.To4() == nil {
return []CriticalNode{}, fmt.Errorf("this domain name requires some IPv6 addresses to be resolved properly")
}
}
}
return l, nil
}
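// WorkerAnalysisResult bundles the critical nodes found by one analysis variant with the error that may have
// occurred while computing it.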
type WorkerAnalysisResult struct {
Nodes []CriticalNode
Err error
}
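// PerformAnalyseOnResult runs on the dependency graph g the analysis variants requested in reqConf: the regular one
// and/or the one with the DNSSEC indicator set, each of them possibly declined with IPv4 or IPv6 marked as
// unavailable. The variants that were not requested are returned as nil.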
func PerformAnalyseOnResult(g *RelationshipNode, reqConf *tools.RequestConfig, tree *iradix.Tree) (allNamesResult, allNamesNo4Result, allNamesNo6Result, dnssecResult, dnssecNo4Result, dnssecNo6Result *WorkerAnalysisResult) {
if !reqConf.AnalysisCond.DNSSEC || reqConf.AnalysisCond.All {
dnssecResult = nil
dnssecNo4Result = nil
dnssecNo6Result = nil
allNamesResult, allNamesNo4Result, allNamesNo6Result = performAnalyseOnResultWithDNSSECIndicator(g, reqConf, false, tree)
}
if reqConf.AnalysisCond.DNSSEC || reqConf.AnalysisCond.All {
if !reqConf.AnalysisCond.All {
allNamesResult = nil
allNamesNo4Result = nil
allNamesNo6Result = nil
}
dnssecResult, dnssecNo4Result, dnssecNo6Result = performAnalyseOnResultWithDNSSECIndicator(g, reqConf, true, tree)
}
return
}
func performAnalyseOnResultWithDNSSECIndicator(g *RelationshipNode, reqConf *tools.RequestConfig, DNSSEC bool, tree *iradix.Tree) (natural, noV4, noV6 *WorkerAnalysisResult) {
if reqConf.AnalysisCond.All || (!reqConf.AnalysisCond.NoV4 && !reqConf.AnalysisCond.NoV6) {
natural = &WorkerAnalysisResult{g.Analyse(DNSSEC, tree), nil}
} else {
natural = nil
}
if reqConf.AnalysisCond.All || reqConf.AnalysisCond.NoV4 {
analyseResult, err := g.AnalyseWithoutV4(DNSSEC, tree)
noV4 = &WorkerAnalysisResult{analyseResult, err}
} else {
noV4 = nil
}
if reqConf.AnalysisCond.All || reqConf.AnalysisCond.NoV6 {
analyseResult, err := g.AnalyseWithoutV6(DNSSEC, tree)
noV6 = &WorkerAnalysisResult{analyseResult, err}
} else {
noV6 = nil
}
return
}

37
graph/analysis_result.go Normal file
View file

@@ -0,0 +1,37 @@
package graph
import "net"
type CriticalIP struct {
IP net.IP `json:"ip"`
}
type CriticalName struct {
Name string `json:"name"`
}
type CriticalAlias struct {
Source string `json:"source"`
Target string `json:"target"`
}
type CriticalASN struct {
ASN int `json:"asn"`
}
type CriticalPrefix struct {
Prefix net.IP `json:"prefix"`
}
type CriticalNode interface {
isCriticalNode()
}
func (n CriticalIP) isCriticalNode() {}
func (n CriticalName) isCriticalNode() {}
func (n CriticalAlias) isCriticalNode() {}
func (n CriticalASN) isCriticalNode() {}
func (n CriticalPrefix) isCriticalNode() {}
// Cycles are also critical nodes
func (c *Cycle) isCriticalNode() {}
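// For reference, the JSON forms produced by the struct tags above look like (illustrative values): {"ip":"192.0.2.53"},
// {"name":"example.net."}, {"source":"www.example.net.","target":"cdn.example.org."}, {"asn":64496} and
// {"prefix":"192.0.2.0"}.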

64
graph/analysis_test.go Normal file
View file

@@ -0,0 +1,64 @@
package graph
import (
"testing"
"net"
"bytes"
)
func TestGetBitsFromIP(t *testing.T) {
ip := net.ParseIP("10.0.0.1")
bstr := getIPBitsInBytes(ip)
vector := []byte{
0, 0, 0, 0, 1, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 1,
}
for i, b := range bstr {
if b != vector[i] {
t.Fail()
}
}
}
func TestIsInTree(t *testing.T) {
buf := new(bytes.Buffer)
buf.WriteString("12345 192.168.10.0/24")
tr, err := BuildRadixTree(buf)
if err != nil {
t.Fatal("Failed tree building")
}
asn, err := getASNFor(tr, net.ParseIP("192.168.0.1"))
if err == nil {
t.Fatal("Got a match for 192.168.0.1")
}
asn, ok := getASNFor(tr, net.ParseIP("192.168.10.1"))
if ok != nil || asn != 12345 {
t.Fatal("Did not discover 12345")
}
}
func TestIsInAS(t *testing.T) {
buf := new(bytes.Buffer)
buf.WriteString("12345 192.168.10.0/24")
tr, err := BuildRadixTree(buf)
if err != nil {
t.Fatal("Failed tree building")
}
n := NewIPNode("192.168.10.1")
f := getASNFilteringFunc(tr)
f2 := getASNTestFunc(f, 12345)
f3 := getASNTestFunc(f, 12346)
if !f2(n) {
t.Fatal("Not in 12345")
}
if f3(n) {
t.Fatal("In 12346")
}
}

57
graph/cycle.go Normal file
View file

@@ -0,0 +1,57 @@
package graph
import (
"bytes"
"encoding/json"
)
// Cycle instances represent parts of the graph where circular dependencies are detected. During analysis, they
// signify that this branch of the graph is always invalid.
type Cycle struct {
parentNode Node
}
func (c *Cycle) String() string {
jsonbstr, err := json.Marshal(c)
if err != nil {
return ""
}
return string(jsonbstr)
}
// Implements json.Marshaler
func (c *Cycle) MarshalJSON() ([]byte, error) {
buf := new(bytes.Buffer)
buf.WriteString("{\"type\": \"cycle\"}")
return buf.Bytes(), nil
}
// Implements json.Unmarshaler
func (c *Cycle) UnmarshalJSON([]byte) error {
return nil
}
func (c *Cycle) deepcopy() Node {
nc := new(Cycle)
nc.parentNode = c.parentNode
return nc
}
func (c *Cycle) setParent(g Node) {
c.parentNode = g
}
func (c *Cycle) parent() Node {
return c.parentNode
}
// similar returns true if the provided LeafNode is also a Cycle node
func (c *Cycle) similar(o LeafNode) bool {
_, ok := o.(*Cycle)
return ok
}
func (c *Cycle) hash() (ret [8]byte) {
copy(ret[:], []byte("Cycle")[:8])
return
}

100
graph/domainName.go Normal file
View file

@@ -0,0 +1,100 @@
package graph
import (
"crypto/sha256"
"encoding/json"
"github.com/miekg/dns"
"strings"
)
/* serializedDomainNameNode is a proxy struct used to serialize a DomainNameNode into JSON.
The DomainNameNode struct is not used directly because the Go json module requires that attributes be exported for
it to work, and DomainNameNode struct attributes have no other reason for being exported.
*/
type serializedDomainNameNode struct {
Domain string `json:"domain"`
DnssecProtected bool `json:"dnssec"`
}
// DomainNameNode represents a domain name or an alias of a name within the dependency tree
// As a metadata, if a node represents a zone apex, a DNSSEC indicator is set if there is a DS record for this name.
type DomainNameNode struct {
domain string
dnssecProtected bool
parentNode Node
}
// NewDomainNameNode returns a new DomainNameNode instance and initializes it with the domain name and the
// DNSSEC indicator
func NewDomainNameNode(domain string, dnssecProtected bool) *DomainNameNode {
n := new(DomainNameNode)
n.domain = strings.ToLower(dns.Fqdn(domain))
n.dnssecProtected = dnssecProtected
return n
}
// Implements json.Marshaler
func (n *DomainNameNode) MarshalJSON() ([]byte, error) {
sn := new(serializedDomainNameNode)
sn.Domain = n.domain
sn.DnssecProtected = n.dnssecProtected
return json.Marshal(sn)
}
// Implements json.Unmarshaler
func (n *DomainNameNode) UnmarshalJSON(bstr []byte) error {
sn := new(serializedDomainNameNode)
err := json.Unmarshal(bstr, sn)
if err != nil {
return err
}
n.domain = sn.Domain
n.dnssecProtected = sn.DnssecProtected
return nil
}
func (n *DomainNameNode) Domain() string {
return n.domain
}
func (n *DomainNameNode) DNSSECProtected() bool {
return n.dnssecProtected
}
func (n *DomainNameNode) String() string {
jsonbstr, err := json.Marshal(n)
if err != nil {
return ""
}
return string(jsonbstr)
}
func (n *DomainNameNode) deepcopy() Node {
nn := new(DomainNameNode)
nn.domain = n.domain
nn.dnssecProtected = n.dnssecProtected
nn.parentNode = n.parentNode
return nn
}
func (n *DomainNameNode) setParent(g Node) {
n.parentNode = g
}
func (n *DomainNameNode) parent() Node {
return n.parentNode
}
// similar returns true if the o LeafNode is a DomainNameNode and the domains are identical, regardless of the DNSSEC protection status
func (n *DomainNameNode) similar(o LeafNode) bool {
otherDomain, ok := o.(*DomainNameNode)
// It is safe to use == to compare domain names here, because NewDomainNameNode performed canonicalization
return ok && n.domain == otherDomain.domain
}
func (n *DomainNameNode) hash() [8]byte {
var ret [8]byte
h := sha256.Sum256([]byte(n.domain))
copy(ret[:], h[:8])
return ret
}

114
graph/drawGraphViz.go Normal file
View file

@@ -0,0 +1,114 @@
package graph
import (
"fmt"
"encoding/hex"
"github.com/awalterschulze/gographviz"
)
// isCritical returns true if n is similar to any of the criticalNodes
func isCritical(n LeafNode, criticalNodes []CriticalNode) bool {
IPNode, isIPNode := n.(*IPNode)
critical := false
for _, cn := range criticalNodes {
switch typedCritNode := cn.(type) {
case CriticalName:
critical = n.similar(NewDomainNameNode(typedCritNode.Name, false))
case CriticalIP:
critical = n.similar(NewIPNode(typedCritNode.IP.String(), 0))
case CriticalAlias:
critical = n.similar(NewAliasNode(typedCritNode.Target, typedCritNode.Source))
case CriticalASN:
if isIPNode {
critical = IPNode.ASN() == typedCritNode.ASN
}
case CriticalPrefix:
if isIPNode {
critical = IPNode.Prefix() == typedCritNode.Prefix.String()
}
}
if critical {
return true
}
}
return false
}
// DrawGraph initializes a graphviz graph instance rooted on g, then returns it, along with the "root" node of that
// "subgraph" (since g could be the child of another node). Members of criticalNodes are highlighted.
func DrawGraph(g Node, criticalNodes []CriticalNode) (*gographviz.Graph, string) {
gv := gographviz.NewGraph()
gv.SetStrict(true)
gv.SetDir(true)
gv.Attrs.Add(string(gographviz.RankSep), "3")
gv.Attrs.Add(string(gographviz.NodeSep), "1")
h := g.hash()
nodeId := "node" + hex.EncodeToString(h[:8])
// Create a node, then add it and keep the reference to self, to add the edges later on.
// Use attributes to encode AND or OR.
switch node := g.(type) {
case *RelationshipNode:
var label string
if node.relation == AND_REL {
label = fmt.Sprintf("AND rel: %s", node.comment)
} else {
label = fmt.Sprintf("OR rel: %s", node.comment)
}
attr := make(map[gographviz.Attr]string)
attr[gographviz.Label] = "\"" + label + "\""
gv.Nodes.Add(&gographviz.Node{nodeId, attr})
for _, chld := range node.children {
chldGraph, firstNode := DrawGraph(chld, criticalNodes)
for _, chldNode := range chldGraph.Nodes.Nodes {
gv.Nodes.Add(chldNode)
}
for _, chldEdge := range chldGraph.Edges.Edges {
gv.Edges.Add(chldEdge)
}
gv.AddEdge(nodeId, firstNode, true, nil)
}
case *Cycle:
label := "Cycle"
attr := make(map[gographviz.Attr]string)
attr[gographviz.Label] = label
attr[gographviz.Style] = "radial"
attr[gographviz.FillColor] = "\"red:white\""
gv.Nodes.Add(&gographviz.Node{nodeId, attr})
case *DomainNameNode:
label := node.Domain()
attr := make(map[gographviz.Attr]string)
if isCritical(node, criticalNodes) {
attr[gographviz.Style] = "radial"
attr[gographviz.FillColor] = "\"red:white\""
}
attr[gographviz.Label] = "\""+ label + "\""
gv.Nodes.Add(&gographviz.Node{nodeId, attr})
case *AliasNode:
source := node.Source()
attr := make(map[gographviz.Attr]string)
attr[gographviz.Style] = "dotted"
attr[gographviz.Label] = "\"" + source + "\""
gv.Nodes.Add(&gographviz.Node{nodeId, attr})
target := node.Target()
attr = make(map[gographviz.Attr]string)
attr[gographviz.Style] = "solid"
attr[gographviz.Label] = "\"" + target + "\""
gv.Nodes.Add(&gographviz.Node{nodeId+"2", attr})
attr = make(map[gographviz.Attr]string)
attr[gographviz.Label] = "CNAME"
gv.Edges.Add(&gographviz.Edge{nodeId, "", nodeId+"2", "", true, attr})
case *IPNode:
label := node.IP()
attr := make(map[gographviz.Attr]string)
attr[gographviz.Label] = "\"" + label + "\""
if isCritical(node, criticalNodes) {
attr[gographviz.Style] = "radial"
attr[gographviz.FillColor] = "\"red:white\""
}
gv.Nodes.Add(&gographviz.Node{nodeId, attr})
}
return gv, nodeId
}
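// An illustrative way of using DrawGraph (assuming "g" is the graph root and "criticalNodes" comes from an Analyse
// call): the returned graph can be converted to the DOT language and fed to the graphviz tools.
//
//	gv, _ := DrawGraph(g, criticalNodes)
//	dot := gv.String() // DOT representation of the dependency graph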

22
graph/graphnode.go Normal file
View file

@@ -0,0 +1,22 @@
package graph
// A node is an intermediary node (RelationshipNode) or a LeafNode in a dependency graph.
type Node interface {
String() string
// deepcopy performs a copy of the receiver node and returns the new identical instance
deepcopy() Node
// setParent sets the parent node of the receiver
setParent(g Node)
// parent returns the parent node of the receiver
parent() Node
// hash returns a byte array representing the node as a value that can be used as a map key
hash() [8]byte
}
// A LeafNode is, as the name implies, a leaf node in a dependency tree. The only difference with the Node interface
// is that LeafNode instances can be compared using the similar() method.
type LeafNode interface {
Node
// similar compares two LeafNode and returns true if they are similar enough (not necessarily strictly similar, though)
similar(g LeafNode) bool
}

132
graph/ip.go Normal file
View file

@@ -0,0 +1,132 @@
package graph
import (
"crypto/sha256"
"encoding/json"
"github.com/miekg/dns"
"net"
"strings"
)
type serializedIPNode struct {
Addr net.IP `json:"ip"`
Name string `json:"name"`
ASN int `json:"asn"`
Prefix net.IP `json:"prefix"`
}
type IPNode struct {
addr net.IP
name string
asn int
prefix net.IP
parentNode Node
}
func NewIPNode(ip string, asn int) (n *IPNode) {
n = NewIPNodeWithName(ip, "", asn)
return
}
func NewIPNodeWithName(ip string, dn string, asn int) *IPNode {
n := new(IPNode)
n.addr = net.ParseIP(ip)
n.asn = asn
n.name = strings.ToLower(dns.Fqdn(dn))
if n.IsV4() {
n.prefix = n.addr.Mask(net.CIDRMask(24, 32))
} else {
n.prefix = n.addr.Mask(net.CIDRMask(48, 128))
}
return n
}
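// For instance (illustrative values), an IPNode built for 192.0.2.53 gets the prefix 192.0.2.0 (/24 in IPv4), while
// one built for 2001:db8:1:2::53 gets the prefix 2001:db8:1:: (/48 in IPv6).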
func (n *IPNode) MarshalJSON() ([]byte, error) {
sip := new(serializedIPNode)
sip.Addr = n.addr
sip.Name = n.name
sip.Prefix = n.prefix
sip.ASN = n.asn
return json.Marshal(sip)
}
func (n *IPNode) UnmarshalJSON(bstr []byte) error {
sip := new(serializedIPNode)
if err := json.Unmarshal(bstr, sip) ; err != nil {
return err
}
n.addr = sip.Addr
n.name = sip.Name
n.prefix = sip.Prefix
n.asn = sip.ASN
return nil
}
func (n *IPNode) String() string {
jsonbstr, err := json.Marshal(n)
if err != nil {
return ""
}
return string(jsonbstr)
}
func (n *IPNode) IsV4() bool {
return n.addr.To4() != nil
}
func (n *IPNode) IP() string {
return n.addr.String()
}
func (n *IPNode) ASN() int {
return n.asn
}
func (n *IPNode) Prefix() string {
return n.prefix.String()
}
func (n *IPNode) deepcopy() Node {
nn := new(IPNode)
nn.name = n.name
nn.addr = n.addr
nn.asn = n.asn
nn.prefix = n.prefix
nn.parentNode = n.parentNode
return nn
}
func (n *IPNode) setParent(g Node) {
n.parentNode = g
}
func (n *IPNode) parent() Node {
return n.parentNode
}
func (n *IPNode) similar(o LeafNode) bool {
otherIP, ok := o.(*IPNode)
return ok && n.addr.Equal(otherIP.addr)
}
func (n *IPNode) similarASN(o LeafNode) bool {
otherIP, ok := o.(*IPNode)
return ok && n.ASN() != 0 && n.ASN() == otherIP.ASN()
}
func (n *IPNode) similarPrefix(o LeafNode) bool {
otherIP, ok := o.(*IPNode)
if !ok || n.IsV4() != otherIP.IsV4() {
return false
}
return n.prefix.Equal(otherIP.prefix)
}
func (n *IPNode) hash() [8]byte {
var ret [8]byte
h := sha256.Sum256([]byte(n.addr.String()))
copy(ret[:], h[:8])
return ret
}

175
graph/relationNode.go Normal file
View file

@@ -0,0 +1,175 @@
package graph
import (
"crypto/sha256"
"encoding/json"
)
const (
// OR_REL is a constant used to designate the OR relationship in RelationshipNode instances
OR_REL = iota
// AND_REL is a constant used to designate the AND relationship in RelationshipNode instances
AND_REL
)
/* serializedRelationshipNode is a proxy struct used to serialize a RelationshipNode into JSON.
The RelationshipNode struct is not used directly because the Go json module requires that attributes be exported
for it to work, and RelationshipNode struct attributes have no other reason for being exported.
*/
type serializedRelationshipNode struct {
Comment string `json:"comment"`
Relation int `json:"rel"`
Children []interface{} `json:"elmts"`
}
// RelationshipNode instances represent intermediary nodes in the dependency graph. RelationshipNodes are N-ary trees,
// not necessarily binary trees.
// Children of such a node are related following either an "and" or an "or" boolean expression.
type RelationshipNode struct {
comment string
relation int
parentNode Node
children []Node
}
/* NewRelationshipNode returns a new RelationshipNode after initializing it.
comment is a free-form string giving some indication as to why this node exists and what it represents w.r.t. the
dependency tree.
relation is either equal to AND_REL or OR_REL
*/
func NewRelationshipNode(comment string, relation int) *RelationshipNode {
if relation != AND_REL && relation != OR_REL {
panic("Contract violation: relation is not equal to AND_REL or OR_REL.")
}
g := new(RelationshipNode)
g.comment = comment
g.relation = relation
return g
}
// Implements json.Marshaler
func (rn *RelationshipNode) MarshalJSON() ([]byte, error) {
srn := new(serializedRelationshipNode)
srn.Comment = rn.comment
srn.Relation = rn.relation
for _, v := range rn.children {
srn.Children = append(srn.Children, v)
}
return json.Marshal(srn)
}
// Implements json.Unmarshaler
func (rn *RelationshipNode) UnmarshalJSON(b []byte) error {
// This function first unserializes a serializedRelationshipNode, then uses this object to initialize the
// receiver.
srn := new(serializedRelationshipNode)
err := json.Unmarshal(b, srn)
if err != nil {
return err
}
rn.comment = srn.Comment
rn.relation = srn.Relation
for _, chld := range srn.Children {
m := chld.(map[string]interface{})
rn.addChildrenFromMap(m)
}
return nil
}
/* addChildrenFromMap discovers, from a map of interface{}, the type of the object that was serialized as this map.
This is needed because struct instances implementing an interface are all flattened into interface{} instances during
the serialization process, and it is up to the unserializer to detect what's what.
Using the map key names, the object type is discovered. Ultimately, the object is initialized and added as a child of
the receiver.
*/
func (rn *RelationshipNode) addChildrenFromMap(m map[string]interface{}) {
if _, ok := m["target"]; ok {
rn.children = append(rn.children, NewAliasNode(m["target"].(string), m["source"].(string)))
} else if _, ok := m["domain"]; ok {
rn.children = append(rn.children, NewDomainNameNode(m["domain"].(string), m["dnssec"].(bool)))
} else if _, ok := m["ip"]; ok {
if _, ok := m["name"]; ok {
rn.children = append(rn.children, NewIPNodeWithName(m["ip"].(string), m["name"].(string), int(m["asn"].(float64))))
} else {
rn.children = append(rn.children, NewIPNode(m["ip"].(string), int(m["asn"].(float64))))
}
} else if _, ok := m["comment"]; ok {
// When there is a comment, this indicates a RelationshipNode => recursive call
chldGraph := new(RelationshipNode)
// Initialization of the child RelationshipNode cannot be done with initializeFromSerializedRelNode because the
// child node is also represented as a map!
chldGraph.initializeFromMap(m)
rn.children = append(rn.children, chldGraph)
} else if c, ok := m["type"] ; ok && c.(string) == "cycle" {
// Cycles are represented in JSON as an object containing a "type" key, and a "cycle" string value.
rn.children = append(rn.children, new(Cycle))
} else {
panic("BUG: invalid or unknown child type")
}
}
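// As an illustration (made-up values), a serialized RelationshipNode may look like:
//
//	{"comment":"example.net. delegation","rel":1,"elmts":[
//		{"domain":"example.net.","dnssec":false},
//		{"ip":"192.0.2.53","name":"ns1.example.net.","asn":64496,"prefix":"192.0.2.0"},
//		{"type":"cycle"}
//	]}
//
// The presence of the "target", "domain", "ip", "comment" or "type" key is what drives the type discovery performed above.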
// initializeFromMap initializes the receiver using a map representing a RelationshipNode unserialized from JSON
func (rn *RelationshipNode) initializeFromMap(m map[string]interface{}) {
rn.comment = m["comment"].(string)
// float64 is used for type casting because JSON numbers are floats. We recast it as int because we know that values
// are only equal to AND_REL or OR_REL
rn.relation = int(m["rel"].(float64))
for _, chld := range m["elmts"].([]interface{}) {
m := chld.(map[string]interface{})
rn.addChildrenFromMap(m)
}
}
func (rn *RelationshipNode) deepcopy() Node {
cg := new(RelationshipNode)
cg.comment = rn.comment
cg.relation = rn.relation
cg.children = make([]Node, 0, len(rn.children))
cg.parentNode = rn.parentNode
for _, chld := range rn.children {
cg.children = append(cg.children, chld.deepcopy())
}
return cg
}
// AddChild adds a Node to the children of the receiver. This is the main function used for tree building
func (rn *RelationshipNode) AddChild(c Node) {
rn.children = append(rn.children, c)
}
func (rn *RelationshipNode) String() string {
jsonbtr, err := json.Marshal(rn)
if err != nil {
return ""
}
return string(jsonbtr)
}
func (rn *RelationshipNode) hash() [8]byte {
var ret [8]byte
h := sha256.Sum256([]byte(rn.String()))
copy(ret[:], h[:8])
return ret
}
func (rn *RelationshipNode) setParent(p Node) {
rn.parentNode = p
}
func (rn *RelationshipNode) parent() Node {
return rn.parentNode
}
func (rn *RelationshipNode) setParentNodes() {
for _, chld := range rn.children {
chld.setParent(rn)
if cg, ok := chld.(*RelationshipNode); ok {
cg.setParentNodes()
}
}
}

View file

@@ -0,0 +1,210 @@
package dependency
import (
"github.com/miekg/dns"
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/tools"
"strings"
"time"
"github.com/ANSSI-FR/transdep/errors"
)
/* RequestTopic is a key used to uniquely represent a request.
This may be used in order to detect request loops and circular dependencies, and to identify the topic of a
dependency resolver worker.
*/
type RequestTopic struct {
// domain is the queried domain name
domain string
/* followAlias indicates whether to insert the resolved name as part of the dependency tree. This is part of the
topic because we could have two workers, one returning the cached result WITH the name resolved, and one
WITHOUT the name resolved.
*/
followAlias bool
// includeIP indicates whether to insert the IP addresses as part of the dependency tree. This is part of the
// topic for the same reasons followAlias is.
includeIP bool
/* depth is used to detect CNAME/aliases loops and overly long chains. Also, it is used to differentiate request
topic because a chain might be considered too long from a starting point and not too long if considered from a
node in the middle of the chain. For instance, let's consider a CNAME chain where "A" is a CNAME to "B",
"B" is a CNAME to "C" and so on until "K". This is a 10 CNAME long chain. We might not be interested in
following through after "K" to spare resources. Now, if that "not following through" was cached, this would be
problematic if someone considered the chain from "F"; indeed, the "F" to "K" chain is not 10 CNAME long. In that
case, we want to follow through to see where "K" resolves. Since the response to the "A" request is composed of
the resolution of "A" and the response to "B" (and so on), caching the "K" response saying that this is a "dead"
chain would be incorrect, except if we cache that this is the "K" response after a 9 CNAME long chain.
*/
depth int
/*
except contains a list of booleans indicating the exceptions/violations to the DNS protocol that we are OK to accept
for this query
*/
except tools.Exceptions
}
/* Request represents a request sent to fetch the dependency tree of a domain name.
It is initialized by calling NewRequest. A request is passed to the Finder.Handle() method. The result of the Finder
handling is obtained by calling the request Result() method. For instance:

import (
"fmt"

"github.com/ANSSI-FR/transdep/dependency"
"github.com/ANSSI-FR/transdep/errors"
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/tools"
)

func example(f *dependency.Finder, domain string) graph.Node {
r := dependency.NewRequest(domain, false, false, tools.Exceptions{})
f.Handle(r)
g, err := r.Result()
if err != nil {
if _, ok := err.OriginalError().(*errors.TimeoutError); ok {
fmt.Printf("Timeout during resolution of %s\n", domain)
} else {
fmt.Println(err)
}
return nil
}
return g
}
*/
type Request struct {
topic RequestTopic
// resultChan is used as a "blocking" communication channel between the goroutine that resolves the request and the
// goroutine that is waiting for the result. The goroutine waiting for the result blocks on "Result()" until the
// worker responsible for the result is ready to send it by calling SetResult on the request.
resultChan chan *result
// context is used to detect dependency loops. It is the set of request topics that were already spooled and
// that led to the resolution of the current request topic
context map[RequestTopic]bool
}
/* NewRequest builds a new request from a context-free perspective.
This is mainly used when making a request that is completely unrelated to any other request. Thus, it should be used
by the dependency finder users to submit requests.
domain is the domain name that is requested for dependency resolution
resolveName indicates whether we are interested in following a CNAME that may be found at the requested domain
name. False indicates that we only want the dependency tree for the parent domains of the requested name and the
delegation info to that name.
includeIP indicates that, on top of following a possible CNAME, we want the IP addresses associated with the requested domain name.
except lists the exceptions/violations of the DNS protocol that are acceptable for this request.
*/
func NewRequest(domain string, resolveName, includeIP bool, except tools.Exceptions) *Request {
dr := new(Request)
dr.topic.domain = strings.ToLower(dns.Fqdn(domain))
dr.topic.followAlias = resolveName
dr.topic.includeIP = includeIP
dr.topic.depth = 0
dr.topic.except = except
dr.resultChan = make(chan *result, 1)
dr.context = make(map[RequestTopic]bool)
return dr
}
/* NewRequestWithContext builds a new request that is built in the context of the resolution of another request. Thus,
it is possible that loops get created, if a request is dependent on the resolution of another request which is dependent
on the result of the resolution of the first request. Building a request using NewRequestWithContext will prevent this
by using the DetectCycle() method whenever appropriate.
*/
func NewRequestWithContext(domain string, resolveName, includeIP bool, parentReq *Request, depth int) *Request {
dr := new(Request)
dr.topic.domain = strings.ToLower(dns.Fqdn(domain))
dr.topic.followAlias = resolveName
dr.topic.includeIP = includeIP
dr.topic.depth = depth
dr.topic.except = parentReq.Exceptions()
dr.resultChan = make(chan *result, 1)
/* Simply assigning parentReq.context to dr.context would only copy the map reference, but we need a deepcopy
here, because the parentReq context must NOT be changed by the addition of parentReq to the context. Otherwise,
this would break cycle detection if the parent request were to depend on multiple request results. */
dr.context = make(map[RequestTopic]bool)
for k, v := range parentReq.context {
dr.context[k] = v
}
dr.context[parentReq.topic] = true
return dr
}
// Name is the getter of the domain name that is the topic of this request.
func (dr *Request) Name() string {
return dr.topic.domain
}
func (dr *Request) Exceptions() tools.Exceptions {
return dr.topic.except
}
// FollowAlias is the getter of the FollowAlias value part of the topic of this request.
func (dr *Request) FollowAlias() bool {
return dr.topic.followAlias
}
// IncludeIP is the getter of the IncludeIP value part of the topic of this request.
func (dr *Request) IncludeIP() bool {
return dr.topic.includeIP
}
func (dr *Request) Equal(other *Request) bool {
return dr.topic == other.topic
}
/* ResolveTargetName indicates whether the requester is interested in the value of the requested name (the CNAME and
its dependency tree or the IP addresses) or if the request topic is only the dependency graph of the apex of the zone
containing the requested domain name.
*/
func (dr *Request) ResolveTargetName() bool {
return dr.topic.followAlias || dr.topic.includeIP
}
// Topic is the getter of the request topic as specified during this request initialization
func (dr *Request) Topic() RequestTopic {
return dr.topic
}
// Returns the depth of the current request. This is used to detect overly long alias chains
func (dr *Request) Depth() int {
return dr.topic.depth
}
// SetResult records the result of this request.
// This function must only be called once per request, although nothing enforces it at the moment...
func (dr *Request) SetResult(g graph.Node, err *errors.ErrorStack) {
if err != nil {
err = err.Copy()
}
dr.resultChan <- &result{g, err}
}
/* Result returns the result that is set by SetResult().
If the result is yet to be known when this method is called, a timeout duration is waited and if there are still no
result available after that period, tools.ERROR_TIMEOUT is returned as an error.
The specific timeout duration may be specified if the default one is not appropriate, using the
ResultWithSpecificTimeout() method, instead of calling Result()
*/
func (dr *Request) Result() (graph.Node, *errors.ErrorStack) {
return dr.ResultWithSpecificTimeout(tools.DEFAULT_TIMEOUT_DURATION)
}
// ResultWithSpecificTimeout usage is described in the documentation of Request.Result()
func (dr *Request) ResultWithSpecificTimeout(dur time.Duration) (graph.Node, *errors.ErrorStack) {
select {
case res := <-dr.resultChan:
return res.Result, res.Err
case _ = <-tools.StartTimeout(dur):
return nil, errors.NewErrorStack(errors.NewTimeoutError("dependency graph resolution", dr.topic.domain))
}
}
// DetectCycle returns true if this request creates a dependency cycle
func (dr *Request) DetectCycle() bool {
_, ok := dr.context[dr.topic]
return ok
}

View file

@@ -0,0 +1,16 @@
package dependency
import (
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/errors"
)
/* result contains the result of a dependency request, containing the dependency tree or an error message associated to
that dependency tree resolution.
This struct is used mainly as a vector inside go channels to emulate multiple return values.
*/
type result struct {
Result graph.Node
Err *errors.ErrorStack
}

View file

@@ -0,0 +1,118 @@
package nameresolver
import (
"bufio"
"bytes"
"encoding/json"
"github.com/miekg/dns"
"io"
"os"
"strings"
"github.com/ANSSI-FR/transdep/errors"
)
// CACHE_DIRNAME is the name of the directory under the cache root directory for storage of name resolution cache files
const CACHE_DIRNAME = "nameresolver"
// CreateCacheDir creates the cache dir for storage of name resolution cache files.
// It may return an error if the directory cannot be created. If the directory already exists, this function does
// nothing.
func CreateCacheDir(cacheRootDir string) error {
if err := os.MkdirAll(cacheRootDir+string(os.PathSeparator)+CACHE_DIRNAME, 0700); !os.IsExist(err) {
return err
}
return nil
}
// CacheFile represents a cache file storing the result of a name resolution.
type CacheFile struct {
fileName string
}
// NewCacheFile initializes a new CacheFile struct, based on the cache root dir and the request topic (domain name
// and accepted exceptions) that is the subject of this cache file.
func NewCacheFile(cacheRootDir string, topic RequestTopic) *CacheFile {
buf := new(bytes.Buffer)
buf.WriteString(cacheRootDir)
buf.WriteRune(os.PathSeparator)
buf.WriteString(CACHE_DIRNAME)
buf.WriteRune(os.PathSeparator)
buf.WriteString("nr-")
buf.WriteString(strings.ToLower(dns.Fqdn(topic.Name)))
buf.WriteString("-")
if topic.Exceptions.RFC8020 {
buf.WriteString("1")
} else {
buf.WriteString("0")
}
if topic.Exceptions.AcceptServFailAsNoData {
buf.WriteString("1")
} else {
buf.WriteString("0")
}
fileName := buf.String()
cf := &CacheFile{fileName}
return cf
}
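// For instance (illustrative values), with a cache root directory of "/tmp/transdep", a topic name of
// "www.example.net" and only the RFC 8020 exception accepted, the resulting file name is
// "/tmp/transdep/nameresolver/nr-www.example.net.-10".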
// NewExistingCacheFile initializes a new CacheFile struct and ensures that the corresponding file exists, or else returns an error.
func NewExistingCacheFile(cacheRootDir string, topic RequestTopic) (*CacheFile, error) {
cf := NewCacheFile(cacheRootDir, topic)
fd, err := os.Open(cf.fileName)
defer fd.Close()
return cf, err
}
/* Result returns the entry or the error that were stored in the cache file. An error may also be returned, if an
incident happens during retrieval/interpretation of the cache file.
entry is the entry that was stored in the cache file.
resultError is the resolution error that was stored in the cache file.
err is the error that may happen during retrieval of the value in the cache file.
*/
func (cf *CacheFile) Result() (entry *Entry, resultError *errors.ErrorStack, err error) {
fd, err := os.Open(cf.fileName)
if err != nil {
return nil, nil, err
}
defer fd.Close()
buffedFd := bufio.NewReader(fd)
// Read up to a null byte or EOF: the whole remaining content of the file is the JSON document that was cached.
jsonbstr, err := buffedFd.ReadBytes('\x00')
if err != nil && err != io.EOF {
return nil, nil, err
}
res := new(result)
err = json.Unmarshal(jsonbstr, res)
if err != nil {
return nil, nil, err
}
return res.Result, res.Err, nil
}
// SetResult writes in the cache file the provided entry or error. An error is returned if an incident happens or else
// nil is returned.
func (cf *CacheFile) SetResult(entry *Entry, resultErr *errors.ErrorStack) error {
var jsonRepr []byte
var err error
var fd *os.File
if jsonRepr, err = json.Marshal(&result{entry, resultErr}); err != nil {
return err
}
if fd, err = os.OpenFile(cf.fileName, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0600); err != nil {
return err
}
fd.WriteString(string(jsonRepr))
fd.Close()
return nil
}

View file

@@ -0,0 +1,82 @@
package nameresolver
import (
"github.com/miekg/dns"
"net"
"strings"
"encoding/json"
)
// serializedEntry is used as a proxy to generate a JSON representation of an entry, using the json go module.
type serializedEntry struct {
Name string `json:"name"`
Alias *string `json:"alias,omitempty"`
Addrs *[]net.IP `json:"addrs,omitempty"`
}
// Entry represents the resolution of a name, either into an alias (CNAME) or a list of IP addresses (both v4 and v6).
type Entry struct {
owner string
cNAMETarget string
addrs []net.IP
}
// NewAliasEntry is the constructor of an entry whose content symbolizes a domain name owning just a CNAME.
func NewAliasEntry(owner, CNAMETarget string) *Entry {
e := new(Entry)
e.owner = strings.ToLower(dns.Fqdn(owner))
e.cNAMETarget = strings.ToLower(dns.Fqdn(CNAMETarget))
return e
}
// NewIPEntry is the constructor of an entry whose content is a domain name and its associated IP addresses.
func NewIPEntry(name string, addrs []net.IP) *Entry {
e := new(Entry)
e.owner = strings.ToLower(dns.Fqdn(name))
e.cNAMETarget = ""
e.addrs = addrs
return e
}
// Implements json.Marshaler
func (e *Entry) MarshalJSON() ([]byte, error) {
sre := new(serializedEntry)
sre.Name = e.owner
if e.cNAMETarget != "" {
sre.Alias = new(string)
*sre.Alias = e.cNAMETarget
} else if len(e.addrs) > 0 {
sre.Addrs = new([]net.IP)
*sre.Addrs = e.addrs
}
return json.Marshal(sre)
}
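// An alias entry thus serializes as {"name":"www.example.net.","alias":"web.example.org."}, while an address entry
// serializes as {"name":"web.example.org.","addrs":["192.0.2.53","2001:db8::53"]} (illustrative values).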
// Implements json.Unmarshaler
func (e *Entry) UnmarshalJSON(bstr []byte) error {
sre := new(serializedEntry)
err := json.Unmarshal(bstr, sre)
if err != nil {
return err
}
e.owner = sre.Name
if sre.Alias != nil {
e.cNAMETarget = *sre.Alias
}
if sre.Addrs != nil {
e.addrs = *sre.Addrs
}
return nil
}
func (e *Entry) Owner() string {
return e.owner
}
func (e *Entry) CNAMETarget() string {
return e.cNAMETarget
}
func (e *Entry) Addrs() []net.IP {
return e.addrs
}

View file

@@ -0,0 +1,99 @@
package nameresolver
import (
"github.com/miekg/dns"
"github.com/ANSSI-FR/transdep/tools"
"time"
"strings"
"github.com/ANSSI-FR/transdep/errors"
)
type RequestTopic struct {
// Name is the domain name that is the topic of the request.
Name string
// Exceptions is the list of exceptions/violations of the DNS protocol that we are willing to accept for this query
Exceptions tools.Exceptions
}
// Request represents a request to the name resolution "finder".
type Request struct {
topic RequestTopic
// resultChan is used internally by the request methods to pass around the result of the request between the worker
// goroutine doing the resolution and the calling goroutines that initiated the resolution.
resultChan chan *result
// context is used for cycle detection to prevent cyclic name resolution, for instance if a domain name owns a
// CNAME to itself or if a CNAME chain is circular.
context map[RequestTopic]bool
}
// NewRequest builds a new Request instance.
// This is the standard way of building new requests from a third-party module.
func NewRequest(name string, except tools.Exceptions) *Request {
nr := new(Request)
nr.topic.Name = strings.ToLower(dns.Fqdn(name))
nr.topic.Exceptions = except
nr.resultChan = make(chan *result, 1)
nr.context = make(map[RequestTopic]bool)
return nr
}
// NewRequestWithContext builds a new Request instance, adding some context information to it to prevent resolution loops.
// This is mainly used by github.com/ANSSI-FR/transdep/nameresolver
func NewRequestWithContext(name string, except tools.Exceptions, ctx *Request) *Request {
nr := new(Request)
nr.topic.Name = strings.ToLower(dns.Fqdn(name))
nr.topic.Exceptions = except
nr.resultChan = make(chan *result, 1)
nr.context = make(map[RequestTopic]bool)
for k, v := range ctx.context {
nr.context[k] = v
}
nr.context[ctx.topic] = true
return nr
}
func (nr *Request) Name() string {
return nr.topic.Name
}
func (nr *Request) Exceptions() tools.Exceptions {
return nr.topic.Exceptions
}
func (nr *Request) RequestTopic() RequestTopic {
return nr.topic
}
// DetectLoop returns true if this request is part of a resolution loop
func (nr *Request) DetectLoop() bool {
_, ok := nr.context[nr.topic]
return ok
}
func (nr *Request) Equal(other *Request) bool {
return nr.topic == other.topic
}
// Result returns the result of a name resolution or an error.
// The error may be caused by a timeout after the default timeout duration, or an error during the resolution process.
func (nr *Request) Result() (*Entry, *errors.ErrorStack) {
return nr.ResultWithSpecificTimeout(tools.DEFAULT_TIMEOUT_DURATION)
}
// ResultWithSpecificTimeout is similar to Result except that a timeout duration may be specified.
func (nr *Request) ResultWithSpecificTimeout(dur time.Duration) (*Entry, *errors.ErrorStack) {
select {
case res := <-nr.resultChan:
return res.Result, res.Err
case _ = <-tools.StartTimeout(dur):
return nil, errors.NewErrorStack(errors.NewTimeoutError("name resolution", nr.topic.Name))
}
}
// SetResult allows the definition of the result value associated with this request.
func (nr *Request) SetResult(resEntry *Entry, err *errors.ErrorStack) {
if err != nil {
err = err.Copy()
}
nr.resultChan <- &result{resEntry, err}
}

View file

@@ -0,0 +1,35 @@
package nameresolver
import (
"encoding/json"
"github.com/ANSSI-FR/transdep/errors"
)
type serializedResult struct {
Result *Entry `json:"result,omitempty"`
Err *errors.ErrorStack `json:"error,omitempty"`
}
// result is used for serialization of entries/errors for caching purposes as well as for transmission between
// goroutines using channels
type result struct {
Result *Entry
Err *errors.ErrorStack
}
func (r *result) MarshalJSON() ([]byte, error) {
sr := new(serializedResult)
sr.Result = r.Result
sr.Err = r.Err
return json.Marshal(sr)
}
func (r *result) UnmarshalJSON(bstr []byte) error {
sr := new(serializedResult)
if err := json.Unmarshal(bstr, sr) ; err != nil {
return err
}
r.Result = sr.Result
r.Err = sr.Err
return nil
}

View file

@@ -0,0 +1,114 @@
package zonecut
import (
"bufio"
"bytes"
"encoding/json"
"github.com/miekg/dns"
"io"
"os"
"strings"
"github.com/ANSSI-FR/transdep/errors"
)
// CACHE_DIRNAME is the name of the directory under the cache root directory for storage of zone cut cache files
const CACHE_DIRNAME = "zonecut"
// CreateCacheDir creates the cache dir for storage of zone cut cache files.
// It may return an error if the directory cannot be created. If the directory already exists, this function does
// nothing.
func CreateCacheDir(cacheRootDir string) error {
if err := os.MkdirAll(cacheRootDir+string(os.PathSeparator)+CACHE_DIRNAME, 0700); !os.IsExist(err) {
return err
}
return nil
}
// CacheFile represents a cache file storing the result of a zone cut request.
type CacheFile struct {
name string
}
// NewCacheFile initializes a new CacheFile struct, based on the cache root dir and the request topic (domain name
// and accepted exceptions) that is the subject of this cache file.
func NewCacheFile(cacheRootDir string, topic RequestTopic) *CacheFile {
buf := new(bytes.Buffer)
buf.WriteString(cacheRootDir)
buf.WriteRune(os.PathSeparator)
buf.WriteString(CACHE_DIRNAME)
buf.WriteRune(os.PathSeparator)
buf.WriteString("zcf-")
buf.WriteString(strings.ToLower(dns.Fqdn(topic.Domain)))
buf.WriteString("-")
if topic.Exceptions.RFC8020 {
buf.WriteString("1")
} else {
buf.WriteString("0")
}
if topic.Exceptions.AcceptServFailAsNoData {
buf.WriteString("1")
} else {
buf.WriteString("0")
}
fileName := buf.String()
cf := &CacheFile{fileName}
return cf
}
// NewExistingCacheFile initializes a new CacheFile struct and ensures that the corresponding file exists, or else returns an error.
func NewExistingCacheFile(cacheRootDir string, topic RequestTopic) (*CacheFile, error) {
cf := NewCacheFile(cacheRootDir, topic)
fd, err := os.Open(cf.name)
defer fd.Close()
return cf, err
}
/* Result returns the entry or the error that were stored in the cache file. An error may also be returned, if an
incident happens during retrieval/interpretation of the cache file.
entry is the entry that was stored in the cache file.
resultError is the resolution error that was stored in the cache file.
err is the error that may happen during retrieval of the value in the cache file.
*/
func (cf *CacheFile) Result() (entry *Entry, resultError *errors.ErrorStack, err error) {
fd, err := os.Open(cf.name)
if err != nil {
return nil, nil, err
}
defer fd.Close()
buffedFd := bufio.NewReader(fd)
jsonbstr, err := buffedFd.ReadBytes('\x00')
if err != nil && err != io.EOF {
return nil, nil, err
}
res := new(result)
err = json.Unmarshal(jsonbstr, res)
if err != nil {
return nil, nil, err
}
return res.Result, res.Err, nil
}
// SetResult writes in the cache file the provided entry or error. An error is returned if an incident happens or else
// nil is returned.
func (cf *CacheFile) SetResult(entry *Entry, resultErr *errors.ErrorStack) error {
var jsonRepr []byte
var err error
var fd *os.File
if jsonRepr, err = json.Marshal(&result{entry, resultErr}); err != nil {
return err
}
if fd, err = os.OpenFile(cf.name, os.O_WRONLY|os.O_TRUNC|os.O_CREATE, 0600); err != nil {
return err
}
fd.WriteString(string(jsonRepr))
fd.Close()
return nil
}

82
messages/zonecut/entry.go Normal file
View file

@@ -0,0 +1,82 @@
package zonecut
import (
"encoding/json"
"github.com/miekg/dns"
"strings"
)
// serializedEntry is a proxy for Entry, used for JSON serialization
type serializedEntry struct {
Domain string `json:"domain"`
DNSSEC bool `json:"dnssec"`
NameServers []NameSrvInfo `json:"nameservers"`
}
// Entry contains the response to a zonecut request when no error occurred. It contains information about the delegation
// of a zone.
type Entry struct {
// domain is the name that is being delegated (so if we query "d.nic.fr" for the delegation of "ssi.gouv.fr", domain
// contains "ssi.gouv.fr")
domain string
// dnssec is true if there is a DS record in the parent zone for the domain referenced by the domain attribute
dnssec bool
// nameServers contains the list of NameSrvInfo records
nameServers []*NameSrvInfo
}
// NewEntry builds a new entry, and performs some normalization on the input values.
func NewEntry(domain string, DNSSECEnabled bool, nameServers []*NameSrvInfo) *Entry {
e := new(Entry)
e.domain = strings.ToLower(dns.Fqdn(domain))
e.dnssec = DNSSECEnabled
e.nameServers = nameServers
return e
}
// SetDNSSEC allows modification of the DNSSEC status relative to that entry, if need be afterwards
func (e *Entry) SetDNSSEC(val bool) {
e.dnssec = val
}
func (e *Entry) Domain() string {
return e.domain
}
func (e *Entry) DNSSEC() bool {
return e.dnssec
}
func (e *Entry) NameServers() []*NameSrvInfo {
return e.nameServers
}
func (e *Entry) MarshalJSON() ([]byte, error) {
se := new(serializedEntry)
se.Domain = e.domain
se.DNSSEC = e.dnssec
for _, val := range e.nameServers {
se.NameServers = append(se.NameServers, *val)
}
return json.Marshal(se)
}
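// A delegation entry thus serializes as, for instance (illustrative values):
//
//	{"domain":"example.net.","dnssec":true,"nameservers":[{"name":"ns1.example.net.","addrs":["192.0.2.53"]}]}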
func (e *Entry) UnmarshalJSON(bstr []byte) error {
se := new(serializedEntry)
if err := json.Unmarshal(bstr, se); err != nil {
return err
}
e.domain = se.Domain
e.dnssec = se.DNSSEC
for _, srvInfo := range se.NameServers {
val := srvInfo
e.nameServers = append(e.nameServers, &val)
}
return nil
}
func (e *Entry) String() string {
jsonrepr, _ := json.Marshal(e)
return string(jsonrepr)
}

View file

@@ -0,0 +1,50 @@
package zonecut
import (
"net"
"encoding/json"
"strings"
"github.com/miekg/dns"
)
type serializedNameSrvInfo struct {
Name string `json:"name"`
Addrs []net.IP `json:"addrs"`
}
type NameSrvInfo struct {
name string
addrs []net.IP
}
func NewNameSrv(name string, addrs []net.IP) *NameSrvInfo {
n := new(NameSrvInfo)
n.name = strings.ToLower(dns.Fqdn(name))
n.addrs = addrs
return n
}
func (n *NameSrvInfo) Name() string {
return n.name
}
func (n *NameSrvInfo) Addrs() []net.IP {
return n.addrs
}
func (n *NameSrvInfo) MarshalJSON() ([]byte, error) {
sns := new(serializedNameSrvInfo)
sns.Name = n.name
sns.Addrs = n.addrs
return json.Marshal(sns)
}
func (n *NameSrvInfo) UnmarshalJSON(bstr []byte) error {
sns := new(serializedNameSrvInfo)
if err := json.Unmarshal(bstr, sns) ; err != nil {
return err
}
n.name = sns.Name
n.addrs = sns.Addrs
return nil
}

View file

@@ -0,0 +1,74 @@
package zonecut
import (
"github.com/miekg/dns"
"github.com/ANSSI-FR/transdep/tools"
"strings"
"time"
"github.com/ANSSI-FR/transdep/errors"
)
type RequestTopic struct {
// Domain is the topic of the request: the name whose delegation info is sought.
Domain string
// Exceptions is the list of exceptions we are willing to make for this request w.r.t. the DNS standard
Exceptions tools.Exceptions
}
// Request contains the elements of a request for a delegation information
type Request struct {
topic RequestTopic
// ansChan is the channel used to return the result of the request from the worker goroutine to the calling goroutine
ansChan chan *result
}
// NewRequest builds a new request instance
func NewRequest(name string, exceptions tools.Exceptions) *Request {
zcr := new(Request)
zcr.topic.Domain = strings.ToLower(dns.Fqdn(name))
zcr.topic.Exceptions = exceptions
zcr.ansChan = make(chan *result, 1)
return zcr
}
func (zcr *Request) Domain() string {
return zcr.topic.Domain
}
func (zcr *Request) Exceptions() tools.Exceptions {
return zcr.topic.Exceptions
}
func (zcr *Request) RequestTopic() RequestTopic {
return zcr.topic
}
func (zcr *Request) Equal(other *Request) bool {
return zcr.topic == other.topic
}
// Result returns the result of this request. It blocks for the default timeout duration or until an answer is provided.
// An error is returned upon timeout or if an incident occurred during the discovery of the delegation information.
// The returned entry may be nil even when the error is nil, if the request topic is not a zone apex.
func (zcr *Request) Result() (*Entry, *errors.ErrorStack) {
return zcr.ResultWithSpecificTimeout(tools.DEFAULT_TIMEOUT_DURATION)
}
// ResultWithSpecificTimeout is identical to Result except a timeout duration may be specified.
func (zcr *Request) ResultWithSpecificTimeout(dur time.Duration) (*Entry, *errors.ErrorStack) {
select {
case res := <-zcr.ansChan:
return res.Result, res.Err
case _ = <-tools.StartTimeout(dur):
return nil, errors.NewErrorStack(errors.NewTimeoutError("zone cut retrieval", zcr.topic.Domain))
}
}
// SetResult is used to set/pass along the result associated to this request
// This function is meant to be called only once, though the implementation does not currently prevent this.
func (zcr *Request) SetResult(res *Entry, err *errors.ErrorStack) {
if err != nil {
err = err.Copy()
}
zcr.ansChan <- &result{res, err}
}

View file

@@ -0,0 +1,35 @@
package zonecut
import (
"encoding/json"
"github.com/ANSSI-FR/transdep/errors"
)
type serializedResult struct {
Result *Entry `json:"result,omitempty"`
Err *errors.ErrorStack `json:"error,omitempty"`
}
// result is used for serialization of entries/errors for caching purposes as well as for transmission between
// goroutines using channels
type result struct {
Result *Entry
Err *errors.ErrorStack
}
func (r *result) MarshalJSON() ([]byte, error) {
sr := new(serializedResult)
sr.Result = r.Result
sr.Err = r.Err
return json.Marshal(sr)
}
func (r *result) UnmarshalJSON(bstr []byte) error {
sr := new(serializedResult)
if err := json.Unmarshal(bstr, sr) ; err != nil {
return err
}
r.Result = sr.Result
r.Err = sr.Err
return nil
}

181
nameresolver/finder.go Normal file
View file

@@ -0,0 +1,181 @@
package nameresolver
import (
"fmt"
"github.com/hashicorp/golang-lru"
"github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/ANSSI-FR/transdep/messages/zonecut"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/errors"
)
// REQ_CHAN_CAPACITY is the capacity of the channel into which new requests are submitted. This is used as a backoff
// mechanism if the submitter is much faster than the finder.
const REQ_CHAN_CAPACITY = 10
// Finder is a worker pool maintainer for resolution of domain names into aliases or IP addresses.
type Finder struct {
// reqs is the channel used internally to spool new requests to the goroutine that is handling the new requests and
// the worker orchestration.
reqs chan *nameresolver.Request
// closedReqChan is used to prevent double-close issue
closedReqChan bool
// zcHandler is a callback used to submit new requests for zone cut/delegation information for discovery.
zcHandler func(*zonecut.Request) *errors.ErrorStack
// workerPoll stores worker instances indexed by the request topic they are in charge of resolving.
workerPoll *lru.Cache
// joinChan is used for goroutine synchronization so that the owner of a finder instance does not exit before
// this finder is done cleaning up after itself.
joinChan chan bool
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
/* NewFinder instantiates a new Finder.
zcHandler is a function that may be called to submit new requests for zone cut/delegation info discovery.
conf is the configuration of the current Transdep run. It notably provides the maximum number of live workers kept by
this finder (once that number is reached, the least recently used workers are shut down) and the cache root directory,
where shut-down workers store the result that they were distributing, for later use.
*/
func NewFinder(zcHandler func(*zonecut.Request) *errors.ErrorStack, conf *tools.TransdepConfig) *Finder {
nr := new(Finder)
// Preemptively tries to create the cache directory, to prevent usage of the finder if the cache directory cannot be created.
if err := nameresolver.CreateCacheDir(conf.CacheRootDir); err != nil {
return nil
}
nr.config = conf
var err error
nr.workerPoll, err = lru.NewWithEvict(conf.LRUSizes.NameResolverFinder, nr.writeOnDisk)
if err != nil {
return nil
}
nr.zcHandler = zcHandler
nr.reqs = make(chan *nameresolver.Request, REQ_CHAN_CAPACITY)
nr.closedReqChan = false
nr.joinChan = make(chan bool, 1)
nr.start()
return nr
}
// Handle is the method to call to submit new name resolution requests.
// An error might be returned if this finder is already stopping.
func (nr *Finder) Handle(req *nameresolver.Request) *errors.ErrorStack {
if nr.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("name resolver request channel already closed"))
}
// Spool the request for handling by the goroutine started using start()
nr.reqs <- req
return nil
}
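// An illustrative usage sketch (assuming "zcHandler" is the Handle method of a zone cut finder and "conf" a
// populated *tools.TransdepConfig):
//
//	nrFinder := NewFinder(zcHandler, conf)
//	req := nameresolver.NewRequest("www.example.net", tools.Exceptions{})
//	if err := nrFinder.Handle(req); err == nil {
//		entry, errStack := req.Result()
//		// use entry and errStack here
//	}
//	nrFinder.Stop()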
// writeOnDisk is a method to clean up entries from the LRU list. It writes the result from the evicted worker on disk
// as JSON, then shuts the worker down.
func (nr *Finder) writeOnDisk(key, value interface{}) {
wrk := value.(*worker)
// Get an entry of that worker to persist it on disk before shutting it down
topic := key.(nameresolver.RequestTopic)
req := nameresolver.NewRequest(topic.Name, topic.Exceptions)
wrk.handle(req)
nrres, err := req.Result()
if err != nil {
if _, ok := err.OriginalError().(*errors.TimeoutError) ; ok {
return
}
}
// Shutting the worker down
wrk.stop()
// Caching on disk the result obtained from the worker
cf := nameresolver.NewCacheFile(nr.config.CacheRootDir, topic)
errSetRes := cf.SetResult(nrres, err)
if errSetRes != nil {
return
}
}
// loadFromDisk searches on disk for a cache file for the specified request and starts off a new worker to handle that
// request.
// The newly started worker is returned. An error is returned if no cache file is found for that request or if an error
// happened during the initialization of that worker.
func (nr *Finder) loadFromDisk(req *nameresolver.Request) (*worker, error) {
cf, err := nameresolver.NewExistingCacheFile(nr.config.CacheRootDir, req.RequestTopic())
if err != nil {
return nil, err
}
w := newWorkerWithCachedResult(req, nr.Handle, nr.zcHandler, cf, nr.config)
if w == nil {
return nil, fmt.Errorf("unable to create new worker!")
}
return w, nil
}
// spool searches for a live worker that can handle the specified request, or starts a new one for that purpose.
func (nr *Finder) spool(req *nameresolver.Request) {
var wrk *worker
var err error
if val, ok := nr.workerPoll.Get(req.RequestTopic()); ok {
// First, search in the LRU of live workers
wrk = val.(*worker)
} else if wrk, err = nr.loadFromDisk(req); err == nil {
// Then, search if the worker can be started from a cache file
nr.workerPoll.Add(req.RequestTopic(), wrk)
} else {
// Finally, start a new worker to handle that request, if nothing else worked
wrk = newWorker(req, nr.Handle, nr.zcHandler, nr.config)
nr.workerPoll.Add(req.RequestTopic(), wrk)
}
// Spools the request to the worker
wrk.handle(req)
}
// start performs the operation for the finder to be ready to handle new requests.
func (nr *Finder) start() {
// The current start implementation starts off a goroutine which reads from the reqs request channel attribute
go func() {
for req := range nr.reqs {
//Detect dependency loops
if req.DetectLoop() {
req.SetResult(nil, errors.NewErrorStack(fmt.Errorf("Loop detected on %s", req.Name())))
} else {
nr.spool(req)
}
}
// Cleanup, because Stop() was called by the goroutine that owns the finder
for _, key := range nr.workerPoll.Keys() {
wrk, _ := nr.workerPoll.Peek(key)
nr.writeOnDisk(key, wrk)
}
// Signal that the cleanup is over
nr.joinChan <- true
}()
}
// Stop signals that no new requests will be submitted. It triggers some cleanup of the remaining live workers, waits
// for them to finish and then returns true. Subsequent calls to Stop will return false as the finder is already stopped.
func (nr *Finder) Stop() bool {
if nr.closedReqChan {
return false
}
close(nr.reqs)
nr.closedReqChan = true
_ = <-nr.joinChan
close(nr.joinChan)
return true
}

402
nameresolver/worker.go Normal file
View file

@@ -0,0 +1,402 @@
package nameresolver
import (
"fmt"
"github.com/miekg/dns"
"net"
"github.com/ANSSI-FR/transdep/messages/zonecut"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/ANSSI-FR/transdep/errors"
"strings"
)
// WORKER_CHAN_CAPACITY indicates the maximum number of requests not yet handled by the start() goroutine that can be
// spooled before the call to Handle() becomes blocking.
const WORKER_CHAN_CAPACITY = 10
// MAX_CNAME_CHAIN indicates the longest chain of CNAME records that may be followed before a name is considered a
// dead-end (i.e. unfit for name resolution)
const MAX_CNAME_CHAIN = 10
// worker represents a request handler for a specific request target domain name for which name resolution is sought.
type worker struct {
// req is the request topic for which this worker was started in the first place.
req *nameresolver.Request
// reqs is the channel by which subsequent requests for the same topic as for "req" are received.
reqs chan *nameresolver.Request
// closedReqChan helps prevent double-close issue on reqs channel, when the worker is stopping.
closedReqChan bool
// joinChan is used by stop() to wait for the completion of the start() goroutine
joinChan chan bool
// zcHandler is used to submit new zone cut requests. This is most notably used to get the delegation information of
// the parent zone of the requested name, in order to query its name servers for the requested name delegation
// information.
zcHandler func(*zonecut.Request) *errors.ErrorStack
// nrHandler is used to submit new name resolution requests. This is used, for instance, to get the IP addresses
// associated to nameservers that are out-of-bailiwick and for which we don't have acceptable glues or IP addresses.
nrHandler func(*nameresolver.Request) *errors.ErrorStack
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
// initNewWorker builds a new worker instance and returns it.
// It DOES NOT start the new worker, and should not be called directly by the finder.
func initNewWorker(req *nameresolver.Request, nrHandler func(*nameresolver.Request) *errors.ErrorStack, zcHandler func(*zonecut.Request) *errors.ErrorStack, conf *tools.TransdepConfig) *worker {
w := new(worker)
w.req = req
w.zcHandler = zcHandler
w.nrHandler = nrHandler
w.config = conf
w.reqs = make(chan *nameresolver.Request, WORKER_CHAN_CAPACITY)
w.closedReqChan = false
w.joinChan = make(chan bool, 1)
return w
}
// newWorker builds a new worker instance and returns it.
// The worker is started and will resolve the request from the network.
func newWorker(req *nameresolver.Request, nrHandler func(*nameresolver.Request) *errors.ErrorStack, zcHandler func(*zonecut.Request) *errors.ErrorStack, conf *tools.TransdepConfig) *worker {
w := initNewWorker(req, nrHandler, zcHandler, conf)
w.start()
return w
}
// newWorkerWithCachedResult builds a new worker instance and returns it.
// The worker is started and will resolve the request from a cache file.
func newWorkerWithCachedResult(req *nameresolver.Request, nrHandler func(*nameresolver.Request) *errors.ErrorStack, zcHandler func(*zonecut.Request) *errors.ErrorStack, cf *nameresolver.CacheFile, conf *tools.TransdepConfig) *worker {
w := initNewWorker(req, nrHandler, zcHandler, conf)
w.startWithCachedResult(cf)
return w
}
// handle allows the submission of new requests to this worker.
// This method returns an error if the worker is stopped or if the submitted request does not match the request usually
// handled by this worker.
func (w *worker) handle(req *nameresolver.Request) *errors.ErrorStack {
if w.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("handle: worker channel for name resolution of %s is already closed", w.req.Name()))
} else if !w.req.Equal(req) {
return errors.NewErrorStack(fmt.Errorf("handle: invalid request; the submitted request (%s) does not match the requests handled by this worker (%s)", req.Name(), w.req.Name()))
}
w.reqs <- req
return nil
}
// resolveFromWith resolves the topic of the requests associated with this worker by querying the "ip" IP address and
// using the "proto" protocol (either "" for UDP or "tcp"). It returns an entry corresponding to the requested topic, or an
// definitive error that happened during the resolution.
func (w *worker) resolveFromWith(ip net.IP, proto string) (*nameresolver.Entry, *errors.ErrorStack) {
var ipList []net.IP
// We first query about the IPv4 addresses associated to the request topic.
clnt := new(dns.Client)
clnt.Net = proto
ma := new(dns.Msg)
ma.SetEdns0(4096, false)
ma.SetQuestion(w.req.Name(), dns.TypeA)
ma.RecursionDesired = false
ans, _, err := clnt.Exchange(ma, net.JoinHostPort(ip.String(), "53"))
if err != nil {
errStack := errors.NewErrorStack(err)
errStack.Push(fmt.Errorf("resolveFromWith: error while exchanging with %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeA]))
return nil, errStack
}
if ans == nil {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got empty answer from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeA]))
}
if ans.Rcode != dns.RcodeSuccess {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got DNS error %s from %s over %s for %s %s?", dns.RcodeToString[ans.Rcode], ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeA]))
}
if !ans.Authoritative {
// We expect a non-empty answer from the server, with a positive answer (no NXDOMAIN (lame delegation),
// no SERVFAIL (broken server)). We also expect the server to be authoritative; if it is not, it is not clear
// why, because the name is delegated to this server according to the parent zone, so we assume that this server
// is broken, but there might be other reasons for this that I can't think of off the top of my head.
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got non-authoritative data from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeA]))
}
// If the answer is truncated, we might want to retry over TCP... except of course if the truncated answer is
// already provided over TCP (see Spotify blog post about when it happened to them :))
if ans.Truncated {
if proto == "tcp" {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got a truncated answer from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeA]))
}
return w.resolveFromWith(ip, "tcp")
}
for _, grr := range ans.Answer {
// We only consider records from the answer section that have an owner name equal to the qname.
if dns.CompareDomainName(grr.Header().Name, w.req.Name()) == dns.CountLabel(w.req.Name()) && dns.CountLabel(grr.Header().Name) == dns.CountLabel(w.req.Name()){
// We may receive either A or CNAME records with matching owner name. We dismiss all other cases
// (which probably consist of NSEC, DNAME and similar records). NSEC is of no value here, and DNAME records
// are not supported by this tool.
switch rr := grr.(type) {
case *dns.A:
// We stack IPv4 addresses because the RRSet might be composed of multiple A records
ipList = append(ipList, rr.A)
case *dns.CNAME:
// A CNAME is supposed to be the only record at a given domain name. Thus, we return this alias marker
// and forget about all other records that might reside here.
return nameresolver.NewAliasEntry(w.req.Name(), rr.Target), nil
}
}
}
// We now query for the AAAA records to also get the IPv6 addresses
clnt = new(dns.Client)
clnt.Net = proto
maaaa := new(dns.Msg)
maaaa.SetEdns0(4096, false)
maaaa.SetQuestion(w.req.Name(), dns.TypeAAAA)
maaaa.RecursionDesired = false
ans, _, err = clnt.Exchange(maaaa, net.JoinHostPort(ip.String(), "53"))
if err != nil {
errStack := errors.NewErrorStack(err)
errStack.Push(fmt.Errorf("resolveFromWith: error while exchanging with %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeAAAA]))
return nil, errStack
}
if ans == nil {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got empty answer from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeAAAA]))
}
if ans.Rcode != dns.RcodeSuccess {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got DNS error %s from %s over %s for %s %s?", dns.RcodeToString[ans.Rcode], ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeAAAA]))
}
if !ans.Authoritative {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got non-authoritative data from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeAAAA]))
}
if ans.Truncated {
if proto == "tcp" {
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got a truncated answer from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeAAAA]))
}
return w.resolveFromWith(ip, "tcp")
}
for _, grr := range ans.Answer {
if dns.CompareDomainName(grr.Header().Name, w.req.Name()) == dns.CountLabel(w.req.Name()) && dns.CountLabel(grr.Header().Name) == dns.CountLabel(w.req.Name()){
switch rr := grr.(type) {
case *dns.AAAA:
ipList = append(ipList, rr.AAAA)
case *dns.CNAME:
// We should not get a CNAME here, since no CNAME was returned when we asked for A records; if we
// had received a CNAME then, we would already have returned.
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromWith: got a CNAME that was not provided for the A query from %s over %s for %s %s?", ip.String(), errors.PROTO_TO_STR[errors.STR_TO_PROTO[proto]], w.req.Name(), dns.TypeToString[dns.TypeAAAA]))
}
}
}
return nameresolver.NewIPEntry(w.req.Name(), ipList), nil
}
// resolveFrom resolves the request associated to this worker. It returns the entry generated from a successful
// resolution or the error that occurred.
func (w *worker) resolveFrom(ip net.IP) (*nameresolver.Entry, *errors.ErrorStack) {
// (proto == "" means UDP)
return w.resolveFromWith(ip, "")
}
// resolveFromGlues tries to resolve the request associated to this worker using the list of servers provided as
// parameters, assuming they are all glued delegations (i.e. the IP addresses of the nameservers are already known).
func (w *worker) resolveFromGlues(nameSrvs []*zonecut.NameSrvInfo) (*nameresolver.Entry, *errors.ErrorStack) {
var errList []string
for _, ns := range nameSrvs {
for _, ip := range ns.Addrs() {
// Tries every IP address of every name server. If an error occurs, the next IP, then server is tried.
entry, err := w.resolveFrom(ip)
if err == nil {
return entry, nil
}
errList = append(errList, fmt.Sprintf("resolveFromGlues: error from %s(%s): %s", ns.Name(), ip.String(), err.Error()))
}
}
// No IP address of any server returned a positive result.
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromGlues: no valid glued delegation for %s: [%s]", w.req.Name(), strings.Join(errList, ", ")))
}
// resolveFromGluelessNameSrvs resolves the request associated to this worker using name servers whose IP address is not
// known from glue or in-bailiwick address records. It returns the answer to that request or an error if no server
// returned an acceptable response.
func (w *worker) resolveFromGluelessNameSrvs(nameSrvs []*zonecut.NameSrvInfo) (*nameresolver.Entry, *errors.ErrorStack) {
var errList []string
Outerloop:
for _, ns := range nameSrvs {
var addrs []net.IP
// requestedName is the nameserver name, by default. It may evolve, as aliases/CNAME are met along the resolution
requestedName := ns.Name()
// We limit to MAX_CNAME_CHAIN the number of CNAME that we are willing to follow
Innerloop:
for i := 0; i < MAX_CNAME_CHAIN && len(addrs) == 0; i++ {
// Start up the resolution of the name of the nameserver into IP addresses so that we can query these IP
// addresses for the request topic of this worker.
req := nameresolver.NewRequestWithContext(requestedName, w.req.Exceptions(), w.req)
w.nrHandler(req)
ne, err := req.Result()
if err != nil || ne == nil {
// if an error occurred, we just try with the next nameserver until we get an answer or all servers have
// been tried.
continue Outerloop
}
if ne.CNAMETarget() == "" {
// We got some IP addresses ; we store them away and go to the next step
addrs = ne.Addrs()
break Innerloop
}
// If the answer is an alias, we retry with the new target name
requestedName = ne.CNAMETarget()
}
if len(addrs) == 0 {
// We hit a very long CNAME chain or the name cannot be resolved for some reason
continue
}
// Try to query every IP that we found, until we get a valid answer
for _, addr := range addrs {
entry, err := w.resolveFrom(addr)
if err == nil {
return entry, nil
}
errList = append(errList, fmt.Sprintf("resolveFromGluelessNameSrvs: error from %s(%s): %s", ns.Name(), addr.String(), err.Error()))
}
}
// We tried every IP address of every name server to no avail. Return an error
return nil, errors.NewErrorStack(fmt.Errorf("resolveFromGluelessNameSrvs: no valid glueless delegation for %s: [%s]", w.req.Name(), strings.Join(errList, ", ")))
}
// resolve is in charge of orchestrating the resolution of the request that is associated with this worker
func (w *worker) resolve() (*nameresolver.Entry, *errors.ErrorStack) {
// First, we search the list of name servers to which the requested domain name is delegated. This is obtained by
// submitting delegation info requests, removing a label each time, until a non-null response is provided (meaning we
// reached the apex of the zone containing the requested name).
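// For instance (illustrative), for "www.example.co.uk." we would first ask for the delegation of
// "www.example.co.uk." itself, get no entry, strip a label and ask for "example.co.uk.", and stop there if that
// name turns out to be the apex of the enclosing zone.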
var entry *zonecut.Entry
reqName := w.req.Name()
for entry == nil {
var err *errors.ErrorStack
// Get the servers for this zonecut
req := zonecut.NewRequest(reqName, w.req.Exceptions())
w.zcHandler(req)
entry, err = req.Result()
if err != nil {
var returnErr bool
switch typedErr := err.OriginalError().(type) {
case *errors.TimeoutError:
returnErr = true
case *errors.NXDomainError:
returnErr = w.req.Exceptions().RFC8020
case *errors.ServfailError:
returnErr = !w.req.Exceptions().AcceptServFailAsNoData
case *errors.NoNameServerError:
returnErr = false
default:
_ = typedErr
returnErr = true
}
// If we receive an error while searching for the delegation info, we will not be able to perform the
// subsequent queries, so we bail out on this request.
if returnErr {
err.Push(fmt.Errorf("resolve: error while getting zone cut info of %s for %s", reqName, w.req.Name()))
return nil, err
}
err = nil
entry = nil
}
if entry == nil {
// If no entry was provided, reqName is not the zone apex, so we remove a label and retry.
pos, end := dns.NextLabel(reqName, 1)
if end {
reqName = "."
} else {
reqName = reqName[pos:]
}
}
}
// Setting apart glueless delegations and glued delegations
var nameSrvsWithGlues []*zonecut.NameSrvInfo
var gluelessNameSrvs []*zonecut.NameSrvInfo
for _, nameSrv := range entry.NameServers() {
if len(nameSrv.Addrs()) == 0 {
gluelessNameSrvs = append(gluelessNameSrvs, nameSrv)
} else {
nameSrvsWithGlues = append(nameSrvsWithGlues, nameSrv)
}
}
// Try to resolve first using glues to go faster
r, gluedErr := w.resolveFromGlues(nameSrvsWithGlues)
if gluedErr != nil {
if _, ok := gluedErr.OriginalError().(*errors.NXDomainError) ; ok {
gluedErr.Push(fmt.Errorf("resolve: got NXDomain while resolving %s from glued servers", w.req.Name()))
return nil, gluedErr
}
// No glued servers returned an answer, so we now try with the glueless delegations.
var gluelessErr *errors.ErrorStack
r, gluelessErr = w.resolveFromGluelessNameSrvs(gluelessNameSrvs)
if gluelessErr != nil {
gluelessErr.Push(fmt.Errorf("resolve: unable to resolve %s: glued errors: [%s]", w.req.Name(), gluedErr.Error()))
return nil, gluelessErr
}
}
return r, nil
}
// start prepares the worker for handling new requests.
// The current implementation launches a goroutine that reads new requests from the reqs channel attribute and
// tries to answer them. When stopped, it immediately sends the join signal.
func (w *worker) start() {
go func() {
result, err := w.resolve()
for req := range w.reqs {
req.SetResult(result, err)
}
w.joinChan <- true
}()
}
// startWithCachedResult performs the same kind of operations as start(), except that the response is not obtained
// from the network, but by loading it from a cache file.
func (w *worker) startWithCachedResult(cf *nameresolver.CacheFile) {
go func() {
var result *nameresolver.Entry
var resultErr *errors.ErrorStack
var err error
result, resultErr, err = cf.Result()
if err != nil {
result = nil
cacheErr := fmt.Errorf("startWithCachedResult: error while loading cache of %s: %s", w.req.Name(), err.Error())
if resultErr != nil {
resultErr.Push(cacheErr)
} else {
resultErr = errors.NewErrorStack(cacheErr)
}
}
for req := range w.reqs {
req.SetResult(result, resultErr)
}
w.joinChan <- true
}()
}
// stop is to be called during the cleanup of the worker. It shuts down the goroutine started by start() and waits for
// it to actually end. stop returns true if it is the first time it is called and the start() routine was stopped, or
// else it returns false.
func (w *worker) stop() bool {
if w.closedReqChan {
return false
}
close(w.reqs)
w.closedReqChan = true
_ = <-w.joinChan
close(w.joinChan)
return true
}

54
tools/config.go Normal file
View file

@ -0,0 +1,54 @@
package tools
import "fmt"
type AnalysisConditions struct {
All bool
DNSSEC bool
NoV4 bool
NoV6 bool
}
type LRUConfig struct {
DependencyFinder int
ZoneCutFinder int
NameResolverFinder int
}
type FormatOptions struct {
ScriptFriendlyOutput bool
Graph bool
DotOutput bool
}
type TransdepConfig struct {
JobCount int
LRUSizes LRUConfig
CacheRootDir, RootHintsFile, MaboFile string
}
type RequestConfig struct {
AnalysisCond AnalysisConditions
OutputFormat FormatOptions
Exceptions Exceptions
}
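// Exceptions lists the DNS standard violations or shortcuts that a request is willing to accept: RFC8020 makes a
// NXDOMAIN answer on a zone cut request be treated as an empty non-terminal, and AcceptServFailAsNoData makes a
// SERVFAIL answer be treated as the absence of data (see the -rfc8020 and -servfail command-line flags).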
type Exceptions struct {
RFC8020, AcceptServFailAsNoData bool
}
func (tc RequestConfig) Check(fileName string) error {
if tc.OutputFormat.Graph &&
(tc.AnalysisCond.All || tc.AnalysisCond.NoV4 || tc.AnalysisCond.NoV6 || tc.AnalysisCond.DNSSEC || tc.OutputFormat.ScriptFriendlyOutput || tc.OutputFormat.DotOutput) {
return fmt.Errorf("-graph option is supposed to be used alone w.r.t. to other output selection options.")
}
if tc.OutputFormat.DotOutput && (len(fileName) != 0 || tc.AnalysisCond.All || tc.OutputFormat.Graph || tc.OutputFormat.ScriptFriendlyOutput) {
return fmt.Errorf("Cannot use -dot with -file, -all, -graph, or -script")
}
if tc.AnalysisCond.All && (tc.AnalysisCond.DNSSEC || tc.AnalysisCond.NoV6 || tc.AnalysisCond.NoV4) {
return fmt.Errorf("Can't have -all option on at the same time as -break4, -break6 or -dnssec")
}
return nil
}

103
tools/radix/radix.go Normal file
View file

@ -0,0 +1,103 @@
package radix
import (
"net"
"io"
"github.com/hashicorp/go-immutable-radix"
"bufio"
"bytes"
"encoding/csv"
"encoding/binary"
"strconv"
"os"
"fmt"
)
// getIPBitsInBytes converts an IP address into a byte slice whose elements are the individual bits of the address,
// one bit per byte (each element is 0 or 1), in big-endian order.
// This function assumes that, for an IPv4 address stored in a 16-byte net.IP, the IPv4 bytes are the low-order bytes
// (this was undocumented at the time of writing); To4() is used to reduce such addresses to their 4-byte form.
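// For instance (illustrative), 192.0.2.0 is expanded to the 32-element slice
// [1 1 0 0 0 0 0 0  0 0 0 0 0 0 0 0  0 0 0 0 0 0 1 0  0 0 0 0 0 0 0 0].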
func getIPBitsInBytes(ip net.IP) []byte {
var input, ret []byte
input = ip.To4()
if input == nil {
input = ip
}
ptr := 0
for ptr < len(input) {
intRepr := binary.BigEndian.Uint32(input[ptr:ptr+4])
var i uint32 = 1 << 31
var val byte
for i > 0 {
if intRepr & i != 0 {
val = 1
} else {
val = 0
}
ret = append(ret, val)
i >>= 1
}
ptr += 4
}
return ret
}
func buildRadixTree(rd io.Reader) (*iradix.Tree, error) {
t := iradix.New()
txn := t.Txn()
scanner := bufio.NewScanner(rd)
for scanner.Scan() {
buf := new(bytes.Buffer)
buf.WriteString(scanner.Text())
csvrd := csv.NewReader(buf)
csvrd.Comma = ' '
csvrd.FieldsPerRecord = 2
rec, err := csvrd.Read()
if err != nil {
return nil, err
}
asn, err := strconv.Atoi(rec[0])
if err != nil {
return nil, err
}
_, prefix, err := net.ParseCIDR(rec[1])
if err != nil {
return nil, err
}
prefixLen, _ := prefix.Mask.Size()
ipBstr := getIPBitsInBytes(prefix.IP)
txn.Insert(ipBstr[:prefixLen], asn)
}
if err := scanner.Err() ; err != nil {
return nil, err
}
return txn.Commit(), nil
}
func GetASNTree(fn string) (*iradix.Tree, error) {
var fd *os.File
var err error
if fd, err = os.Open(fn) ; err != nil {
return nil, err
}
defer fd.Close()
return buildRadixTree(fd)
}
func GetASNFor(t *iradix.Tree, ip net.IP) (int, error) {
if t == nil {
return 0, fmt.Errorf("tree is uninitialized")
}
var val interface{}
var ok bool
ipBstr := getIPBitsInBytes(ip)
if _, val, ok = t.Root().LongestPrefix(ipBstr) ; !ok {
return 0, fmt.Errorf("Cannot find ASN for %s", ip.String())
}
asn := val.(int)
return asn, nil
}
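// asnLookupSketch is a minimal illustrative sketch (not used by the rest of the tool) of the expected input format
// and of the lookup flow: one "ASN prefix" pair per line, space-separated, as hinted by the -mabo flag description.
func asnLookupSketch() {
// Hypothetical mapping covering one IPv4 prefix and one IPv6 prefix
mapping := bytes.NewBufferString("64496 192.0.2.0/24\n64497 2001:db8::/32\n")
t, err := buildRadixTree(mapping)
if err != nil {
fmt.Println(err)
return
}
// Looking up an address that falls within the first prefix returns the matching ASN
if asn, err := GetASNFor(t, net.ParseIP("192.0.2.53")); err == nil {
fmt.Println(asn) // 64496
}
}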

20
tools/timeout.go Normal file
View file

@ -0,0 +1,20 @@
package tools
import (
"time"
)
const DEFAULT_TIMEOUT_DURATION = 20 * time.Second
// Timeout waits for the "dur" delay to expire, and then writes into "c"
func Timeout(dur time.Duration, c chan<- bool) {
time.Sleep(dur)
c <- true
}
// StartTimeout initiates a goroutine that will write a boolean into the returned channel after the "dur" delay has expired.
func StartTimeout(dur time.Duration) <-chan bool {
c := make(chan bool, 1)
go Timeout(dur, c)
return c
}
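// timeoutUsageSketch is a minimal illustrative sketch (not used by the rest of the tool) of how the returned channel
// is meant to be consumed: race it against a result channel in a select statement.
func timeoutUsageSketch(resultChan <-chan bool) bool {
select {
case <-StartTimeout(DEFAULT_TIMEOUT_DURATION):
// The delay expired before any result was available
return false
case res := <-resultChan:
return res
}
}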

396
transdep.go Normal file
View file

@ -0,0 +1,396 @@
package main
import (
"bufio"
"encoding/json"
"flag"
"fmt"
"io"
"os"
"github.com/ANSSI-FR/transdep/dependency"
dep_msg "github.com/ANSSI-FR/transdep/messages/dependency"
"github.com/ANSSI-FR/transdep/graph"
"strings"
"github.com/hashicorp/go-immutable-radix"
"github.com/ANSSI-FR/transdep/tools/radix"
"github.com/ANSSI-FR/transdep/tools"
)
func displayDomain(prefix string, res *graph.WorkerAnalysisResult, conf *tools.RequestConfig) {
if res.Err != nil {
if conf.OutputFormat.ScriptFriendlyOutput {
fmt.Printf("%s%s\n", prefix, "-ERROR-")
} else {
fmt.Printf("%s%s\n", prefix, res.Err)
}
} else {
for _, elmt := range res.Nodes {
switch e := elmt.(type) {
case graph.CriticalName:
fmt.Printf("%sName:%s\n", prefix, e.Name)
case graph.CriticalAlias:
fmt.Printf("%sAlias:%s->%s\n", prefix, e.Source, e.Target)
case graph.CriticalIP:
fmt.Printf("%sIP:%s\n", prefix, e.IP.String())
case graph.CriticalASN:
fmt.Printf("%sASN:%d\n", prefix, e.ASN)
case graph.CriticalPrefix:
if e.Prefix.To4() != nil {
fmt.Printf("%sPrefix:%s/24\n", prefix, e.Prefix.String())
} else {
fmt.Printf("%sPrefix:%s/48\n", prefix, e.Prefix.String())
}
case *graph.Cycle:
fmt.Printf("%sCycle\n", prefix)
default:
panic("BUG: missing case")
}
}
}
}
type WorkerResult struct {
dn string
stringRepr string
allNames *graph.WorkerAnalysisResult
dnssec *graph.WorkerAnalysisResult
allNamesNo4 *graph.WorkerAnalysisResult
dnssecNo4 *graph.WorkerAnalysisResult
allNamesNo6 *graph.WorkerAnalysisResult
dnssecNo6 *graph.WorkerAnalysisResult
err error
}
func performBackgroundAnalysis(name string, g *graph.RelationshipNode, ansChan chan<- *WorkerResult, analysisDoneChan chan<- bool, requestConf *tools.RequestConfig, tree *iradix.Tree) {
allNamesResult, allNamesNo4Result, allNamesNo6Result, dnssecResult, dnssecNo4Result, dnssecNo6Result := graph.PerformAnalyseOnResult(g, requestConf, tree)
ansChan <- &WorkerResult{
name, "",
allNamesResult,
dnssecResult,
allNamesNo4Result,
dnssecNo4Result,
allNamesNo6Result,
dnssecNo6Result,
nil,
}
analysisDoneChan <- true
}
func spoolDependencyRequest(wc <-chan *dep_msg.Request, ansChan chan<- *WorkerResult, df *dependency.Finder, reqConf *tools.RequestConfig, transdepConf *tools.TransdepConfig, tree *iradix.Tree) {
currentlyAnalyzedCounter := 0
analysisDoneChan := make(chan bool, transdepConf.JobCount)
inputClosed := false
for !inputClosed || currentlyAnalyzedCounter != 0 {
select {
case _ = <- analysisDoneChan:
currentlyAnalyzedCounter--
case req, opened := <-wc:
inputClosed = !opened
if req != nil && opened {
if err := df.Handle(req) ; err != nil {
ansChan <- &WorkerResult{
req.Name(), "",nil, nil,
nil, nil, nil, nil,
err,
}
}
res, err := req.Result()
if err != nil {
ansChan <- &WorkerResult{
req.Name(), "", nil, nil,
nil, nil, nil, nil,
err,
}
} else {
relNode, ok := res.(*graph.RelationshipNode)
if !ok {
ansChan <- &WorkerResult{
req.Name(), "", nil, nil,
nil, nil, nil, nil,
fmt.Errorf("returned node is not a RelationshipNode instance"),
}
}
if reqConf.OutputFormat.Graph {
jsonbstr, err := json.Marshal(relNode.SimplifyGraph())
ansChan <- &WorkerResult{
req.Name(), string(jsonbstr), nil, nil,
nil, nil, nil, nil,
err,
}
} else if reqConf.OutputFormat.DotOutput {
ansChanForDot := make(chan *WorkerResult, 1)
analysisDoneChanForDot := make(chan bool, 1)
go performBackgroundAnalysis(req.Name(), relNode, ansChanForDot, analysisDoneChanForDot, reqConf, tree)
<- analysisDoneChanForDot
analysisResult := <- ansChanForDot
var criticalNodes []graph.CriticalNode
if reqConf.AnalysisCond.DNSSEC {
if reqConf.AnalysisCond.NoV4 {
criticalNodes = analysisResult.dnssecNo4.Nodes
} else if reqConf.AnalysisCond.NoV6 {
criticalNodes = analysisResult.dnssecNo6.Nodes
} else {
criticalNodes = analysisResult.dnssec.Nodes
}
} else if reqConf.AnalysisCond.NoV4 {
criticalNodes = analysisResult.allNamesNo4.Nodes
} else if reqConf.AnalysisCond.NoV6 {
criticalNodes = analysisResult.allNamesNo6.Nodes
} else {
criticalNodes = analysisResult.allNames.Nodes
}
g, _ := graph.DrawGraph(relNode.SimplifyGraph(), criticalNodes)
g.SetStrict(true)
ansChan <- &WorkerResult{
req.Name(), g.String(), nil, nil,
nil, nil, nil, nil,
err,
}
} else {
go performBackgroundAnalysis(req.Name(), relNode, ansChan, analysisDoneChan, reqConf, tree)
currentlyAnalyzedCounter++
}
}
}
}
}
ansChan <- nil
}
func handleWorkerResponse(res *WorkerResult, reqConf *tools.RequestConfig) bool {
if res == nil {
return true
}
if res.err != nil {
if reqConf.OutputFormat.ScriptFriendlyOutput {
fmt.Printf("Error:%s:%s\n", res.dn, "-FAILURE-")
} else {
fmt.Printf("Error:%s:%s\n", res.dn, fmt.Sprintf("Error while resolving this name: %s", res.err))
}
} else if reqConf.OutputFormat.Graph {
fmt.Printf("%s:%s\n", res.dn, res.stringRepr)
} else if reqConf.OutputFormat.DotOutput {
fmt.Println(res.stringRepr)
} else {
if reqConf.AnalysisCond.All {
displayDomain(fmt.Sprintf("AllNames:%s:", res.dn), res.allNames, reqConf)
displayDomain(fmt.Sprintf("DNSSEC:%s:", res.dn), res.dnssec, reqConf)
displayDomain(fmt.Sprintf("AllNamesNo4:%s:", res.dn), res.allNamesNo4, reqConf)
displayDomain(fmt.Sprintf("DNSSECNo4:%s:", res.dn), res.dnssecNo4, reqConf)
displayDomain(fmt.Sprintf("AllNamesNo6:%s:", res.dn), res.allNamesNo6, reqConf)
displayDomain(fmt.Sprintf("DNSSECNo6:%s:", res.dn), res.dnssecNo6, reqConf)
} else if reqConf.AnalysisCond.DNSSEC {
if reqConf.AnalysisCond.NoV4 {
displayDomain(fmt.Sprintf("%s:", res.dn), res.dnssecNo4, reqConf)
} else if reqConf.AnalysisCond.NoV6 {
displayDomain(fmt.Sprintf("%s:", res.dn), res.dnssecNo6, reqConf)
} else {
displayDomain(fmt.Sprintf("%s:", res.dn), res.dnssec, reqConf)
}
} else {
if reqConf.AnalysisCond.NoV4 {
displayDomain(fmt.Sprintf("%s:", res.dn), res.allNamesNo4, reqConf)
} else if reqConf.AnalysisCond.NoV6 {
displayDomain(fmt.Sprintf("%s:", res.dn), res.allNamesNo6, reqConf)
} else {
displayDomain(fmt.Sprintf("%s:", res.dn), res.allNames, reqConf)
}
}
}
return false
}
func createDomainNameStreamer(fileName string, c chan<- string) {
fd, err := os.Open(fileName)
if err != nil {
panic("Unable to open file for read access")
}
reader := bufio.NewReader(fd)
err = nil
for err == nil {
var line string
line, err = reader.ReadString('\n')
if err != nil {
if err != io.EOF {
panic("Error while reading file")
}
}
c <- strings.TrimRight(line, "\n")
}
close(c)
}
func analyseDomains(domainNameChan <-chan string, reqConf *tools.RequestConfig, transdepConf *tools.TransdepConfig, df *dependency.Finder, tree *iradix.Tree) {
// Start workers
wc := make(chan *dep_msg.Request)
ansChan := make(chan *WorkerResult, 1)
for i := 0; i < transdepConf.JobCount; i++ {
go spoolDependencyRequest(wc, ansChan, df, reqConf, transdepConf, tree)
}
// Prepare for reading input file
deadWorker := 0
sent := true
var req *dep_msg.Request
// Loop until all lines are read and a corresponding request has been spooled
Outerloop:
for {
opened := true
// This loop iterates not only when a new request is spooled, but also when a response is received. Thus,
// we need this "sent" flag to know whether we should keep trying to push the current request or read a new line
if sent {
sent = false
targetDn := ""
for targetDn == "" {
// Read a domain name
targetDn, opened = <-domainNameChan
if targetDn == "" && !opened {
close(wc)
break Outerloop
}
}
// Build the dependency request
req = dep_msg.NewRequest(targetDn, true, false, reqConf.Exceptions)
}
select {
case wc <- req:
if !opened {
close(wc)
break Outerloop
}
sent = true
case res := <-ansChan:
if handleWorkerResponse(res, reqConf) {
deadWorker++
}
}
}
for deadWorker < transdepConf.JobCount {
res := <-ansChan
if handleWorkerResponse(res, reqConf) {
deadWorker++
}
}
close(ansChan)
}
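// analyseFromFile loads a previously saved dependency graph and runs the analysis on it. The expected input is a
// single record as produced by the -graph output mode: the target domain name, a colon, then the JSON serialization
// of the simplified graph, e.g. (illustrative) "example.net.:{...}".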
func analyseFromFile(loadFile string, requestConf *tools.RequestConfig, tree *iradix.Tree) {
fd, err := os.Open(loadFile)
if err != nil {
fmt.Println(err)
return
}
bufrd := bufio.NewReader(fd)
targetDn, err := bufrd.ReadString(':')
if err != nil {
fmt.Println(err)
return
}
targetDn = strings.TrimRight(targetDn, ":")
jsonbstr, err := bufrd.ReadBytes('\x00')
if err != nil && err != io.EOF {
fmt.Println(err)
return
}
g := new(graph.RelationshipNode)
err = json.Unmarshal(jsonbstr, g)
if err != nil {
fmt.Println(err)
return
}
ansChan := make(chan *WorkerResult, 1)
analysisDoneChan := make(chan bool, 1)
go performBackgroundAnalysis(targetDn, g, ansChan, analysisDoneChan, requestConf, tree)
<-analysisDoneChan
wr := <-ansChan
handleWorkerResponse(wr, requestConf)
}
func analyseFromDomainList(domChan <-chan string, reqConf *tools.RequestConfig, transdepConf *tools.TransdepConfig, tree *iradix.Tree) {
df := dependency.NewFinder(transdepConf, tree)
defer df.Stop()
analyseDomains(domChan, reqConf, transdepConf, df, tree)
}
func buildDomainListChan(targetDn, fileName string) <-chan string {
domChan := make(chan string)
if len(targetDn) != 0 {
go func() {
domChan <- targetDn
close(domChan)
}()
} else {
go createDomainNameStreamer(fileName, domChan)
}
return domChan
}
func main() {
var targetDn, fileName, loadFile string
var transdepConf tools.TransdepConfig
var requestConf tools.RequestConfig
tmpdir := os.Getenv("TMPDIR")
if tmpdir == "" {
tmpdir = "/tmp"
}
flag.StringVar(&targetDn, "domain", "", "Indicates the domain name to analyze")
flag.StringVar(&fileName, "file", "", "Indicates the file containing domain to analyze, one per line")
flag.StringVar(&loadFile, "load", "", "Indicates the file containing a dependency graph in JSON format")
flag.IntVar(&transdepConf.JobCount, "jobs", 5, "Indicates the maximum number of concurrent workers")
flag.BoolVar(&requestConf.AnalysisCond.All, "all", false, "Indicates that IPv4 are not available")
flag.BoolVar(&requestConf.AnalysisCond.NoV4, "break4", false, "Indicates that IPv4 are not available")
flag.BoolVar(&requestConf.AnalysisCond.NoV6, "break6", false, "Indicates that IPv6 are not available")
flag.BoolVar(&requestConf.AnalysisCond.DNSSEC, "dnssec", false, "Indicates that only DNSSEC-protected domains can break")
flag.BoolVar(&requestConf.OutputFormat.ScriptFriendlyOutput, "script", false, "On error, just write \"-ERROR-\"")
flag.BoolVar(&requestConf.OutputFormat.Graph, "graph", false, "Indicates whether to just print the graph")
flag.BoolVar(&requestConf.OutputFormat.DotOutput, "dot", false, "Indicates whether to just print the graphviz dot file representation")
flag.IntVar(&transdepConf.LRUSizes.DependencyFinder, "dflrusize", 2000, "Indicates the maximum number of concurrent Dependency Finder workers")
flag.IntVar(&transdepConf.LRUSizes.ZoneCutFinder, "zcflrusize", 10000, "Indicates the maximum number of concurrent Zone Cut Finder workers")
flag.IntVar(&transdepConf.LRUSizes.NameResolverFinder, "nrlrusize", 10000, "Indicates the maximum number of concurrent Name Resolver workers")
flag.StringVar(&transdepConf.CacheRootDir, "cachedir", tmpdir, "Specifies the cache directory")
flag.StringVar(&transdepConf.RootHintsFile, "hints", "", "An updated DNS root hint file. If left unspecified, some hardcoded values will be used.")
flag.StringVar(&transdepConf.MaboFile, "mabo", "", "Indicates the name of a file containing the output of the Mabo tool when used with the prefix option")
flag.BoolVar(&requestConf.Exceptions.RFC8020, "rfc8020", false, "If set, a RCODE=3 on a zonecut request will be considered as an ENT.")
flag.BoolVar(&requestConf.Exceptions.AcceptServFailAsNoData, "servfail", false, "Consider a SERVFAIL error as an ENT (for servers that can't answer to anything else than A and AAAA)")
flag.Parse()
if len(targetDn) == 0 && len(fileName) == 0 && len(loadFile) == 0 {
panic("Either domain parameter, load parameter or file parameter must be specified.")
}
if err := requestConf.Check(fileName) ; err != nil {
panic(err.Error())
}
var tree *iradix.Tree
var err error
if len(transdepConf.MaboFile) != 0 {
tree, err = radix.GetASNTree(transdepConf.MaboFile)
if err != nil {
panic(err)
}
}
if len(loadFile) != 0 {
analyseFromFile(loadFile, &requestConf, tree)
} else {
analyseFromDomainList(buildDomainListChan(targetDn, fileName), &requestConf, &transdepConf, tree)
}
}

303
webserver.go Normal file
View file

@ -0,0 +1,303 @@
package main
import (
"bytes"
"crypto/rand"
"encoding/hex"
"encoding/json"
"flag"
"fmt"
"log"
"net/http"
"net/url"
"os"
"github.com/ANSSI-FR/transdep/dependency"
"github.com/ANSSI-FR/transdep/graph"
dependency2 "github.com/ANSSI-FR/transdep/messages/dependency"
"github.com/ANSSI-FR/transdep/tools"
"strconv"
"time"
)
// handleRequest is common to all request handlers. The difference lies in the reqConf parameter whose value varies
// depending on the request handler.
func handleRequest(
params url.Values, reqConf *tools.RequestConfig, reqChan chan<- *dependency2.Request,
w http.ResponseWriter, req *http.Request,
) {
// Get requested domain
domain, ok := params["domain"]
if !ok || len(domain) != 1 {
w.WriteHeader(http.StatusBadRequest)
return
}
// Submit the request
depReq := dependency2.NewRequest(domain[0], true, false, reqConf.Exceptions)
select {
case <-tools.StartTimeout(20 * time.Second):
w.WriteHeader(http.StatusRequestTimeout)
return
case reqChan <- depReq:
res, err := depReq.Result()
if err != nil {
bstr, err := json.Marshal(err)
if err != nil {
w.WriteHeader(http.StatusInternalServerError)
return
}
w.Header().Add("content-type", "application/json+error")
w.WriteHeader(http.StatusOK)
w.Write(bstr)
return
}
rootNode, ok := res.(*graph.RelationshipNode)
if !ok {
w.WriteHeader(http.StatusInternalServerError)
return
}
var queryResult *graph.WorkerAnalysisResult
allNamesResult, allNamesNo4Result, allNamesNo6Result, dnssecResult, dnssecNo4Result, dnssecNo6Result :=
graph.PerformAnalyseOnResult(rootNode, reqConf, nil)
if !reqConf.AnalysisCond.DNSSEC {
if reqConf.AnalysisCond.NoV4 {
queryResult = allNamesNo4Result
} else if reqConf.AnalysisCond.NoV6 {
queryResult = allNamesNo6Result
} else {
queryResult = allNamesResult
}
} else {
if reqConf.AnalysisCond.NoV4 {
queryResult = dnssecNo4Result
} else if reqConf.AnalysisCond.NoV6 {
queryResult = dnssecNo6Result
} else {
queryResult = dnssecResult
}
}
if queryResult.Err != nil {
bstr, jsonErr := json.Marshal(queryResult.Err)
if jsonErr != nil {
w.WriteHeader(http.StatusInternalServerError)
return
}
w.Header().Add("content-type", "application/json+error")
w.WriteHeader(http.StatusOK)
w.Write(bstr)
return
}
bstr, jsonErr := json.Marshal(queryResult.Nodes)
if jsonErr != nil {
w.WriteHeader(http.StatusInternalServerError)
return
}
w.Header().Add("content-type", "application/json+nodes")
w.WriteHeader(http.StatusOK)
w.Write(bstr)
return
}
}
// getRequestConf uses the parameters from the query string to define the request configuration, notably concerning
// which DNS violations are deemed acceptable
func getRequestConf(params url.Values) *tools.RequestConfig {
var RFC8020, AcceptServfail bool
paramRFC8020, ok := params["rfc8020"]
if !ok || len(paramRFC8020) != 1 {
RFC8020 = false
} else if i, err := strconv.ParseInt(paramRFC8020[0], 10, 0); err != nil {
RFC8020 = false
} else {
RFC8020 = i != 0
}
paramAcceptServfail, ok := params["servfail"]
if !ok || len(paramAcceptServfail) != 1 {
AcceptServfail = false
} else if i, err := strconv.ParseInt(paramAcceptServfail[0], 10, 0); err != nil {
AcceptServfail = false
} else {
AcceptServfail = i != 0
}
// Prepare request-specific configuration based on rfc8020 and servfail query string parameters presence and value
reqConf := &tools.RequestConfig{
AnalysisCond: tools.AnalysisConditions{
All: false,
DNSSEC: false,
NoV4: false,
NoV6: false,
},
Exceptions: tools.Exceptions{
RFC8020: RFC8020,
AcceptServFailAsNoData: AcceptServfail,
},
}
return reqConf
}
func handleAllNamesRequests(reqChan chan<- *dependency2.Request, w http.ResponseWriter, req *http.Request) {
params, err := url.ParseQuery(req.URL.RawQuery)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
// Prepare request-specific configuration
reqConf := getRequestConf(params)
handleRequest(params, reqConf, reqChan, w, req)
}
func handleDNSSECRequests(reqChan chan<- *dependency2.Request, w http.ResponseWriter, req *http.Request) {
params, err := url.ParseQuery(req.URL.RawQuery)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
// Prepare request-specific configuration
reqConf := getRequestConf(params)
reqConf.AnalysisCond.DNSSEC = true
handleRequest(params, reqConf, reqChan, w, req)
}
func handleNo4Requests(reqChan chan<- *dependency2.Request, w http.ResponseWriter, req *http.Request) {
params, err := url.ParseQuery(req.URL.RawQuery)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
// Prepare request-specific configuration
reqConf := getRequestConf(params)
reqConf.AnalysisCond.NoV4 = true
handleRequest(params, reqConf, reqChan, w, req)
}
func handleNo6Requests(reqChan chan<- *dependency2.Request, w http.ResponseWriter, req *http.Request) {
params, err := url.ParseQuery(req.URL.RawQuery)
if err != nil {
w.WriteHeader(http.StatusBadRequest)
return
}
// Prepare request-specific configuration
reqConf := getRequestConf(params)
reqConf.AnalysisCond.NoV6 = true
handleRequest(params, reqConf, reqChan, w, req)
}
func stopFinder(df *dependency.Finder, reqChan chan<- *dependency2.Request, secret string, w http.ResponseWriter, req *http.Request) {
params, err := url.ParseQuery(req.URL.RawQuery)
if err == nil {
if secretParam, ok := params["secret"]; ok && len(secretParam) == 1 && secretParam[0] == secret {
// Secret is correct, initiating graceful stop
go func() {
// Wait for 1 second, just to give the time to send the web confirmation page
time.Sleep(1 * time.Second)
fmt.Printf("Stopping the finder: ")
// Will print dots until os.Exit kills the process
go func() {
for {
fmt.Printf(".")
time.Sleep(100 * time.Millisecond)
}
}()
close(reqChan)
// Perform a graceful stop of the dependency finder, which will flush caches on disk
df.Stop()
fmt.Printf("OK\n")
os.Exit(0)
}()
// Returns a webpage confirming shutdown
w.WriteHeader(http.StatusOK)
buf := new(bytes.Buffer)
buf.WriteString("Stopping.")
w.Write(buf.Bytes())
return
}
}
// Reject all requests that are missing the secret parameter or whose secret value is different from the "secret"
// function parameter.
w.WriteHeader(http.StatusForbidden)
}
// runWebWorker is run as a goroutine that handles dependency requests received from the web handlers.
func runWebWorker(df *dependency.Finder, reqChan <-chan *dependency2.Request) {
for req := range reqChan {
df.Handle(req)
}
}
func main() {
var transdepConf tools.TransdepConfig
var ip string
var port int
secret := make([]byte, 16)
var secretString string
tmpdir := os.Getenv("TMPDIR")
if tmpdir == "" {
tmpdir = "/tmp"
}
flag.IntVar(&transdepConf.JobCount, "jobs", 5, "Indicates the maximum number of concurrent workers")
flag.IntVar(&transdepConf.LRUSizes.DependencyFinder, "dflrusize", 2000, "Indicates the maximum number of concurrent Dependency Finder workers")
flag.IntVar(&transdepConf.LRUSizes.ZoneCutFinder, "zcflrusize", 10000, "Indicates the maximum number of concurrent Zone Cut Finder workers")
flag.IntVar(&transdepConf.LRUSizes.NameResolverFinder, "nrlrusize", 10000, "Indicates the maximum number of concurrent Name Resolver workers")
flag.StringVar(&transdepConf.CacheRootDir, "cachedir", tmpdir, "Specifies the cache directory")
flag.StringVar(&transdepConf.RootHintsFile, "hints", "", "An updated DNS root hint file. If left unspecified, some hardcoded values will be used.")
flag.StringVar(&ip, "bind", "127.0.0.1", "IP address to which the HTTP server will bind and listen")
flag.IntVar(&port, "port", 5000, "Port on which the HTTP server will bind and listen")
flag.Parse()
// A single dependency finder is shared between all web clients. This allows for cache sharing.
df := dependency.NewFinder(&transdepConf, nil)
reqChan := make(chan *dependency2.Request)
for i := 0; i < transdepConf.JobCount; i++ {
go runWebWorker(df, reqChan)
}
// A secret is generated at random. The point of this secret is to be an authentication token allowing a graceful
// shutdown
rand.Read(secret[:])
secretString = hex.EncodeToString(secret)
// The URL to call to perform a graceful shutdown is printed on stdout
fmt.Printf("To stop the server, send a query to http://%s:%d/stop?secret=%s\n", ip, port, secretString)
// handles all requests where we want a list of all SPOF, even domain names that are not protected by DNSSEC
http.HandleFunc("/allnames", func(w http.ResponseWriter, req *http.Request) {
handleAllNamesRequests(reqChan, w, req)
})
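// Example query against the endpoint registered above (illustrative; the address and port depend on the -bind and
// -port flags):
//   curl 'http://127.0.0.1:5000/allnames?domain=example.net&rfc8020=1&servfail=0'
// The response body is either a JSON list of critical nodes or a JSON-encoded error stack.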
// handles all requests where we want a list of all SPOF (domains are considered SPOF candidates only if they are DNSSEC-protected)
http.HandleFunc("/dnssec", func(w http.ResponseWriter, req *http.Request) {
handleDNSSECRequests(reqChan, w, req)
})
// handles all requests where we want a list of all SPOF when IPv4 addresses are unreachable
http.HandleFunc("/break4", func(w http.ResponseWriter, req *http.Request) {
handleNo4Requests(reqChan, w, req)
})
// handles all requests where we want a list of all SPOF when IPv6 addresses are unreachable
http.HandleFunc("/break6", func(w http.ResponseWriter, req *http.Request) {
handleNo6Requests(reqChan, w, req)
})
// handles requests to gracefully stop this webservice
http.HandleFunc("/stop", func(w http.ResponseWriter, req *http.Request) {
stopFinder(df, reqChan, secretString, w, req)
})
// start web server and log fatal error that may arise during execution
log.Fatal(http.ListenAndServe(fmt.Sprintf("%s:%d", ip, port), nil))
}

181
zonecut/finder.go Normal file
View file

@ -0,0 +1,181 @@
package zonecut
import (
"fmt"
"github.com/hashicorp/golang-lru"
"github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/ANSSI-FR/transdep/messages/zonecut"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/errors"
errors2 "errors"
)
// REQ_CHAN_CAPACITY is the capacity of the channel into which new requests are submitted. This is used as a backoff
// mechanism if the submitter is much faster than the finder.
const REQ_CHAN_CAPACITY = 10
// Finder is a worker pool maintainer for the retrieval of zone cuts/delegation information of requested zones.
type Finder struct {
// reqs is the channel used by the goroutine started by this finder to handle new requests submitted by the calling
// goroutine owning the finder instance.
reqs chan *zonecut.Request
// closedReqChan is used to prevent double-close issue on the reqs channel
closedReqChan bool
// workerPool stores references to live workers, indexed by the name of the zone for which the worker returns the
// delegation information.
workerPool *lru.Cache
// cacheRootDir is the root directory for caching
cacheRootDir string
// joinChan is used for goroutine synchronization so that the owner of a finder instance does not exit before
// this finder is done cleaning up after itself.
joinChan chan bool
// nrHandler is the function to call to submit new name resolution requests.
nrHandler func(*nameresolver.Request) *errors.ErrorStack
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
/* NewFinder builds a new Finder instance and starts the associated goroutine for request handling.
nrHandler is the function to call to submit new name resolution requests.
conf carries the configuration of the current Transdep run: conf.LRUSizes.ZoneCutFinder is the maximum number of
simultaneously live zone cut workers (once this number is reached, the least recently used worker is flushed to disk
and shut down before a new worker is started to handle the new request), conf.CacheRootDir is the root directory for
caching, and conf.RootHintsFile is the name of the file from which the root hints should be loaded.
*/
func NewFinder(nrHandler func(*nameresolver.Request) *errors.ErrorStack, conf *tools.TransdepConfig) *Finder {
z := new(Finder)
z.nrHandler = nrHandler
// Preemptively tries to create the cache directory, to prevent usage of the finder if the cache directory cannot be created.
if err := zonecut.CreateCacheDir(conf.CacheRootDir); err != nil {
return nil
}
z.cacheRootDir = conf.CacheRootDir
z.reqs = make(chan *zonecut.Request, REQ_CHAN_CAPACITY)
z.closedReqChan = false
z.joinChan = make(chan bool, 1)
var err error
z.workerPool, err = lru.NewWithEvict(conf.LRUSizes.ZoneCutFinder, z.writeOnDisk)
if err != nil {
return nil
}
z.config = conf
z.start()
return z
}
// Handle is the method to call to submit new zone cut/delegation information discovery requests.
// An error might be returned if this finder is already stopping.
func (z *Finder) Handle(req *zonecut.Request) *errors.ErrorStack {
if z.closedReqChan {
return errors.NewErrorStack(errors2.New("request channel for zone cut finding is already closed"))
}
z.reqs <- req
return nil
}
// writeOnDisk is a method to clean up entries from the LRU list. It writes the result from the evicted worker on disk
// as JSON, then shuts the worker down.
func (z *Finder) writeOnDisk(key, value interface{}) {
wrk := value.(*worker)
// Get an entry of that worker to persist it on disk before shutting it down
topic := key.(zonecut.RequestTopic)
req := zonecut.NewRequest(topic.Domain, topic.Exceptions)
wrk.handle(req)
entry, err := req.Result()
if err != nil {
if _, ok := err.OriginalError().(*errors.TimeoutError) ; ok {
return
}
}
wrk.stop()
cf := zonecut.NewCacheFile(z.cacheRootDir, topic)
errSetRes := cf.SetResult(entry, err)
if errSetRes != nil {
return
}
}
// loadFromDisk searches on disk for a cache file for the specified request and starts off a new worker to handle that
// request.
// The newly started worker is returned. An error is returned if no cache file is found for that request or if an error
// happened during the initialization of that worker.
func (z *Finder) loadFromDisk(req *zonecut.Request) (*worker, error) {
cf, err := zonecut.NewExistingCacheFile(z.cacheRootDir, req.RequestTopic())
if err != nil {
return nil, err
}
w := newWorkerFromCachedResult(req, z.Handle, z.nrHandler, cf, z.config)
if w == nil {
return nil, fmt.Errorf("unable to create new worker!")
}
return w, nil
}
// spool searches for a live worker that can handle the specified request, or starts a new one for that purpose.
func (z *Finder) spool(req *zonecut.Request) {
var wrk *worker
var err error
if val, ok := z.workerPool.Get(req.RequestTopic()); ok {
// First, search in the LRU of live workers
wrk = val.(*worker)
} else if wrk, err = z.loadFromDisk(req); err == nil {
// Then, search if the worker can be started from a cache file
z.workerPool.Add(req.RequestTopic(), wrk)
} else {
// Finally, start a new worker to handle that request, if nothing else worked
if req.Domain() == "." {
// Immediately starts the worker for the root zone, which is a special case
wrk = newRootZoneWorker(req.Exceptions(), z.config)
} else {
wrk = newWorker(req, z.Handle, z.nrHandler, z.config)
}
z.workerPool.Add(req.RequestTopic(), wrk)
}
// Spools the request to the worker
wrk.handle(req)
}
// start performs the operations needed for the finder to be ready to handle new requests.
func (z *Finder) start() {
// The current start implementation starts off a goroutine which reads from the reqs request channel attribute
go func() {
for req := range z.reqs {
z.spool(req)
}
// Cleanup workers, because Stop() was called by the goroutine that owns the finder
for _, key := range z.workerPool.Keys() {
wrk, _ := z.workerPool.Peek(key)
z.writeOnDisk(key, wrk)
}
z.joinChan <- true
}()
}
// Stop signals that no new requests will be submitted. It triggers some cleanup of the remaining live workers, waits for
// them to finish and then returns true. Subsequent calls to Stop will return false as the finder is already stopped.
func (z *Finder) Stop() bool {
if z.closedReqChan {
return false
}
close(z.reqs)
z.closedReqChan = true
_ = <-z.joinChan
close(z.joinChan)
return true
}

566
zonecut/worker.go Normal file
View file

@ -0,0 +1,566 @@
package zonecut
import (
"fmt"
"github.com/miekg/dns"
"net"
"os"
"github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/ANSSI-FR/transdep/messages/zonecut"
"github.com/ANSSI-FR/transdep/tools"
"strings"
"github.com/ANSSI-FR/transdep/errors"
)
// WORKER_CHAN_CAPACITY is the maximum number of unhandled requests that may be spooled to the worker before a call to handle()
// becomes blocking.
const WORKER_CHAN_CAPACITY = 10
// worker represents a request handler for a specific request target domain name for which delegation information is sought.
type worker struct {
// req is the request associated with this worker
req *zonecut.Request
// reqs is the channel by which subsequent requests for the same topic as for "req" are received.
reqs chan *zonecut.Request
// closedReqChan helps prevent double-close issue on reqs channel, when the worker is stopping.
closedReqChan bool
// joinChan is used by stop() to wait for the completion of the start() goroutine
joinChan chan bool
// zcHandler is used to submit new zone cut requests. This is most notably used to get the delegation information of
// the parent zone of the requested name, in order to query its name servers for the requested name delegation
// information. By definition, this may loop up to the root zone which is hardcoded in this program.
zcHandler func(*zonecut.Request) *errors.ErrorStack
// nrHandler is used to submit new name resolution requests. This is used, for instance, to get the IP addresses
// associated to nameservers that are out-of-bailiwick and for which we don't have acceptable glues or IP addresses.
nrHandler func(*nameresolver.Request) *errors.ErrorStack
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
// initNewWorker builds a new worker instance and returns it.
// It DOES NOT start the new worker, and should not be called directly by the finder.
func initNewWorker(req *zonecut.Request, zcHandler func(*zonecut.Request) *errors.ErrorStack, nrHandler func(*nameresolver.Request) *errors.ErrorStack, config *tools.TransdepConfig) *worker {
w := new(worker)
w.req = req
w.reqs = make(chan *zonecut.Request, WORKER_CHAN_CAPACITY)
w.closedReqChan = false
w.joinChan = make(chan bool, 1)
w.zcHandler = zcHandler
w.nrHandler = nrHandler
w.config = config
return w
}
/* newWorker builds a new worker instance and returns it.
The worker is started and will resolve the request from the network.
req carries the domain name to which this worker is associated. All subsequent requests that this worker will handle will have
the same target domain name.
zcHandler is the function to call to submit new requests for delegation information (most notably for parent domains,
while chasing for a zone apex).
nrHandler is the function to call to submit new requests for name resolution (most notably to resolve a name server name
into an IP address).
*/
func newWorker(req *zonecut.Request, zcHandler func(*zonecut.Request) *errors.ErrorStack, nrHandler func(*nameresolver.Request) *errors.ErrorStack, config *tools.TransdepConfig) *worker {
w := initNewWorker(req, zcHandler, nrHandler, config)
w.start()
return w
}
// newWorkerFromCachedResult is similar to newWorker, except this worker will not chase the answer on the network; it will
// simply load the answer from cache.
func newWorkerFromCachedResult(req *zonecut.Request, zcHandler func(*zonecut.Request) *errors.ErrorStack, nrHandler func(*nameresolver.Request) *errors.ErrorStack, cf *zonecut.CacheFile, config *tools.TransdepConfig) *worker {
w := initNewWorker(req, zcHandler, nrHandler, config)
w.startWithCachedResult(cf)
return w
}
// newRootZoneWorker is similar to newWorker, except it handles the root zone, which is a special case, since it can be
// loaded from a root hints file
func newRootZoneWorker(exceptions tools.Exceptions, config *tools.TransdepConfig) *worker {
req := zonecut.NewRequest(".", exceptions)
w := initNewWorker(req, nil, nil, config)
w.startForRootZone(w.config.RootHintsFile)
return w
}
// handle allows the submission of new requests to this worker.
// This method returns an error if the worker is stopped or if the submitted request does not match the request usually
// handled by this worker.
func (w *worker) handle(req *zonecut.Request) *errors.ErrorStack {
if w.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("handle: worker request channel for zone cut of %s is already closed", w.req.Domain()))
} else if w.req.RequestTopic() != req.RequestTopic() {
return errors.NewErrorStack(fmt.Errorf("handle: invalid request; the submitted request (%s) does not match the requests handled by this worker (%s)", req.Domain(), w.req.Domain()))
}
w.reqs <- req
return nil
}
// getHardcodedRootZone returns an entry for the root zone. This entry is currently hardcoded for simplicity's sake.
func (w *worker) getHardcodedRootZone() (*zonecut.Entry, *errors.ErrorStack) {
// TODO Complete hardcoded root zone
zce := zonecut.NewEntry(w.req.Domain(), true, []*zonecut.NameSrvInfo{
zonecut.NewNameSrv("l.root-servers.net.", []net.IP{net.ParseIP("199.7.83.42")}),
},
)
return zce, nil
}
// getRootZoneFromFile loads the entries from the specified root hints file. An error is returned if the root hints
// file cannot be opened or parsed.
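// For reference (illustrative excerpt), a root hints file follows the classic "named.root" zone file format, e.g.:
//   .                        3600000   NS    L.ROOT-SERVERS.NET.
//   L.ROOT-SERVERS.NET.      3600000   A     199.7.83.42
//   L.ROOT-SERVERS.NET.      3600000   AAAA  2001:500:9f::42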
func (w *worker) getRootZoneFromFile(rootHints string) (*zonecut.Entry, *errors.ErrorStack) {
fd, err := os.Open(rootHints)
if err != nil {
return nil, errors.NewErrorStack(err)
}
defer fd.Close()
nsList := make(map[string]bool)
addrList := make(map[string][]net.IP)
zoneIter := dns.ParseZone(fd, ".", "")
for token := range zoneIter {
if token.Error != nil {
return nil, errors.NewErrorStack(token.Error)
}
if token.RR != nil {
switch rr := token.RR.(type) {
case *dns.NS:
// Just a small test to only consider NS entries for the root zone, in case additional records are provided
if rr.Hdr.Name == "." {
nsList[rr.Ns] = true
}
case *dns.A:
addrList[rr.Hdr.Name] = append(addrList[rr.Hdr.Name], rr.A)
case *dns.AAAA:
addrList[rr.Hdr.Name] = append(addrList[rr.Hdr.Name], rr.AAAA)
}
}
}
var nameSrvs []*zonecut.NameSrvInfo
for name, ipAddrs := range addrList {
if _, ok := nsList[name]; ok {
nameSrvs = append(nameSrvs, zonecut.NewNameSrv(name, ipAddrs))
}
}
return zonecut.NewEntry(".", true, nameSrvs), nil
}
// extractDelegationInfo extracts the list of name servers that are authoritative for the domain that is associated to
// this worker.
// The parent domain is used to filter out additional address records whose credibility is insufficient (because they
// are out-of-bailiwick of the parent domain).
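// For instance (illustrative), when extracting the delegation of "example.com." as seen from the "com." servers, a
// glue record for "ns1.example.com." is kept, an address record for "ns.example.net." found in the additional
// section is ignored, and "ns1.example.com." is dropped altogether if no glue record is provided for it.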
func (w *worker) extractDelegationInfo(parentDomain string, m *dns.Msg) []*zonecut.NameSrvInfo {
nsList := make(map[string][]net.IP, 0)
// Going after the delegation info; we look into the Answer and Authority sections because, depending on the
// implementation, the answer to that NS query might be in either section (e.g. ns2.msft.com answers in the answer section for
// glbdns2.microsoft.com. NS?)
for _, rr := range m.Answer {
// From the Answer section, we only consider NS records whose owner name is equal to the domain name associated to this worker.
if dns.CompareDomainName(rr.Header().Name, w.req.Domain()) == dns.CountLabel(rr.Header().Name) &&
dns.CountLabel(rr.Header().Name) == dns.CountLabel(w.req.Domain()) {
if nsrr, ok := rr.(*dns.NS); ok {
// We create a list of potential IP addresses for each name server; we don't know yet if we will have glues for them or not.
nsList[strings.ToLower(nsrr.Ns)] = make([]net.IP, 0)
}
}
}
for _, rr := range m.Ns {
// From the Authority section, we only observe NS records whose owner name is equal to the domain name associated to this worker.
if dns.CompareDomainName(rr.Header().Name, w.req.Domain()) == dns.CountLabel(rr.Header().Name) &&
dns.CountLabel(rr.Header().Name) == dns.CountLabel(w.req.Domain()) {
if nsrr, ok := rr.(*dns.NS); ok {
// We create a list of potential IP addresses for each name server; we don't know yet if we will have glues for them or not.
nsList[strings.ToLower(nsrr.Ns)] = make([]net.IP, 0)
}
}
}
//Going after the glue records
for _, rr := range m.Extra {
rrname := strings.ToLower(rr.Header().Name)
// Is it an in-bailiwick glue? If not, ignore
if dns.CompareDomainName(rrname, parentDomain) != dns.CountLabel(parentDomain) {
continue
}
// Is this glue record for a name server in the NS list?
if _, ok := nsList[rrname]; !ok {
continue
}
switch addrrr := rr.(type) {
case *dns.A:
nsList[rrname] = append(nsList[rrname], addrrr.A)
case *dns.AAAA:
nsList[rrname] = append(nsList[rrname], addrrr.AAAA)
}
}
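// Build the final list: name servers whose name lies outside the delegated domain are kept even without addresses
// (their addresses can be resolved separately), while those inside the delegated domain are kept only if glue was found.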
nameSrvs := make([]*zonecut.NameSrvInfo, 0)
for name, addrs := range nsList {
// Ignore name servers that require glue but for which no glue records were found
if dns.CompareDomainName(name, w.req.Domain()) != dns.CountLabel(w.req.Domain()) || len(addrs) > 0 {
nameSrvs = append(nameSrvs, zonecut.NewNameSrv(name, addrs))
}
}
return nameSrvs
}
/*getDelegationInfo searches for the delegation info of the name associated with this worker. It initially tries over UDP
and returns the delegation info as an entry, or the error that occurred while retrieving the delegation info.
It also returns a boolean in case of an error; if true, the error is definitive and there is no point in trying other IP
addresses. An example of such a case is an NXDomain error: the parent zone does not know about this domain at
all (assuming all parent name servers are in sync).
parentDomain is the name of the parent zone. It is used to filter out non-credible glue records.
addr is the IP address of one of the name servers authoritative for the parent zone of the domain associated with
this worker. For instance, if the worker is about "example.com.", addr will be the IP address of one of the name servers
that are authoritative for "com."
*/
func (w *worker) getDelegationInfo(parentDomain string, addr net.IP) (*zonecut.Entry, *errors.ErrorStack, bool) {
// proto == "" means UDP
nameSrvs, err, definitiveErr := w.getDelegationInfoOverProto(parentDomain, addr, "")
if err != nil {
err.Push(fmt.Errorf("getDelegationInfo: for %s", w.req.Domain()))
return nil, err, definitiveErr
}
if len(nameSrvs) == 0 {
// having no name servers is the indication that the current name is not an apex. Thus, we don't need to check
// whether there is a DS record. There are "none that we want to consider" :)
return nil, errors.NewErrorStack(errors.NewNoNameServerError(w.req.Domain())), true
}
dnssecProtected, err := w.getDNSSECInfoOverProto(addr, "")
if err != nil {
err.Push(fmt.Errorf("getDelegationInfo: for %s: failed to get DNSSEC info", w.req.Domain()))
return nil, err, false
}
return zonecut.NewEntry(w.req.Domain(), dnssecProtected, nameSrvs), nil, false
}
/* getDNSSECInfoOverProto discovers whether there is a DS record, in its parent zone, for the domain associated with
this worker.
addr is the address to send the DNS query to
proto is the transport protocol to use to query addr
This function returns a boolean indicating whether or not there is a DS record in the parent domain. This value is
meaningless if an error occurred while searching for the DS record (error != nil).
*/
func (w *worker) getDNSSECInfoOverProto(addr net.IP, proto string) (bool, *errors.ErrorStack) {
// Sends a DNS query to addr about the domain name associated with this worker, using the "proto" protocol.
clnt := new(dns.Client)
clnt.Net = proto
// Let's switch the "DNSSECEnabled" flag on if there is a DS record for this delegation
mds := new(dns.Msg)
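// Advertise a 4096-byte EDNS0 buffer; the DO bit is left unset since we only need to know whether a DS record
// exists, not to validate its signatures.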
mds.SetEdns0(4096, false)
mds.SetQuestion(w.req.Domain(), dns.TypeDS)
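// Recursion is disabled: we expect an authoritative answer (or a referral) from the parent-zone server itself, not a
// recursive resolver's answer.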
mds.RecursionDesired = false
ansds, _, err := clnt.Exchange(mds, net.JoinHostPort(addr.String(), "53"))
// Is this server broken? We should get a delegation or an authoritative DS record
if err != nil {
errStack := errors.NewErrorStack(err)
errStack.Push(fmt.Errorf("getDNSSECInfoOverProto: error during exchange with %s for %s %s?", addr.String(), w.req.Domain(), dns.TypeToString[dns.TypeDS]))
return false, errStack
}
if ansds == nil {
return false, errors.NewErrorStack(fmt.Errorf("getDNSSECInfoOverProto: no answer to DS query or got a DNS error code for %s from %s", w.req.Domain(), addr.String()))
}
if ansds.Rcode != dns.RcodeSuccess {
return false, errors.NewErrorStack(fmt.Errorf("getDNSSECInfoOverProto: received error when asking for %s %s? => %s", w.req.Domain(), dns.TypeToString[dns.TypeDS], dns.RcodeToString[ansds.Rcode]))
}
if ansds.Truncated {
// A truncated answer over UDP means we should retry over TCP; a truncated answer over TCP is an error.
if proto == "tcp" {
return false, errors.NewErrorStack(fmt.Errorf("getDNSSECInfoOverProto: got a truncated answer over TCP while querying DS of %s", w.req.Domain()))
}
return w.getDNSSECInfoOverProto(addr, "tcp")
}
return ansds.Authoritative && len(ansds.Answer) > 0, nil // DNSSEC protected zone or not
}
/* getDelegationInfoOverProto retrieves the list of name servers to which the domain associated with this worker is
delegated.
parentDomain is the name of the parent domain of the domain associated with this worker.
If an error occurred during the retrieval of this information, the error is not nil. In that case, the list of name
servers is meaningless. The returned bool indicates whether the error is likely to occur when querying one of the other
name servers that we could query for this exact same delegation info.
*/
func (w *worker) getDelegationInfoOverProto(parentDomain string, addr net.IP, proto string) ([]*zonecut.NameSrvInfo, *errors.ErrorStack, bool) {
// Sends a DNS query to addr about the domain name associated with this worker, using the "proto" protocol.
clnt := new(dns.Client)
clnt.Net = proto
m := new(dns.Msg)
m.SetEdns0(4096, false)
m.SetQuestion(w.req.Domain(), dns.TypeNS)
m.RecursionDesired = false
ans, _, err := clnt.Exchange(m, net.JoinHostPort(addr.String(), "53"))
// Did the server answer with a valid response?
if err != nil {
errStack := errors.NewErrorStack(err)
errStack.Push(fmt.Errorf("getDelegationInfoOverProto: error while exchanging with %s for %s %s?", addr.String(), w.req.Domain(), dns.TypeToString[dns.TypeNS]))
return nil, errStack, false
}
if ans == nil {
// Not getting an answer may just indicate that this server timed out.
// This is probably not a definitive error, so we might want to retry
return nil, errors.NewErrorStack(fmt.Errorf("getDelegationInfoOverProto: no answer for %s %s? from %s", w.req.Domain(), dns.TypeToString[dns.TypeNS], addr.String())), false
}
// Did the server return a negative answer?
// (most probably meaning that this name server does not know about this child zone)
if ans.Rcode == dns.RcodeNameError {
// Having the server answer us authoritatively that the name does not exist is probably good enough for us to
// stop wasting time on this name. It might be a server that is out-of-sync, though... Currently, we consider
// that this error is definitive.
return nil, errors.NewErrorStack(errors.NewNXDomainError(w.req.Domain(), dns.TypeNS, addr, errors.STR_TO_PROTO[proto])), true
}
if ans.Rcode == dns.RcodeServerFailure {
// If we accept servfail as no data, then this is a definitive answer, else it is not
return nil, errors.NewErrorStack(errors.NewServfailError(w.req.Domain(), dns.TypeNS, addr, errors.STR_TO_PROTO[proto])), w.req.Exceptions().AcceptServFailAsNoData
}
if ans.Rcode != dns.RcodeSuccess {
// Any other DNS error (e.g. FORMERR or REFUSED) indicates a failure to communicate with this server.
// Maybe this particular server is broken; let's try another one! Not a definitive error for the target domain name.
return nil, errors.NewErrorStack(fmt.Errorf("getDelegationInfoOverProto: got a DNS error for %s %s? from %s: %s", w.req.Domain(), dns.TypeToString[dns.TypeNS], addr.String(), dns.RcodeToString[ans.Rcode])), false
}
if ans.Authoritative {
// The server is authoritative for this name... that means that the name associated with this worker is a non-terminal node or the apex
return nil, nil, true
}
if ans.Truncated {
// A truncated answer usually means retry over TCP. However, sometimes TCP answers are truncated too...
// In that case, we return an error. I don't see how this would not be a definitive error; other servers will
// probably return the same overly large answer!
if proto == "tcp" {
return nil, errors.NewErrorStack(fmt.Errorf("getDelegationInfoOverProto: got a truncated answer over TCP while querying NS of %s", w.req.Domain())), true
}
return w.getDelegationInfoOverProto(parentDomain, addr, "tcp")
}
// Extract info from the DNS message
nameSrvs := w.extractDelegationInfo(parentDomain, ans)
return nameSrvs, nil, false
}
// getDelegationInfoFromGluedNameSrvs does what the name implies. It will stop iterating over the server list if one of
// the servers returns a definitive error that is likely to occur on other servers too.
func (w *worker) getDelegationInfoFromGluedNameSrvs(parentDomain string, nameSrvs []*zonecut.NameSrvInfo) (*zonecut.Entry, *errors.ErrorStack) {
var errList []string
for _, ns := range nameSrvs {
for _, addr := range ns.Addrs() {
entry, err, definitiveError := w.getDelegationInfo(parentDomain, addr)
if err == nil {
return entry, nil
}
if definitiveError {
err.Push(fmt.Errorf("getDelegationInfoFromGluedNameSrvs: definitive error for %s from %s(%s)", w.req.Domain(), ns.Name(), addr.String()))
return nil, err
}
errList = append(errList, fmt.Sprintf("getDelegationInfoFromGluedNameSrvs: %s", err.Error()))
}
}
// No server returned either a valid answer or a definitive error
return nil, errors.NewErrorStack(fmt.Errorf("getDelegationInfoFromGluedNameSrvs: cannot get the delegation info of %s from glued delegation: %s", w.req.Domain(), strings.Join(errList, ", ")))
}
// getDelegationInfoFromGluelessNameSrvs retrieves the IP addresses of the name servers then queries them. It will stop
// iterating over the server list if one of the servers returns a definitive error that is likely to occur on other
// servers too.
func (w *worker) getDelegationInfoFromGluelessNameSrvs(parentDomain string, nameSrvs []string) (*zonecut.Entry, *errors.ErrorStack) {
var errList []string
for _, ns := range nameSrvs {
req := nameresolver.NewRequest(ns, w.req.Exceptions())
w.nrHandler(req)
res, err := req.Result()
if err != nil {
err.Push(fmt.Errorf("getDelegationInfoFromGluelessNameSrvs: error while resolving the IP addresses of nameserver %s of %s", ns, w.req.Domain()))
return nil, err
}
if res.CNAMETarget() != "" {
// This name server name points to a CNAME... This is illegal, so we just skip that server :o)
continue
}
for _, addr := range res.Addrs() {
entry, err, definitiveError := w.getDelegationInfo(parentDomain, addr)
if err == nil {
return entry, nil
}
if definitiveError {
err.Push(fmt.Errorf("getDelegationInfoFromGluelessNameSrvs: definitive error for %s from %s(%s)", w.req.Domain(), ns, addr.String()))
return nil, err
}
errList = append(errList, fmt.Sprintf("getDelegationInfoFromGluelessNameSrvs: error for %s from %s(%s): %s", w.req.Domain(), ns, addr.String(), err.Error()))
}
}
// No server returned either a valid answer or a definitive error
return nil, errors.NewErrorStack(fmt.Errorf("getDelegationInfoFromGluelessNameSrvs: cannot get the delegation info of %s from glueless delegation: [%s]", w.req.Domain(), strings.Join(errList, ", ")))
}
func (w *worker) resolve() (*zonecut.Entry, *errors.ErrorStack) {
var parentZCE *zonecut.Entry
queriedName := w.req.Domain()
// Cycling until we get the delegation info for the parent zone. Cycling like this is necessary if the parent domain
// is not a zone apex. For instance, this is necessary for ssi.gouv.fr, since gouv.fr is an ENT (empty non-terminal).
for parentZCE == nil {
var err *errors.ErrorStack
// First we get the Entry for the parent zone
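// Strip the leftmost label of the currently queried name to obtain its parent; when no label remains, fall back to
// the root zone.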
pos, end := dns.NextLabel(queriedName, 1)
if end {
queriedName = "."
} else {
queriedName = queriedName[pos:]
}
newReq := zonecut.NewRequest(queriedName, w.req.Exceptions())
w.zcHandler(newReq)
parentZCE, err = newReq.Result()
if err != nil {
var returnErr bool
switch typedErr := err.OriginalError().(type) {
case *errors.TimeoutError:
returnErr = true
case *errors.NXDomainError:
returnErr = w.req.Exceptions().RFC8020
case *errors.ServfailError:
returnErr = !w.req.Exceptions().AcceptServFailAsNoData
case *errors.NoNameServerError:
returnErr = false
default:
_ = typedErr
returnErr = true
}
if returnErr {
err.Push(fmt.Errorf("resolve: error while getting the zone cut info of %s for %s", queriedName, w.req.Domain()))
return nil, err
}
parentZCE = nil
err = nil
}
}
// Split delegation info into glued vs glueless, to prioritize glued delegations, which are faster (no additional
// query required).
var gluedNameSrvs []*zonecut.NameSrvInfo
var gluelessNameSrvs []string
for _, nameSrv := range parentZCE.NameServers() {
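// A name server is considered in-bailiwick when its name falls under the delegating (parent) zone; the parent's
// servers can then provide glue records for it.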
inbailiwick := dns.CompareDomainName(nameSrv.Name(), parentZCE.Domain()) == dns.CountLabel(parentZCE.Domain())
if inbailiwick || len(nameSrv.Addrs()) > 0 {
gluedNameSrvs = append(gluedNameSrvs, nameSrv)
} else {
gluelessNameSrvs = append(gluelessNameSrvs, nameSrv.Name())
}
}
var entry *zonecut.Entry
entry, gluedErr := w.getDelegationInfoFromGluedNameSrvs(parentZCE.Domain(), gluedNameSrvs)
if gluedErr != nil {
switch typedErr := gluedErr.OriginalError().(type) {
case *errors.NXDomainError:
gluedErr.Push(fmt.Errorf("resolve: got NXDomain while resolving from glued NS of %s", w.req.Domain()))
return nil, gluedErr
case *errors.NoNameServerError:
return nil, nil
default:
_ = typedErr
}
var gluelessErr *errors.ErrorStack
entry, gluelessErr = w.getDelegationInfoFromGluelessNameSrvs(parentZCE.Domain(), gluelessNameSrvs)
if gluelessErr != nil {
if _, ok := gluelessErr.OriginalError().(*errors.NoNameServerError); ok {
return nil, nil
}
gluelessErr.Push(fmt.Errorf("resolve: unable to resolve %s: glued errors: [%s]", w.req.Domain(), gluedErr.Error()))
return nil, gluelessErr
}
}
return entry, nil
}
// start prepares the worker for handling new requests.
// The current implementation launches a goroutine that reads new requests from the reqs channel attribute and
// tries to answer them. When stopped, it immediately sends the join signal.
func (w *worker) start() {
go func() {
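// The resolution work is performed only once; the resulting entry (or error) is then served to every request
// queued on the reqs channel.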
result, err := w.resolve()
for req := range w.reqs {
req.SetResult(result, err)
}
w.joinChan <- true
}()
}
// startWithCachedResult performs the same kind of operations as start(), except that the response is not obtained
// from the network, but by loading it from a cache file.
func (w *worker) startWithCachedResult(cf *zonecut.CacheFile) {
go func() {
result, resultErr, err := cf.Result()
if err != nil {
result = nil
cacheErr := fmt.Errorf("startWithCachedResult: error while loading from cache: %s", err)
if resultErr != nil {
resultErr.Push(cacheErr)
} else {
resultErr = errors.NewErrorStack(cacheErr)
}
}
for req := range w.reqs {
req.SetResult(result, resultErr)
}
w.joinChan <- true
}()
}
// startForRootZone is a special starting procedure for the root zone. The root zone can be loaded from a root hints file,
// but we can also use a hardcoded-but-probably-obsolete list for an easy startup. If the root hints file cannot be loaded,
// the hardcoded list is used instead and an error message is printed on stderr.
func (w *worker) startForRootZone(rootHints string) {
go func() {
var result *zonecut.Entry
var err *errors.ErrorStack
if rootHints == "" {
result, err = w.getHardcodedRootZone()
} else {
result, err = w.getRootZoneFromFile(rootHints)
if err != nil {
fmt.Fprintf(os.Stderr, "startForRootZone: error loading from root hints. Using hardcoded value instead: %s\n", err)
result, err = w.getHardcodedRootZone()
}
}
for req := range w.reqs {
req.SetResult(result, err)
}
w.joinChan <- true
}()
}
// stop is to be called during the cleanup of the worker. It shuts down the goroutine started by start() and waits for
// it to actually end. stop returns true the first time it is called, once the start() goroutine has been stopped; on
// subsequent calls, it returns false.
func (w *worker) stop() bool {
if w.closedReqChan {
return false
}
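// Closing the reqs channel terminates the range loop of the serving goroutine, which then emits its join signal;
// we wait for that signal before returning.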
close(w.reqs)
w.closedReqChan = true
<-w.joinChan
close(w.joinChan)
return true
}