initial commit

This commit is contained in:
Florian Maury 2018-01-23 21:25:00 +01:00
commit 3d855e8b1e
38 changed files with 5515 additions and 0 deletions

9
LICENCE Normal file

@@ -0,0 +1,9 @@
Copyright 2017- ANSSI
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

10
Makefile Normal file

@@ -0,0 +1,10 @@
.PHONY: all
all: transdep webserver
transdep: transdep.go
go build transdep.go
webserver: webserver.go
go build webserver.go

307
README.md Normal file

@@ -0,0 +1,307 @@
# Transdep
Transdep is a utility that discovers single points of failure (SPOF) in DNS dependency graphs, i.e. elements whose
failure makes a domain name impossible to resolve.
The notion of DNS dependency graphs was introduced by Venugopalan Ramasubramanian and Emin Gün Sirer in
[Perils of Transitive Trust in the Domain Name System][1].
The types of single points of failure currently detected are:
- domain names (which can be availability SPOF if DNSSEC is incorrectly configured);
- IP addresses of name servers;
- longest network prefixes that may generally be announced over the Internet (/24 over IPv4 and /48 over IPv6);
- ASN of the AS announcing the IP addresses of name servers.
The ``transdep`` utility is the CLI version of the tool. The ``webserver`` utility spawns a REST/JSON webservice.
Endpoints are described below.
[1]: https://www.cs.cornell.edu/people/egs/papers/dnssurvey.pdf
## Licence
Transdep is licenced under the 2-clause BSD licence.
## Installation
Transdep uses the following external libraries:
- https://github.com/miekg/dns
- https://github.com/awalterschulze/gographviz
- https://github.com/hashicorp/golang-lru
- https://github.com/deckarep/golang-set
- https://github.com/hashicorp/go-immutable-radix
You may install them using the ``go get`` command or any other method you prefer:
```bash
$ go get github.com/miekg/dns
$ go get github.com/awalterschulze/gographviz
$ go get github.com/hashicorp/golang-lru
$ go get github.com/deckarep/golang-set
$ go get github.com/hashicorp/go-immutable-radix
```
You may then use the Makefile to compile the Transdep tools:
```bash
$ make all
```
## Usage
### CLI
The ``transdep`` utility can be used to analyze the dependencies of a single domain name, of multiple names, or a saved
dependency graph.
#### Analysis Target Types
To analyze a single name, use the ``-domain`` option:
```bash
./transdep -domain www.example.net
```
To analyze multiple domain names, provide a list stored in a file, with one domain name per line, using the ``-file``
option:
```bash
./transdep -file <(echo -ne "example.com\nexample.net")
./transdep -file /tmp/list_of_domain_names
```
If you saved a dependency graph into a file (generated using the ``-graph`` option), you may analyze it by loading the
graph with the ``-load`` option:
```bash
./transdep -domain example.net -graph > /tmp/example_net_graph.json
./transdep -load /tmp/example_net_graph.json
```
#### Analysis Nature
Transdep can analyze a domain based on multiple criteria.
All analysis types consider that IP addresses and announcing network prefixes may be SPOFs.
By default, SPOF discovery is conducted while considering that all names may break, including non-DNSSEC protected
domain names. This is used to analyze SPOF in the event of misconfigurations, zone truncation and all other types of
zone corruptions that may render a zone impossible to resolve.
If the analysis must be constrained to only consider that DNSSEC protected names may break, the ``-dnssec`` option must
be added to the command line:
```bash
./transdep -domain www.example.com -dnssec
```
By default, the SPOF discovery considers that resolvers are connected to both IPv4 and IPv6 networks. This means that if
an IPv4 address is unavailable, this unavailability may be compensated for by a server available over IPv6.
In some scenarios, this is unacceptable, because the IPv4 resolvers and the IPv6 resolvers are separate servers. Also,
one of these two networks might be unavailable (temporarily or permanently). To represent these situations, the
``-break4`` (resp. ``-break6``) option simulates that all IPv4 (resp. IPv6) addresses are always considered unavailable
when analyzing the SPOF potential of an IP address in the other network type:
```bash
./transdep -domain www.x-cli.eu -break4
www.x-cli.eu:this domain name requires some IPv4 addresses to be resolved properly
./transdep -domain www.example.com -break4
www.example.com.:Name:example.com.
www.example.com.:IP:2001:500:8d::53
www.example.com.:Name:iana-servers.net.
www.example.com.:Name:.
www.example.com.:Name:net.
www.example.com.:Name:com.
```
In the previous example, `www.x-cli.eu.` cannot be resolved by IPv6-only resolvers (because some names or delegations do
not have IPv6 addresses).
For `www.example.com`, the result shows that, during that run, `Transdep` detected that a resolver with access only to
the IPv6 network at the time of resolution might be unable to resolve the name `www.example.com` if the IP address
``2001:500:8d::53`` is unavailable.
The ``-all`` option instructs `Transdep` to analyze the requested domain name(s) using all combinations of the previous
options: with and without ``-dnssec``, each combined with ``-break4``, ``-break6``, or neither.
```bash
./transdep -domain www.x-cli.eu -all
AllNames:www.x-cli.eu.:Name:x-cli.eu.
AllNames:www.x-cli.eu.:Name:.
AllNames:www.x-cli.eu.:Name:eu.
DNSSEC:www.x-cli.eu.:Name:.
DNSSEC:www.x-cli.eu.:Name:eu.
AllNamesNo4:www.x-cli.eu.:this domain name requires some IPv4 addresses to be resolved properly
DNSSECNo4:www.x-cli.eu.:this domain name requires some IPv4 addresses to be resolved properly
AllNamesNo6:www.x-cli.eu.:Name:eu.
AllNamesNo6:www.x-cli.eu.:Name:x-cli.eu.
AllNamesNo6:www.x-cli.eu.:Name:.
DNSSECNo6:www.x-cli.eu.:Name:.
DNSSECNo6:www.x-cli.eu.:Name:eu.
```
`Transdep` may also consider an analysis criterion based on the ASN of the AS announcing the network prefixes covering
the IP addresses of the name servers. The association between an IP address and an ASN is made using a file whose
format is as follows:
- one association per line;
- each line contains an ASN and an announced network prefix.
Here is an example of such a file:
```
64501 192.0.2.0/24
64501 198.51.100.0/24
64502 203.0.113.0/24
64502 2001:db8::/32
```
Such a file can be generated from an MRT dump file (bviews), such as the ones made available by the [RIS project][2],
using ANSSI's [`mabo`][3] tool with the ``prefixes`` sub-command.
The ASN-prefix file is provided to `Transdep` using the ``-mabo`` option:
```bash
./mabo prefixes bview.20171013.0800.gz > prefixes-20171013.txt
./transdep -domain www.example.com -mabo prefixes-20171013.txt
```
[2]: https://www.ripe.net/analyse/internet-measurements/routing-information-service-ris
[3]: https://github.com/ANSSI-FR/mabo
#### Output Types
`Transdep` can generate several types of documents. By default, it generates a CSV containing the discovered SPOF for
the requested analysis.
If the ``-all`` option is provided, the format is ``AnalysisType:DomainName:TypeOfSPOF:SPOFReference``, where
``AnalysisType`` indicates one of the following combinations:
* ``AllNames``: default options (no ``-dnssec``, no ``-break4``, no ``-break6``);
* ``AllNamesNo4``: default options except that ``-break4`` is specified;
* ``AllNamesNo6``: default options except that ``-break6`` is specified;
* ``DNSSEC``: default options except that ``-dnssec`` is specified;
* ``DNSSECNo4``: ``-dnssec`` and ``-break4`` options are specified;
* ``DNSSECNo6``: ``-dnssec`` and ``-break6`` options are specified.
If the ``-all`` option is not specified, the format is ``DomainName:TypeOfSPOF:SPOFReference``.
In both formats, ``DomainName`` indicates the domain name that is analyzed.
``TypeOfSPOF`` can take one of the following values:
* ``Name``: the next field specifies a domain name that must be resolvable for ``DomainName`` to be resolvable.
* ``IP``: the next field specifies an IP address that must be available and not hijacked for ``DomainName`` to be
resolvable.
* ``Prefix``: the next field specifies a network prefix that must be available and not hijacked for ``DomainName`` to
be resolvable.
* ``ASN``: the next field specifies an AS number whose whole network must not be totally broken for ``DomainName`` to
be resolvable.
``TypeOfSPOF`` may also take the special value ``Cycle``, which indicates that there is a circular dependency somewhere
in the graph, or an overly long CNAME chain (for some definition of "overly long").
Having ``Cycle`` as a dependency means that the name cannot be resolved at all by an RFC-compliant resolver at the time
of resolution.
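Note that the ``SPOFReference`` field may itself contain colons (e.g. the IPv6 address ``2001:500:8d::53``), so output lines are best split with a field limit. A minimal Go sketch for the default (non ``-all``) three-field format; the `spofLine` struct and `parseSPOFLine` helper are illustrative, not part of Transdep:

```go
package main

import (
	"fmt"
	"strings"
)

// spofLine is a parsed line of transdep's default output,
// DomainName:TypeOfSPOF:SPOFReference.
type spofLine struct {
	Domain, Type, Reference string
}

// parseSPOFLine splits on at most two colons so that IPv6 references
// such as 2001:500:8d::53 are kept intact in the third field.
func parseSPOFLine(line string) (spofLine, error) {
	parts := strings.SplitN(line, ":", 3)
	if len(parts) != 3 {
		return spofLine{}, fmt.Errorf("malformed line: %q", line)
	}
	return spofLine{Domain: parts[0], Type: parts[1], Reference: parts[2]}, nil
}

func main() {
	l, err := parseSPOFLine("www.example.com.:IP:2001:500:8d::53")
	if err != nil {
		panic(err)
	}
	fmt.Println(l.Type, l.Reference) // prints: IP 2001:500:8d::53
}
```

With ``-all``, the same approach applies with a limit of four fields. Lines carrying a free-form message (such as the IPv4-requirement notice above) have fewer colon-separated fields and are rejected by this sketch.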
The ``-graph`` output option generates an output that can be later loaded for analysis using the
``-load`` option, described above.
The ``-dot`` output option generates a DOT file. This output may be passed to any Graphviz interpreter for graph
drawing. The generated DOT file highlights the domain names and IP addresses that are SPOFs by coloring the nodes in red.
```bash
./transdep -domain www.x-cli.eu -dot | dot -T pdf -o /tmp/graph_x-cli.eu.pdf
```
#### Caches
`Transdep` maintains several caches in order to limit the number of requests sent to name servers during the discovery
of the dependency graph. There are in-memory caches, using LRU lists and goroutines, and on-disk caches for long-term
storage and for entries overflowing from the in-memory LRU lists.
In-memory cache sizes are controlled with the ``-nrlrusize``, ``-zcflrusize`` and ``-dflrusize`` options. The first two
options are associated with lists whose entries are cached on disk when the LRU lists overflow.
The on-disk cache is leveraged whenever possible and an entry is reinstated in the LRU list upon usage. Thus, an entry
is either in memory or on disk and is never lost unless the cache directory is flushed manually. The third option is
associated with an LRU list whose entries may be very large. These entries are synthesized from the entries of the other
caches, and thus are not stored on disk when the list overflows.
If your computer swaps or consumes too much memory while running `Transdep`, you should lower these values, starting
with ``-dflrusize``. If your computer spends too much time in disk I/O wait and you have some spare RAM, you may try to
increase the first two options.
On-disk caches consist of a lot of very small JSON files. Please monitor the number of remaining inodes and adapt your
inode table accordingly.
On-disk caches are stored in the directory designated by the `-cachedir` command line option or the `TMPDIR`
environment variable. The default value is ``/tmp``.
With the current implementation, `Transdep` cache entries never expire. If you need to flush the cache, you may point
`Transdep` at a new cache directory to keep the previous one and still start fresh. You may also delete the
`nameresolver` and `zonecut` directories present in the designated cache directory.
#### Root Zone Hint File
You may specify a root zone hint file with the `-hints` option. If left unspecified, `Transdep` uses a hard-coded list
of root servers when querying the root zone for delegations.
#### DNS Violations
Strictly RFC-compliant behaviour would prevent the resolution of many domain names. Thus, some degree of DNS-violation
tolerance was implemented in `Transdep`, with much grumbling.
By default, `Transdep` treats `rcode 3` (NXDOMAIN) answers on non-terminal nodes as equivalent to `rcode 0` answers
with `ancount=0`. You may reinstate RFC 8020 compliance with the `-rfc8020` option.
Some devices are also unable to answer non-A/AAAA queries and always return `rcode 2` (SERVFAIL) answers for any other
qtype, including NS or DS. By default, `Transdep` considers these servers broken, but you may use the `-servfail`
option to instruct `Transdep` to treat these answers as `rcode 0` answers with `ancount=0`. This may lead `Transdep` to
return incorrect results in some instances.
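The resulting tolerance rules can be summarized with a small helper (`treatAsNoData` is an illustrative function mirroring the behaviour described above, not Transdep's actual code):

```go
package main

import "fmt"

// DNS response codes relevant here (values per RFC 1035/2136).
const (
	rcodeNoError  = 0 // NOERROR
	rcodeServFail = 2 // SERVFAIL
	rcodeNXDomain = 3 // NXDOMAIN
)

// treatAsNoData reports whether an answer received on a non-terminal node
// may be downgraded to "rcode 0 with ancount=0", depending on the
// tolerance flags: rfc8020 reinstates strict RFC 8020 compliance, and
// servfail tolerates devices that SERVFAIL on non-A/AAAA qtypes.
func treatAsNoData(rcode int, rfc8020, servfail bool) bool {
	switch rcode {
	case rcodeNXDomain:
		return !rfc8020 // tolerated by default
	case rcodeServFail:
		return servfail // only with the -servfail option
	default:
		return false
	}
}

func main() {
	fmt.Println(treatAsNoData(rcodeNXDomain, false, false)) // prints: true
	fmt.Println(treatAsNoData(rcodeServFail, false, false)) // prints: false
}
```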
#### Script Friendliness
If you don't care about the nature of errors that may arise during the analysis of a domain name, or if you want an
output that is easily parsable, you may use the `-script` option to report errors as the constant ``-ERROR-``.
`Transdep` will return an error if any name that is part of the dependency graph cannot be resolved at the time of
dependency graph discovery. Doing otherwise might have led to incorrect results from partial dependency graph discovery.
#### Concurrency
You may adapt, with the `-jobs` option, the number of domain names whose dependency graphs are discovered
simultaneously. The higher this value, the harder you will hammer the name servers. Keep this value relatively low to
avoid blacklisting of your IP address and false measurements.
### Web Service
The web service is provided by the ``webserver`` binary.
The ``-bind`` and ``-port`` options specify, respectively, the address and port on which the web server listens.
By default, the service is available at `http://127.0.0.1:5000`.
The ``-nrlrusize``, ``-zcflrusize``, ``-dflrusize``, ``-jobs``, ``-hints`` and ``-cachedir`` options have the same usage
as for the `Transdep` CLI utility.
The web server exposes several endpoints:
* ``/allnames`` is the endpoint corresponding to the default behaviour of the `transdep` CLI utility.
* ``/dnssec`` is the endpoint corresponding to the ``-dnssec`` option of the `transdep` CLI utility.
* ``/break4`` is the endpoint corresponding to the ``-break4`` option of the `transdep` CLI utility.
* ``/break6`` is the endpoint corresponding to the ``-break6`` option of the `transdep` CLI utility.
Combining ``-dnssec`` with ``-break4`` or ``-break6`` is not possible with the web server.
Each endpoint takes a ``domain`` parameter as part of the query string, to specify which domain name is to be analyzed.
Endpoints may also receive ``rfc8020`` and ``servfail`` query string parameters to indicate which DNS violations are
tolerated for the analysis. If these parameters are absent, `rcode 3` answers on non-terminal nodes are treated as
`rcode 0` answers with `ancount=0`, and servers returning `rcode 2` answers are considered broken.
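A query URL for one of these endpoints can be assembled as follows. This is a hedged sketch: the `buildQuery` helper is illustrative, and the assumption that the parameters take the value `1` is mine (the accepted values are not specified here):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildQuery assembles a request URL for one of the webserver endpoints
// (allnames, dnssec, break4, break6), with the optional DNS-violation
// tolerance parameters. Parameter values of "1" are an assumption.
func buildQuery(base, endpoint, domain string, rfc8020, servfail bool) string {
	v := url.Values{}
	v.Set("domain", domain)
	if rfc8020 {
		v.Set("rfc8020", "1")
	}
	if servfail {
		v.Set("servfail", "1")
	}
	return base + "/" + endpoint + "?" + v.Encode()
}

func main() {
	// url.Values.Encode sorts parameters by key.
	fmt.Println(buildQuery("http://127.0.0.1:5000", "allnames", "example.com", true, false))
	// prints: http://127.0.0.1:5000/allnames?domain=example.com&rfc8020=1
}
```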
When launched from a console, the `webserver` utility outputs a URL to query to gracefully stop the service. Gracefully
shutting down the service is strongly advised to prevent on-disk cache corruption or incompleteness.
```bash
$ ./webserver &
[1] 7416
To stop the server, send a query to http://127.0.0.1:5000/stop?secret=5942985ebdc9102663130752c1d21f23
$ curl http://127.0.0.1:5000/stop?secret=5942985ebdc9102663130752c1d21f23
Stopping.
Stopping the finder: OK
$
```

173
dependency/finder.go Normal file

@@ -0,0 +1,173 @@
// Package dependency contains the DNS dependency finder.
// Its purpose is to provide a request channel, and to build the dependency graph of a requested domain name.
package dependency
import (
"fmt"
"github.com/hashicorp/golang-lru"
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/nameresolver"
"github.com/ANSSI-FR/transdep/zonecut"
"github.com/ANSSI-FR/transdep/messages/dependency"
msg_nameresolver "github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/hashicorp/go-immutable-radix"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/errors"
)
// REQ_CHAN_CAPACITY indicates the maximum number of requests that can be queued to a dependency finder instance,
// before the write call is blocking.
const REQ_CHAN_CAPACITY = 10
// Finder is a worker pool maintainer for the construction of dependency trees of domain names.
type Finder struct {
// workerPool LRU keys are requestTopic instances and values are *worker.
// Its use is to have at most one worker per domain (and type of request (resolveName/includeIP)) and spool matching
// requests to that worker.
workerPool *lru.Cache
// reqs is the channel that feeds new requests to the finder.
reqs chan *dependency.Request
// closedReqChan is true when the reqs channel has been closed. This prevents double-close or writes to a closed chan
closedReqChan bool
// cacheDir is the path to the root directory for on-disk cache.
cacheRootDir string
// joinChan is used for goroutine synchronization so that the owner of a finder instance does not exit before
// this finder is done cleaning up after itself.
joinChan chan bool
// nameResolver is the instance of Name Resolver that is started by this Finder. Its handler is passed to this
// finder workers.
nameResolver *nameresolver.Finder
// zoneCutFinder is the instance of Zone Cut Finder that is started by this Finder. Its handler is passed to this
// finder workers.
zoneCutFinder *zonecut.Finder
// tree is the reference to a radix tree containing a view of the prefixes announced with BGP over the Internet.
// This is used to fill IPNode instances with their corresponding ASN number, at the time of query.
tree *iradix.Tree
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
/* NewFinder initializes a new dependency finder struct instance.
transdepConf is the configuration of the current Transdep run. Among other things, it specifies the LRU sizes, the
root cache directory in which the on-disk cache will be stored (created if it does not already exist), and the name of
the file from which the root hints should be loaded.
tree is a radix tree containing a view of the prefixes announced with BGP over the Internet, used to associate IP
addresses with their ASN.
*/
func NewFinder(transdepConf *tools.TransdepConfig, tree *iradix.Tree) *Finder {
df := new(Finder)
var err error
df.workerPool, err = lru.NewWithEvict(transdepConf.LRUSizes.DependencyFinder, cleanupWorker)
if err != nil {
return nil
}
df.reqs = make(chan *dependency.Request, REQ_CHAN_CAPACITY)
df.closedReqChan = false
df.joinChan = make(chan bool, 1)
df.tree = tree
df.config = transdepConf
// Trick using late binding to have circular declaration of zonecut finder and name resolver handlers
var nrHandler func(request *msg_nameresolver.Request) *errors.ErrorStack
df.zoneCutFinder = zonecut.NewFinder(func(req *msg_nameresolver.Request) *errors.ErrorStack {return nrHandler(req)}, transdepConf)
df.nameResolver = nameresolver.NewFinder(df.zoneCutFinder.Handle, transdepConf)
nrHandler = df.nameResolver.Handle
df.start()
return df
}
// cleanupWorker is the callback called by the LRU when an entry is evicted.
// value is the worker instance stored within the evicted entry.
func cleanupWorker(_, value interface{}) {
wrk := value.(*worker)
wrk.stop()
}
/*spool finds an already existing worker for the spooled request or create a new worker and adds it to the LRU. It
then feeds the request to that worker.
req is the request to be forwarded to the appropriate worker. If no existing worker can handle that request, a new one
is created and added to the list of workers
*/
func (df *Finder) spool(req *dependency.Request) {
var wrk *worker
key := req.Topic()
if val, ok := df.workerPool.Get(key); ok {
wrk = val.(*worker)
} else {
wrk = newWorker(req, df.Handle, df.zoneCutFinder.Handle, df.nameResolver.Handle, df.config, df.tree)
df.workerPool.Add(key, wrk)
}
wrk.handle(req)
}
// Handle is the function called to submit new requests.
// Caller may call req.Result() after calling Handle(req) to get the result of that Handle call.
// This method returns an error if the Finder is stopped.
func (df *Finder) Handle(req *dependency.Request) *errors.ErrorStack {
if df.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("Handle: dependency finder request channel already closed"))
}
df.reqs <- req
return nil
}
// start launches a goroutine that handles new requests, detects dependency cycles, and spools the remaining requests
// for processing.
// When no more requests are expected, the goroutine cleans up all workers before signaling on joinChan.
func (df *Finder) start() {
go func() {
for req := range df.reqs {
if req.DetectCycle() {
//Detect dependency loops
g := graph.NewRelationshipNode(fmt.Sprintf("start: cycle detected on %s", req.Name), graph.AND_REL)
g.AddChild(new(graph.Cycle))
req.SetResult(g, nil)
} else if req.Depth() > nameresolver.MAX_CNAME_CHAIN {
// Detect long CNAME chain (incremented only when an alias is drawing in a new dependency graph)
g := graph.NewRelationshipNode(fmt.Sprintf("start: overly long CNAME chain detected %s", req.Name), graph.AND_REL)
g.AddChild(new(graph.Cycle))
req.SetResult(g, nil)
} else {
df.spool(req)
}
}
// Cleanup workers
for _, key := range df.workerPool.Keys() {
val, _ := df.workerPool.Peek(key)
wrk := val.(*worker)
wrk.stop()
}
df.joinChan <- true
}()
}
// Stop signals that no more requests are expected.
// This function must be called for proper memory and cache management. Thus, it is advised to defer a call to this
// function as soon as a Finder is instantiated with NewFinder().
func (df *Finder) Stop() bool {
if df.closedReqChan {
// This if prevents double closes
return false
}
close(df.reqs)
df.closedReqChan = true
	// wait for the goroutine launched by start() to terminate
_ = <- df.joinChan
close(df.joinChan)
// Cleanup other tools
df.nameResolver.Stop()
df.zoneCutFinder.Stop()
return true
}

315
dependency/worker.go Normal file

@@ -0,0 +1,315 @@
package dependency
import (
"fmt"
"github.com/hashicorp/go-immutable-radix"
"github.com/miekg/dns"
"github.com/ANSSI-FR/transdep/graph"
"github.com/ANSSI-FR/transdep/messages/dependency"
"github.com/ANSSI-FR/transdep/messages/nameresolver"
"github.com/ANSSI-FR/transdep/messages/zonecut"
"github.com/ANSSI-FR/transdep/tools"
"github.com/ANSSI-FR/transdep/tools/radix"
"github.com/ANSSI-FR/transdep/errors"
)
const WORKER_CHAN_CAPACITY = 10
// worker represents a handler of requests for a specific requestTopic.
// It retrieves the relevant information, caches it in memory and serves it until stop() is called.
type worker struct {
// req is the request that is handled by this worker
req *dependency.Request
// reqs is a channel of requests with identical requestTopic as the original request
reqs chan *dependency.Request
// joinChan is used by stop() to wait for the completion of the start() goroutine
joinChan chan bool
// closedReqChan is used to prevent double-close during stop()
closedReqChan bool
// tree is the reference to a radix tree containing a view of the prefixes announced with BGP over the Internet.
// This is used to fill IPNode instances with their corresponding ASN number, at the time of query.
tree *iradix.Tree
// depHandler is the handler used to fetch the dependency tree of a dependency of the current requestTopic
depHandler func(*dependency.Request) *errors.ErrorStack
// zcHandler is used to get the delegation info of some name that is part of the dependency tree of the current requestTopic
zcHandler func(request *zonecut.Request) *errors.ErrorStack
// nrHandler is used to get the IP addresses or Alias associated to a name that is part of the dependency tree of the current requestTopic
nrHandler func(*nameresolver.Request) *errors.ErrorStack
// config is the configuration of the current Transdep run
config *tools.TransdepConfig
}
/* newWorker instantiates and returns a new worker.
It builds the worker struct, and starts the routine in charge of building the dependency tree of the
requested topic and serving the answer to subsequent requests.
req is the first request that triggered the instantiation of that worker
depHandler is a function that can be called to have another dependency graph resolved (probably to integrate it to the current one)
zcHandler is a function that can be called to obtain the zone cut of a requested name
nrHandler is a function that can be called to obtain the IP address or Alias of a name
*/
func newWorker(req *dependency.Request, depHandler func(*dependency.Request) *errors.ErrorStack, zcHandler func(request *zonecut.Request) *errors.ErrorStack, nrHandler func(*nameresolver.Request) *errors.ErrorStack, conf *tools.TransdepConfig, tree *iradix.Tree) *worker {
w := new(worker)
w.req = req
w.reqs = make(chan *dependency.Request, WORKER_CHAN_CAPACITY)
w.closedReqChan = false
w.joinChan = make(chan bool, 1)
w.config = conf
w.tree = tree
w.depHandler = depHandler
w.zcHandler = zcHandler
w.nrHandler = nrHandler
w.start()
return w
}
/* handle is the function called to submit a new request to that worker.
Caller may call req.Result() after this function returns to get the result for this request.
This method returns an error if the worker is stopped or if the submitted request does not match the request usually
handled by this worker.
*/
func (w *worker) handle(req *dependency.Request) *errors.ErrorStack {
if w.closedReqChan {
return errors.NewErrorStack(fmt.Errorf("handle: dependency worker channel for %s is already closed", w.req.Name()))
} else if !w.req.Equal(req) {
return errors.NewErrorStack(fmt.Errorf("handle: invalid request; the submitted request (%s) does not match the requests handled by this worker (%s)", req.Name(), w.req.Name()))
}
w.reqs <- req
return nil
}
// resolveRoot is a trick used to simplify the circular dependency of the root-zone, which is self-sufficient by definition.
func (w *worker) resolveRoot() graph.Node {
g := graph.NewRelationshipNode("resolveRoot: dependency graph of the root zone", graph.AND_REL)
g.AddChild(graph.NewDomainNameNode(".", true))
return g
}
/*getParentGraph is a helper function which gets the dependency graph of the parent domain.
This function submits a new dependency request for the parent domain and waits for the result.
Consequently, this function triggers a recursive search of the parent domain dependency tree until the root-zone
dependency tree is reached. Said otherwise, for "toto.fr", this function triggers a search for the dependency tree of
"fr.", which will recursively trigger a search for the dependency tree of ".".
*/
func (w *worker) getParentGraph() (graph.Node, *errors.ErrorStack) {
nxtLblPos, end := dns.NextLabel(w.req.Name(), 1)
shrtndName := "."
if !end {
shrtndName = w.req.Name()[nxtLblPos:]
}
// resolveName and includeIP are set to false, because this name does not depend on the IP addresses set at the
// parent domain, and we are not compatible with DNAME.
req := dependency.NewRequestWithContext(shrtndName, false, false, w.req, 0)
w.depHandler(req)
res, err := req.Result()
if err != nil {
err.Push(fmt.Errorf("getParentGraph: error during resolution of parent graph %s of %s", shrtndName, w.req.Name()))
return nil, err
}
return res, nil
}
// resolveSelf returns the graph of the current requestTopic
func (w *worker) resolveSelf() (graph.Node, *errors.ErrorStack) {
g := graph.NewRelationshipNode(fmt.Sprintf("Dependency graph of exact name %s", w.req.Name()), graph.AND_REL)
// First, we resolve the current name, to get its IP addresses or the indication that it is an alias
nr := nameresolver.NewRequest(w.req.Name(), w.req.Exceptions())
w.nrHandler(nr)
var ne *nameresolver.Entry
ne, err := nr.Result()
if err != nil {
err.Push(fmt.Errorf("resolveSelf: error while getting the exact resolution of %s", w.req.Name()))
return nil, err
}
if ne.CNAMETarget() != "" {
if !w.req.FollowAlias() {
return nil, errors.NewErrorStack(fmt.Errorf("resolveSelf: alias detected (%s) but alias is not requested to be added to the graph of %s", ne.CNAMETarget(), w.req.Name()))
}
// Arguably, the name of the node that contains an alias drawing in a complete dependency graph could be omitted
// from the graph, because this name is not really important per se wrt dependency graphs.
g.AddChild(graph.NewAliasNode(ne.CNAMETarget(), ne.Owner()))
// We reuse the FollowAlias and IncludeIP values of the current requestTopic because if we are resolving a
// name for a NS, we will want the IP address and to follow CNAMEs, even though this is an illegal configuration.
// Depth is incremented so that overly long chains can be detected
depReq := dependency.NewRequestWithContext(ne.CNAMETarget(), w.req.FollowAlias(), w.req.IncludeIP(), w.req, w.req.Depth()+1)
w.depHandler(depReq)
aliasGraph, err := depReq.Result()
if err != nil {
err.Push(fmt.Errorf("resolveSelf: error while getting the dependency graph of alias %s", ne.CNAMETarget()))
return nil, err
}
g.AddChild(aliasGraph)
} else if w.req.IncludeIP() {
gIP := graph.NewRelationshipNode(fmt.Sprintf("IPs of %s", ne.Owner()), graph.OR_REL)
g.AddChild(gIP)
for _, addr := range ne.Addrs() {
asn, err := radix.GetASNFor(w.tree, addr)
if err != nil {
asn = 0
}
gIP.AddChild(graph.NewIPNodeWithName(addr.String(), ne.Owner(), asn))
}
}
return g, nil
}
// getDelegationGraph gets the graph relative to the delegation info of the current name. The graph is empty if the
// request topic is not a zone apex.
func (w *worker) getDelegationGraph() (graph.Node, *errors.ErrorStack) {
g := graph.NewRelationshipNode(fmt.Sprintf("Dependency graph for %s delegation", w.req.Name()), graph.AND_REL)
// Get the graph for the current zone. First, we get the delegation info for this zone, and we add it.
req := zonecut.NewRequest(w.req.Name(), w.req.Exceptions())
w.zcHandler(req)
entry, err := req.Result()
if err != nil {
var returnErr bool
switch typedErr := err.OriginalError().(type) {
case *errors.TimeoutError:
returnErr = true
case *errors.NXDomainError:
returnErr = w.req.Exceptions().RFC8020
case *errors.ServfailError:
returnErr = !w.req.Exceptions().AcceptServFailAsNoData
case *errors.NoNameServerError:
returnErr = false
default:
_ = typedErr
returnErr = true
}
if returnErr {
err.Push(fmt.Errorf("getDelegationGraph: error while getting the zone cut of %s", w.req.Name()))
return nil, err
}
err = nil
entry = nil
}
// If entry is nil, then we are at a non-terminal node, so we have no other dependencies (except aliases)
if entry != nil {
g.AddChild(graph.NewDomainNameNode(entry.Domain(), entry.DNSSEC()))
nameSrvsGraph := graph.NewRelationshipNode(fmt.Sprintf("Graph of NameSrvInfo of %s", w.req.Name()), graph.OR_REL)
g.AddChild(nameSrvsGraph)
for _, nameSrv := range entry.NameServers() {
nsGraph := graph.NewRelationshipNode(fmt.Sprintf("Graph of NS %s from NameSrvInfo of %s", nameSrv.Name(), w.req.Name()), graph.AND_REL)
nameSrvsGraph.AddChild(nsGraph)
// If there are glues
if len(nameSrv.Addrs()) > 0 {
nsAddrGraph := graph.NewRelationshipNode(fmt.Sprintf("IPs of %s", nameSrv.Name()), graph.OR_REL)
nsGraph.AddChild(nsAddrGraph)
for _, ip := range nameSrv.Addrs() {
asn, err := radix.GetASNFor(w.tree, ip)
if err != nil {
asn = 0
}
nsAddrGraph.AddChild(graph.NewIPNodeWithName(ip.String(), nameSrv.Name(), asn))
}
} else {
// The NS is out-of-bailiwick and has no glue records; thus we ask for the dependency graph of
// this NS name
req := dependency.NewRequestWithContext(nameSrv.Name(), true, true, w.req, 0)
w.depHandler(req)
NSGraph, err := req.Result()
if err != nil {
err.Push(fmt.Errorf("getDelegationGraph: error while getting the dependency graph of NS %s", nameSrv.Name()))
return nil, err
}
nsGraph.AddChild(NSGraph)
}
}
}
return g, nil
}
// resolve orchestrates the resolution of the worker request topic and returns the resulting dependency graph
func (w *worker) resolve() (graph.Node, *errors.ErrorStack) {
// Shortcut if this is the root zone, because we don't want to have to handle the circular dependency of the root-zone
if w.req.Name() == "." {
g := w.resolveRoot()
return g, nil
}
	// The graph of a name is the graph of the parent name + the graph of the name itself (including its possible
	// delegation info and its possible alias/IP address)
g := graph.NewRelationshipNode(fmt.Sprintf("Dependency graph for %s", w.req.Name()), graph.AND_REL)
// Get Graph of the parent zone
res, err := w.getParentGraph()
if err != nil {
err.Push(fmt.Errorf("resolve: error while getting the parent graph of %s", w.req.Name()))
return nil, err
}
g.AddChild(res)
// Get Graph of the delegation of the request topic
graphDelegRes, err := w.getDelegationGraph()
if err != nil {
err.Push(fmt.Errorf("resolve: error while getting the delegation graph of %s", w.req.Name()))
return nil, err
}
g.AddChild(graphDelegRes)
	// If the request topic must be resolved itself (for instance, because it is the name used in an NS record and
	// that name is out-of-bailiwick), we resolve its graph and add it
if w.req.ResolveTargetName() {
res, err := w.resolveSelf()
if err != nil {
err.Push(fmt.Errorf("resolve: error while resolving %s", w.req.Name()))
return nil, err
}
g.AddChild(res)
}
return g, nil
}
// start launches a goroutine in charge of resolving the request topic, and then serving the result of this resolution
// to subsequent requests on the same topic
func (w *worker) start() {
go func() {
result, err := w.resolve()
if err != nil {
result = nil
err.Push(fmt.Errorf("start: error while resolving dependency graph of %s", w.req.Name()))
}
for req := range w.reqs {
req.SetResult(result, err)
}
w.joinChan <- true
}()
}
// stop is to be called during the cleanup of the worker. It shuts down the goroutine started by start() and waits for
// it to actually end.
func (w *worker) stop() bool {
if w.closedReqChan {
return false
}
close(w.reqs)
w.closedReqChan = true
<-w.joinChan
close(w.joinChan)
return true
}

errors/dns.go (new file)
@@ -0,0 +1,156 @@
package errors
import (
"encoding/json"
"fmt"
"github.com/miekg/dns"
"net"
)
const (
UDP_TRANSPORT = 17
TCP_TRANSPORT = 6
)
var PROTO_TO_STR = map[int]string{
TCP_TRANSPORT: "TCP",
UDP_TRANSPORT: "UDP",
}
var STR_TO_PROTO = map[string]int{
"": UDP_TRANSPORT,
"TCP": TCP_TRANSPORT,
"tcp": TCP_TRANSPORT,
"UDP": UDP_TRANSPORT,
"udp": UDP_TRANSPORT,
}
type serializedServfailError struct {
Type string `json:"type"`
Qname string `json:"qname"`
Qtype string `json:"qtype"`
Addr string `json:"ip"`
Proto string `json:"protocol"`
}
type ServfailError struct {
qname string
qtype uint16
addr net.IP
proto int
}
func NewServfailError(qname string, qtype uint16, addr net.IP, proto int) *ServfailError {
se := new(ServfailError)
se.qname = qname
se.qtype = qtype
se.addr = addr
se.proto = proto
return se
}
func (se *ServfailError) MarshalJSON() ([]byte, error) {
sse := new(serializedServfailError)
sse.Type = dns.RcodeToString[dns.RcodeServerFailure]
sse.Qname = se.qname
sse.Qtype = dns.TypeToString[se.qtype]
sse.Addr = se.addr.String()
sse.Proto = PROTO_TO_STR[se.proto]
return json.Marshal(sse)
}
func (se *ServfailError) UnmarshalJSON(bstr []byte) error {
sse := new(serializedServfailError)
if err := json.Unmarshal(bstr, sse); err != nil {
return err
}
se.qname = sse.Qname
se.qtype = dns.StringToType[sse.Qtype]
se.addr = net.ParseIP(sse.Addr)
se.proto = STR_TO_PROTO[sse.Proto]
return nil
}
func (se *ServfailError) Error() string {
return fmt.Sprintf("received a SERVFAIL while trying to query %s %s? from %s with %s", se.qname, dns.TypeToString[se.qtype], se.addr.String(), PROTO_TO_STR[se.proto])
}
type serializedNXDomainError struct {
Type string `json:"type"`
Qname string `json:"qname"`
Qtype string `json:"qtype"`
Addr string `json:"ip"`
Proto string `json:"protocol"`
}
type NXDomainError struct {
qname string
qtype uint16
addr net.IP
proto int
}
func NewNXDomainError(qname string, qtype uint16, addr net.IP, proto int) *NXDomainError {
nx := new(NXDomainError)
nx.qname = qname
nx.qtype = qtype
nx.addr = addr
nx.proto = proto
return nx
}
func (nx *NXDomainError) Error() string {
return fmt.Sprintf("received an NXDomain while trying to query %s %s? from %s with %s", nx.qname, dns.TypeToString[nx.qtype], nx.addr.String(), PROTO_TO_STR[nx.proto])
}
func (nx *NXDomainError) MarshalJSON() ([]byte, error) {
snx := new(serializedNXDomainError)
snx.Type = dns.RcodeToString[dns.RcodeNameError]
snx.Qname = nx.qname
snx.Qtype = dns.TypeToString[nx.qtype]
snx.Addr = nx.addr.String()
snx.Proto = PROTO_TO_STR[nx.proto]
return json.Marshal(snx)
}
func (nx *NXDomainError) UnmarshalJSON(bstr []byte) error {
snx := new(serializedNXDomainError)
if err := json.Unmarshal(bstr, snx); err != nil {
return err
}
nx.qname = snx.Qname
nx.qtype = dns.StringToType[snx.Qtype]
nx.addr = net.ParseIP(snx.Addr)
nx.proto = STR_TO_PROTO[snx.Proto]
return nil
}
type serializedNoNameError struct {
Name string `json:"name"`
}
type NoNameServerError struct {
name string
}
func (ne *NoNameServerError) MarshalJSON() ([]byte, error) {
sne := new(serializedNoNameError)
sne.Name = ne.name
return json.Marshal(sne)
}
func (ne *NoNameServerError) UnmarshalJSON(bstr []byte) error {
sne := new(serializedNoNameError)
if err := json.Unmarshal(bstr, sne); err != nil {
return err
}
ne.name = sne.Name
return nil
}
func NewNoNameServerError(name string) *NoNameServerError {
return &NoNameServerError{name}
}
func (ne *NoNameServerError) Error() string {
return fmt.Sprintf("%s has no nameservers", ne.name)
}

errors/stack.go (new file)
@@ -0,0 +1,119 @@
package errors
import (
"strings"
"encoding/json"
"errors"
"github.com/miekg/dns"
"net"
"fmt"
)
type ErrorStack struct {
errors []error
}
func NewErrorStack(err error) *ErrorStack {
s := new(ErrorStack)
s.Push(err)
return s
}
func (es *ErrorStack) Copy() *ErrorStack {
newStack := new(ErrorStack)
for _, err := range es.errors {
// provision for when an error type will require a deepcopy
switch typedErr := err.(type) {
/* case *NXDomainError:
newStack.errors = append(newStack.errors, err)
case *ServfailError:
newStack.errors = append(newStack.errors, err)
case *NoNameServerError:
newStack.errors = append(newStack.errors, err)
case *TimeoutError:
newStack.errors = append(newStack.errors, err)*/
default:
_ = typedErr
newStack.errors = append(newStack.errors, err)
}
}
return newStack
}
func (es *ErrorStack) MarshalJSON() ([]byte, error) {
var ses []interface{}
for _, err := range es.errors {
switch typedErr := err.(type) {
case *NXDomainError:
ses = append(ses, typedErr)
case *ServfailError:
ses = append(ses, typedErr)
case *NoNameServerError:
ses = append(ses, typedErr)
default:
ses = append(ses, typedErr.Error())
}
}
return json.Marshal(ses)
}
func (es *ErrorStack) UnmarshalJSON(bstr []byte) error {
var ses []interface{}
if err := json.Unmarshal(bstr, &ses) ; err != nil {
return err
}
for _, err := range ses {
switch typedErr := err.(type) {
case string:
es.errors = append(es.errors, errors.New(typedErr))
case map[string]interface{}:
if typeVal, ok := typedErr["type"] ; ok {
if typeVal.(string) == dns.RcodeToString[dns.RcodeServerFailure] {
es.errors = append(es.errors, NewServfailError(typedErr["qname"].(string), dns.StringToType[typedErr["qtype"].(string)], net.ParseIP(typedErr["ip"].(string)), STR_TO_PROTO[typedErr["protocol"].(string)]))
} else if typeVal.(string) == dns.RcodeToString[dns.RcodeNameError] {
es.errors = append(es.errors, NewNXDomainError(typedErr["qname"].(string), dns.StringToType[typedErr["qtype"].(string)], net.ParseIP(typedErr["ip"].(string)), STR_TO_PROTO[typedErr["protocol"].(string)]))
} else {
panic(fmt.Sprintf("missing case: type unknown: %s", typeVal))
}
} else if name, ok := typedErr["name"] ; ok {
es.errors = append(es.errors, NewNoNameServerError(name.(string)))
}
default:
panic("missing case: not a string nor a map?")
}
}
return nil
}
func (es *ErrorStack) Push(err error) {
es.errors = append(es.errors, err)
}
func (es *ErrorStack) OriginalError() error {
if len(es.errors) > 0 {
return es.errors[0]
}
return nil
}
func (es *ErrorStack) LatestError() error {
if len(es.errors) > 0 {
return es.errors[len(es.errors)-1]
}
return nil
}
func (es *ErrorStack) Error() string {
errCount := len(es.errors)
l := make([]string, errCount)
// Report the most recently pushed error first. A plain reversal also covers
// odd counts, whose middle element a half-swap loop would leave empty.
for i, err := range es.errors {
l[errCount-1-i] = err.Error()
}
return strings.Join(l, ", ")
}

errors/timeout.go (new file)
@@ -0,0 +1,19 @@
package errors
import "fmt"
type TimeoutError struct {
operation string
requestTopic string
}
func NewTimeoutError(operation, topic string) *TimeoutError {
te := new(TimeoutError)
te.operation = operation
te.requestTopic = topic
return te
}
func (te *TimeoutError) Error() string {
return fmt.Sprintf("timeout while performing \"%s\" on \"%s\"", te.operation, te.requestTopic)
}

graph/aliasName.go (new file)
@@ -0,0 +1,107 @@
package graph
import (
"crypto/sha256"
"encoding/json"
"github.com/miekg/dns"
"strings"
)
/* serializedAliasNode is a proxy struct used to serialize an Alias node into JSON.
The AliasNode struct is not directly used because the Go json module requires that attributes must be exported for it
to work, and AliasNode struct attributes have no other reason for being exported.
*/
type serializedAliasNode struct {
Target string `json:"target"`
Source string `json:"source"`
}
// AliasNode represents a CNAME in the dependency graph of a name.
type AliasNode struct {
// target is the right-hand name of the CNAME RR
target string
// source is the owner name of the CNAME RR
source string
// parentNode is a reference to the parent node in the dependency graph. This is used to visit the graph from
// the leaves to the root
parentNode Node
}
/* NewAliasNode returns a new instance of AliasNode after initializing it.
target is the right-hand name of the CNAME RR
source is the owner name of the CNAME RR
*/
func NewAliasNode(target, source string) *AliasNode {
n := new(AliasNode)
n.target = strings.ToLower(dns.Fqdn(target))
n.source = strings.ToLower(dns.Fqdn(source))
return n
}
// Implements json.Marshaler
func (n *AliasNode) MarshalJSON() ([]byte, error) {
sn := new(serializedAliasNode)
sn.Target = n.target
sn.Source = n.source
return json.Marshal(sn)
}
// Implements json.Unmarshaler
func (n *AliasNode) UnmarshalJSON(bstr []byte) error {
sn := new(serializedAliasNode)
err := json.Unmarshal(bstr, sn)
if err != nil {
return err
}
n.target = sn.Target
n.source = sn.Source
return nil
}
func (n *AliasNode) Target() string {
return n.target
}
func (n *AliasNode) Source() string {
return n.source
}
func (n *AliasNode) String() string {
jsonbstr, err := json.Marshal(n)
if err != nil {
return ""
}
return string(jsonbstr)
}
func (n *AliasNode) deepcopy() Node {
nn := new(AliasNode)
nn.target = n.target
nn.source = n.source
nn.parentNode = n.parentNode
return nn
}
func (n *AliasNode) setParent(g Node) {
n.parentNode = g
}
func (n *AliasNode) parent() Node {
return n.parentNode
}
// similar compares the receiver with the o LeafNode and returns true if o is also an AliasNode with the same target.
func (n *AliasNode) similar(o LeafNode) bool {
otherDomain, ok := o.(*AliasNode)
// It is safe to use == here to compare domain names b/c NewAliasNode performs canonicalization of the domain names
return ok && n.target == otherDomain.target //&& n.source == otherDomain.source
}
func (n *AliasNode) hash() [8]byte {
var ret [8]byte
h := sha256.Sum256([]byte(n.target + n.source))
copy(ret[:], h[:8])
return ret
}

graph/analysis.go (new file)
@@ -0,0 +1,478 @@
package graph
import (
"fmt"
"github.com/hashicorp/go-immutable-radix"
"github.com/deckarep/golang-set"
"net"
"github.com/ANSSI-FR/transdep/tools"
)
/* simplifyRelWithCycle recursively visits the tree and bubbles up Cycle instances found in AND Relations; Cycles
found in OR Relations are neutral elements and are left in place.
It also simplifies relation nodes with only one child by bubbling up the child.
This function returns true if the children list of the receiver was modified.
*/
func (rn *RelationshipNode) simplifyRelWithCycle() bool {
// newChildren is the list of children of the receiver after this function actions.
var newChildren []Node
modif := false
childrenToAnalyze := rn.children[:]
Outerloop:
for len(childrenToAnalyze) != 0 {
// mergedChildren will contain nodes contained in a child relation node which, itself, only has one child.
// For instance, if a node A has a child B, and B only child is C, then B is suppressed from A's children
// and C is added to mergedChildren.
var mergedChildren []Node
Innerloop:
for _, chld := range childrenToAnalyze {
if dg, ok := chld.(*RelationshipNode); ok {
// If the child node is a relationship, visit the child recursively
modif = dg.simplifyRelWithCycle() || modif
// Now, if the child, after the recursive visit only has one child, bubble up that child
if len(dg.children) == 1 {
mergedChildren = append(mergedChildren, dg.children[0])
modif = true
// We continue, because this child node will not be added back to the children of the receiver
continue Innerloop
}
}
if _, ok := chld.(*Cycle); ok {
// Implicit: if the relation is not an AND, it is a OR. In OR relations, Cycles are a neutral element,
// like a 1 in a multiplicative expression.
if rn.relation == AND_REL && len(rn.children) > 1 {
// If the considered child is a Cycle and the receiver is an AND relation, then the receiver
// evaluation is summarized by this Cycle (because a Cycle in a AND relation is like a 0 in a
// multiplicative expression), so we just set the receiver's only child to a Cycle and don't process
// the remaining children.
newChildren = []Node{new(Cycle)}
modif = true
break Outerloop
}
}
// Add this node back as a child of the receiver (Cycles in OR relations are kept: they are neutral there)
newChildren = append(newChildren, chld)
}
// If we have bubbled up some grand-children nodes, we need to analyse them as children of the receiver
childrenToAnalyze = mergedChildren
}
rn.children = newChildren
return modif
}
/* auxSimplifyGraph recursively visits the graph and simplifies it. Simplification is done by merging relation
nodes when the receiver and one of its child relation nodes have the same relation type. Child relation nodes are like
parentheses in a mathematical expression: 1 + (2*3 + 4) is equivalent to 1 + 2*3 + 4 and 2 * (3 * 4) is equivalent
to 2 * 3 * 4. Simplifying the graph that way reduces the depth of the graph and accelerates future visits.
This function returns true if the graph/tree below the receiver was altered
*/
func (rn *RelationshipNode) auxSimplifyGraph() bool {
var newChildren []Node
modif := false
// TODO I don't think I need to actually duplicate this
childrenToAnalyze := make([]Node, len(rn.children))
copy(childrenToAnalyze, rn.children)
for len(childrenToAnalyze) > 0 {
var mergedChildren []Node
for _, chldGraphNode := range childrenToAnalyze {
if chld, ok := chldGraphNode.(*RelationshipNode); ok {
if chld.relation == rn.relation {
// If the receiver's child currently considered is a RelationshipNode with the relation type as the
// receiver, then, add the children of this child node to the list of nodes that will be considered
// as children of the receiver.
mergedChildren = append(mergedChildren, chld.children...)
modif = true
} else {
// The child RelationshipNode node has a different relation type
// (AND containing an OR, or an OR containing an AND).
newChildren = append(newChildren, chldGraphNode)
}
} else {
// This child node is a LeafNode
newChildren = append(newChildren, chldGraphNode)
}
}
// TODO I don't think I need to actually duplicate this
childrenToAnalyze = make([]Node, len(mergedChildren))
copy(childrenToAnalyze, mergedChildren)
}
// TODO I don't think I need to actually duplicate this
rn.children = make([]Node, len(newChildren))
copy(rn.children, newChildren)
// Once the receiver simplified, we apply this function on all remaining children relation nodes
for _, chldGraphNode := range rn.children {
if chld, ok := chldGraphNode.(*RelationshipNode); ok {
modif = chld.auxSimplifyGraph() || modif
}
}
return modif
}
// SimplifyGraph creates a copy of the tree under the receiver and simplifies the copy by repeatedly applying
// auxSimplifyGraph and simplifyRelWithCycle until the tree is stable.
// The copy is then returned.
func (rn *RelationshipNode) SimplifyGraph() *RelationshipNode {
ng, ok := rn.deepcopy().(*RelationshipNode)
if !ok {
return nil
}
modif := true
for modif {
modif = false
modif = ng.auxSimplifyGraph() || modif
modif = ng.simplifyRelWithCycle() || modif
}
return ng
}
// buildLeafNodeInventory visits the tree under the receiver and returns the list of the LeafNodes. This list is built
// by visiting the tree recursively.
func (rn *RelationshipNode) buildLeafNodeInventory() []LeafNode {
l := make([]LeafNode, 0)
for _, absChld := range rn.children {
switch chld := absChld.(type) {
case *RelationshipNode:
l2 := chld.buildLeafNodeInventory()
l = append(l, l2...)
case LeafNode:
l = append(l, chld)
}
}
return l
}
// getSiblingsUsingSimilarity returns the nodes from the inventory that must be considered unavailable at the same
// time as leafNode, given the breakV4, breakV6 and DNSSECOnly assumptions.
func getSiblingsUsingSimilarity(leafNode LeafNode, inventory []LeafNode, breakV4, breakV6, DNSSECOnly bool) []LeafNode {
// siblings are the leafNodes that are considered unavailable during the analysis of leafNode.
// A node is considered unavailable if it is similar to leafNode (similarity being defined by the similar()
// implementation of the leafNode underlying type). Unsigned names are never considered unavailable when DNSSECOnly
// is true, and neither are alias names. Alias names are always ignored because they are never the actual source of
// an unavailability; either the zone that contains the alias is unavailable or the zone containing the target of
// the alias is unavailable.
// IPv4 addresses are always considered unavailable if breakV4 is true. The same applies to IPv6 addresses w.r.t.
// breakV6.
var siblings []LeafNode
for _, node := range inventory {
toIgnore := false
toAdd := false
switch n := node.(type) {
case *DomainNameNode:
if DNSSECOnly && !n.DNSSECProtected() {
toIgnore = true
}
case *AliasNode:
toIgnore = true
case *IPNode:
isV4 := n.IsV4()
if (breakV4 && isV4) || (breakV6 && !isV4) {
toAdd = true
}
case *Cycle:
toAdd = true
}
if toAdd || (!toIgnore && leafNode.similar(node)) {
siblings = append(siblings, node)
}
}
return siblings
}
/* testNodeCriticity returns true if the tree rooted at the receiver can no longer be resolved once every node in
siblings is considered unavailable.
siblings is the failure hypothesis under test; it is built by one of the getSiblings* helpers, which factor in
external conditions such as whether the IPv4 or the IPv6 network is down, or whether only DNSSEC-protected zones
may break (e.g. in case of invalid/expired record signatures, or DS/DNSKEY algorithm mismatches) versus all zones
(e.g. truncated zone, corrupted data, etc.).
*/
func (rn *RelationshipNode) testNodeCriticity(siblings []LeafNode) bool {
// The following loop's purpose is to bubble up the unavailability markers of the sibling leafNodes. If an
// unavailable node is a child of an AND relationship, the whole relationship is unavailable. If an unavailable
// node is a child of an OR relationship, the whole relationship is unavailable only if all of its children are
// unavailable.
// The algorithm terminates when the tree root is added to the new unavailable node list or when no more
// unavailability markers may bubble up.
// Since multiple branches may bubble unavailability markers up to the same node, an "and" node bubbles up only
// once, so that it does not skew the "or" counters; this is enforced by marking it as "already bubbled", i.e. by
// inserting it in the andSet. The number of children of an "or" relationship that have bubbled up an
// unavailability marker is stored in the orSet variable.
orSet := make(map[*RelationshipNode]int)
andSet := make(map[*RelationshipNode]bool)
var unavailableNodes []Node
for _, n := range siblings {
unavailableNodes = append(unavailableNodes, n)
}
for len(unavailableNodes) > 0 {
nodesToHandle := unavailableNodes
unavailableNodes = []Node{}
for _, node := range nodesToHandle {
parent := node.parent()
if parent == nil {
// if "node" is the root node
return true
}
n := parent.(*RelationshipNode)
if n.relation == AND_REL {
if _, ok := andSet[n]; !ok {
andSet[n] = true
unavailableNodes = append(unavailableNodes, n)
}
} else {
if v, ok := orSet[n]; ok {
orSet[n] = v + 1
} else {
orSet[n] = 1
}
if len(n.children) == orSet[n] {
unavailableNodes = append(unavailableNodes, n)
}
}
}
}
return false
}
// getSiblingsByPrefixCloseness returns the inventory nodes whose addresses share a prefix with n, when n is an
// IPNode; other leaf types have no prefix-based siblings.
func getSiblingsByPrefixCloseness(n LeafNode, inventory []LeafNode) []LeafNode {
if ipn, ok := n.(*IPNode) ; ok {
return getSiblingsUsingFilteringFun(inventory, ipn.similarPrefix)
}
return []LeafNode{}
}
// findMandatoryNodesUsingPrefixCloseness is a findMandatoryNodes variant where nodes fail together when their
// addresses share a prefix.
func (rn *RelationshipNode) findMandatoryNodesUsingPrefixCloseness(inventory []LeafNode)(mandatoryNodes, optionalNodes mapset.Set) {
return rn.findMandatoryNodes(inventory, getSiblingsByPrefixCloseness)
}
// findMandatoryNodesUsingSimilarity is a findMandatoryNodes variant where nodes fail together when they are
// similar, under the breakV4, breakV6 and DNSSECOnly assumptions.
func (rn *RelationshipNode) findMandatoryNodesUsingSimilarity(inventory []LeafNode, breakV4, breakV6, DNSSECOnly bool) (mandatoryNodes, optionalNodes mapset.Set) {
getSiblingsFun := func(n LeafNode, inv []LeafNode) []LeafNode {
return getSiblingsUsingSimilarity(n, inv, breakV4, breakV6, DNSSECOnly)
}
return rn.findMandatoryNodes(inventory, getSiblingsFun)
}
// getSiblingsUsingFilteringFun returns the inventory nodes accepted by customFilterFun; Cycle nodes are always
// included.
func getSiblingsUsingFilteringFun(inventory []LeafNode, customFilterFun func(lf LeafNode) bool) []LeafNode {
var siblings []LeafNode
for _, lf := range inventory {
if _, ok := lf.(*Cycle) ; ok || customFilterFun(lf) {
siblings = append(siblings, lf)
}
}
return siblings
}
// getSiblingsByASN returns the inventory nodes announced by the same ASN as n, when n is an IPNode; other leaf
// types have no ASN-based siblings.
func getSiblingsByASN(n LeafNode, inventory []LeafNode) []LeafNode {
if ipn, ok := n.(*IPNode) ; ok {
return getSiblingsUsingFilteringFun(inventory, ipn.similarASN)
}
return []LeafNode{}
}
// findMandatoryNodesUsingASN is a findMandatoryNodes variant where nodes fail together when they are announced by
// the same ASN.
func (rn *RelationshipNode) findMandatoryNodesUsingASN(inventory []LeafNode) (mandatoryNodes, optionalNodes mapset.Set) {
return rn.findMandatoryNodes(inventory, getSiblingsByASN)
}
// findMandatoryNodes tests each leafNode from the inventory and splits the inventory into the set of leafNodes
// whose unavailability (along with that of their siblings, as computed by getSiblingsFun) breaks the resolution,
// and the set of those whose unavailability does not.
func (rn *RelationshipNode) findMandatoryNodes(inventory []LeafNode, getSiblingsFun func(LeafNode, []LeafNode) []LeafNode) (mandatoryNodes, optionalNodes mapset.Set) {