Abstract |
---|
We investigate the complexity of the reachability problem for (deep) neural networks: does the network compute a valid output for some valid input? It was recently claimed that the problem is NP-complete for general neural networks and conjunctive input/output specifications. We repair some flaws in the original upper and lower bound proofs. We then show that NP-hardness already holds for restricted classes of simple specifications and neural networks with just one layer, as well as for neural networks with minimal requirements on the occurring parameters.
Year | DOI | Venue
---|---|---
2021 | 10.1007/978-3-030-89716-1_10 | RP
DocType | Citations | PageRank
---|---|---
Conference | 0 | 0.34

References | Authors
---|---
0 | 2
Name | Order | Citations | PageRank
---|---|---|---
Marco Sälzer | 1 | 0 | 1.01
Martin Lange | 2 | 447 | 22.83