Title: Dense Subset Sum May Be the Hardest
Abstract: The SUBSET SUM problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O*(2^{n/2})-time algorithm for SUBSET SUM by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O*(2^{(0.5-delta)n})-time algorithm, for some constant delta > 0. Continuing an earlier work [STACS 2015], we study SUBSET SUM parameterized by the maximum bin size beta, defined as the largest number of subsets of the n input integers that yield the same sum. For every epsilon > 0 we give a truly faster algorithm for instances with beta <= 2^{(0.5-epsilon)n}, as well as for instances with beta >= 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/log_2(t): if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance. This goes against the current intuition that instances of density 1 are the hardest, and is therefore a step toward answering the open question in the affirmative. Our results stem from a novel combinatorial analysis of mixings of earlier algorithms for SUBSET SUM and from a study of an extremal question in additive combinatorics connected to the problem of Uniquely Decodable Code Pairs in information theory.
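The abstract refers to the Horowitz–Sahni meet-in-the-middle algorithm and to the maximum bin size beta. The following is a minimal illustrative sketch (not the paper's algorithm): `subset_sum_mitm` implements the classic O*(2^{n/2}) meet-in-the-middle idea, and `max_bin_size` computes beta by brute force for tiny instances; both function names are ours.

```python
from itertools import combinations
from bisect import bisect_left
from collections import Counter

def subset_sum_mitm(nums, t):
    """Decide SUBSET SUM via meet-in-the-middle (Horowitz-Sahni):
    enumerate the 2^(n/2) subset sums of each half, sort one side,
    and binary-search it for t - s for every sum s of the other side."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(arr):
        sums = [0]
        for x in arr:
            sums += [s + x for s in sums]  # each element doubles the list
        return sums

    right_sums = sorted(all_sums(right))
    for s in all_sums(left):
        i = bisect_left(right_sums, t - s)
        if i < len(right_sums) and right_sums[i] == t - s:
            return True
    return False

def max_bin_size(nums):
    """Maximum bin size beta: the largest number of subsets of nums
    that share the same sum (O(2^n) brute force, for illustration only)."""
    counts = Counter()
    for r in range(len(nums) + 1):
        for c in combinations(nums, r):
            counts[sum(c)] += 1
    return max(counts.values())
```

For example, `max_bin_size([1, 2, 3])` is 2, since exactly two subsets ({3} and {1, 2}) sum to 3; instances where beta is large are the "dense" regime the paper targets.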
Year: 2016
DOI: 10.4230/LIPIcs.STACS.2016.13
Venue: Leibniz International Proceedings in Informatics
Keywords: subset sum, additive combinatorics, exponential-time algorithm, homomorphic hashing, Littlewood–Offord problem
Field: Information theory, Integer, Discrete mathematics, Combinatorics, Parameterized complexity, Littlewood–Offord problem, Subset sum problem, Computer science, Intuition, Combinatorial analysis, Dense set
DocType: Conference
Volume: 47
ISSN: 1868-8969
Citations: 0
PageRank: 0.34
References: 3
Authors: 4
Name             Order  Citations  PageRank
Per Austrin      1      187        18.31
Petteri Kaski    2      912        66.03
Mikko Koivisto   3      803        55.81
Jesper Nederlof  4      294        24.22