The aim of the unsolvability IPC is to test the ability of classical automated planners to detect when a planning task has no solution. The benchmarks used in the track will contain a mix of solvable and unsolvable instances, and points will be awarded only for correctly identifying the unsolvable instances.
| Milestone | Date |
| --- | --- |
| Announcement of the track | Jun 2015 |
| Call for domains / expression of interest | Aug 2015 |
| Demo problems provided | Nov 2015 |
| (optional) Initial feedback on buggy output | Feb 2016 |
| Domain submission deadline | Feb 2016 |
| Final planner submission deadline | Mar 2016 |
| Paper submission deadline | May 2016 |
| Contest run | Apr - May 2016 |
| Results announced in London | Jun 2016 |
Some details on how the planners must behave:
Example unsolvable instances can be found [here]. The first two problems are small instances whose unsolvability is detected by most planners (one of them has no delete-relaxed solution), and the latter two are larger instances that exhibit the same behaviour.
While not every benchmark domain will be of this form, an ideal domain will have the following properties:
These properties are meant to discourage inappropriate entries that return 'Unsolvable' whenever a problem is too difficult to find a plan for -- false positives will be penalized heavily.
The primary focus will be on coverage: the number of problems correctly identified as having no solution. Ties will be broken by the standard IPC time score over the unsolvable instances. No points will be awarded for the solvable instances in the domain sets -- they are there primarily to deter planners from simply returning 'Unsolvable' for every problem.
Similar to the deterministic optimal IPC track, a solver will be disqualified for a domain if it returns a false positive (saying 'Unsolvable' when the problem is in fact solvable).
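The scoring and disqualification rules above can be sketched in a few lines of Python. This is only an illustration, not the official evaluation script: the 1800 s time limit and the logarithmic time-score formula are assumptions (one common IPC variant), and the `score_domain` helper and its data layout are hypothetical.

```python
import math

TIME_LIMIT = 1800.0  # assumed per-task time limit in seconds (hypothetical)

def score_domain(results, truth):
    """Score one planner on one domain.

    results: {task: (verdict, runtime_seconds)} where verdict is
             "unsolvable", "solved", or None (no answer).
    truth:   {task: True if the task is genuinely unsolvable}.
    Returns (coverage, time_score), or None if the planner is
    disqualified on this domain for a false positive.
    """
    coverage = 0
    time_score = 0.0
    for task, (verdict, runtime) in results.items():
        if verdict == "unsolvable":
            if not truth[task]:
                return None  # false positive: disqualified for the domain
            coverage += 1
            # IPC-style time score (assumed variant): full point within
            # 1 second, logarithmically discounted up to the time limit.
            if runtime <= 1.0:
                time_score += 1.0
            else:
                time_score += 1.0 - math.log(runtime) / math.log(TIME_LIMIT)
        # solvable instances earn no points either way
    return coverage, time_score
```

For example, a planner that correctly labels two unsolvable tasks and times out on a solvable one keeps its score, while a single 'Unsolvable' answer on a solvable task forfeits the whole domain.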