Time Limit : sec, Memory Limit : KB


We are developing the world's coolest AI robot product. After a long struggle, we finally managed to send our product at revision $R_{RC}$ to the QA team as a release candidate. However, they reported that some tests failed! Because we were too lazy to set up a continuous integration system, we have no idea when our software broke. We only know that the software passed all the tests at the past revision $R_{PASS}$. To determine the revision $R_{ENBUG}$ ($R_{PASS} < R_{ENBUG} \leq R_{RC}$) at which our software started to fail, we must test our product revision by revision.

Here, we can assume the following conditions:

  • When we test at revision $R$, the test passes if $R < R_{ENBUG}$ and fails otherwise.
  • Every revision between $R_{PASS} + 1$ and $R_{RC}$ is equally likely to be $R_{ENBUG}$.

From the first assumption, we don't need to test every revision. All we have to do is find the revision $R$ such that the test at $R - 1$ passes and the test at $R$ fails. We have $K$ testing devices, so we can test at most $K$ different revisions simultaneously. We call this "parallel testing". Due to a restriction of the testing environment, we cannot start new tests until the current parallel testing finishes, even if we are not using all $K$ devices.
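To make the narrowing-down concrete, here is a small sketch (the helper name and signature are illustrative, not part of the problem) of how one parallel testing shrinks the candidate interval: the test at a revision $R$ fails iff $R \geq R_{ENBUG}$, so after testing several revisions at once, $R_{ENBUG}$ must lie between the last passing and the first failing tested revision.

```python
def run_parallel_testing(lo, hi, revs, r_enbug):
    """Simulate one parallel testing of the revisions in `revs`.

    Invariant: the test is known to pass at `lo` and fail at `hi`,
    so lo < r_enbug <= hi.  Returns (new_lo, new_hi, num_fails).
    """
    fails = [r for r in revs if r >= r_enbug]   # these tests fail
    passes = [r for r in revs if r < r_enbug]   # these tests pass
    new_lo = max([lo] + passes)                 # last revision known to pass
    new_hi = min([hi] + fails)                  # first revision known to fail
    return new_lo, new_hi, len(fails)

# Example with R_PASS = 1, R_RC = 10, true R_ENBUG = 7, testing revisions 4 and 7:
print(run_parallel_testing(1, 10, [4, 7], 7))  # -> (4, 7, 1)
```

After this round, $R_{ENBUG}$ is known to lie in $\{5, 6, 7\}$, and one of the two tests failed, so the round's cost would be $T_1$.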

Parallel testing incurs a cost: the more tests fail, the more expensive it becomes. If $i$ tests fail in a parallel testing, its cost is $T_i$ ($0 \leq i \leq K$). If we run parallel testings multiple times, the total cost is the sum of their costs.

Of course, we want to minimize the total cost of determining $R_{ENBUG}$ by carefully choosing how many and which revisions to test in each parallel testing. What is the minimum expected value of the total cost under an optimal strategy?
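One possible way to attack this (a sketch of my own derivation, not necessarily the intended solution) is a dynamic program over interval sizes. Let $F(n)$ be $n$ times the minimum expected cost when $R_{ENBUG}$ is uniform over $n$ candidate revisions. Choosing $k \leq K$ test revisions splits the candidates into $k+1$ non-empty segments, and if $R_{ENBUG}$ lands in the $j$-th segment from the top, exactly $j-1$ of the tests fail, so that round contributes $T_{j-1}$. The function name below is hypothetical.

```python
def min_expected_cost(r_pass, r_rc, K, T):
    """DP sketch: F[m] = m * (optimal expected cost over m candidates).

    d[t][m] = minimum F-scaled cost of covering m candidates with the top
    t segments (which see 0, 1, ..., t-1 failing tests respectively):
        d[1][m] = m*T[0] + F[m]
        d[t][m] = min over segment size s of s*T[t-1] + F[s] + d[t-1][m-s]
    Then F[m] = min over t in 2..min(K+1, m) of d[t][m], with F[1] = 0.
    """
    n = r_rc - r_pass                        # number of candidate revisions
    INF = float('inf')
    F = [0.0] * (n + 1)
    d = [[INF] * (n + 1) for _ in range(K + 2)]
    for m in range(1, n + 1):
        for t in range(2, min(K + 1, m) + 1):
            best = INF
            for s in range(1, m - t + 2):    # size of the bottom-most segment
                c = s * T[t - 1] + F[s] + d[t - 1][m - s]
                if c < best:
                    best = c
            d[t][m] = best
        if m > 1:
            F[m] = min(d[t][m] for t in range(2, min(K + 1, m) + 1))
        d[1][m] = m * T[0] + F[m]            # a single all-passing top segment
    return F[n] / n
```

For instance, with two candidates and $T_0 = 10$, $T_1 = 100$ (as in Sample Input 5), testing the single middle revision passes or fails with probability $1/2$ each, giving expected cost $(10 + 100)/2 = 55$, which the recurrence reproduces. Runtime is $O(K \cdot n^2)$, comfortable for $n < 1{,}000$ and $K \leq 30$.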


Input

The input consists of a single test case with the following format.

$R_{PASS}$ $R_{RC}$ $K$
$T_0$ $T_1$ ... $T_K$

$R_{PASS}$ and $R_{RC}$ are integers that represent the revision numbers of our software at which the test passed and failed, respectively. $1 \leq R_{PASS} < R_{RC} \leq 1,000$ holds. $K$ ($1 \leq K \leq 30$) is the maximum number of revisions we can test in a single parallel testing. $T_i$ is an integer that represents the cost of a parallel testing in which $i$ tests fail ($0 \leq i \leq K$). You can assume $1 \leq T_0 \leq T_1 \leq ... \leq T_K \leq 100,000$.


Output

Output the minimum expected value of the total cost. The output should not contain an error greater than 0.0001.

Sample Input 1

1 10 2
1 1 1

Output for the Sample Input 1


Sample Input 2

1 100 1
100 100

Output for the Sample Input 2


Sample Input 3

100 200 4
1 1 2 2 3

Output for the Sample Input 3


Sample Input 4

2 3 4
1 2 3 4 5

Output for the Sample Input 4


Sample Input 5

998 1000 4
10 100 1000 10000 100000

Output for the Sample Input 5