I don't think you can get a meaningful estimate of the total time, together with a probability of completion within that time, by calculating directly from those figures for the individual tasks. But there is a way to do it. First you need to assume a form of probability distribution for the individual task times. From these distributions you can derive a distribution for the total time, which lets you give an estimate that will be achieved with any desired probability.
An easy, albeit rough-and-ready, way to do this would be to assume normal distributions for the individual task times. While this is pretty unrealistic, it has two great virtues: first, the error from this choice of model tends to decrease as the number of tasks increases; second, and perhaps more importantly, the total-time distribution and estimates with associated probabilities can be calculated by simple arithmetic from standard tables.
The details on how to do this are widely known, and I (or many other contributors to this site) could provide you with them if you want.
Edit: We assume here that $n$ tasks are done in a particular order, with the distribution of times for each task conditional on the previous tasks having been completed. The time for task $i$ is modelled as normally distributed: $T_i \sim \mathrm N(\mu_i, \sigma_i^2)$, for $i=1,\ldots,n$, where $\mu_i$ is the expected time for completion of task $i$ (in this case the time within which there is a 50% probability of completion) and $\sigma_i^2$ is the associated variance, a measure of the uncertainty of $T_i$. To calculate $\mu_i$ and $\sigma_i$ (the square root of the variance, i.e. the standard deviation), you need two numbers, which can be estimated completion-within times for two different probabilities. These probabilities are most conveniently taken as 50% and 84.134%, the latter being the cumulative probability of a normal distribution one standard deviation above its mean. Then $\mu_i$ is the time within which there is a 50% probability of completion, and $\mu_i+\sigma_i$ is the time within which there is an 84.134% probability of completion. From the second time, you get $\sigma_i$ by subtracting $\mu_i$.
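For concreteness, here is a minimal sketch in Python of this estimation step, using made-up figures for three hypothetical tasks (every number is purely illustrative, not taken from your question):

```python
# Minimal sketch of the per-task estimation step described above.
# For each task, t50 is the estimated time with a 50% chance of
# completion and t84 the time with an ~84.134% chance; then
# mu_i = t50 and sigma_i = t84 - t50.

tasks = [
    (10.0, 13.0),   # (t50, t84) in hours -- illustrative figures only
    (4.0, 6.0),
    (8.0, 12.0),
]

params = [(t50, t84 - t50) for t50, t84 in tasks]   # (mu_i, sigma_i) per task
for i, (mu_i, sigma_i) in enumerate(params, start=1):
    print(f"Task {i}: mu = {mu_i:.1f} h, sigma = {sigma_i:.1f} h")
```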
With these $\mu_i$ and $\sigma_i$, calculate $\mu=\mu_1+\cdots+\mu_n$ and $\sigma^2=\sigma_1^2+\cdots+\sigma_n^2$. Now the time for the whole task is modelled as a random variable $T$ with $\mathrm N(\mu, \sigma^2)$ distribution. To estimate a completion-within time for the whole task with a given probability $p$, you need a table of cumulative probabilities for the standard normal distribution, together with the so-called z-score
$$Z=\dfrac{T-\mu}{\sigma}.$$
Taking the value of $Z$ whose cumulative probability is your chosen $p$, and making $T$ the subject of the formula above, we find that the whole task can be completed within time $T=\mu+\sigma Z$ with probability $p$.
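Here is the combining step as a sketch, continuing the same illustrative figures; `scipy.stats.norm.ppf` stands in for the printed z-table, and the target probability of 95% is just an assumed example:

```python
from math import sqrt
from scipy.stats import norm   # norm.ppf replaces the printed z-table

# (mu_i, sigma_i) per task, continuing the illustrative figures above.
params = [(10.0, 3.0), (4.0, 2.0), (8.0, 4.0)]

mu = sum(m for m, s in params)                 # mu = mu_1 + ... + mu_n
sigma = sqrt(sum(s ** 2 for m, s in params))   # sigma^2 = sigma_1^2 + ... + sigma_n^2

p = 0.95                    # desired probability of completion (assumed here)
z = norm.ppf(p)             # z-score whose cumulative probability is p
t = mu + sigma * z          # completion-within time T = mu + sigma * Z

print(f"mu = {mu:.1f} h, sigma = {sigma:.2f} h")
print(f"With probability {p:.0%} the whole task is finished within {t:.1f} h")
```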
The obvious criticism of the above procedure is that completion times for real-life tasks aren't normal: in particular, there is a minimum completion time even when the task goes as smoothly as possible (the truncation issue), whereas normally distributed times are unbounded, even including negative "times"; also, sometimes there are unexpected hold-ups which make the task take much longer than usual (the fat-tail issue). However, the magic of the central limit theorem of statistics diminishes the relevance of this criticism when the times for a number of tasks are added together. Under rather weak conditions, as the number of component tasks increases, the distribution of the total completion time tends towards the normal, even if the times for the component tasks are not at all normal. Also, the behaviour of the distribution for small (or negative) times isn't relevant to the upper estimate of completion time that you want to make.
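If you want to see the central limit theorem at work, a quick Monte Carlo sketch along the following lines (same illustrative figures, but with skewed, strictly positive lognormal task times substituted for the normal ones) lets you compare the normal-based completion-within time with a simulated one. With only three fairly skewed tasks the normal figure will typically fall a little short of the simulated quantile, and the gap narrows as tasks are added:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Same illustrative (mu_i, sigma_i) as above.
params = [(10.0, 3.0), (4.0, 2.0), (8.0, 4.0)]
p = 0.95
n_sim = 100_000

# Simulate each task time from a lognormal with the same mean and
# standard deviation: skewed, strictly positive, fatter right tail.
totals = np.zeros(n_sim)
for m, s in params:
    var_log = np.log(1 + (s / m) ** 2)     # lognormal parameters matching mean m, sd s
    mu_log = np.log(m) - var_log / 2
    totals += rng.lognormal(mu_log, np.sqrt(var_log), n_sim)

mu = sum(m for m, s in params)
sigma = np.sqrt(sum(s ** 2 for m, s in params))

print("Normal-model 95% completion-within time:", round(mu + sigma * norm.ppf(p), 1))
print("Simulated    95% completion-within time:", round(float(np.quantile(totals, p)), 1))
```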
The theory underlying the above procedure can be found in any standard textbook of statistics. If you want more accurate modelling using truncated fat-tailed distributions, you will need to consult advanced specialist works in applied statistics. But I doubt whether the effort involved would bring a commensurate return in the form of increased accuracy.