The EM algorithm has two steps. First you compute the expected complete-data log-likelihood, where the expectation is taken under the current parameter values and conditional on whatever data you can see; then you choose new parameter values that maximize this expectation, and iterate with those new values.
You've probably read that much, so I'll just go through it slowly for this example.
Here your observed data is $Y_i = I(X_i>c_i),$ in other words you only know which variables were greater than their thresholds.
Start with the complete-data log-likelihood. For exponential data with rate $\lambda$ this is just $$ l(\lambda;X_i) = n\ln(\lambda) - \lambda\sum_i X_i.$$
Now we need to take the expected value of this under the current parameter $\lambda_t$ and conditional on our observations $y_i$ of the $Y_i.$ The expected value is $$ E(l(\lambda;X_i)|Y_i=y_i,\lambda_t) = n\ln(\lambda)-\lambda\sum_iE(X_i|Y_i=y_i,\lambda_t).$$
This is maximized at $$ \lambda_{t+1} = \frac{n}{\sum_iE(X_i|Y_i=y_i,\lambda_t)}$$
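To see why, set the derivative of the expected log-likelihood with respect to $\lambda$ to zero: $$ \frac{\partial}{\partial\lambda}\left(n\ln(\lambda)-\lambda\sum_iE(X_i|Y_i=y_i,\lambda_t)\right) = \frac{n}{\lambda} - \sum_iE(X_i|Y_i=y_i,\lambda_t) = 0,$$ and solving for $\lambda$ gives the update above.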
So to finish, we need to compute the conditional expected values of the $X_i.$ If $y_i=1$ that means $X_i>c_i,$ and by the memoryless property $$E(X_i|X_i>c_i,\lambda_t) = c_i + \frac{1}{\lambda_t}.$$ If $y_i=0$ that means $X_i\le c_i,$ and a short integration by parts gives the truncated mean $$E(X_i|X_i\le c_i,\lambda_t) = \frac{1}{\lambda_t} - \frac{c_i}{e^{\lambda_t c_i}-1}.$$
We can write this succinctly as $$ E(X_i|Y_i=y_i,\lambda_t) = \frac{1}{\lambda_t}\left(1+ y_i\lambda_tc_i + (y_i-1)\left(\frac{\lambda_tc_i}{e^{\lambda_t c_i}-1}\right)\right).$$
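As a quick sanity check on these two conditional means (a minimal Python sketch; the particular values of $\lambda$ and $c$ are arbitrary choices), compare Monte Carlo averages against the closed forms:

```python
import math
import random

lam, c = 1.3, 0.8  # arbitrary rate and threshold for the check
random.seed(1)
draws = [random.expovariate(lam) for _ in range(200_000)]

above = [x for x in draws if x > c]   # the y = 1 case: X > c
below = [x for x in draws if x <= c]  # the y = 0 case: X <= c

# closed forms: E[X | X > c] = c + 1/lam (memorylessness),
# E[X | X <= c] = 1/lam - c/(e^{lam*c} - 1)
mean_above = c + 1 / lam
mean_below = 1 / lam - c / math.expm1(lam * c)

print(sum(above) / len(above), mean_above)
print(sum(below) / len(below), mean_below)
```

With 200,000 draws the empirical averages should agree with the closed forms to a couple of decimal places.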
So we sum this over $i$ and then plug into the expression $$ \lambda_{t+1} = \frac{n}{\sum_iE(X_i|Y_i=y_i,\lambda_t)}$$
to get the new value $\lambda_{t+1}.$ And then we iterate: do it all over again, using $\lambda_{t+1}$ as the starting value in place of $\lambda_t.$
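Putting the pieces together, here is a minimal sketch of the whole iteration in Python (the true rate, the thresholds, and the starting value are all arbitrary choices for the simulation):

```python
import math
import random

def cond_mean(y, c, lam):
    # E[X | Y = y] for X ~ Exp(lam) observed only through Y = I(X > c)
    if y == 1:
        # memorylessness: E[X | X > c] = c + 1/lam
        return c + 1.0 / lam
    # truncated mean: E[X | X <= c] = 1/lam - c/(e^{lam*c} - 1)
    return 1.0 / lam - c / math.expm1(lam * c)

def em(ys, cs, lam0=1.0, tol=1e-8, max_iter=500):
    lam = lam0
    n = len(ys)
    for _ in range(max_iter):
        # E-step: sum of conditional expectations under the current lam
        s = sum(cond_mean(y, c, lam) for y, c in zip(ys, cs))
        # M-step: lam_{t+1} = n / sum_i E(X_i | Y_i = y_i, lam_t)
        new_lam = n / s
        if abs(new_lam - lam) < tol:
            return new_lam
        lam = new_lam
    return lam

# simulate censored-indicator data from a known rate
random.seed(0)
true_lam, n = 1.5, 5000
cs = [random.uniform(0.5, 2.0) for _ in range(n)]
xs = [random.expovariate(true_lam) for _ in range(n)]
ys = [1 if x > c else 0 for x, c in zip(xs, cs)]

print(round(em(ys, cs), 3))
```

With this much data the estimate should land close to the true rate of 1.5, even though we only ever see which observations exceeded their thresholds.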