Before computing the epipolar line, I will assume you are already familiar with the definitions of the epipole, the epipolar plane, and the essential matrix.
Suppose there are two cameras, $1$ and $2$, and let $R$ and $T$ be the rotation and translation from camera frame 1 to camera frame 2, so that a point $p_1$ in frame 1 maps to $p_2 = Rp_1 + T$ in frame 2. The essential matrix is then $E=[T]_{\times}R$, where $[T]_{\times}$ is the skew-symmetric matrix associated with $T$.
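For a quick numerical check, here is a minimal NumPy sketch of building $E$ from $R$ and $T$; the pose values are made-up placeholders for illustration:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x, so that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical relative pose (made-up numbers): camera 2 is rotated
# 10 degrees about the y-axis and shifted along x relative to camera 1.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
T = np.array([1.0, 0.0, 0.0])

E = skew(T) @ R  # essential matrix E = [T]_x R
```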
Now, given a point in 3D space, first find its coordinates in the two camera frames, say $p_1$ in camera frame 1 and $p_2$ in camera frame 2. Its projection onto the normalized image plane ($z=1$) of camera 1 is $m_1=[p_{1x}/p_{1z},\ p_{1y}/p_{1z},\ 1]^T$, and its projection in camera 2 is $m_2=[p_{2x}/p_{2z},\ p_{2y}/p_{2z},\ 1]^T$. The epipolar line in image 1 is then $l_1 \sim E^T m_2$, and the epipolar line in image 2 is $l_2 \sim E m_1$, where $\sim$ means equality up to scale.
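Continuing the sketch above with a made-up 3D point, you can compute both projections and epipolar lines and verify the epipolar constraint numerically:

```python
# Take a made-up point in camera frame 1, map it into camera frame 2
# with p2 = R p1 + T, and project both onto the z = 1 planes.
p1 = np.array([0.3, -0.2, 4.0])
p2 = R @ p1 + T

m1 = p1 / p1[2]  # projection onto the z = 1 plane of camera 1
m2 = p2 / p2[2]  # projection onto the z = 1 plane of camera 2

l1 = E.T @ m2    # epipolar line in image 1 (defined up to scale)
l2 = E @ m1      # epipolar line in image 2 (defined up to scale)

# Sanity check: each projection lies on its epipolar line, which is
# exactly the epipolar constraint m2^T E m1 = 0.
print(m2 @ E @ m1)  # ~1e-16, i.e. zero up to floating-point error
```

Here a line $l=(a,b,c)^T$ represents the set of points $(x,y)$ on the normalized plane satisfying $ax+by+c=0$.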
For your specific example, first compute the rotation and translation between the two camera frames. (Note that there are three frames in total: the two camera frames and one object frame, or world frame, which you should define in advance.) This gives you the essential matrix $E$. Next, compute the coordinates of the given point in each camera frame and the corresponding perspective projections. (Please use the normalized image plane $z=1$ instead of $z=10$; for $z=10$ you would additionally need an intrinsic calibration matrix.) This gives you $m_1$ and $m_2$. Finally, compute the two epipolar lines in the two images as described above.
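To make the whole recipe concrete, here is a rough, self-contained end-to-end sketch; the world-to-camera poses $R_1, t_1, R_2, t_2$ below are hypothetical placeholders, so substitute the actual poses from your example:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x with skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

# Hypothetical world-to-camera poses: p_cam_i = R_i @ p_world + t_i.
R1, t1 = np.eye(3), np.zeros(3)
theta = np.deg2rad(10.0)
R2 = np.array([[np.cos(theta), 0.0, np.sin(theta)],
               [0.0, 1.0, 0.0],
               [-np.sin(theta), 0.0, np.cos(theta)]])
t2 = np.array([-1.0, 0.0, 0.0])

# Relative pose from camera frame 1 to camera frame 2: p2 = R @ p1 + T.
R = R2 @ R1.T
T = t2 - R @ t1
E = skew(T) @ R

# A made-up world point; express it in each camera frame and project
# onto the normalized image planes (z = 1).
p_w = np.array([0.5, 0.2, 5.0])
p1 = R1 @ p_w + t1
p2 = R2 @ p_w + t2
m1 = p1 / p1[2]
m2 = p2 / p2[2]

l1 = E.T @ m2  # epipolar line in image 1, up to scale
l2 = E @ m1    # epipolar line in image 2, up to scale
print(m2 @ E @ m1)  # ~0: the epipolar constraint holds
```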
By the way, there are some good books you can refer to: 1. *An Invitation to 3-D Vision* (Ma, Soatto, Košecká, Sastry) 2. *Multiple View Geometry in Computer Vision* (Hartley, Zisserman). Lastly, this kind of problem may not be well suited to this math site; I just happen to have some experience in computer vision.