Recently, an autonomous vehicle with 3D sensors for navigation in rubber tree (Hevea brasiliensis) plantations has been presented. In this work, we present a machine vision system for detecting the tapping position and the rubber juice collecting cup in images, which can be deployed on such an autonomous platform. First, we describe an RGB-D image acquisition technique that uses artificial lighting to capture rubber trees in low-light conditions. Then, we present two tapping-position detection algorithms: a color-feature-based method with a sliding window, and a deep object detector. For detection on our custom dataset, we build a Faster R-CNN with a pre-trained MobileNetV2 backbone and fine-tune it. The results show that the deep detector outperforms our conventional detector, achieving an average precision of 0.92 on our dataset.
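The paper does not include code; the following is a minimal sketch of how a Faster R-CNN detector with a pre-trained MobileNetV2 backbone could be assembled for fine-tuning using torchvision. The class count (background, tapping position, collecting cup), anchor settings, and weight names are assumptions, not details taken from the paper.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# MobileNetV2 feature extractor pre-trained on ImageNet as the backbone.
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # depth of MobileNetV2's final feature map

# Anchor generator and RoI pooler for the single backbone feature map
# (sizes/aspect ratios are illustrative defaults, not the paper's values).
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

# Faster R-CNN head on top; 3 classes assumed:
# background, tapping position, collecting cup.
model = FasterRCNN(
    backbone,
    num_classes=3,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)

# Fine-tuning would then update all parameters on the custom RGB-D-derived
# images with a standard optimizer, e.g.:
# optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
```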