Reasons and Solutions - Missing Data in Cloud Extraction
Blank fields in Cloud Extraction can occur when:
1. The task is splittable and runs so fast in the cloud that some elements get skipped.
Tasks using the "Fixed List", "List of URLs", or "Text List" loop mode are splittable: the main task is split into sub-tasks that run on multiple cloud servers simultaneously. Each step therefore executes very quickly, and some pages may not be fully loaded before the task moves on to the next step.
2. The website you are after is multi-regional.
A multi-regional website may serve different page structures or content to visitors from different countries. When a task runs in the cloud, it is executed from our US-based IPs, so for tasks targeting websites outside the US, some data may be skipped simply because it cannot be found on the version of the page opened in the cloud (a rough way to check this outside Octoparse is sketched after this list).
3. Both of the above situations apply to the task.
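If you suspect the second cause, one rough check outside Octoparse is to fetch the page once with your own connection and once through a US-based proxy, then look for a piece of the expected content in both responses. This is only a minimal sketch: the URL, proxy address, and marker text below are placeholders, not values from any particular task.

```python
import requests

URL = "https://example.com/products"        # placeholder target page
US_PROXY = "http://us-proxy.example:8080"   # placeholder US-based proxy
MARKER = "Add to cart"                      # text you expect in the missing fields

# Fetch the page with your own IP, then through the US-based proxy
# (roughly what the cloud servers would see).
local_html = requests.get(URL, timeout=30).text
cloud_like_html = requests.get(
    URL, timeout=30, proxies={"http": US_PROXY, "https": US_PROXY}
).text

print("Marker found with your own IP:  ", MARKER in local_html)
print("Marker found via US-based proxy:", MARKER in cloud_like_html)
# If the marker appears only in the first response, the site is likely
# multi-regional and Local Extraction is the safer option.
```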
Here are common solutions for dealing with blank fields in Cloud Extraction.
1) To make sure the web page loads completely in the cloud, you can:
1. Increase the timeout for the "Go To Web Page" step: open the step's settings and raise the "Timeout" value under General.
2. Set up "Wait before action"
Every step created in the workflow can be given a waiting time before it executes.
3. Set up an anchor element to find before the action
This setting guarantees that extraction only starts after a certain element has been found on the page. You can use the XPath of any element from your desired fields.
First, click the "Extract Data" step. Then fill in the element's XPath and set "Wait before action" to 30s (a sketch of what this check amounts to follows the tip below).
Tips! How to get the XPath of a certain element on the page?
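For illustration, here is a minimal sketch, outside Octoparse, of what the anchor-element check amounts to: testing whether the chosen XPath matches anything in the loaded HTML. The URL and XPath below are placeholders; in Octoparse itself you only paste the XPath into the step settings as described above.

```python
import requests
from lxml import html

URL = "https://example.com/products"            # placeholder target page
ANCHOR_XPATH = "//div[@class='product-price']"  # placeholder anchor element

tree = html.fromstring(requests.get(URL, timeout=30).text)
matches = tree.xpath(ANCHOR_XPATH)

# If the anchor matches, the page has loaded far enough to start extracting;
# Octoparse performs this check for you once the XPath and "Wait before
# action" are set on the "Extract Data" step.
print("Anchor element found:", len(matches) > 0)
```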
2) To identify whether the website is multi-regional, you can:
1. Test the task with Local Extraction. If no data is missing locally while it is missing in the cloud, the website is most likely multi-regional. Since the targeted content can only be found when the website is opened with your own IP, we suggest running Local Extraction to get the data instead.
2. Extract the outer HTML of the whole page. By checking the extracted HTML, you can often see what caused the missing data from messages in the source code such as "Access denied" (a quick scan of the exported HTML is sketched below).
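As a rough illustration, the snippet below scans an exported outer-HTML file for phrases that commonly explain blank fields. The file name and the list of phrases are assumptions; adapt them to your own export.

```python
# Phrases that commonly explain blank fields in a cloud run.
ERROR_MARKERS = ["access denied", "captcha", "forbidden",
                 "not available in your region", "unusual traffic"]

# "cloud_outer_html.txt" is a placeholder for the outer HTML you exported.
with open("cloud_outer_html.txt", encoding="utf-8") as f:
    page = f.read().lower()

hits = [m for m in ERROR_MARKERS if m in page]
print("Suspicious phrases found:", hits or "none")
# "Access denied" or a captcha page usually means the site blocks the cloud
# IPs; no hits suggests the page simply had not finished loading in time.
```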
Here is a related tutorial for checking errors in the Cloud: Why does the task get no data in the Cloud but work well when running in the local?
If you still have no idea what is happening with your task, feel free to leave us a message.
Happy Data Hunting!
Author: The Octoparse Team