For practice, I am trying to crawl the head banner carousel on python.org. I use WebDriverWait to wait for each slide to become visible after clicking its trigger, but it does not work reliably. Here is my code:
# ChromeDriver
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
wait = WebDriverWait(driver, 10)

driver.get("https://www.python.org/")
hBannerNav = driver.find_elements_by_xpath(
    '//ol[@class="flex-control-nav flex-control-paging"]/li/a')
for i in range(len(hBannerNav)):
    print(hBannerNav[i].text)
    hBannerNav[i].click()
    try:
        wait.until(EC.visibility_of_element_located(
            (By.XPATH, '//ul[@class="slides menu"]/li[{}]'.format(i + 1))))
        h1 = driver.find_element_by_xpath(
            '//ul[@class="slides menu"]/li[{}]/div/h1'.format(i + 1))
        print(h1.text)
        # if I add a sleep, the crawler works properly and smoothly,
        # but I want to use WebDriverWait only.
        # sleep(1)
    except Exception as e:
        print('error', e)
Here are the logs:
# without sleep
1
Functions Defined
2
Compound Data Types
3
error Message:
4
Quick & Easy to Learn
5
All the Flow You’d Expect # waited a long time, but it was still crawled
# with sleep
1
Functions Defined
2
Compound Data Types
3
Intuitive Interpretation
4
Quick & Easy to Learn
5
All the Flow You’d Expect
Using presence_of_all_elements_located
# the results when using
h1 = wait.until(EC.presence_of_all_elements_located(
    (By.XPATH, '//ul[@class="slides menu"]/li[{}]/div/h1'.format(i + 1))))[0]
1
Functions Defined
2
Compound Data Types
3
4
5
Remove the try/except so you can see the full error.
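One likely explanation for the empty results: presence-based (and even visibility-based) conditions can succeed while the carousel animation is still mid-transition, before the slide's text is rendered. A sketch of a workaround, assuming the usual WebDriverWait behavior that any callable taking the driver can serve as a condition — `text_changed` is a hypothetical helper, not part of Selenium, and the locator shown is illustrative:

```python
# Hypothetical helper (not part of Selenium): an expected condition that
# succeeds only once the located element's text is non-empty and differs
# from the previously crawled slide's text. presence_* passes as soon as
# the node exists in the DOM, which would explain slides returning "" above.
class text_changed:
    def __init__(self, locator, old_text):
        self.locator = locator    # e.g. (By.XPATH, '//ul[@class="slides menu"]/li[3]/div/h1')
        self.old_text = old_text  # text of the previously crawled slide

    def __call__(self, driver):
        try:
            text = driver.find_element(*self.locator).text
        except Exception:  # element may go stale mid-animation; keep polling
            return False
        # A falsy return makes wait.until() keep polling; a truthy return
        # becomes the value of wait.until(), so the caller gets the text.
        return text if text and text != self.old_text else False
```

Inside the loop you would then write something like `text = wait.until(text_changed((By.XPATH, xpath), last_text))` and track `last_text` across iterations; no sleep needed, since the wait only resolves once the new slide's heading has actually rendered.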