Something a forum bro asked for! I gave it a try as practice!
It does one-click (URL-based) image downloading, a genuine lazy-man's tool: just enter the homepage link of the album you want and the images are basically yours!
Yupoo albums: https://x.yupoo.com
Yupoo Picture Manager: Taobao albums, image hosting, image storage, a professional image cloud service.
It's an e-commerce picture manager, and there are quite a few product albums on it!
Especially our great Putian products: whoever sells them knows, and whoever buys them (like me) knows!
PS: here's how to find product albums!
http://rank.chinaz.com/?host=x.yupoo.com
Click it and you're in for a surprise!!
Watch out for scammers!
My skills are pretty rough, so go easy on me!
Take it and play with it!
It's for anyone who wants to grab images from Yupoo albums!
I tested it myself; it doesn't use multithreading, so the speed may still be pretty slow!
I added error handling and logging: if some images don't get downloaded, grab them manually yourself; all failed links are written to spider.txt!
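Since I didn't do multithreading, here's a rough sketch of how the per-image downloads could be pushed into a thread pool; the download_one and download_all helpers are just my illustration, they are not in the script further down:

from concurrent.futures import ThreadPoolExecutor
import requests

def download_one(imgurl, path, headers):
    # Download one image; on failure return a line for spider.txt, otherwise None.
    try:
        r = requests.get(imgurl, headers=headers, timeout=8)
        with open(path, 'wb') as f:
            f.write(r.content)
        return None
    except Exception as e:
        return f'{imgurl}---download failed, error: {e}'

def download_all(tasks, headers, workers=8):
    # tasks is a list of (imgurl, path) tuples collected from one album page.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda t: download_one(t[0], t[1], headers), tasks))
    # Log any failures to spider.txt, the same way the script does.
    failed = [line for line in results if line]
    if failed:
        with open('yupoo/spider.txt', 'a+', encoding='utf-8') as f:
            f.write('\n'.join(failed) + '\n')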
Test run:
My connection is pretty slow, so this is all I have for now. The program should run to completion; if you hit a bug, leave a comment, reply, or PM me!
To make things easy for everyone, I've also packaged an exe for you all. Runtime environment: Win7 64-bit; it may not run on other systems!
exe Baidu Cloud address:
Link: https://pan.baidu.com/s/1BUo52myhLfRM22wQU2asxg  Extraction code: i7qd
Yupoo album image collector, usage instructions:
1. Open the yupoo.exe program.
2. Enter the homepage URL of the Yupoo album, e.g. https://chenxuezhen1989.x.yupoo.com
3. Press Enter and it runs.
4. Failed links are recorded in spider.txt; it's best to grab those images manually yourself (or see the retry sketch below).
Note: the program's runtime environment is Win7 64-bit.
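If you don't want to work through spider.txt by hand, here's a rough sketch of re-trying the failed image links it records; retry_failed is a hypothetical helper of mine, and it assumes the per-album spider.txt line format used by the script below (album URL, image URL, and error joined by '---'):

import os
import requests

def retry_failed(logfile, outdir):
    # Re-try each failed image URL recorded in an album's spider.txt.
    os.makedirs(outdir, exist_ok=True)
    with open(logfile, encoding='utf-8') as f:
        lines = [line.strip() for line in f if '---' in line]
    for n, line in enumerate(lines, 1):
        parts = line.split('---')
        if len(parts) < 2:
            continue
        imgurl = parts[1]
        ext = imgurl.rsplit('.', 1)[-1] if '.' in imgurl[-6:] else 'jpg'
        try:
            r = requests.get(imgurl, timeout=8)
            with open(f'{outdir}/retry_{n}.{ext}', 'wb') as out:
                out.write(r.content)
            print(f'recovered {imgurl}')
        except Exception as e:
            print(f'still failing: {imgurl} ({e})')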
Python source code attached:
# -*- coding: UTF-8 -*-
# Yupoo Picture Manager scraper
# Example target: https://735539381.x.yupoo.com
import os
import random
import re
import time

import requests
from lxml import etree


class Yupoo():
    def __init__(self, url):
        ua_list = [
            "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; AcooBrowser; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; Acoo Browser; SLCC1; .NET CLR 2.0.50727; Media Center PC 5.0; .NET CLR 3.0.04506)",
            "Mozilla/4.0 (compatible; MSIE 7.0; AOL 9.5; AOLBuild 4337.35; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 2.0.50727)",
            "Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)",
            "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
            "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
            "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
            "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
            "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
            "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
            "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Fedora/1.9.0.8-1.fc10 Kazehakase/0.5.6",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_3) AppleWebKit/535.20 (KHTML, like Gecko) Chrome/19.0.1036.7 Safari/535.20",
            "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; fr) Presto/2.9.168 Version/11.52",
        ]
        self.ua = random.choice(ua_list)
        self.url = url
        # Make sure the output folder exists so spider.txt can always be written.
        os.makedirs('yupoo', exist_ok=True)

    # Collect every album category link from the shop homepage.
    def get_categorylist(self):
        headers = {'user-agent': self.ua}
        ceagortylists = []
        response = requests.get(self.url, headers=headers, timeout=5).text
        time.sleep(1)
        req = etree.HTML(response)
        ceagortylist = req.xpath('//ul[@class="showheader__category"]/a/@href')
        for path in ceagortylist:
            ceagortylists.append(f'{self.url}{path}')
        print(ceagortylists)
        print(f'Number of album categories: {len(ceagortylists)}')
        return ceagortylists

    # Collect album links from every category, following pagination where present.
    def get_urllists(self, ceagortylists):
        headers = {'user-agent': self.ua}
        for url in ceagortylists:
            print(url)
            try:
                response = requests.get(url, headers=headers, timeout=5).text
                time.sleep(1)
                req = etree.HTML(response)
                if "pagination__jumpwrap" in response:
                    print(f'{url} has paginated album links')
                    num = req.xpath('//form[@class="pagination__jumpwrap"]/span[1]/text()')[0]
                    num = num[1:-1]
                    print(num)
                    num = int(num)
                    print(f'{num} pages in total!')
                    for i in range(1, num + 1):
                        numurl = f'{url}?page={i}'
                        print(numurl)
                        try:
                            num_response = requests.get(numurl, headers=headers, timeout=5).text
                            time.sleep(1)
                            num_req = etree.HTML(num_response)
                            category_urllists = num_req.xpath('//div[@class="showindex__children"]/a[@class="album__main"]/@href')
                            for category_url in category_urllists:
                                category_url = f'{self.url}{category_url}'
                                try:
                                    self.get_img(category_url)
                                except Exception as e:
                                    print(f'Failed to scrape product album page, error: {e}')
                                    with open('yupoo/spider.txt', 'a+', encoding='utf-8') as f:
                                        f.write(f'{category_url}---failed to scrape product album page, error: {e}\n')
                        except Exception as e:
                            print(f'Failed to scrape paginated album link, error: {e}')
                            with open('yupoo/spider.txt', 'a+', encoding='utf-8') as f:
                                f.write(f'{numurl}---failed to scrape paginated album link, error: {e}\n')
                else:
                    print(f'{url} has no paginated album links')
                    category_urllists = req.xpath('//div[@class="showindex__children"]/a[@class="album__main"]/@href')
                    for category_url in category_urllists:
                        category_url = f'{self.url}{category_url}'
                        try:
                            self.get_img(category_url)
                        except Exception as e:
                            print(f'Failed to scrape product album page, error: {e}')
                            with open('yupoo/spider.txt', 'a+', encoding='utf-8') as f:
                                f.write(f'{category_url}---failed to scrape product album page, error: {e}\n')
            except Exception as e:
                print(f'Failed to scrape album link, error: {e}')
                with open('yupoo/spider.txt', 'a+', encoding='utf-8') as f:
                    f.write(f'{url}---failed to scrape album link, error: {e}\n')

    # Scrape one album detail page: title, text description, and images.
    def get_img(self, url):
        headers = {'user-agent': self.ua}
        response = requests.get(url, headers=headers, timeout=5).text
        time.sleep(1)
        req = etree.HTML(response)
        # Album title, with characters that are illegal in file names replaced.
        h2 = req.xpath('//h2/span[@class="showalbumheader__gallerytitle"]/text()')[0]
        h2 = re.sub(r'[\|\/\<\>\:\*\?\\\"]', "_", h2)
        print(h2)
        # Create the output directory for this album.
        os.makedirs(f'yupoo/{h2}/', exist_ok=True)
        # Album text description, if any.
        try:
            text = req.xpath('//div[@class="showalbumheader__gallerysubtitle htmlwrap__main"]/text()')[0]
        except Exception:
            print('No text description!')
            text = []
        print(text)
        if text != []:
            texts = '%s%s%s' % (h2, '\n', text)
            with open(f'yupoo/{h2}/{h2}.txt', 'w', encoding='utf-8') as f:
                f.write(texts)
            print(f'{h2}.txt saved!')
        # Image URLs, downloaded one by one with the album page as referer.
        imgurls = req.xpath('//div[@class="showalbum__children image__main"]/div[@class="image__imagewrap"]/img/@data-origin-src')
        i = 1
        headers2 = {
            'referer': url,
            'user-agent': self.ua,
        }
        print(headers2)
        for imgurl in imgurls:
            imgurl = f'https:{imgurl}'
            print(imgurl)
            if "jpeg" in imgurl:
                imgname = f'{i}{imgurl[-5:]}'
            elif "jpg" in imgurl or "png" in imgurl or "gif" in imgurl:
                imgname = f'{i}{imgurl[-4:]}'
            else:
                imgname = f'{i}{imgurl[-4:]}'
            print(imgname)
            try:
                r = requests.get(imgurl, headers=headers2, timeout=8)
                with open(f'yupoo/{h2}/{imgname}', 'wb') as f:
                    f.write(r.content)
                print(f'Downloaded {imgname}!')
                time.sleep(1)
            except Exception as e:
                print(f'Failed to download {imgurl}, error: {e}')
                with open(f'yupoo/{h2}/spider.txt', 'a+', encoding='utf-8') as f:
                    f.write(f'album {url}---{imgurl}---download failed, error: {e}\n')
            i = i + 1
        print(f'Scraped images for {h2}!')


if __name__ == '__main__':
    # url = "https://735539381.x.yupoo.com"
    # url = "https://chenxuezhen1989.x.yupoo.com"
    url = input("Enter the homepage URL of the Yupoo album to scrape: ")
    print('Starting the spider, please wait......')
    spider = Yupoo(url)
    print('Collecting album category links from the homepage, please wait......')
    ceagortylists = spider.get_categorylist()
    print('Homepage category links collected!')
    time.sleep(2)
    print('Collecting all album links, this may take a while, please wait......')
    spider.get_urllists(ceagortylists)
    print(f'Done, scraped the images from {url}!')
    print('The program will exit automatically in 10 seconds!')
    time.sleep(10)
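And in case anyone wants to drive it from their own script instead of typing the URL at the prompt, a minimal sketch (it assumes the Yupoo class above is in the same file or imported; the shop URL is just the example from this post, swap in your own):

# Minimal sketch: scrape several shops in one run by calling the class directly.
for shop in ['https://chenxuezhen1989.x.yupoo.com']:
    spider = Yupoo(shop)
    categories = spider.get_categorylist()
    spider.get_urllists(categories)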