Lassie

Web Content Retrieval for Humans™

Overview

Lassie is a Python library for retrieving basic content from websites.

https://i.imgur.com/QrvNfAX.gif

Usage

>>> import lassie
>>> lassie.fetch('http://www.youtube.com/watch?v=dQw4w9WgXcQ')
{
    'description': u'Music video by Rick Astley performing Never Gonna Give You Up. YouTube view counts pre-VEVO: 2,573,462 (C) 1987 PWL',
    'videos': [{
        'src': u'http://www.youtube.com/v/dQw4w9WgXcQ?autohide=1&version=3',
        'height': 480,
        'type': u'application/x-shockwave-flash',
        'width': 640
    }, {
        'src': u'https://www.youtube.com/embed/dQw4w9WgXcQ',
        'height': 480,
        'width': 640
    }],
    'title': u'Rick Astley - Never Gonna Give You Up',
    'url': u'http://www.youtube.com/watch?v=dQw4w9WgXcQ',
    'keywords': [u'Rick', u'Astley', u'Sony', u'BMG', u'Music', u'UK', u'Pop'],
    'images': [{
        'src': u'http://i1.ytimg.com/vi/dQw4w9WgXcQ/hqdefault.jpg?feature=og',
        'type': u'og:image'
    }, {
        'src': u'http://i1.ytimg.com/vi/dQw4w9WgXcQ/hqdefault.jpg',
        'type': u'twitter:image'
    }, {
        'src': u'http://s.ytimg.com/yts/img/favicon-vfldLzJxy.ico',
        'type': u'favicon'
    }, {
        'src': u'http://s.ytimg.com/yts/img/favicon_32-vflWoMFGx.png',
        'type': u'favicon'
    }],
    'locale': u'en_US'
}
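
fetch also takes keyword options. For example, the comments below mention an all_images flag that makes Lassie return every image on the page; a hedged sketch (the flag name is taken from those comments, so confirm it against the docs for your version):

>>> data = lassie.fetch('http://www.youtube.com/watch?v=dQw4w9WgXcQ', all_images=True)
>>> data['title']
u'Rick Astley - Never Gonna Give You Up'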

Install

Install Lassie via pip

$ pip install lassie

or, with easy_install

$ easy_install lassie

But, hey... that's up to you.

Documentation

Documentation can be found here: https://lassie.readthedocs.org/

Comments
  • Fix possible ValueError in convert_to_int caused by values like 1px

    When trying to parse http://www.wired.com/wiredscience/2013/09/rim-fire-map-color-scale/ a ValueError was raised in convert_to_int, because the page has image width and height values ending in px.

    I changed the function to be more liberal regarding dimension values, by extracting the digits before casting to int. I added a test for this.

    Not sure though if the value should be converted to int at all or kept as a string.
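
    A minimal sketch of the liberal parsing described above, extracting the digits before the int cast (a standalone stand-in, not necessarily lassie's actual helper):

    import re

    def convert_to_int(value):
        # '480' -> 480, '1px' -> 1, 'px' or None -> None
        if value is None:
            return None
        match = re.search(r'\d+', str(value))
        return int(match.group()) if match else None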

    opened by yaph 14
  • Import fails on Python 3.5

    It appears something is seriously broken when trying to install lassie with Python 3.5. The install goes fine, but on import I get this:

    Python 3.5.0 (default, Sep 23 2015, 04:41:38)
    [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import lassie
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/__init__.py", line 19, in <module>
        from .api import fetch
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/api.py", line 11, in <module>
        from .core import Lassie
      File "/Users/ben/dev/beavy/venv/src/lassie/lassie/core.py", line 13, in <module>
        from bs4 import BeautifulSoup
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/__init__.py", line 30, in <module>
        from .builder import builder_registry, ParserRejectedMarkup
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/builder/__init__.py", line 308, in <module>
        from . import _htmlparser
      File "/Users/ben/dev/beavy/venv/lib/python3.5/site-packages/bs4/builder/_htmlparser.py", line 7, in <module>
        from html.parser import (
    ImportError: cannot import name 'HTMLParseError'
    
    opened by gnunicorn 6
  • Add optional structured properties for og:image and og:video

    From http://ogp.me/#structured.

    The og:video tag has the identical tags as og:image.

    • og:image:url - Identical to og:image.
    • og:image:secure_url - An alternate url to use if the webpage requires HTTPS.
    • og:image:type - A MIME type for this image.
    • og:image:width - The number of pixels wide.
    • og:image:height - The number of pixels high.
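
    A rough sketch of collecting these structured properties with BeautifulSoup (property names from ogp.me; the parsing approach is an illustration, not lassie's implementation):

    from bs4 import BeautifulSoup

    OG_IMAGE_PROPS = ['og:image', 'og:image:url', 'og:image:secure_url',
                      'og:image:type', 'og:image:width', 'og:image:height']

    def og_image_properties(html):
        # find_all accepts a list of allowed attribute values;
        # later duplicates overwrite earlier ones in the dict.
        soup = BeautifulSoup(html, 'html.parser')
        return {tag['property']: tag.get('content')
                for tag in soup.find_all('meta', property=OG_IMAGE_PROPS)}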

    opened by jpadilla 6
  • Optional support for canonical URL meta tag.

    This is very roughed in, but it adds support for returning the URL as provided by the canonical link element.

    There isn't anything to determine precedence with og:url.

    Has passing tests, and is disabled by default.

    Needed this for a project, not sure if it would be useful upstream.
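
    For reference, a hedged sketch of reading the canonical link element with BeautifulSoup (this PR's actual implementation may differ):

    from bs4 import BeautifulSoup

    def canonical_url(html):
        # <link rel="canonical" href="...">; returns None when the tag is absent
        link = BeautifulSoup(html, 'html.parser').find('link', rel='canonical')
        return link.get('href') if link else None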

    enhancement 
    opened by jmhobbs 5
  • Possible relative URL in og:image

    I just came across a page with a relative path value for og:image. Adding a call to urljoin on the src attribute in line 186 of core.py would be a possibility, but maybe it's better to check for the src prop (possibly the href prop too) in _filter_meta_data and do it there. What do you think about that?
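
    The urljoin approach would look roughly like this (the relative src value is hypothetical):

    from urllib.parse import urljoin

    page_url = 'http://www.wired.com/wiredscience/2013/09/rim-fire-map-color-scale/'
    src = '../map.png'  # hypothetical relative og:image value
    print(urljoin(page_url, src))
    # http://www.wired.com/wiredscience/2013/09/map.png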

    opened by yaph 5
  • Can't get the full article.

    Hi, I want to extract the full article from the source URL, but I only get the title of the article and small parts of it under the "description" parameter.

    opened by yaseenox 4
  • Update requests==2.8 in setup.py, too

    The changelog for the last release states that requests is now pinned at version 2.8, yet when installing the latest version of lassie, it requires (and installs) version 2.6; the setup.py hasn't been updated to reflect that change, which breaks the installation. This PR corrects that.

    opened by gnunicorn 4
  • Please allow to configure the requests session

    It would be useful to be able to configure the requests session used to retrieve the requested URL.

    You could perhaps initialize a default session object in the Lassie constructor, which the user could then configure, and/or add a parameter to Lassie.fetch() to override the default session.
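
    A sketch of the proposed API (hypothetical; nothing here is a documented lassie interface):

    import requests
    from lassie.core import Lassie

    l = Lassie()
    l.session = requests.Session()                  # hypothetical attribute
    l.session.headers['User-Agent'] = 'my-app/1.0'  # configure freely
    data = l.fetch('http://example.com')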

    opened by tawmas 4
  • Bump requests from 2.18.4 to 2.20.0

    Bumps requests from 2.18.4 to 2.20.0.

    Changelog

    Sourced from requests's changelog.

    2.20.0 (2018-10-18)

    Bugfixes

    • Content-Type header parsing is now case-insensitive (e.g. charset=utf8 v Charset=utf8).
    • Fixed exception leak where certain redirect urls would raise uncaught urllib3 exceptions.
    • Requests removes Authorization header from requests redirected from https to http on the same hostname. (CVE-2018-18074)
    • should_bypass_proxies now handles URIs without hostnames (e.g. files).

    Dependencies

    • Requests now supports urllib3 v1.24.

    Deprecations

    • Requests has officially stopped support for Python 2.6.

    2.19.1 (2018-06-14)

    Bugfixes

    • Fixed issue where status_codes.py's init function failed trying to append to a __doc__ value of None.

    2.19.0 (2018-06-12)

    Improvements

    • Warn user about possible slowdown when using cryptography version < 1.3.4
    • Check for invalid host in proxy URL, before forwarding request to adapter.
    • Fragments are now properly maintained across redirects. (RFC7231 7.1.2)
    • Removed use of cgi module to expedite library load time.
    • Added support for SHA-256 and SHA-512 digest auth algorithms.
    • Minor performance improvement to Request.content.
    • Migrate to using collections.abc for 3.7 compatibility.

    Bugfixes

    • Parsing empty Link headers with parse_header_links() no longer returns one bogus entry.
    ... (truncated)
    Commits
    • bd84045 v2.20.0
    • 7fd9267 remove final remnants from 2.6
    • 6ae8a21 Add myself to AUTHORS
    • 89ab030 Use comprehensions whenever possible
    • 2c6a842 Merge pull request #4827 from webmaven/patch-1
    • 30be889 CVE URLs update: www sub-subdomain no longer valid
    • a6cd380 Merge pull request #4765 from requests/encapsulate_urllib3_exc
    • bbdbcc8 wrap url parsing exceptions from urllib3's PoolManager
    • ff0c325 Merge pull request #4805 from jdufresne/https
    • b0ad249 Prefer https:// for URLs throughout project
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 3
  • Added support for open graph optional property `site_name`.

    Hi, I added supported for the open graph site_name property.

    This parses the following tag, <meta property="og:site_name" content="IMDb" />, into {"site_name": "IMDb"}.
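
    The parse itself is straightforward with BeautifulSoup (a quick illustration of the tag above, not the PR's exact code):

    from bs4 import BeautifulSoup

    html = '<meta property="og:site_name" content="IMDb" />'
    tag = BeautifulSoup(html, 'html.parser').find('meta', property='og:site_name')
    print({'site_name': tag['content']})  # {'site_name': 'IMDb'}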

    opened by cameronmaske 3
  • make image urls absolute and added mock to test_requirements

    I made a change so that when lassie.fetch is called with all_images=True, the image src attributes contain absolute URLs. Since lassie already comes with a function that makes relative URLs absolute, I think it's better done inside lassie than in the application that imports it.

    When trying to run the tests after the changes, the mock package was missing, so I added it to the test_requirements.txt file.

    opened by yaph 2
  • docs: Fix a few typos

    There are small typos in:

    • docs/usage/advanced_usage.rst

    Fixes:

    • Should read attributes rather than attibutes.
    • Should read actual rather than acutal.

    Semi-automated pull request generated by https://github.com/timgates42/meticulous/blob/master/docs/NOTE.md

    opened by timgates42 0
  • Any reason to pin down upper versions in requirements.txt

    Hi,

    Since lassie is a library, limiting upper versions for dependencies as in

    requests>=2.18.4,<3.0.0
    beautifulsoup4>=4.9.0,<4.10.0
    

    can lead to conflicts for software using it, e.g. on pip install:

    The conflict is caused by:
        The user requested beautifulsoup4==4.10.0
        lassie 0.11.11 depends on beautifulsoup4<4.10.0 and >=4.9.0
    

    Is there any reason for pinning these upper bounds?

    opened by idlesign 1
  • Encoding issues with German umlauts

    Hi,

    when getting the description from a German website, the "ü", "ä", etc. end up mojibake'd as "Ã¼", "Ã¤", etc. Example: https://finanzguru.de/ Result:

    Finanzguru - Finanzen magisch einfach Finanzen magisch einfach. Verwalte deine Verträge, kündige per Fingertipp und spare Geld mit meinen Spartipps. Alles an einem Ort und komplett kostenfrei. Einfacher war es noch nie.

    I am using lassie within Django.
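
    This looks like the classic symptom of UTF-8 bytes decoded as Latin-1; a short sketch of the mix-up and a requests-level workaround (the right fix inside lassie itself may differ):

    import requests

    print('Verträge'.encode('utf-8').decode('latin-1'))  # -> 'VertrÃ¤ge'

    resp = requests.get('https://finanzguru.de/')
    resp.encoding = resp.apparent_encoding  # guess from the body instead of defaulting to ISO-8859-1
    html = resp.text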

    opened by leugh 0
  • Add new filters for embeddable items

    The idea is to return as much data as we can in the API so users can possibly embed media (e.g. Spotify tracks).

    We'll probably add a new embed.py and return a new embed key in the lassie API response.

    enhancement 
    opened by michaelhelmick 0
Releases: 0.11.11