Python HDFS client

Overview

Because the world needs yet another way to talk to HDFS from Python.

Usage

This library provides a Python client for WebHDFS. NameNode HA is supported by passing in both NameNodes. Responses are returned as nice Python classes, and any failed operation will raise some subclass of HdfsException matching the Java exception.

Example usage:

>>> fs = pyhdfs.HdfsClient(hosts='nn1.example.com:50070,nn2.example.com:50070', user_name='someone')
>>> fs.list_status('/')
[FileStatus(pathSuffix='benchmarks', permission='777', type='DIRECTORY', ...), FileStatus(...), ...]
>>> fs.listdir('/')
['benchmarks', 'hbase', 'solr', 'tmp', 'user', 'var']
>>> fs.mkdirs('/fruit/x/y')
True
>>> fs.create('/fruit/apple', 'delicious')
>>> fs.append('/fruit/apple', ' food')
>>> with contextlib.closing(fs.open('/fruit/apple')) as f:
...     f.read()
...
b'delicious food'
>>> fs.get_file_status('/fruit/apple')
FileStatus(length=14, owner='someone', type='FILE', ...)
>>> fs.get_file_status('/fruit/apple').owner
'someone'
>>> fs.get_content_summary('/fruit')
ContentSummary(directoryCount=3, fileCount=1, length=14, quota=-1, spaceConsumed=14, spaceQuota=-1)
>>> list(fs.walk('/fruit'))
[('/fruit', ['x'], ['apple']), ('/fruit/x', ['y'], []), ('/fruit/x/y', [], [])]
>>> fs.exists('/fruit/apple')
True
>>> fs.delete('/fruit')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../pyhdfs.py", line 525, in delete
  ...
pyhdfs.HdfsPathIsNotEmptyDirectoryException: `/fruit is non empty': Directory is not empty
>>> fs.delete('/fruit', recursive=True)
True
>>> fs.exists('/fruit/apple')
False
>>> issubclass(pyhdfs.HdfsFileNotFoundException, pyhdfs.HdfsIOException)
True

The methods and return values generally map directly to WebHDFS endpoints. The client also provides convenience methods that mimic Python os methods and HDFS CLI commands (e.g. walk and copy_to_local).
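
For instance, a hedged sketch of the copy helpers that mirror the CLI (the local and HDFS paths below are hypothetical):

>>> fs.copy_from_local('/tmp/apple.txt', '/fruit/apple', overwrite=True)
>>> fs.copy_to_local('/fruit/apple', '/tmp/apple_copy.txt')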

pyhdfs logs all HDFS actions at the INFO level, so turning on INFO level logging will give you a debug record for your application.
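
A minimal sketch of turning that on with the standard logging module (assuming the default root-logger setup is acceptable for your application):

import logging
logging.basicConfig(level=logging.INFO)  # pyhdfs INFO messages now appear on stderr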

For more information, see the full API docs.

Installing

pip install pyhdfs

Python 3 is required.

Development testing

First run install-hdfs.sh x.y.z, which will download, extract, and run the HDFS NN/DN processes in the current directory. (Replace x.y.z with a real version.) Then run the following commands. Note they will create and delete hdfs://localhost/tmp/pyhdfs_test.

Commands:

python3 -m venv env
source env/bin/activate
pip install -e .
pip install -r dev_requirements.txt
pytest

Comments

  • client should return some info when successfully creating a file

    For example, the HDFS server may return a response with headers like this:

    HTTP/1.1 201 Created
    Location: webhdfs://<HOST>:<PORT>/<PATH>
    Content-Length: 0
    

    I want to get the location from the response headers; however, client.create does not return anything. (A hedged workaround sketch follows this issue.)

    opened by cosven 7
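
    A hedged workaround sketch, not from the original thread: WebHDFS CREATE is a two-step redirect, so the Location header can be captured by driving the operation directly with requests. The NameNode host, user, and path below are hypothetical.

    import requests

    # Step 1: ask the NameNode where to write; it replies 307 with a Location header.
    resp = requests.put(
        'http://nn1.example.com:50070/webhdfs/v1/fruit/apple?op=CREATE&user.name=someone',
        allow_redirects=False,
    )
    datanode_url = resp.headers['Location']

    # Step 2: PUT the data to the DataNode URL; the 201 response also carries a Location.
    created = requests.put(datanode_url, data=b'delicious')
    print(created.status_code, created.headers.get('Location'))
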
  • Write error

    Hello. Mkdir and listdir work fine, but create didn't.

    fs.create('/fruit/apple', 'delicious')
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/root/miniconda2/lib/python2.7/site-packages/pyhdfs.py", line 426, in create
        metadata_response.headers['location'], data=data, **self._requests_kwargs)
      File "/root/miniconda2/lib/python2.7/site-packages/requests/api.py", line 126, in put
        return request('put', url, data=data, **kwargs)
      File "/root/miniconda2/lib/python2.7/site-packages/requests/api.py", line 58, in request
        return session.request(method=method, url=url, **kwargs)
      File "/root/miniconda2/lib/python2.7/site-packages/requests/sessions.py", line 512, in request
        resp = self.send(prep, **send_kwargs)
      File "/root/miniconda2/lib/python2.7/site-packages/requests/sessions.py", line 622, in send
        r = adapter.send(request, **kwargs)
      File "/root/miniconda2/lib/python2.7/site-packages/requests/adapters.py", line 513, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='1566bb80c4dc', port=50075): Max retries exceeded with url: /webhdfs/v1/fruit/apple?op=CREATE&user.name=hdfs&namenoderpcaddress=localhost:8020&overwrite=false (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f644f364510>: Failed to establish a new connection: [Errno -2] Name or service not known',))
    
    opened by albertoRamon 4
  • requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, '远程主机强迫关闭了一个现有的连接。', None, 10054, None))

    Traceback (most recent call last):
      File "D:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
        chunked=chunked)
      File "D:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "D:\Anaconda3\lib\http\client.py", line 1239, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "D:\Anaconda3\lib\http\client.py", line 1285, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "D:\Anaconda3\lib\http\client.py", line 1234, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "D:\Anaconda3\lib\http\client.py", line 1065, in _send_output
        self.send(chunk)
      File "D:\Anaconda3\lib\http\client.py", line 986, in send
        self.sock.sendall(data)
    ConnectionResetError: [WinError 10054] 远程主机强迫关闭了一个现有的连接。

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "D:\Anaconda3\lib\site-packages\requests\adapters.py", line 440, in send
        timeout=timeout
      File "D:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
        _stacktrace=sys.exc_info()[2])
      File "D:\Anaconda3\lib\site-packages\urllib3\util\retry.py", line 357, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "D:\Anaconda3\lib\site-packages\urllib3\packages\six.py", line 685, in reraise
        raise value.with_traceback(tb)
      File "D:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
        chunked=chunked)
      File "D:\Anaconda3\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "D:\Anaconda3\lib\http\client.py", line 1239, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "D:\Anaconda3\lib\http\client.py", line 1285, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "D:\Anaconda3\lib\http\client.py", line 1234, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "D:\Anaconda3\lib\http\client.py", line 1065, in _send_output
        self.send(chunk)
      File "D:\Anaconda3\lib\http\client.py", line 986, in send
        self.sock.sendall(data)
    urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, '远程主机强迫关闭了一个现有的连接。', None, 10054, None))

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "D:\workspace\phdfs\check_wrf.py", line 144, in
        fs.copy_from_local(parname,"/test/fcst/china/10d_arwpost_sta/near/" + wrflisttime.format("YYYYMMDD") + "/" + parname,overwrite = True)
      File "D:\Anaconda3\lib\site-packages\pyhdfs.py", line 753, in copy_from_local
        self.create(dest, f, **kwargs)
      File "D:\Anaconda3\lib\site-packages\pyhdfs.py", line 426, in create
        metadata_response.headers['location'], data=data, **self._requests_kwargs)
      File "D:\Anaconda3\lib\site-packages\requests\api.py", line 126, in put
        return request('put', url, data=data, **kwargs)
      File "D:\Anaconda3\lib\site-packages\requests\api.py", line 58, in request
        return session.request(method=method, url=url, **kwargs)
      File "D:\Anaconda3\lib\site-packages\requests\sessions.py", line 508, in request
        resp = self.send(prep, **send_kwargs)
      File "D:\Anaconda3\lib\site-packages\requests\sessions.py", line 618, in send
        r = adapter.send(request, **kwargs)
      File "D:\Anaconda3\lib\site-packages\requests\adapters.py", line 490, in send
        raise ConnectionError(err, request=request)
    requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, '远程主机强迫关闭了一个现有的连接。', None, 10054, None))

    opened by Georege 4
  • BUG: Chinese characters can't be copied to HDFS

    UnicodeEncodeError: 'latin-1' codec can't encode characters in position 2-3: Body ('张三') is not valid Latin-1. Use body.encode('utf-8') if you want to send it encoded in UTF-8.

    opened by yiershanxll 3
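
    A hedged workaround sketch, not part of the original report: encode the text to UTF-8 bytes before handing it to create, so requests never tries to encode it as Latin-1. The client (fs) and target path are hypothetical.

    data = '张三'.encode('utf-8')  # bytes instead of str
    fs.create('/tmp/pyhdfs_utf8_demo', data, overwrite=True)
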
  • Help me, please. The second run of the function in the script produces an abnormal result

    I am a rookie~~!!

    The following code:

    list_info = [{"tenant": "coco", "hive_path": "/user/open_001_dev", "ftp_path": "/files/prov/001"},
                     {"tenant": "lili", "hive_path": "/user/open_002_dev", "ftp_path": "/files/prov/002"}]
    result = 0
    client=pyhdfs.HdfsClient(hosts="10.173.5.18:9000",user_name="hdfs",timeout=10,max_tries=3,randomize_hosts="false")
    def hive_content_size():
        global result
        for item in range(2):
            if "hive_path" in list_info[item]:
                print(client.get_content_summary(list_info[item]["hive_path"]))
    
    hive_content_size()
    

    The result of the first loop is output normally, but the output of the second loop is abnormal.

    The error report is below:

    ContentSummary(directoryCount=1258, fileCount=3773, length=141829751002, quota=4000000, spaceConsumed=425489253006, spaceQuota=659706976665600)
    
    Failed to reach to 10.173.5.18:9000 (attempt 3/3)
    Traceback (most recent call last):
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/connectionpool.py", line 445, in _make_request
        six.raise_from(e, None)
      File "<string>", line 3, in raise_from
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/connectionpool.py", line 440, in _make_request
        httplib_response = conn.getresponse()
      File "/usr/local/python/lib/python3.9/http/client.py", line 1347, in getresponse
        response.begin()
      File "/usr/local/python/lib/python3.9/http/client.py", line 307, in begin
        version, status, reason = self._read_status()
      File "/usr/local/python/lib/python3.9/http/client.py", line 268, in _read_status
        line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
      File "/usr/local/python/lib/python3.9/socket.py", line 704, in readinto
        return self._sock.recv_into(b)
    socket.timeout: timed out
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/python/lib/python3.9/site-packages/requests-2.25.1-py3.9.egg/requests/adapters.py", line 439, in send
        resp = conn.urlopen(
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/connectionpool.py", line 755, in urlopen
        retries = retries.increment(
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/util/retry.py", line 532, in increment
        raise six.reraise(type(error), error, _stacktrace)
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/packages/six.py", line 735, in reraise
        raise value
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/connectionpool.py", line 699, in urlopen
        httplib_response = self._make_request(
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/connectionpool.py", line 447, in _make_request
        self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
      File "/usr/local/python/lib/python3.9/site-packages/urllib3-1.26.4-py3.9.egg/urllib3/connectionpool.py", line 336, in _raise_timeout
        raise ReadTimeoutError(
    urllib3.exceptions.ReadTimeoutError: HTTPConnectionPool(host='10.173.5.18', port=9000): Read timed out. (read timeout=10)
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/python/lib/python3.9/site-packages/PyHDFS-0.3.1-py3.9.egg/pyhdfs/__init__.py", line 418, in _request
        response = self._requests_session.request(
      File "/usr/local/python/lib/python3.9/site-packages/requests-2.25.1-py3.9.egg/requests/sessions.py", line 542, in request
        resp = self.send(prep, **send_kwargs)
      File "/usr/local/python/lib/python3.9/site-packages/requests-2.25.1-py3.9.egg/requests/sessions.py", line 655, in send
        r = adapter.send(request, **kwargs)
      File "/usr/local/python/lib/python3.9/site-packages/requests-2.25.1-py3.9.egg/requests/adapters.py", line 529, in send
        raise ReadTimeout(e, request=request)
    requests.exceptions.ReadTimeout: HTTPConnectionPool(host='10.162.3.171', port=19888): Read timed out. (read timeout=10)
    Traceback (most recent call last):
      File "/home/hadoop/shay/monthly_report/test01.py", line 24, in <module>
        print(hive_content_size())
      File "/home/hadoop/shay/monthly_report/test01.py", line 22, in hive_content_size
        print(client.get_content_summary(list_info[item]["hive_path"]))
      File "/usr/local/python/lib/python3.9/site-packages/PyHDFS-0.3.1-py3.9.egg/pyhdfs/__init__.py", line 633, in get_content_summary
      File "/usr/local/python/lib/python3.9/site-packages/PyHDFS-0.3.1-py3.9.egg/pyhdfs/__init__.py", line 450, in _get
      File "/usr/local/python/lib/python3.9/site-packages/PyHDFS-0.3.1-py3.9.egg/pyhdfs/__init__.py", line 442, in _request
    pyhdfs.HdfsNoServerException: Could not use any of the given hosts
    

    ask for help~~!!!

    opened by qwe55982 2
  • HdfsFileAlreadyExistsException is not implemented?

    Hi! Thanks for your great work. I have noticed that some exceptions are not implemented right now.

    For example, if I try to upload a file to the same path, Python raises ConnectionError instead of HdfsFileAlreadyExistsException.

    The error message is as follows:

    Traceback (most recent call last):
      File "test_pyhdfs.py", line 12, in <module>
        fs.create('/xxx/xxx/images/test.png', data=file)
      File "/home/chiuhongyu/workplace/xxx/venv/lib/python3.6/site-packages/pyhdfs/__init__.py", line 504, in create
        metadata_response.headers['location'], data=data, **self._requests_kwargs)
      File "/home/chiuhongyu/workplace/xxx/venv/lib/python3.6/site-packages/requests/api.py", line 132, in put
        return request('put', url, data=data, **kwargs)
      File "/home/chiuhongyu/workplace/xxx/venv/lib/python3.6/site-packages/requests/api.py", line 61, in request
        return session.request(method=method, url=url, **kwargs)
      File "/home/chiuhongyu/workplace/xxx/venv/lib/python3.6/site-packages/requests/sessions.py", line 542, in request
        resp = self.send(prep, **send_kwargs)
      File "/home/chiuhongyu/workplace/xxx/venv/lib/python3.6/site-packages/requests/sessions.py", line 655, in send
        r = adapter.send(request, **kwargs)
      File "/home/chiuhongyu/workplace/xxx/venv/lib/python3.6/site-packages/requests/adapters.py", line 498, in send
        raise ConnectionError(err, request=request)
    requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
    
    opened by james77777778 1
  • Support customized WEBHDFS_PATH

    Support customized WEBHDFS_PATH

    In the latest version of pyhdfs, the WebHDFS prefix is set as a constant, '/webhdfs/v1'. This works well in most scenarios, but users may need a customized HTTP URL; for example, users may run their own WebHDFS service behind Pylon and access their RESTful server with a customized URL pattern like http://<HOST>:<HTTP_PORT>/webhdfs/api/v2/<PATH>?op=... (A speculative sketch follows this issue.)

    opened by SparkSnail 1
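
    A speculative sketch, not an official API: since the issue notes the prefix is a module-level constant, one could try overriding it before constructing the client. Whether pyhdfs reads the constant at request time is an assumption, and the gateway host and path pattern below are hypothetical.

    import pyhdfs

    pyhdfs.WEBHDFS_PATH = '/webhdfs/api/v2'  # assumption: still consulted on each request
    fs = pyhdfs.HdfsClient(hosts='gateway.example.com:8080', user_name='someone')
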
  • TypeError: __new__() got an unexpected keyword argument 'storagePolicy'

    I am using Hadoop 2.6 (with Docker: sudo docker run -i -t sequenceiq/hadoop-docker:2.6.0 /etc/bootstrap.sh -bash).

    When I use PyHDFS to call client.list_status, I get this error:

    Traceback (most recent call last):
      File "testhdfs.py", line 3, in <module>
        print(client.list_status('/'))
      File "...testenv/lib/python3.4/site-packages/pyhdfs.py", line 428, in list_status
        _json(self._get(path, 'LISTSTATUS', **kwargs))['FileStatuses']['FileStatus']
      File "...testenv/lib/python3.4/site-packages/pyhdfs.py", line 427, in <listcomp>
        FileStatus(**item) for item in
    TypeError: __new__() got an unexpected keyword argument 'storagePolicy'
    

    The code:

    from pyhdfs import HdfsClient
    client = HdfsClient(hosts='172.17.0.2:50070')
    print(client.list_status('/'))
    

    This issue is caused by the JSON from the server having an extra property, storagePolicy; adding it to pyhdfs.py fixes this. But I want to know whether this property is a standard property of HDFS/WebHDFS.

    bug 
    opened by robberphex 1
  • why response assert not empty

    In pyhdfs.py, line 424

    assert not metadata_response.content
    

    In my client, I get some response content when uploading files.

    b'<html>\r\n<head><title>307 Temporary Redirect</title></head>\r\n<body bgcolor="white">\r\n<center><h1>307 Temporary Redirect</h1></center>\r\n<hr><center>nginx/1.13.8</center>\r\n</body>\r\n</html>\r\n'
    

    This response does not mean the upload failed, and I can successfully upload my files when I delete this line. Why was this line added? Could you please help me figure out this problem?

    opened by SparkSnail 0
  • Support setting webhdfs_path

    In the latest version of pyhdfs, the WebHDFS prefix is set as a constant, '/webhdfs/v1'. This works well in most scenarios, but users may need a customized HTTP URL; for example, users may run their own WebHDFS service behind Pylon and access their RESTful server with a customized URL pattern like http://<HOST>:<HTTP_PORT>/webhdfs/api/v2/<PATH>?op=...

    opened by SparkSnail 0
  • Let pyhdfs visit HDFS in a Kerberos environment

    When HDFS needs Kerberos authentication, pyhdfs cannot visit HDFS. So maybe you should add authentication information in pyhdfs.py. In fact, pyhdfs calls the requests module when Python visits HDFS, so the authentication information could be added there. (A hedged sketch follows this issue.)

    opened by LuckyNemo 0
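
    A hedged sketch of one possible approach, not something pyhdfs documents here: route SPNEGO/Kerberos authentication through the underlying requests calls. It assumes the requests_kerberos package and the client's requests_kwargs parameter (the tracebacks above show self._requests_kwargs being forwarded to requests).

    import pyhdfs
    from requests_kerberos import HTTPKerberosAuth, OPTIONAL

    fs = pyhdfs.HdfsClient(
        hosts='nn1.example.com:50070',
        user_name='someone',
        # Passed through to every HTTP request the client makes.
        requests_kwargs={'auth': HTTPKerberosAuth(mutual_authentication=OPTIONAL)},
    )
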
  • Got a TypeError while appending to a file

    File "/usr/local/lib/python3.6/site-packages/pyhdfs/__init__.py", line 520, in append path, 'APPEND', expected_status=HTTPStatus.TEMPORARY_REDIRECT, **kwargs) File "/usr/local/lib/python3.6/site-packages/pyhdfs/__init__.py", line 466, in _post return self._request('post', path, op, expected_status, **kwargs) File "/usr/local/lib/python3.6/site-packages/pyhdfs/__init__.py", line 431, in _request _check_response(response, expected_status) File "/usr/local/lib/python3.6/site-packages/pyhdfs/__init__.py", line 933, in _check_response remote_exception['message'] = exception_name + ' - ' + remote_exception['message'] TypeError: must be str, not NoneType

    opened by BingoZ 0
  • can't parse JSON with unprintable characters

    If a weird non-UTF-8 file name is created in HDFS, the client fails because it can't interpret the response as a valid JSON string.

    e.g. it's possible to put a ctrl-r in the file name

    bug 
    opened by jingw 0

Releases: v0.3.1