Provably Rare Gem Miner

Overview

just another random project by yoyoismee.eth

useful links

  • https://gems.alphafinance.io/ - the Provably Rare Gems site (gem difficulty, your nonce, and the gem contract addresses)

useful things you should know

  • read contract -> gems(gemID) to get useful info
  • write contract -> mine(kind, salt) to claim your NFT

To run: just edit the Python file and run it.

pip install -r requirement.txt
python3 stick_the_miner.py

Or use the newer auto_mine.py for less manual input, but you'll need an Infura account.

P.S. Too lazy to write docs, but it's ~50 LoC. Have fun.


Why "stick the miner"? Welp.. this is part of the stick the BUIDLer series.

TL;DR - I'm working on a series of open-source NFT-related projects just for fun.

Key parameters to change if you are using the original version 'stick_the_miner.py' (credit: K Nattakit's FB post):

  • chain_id - eth:1, fantom:250
  • entropy - ?? (should be readable from the read contract's gems() call mentioned above)
  • gemAddr - gem contract address, can be obtained from https://gems.alphafinance.io/ (loot/bloot/rarity)
  • userAddr - your Wallet address
  • kind - the type of gem to mine; I recommend Emerald because it has the highest return/difficulty ratio - simply put, you'll profit faster
  • nonce - the number of times you've minted a gem (see https://gems.alphafinance.io/ and connect your wallet)
  • diff - difficulty of the gem (https://gems.alphafinance.io/); note that this changes every time someone mints that gem, so you need to update it too (see the sketch right after this list)
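
For intuition, here is a minimal, hedged sketch of what the miner does with these parameters. It assumes the contract's luck check is keccak256(abi.encodePacked(chain_id, entropy, gemAddr, userAddr, kind, nonce, salt)) compared against 2^256 / diff, and it uses web3.py v6+; the addresses, entropy, and numbers below are placeholders, not real values.

# minimal salt-search sketch (assumed hashing scheme; placeholder values)
import random
from web3 import Web3

chain_id = 250                                              # fantom
entropy = "0x" + "00" * 32                                  # placeholder bytes32 from gems()
gem_addr = "0x0000000000000000000000000000000000000000"     # placeholder gem contract address
user_addr = "0x0000000000000000000000000000000000000000"    # placeholder wallet address
kind, nonce, diff = 2, 0, 1000                              # placeholder values

target = (2**256 - 1) // diff                               # smaller target = harder

def luck(salt: int) -> int:
    # assumed to mirror keccak256(abi.encodePacked(...)) in the gem contract
    digest = Web3.solidity_keccak(
        ["uint256", "bytes32", "address", "address", "uint256", "uint256", "uint256"],
        [chain_id, entropy, gem_addr, user_addr, kind, nonce, salt],
    )
    return int.from_bytes(digest, "big")

while True:
    salt = random.randint(0, 2**256 - 1)
    if luck(salt) <= target:
        print("found salt:", salt)
        break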

(More detail) How to use 'auto_mine.py', the updated version of stick_the_miner:

  • benefits: the manual version (stick_the_miner.py) requires you to update the 'diff' parameter every time someone mints the target gem's NFT, and the 'nonce' whenever you successfully mint one. This version automates that, so you just have to rerun it to pick up the new values.
  • steps:
    1. install the requirements: pip install -r requirements.txt
    2. create an account at https://infura.io/, select your chain (e.g. Ethereum), create a project, and obtain your project ID (a minimal connection sketch follows this list)
    3. create a .env file in the same format as .env-example, filling in your project ID from step 2, your wallet address, and the gem ID
    4. run python3 auto_mine.py
  • Note: although you don't have to manually adjust the 'diff' parameter each time, you still need to restart the process every time someone mints the target gem's NFT.
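
For step 2, this is roughly what connecting through Infura looks like with web3.py (v6+). The environment variable name below is hypothetical; check .env-example for the names auto_mine.py actually expects.

# minimal connection sketch; INFURA_PROJECT_ID is a hypothetical variable name
import os
from dotenv import load_dotenv
from web3 import Web3

load_dotenv()                                   # reads the .env file from step 3
infura_id = os.getenv("INFURA_PROJECT_ID")
w3 = Web3(Web3.HTTPProvider(f"https://mainnet.infura.io/v3/{infura_id}"))
print("connected:", w3.is_connected())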

Once you get the salt, call the write contract's mine(kind, salt) to claim your NFT.
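
A hedged sketch of that claim with web3.py (v6). It assumes the gem contract exposes mine(uint256 kind, uint256 salt) as noted above; the RPC endpoint, addresses, kind, salt, and private key below are placeholders; keep your real key in .env and never commit it.

# hypothetical claim sketch; not the repo's exact code, placeholders throughout
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.ftm.tools"))       # or your Infura endpoint
gem_addr = "0x0000000000000000000000000000000000000000"     # gem contract address (placeholder)
user_addr = "0x0000000000000000000000000000000000000000"    # your wallet (placeholder)
kind, salt = 2, 123456789                                   # the kind you mined and the salt you found

mine_abi = [{"name": "mine", "type": "function", "stateMutability": "nonpayable",
             "inputs": [{"name": "kind", "type": "uint256"},
                        {"name": "salt", "type": "uint256"}],
             "outputs": []}]
gem = w3.eth.contract(address=gem_addr, abi=mine_abi)

tx = gem.functions.mine(kind, salt).build_transaction({
    "from": user_addr,
    "nonce": w3.eth.get_transaction_count(user_addr),
})
signed = w3.eth.account.sign_transaction(tx, private_key="0x...")  # placeholder key
tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
print("claim tx:", tx_hash.hex())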

Multicore version

  • The normal version uses only 1 processor core; the multicore version should be ~8 times faster, depending on your CPU and the coreNumber variable (a rough sketch of the idea follows this list)
  • You can select the number of processes by changing the coreNumber variable (it should not exceed ~16 though)
  • "fantom_mining_pool_auto_multicore_line.py" is the multicore version of fantom_mining_pool.py
  • for mining by yourself with manual claiming, please use "fantom_multicore_line.py"
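
To make the multicore idea concrete, here is a rough illustrative sketch (not the repo's actual code) of running the salt search on several processes, each drawing salts from a disjoint range, as proposed in the "different range" idea in the comments below. coreNumber mirrors the variable name above; luck() and target are the helpers from the salt-search sketch earlier.

# illustrative multicore sketch; reuses luck() and target from the earlier sketch
import random
from multiprocessing import Pool

coreNumber = 8                        # number of worker processes
SALT_SPACE = 2**256
CHUNK = SALT_SPACE // coreNumber

def search_range(worker_id: int) -> int:
    lo = worker_id * CHUNK            # each worker samples a disjoint salt range
    hi = lo + CHUNK - 1
    while True:
        salt = random.randint(lo, hi)
        if luck(salt) <= target:
            return salt

if __name__ == "__main__":
    with Pool(coreNumber) as pool:
        for salt in pool.imap_unordered(search_range, range(coreNumber)):
            print("found salt:", salt)
            pool.terminate()          # stop the remaining workers
            break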
Comments
  • 🎨Added colorlog package for output with colors

    I use the classic stick_the_miner.py for mining and had a hard time spotting the salt output because everything was monochrome, so I decided to differentiate the salt output with the colorlog package 😁 (an illustrative setup is sketched below).

    opened by mickyngub 2
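
    An illustrative colorlog setup in the same spirit (not the PR's actual diff):

    # illustrative only: highlight important lines so the salt stands out
    import logging
    import colorlog

    handler = colorlog.StreamHandler()
    handler.setFormatter(colorlog.ColoredFormatter("%(log_color)s%(levelname)s: %(message)s"))
    logger = colorlog.getLogger("miner")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("still searching...")        # default color
    logger.warning("found salt: 123456789")  # rendered in yellow by default, easy to spot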
  • Multicore version of the miner for both pool mining and self mining

    Depending on your CPU and the coreNumber variable, it should be ~8 times faster than the original version but with the drawback of a tremendous increase in CPU utilization.

    opened by mickyngub 1
  • Lowering the priority of python.exe to reduce lags

    If a user is mining gems in the background while using other compute-intensive programs, they might experience lag due to 100% CPU utilization. By lowering the priority of the python.exe miner, other programs get higher priority, so users are less likely to experience lag (see the illustrative snippet below).

    Under normal circumstances, where CPU utilization is below 100%, this should have no impact on iterations/sec.

    (Before/after screenshots omitted.)

    opened by mickyngub 1
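
    An illustrative way to do this from inside the script (not the PR's actual change), assuming the psutil package is installed:

    # illustrative only: lower this process's priority so the miner yields to foreground programs
    import sys
    import psutil

    proc = psutil.Process()                            # the current miner process
    if sys.platform == "win32":
        proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)  # Windows priority class
    else:
        proc.nice(10)                                  # Unix niceness: higher value = lower priority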
  • update fantom_mining_pool

    • edit .env-example: add NOTIFY_AUTH_TOKEN, DIFF, and PRIVATE_KEY
    • rename the private_key variable to PRIVATE_KEY
    • insert a check: if PRIVATE_KEY != ''
    • read PRIVATE_KEY from .env for safety
    opened by NuttakitDW 0
  • Why do other people mint so quickly?

    https://ftmscan.com/address/0x729d74098f6669541ed1b69403ae75f080ccf1e1

    This person mints level 4 gems very quickly; their salt is quite low, yet the transaction still succeeds.

    Do you know the reason?

    opened by sumrise 3
  • refactor to support multiple chain properly

    Some of our code is unnecessarily Ethereum-specific, e.g. infura_key, the hard-coded chain number, and more. TODO: refactor to a more generic setup that works across all EVM-compatible chains, e.g. infura_key -> rpc_provider (and fix other code to match this change), and more.

    Also TODO: remove the quick fix for the Fantom file LOL

    opened by yoyoismee 0
  • Idea for sampling different range of int random on multiple workers

    Will probably do this tomorrow: pass the worker count n to the get_salt function so each worker draws random ints from a different range, e.g. worker 1: 1 to 2^122, worker 2: 2^122 to 2^123.

    opened by Duayt 1
Releases: v0.0.1d-test-build