Dual-language (Russian + English) tool for packing and unpacking Silky Engine archives.

Overview

SilkyArcTool

English

A dual-language (Russian + English) GUI tool for packing and unpacking Silky Engine archives. Note that this is not the same .arc format as the one used by Ai6WIN. If you want to work with Silky Engine's .mes scripts, use mesScriptAsseAndDisassembler instead.

Why create this tool when others can already handle this archive type? The answer is simple: none of the existing tools was good enough. One can only extract the data; another can only pack, but without the original compression, which results in outrageously large output archives. This tool solves both problems: it not only extracts archives, it also packs them back from files, compressing the data with the same algorithm (a variation of LZSS) that the Silky Engine itself decompresses. The tool has one drawback: it is quite slow, especially when packing, so you may have to wait several minutes (the compression algorithm is implemented in Python).
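
For the curious, a rough idea of what an LZSS-family decoder looks like is sketched below. This is only a generic illustration with assumed parameters (one control byte carrying eight literal/back-reference flags, and two-byte references holding a 12-bit distance and a 4-bit length); the actual bitstream layout of the Silky Engine variant is not documented here and will differ in its details. It also shows why a byte-by-byte Python loop gets slow on large archives.

    # Generic LZSS-style decoder -- an illustrative sketch, NOT the exact
    # Silky Engine variant; the flag/distance/length packing is assumed.
    def lzss_decompress(data: bytes, min_match: int = 3) -> bytes:
        out = bytearray()
        i = 0
        while i < len(data):
            flags = data[i]            # one control byte = 8 literal/reference flags
            i += 1
            for bit in range(8):
                if i >= len(data):
                    break
                if flags & (1 << bit):              # flag set: copy a literal byte
                    out.append(data[i])
                    i += 1
                else:                               # flag clear: 2-byte back reference
                    if i + 1 >= len(data):
                        break
                    lo, hi = data[i], data[i + 1]
                    i += 2
                    distance = (lo | ((hi & 0xF0) << 4)) + 1   # assumed 12-bit distance
                    length = (hi & 0x0F) + min_match           # assumed 4-bit length
                    start = len(out) - distance
                    for _ in range(length):         # byte-by-byte copy is overlap-safe
                        out.append(out[start])
                        start += 1
        return bytes(out)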

Russian

A dual-language (Russian + English) tool for unpacking and packing Silky Engine archives. It should not be confused with the .arc variant used in Ai6WIN. If you need to work with Silky Engine's .mes scripts, use mesScriptAsseAndDisassembler.

Why was this tool created if there are already tools that work with this archive type? The answer is simple: none of the existing tools is good enough. One can only extract, another can only pack, but without the original compression algorithm, which makes the resulting archives excessively large. This tool fixes both problems: it can unpack data as well as pack it back, compressing the files exactly the way the Silky Engine expects them (a variation of LZSS). Its only drawback is that it runs somewhat slowly, especially when packing, so you may have to wait several minutes (the compression algorithm is implemented in Python).

Usage

English

(screenshot of the GUI)

  1. Run the tool (main.py or the .exe).
  2. Enter the archive filename (with the extension!) or choose it by clicking the "..." button.
  3. Enter the directory or choose it by clicking the "..." button.
  4. Enter "0" if you want to unpack, or "1" if you want to pack.
  5. Wait until it is done (a quick sanity check is sketched after this list).
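
After unpacking, a quick way to confirm that the extraction actually produced output is to count the files in the chosen directory. The directory name below is a placeholder for whatever you entered in step 3.

    # Sanity check after unpacking: count files and total size in the output
    # directory ("extracted" is a placeholder for the directory from step 3).
    from pathlib import Path

    extracted = Path("extracted")
    files = [p for p in extracted.rglob("*") if p.is_file()]
    total_kib = sum(p.stat().st_size for p in files) / 1024
    print(f"{len(files)} files, {total_kib:.1f} KiB extracted")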

Russian

(screenshot of the GUI)

  1. Run the tool (main.py or the .exe).
  2. Enter the archive name (with the extension!) or choose it by clicking the "..." button.
  3. Enter the name of the file directory or choose it by clicking the "..." button.
  4. Enter "0" if you want to unpack, or "1" if you want to pack.
  5. Wait for it to finish.

Comments
  • Invalid argument

    I tried your tool with the .arc files of the game "[Silky's] Gakuen Saimin Reido -Sakki made, Daikirai Datta Hazu na no ni-" (学園催眠隷奴~さっきまで、大嫌いだったはずなのに~), but it keeps giving me this error:

    (screenshot of the error message)

    opened by Nephiro
  • Extraction fails if archives are on other drive

    Exception in Tkinter callback
    Traceback (most recent call last):
      File "C:\Program Files\Python39\lib\tkinter\__init__.py", line 1892, in __call__
      File "C:\Users\Александр\Desktop\Tester\SilkyArcTool\gui.py", line 316, in _choose_file
      File "C:\Program Files\Python39\lib\ntpath.py", line 703, in relpath
    ValueError: path is on mount 'C:', start on mount 'Y:'
    Exception in Tkinter callback
    Traceback (most recent call last):
      File "C:\Program Files\Python39\lib\tkinter\__init__.py", line 1892, in __call__
      File "C:\Users\Александр\Desktop\Tester\SilkyArcTool\gui.py", line 316, in _choose_file
      File "C:\Program Files\Python39\lib\ntpath.py", line 703, in relpath
    ValueError: path is on mount 'C:', start on mount 'Y:'
    

    A simple workaround is to move the archive to the same drive as the tool.
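
    Until the tool handles this itself, the failing call could be guarded in gui.py. The sketch below is only a guess at a fix (the code around _choose_file is not shown in the traceback): os.path.relpath raises ValueError on Windows when the path and the start directory sit on different drives, so falling back to the absolute path avoids the crash.

    import os

    def safe_relpath(path, start="."):
        # Like os.path.relpath, but falls back to the absolute path when the
        # two paths are on different Windows drives (illustrative sketch).
        try:
            return os.path.relpath(path, start)
        except ValueError:   # e.g. "path is on mount 'C:', start on mount 'Y:'"
            return os.path.abspath(path)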

    opened by dobacco