Working Time: statistics of working hours and working conditions by industry and company

Overview

Working Time

Statistics on working hours and working conditions by industry and company.

Raw data source: https://github.com/WorkerLivesMatter/WorkingTime — salute to the initiators.

After light processing, the data is organized into tables ready for direct use in PostgreSQL.

Public Demo: http://demo.pigsty.cc/d/worktime-query

How to use?

If you already have a Pigsty environment, clone this project on the admin node as the admin user and run make all:

git clone https://github.com/Vonng/worktime && cd worktime
make all
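
Once make all finishes, a quick sanity check can confirm that the data has loaded (a sketch, not part of the repository scripts; it uses the schema described in the next section, and row counts depend on the dataset snapshot):

-- Row counts per industry partition
SELECT domain, count(*) AS entries
FROM worktime.worktime
GROUP BY domain
ORDER BY entries DESC;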

Data Description

CREATE TABLE worktime.worktime
(
    id          INTEGER NOT NULL,  -- row number in the raw data
    company     TEXT,              -- company
    department  TEXT,              -- department
    job         TEXT,              -- position
    base        TEXT,              -- work location (base)
    work_begin  TEXT,              -- start of the workday
    work_end    TEXT,              -- end of the workday
    launch_time TEXT,              -- lunch break ("launch" kept as in the raw data)
    dinner_time TEXT,              -- dinner break
    wed         TEXT,              -- anything special on Wednesdays
    fri         TEXT,              -- anything special on Fridays
    workdays    TEXT,              -- working days per week
    summary     TEXT,              -- whether new hires write daily/weekly reports
    remark      TEXT,              -- remarks
    category    TEXT,              -- industry / company type
    suggestion  TEXT,              -- suggestions
    struct      TEXT,              -- share of rank-and-file staff aged 35+ (team lead and below), format "x / y"
    welfare     TEXT,              -- other benefits (insurance & housing fund, paid leave, trips, free meals)
    is_foreign  BOOLEAN,           -- foreign-funded enterprise?
    domain      TEXT NOT NULL      -- top-level category: internet, finance, foreign, misc
) partition by list (domain);

CREATE TABLE worktime.internet PARTITION OF worktime.worktime FOR VALUES IN ('互联网');
CREATE TABLE worktime.finance  PARTITION OF worktime.worktime FOR VALUES IN ('金融');
CREATE TABLE worktime."foreign" PARTITION OF worktime.worktime FOR VALUES IN ('外企'); -- quoted: FOREIGN is a reserved word
CREATE TABLE worktime.misc     PARTITION OF worktime.worktime FOR VALUES IN ('其他');

COMMENT ON TABLE worktime.worktime IS '企业工作时间统计表';
COMMENT ON TABLE worktime.internet IS '企业工作时间统计表:互联网行业';
COMMENT ON TABLE worktime.finance IS '企业工作时间统计表:金融行业';
COMMENT ON TABLE worktime."foreign" IS '企业工作时间统计表:外企';
COMMENT ON TABLE worktime.misc IS '企业工作时间统计表:其他';

CREATE INDEX ON worktime.worktime(company, department);
COMMENT ON COLUMN worktime.worktime.id IS '原始数据行号';
COMMENT ON COLUMN worktime.worktime.company IS '公司';
COMMENT ON COLUMN worktime.worktime.department IS '部门';
COMMENT ON COLUMN worktime.worktime.job IS '岗位';
COMMENT ON COLUMN worktime.worktime.base IS 'base地';
COMMENT ON COLUMN worktime.worktime.work_begin IS '上班时间';
COMMENT ON COLUMN worktime.worktime.work_end IS '下班时间';
COMMENT ON COLUMN worktime.worktime.launch_time IS '午饭时间';
COMMENT ON COLUMN worktime.worktime.dinner_time IS '晚饭时间';
COMMENT ON COLUMN worktime.worktime.wed IS '周三是否特殊';
COMMENT ON COLUMN worktime.worktime.fri IS '周五是否特殊';
COMMENT ON COLUMN worktime.worktime.workdays IS '一周工作天数';
COMMENT ON COLUMN worktime.worktime.summary IS '新人是否日报/周报';
COMMENT ON COLUMN worktime.worktime.remark IS '备注';
COMMENT ON COLUMN worktime.worktime.category IS '行业/公司性质';
COMMENT ON COLUMN worktime.worktime.suggestion IS '建议';
COMMENT ON COLUMN worktime.worktime.struct IS '组内 35 岁及以上基层员工( 组长及以下)比例,格式为 x / y,x 为 35岁以上的人数,y 为总人数';
COMMENT ON COLUMN worktime.worktime.welfare IS '是否有其他福利(如:五险一金,带薪年假,公费旅游,免费三餐)';
COMMENT ON COLUMN worktime.worktime.is_foreign IS '是否为外资企业?';
COMMENT ON COLUMN worktime.worktime.domain IS '大分类:互联网、金融、外企、其他';
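
Queries can target either the parent table worktime.worktime or a single industry partition. For example, a sketch (not part of the repository) listing the most frequently reported companies in the internet partition:

-- Most frequently reported companies in the internet sector
SELECT company, count(*) AS entries
FROM worktime.internet   -- equivalent: worktime.worktime WHERE domain = '互联网'
GROUP BY company
ORDER BY entries DESC
LIMIT 20;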
Owner: Feng Ruohang