Understanding the field usage of any object in Salesforce

Overview

One of the biggest problems I have faced while working with Salesforce is understanding and evaluating the field usage of a custom object. This application does the work for you, generating a CSV/Excel file with the created date of the last record that used each field, and the percentage of use across all of them.

To make this app work, you will need System Administrator credentials to log into Salesforce.
This app currently runs in the Spyder IDE, which is part of Anaconda.


Let's understand how it works!

Dependencies

First, we need our dependencies. We will use pandas, datetime, and Simple Salesforce.

from simple_salesforce import Salesforce
import pandas as pd
import datetime

Credentials

Next, we are going to connect to Salesforce with Simple Salesforce

sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')

Your organizationId should look like this: 00JH0000000tYml.
To find it, just follow these steps (Lightning Experience):

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • In the Quick Find bar (the one on the left), type Company Information
  • Click Company Information
  • Finally, look for Salesforce.com Organization ID. The ID will look like 00JH0000000tYml
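As a side note, if your org authenticates with a security token rather than trusted IP ranges, simple_salesforce also accepts a security_token argument instead of organizationId. A minimal sketch (all values are placeholders):

# Alternative connection using a security token instead of organizationId
sf = Salesforce(username='username',
                password='password',
                security_token='security_token')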

The Object

Now you will need to plug in the object name. The object name is the API Name of the object; if it is a custom object, it will normally end with __c.
To find the API Name, just follow these instructions:

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • Click Object Manager in the header of the page
  • Find your object by its label and copy the API Name, which is shown next to it

This part of the code is going to use the name of the object to bring back all of its fields:
object_to_evaluate = "object"
object_fields = getattr(sf, object_to_evaluate).describe()
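describe() returns a dictionary whose 'fields' key holds one metadata entry per field, so you can sanity-check the call before running the full evaluation. A quick peek (the output depends on your object):

# How many fields does the object have, and what are a few of their API names?
print(len(object_fields['fields']))
print([f['name'] for f in object_fields['fields'][:5]])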

The Date

This part is important and will make you think. By default, the code is going to bring the data from the last year, so it is important to understand what happened during that period. If you released a new field a week ago, the report will show that it was used a couple of days ago, but the usage will be really low, around 2% (7/365). You can change the evaluation window by simply replacing the 365 with the number of days you want.

last_year = (datetime.datetime.now() + datetime.timedelta(days=-365)).strftime("%Y-%m-%dT%H:%M:%SZ")
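Note that SOQL expects datetime literals without quotes, which is exactly what this string produces. For example, to evaluate only the last 90 days instead, the same line would look like this (just a sketch of the variation):

# Same idea with a 90-day window; prints something like 2021-10-01T09:30:00Z
last_quarter = (datetime.datetime.now() + datetime.timedelta(days=-90)).strftime("%Y-%m-%dT%H:%M:%SZ")
print(last_quarter)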

The Result

Now we are going to iterate over all the fields and get the created date of the last record that used each field, plus the number of records that used it during the period (one year).

data = []  # list to collect one result DataFrame per field

for field in object_fields['fields']:
    print(field['name'])
    try:
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, CreatedDate, SystemModstamp \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        field_quantity = sf.query("SELECT count(Id) \
                                   FROM {} \
                                   WHERE CreatedDate > {} \
                                     AND {} != null".format(object_to_evaluate, last_year, field['name'])
                                   )['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        # Some fields (e.g. non-filterable ones) can't be queried; log them as not found
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)
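For reference, sf.query() returns a dictionary whose 'records' list holds one entry per row, and SOQL labels unnamed aggregates expr0, expr1, and so on, which is why the count is read from ['records'][0]['expr0']. A minimal illustration against the standard Account object:

# Aggregate results come back under auto-generated keys like 'expr0'
result = sf.query("SELECT count(Id) FROM Account")
print(result['records'][0]['expr0'])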

Some Formatting

Formatting is a nice-to-have that makes the result easier to understand, especially if you are going to share the insights. We are going to rename some columns, format the date columns in a way that CSV/Excel can understand, and add a '% of use' column.

data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value
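Since the point of the report is to spot neglected fields, it can also help to sort so the least-used ones surface first. This sort is my own optional addition, not part of the original flow:

# Optional: show rarely used fields at the top of the report
data_to_csv = data_to_csv.sort_values('% of use', ascending=True, na_position='first')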

The Files

Finally, we are going to export the files to CSV and Excel, so you can choose whichever you prefer to use. The files will be stored in the same folder as the app; so, if you are running this app from your Desktop folder, the CSV and Excel files will be stored there as well.

data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")
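One caveat: pandas writes .xlsx files through an engine such as openpyxl, so you may need to install it first (pip install openpyxl). And if you prefer the files without the pandas index column, both writers accept index=False; a variation on the same exports:

# Same exports, minus the pandas index column (assumes openpyxl is installed)
data_to_csv.to_csv('last Field Usage Date.csv', index=False)
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f", index=False)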



The final code will look like this:

from simple_salesforce import Salesforce
import pandas as pd
import datetime

# Connection to Salesforce
sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')


# Change the name to the object that you want to evaluate. If it is a custom object, remember it will end with __c
object_to_evaluate = "object"

# Get all the fields from the Object
object_fields = getattr(sf, object_to_evaluate).describe()

# Define an empty list to append the information
data = []

# Create a date variable to define from when we want to get the data
last_year = (datetime.datetime.now() + datetime.timedelta(days=-365)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Iterate over the fields and bring the created date of the last record where the field wasn't empty
# If the record is not found, store it in the CSV/Excel file as not found
for field in object_fields['fields']:
    print(field['name'])
    try:
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, CreatedDate, SystemModstamp \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        field_quantity = sf.query("SELECT count(Id) \
                                   FROM {} \
                                   WHERE CreatedDate > {} \
                                     AND {} != null".format(object_to_evaluate, last_year, field['name'])
                                   )['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity                        
        data.append(field_detail)

        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']] , 
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']] , 
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)

# Format the CSV/Excel report
data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value

# Export the data to a CSV/Excel file
data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")

HOPE IT HELPS!

If you like it, remember to
Buy Me A Coffee

Owner
Sebastian Undurraga