Understanding the field usage of any object in Salesforce

Overview

One of the biggest problems I have faced while working with Salesforce is understanding and evaluating the field usage of a custom object. This application does the work for you, generating a CSV/Excel file with the creation date of the last record that used each field, and the percentage of use across all of them.

To make this app work, you will need System Administrator credentials to log into Salesforce.
This app is currently developed with the Spyder IDE, which is part of Anaconda.


Let's understand how it works!

Dependencies

First, we need our dependencies. We will use pandas, datetime, and Simple Salesforce. (Exporting to .xlsx also needs an Excel writer such as openpyxl, which ships with Anaconda.)

from simple_salesforce import Salesforce
import pandas as pd
import datetime

Credentials

Next, we are going to connect to Salesforce with Simple Salesforce

sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')

Your organizationId should look like this: 00JH0000000tYml.
To find it, just follow these steps (Lightning Experience):

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • In the Quick Find box (on the left), type Company Information
  • Click Company Information
  • Finally, look for Salesforce.com Organization ID. The ID will look like 00JH0000000tYml
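
As an aside, if your org doesn't allow the organizationId (IP-whitelisted) login flow, simple_salesforce also accepts a security token; a minimal sketch, assuming you have reset your token under your personal settings:

# Alternative login with a security token instead of organizationId
# (simple_salesforce sends password + token during login)
sf = Salesforce(username='username',
                password='password',
                security_token='security_token')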

The Object

Now you will need to plug in the object name. The object name is the API Name of the object; if it is a custom object, it will end with __c.
To find the API Name, just follow these instructions:

  • Log into Salesforce with your System Administrator credentials
  • Click the gear icon
  • Click Setup (Setup for current app)
  • Click Object Manager in the header of the page
  • Find your object by its label and copy the API Name, which sits next to it

This part of the code is going to use the name of the object to bring back all of its fields:
object_to_evaluate = "object"
object_fields = getattr(sf, object_to_evaluate).describe()
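
The describe() call returns a dictionary whose 'fields' key holds the metadata for every field on the object. A quick sketch to peek at what the loop below will iterate over:

# Peek at the first few fields: API name, label, and data type
for field in object_fields['fields'][:5]:
    print(field['name'], '|', field['label'], '|', field['type'])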

The Date

This part is important and will make you think. By default, the code brings data from the last year, so it is important to understand what happened during that period. If you released a new field a week ago, the report will show that it was used a couple of days ago, but its usage will be really low, around 2% (7/365). You can change the evaluation window by simply replacing 365 with the number of days you want.

last_year = (datetime.datetime.now() + datetime.timedelta(days=-365)).strftime("%Y-%m-%d"+"T"+"%H:%M:%S"+"Z")
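
Note that SOQL expects datetime literals unquoted and in ISO 8601 form, which is exactly what the strftime pattern above produces (e.g. 2020-05-14T09:30:00Z), so the value can be interpolated straight into a WHERE clause. As an alternative sketch, SOQL's built-in date literals can express the same window without any Python date math:

# Alternative: let SOQL compute the window with a built-in date literal.
# LAST_N_DAYS:365 covers the previous 365 days up to now.
query = "SELECT count(Id) FROM {} WHERE CreatedDate = LAST_N_DAYS:365".format(object_to_evaluate)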

The Result

Now we are going to iterate over all the fields and get the created date of the last record that used each field, plus the number of records that used it during the period (one year).

# Define an empty list to collect the results
data = []

for field in object_fields['fields']:
    print(field['name'])
    try:
        # Last record (by Id) in the period where the field was not empty
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, CreatedDate, SystemModstamp \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        # Number of records in the period where the field was not empty
        field_quantity = pd.DataFrame(
            sf.query("SELECT count(Id) \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null".format(object_to_evaluate, last_year, field['name'])
                      ))['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        # The query succeeded but returned no rows: the field was never used in the period
        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        # The field could not be queried (e.g. it is not filterable, like long text areas)
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)
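
At this point, data_to_csv holds one row per field. A quick, optional sanity check before formatting:

# How many fields were evaluated, and how many could not be queried?
print(len(data_to_csv), 'rows')
print(data_to_csv['Found?'].value_counts())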

Some Formatting

Formatting is a nice-to-have for understanding the result, especially if you are going to share the insights. We are going to rename some columns, format the date columns in a way that CSV/Excel can understand, and add a '% of use' column.

data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value
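
With the '% of use' column in place, a short optional sketch to surface the least-used fields, which are usually the cleanup candidates:

# Show the ten least-used fields (NaN means the field could not be queried)
least_used = data_to_csv.sort_values('% of use').head(10)
print(least_used[['Field Name', 'Field Label', '% of use']])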

The Files

Finally, we are going to export the files to CSV and Excel, so you can choose whichever you prefer. The files will be stored in the same folder as the app, so if you are running this app from your Desktop folder, the CSV and Excel files will be stored there as well.

data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")
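
One caveat: by default, pandas also writes the DataFrame index as a first, unnamed column. If you don't want it in the report, pass index=False to both writers:

# Same export, without the extra index column
data_to_csv.to_csv('last Field Usage Date.csv', index=False)
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f", index=False)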


The final code will look like this:

from simple_salesforce import Salesforce
import pandas as pd
import datetime

# Connection to Salesforce
sf = Salesforce(password='password',
                username='username',
                organizationId='organizationId')


# Change the name to the object that you want to evaluate. If it is a custom object, remember that its API name ends with __c
object_to_evaluate = "object"

# Get all the fields from the Object
object_fields = getattr(sf, object_to_evaluate).describe()

# Define an empty list to append the information
data = []

# Create a date variable to define from when we want to get the data
last_year = (datetime.datetime.now() + datetime.timedelta(days=-365)).strftime("%Y-%m-%d"+"T"+"%H:%M:%S"+"Z")

# Iterate over the fields and bring the created date of the last record where the field wasn't empty
# If no record is found, store it in the CSV/Excel file as not found
for field in object_fields['fields']:
    print(field['name'])
    try:
        # Last record (by Id) in the period where the field was not empty
        field_detail = pd.DataFrame(
            sf.query("SELECT Id, CreatedDate, SystemModstamp \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null \
                      ORDER BY Id DESC \
                      LIMIT 1".format(object_to_evaluate, last_year, field['name'])
                      )['records'])

        field_detail['Field Name'] = field['name']
        field_detail['Field Label'] = field['label']
        field_detail['Found?'] = 'Yes'

        # Number of records in the period where the field was not empty
        field_quantity = pd.DataFrame(
            sf.query("SELECT count(Id) \
                      FROM {} \
                      WHERE CreatedDate > {} \
                        AND {} != null".format(object_to_evaluate, last_year, field['name'])
                      ))['records'][0]['expr0']

        field_detail['Quantity'] = field_quantity
        data.append(field_detail)

        # The query succeeded but returned no rows: the field was never used in the period
        if field_detail.empty:
            error_data = {'Field Name': [field['name']],
                          'Field Label': [field['label']],
                          'Found?': ['Yes, no data']}
            data.append(pd.DataFrame(error_data))
    except Exception:
        # The field could not be queried (e.g. it is not filterable, like long text areas)
        error_data = {'Field Name': [field['name']],
                      'Field Label': [field['label']],
                      'Found?': ['No']}
        data.append(pd.DataFrame(error_data))

# Concatenate the list of results into one dataframe
data_to_csv = pd.concat(data, ignore_index=True)

# Format the CSV/Excel report
data_to_csv.rename(columns={'CreatedDate': 'Created Date', 'SystemModstamp': 'Modified Date'}, inplace=True)
data_to_csv['Created Date'] = pd.to_datetime(data_to_csv['Created Date']).dt.date
data_to_csv['Modified Date'] = pd.to_datetime(data_to_csv['Modified Date']).dt.date
data_to_csv = data_to_csv.drop('attributes', axis=1)
max_value = data_to_csv['Quantity'].max()
data_to_csv['% of use'] = data_to_csv['Quantity'] / max_value

# Export the data to a CSV/Excel file
data_to_csv.to_csv('last Field Usage Date.csv')
data_to_csv.to_excel('last Field Usage Date.xlsx', float_format="%.3f")

HOPE IT HELPS!

If you like it, remember to
Buy Me A Coffee
