feat(perf): add new eFPS benchmark suite (#7407)
* feat(perf): add new eFPS benchmark suite

* fix(perf): add `.depcheckrc.json` with `ignores`
ricokahler committed Sep 18, 2024
1 parent 930d508 commit 2de06de
Showing 45 changed files with 6,156 additions and 39 deletions.
3 changes: 3 additions & 0 deletions perf/efps/.depcheckrc.json
@@ -0,0 +1,3 @@
{
"ignores": ["@swc-node/register", "@types/react", "@types/react-dom"]
}
3 changes: 3 additions & 0 deletions perf/efps/.env.template
@@ -0,0 +1,3 @@
VITE_PERF_EFPS_PROJECT_ID=qk0wb6qx
VITE_PERF_EFPS_DATASET=test
PERF_EFPS_SANITY_TOKEN=
4 changes: 4 additions & 0 deletions perf/efps/.gitignore
@@ -0,0 +1,4 @@
/builds
/results
/.exported
.env
85 changes: 85 additions & 0 deletions perf/efps/README.md
@@ -0,0 +1,85 @@
# Editor "Frames per Second" — eFPS benchmarks

This folder contains a performance test suite for benchmarking the Sanity Studio editor. The suite runs a set of tests that measure the editor's performance using the eFPS (editor Frames Per Second) metric, with the goal of keeping the editing experience smooth.

## Overview

The performance test suite is part of the Sanity Studio monorepo. It runs a series of tests against different document types and field configurations to measure the responsiveness and smoothness of the editing experience.

## eFPS Metric

The eFPS (editor Frames Per Second) metric is used to quantify the performance of the Sanity Studio editor. Here's how it works:

1. The test suite measures the time it takes for the editor to respond to user input (e.g., typing in a field).
2. This response time is then converted into a "frames per second" analogy to provide an intuitive understanding of performance.
3. The eFPS is calculated as `eFPS = 1000 / responseTime`, where the response time is measured in milliseconds.

We use the "frames per second" analogy because it gives a better intuition for what constitutes good or bad performance. Just like in video games or animations:

- Higher eFPS values indicate smoother, more responsive performance.
- Lower eFPS values suggest lag or sluggishness in the editor.

For example:

- An eFPS of 60 or higher is generally considered very smooth.
- An eFPS between 30 and 60 is acceptable but may show some lag.
- An eFPS below 30 indicates noticeable performance issues.
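
As a concrete illustration, the conversion is just a division (a minimal sketch in TypeScript; the suite's measurement helpers perform the same inversion on percentile latencies):

```ts
// Convert a measured response time (in milliseconds) to eFPS.
function toEfps(responseTimeMs: number): number {
  return 1000 / responseTimeMs
}

toEfps(16.7) // ≈ 60 eFPS (very smooth)
toEfps(25) // 40 eFPS (acceptable, but may show some lag)
toEfps(50) // 20 eFPS (noticeable performance issues)
```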

## Percentiles

The test suite reports eFPS values at different percentiles (p50, p75, and p90) for each run. Here's why we use percentiles and what they tell us:

- **p50 (50th percentile or median)**: This represents the typical performance. Half of the interactions were faster than this, and half were slower.
- **p75 (75th percentile)**: 75% of interactions were faster than this value. It gives us an idea of performance during slightly worse conditions.
- **p90 (90th percentile)**: 90% of interactions were faster than this value. This helps us understand performance during more challenging scenarios or edge cases.

Using percentiles allows us to:

1. Get a more comprehensive view of performance across various conditions.
2. Identify inconsistencies or outliers in performance.
3. Ensure that we're not just optimizing for average cases but also for worst-case scenarios.
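
For illustration, here is how percentile latencies translate into the reported eFPS values, using the `calculatePercentile` helper included in this suite (the sample latencies below are invented, and the import path assumes you are at the suite root):

```ts
import {calculatePercentile} from './helpers/calculatePercentile'

// Invented per-keystroke latencies in milliseconds.
const latencies = [12, 14, 15, 16, 18, 22, 25, 31, 48, 95]

// Percentile latencies are inverted into eFPS, mirroring the measurement helpers.
const p50 = 1000 / calculatePercentile(latencies, 0.5) // 50 eFPS (typical case)
const p75 = 1000 / calculatePercentile(latencies, 0.75) // ≈ 34 eFPS
const p90 = 1000 / calculatePercentile(latencies, 0.9) // ≈ 19 eFPS
```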

## Test Structure

Each test in the suite has its own build. This approach offers several advantages:

1. **Isolation**: Each test has its own schema and configuration, preventing interference between tests.
2. **Ease of Adding Tests**: New tests can be added without affecting existing ones, making the suite more modular and maintainable.
3. **Accurate Profiling**: Individual builds allow for more precise source maps, which lead to better profiling output and easier performance debugging.

## Adding a New Test

To add a new test to the suite:

1. Create a new folder in the `tests` directory with your test name.
2. Create the following files in your test folder:
- `sanity.config.ts`: Define the Sanity configuration for your test.
- `sanity.types.ts`: Define TypeScript types for your schema (if needed).
- `<testname>.ts`: Implement your test using the `defineEfpsTest` function.
3. If your test requires assets, add them to an `assets` subfolder.
4. Update the `tests` array in `index.ts` to include your new test.

Example structure for a new test:

```
tests/
  newtest/
    assets/
    sanity.config.ts
    sanity.types.ts
    newtest.ts
```
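
And a hypothetical `newtest.ts` skeleton. The `defineEfpsTest` option names, import paths, and the field selector here are assumptions made for illustration; mirror an existing test in the `tests` directory for the actual shape:

```ts
// Hypothetical skeleton; option names and the test-id selector are assumptions.
import {measureFpsForInput} from '../../helpers/measureFpsForInput'
import {defineEfpsTest} from '../../types' // import path is an assumption

export default defineEfpsTest({
  name: 'newtest',
  configPath: new URL('./sanity.config.ts', import.meta.url),
  document: {_type: 'newtest'},
  run: async ({page}) => {
    // Measure typing responsiveness in an assumed top-level string field.
    return measureFpsForInput(page.locator('[data-testid="field-title"] input'))
  },
})
```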

## CPU Profiles

The test suite generates CPU profiles for each test run. These profiles are remapped to the original source code, making them easier to analyze. To inspect a CPU profile:

1. Open Google Chrome DevTools.
2. Go to the "Performance" tab.
3. Click on "Load profile" and select the `.cpuprofile` file from the `results` directory.

The mapped CPU profiles allow you to:

- Identify performance bottlenecks in the original source code.
- Analyze the time spent in different functions and components.
- Optimize the areas of code that have the most significant impact on performance.
17 changes: 17 additions & 0 deletions perf/efps/entry.tsx
@@ -0,0 +1,17 @@
import {createRoot} from 'react-dom/client'
import {Studio} from 'sanity'
import {structureTool} from 'sanity/structure'

import config from '#config'

// Each test supplies its own config through the `#config` import alias;
// the structure tool is appended so every build renders the editor UI.
const configWithStructure = {
  ...config,
  plugins: [...(config.plugins || []), structureTool()],
}

const container = document.getElementById('container')
if (!container) throw new Error('Could not find `#container`')

const root = createRoot(container)

root.render(<Studio config={configWithStructure} />)
21 changes: 21 additions & 0 deletions perf/efps/helpers/calculatePercentile.ts
@@ -0,0 +1,21 @@
export function calculatePercentile(numbers: number[], percentile: number): number {
  // Sort the array in ascending order
  const sorted = numbers.slice().sort((a, b) => a - b)

  // Calculate the index
  const index = percentile * (sorted.length - 1)

  // If the index is an integer, return the value at that index
  if (Number.isInteger(index)) {
    return sorted[index]
  }

  // Otherwise, interpolate between the two nearest values
  const lowerIndex = Math.floor(index)
  const upperIndex = Math.ceil(index)
  const lowerValue = sorted[lowerIndex]
  const upperValue = sorted[upperIndex]

  const fraction = index - lowerIndex
  return lowerValue + (upperValue - lowerValue) * fraction
}
72 changes: 72 additions & 0 deletions perf/efps/helpers/exec.ts
@@ -0,0 +1,72 @@
import {spawn} from 'node:child_process'
import process from 'node:process'

import chalk from 'chalk'
import {type Ora} from 'ora'

interface ExecOptions {
  spinner: Ora
  command: string
  /** Tuple of [in-progress text, success text] shown by the spinner. */
  text: [string, string]
  cwd?: string
}

export async function exec({
  spinner,
  command,
  text: [inprogressText, successText],
  cwd,
}: ExecOptions): Promise<void> {
  spinner.start(inprogressText)

  const maxColumnLength = 80
  const maxLines = 12
  const outputLines: string[] = []

  // Show a rolling window of the child process's most recent output
  // beneath the spinner's in-progress text.
  function updateSpinnerText() {
    spinner.text = `${inprogressText}\n${outputLines
      .map((line) => {
        return chalk.dim(
          `${chalk.cyan('│')} ${
            line.length > maxColumnLength ? `${line.slice(0, maxColumnLength)}…` : line
          }`,
        )
      })
      .join('\n')}`
  }

  await new Promise<void>((resolve, reject) => {
    // In CI there is no interactive spinner, so the child writes directly
    // to the parent's stdio instead of being piped.
    const childProcess = spawn(command, {
      shell: true,
      stdio: process.env.CI ? 'inherit' : ['inherit', 'pipe', 'pipe'],
      cwd,
    })

    function handleOutput(data: Buffer) {
      const newLines = data.toString().split('\n')
      for (const line of newLines) {
        if (line.trim() !== '') {
          outputLines.push(line.trim())
          if (outputLines.length > maxLines) {
            outputLines.shift()
          }
          updateSpinnerText()
        }
      }
    }

    childProcess.stdout?.on('data', handleOutput)
    childProcess.stderr?.on('data', handleOutput)

    childProcess.on('close', (code) => {
      if (code === 0) resolve()
      else reject(new Error(`Command exited with code ${code}`))
    })

    childProcess.on('error', (error) => {
      reject(error)
    })
  })

  spinner.succeed(successText)
}
82 changes: 82 additions & 0 deletions perf/efps/helpers/measureFpsForInput.ts
@@ -0,0 +1,82 @@
import {type Locator} from 'playwright'

import {type EfpsResult} from '../types'
import {calculatePercentile} from './calculatePercentile'

export async function measureFpsForInput(input: Locator): Promise<EfpsResult> {
  await input.waitFor({state: 'visible'})
  const characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

  await input.click()
  await new Promise((resolve) => setTimeout(resolve, 500))

  // Observe the field inside the browser and record every rendered value
  // with a timestamp until the field loses focus.
  const rendersPromise = input.evaluate(async (el: HTMLInputElement | HTMLTextAreaElement) => {
    const updates: {value: string; timestamp: number}[] = []

    const mutationObserver = new MutationObserver(() => {
      updates.push({value: el.value, timestamp: Date.now()})
    })

    if (el instanceof HTMLTextAreaElement) {
      mutationObserver.observe(el, {childList: true, characterData: true, subtree: true})
    } else {
      mutationObserver.observe(el, {attributes: true, attributeFilter: ['value']})
    }

    await new Promise<void>((resolve) => {
      const handler = () => {
        el.removeEventListener('blur', handler)
        resolve()
      }

      el.addEventListener('blur', handler)
    })

    return updates
  })
  await new Promise((resolve) => setTimeout(resolve, 500))

  const inputEvents: {character: string; timestamp: number}[] = []

  // Type unique markers around the measured characters so each rendered
  // value can be matched back to the keystroke that produced it.
  const startingMarker = '__START__|'
  const endingMarker = '__END__'

  await input.pressSequentially(endingMarker)
  await new Promise((resolve) => setTimeout(resolve, 500))
  for (let i = 0; i < endingMarker.length; i++) {
    await input.press('ArrowLeft')
  }
  await input.pressSequentially(startingMarker)
  await new Promise((resolve) => setTimeout(resolve, 500))

  for (const character of characters) {
    inputEvents.push({character, timestamp: Date.now()})
    await input.press(character)
    await new Promise((resolve) => setTimeout(resolve, 0))
  }

  await input.blur()

  const renderEvents = await rendersPromise

  await new Promise((resolve) => setTimeout(resolve, 500))

  // For each keystroke, find the first rendered value that contains it
  // between the markers; latency is render time minus input time.
  const latencies = inputEvents.map((inputEvent) => {
    const matchingEvent = renderEvents.find(({value}) => {
      if (!value.includes(startingMarker) || !value.includes(endingMarker)) return false

      const [, afterStartingMarker] = value.split(startingMarker)
      const [beforeEndingMarker] = afterStartingMarker.split(endingMarker)
      return beforeEndingMarker.includes(inputEvent.character)
    })
    if (!matchingEvent) throw new Error(`No matching event for ${inputEvent.character}`)

    return matchingEvent.timestamp - inputEvent.timestamp
  })

  const p50 = 1000 / calculatePercentile(latencies, 0.5)
  const p75 = 1000 / calculatePercentile(latencies, 0.75)
  const p90 = 1000 / calculatePercentile(latencies, 0.9)

  return {p50, p75, p90, latencies}
}
94 changes: 94 additions & 0 deletions perf/efps/helpers/measureFpsForPte.ts
@@ -0,0 +1,94 @@
import {type Locator} from 'playwright'

import {type EfpsResult} from '../types'
import {calculatePercentile} from './calculatePercentile'

export async function measureFpsForPte(pteField: Locator): Promise<EfpsResult> {
  const characters = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

  await pteField.waitFor({state: 'visible'})
  await new Promise((resolve) => setTimeout(resolve, 500))

  await pteField.click()

  const contentEditable = pteField.locator('[contenteditable="true"]')
  await contentEditable.waitFor({state: 'visible'})

  // Observe the editable element inside the browser and record every
  // rendered text value with a timestamp until the element loses focus.
  const rendersPromise = contentEditable.evaluate(async (el: HTMLElement) => {
    const updates: {
      value: string
      timestamp: number
      // with very large PTE fields, it may take time to serialize the result
      // so we capture this time and remove it from the final metric
      textContentProcessingTime: number
    }[] = []

    const mutationObserver = new MutationObserver(() => {
      const start = performance.now()
      const textContent = el.textContent || ''
      const end = performance.now()

      updates.push({
        value: textContent,
        timestamp: Date.now(),
        textContentProcessingTime: end - start,
      })
    })

    mutationObserver.observe(el, {subtree: true, characterData: true})

    await new Promise<void>((resolve) => {
      const handler = () => {
        el.removeEventListener('blur', handler)
        resolve()
      }

      el.addEventListener('blur', handler)
    })

    return updates
  })
  await new Promise((resolve) => setTimeout(resolve, 500))

  const inputEvents: {character: string; timestamp: number}[] = []

  // Type unique markers around the measured characters so each rendered
  // value can be matched back to the keystroke that produced it.
  const startingMarker = '__START__|'
  const endingMarker = '__END__'

  await contentEditable.pressSequentially(endingMarker)
  await new Promise((resolve) => setTimeout(resolve, 500))
  for (let i = 0; i < endingMarker.length; i++) {
    await contentEditable.press('ArrowLeft')
  }
  await contentEditable.pressSequentially(startingMarker)
  await new Promise((resolve) => setTimeout(resolve, 500))

  for (const character of characters) {
    inputEvents.push({character, timestamp: Date.now()})
    await contentEditable.press(character)
    await new Promise((resolve) => setTimeout(resolve, 0))
  }

  await contentEditable.blur()

  const renderEvents = await rendersPromise

  // Latency is render time minus input time, minus the time spent
  // serializing the element's text content inside the observer.
  const latencies = inputEvents.map((inputEvent) => {
    const matchingEvent = renderEvents.find(({value}) => {
      if (!value.includes(startingMarker) || !value.includes(endingMarker)) return false

      const [, afterStartingMarker] = value.split(startingMarker)
      const [beforeEndingMarker] = afterStartingMarker.split(endingMarker)
      return beforeEndingMarker.includes(inputEvent.character)
    })
    if (!matchingEvent) throw new Error(`No matching event for ${inputEvent.character}`)

    return matchingEvent.timestamp - inputEvent.timestamp - matchingEvent.textContentProcessingTime
  })

  const p50 = 1000 / calculatePercentile(latencies, 0.5)
  const p75 = 1000 / calculatePercentile(latencies, 0.75)
  const p90 = 1000 / calculatePercentile(latencies, 0.9)

  return {p50, p75, p90, latencies}
}