This is the final blog in my three-part series where I share my experience contributing to the Node.js core repository. In my previous blogs, I detailed how I identified an issue, tested it locally to confirm its existence, and opened a pull request (PR) with the necessary changes. You can read the first two blogs here:
- Contributing to Node.js: Improving Test Runner Coverage Reporting (1/3)
- Contributing to Node.js: Implementing Dynamic Colors for Test Runner Diagnostics (2/3)
The PR I opened was "test_runner: add level-based diagnostic handling for reporter" (#55964), which fixes issue #55922.
I updated `reporter.diagnostic` to accept a `level` parameter, like this:
```js
diagnostic(nesting, loc, message, level = 'info') {
  this[kEmitMessage]('test:diagnostic', {
    __proto__: null,
    nesting,
    message,
    level,
    ...loc,
  });
}
```
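Because `level` defaults to `'info'`, existing call sites keep their current behavior; only calls that pass a level opt into the new handling:

```js
// Existing call sites continue to work unchanged (level defaults to 'info'):
reporter.diagnostic(nesting, loc, 'tests 5');

// New call sites can opt into a severity level:
reporter.diagnostic(nesting, loc, 'something went wrong', 'error');
```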
Then I updated `#handleEvent` like this:
```js
#handleEvent({ type, data }) {
  switch (type) {
    case 'test:fail':
      if (data.details?.error?.failureType !== kSubtestsFailed) {
        ArrayPrototypePush(this.#failedTests, data);
      }
      return this.#handleTestReportEvent(type, data);
    case 'test:pass':
      return this.#handleTestReportEvent(type, data);
    case 'test:start':
      ArrayPrototypeUnshift(this.#stack, { __proto__: null, data, type });
      break;
    case 'test:stderr':
    case 'test:stdout':
      return data.message;
    case 'test:diagnostic': {
      // Here I added logic: pick a color for the diagnostic's level,
      // falling back to the default 'test:diagnostic' color.
      const diagnosticColor =
        reporterColorMap[data.level] || reporterColorMap['test:diagnostic'];
      return `${diagnosticColor}${indent(data.nesting)}${
        reporterUnicodeSymbolMap[type]
      }${data.message}${colors.white}\n`;
    }
    case 'test:coverage':
      return getCoverageReport(
        indent(data.nesting),
        data.summary,
        reporterUnicodeSymbolMap['test:coverage'],
        colors.blue,
        true,
      );
  }
}
```
I also updated `reporterColorMap` like this:
```js
const reporterColorMap = {
  __proto__: null,
  get 'test:fail'() {
    return colors.red;
  },
  get 'test:pass'() {
    return colors.green;
  },
  get 'test:diagnostic'() {
    return colors.blue;
  },
  // Here I added the level-based entries:
  get info() {
    return colors.blue;
  },
  get debug() {
    return colors.gray;
  },
  get warn() {
    return colors.yellow;
  },
  get error() {
    return colors.red;
  },
};
```
The internal `colors` utility already defines these colors, and the lookup in `#handleEvent` falls back to the `'test:diagnostic'` entry whenever an event carries no recognized level.
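For reference, here is a rough sketch of what `internal/util/colors` exposes (paraphrased from memory, not the exact source; the real module also decides whether the output stream supports color at all):

```js
// Approximate shape of Node's internal colors module (internal/util/colors).
module.exports = {
  blue: '\u001b[34m',
  green: '\u001b[32m',
  white: '\u001b[39m', // resets to the default foreground color
  gray: '\u001b[90m',
  red: '\u001b[31m',
  yellow: '\u001b[33m',
};
```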
I also updated the `reporter.diagnostic` call for the coverage-threshold message in `test.js`, passing `'error'` as the level:
```js
if (actual < threshold) {
  harness.success = false;
  process.exitCode = kGenericUserError;
  reporter.diagnostic(
    nesting,
    loc,
    `Error: ${NumberPrototypeToFixed(actual, 2)}% ${name} coverage does not meet threshold of ${threshold}%.`,
    'error', // Level is set to 'error' so the message renders in red
  );
}
```
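To illustrate the effect, this is roughly what the spec reporter now emits for an `'error'`-level diagnostic (the numbers are made up, and the exact prefix symbol and spacing come from `reporterUnicodeSymbolMap`):

```js
// colors.red + indent + symbol + message + colors.white
const red = '\u001b[31m';   // colors.red
const reset = '\u001b[39m'; // colors.white (default foreground)
process.stdout.write(
  `${red}\u2139 Error: 79.90% line coverage does not meet threshold of 80%.${reset}\n`,
);
```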
After submitting the PR, I was excited to see it get approved almost immediately. However, it wasn't merged right away, as Node.js requires two approvals for a PR to land.
One interesting aspect of contributing to Node.js is the continuous integration (CI) process: a full CI run takes around four hours to complete. Unfortunately, one of the CI checks for my PR failed: `First commit message adheres to guidelines / lint-commit-message (pull_request)`. As the name suggests, this check validates whether the commit message adheres to Node.js's commit message guidelines.
Upon reviewing the CI output, I discovered that Node.js requires commit messages to follow a specific format, outlined in the `Contributing.md` document. Here's a snippet of the commit message requirements:
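Paraphrasing the guidelines from memory (the contributing docs have the authoritative wording): the first line should be an imperative summary prefixed with the affected subsystem and kept to about 50 characters, followed by a blank line, then a body wrapped at 72 characters, with metadata such as a `Fixes:` link at the end. For example:

```text
test_runner: add level-based diagnostic handling

The body explains the change in more detail, with lines
wrapped at 72 characters.

Fixes: https://github.com/nodejs/node/issues/55922
```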
Although I had seen this requirement earlier and formatted my commit message accordingly, I made a small mistake: I forgot to include a blank line after the first line. I reached out to a maintainer to ask if I could rewrite the commit message and force-push the change. After receiving approval, I pushed the corrected commit message.
However, after the new commit, the CI ran again, and this time, the Lint CI failed. To debug further, I decided to run the linting process locally, where I encountered the following errors:
The lint errors I encountered were minor, and I was able to fix them quickly. After addressing these issues, I pushed the changes again, and this time all CI checks passed successfully for my PR.
Subsequently, a maintainer requested a change: removing the `debug` level. They mentioned that there are alternative ways to produce debug output and provided guidance on how to proceed:
I made the requested changes and pushed the updated code.
Later, I received another request from the maintainer to update the documentation and write a test case for the changes to ensure the PR could land. Fortunately, they provided clear directions on where to add the documentation and tests:
Updating the Documentation
I started with the documentation updates and navigated to `doc/api/test.md`, as the maintainer suggested. The document was extensive, so I searched for keywords like "diagnostic" and found the relevant section for `Event: 'test:diagnostic'`:
I wasn't sure if this was the correct place to update the documentation, so I messaged the maintainer in the PR thread to confirm my proposed changes. Here’s the exact message I sent:
The maintainer responded, confirming that this was the right place for the updates but requested that I remove mentions of color and the default values. After updating the documentation accordingly and getting approval, I pushed the changes.
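For context, the documented event data ended up looking roughly like this (paraphrased, not the exact wording that landed; the other fields were already documented, and I added the `level` entry):

```md
### Event: 'test:diagnostic'

* `data` {Object}
  * `message` {string} The diagnostic message.
  * `nesting` {number} The nesting level of the test.
  * `level` {string} The severity level of the diagnostic message.
```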
Adding Test Cases
Next, I moved on to adding test cases, which turned out to be the most challenging part of this PR. The maintainer directed me to the `/test/parallel` directory to locate the appropriate file for adding tests. However, I was surprised to find 3,567 test files in this directory:
The contributing guide mentioned that if you're unsure where to add a test, it can temporarily go in `/test/parallel`. However, I noticed that many tests had never been moved out of this directory, making it difficult to determine their purpose or scope:
I spent considerable time exploring various files and eventually found two tests related to the coverage threshold error. While these tests verified that output was being printed to the stream, they did not cover colored output. I messaged the maintainer to ask where to add tests for colored output and referenced these files:
The maintainer suggested adding the test in `test-runner-output.mjs`, noting that it contained snapshot tests, some of which exercised colored output:
I reviewed the `test-runner-output.mjs` file but struggled to understand it. To clarify, I researched snapshot testing and found this explanation:

> Snapshot tests capture the rendered output of a component, function, or application at a specific point in time and store it in a "snapshot" file. Future test runs compare the output against the saved snapshot to ensure that it has not unexpectedly changed.
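A minimal sketch of that idea (this is my own illustration, not Node core's actual snapshot harness, which lives under `test/common`):

```js
const fs = require('node:fs');
const path = require('node:path');
const assert = require('node:assert');

function assertSnapshot(name, actual) {
  const file = path.join('snapshots', `${name}.snapshot`);
  if (process.env.UPDATE_SNAPSHOTS || !fs.existsSync(file)) {
    // First run (or explicit refresh): record the current output.
    fs.mkdirSync(path.dirname(file), { recursive: true });
    fs.writeFileSync(file, actual);
  } else {
    // Subsequent runs: fail if the output drifts from the recorded snapshot.
    assert.strictEqual(actual, fs.readFileSync(file, 'utf8'));
  }
}

// Usage: assertSnapshot('reporter-output', capturedReporterOutput);
```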
Using this understanding, I identified a test checking colors in the snapshot. After analyzing the snapshot file, I confirmed it included color information. I wrote a new test to verify that the ANSI escape sequence for red (`\u001b[31m`) was present in the snapshot. While the test passed locally, it did not create a snapshot.
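Conceptually, the check I wrote boiled down to something like this (a simplified sketch rather than the exact test from the PR):

```js
const assert = require('node:assert');

// '\u001b[31m' is the ANSI escape sequence for red foreground text.
const RED = '\u001b[31m';

function assertRedError(output) {
  assert.ok(
    output.includes(RED),
    'expected the coverage threshold error to be printed in red',
  );
}
```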
I suspected I needed to pass a flag to enable snapshot creation. Using `tools/test.py --help`, I discovered the `--snapshot` flag and ran the test again. Despite the flag, no snapshot was created, nor did I receive any errors. Further investigation revealed that I could use a locally built Node.js binary to run the tests:
Using the locally built Node.js binary, I discovered the test was being skipped because Windows lacks pseudo-terminal support:
I messaged the maintainer, explaining my test implementation, references, and the issue I encountered:
The maintainer advised modifying the test to explicitly require pseudo-terminals, ensuring it wouldn't run on unsupported systems:
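From what I remember, the harness in `test-runner-output.mjs` lets each entry in its tests array opt into a pseudo-terminal and skips such entries where PTYs aren't available. My entry looked roughly like this (the fixture name here is illustrative, not the exact file from the PR):

```js
const tests = [
  {
    name: 'test-runner/output/coverage-error-color.mjs', // hypothetical fixture
    tty: true, // run under a pseudo-terminal so the reporter emits ANSI colors
  },
];
```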
I implemented the test accordingly. Initially, I printed the output for debugging purposes, verifying that the error message appeared in red:
Afterward, I committed the final test and pushed the changes. The CI ran again, and all checks passed successfully. However, the maintainer requested another update to the documentation, asking me to remove a few additional details. I followed their instructions, made the changes, and committed them.
Final Thoughts
Contributing to Node.js has been a transformative experience. From navigating CI workflows to writing snapshot tests, every challenge taught me something new. The opportunity to collaborate with experienced developers and interact with maintainers was invaluable. This journey reinforced my willingness to learn and adapt, and I look forward to contributing more to open-source projects in the future.