Hi everyone! After exactly 365 days of very intensive development, I'm extremely happy to unveil the first stable release of Yarn 2. In this post I will explain what this release will mean for our community. Buckle up!
If you're interested in knowing more about what will happen to Yarn 1, keep reading - we detail our plans later in this post: Future Plans. If you just want to start right now with Yarn 2, check out the Getting Started or Migration guides.
Release Overview
Describing this release is particularly difficult - it contains core, fundamental changes, shipped together with new features born from our own usage.
Highlights
- The output got redesigned for improved readability
- Our CLI commands (yarn add, ...) are now aware of workspaces
- Running yarn install can be made optional on a per-repo basis
- A safer npx counterpart called yarn dlx to run one-shot tools
- Run commands on all workspaces with yarn workspaces foreach
- Packages can be modified in-place through the patch: protocol
- Local packages can be referenced through the new portal: protocol
- A new workflow has been designed to efficiently release workspaces
- Workspaces can now be declaratively linted and autofixed
But also...
- Package builds are now only triggered when absolutely needed
- Package builds can now be enabled or disabled on a per-package basis
- Scripts now execute within a normalized shell
- Peer dependencies now work even through yarn link
- The lockfile is now proper YAML
- The codebase is now full TypeScript
- Yarn can now be extended through plugins
Breaking changes...
- Configuration settings have been normalized
- Packages must respect their boundaries
- Bundle dependencies aren't supported anymore
- Packages are stored in read-only archives
Those highlights are only a subset of all the changes and improvements; a more detailed changelog can be found here, and the upgrade instructions are available here.
Frequently Asked Questions
Who should we thank for this release?
A significant amount of work has been done by larixer from SysGears, who crawled deep into the engine with the mission to make the transition to Yarn 2 as easy as possible. In particular he wrote the whole node_modules compatibility layer, which I can tell you is no easy feat!
My thanks also go to everyone who spontaneously joined us for a week or a month during the development. In particular embraser01 for the initial Windows support, bgotink for typing our filesystem API, deini for his contributions to the CLI, and Daniel for his help on the infrastructure migration.
This work couldn't have been possible without the support from many people from the open-source community - I'm thinking in particular of Nicolò from Babel and Jordan from Browserify, but they're far from being the only ones: the teams of Gatsby, Next, Vue, Webpack, Parcel, Husky, ... your support truly made all the difference in the world.
And finally, the project lead and design architect for Yarn 2 has been yours truly, Maël Nison. My time was sponsored in large part by Datadog, which is a super dope place to develop JS (and which is hiring!), and by my fiancé and our cats. Never forget that behind all open-source projects are maintainers and their families.
How easy will it be to migrate to Yarn 2?
Thanks to our beta testers and the general support of the ecosystem we've been able to soften much of the pain associated with such a major upgrade. A Migration Guide is available that goes into more detail, but generally speaking, as long as you use the latest versions of your tools (ESLint, Babel, TypeScript, Gatsby, etc), things should be fine.
One particular caveat however: Flow and React-Native cannot be used at the moment under Plug'n'Play (PnP) environments. We're looking forward to working with their respective teams to figure out how to make our technologies compatible. In the meantime you can choose to remain on Yarn 1 for as long as you need, or to use the node_modules plugin, which aims to provide a graceful degradation path for a smoother upgrade (note that it's still a work in progress - expect dragons). More details here.
If you don't want to upgrade all of your projects, just run yarn policies set-version ^1 in the repositories that need to stay on Yarn 1, and commit the result. Yarn will always prefer the checked-in binaries over the global ones, making it the best way to ensure that everyone in your team shares the exact same release!
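For illustration, the whole operation boils down to a couple of commands. Treat this as a sketch: the exact files Yarn 1 creates (typically a .yarnrc entry and a script under .yarn/releases) can vary slightly between versions.
$ cd my-legacy-project
$ yarn policies set-version ^1
$ git add .yarnrc .yarn/releases
$ git commit -m "Pin this repository to Yarn 1"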
What will happen to the legacy codebase?
Yarn 1.22 will be released next week. Once done, the 1.x branch will officially enter maintenance mode - meaning that it won't receive further releases from me except when absolutely required to patch vulnerabilities. New features will be developed exclusively against Yarn 2. In practical terms:
- The classic repository (yarnpkg/yarn) will move over to yarnpkg/classic to reflect its maintenance status. It will be kept open for the time being, but we'll likely archive it in a year or two.
- The modern repository will not be renamed into yarnpkg/yarn, as that would break a significant amount of backlink history. It will remain yarnpkg/berry for the foreseeable future.
- The old website will move over to classic.yarnpkg.com, and the new website (currently next.yarnpkg.com) will be migrated to the main domain name.
- The yarn package on npm will not change; we will distribute further versions using the new yarn set version command.
We expect most of those changes to be completed by February 1, 2020.
In Depth
CLI Output
Back when Yarn was released its CLI output was a good step forward compared to other solutions (plus it had emojis! 🧶), but some issues remained. In particular lots of messages were rather cryptic, and the colours were fighting against the content rather than working with it. Armed with this experience, we decided to try something different for Yarn 2:
Almost all messages now have their own error codes that can be searched within our documentation. Here you'll find comprehensive explanations of the ins and outs of each message - including suggested fixes. The colours are now used to support the important parts of each message, usually the package names and versions, rather than on a per-line basis.
We expect some adjustments to be made during the following months (in particular with regard to colour blindness accessibility), but over time I think you'll come to love this new display!
Workspace-aware CLI
Working with workspaces can sometimes be overwhelming. You need to keep the state of your whole project in mind when adding a new dependency to one of your workspaces. "Which version should I use? What's already used by my other workspaces?", etc.
Yarn now facilitates the maintenance of such setups through various means:
- yarn up <name> will upgrade a package in all workspaces at once
- yarn add -i <name> will offer to reuse the same version as the ones used by your other workspaces (and some other choices)
- The version plugin will give you a way to check that all the relevant workspaces are bumped when one of them is released again.
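For instance, assuming a monorepo where several workspaces depend on lodash (the package name is purely illustrative), the first two commands above would be used like this:
$ yarn up lodash      # upgrade lodash in every workspace that depends on it
$ yarn add -i lodash  # interactively pick a version, reusing what other workspaces already use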
Those changes highlight the new experience that we want to bring to Yarn: the tool becomes an ally rather than a burden.
Zero-Installs
While not a feature in itself, the term "Zero Install" encompasses a lot of Yarn features tailored around one specific goal - to make your projects as stable and fast as possible by removing the main source of entropy from the equation: Yarn itself.
To make it short, because Yarn now reads the vendor files directly from the cache, if the cache becomes part of your repository then you never need to run yarn install again. It has a repository size impact, of course, but on par with the offline mirror feature from Yarn 1 - very reasonable.
For more details (such as "why is it different from checking in the node_modules directory"), refer to this documentation page.
New Command: yarn dlx
Yarn 2 introduces a new command called yarn dlx (dlx stands for download and execute) which basically does the same thing as npx in a slightly less dangerous way. Since npx is meant to be used for both local and remote scripts, there is a decent risk that a typo could open the door to an attacker:
$ npx serv # Oops, should have been "serve"
This isn't a problem with dlx, which exclusively downloads and executes remote scripts - never local ones. Local scripts are always runnable through yarn run or directly by their name:
$ yarn dlx terser my-file.js
$ yarn run serve
$ yarn serve
New Command: yarn workspaces foreach
Running a command over multiple repositories is a relatively common use case, and until now you needed an external tool in order to do it. This isn't the case anymore as the workspace-tools plugin extends Yarn, allowing you to do just that:
$ yarn workspaces foreach run build
The command also supports options to control the execution which allow you to tell Yarn to follow dependencies, to execute the commands in parallel, to skip workspaces, and more. Check out the full list of options here.
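To give a concrete flavour of those options, here are two hedged examples (check yarn workspaces foreach --help for the exact flags available in your version):
$ yarn workspaces foreach --topological run build  # build dependencies before their dependents
$ yarn workspaces foreach --parallel run test      # run the tests of all workspaces concurrently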
New Protocol: patch:
Yarn 2 features a new protocol called patch:, which can be used whenever you need to apply changes to a specific package in your dependency tree. Its format is similar to the following:
{
  "dependencies": {
    "left-pad": "patch:left-pad@1.3.0#./my-patch.patch"
  }
}
Together with the resolutions field, you can even patch a package located deep within your dependency tree. And since the patch: protocol is just another data source, it benefits from the same mechanisms as all other protocols - including caching and checksums!
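For example, here's a sketch of what that combination could look like when the package to fix is a transitive dependency (the package, version, and patch path are illustrative):
{
  "resolutions": {
    "left-pad": "patch:left-pad@1.3.0#./my-patch.patch"
  }
}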
New Protocol: portal:
Yarn 2 features a new protocol called portal:. You can see portal: as a package counterpart of the existing link: protocol. Where the link: protocol is used to tell Yarn to create a symlink to any folder on your local disk, the portal: protocol is used to create a symlink to any package folder.
{
  "dependencies": {
    "@my/app": "link:./src",
    "eslint-plugin-foo": "portal:./pkgs/eslint-plugin-foo"
  }
}
So what's the difference you say? Simple: portals follow transitive dependencies, whereas links don't. Even better, portals properly follow peer dependencies, regardless of the location of the symlinked package.
Workspace Releases
Working with workspaces brings its own bag of problems, and scalable releases may be one of the largest ones. Most large open-source projects around here use Lerna or a similar tool in order to automatically keep track of changes applied to the workspaces.
When we started releasing the beta builds for Yarn 2, we quickly noticed we would be hitting the same walls. We looked around, but existing solutions seemed to have significant requirements - for example, using Lerna you would have to either release all your packages every time, or keep track yourself of which packages need to be released. Some of that work can be automated, but it becomes even more complex when you consider that a workspace being released may require unrelated packages to be released again too (for example because they use it in their prepack steps)!
To solve this problem, we've designed a whole new workflow available through a plugin called version. This workflow, documented here, allows you to delegate part of the release responsibility to your contributors. And to make things even better, it also ships with a visual interface that makes managing releases a walk in the park!
This workflow is still experimental, but it works well enough for us that we think it'll quickly prove an indispensable part of your toolkit when building large projects using workspaces.
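To give you a feel for the day-to-day usage, it roughly boils down to the following commands. This is a sketch - refer to the linked documentation for the authoritative list of commands and options:
$ yarn version minor --deferred  # record that this workspace will need a minor bump
$ yarn version check             # verify that every modified workspace has a release strategy
$ yarn version apply             # resolve the deferred bumps across the project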
Workspace Constraints
Workspaces quickly proved themselves to be one of our most valuable features. Countless projects and applications switched to them over the years. Still, they are not flawless. In particular, it takes a lot of care to keep the workspace dependencies synchronized.
Yarn 2 ships with a new concept called Constraints. Constraints offer a way to specify generic rules (using Prolog, a declarative programming language) that must be met in all of your workspaces for the validation to pass. For example, the following will prevent your workspaces from ever depending on underscore - and will be autofixable!
gen_enforced_dependency(WorkspaceCwd, 'underscore', null, DependencyType) :-
  workspace_has_dependency(WorkspaceCwd, 'underscore', _, DependencyType).
This other constraint will require that all your workspaces properly describe the repository field in their manifests:
gen_enforced_field(WorkspaceCwd, 'repository.type', 'git') :-
  workspace(WorkspaceCwd).
gen_enforced_field(WorkspaceCwd, 'repository.url', 'ssh://git@github.com/yarnpkg/berry.git') :-
  workspace(WorkspaceCwd).
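Once such rules are written (typically in a constraints.pro file at the root of the project), checking and autofixing them is a single command away - a quick sketch:
$ yarn constraints        # report the workspaces that violate the rules
$ yarn constraints --fix  # rewrite the manifests so that the enforced rules are satisfied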
Constraints are definitely one of our most advanced and powerful features, so don't fret if you need time to wrap your head around them. We'll follow up with blog posts to explore them in detail - watch this space!
Build Dependency Tracking
A recurrent problem in Yarn 1 was that native packages used to be rebuilt much more often than they should have been. For example, running yarn remove used to completely rebuild all packages in your dependency tree.
Starting from Yarn 2 we now keep track of the individual dependency trees for each package that lists postinstall scripts, and only run them when those dependency trees changed in some way:
➤ YN0000: ┌ Link step
➤ YN0007: │ sharp@npm:0.23.0 must be rebuilt because its dependency tree changed
➤ YN0000: └ Completed in 16.92s
➤ YN0000: Done with warnings in 21.07s
Per-Package Build Configuration
Yarn 2 now allows you to specify whether a build script should run or not on a per-package basis. At the moment the default is to run everything, so out of the box you can choose to disable the build for a specific package:
{
  "dependenciesMeta": {
    "core-js": {
      "built": false
    }
  }
}
If you instead prefer to disable everything by default, just toggle off enableScripts in your settings, then explicitly enable the built flag in dependenciesMeta.
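As a minimal sketch of that inverted setup - assuming core-js is the only package whose build you want to allow - you'd combine a setting in .yarnrc.yml with a dependenciesMeta entry:
# .yarnrc.yml
enableScripts: false

{
  "dependenciesMeta": {
    "core-js": {
      "built": true
    }
  }
}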
Normalized Shell
Back when Yarn 2 was still young, the very first external PR we received was about Windows support. As it turns out Windows users are fairly numerous, and compatibility is important to them. In particular they often face problems with the scripts field which is typically only tested on Bash.
Yarn 2 ships with a rudimentary shell interpreter that knows just enough to give you 90% of the language structures typically used in the scripts field. Thanks to this interpreter, your scripts will run just the same regardless of whether they're executed on OSX or Windows:
{
  "scripts": {
    "redirect": "node ./something.js > hello.md",
    "no-cross-env": "NODE_ENV=prod webpack"
  }
}
Even better, this shell allows us to build tighter integrations, such as exposing the command line arguments to the user scripts:
{
  "scripts": {
    "lint-and-build": "yarn lint \"$@\" && yarn build \"$@\""
  }
}
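In practice this means the arguments you pass on the command line flow into both sub-commands; for instance, with the manifest above:
$ yarn lint-and-build src/
# roughly equivalent to running: yarn lint src/ && yarn build src/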
Improved Peer Dependency Links
Because Node calls realpath on all required paths (unless --preserve-symlinks is on, which is rarely the case), peer dependencies couldn't work through yarn link as they were loaded from the perspective of the true location of the linked package on the disk rather than from its dependent.
Thanks to Plug'n'Play which can force Node to instantiate packages as many times as needed to satisfy all of their dependency sets, Yarn is now able to properly support this case.
New Lockfile Format
Back when Yarn was created, it was decided that the lockfile would use a format very similar to YAML but with a few key differences (for example without colons between keys and their values). It proved fairly annoying for third-party tools authors, as the parser was custom-made and the grammar was anything but standard.
Starting from Yarn 2, the format for both lockfile and configuration files changed to pure YAML:
"@yarnpkg/parsers@workspace:^2.0.0-rc.6, @yarnpkg/parsers@workspace:packages/yarnpkg-parsers":
  version: 0.0.0-use.local
  resolution: "@yarnpkg/parsers@workspace:packages/yarnpkg-parsers"
  dependencies:
    js-yaml: ^3.10.0
    pegjs: ^0.10.0
  languageName: unknown
  linkType: soft
TypeScript Codebase
While it might not directly impact you as a user, we've fully migrated from Flow to TypeScript. One huge advantage is that our tooling and contribution workflow is now easier than ever. And since we now allow building Yarn plugins, you'll be able to directly consume our types to make sure your plugins are safe between updates.
export interface Package extends Locator {
  version: string | null,
  languageName: string,
  linkType: LinkType,
  dependencies: Map<IdentHash, Descriptor>,
  peerDependencies: Map<IdentHash, Descriptor>,
  dependenciesMeta: Map<string, Map<string | null, DependencyMeta>>,
  peerDependenciesMeta: Map<string, PeerDependencyMeta>,
}
Modular Architecture
I recently wrote a whole blog post on the subject so I won't delve too much into it, but Yarn now follows a very modular architecture.
In particular, this means two interesting things:
You can write plugins that Yarn will load at runtime, and that will be able to access the true dependency tree as Yarn sees it; this allows you to easily build tools such as Lerna, Femto, Patch-Package, ...
You can have a dependency on the Yarn core itself and instantiate the classes yourself (note that this part is still a bit experimental as we figure out the best way to include the builtin plugins when operating under this mode).
To give you an idea, we've built a typescript plugin which will automatically add the relevant @types/ packages each time you run yarn add. Plugins are easy to write - we even have a tutorial - so give it a shot sometime!
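To give a taste of what a plugin looks like, here's a minimal sketch in the spirit of that tutorial. It only hooks into the end of the install; the @yarnpkg/core import and the afterAllInstalled hook reflect the 2.0 API as I remember it being documented, so treat the details as assumptions rather than a canonical example:
import {Plugin, Project} from '@yarnpkg/core';

const plugin: Plugin = {
  hooks: {
    // Called once the install (resolution, fetch, link) has fully completed
    afterAllInstalled(project: Project) {
      console.log(`Install finished for ${project.cwd}`);
    },
  },
};

export default plugin;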
Normalized Configuration
One very common piece of feedback we got regarding Yarn 1 was about our configuration pipeline. When Yarn was released we tried to be as compatible with npm as possible, which prompted us to, for example, try to read the npm configuration files, etc. This made it fairly difficult for our users to understand where settings should be configured.
initScope: yarnpkg
npmPublishAccess: public
yarnPath: scripts/run-yarn.js
In Yarn 2, the whole configuration has been revamped and everything is now kept within a single source of truth named .yarnrc.yml. The settings names have changed too in order to become uniform (no more experimental-pack-script-packages-in-mirror vs workspaces-experimental), so be sure to take a look at our shiny new documentation.
Strict Package Boundaries
Packages aren't allowed to require other packages unless they actually list them in their dependencies. This is in line with the changes we made back when we introduced Plug'n'Play more than a year ago, and we're happy to say that the work we've been doing with the top maintainers of the ecosystem has been fruitful. Nowadays, very few packages still have compatibility issues with this rule.
// Error: Something that got detected as your top-level application
// (because it doesn't seem to belong to any package) tried to access
// a package that is not declared in your dependencies
//
// Required package: not-a-dependency (via "not-a-dependency")
// Required by: /Users/mael/my-app/
require(`not-a-dependency`);
Deprecating Bundle Dependencies
Bundle dependencies are an artefact of another time, and all support for them has been dropped. The installs will gracefully degrade and download the packages as originally listed in the dependencies field.
{
  "bundleDependencies": [
    "not-supported-anymore"
  ]
}
Should you use bundle dependencies, please check the Migration Guide for suggested alternatives.
Read-Only Packages
Packages are now kept within their cache archives. For safety and to prevent cache corruptions, those archives are mounted as read-only drives and cannot be modified under normal circumstances:
const {writeFileSync} = require(`fs`);
const lodash = require.resolve(`lodash`);
// Error: EROFS: read-only filesystem, open '/node_modules/lodash/lodash.js'
writeFileSync(lodash, `module.exports = 42;`);
If a package needs to modify its own source code, it will need to be unplugged - either explicitly in the dependenciesMeta field, or implicitly by listing a postinstall script.
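A sketch of the explicit variant, assuming a hypothetical native-addon package that rewrites files within its own folder after installation:
{
  "dependenciesMeta": {
    "native-addon": {
      "unplugged": true
    }
  }
}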
Conclusion
Wow. That's a lot of material, isn't it? I hope you enjoy this update; it's the culmination of literally years of preparation and obstinacy.
Everything I believe package management should be, you'll find it here. The result is for sure more opinionated than it used to be, but I believe this is the way going forward - a careful planning of the long term user experience we want to provide, rather than a toolbox without directions.
As for me, working on Yarn has been an incredible experience. I'm simultaneously project manager, staff engineer, lead designer, developer relations, and user support. There are ups and downs, but every time I hear someone sharing their Yarn success story my heart is internally cheering a little bit. So do this: tell me what you like, and help fix what you don't.
Happy 2020!
Top comments (60)
This looks fabulous!
If you're interested in publishing officially as Yarn, you may consider setting up an org in your settings.
i.e. Angular
You're welcome to publish without that as well, of course.
Oh thanks, I didn't know about that! I'll take a look
For me, the monorepo release part is the most interesting. I will probably try it out in the pnpm monorepo. IMO, all the existing solutions are not scalable.
Zero installs are also cool but for me, installations are bearable with lockfiles.
Scripts that work on Windows are also very cool!
I love the philosophy behind plug and play and all the other things of yarn 2. Thanks for the work you've put in.
You've said:
generally speaking as long as you use the latest versions of your tools (ESLint, Babel, TypeScript, Gatsby, etc), things should be fine.
That made me curious and I've tried using yarn 2. But it was not really true. ESLint shareable configs don't work. Adding all plugins as dependencies in a consuming package of a shared config does not really make sense.
And even with all dependencies up to date I've been running into problem after problem.
Please don't take this the wrong way, I love PnP and Zero-Installs. But "things should be fine" is just not true :-D The upgrade path requires a lot of manual steps and is still incomplete.
Which version of ESLint do you use? It's only since ESLint 6 that plugins are loaded relative to the configuration that declares them. Otherwise, if you have the name of the shared config, maybe we can check whether they do something custom?
I thought our biggest problem would be resolve aliases in the webpack config, from reading the migration guide. They should be replaced by using the "link:" notation. But we use dynamic aliases based on environment variables. "~custom" will be replaced depending on what customer we want to target. That seems to not be possible (dynamically).
But I can't even get to that point. I've already fixed a lot of problems, but now I'm stuck at:
I've upgraded webpack (v4 not v5) and babel to the latest version and it still doesn't work.
Thanks for all the work you are putting in, I will continue debugging it tomorrow.
Note that this section of the migration guide isn't needed anymore (at least for Webpack), as we merged an improvement that makes the PnP plugin compatible with aliases. The website still needs to be updated though
github.com/arcanis/pnp-webpack-plu...
Note that we're relatively active on Discord, so feel free to pop in and join the talks - it's a good way to share feedback with our small community
Thanks for the fast reply, I'm using the latest ESLint version (6.8.0).
I'm trying to use my own eslint config (github.com/brummelte/eslint-config) with yarn 2.
I think the problem is that the extends directives (and probably parser too) are supposed to use require.resolve in order to be fully portable. Cf what I did here for the Gatsby config: github.com/gatsbyjs/gatsby/pull/20...
Thanks, I thought exactly the same and I've tried that. It still didn't work. But I will try again tomorrow to really make sure.
Impressive, awesome news!
I encountered an issue using the migration guide, don't know if this is the right place to ask:
When I check what version of resolve I have installed, I can see that while most packages use version 1.12 (so > 1.9), browser-resolve (which hasn't been updated in two years, and is used by jest-resolve) still depends on 1.1.7. Only browser-resolve uses resolve@1.1.7.
Does this mean I can't update to Yarn 2?
I have a doubt with PnP. How do you handle compatibility with frameworks like Angular? For example, a standard Angular project has an angular.json file with this content:
Note that $schema is pointing to the node_modules folder. With what URL should I change this line? Thanks.
Super late to this thread, but you can use the unplug command and point at that. There's a bug with Typescript 3.6.5 that breaks angular packages with pnp, however. They're working on adding support in v10 this summer, but that may be pushed back to v11.
Very excited for zero-install!
I want to add a private registry that uses an auth token. I found the yarn config docs and wrote one for my project. It worked great! But I don't want to commit the auth token. Is it possible to use an environment variable instead?
The config docs mention using env vars for simple top level properties, but I think this falls into the not-simple case. One alternative is to require all devs to configure their own global yarnrc. But then there's the build server. All our other private config values are managed with environment variables. It's not straightforward to add a yarnrc at build time. I think I could write the build to generate a yarnrc, retrieving the auth env var. Unfortunately I'm then maintaining many copies of the yarnrc. I doubt it changes often, but it will be easy for them to drift, and confusing when they do.
Any ideas?
A M A Z I N G !
I wonder if the "dependenciesMeta" will be able to serve as a "per-package-documentation".
I submitted an RFC about this sort of field a while ago:
github.com/yarnpkg/rfcs/pull/104/c...
Yaaay so excited to try it out asap 🤩
Got a question regarding the local per-project cache .yarn/cache: are those files hard linked or copies? Asking because I'm curious whether those files are duplicated in my laptop's backup or not.
That's what happens with node_modules, right? Those files are actual copies?
Hum this issue might be the answer, looks like it's been considered but got a little lost?
It's complicated: "they are copies, but". The buts:
- If you use zero-install, then yes, those files are duplicated as each repository will have them. For this reason zero-install is better suited to monorepos than projects with dozens of repositories.
- If you don't use zero-install, we still cache the archives into a global "mirror" before cloning them using the native clone operation (when supported, mostly OSX). For this reason you only pay the size cost once when relevant.
- If you don't use zero-install and don't use OSX, you can enable the global cache mode which will cause Yarn to use the global mirror as datastore (in which case you only pay the size cost once no matter what).
Note that all this is about the 2.x; the 1.x had worse characteristics.
Interesting, so I just stuff all repos into one then - just kidding.
Yea, thanks for the summary, I'll keep that in mind. Unfortunately I'm on macOS.
Are you still considering the hard link approach or is it too hard? Haha, sorry.
One year on and still most developers aren't using yarn v2?
Why?
In my opinion there are too many complexities and teething issues, which goes against one of Yarn's very own philosophies: "Developing Javascript projects should be easy".
Source: github.com/yarnpkg/yarn/issues/6953
Does anyone here have positive experiences with yarn 2? I'd love to hear them.
I haven't used it yet - but cross platform scripting sounds exactly like something that could make developing easier.
However, my wish would be a step beyond that - I wish yarn would use a package.js or package.ts file instead of package.json, and package.json would be generated just-in-time on the fly with a "yarn gen-pkg ..." command. That would be really handy for dev vs rls modes, and for writing cross-platform scripts in nodejs.
Well, most developers aren't using yarn v1 either :) Such adoption is a slow and steady process, and we're fairly happy with our adoption rate. As for positive experiences, I suggest you come to our Discord - we have some regular users that will surely be happy to share their experience.
The last 3 points before the conclusion should be the top 3; in fact they should just have their own section, "How we stop node packages being a footgun". Security is not the "last concern".
Given the state of node package security in recent times, even the crappiest alternative to npm in terms of features, fanciness and speed is much preferred if it actually solves some (if not all) high-profile security concerns so everyone can sleep at night.
It's certainly a good step forward in other areas, but I have to wonder what yarn does about "random dependency randomly building garbage", or why yarn doesn't just force all code that accesses "fs" and anything else into using a "safe" version (ie. error when reading anything outside, sending network packages, etc, unless explicitly granted). The "2" at the end feels more chilling than hype when major security concerns are not either addressed or their solutions clearly explained. If you do happen to do this, you've made a poor explanation of it.
We want to do that, but it's impossible (or at the very least a completely different project) unless Node first implements proper builtin sandboxes. Even if we were preventing accesses to require('fs'), there are a bazillion ways to escape any "security" measure we could have.
Personally I would be more than happy with a "secure mode" that simply breaks any sort of "fancy" code people might have and requires explicit "whitelist" approval in package.json and very clear looking code for anything sensitive such as imports, fs access, network access or global object access, etc.
Simple Checklist:
There's no need to be flexible when implementing something like this. People need to adapt to the secure system until we have a better "flexible" secure system not the other way around. I would drop even high profile packages if it meant peace of mind.
I don't see any sort of node "sandboxing" making any difference in this regard, and if the work in Deno is anything to go by, node level sandboxing is pretty stupid in practice without user-space assumptions.