If you’re working in the .net ecosystem, you need to grok msbuild. It's not exactly painless or elegant, but it is incredibly powerful. Creating a nuget package that applies settings and configuration files to consuming projects is the tip of a very deep iceberg.
I’m the author and owner of a similar code style/code quality package in a fairly large company and went through a very similar process, culminating in writing our own Roslyn-based analyzers to enforce various internal practices, supplanting the customized configuration of the Microsoft-provided analyzers. Also, we discovered that different projects need different levels of analysis. We’re less strict with, e.g., test projects than core infrastructure. But all projects need to have the same formatting and style.
That too can be easily done with one nuget using msbuild.
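For the curious, the mechanism is roughly this (the file and package names below are illustrative, not our actual package): anything a NuGet package ships under `buildTransitive/` as a `.props` or `.targets` file gets imported automatically by every consuming project, so a single package reference can push properties and analyzer configuration everywhere.

```xml
<!-- buildTransitive/MyCompany.CodingStandards.props (illustrative layout) -->
<Project>
  <PropertyGroup>
    <!-- Defaults applied to every consuming project -->
    <AnalysisLevel>latest-recommended</AnalysisLevel>
    <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
  </PropertyGroup>
  <ItemGroup>
    <!-- Apply a shared analyzer config shipped inside the package -->
    <GlobalAnalyzerConfigFiles Include="$(MSBuildThisFileDirectory)shared.globalconfig" />
  </ItemGroup>
</Project>
```

Bumping the package version then rolls any rule change out to every repo that references it.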
> If you’re working in the .net ecosystem, you need to grok msbuild.
Agreed, it makes a huge difference.
Sadly Visual Studio made that difficult from the start of .net, given its history of attempting to hide the .csproj files from developers and thus reduce their exposure to them. It's a real shame they decided to build Visual Studio like that and didn't change it for years.
Huh? You could always access the csproj by right clicking on the project.
Not quite. It required you to unload the project, then you could right click and edit. And then reload the project. And the load could take some time.
Now with sdk style projects you just click on the project and the .*proj file comes up and is editable.
As the other person stated, earlier versions of Visual Studio wouldn't let you directly edit the .csproj in the IDE. You were forced to "unload" it first. If you ran an extension to override this behaviour you'd end up with glitches. It's one of the main reasons I moved to Rider, given I much prefer to edit the .csproj manually in many cases as opposed to going through the GUI.
There are other little niggles. The Visual Studio GUI, for example, offers a "pre-build" and "post-build" window that's kinda hacky: if you have more than one line in either of the windows, the build is no longer able to surface the _actual_ error. So it's better to do this with separate target elements (that don't show up in this GUI) or just run a pure msbuild file (.proj) to perform these tasks.
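For anyone who hasn't made the jump, the same kind of post-build step expressed as a named target (the file names here are made up) fails the build with the real underlying error instead of a generic exit-code message:

```xml
<!-- In the .csproj: replaces the "post-build event" box with a proper target -->
<Target Name="CopyToolsAfterBuild" AfterTargets="Build">
  <!-- If Copy fails, MSBuild reports the actual file error,
       rather than "the command exited with code 1" -->
  <Copy SourceFiles="tools\run.cmd" DestinationFolder="$(OutDir)tools" />
</Target>
```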
Older visual studio was just a bad habit generator/crutch which babied a lot of developers who could have learned better practices (i.e. more familiarity with msbuild) if they had been forced to.
> But all projects need to have the same formatting and style. That too can be easily done with one nuget using msbuild.
That's like using a car for "traveling" 3 meters. Why not just use dotnet format + .editorconfig, they were created just for this purpose.
It doesn't scale as well across a large org.
We have hundreds of repos, thousands of projects. It is hard to ensure consistency at scale with a local .editorconfig in every repo.
Also, with a nuget I can do a lot more than what editorconfig allows. Our package includes custom analyzers, custom spell check dictionaries, and multiple analysis packages (i.e not just the Microsoft provided analyzers). We support different levels of analysis for different projects based on project type (with automatic type detection). Not to mention that coding practices evolve with time, tastes, and new language features. And those changes also need to be consistently applied.
With a package, all we need to do to apply all of the above consistently across the whole company is to bump a single version.
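A rough sketch of what per-project-type detection can look like in a shipped `.props` file (the naming convention and severities here are illustrative; ours are more involved):

```xml
<!-- Detect test projects by naming convention, unless the project opts out -->
<PropertyGroup Condition="'$(IsTestProject)' == ''">
  <IsTestProject Condition="$(MSBuildProjectName.EndsWith('.Tests'))">true</IsTestProject>
</PropertyGroup>

<!-- Stricter analysis everywhere except test projects -->
<PropertyGroup>
  <AnalysisMode Condition="'$(IsTestProject)' == 'true'">Recommended</AnalysisMode>
  <AnalysisMode Condition="'$(IsTestProject)' != 'true'">All</AnalysisMode>
</PropertyGroup>
```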
It's a combination of practices, some at develop-time and some at CI-time. The general goal is to have code as clean and standardized as possible as early as possible, especially on larger teams where human enforcement doesn't scale as much.
> Why not just use dotnet format + .editorconfig
And let the IDE take care of that. Pre-commit hook and it's all done.
They're talking about how to sync the .editorconfig if projects are not in a mono-repo.
While msbuild is powerful, I strongly believe it should have been a standard C# language build system instead of an XML-based one.
Any non-trivial thing to do is a pain to figure out if the documentation is not extensive enough.
I really love C#, but msbuild is one of the weak links to me; almost everything else is a joy to use.
I completely agree that it shouldn’t be XML. Then again, I worked with Gradle in the past, which is based on Groovy syntax plus DSL. And that didn’t feel good either (though I must admit that I knew less about Gradle than I do about msbuild). Perhaps the problem of designing a good build system is harder than it seems.
You could check out FAKE. It’s pretty popular in the F# community. While not C#, the terser syntax may be beneficial for a build DSL and you still have access to .NET APIs.
https://fake.build/
But you augment it with tools written in c# which is best of both worlds. Builds are defined declaratively and custom actions are defined in code. Not the horrible hybrid of eg ant or cmake.
I remember using nant back in 2010 or so. Lol those were the days.
I've met teams that strongly prefer Cake [1] and it seems well maintained.
Personally, I think there's too much baby in the MSBuild bathwater unfortunately and too much of the ecosystem is MSBuild to abandon it entirely. That said, I think MSBuild has improved a lot over the last few years. The Sdk-Style .csproj especially has been a great improvement that sanded a lot of rough edges.
[1] https://cakebuild.net/
I agree with you on MsBuild being powerful.
I often really hate certain technologies like MsBuild and use them begrudgingly for years, fighting with the tooling, right up until I decide once and for all to give it enough of my attention to properly learn, and then realise how powerful and useful it actually is!
I went through the same thing with webpack too.
MsBuild is far from perfect though. I often think about trying to find some sort of simple universal build system that I can use across all my projects regardless of the tech stack.
I’ve never really dug much into `make`… Maybe something like that is what I’m yearning for.
> I often really hate certain technologies like MsBuild and use them begrudgingly for years, fighting with the tooling, right up until I decide once and for all to give it enough of my attention to properly learn, and then realise how powerful and useful it actually is!
I had a similar experience with CMake. Note, I still hate the DSL, but what it can do, and what you nowadays actually need to do (or how you organize it) if you are writing a new project, can be relatively clean and easy to follow.
Not to say it's easy to get to that point, but I don't think anyone would really claim that it is.
I find this experience a lot with Microsoft technologies. People bemoan PowerShell, NT, DirectX, even C# itself, and other Windows APIs, but when you get to really learn them you start to miss them on Linux. I sometimes see a meme from beginner programmers lamenting how the world would be better if Windows was POSIX compliant, but once you've learned a bit about some of the Windows API calls, POSIX feels absolutely ancient. Some stuff is really dated, though, like the Win32 windowing APIs.
This is a good article and I appreciate the author sharing his ideas. But that screenshot showing an example of poorly written code. Man if someone in your team is writing code like that you have much more serious problems.

I understand the need for guardrails and standards, but when you go through the right process of hiring someone and giving an offer this should not happen. This is the equivalent of a law firm hiring a lawyer then adding a tool that checks their work when drafting documents making sure they don’t make mistakes. I’m not talking about complex compliance issues but fundamental knowledge a lawyer should have. The case can be made this is for junior developers, and I agree it can be useful, but there’s usually a path for junior developers that involves 1:1 mentorship before they start pushing critical code.

We do have standards and guidelines in my team, but most of them are nice-to-haves. We assume we are all professionals and trust each other’s work even when many times we disagree on design and coding style. Our effort and enforcement is testing, accountability and good documentation. We nudge for readable code. We have a guy that loves Regex and we let him use it if well documented.
> But that screenshot showing an example of poorly written code.
That screenshot looks like it was specifically written for the blog entry. (The project is called ConsoleApp1.)
I suspect the author didn't want to show their employer's proprietary code on their blog, and probably wanted to make a concise screenshot with multiple errors.
(Otherwise, they might have people who don't have a programming background occasionally writing non-production tools as part of a non-software-engineering job. This is quite common in many workplaces.)
I remember seeing at one job, to share a “token” that was in a byte array, they iterated the byte array and concatenated the values. It was supposed to be an internal “auth tool”/“sso” but was unusable in the php app I was trying to use it with because it couldn’t (or at least I wasn’t sure how to) convert the byte array back. I ended up writing a small Java console app to convert it for me.
isn't it[0] intentionally bad, so as to highlight the things .editorconfig might suggest to improve it?
[0] https://anthonysimmon.com/workleap-dotnet-coding-standards/w...
Author here. Thanks for the feedback, I really appreciate it.
The code in the screenshot was written poorly on purpose, purely for the needs of this blog post.
Developers make mistakes at any level of seniority. It's less likely to happen when you reach a certain proficiency in writing C# code, but it's still a possibility. Mistakes can also slip through the cracks at review time.
So these are definitely automated guardrails that don't require humans with specific knowledge to enforce them.
> This is the equivalent of a law firm hiring a lawyer then adding a tool that checks their work when drafting documents making sure they don’t make mistakes
I don't agree. A better fitting comparison would be if a law firm enables spell checkers and proofreads documents to verify they use the law firm's letterhead. Do you waste your time complaining whether the space should go left or right of a bracket?
I couldn't disagree more.
How do you expect junior programmers to become senior ones without help? Having automated guard-rails saves a large amount of your senior devs time by avoiding them having to pick such things up in code review, and you'll find the junior programmers absorb the rules in time and learn.
Several of the examples are nitpicking naming, this is exactly what should be automated. It's not like even experienced people won't accidentally use camelCase instead of PascalCase sometimes, or maybe accidentally snake_case something especially if they're having to mix C# back-end with JS frontend with different naming conventions.
Picking it up immediately in the IDE is a massive time-save for everyone.
The "There is an Async alternative" suggestion is a great Roslyn rule. Depending on the API, some of those async overloads might not even have existed in the past, e.g. JSON serialisation, so having something to prompt "Hey, there's a better way to do this!" is actually magical.
Unused local variables are less likely, but they still happen, especially if a branch later has been removed. Having it become a compiler error helps force the dev to clean up as they go.
The article does mention they only turn on “TreatWarningsAsErrors” in production builds.
It’s definitely a tough balance to strike. I go back and forth on this myself.
Maybe the happy medium is to have everything strictly enforced in CI, relatively relaxed settings during normal dev loop builds and then perhaps a pre-commit build configuration that forces/reminds you to do one production build before pushing… (which if you miss, just means you may end up with a failed CI build to fix…)
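That split can be expressed in a few lines of MSBuild; `ContinuousIntegrationBuild` is commonly set by CI pipelines (or pass `-p:TreatWarningsAsErrors=true` explicitly in the CI script instead):

```xml
<!-- Warnings stay warnings in the inner dev loop, become errors on CI -->
<PropertyGroup Condition="'$(ContinuousIntegrationBuild)' == 'true'">
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
```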
The original comment was about whether these things should be treated as errors during the local development process, or during CI for greenfield projects.
I deleted it after realizing that the article actually does address this. But I'm still relieved that I'm not the only one with the dilemma.
> which if you miss, just means you may end up with a failed CI build to fix…
Honestly as a developer if I miss this up until CI, that's on me. The important part is that these issues are still visible during the local development, even if as warnings, and that the developer knows (maybe after making that mistake once or twice :-)) that they can't just be ignored because they will fail down the road.
> Honestly as a developer if I miss this up until CI, that's on me. The important part is that these issues are still visible during the local development, even if as warnings, and that the developer knows that they can't just be ignored because they will fail down the road.
Yeah I agree. This has got me thinking a bit more actually about how to optimise build configurations much more deliberately. Dev loop builds vs “normal” (local) builds vs production builds.
I got into the habit of turning on TreatWarningsAsErrors in greenfield .NET projects, trying to be a disciplined developer… But often these warnings can be a distraction during fast iterations… I think I may change my policy…
At work, we use the .editorconfig of the .NET runtime, with slight modifications:
https://github.com/dotnet/runtime/blob/main/.editorconfig
This appears to be the OP / Workleap's editor config: https://github.com/workleap/wl-dotnet-codingstandards/blob/m...
It's probably a bit overkill for most shops, but you can actually write your own code fixes if you've got some common pattern:
https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/t...
These suggestions being immediately executable can dramatically improve compliance. I find myself taking things like range operator syntax even though I don't really prefer it simply because the tool does the conversion automatically for me.
I used to recommend editorconfig and better tools for .NET nearly ten years ago. I never seem to get hired anywhere that appreciates better tooling and sane processes. All to the detriment of everyone's productivity, no less.
Just kind of giving up at this point. They are perfectly fine with waiting an extra day for every developer to finish simple tasks that better tooling could have helped with and I am not even talking about AI. Better database tools, better code refactoring that catches bugs before they happen. Lots of simple things.
The trick isn't to convince, it's to just do.
Here's how I approached it for an org with 300 projects and 10k+ failures after adding the analyzers:
1. Add .editorconfig and analyzer anyway
2. Ignore all the failing analyzer rules in .editorconfig
That's your baseline. Even if you have to ignore 80% of rules, that's still 20% of rules now being enforced going forward, which puts a stake in the ground.
Even if the .editorconfig doesn't enforce much yet, it allows incremental progress.
Crucially, your build still passes, it can get through code review, and it doesn't need to change a huge amount of existing code, so you won't cause massive merge issues or git-blame headaches.
3. Over time, take a rule from the ignored list, clean up the code base to meet that rule, then un-ignore.
How often you do such "weeding", and whether you can get any help with it, is up to you, but it's no longer a blocker, it's not on any critical path, it's just an easy way to pay down some technical debt.
Eventually you might be able to convince your team of the value: when they have fewer merge conflicts because there are fewer "random" whitespace changes, or when they save time by getting to address and fix a problem in private rather than at PR time, etc.
Generally it's easier to ask forgiveness than permission. But you've got to also minimise the disruption when you introduce tooling. Make it easy for teammates to pick up the tooling, not a problem they now have to deal with.
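The baseline step above might look something like this in the `.editorconfig` (the rule IDs are just examples; pick them from your own failure list):

```ini
# .editorconfig
root = true

[*.cs]
# Rules with existing violations get parked, then un-ignored one at a time:
dotnet_diagnostic.CA1848.severity = none   # example: logging performance rule
dotnet_diagnostic.IDE0055.severity = none  # example: formatting rule
# Everything not listed here is enforced from day one.
```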
> I used to recommend editorconfig and better tools for .NET nearly ten years ago.
Languages/tools that are not configurable and just dish out the will of the maintainers are objectively superior. This is all a weird type of mandatory bikeshedding; you need to do it, but it doesn't add anything of value to the product. Everyone is going to have a distinct opinion because they earned their programming chops at some shop that did things in some weird way.
I can vouch for .editorconfig. I set it up at my current job (although not to the degree in this article).
The big problem we had was an old codebase, with a very inconsistent style, that had a lot of code written by junior developers and non-developers.
This resulted in a situation where, every time I had to work in an area of the code I hadn't seen before, the style was so different I had to refactor it just to understand it.
I currently somewhat wish CSharpier could also install (or modify, if we are wishing for ponies) an .editorconfig that matches its settings closely enough that someone with a habit of using the existing `dotnet format`, or who hasn't yet installed CSharpier's own IDE extensions, doesn't have a "bad time" or accidentally create a lot of commit churn.
Prettier was relatively easy to adopt because most styles at the time were just eslint configurations and auto-formatters were scarce before Prettier. .NET has a long history of auto-formatters and most of them speak .editorconfig, so some interop would be handy, even if the goal isn't "perfect" interop. Just enough to build a pit of success for someone's first or second PR in a project before they get to that part of the Readme that says "install this thing in VS or Rider" or actually start to pay attention to the Workspace-recommended extensions in VS Code.
Haven’t done much in C# since Claude Code has been available but I’ve found strict linting and style rules are very helpful for such agents when writing Go. I used to run a fairly strict and customized config with StyleCop etc; I wonder if something maybe more standardized like this will be more effective.
NuGet Audit is an odd one. I usually don’t want all devs to jump on fixing the latest vulnerability right away. We have a separate pipeline for resolving those issues.
I've actually changed my mind on this, if you're working in a project that doesn't have a ton of early-lifecycle v0 packages. If there is a lot of quick churn in your dependencies, yeah, you want to devote dedicated engineering resources to keeping these up-to-date and regression testing things.
If everything is pretty stable, it's nice to have each developer share the work with keeping things up-to-date and functional. Broad automated test coverage makes this a lot easier of course.
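For reference, the audit can also be tuned per project rather than switched off entirely; these NuGet properties exist in recent SDKs, but check your toolchain version supports them:

```xml
<PropertyGroup>
  <NuGetAudit>true</NuGetAudit>
  <!-- "all" also audits transitive packages; "direct" is the quieter option -->
  <NuGetAuditMode>all</NuGetAuditMode>
  <!-- Only flag advisories at or above this severity -->
  <NuGetAuditLevel>moderate</NuGetAuditLevel>
</PropertyGroup>
```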
Pretty long article with not a great deal of substance beyond what is mentioned early on. Would be interested to know how much input teams had in the rule configuration before this was foisted on them.
Author here. Even though we have different teams and products/services, there's still a baseline of "historical" code style and rule configuration at our company. Also, I personally explored the various codebases and reached out to several developers to get some feedback throughout the process.
The whole thing did not come as a surprise for most of us. Even so, for those who were not aware of it, the benefits were obvious: I captured screenshots of improvements highlighted by the warnings in their codebases after installing an alpha version of the package.
Adoption was quite smooth and easy at first. It definitely wasn't pushed onto teams; for several weeks/months we waited until enough repos were onboarded and we had enough feedback to be confident it would be beneficial for the whole company to use this.
There is quite useful content in there, but the writing style makes it very annoying to read, it feels as if the original text went through some kind of LLM filter and made it corporately soulless, as seems to be the good practice now.
Author again here. I'm sorry to hear this. I wrote the whole thing in a mix of French and English (mostly English), and yes, it went through an LLM, but only to correct mistakes and translate French parts. I'm limited in my ability to write beautiful/delightful blog posts as English is not my main language.
Using an LLM wasn't about rewriting the whole thing, many sentences were left as before, so the style is definitely mine. It's okay if you don't like it, I'm trying to get better at it!
Plenty of substance in there for me. I’ve been building with dotnet since it existed and still learned a couple of new techniques/ideas from this article.
If you’re working in the .net ecosystem, you need to grok msbuild. Is not exactly painless or elegant, but is incredibly powerful. Creating a nuget package that applies settings and configuration files to consuming projects is the tip of a very deep iceberg.
I’m the author and owner of a similar code style/code quality package in a fairly large company and went through a very similar process, culminating with writing our own Roslyn-based analyzers to enforce various internal practices to supplant the customized configuration of the Microsoft provided analyzers. Also, we discovered that different projects need different level of analysis. We’re less strict with e.g test projects than core infrastructure. But all projects need to have the same formatting and style. That too can be easily done with one nuget using msbuild.
> If you’re working in the .net ecosystem, you need to grok msbuild.
Agreed, it makes a huge difference.
Sadly Visual Studio made that difficult from the start of .net, given its history with attempting to hide the .csproj files from developers and thus reduce their exposure to it. Its a real shame they decided to build visual studio like that and didn't change it for years.
Huh? You could always access the csproj by right clicking on the project.
Not quite. It required you to unload project, then you could right click and edit. And then reload project. And the load could take some time.
Now with sdk style project you just click on the project and the .*proj file comes up and is editable.
as the other person stated, earlier versions of Visual Studio wouldn't let you directly edit the .csproj in the IDE. You were forced to "unload" it first. If you ran an extension to override this behaviour you'd end up with glitches. Its one of the main reasons I moved to Rider given I much prefer to edit the .csproj manually in many cases as opposed to going through the GUI.
There's other little niggles, the Visual Studio gui for example offers a "pre-build" and "post-build" window that's kinda hacky. If you have more than one line in either of the windows the build no longer is able to push the _actual_ error back into the build. So its better to do this with separate target elements (that don't show up in this gui) or just run a pure msbuild file (.proj) to perform these tasks.
Older visual studio was just a bad habit generator/crutch which babied a lot of developers who could have learned better practices (i.e. more familiarity with msbuild) if they had been forced to.
>But all projects need to have the same formatting and style.That too can be easily done with one nuget using msbuild.
That's like using a car for "traveling" 3 meters. Why not just use dotnet format + .editorconfig , they were created just for this purpose.
It doesn’t scale as well across a large org.
We have hundreds of repos, thousands of projects. It is hard to ensure consistency at scale with a local .editorconfig in every repo.
Also, with a nuget I can do a lot more than what editorconfig allows. Our package includes custom analyzers, custom spell check dictionaries, and multiple analysis packages (i.e not just the Microsoft provided analyzers). We support different levels of analysis for different projects based on project type (with automatic type detection). Not to mention that coding practices evolve with time, tastes, and new language features. And those changes also need to be consistently applied.
With a package, all we need to do to apply all of the above consistently across the whole company is to bump a single version.
> Why not just use dotnet format + .editorconfig
And let the IDE take care of that. Pre-commit Hook and it's all done.
They're talking about how to sync the .editorconfig if projects are not in a mono-repo.
It's a combination of practices, some at develop-time and some at CI-time. The general goal is to have code as clean and standardized as possible as early as possible, especially on larger teams where human enforcement doesn't scale as much.
While msbuild is powerful, I strongly believe it should have been a standard C# language build system instead of a XML-based one.
Any non-trivial thing to do is a pain to figure out if the documentation is not extensive enough.
I really love C#, but msbuild is one of the weak links to me, almost everything else is a joy to use.
I completely agree that it shouldn’t be XML. Then again, I worked with Gradle in the past, which is based on Groovy syntax plus DSL. And that didn’t feel good either (though I must admit that I knew less about Gradle than I do about msbuild). Perhaps the problem of designing a good build system is harder than it seems.
You could check out FAKE. It’s pretty popular in the F# community. While not C#, the terser syntax may be beneficial for a build DSL and you still have access to .NET APIs.
https://fake.build/
But you augment it with tools written in c# which is best of both worlds. Builds are defined declaratively and custom actions are defined in code. Not the horrible hybrid of eg ant or cmake.
I remember using nant back in 2010 or so. Lol those were the days.
I've met teams that strongly prefer Cake [1] and it seems well maintained.
Personally, I think there's too much baby in the MSBuild bathwater unfortunately and too much of the ecosystem is MSBuild to abandon it entirely. That said, I think MSBuild has improved a lot over the last few years. The Sdk-Style .csproj especially has been a great improvement that sanded a lot of rough edges.
[1] https://cakebuild.net/
I agree with you on MsBuild being powerful.
I often really hate certain technologies like MsBuild and use them begrudgingly for years, fighting with the tooling, right up until I decide once and for all to give it enough of my attention to properly learn, and then realise how powerful and useful it actually is!
I went through the same thing with webpack too.
MsBuild is far from perfect though. I often think about trying to find some sort of simple universal build system that I can use across all my projects regardless of the tech stack.
I’ve never really dug much into `make`… Maybe something like that is what I’m yearning for.
> I often really hate certain technologies like MsBuild and use them begrudgingly for years, fighting with the tooling, right up until I decide once and for all to give it enough of my attention to properly learn, and then realise how powerful and useful it actually is!
I had a similar expreience with Cmake. Note, I still hate the DSL but what it can do and what you nowadays actually need to do (or how you organize it) if you are writing a new project can be relatively clean and easy to follow.
Not to say its easy to get to that point, but I don't think anyone really would say that.
I find this experience a lot with a lot of Microsoft technologies. People bemoan powershell, NT, DirectX, even C# itself, and other Windows APIs but when you get to really learn them you start to miss them on Linux. I sometimes see a meme from beginner programmers lamenting how the world would be better if Windows was POSIX compliant but once you've learned a bit about some of the Windows API calls, POSIX feels absolutely ancient. Some stuff is really dated like Win32 windowing stuff
This is a good article and I appreciate the author sharing his ideas. But that screenshot showing an example of poorly written code. Man if someone in your team is writing code like that you have much more serious problems. I understand the need for guardrails and standards, but when you go through the right process of hiring someone and giving an offer this should not happen. This is the equivalent of a law firm hiring a lawyer then adding a tool that checks their work when drafting documents making sure they don’t make mistakes. I’m not talking about complex compliance issues but fundamental knowledge a lawyer should have. The case can be made this is for junior developers, and I agree it can be useful, but there’s usually a path for junior developers that involves 1:1 mentorship before they start pushing critical code. We do have standards and guidelines in my team, but most of them are nice-to-haves. We assume we are all professionals and trust each other’s work even when many times we disagree on design and coding style. Our effort and enforcement is testing, accountability and good documentation. We nudge for readable code. We have a guy that loves Regex and we let him use it if well documented.
> But that screenshot showing an example of poorly written code.
That screenshot looks like it was specifically written for the blog entry. (The project is called ConsoleApp1.)
I suspect the author didn't want to show their employer's proprietary code on their blog, and probably wanted to make a concise screenshot with multiple errors.
(Otherwise, they might have people who don't have a programming background occasionally writing non-production tools as part of a non-software-engineering job. This is quite common in many workplaces.)
I remember seeing at one job, to share a “token” that was in a byte array, they iterated the byte array and concatenated the values. It was supposed to be an internal “auth tool”/“sso” but was unusable in the php app I was trying to use it with because it couldn’t (or at least I wasn’t sure how to) convert the byte array back. I ended up writing a small Java console app to convert it for me.
isn't it[0] intentionally bad, so as to highlight the things .editorconfig might suggest to improve it?
[0] https://anthonysimmon.com/workleap-dotnet-coding-standards/w...
Author here. Thanks for the feedback, I really appreciate.
The code in the screenshot was written poorly on purpose, only for the need of this blog post.
Developers make mistakes at any level of seniority. It's less likely to happen when you reach a certain proficiency in writing C# code, but it's still a possibility. Mistakes can also go through some cracks at review time.
So these are definitely automated guardrails that don't require humans with specific knowledge to enforce them.
> This is the equivalent of a law firm hiring a lawyer then adding a tool that checks their work when drafting documents making sure they don’t make mistakes
I don't agree. A better fitting comparison would be if a law firm enables spell checkers and proofreads documents to verify they use the law firm's letterhead. Do you waste your time complaining whether the space should go left or right of a bracket?
I couldn't disagree more.
How do you expect junior programmers to become senior ones without help? Having automated guard-rails saves a large amount of your senior devs time by avoiding them having to pick such things up in code review, and you'll find the junior programmers absorb the rules in time and learn.
Several of the examples are nitpicking naming, this is exactly what should be automated. It's not like even experienced people won't accidentally use camelCase instead of PascalCase sometimes, or maybe accidentally snake_case something especially if they're having to mix C# back-end with JS frontend with different naming conventions.
Picking it up immediately in the IDE is a massive time-save for everyone.
The "There is an Async alternative" is a great roslyn rule. Depending on the API, some of those async overloads might not even have existed in the past, e.g. JSON serialisation, so having something to prompt "Hey, there's a better way to do this!" is actually magical.
Unused local variables are less likely, but they still happen, especially if a branch later has been removed. Having it become a compiler error helps force the dev to clean up as they go.
The article does mention they only turn on “TreatWarningAsErrors” in production builds.
It’s definitely a tough balance to strike. I go back and forth on this myself.
Maybe the happy medium is to have everything strictly enforced in CI, relatively relaxed settings during normal dev loop builds and then perhaps a pre-commit build configuration that forces/reminds you to do one production build before pushing… (which if you miss, just means you may end up with a failed CI build to fix…)
The original comment was about whether these things should be treated as errors during the local development process, or during CI for greenfield projects.
I deleted it after realizing that the article actually does address this. But I'm still relieved that I'm not the only one with the dillema.
> which if you miss, just means you may end up with a failed CI build to fix…
Honestly as a developer if I miss this up until CI, that's on me. The important part is that these issues are still visible during the local development, even if as warnings, and that the developer knows (maybe after making that mistake once or twice :-)) that they can't just be ignored because they will fail down the road.
> Honestly as a developer if I miss this up until CI, that's on me. The important part is that these issues are still visible during the local development, even if as warnings, and that the developer knows that they can't just be ignored because they will fail down the road.
Yeah I agree. This has got me thinking a bit more actually about how to optimise build configurations much more deliberately. Dev loop builds vs “normal” (local) builds vs production builds.
I got into the habit of turning on TreatWarningsAsErrors in greenfield .NET projects, trying to be a disciplined developer… But often these warnings can be a distraction during fast iterations… I think I may change my policy…
At work, we use the .editorconfig of the .NET runtime, with slight modifications:
https://github.com/dotnet/runtime/blob/main/.editorconfig
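For anyone curious what "slight modifications" can look like: later sections in an .editorconfig override earlier ones, so you can paste the runtime's file in and then adjust individual severities below it. The rule ID and option here are just illustrative examples, not our actual tweaks:

```ini
# Example overrides appended after the runtime's settings
[*.cs]
# relax the "add braces to single-line blocks" rule
dotnet_diagnostic.IDE0011.severity = none
# prefer 'var' when the type is apparent
csharp_style_var_when_type_is_apparent = true:suggestion
```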
This appears to be the OP / Workleap's editor config. https://github.com/workleap/wl-dotnet-codingstandards/blob/m...
It's probably a bit overkill for most shops, but you can actually write your own code fixes if you've got some common pattern:
https://learn.microsoft.com/en-us/dotnet/csharp/roslyn-sdk/t...
These suggestions being immediately executable can dramatically improve compliance. I find myself adopting things like range operator syntax, even though I don't really prefer it, simply because the tool does the conversion automatically for me.
I used to recommend editorconfig and better tools for .NET nearly ten years ago. I never seem to get hired anywhere that appreciates better tooling and sane processes. All to the detriment of everyone's productivity, no less.
Just kind of giving up at this point. They are perfectly fine with waiting an extra day for every developer to finish simple tasks that better tooling could have helped with and I am not even talking about AI. Better database tools, better code refactoring that catches bugs before they happen. Lots of simple things.
The trick isn't to convince, it's to just do.
How I approached it for an org with 300 projects and 10k+ failures after adding the analyzer.
1. Add .editorconfig and analyzer anyway
2. Ignore all the failing analyzer rules in .editorconfig
That's your baseline. Even if you have to ignore 80% of rules, that's still 20% of rules now being enforced going forward, which puts a stake in the ground.
Even if the .editorconfig doesn't enforce much yet, it allows incremental progress.
Crucially, your build still passes, it can get through code review, and it doesn't need to change a huge amount of existing code, so you won't cause massive merge issues or git-blame headaches.
3. Over time, take a rule from the ignored list, clean up the code base to meet that rule, then un-ignore.
How often you do such "weeding", and whether you can get any help with it, is up to you, but it's no longer a blocker, it's not on any critical path, it's just an easy way to pay down some technical debt.
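The ignore-then-un-ignore baseline from steps 1–3 can be sketched like this (the rule IDs are placeholders for whatever actually fails in your code base):

```ini
# Baseline sketch: silence currently-failing rules so the build still
# passes, then delete entries one at a time as each rule is cleaned up.
[*.cs]
dotnet_diagnostic.CA1062.severity = none   # e.g. null-check public args
dotnet_diagnostic.CA2007.severity = none   # e.g. missing ConfigureAwait
# everything not listed here is enforced from day one
```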
Eventually you might be able to convince your team of the value: when they have fewer merge conflicts because there are fewer "random" whitespace changes, when they save time by catching and fixing a problem privately rather than in PR review, etc.
Generally it's easier to ask forgiveness than permission. But you've got to also minimise the disruption when you introduce tooling. Make it easy for teammates to pick up the tooling, not a problem they now have to deal with.
> I used to recommend editorconfig and better tools for .NET nearly ten years ago.
Languages/tools that are not configurable and just dish out the will of the maintainers are objectively superior. This is all a weird type of mandatory bikeshedding; you need to do it, but it doesn't add anything of value to the product. Everyone is going to have a distinct opinion because they earned their programming chops at some shop that did things in some weird way.
.editorconfig is an anti-solution.
I can vouch for .editorconfig. I set it up at my current job (although not to the degree in this article.)
The big problem we had was an old codebase, with a very inconsistent style, that had a lot of code written by junior developers and non-developers.
This resulted in a situation where, every time I had to work in an area of the code I hadn't seen before, the style was so different I had to refactor it just to understand it.
.editorconfig (with dotnet-format) fixed this.
Is there a 'prettier' equivalent for code formatting? In my opinion, it's the only thing missing for a truly scalable codebase.
CSharpier is pretty good for a prettier like feel: https://csharpier.com/
I currently somewhat wish CSharpier could also install (or modify, if we are wishing for ponies) an .editorconfig that matches its settings closely enough that someone in the habit of using the existing `dotnet format`, or who hasn't yet installed CSharpier's own IDE extensions, doesn't have a "bad time" or accidentally create a lot of commit churn.
Prettier was relatively easy to adopt because most styles at the time were just eslint configurations and auto-formatters were scarce before Prettier. .NET has a long history of auto-formatters and most of them speak .editorconfig, so some interop would be handy, even if the goal isn't "perfect" interop. Just enough to build a pit of success for someone's first or second PR in a project before they get to that part of the Readme that says "install this thing in VS or Rider" or actually start to pay attention to the Workspace-recommended extensions in VS Code.
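As a stopgap, you can hand-write a rough .editorconfig approximation yourself. The values below are my recollection of CSharpier's defaults (4-space indent, ~100 column width) and should be checked against csharpier.com before relying on them:

```ini
# Rough, unofficial approximation of CSharpier defaults
[*.cs]
indent_style = space
indent_size = 4
max_line_length = 100
```

It won't catch everything CSharpier reflows, but it keeps `dotnet format` users from fighting the formatter on the basics.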
dotnet format[0] with .editorconfig should do the job.
[0]: https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-f...
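To make it stick, `dotnet format` has a verify mode that fails instead of rewriting files, which works well as a CI gate. A hypothetical step, assuming GitHub Actions:

```yaml
# Hypothetical CI step: fail the build if formatting has drifted
- name: Verify formatting
  run: dotnet format --verify-no-changes --verbosity diagnostic
```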
Haven’t done much in C# since Claude Code has been available but I’ve found strict linting and style rules are very helpful for such agents when writing Go. I used to run a fairly strict and customized config with StyleCop etc; I wonder if something maybe more standardized like this will be more effective.
NuGet Audit is an odd one. I usually don’t want all devs to jump on fixing the latest vulnerability right away. We have a separate pipeline for resolving those issues.
I've actually changed my mind on this, if you're working in a project that doesn't have a ton of early-lifecycle v0 packages. If there is a lot of quick churn in your dependencies, yeah, you want to devote dedicated engineering resources to keeping these up to date and regression testing things.
If everything is pretty stable, it's nice to have each developer share the work with keeping things up-to-date and functional. Broad automated test coverage makes this a lot easier of course.
That's OK. The team can decide what process they follow.
We do: we update packages every 3 months. Criticals are reported by a pipeline and fixed the same week.
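That policy maps fairly directly onto the audit's MSBuild knobs; something like the fragment below (property names are from the NuGetAudit docs, but verify the values against your SDK version) keeps restore quiet except for critical advisories:

```xml
<!-- Sketch: only surface critical advisories on direct dependencies
     during day-to-day restores; a separate pipeline sweeps the rest. -->
<PropertyGroup>
  <NuGetAudit>true</NuGetAudit>
  <NuGetAuditLevel>critical</NuGetAuditLevel>
  <NuGetAuditMode>direct</NuGetAuditMode>
</PropertyGroup>
```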
Title should be C# not .Net
I'm not sure I understand your comment; .editorconfig works just fine for VB files as well as F#.
You could almost think of F# as an extremely strict set of conventions for C# … ;)
You could, but you'd be wrong.
Pretty long article with not a great deal of substance beyond what is mentioned early on. Would be interested to know how much input teams had in the rule configuration before this was foisted on them.
Author here. Even though we have different teams and products/services, there's still a baseline of "historical" code style and rule configuration at our company. Also, I personally explored the various codebases and reached out to several developers to get some feedback throughout the process.
The whole thing did not come out as a surprise for most of us. Even so, for those who were not aware of it, the benefits - as I captured screenshots of improvements highlighted from the warnings in their codebases after installing an alpha version of the package - were obvious.
Adoption was quite smooth and easy at first. Definitely not pushed onto teams for several weeks/months, until enough repos were onboarded and we had enough feedback that it would be beneficial for the whole company to use this.
There is quite useful content in there, but the writing style makes it very annoying to read, it feels as if the original text went through some kind of LLM filter and made it corporately soulless, as seems to be the good practice now.
Author again here. I'm sorry to hear this. I wrote the whole thing in a mix of French and English (mostly English), and yes, it went through an LLM, but only to correct mistakes and translate French parts. I'm limited in my ability to write beautiful/delightful blog posts as English is not my main language.
Using an LLM wasn't about rewriting the whole thing, many sentences were left as before, so the style is definitely mine. It's okay if you don't like it, I'm trying to get better at it!
Plenty of substance in there for me. I’ve been building with dotnet since it existed and still learned a couple of new techniques/ideas from this article.