Software engineering as a profession demands a wide range of knowledge: not just the technical knowledge required to write code, but also an understanding of the whole range of software development processes that drive this activity.
In this article, we’ll cover a range of best practices you should be aware of and learn to follow if you want to make the most of your career.
Table of Contents
General Best Software Development Practices
1. Make sure you’re solving the right problem
Whether you’re creating your own product for a particular market or building a custom solution at the request of a client, be aware of the difference between what people say they want and what they actually need.
Often people with real-world business problems might come to you with a specific list of software requirements, but those requirements might not be an exact match to what is needed to solve their problem.
Spend the time to understand the needs of the people who’ll be using the software, and work with them to find the most valuable and practical design specifications that will satisfy those needs.
Sometimes these might be a lot simpler or even totally different than what they originally thought they wanted, and you can save significant time and costs (for both them and you!) not building a system with the wrong features or just too many features.
Someone might ask for a cloud-based, blockchain-enabled machine learning system that lives in the metaverse, when all they really need is a simple app that manages orders and payment processing for their online store.
Only once you understand why you’re being asked to build something are you in a position to gauge whether the requested solution is appropriate.
This is one of the most crucial software development best practices to master, as being able to effectively guide clients and narrow scope will have an enormous impact on both the project at hand and the level of trust you earn in professional relationships.
Agile methodologies are popular because they let you adapt as you go along.
The software engineering industry tried the waterfall model and found it didn’t work for most projects; writing up a formal software requirements specification document and following that path is likely to end up with something that ultimately doesn’t fit with what the customer needs.
As a project progresses, both you and the customer will get a better understanding of the problem and learn new things that will aid the development process.
2. Use the right tool for the job
Among the countless programming languages and frameworks out there, you’ve likely gained experience with a few specific ones. You may even have strong preferences about which is “best”.
Before embarking on a project however, consider whether the tools you’re going to use make sense for the problem at hand. Don’t just automatically go with what you’ve used in the past.
This might be a wise choice in many cases, but consider alternatives if the nature of the project differs significantly from what you’ve done in the past.
For example, maybe you’re a Python developer who works on back-end systems for web apps. You’re given a new project which has much higher performance requirements than anything you’ve built in the past.
Python might seem an attractive choice due to your familiarity with the language and its application frameworks. However, it might not be the ideal choice given what the app needs to do.
For example, if there’s a lot of compute-intensive work involved, Rust or C++ might be a better approach. If there’s a high level of concurrency involved, such as tens of thousands of simultaneous connections, then Go, Erlang, or Elixir would be a better choice.
A famous example of ‘the wrong tool for the job’ is Electron. If you’re familiar with writing web apps, and are asked to produce a desktop app, you might be tempted to take the lazy approach of just using the same language and application framework you use for the web to save time, and use Electron to bundle it up as a desktop app.
The work gets done quickly but the result is a slow, resource-heavy behemoth that cancels out a decade of hardware advances. Sure, it works fine on the latest-model MacBook Pro your employer bought you last month, but lots of your users have hardware that’s several years old and will be trying to run multiple similarly-developed apps at the same time, resulting in a poor experience.
A desktop app should be written using native code, either using a cross-platform language and application framework like C++ and Qt, or platform-native language and API like Swift for Mac/iOS, C# and .NET for Windows, or Kotlin for Android.
Writing code to run directly on the user’s hardware and operating system takes more work, but results in a much better user experience.
The best software development practices are those which emphasize quality. Users notice this sort of thing.
3. Set realistic expectations
The biggest open secret in the industry is this: We really have no idea how long it’s going to take.
Customers want estimates, for understandable reasons. You can undoubtedly come up with an approximate range—it should be pretty clear whether a project timeline is going to be measured in weeks, months, or years—but getting much more specific is often difficult, especially when a lot of the details are unknown upfront.
This can be true even when you’re developing software that is similar to other projects you’ve done before, since every project is different.
Make a point of being honest and direct with clients. Don’t just tell them what they want to hear so you can land the contract, because that’s going to create headaches for everyone down the line – missed deadlines, awkward meetings, poor code quality, conflicts over scope, disputes about payment, and team members burning out.
Establish a healthy relationship with your client from the start, where you have an understanding of their needs and they have an appreciation for the benefits of following software engineering best practices—as well as the time it takes to write high-quality code.
Core Tools and Processes for Software Development
Let’s talk about the most important software development best practices that should be upheld and prioritized in every project. All software developers should be familiar with these.
1. Version control
When working with other software engineers, and often even when working alone, the use of version control tools is essential. The most popular by far is Git, due to its elegant and flexible design, plus the mind-share it’s gained in the industry.
Git has a reputation for being intimidating for new users, but that’s only the case if you learn it the wrong way.
Don’t start from the outside and try to memorize commands without really understanding what’s going on behind the scenes.
Instead, begin by learning the conceptual underpinnings, presented excellently in the first few chapters of the book Pro Git. You’ll then find that all the commands make a lot more sense because you can think about what they do in terms of the core data model. Every software developer should read at least the first part of this book.
Use of version control isn’t just considered a best practice due to ease of collaboration or providing access to previous versions of the code, but also for communication.
The commit logs of a repository tell a story – and if that story is written well, it can serve as a form of documentation of similar utility to code comments.
Follow an established style guide for writing commit messages, and both you and the rest of the development team will have a rich source of project history to look back on when you need to understand how the software has evolved, and why certain changes were made.
Your colleagues will appreciate your care and attention to detail when it’s needed the most, like understanding the background behind certain technical debt or software bugs, or conducting code reviews.
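To make this concrete, here is what a well-structured commit message might look like, following the commonly cited conventions: a short imperative subject line, a blank line, then a body wrapped at around 72 characters that explains the why rather than the what. The change described is entirely hypothetical.

```
Fix race condition in session cleanup

The cleanup job could delete a session that a concurrent request had
just refreshed, logging users out at random. Take the session lock
before checking the expiry timestamp so the two operations can no
longer interleave.
```

Note that the body doesn’t restate the diff; it records the context a future reader can’t get from the code alone.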
2. Automated testing
Software quality isn’t just about writing good code, it’s about having a set of things in place to measure that quality.
A lot of software testing happens manually. Someone runs the program or deploys the site, tries to do a bunch of things, and notices what works and doesn’t.
Some amount of this is necessary, but relying on it as the sole form of testing is time-consuming and error-prone, since it’s easy for bugs to creep in without being noticed.
Automated testing should be an integral part of your development process, and you should plan around it from the start.
Writing application code and unit tests should be considered inseparable parts of the development activity, such that both evolve together and exist in a symbiotic relationship.
Code reviews should also include making sure there are adequate tests.
Try to break things. Write tests that attempt to identify errors in your application logic, and fix any bugs that you find.
Similarly, add (temporary) bugs to your application logic to make sure they trigger test failures, and if none occur, write extra tests that fail when those bugs are present.
By doing this you improve code quality by ensuring your tests provide a high level of coverage.
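As a minimal sketch of this mindset, consider a hypothetical `apply_discount` function together with tests that deliberately probe its edge cases; all of the names and values here are illustrative.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; raise ValueError on bad input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Ordinary case plus the boundary values 0 and 100
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(100.0, 0) == 100.0
    assert apply_discount(100.0, 100) == 0.0
    # Invalid input must fail loudly, not return a wrong number
    try:
        apply_discount(100.0, -5)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for negative percent")

test_apply_discount()
```

Tests like the last one are exactly the “try to break things” habit: they document what invalid input should do, and they fail if someone later removes the validation.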
The value of having a comprehensive set of automated tests is difficult to overstate.
It allows you to refactor your code and make other changes much more easily by giving rapid feedback to indicate whether you’ve introduced new bugs in the process.
This is especially helpful when multiple developers are working on a code base, and may need to make changes to code they didn’t originally write.
Before you deploy your code, doing a full run of the tests can give confidence in its quality.
Ideally you should ensure all tests pass every time, such as by using continuous testing to run them on every commit.
If a unit test fails, it should be fixed right away before proceeding with the commit.
Testing can detect the presence of errors, but not their absence.
It’s always possible there are features you’ve added but forgotten to write tests for, or corner cases so obscure that even an excessive number of tests would be unlikely to cover.
Short of formal verification, which isn’t practical for most projects, you won’t ever get 100% certainty.
But implementing effective testing strategies can be extremely helpful nonetheless.
3. Static typing
Many popular languages, such as Python and JavaScript, are dynamically typed: variables and function signatures carry no declared types, and type errors only surface when the code runs. This can be attractive from a productivity perspective because it keeps the code simple, and some developers prefer these languages because they find this approach easier.
While dynamic typing can be good for programs small enough to fit in your head, it introduces problems once a code base grows large enough to be divided into different modules and worked on by other software engineers.
For these reasons static typing, where the arguments and result types of functions and variables are explicitly written down as part of the code, can be extremely helpful in both documenting what the code does and allowing type errors to be detected by a compiler or type checker.
This results in better code and provides a higher degree of confidence for deploying software to production environments.
For JavaScript, TypeScript adds static types through a separate compile step. For Python, mypy does something similar, but uses the annotation syntax built directly into the language, and can therefore be run in a way that does checks only and does not generate any output files.
Software development best practices for using these tools involve running them continually, so that any errors become apparent almost immediately.
Static type checking should be considered a best practice for anything but toy projects or short snippets of code.
Like automated testing, it greatly simplifies refactoring, bug fixes, and other changes you make as you write code.
It does this by quickly identifying mistakes so you can fix them as they occur, rather than having to debug code later on when it’s had even more changes or is being used in production.
Relative to dynamic typing, it also requires fewer unit tests, because many of the checks those tests would otherwise perform are handled by the type checker.
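A minimal sketch of what this looks like in Python, using the built-in annotation syntax that a checker such as mypy consumes; the function and data are hypothetical.

```python
def mean_age(ages: list[int]) -> float:
    """Average a list of ages; the annotations document intent."""
    return sum(ages) / len(ages)

print(mean_age([30, 40, 50]))  # 40.0

# A type checker rejects misuse before the program ever runs: passing
# a string where list[int] is expected would be reported as an error
# by mypy, with no need to execute the code.
```

The annotations cost one line of typing effort and double as documentation for every future reader of the signature.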
4. Command-line environment
While we’re on the subject of mastering tools, becoming proficient with the Unix command line (built into Linux and macOS, and installable on Windows) is essential to productivity.
While there are powerful IDEs available like Visual Studio Code, IntelliJ IDEA, and others that let you carry out most day-to-day tasks, they only let you do things that the authors of the IDE (and third-party extensions) have already thought of.
As soon as you want to do something that goes beyond what the IDE supports, you hit a wall.
The Unix command-line interface is one of those venerable technologies that has stood the test of time, and for good reason.
It’s based on the idea of being able to leverage the combined powers of a collection of programs which each do one thing well to achieve what you want.
Commands that you enter directly can be added to a script, allowing you to automate simple to complex processes.
And once you know how to use it, interacting with a remote server or cloud virtual machine is identical to interacting with your local machine.
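A small illustration of this composability: four single-purpose tools chained together to count word frequencies. The sample file and path are made up for the example.

```shell
# Create a sample file, then chain small tools:
# tr splits words onto lines, sort groups them,
# uniq -c counts each group, sort -rn ranks by count.
printf 'to be or not to be\n' > /tmp/words.txt
tr ' ' '\n' < /tmp/words.txt | sort | uniq -c | sort -rn
```

None of these programs knows anything about the others; the pipeline is the program, and dropping the same lines into a script file automates it permanently.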
As with every other skill required to be a professional programmer, becoming proficient in command-line tools involves a significant learning curve.
The payoff is worth it though, and once you’ve leveled up you will not want to be without the abilities you’ve gained.
Most people who develop software rely heavily on the command line and always have one or a few terminal windows open.
Other Useful Tips & Best Practices in Software Development
1. Code style and linting
Software code style can differ a fair bit between programmers: where curly braces should go, how much indentation to use, how to quote strings, and of course the good old tabs-versus-spaces debate.
With a team of developers working together on the same software project, differing preferences and habits can become a source of conflict and result in a messy code base.
The best practice for a drama-free experience is to pick a set of rules and enforce them via a linter, a program that looks through all your code and either reports violations of the style or automatically fixes them. This achieves consistency and helps to make code readable.
There are many linters to choose from, usually several for each language. It doesn’t really matter which one you use, just as long as you pick something and get everybody to agree on using it.
Big software companies often have a style guide for their software developers to follow, which helps ensure consistency and code readability across the company’s different software products.
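For instance, a Python team using the ruff linter might commit a shared configuration like this sketch to the repository; the specific line length and rule selections are illustrative, not a recommendation.

```toml
# pyproject.toml — shared lint settings, so every developer and the
# CI pipeline apply exactly the same rules.
[tool.ruff]
line-length = 88

[tool.ruff.lint]
select = ["E", "F", "I"]  # pycodestyle errors, pyflakes, import sorting
```

Checking the configuration into version control is the point: the rules travel with the code, so there is nothing to argue about in review.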
2. Documentation generation
Best practices for documentation vary between teams due to different levels of written communication skills among developers and different expectations of the team.
The most well-known form of documentation is code comments, which are easy to write but unfortunately not always used properly.
Good judgment is critical when determining what to include and what to omit; comments should be present only if they add value.
There’s not much point in having a detailed description of what a piece of code does if that information can be readily understood just by reading the code itself.
However, comments that give context and explain why a piece of code is written the way it is can be valuable, especially if it’s doing something tricky.
Another form of documentation is API documentation. This is where each method is documented separately with an explanation of what it does.
Many software development standards expect this for production code that is to be used by a third party, such as the public API surface of reusable components like libraries and application frameworks.
Most languages have tools available to generate HTML documentation from comments in the code, placing all the information for a class or module on the same page in an easy-to-navigate manner.
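In Python, for example, such tools (pydoc, Sphinx, and others) read docstrings. Here is a sketch of what documented API code might look like; the class and its behavior are hypothetical.

```python
class Account:
    """A minimal bank account, used here to illustrate API docstrings.

    Tools such as pydoc or Sphinx can render these docstrings into
    browsable HTML reference documentation.
    """

    def __init__(self, balance: float = 0.0) -> None:
        self.balance = balance

    def deposit(self, amount: float) -> float:
        """Add amount to the balance and return the new balance.

        Raises:
            ValueError: if amount is not positive.
        """
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount
        return self.balance
```

Note that the docstrings state the contract (what happens on bad input) rather than narrating the implementation, which is the judgment call described above.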
Perhaps even more important than code-level or API-level documentation is conceptual overviews and descriptions of various aspects and subsystems.
This kind of documentation focuses on software architecture and software design characteristics of a project, and serves as an approachable introduction for software engineers new to a team who need to get acquainted with a project.
Quality documentation of both conceptual issues and APIs is most important for reusable software components that will be used in other software projects.
Documentation need not be limited to the actual system itself, but also the software development practices employed by the team, the existence and reason for specific technical debt, processes related to database management, file format conversions, staging and production environments, release cycle planning, and other issues team members should be aware of.
Writing code for a well-documented system becomes much easier for new team members because they can gain an understanding of how everything works without having to constantly pester colleagues with specific questions; it can save months of time in building up a working knowledge of a project.
3. Continuous integration and continuous delivery (CI/CD)
Software development best practices inevitably involve some sort of reliable process for testing and releasing software to production environments. These days, best practices often include the automation of these tasks.
Continuous integration involves an automated system which regularly compiles a full build of the software, taking in all the different modules and packages developed by different people and teams, and making sure it works together.
It is most useful in large projects where there’s the overhead of coordination between people working on different parts of the software, and things can get out of sync between them.
Especially where one component depends on another, changes in the former might break the latter, and a continuous integration system will pick up cases like these.
Integration builds are often done on a nightly basis, and many open source projects offer what they call a “nightly build” which represents the current state of the complete code base.
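As an illustration, a continuous integration job in GitHub Actions syntax might look like the following sketch; the workflow name, Python version, and commands are placeholders for whatever your project actually uses.

```yaml
# .github/workflows/ci.yml — hypothetical CI workflow that builds and
# tests the project on every push and pull request.
name: CI
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: pytest
```

Because the job runs on every commit, an integration break is reported within minutes of being introduced, while the responsible change is still fresh in its author’s mind.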
Continuous delivery takes this a step further and actually automates software deployment.
Usually there will be separate staging and production environments, with both having an identical setup but staging acting as a place for final checks on the software to be carried out, such as manual testing.
Best practices for large-scale deployments often include automatically rolling out new versions in phases, where initially only some users get the new version, and once confidence is established that the software is working correctly, it can be rolled out to everyone.
This approach can be used for both server-side deployments and software updates delivered to end users.
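One common way to implement such phased rollouts is to hash a stable user identifier into a bucket and compare it against the current rollout percentage. This is a sketch of the idea, not any particular vendor’s feature-flag API; the function name is made up.

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically assign user_id to one of 100 buckets and
    enable the new version for the first `percent` of them."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because the hash is deterministic, the same user always lands in the same bucket: raising the percentage only ever adds users, so nobody flips back and forth between versions mid-rollout.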
Software Engineering = Lifelong Learning
Software engineering is a calling. Dedicating yourself to mastering your profession and following best practices will pay great dividends over time both in terms of money and job satisfaction, provided you are able to maintain a passion for learning and expanding your skills over time.
Often we’re guided towards learning the “new and shiny” things like whatever frameworks and languages appear fashionable on sites like Hacker News, but there’s also a lot to be gained from studying the history of the field and understanding different approaches to software development that have been explored over time.
A lot of things that are currently considered software engineering best practices aren’t necessarily the only or best way to do things, and this is why there’s often so much controversy between software developers about the “best” tools and practices to use.
You should dedicate a portion of your time to learning, forming opinions, and participating in discussions with colleagues and in online forums.
After all, an ever-evolving field means you have the chance to influence how its best practices change and grow in the future.