It’s 2024; who hasn’t heard of .NET Core, one of the most popular frameworks for building web and desktop apps? However, if you want to build something impactful, choosing the right technology alone isn’t enough.
Any major development process requires you to follow certain best practices to ensure that the project you’re building is future-proof. In this article, you’ll find the top .NET Core practices that help keep your application secure, scalable, and efficient. On top of that, some of the approaches described here will give your users a smoother experience and improve your application’s performance.
Table of Contents
- Practice 1: Utilize the latest version of .NET Core
- Practice 2: Implement caching mechanisms
- Practice 3: Leverage asynchronous programming
- Practice 4: Effective logging practices
- Practice 5: Enable compression for faster data transfer
- Practice 6: Optimize data access
- Practice 7: Use dependency injection
- Practice 8: Implement AutoMapper
- Practice 9: Load JavaScript files last
- Practice 10: Proper exception handling
- Practice 11: Auto-generated code refactoring
- Practice 12: Remove unused middleware components
- Practice 13: Optimize resource utilization
- Practice 14: Consider scalability
- Use dotConnect data provider for smooth connectivity
- Conclusion
Practice 1: Utilize the latest version of .NET Core
Why would you want to stick to an older version of .NET Core when the latest one is readily available? Exactly. Starting new projects on the newest version should be a deliberate decision you make up front. The newest version brings major improvements in features, security, and performance, along with fixes for bugs that might otherwise affect your app.
Also, the security patches can become a life-saver for your app, letting you eliminate potential vulnerabilities. Outdated versions often leave gaps in the security architecture that are easy for attackers to exploit. So, save yourself the trouble and move to the latest version of .NET Core.
Now, let’s talk about upgrades. Future upgrades are far easier if you’re already on the latest version. Skipping several versions at once is like skipping hurdles on a track: sooner or later you’ll fall flat on your face. Take it one step at a time, and each subsequent step becomes easier.
Practice 2: Implement caching mechanisms
The importance of caching is undeniable when it comes to improving application performance. This approach will let you temporarily store frequently accessed data, making it easy to access instead of querying the database repeatedly to fetch it. Caching eliminates the need to make repeated API calls and lowers the load on the server. Naturally, you get accelerated response times from your app when you have heavy traffic.
There are two main types of caching in .NET Core: in-memory caching, which stores data in the server’s memory for single-instance applications, and distributed caching, which stores cached data across multiple servers. Distributed caching is ideal for large-scale applications where multiple instances share cached data.
Here’s a quick example of in-memory caching in .NET Core.
public void ConfigureServices(IServiceCollection services)
{
    services.AddMemoryCache();
}

public class MyService
{
    private readonly IMemoryCache _cache;

    public MyService(IMemoryCache cache)
    {
        _cache = cache;
    }

    public string GetCachedData()
    {
        if (!_cache.TryGetValue("key", out string cachedData))
        {
            cachedData = "This is cached data";
            _cache.Set("key", cachedData, TimeSpan.FromMinutes(10));
        }
        return cachedData;
    }
}
Let’s break this example down. The application stores the data in memory for 10 minutes. While the entry is cached, repeated requests get the stored value instead of triggering another query or API call, so the server carries less load.
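For multi-instance deployments, the same pattern works with the distributed caching mentioned above. Here’s a minimal sketch assuming a Redis server at localhost:6379 and the Microsoft.Extensions.Caching.StackExchangeRedis package; the key name and expiration are purely illustrative:
public void ConfigureServices(IServiceCollection services)
{
    // Register a Redis-backed IDistributedCache (assumes a local Redis instance)
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = "localhost:6379";
    });
}

public class MyDistributedService
{
    private readonly IDistributedCache _cache;

    public MyDistributedService(IDistributedCache cache)
    {
        _cache = cache;
    }

    public async Task<string> GetCachedDataAsync()
    {
        // Check the shared cache first; rebuild the value only on a miss
        var cachedData = await _cache.GetStringAsync("key");
        if (cachedData == null)
        {
            cachedData = "This is cached data";
            await _cache.SetStringAsync("key", cachedData, new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10)
            });
        }
        return cachedData;
    }
}
Because the cache lives outside the application process, every instance behind your load balancer sees the same cached values.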
Practice 3: Leverage asynchronous programming
To further improve your application’s performance, you should leverage asynchronous programming. What’s this? Basically, you use the async/await pattern to prevent long-running tasks, such as database queries and file I/O operations, from blocking the main thread.
Another key factor that might convince you to opt for asynchronous programming is application scalability. In synchronous programming, as the name suggests, each operation runs only after the previous one finishes, which causes delays. In asynchronous programming, multiple tasks can be in progress at the same time. This approach often results in faster response times, lower resource usage, and better request handling under heavy load.
Here’s an example of converting synchronous code to asynchronous:
Synchronous code:
public string ReadData()
{
    var data = File.ReadAllText("data.txt");
    return data;
}
Asynchronous code:
public async Task<string> ReadDataAsync()
{
    var data = await File.ReadAllTextAsync("data.txt");
    return data;
}
By using the async/await pattern, you let the system handle other tasks while waiting for the file to be read. This provides smoother, more responsive application behavior, especially under heavy loads.
Practice 4: Effective logging practices
Now, let’s talk about effective logging practices. First things first: structured logging is essential for efficient monitoring and debugging, and for tracking how well your app performs. With logging in place, you can detect and troubleshoot errors faster, and well-structured logs are easy to filter, search, and analyze. This makes your applications easier to manage over time.
Moving on to the frameworks, here are a few you must definitely know:
- Serilog
- NLog
- Microsoft.Extensions.Logging
Using these tools, you can enrich your log entries with timestamps, log levels (info, warning, error), and contextual data. Serilog is especially renowned for its ability to log to multiple destinations, such as files, databases, or cloud services.
Here’s an example of a basic logging setup using Serilog:
public class Program
{
    public static void Main(string[] args)
    {
        Log.Logger = new LoggerConfiguration()
            .WriteTo.Console()
            .WriteTo.File("logs/log.txt", rollingInterval: RollingInterval.Day)
            .CreateLogger();

        Log.Information("Application starting up");

        try
        {
            CreateHostBuilder(args).Build().Run();
        }
        catch (Exception ex)
        {
            Log.Fatal(ex, "Application startup failed");
        }
        finally
        {
            Log.CloseAndFlush();
        }
    }
}
Also, as a skilled developer, don’t underestimate the value of logging errors with context. Another item for your checklist is using the appropriate log level for each message. Finally, make sure your project’s logs never contain sensitive information.
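To illustrate, here’s a rough sketch of context-rich, level-appropriate logging using Serilog’s message templates; the property names and variables below are purely illustrative:
// Named placeholders become structured properties you can later filter and search on
Log.Information("User {UserId} requested report {ReportId}", userId, reportId);

// Warnings for unusual but recoverable situations
Log.Warning("Report {ReportId} took {ElapsedMs} ms to generate", reportId, elapsedMs);

// Errors should carry the exception together with the context that produced it
Log.Error(ex, "Failed to generate report {ReportId} for user {UserId}", reportId, userId);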
Practice 5: Enable compression for faster data transfer
Can you imagine building web apps without compression? Compression basically helps reduce the size of the data exchanged between the server and the client. This approach results in lower bandwidth usage and faster response times. If you enable compression in your .NET Core application, you can pretty much guarantee accelerated content delivery and a better overall user experience.
When data is compressed, the server sends smaller payloads to the client, which means faster downloads. This becomes extra important when it comes to web apps that handle large amounts of text data, images, or other types of content.
Let’s have a look at an example of configuring GZip compression in .NET Core:
public void ConfigureServices(IServiceCollection services)
{
    services.AddResponseCompression(options =>
    {
        options.Providers.Add<GzipCompressionProvider>();
        options.EnableForHttps = true;
    });

    services.Configure<GzipCompressionProviderOptions>(options =>
    {
        options.Level = CompressionLevel.Fastest;
    });
}

public void Configure(IApplicationBuilder app)
{
    app.UseResponseCompression();
}
In this setup, the GZip compression algorithm has been applied to reduce the size of the HTTP response. To prioritize speed and ensure that the data is sent as quickly as possible, we’ve used the CompressionLevel.Fastest setting.
Practice 6: Optimize data access
Are you keeping up with the best practices so far? Because we have some more for you. Let’s talk about how optimizing data access enhances the performance of .NET Core applications. When dealing with larger datasets, you need efficient data retrieval and handling. You can often achieve this by optimizing data access through techniques like no-tracking queries and aggregate functions, which means faster data processing with minimal overhead.
Let’s have a closer look at no-tracking queries. These are particularly useful when there’s no need to update the retrieved data. Entity Framework Core tracks retrieved entities by default so it can detect changes, and that tracking costs both time and memory. If you know you won’t modify the results, call AsNoTracking() on the query and save yourself the overhead.
Since we’re on the topic, using aggregate functions like Sum(), Count(), Average(), or Max() directly in the database keeps the computation on the database server instead of pulling all the rows into your application, which makes your queries faster.
Here’s an example:
public async Task<List<Product>> GetProductsAsync()
{
    using (var context = new ApplicationDbContext())
    {
        return await context.Products.AsNoTracking().ToListAsync();
    }
}
Check another example of using aggregate functions:
public async Task<int> GetTotalProductCountAsync()
{
    using (var context = new ApplicationDbContext())
    {
        return await context.Products.CountAsync();
    }
}
Using these techniques, you can easily ensure better performance of your .NET Core applications.
Practice 7: Use dependency injection
What do you know about Dependency Injection (DI)? If your answer is that it’s a design pattern used in .NET Core to let objects receive their dependencies from outside instead of creating them internally, you are absolutely right! DI makes your applications modular, which makes them easier to test and maintain. Ultimately, DI leads to a cleaner codebase by letting you decouple components rather than tightly coupling them to specific dependencies.
Also, DI lets you improve code reusability, simplify unit testing thanks to mock dependencies, and offers a way to reduce hard dependencies. Moreover, .NET Core has built-in support for DI, so it can be implemented naturally in applications.
Let’s explore the three main types of DI.
- Constructor Injection: Dependencies are provided through a class constructor, which is the most common and recommended method.
- Setter Injection: Dependencies are set via public properties and are used less frequently but remain valuable in specific scenarios.
- Interface-based Injection: Dependencies are injected through an interface, offering more flexibility in coupling between components.
Here’s an example of Constructor Injection in .NET Core:
public interface IMessageService
{
    void SendMessage(string message);
}

public class EmailService : IMessageService
{
    public void SendMessage(string message)
    {
        Console.WriteLine($"Email sent: {message}");
    }
}

public class NotificationController
{
    private readonly IMessageService _messageService;

    public NotificationController(IMessageService messageService)
    {
        _messageService = messageService;
    }

    public void Notify(string message)
    {
        _messageService.SendMessage(message);
    }
}
In this example, the NotificationController relies on IMessageService, which is injected via the constructor. As a result, the controller can use any implementation of the IMessageService. As you can guess, this makes the code more flexible and testable. Dependency Injection is one of the best practices you can implement when building maintainable apps.
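For the constructor injection above to work, the implementation still has to be registered with the built-in container. Here’s a minimal sketch, following the same Startup.ConfigureServices pattern used throughout this article:
public void ConfigureServices(IServiceCollection services)
{
    // Whenever a constructor asks for IMessageService, the container supplies EmailService
    services.AddScoped<IMessageService, EmailService>();
    services.AddControllers();
}
Swapping EmailService for another implementation, for example an SMS-based one in a different environment, then becomes a one-line change.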
Practice 8: Implement AutoMapper
Another exceptionally powerful tool in .NET Core that will directly impact your application development is AutoMapper. It automatically maps properties between objects, so you don’t have to repeat code. For instance, if you have a domain model and a view model, manually copying the properties leads to boilerplate code. Trust me, you don’t want that if you prioritize efficiency. AutoMapper removes this clutter by mapping properties based on conventions.
The main benefits of AutoMapper include:
- Reduced boilerplate code: You no longer need to write manual property assignments.
- Improved readability: AutoMapper’s clean syntax makes your code easier to understand.
- Consistency: AutoMapper uses conventions to reduce the chance of mistakes, using consistent mapping throughout your application.
Here’s an example setup of AutoMapper in a .NET Core application:
// Step 1: Define source and destination classes
public class User
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class UserDTO
{
    public string FullName { get; set; }
}

// Step 2: Create AutoMapper profile
public class UserProfile : Profile
{
    public UserProfile()
    {
        CreateMap<User, UserDTO>()
            .ForMember(dest => dest.FullName, opt => opt.MapFrom(src => $"{src.FirstName} {src.LastName}"));
    }
}

// Step 3: Configure AutoMapper in Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddAutoMapper(typeof(Startup));
}

// Step 4: Use AutoMapper to map objects
public class UserService
{
    private readonly IMapper _mapper;

    public UserService(IMapper mapper)
    {
        _mapper = mapper;
    }

    public UserDTO GetUserDTO(User user)
    {
        return _mapper.Map<UserDTO>(user);
    }
}
In this example, AutoMapper automatically maps the User class to the UserDTO class. By implementing AutoMapper, you simplify object mapping, avoid boilerplate code, and improve the maintainability of your .NET Core applications.
Practice 9: Load JavaScript files last
Ever heard of lazy loading? It’s a clever trick to make your .NET Core applications faster by only loading things when needed. Now, why did we mention that? Well, loading JavaScript files last has a similar effect.
But what happens if JavaScript is loaded too early? It can degrade the user experience by blocking rendering and slowing down load times. To avoid turning away users, postpone the loading of JavaScript until the HTML and CSS are fully rendered.
The primary benefits of loading JavaScript files last include:
- Faster initial content rendering: Users can interact with the page before all the JavaScript is loaded.
- Improved performance: The risk of blocking important elements is greatly reduced.
- Better user experience: The page becomes usable faster and thus is more user-friendly.
In a .NET Core Razor view, you can achieve this by placing your script tags at the bottom of the page or using the defer or async attributes. Here’s a code sample for you to check:
<!DOCTYPE html>
<html>
<head>
    <title>My .NET Core App</title>
    <!-- Critical CSS and other head elements -->
</head>
<body>
    <h1>Welcome to My Application</h1>
    <p>Loading content before JavaScript.</p>

    <!-- JavaScript loaded last to improve performance -->
    <script src="~/js/myscript.js" defer></script>
</body>
</html>
As you can tell, the browser will now prioritize the critical CSS and HTML first since we are using defer. Striving for improved performance and a better user experience? Then this practice certainly belongs on your .NET Core checklist.
Practice 10: Proper exception handling
One of the prerequisites of any solid .NET Core application is proper exception handling. Exceptions prevent sudden crashes when unforeseen scenarios occur, and as you know, users are never fond of unexpected crashes. However, exceptions are relatively expensive: if you overuse them, you hurt performance and can end up with error-handling code that’s hard to maintain.
Thus, when it comes to .NET Core, you must be able to distinguish between recoverable and non-recoverable errors. Recoverable errors, such as network timeouts or missing files, can be handled gracefully so the application keeps running. Non-recoverable, critical errors should be logged and allowed to surface so the application fails fast instead of continuing in a broken state.
Best practices for exception handling in .NET Core
- Use exceptions only for exceptional cases: Don’t apply exceptions to control normal application flow. Instead, use them to handle truly unexpected situations.
- Log exceptions properly: Ensure that every exception is logged for troubleshooting purposes using structured logging tools such as Serilog or NLog.
- Handle exceptions close to their source: Try to handle exceptions where they occur, rather than try to “catch” them globally. This approach provides more specific error handling and fuller context.
- Use custom exceptions: Define custom exceptions for specific application errors to improve clarity and error tracking.
- Avoid swallowing exceptions: Don’t just catch exceptions and ignore them. Always log errors or provide an appropriate response to avoid “hiding” issues.
Here’s an example of structured exception handling in .NET Core:
public class FileService
{
    public string ReadFile(string path)
    {
        try
        {
            if (!File.Exists(path))
            {
                throw new FileNotFoundException("File not found.", path);
            }
            return File.ReadAllText(path);
        }
        catch (FileNotFoundException ex)
        {
            // Log the exception
            Log.Error(ex, $"File not found: {path}");

            // Return a meaningful response
            return "Error: The specified file could not be found.";
        }
        catch (Exception ex)
        {
            // Handle any other general exceptions
            Log.Error(ex, "An unexpected error occurred.");
            throw; // Re-throw the exception for further handling
        }
    }
}
In this example:
- The FileNotFoundException is handled gracefully with a message to the user and an error log.
- A catch-all block captures any other unexpected exceptions to ensure they are logged and re-thrown if necessary.
By following best practices in exception handling, you basically guarantee that your .NET Core applications are more resilient, easier to maintain, and less prone to critical failures.
Practice 11: Auto-generated code refactoring
Auto-generated code simplifies development to a certain extent, but should you ship it as is? The answer is no. In .NET Core, tools like Entity Framework or scaffolding generate boilerplate code that fits many scenarios, yet you should still avoid using it untouched. You often need to refactor that code to get the end result you want.
Refactoring auto-generated code involves:
- Simplifying structure. You have to remove redundant code and make it easier to read.
- Optimizing performance. Make sure to streamline database queries or improve inefficient logic.
- Adding custom logic. Enhance the code with business-specific requirements that auto-generation can’t handle.
If you want your .NET Core applications to perform flawlessly and remain clean, you need to regularly review and refactor the auto-generated code.
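As a hedged illustration, scaffolded controllers typically query the DbContext directly; one common cleanup is to move that query behind a small, read-optimized service. The ProductCatalogService below is a hypothetical extraction that assumes the ApplicationDbContext and a Product entity with a Name property, not something the scaffolder generates:
// A possible refactoring of scaffolder output such as:
//     return View(await _context.Products.ToListAsync());
public class ProductCatalogService
{
    private readonly ApplicationDbContext _context;

    public ProductCatalogService(ApplicationDbContext context)
    {
        _context = context;
    }

    public async Task<List<Product>> GetCatalogAsync()
    {
        // Read-only query with explicit ordering instead of "load everything, track everything"
        return await _context.Products
            .AsNoTracking()
            .OrderBy(p => p.Name)
            .ToListAsync();
    }
}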
Practice 12: Remove unused middleware components
Middleware has an important place in .NET Core application development since it handles requests and responses. However, unused middleware clogging up the pipeline does you no good: every request still passes through it, which slows processing and can introduce unnecessary security risks.
Here’s what you can do to optimize performance:
- Only include the necessary middleware. Remove any middleware that isn’t required for the current version of your application.
- Order middleware effectively. Ensure critical middleware (e.g., authentication) runs before less essential ones.
- Test after changes. Always test the application after modifying the middleware pipeline to ensure proper functionality.
Keeping your middleware lean ensures better performance and security of your .NET Core app.
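As a rough sketch of what a trimmed-down pipeline might look like (which components you actually need depends on your application), note that the ordering matters:
public void Configure(IApplicationBuilder app)
{
    // Keep only the middleware the app uses, in the order requests should pass through it
    app.UseHttpsRedirection();
    app.UseStaticFiles();
    app.UseRouting();

    app.UseAuthentication();   // authentication must run before authorization
    app.UseAuthorization();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
}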
Practice 13: Optimize resource utilization
Efficiency drives successful development. To keep your .NET Core application performing well under various loads, you have to prioritize optimizing CPU, memory, and network usage. Let’s have a look at the solution-oriented strategies you can adopt:
- Use connection pooling. Reuse database connections to reduce overhead.
- Implement asynchronous methods. Free up resources by using async programming to prevent thread blocking.
- Manage memory efficiently. Avoid memory leaks by disposing of unmanaged resources properly: wrap them in using statements or call Dispose() when you’re done.
- Monitor and scale. Use monitoring tools like Application Insights to track resource usage and scale services accordingly.
By optimizing resource utilization, you can handle higher loads while minimizing costs and maintaining high performance.
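Here’s a brief sketch combining the first three points, assuming the Microsoft.Data.SqlClient package and an Orders table that exists only for illustration:
public async Task<int> GetOrderCountAsync(string connectionString)
{
    // 'await using' returns the connection to the pool even if an exception is thrown
    await using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync(); // asynchronous open keeps the thread free while waiting

    await using var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection);
    var result = await command.ExecuteScalarAsync(); // asynchronous query execution
    return Convert.ToInt32(result);
}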
Practice 14: Consider scalability
For any good developer, one of the main aims is to develop an application that is future-proof. Basically, you always need to keep an eye on the scalability of your projects. Is your .NET Core application capable of handling increased workloads or will it crumble? That’s where the secret to a good .NET Core app lies.
Some of the key strategies for scalability include:
- Utilize horizontal scaling. Add more servers or instances to distribute the load.
- Leverage cloud infrastructure. Cloud platforms like Azure or AWS can automatically scale resources as needed.
- Optimize database operations. Verify that your database can handle high volumes of data by using efficient queries, indexing, and caching.
- Use load balancing. Distribute incoming requests across multiple instances to ensure no single server is overwhelmed.
If you treat scalability as a major criterion of a good application from the start, your chances of developing a future-proof .NET Core application go up considerably.
Use dotConnect data provider for smooth connectivity
When you’re building .NET Core applications, having a fast, reliable, and secure data provider is a game-changer. That’s where dotConnect data providers from Devart come in! They offer everything you need to ensure smooth interaction with databases & cloud platforms and scalable app architecture.
- Speed of work. dotConnect is all about performance. With features like connection pooling, batch processing, and optimized SQL execution, your data moves faster without extra lag. For those .NET Core apps that need to keep up the pace, dotConnect keeps things moving quickly and efficiently.
- Enhanced Security. Got sensitive data? No worries! dotConnect has your back with built-in support for SSL, SSH, and HTTPS encryption. Your data stays secure as it zips between your app and the database.
- Improved ORM Support. Dealing with complex data models? dotConnect works seamlessly with popular ORMs like Entity Framework and LinqConnect. This helps reduce boilerplate code and makes managing data models a breeze. Want more details? Check out Entity Framework Support and Entity Developer.
- Comprehensive Database Connectivity. Whether you’re connecting to Oracle, MySQL, PostgreSQL, or SQLite, dotConnect has you covered. Its broad compatibility makes it super easy to handle multiple databases, giving your .NET Core apps the flexibility they need to scale as your project grows.
By using dotConnect, you’re following the best practices for .NET Core development — optimal performance, security maintenance, ORM support, and flexibility in database connectivity. These features make it a powerful tool for developers aiming to build robust, scalable, and high-performing .NET Core applications.
Conclusion
Mastering best practices in .NET Core development isn’t just about writing better code — it’s about creating applications that are resilient, future-proof, and ready to scale as your business grows. Whether you’re implementing asynchronous programming to improve responsiveness, optimizing data access for faster queries, or ensuring that exceptions are handled with care, each of these practices adds layers of efficiency and reliability to your app’s architecture. In a world where performance and user experience are everything, adhering to these practices is essential.
But best practices alone won’t take you all the way. Incorporating the right tools, like dotConnect, can give your applications the extra boost they need. With this tool, you get fast, flexible data access, top-tier security protocols, and seamless ORM support. dotConnect is more than just a provider; it’s a game-changer for .NET Core development. It ensures that your data flows smoothly and securely, no matter how complex your infrastructure becomes.
Explore dotConnect today and unlock the full potential of your .NET Core projects.