Use Azure DevOps Pipelines to automate EF Core scaffolding

This article covers the implementation of a CI/CD flow that automatically generates scaffolding for a database project. The entire use case is built on Azure DevOps, and the database is deployed to Azure SQL Database. The flow is based on regular development against a Visual Studio database project, whose deployment is fully automated, and integrates with a .NET 6 library containing the scaffolded context and entities built with EF Core 6.

Automation has been an increasing concern for architects and developers. With the demand for more efficient and productive assembly lines, it is important to look at our daily tasks and understand where we are losing valuable time on boring, "brain-dead" work, such as copying and running a script, writing the same code again, or doing anything that can easily be automated with the tools we already use. Automating deployments and tests isn't only a "DevOps team" responsibility: things like scaffolding can also be integrated into our pipelines.

In this article we look at a simple use case: a database imported into a Visual Studio SQL project, and a .NET 6 library where developers run scaffolding commands to generate the entities that map to that database. The library is then built into a NuGet package and shipped through Azure DevOps Artifacts to be used by one or more teams that need to map entities to the database. Our goal is to automate the code generation so that the developer doesn't have to spend time doing it manually; in a matter of seconds, the library can be updated without any manual intervention.

Sound interesting? Let's get into it then. 

TL;DR: The full implementation is available in the solution Git repos linked below.


Solution

The overall solution is composed of two projects, the database project and the library project, which are kept in separate Git repos for simplicity.

The database project is available here and is composed of the sqlproj file, the table definitions in SQL, the pipelines for the gate, build, and release stages, and the PowerShell script that runs on release to create and update the entities package.

The library project is available here and is composed of the csproj file, the data context class, the scaffolded entities, and the pipelines for the gate and build/push stages.


Flow

In our use case, the development flow follows the chart above: gray marks the manual tasks, light blue the usual automated tasks, and blue the new automated tasks we want to add to the pipeline.

We start by creating a feature branch to add a new feature to the database, be it a new column, a new relationship, or a new table. After the change has been made, we commit the code and create a pull request (depending on the settings, we may have quality/approval checks on the pull request). Once the pull request is accepted, the automated deployment to the database starts, and this is where we add our custom script: it creates a new feature branch, runs the scaffolding commands using the dotnet CLI, commits and pushes the code, and creates a new pull request for approval. In our case the pull request requires a manual check due to the policies enforced in the repo. Afterwards, the package is released to the Artifacts feed.


Script

The PowerShell script that automates the process, available here, is divided into three main blocks:

  1. Using Git commands: access the library Git repo, create a local feature branch, and commit and push the new code.
  2. Using the dotnet CLI: install the EF Core tooling and scaffold the code using the EF Core 6 libraries and the connection to the database.
  3. Using the REST API: call the Azure DevOps REST endpoint to create a new pull request based on the created branch, and then update the pull request with the auto-completion settings.

Note: Please refer to the GitHub repository for the latest code version.

The script starts with the definition of the variables necessary for execution:

We receive the connection string for the database and the target Git repository name as parameters to the script. Then we create a timestamp variable to be used in the new feature branch name. We also set up the URLs used both in the API calls and in the Git commands, build the new Git branch name, and set the Git name and email for the Git connection. Lastly, we set the variable with the Azure DevOps API version to be used; more about Azure DevOps REST API versioning here.
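A minimal sketch of that setup follows. The organization, project, and identity values are placeholders, and the parameter and variable names are illustrative rather than the exact ones used in the repository script:

```powershell
# Illustrative variable block; organization, project, and identity values are
# placeholders, not the exact ones from the repository script.
param (
    [Parameter(Mandatory = $true)] [string] $ConnectionString,  # Azure SQL connection string
    [Parameter(Mandatory = $true)] [string] $RepositoryName     # target library repo name
)

# Timestamp used to build a unique feature branch name.
$timestamp  = Get-Date -Format "yyyyMMddHHmmss"
$branchName = "feature/scaffold-$timestamp"

# URLs used by the Git commands and the REST API calls.
$organization = "my-org"
$project      = "my-project"
$repoUrl      = "https://dev.azure.com/$organization/$project/_git/$RepositoryName"
$apiBaseUrl   = "https://dev.azure.com/$organization/$project/_apis/git/repositories/$RepositoryName"

# Identity for the automated commits.
$gitUserName  = "Build Agent"
$gitUserEmail = "build-agent@example.com"

# Azure DevOps REST API version.
$apiVersion = "6.0"
```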

After setting the variables, we apply the Git configs for the commit email and name, then create a temporary local folder to host the clone of the library project. To clone it, we must set the authorization bearer header with the build agent's access token (described in the next topic), and then create a new feature branch with the name we defined in the variables.
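A sketch of this block, continuing from the variables above; the SYSTEM_ACCESSTOKEN environment variable is assumed to be mapped into the step from the pipeline, as discussed in the Azure DevOps configurations section:

```powershell
# Apply the Git identity used for the automated commits.
git config --global user.email $gitUserEmail
git config --global user.name  $gitUserName

# Temporary local folder to host the library project clone.
$workDir = Join-Path $env:AGENT_TEMPDIRECTORY "library-clone"
New-Item -ItemType Directory -Path $workDir -Force | Out-Null
Set-Location $workDir

# Clone using the build agent's System Access Token as a bearer credential.
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" clone $repoUrl .

# Create the feature branch defined in the variables.
git checkout -b $branchName
```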

With the local Git copy created, we then validate/install the EF Core tool through the dotnet CLI. Once the tool is installed, we run the scaffolding process to create the entities and the data context. With the changes made, we commit them to the branch and push everything to the remote. Before continuing, we clean up the temporary folder.
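A sketch of this block; the project path, context name, and output folders are illustrative:

```powershell
# Install the EF Core CLI tool; if it is already installed, update it instead.
dotnet tool install --global dotnet-ef
if ($LASTEXITCODE -ne 0) { dotnet tool update --global dotnet-ef }

# Scaffold the data context and entities from the database.
dotnet ef dbcontext scaffold "$ConnectionString" Microsoft.EntityFrameworkCore.SqlServer `
    --project .\src\Entities.Library.csproj `
    --context EntitiesContext `
    --context-dir Context `
    --output-dir Entities `
    --force

# Commit the regenerated code and push the branch to the remote.
git add -A
git commit -m "Automated scaffold update $timestamp"
git -c http.extraheader="AUTHORIZATION: bearer $env:SYSTEM_ACCESSTOKEN" push origin $branchName

# Clean up the temporary folder before continuing.
Set-Location $env:AGENT_TEMPDIRECTORY
Remove-Item $workDir -Recurse -Force
```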

With the branch containing the changes uploaded, we can now create a pull request to merge those changes into the dev branch. To do so, we build the headers with the content type and the authorization token, then create the URL for the API call. After that, we create the body containing the source and target branch names, the title, and the description. Before calling the API, we serialize the body into a JSON object and send it through a POST request. If the call was successful, we receive the pull request ID.
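A sketch of the pull request creation; the dev target branch matches the flow described above, while the title and description are illustrative:

```powershell
# Headers with the content type and the agent's authorization token.
$headers = @{
    "Content-Type"  = "application/json"
    "Authorization" = "Bearer $env:SYSTEM_ACCESSTOKEN"
}

# Pull request creation endpoint.
$prUrl = "$apiBaseUrl/pullrequests?api-version=$apiVersion"

# Body with source/target branches, title, and description, serialized to JSON.
$body = @{
    sourceRefName = "refs/heads/$branchName"
    targetRefName = "refs/heads/dev"
    title         = "Automated scaffold update $timestamp"
    description   = "Entities regenerated from the latest database deployment."
} | ConvertTo-Json

# POST the request; a successful call returns the new pull request ID.
$response      = Invoke-RestMethod -Uri $prUrl -Method Post -Headers $headers -Body $body
$pullRequestId = $response.pullRequestId
```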

With the pull request ID, we can now make a PATCH call to update the pull request with the auto-complete options, so that it completes automatically once its policies are satisfied.
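A sketch of the PATCH call, reusing the headers from the previous block; the completion options shown here (deleting the source branch, squash merge) are illustrative choices:

```powershell
# PATCH endpoint for the pull request created above.
$patchUrl = "$apiBaseUrl/pullrequests/${pullRequestId}?api-version=$apiVersion"

# Enable auto-complete on behalf of the identity that created the pull request.
$patchBody = @{
    autoCompleteSetBy = @{ id = $response.createdBy.id }
    completionOptions = @{
        deleteSourceBranch = $true
        mergeStrategy      = "squash"
    }
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Uri $patchUrl -Method Patch -Headers $headers -Body $patchBody | Out-Null
```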


Azure DevOps Configurations

Regarding authentication, the script uses the System Access Token for both the Git and REST accesses. This token belongs to the agent that runs the pipeline, so it is necessary to grant Git permissions to the Build Service account, as in the example below:

Furthermore, it is also important to allow the job to access other resources in the project, not only those scoped to the running job. That can be done in the pipeline settings in Azure DevOps:

The connection string for the database is kept in a Key Vault; it is loaded during the pipeline and passed down to the script as a parameter.
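As an illustration, the release pipeline step could invoke the script like this; the $(SqlConnectionString) variable is assumed to be populated by an Azure Key Vault task, and the script and repo names are hypothetical:

```powershell
# Hypothetical inline pipeline step: $(SqlConnectionString) is substituted by
# Azure Pipelines from the Key Vault-backed variable before the script runs.
.\scripts\Update-EntitiesPackage.ps1 `
    -ConnectionString "$(SqlConnectionString)" `
    -RepositoryName "Entities.Library"
```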

Note: Many values currently hardcoded in the script configuration could instead be passed as parameters; please adapt it to your own requirements.


Conclusion

Automation can be a great tool in the hands of experienced developers. It can help you and your team deliver within a standardized process, removing manual procedures, which increases efficiency and productivity and reduces error-prone tasks. Be it in code generation, testing, or even some development tasks, it's important to always be mindful of what can be improved.

As usual, thank you for reading. I appreciate all the feedback I can get, so feel free to drop a comment below or to reach me on social media (links below as well).

Filipe Teixeira

Hello, I'm a CS Engineer with 5 years of experience and a passion for technology. Currently I work for Microsoft as an Azure Cloud Engineer.
Any opinions expressed are solely my own and do not express the views or opinions of Microsoft.
