I have recently been struggling with an implementation where I had to set up an automated publish from Azure DevOps to a Fabric lakehouse, and thought I might share this with other users facing the same challenge.
Remember to add your service principal with permission on the whole workspace, or preferably as a read-only user on the lakehouse artifact, and then add it to the db_owner role on the SQL endpoint so that it is allowed to create all your database objects.
My use case is creating shortcuts in the lakehouse and then exposing parts of these shortcuts to end users through regional views.
We've created a separate schema for each region in the company: Europe, Americas and APAC.
We then created database roles and mapped Entra security groups to these roles, so that access rights are administered in Entra ID rather than in the database.
This way no user has access to the raw table shortcuts in the lakehouse, only to the views, which expose just the companies within the user's region.
From an existing lakehouse you can download the Visual Studio project as a zip file directly from the Fabric user interface. This gives you a good start if you don't know how to set it up from scratch.
The project can be opened and edited in Visual Studio Code (with the SQL Database Projects extension) or in Visual Studio Community/Pro/Enterprise if you prefer.
Create a new repository in Azure DevOps and commit/push the newly downloaded project to it.
I prefer to name all my SQL database projects SQL_....
Once the new repository has been created, you can set up a new build pipeline that is triggered automatically when new code is pushed to the repository.
Once you have selected “New pipeline”, choose “Azure Repos Git” and point to your newly created repository.
In the next step, choose “ASP.NET” so that you end up with a minimal list of tasks, with a pool vmImage of windows-latest and a build step.
You may remove the task “- task: VSTest@2” if you don’t plan to use it.
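For reference, after trimming, the remaining pipeline looks roughly like the sketch below. The variable values come from the default ASP.NET template and may differ slightly in your setup; the point is simply to restore and build the solution in Release so a .dacpac is produced.

trigger:
- main

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
# restore NuGet packages for the solution
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

# build the SQL database project; the .dacpac ends up under bin/Release
- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'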
Also, add these tasks to prepare for the release pipeline later on:
- task: CopyFiles@2
  inputs:
    Contents: '**'
    TargetFolder: '$(build.artifactstagingdirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
    publishLocation: 'Container'
You may specify filters on the copy task to speed things up if you have a huge project; you only need to copy the .dacpac file from the bin/Release folder.
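A filtered copy task might look something like this (assuming your project builds with the Release configuration; adjust the pattern to match where your .dacpac actually ends up):

- task: CopyFiles@2
  inputs:
    # pick up only the compiled dacpac instead of the whole source tree
    Contents: '**/bin/Release/**/*.dacpac'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'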
Once done, “Save and run” will run the build pipeline for you.
Wait for it to complete and have a look at the log to make sure it doesn’t give you any warnings or errors.
Also, make sure the copy task finds the dacpac file from your build.
If you push new code to the repo, this build pipeline will trigger automatically.
Once the build pipeline is done, you’ll need to set up a release pipeline, potentially with multiple stages (dev, test, prod).
Select Pipelines – Releases and click “New release pipeline” in the dropdown list.
Then, in the next step, scroll down to the bottom and select “Empty job”.
Give the stage a good name (dev for example).
On the left-hand side, click “Add an artifact”, browse to your newly created build pipeline and press “Add”.
Then, in the upper right corner of that same box, click the lightning symbol and enable the “continuous deployment trigger”. This will start your release pipeline automatically for “dev” when the build completes.
Then on the right-hand side you can define jobs and tasks for this stage by clicking the “1 job, 0 task” link.
On the “Agent job” you can click the “+” sign on the right-hand side to add tasks.
Search for dacpac and select Azure SQL Database.
The Azure SQL Database task needs to be configured correctly from top to bottom.
Use Azure Resource Manager and select a suitable service connection, which you may already have created in the DevOps project settings.
When configuring the SQL Database, choose
Then go to Variables – Pipeline variables and create the four variables sql_endpoint, database, appId and client-secret.
Remember to click the “lock” symbol on the client-secret variable so that it won’t be available for anyone to fetch.
You may also (preferably) link your variables to a key vault in Azure, but that is not covered here.
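For reference, the deployment step expressed in YAML form might look roughly like the sketch below. This is only an illustration, not the exact configuration: it assumes your version of the task accepts an Active Directory Service Principal connection string against the Fabric SQL endpoint, and the service connection name and dacpac path are placeholders you will need to adjust.

- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder: your ARM service connection
    AuthenticationType: 'connectionString'
    # authenticate as the service principal using the pipeline variables defined above
    ConnectionString: 'Server=$(sql_endpoint);Database=$(database);Authentication=Active Directory Service Principal;User ID=$(appId);Password=$(client-secret);Encrypt=True;'
    deployType: 'DacpacTask'
    DeploymentAction: 'Publish'
    DacpacFile: '$(System.DefaultWorkingDirectory)/**/*.dacpac'   # adjust to where the drop artifact lands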
Now save your release pipeline and create a new release from it.
When the release pipeline has executed, carefully look at the logs to make sure it ran as expected.
Also check the end result in your lakehouse to make sure all objects have been created.
Once you’ve got this running, you may set up more stages, which lets you publish that same database project to another lakehouse in a different workspace (test/uat/prod).
fabric, lakehouse, dacpac, publish, servicePrincipal, devops, CI/CD