The Power Platform is an unusual beast, sitting in a unique space between a Citizen Developer tool and a pro dev platform. I often say that is a risk, as with great power comes great responsibility, so how exactly do you set up the platform to get the benefits of Citizen Developers while protecting the organisation?
This blog is how I would set up the Power Platform. It is not the right way for everyone, and definitely isn't the only way, but it's the way I would do it.
There are 4 key pillars to my setup:
- Strategy
- Configuration
- Training
- Automation
1. Strategy
I've talked about different Power Platform strategies before, but in my ideal world I would use a federated model.
The end goal would be to enable business teams to manage their own Environment stack (Dev/Test/Prod). The reason for this is that I think you get the best of both worlds:
- You enable the business to be IT self-sufficient (no need to grow the IT department)
- Need is close to Delivery (the developers understand the requirements)
- IT governance and controls are still enforced.
Each business team would control and administer their own environments, managing users, deployments and solution management. Yet they would all follow core governance rules set out by the IT department, the key ones being:
- Separation of Duty (developers do not have access to production)
- Change Control (Business approval of changes and validation of testing)
- Centrally Controlled DLP policy
But although that is my end goal, I understand it is not an easy path, so I would also run a hybrid with a Shared environment strategy. This way there is an environment stack managed by the CoE (Centre of Excellence); it provides a route for small teams and a model for federated teams to follow.
2. Configuration
So we have our strategy, but what does the practical side of this look like?
Well, for starters we need to deal with the Default environment (thank you for this little jewel, Microsoft). The Default is potentially your Achilles heel when setting up good governance, as it's your lowest common denominator. Whatever controls you put on other environments, people will flow to the Default with its open governance, so you are only as good as your Default. To make things worse, it is used by other Microsoft tools under the hood, so changes can impact other systems:
- SharePoint List forms
- SharePoint List item trigger
- Project
In my mind the Default should be Personal solutions only, and that means:
- Very restricted DLP
- Sharing limits
- Restrict Dataverse & Model Driven Apps
The restricted DLP would cover only the very basic connectors (anything you cannot block); there would be no Non-Business group, just Business and Blocked.
Sharing limits are a harder one, unless you have all premium licenses, in which case you can enforce sharing limits (Flows 1, Apps 5). If you don't have all premium licenses then you will have to change from proactive to reactive.
A set of control flows will need to run on a schedule (ideally slightly randomised to stop people gaming the system). The flows would check how many people each flow and app is shared with, and either remove the additional shares or quarantine the resource (to quarantine a flow, change the owner to SYSTEM and remove the original owner's access).
Restricting Dataverse is also very difficult (thank you again, Microsoft, for locking down the Environment Maker security role). This one requires the same reactive approach, with flows automatically deleting all custom Dataverse tables and Model Driven apps (yes, I would be that brutal).
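The decision logic those control flows would apply can be sketched as below. This is a minimal illustration of the approach described, not real connector code: the share data would actually come from the Power Platform admin connectors, and the limits (Flows 1, Apps 5) are the ones suggested in this post.

```python
# Reactive sharing-limit check: decide what a scheduled control flow
# should do with each flow/app, given who it is shared with.
# The limits below are this post's suggestions, not platform defaults.
FLOW_SHARE_LIMIT = 1
APP_SHARE_LIMIT = 5

def enforce_sharing_limit(resource):
    """Return the action the control flow should take for one resource."""
    limit = FLOW_SHARE_LIMIT if resource["type"] == "flow" else APP_SHARE_LIMIT
    over = len(resource["shared_with"]) - limit
    if over <= 0:
        return {"action": "none"}
    if resource["type"] == "flow":
        # Quarantine: reassign the owner to SYSTEM and strip access.
        return {"action": "quarantine", "new_owner": "SYSTEM"}
    # Apps: remove the shares beyond the allowed limit.
    return {"action": "remove_shares", "remove": resource["shared_with"][limit:]}

print(enforce_sharing_limit({"type": "flow", "shared_with": ["alice", "bob"]}))
# → {'action': 'quarantine', 'new_owner': 'SYSTEM'}
```

In a real flow the same branching would be an Apply to each over the admin connector's list of resources, with a Condition per resource type.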
If your developer outgrows the Personal Productivity environment they can move on to the Shared environment. This can be one stack of environments or multiple, each specific to a geographical area.
The Shared environment is maintained by the CoE and, as I said, is the model that all the business-run environments follow. It gives small teams and lone developers a path to production, though it never removes the overriding ownership of the solution, so the development work remains with the team.
The CoE is the core IT team that maintains the Power Platform and they have a few key roles:
- Administering the Tenant Settings
- Provisioning New Environments
- Updating the DLP policies
- Enabling new features/technology
- Setting governance process
- Setting training requirements
- Auditing
- Attaining business value
The Shared environment follows a standard IT process map, with key stage gates being followed:
- Intake
- Arch Review
- Security Review
- Impact (if applicable)
- Design Review
- Code Review
- UAT
- Change Approval
Along with key requirements of:
- Documentation
- Support model/escalation process
- Service Accounts used for Separation of Duty
These additional steps can feel convoluted and a barrier to entry, but they are key to ensuring the platform is secure, scalable, and generates business value for the cost. It's far too easy to follow the hammer principle:
If you own a hammer, everything looks like a nail.
So it's key to understand the problem, the challenges, and what the right solution is. Very few organisations have only the Power Platform, so it's worth checking if there are better solutions. This can even be true within the Power Platform itself: a Power BI dashboard may be a better solution than a Power App, and likewise a Low Code Plugin might be a better solution than a Flow.
CoE Structure
I'm not going into resourcing here, just the structure. The key features the structure needs to cover are:
- Manageable Skill Set
- Collaboration
- Progression/Succession Planning
The focus is on applications, with an SME each for Automations, Applications, AI, and Data. Additionally there is a Platform SME, focused on the platform settings and Managed Environments. Due to the importance of this role, the leader of the platform team is the most senior.
There are then delivery teams below Automations, Apps and AI; these teams support the business more directly (think of it as: the SMEs are for tomorrow, delivery is for today). This also creates a natural succession plan, with the delivery teams a natural progression route to SME.
Business Stack
The business stack works on a federated model, with a business team meeting the requirements to govern their own environments. The minimum requirements are:
- Backlog - they must have a volume of work on the platform
- Trained dev team - accredited and key job role requirement
- Trained admin team - accredited by the CoE
- Sponsorship - the department's senior leadership are committed to maintaining the team and budgeting for additional licenses (e.g. Dataverse or AI Builder credits)
- Risk Ownership - own the risk of data breaches and/or failed systems
3. Training
This is critical to ensuring that your platform is stable. Setting up a training and certification process has 3 key benefits:
- Efficient Code
- Consistent Code
- Small Barrier to Entry

('Code' is used loosely here and covers all development work.)
1. Efficient Code
This has double meanings. The obvious one is fewer runs and API calls (each user has a daily limit, and we need to be thinking of our carbon footprint). The other efficiency is in development: trained developers will develop quicker, with fewer UAT and Production issues.
2. Consistent Code
When you write your code it's easy to think it makes perfect sense to someone else, but does it? Training developers to follow standards like naming and structure ensures anyone can debug and update the code. This is critical for Low Code, as without formal training solutions can be very 'Creative'.
3. Small Barrier to Entry
One of the biggest benefits to the Power Platform is 'Anyone can be a developer', but should they? Developers don't just need build skills, they need a security mindset, design skills, and many more, plus they need passion (especially when debugging 😎). Getting your developers to invest time and effort in training makes sure they have the right commitment along with additional skills.
My training setup is a mix of Microsoft Learn and internal documents. MS Learn has some fantastic modules, so why reinvent the wheel; getting Microsoft to maintain two-thirds of your material is a no-brainer. But don't just follow their plans, pick only the modules relevant to your organisation (don't use Dataverse? remove those modules!). The last third is your organisation's own requirements, which should include, but not exclusively:
- Naming Conventions
- Action Configuration
- Preferred Patterns
- Exception Handling
- Solution Setups
- Documentation
- Access
As an example, for Flows I have the following:
Naming Convention
How your variables, actions and components should be named in a consistent way. For Power Automate variables I like camel case with a first-letter type prefix (iNumber, sString), and actions should include the original action name (Get Items Tasks, Run Script Update Formatting). I've done a full blog on this for Power Apps here too.
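A convention like this is easy to lint in a code review script. Below is a minimal sketch of a checker for the variable rule above; the set of type-prefix letters beyond the post's two examples (iNumber, sString) is my own assumption for illustration.

```python
import re

# First-letter type prefixes; 'i' and 's' come from the post, the
# rest are assumed extensions for illustration.
TYPE_PREFIXES = {"i": "integer", "s": "string", "b": "boolean",
                 "a": "array", "o": "object"}

def valid_variable_name(name: str) -> bool:
    """True for <type letter><CamelCase rest>, e.g. iRetryCount, sUserName."""
    return (len(name) >= 2
            and name[0] in TYPE_PREFIXES
            and re.fullmatch(r"[A-Z][A-Za-z0-9]*", name[1:]) is not None)

print(valid_variable_name("iRetryCount"))  # True
print(valid_variable_name("retryCount"))   # False - missing type prefix
```

The same idea extends to action names (check that the original action name is still present in the renamed action).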
Action Configuration
Setting up consistent action settings ensures every flow behaves the same. I set the retry policy to off unless there is a specific reason, and if there is, a maximum of 3 non-exponential retries. Secure inputs/outputs should be enabled for any flows handling data classified higher than BI. Finally, pagination settings should always be turned on and set to the maximum limit.
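Expressed as data, those defaults become a checklist a review script could apply to an exported flow definition. The field names below are illustrative stand-ins, not the actual flow JSON schema:

```python
# Default action settings from the standards above, as a reviewable
# checklist. Keys are illustrative, not the real flow definition schema.
DEFAULTS = {
    "retry_policy": "none",        # off unless there is a specific reason
    "max_retries": 3,              # cap when retries are justified
    "retry_type": "fixed",         # non-exponential
    "pagination": True,            # always on...
    "pagination_limit": "maximum", # ...and set to the maximum
}

def check_action(action: dict) -> list:
    """Return the settings that deviate from the defaults."""
    return [key for key, expected in DEFAULTS.items()
            if action.get(key, expected) != expected]

print(check_action({"retry_policy": "exponential", "pagination": True}))
# → ['retry_policy']
```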
Preferred Patterns
How your flows and apps are set up should be consistent. I want flows to follow the Direct Methodology, with Child flows rather than monolith flows. Apps should prioritise single-screen approaches, with multi-screen by exception (and limited screens and click-throughs). Containers should not be used for positioning (that's what dynamic X/Y parameters are for), only for responsive apps.
Exception Handling
I require all flows to have exception handling. This should ensure any external sources impacted are updated (e.g. if a flow errors while adding rows to a list, I may want to delete all rows already added to make it easier to re-run). Secondly, it should notify someone, because flows are owned by Service Accounts in production. The notification should include a link to the failed run (to pass to the support team) and the error message.
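The rows example above is a compensation pattern, sketched below. The add/delete/notify callables are stand-ins for the real connector actions (in a flow this would be a Scope with a "run after has failed" branch):

```python
def add_rows_with_rollback(rows, add_row, delete_row, notify, run_url):
    """Add rows one by one; on failure, delete what was added and notify.

    add_row/delete_row/notify are placeholders for connector actions.
    """
    added = []
    try:
        for row in rows:
            added.append(add_row(row))
    except Exception as err:
        for row_id in added:          # compensate: undo the partial work
            delete_row(row_id)
        # Notification carries the error and a link to the failed run,
        # so the support team can pick it up.
        notify(f"Flow failed: {err}. Failed run: {run_url}")
        raise
    return added
```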
Solution Setups
Dependencies can cause untold stress, so I work on dependencies by exception: everything used should be in the solution unless there is a good reason (e.g. a Custom Connector or Dataverse table). Connection references and environment variables should be solution-specific as well (this avoids dependency issues and allows future flexibility) and follow a standard naming convention (no auto-creating in the flow). Every solution should use the dev team's publisher to help track ownership.
Documentation
Every solution should be documented (flows should be grouped into solutions), this should cover:
- SDD (Simple Design Diagram) and SGD (Simple Goal Description)
- Contents (so we know the connections and variables for all environments)
- TDD (Technical Design Diagram)
- Setup (Environments, Ownership and Access)
- UAT (User Acceptance Test) evidence and requirements
Access
Flows should always be owned by a Service Account in production; this ensures no edits in production and data protection. Service Account details should be stored in a digital vault, with access tracked and limited. Dev flows should be stored in a repository and deleted once development is complete.
Apps and Bots have similar requirements under the same headings.
4. Automation
The Power Platform will scale, with or without you. Once people find it they will build, they will share, and build some more. This means that maintaining the platform can become very labour-intensive (deployments, environment access, support tickets, DLP updates, new feature rollouts). So automation is key, and luckily the Power Platform has a very good automation tool 😎
Pipelines
Whether Azure DevOps, Power Platform, or your own, you need pipelines. Automated deployments through pipelines are critical to allow scale and efficiency, and should be one of your first automations.
Enabling developers to promote when they need to, with minimum friction, accelerates development and ensures more robust testing. Your pipelines should therefore be accessible to all your approved developers and easy to use.
So far I have personally found limitations with the out-of-the-box pipelines, so I have made my own. They allow me to meet my separation of duty requirements and the other governance I want. I use the platform itself, with a combination of Flows, a Canvas App, a Model Driven App and Low Code Plugins.
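A home-built pipeline lets you encode gates like separation of duty directly. Below is a sketch of one such gate, under the rules described in this post (production needs a change approval, the approver cannot be the requester, and production imports run under a service account); the request shape is illustrative:

```python
def can_deploy(request: dict):
    """Gate a deployment request; returns (allowed, reason).

    Request fields (target/requester/approver) are illustrative,
    not a real pipeline schema.
    """
    if request["target"] != "prod":
        return True, "non-production: no gate"
    if request.get("approver") is None:
        return False, "production requires change approval"
    if request["approver"] == request["requester"]:
        return False, "separation of duty: approver must differ from requester"
    return True, "approved: import runs under the service account"

print(can_deploy({"target": "prod", "requester": "dev1", "approver": "dev1"}))
# → (False, 'separation of duty: approver must differ from requester')
```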
Access
Managing access just inside the platform is doable but not easy, so I would always use security groups. These can be used outside of the platform too (like giving access to SharePoint sites or communication groups) and can be integrated with your organisation's existing approval process.
If you don't have an existing process then automate with Power Automate, using approvals and Office 365 Groups integrations.
Dev Controls
No matter how good the majority of developers are, there will always be some who want to cut corners. And because, under the hood, dev environments are exactly the same as test and prod, they can easily become pseudo-prod. So along with good telemetry, I would have automations that turn off dev flows daily if they have not been modified within 24 hours, and enforce the same sharing limits as the Default.
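The stale-flow check is simple to express. In this sketch the flow list and timestamps are sample data; in practice they would come from the admin connectors in the scheduled control flow:

```python
from datetime import datetime, timedelta

def flows_to_disable(flows, now, max_idle=timedelta(hours=24)):
    """Names of dev flows idle past the cut-off, to be turned off daily."""
    return [f["name"] for f in flows if now - f["modified"] > max_idle]

# Sample data standing in for the admin connector's results.
now = datetime(2024, 1, 2, 9, 0)
flows = [
    {"name": "ActiveDevFlow", "modified": datetime(2024, 1, 2, 8, 0)},
    {"name": "PseudoProdFlow", "modified": datetime(2023, 12, 20, 9, 0)},
]
print(flows_to_disable(flows, now))  # → ['PseudoProdFlow']
```

A flow that keeps running untouched for days is the telltale sign of a dev environment quietly becoming production.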
Metrics
The platform's built-in metrics are not the best, and the CoE Kit does not work for me (the requirement for Global Admin to run it is simply not acceptable from a security standpoint). Integration with Fabric is the future, but if that is not available, use a combination of Power BI (with Dataverse tables and Application Insights as sources) and flows to gather additional information like Security Group data.
And that's how I would do it. It's not the right way for every organisation, and it's probably not right for most, but it covers all the things I think are important, and hopefully gives you a base to build out what you need.