Copilot for Microsoft 365 - Technical readiness
Copilot for Microsoft 365 is an interesting implementation of Generative AI as it’s baked into the 365 suite. The documentation, supporting resources, and other launch materials for Copilot for Microsoft 365 are plentiful, with pages of guides on the various work streams and tasks recommended for a successful launch.
Microsoft have iterated on these guides since I first started looking at them, and today I want to share some notes on how we handled the “technical readiness” work stream. I’m writing from part-way through a pilot project of around 30 teaching & learning staff who collectively represent almost every school in our trust, and at this point I’m confident enough to say that we’ve done a reasonable job of it!
Technical Readiness
Microsoft have a boatload of guidance and adoption resources for Copilot in their Technical readiness for Copilot for M365 resources, and I can’t hope to cover those here. Instead, I’m going to talk through my approach to Copilot for Microsoft 365, and a few things that I found more challenging than I’d expected.
Firstly, it’s possible to group all of the technical preparation into one of two levels:
Tenant & Environment
User & cohort
For each level, we need to understand the following aspects:
Data & governance
Technical prerequisites
Tenant & environment
The first place to start is the preparation in the tenant and environment. The Copilot success kit has matured since we did the bulk of this work back in the spring, as has the Copilot Optimisation Assessment. Take the time to go through the work in these guides step by step; most of the checks are pretty straightforward and involve making sure that you have the right licensing in place, and that users are using the tools where Copilot is embedded. There’s also a collection of guides on Microsoft Learn which are well worth a read.
When we completed our preparation at this level, we were looking for answers to the following questions:
Do we have anything that will take significant effort to fix, either before or during a Copilot for Microsoft 365 launch?
Generally, how ready is our tenant and environment as a whole?
These questions are important. One of the biggest risks that’s been talked about with Copilot for Microsoft 365 is that, if your practices around data are already poor, then Copilot might find data that a user shouldn’t have access to and present it to them.
The problem here is not Copilot for Microsoft 365, it’s that users are not handling data in a way that’s safe and secure. This might mean that you have some work to do across your whole organisation before you come back to Copilot, but delaying your Copilot project does nothing to fix the problem. It’s a risk you’re already carrying and it needs sorting with or without Copilot.
For each check or test that I used, I had a threshold for acceptance. The threshold varied depending on how critical that particular check was in answering my questions above, and the workload involved in correcting any issues that I found. It’s not possible to simply copy and paste these checks between organisations. We started with the checks from the Microsoft documentation and then added more as we went. A few examples are:
Number of SharePoint sites set to “Public” should be 0.
Number of users on current release of M365 apps should be > 95%.
Default sharing link type for all sites should be “People with existing access”.
…
Once all of your tests are above the threshold, you’re likely ready to move forwards. We couldn’t afford to set the target for every check to 100%, as the effort required to check every permission across every file and folder on SharePoint would be astronomical. Even if you could, the very next day something might well have changed. Instead, we need enough certainty to move forwards, and then to build in further checks and governance processes in the per-user or cohort-level work. The thinking behind this is laid out in a previous post titled “Clarity through movement”, linked below.
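To make the idea concrete, here’s a minimal sketch in Python of how checks can be recorded against their acceptance thresholds. The check names mirror the examples above, but the values and thresholds are illustrative only; in practice they come from admin centre reports and PowerShell exports rather than anything this simple.

```python
# Illustrative sketch only: recording readiness checks against acceptance thresholds.
# The values and thresholds below are made up, not pulled from a real tenant.
from dataclasses import dataclass

@dataclass
class ReadinessCheck:
    name: str
    value: float          # measured value (a count or a percentage)
    threshold: float      # acceptance threshold agreed for this check
    higher_is_better: bool = True

    def passed(self) -> bool:
        if self.higher_is_better:
            return self.value >= self.threshold
        return self.value <= self.threshold

checks = [
    ReadinessCheck("SharePoint sites set to Public", value=0, threshold=0, higher_is_better=False),
    ReadinessCheck("% of users on current M365 apps release", value=97.2, threshold=95.0),
]

for check in checks:
    status = "OK" if check.passed() else "NEEDS WORK"
    print(f"{check.name}: {check.value} (threshold {check.threshold}) -> {status}")
```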
User & Cohort
This part was initially more difficult. The question I used to frame this was “what’s the individual risk for this user?”. We’ll have different concerns around data when granting a Copilot license to users with different levels of access, and there are different levels of trust at play too. The other trusts I know deploying Copilot for Microsoft 365 all started with very highly trusted individuals in central teams and leadership positions.
After a few late nights diving into the SharePoint & Microsoft Graph PowerShell modules, I concluded that the data that I really wanted wasn’t actually available. I could easily get a list of the groups that a user was a part of, and an indication of what those groups gave them access to, but there was a gaping hole in the data I’d get back: OneDrive and ad-hoc permissions in SharePoint.
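The group-membership half of that picture is easy enough to pull together. As a rough illustration, here’s a minimal Python sketch against the Graph API; the tenant, client credentials and user principal name are placeholders, and the app registration would need suitable application permissions (something like Directory.Read.All) consented.

```python
# Illustrative sketch: listing the groups a user belongs to via Microsoft Graph.
# Tenant ID, client ID/secret and the UPN are placeholders; error handling is omitted.
import msal
import requests

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
USER_UPN = "someone@example.org"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER_UPN}/memberOf",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()

for group in resp.json().get("value", []):
    print(group.get("displayName"))
```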
We needed a way to assess the “blast radius” for enabling Copilot for Microsoft 365 for any given user.
There were a few SharePoint PowerShell commands that looked promising, but every one of them simply errored or timed out. I really wanted to know what data a user could see in others’ OneDrives, and which folders from SharePoint had been shared with them, as it would give me much more confidence that what I think staff are doing is actually what’s happening.
Then, I found this endpoint in the Graph API:
https://graph.microsoft.com/v1.0/me/insights/shared
This endpoint is the API behind the “Shared with me” page in OneDrive, which (in combination with other measures) might give us enough certainty to activate a license. We then hit barrier two… a little line in the documentation stating that “Only the user can make requests using the user's ID or principal name.”
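For reference, here’s a minimal sketch of calling that endpoint. The access token is assumed to be a delegated token for the signed-in user, obtained through an interactive sign-in (for example MSAL’s authorization code flow) with the relevant delegated permissions consented; the field names in the response are as I understand the sharedInsight resource, so treat them as an assumption to verify.

```python
# Illustrative sketch: querying the "Shared with me" insights for the signed-in user.
# access_token must be a *delegated* token for that user (hence the onboarding app below).
import requests

def shared_with_me(access_token: str) -> list[dict]:
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/me/insights/shared",
        headers={"Authorization": f"Bearer {access_token}"},
    )
    resp.raise_for_status()
    return resp.json().get("value", [])

for item in shared_with_me("<delegated-access-token>"):
    resource = item.get("resourceVisualization", {})
    print(resource.get("title"), "|", resource.get("containerDisplayName"))
```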
I’d been planning to use a simple Forms & Power Automate combination to allow users to register for a pilot project, in which they’d have access to a license. But as the endpoint I needed can only be accessed by the user themselves, I put together a Flask app running on Azure App Service. This onboarding app allows the user to sign in and work through a set of onboarding pages, then calls the Graph API endpoint before saving a report to a database for me to review. The whole workflow looks something like this…
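The real app has more to it (sign-in, onboarding pages, approvals), but the heart of it is a route along these lines. This is a hedged sketch rather than our actual code: get_user_token and save_report are hypothetical stand-ins for the MSAL session handling and the database layer.

```python
# Hypothetical sketch of the report step in the onboarding app.
# get_user_token and save_report are stand-ins for the sign-in and database pieces.
import requests
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "change-me"  # placeholder; the real app uses a proper secret

def get_user_token() -> str:
    # Placeholder: in the real app this comes from the MSAL-backed sign-in session.
    return session["graph_access_token"]

def save_report(user: str, shared_items: list[dict]) -> None:
    # Placeholder: in the real app this writes a report to a database for review.
    print(f"Saving {len(shared_items)} shared items for {user}")

@app.route("/onboarding/report", methods=["POST"])
def build_report():
    token = get_user_token()
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/me/insights/shared",
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    items = resp.json().get("value", [])
    save_report(session.get("user", "unknown"), items)
    return {"shared_item_count": len(items)}
```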
At some point I’ll talk through our onboarding & user adoption work in more detail, as I think that’s even more important, but having the technical readiness for a user embedded in the onboarding process allows the work to scale. We can onboard 25 people with checks at the individual level in under an hour as the data that we need to review is collated automatically.
In our internal discussions on this work, “blast radius” became the shorthand for the impact of enabling Copilot for Microsoft 365 for a user. There’s probably better wording for this, but that’s the term that stuck.
What if we miss something?
This is a really important question to have answered. As I’ve noted before, our aim was never 100% certainty as that’s simply not achievable. Instead, we needed to reach an appropriate level of certainty, based on our organisation and our data, and then have pre-agreed plans to implement if something came up.
The starting point for this was: what process do we follow if we find that a user has access to data that they should not have? We already have policies and processes in place for this kind of problem, and so it became a matter of wrapping these in specific training and documentation for staff and leaders across our organisation. The key step here, which from the very small window I have into other organisations I don’t think is yet widely understood, is that this conversation took place at executive team level. Far too often, trusts and schools have a divide between teaching and non-teaching functions in the organisation, and ensuring that the executive / senior leadership team has a common understanding of the risks & problems (existing over-sharing of data), and agreement on what the plan will be if anything is found (don’t just blame IT), is critical.
Additional tools and licensing
It’s worth noting that some of the tools and licensing that Microsoft links to in their documentation simply aren’t available to us. The A5 compliance suite and the SharePoint Advanced Management licenses both add some very powerful tools to support your management of data across SharePoint. Unfortunately, with budgets the way that they are, we don’t have access to these.
Adoption & Impact
I’ll save the majority of the adoption & impact work and the thinking behind our internal roadmap for AI for a separate post, but I think there is one aspect that’s worth pulling out here.
Technical readiness isn’t just about safety. It’s just as important that we use these tools to support the adoption & impact work.
In building an application to handle onboarding, we now have the ability to present some information to users during that onboarding journey. In fact, we’ve flipped the experience around. This isn’t a license that IT simply provides once we’re happy; this is something that you sign up for and enable yourself (with an approval step).
The onboarding app presents the outline of our Copilot project and our expectations of staff who are joining it. It explains our rationale behind a pilot, and links to our principles for “Responsible AI”.1
In addition to this, we’ve built a series of Power Automate flows to support the project. One handles the onboarding process, approvals, and activation of licenses (called automatically by the application that creates the report); another handles project communications, sending automated emails to participants with reminders to complete feedback forms, and hints and tips. The reminders go out a set number of days after sign-up and are pulled from a SharePoint list, so they can be added to and edited on the fly.
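The flows themselves are no-code, but for anyone curious, the reminder logic boils down to something like the sketch below. The column names and messages are made up for illustration; in our case the rows live in a SharePoint list so non-technical colleagues can edit them.

```python
# Illustrative sketch of the reminder logic the communications flow implements.
# Each row has an offset in days and a message; whatever falls due on a given day gets sent.
from datetime import date

reminders = [
    {"days_after_signup": 3, "message": "Don't forget to try Copilot in Outlook this week."},
    {"days_after_signup": 7, "message": "Please complete your first feedback form."},
]

def reminders_due(signup_date: date, today: date) -> list[str]:
    days_in = (today - signup_date).days
    return [r["message"] for r in reminders if r["days_after_signup"] == days_in]

print(reminders_due(signup_date=date(2024, 9, 2), today=date(2024, 9, 9)))
```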
Where next?
We’re currently in the middle of a “private beta”2 pilot project with teaching staff from all schools, with a view to assessing the impact of Copilot for Microsoft 365, and whether the cost can be justified for a wider rollout.
We’re also looking at the training and skills needed to get value out of Copilot. Can we get the same value from cheaper / free alternatives without introducing data and security concerns? Should certain knowledge and skills be considered prerequisites for getting value from Copilot, and how could we embed those into onboarding and wider digital skills initiatives so that more people can get value from it? Should we be looking at AI elsewhere (hint - yes, and we are!), and how can this pilot inform other work on use of AI?
We’ll document and share what we find over the next few months, so look out for more from me here. The next post drops… sometime soon?
Responsible AI is an odd term. If you replace “AI” with “Technology” then everything still makes sense, but we no longer have to worry about exactly how “AI” this thing that we’re being responsible with is. If the world had thought a little more about responsibility when social media and smartphones were new and shiny, arguably we’d be in less of a pickle now!
I know, not really the correct use of this term as we’re not developing stuff, but it’s a good analogy for the project.