The importance of protecting data by identifying and classifying it, and preventing its loss, has been discussed for a long time. Microsoft agrees, and has published a white paper on the subject, which you can read here.
Their main motivation, of course, is to sell you something: in this case Microsoft Purview, a product that natively interoperates with Copilot.
This is a classic example of how IT is getting more and more expensive. Copilot is now free for certain things - it is native in Bing, for example - while in other applications it is taking hold as a paid feature. Teams Premium now offers digital dictation through Copilot and automatically creates "chapters" throughout meetings. Other features across the Office suite are being added almost daily. Windows 11 is changing too (24H2 launches later this year), and we can expect Copilot to be integrated across all Microsoft applications in the coming year.
Fundamentally, unless you have a way to identify and classify your data and prevent its loss, you are likely to experience a data breach or other loss. It is no wonder that many companies are banning generative AI. As Cisco has highlighted, many have real fears about what it will unleash!
Maybe you should consider giving employees a clear framework in the form of an AI policy and procedure?