Artificial intelligence has great potential, but it also poses real risks, from security breaches to hallucination. Discover how you can ensure the safe use of AI in your organization without preventing your team from using it for creativity and innovation.
Artificial intelligence (AI) tools are set to revolutionize the way we all do business. It’s exciting to watch computers perform everyday workplace tasks for us in response to nothing more than a typed request.
So it’s no wonder that workers around the world are rushing to try out tools like ChatGPT to see what they can do and how they can increase productivity. However, it’s important to exercise caution.
Samsung recently discovered employees entering proprietary code into ChatGPT to test its bug-fixing capabilities. This not only allowed ChatGPT’s vendor, OpenAI, to view the code, but could also have taught ChatGPT the structure of that code, meaning it could reproduce similar code in response to other users’ requests.
Samsung, however, realized that prohibiting employees from using the tool was not an effective way to protect the organization. Instead, it provided employees with a private instance of ChatGPT that was unreadable to OpenAI and did not send information to anyone outside Samsung. This allowed Samsung employees to continue experimenting with ChatGPT and finding new ways to harness its potential, within a secure environment where they could not put the company at risk. Enabling the use of AI tools within the necessary protective boundaries will be essential to successfully realizing their potential.
Artificial intelligence has incredible potential. Analyzing rich data sets and making strategic decisions much faster than a human can is impressive enough, but generative AI’s ability to produce human-sounding content and functional programming code instantaneously is a game-changer.
Holding your team back from these tools will leave them less productive than competitors whose teams are making full use of generative AI, and that alone puts you at a competitive disadvantage.
Beyond productivity, generative AI can also be used for ideation, and organizations that fail to take advantage of this may find themselves struggling to compete creatively. It is therefore critical that IT teams and enterprise architects are able to empower their organizations with innovative AI tools.
This requires knowledge of the evolving AI market, so that enterprise architects can find the tools their colleagues need to succeed. More challenging, however, is identifying the best place to fit these tools into your IT landscape.
To properly empower your organization with AI tools, your enterprise architects need to start with a complete map of your application portfolio and data landscape. However, putting the tools in place is just the beginning of your work.
AI tools are so valuable because they can act without direct instruction, reducing the load on your resources. AIs can analyze data to extract intelligence, direct resources where they are needed, eliminate wasted investment using predictive algorithms, and even create new software before you realize you need it.
The question, however, is: what happens when AI makes mistakes? Nothing is infallible, and even a 1% error rate means mistakes occur regularly at scale: a system making a thousand decisions a day would get roughly ten of them wrong.
Of course, the same can be said of any human being. Both humans and AIs make mistakes; the difference is that humans can identify and correct their mistakes when they make them.
AI, however, cannot diagnose its own problems. Not only does AI struggle to recognize mistakes, it often fabricates evidence to prove itself right, a phenomenon known as AI “hallucination.”
Nor is this a theoretical concern. There are many recent real-world examples of AI causing real problems for organizations.
Microsoft recently made 50 writers in its news division redundant, replacing them with generative AI to write articles instead. This led to embarrassment when the AI listed a charity food bank in Ottawa, Canada, as a popular destination for tourists.
More seriously, a Manhattan district court recently fined the law firm Levidow, Levidow & Oberman USD 5,000 for citing legal precedents in its cases that had been derived from research done in ChatGPT. The precedents turned out to have been completely fabricated by the AI tool.
This isn’t malice or a false claim of capability; generative AI tools are simply unable to tell the difference between factual information and plausible fiction. And these are just a few examples of AI tools getting things badly wrong.
Generative AI misuse is one thing, but when you let AI platforms control your logistics or monitor your data for cyber attacks, errors can be critical. What happens when your AI blocks access to your network because it thinks all your employees are attackers, or worse, believes a cyber attacker is a legitimate user and hands them full control?
When leveraging AI tools, you must put the right protections in place to ensure human oversight of AI innovation. Yet you need to do this without blocking the competitive edge that AI tools can offer.
Artificial intelligence has incredible potential, and those who don’t take advantage of it risk being left behind in the market. Yet, as we have seen, there are tremendous risks involved in the unsupervised use of these tools.
You can’t simply give your employees free rein to use these tools unchecked across your organization (so-called “shadow AI”), but you also can’t prevent your organization from benefiting from AI innovation. Striking the right balance means supporting AI adoption while simultaneously ensuring AI governance.
The key is to manage your AI platforms as you would an employee: let them do the work while your organization supervises their activity and monitors their performance.
To do this, you need an AI governance framework to evaluate and approve tools for use in your organization, oversee their inclusion in your application portfolio, and continue to monitor their usefulness over time. Rather than blocking AI innovation, enterprise architects should become its champions, delivering, implementing, and developing the use of AI tools wherever possible.
By becoming the one who says “yes” to AI more often, you will also become the one people are more likely to listen to when you have to say “no.” That, however, means gathering enough intelligence to confidently say “yes” when it is safe to do so.
Managing AI platforms requires enterprise architects to understand the capabilities and limitations of AI tools so they can implement them appropriately across the application portfolio and IT landscape. They need to know where these tools are safe to use and where they are not.
Gaining this insight into your application portfolio will enable you to say “yes” to AI innovation without risk. To do this, you need the LeanIX platform to map your entire application portfolio in a collaborative and dynamic environment.
Would you like to know more about how SAP LeanIX can support your transformation journey and migration to S/4HANA while reducing costs and technological risks?
Contact us and schedule a demo! NUMEN is the partner that can help you on this journey.