Getting real with AI and Apple security in 2025
Four of Jamf’s leaders share their ideas on what will dominate tech discourse in the year ahead.
![A robotic hand appears to direct the movement of streams of data or electricity.](https://media.jamf.com/images/news/2025/jamf-insights-2025-ai-security.webp?q=80&w=800)
Photo by Tara Winstead
Demystification of AI and genAI will lead to changes
Linh Lam, Jamf’s Chief Information Officer, believes that as the dust clears from the excitement and early adoption of AI, business leaders will think more about the real value of an AI solution rather than simply adopting AI for its own sake.
“Everyone from household-name companies in cybersecurity to start-ups with 10 employees has quickly entered the genAI market over the past year or two,” she says. “It’s a crowded space that can easily overwhelm even leaders of technology companies who are looking to select the right genAI solution for their businesses.”
As the market settles down a few degrees from sizzling hot, Lam predicts that technology leaders will also have access to better information about which tools perform better than others. “It’s going to be a year of cutting through the genAI noise,” adds Lam, “and organizations that can break through the noise will be the companies that stick around for years to come.”
More mature AI assessment
In order to assist organizations in these sorts of assessments, Michael Covington, Vice President of Portfolio Strategy at Jamf, believes the tech industry will need a strong set of standards for choosing AI products.
A foundational AI rubric
“The industry will need to develop a set of foundational rubrics to guide business leaders in more timely assessments of AI technologies,” says Covington. “As a result, I predict we will see a renewed focus on data classification labels, a better understanding of AI processing locations, and a demand for confidentiality assertions from vendors as private data traverses their infrastructure.”
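As a rough illustration of what such a rubric might capture, the sketch below models a vendor assessment in Swift. Every type, field and approval rule here is a hypothetical placeholder, not an industry standard, but it shows how classification labels, processing location and confidentiality assertions could feed a single go/no-go decision.

```swift
import Foundation

// Hypothetical rubric fields for assessing a genAI vendor; the names are
// illustrative placeholders, not an established standard.
enum DataClassification {
    case publicData, internalData, confidential, restricted
}

enum ProcessingLocation {
    case onDevice, vendorCloud, thirdPartyModel
}

struct AIVendorAssessment {
    let vendor: String
    let allowedClassifications: [DataClassification] // data labels the tool may receive
    let processingLocation: ProcessingLocation       // where prompts and data are processed
    let confidentialityAssertion: Bool               // vendor attests data is not retained or used for training
    let retentionPeriodDays: Int?                    // nil means no retention

    // A crude gate: confidential data only flows to tools that process it
    // on-device and back that up with a written confidentiality assertion.
    var approvedForConfidentialData: Bool {
        allowedClassifications.contains(.confidential)
            && processingLocation == .onDevice
            && confidentialityAssertion
    }
}

let candidate = AIVendorAssessment(
    vendor: "ExampleGenAI",
    allowedClassifications: [.publicData, .internalData],
    processingLocation: .vendorCloud,
    confidentialityAssertion: false,
    retentionPeriodDays: 30
)
print(candidate.approvedForConfidentialData) // false: cloud processing, no assertion
```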
Do you need everything AI?
With strong guidelines to follow, he believes that leaders will be able to take a more measured approach to adoption. They can focus less on the appearance of “keeping up” and focus more on how the technology can be used responsibly to drive business objectives.
Mitigating AI security concerns
Andy Smeaton, Jamf’s Chief Information Security Officer, agrees. He sees a high risk of a major breach at companies that have adopted AI quickly but not critically. He also believes that people have overemphasized AI in the past few years.
Many have been so impressed with what AI can do that they have perhaps been overly trusting that AI itself cannot be compromised. To avoid costly breaches, 2025 will be the year that IT leaders take a good hard look at better security for the data behind their AI tools, as well as for the tools themselves.
Tracking laws that affect AI
Risk doesn’t only come in the form of a breach. Business leaders will also have to take a close look at laws governing the use of AI, such as privacy law, says Smeaton.
In the US alone, in the absence of federal legislation regulating the use of AI, individual states have stepped in. “The individual state laws will be different from each other,” he says. And more legislation will be introduced in the coming years.
Reading, digesting and then translating these laws into implications for companies has been a massive undertaking. And there are only more laws to come.
“There are around seven or eight more laws going into effect in January of 2025,” he continues.
Smeaton believes that this may be the year that IT companies start to offer tracking solutions for all of this vital information.
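To make that concrete, a tracking tool could be as simple as a structured list of statutes with effective dates and an applicability flag. The Swift sketch below uses made-up placeholder laws and fields purely for illustration.

```swift
import Foundation

// Hypothetical record for tracking AI legislation; the statutes and dates
// below are placeholders, not real laws.
struct AIRegulation {
    let jurisdiction: String
    let name: String
    let effectiveDate: Date
    let appliesToCompany: Bool
}

let formatter = DateFormatter()
formatter.dateFormat = "yyyy-MM-dd"

let tracked = [
    AIRegulation(jurisdiction: "State A", name: "AI Transparency Act",
                 effectiveDate: formatter.date(from: "2025-01-01")!, appliesToCompany: true),
    AIRegulation(jurisdiction: "State B", name: "Automated Decision Privacy Law",
                 effectiveDate: formatter.date(from: "2025-07-01")!, appliesToCompany: false)
]

// Surface only the laws that apply to the business and are already in force.
let inForce = tracked.filter { $0.appliesToCompany && $0.effectiveDate <= Date() }
for law in inForce {
    print("\(law.jurisdiction): \(law.name) is in effect")
}
```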
Avoiding the security pendulum effect
Some companies are already beefing up AI security or banning AI entirely within their organizations, and Covington believes this is an overreaction that can have negative consequences.
“Businesses are reacting with catch-all policies,” explains Covington. These policies restrict usage and control how sensitive information and intellectual property flow outside the organization’s data protection boundary. “For many,” he continues, “this means blanket policies forbidding the use of AI until reviewed by an oversight board.”
While oversight is good, says Covington, it can significantly delay the adoption of useful tools if the process is not streamlined.
“The recent release of Apple Intelligence serves as a good case study on how AI keywords can trigger restrictive business policies,” Covington says, “despite an implementation that keeps private data on-device and includes controls to govern the use of third-party AI models.”
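In practice, the more surgical alternative to a blanket ban is per-feature management. The Swift sketch below assembles a dictionary of the kind of Restrictions-payload keys Apple introduced alongside Apple Intelligence (for example allowWritingTools, allowGenmoji, allowImagePlayground and allowExternalIntelligenceIntegrations); exact key names and OS availability vary by release, so treat these as assumptions to verify against Apple’s current MDM documentation and your management vendor’s payload support.

```swift
import Foundation

// A minimal sketch of restriction keys an MDM could deliver to govern Apple
// Intelligence features on managed devices. Key names reflect the iOS 18-era
// Restrictions payload as an assumption for illustration; verify exact names
// and supported OS versions before relying on them.
let restrictions: [String: Any] = [
    "allowWritingTools": false,                     // system-wide Writing Tools
    "allowGenmoji": false,                          // Genmoji creation
    "allowImagePlayground": false,                  // Image Playground
    "allowExternalIntelligenceIntegrations": false  // third-party model integrations
]

do {
    // Serialize to XML so the dictionary could be embedded in a configuration profile.
    let data = try PropertyListSerialization.data(
        fromPropertyList: restrictions,
        format: .xml,
        options: 0
    )
    print(String(data: data, encoding: .utf8) ?? "")
} catch {
    print("Failed to serialize restrictions: \(error)")
}
```

An organization comfortable with Apple’s on-device processing could leave the first three keys enabled and restrict only the external integration, a far narrower response than forbidding AI outright.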
In 2025, it will be an absolute must for an organization’s leadership to sit down together and set timely AI policies that don’t get in the way of work or innovation.
Renewed emphasis on student online safety
Suraj Mohandas, Vice President, Strategy at Jamf, believes similar issues will play out in schools in 2025.
“We’re seeing a fundamental shift in how technology and mobile devices are being utilized in the classroom,” says Mohandas. Administrators and teachers have moved beyond needing training in technology skills and are now using technology to enhance learning across all subjects. “Now that students have access to these tools at their fingertips,” continues Mohandas, “we’ll see educational institutions push to maximize impact for individual students, which also involves prioritizing their safety.”
Due to the power of individualization that smart adoption of AI tools offers, Mohandas believes that more and more teachers will be using AI in their classrooms. “Adaptive learning platforms will see a major adoption uptick in K-12 institutions,” says Mohandas, “and real-time feedback and assessment tools will be crucial.” These tools will not only adapt to many learning styles and teaching pedagogies, but will also measure the impact that devices and personalized learning programs have on students.
And while AI can be a powerful way to teach, it’s also a powerful way for hackers to step up the speed and specificity of their attacks.
“The attacks are getting more and more targeted,” explains Mohandas, “and the more student-specific data attackers can get their hands on to fuel the specificity of their attacks, the more attacks they’ll launch. And the more successful those attacks will be.”
This is why Mohandas predicts that school districts will see a strong push to install more safety mechanisms on student devices, specifically around data protection, threat prevention and privacy controls.
Educational institutions will be encouraged or required to improve encryption protocols and access controls and to use AI-powered threat detection to fight AI-powered attacks. He also predicts an uptick in schools making use of security solutions that provide real-time alerts to keep a close eye on student data privacy.
A greater focus on Apple security
Is Apple still unbreakable?
For a long, long time, Apple device users could be confident that they didn’t have much to fear from malware or other malicious interference.
And while it is true that Apple’s operating systems are more secure than Windows, there was another factor: supply and demand.
Windows PCs held such an enormous share of the market for decades that most hackers wrote code for the OS that was easiest to break into and had the most data to mine. Breaking into Apple devices was harder work, with fewer targets to exploit.
An increasing target
The sharp increase of Apple in the enterprise, while a boon for schools and businesses, started to attract unwanted attention. Bad actors began to see that they now had access to more data, enough to make it worth their while to write some nasty code to break into Apple operating systems.
Unfortunately, nothing is unbreakable.
The recent development of malware that faked Airplane Mode or Lockdown Mode to lure iPhone users into believing that their devices were protected is one case in point.
While Apple devices remain very difficult to break into, it is still possible for hackers to do so.
2025, predicts Covington, will be the year organizations come to understand that Apple devices not only need securing, but that they need securing with Apple-specific solutions.
Preparing for what’s ahead
Forward-thinking leaders will assume that a security incident isn’t a question of if. It’s a question of when.
The more organizations move forward assuming that there will be an Apple security incident, the smaller those incidents will be, and the faster organizations can remediate them.
It’s a good idea to take a look at security products that are custom-built for Apple operating systems rather than those originally built for other platforms and retrofitted for Apple later.
Security and secure connectivity solutions that are built from the ground up for Apple can not only better protect Apple, they can do it without compromising the user experience.
Keeping a cool eye on AI to focus only on what companies can actually use and secure, following stringent and Apple-focused security protocols, and ensuring that legal has a handle on local AI laws will help organizations weather smooth or choppy seas ahead.