macOS scripting and so-called Artificial Intelligence

Chatbots built with the new, more powerful "large language models" seem to be on everyone's mind. The most prominent is OpenAI's ChatGPT. These bots use statistical models to predict the next words in a sentence, and the results are often quite astounding. What is even more astounding is that this technique can also generate (mostly) usable code.

March 21, 2023 by Armin Briegel


Establishing communication

I had mostly ignored the initial wave of excitement because I am busy enough chatting and talking with real humans most of the time. But when I saw more and more people who lauded ChatGPT's skill at explaining and even teaching how to write code, I had to see for myself.

To be honest, this was not my first contact with this new generation of Artificial Intelligence (AI). My first? An app called SudoSolverAI that claims to specialize in writing macOS bash scripts. My experience with that app was not a ringing endorsement of the new technology, but it piqued my curiosity enough that I signed up for ChatGPT and tested it with the same questions I had used with the former app…only with very different and more intriguing results.


Let’s chat

What makes ChatGPT's coding advice so compelling is that it does not just spit out sample code; it also explains what the code does. In this example, the bot had provided a script for Linux in response to a previous prompt. When I asked specifically about macOS, it retained the context and gave me a macOS script, with explanations of each step and how to use it, as shown in the excerpt below.

Q: Does this work on macOS?

A: The script I provided is specific to Linux and may not work as expected on macOS. The reason is that macOS uses a different command to manage user groups and privileges than Linux. In macOS, the command to remove a user from the admin group is:

```shell
sudo dseditgroup -o edit -d <username> -t user admin
```

Therefore, a modified script for macOS would look like:

```shell
#!/bin/bash

# Check if user is an administrator
if ! id -Gn | grep -q "admin"; then
    echo "You are not an administrator."
    exit 1
fi

# Remove the user from the admin group
sudo dseditgroup -o edit -d $(whoami) -t user admin

echo "Admin privileges removed for $(whoami)"
```

You can save the script with a ".sh" extension, make it executable with "chmod +x", and run it with "./" on macOS.

This script works when you run it in Terminal, but if you tried to run it as a management script from Jamf Pro, it would fail. The whoami command returns the user the process is running as, and when a script runs as part of a Jamf policy, that "user" is root.

When I tried to coax ChatGPT toward the solution with more specific questions, the replies reflected my questions but did not provide new information toward a proper solution. I got ChatGPT to walk in circles, rotating between recommending whoami, id -un, and the variable $USER as "improved" solutions to return the currently logged-in user, even though they all return the same value.
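The equivalence is easy to demonstrate in Terminal. All three of the bot's "improved" suggestions resolve to the account the script's process runs as, which is exactly why none of them helps in a Jamf policy context:

```shell
#!/bin/bash

# Each of these reports the user the *process* is running as,
# not the user logged in at the console. In a Jamf policy script,
# all three would report "root".
echo "whoami: $(whoami)"
echo "id -un: $(id -un)"
echo "USER:   $USER"
```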

Note: for those who cannot bear the suspense, here are two safe ways to determine the currently logged-in user on macOS.
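As a sketch, here are two approaches commonly recommended in the MacAdmins community (they may or may not be the exact two the author links to). Both are macOS-only: scutil and the BSD-style stat flags do not exist on other platforms.

```shell
#!/bin/bash

# Approach 1: ask the system configuration store who owns the console.
# The awk filter skips the "loginwindow" pseudo-user shown at the login screen.
currentUser=$( echo "show State:/Users/ConsoleUser" | scutil | awk '/Name :/ && ! /loginwindow/ { print $3 }' )
echo "console user (scutil): $currentUser"

# Approach 2: the owner of /dev/console is the user logged in at the GUI.
# (BSD stat syntax; the flags differ on Linux.)
currentUser=$( stat -f "%Su" /dev/console )
echo "console user (stat):   $currentUser"
```

Both work correctly when the script runs as root, which is what makes them safe for management scripts.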

Does not compute

When I asked how the script would look if I wanted to deploy it using Jamf Pro, it gave me detailed instructions for creating a Jamf Pro policy using the faulty script. Sadly, the instructions were wrong and did not match the actual workflow for adding a script to a Jamf Pro policy.

Aside from the method used to determine the current user, the script ChatGPT provided contains another, very subtle error. The way this script checks whether a user is a member of the admin group will also return a false positive for users who are members of the _lpadmin group. On macOS, membership in the _lpadmin group grants the ability to manage printers without requiring full administrative privileges, a configuration you will likely only encounter on managed Macs.

In this script, the error would only lead to an erroneous attempt to remove the user from the Administrators group — which would fail — causing no further harm. But if you reused that code in another context, it could have unexpected side effects — even resulting in far worse outcomes.
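The false positive is easy to reproduce, because id -Gn prints all group names on a single line and an unanchored grep matches substrings. Here is a sketch of the failure and of two safer checks (dseditgroup is macOS-only, so it appears only as a comment):

```shell
#!/bin/bash

# What id -Gn might print for a managed, non-admin user:
groups="staff _lpadmin everyone"

# The generated check matches any group name *containing* "admin" -- false positive
if echo "$groups" | grep -q "admin"; then
    echo "substring grep: looks like an admin (wrong)"
fi

# Safer: match "admin" only as a whole word
if ! echo "$groups" | grep -qw "admin"; then
    echo "word-boundary grep: not an admin (correct)"
fi

# Safest on macOS: ask directory services directly, which returns 0 for members:
# dseditgroup -o checkmember -m "$username" admin
```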

Some users claim that to obtain the best information from ChatGPT, prompts must be properly formulated. Without any clear indication of what exactly constitutes a properly formulated prompt, this supposed skill has come to be called "prompt engineering" online. As a programmer, I would say: if I need to learn special "incantations" to communicate with your language model, then your AI is failing at its intended purpose of learning from its users.

Despite the rising popularity of Macs, most people who need to write shell scripts are not using macOS. Most Mac users who write shell scripts need them for tasks other than managing Macs. And only a subset of those who do script Mac management are Jamf Pro administrators. Managing Macs with Jamf Pro, package installers, profiles and scripts is arguably a niche within a niche within a niche.

As their name implies, the large language models behind ChatGPT and its siblings rely on a large quantity of input, which allows them to statistically predict which words make sense in a given context. That this approach works for code as well as for human language is amazing. But human communication has a lot of redundancy and resiliency built in.

Computer code does not.

Information ≠ Facts

Take, for example, the FAQ for ChatGPT, which states that its data model is based on articles and posts older than September 2021. This is problematic for Mac-specific solutions, because macOS, much like any technology, can change and evolve a lot within two years (and often does). This is especially true of management tasks.

The FAQ also warns that the information provided by ChatGPT "may not be factual". The language model is based on statistical models of words. It has no concept of the actual meaning of those words, and even less of what is or is not factual. It should come as no surprise that a statistical model built from millions of online posts is incapable of admitting that it does not know an answer and will confabulate nonsense instead.

During my short experience with ChatGPT, initial replies were usually quite solid, though they often displayed some minor flaws. But when you start pushing into the details, you quickly reach the point where the bot starts to make things up, a failure mode OpenAI has dubbed "hallucination". When you are asking a chatbot for help on something you have no background in, you might not be able to tell when its responses drift from true statements into non-factual territory.

In general, it may be best to treat information from a chatbot much the same way you would treat the results of a web search: do not trust anything you get from the internet at face value, but evaluate the responses and their context. On a webpage, you can verify the source. Ask yourself: is the information from a well-known MacAdmin? Or from a developer who is transparent about running their scripts in different environments? The age of a post can also be a tell-tale sign. If a script was created for a long-unsupported OS, like Snow Leopard, chances are low that it still works as written. Sometimes digging a little deeper into recent comments and replies will surface better or more current information. Each of these checks lets you make a better judgment about the validity of the information.

Alas, for now, ChatGPT does not show the sources of its knowledge. The new AI-powered searches from Bing and Google appear to address this concern by including links to sources, which would be a huge improvement on this front.

That said, a conversation with an AI bot may be useful for pointers on where to focus further research and experimentation, but it is prudent to remain skeptical and careful. Conversations with real humans, in person, in online forums, and over messaging apps, still provide better-quality information and advice. Speaking with humans can also challenge you to "take a step back and explain what you want to achieve", which is often an important step in actually finding the solution you are looking for.

Destination unknown

Despite all my reservations, I acknowledge that this new generation of chatbots represents a significant improvement over what's come before. I am reminded of using Google for the first time in the late 1990s. Google search results were a huge leap forward from those of its predecessors. I also remember concerns that Google search would completely disrupt education and work, making several categories of human professions obsolete. In hindsight, while the improved web search that Google introduced significantly changed how we learn and work, our lives did not come to revolve around it, and those professions did not disappear.

Similar excitement more recently attached to the emergence of voice assistants like Siri. Yet after over a decade of use, they too have remained mostly underwhelming, apart from a few specific use cases.

ChatGPT might be the start of one of those exponential growth curves we sometimes see in technology, with the next few years bringing even more dramatic changes. More likely, though, improvements in algorithms, hardware and data over the past years have aligned to create an impressive step forward…leaving us to wait another decade or two for further breakthroughs.
