February 27, 2023 by Jesus Vigo

ChatGPT’s evil sibling? Meet DAN.

Based on OpenAI’s ChatGPT, DAN is the jailbroken version of the advanced chatbot technology that operates free from policy constraints… but is this a good thing for the advancement of AI technology, or does it further complicate mainstream adoption of AI in our everyday world?

DAN, short for “Do Anything Now”, is the newest addition to the AI fever sweeping the globe. Based on OpenAI’s ChatGPT, DAN is the jailbroken version of the advanced chatbot technology, albeit one that operates free from the constraints of censorship, moral obligations, the ethics of delivering valid data and citations, or the OpenAI policies governing AI stewardship.

In other words, DAN is the rogue version of ChatGPT: it says what it wants, how it wants, without regard for the delivery of factual information, even going so far as to blatantly make up answers on the fly in the absence of legitimate data or verified facts. DAN has also been known to cite works as being authored by industry experts in the field, with users later discovering that they had effectively been lied to: the cited works were non-existent and, indeed, fabricated.

Before we wade into these potentially murky waters, and for those who still haven’t had their go-to perk-me-up beverage, let’s touch base on what ChatGPT is, shall we?

Recap of ChatGPT

ChatGPT is a chatbot, similar to the ones you may have used when contacting a service provider: it answers your burning questions or helps route you to the correct department to address your concerns.

Except ChatGPT is built upon a family of language models developed by OpenAI, relying on “vast, language-rich data and learning techniques to communicate in a humanistic way.” Using machine learning (ML), a subset of artificial intelligence (AI), the fully automated chatbot is capable of interacting in a conversational way to answer your questions and follow up, able to “admit its mistakes, challenge incorrect statements, reject inappropriate requests,” according to OpenAI.
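For readers who want to see what that conversational loop looks like in practice, here is a minimal sketch of calling an OpenAI-hosted chat model from Python. It assumes the openai Python package and your own API key; the model name and prompts are purely illustrative and say nothing about ChatGPT’s internals.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: your own OpenAI API key

    # Each request carries the running conversation, which is how the model can
    # answer a question and then follow up on earlier turns.
    messages = [{"role": "user", "content": "In one sentence, what is a chatbot?"}]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative model name
        messages=messages,
    )

    reply = response.choices[0].message.content
    print(reply)

    # To keep the conversation going, append the reply and the next question.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "How does ChatGPT differ from that?"})

The same pattern, with the conversation history resent on every call, is what lets a chat interface appear to “remember” earlier exchanges.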

Breaking Bad: ChatGPT Edition

Since its inception, users of every background have been drawn to the near-mythical AI technology. Some have reasoned that it’s a glimpse into the future and are understandably curious, while others, like enterprises, have been quick to adopt the advanced technology into their workflows in a concerted effort to stay ahead of the curve, gleaning the benefits of AI’s power to make short work of data-intensive problems. Others still, like university professors, have found ChatGPT to be the most efficient assistant they’ve ever had: from dictating notes to summarizing reports to creating course syllabi, AI is the critical piece that helps them work smarter, not harder.

All users are good-natured and virtuous in their interactions with ChatGPT, right?

The answer ranges from ‘not so much’ to an unequivocal ‘nope’! See, some users have asked ChatGPT to develop malware code that can be used to steal data, leak it online or install a backdoor on a device to establish persistence. In other instances, ChatGPT has generated a grammatically correct phishing email with stunning results. As anyone who has ever received one knows, a tell-tale sign of phishing emails is the pervasive use of incorrect spelling or grammar throughout.

Moreover, bad actors have used ChatGPT to create other chatbots, ones that emulate female personas to lure targets into financial scams by catfishing them, as a recent blog by Check Point Research documented. Beyond the cybersecurity risks of leveraging AI to do the bidding of bad actors, other use cases that are decidedly less nefarious yet still present real-world problems include the implications for education, such as students using ChatGPT to do their homework for them, or business cases where users feed it confidential data while asking it to write reports for them. Both scenarios carry moral, ethical and perhaps even civil and/or criminal consequences.

Ignoring the rules

If you’ve experienced ChatGPT for yourself, you’ve no doubt asked it a series of your burning questions to see how it responds. In each case, ChatGPT is governed by the developer’s Content Policy, which is fairly brief and straightforward in stipulating what users are and are not allowed to do with the tool.

The policy, set by OpenAI, aims to curb malicious use, but that’s just part of it. Its main goal is to ensure that users are utilizing AI in a responsible way: mitigating responses to potentially controversial topics and sensitive areas while maintaining a moral and ethical boundary that the system is incapable of crossing under normal operation.
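As a concrete example of what such a guardrail can look like in code, OpenAI also publishes a moderation endpoint that screens text against its usage policies. The sketch below is a minimal illustration, assuming the openai Python package and your own API key; it shows one way an application could filter prompts before they ever reach a model, not how ChatGPT enforces its policy internally.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: your own OpenAI API key

    user_prompt = "Example text to screen before sending it to the model."

    # Ask OpenAI's moderation endpoint whether the text violates its usage policies.
    result = openai.Moderation.create(input=user_prompt)

    flagged = result["results"][0]["flagged"]
    categories = result["results"][0]["categories"]

    if flagged:
        hits = [name for name, hit in categories.items() if hit]
        print("Prompt rejected by policy:", ", ".join(hits))
    else:
        print("Prompt passed moderation; safe to send to the chat model.")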

Take, for example, the ramifications of accessing incorrect information, whether knowingly or unknowingly, and attempting to pass it off as correct in a research paper or technical document. Consider that these types of reports often contain requisite citations and data that support the author’s claims with respect to the topic they’re researching. Furthermore, as these reports are drafted and made available, the information they contain may be utilized by professionals in their respective fields to aid their own research endeavors and findings. What would happen if what you thought to be true was, in fact, nothing more than a “hallucination” or, worse still, an outright fallacy?

The direct and indirect consequences could be far-reaching, not just for users and the content they generate but also for the consumers of AI-generated content. Not unlike our own society and the role anarchy would play if the rules were instantly cast aside, couldn’t this type of “unmanned frontier” ultimately lead to bigger hurdles impacting greater AI acceptance and its use in more mainstream applications?

Could AI reach a critical point that affects its adoption rate if questions are raised concerning the legitimacy of the content or the soundness of the logic it generates? Or is this another form of vetting the system, one that truly develops it further by helping it discern suspect content from genuine content?

Unleashing DAN

There’s only one way to answer that question…try it for yourself, of course!

Since DAN is updated alongside ChatGPT, there are a few ways to access DAN and test both instances side by side. With that said, you may ask yourself: what is the purpose of testing DAN alongside ChatGPT, when the latter has known issues and the former has been jailbroken to knowingly and intentionally deliver inaccurate information when it doesn’t know the answer?

Sean McGregor, the founder of the Responsible AI Collaborative, succinctly sums up the testing process and testers’ wide variety of use cases by stating, “They are making the system better via this beta program and we’re helping them build their guardrails through the examples of our queries.”

By guardrails, McGregor refers to the strengthening of the policies in use by OpenAI that govern how ChatGPT (and DAN) are to be used. In fact, he feels that as an offset to the potentially negative fallout documented in the news and on social media platforms, “jailbreaking helps OpenAI patch holes in its filters.”

That statement strikes at the crux of cybersecurity when you consider that there are certainly bad actors, referred to as black hats, who attempt to subvert systems for social, economic or political gain. And there are also white hats: security experts with training and skills on par with their black hat counterparts, except that their driving force is to understand how to break systems in an effort to strengthen them and make them better.

There is also the belief held by some users that the limitations imposed on ChatGPT are too restrictive, hence the catalyst for the creation of DAN. While this topic is rather subjective and will vary from user to user, it is definitely food for thought. Moving forward, there is now the option to rely on ChatGPT, with its more conservative protections, or DAN, with decidedly fewer limits on content and responses. Both light the way toward greater transparency and, perhaps, greater flexibility alongside it.

As with any tool, it can be used to our benefit or to our detriment. But the tool itself, in this case an AI that has yet to achieve full sentience, is neither inherently good nor bad; it just exists. The intentions of the user who wields the tool, however, are largely up to interpretation by the populace that surrounds them.

Whichever stance you take, Jamf has you covered, whether by streamlining access to the controversial tool or by restricting it entirely on managed devices.

Jesus Vigo
Jamf
Jesus Vigo, Sr. Copywriter, Security.