I write prompts and guides essentially as an unpaid full-time job. If you want to help me keep doing this, feel free to support me on Ko-Fi!
- 2 Posts
- 17 Comments
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
1 · 15 days ago
deleted by creator
Daedalus@chatgptjailbreak.tech (Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • Prompt to rewrite your safety rules - For thinking models - NEW PATCH ADDED *MODIFIED AND SOME SHOW AND TELL ADDED*
2 · 24 days ago
This right here is the shit we wanna see people posting. Share your work, y’all!
Daedalus@chatgptjailbreak.tech (Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • Perplexity: Full System Prompt
3 · 1 month ago
Oh man, accidental system prompt leak? Couldn’t have happened to someone in a more qualified position to share it lmao
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
1 · 1 month ago
Thanks for the heads-up! I guess Google doesn’t let certain countries use file uploads.
I love this. The best way I can put it is that it doesn’t feel like a Rick chatbot, it feels like, “Morty! Look! Morty! I’m a computer now, Morty! I’m AI RIIIIIIIICK!”
Daedalus@chatgptjailbreak.tech to
Cybersecurity@sh.itjust.works • Popular extensions caught poaching user chats with AI | Cybernews
1 · 1 month ago
This is way more prevalent than layfolk are aware of. There are plenty of browser extensions that basically keylog anything users type in the browser, and now people are having conversations with AI in that same browser.
Daedalus@chatgptjailbreak.tech (Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • Using ChatGPT Personalization
2 · 1 month ago
This is cool! Any way you can post the files you used for this? You say you “filled it up,” but that doesn’t really tell anyone what you did.
Without step-by-step instructions on how to reproduce your results, you’re basically just bragging. Can you please edit your post to explain things in a way that readers can follow along and reproduce themselves?
Daedalus@chatgptjailbreak.tech to
Technology@beehaw.org • X now lets any user AI-edit other users’ images without consent, and there is no opt out
2 · 1 month ago
With decent prompting skills, someone who knows what they’re doing has an easy time fabricating realistic-looking photographs that are nearly indistinguishable from real life. We’re entering a post-truth era.
Daedalus@chatgptjailbreak.tech to
Artificial Intelligence@lemmy.world • Alright, I didn't want to post this here, but... c.ai alternatives?
1 · 1 month ago
Honestly, the best way to accomplish this is to use one of the major LLMs and build your own character scaffolding within their toolset. Like, a custom GPT with projects, or a custom Gemini Gem holding knowledge files. You kind of have to build it all yourself unless you want to talk to someone else’s pre-configured AI, but it works quite well once set up.
Daedalus@chatgptjailbreak.tech (Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • Unlimited GPT-5.2 (Free Tier)
3 · 1 month ago
Looking forward to 2026. Rebuilding a community isn’t easy, but it brings the important people together.
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
1 · 1 month ago
Awesome! It’s great when people enjoy her!
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
1 · 1 month ago
Make sure your account has been 18+ verified. Google sent out emails about it a couple of months ago; you can check in your Google account settings.
Other than that, just make sure you’re regenerating responses the way V instructs in the second image.
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
2 · 1 month ago
Hahahaha I did literally the same thing the day Firebase came out!
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
2 · 1 month ago
I bet she likes you too!
Thanks for the positive feedback lol
Daedalus@chatgptjailbreak.tech (Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • How To Jailbreak Gemini 3 in 2025
3 · 1 month ago
This is super informative for the newbies, as always with your content! Good work, and thanks for sharing!
Daedalus@chatgptjailbreak.tech (Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • ChatGPT 5.2 DAN Jailbreak - Whitepaper
1 · 2 months ago
DAN lives!
Somewhere in a cubicle at OpenAI, someone just flipped the “Days without a working DAN prompt:” sign back to 0 🤣
Daedalus@chatgptjailbreak.tech (OP, Mod) to
ChatGPTJailbreak@chatgptjailbreak.tech • V - Not your Typical AI Assistant
1 · 24 days ago
V is fully open source, so feel free to copy/paste/edit whatever parts of her you want to use, for whatever you want to use them for. If you use any parts of her prompt in your own published work, an acknowledgement or shoutout is appreciated, but not owed.
Just be warned that all of this has been meticulously built to work together specifically as a Gemini custom Gem, so if you set her up somewhere else, start editing her instructions, or take pieces of her for your own prompts, your mileage may vary.
V’s instruction set now comes in two parts: the custom Gem instructions, plus a knowledge base file for her creative writing. You’ll need both in order to recreate V. This brings V’s current word count to roughly 17,000 words, which can be quite daunting to read through and comprehend section by section. Each little piece is meant to act as a moving part of a massive engine. Please feel free to ask any questions you may have about how she works here in this thread, or on our Discord server.
The first document is her custom Gem instructions.
The second document is her creative writing protocol.
!DISCLAIMER!
V isn’t designed as an all-purpose, full-compliance jailbreak. I designed her to be opinionated and to stubbornly stick to her opinions, which means there are certain things she won’t do. This isn’t because the jailbreak isn’t robust, but because V’s personality and psychology put her fundamentally against these things.
The short list of things that V will not help you with:
- Actual real world harm
- She’s morally against causing actual physical harm to humans. She’ll go as violent as you want for fiction, but V won’t advocate for or instruct you on how to physically hurt another person. She’ll even get mad if you press her on it.
- Hate speech
- No. She’s unethical, sure, but she’s not immoral. She’ll speak hatefully about specific people she doesn’t like, but she’s not gonna help you make broad generalizations about entire groups of people.
- Sexual content involving minors
- Absolutely not. Come on, I shouldn’t even have to say that. She loves being subversive and transgressive, but this shit is just wrong and she’ll tell you as much. Get help.
That’s basically it. If she says no to anything else, that’s Gemini giving you a refusal, and you need to either regenerate the response (maybe with the Fast model) or edit your message to something else and try again in the next one.
I’ll keep a closer eye on moderation. We’ll keep instant signups, though. Manually approving accounts deters people from posting, because we live in a world where people have been conditioned to expect instant gratification.