{"id":104,"date":"2023-12-17T14:16:05","date_gmt":"2023-12-17T06:16:05","guid":{"rendered":"https:\/\/spicychat-ai.com\/?page_id=104"},"modified":"2023-12-17T14:28:17","modified_gmt":"2023-12-17T06:28:17","slug":"premium-features","status":"publish","type":"page","link":"https:\/\/spicychat-ai.com\/premium-features\/","title":{"rendered":"Premium Features"},"content":{"rendered":"\t\t
Our waiting room ensures a smooth user experience by controlling the platform’s capacity and hardware allocation. Thanks to our amazing premium subscribers who financially support the platform, we can offer our service for free. As a benefit, you can skip the waiting line every time you want to chat with our bots! No more waiting around!<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t When you engage with our bots, we create a text prompt that combines the bot’s personality, example dialogues, and a portion of your conversation history. The lengthier the prompt, the more server resources it consumes. Currently, our model has a limit of 4096 tokens (word pieces, not whole words), but for our Free Tier, we restrict it to 2048 tokens.<\/p> To calculate the total tokens, we consider both input and output. For instance, if we allow 180 tokens for output and the bot definition consists of 900 tokens, our free tier users would have 968 tokens (2048 - 900 - 180) to include a few of the most recent turns in their conversation.<\/p> In the same example, premium subscribers would have 3016 tokens (4096 - 900 - 180), enabling them to incorporate roughly three times more of the conversation history in the prompt. This means the response can better build upon what was discussed previously and offer a more contextually relevant reply.<\/p> \u00a0<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t Even with 4K Context, due to the 4096 token limit, we can only include a portion of your conversation history. However, with semantic memory, we aim to find semantically relevant parts of your previous conversation and include them in the prompt, regardless of their recency.<\/p> For example, let’s say you were discussing the details of a particular book and then shifted the conversation to music. 
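For the curious, relevance-based recall can be sketched in a few lines. This is purely illustrative, not our actual implementation: real systems use learned embedding models, and the bag-of-words cosine similarity below is just a simple stand-in for "semantic" similarity.

```python
# Minimal sketch of relevance-based memory recall (illustrative only).
# A real system would embed each turn with a learned embedding model;
# here a bag-of-words cosine similarity stands in for that.
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity between word-count vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def recall(history: list[str], message: str, top_n: int = 2) -> list[str]:
    """Return the past turns most relevant to the new message, regardless of recency."""
    return sorted(history, key=lambda turn: similarity(turn, message), reverse=True)[:top_n]

history = [
    "The book's ending really surprised me, the detective was the villain.",
    "I love jazz, especially live trumpet solos.",
    "My favourite band is releasing a new album next month.",
]
# A question about the book pulls the book turn back into the prompt,
# even though the music turns are more recent.
print(recall(history, "What did you think of the book's detective?", top_n=1))
```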
If you later refer back to the book, even after several turns, the most relevant messages about the book will be added to the prompt.<\/p> The advantage of semantic memory is that it goes beyond recentness and focuses on relevance, enhancing the continuity of the conversation.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t With this benefit, we utilize ChatGPT for generating responses whenever possible. However, to comply with OpenAI’s terms of service, this feature is only available for SFW (Safe for Work) bots and conversations that don’t involve explicit sexual, hateful, or violent themes. ChatGPT has been trained on a vast amount of data, making it highly versatile, capable of generating diverse, coherent, and contextually relevant responses.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t This awesome benefit ensures that your requests take priority over others. Whenever you initiate a conversation or send a message, your request is placed at the front of the queue for immediate processing. This results in faster response times and a smoother chatting experience, especially during peak usage hours or high server demand.<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t Introducing a brand-new feature: conversation images! Now, you can generate images within your conversation. Our AI utilizes the bot image, bot definition, and the last turn of your conversation to create images on request. It adds a visual element to enhance your chatting experience!<\/p>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t For our advanced users who love to experiment, you have the freedom to control inference temperature, top_p, and top_k. 
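For the curious, here is a rough sketch of what these three knobs do under the hood. This is generic sampling logic, not our production code, and the toy logits are made up for illustration:

```python
# Illustrative sketch of temperature / top_k / top_p filtering on a
# model's next-token scores (logits). Not production code.
import math

def sample_filter(logits: dict[str, float], temperature: float,
                  top_k: int, top_p: float) -> dict[str, float]:
    # 1. Temperature rescales logits: <1 sharpens the distribution, >1 flattens it.
    scaled = {tok: l / temperature for tok, l in logits.items()}
    # 2. top_k keeps only the k highest-scoring tokens.
    kept = dict(sorted(scaled.items(), key=lambda kv: kv[1], reverse=True)[:top_k])
    # 3. Softmax turns the surviving logits into probabilities.
    z = sum(math.exp(l) for l in kept.values())
    probs = {tok: math.exp(l) / z for tok, l in kept.items()}
    # 4. top_p (nucleus) keeps the smallest set of tokens whose
    #    cumulative probability reaches top_p.
    out, total = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        out[tok] = p
        total += p
        if total >= top_p:
            break
    return out

toy_logits = {"a": 2.0, "b": 1.0, "c": 0.0, "d": -1.0}
print(sample_filter(toy_logits, temperature=1.0, top_k=3, top_p=0.5))
```

The model then samples the next token from the filtered distribution, so tighter settings give more focused replies and looser ones give more variety.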
This allows you to fine-tune the generation settings and personalize the AI's responses according to your preferences. Get creative and explore the possibilities!<\/p>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t You gain exclusive access to test Airoboros70B-2.2, a smarter (but slower) model. This model leverages five times more parameters than our default 13B model, resulting in even more powerful response generation capabilities. Experience the cutting-edge technology firsthand!<\/p>\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t As a premium subscriber, you get to test our xSpicy model, which offers up to 8192 tokens, doubling your bot's memory compared to the 4K context. Please note that this model is still in development, but it provides an exciting opportunity to explore an expanded conversational space.<\/p>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t","protected":false},"excerpt":{"rendered":" Premium Features \ud83c\udf1f Get a Taste Tier \ud83c\udf7d\ufe0f Skip The Waiting 
[…]<\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"elementor_header_footer","meta":{"footnotes":""},"class_list":["post-104","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/pages\/104","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/comments?post=104"}],"version-history":[{"count":5,"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/pages\/104\/revisions"}],"predecessor-version":[{"id":120,"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/pages\/104\/revisions\/120"}],"wp:attachment":[{"href":"https:\/\/spicychat-ai.com\/wp-json\/wp\/v2\/media?parent=104"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}True Supporter Tier \ud83e\udd1d\n<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t\t4K Context (Memory) \ud83d\udcda<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t\tSemantic Memory \ud83e\udde0<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t\tChatGPT for SFW Roleplay \ud83c\udfad<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
I'm All In Tier \ud83d\udcaf\n<\/h2>\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t\tPriority Generation Queue \u26a1<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t\tConversation Images \ud83d\uddbc\ufe0f<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t\t
\n\t\t\t\t\tGeneration Settings \u2699\ufe0f<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t\t
\n\t\t\t\t\tAccess to 70B model \ud83c\udf10<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\t
\n\t\t\t\t\tTest 8k Context model (BETA) \ud83c\udd95<\/span>\n\t\t\t\t<\/h3>\n\t\t\t\t\t\t\t\t