OpenAI Announces New Assistants API Features, Tools for Enterprise Users

Source: Gadgets 360

OpenAI announced new tools and features for its enterprise-grade Assistants API on Tuesday. The AI firm makes its API suite available to large businesses that require customised solutions for their workloads. Highlighting some of its major clients such as Klarna, Morgan Stanley, Oscar, Salesforce, and Wix, the company said it rolled out the new features to better support their needs. Two new tools and six new Assistants API features have been introduced. Alongside these, OpenAI has also provided options for businesses to use its AI products in a more cost-effective manner.

Making the announcement via a blog post, OpenAI said, “We’re deepening our support for enterprises with new features that are useful for both large businesses and any developers who are scaling quickly on our platform.” The first new tool, called Private Link, is designed to offer enhanced security for businesses that share their organisational data with OpenAI’s cloud. Private Link provides a direct connection between Microsoft Azure and OpenAI, minimising exposure to the open internet. Additionally, the AI firm has released native Multi-Factor Authentication (MFA) for additional security and to ensure compliance with access control requirements.

Another tool, designed for better administrative control, has also been introduced. Dubbed Projects, it allows administrators to scope roles and API keys to specific projects, restrict which models are made available, and set usage- and rate-based limits. Further, project owners will also be able to create service account API keys, which grant access to a project without being tied to an individual user.
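Projects themselves are managed from the dashboard, but a project-scoped key is used like any other API key. As a rough sketch, assuming placeholder credentials and a recent version of OpenAI's official Python SDK (which accepts a project identifier on the client), a request pinned to a specific project might look like this:

```python
from openai import OpenAI

# Placeholder credentials: a service account API key and project ID
# created by a project owner in the dashboard, not real values.
client = OpenAI(
    api_key="sk-proj-...",      # key scoped to a single project
    project="proj_example123",  # usage and rate limits apply per project
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from a project-scoped key"}],
)
print(response.choices[0].message.content)
```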

Coming to the Assistants API, OpenAI has introduced multiple new features. The file_search retrieval feature can now ingest up to 10,000 files per assistant, compared with 20 previously. The company claims it supports parallel queries through multi-threaded searches and features improved reranking and query rewriting. Streaming support has also been added. Further, new technical components have been included, such as vector store objects and a tool_choice parameter for forcing the use of a specific tool. The API will also support fine-tuned GPT-3.5 Turbo models.
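To illustrate how these pieces fit together, the sketch below uses the Python SDK's beta Assistants interface; the file name, instructions, and question are placeholders. It creates a vector store, attaches it to an assistant via the file_search tool, and streams a run:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Create a vector store and upload a document to it (placeholder file).
vector_store = client.beta.vector_stores.create(name="support-docs")
with open("handbook.pdf", "rb") as f:
    client.beta.vector_stores.file_batches.upload_and_poll(
        vector_store_id=vector_store.id, files=[f]
    )

# Create an assistant that can search those files via file_search.
assistant = client.beta.assistants.create(
    model="gpt-3.5-turbo",  # fine-tuned GPT-3.5 Turbo models are also supported
    instructions="Answer questions using the attached documents.",
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)

# Ask a question on a new thread and stream the answer as it is generated.
thread = client.beta.threads.create(
    messages=[{"role": "user", "content": "What is the refund policy?"}]
)
with client.beta.threads.runs.stream(
    thread_id=thread.id, assistant_id=assistant.id
) as stream:
    for text in stream.text_deltas:
        print(text, end="", flush=True)
```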

For cost management, the AI firm has introduced two new methods. First, businesses with a sustained level of tokens-per-minute usage can now request access to provisioned throughput to get a discount of up to 50 percent. For non-urgent workloads, clients can opt for the new Batch API, whose requests are priced 50 percent lower than shared prices. Results are still returned within 24 hours, and the Batch API offers higher rate limits.
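For the Batch API specifically, the flow is to upload a JSONL file of requests and create a batch with a 24-hour completion window. A minimal sketch with the Python SDK, assuming a prepared requests.jsonl file, might look like this:

```python
import time
from openai import OpenAI

client = OpenAI()

# Each line of requests.jsonl is one request, for example:
# {"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions",
#  "body": {"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Hi"}]}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")

batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",  # results come back within 24 hours at reduced prices
)

# Poll until the batch finishes, then download the output file.
while (batch := client.batches.retrieve(batch.id)).status not in (
    "completed", "failed", "expired", "cancelled"
):
    time.sleep(60)
if batch.status == "completed":
    print(client.files.content(batch.output_file_id).text)
```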


