AWS is committed to bringing you the most advanced foundation models (FMs) in the field and to continuously expanding our selection to include pioneering models from leading AI providers, so you always have access to the latest advances to move your business forward.
Today, I am pleased to announce the availability of two new OpenAI models with open weights in Amazon Bedrock and Amazon SageMaker JumpStart: gpt-oss-120b and gpt-oss-20b. These models are designed for text generation and reasoning tasks, offering new options for developers and organizations to build AI applications with full control over their infrastructure and data.
These open weight models excel at coding, scientific analysis, and mathematical reasoning, with performance comparable to leading alternatives. Both models support a 128K context window and provide adjustable reasoning levels (low/medium/high) to match your specific use case requirements. The models support external tools to extend their capabilities and can be used in agentic workflows, for example with frameworks such as Strands Agents.
With Amazon Bedrock and Amazon SageMaker JumpStart, AWS gives you the freedom to innovate with access to hundreds of FMs from leading AI companies, including these OpenAI open weight models. With our comprehensive model selection, you can always match your AI workloads with the right model.
Through Amazon Bedrock, you can seamlessly experiment with different models, mix and match capabilities, and switch between providers without rewriting code, turning model choice into a strategic advantage that lets you continuously evolve your AI strategy as new innovations appear. These new models are available in Amazon Bedrock through an OpenAI-compatible endpoint. You can point the OpenAI SDK to this endpoint or use the Amazon Bedrock InvokeModel and Converse APIs.
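To illustrate the Converse API option, here is a minimal sketch using boto3 with the gpt-oss-120b model ID used later in this post; the message and response shapes follow the standard Converse API format.

import boto3

# Amazon Bedrock Runtime client in the US West (Oregon) Region
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

response = bedrock_runtime.converse(
    modelId="openai.gpt-oss-120b-1:0",
    messages=[
        {"role": "user", "content": [{"text": "Tell me the square root of 42 ^ 3"}]}
    ],
)

# Print the text blocks of the assistant reply
for block in response["output"]["message"]["content"]:
    if "text" in block:
        print(block["text"])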
With SageMaker JumpStart, you can quickly evaluate, compare, and customize models for your use case. You can then deploy the original or the customized model in production using the SageMaker AI console or the SageMaker Python SDK.
Let’s see how this works in practice.
Using the OpenAI open weight models in Amazon Bedrock
In the Amazon Bedrock console, I choose Model access from the Configure and learn section of the navigation pane. Then I find the two OpenAI models on the page and request access.
Now that I have access, I use the Chat/Text playground to test and evaluate the models. I select OpenAI as the category and then the gpt-oss-120b model.
With this model selected, I run the following sample prompt:
A family has $5,000 saved for a vacation next year. They can put the money in a savings account that earns 2% interest per year, or in a certificate of deposit that earns 4% interest per year but can’t be accessed until the vacation. If they need $1,000 available for emergency expenses during the year, how should they split their money between the two options to maximize their vacation fund?
This prompt generates an output that includes the chain of thought used to produce the result.
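To sanity-check the expected answer: keeping the $1,000 emergency reserve in the accessible savings account and putting the remaining $4,000 in the certificate of deposit yields $1,000 × 2% + $4,000 × 4% = $20 + $160 = $180 in interest, which is the optimal split.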
I can use these models with the OpenAI SDK by configuring the API endpoint (base URL) and using an Amazon Bedrock API key for authentication. For example, I set these environment variables to use the US West (Oregon) AWS Region (us-west-2) and my Amazon Bedrock API key:
export OPENAI_API_KEY=""
export OPENAI_BASE_URL="https://bedrock-runtime.us-west-2.amazonaws.com/openai/v1"
Now I invoke the model using the OpenAI Python SDK:
from openai import OpenAI

# The client reads OPENAI_API_KEY and OPENAI_BASE_URL from the environment variables set above
client = OpenAI()

response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me the square root of 42 ^ 3"}],
    model="openai.gpt-oss-120b-1:0",
    stream=False
)

# Print the model reply
print(response.choices[0].message.content)
I save the code in a file (test-openai.py), install the dependency, and run the script locally:
pip install openai
python test-openai.py
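The adjustable reasoning levels mentioned earlier (low/medium/high) can be requested on a per-call basis. Here is a minimal sketch, assuming the OpenAI-compatible endpoint in Amazon Bedrock accepts the Chat Completions reasoning_effort parameter for these models:

from openai import OpenAI

client = OpenAI()

# Assumption: reasoning_effort selects the model's low/medium/high reasoning level
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me the square root of 42 ^ 3"}],
    model="openai.gpt-oss-120b-1:0",
    reasoning_effort="high",
)

print(response.choices[0].message.content)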
To build an AI agent, I can choose any framework that supports the Amazon Bedrock API or the OpenAI API. Here is an example using Strands Agents:
from strands import Agent
from strands.models import BedrockModel

# Configure the Amazon Bedrock model used by the agent
bedrock_model = BedrockModel(
    model_id="openai.gpt-oss-120b-1:0",
    region_name="us-west-2",
    streaming=False
)

agent = Agent(model=bedrock_model)

agent("Tell me the square root of 42 ^ 3")
I save the code in a file (test-strands.py), install the dependency, and run the agent locally:
pip install strands-agents
python test-strands.py
When I am satisfied with the agent, I can deploy it to production using Amazon Bedrock AgentCore capabilities, including a fully managed serverless runtime and memory and identity management.
Using the OpenAI open weight models in Amazon SageMaker JumpStart
In Amazon SageMaker AI, you can use the OpenAI open weight models in SageMaker Studio. The first time, I need to set up a SageMaker domain. There are options to set it up for a single user (simple) or for an organization. For these tests, I used the single-user setup.
IN Sagemaker Jumpstart View the model, I have access to a detailed description GPT-OS-150B gold GPT-OS-20B Model.
I choose the gpt-oss-20b model and then choose to deploy it. In the following steps, I select the instance type and the initial number of instances. After a few minutes, the deployment creates an endpoint, which I can then invoke in SageMaker Studio or using any AWS SDK.
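As a sketch of invoking the endpoint from code, here is how it could look with boto3 and the SageMaker runtime. The endpoint name is a placeholder for the one created during deployment, and the request payload format depends on the deployed container (the messages format shown here is an assumption).

import json
import boto3

# SageMaker runtime client in one of the supported Regions
runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

# "my-gpt-oss-20b-endpoint" is a placeholder for the endpoint created during deployment
payload = {"messages": [{"role": "user", "content": "Tell me the square root of 42 ^ 3"}]}

response = runtime.invoke_endpoint(
    EndpointName="my-gpt-oss-20b-endpoint",
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(response["Body"].read().decode("utf-8"))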
To learn more, visit GPT OSS models from OpenAI now available on Amazon SageMaker JumpStart on the AWS Artificial Intelligence blog.
Things to know
The new OpenAI open weight models are available today in Amazon Bedrock in the US West (Oregon) AWS Region, while Amazon SageMaker JumpStart offers these models in the US East (Ohio, N. Virginia) and Asia Pacific (Mumbai, Tokyo) Regions.
Each model comes with full chain-of-thought output, giving you detailed visibility into the model’s reasoning process. This transparency is particularly valuable for applications that require a high level of interpretability and verification. Because the weights are open, you have the freedom to modify, customize, and adapt the models to your specific needs. This flexibility lets you fine-tune the models for your unique use cases, integrate them into your existing workflows, and even build on them to create new, specialized models tailored to your industry or application.
Safety and security are built into the core of these models, with comprehensive evaluation processes and established safeguards. The models maintain compatibility with the standard GPT-4 tokenizer.
You can use both models in your preferred environment, whether through the serverless experience of Amazon Bedrock or the extensive machine learning (ML) development capabilities of SageMaker JumpStart. For the cost of using these models and services, see the Amazon Bedrock and Amazon SageMaker AI pricing pages.
To learn more, check out the model parameters and the Chat Completions API in the Amazon Bedrock documentation.
Get started today with the OpenAI open weight models on AWS in the Amazon Bedrock console or the Amazon SageMaker AI console.
– Danilo