Detailed Guide to Deploying the DeepSeek-R1 Model on Amazon Bedrock
DeepSeek-R1 is an advanced large language model (LLM) developed by DeepSeek AI, now available on Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. This model supports complex tasks such as problem-solving and programming, enabling businesses to seamlessly integrate AI into their applications. This guide provides a step-by-step walkthrough for deploying DeepSeek-R1 on Amazon Bedrock.
Log in to the AWS Management Console: Use your AWS account credentials to log in.
Navigate to Amazon Bedrock: In the search bar, type “Bedrock” and select Amazon Bedrock from the results.
Access the Model Catalog: In the left navigation pane, select Model catalog under Foundation models.
Filter and Select DeepSeek-R1: Use the provider filter to choose DeepSeek, then select the DeepSeek-R1 model.
Initiate Deployment: On the DeepSeek-R1 detail page, click Deploy.
Configure Deployment Details:
Endpoint name: Enter a name for the endpoint (1-50 alphanumeric characters).
Number of instances: Specify the number of instances (1-100).
Instance type: Choose an appropriate instance type. For optimal performance with DeepSeek-R1, it’s recommended to use GPU-based instances like ml.g6.2xlarge.
Note: Ensure you have sufficient quotas for the selected instance type.
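The two form constraints above can be validated up front. A small sketch of the stated rules (1-50 alphanumeric characters for the endpoint name, 1-100 instances):

```python
import re

def valid_endpoint_name(name: str) -> bool:
    # 1-50 alphanumeric characters, per the deployment form above.
    return re.fullmatch(r"[A-Za-z0-9]{1,50}", name) is not None

def valid_instance_count(count: int) -> bool:
    # The form accepts between 1 and 100 instances.
    return 1 <= count <= 100
```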
Advanced Settings (Optional):
Deploy within a VPC: If you wish to deploy within a Virtual Private Cloud (VPC), expand the Advanced settings section and configure as needed.
IAM Service Role: Amazon Bedrock Marketplace automatically creates a service role to access Amazon S3 buckets where the model weights are stored. You can also choose to use an existing role.
Complete Deployment: Click Deploy to start the deployment process. This may take a few minutes to complete.
Note: Once deployed, your model will be available on a real-time endpoint via Amazon SageMaker. Costs will be incurred based on the chosen hardware infrastructure. Refer to Amazon SageMaker pricing for more details.
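Because the deployment surfaces as a SageMaker real-time endpoint, its status can also be checked programmatically. A minimal sketch with boto3 (the endpoint name is a placeholder for the one you chose):

```python
def is_ready(status: str) -> bool:
    # "InService" is the ready state; "Creating" and "Updating" mean the
    # deployment is still in progress, and "Failed" means it did not succeed.
    return status == "InService"

def endpoint_ready(endpoint_name: str, region: str = "us-east-1") -> bool:
    import boto3  # imported lazily so is_ready() has no AWS dependency
    sm = boto3.client("sagemaker", region_name=region)
    status = sm.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]
    return is_ready(status)
```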
Open the Playground: After deployment, click Open in playground to access the interactive interface.
Experiment with Prompts: Here, you can test various prompts and adjust model parameters such as temperature and maximum length.
Tip: When using DeepSeek-R1 with Bedrock's InvokeModel API or the Playground Console, it's recommended to use DeepSeek's chat template for optimal results. For example: <｜begin▁of▁sentence｜><｜User｜>content for inference<｜Assistant｜>.
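The tip above can be sketched as an InvokeModel call via boto3. The endpoint ARN is a placeholder for the one shown on your Marketplace deployment, and the TGI-style inputs/parameters payload is an assumption — check the deployed model's expected request schema:

```python
import json

# DeepSeek's chat template; the special-token spelling follows the template
# shown above and should be verified against the model card for your version.
def format_prompt(user_content: str) -> str:
    return f"<｜begin▁of▁sentence｜><｜User｜>{user_content}<｜Assistant｜>"

def invoke(endpoint_arn: str, prompt: str, region: str = "us-east-1") -> dict:
    import boto3  # imported lazily so format_prompt() has no AWS dependency
    client = boto3.client("bedrock-runtime", region_name=region)
    # Assumed payload shape; adjust to the schema your deployment expects.
    body = json.dumps({
        "inputs": format_prompt(prompt),
        "parameters": {"max_new_tokens": 512, "temperature": 0.6},
    })
    resp = client.invoke_model(modelId=endpoint_arn, body=body)
    return json.loads(resp["body"].read())
```

Pass the Marketplace deployment's endpoint ARN as the model identifier; Bedrock routes the request to the underlying SageMaker endpoint.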
Create Guardrails: You can create guardrails through the Amazon Bedrock console or API. Guardrails help block harmful content by evaluating model inputs and outputs against safety criteria you define.
Reference: See code examples for creating guardrails in the GitHub repository.
Apply Guardrails: Use Amazon Bedrock’s ApplyGuardrail API to apply guardrails to the deployed DeepSeek-R1 model. This process includes:
Input Evaluation: Assessing user input before sending it to the model.
Output Evaluation: Assessing the model’s output before returning it to the user.
Note: If the input or output violates a guardrail policy, the system returns the configured blocked message instead of the original content.
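Both checks use the same ApplyGuardrail call, differing only in the source field. A minimal sketch with boto3 (the guardrail ID and version are placeholders for a guardrail you created):

```python
def passed(guardrail_response: dict) -> bool:
    # ApplyGuardrail reports action "GUARDRAIL_INTERVENED" when content is blocked.
    return guardrail_response.get("action") != "GUARDRAIL_INTERVENED"

def check_text(text: str, guardrail_id: str, guardrail_version: str,
               source: str = "INPUT", region: str = "us-east-1") -> bool:
    """Evaluate text against a guardrail. Use source="INPUT" for user input
    before it reaches the model, source="OUTPUT" for the model's response."""
    import boto3  # imported lazily so passed() has no AWS dependency
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.apply_guardrail(
        guardrailIdentifier=guardrail_id,
        guardrailVersion=guardrail_version,
        source=source,
        content=[{"text": {"text": text}}],
    )
    return passed(resp)
```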
To avoid unintended charges, delete the deployment when it’s no longer needed:
Access Marketplace Deployments: In the Amazon Bedrock console, under Foundation models, select Marketplace deployments.
Delete the Endpoint: Locate the endpoint you wish to delete, select it, then choose Delete from the Actions menu.
Confirm Deletion: In the confirmation dialog, type confirm and click Delete to permanently remove the endpoint.
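The same cleanup can be done programmatically, since the Marketplace deployment is backed by a SageMaker real-time endpoint. A sketch that mirrors the console's type-to-confirm flow:

```python
def confirmed(typed: str) -> bool:
    # Mirror the console dialog: proceed only when the user typed "confirm".
    return typed.strip().lower() == "confirm"

def delete_deployment(endpoint_name: str, typed: str,
                      region: str = "us-east-1") -> bool:
    if not confirmed(typed):
        return False
    import boto3  # imported lazily so confirmed() has no AWS dependency
    sm = boto3.client("sagemaker", region_name=region)
    sm.delete_endpoint(EndpointName=endpoint_name)
    return True
```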
Access Permissions: To deploy DeepSeek-R1, ensure you have access to appropriate instances, such as ml.g6.2xlarge.
Quotas: Verify that your account has sufficient quotas for the selected instance type.
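Quota headroom can also be checked with the Service Quotas API. A sketch with boto3 — quota names of the form "<instance type> for endpoint usage" are an assumption here; inspect the actual names returned for your account and region:

```python
def sufficient_quota(quotas: list, instance_type: str, needed: int) -> bool:
    # quotas: entries from Service Quotas ListServiceQuotas (ServiceCode="sagemaker").
    # The "<instance type> for endpoint usage" naming is assumed; verify it.
    for q in quotas:
        name = q.get("QuotaName", "")
        if instance_type in name and "endpoint" in name.lower():
            return q.get("Value", 0) >= needed
    return False

def fetch_sagemaker_quotas(region: str = "us-east-1") -> list:
    import boto3  # imported lazily so sufficient_quota() has no AWS dependency
    client = boto3.client("service-quotas", region_name=region)
    pages = client.get_paginator("list_service_quotas").paginate(
        ServiceCode="sagemaker")
    return [q for page in pages for q in page["Quotas"]]
```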
Safety: It’s advisable to deploy the model with guardrails to ensure safety and compliance with responsible AI policies.