How to upload a file to AWS S3 using AWS AppSync
It states:
With AWS AppSync you can model these as GraphQL types. If any of your mutations have a variable with bucket, key, region, mimeType and localUri fields, the SDK will upload the file to Amazon S3 for you.
However, I cannot get the file to upload to my S3 bucket. I understand that the tutorial is missing a lot of details. More specifically, the tutorial does not say that NewPostMutation.js needs to be changed.
I changed it the following way:
import gql from 'graphql-tag';

export default gql`
  mutation AddPostMutation($author: String!, $title: String!, $url: String!, $content: String!, $file: S3ObjectInput) {
    addPost(
      author: $author
      title: $title
      url: $url
      content: $content
      file: $file
    ) {
      __typename
      id
      author
      title
      url
      content
      version
    }
  }
`
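For completeness, here is a sketch of building the variables that get passed to this mutation. The SDK looks for an object carrying the bucket, key, region, mimeType, and localUri fields; the bucket name, key scheme, and region below are hypothetical placeholders, not values from the tutorial:

```javascript
// Hypothetical helper: assembles AddPostMutation variables, including the
// complex `file` object the AppSync SDK inspects to trigger an S3 upload.
function buildAddPostVariables(post, localUri, mimeType) {
  return {
    author: post.author,
    title: post.title,
    url: post.url,
    content: post.content,
    file: {
      bucket: 'my-uploads-bucket',       // hypothetical bucket name
      key: `uploads/${post.title}.jpg`,  // hypothetical key scheme
      region: 'us-east-1',               // hypothetical region
      mimeType,                          // e.g. 'image/jpeg'
      localUri,                          // local path/File the SDK reads from
    },
  };
}

const vars = buildAddPostVariables(
  { author: 'me', title: 'hello', url: 'https://example.com', content: '...' },
  '/tmp/hello.jpg',
  'image/jpeg'
);
```

You would then pass `vars` as the `variables` option when calling the mutation with your AppSync client.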
Yet, even after I have implemented these changes, the file did not get uploaded...
There are a few moving parts under the hood you need to make sure you have in place before this "just works" (TM). First of all, you need to make sure you have an appropriate input and type for an S3 object defined in your GraphQL schema:
enum Visibility {
  public
  private
}

input S3ObjectInput {
  bucket: String!
  region: String!
  localUri: String
  visibility: Visibility
  key: String
  mimeType: String
}

type S3Object {
  bucket: String!
  region: String!
  key: String!
}
The S3ObjectInput type, of course, is for use when uploading a new file - either by way of creating or updating a model within which said S3 object metadata is embedded. It can be handled in the request resolver of a mutation via the following:
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.id)
  },
  #set( $attribs = $util.dynamodb.toMapValues($ctx.args.input) )
  #set( $file = $ctx.args.input.file )
  #set( $attribs.file = $util.dynamodb.toS3Object($file.key, $file.bucket, $file.region, $file.version) )
  "attributeValues": $util.toJson($attribs)
}
This is making the assumption that the S3 file object is a child field of a model attached to a DynamoDB datasource. Note that the call to $util.dynamodb.toS3Object() sets up the complex S3 object file, which is a field of the model with a type of S3ObjectInput. Setting up the request resolver in this way handles the upload of a file to S3 (when all the credentials are set up correctly - we'll touch on that in a moment), but it doesn't address how to get the S3Object back. This is where a field-level resolver attached to a local datasource becomes necessary. In essence, you need to create a local datasource in AppSync and connect it to the model's file field in the schema with the following request and response resolvers:
## Request Resolver ##
{
  "version": "2017-02-28",
  "payload": {}
}

## Response Resolver ##
$util.toJson($util.dynamodb.fromS3ObjectJson($context.source.file))
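In plain JavaScript terms, the response resolver is doing roughly the following. The serialized shape, bucket, and key below are assumptions for illustration - the actual format written by $util.dynamodb.toS3Object and read back by $util.dynamodb.fromS3ObjectJson is an AppSync internal:

```javascript
// Plain-JS sketch of what the field-level response resolver does: take the
// JSON string stored on the model's file attribute and turn it back into an
// object matching the schema's S3Object type.
function fileFieldToS3Object(storedFileString) {
  const parsed = JSON.parse(storedFileString);
  // Expose only the fields declared on the S3Object type in the schema.
  return { bucket: parsed.bucket, region: parsed.region, key: parsed.key };
}

// Simulated stored value (hypothetical bucket/region/key):
const stored = JSON.stringify({
  bucket: 'my-uploads-bucket',
  region: 'us-east-1',
  key: 'uploads/hello.jpg',
  mimeType: 'image/jpeg', // extra metadata is dropped by the resolver
});
const s3Object = fileFieldToS3Object(stored);
```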
This resolver simply tells AppSync that we want to take the JSON string that is stored in DynamoDB for the file field of the model and parse it into an S3Object - this way, when you do a query of the model, instead of returning the string stored in the file field, you get an object containing the bucket, region, and key properties that you can use to build a URL to access the S3 object (either directly via S3 or using a CDN - that's really dependent on your configuration).
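As a minimal sketch of that last step, here is one way to turn those three properties into a direct S3 URL; the sample bucket, region, and key are hypothetical, and you would substitute a CDN domain here if the bucket sits behind CloudFront:

```javascript
// Builds a virtual-hosted-style S3 URL from the S3Object fields a query
// returns: https://<bucket>.s3.<region>.amazonaws.com/<key>
function s3ObjectUrl({ bucket, region, key }) {
  // Encode each path segment of the key, keeping the '/' separators intact.
  const encodedKey = key.split('/').map(encodeURIComponent).join('/');
  return `https://${bucket}.s3.${region}.amazonaws.com/${encodedKey}`;
}

const url = s3ObjectUrl({
  bucket: 'my-uploads-bucket',
  region: 'us-east-1',
  key: 'uploads/summer photo.jpg',
});
// → 'https://my-uploads-bucket.s3.us-east-1.amazonaws.com/uploads/summer%20photo.jpg'
```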
Do make sure you have credentials set up for complex objects, however (told you I'd get back to this). I'll use a React example to illustrate this - when defining your AppSync parameters (endpoint, auth, etc.), there is an additional property called complexObjectsCredentials that needs to be defined to tell the client what AWS credentials to use to handle S3 uploads, e.g.:
import AWSAppSyncClient, { AUTH_TYPE } from 'aws-appsync';
import { Auth } from 'aws-amplify';

const client = new AWSAppSyncClient({
  url: AppSync.graphqlEndpoint,
  region: AppSync.region,
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: () => Auth.currentCredentials()
  },
  complexObjectsCredentials: () => Auth.currentCredentials(),
});
Assuming all of these things are in place, S3 uploads and downloads via AppSync should work.