GraphQL Apollo Server

This tutorial shows how to build a GraphQL server with Apollo Server v4, add GraphQL subscriptions, and use the AWS SDK v3 to upload images to an AWS S3 bucket via GraphQL.

Table of Contents

· Introduction
· Features
· Prerequisites
· Getting Started
· Setting up the environment variables
· Installing the dependencies
· Updating the package.json file
· Setting up the database
· Creating the models
· Create Authentication middleware
· Creating the GraphQL schema
· Creating the resolvers
· Creating the GraphQL Server
· Testing the GraphQL Server
· Creating a Product
· Show All Products
· Creating a Category
· Show All Categories
· Creating an Image
· Updating a Product
· Removing an Image
· Subscriptions
· List of All Queries, Mutations and Subscriptions
· Conclusion

In this tutorial, you will learn how to build a GraphQL server with the latest version of the Apollo Server (v4). We will build this server using the Express framework and Node.js. The data will be stored in a MongoDB database.

We will also learn how to upload images to AWS S3 using GraphQL. For this, we will use the latest version of the AWS SDK for JavaScript (v3).

Another great feature of this server is that it supports GraphQL subscriptions. This means we can subscribe to events and receive real-time updates from the server.

The reason behind this tutorial is that I couldn’t find any tutorials that used the latest version of the Apollo Server (v4). Most of the tutorials I found were outdated and used the old version of the Apollo Server (v3).

Also, most of the tutorials I found didn’t cover how to use GraphQL subscriptions with the latest version of the Apollo Server.

I’ve also decided to cover how to upload images to AWS S3 using the latest version of the AWS SDK. Most of the tutorials I found were using the old version of the AWS SDK (v2).

I hope this tutorial will help you get started with the latest version of the Apollo Server and the AWS SDK.

Make sure you have Node.js and MongoDB installed on your machine.

If you want to upload images to AWS S3, you will also need an AWS account with an S3 bucket set up. If you don’t have one, you can create one for free.

To get started, create a new folder for your project and open it in your favourite code editor. I’m going to use Visual Studio Code.

In this folder, we can initialize a new Node.js project by running the command below in the terminal and following the instructions.
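For example, assuming you are using npm:

```bash
npm init
```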

Once the project is initialized, we can start by creating a folder structure for our project with all the necessary files. Here is the folder structure I’m going to use:
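Based on the files referenced throughout this tutorial, the structure looks roughly like this (a sketch, your exact layout may differ slightly):

```
.
├── package.json
├── .env
├── server.js
└── src
    ├── db
    │   └── mongodb.js
    ├── middlewares
    │   └── auth.js
    ├── models
    │   ├── category.js
    │   ├── image.js
    │   ├── index.js
    │   └── product.js
    ├── resolvers
    │   ├── category.js
    │   ├── image.js
    │   ├── index.js
    │   └── product.js
    ├── schema
    │   ├── category.js
    │   ├── image.js
    │   ├── index.js
    │   └── product.js
    └── services
        └── s3.js
```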

We will go through each of these files in the next sections.

We will be using environment variables to store sensitive information, such as the database connection string, AWS S3 credentials, and the access key. To do this, we will create a new file called .env at the root of our project. This file will be used by the dotenv package to load the environment variables.

You can randomly generate an API key to use in your project. This will be used to authenticate the user when making requests to the server. We will explain this in more detail in a later section.

You can also fill in the AWS credentials if you want to upload images to AWS S3. You can skip this step if you don’t want to use AWS S3.
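Here is an example of what the .env file might look like. The variable names below are my own assumptions, so adjust them to whatever you use in your code:

```
# MongoDB connection string
MONGODB_URI=mongodb://test:password@localhost:27017/tutorial

# Randomly generated API key used by the auth middleware
API_KEY=some-random-string

# AWS S3 credentials (optional, only needed for image uploads)
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=eu-central-1
S3_BUCKET_NAME=your-bucket-name
```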

We will use the Apollo v4 Server to build our GraphQL server. In order to use the Apollo server, we will need to install a few dependencies.
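The exact dependency list depends on your setup. Based on the packages used throughout this tutorial, the install command could look like this (a few more packages are added in later sections as they come up):

```bash
npm install @apollo/server graphql express mongoose dotenv graphql-upload @graphql-tools/schema @aws-sdk/client-s3
```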

In this project, I would like to use import syntax instead of require syntax for importing modules from other files. For this, you will have to update the package.json file.

I will also adjust the main property to point to the server.js file instead of the main.js file.
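Assuming server.js lives at the project root, the relevant parts of package.json could look like this:

```json
{
  "name": "graphql-apollo-server",
  "type": "module",
  "main": "server.js",
  "scripts": {
    "start": "nodemon server.js"
  }
}
```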

I’ve also added a start script to run the server using nodemon. This will automatically restart the server when we make changes to the code. You can install nodemon globally using the following command:
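```bash
npm install -g nodemon
```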

We will be using MongoDB as our database. To connect to the database, we will create a new file called mongodb.js in the src/db folder. In this file, we will import the mongoose package and connect to the database using the connection string stored in the .env file. Make sure that you have MongoDB installed on your machine and running.
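A minimal version of src/db/mongodb.js might look like this (the MONGODB_URI variable name is an assumption that matches the .env example above):

```javascript
// src/db/mongodb.js
import mongoose from "mongoose";

// Connect to MongoDB using the connection string from the .env file
const connectDB = async () => {
  try {
    await mongoose.connect(process.env.MONGODB_URI);
    console.log("Connected to MongoDB");
  } catch (error) {
    console.error("Could not connect to MongoDB", error);
    process.exit(1);
  }
};

export default connectDB;
```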

You will also need to create a database with a user and password. I’ve created a user by connecting to MongoDB using the MongoDB shell.

I’ve run the following command in the terminal to create a new user named test with the password password.
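The exact command depends on your database name and roles; a typical example in the MongoDB shell (the tutorial database name here is an assumption) looks like this:

```javascript
use tutorial
db.createUser({
  user: "test",
  pwd: "password",
  roles: [{ role: "readWrite", db: "tutorial" }]
})
```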

Make sure that you use strong passwords in production. I’ve only created this user as an example.

We will be using Mongoose to create our models. We will create a new file for each model in the src/models folder. In this tutorial, we will be creating models for products, categories, and images. We will also create an index.js file in the src/models folder to export all the models.

We can start by creating a new file called category.js in the src/models folder. In this file, we will create and export a new schema for the category model.

I’ve added a few fields to the category model. You can add more fields if you want. The products field is an array of product IDs. We will use this field to store the products that will belong to this category.
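A sketch of src/models/category.js could look like this. The title field is the one referenced later in the tutorial; anything else is up to you:

```javascript
// src/models/category.js
import mongoose from "mongoose";

const categorySchema = new mongoose.Schema(
  {
    title: { type: String, required: true },
    // Array of product IDs that belong to this category
    products: [{ type: mongoose.Schema.Types.ObjectId, ref: "Product" }],
  },
  { timestamps: true }
);

export default mongoose.model("Category", categorySchema);
```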

Next, we can create a new file called product.js in the src/models folder. In this file, we will create and export a new schema for the product model.

The product model is also pretty simple. I just added the fields that came to mind, since I also used this model for another project. Feel free to add more fields if you want. The images field is an array of IDs from the image model. We will use this field to store the images that belong to this product.
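A possible src/models/product.js. Besides images, the only fields the tutorial relies on later are title and path (they show up in the example queries), so treat the rest as placeholders:

```javascript
// src/models/product.js
import mongoose from "mongoose";

const productSchema = new mongoose.Schema(
  {
    title: { type: String, required: true },
    path: { type: String },
    description: { type: String },
    price: { type: Number },
    // Array of image IDs from the image model
    images: [{ type: mongoose.Schema.Types.ObjectId, ref: "Image" }],
  },
  { timestamps: true }
);

export default mongoose.model("Product", productSchema);
```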

Finally, we can create a new file called image.js in the src/models folder. In this file, we will create a new schema for the image model.
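A minimal src/models/image.js sketch; the field names here are my own assumptions:

```javascript
// src/models/image.js
import mongoose from "mongoose";

const imageSchema = new mongoose.Schema(
  {
    filename: { type: String, required: true },
    // Public URL of the uploaded file in the S3 bucket
    url: { type: String, required: true },
  },
  { timestamps: true }
);

export default mongoose.model("Image", imageSchema);
```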

We can also create a new file called index.js in the src/models folder to export all the models. This will make it easier to import the models into other files.
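The src/models/index.js file can simply re-export the models as a single object, for example:

```javascript
// src/models/index.js
import Category from "./category.js";
import Image from "./image.js";
import Product from "./product.js";

export default { Category, Image, Product };
```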

It’s a good practice to create a middleware that will check if the user is authenticated before allowing them to make mutations to your data, especially in production.

There are better ways to implement authentication, but this will work just fine for our simple project. Feel free to use other authentication methods, such as JWT or OAuth.

We will create a new file called auth.js in the src/middlewares folder. We will create a new middleware in this file to check if the user is authenticated. Later, we will use this function as a wrapper for our resolvers.
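One simple way to implement this, assuming the API_KEY environment variable from earlier, is a higher-order function that wraps a resolver and checks a key passed through the context:

```javascript
// src/middlewares/auth.js
// Wraps a resolver and only calls it if the request carried a valid API key.
const auth = (resolver) => {
  return (parent, args, context, info) => {
    if (!context.auth || context.auth !== process.env.API_KEY) {
      throw new Error("Not authenticated");
    }
    return resolver(parent, args, context, info);
  };
};

export default auth;
```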

We will need to create a GraphQL schema that will define the types, queries, and mutations that our API will have.

We will start by creating a new file called category.js in the src/schema folder. In this file, we will create a new type and input for the category model.

The fields inside the typeDefs variable should match the fields we defined in the category model; otherwise, the GraphQL API will not store the data correctly. For example, if you misspell the title field in the typeDefs variable, the GraphQL API will not store the title field in the database. This can be a hint if some fields are missing in the database.

We also define an input type for the category model to simplify the mutations. We will use this input type in the mutations to create and update categories.
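A sketch of src/schema/category.js; the field names should match the Mongoose model from earlier:

```javascript
// src/schema/category.js
const typeDefs = `#graphql
  type Category {
    id: ID!
    title: String!
    products: [Product]
    createdAt: Date
    updatedAt: Date
  }

  input CategoryInput {
    title: String
    products: [ID]
  }
`;

export default typeDefs;
```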

Next, we can create a new file called product.js in the src/schema folder. In this file, we will create a new type and input for the product model, the same as we did for the category.

Finally, we can create a new file called image.js in the src/schema folder to define the type for the image model.
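And a minimal src/schema/image.js, matching the image model sketched above:

```javascript
// src/schema/image.js
const typeDefs = `#graphql
  type Image {
    id: ID!
    filename: String!
    url: String!
    createdAt: Date
    updatedAt: Date
  }
`;

export default typeDefs;
```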

Now that we have defined the types for the models, we can create a new file called index.js in the src/schema folder to export all the types.

This file contains all the queries, mutations, and subscriptions that will be used by the API.

I’ve added queries, mutations, and subscriptions to the schema that will allow simple CRUD operations on the products, categories, and images. You can add more queries, mutations, and subscriptions if you want.

I’ve also added a Date scalar type to the schema. This will allow us to store dates in the database.

Note that we are using the Upload scalar type for the uploadImage mutation. This is because we will upload images to the server using the graphql-upload package, which uses this type.
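Putting these pieces together, src/schema/index.js could look roughly like this. The operation names are assumptions based on the ones used later in the tutorial (allProducts, updateProduct, uploadImage, removeImage, and so on), so rename them to whatever you prefer:

```javascript
// src/schema/index.js
import categoryTypeDefs from "./category.js";
import imageTypeDefs from "./image.js";
import productTypeDefs from "./product.js";

const rootTypeDefs = `#graphql
  scalar Date
  scalar Upload

  type Query {
    allProducts: [Product]
    allCategories: [Category]
  }

  type Mutation {
    createProduct(input: ProductInput): Product
    updateProduct(id: ID!, input: ProductInput): Product
    createCategory(input: CategoryInput): Category
    uploadImage(file: Upload!): Image
    removeImage(id: ID!): Image
  }

  type Subscription {
    productUpdated: Product
  }
`;

export default [rootTypeDefs, categoryTypeDefs, productTypeDefs, imageTypeDefs];
```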

We’ve also added a Subscription type to the schema. This will allow us to subscribe to events, such as when a product is added or updated. To use subscriptions, we will need to install a couple of dependencies.
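These are the packages used for subscriptions later in this tutorial (graphql-ws, ws, and graphql-subscriptions):

```bash
npm install graphql-ws ws graphql-subscriptions
```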

Resolvers are the functions that “resolve” the GraphQL queries, mutations and subscriptions. They serve as the bridge between the GraphQL API and the database. Resolvers are similar to controllers in REST APIs.

We will create a new file for each resolver in the src/resolvers folder. In this tutorial, we will be creating resolvers for products, categories, and images. We will also create an index.js file in the src/resolvers folder to export all the resolvers.

We can start by creating a new file called category.js in the src/resolvers folder. In this file, we will create and export a new resolver for the category model.
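A sketch of what src/resolvers/category.js might contain, assuming the auth middleware and models from earlier are passed through the context. The createCategory and allCategories names are my assumptions:

```javascript
// src/resolvers/category.js
import auth from "../middlewares/auth.js";

const resolvers = {
  Query: {
    allCategories: async (parent, args, { models }) => {
      return models.Category.find();
    },
  },
  Mutation: {
    createCategory: auth(async (parent, { input }, { models }) => {
      return models.Category.create(input);
    }),
  },
  Category: {
    // Resolve the product IDs stored on the category into full product documents
    products: async (category, args, { models }) => {
      return models.Product.find({ _id: { $in: category.products } });
    },
  },
};

export default resolvers;
```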

In the resolver for the category model, we are defining the queries and mutations for the category model. We also resolve the products field in the Category type, which returns the products that belong to the category. Without this resolver, you would not be able to query products inside a category query like the following.
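(Here I’m using the allCategories query from the sketch above; any query that returns categories works the same way.)

```graphql
query {
  allCategories {
    id
    title
    products {
      id
      title
    }
  }
}
```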

I’ve also added a middleware to the mutations to ensure the user is authenticated before they can perform the mutations. Feel free to add the middleware to the queries as well.

Next, we can create another resolver for the product model in a file called product.js in the src/resolvers folder.

The product resolver is similar to the category resolver. However, we are also defining a subscription for the product model. This allows us to monitor the changes in the product model in real-time.

We also resolve the images field in the product type. This will return the images that belong to the product in the same way as we did for the products inside the category resolver.
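A condensed sketch of src/resolvers/product.js. The subscription name (productUpdated), the PRODUCT_UPDATED event name, and the pubsub object on the context are my assumptions:

```javascript
// src/resolvers/product.js
import auth from "../middlewares/auth.js";

const resolvers = {
  Query: {
    allProducts: async (parent, args, { models }) => {
      return models.Product.find();
    },
  },
  Mutation: {
    createProduct: auth(async (parent, { input }, { models, pubsub }) => {
      const product = await models.Product.create(input);
      // Notify subscribers that a product was created or updated
      pubsub.publish("PRODUCT_UPDATED", { productUpdated: product });
      return product;
    }),
    updateProduct: auth(async (parent, { id, input }, { models, pubsub }) => {
      const product = await models.Product.findByIdAndUpdate(id, input, { new: true });
      pubsub.publish("PRODUCT_UPDATED", { productUpdated: product });
      return product;
    }),
  },
  Subscription: {
    productUpdated: {
      subscribe: (parent, args, { pubsub }) => pubsub.asyncIterator("PRODUCT_UPDATED"),
    },
  },
  Product: {
    // Resolve the image IDs stored on the product into full image documents
    images: async (product, args, { models }) => {
      return models.Image.find({ _id: { $in: product.images } });
    },
  },
};

export default resolvers;
```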

Lastly, we can create a new resolver for the image model in a file called image.js in the src/resolvers folder.

In this file, we define the queries and mutations for the image model.

I’ve created a mutation to upload an image to the S3 bucket called uploadImage. In this mutation, we use the uploadFile function from the src/services/s3.js file to upload the image to the S3 bucket.

The uploadFile function needs two arguments, the file we are uploading and the root directory where we want to store the file in the S3 bucket.

We also define a mutation to remove an image from the MongoDB database. I’ve commented out the code to remove the image from the S3 bucket because, in some cases, you might want to keep the image in the S3 bucket even if it’s removed from the database. Feel free to uncomment the code if you want to remove the image from the S3 bucket as well.
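A sketch of src/resolvers/image.js. It relies on the uploadFile and removeFile helpers from src/services/s3.js shown below, and the "images" root directory is just an example:

```javascript
// src/resolvers/image.js
import auth from "../middlewares/auth.js";
import { uploadFile, removeFile } from "../services/s3.js";

const resolvers = {
  Query: {
    allImages: async (parent, args, { models }) => {
      return models.Image.find();
    },
  },
  Mutation: {
    uploadImage: auth(async (parent, { file }, { models }) => {
      // Upload the file to the S3 bucket under the "images" root directory
      const { filename, url } = await uploadFile(file, "images");
      return models.Image.create({ filename, url });
    }),
    removeImage: auth(async (parent, { id }, { models }) => {
      const image = await models.Image.findByIdAndDelete(id);
      // Uncomment to also delete the file from the S3 bucket
      // await removeFile(image.filename, "images");
      return image;
    }),
  },
};

export default resolvers;
```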

We will define the uploadFile and removeFile functions in the src/services/s3.js file as the following.
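Here is a sketch of src/services/s3.js using the AWS SDK v3. The environment variable names match the .env example from earlier and are assumptions, and the returned shape ({ filename, url }) is what the image resolver above expects:

```javascript
// src/services/s3.js
import { S3Client, PutObjectCommand, DeleteObjectCommand } from "@aws-sdk/client-s3";
import urlJoin from "url-join";
import { v4 as uuidv4 } from "uuid";
import chalk from "chalk";

const bucket = process.env.S3_BUCKET_NAME;
const region = process.env.AWS_REGION;

const s3 = new S3Client({
  region,
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  },
});

// Uploads a file (a graphql-upload promise) to the S3 bucket under the given root directory
export const uploadFile = async (file, root) => {
  const { createReadStream, filename, mimetype } = await file;
  // Prefix the original filename with a UUID so it is unique in the bucket
  const uniqueFilename = `${uuidv4()}-${filename}`;

  // Read the upload stream into a buffer (simplest approach; fine for small images)
  const chunks = [];
  for await (const chunk of createReadStream()) chunks.push(chunk);
  const body = Buffer.concat(chunks);

  try {
    await s3.send(
      new PutObjectCommand({
        Bucket: bucket,
        Key: `${root}/${uniqueFilename}`,
        Body: body,
        ContentType: mimetype,
      })
    );
    console.log(chalk.green(`Uploaded ${uniqueFilename} to S3`));
  } catch (error) {
    console.log(chalk.red(`Failed to upload ${uniqueFilename} to S3`), error);
    throw error;
  }

  // Build the public URL of the uploaded file
  const url = urlJoin(`https://${bucket}.s3.${region}.amazonaws.com`, root, uniqueFilename);
  return { filename: uniqueFilename, url };
};

// Removes a file from the S3 bucket
export const removeFile = async (filename, root) => {
  await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: `${root}/${filename}` }));
  console.log(chalk.yellow(`Removed ${filename} from S3`));
};
```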

I’ve used the url-join package to join the URL of the S3 bucket with the root directory and the image’s filename, the uuid package to generate a unique filename for the image, and the chalk package to log the success and error messages in the console in different colours. You will need to install these packages using the following command.
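```bash
npm install url-join uuid chalk
```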

Now that we have defined all the resolvers, we can create the index.js file to export all the resolvers inside the src/resolvers folder.

In this file, I’ve combined all the resolvers into a single object and exported it. We will use this object to create the GraphQL server in the next section.

Note that we are also exporting the Upload scalar type from the graphql-upload package we installed earlier.
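A sketch of src/resolvers/index.js. Note that graphql-upload v16+ ships as individual ESM files, so the Upload scalar is imported from its own path:

```javascript
// src/resolvers/index.js
import GraphQLUpload from "graphql-upload/GraphQLUpload.mjs";
import categoryResolvers from "./category.js";
import imageResolvers from "./image.js";
import productResolvers from "./product.js";

// Merge all resolver maps into a single object for the GraphQL server
const resolvers = {
  Upload: GraphQLUpload,
  Query: {
    ...categoryResolvers.Query,
    ...productResolvers.Query,
    ...imageResolvers.Query,
  },
  Mutation: {
    ...categoryResolvers.Mutation,
    ...productResolvers.Mutation,
    ...imageResolvers.Mutation,
  },
  Subscription: {
    ...productResolvers.Subscription,
  },
  Category: categoryResolvers.Category,
  Product: productResolvers.Product,
};

export default resolvers;
```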

In this section, we will create the entry point of the GraphQL server.

We need to create the server.js file containing the code to create the GraphQL server and start the HTTP server.

We also import the expressMiddleware function from the @apollo/server/express4 package. This allows us to use the Apollo Server with Express.

We will use the cors package to enable CORS for our GraphQL server and the body-parser package to parse the request body. You will need to install these packages using the following command.
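```bash
npm install cors body-parser
```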

The server will use WebSocket to send the subscription updates to the client. We need to create a WebSocket server using the WebSocketServer class from the ws package and pass it to the useServer function from the graphql-ws package. This will allow the server to listen to the subscription updates.

We will use the PubSub class from the graphql-subscriptions package to publish and subscribe to the events.

We will also instantiate the Apollo Server using the ApolloServer class and pass the schema object to it.

We will use the express package to create the Express server. The expressMiddleware function from the @apollo/server/express4 package will be used to create the middleware for the Express server. This middleware will be used to handle the GraphQL requests.

The expressMiddleware function takes the Apollo Server instance and an object containing the context as the arguments. The context object will be used to authenticate the user and pass the objects, such as auth and models, to the resolvers.

Lastly, we will start the HTTP server using the httpServer.listen function. The server will be listening on port 4000 by default.
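Putting all of that together, a sketch of server.js could look like this. A few assumptions on my part: the schema object shared by Apollo Server and graphql-ws is built with makeExecutableSchema from @graphql-tools/schema (installed earlier), the API key is read from the Authorization header, and graphqlUploadExpress from graphql-upload handles the multipart upload requests:

```javascript
// server.js
import "dotenv/config";
import http from "http";
import express from "express";
import cors from "cors";
import bodyParser from "body-parser";
import { ApolloServer } from "@apollo/server";
import { expressMiddleware } from "@apollo/server/express4";
import { ApolloServerPluginDrainHttpServer } from "@apollo/server/plugin/drainHttpServer";
import { makeExecutableSchema } from "@graphql-tools/schema";
import { WebSocketServer } from "ws";
import { useServer } from "graphql-ws/lib/use/ws";
import { PubSub } from "graphql-subscriptions";
import graphqlUploadExpress from "graphql-upload/graphqlUploadExpress.mjs";

import connectDB from "./src/db/mongodb.js";
import models from "./src/models/index.js";
import typeDefs from "./src/schema/index.js";
import resolvers from "./src/resolvers/index.js";

await connectDB();

const pubsub = new PubSub();
const schema = makeExecutableSchema({ typeDefs, resolvers });

const app = express();
const httpServer = http.createServer(app);

// WebSocket server used for GraphQL subscriptions
const wsServer = new WebSocketServer({ server: httpServer, path: "/graphql" });
const serverCleanup = useServer({ schema, context: () => ({ models, pubsub }) }, wsServer);

const server = new ApolloServer({
  schema,
  plugins: [
    // Shut down the HTTP server and the WebSocket server gracefully
    ApolloServerPluginDrainHttpServer({ httpServer }),
    {
      async serverWillStart() {
        return {
          async drainServer() {
            await serverCleanup.dispose();
          },
        };
      },
    },
  ],
});

await server.start();

app.use(
  "/graphql",
  cors(),
  bodyParser.json(),
  // Handle multipart/form-data requests for file uploads
  graphqlUploadExpress(),
  expressMiddleware(server, {
    // The context is built per request and passed to every resolver
    context: async ({ req }) => ({
      auth: req.headers.authorization,
      models,
      pubsub,
    }),
  })
);

const PORT = process.env.PORT || 4000;
httpServer.listen(PORT, () => {
  console.log(`🚀 Server ready at http://localhost:${PORT}/graphql`);
});
```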

You can start the server using the following command.
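```bash
npm start
```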

After that, you can access the GraphQL Playground at the following URL.
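With the sketch above, that would be:

```
http://localhost:4000/graphql
```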

In this section, we will test the GraphQL server we created.

We will use the Altair GraphQL Client to test the GraphQL server. You can download the Altair GraphQL Client from the following URL as a desktop app or browser extension.

If you are using the Apollo client with this server, you must set the Apollo-Require-Preflight header to true. Otherwise, the Apollo client will show an error.

Altair Headers Setup

We can start by creating a new product using the following mutation.
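Something along these lines; the createProduct name and the exact fields depend on your own schema:

```graphql
mutation {
  createProduct(input: { title: "First product", path: "/first-product", price: 10 }) {
    id
    title
    path
  }
}
```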

For now, we will not add any images to the product. We will add the images later.

You should get a successful response from the server, and the product should be created in the database. We can add two more products using the same mutation.

We can show the list of all products using the following query.
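For example (the subfields on images are my assumptions):

```graphql
query {
  allProducts {
    id
    title
    path
    images {
      id
      url
    }
  }
}
```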

I’ve also added the images field to the query. This will allow us to show the images associated with the product. For now, the product will not have any images. We will add the images later.

You should see the list of all products in the response.

Next, we will create a new category using the following mutation.
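For example (the createCategory name is an assumption; replace the placeholder IDs with real product IDs):

```graphql
mutation {
  createCategory(
    input: { title: "First category", products: ["<product-id-1>", "<product-id-2>"] }
  ) {
    id
    title
    products {
      id
      title
    }
  }
}
```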

I’ve also added the products field to the mutation. These are the list of product IDs that will be associated with the category. You can get the product IDs from the response of the allProducts query.

You should get a successful response from the server, and the category should be created in the database.

You can run the create category mutation again to create more categories.

We can show the list of all categories using the following query.
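For example (the allCategories name is an assumption):

```graphql
query {
  allCategories {
    id
    title
    products {
      id
      title
      path
    }
  }
}
```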

I’ve only added the id, title, and path fields to the products field, but you can add any other fields you want that the API supports.

You should see the list of all categories with the products associated with each category in the response.

Now, we can start adding images to the products. We will use the uploadImage mutation to upload the images to the AWS S3 bucket and create the image records in the database.
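A mutation along these lines, with the image passed in as the $file variable:

```graphql
mutation UploadImage($file: Upload!) {
  uploadImage(file: $file) {
    id
    filename
    url
  }
}
```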

In Altair, you can upload the image by clicking the Upload button in the mutation query. This will show the popup to select the image file.

Altair image upload setup

Once you select the image file, you should see the file field in the variables section at the bottom of the Altair window.

You can click the Send Request button to send the request to the server. You should get a successful response from the server. The image should be uploaded to the AWS S3 bucket, and the image record should be created in the database.

Ensure you have set the access policies for the AWS S3 bucket. Otherwise, you will get an error. I’ve set the following policies for the bucket.
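The original policy is not reproduced here, but based on the description it allowed at least PutObject and DeleteObject on the bucket. A sketch granting those actions to your IAM user could look like this (replace the account ID, user name, and bucket name with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:user/your-iam-user" },
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": ["arn:aws:s3:::Your-Bucket-Name/*"]
    }
  ]
}
```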

You have to replace “Your-Bucket-Name” with the name of your bucket inside the resource array. Inside the actions, I’ve added more operations than just PutObject and DeleteObject because I use the bucket for another project. Feel free to modify this file for your needs.

We can update the existing product by adding the image we just uploaded. We will use the updateProduct mutation to update the product.
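For example, using the product ID and the image ID from the previous responses:

```graphql
mutation {
  updateProduct(id: "<product-id>", input: { images: ["<image-id>"] }) {
    id
    title
    images {
      id
      url
    }
  }
}
```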

I’ve used the id field to identify the product we want to update. You can get the product ID from the response of the allProducts query.

The images field contains the list of image IDs we want to add to the product. I’ve used the image ID we got from the response of the uploadImage mutation. You can add multiple images to the product.

If you run the allProducts query again, you should see the image associated with the product.

We can also remove an image from the product. We will use the removeImage mutation to remove the image from the product.
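For example:

```graphql
mutation {
  removeImage(id: "<image-id>") {
    id
    filename
  }
}
```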

This will remove the image record from the database but not from the product. You will have to update the product separately to remove the image from the product.

You can also delete the image from the AWS S3 bucket by uncommenting the code for deleting the image in the removeFile function in the src/services/s3.js file.

We can also create a subscription to listen to product updates. The subscription will be triggered whenever a product is created or updated. We can test the subscription by updating the product. But first, we need to set up the subscription.

In a new Altair window, add the following subscription query.
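Using the productUpdated name assumed in the resolver sketch earlier:

```graphql
subscription {
  productUpdated {
    id
    title
    images {
      id
      url
    }
  }
}
```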

Click the Send Subscription button above the query, which should show the popup to select the subscription operation. In the popup, you need to enter the subscription URL and select the WebSocket (graphql-ws) subscription type from the dropdown, as shown in the image below.

Setting up the Subscription URL and Type

If the popup doesn’t show up, you can click on the Subscription URL button in the left sidebar, which will show the same popup.

Once you click the Send Request button, you should see the URL in the middle of the Altair window with two buttons, clear and stop.

Now, if you run the updateProduct mutation in the other Altair window, you should see the updated product in the subscription response as well.

Subscription response

I’m not going to show all the queries, mutations and subscriptions in this tutorial because the tutorial is already getting too long. You can find the list of all queries and mutations in the Altair GraphQL Client by clicking the Docs button or checking the src/schema/index.js file.

List of all queries, mutations and subscriptions

In this tutorial, we have successfully created a GraphQL server using the latest version of the Apollo Server (v4).

This server allows us to perform CRUD operations on the products, categories, and images. The images are stored in the AWS S3 bucket. To upload the images to the AWS S3 bucket, we have used the latest version of the AWS SDK (v3).

We have also created a subscription to listen to product updates. The subscription will be triggered whenever a product is created or updated.

You can find the source code for this tutorial on GitHub.

If you have any questions or suggestions, please leave a comment below.

Happy coding! 🚀
