Autopergamene

Querying your Redux store with GraphQL

Published a year ago
12 min to read

Rationale

When working in a React application, one pain point that often comes up is Redux. People say that as soon as an application uses it, things quickly get overrun with boilerplate and "wiring" code that ultimately clogs your codebase more than it helps it. This isn't something inherent to Redux but more something to do with the best practices associated with it, and with people misusing the store for everything and anything.

But it remains true that logic in and around your store grows in parallel to your domain code. The larger your domain code, the more boilerplate you'll have as well – actions, reducers, selectors, thunks, sagas, and so on. When that ecosystem of logic stays in the background, it's not an issue; the trouble starts when it leaks into your component layer as well. Components should be treated like controllers on the back-end: "transports" that are as pure and as ignorant of what's going on as possible, focusing on merely being input-output – interactions aside, of course.

But how do you keep your components ignorant of all this Redux logic when it's there precisely to help them consume your domain and state? Usually, your component will end up making use of actions, selectors, or thunks: all of which require your component to know how your Redux store is structured in order to operate. And once you start having to handle relationships, bridging entities together, filtering, sorting, pagination and so on, the Redux layer usually grows drastically in complexity and becomes harder to consume, cluttering your components with more and more wiring.

Redux, meet GraphQL

On a recent project, we had this exact problem of our Redux logic becoming increasingly complex and harder to consume. We had a whole pipeline to construct and deconstruct objects and to hydrate the components; not only was it a lot of boilerplate every time, it also muddied the waters of what the components were trying to achieve in the first place.

The crux of the issue is this: the more your components are aware of where your data comes from and how it's returned from your data source, the more they will try to bend themselves to it instead of keeping their API pure and their use cases open. This is usually very visible if you compare a component that was designed in isolation (in Storybook, for example) with its use cases alone in mind, and a component that was made "on the job" against the actual real-world data.

Since I was very interested in Gatsby at the time, and in how concise components in a Gatsby codebase can be thanks to GraphQL, I thought about introducing something similar for our Redux store: a way to centralize all data fetching and building into a clear and simple query, one that would make the components ignorant of where the data comes from and how it was fetched.

How it works

When working with GraphQL, one library in particular stands out from the rest, and that's of course Apollo: an ecosystem of libraries for working with GraphQL in various frameworks, which comes with everything you'd need. More interestingly, despite being predominantly "GraphQL branded", Apollo lets you use the GraphQL query language with other, more traditional data sources such as REST APIs, databases (SQL, Mongo) and so on, again very similar to what you find in Gatsby.

When using Apollo you'd usually have two sides: the server and the client. Your client lives on the client side and passes queries to your server, which answers them. So far so good. But what we built is a bit different: it uses a feature of Apollo called "local state management", which allows the client part of Apollo to both make and answer queries itself. This means no actual server is involved and no HTTP request is made; it's a "fake" GraphQL server running inside the client, whose purpose is to resolve queries from local data (here, our Redux store).

This feature wasn't made with Redux in mind; it was made to let you use Apollo as your store by writing to and reading from an InMemoryCache instance. But on this Apollo client you can also define resolvers, which tell Apollo how to retrieve the data asked for in a query.

If, for example, I wrote a query like this to get the IDs of all users and the groups they're in:

query {
    users {
        id
        groups {
            id
        }
    }
}

I could tell Apollo Client how to get that data through resolvers. And since we have access to our Redux store there (it's a global singleton), we can make that query functional by doing this:

import { ApolloClient, InMemoryCache, gql } from "@apollo/client";

import store from "./Store";

const typeDefs = gql`
    type Query {
        users: [User]
    }

    type User {
        id: Int
        groups: [Group]
    }

    type Group {
        id: Int
    }
`;

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        Query: {
            users: () => store.getState().users,
        },
        User: {
            groups: (user) =>
                store
                    .getState()
                    .groups.filter((group) =>
                        user.group_ids.includes(group.id),
                    ),
        },
    },
});

And that's it: that was the proof of concept and, surprisingly enough, it worked. As you might have noticed, you still have to provide a schema to the typeDefs option, but that schema will not be used for validation. It is used to know which resolvers to call, but it will never validate requests or responses, as that's too heavy an operation performance-wise and is disabled for Apollo Client (i.e. only Apollo Server does it, and we don't have one here).

The advantage we had on this project was that we had already set up a whole slew of selectors (functions that receive the state and return a piece of it) to query various parts of the state. This meant we could easily make the whole state queryable through GraphQL by leaning on those selectors:

const resolveSelector = (selector) => selector(store.getState());

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        Query: {
            users: () => resolveSelector(getUsers),
        },
        User: {
            groups: (user) => resolveSelector(getUserGroups(user)),
        },
    },
});
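
For reference, here's roughly what those selectors could look like. This is a hedged sketch: getUsers and getUserGroups stand in for whatever your codebase actually defines, assuming a state shape of { users: [...], groups: [...] } where users carry group_ids.

```javascript
// Hypothetical selectors matching the resolvers above.
// getUsers returns the whole users slice.
const getUsers = (state) => state.users;

// getUserGroups is a selector factory: called with a user, it returns
// a selector for that user's groups, hence resolveSelector(getUserGroups(user)).
const getUserGroups = (user) => (state) =>
    state.groups.filter((group) => user.group_ids.includes(group.id));

// Example state, just to show the shape these selectors expect.
const state = {
    users: [{ id: 1, name: "Ada", group_ids: [10] }],
    groups: [
        { id: 10, name: "Admins" },
        { id: 20, name: "Guests" },
    ],
};

console.log(getUsers(state).length); // 1
console.log(getUserGroups(state.users[0])(state).map((g) => g.name)); // ["Admins"]
```

The factory shape is what lets a resolver close over the parent user while still being resolved against the current state.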

Getting the data into the store

This is a great first step, as it makes the components unaware of the shape of the store and of the selectors: they just need to know what they need to render, and they get it. But I saw I could take it one step further and also make the components unaware of how to fetch that data and get it into the store in the first place. For this, since our thunks returned promises, we tied one thunk to every resolver:

const resolveSelector = (selector) => selector(store.getState());

const fetchAndResolve = async (thunk, selector) => {
    await thunk(store.dispatch, store.getState);

    return resolveSelector(selector);
};

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        Query: {
            users: () => fetchAndResolve(fetchUsers(), getUsers),
        },
        User: {
            groups: (user) =>
                fetchAndResolve(fetchUserGroups(user), getUserGroups(user)),
        },
    },
});
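
To be explicit about what fetchAndResolve awaits: a thunk here is a function that receives dispatch and getState and returns a promise. A minimal sketch, with loadUsersFromApi and the action type made up for illustration:

```javascript
// Stand-in for a real API call.
const loadUsersFromApi = async () => [{ id: 1, name: "Ada", group_ids: [] }];

// Hypothetical thunk: fetches users and dispatches them into the store.
// fetchAndResolve above awaits exactly the promise this returns.
const fetchUsers = () => async (dispatch, getState) => {
    if (getState().users.length) return; // already loaded, skip the round-trip

    const users = await loadUsersFromApi();
    dispatch({ type: "USERS_RECEIVED", payload: users });
};

// Minimal fake store, just to exercise the thunk outside Redux.
const state = { users: [] };
const dispatch = (action) => {
    if (action.type === "USERS_RECEIVED") state.users = action.payload;
};

fetchUsers()(dispatch, () => state).then(() => console.log(state.users.length)); // 1
```

Once the promise resolves the store is up to date, so reading the selector right after is safe.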

With this in place, our components went from this:

const Users = ({ fetchUsers, fetchUserGroups, users, groups }) => {
    useEffect(() => {
        fetchUsers();

        if (users.length) {
            users.forEach(fetchUserGroups);
        }
    }, [fetchUsers, fetchUserGroups, users]);

    return (
        <div>
            {users.map((user) => (
                <div key={user.id}>
                    <h1>{user.name}</h1>
                    <h2>Groups</h2>
                    <ul>
                        {user.group_ids.map((groupId) => (
                            <li key={groupId}>{groups[groupId].name}</li>
                        ))}
                    </ul>
                </div>
            ))}
        </div>
    );
};

const mapStateToProps = (state) => ({
    users: getUsers(state),
    groups: getGroups(state),
});

export default connect(mapStateToProps, { fetchUsers, fetchUserGroups })(Users);

To this:

const QUERY = gql`
    {
        users @client {
            name
            groups {
                name
            }
        }
    }
`;

const Users = () => {
    // data is undefined while the query is loading
    const { data } = useQuery(QUERY);
    const users = data ? data.users : [];

    return (
        <div>
            {users.map((user) => (
                <div key={user.name}>
                    <h1>{user.name}</h1>
                    <h2>Groups</h2>
                    <ul>
                        {user.groups.map((group) => (
                            <li key={group.name}>{group.name}</li>
                        ))}
                    </ul>
                </div>
            ))}
        </div>
    );
};

export default Users;

As you can see, the component is aware of much less than before: it simply asks for the data it needs to render, gets it, and renders it. The useQuery hook comes from Apollo and is the hook counterpart of the Query render-prop component, which does the same thing. Both variants come with a bunch of built-in goodies such as:

  • The ability to refresh your query at any time (useful after stateful actions)
  • The ability to display loading states
  • The ability to handle errors

The query uses a @client directive that tells Apollo it's a query meant for Apollo Client and not for Apollo Server, i.e. a query that should never leave the application. This is the most important part: without it, Apollo will try to execute your request against a real GraphQL server. In fact, you could use both this and an actual server, querying each indiscriminately depending on whether you pass @client or not, which is an interesting idea too.

Query arguments

This is nice for the standard use case, but what about when things need to be queried with arguments, such as in the case of pagination? Well, you can define arguments on your query and they'll be passed to the resolvers, allowing you to write a component like this:

const QUERY = gql`
    query($page: Int, $perPage: Int) {
        users(perPage: $perPage, page: $page, orderBy: "age") @client {
            name
            groups {
                name
            }
        }
    }
`;

const Users = ({ perPage = 15 }) => {
    const {
        data: { users },
        loading,
        refetch,
    } = useQuery(QUERY, { variables: { page: 1, perPage } });

    return (
        <Table
            columns={...}
            rows={users}
            loading={loading}
            perPage={perPage}
            onPageChange={page => refetch({ page, perPage })}
        />
    );
};

export default Users;

And power it like this (as long as your selectors and thunks are already pagination-aware, which ours were):

Query: {
    users: async (_, { perPage, page, orderBy }) => {
        const users = await fetchAndResolve(
            fetchUsers(page, perPage),
            getUsers,
        );

        return sortBy(users, orderBy);
    },
},

As you can see, since Apollo allows us to refetch a query with different arguments, we can easily fetch intricate sets of data and arrange it precisely how the component wants, without having to make the component aware of how the data is manipulated.
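
The sortBy used in that resolver is presumably lodash's; if you'd rather avoid the dependency, the sorting and paging involved are small enough to write by hand. A sketch (paginate is a hypothetical helper; in our case the thunks themselves handled paging):

```javascript
// Minimal stand-in for lodash's sortBy: ascending sort by one property,
// without mutating the input array.
const sortBy = (items, property) =>
    [...items].sort((a, b) => (a[property] < b[property] ? -1 : 1));

// Hypothetical helper: returns one page of an already-sorted list.
const paginate = (items, page, perPage) =>
    items.slice((page - 1) * perPage, page * perPage);

const users = [{ age: 42 }, { age: 17 }, { age: 30 }];
console.log(paginate(sortBy(users, "age"), 1, 2)); // [{ age: 17 }, { age: 30 }]
```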

Writing in addition to reading

While we didn't do this ourselves, to keep the codebase approachable, the idea isn't limited to reading from your store: you could very well move your write layer into GraphQL as mutations as well:

const QUERY = gql`
    query($id: Int) {
        user(id: $id) @client {
            id
            name
        }
    }
`;

const MUTATION = gql`
    mutation($id: Int, $name: String) {
        updateUser(id: $id, name: $name) @client {
            id
            name
        }
    }
`;

const UserForm = ({ id }) => {
    const {
        data: { user },
    } = useQuery(QUERY, { variables: { id } });
    const [updateUser] = useMutation(MUTATION);

    return (
        <Formik
            initialValues={{ ...user }}
            onSubmit={({ id, name }) => updateUser({ variables: { id, name } })}
        >
            {() => null /* form fields omitted */}
        </Formik>
    );
};

export default UserForm;

You could then define the matching resolver, again reusing your Redux logic and thunks:

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        // ...
        Mutation: {
            updateUser: async (source, { id, name }) => {
                await updateUserThunk(id, { name })(store.dispatch);

                return resolveSelector(getUser(id));
            },
        },
        // ...
    },
});

Doing this, your components would be able to read and write to your store without ever being aware of where that data actually is, how it's fetched, how it's structured, and so on. This keeps your components "sort of" implementation-agnostic in that you could swap your resolvers midway through with a real GraphQL API and your components wouldn't see any difference.

Testing

Once we've reached this step comes the question of testing: we've decoupled our component from the Redux store but coupled it to GraphQL, so how would we test this? There are two approaches. The first and most evident one is to simply export two components:

export const UsersTable = ({ users }) => (
    <Table columns={...} rows={users} />
);

const Users = () => {
    const { data: { users } } = useQuery(QUERY);

    return <UsersTable columns={...} users={users} />;
};

export default Users;

Then we could simply import { UsersTable } from "./Users" and test it by providing dummy props directly. That's precisely what we were doing before, but in doing so you don't test the full story either. Thankfully, you can easily mock GraphQL queries and mutations in tests thanks to the MockedProvider exposed by Apollo:

const mocks = [
    {
        request: { query: QUERY, variables: {} },
        result: {
            data: myDummyUsers,
        },
    },
];

it("can render a list of users", () => {
    const result = render(
        <MockedProvider mocks={mocks}>
            <Users />
        </MockedProvider>,
    );

    // Test the component
});

As you can see, you mock responses to individual queries, which gives you more assurance that your component is querying the data correctly, as Apollo will match each query exactly, and match it only once. This might seem cumbersome, but it allows us, for example, to:

  • Mock responses differently to a first fetch and a refetch
  • Mock responses differently depending on query arguments
  • Simulate errors, failures and so on since you can also return failed requests by providing an error field
  • etc.

This is a much more complete way to test your components as you basically mock your data source directly instead of bypassing it and testing the lower layer.

Killing Redux

Now, this isn't something we did on our project, but one of the benefits of this approach is that it suddenly becomes much easier to get data from your API to your component. Things no longer necessarily need to gravitate around the store.

We've mostly implemented resolvers with thunks and selectors so far, but you don't really need either for this to work; the following would be completely acceptable:

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        // ...
        Query: {
            // axios resolves with a response object, so unwrap its .data
            groups: () => axios.get("my/api/groups").then(({ data }) => data),
        },
        // ...
    },
});

This is assuming the API returns the data in the correct format, but even if that weren't the case you could just wrap the call in a normalizeGroups function or something. My point is, not everything has to be in the store: if a piece of data is only used by the page that requests it, why bother? Why do a whole trip around the store? By doing this as often as possible, you'll notice how redundant your store becomes and how you've subconsciously trained yourself to shove everything into it "in case you need it later".
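
Such a normalizeGroups wrapper could be as simple as reshaping the API's payload into what the Group type expects. A hypothetical sketch (the snake_cased API shape here is invented for illustration):

```javascript
// Hypothetical: the API wraps its payload in { data } and uses different
// field names than our Group type, so we reshape each entry.
const normalizeGroups = (response) =>
    response.data.map((group) => ({
        id: group.group_id,
        name: group.group_name,
    }));

const apiResponse = {
    data: [{ group_id: 10, group_name: "Admins" }],
};

console.log(normalizeGroups(apiResponse)); // [{ id: 10, name: "Admins" }]
```

The resolver would then become groups: () => axios.get(...).then(normalizeGroups), keeping the reshaping out of the component entirely.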

This is of course very dependent on how complex your app is, and I'm not advocating for getting rid of Redux entirely: on the project I mentioned, we definitely needed to make entities go through the store first to normalize everything. But on smaller projects, that may not be a constraint.

Final Words

Doing this introduced a barrier to entry for working with our components, but no greater than learning how thunks, selectors and the rest of Redux work. While knowledge of both the component layer and the Redux layer is required to make the GraphQL client return new data, writing new components with existing data requires very minimal knowledge.

Overall we've made our components much purer, and we can easily reuse and abstract our whole fetching layer by exporting query components:

const QUERY = gql`
    {
        users @client {
            name
            groups {
                name
            }
        }
    }
`;

export const UsersQuery = ({ children }) => {
    // data is undefined while the query is loading
    const { data } = useQuery(QUERY);

    return children(data ? data.users : []);
};

// Somewhere else
<UsersQuery>{(users) => renderSomethingWithUsers(users)}</UsersQuery>;

Since queries also return information about loading states, errors and such, it's also very easy to centralize all that handling in a common render-prop component (instead of using hooks), which is what we did:

const LoadedQuery = ({ children, ...props }) => (
    <Query {...props}>
        {({ data, ...results }) => {
            if (results.loading) {
                return <Loader />;
            }

            if (results.error) {
                return <ErrorMessage error={results.error} />;
            }

            return children(data, results);
        }}
    </Query>
);

There are a lot of interesting directions you can take this query layer in, and I for one am very excited to see GraphQL used more and more outside of an actual GraphQL context. I think we've gotten so used to having our hands in the engine to get data to our components that we've forgotten how pure they should be in the first place, and it's great to be able to bring that simplicity back without leaking implementation details everywhere.

© 2020 - Emma Fabre

Autopergamene

Querying your Redux store with GraphQL

Back

Querying your Redux store with GraphQL

Published a year ago
12mn to read

Rationale

When working in a React application, one pain point that often comes up is Redux. People say that as soon as an application uses it, things quickly get overrun with boilerplate and "wiring" code that ultimately clogs your codebase more than it helps it. This isn't something inherent to Redux but more something to do with the best practices associated with it, and with people misusing the store for everything and anything.

But it remains true that logic in and around your store grows in parallel to your domain code. The larger your domain code, the more boilerplate you'll have as well – actions, reducers, selectors, thunks, sagas, and so on. When that ecosystem of logic stays in the background, it's not an issue, but when troubles come into play is when it leaks into your components layer as well. Components should be treated like controllers in the back-end: "transports" that are as pure and ignorant of what's going on as possible, instead focusing on merely being input-output – interactions aside of course.

But how do you keep your components ignorant of all this Redux logic when it's there precisely to help them consume your domain and state? Usually, your component will end up making use of actions, selectors, or thunks: all things that involve your component knowing how your Redux store is structured to operate. And once you start to have to handle relationships, bridging entities together, filtering, sorting, pagination and so on, the Redux layer usually grows drastically in complexity and becomes harder to consume, cluttering your components with more and more wiring.

Redux, meet GraphQL

On a recent project, we had this exact problem of our Redux logic becoming increasingly complex and harder to consume. We had a whole pipeline to construct and destruct objects and to hydrate the components, and it was not only a lot of boilerplate every time but it muddied the water of what the components were trying to achieve in the first place.

The crux of the issue is this: the more your components are aware of where your data comes from and how it's returned from your data source, the more they will try to bend themselves to it instead of keeping their API pure and their use cases open. This is usually very visible if you compare a component that was designed in isolation (in Storybook per example) with its use cases alone in mind, and a component that was made "on the job" with the actual real-world data.

Since I was very interested in Gatsby at the time, and how concise components in a Gatsby codebase can be thanks to GraphQL, I thought about introducing something similar but for our Redux store. A way to centralize all data fetching and building into a clear and simple query, that would make the components ignorant of where the data comes from and how it was fetched.

How it works

When working with GraphQL, there is one library in particular that stands out from the rest, and that's of course Apollo. It's an ecosystem of libraries to work with it in various frameworks and comes with everything you'd need. But more interestingly despite being predominantly "GraphQL branded", Apollo lets you use the GraphQL query language with other more traditional data sources such as REST APIs, databases (SQL, Mongo) and so on. Again very similar to what you find in Gatsby.

When using Apollo you'd usually have two sides: the server and the client. Your client lives on the client-side, and passes queries to your servers which answers them. So far so good. But what we built is a bit different, it uses a feature of Apollo called "local state management" which allows the client part of Apollo to both make and answer the queries itself. This means no actual server is involved, no HTTP request will be made, it's a "fake GraphQL" server running within each request and whose purpose is to query data from that same request (here, from our Redux store).

This feature wasn't made with Redux in mind, it was made to use Apollo as your store by writing and reading from an InMemoryCache instance. But on this Apollo client, you can also define resolvers which tell GraphQL basically how to retrieve the data that was asked in the query.

If per example I wrote a query like this, to get the ID of all users and the groups they're in:

query {
    users {
        id
        groups {
            id
        }
    }
}

I could tell Apollo Client how to get that data through resolvers. And since we have access to our Redux store there (since it's a global singleton), that means we can make that query functional by doing this:

import store from "./Store";

const typeDefs = gql`
        type Query {
            users: [User]
        }

        type User {
            groups: [Group]
        }

        type Group {}
    `;

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        Query: {
            users: () => store.getState().users,
        },
        User: {
            groups: (user) =>
                store
                    .getState()
                    .groups.filter((group) =>
                        user.group_ids.includes(group.id),
                    ),
        },
    },
});

And that's it, that was the proof of concept and surprisingly enough, it worked. You still have to provide a schema to the typeDefs option as you might have noticed, but that schema will not be used for validation. It will be used to know which resolvers to call but it will never be used to validate requests nor responses as it's too heavy of an operation performance-wise and is disabled for Apollo Client (ie. only Apollo Server uses it, but we don't have one here).

The advantage we had on this project, was that we had already set up a whole slew of selectors (functions that receive the state and return a piece of it) to query various parts of the state. This meant we were able to easily make the whole state queryable through GraphQL by using selectors extensively:

const resolveSelector = (selector) => selector(store.getState());

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        Query: {
            users: () => resolveSelector(getUsers),
        },
        User: {
            groups: (user) => resolveSelector(getUserGroups(user)),
        },
    },
});

Getting the data into the store

This is a great first step as it makes the components unaware of the shape of the store and the selectors, they just need to know what they need to render, and they get it. But I saw I could take it one step further and also make the components unaware of how to fetch that data and get it into the store in the first place. For this, since we were using thunks which are promises, we tied one thunk to every resolver:

const resolveSelector = (selector) => selector(store.getState());

const fetchAndResolve = async (thunk, selector) => {
    await thunk(store.dispatch, store.getState);

    return resolveSelector(selector);
};

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        Query: {
            users: () => fetchAndResolve(fetchUsers(), getUsers),
        },
        User: {
            groups: (user) =>
                fetchAndResolve(fetchUserGroups(user), getUserGroups(user)),
        },
    },
});

With this in place, this meant we were able to have our components go from this:

const Users = ({ fetchUsers, fetchUserGroups, users, groups }) => {
    useEffect(() => {
        fetchUsers();

        if (users.length) {
            users.forEach(fetchUserGroups);
        }
    }, [fetchUsers, fetchUserGroups, users]);

    return (
        <div>
            {Object.values(users).map((user) => (
                <div>
                    <h1>{user.name}</h1>
                    <h2>Groups</h2>
                    <ul>
                        {user.group_ids.map((groupId) => (
                            <li>{groups[groupId].name}</li>
                        ))}
                    </ul>
                </div>
            ))}
        </div>
    );
};

const mapStateToProps = (state) => ({
    users: getUsers(state),
    groups: getGroups(state),
});

export default connect(mapStateToProps, { fetchUsers, fetchUserGroups })(Users);

To this:

const QUERY = gql`
    {
        users @client {
            name
            groups {
                name
            }
        }
    }
`;

const Users = () => {
    const {
        data: { users },
    } = useQuery(QUERY);

    return (
        <div>
            {users.map((user) => (
                <div>
                    <h1>{user.name}</h1>
                    <h2>Groups</h2>
                    <ul>
                        {user.groups.map((group) => (
                            <li>{group.name}</li>
                        ))}
                    </ul>
                </div>
            ))}
        </div>
    );
};

export default Users;

As you can notice, the component is aware of much much less than before. It simply asks for the data it needs to render, gets it, and renders it. The useQuery hook comes from Apollo and is the hook counterpart of the Query render component that does the same thing. Both variations come with a bunch of built-in goodies such as:

  • The ability to refresh your query at any time (useful after stateful actions)
  • The ability to display loading states
  • The ability to handle errors

The query uses a @client directive that tells Apollo it's a query meant for Apollo Client and not for Apollo Server, ie. that query should not leave the current request. This is the most important part as without it Apollo will try to execute your request against a real GraphQL server – cause you could be using both this and an actual server and query both indiscriminately depending on if you pass @client or not, which is an interesting idea too.

Query arguments

This is nice for the standard use case but what about when things need to be queried with arguments, such as in the case of pagination? Well you can define arguments on your query and they'll be received by the resolvers, allowing you to write a component like this:

    const QUERY = gql`
        query($page: Int, $perPage: Int) {
            users(perPage: $perPage, page: $page, orderBy: "age") @client {
                name
                groups {
                    name
                }
            }
        }
    `;

    const Users = ({ perPage = 15 }) => {
        const {
            data: { users },
            loading,
            refetch,
        } = useQuery(QUERY, { page: 1, perPage });

        return (
            <Table
                columns={...}
                rows={users}
                loading={loading}
                perPage={perPage}
                onPageChange={page => refetch({ page, perPage })}
            />
        );
    };

    export default Users;

And power it like this (as long as your selectors/thunks are already pagination-aware for it which ours were):

Query: {
    users: async (_, { perPage, page, orderBy }) => {
        const users = await fetchAndResolve(
            fetchUsers(page, perPage),
            getUsers,
        );

        return sortBy(users, orderBy);
    };
}

As you can see, since Apollo allows us to refetch a query with different arguments, we can easily fetch intricate sets of data and arrange it precisely how the component wants, without having to make the component aware of how the data is manipulated.

Writing in addition to reading

While we didn't do this to keep the codebase approachable, this isn't an idea that is limited to reading from your store. You could very well move your write layer into your GraphQL server as mutations as well:

const QUERY = gql`
    query($id: Int) {
        user(id: $id) {
            id
            name
        }
    }
`;

const MUTATION = gql`
        mutation($id: Int, $name: String) {
            updateUser(id: $id, name: $name) { }
        }
    `;

const UserForm = ({ id }) => {
    const {
        data: { user },
    } = useQuery(QUERY, { id });
    const [updateUser] = useMutation(MUTATION);

    return (
        <Formik
            initialValues={{ ...user }}
            onSubmit={(user) => updateUser(user.id, user.name)}
        >
            {() => ({})}
        </Formik>
    );
};

export default Users;

You could then define the matching resolver, again reusing your Redux logic and thunks:

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        // ...
        Mutation: {
            updateUser: async (source, { id, name }) => {
                await updateUserThunk(id, { name })(store.dispatch);

                return resolveSelector(getUser(id));
            },
        },
        // ...
    },
});

Doing this, your components would be able to read and write to your store without ever being aware of where that data actually is, how it's fetched, how it's structured, and so on. This keeps your components "sort of" implementation-agnostic in that you could swap your resolvers midway through with a real GraphQL API and your components wouldn't see any difference.

Testing

Once we've reached this step comes the question of testing: we've decoupled our component from the Redux store but coupled it to GraphQL, so how would we test this? There's two approaches, the first and most evident one is to simply export two components:

export const UsersTable = ({ users }) => (
    <Table columns={...} rows={users} />
);

const Users = () => {
    const { data: { users } } = useQuery(QUERY);

    return <UsersTable columns={...} users={users} />;
};

export default Users;

Then we could simply import { UsersTable } from "./Users" and test it by providing dummy props directly. That's precisely what we were doing before, but in doing so you don't test the full story either. Thankfully, you can easily mock GraphQL queries and mutations in tests thanks to the MockedProvider component exposed by Apollo:

const mocks = [
    {
        request: { query: QUERY, variables: {} },
        result: {
            data: { users: myDummyUsers },
        },
    },
];

it("can render a list of users", () => {
    const result = render(
        <MockedProvider mocks={mocks}>
            <Users />
        </MockedProvider>,
    );

    // Test the component
});

As you can see, you mock responses to individual queries, which gives you more assurance that your component is querying the data correctly, as Apollo matches each query exactly and matches it only once. This might seem cumbersome, but it allows us, for example, to:

  • Mock responses differently to a first fetch and a refetch
  • Mock responses differently depending on query arguments
  • Simulate errors, failures, and so on, since you can also return a failed request by providing an error field instead of a result
  • etc.
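As a sketch of the first three points (the data, variables, and query placeholder here are invented for the example), a mocks array covering a refetch with different variables and a simulated failure could look like this:

```javascript
// Hypothetical mocks: `QUERY` stands in for the gql document the
// component actually uses; names and payload shapes are illustrative only.
const QUERY = {}; // placeholder for the real gql document

const mocks = [
    // First render, no arguments
    {
        request: { query: QUERY, variables: {} },
        result: { data: { users: [{ name: "Alice" }, { name: "Bob" }] } },
    },
    // A refetch with an argument gets a different payload
    {
        request: { query: QUERY, variables: { group: "admins" } },
        result: { data: { users: [{ name: "Alice" }] } },
    },
    // A query can also simulate a network failure
    {
        request: { query: QUERY, variables: { group: "broken" } },
        error: new Error("Network error"),
    },
];
```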

This is a much more complete way to test your components as you basically mock your data source directly instead of bypassing it and testing the lower layer.

Killing Redux

Now, this isn't something we did on our project, but one of the benefits of this approach is that it suddenly becomes much easier to get data from your API to your components. Suddenly, things don't necessarily need to gravitate around the store.

We've mostly implemented resolvers using thunks and selectors so far, but you don't really need either for this to work; the following would be completely acceptable:

const client = new ApolloClient({
    cache: new InMemoryCache(),
    typeDefs,
    resolvers: {
        // ...
        Query: {
            groups: () => axios.get("my/api/groups").then(({ data }) => data),
        },
        // ...
    },
});

This assumes the API returns the data in the correct format, but even if that weren't the case, you could just wrap the call in a normalizeGroups function or something similar. My point is, not everything has to be in the store: if data is only used in the same page that requests it, why bother? Why make a whole trip through the store? By doing this as often as possible, you'll notice how redundant your store becomes and how you've subconsciously trained yourself to shove everything in it "in case you need it later".
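As a sketch of what such a normalizer could look like (the API field names here are invented for the example, not from a real payload):

```javascript
// Hypothetical sketch: map an imagined snake_case API payload to the
// shape the GraphQL schema exposes to components.
const normalizeGroups = (payload) =>
    payload.map((group) => ({
        id: group.group_id,
        name: group.group_name,
        members: group.member_count || 0,
    }));

// The resolver would then become:
// groups: () => axios.get("my/api/groups").then(({ data }) => normalizeGroups(data))
```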

This is of course very dependent on how complex your app is. I'm not advocating for No Redux: on the project I mentioned, we definitely needed entities to go through the store first to normalize everything, but on smaller projects that may not be a constraint.

Final Words

Doing this introduced a barrier to entry for working with our components, but not a greater one than learning how thunks, selectors, and the rest of Redux work. While knowledge of both the component layer and the Redux layer is required to make the GraphQL client return new data, writing new components with existing data requires very minimal knowledge.

We've overall made our components much more pure, and we can easily reuse and abstract our whole fetching layer by exporting query components:

const QUERY = gql`
    {
        users @client {
            name
            groups {
                name
            }
        }
    }
`;

export const UsersQuery = ({ children }) => {
    const {
        data: { users },
    } = useQuery(QUERY);

    return children(users);
};

// Somewhere else
<UsersQuery>{(users) => renderSomethingWithUsers(users)}</UsersQuery>;

Since the queries also return information about loading and error states, it's very easy to centralize all that handling in a common render-prop component (instead of using hooks), which is what we did:

const LoadedQuery = ({ children, ...props }) => (
    <Query {...props}>
        {({ data, ...results }) => {
            if (results.loading) {
                return <Loader />;
            }

            if (results.error) {
                return <ErrorMessage error={results.error} />;
            }

            return children(data, results);
        }}
    </Query>
);

There are a lot of interesting directions you can take from this query layer, and I for one am very excited to see GraphQL being used more and more outside of an actual GraphQL context. I think we've gotten so used to having our hands in the engine to get data to our components that we've forgotten how pure they should be in the first place, and it's great to be able to bring that simplicity back without leaking implementation details everywhere.

© 2020 - Emma Fabre