
Pixie - A scalable graph-based real-time recommendation engine by Pinterest


Recommendation engines play an important role in user experience. Large companies like Amazon, Google, and Facebook try their best to personalize recommendations for each customer. For example, YouTube gives you personalized recommendations based on your watch history. Amazon shows you different sets of recommendations such as “Related to items you’ve viewed”, “Customers also bought”, and “Items you may like”, and sends you personalized emails based on your browsing and purchase history. Most companies have their own recommendation engines to increase their customers’ conversion rates.

Giving personalized recommendations in real time is very difficult. For big companies, the user base and the item catalogue range from millions to billions. Handling such a huge amount of data and serving recommendations in real time is a nightmare. In this article, I will summarize the paper “Pixie: A System for Recommending 3+ Billion Items to 200+ Million Users in Real-Time”.

Pixie was developed by Pinterest and helped them increase user engagement by 58% by providing real-time recommendations. Pixie is a graph-based real-time recommendation engine. It is fast, scalable, and runs in constant time, independent of the size of the graph. Let us understand the power of Pixie. Let’s get started.


Pinterest is a visual catalogue with several billion pins, which are visual bookmarks containing a description, a link, and an image or a video. Users can create boards by collating different pins together.

Each user can curate different pins to create a board, and a pin can appear in multiple boards. For example, we could curate pins of all the Indian cricket team players into a single board. With hundreds of millions of users on Pinterest, billions of boards are generated. We can think of Pinterest as a giant human-curated bipartite graph of 7 billion pins and boards, and over 100 billion edges.

Bipartite graph

Can you imagine the size of the graph and the interaction data they have? From the bipartite graph, we can find the most popular boards, the most used pins, and so on. There would also be many boards and pins that were used only once. Pinterest’s main goal here is to recommend similar pins to users as they interact with pins. We create a query set Q of pins, each with a different weight based on user interaction, and query this graph in real time to recommend similar pins to the user. The query set Q is user-specific and changes dynamically as the user interacts with pins. I hope this gives you a high-level view of how it works. Now, let’s discuss how the Pixie random walk algorithm works.

First, I will summarize the basic random walk algorithm. Consider a bipartite graph G and an input query set Q that contains only one query pin q. Given the query pin q, we run many short random walks starting from q. For each candidate pin p (a pin in the graph), we record a visit count: the number of times the random walks visited p. The higher the visit count, the more related that pin is to the query pin q.

Basic Random walk algorithm:

The random walk consists of a sequence of steps, and each step has three operations. Before diving into the explanation, let me define the notation: a pin is represented by p, a board is represented by b, E(p) is the set of edges connecting pin p to its boards (all edges starting from pin p), and E(b) is the set of edges connecting board b to its pins. Given a query pin q, the current pin is initialized to q. Now, we are at pin q.
  • We randomly pick an edge e from E(q) that connects q to a board b. Now, we are at board b.
  • We randomly pick an edge e1 from E(b) that connects b to a pin q1. Now, we are at pin q1.
  • The current pin is updated from q to q1, and the visit count of q1 is incremented.
Bipartite graph - Pins & Boards

We repeat the above steps. The length of each short walk is determined by the parameter alpha, which tells us when to stop a walk. The total number of steps N is the sum of the lengths of all the short random walks, so the runtime is determined by N rather than by the size of the graph. After the walks complete, we sort the candidate pins p by visit count to find the pins most related to pin q.

Basic Random Walk

To summarize the random walk: given a query pin q, we walk randomly through the graph G starting from q. Based on the visit counts of the candidate pins, we find the pins related to the query pin and recommend them to the user in real time. The runtime depends only on the number of steps N and is independent of the size of the input graph.
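To make the three operations concrete, here is a minimal Python sketch of the basic random walk, assuming the bipartite graph is stored as two adjacency lists (pin_to_boards and board_to_pins). These names, and the restart-with-probability-alpha way of producing short walks, are my own illustration rather than the exact implementation from the paper.

```python
import random
from collections import defaultdict

def basic_random_walk(query_pin, pin_to_boards, board_to_pins, total_steps, alpha):
    """Count how often each pin is visited by short walks starting at query_pin.

    alpha controls the expected walk length: after every pin-to-pin hop the walk
    restarts at query_pin with probability alpha (one simple way to realize the
    "short walks" described above).
    """
    visit_count = defaultdict(int)
    current_pin = query_pin
    for _ in range(total_steps):                            # N total steps
        board = random.choice(pin_to_boards[current_pin])   # operation 1: pin -> board
        current_pin = random.choice(board_to_pins[board])   # operation 2: board -> pin
        visit_count[current_pin] += 1                       # operation 3: record the visit
        if random.random() < alpha:                         # end this short walk
            current_pin = query_pin                         # and start the next one at q
    return visit_count

# Toy graph: pins q, p1, p2, p3 and boards b1, b2.
pin_to_boards = {"q": ["b1", "b2"], "p1": ["b1"], "p2": ["b2"], "p3": ["b1", "b2"]}
board_to_pins = {"b1": ["q", "p1", "p3"], "b2": ["q", "p2", "p3"]}
counts = basic_random_walk("q", pin_to_boards, board_to_pins, total_steps=10_000, alpha=0.5)
top_related = sorted(counts, key=counts.get, reverse=True)  # most related pins first
```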

Pixie is an advanced version of the random walk algorithm. Below are the improvements it makes on top of the basic random walk:
  1. Biasing the random walk towards user-specific pins
  2. Multiple query pins each with different weights
  3. Multi-hit booster that boosts pins that are related to multiple query pins
  4. Early stopping that minimizes the number of steps of the random walk while maintaining the quality of the results
1. Biasing the random walk towards user-specific pins:

Suppose you have a graph G and a query set Q. If you use the basic random walk, you end up recommending the same pins to every user who has the same query set Q. To generate personalized recommendations, the random edge selection is biased based on user-specific features. In simple terms, some edges are given more priority based on the user’s interactions. This ensures that the random walk focuses on a set of pins that are more personalized to the user, and the edge preferences can change dynamically with the user’s recent interactions.

Biasing the random walk
Thick edge (High priority) - Dashed edge (Low priority)

In practice, this idea helped Pinterest improve personalization and recommendation quality, which in turn increased user engagement. Instead of creating a different graph for each of 200+ million users, they change the edge priorities/weights during random edge selection for each user, which also saves a lot of memory. Refer to Algorithm 2 (image attached below, taken directly from the original paper) for details.
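As a hedged sketch of what biasing the edge selection might look like in code: the uniform random.choice calls in the earlier sketch are replaced by a weighted pick in which edges matching user-specific features get a higher weight. The feature matching and the boost factor below are illustrative assumptions; the paper only states that edge selection is biased by user-specific features.

```python
import random

def pick_edge_biased(neighbours, user_features, node_features, boost=5.0):
    """Pick one neighbouring node (a board from a pin, or a pin from a board),
    favouring neighbours whose features overlap with the user's features.

    user_features / node_features are illustrative placeholders for whatever
    user-specific signals are used in practice; boost is an arbitrary weight.
    """
    weights = [
        boost if node_features.get(n, set()) & user_features else 1.0
        for n in neighbours
    ]
    return random.choices(neighbours, weights=weights, k=1)[0]

# Inside the walk loop, the uniform choices become:
#   board = pick_edge_biased(pin_to_boards[current_pin], user_features, node_features)
#   current_pin = pick_edge_biased(board_to_pins[board], user_features, node_features)
```

Note that the graph itself stays shared across all users; only the sampling weights change per user, which is why no per-user graph (and no extra memory) is needed.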

2. Multiple query pins each with different weights:

We cannot generate recommendations based on just one interaction; we should consider the user’s history to generate good recommendations. Instead of a single pin, we include multiple query pins in the query set Q based on the user’s interaction history. Each pin q in Q is assigned a weight w_q based on how long ago the user interacted with it and the type of interaction. The random walk algorithm is run for each query pin q in Q, and for each candidate pin p we maintain a separate visit counter V_q[p] per query pin q.

Multiple query pins

Can we use the same walk length for all query pins? Using a constant walk length for every query pin q is not fair. For example, if a pin is very popular, it is used in many boards, so it has a high number of connected edges (a high degree). Such a pin needs more steps to obtain meaningful visit counts. So, the walk length should depend on the pin’s degree/popularity.

We achieve this step distribution by allocating the number of steps according to a function that increases sub-linearly with the query pin’s degree: each per-pin weight w_q is scaled by a scaling factor s_q.

scaling factor s_q

Total number of steps N_q

The walk length N_q for a query pin thus depends on both its weight w_q (recency and type of interaction) and its scaling factor s_q (degree/popularity): more steps are allocated to starting pins with higher degrees. This is implemented in lines 1 to 3 of Algorithm 3.

Pixie recommendations for multiple pins
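Below is a minimal sketch of this allocation. The exact scaling function s_q is given in the paper (lines 1 to 3 of Algorithm 3); here I substitute the square root of the degree as an illustrative sub-linear stand-in, so the numbers are not the paper’s.

```python
import math

def allocate_steps(query_pins, weights, pin_to_boards, total_steps):
    """Split the total step budget N across the query pins.

    weights: {query_pin: w_q}, e.g. based on recency and interaction type.
    scaling(q) stands in for the paper's sub-linear function of the pin's
    degree |E(q)|; sqrt is used here purely for illustration.
    """
    def scaling(q):
        return math.sqrt(len(pin_to_boards[q]))  # stand-in: sub-linear in |E(q)|

    s = {q: scaling(q) for q in query_pins}
    normalizer = sum(weights[q] * s[q] for q in query_pins)
    # N_q is proportional to w_q * s_q, and the N_q roughly sum to total_steps.
    return {q: int(total_steps * weights[q] * s[q] / normalizer) for q in query_pins}
```

With this shape, a popular pin (high degree) or a very recent interaction (high weight) both pull more of the step budget towards their walks.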

3. Multi-hit booster that boosts pins that are related to multiple query pins:

If you remember, we discussed maintaining a separate visit counter per query pin for each candidate pin p. How do we combine these counters? Rather than simply summing the visit counts V_q[p] of a given pin p over all query pins q ∈ Q, we reward pins that are visited from multiple different query pins q.

Formula for aggregating visit counts: V[p] = ( Σ_{q ∈ Q} √(V_q[p]) )²

For example, suppose we have 3 query pins in the query set Q. For candidate pin p1, the per-query visit counts are [16, 0, 0]; for candidate pin p2, they are [8, 8, 0]. If we simply sum the visit counts, the total is 16 for both candidate pins.

Pixie gives more priority to candidate pin p2 than to candidate pin p1, because p2 is visited from different query pins, which indicates that p2 is more related to the whole query set Q. Using the above formula, we end up with a total of 16 for candidate pin p1 and 32 for candidate pin p2 ((√16)² = 16 versus (√8 + √8)² = 32). This automatically boosts pins that are visited from several different query pins q. This is implemented in line 5 of Algorithm 3.
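Here is a small sketch of this aggregation, reproducing the worked example above (summing square roots, then squaring); the dictionary layout of the per-query counters is my own.

```python
import math

def aggregate_visit_counts(per_query_counts):
    """Combine per-query-pin visit counters with the multi-hit booster.

    per_query_counts: {query_pin: {candidate_pin: visit_count}}
    Summing square roots before squaring rewards candidates reached from
    several different query pins over those reached from only one.
    """
    candidates = {p for counter in per_query_counts.values() for p in counter}
    return {
        p: sum(math.sqrt(c.get(p, 0)) for c in per_query_counts.values()) ** 2
        for p in candidates
    }

# The example from the text: p1 has counts [16, 0, 0], p2 has counts [8, 8, 0].
counts = {"q1": {"p1": 16, "p2": 8}, "q2": {"p2": 8}, "q3": {}}
print(aggregate_visit_counts(counts))  # p1 -> 16, p2 -> 32 (up to floating point)
```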

4. Early stopping that minimizes the number of steps of the random walk while maintaining the quality of the results:

We use N_q walk steps per query pin to find pins similar to the query set Q. To generate recommendations in real time, Pixie’s runtime is critical, so reducing the number of steps without hurting quality is a big win.

The solution is to terminate the walks once the set of top candidates becomes stable, i.e., it does not change much with more steps. We do this by maintaining two integers, n_p and n_v, and terminating the walk when at least n_p candidate pins have been visited at least n_v times each. This is implemented in lines 10 to 13 of Algorithm 2.

Pixie Random Walk algorithm with early stopping
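A tiny sketch of the early-stopping check, assuming it is evaluated periodically inside the walk loop; n_p and n_v are tuning parameters, and the values shown are placeholders rather than the paper’s settings.

```python
def should_stop(visit_count, n_p, n_v):
    """Stop once at least n_p candidate pins have been visited at least n_v times."""
    pins_hit = sum(1 for count in visit_count.values() if count >= n_v)
    return pins_hit >= n_p

# Inside the walk loop (checked every few hundred steps, for example):
#   if should_stop(visit_count, n_p=1000, n_v=4):
#       break
```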

In practice, early stopping helped Pinterest achieve the same quality of results in about half the number of steps, which is a great achievement: recommendations can be generated much more quickly.

These innovations on top of the basic random walk help Pinterest serve real-time recommendations while handling a huge number of users.

I hope you enjoyed reading the article. If you are interested in learning more about the algorithm, I highly suggest going through the original paper mentioned above. The paper also covers graph pruning, which saves memory by removing unnecessary nodes and edges from the graph, as well as implementation details (efficient graph data structures), recommendation quality, performance, and stability. After reading the paper, please let me know your thoughts in the comments section below.

Thank you so much for reading my blog and supporting me. Stay tuned for my next article. If you want to receive email updates, don’t forget to subscribe to my blog. If you have any queries, please do comment in the comment section below. I will be more than happy to help you. Keep learning and sharing!!

Follow me here:
GitHub: https://github.com/Abhishekmamidi123
LinkedIn: https://www.linkedin.com/in/abhishekmamidi/
Kaggle: https://www.kaggle.com/abhishekmamidi
If you are looking for any specific blog, please do comment in the comment section below.

