

Donation Allocation Decision Aid

(Research & Algorithm Building)


412 Food Rescue (412FR) is a nonprofit based in Pittsburgh, Pennsylvania. Its mission is to prevent perfectly good food from entering the waste stream: the organization runs daily operations that rescue surplus food and redistribute it to communities in need.

412FR operates a technology platform called Food Rescue Hero, which allows volunteers to claim available rescues and transport the food. Internally, 412FR dispatchers run the platform, manually allocating donations to recipients and pushing out requests to volunteers.

However, the manual allocation process is tedious and susceptible to bias. I worked with Dr. Min Kyung Lee to design and develop a decision aid that helps 412FR dispatchers make allocation decisions more efficiently and fairly. The decision aid is based on a matching algorithm built collectively with multiple stakeholders: it learns each individual's beliefs about donation allocation, then uses the learned models and the Borda voting rule to recommend recipients for each donation.

During the summer, I worked on both software development and user research for this project. Afterward, I continued on the project with a greater focus on the design side.

This page focuses on the user research and algorithm building.

To read about the design part: click here 


Duration:  2 months
My Role: 
Conducted various user studies to understand fairness concepts and user needs
Coded and evaluated the matching algorithm
Worked on Information Architecture and dashboard wireframe
Background Research

We conducted interviews with representatives from different stakeholder groups to better understand the rescue process as well as their opinions on allocation fairness. 

We also observed on site how 412FR dispatchers make donation allocations. We looked specifically at the overall process, the additional documents they used, and the data they referred to.

Based on the interviews and observations, we built a stakeholder map and a process model.

Stakeholder Map

Process Model

We identified the breakdowns in the allocation process. 

1. Information is scattered in different places.

Dispatchers need to refer to other documents and websites (Google Maps, Excel sheets, and the 412FR web app).


2. The process relies heavily on dispatchers' experience and memory

It's hard to train new employees.

Some recipients may be ignored because the dispatchers don’t remember their information. 

Some factors may be underweighted. Distance, as the most visible piece of information on the screen, is usually valued far more than other factors.


3. The decision-making process has little input from the other stakeholders

The other stakeholders don't participate in the allocation at all. 

Exploring Solutions

At this stage, we brainstormed possible ways to assist the dispatchers in the allocation process. We explored different levels of automation, from fully manual recipient selection to a fully automatic allocation process. We also looked into the potential value that the decision aid would bring to different stakeholders in the community.

Different Levels of Automation


Impacts of decision aid

Building Algorithm

Our framework centers on direct participation in designing algorithmic governance. In other words, it enables stakeholders to have a direct influence over the matching algorithm.


To do that, we recruited 15 representatives from the four stakeholder groups (412FR staff, donors, recipients, and volunteers). We helped them build their own decision models, which were coded into the system and used to predict each person's favored recipients for each donation. The final recommendation is generated by aggregating the individual predictions using the Borda voting rule.
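As a sketch of that aggregation step, the Borda rule gives each recipient points based on its position in every individual model's ranking and sums them. The recipient names and ballots below are illustrative, not 412FR data:

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Aggregate individual rankings with the Borda rule.

    rankings: list of lists, each an ordering of recipient ids
    (best first). A recipient in position i of an n-item ranking
    earns n - 1 - i points; total points decide the final order.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, recipient in enumerate(ranking):
            scores[recipient] += n - 1 - position
    # Sort recipients by total Borda score, highest first
    return sorted(scores, key=scores.get, reverse=True)

# Three stakeholder models ranking the same four recipients
ballots = [
    ["pantry_a", "shelter_b", "church_c", "kitchen_d"],
    ["shelter_b", "pantry_a", "kitchen_d", "church_c"],
    ["pantry_a", "church_c", "shelter_b", "kitchen_d"],
]
print(borda_aggregate(ballots))  # pantry_a wins with 8 points
```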


To build the individual models, we conducted a series of four study sessions with each participant, combining a data collection survey, participatory model making, think-alouds, and interviews. Each participant created two models using two different methods and was asked to pick one to represent them in the final algorithm.

1. Identifying Inputs

Based on the background research, we defined the inputs of the algorithm. The inputs consist of the factors that stakeholders considered important to allocation fairness and the data that 412FR dispatchers referred to during allocation. These factors were later used to generate the pairwise comparison scenarios for the pairwise model and served as the factors that participants assigned scores to in the scoring model.


2. Building the pairwise model

We created a web app that mocks a donation allocation scenario: one donation and two recipients are generated, varying randomly according to the factors defined above. Participants were asked to pick the better of the two recipient options.

Each participant completed 40-50 questions. A regression analysis was then performed on the data to model how much weight each participant put on the different factors.


3. Building the scoring model

The second model that participants built is a point system. We asked participants to create rules that score potential recipients, so that the recipients with the highest scores are recommended. Once they finished, they tested how the scoring model performed on 3-5 pairwise comparisons and adjusted the rules in response.
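The regression step of the pairwise model can be sketched as a logistic (Bradley-Terry-style) fit over factor differences: the probability that a participant prefers recipient A over B is modeled from the difference of their feature values. The factor names and comparison data below are illustrative, not the study's actual inputs:

```python
import math

def fit_pairwise_weights(comparisons, factors, epochs=500, lr=0.5):
    """Learn per-factor weights from pairwise choices via logistic
    regression: P(A preferred to B) = sigmoid(w . (x_A - x_B)).

    comparisons: list of (x_a, x_b, a_chosen) where x_a / x_b map
    factor name -> normalized value and a_chosen is True if A won.
    """
    w = {f: 0.0 for f in factors}
    for _ in range(epochs):
        for x_a, x_b, a_chosen in comparisons:
            z = sum(w[f] * (x_a[f] - x_b[f]) for f in factors)
            p = 1.0 / (1.0 + math.exp(-z))        # predicted P(A wins)
            err = (1.0 if a_chosen else 0.0) - p  # log-likelihood gradient
            for f in factors:
                w[f] += lr * err * (x_a[f] - x_b[f])
    return w

# Toy data: a participant who consistently prefers the higher-poverty,
# nearer recipient (values already normalized to 0-1)
factors = ["poverty_rate", "nearness"]
comparisons = [
    ({"poverty_rate": 0.9, "nearness": 0.8}, {"poverty_rate": 0.2, "nearness": 0.3}, True),
    ({"poverty_rate": 0.1, "nearness": 0.2}, {"poverty_rate": 0.7, "nearness": 0.9}, False),
    ({"poverty_rate": 0.6, "nearness": 0.5}, {"poverty_rate": 0.3, "nearness": 0.4}, True),
]
weights = fit_pairwise_weights(comparisons, factors)
```

The fitted weights directly express how much each factor influenced this participant's choices, which is what makes the learned models easy to show back to participants.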



4. Model Comparison 


We tested both models on pairwise comparisons and examined where they agreed and disagreed. We also plotted each model to show how each input affects the decision. We then showed the results to participants and asked them to select the model that best represented their beliefs.
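A minimal sketch of the agreement check, assuming each model is a function from a recipient's normalized features to a score; the two toy models below are illustrative stand-ins for one participant's regression-based and point-system models:

```python
def agreement_rate(model_a, model_b, test_pairs):
    """Fraction of pairwise test cases where two decision models
    pick the same recipient; higher score wins each comparison."""
    agree = sum(
        (model_a(x) > model_a(y)) == (model_b(x) > model_b(y))
        for x, y in test_pairs
    )
    return agree / len(test_pairs)

# Illustrative stand-ins for one participant's two models
weighted = lambda r: 0.7 * r["poverty_rate"] + 0.3 * r["nearness"]
points = lambda r: (2 if r["poverty_rate"] > 0.5 else 0) + (1 if r["nearness"] > 0.5 else 0)

pairs = [
    ({"poverty_rate": 0.9, "nearness": 0.2}, {"poverty_rate": 0.4, "nearness": 0.9}),
    ({"poverty_rate": 0.8, "nearness": 0.7}, {"poverty_rate": 0.3, "nearness": 0.1}),
]
rate = agreement_rate(weighted, points, pairs)  # both models agree here
```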

Wireframing the Interface 

Design Goals

Present the recommendation results with valid and understandable justifications, helping 412FR dispatchers make decisions efficiently and fairly.

Interface Components

We went back to the dispatchers' current decision-making process to see what information and tools they rely on. We combined this with the additional data our algorithm provides to build a decision-aid dashboard.


Recipient Information


The final version of the interface components


1. A list of recipients ranked at the top with their basic information:  

  • Dispatchers are able to see all the top-ranked options and make comparisons between different recipients.


2. A detail view of the selected recipient:

  • Dispatchers are able to see details of a particular recipient by clicking it in the list. Information includes operational hours and contact information, the scale of each factor, details of total donations received, etc.


3. Google Map:

  • Dispatchers are able to see the locations of the donor and all recipients on the map. The route between the selected recipient and the donor is also marked.





List of recipients


Responsive Map



Besides showing users the recommended options, we wanted to provide a solid justification for the results, explaining why these recipients are ranked at the top.


We decided to focus on the following two explanations:

1. Feature performance

We highlight the features that "help" a recipient rank high in the algorithm. For example, recipients nearer to the donor are ranked higher.

2. Stakeholder preference

We show the users how the recipient ranks in different stakeholder groups.  


We explored different ways to visualize the data and make the information easier to understand. 


service value.png

Dashboard Mockups

Detail Card

Simulation & Testing

We obtained records of all donation assignments from 3/05/2018 to 8/07/2018 from 412 Food Rescue. A simulation environment was created to reallocate these donations based on the algorithm. We tested four ways of picking the recipient (from the ranked list): 1. pick the first option; 2. pick the nearest recipient; 3. randomly pick one from the top 10 options; 4. randomly pick one from all recipients.
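The four selection policies can be sketched as follows; the field names and the seeded random generator are illustrative choices, not the simulator's actual code:

```python
import random

def pick_recipient(ranked, policy, rng=None):
    """Select a recipient from an algorithm-ranked list under one of
    the four simulated selection policies."""
    rng = rng or random.Random(0)  # seeded for reproducible simulation runs
    if policy == "first":
        return ranked[0]
    if policy == "nearest":
        return min(ranked, key=lambda r: r["distance"])
    if policy == "random_top10":
        return rng.choice(ranked[:10])
    if policy == "random_all":
        return rng.choice(ranked)
    raise ValueError(f"unknown policy: {policy}")

# A toy ranked list (best first), with distances in miles
ranked = [
    {"id": 0, "distance": 4.0},
    {"id": 1, "distance": 1.5},
    {"id": 2, "distance": 3.2},
]
print(pick_recipient(ranked, "nearest"))  # the 1.5-mile recipient
```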


We compared the allocation results from human decisions with those generated by the algorithm. We analyzed results and evaluated the algorithm based on the following metrics:


1. Average distance to the donor location.

2. Min & max donations received per recipient.

3. Standard deviation of donations received per recipient.

4. Percentage of organizations that have received at least one donation (tracked over time).

5. Each recipient's regularity of receiving donations (interval between two consecutive donations).
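Several of these distribution metrics can be computed from the simulated allocation log alone; a minimal sketch, with illustrative recipient ids:

```python
from statistics import stdev

def equity_metrics(allocations, all_recipients):
    """Summarize how evenly donations were spread across recipients.

    allocations: list of recipient ids, one per completed donation.
    all_recipients: every recipient id in the system, including those
    that received nothing (so neglected recipients count as zeros).
    """
    counts = {r: 0 for r in all_recipients}
    for r in allocations:
        counts[r] += 1
    per_recipient = list(counts.values())
    return {
        "min": min(per_recipient),
        "max": max(per_recipient),
        "sd": stdev(per_recipient),
        # share of recipients that received at least one donation
        "coverage": sum(c > 0 for c in per_recipient) / len(all_recipients),
    }

# Toy log: recipient "c" was never chosen
m = equity_metrics(["a", "a", "b", "a"], ["a", "b", "c"])
```

Including never-chosen recipients in the denominator is what makes the coverage metric reveal recipients the human allocation ignored.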


Results from initial simulations with the decision aid (using historical data) showed that the system achieves significant improvements in distribution equality and in prioritizing areas with higher poverty rates and lower income levels.


Number of donations each recipient received over the period

We can see that with Human Allocation (HA, purple line), some recipients got over twenty donations while many got none. With our algorithm (red and blue lines), donations were distributed more equally among the recipients.


Distribution of Poverty Rate

Poverty rate of the locations of recipients that received donations

We can see that our algorithm (red and blue lines) helps allocate more donations to recipients in areas with higher poverty rates.

Participant Involvement

How many recipients received at least one donation

We can see that Human Allocation (purple line) ignored many recipients in the system, while our algorithm (red and blue lines) took most of the recipients into account.


Moving Forward

I continued the project in Fall 2018 and started to work more on the design side. I improved the designs we made during the summer and added features such as notes and a feedback system.

To see the design part of this project,  please click HERE
