Title | Brown, Logan_MCS_2023 |
Alternative Title | Recipes and Reactions: An Analysis of the Interaction between Photographs and Ratings |
Creator | Brown, Logan |
Collection Name | Master of Computer Science |
Description | The following Master of Computer Science thesis examines the influence of image quality on recipe selection compared to recipe ratings. |
Abstract | This thesis presents an analysis of the impact of image quality (based on peer-provided ratings) when choosing between similar recipes compared to the rating of said recipes. Image quality data was first gathered through two surveys rating images of food. That data was then used to generate a survey with sixteen comparisons of similar recipes where the image associated with each recipe was randomly selected to be a higher- or lower-quality image, and the rating associated with each recipe was randomly selected to be in the 2-, 3-, or 4-star range (out of 5 stars). Enough data was gathered to have at least two responses for every permutation of each comparison.

Statistical analysis of the gathered data indicates that gender did not have a significant impact regarding the preference of image quality, rating, or the time taken to decide. Age appeared to have impacted all three of those categories, with younger participants more strongly preferring high-quality images, high-rated recipes, and taking less time to decide. Older participants took longer to decide, picked more low-quality images than other age brackets, and placed less value on rating.

Comparing a higher-quality image and lower-rated recipe to a recipe with a lower-quality image and a higher rating, each was chosen about half the time. A strong image can clearly contribute to a lower-rated recipe being chosen, but individual preferences, such as personal preference for certain ingredients, subjective opinion about image quality, and value placed on star rating, all have significant impact. |
Subject | Gender; Photography; Psychology; Marketing research |
Keywords | Marketing; Photography; Psychology; E-commerce; Electronic word-of-mouth (eWOM) |
Digital Publisher | Digitized by Special Collections & University Archives, Stewart Library, Weber State University. |
Date | 2023 |
Medium | Theses |
Type | Text |
Access Extent | 120-page PDF; 7.81 MB
Language | eng |
Rights | The author has granted Weber State University Archives a limited, non-exclusive, royalty-free license to reproduce their theses, in whole or in part, in electronic or paper form and to make it available to the general public at no charge. The author retains all other rights. |
Source | University Archives Electronic Records: Master of Computer Science. Stewart Library, Weber State University |
OCR Text | Recipes and Reactions: An Analysis of the Interaction between Photographs and Ratings

A Thesis in the Field of Computer Science for the Degree of Master of Science in Computer Science

Weber State University
December 2023

Dr. Robert Ball, Committee Chair
Dr. Nicole Anderson, Committee Member
Dr. Sarah Herrmann, Committee Member

Copyright 2023 Logan Brown

Abstract

This thesis presents an analysis of the impact of image quality (based on peer-provided ratings) when choosing between similar recipes compared to the rating of said recipes. Image quality data was first gathered through two surveys rating images of food. That data was then used to generate a survey with sixteen comparisons of similar recipes where the image associated with each recipe was randomly selected to be a higher- or lower-quality image, and the rating associated with each recipe was randomly selected to be in the 2-, 3-, or 4-star range (out of 5 stars). Enough data was gathered to have at least two responses for every permutation of each comparison.

Statistical analysis of the gathered data indicates that gender did not have a significant impact regarding the preference of image quality, rating, or the time taken to decide. Age appeared to have impacted all three of those categories, with younger participants more strongly preferring high-quality images, high-rated recipes, and taking less time to decide. Older participants took longer to decide, picked more low-quality images than other age brackets, and placed less value on rating.

Comparing a higher-quality image and lower-rated recipe to a recipe with a lower-quality image and a higher rating, each was chosen about half the time. A strong image can clearly contribute to a lower-rated recipe being chosen, but individual preferences, such as personal preference for certain ingredients, subjective opinion about image quality, and value placed on star rating, all have significant impact.

Author's Biographical Sketch

Logan Brown lives in Syracuse, Utah with his wife of ten years, three young children, and a fourth on the way. He was born in Anchorage, Alaska and was raised in Layton, Utah by loving parents who always supported and fostered his interests. He obtained the rank of Eagle Scout with the Boy Scouts of America, graduated from Layton High School with high honors, and spent two years in Norway serving a mission for the Church of Jesus Christ of Latter-day Saints. Having always had a knack for computers and devices, computer science was a natural progression after returning to Utah. A passion for solving problems with technology led him to attend Weber State University, where he obtained both an Associate and a Bachelor of Science degree in Computer Science. This thesis represents the conclusion of a large chapter in his life spent working full-time, raising a family, and completing his formal education; a juggling act that has taught him many lessons, both academic and practical, that he will carry into the next chapter of his life.

Dedication

Dedicated to my wife Kayla, who has been by my side, supporting me, since before this journey began.

Acknowledgments

I thank my family and friends who have supported me throughout my education journey and before. Having been the personal "IT support" of those I am close to has only helped me to improve my knowledge and find more to be excited about.
Special thanks to my mother, Jolyn, and my father, Corky, for always supporting my interests and giving me the opportunities to pursue them. Thanks to my brother Landon for being one of my closest friends and strongest supporters. An extra special thanks to my wife Kayla and my children, Evylise, Brenlyn, Jakten, and Treven, for putting up with the hours of homework that have kept me from them; I am all yours now. I also thank my professors and teachers throughout my time at Weber State University. Their guidance and knowledge have been critical building blocks along my journey. A special thank you to Dr. Robert Ball for his encouragement, guidance, and collaboration on this project. Finally, I thank God for the many opportunities I have been lucky to have, for bringing those I care about into my life, and for helping me find strength and support in the face of adversity.

Table of Contents

Abstract
Author's Biographical Sketch
Dedication
Acknowledgments
Table of Contents
List of Tables
List of Figures
Introduction
Related Work
Preparing the Data
    Gathering the Recipes
    What is a High-Quality Image?
Phase One: Image Ratings
    Introduction Page
    Photo Review Page
    Conclusion Page
    Preliminary Results
        Subtle Image Differences
Phase Two: More Image Ratings
    Introduction Page
    Photo Review Page
    Conclusion Page
    Results
Phase Three: Comparing Similar Recipes
    Introduction Page
    Comparison Page
    Demographic Page
    Conclusion Page
Results
    Difference in Image Quality Preference
    Difference in Rating Range Preference
    Difference in Image Quality Preference Based on Gender
    Difference in Rating Range Preference Based on Gender
    Time Spent Related to Gender
    Difference in Image Quality Preference Based on Age
    Difference in Rating Range Preference Based on Age
    Time Spent Related to Age
    Difference in Image Quality vs Rating Range
    Image and Rating Difference Comparisons
Conclusion
References

List of Tables

Table 1: Recipe Names by Category
Table 2: Phase One Results
Table 3: Phase Two Results
Table 4: Randomly Assigned Rating Ranges
Table 5: Example of a Recipe
Table 6: Possible Permutations of Each Comparison
Table 7: Example of a Seed Set
Table 8: Image Quality Selection
Table 9: Rating Range Selection
Table 10: Rating Comparison High vs Low
Table 11: Rating Comparison High vs Medium
Table 12: Rating Comparison Medium vs Low
Table 13: Image Quality Selection by Gender
Table 14: Rating Range Selection by Gender
Table 15: Time Spent Related to Gender
Table 16: Image Quality Preference by Age
Table 17: Rating Range Preference by Age
Table 18: Time Spent Related to Age
Table 19: Image Quality vs Rating Range Selection
Table 20: Image Difference vs Rating Difference Quadrant Counts
List of Figures

Figure 1: Example of an even distribution
Figure 2: Example of a left-skewed distribution
Figure 3: Example of a right-skewed distribution
Figure 4: Introduction Page (Phase One)
Figure 5: Photo Review Page (Phase One)
Figure 6: Conclusion Page (Phase One)
Figure 7: Poorly rated Pumpkin Soup image
Figure 8: Highly rated Pumpkin Soup image
Figure 9: Introduction Page (Phase Two)
Figure 10: Photo Review Page (Phase Two)
Figure 11: Conclusion Page (Phase Two)
Figure 12: Introduction Page (Phase Three)
Figure 13: Seconds to Choose
Figure 14: Comparison Page (Phase Three)
Figure 15: Demographic Page (Phase Three)
Figure 16: Age Distribution
Figure 17: Gender Distribution
Figure 18: Conclusion Page (Phase Three)
Figure 19: Average Seconds by Age Range
Figure 20: Image Difference vs Rating Difference

Introduction

Images have a special impact on the way people make purchasing decisions. Images offer critical information about products, especially when other information is too little or too much [1]. Marketing photography is an industry of its own, with many photographers building entire careers taking photographs for the explicit purpose of selling products. The human experience is very visual, so it follows that visual elements will be a focus of every presentation. This focus on the visual means that using the highest quality images is a clear priority for any marketing endeavor.

However, there is another key component of online purchasing that plays a vital role in the decisions that purchasers make: peer-provided ratings and reviews. In almost all online stores, reviews are a crucial component. The ubiquitous and familiar five-star rating system permeates most online stores, with many rating systems showing one or two decimal digits of additional specificity between stars, similar in effect to ten- or one-hundred-point rating systems. The effect of reviews on the purchasing experience is hard to overstate. The evidence of the impact is clear, considering that unethical falsified reviews are commonplace enough to warrant an effort to produce algorithms to detect them [2].
It is also common, in my own anecdotal experience, for mobile apps to prompt the user to provide a review in the app store, reinforcing the importance of reviews. Many products are even sent free to people for the express purpose of providing a more detailed review; another entire industry on its own [3].

Considering the importance of both photographs and ratings, I wondered which of the two had a greater impact on users. Could a lower-rated option be selected simply due to a better picture? If so, how much of a rating difference might be overcome? These were the kinds of questions I set out to answer.

Many factors contribute to a purchasing decision: cost, need, size, reliability, form, and function, just to name a few. One industry that removes many of the extra considerations is that of online recipes. Recipe sharing services, such as Allrecipes.com, rely heavily on both ratings and images. Recipes on a website are free to use, and eating is an experience common to all of humanity, so recipes are easily related to and familiar. These reasons make recipes a unique environment in which to consider the choices people make.

Recipe selection is not without contributing factors that are difficult to measure. In the same way someone might prefer a certain color of an object or device, the taste preferences of individuals are varied and can be strongly held. Additionally, considerations such as food allergies or intolerances play a significant role. In the end, a pure comparison between rating and image is difficult to make, but recipe comparisons offer some significant advantages.

To this end, my research question for this thesis is the following: Can a high-quality image cause people to select a recipe with a lower rating when compared side-by-side? If so, to what degree? My hypothesis is that a high-quality image can overcome a rating deficit at least as large as half a star.

Related Work

As I researched for this project, I found that studies regarding recipes are rare, but there are many studies and articles regarding topics such as e-commerce, images in marketing, and electronic word-of-mouth (eWOM) – things such as online reviews, recommendations, and suggestions [4], with an established link to purchase intent [5] and demonstrated importance for both customers and businesses [6], [7]. Online reviews play a vital role for both consumers and manufacturers [8], with dedicated review websites demonstrating a large influence on consumer purchasing decisions [9]. Over 90% of online shoppers read reviews before purchasing, and over 83% of online purchase decisions are based on reviews [10]. Manufacturers rely heavily on these reviews [11], with some companies going so far as strategically manipulating reviews to gain influence [12].

The study that intrigued me the most was published in 2021 by Robert Zinko et al. [13] and investigated the effect of images on trust and purchase intent in eWOM, separating the products into hedonic and utilitarian categories. Zinko et al. found that images were effective for both product categories; however, they found that images were more effective for hedonic products than utilitarian products. Choosing a recipe seems likely to fall into the category of a hedonic product for most people, so I expect that the image should have a significant impact.

Another study performed by Zinko et al. was published in 2020 [1]. This study focused on information overload in online reviews, and how images influence information quality.
The study analyzed situations that had either too little or too much information provided textually, a problem further supported by [14]. In both cases, images served to increase trust by providing additional context and information, either supplementing the lack of textual information available or quickly providing information that might be missed or skipped when too much information is provided. Zinko et al. found that the results suggest that images moderate negative effects caused by an improper amount of provided text. These findings support the idea that the image is especially important to the decision, and even if the right amount of data is provided as additional context, the image will only serve to improve engagement with the product. Furthermore, images tend to be a highly valued source of information for consumers who have too much information to choose from [15]. Medical rumors with little substantiation have been found to be more trusted when presented with an image [16]. Media richness theory [17] suggests that differing media have differing capacities to facilitate understanding [18], with text-based communication being the least rich [19]. By extension, media richer than text (i.e., images) should be more effective at communicating eWOM [20].

Some interesting findings about rating consistency and bias indicate that reviewers' rating behavior across time and products is consistent. Furthermore, reviewers with a higher absolute bias in rating behavior, tending toward more extreme or differentiated reviews, receive more helpful votes, indicating that purchasers' trust in the product was more greatly affected by these reviews, whether positive or negative [21]. The data set I will be working with will not include the details of individual reviews, to isolate external factors, but such reviews clearly have a significant impact.

Another study analyzed the impact of confirmation bias on how much consumers value a review and found that consumers tend to perceive reviews that confirm their initial beliefs, formed from their own analysis of the product and aggregated review information, as more helpful [22]. This suggests that the initial beliefs formed in those first moments of viewing and analyzing the product are powerful. It is exactly those first impressions and beliefs that I am analyzing to determine the impact of the image as compared to the average rating.

As a counterpoint to the importance of the image, Hoffman and Daugherty found that image-based elements may not be the most attended to in every condition [23]. A product image is likely to be considered a marketing-produced image, which could be less trusted by consumers who recognize that the product is being framed in the best light. The findings of Hoffman and Daugherty support the notion that consumer-generated eWOM should not be ignored by marketers. Trust in third-party provided assessment seems to carry more weight in general than supplier-provided marketing materials [24]. This holds true for anonymous reviews as well, potentially due to higher amounts of self-disclosure from reviewers [25]. Purchase intent increases with active participation in social online experiences [26], influenced by degrees of trust in the source [27], trust referring to the degree of comfort and certainty regarding another party's actions [28], [29]. Additionally, trust can be defined as a willingness to be vulnerable and an expectation of not being taken advantage of [30], and it defines expectations in purchasing situations [31].
Trust in known reviewers is an important part of the impact that a reviewer's opinion has on a consumer, which is its own complex combination of factors, some of which are explored in [32]. However, this experiment will sidestep the various factors associated with individual reviewer trust by providing only anonymous aggregated review data in the form of an average rating.

When it comes to images, the composition of the image has been documented to have an effect. In the market of bed-and-breakfast short-term rentals, Liu and Guo found that image color had the greatest influence on customer choice, followed by shape and orientation [33]. While the interplay between images, reviews, and text-based information is complex, images are an effective tool for quickly communicating information to consumers [34], and there is much to be gained by including quality imagery with all products and services. In contrast to video, images communicate ideas and concepts faster [35].

Regarding food, humans interact with it in interesting ways. Consumption of food is partially rewarding to the human body because of "wanting" (incentive motivation) and "liking" (hedonic impact) the food, which have been measured in separate neural substrates [36]. Both "wanting" and "liking" are needed to experience the full reward of eating, and the dissociation of them might be linked to eating disorders [37], with the exaggeration of those signals being a potential cause [38]. This biological connection to food is a complicating factor that is difficult to reduce, but it seems likely that the hedonic "liking" aspect is at the forefront of the preferential selection of a recipe, as the motivational "wanting" aspect is likely at a similar level between similar recipes.

Preparing the Data

To prepare for this survey, several things needed to happen. The first was to gather and organize the data that would be used in the survey. This includes recipe information, the images that would be shown for each, and the rating information that would be considered with each comparison.

Gathering the Recipes

Due to the many possible factors that affect selecting one recipe over another, I considered it necessary to take steps to isolate the factors I intended to measure. One of the primary ways I reduced this complexity was to choose extremely similar recipes for each comparison. Similar recipes were determined by selecting them based on two common components in the title. Examples include two alfredo pizzas, two nut breads, two apple pies, and two pumpkin soups. Reducing the comparison to two similar recipes increases reliance on the rating and image. The flavors can be expected to be similar, at least in category, and personal preferences are largely mitigated, as an individual is likely to have a similar preference for both options. To capture a diverse array of data, sixteen comparisons of recipes were gathered, with an additional two comparisons used as warmup examples to help ensure participants understood the survey. This resulted in a total of thirty-six recipes, as seen in Table 1 below. For each recipe, two images of varying quality were also gathered, one of higher quality and one of lower quality, based on my own subjective judgement.
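The pairing itself was done by hand, but the "two common title components" rule lends itself to a simple automated pass. Below is a minimal sketch, under the assumption that candidate pairs are surfaced by shared two-word title phrases; the titles shown are examples from Table 1, and the helper names are illustrative, not part of the project's actual tooling.

    # A minimal sketch of surfacing candidate recipe pairs that share a
    # two-word title phrase (e.g., "apple pie", "nut bread"). The pairs in
    # Table 1 were curated by hand; this only illustrates how such candidates
    # could be found automatically. Titles are examples from Table 1.
    from collections import defaultdict
    from itertools import combinations

    titles = [
        "The Big Apple Pie", "Apple Pie",
        "Orange Nut Bread", "Pineapple Macadamia Nut Bread",
        "Spinach Alfredo Pizza", "Blackened Chicken Alfredo Pizza",
    ]

    def bigrams(title: str) -> set:
        """Every consecutive two-word phrase in a title, lowercased."""
        words = title.lower().split()
        return {" ".join(words[i:i + 2]) for i in range(len(words) - 1)}

    candidates = defaultdict(list)
    for a, b in combinations(titles, 2):
        for phrase in bigrams(a) & bigrams(b):
            candidates[phrase].append((a, b))

    for phrase, pairs in candidates.items():
        print(phrase, "->", pairs)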
Table 1: Recipe Names by Category
Category | First Recipe Name | Second Recipe Name
Pasta Salad* | Pasta Salad | Rainbow Pasta Salad
Chocolate Chip Cookie* | Chef John's Chocolate Chip Cookies | Chocolate Chip Cookies V
White Bread | Amish White Bread | Baxis White Bread
Apple Pie | The Big Apple Pie | Apple Pie
Chocolate Cake | Too Much Chocolate Cake | Best Chocolate Cake
Alfredo Pizza | Spinach Alfredo Pizza | Blackened Chicken Alfredo Pizza
Blueberry Muffin | Blueberry Muffins II | Easy Blueberry Muffins I
Meat Loaf | Brown Sugar Meatloaf | Slower Cooker Meatloaf
Buttermilk Pancake | Truck-Stop Buttermilk Pancakes | Buttermilk Pancakes II
Cilantro Burger | Cilantro Burgers | Guacamole Cilantro Lime Cheeseburger
French Onion Soup | Fall French Onion Soup | Rich and Simple French Onion Soup
Fried Ice Cream | Tempura Fried Ice Cream | Fried Ice Cream
Pumpkin Soup | Harvest Pumpkin Soup | Pumpkin Soup
Cinnamon Roll | Speedy Cinnamon Rolls | Heavenly Cinnamon Rolls
Jelly Doughnut | Super Easy Jelly Doughnuts | Jelly Doughnuts
Lemon Chicken | Lemon Basil Grilled Chicken | Lemon Pepper Chicken I
Strawberry Cheesecake | Strawberry Cheesecake | No Bake Strawberry Cheesecake
Nut Bread | Orange Nut Bread | Pineapple Macadamia Nut Bread
* - Recipes used as control/warmup recipes, not shown with differing permutations, nor included in analysis

What is a High-Quality Image?

When considering how much the quality of an image affects a person's decision, one of the first questions is what, exactly, makes a given image better than another? There are many things that affect the quality of an image, and it would be accurate to say that knowing and utilizing those techniques is the foundation of a marketing photographer's skill set.

One such factor is composition, the arrangement of subjects in a photograph. As detailed by Armendariz, a photograph can be composed in many ways. The subject might be centered in the image, or in only one third of the image. Negative and positive space might be used to draw attention or evoke emotion. An extremely close shot might provide focus and illuminate key details, or lack the necessary context of a broader picture. Certain angles that highlight one dish's qualities well might represent another dish poorly [39, Ch. 8].

Lighting also plays a key role in the presentation of a photograph. Light varies in quality from warm to cool, natural to artificial, harsh to gentle, and many other potential descriptors. Its use in photography can also be modified through techniques like diffusion or reflection, or it could be placed carefully to highlight certain parts of the subject [39, Ch. 4].

Depth of field, or selective focus, refers to the technique of having only part of a photograph in focus. This can be used to highlight or draw attention to certain parts of the photograph, but also results in information being lost in unfocused areas. Careful use can give a photograph an artistic and professional look, while overuse might result in photographs looking fake or lacking enough information [39, Ch. 3].

These are only a few of the many photography techniques that could have a dramatic impact on the perceived quality of a photograph, and by association the perceived quality of the subject of that photograph, in our case, food. When it comes to food specifically, the dish itself has another set of criteria that contribute to the quality. When marketing a recipe, it would stand to reason that the best possible example of the result should be captured to represent the recipe. Is the food sufficiently cooked, but not overcooked?
Is the texture appealing? Is the color of the food appetizing and attractive? Is the dish styled with a garnish or other finishing touches? Each of these considerations affects the desirability of the food.

All these attributes and techniques come into play when trying to capture a high-quality image of food, but in the end, the actual quality of a given image is subjective. Presented with the challenge of identifying which pictures are better than others, the most empirical method of gathering the information was clear: ask the people. Thus, I performed a preliminary survey to identify the quality of each image.

Phase One: Image Ratings

The first phase presented a set of thirty-two images and aimed to analyze whether participants would agree on the overall quality of an image. It was determined that participants agreed with each other about the quality of a particular image, at least in aggregate. The ratings for most images followed a normal distribution, where the average centered on a particular rating, with tails tapering away. See Figure 1, Figure 2, and Figure 3.

Figure 1: Example of an even distribution
Figure 2: Example of a left-skewed distribution
Figure 3: Example of a right-skewed distribution

IRB approval was received before distributing the survey, and over ninety (90) data points were received for each image. Not all participants completed the survey, but all images illustrated a consensus about quality, and each image was then given a rating equal to the average of all the provided ratings. Demographic information was not gathered during this phase. Results for all the images rated in phase one can be seen in Appendix A.

The first survey website was written using the AngularJS JavaScript framework, paired with Bootstrap elements, along with jQuery, Bootstrap Star Rating, Chart.js, and Popper.js as additional JavaScript libraries.

Introduction Page

The Introduction Page consists of some text thanking the user for their participation, some information about what to expect, and some description of how to perform the task. The purpose of this page was simply to welcome and inform the participants. It was discovered in testing that participants are unlikely to internalize this amount of text, so the crucial instruction, to judge photographs on quality and not personal preference, was bolded to set it apart and make it more visible. See Figure 4.

Figure 4: Introduction Page (Phase One)

Photo Review Page

The primary page of the first survey was the Photo Review page. This is how each participant submitted their rating for each image. In the first phase, this data was additionally analyzed to see if participants chose to identify the issue(s) they saw with the image. The following elements were included to provide the necessary tools and context for the participants (see Figure 5 for an example):

• Progress indicator to inform the user of how many photographs remain.
• Title of the recipe, for the context of what they are seeing.
• Image to review.
• Five-star interface to assign a rating.
• Optional dropdown menu to choose a problem with the picture.
• Text input to add a custom problem (only visible if "Other" is selected as a problem).
• Submit button to log the rating and advance to the next image.
Figure 5: Photo Review Page (Phase One)

The title of the recipe was not originally intended to be included, to reduce variables in the participants' assessment, but it was determined that the context of knowing what the image is supposed to represent was too important to the task of rating the image.

Conclusion Page

The final page simply thanked the participant and offered a chance for another participant to fill out the survey. See Figure 6.

Figure 6: Conclusion Page (Phase One)

Preliminary Results

The data from the first phase of the project was collected and analyzed, and each image was assigned the average of all the ratings given to it. Because a consensus was observed for all the images, the average represents the best indication of the quality of the image and gives us a metric that can be used for the third phase of the experiment. The results are seen in Table 2, with more detailed results for phase one in Appendix A.

Table 2: Phase One Results
Comparison | Recipe | Image Quality | Image ID | Avg. Image Rating
White Bread | Amish White Bread | L | 1 | 2.4615
White Bread | Amish White Bread | H | 2 | 4.0215
White Bread | Baxis White Bread | H | 15 | 3.5699
White Bread | Baxis White Bread | L | 16 | 2.3163
Nut Bread | Orange Nut Bread | H | 3 | 2.2000
Nut Bread | Orange Nut Bread | L | 4 | 2.0330
Nut Bread | Pineapple Macadamia Nut Bread | L | 9 | 2.8544
Nut Bread | Pineapple Macadamia Nut Bread | H | 10 | 3.0097
Meat Loaf | Brown Sugar Meatloaf | L | 5 | 1.6344
Meat Loaf | Brown Sugar Meatloaf | H | 6 | 3.0215
Meat Loaf | Slower Cooker Meatloaf | H | 7 | 2.6522
Meat Loaf | Slower Cooker Meatloaf | L | 8 | 1.6122
Alfredo Pizza | Blackened Chicken Alfredo Pizza | H | 11 | 3.3871
Alfredo Pizza | Blackened Chicken Alfredo Pizza | L | 12 | 1.8980
Alfredo Pizza | Spinach Alfredo Pizza | H | 13 | 2.4583
Alfredo Pizza | Spinach Alfredo Pizza | L | 14 | 2.0000
Apple Pie | The Big Apple Pie | H | 17 | 3.5098
Apple Pie | The Big Apple Pie | L | 18 | 2.4674
Apple Pie | Apple Pie | L | 19 | 2.4457
Apple Pie | Apple Pie | H | 20 | 3.6129
Pumpkin Soup | Harvest Pumpkin Soup | H | 21 | 3.0645
Pumpkin Soup | Harvest Pumpkin Soup | L | 22 | 2.8776
Pumpkin Soup | Pumpkin Soup | L | 23 | 2.3200
Pumpkin Soup | Pumpkin Soup | H | 24 | 4.0000
Lemon Chicken | Lemon Basil Grilled Chicken | H | 25 | 3.9891
Lemon Chicken | Lemon Basil Grilled Chicken | L | 26 | 2.6195
Lemon Chicken | Lemon Pepper Chicken | L | 27 | 1.7938
Lemon Chicken | Lemon Pepper Chicken | H | 28 | 3.3061
Cilantro Burger | Guacamole Cilantro Lime Cheeseburger | H | 29 | 4.0510
Cilantro Burger | Guacamole Cilantro Lime Cheeseburger | L | 30 | 2.2581
Cilantro Burger | Cilantro Burgers | H | 31 | 3.6452
Cilantro Burger | Cilantro Burgers | L | 32 | 1.4565

Subtle Image Differences

One noteworthy observation comes from two remarkably similar images that received significantly different ratings. The recipe in question is Pumpkin Soup. The first image, Figure 7, is slightly darker, the soup appears a little thinner, the color is a little more yellow, and a dried garnish was sprinkled in a somewhat clumped area. The second image, Figure 8, has a bit more vibrancy, with some texture swirled into the soup, a more orange color, and a fresh-looking garnish sprinkled evenly across the dish. Each of the differences is small, but when considered all together the effect is clear. The first image received an average of 2.32 stars, while the second received an average of 4.0 stars. This result was remarkable in the data set, as it had a significant difference in rating while the images were comparatively similar. The primary differences can be attributed to lighting and attention to detail. This result speaks to the substantial improvement that can be gained from high-quality photography and presentation.

Figure 7: Poorly rated Pumpkin Soup image
Figure 8: Highly rated Pumpkin Soup image

Phase Two: More Image Ratings

After completing the first phase, more analysis was done to consider how many recipes and images would be needed for a useful data set on which to perform the final survey. We determined that a total of sixteen comparisons would provide satisfactory data, but as can be seen in the phase one results in Table 2, only enough images for eight comparisons had been gathered. Therefore, an additional set of thirty-two images was gathered to capture a larger data set.
After capturing these additional image ratings, the final comparisons in phase three include sixty-four total images, associated with thirty-two recipes, organized into sixteen pairs of similar recipes.

The passage of time and a change in hosting environment led to a redesign of the phase one website to gather the ratings needed for the final phase. This version of the website is considered here as phase two. The pages represent the same concepts as those presented in phase one, but the main page was simplified to remove the option of identifying a problem with the image. The new design of the website was built using the Next.js React-based JavaScript framework, with UI elements from the PrimeReact open-source UI library. Hosting for the website was provided through Vercel, a service that provides development tools and hosting specifically geared toward the Next.js framework. Storage was provided through MongoDB, a non-relational document database technology with a cloud-hosted service. These same design and hosting tools were carried forward into the final phase of the project.

Introduction Page

Like phase one, the introduction simply describes the survey the participant will be taking and sets expectations about how they will interact with it. After the description, a button is provided to begin the survey. See Figure 9.

Figure 9: Introduction Page (Phase Two)

Photo Review Page

The primary page of the survey is the Photo Review page, where the participant is presented with the image to be rated, a simple 5-star rating interface to choose their rating, and a "Next" button to advance to the next image after they have made their selection. The recipe name is provided, as well as a progress indicator at the top right to let the participant know how many images remain to be rated. See Figure 10. Note that the optional information from phase one regarding the participant's perceived problem with the image was left out of this phase, as that information is not used in the final phase of the project.

Figure 10: Photo Review Page (Phase Two)

Conclusion Page

The conclusion page thanks the participant, gives them a small amount of information about the purpose of the study, and offers the opportunity for another user to take the survey. See Figure 11.

Figure 11: Conclusion Page (Phase Two)

Results

The results of phase two were gathered only for the purpose of comparatively identifying high- and low-quality images and were used to inform the third phase. The results are summarized in Table 3, where each image is associated with a recipe and each recipe is paired into a comparison. Phase two, combined with the data from phase one in Table 2, provides the additional information required to have sixteen comparisons moving into phase three. More detailed results for each image from phase two are available in Appendix B.
Table 3: Phase Two Results
Comparison | Recipe | Image Quality | Image ID | Avg. Image Rating
Blueberry Muffin | Easy Blueberry Muffins I | L | 33 | 2.7794
Blueberry Muffin | Easy Blueberry Muffins I | H | 34 | 3.2500
Blueberry Muffin | Blueberry Muffins II | H | 35 | 2.8209
Blueberry Muffin | Blueberry Muffins II | L | 36 | 2.7101
Cinnamon Roll | Heavenly Cinnamon Rolls | H | 37 | 3.2353
Cinnamon Roll | Heavenly Cinnamon Rolls | L | 38 | 3.1884
Cinnamon Roll | Speedy Cinnamon Rolls | H | 39 | 3.2319
Cinnamon Roll | Speedy Cinnamon Rolls | L | 40 | 3.0286
Strawberry Cheesecake | Strawberry Cheesecake | H | 41 | 4.2676
Strawberry Cheesecake | Strawberry Cheesecake | L | 42 | 4.0423
Strawberry Cheesecake | No Bake Strawberry Cheesecake | H | 43 | 2.6418
Strawberry Cheesecake | No Bake Strawberry Cheesecake | L | 44 | 1.9571
French Onion Soup | Rich and Simple French Onion Soup | L | 45 | 2.7429
French Onion Soup | Rich and Simple French Onion Soup | H | 46 | 3.6176
French Onion Soup | Fall French Onion Soup | L | 47 | 2.3824
French Onion Soup | Fall French Onion Soup | H | 48 | 3.3429
Chocolate Cake | Too Much Chocolate Cake | H | 49 | 4.1127
Chocolate Cake | Too Much Chocolate Cake | L | 50 | 3.1618
Chocolate Cake | Best Chocolate Cake | H | 51 | 3.6471
Chocolate Cake | Best Chocolate Cake | L | 52 | 2.3971
Jelly Doughnut | Jelly Doughnuts | H | 53 | 3.6212
Jelly Doughnut | Jelly Doughnuts | L | 54 | 1.8714
Jelly Doughnut | Super Easy Jelly Doughnuts | H | 55 | 2.7887
Jelly Doughnut | Super Easy Jelly Doughnuts | L | 56 | 2.5652
Buttermilk Pancake | Buttermilk Pancakes II | H | 57 | 3.6324
Buttermilk Pancake | Buttermilk Pancakes II | L | 58 | 3.0423
Buttermilk Pancake | Truck-Stop Buttermilk Pancakes | H | 59 | 4.1429
Buttermilk Pancake | Truck-Stop Buttermilk Pancakes | L | 60 | 3.2000
Fried Ice Cream | Fried Ice Cream | L | 61 | 2.6286
Fried Ice Cream | Fried Ice Cream | H | 62 | 4.1304
Fried Ice Cream | Tempura Fried Ice Cream | L | 63 | 2.3529
Fried Ice Cream | Tempura Fried Ice Cream | H | 64 | 3.8676
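The Avg. Image Rating column in Tables 2 and 3 is the plain mean of every rating an image received, and the comparatively higher average within each recipe pair determines the H label. A minimal sketch of that aggregation, assuming the responses are available as (image_id, recipe, rating) records; the field names are illustrative, as the surveys actually stored responses in MongoDB:

    # A minimal sketch of the aggregation behind the Avg. Image Rating column
    # in Tables 2 and 3, and of labeling the comparatively higher-rated image
    # in each recipe pair as H. Field names are illustrative.
    from collections import defaultdict
    from statistics import mean

    # (image_id, recipe, rating) records from the rating surveys (sample data).
    responses = [
        (23, "Pumpkin Soup", 2), (23, "Pumpkin Soup", 3),
        (24, "Pumpkin Soup", 4), (24, "Pumpkin Soup", 4),
    ]

    ratings_by_image = defaultdict(list)
    recipe_of = {}
    for image_id, recipe, rating in responses:
        ratings_by_image[image_id].append(rating)
        recipe_of[image_id] = recipe

    averages = {img: mean(vals) for img, vals in ratings_by_image.items()}

    # Each recipe has exactly two images; the higher average is labeled H.
    images_by_recipe = defaultdict(list)
    for img, avg in averages.items():
        images_by_recipe[recipe_of[img]].append((avg, img))
    for recipe, pair in images_by_recipe.items():
        (lo_avg, lo_img), (hi_avg, hi_img) = sorted(pair)
        print(f"{recipe}: H = image {hi_img} ({hi_avg:.4f}), "
              f"L = image {lo_img} ({lo_avg:.4f})")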
Phase Three: Comparing Similar Recipes

The final phase of the project is where I hoped to gain the most insight and information about how people consider a decision such as selecting a recipe or making a purchase. The information gathered in the previous phases was gathered for that purpose. As a result, each comparison was between similar recipes, and designed to eliminate additional factors without fully eliminating the pieces of a recipe that are critical to its composition. The purpose of the first two phases of the experiment was to gather pairs of images with empirical, survey-generated data about the quality of each image, to use in analyzing the effect that a high- or low-quality image has on how a recipe is compared to a similar recipe. After gathering this information, each comparison could be made.

The two recipes in each comparison were shown side by side. The number of ingredients was made the same by discarding extra ingredients from the recipe with more ingredients. Similarly, the directions of the longer recipe were trimmed to the same length as the shorter recipe's. Additionally, a maximum direction length of six hundred characters was introduced to keep both recipes from showing extensive directions, with an ellipsis appended at the end of the trimmed directions. All these changes were made to reduce confounding variables.

Each participant was presented with a set of eighteen comparisons: the sixteen comparisons gathered in the first two phases (Table 2 and Table 3), plus two warm-up comparisons that were the same for all participants and were not used in the data analysis. These first two comparisons were added to introduce a warm-up period to help the participant understand the task.

To gather data about how the rating impacts the comparison decision, each recipe received a generated star-rating in three ranges: high (H), medium (M), and low (L). This allows each recipe to be shown in its best form in some comparisons (high-quality image and high rating) and in its worst form in others (low-quality image and low rating). For each recipe, a rating was generated for each of the three ranges (H, M, and L) according to Table 4. These ranges were chosen to stay within realistic ranges of what would be seen as user-generated ratings, to help the participants judge the rating as if it were an authentic, peer-provided rating. To keep the comparisons similar, each recipe was also assigned a specific number of ratings between three hundred fifteen (315) and four hundred fifty (450).

Table 4: Randomly Assigned Rating Ranges
Category | Lower Limit | Upper Limit
High | 4.00 | 4.85
Medium | 3.00 | 3.85
Low | 2.00 | 2.85

After generating the rating values and number of ratings, and adjusting the lengths of the ingredients and directions for each recipe, a given recipe contained data as outlined in Table 5. This provides all the information needed to retrieve every combination of the recipes presented in each comparison. The full list of recipes is available in Appendix C.

Table 5: Example of a Recipe
Title: Amish White Bread
Recipe ID: 6788
Images: Higher-Quality (Image ID: 2*), Avg. Rating 4.0215; Lower-Quality (Image ID: 1*), Avg. Rating 2.4615
Number of Ratings: 434
Recipe Ratings: High 4.54 | Medium 3.32 | Low 2.33
Ingredients: 2 cups warm water (110 degrees F/45 degrees C); ⅔ cup white sugar; 1 ½ tablespoons active dry yeast; ¼ cup vegetable oil; 1 ½ teaspoons salt; 6 cups bread flour
Directions: Dissolve sugar in warm water in a large bowl, and then stir in yeast. Allow to proof until yeast resembles a creamy foam, 5 to 10 minutes. Mix oil and salt into the yeast. Mix in flour one cup at a time. Knead dough on a lightly floured surface until smooth. Place in a well-oiled bowl, and turn dough to coat. Cover with a damp cloth. Allow to rise until doubled in bulk, about 1 hour. Punch dough down. Knead for a few minutes, and divide in half. Shape into loaves, and place into two well-oiled 9x5-inch loaf pans. Allow to rise until dough has topped the pans by one inch, about 30 minutes...
* - Reference to the image as identified in Table 2/Table 3 or Appendix A/B

With the data for each recipe gathered, each comparison can be presented in one of thirty-six ways, ranging from both recipes' best image and highest rating to both recipes' worst image and lowest rating, and everything in between. Each permutation is shown in Table 6.

Table 6: Possible Permutations of Each Comparison
Iteration | Recipe 1 (Image, Rating) | Recipe 2 (Image, Rating)
1 | H, H | H, H
2 | H, H | H, M
3 | H, H | H, L
4 | H, M | H, H
5 | H, M | H, M
6 | H, M | H, L
7 | H, L | H, H
8 | H, L | H, M
9 | H, L | H, L
10 | H, H | L, H
11 | H, H | L, M
12 | H, H | L, L
13 | H, M | L, H
14 | H, M | L, M
15 | H, M | L, L
16 | H, L | L, H
17 | H, L | L, M
18 | H, L | L, L
19 | L, H | H, H
20 | L, H | H, M
21 | L, H | H, L
22 | L, M | H, H
23 | L, M | H, M
24 | L, M | H, L
25 | L, L | H, H
26 | L, L | H, M
27 | L, L | H, L
28 | L, H | L, H
29 | L, H | L, M
30 | L, H | L, L
31 | L, M | L, H
32 | L, M | L, M
33 | L, M | L, L
34 | L, L | L, H
35 | L, L | L, M
36 | L, L | L, L

To capture all thirty-six permutations for each pair of recipes, the permutations were randomly distributed into thirty-six different sets, each containing a version of all sixteen comparisons, ensuring every participant received a variety of comparison types. Each participant was given one of these sets of sixteen comparisons for their session. Every session was assigned a seed number at the start, zero through thirty-five, which determined which of the predefined sets they were presented with. Each of the sets will be referenced as "seeds" throughout the remainder of the project.
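The displayed ratings, permutations, and seed sets described above can be generated mechanically. The following is a minimal sketch reconstructed from that description, not the project's actual generation script; the names and structure are illustrative:

    # A minimal sketch of generating the survey artifacts described above: a
    # displayed rating drawn from each range in Table 4, trimmed directions,
    # the 36 permutations of Table 6, and their distribution into 36 seed
    # sets. Reconstructed from the text; not the project's actual script.
    import random
    from itertools import product

    RATING_RANGES = {"H": (4.00, 4.85), "M": (3.00, 3.85), "L": (2.00, 2.85)}

    def generate_ratings() -> dict:
        """One displayed rating per range (Table 4), rounded to two digits."""
        return {cat: round(random.uniform(lo, hi), 2)
                for cat, (lo, hi) in RATING_RANGES.items()}

    def trim_directions(text: str, limit: int = 600) -> str:
        """Cap directions at 600 characters and append an ellipsis."""
        return text if len(text) <= limit else text[:limit] + "..."

    # Each recipe is shown in one of six states (image quality x rating
    # range); a comparison pairs two recipes, giving the 36 rows of Table 6.
    states = list(product("HL", "HML"))
    permutations = list(product(states, states))  # 36 ordered pairs

    # Distribute the permutations of each comparison across 36 seed sets, so
    # every seed holds one permutation of all 16 comparisons and every
    # permutation of a comparison appears in exactly one seed.
    NUM_COMPARISONS = 16
    seeds = [[None] * NUM_COMPARISONS for _ in range(36)]
    for comparison in range(NUM_COMPARISONS):
        for seed_index, perm in enumerate(random.sample(permutations, k=36)):
            seeds[seed_index][comparison] = perm

    num_ratings = random.randint(315, 450)  # displayed "number of ratings"
    print(generate_ratings(), num_ratings, seeds[0][0])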
An example of a seed set can be seen in Table 7, and the full list of seeds is available in Appendix D. The Recipe ID in Table 7 references a recipe as defined in Appendix C.

Table 7: Example of a Seed Set (Seed 0)
Comparison | Recipe 1 (ID*, Image Quality, Recipe Rating) | Recipe 2 (ID*, Image Quality, Recipe Rating)
0 | 6751, H, H | 17892, H, M
1 | 6788, H, H | 17704, L, H
2 | 6805, L, M | 8427, H, L
3 | 7516, L, L | 7565, L, M
4 | 9142, H, L | 9191, H, L
5 | 16268, L, H | 24197, L, H
6 | 53676, H, M | 90248, L, H
7 | 72079, H, M | 218905, H, M
8 | 7181, L, L | 75590, H, L
9 | 24530, H, H | 76448, L, M
10 | 25203, L, M | 92992, H, L
11 | 13309, H, L | 93611, H, H
12 | 120772, L, M | 147218, L, L
13 | 121794, H, M | 234410, H, H
14 | 20988, L, M | 173558, L, H
15 | 16687, H, L | 236708, L, L
* - Reference to the recipe as defined in Appendix C.

Introduction Page

Like the first two phases, the introduction includes a brief description of the task. In addition, I added a short video clip to illustrate and describe the task in the hopes that more participants would be prepared. See Figure 12.

Figure 12: Introduction Page (Phase Three)

Comparison Page

The primary page participants used was the comparison page, which consisted of two recipes displayed side-by-side. Each recipe would have its high, medium, or low rating shown along with its higher- or lower-quality image, as determined by the seed the participant had been assigned. The rest of the recipe details, including the title, number of ratings, ingredients, and directions, were the same for all participants across all seeds.

To select a recipe, the participant would click on any portion of the recipe card on the right or left side of the page. After a selection was made, the button to advance, with a "Next" label, would be enabled to allow them to submit their decision. The participants were free to change their selection as many times as they wanted before submitting it and advancing to the next comparison. There was no time limit placed on how long they had to decide, but the time between when they were first shown the comparison and when they submitted their decision was recorded. See Figure 13 for a distribution of the time taken to choose a response. The chart is limited to 60 seconds, which contains 97% of responses.

Figure 13: Seconds to Choose

An example comparison is seen in Figure 14: Harvest Pumpkin Soup (ID: 9142) compared to Pumpkin Soup (ID: 9191). This comparison (seed 4, comparison 4) has the first recipe shown with its higher-quality image and its low rating (2.76) compared to the second recipe shown with its higher-quality image and its medium rating (3.07).

Figure 14: Comparison Page (Phase Three)

Demographic Page

The Demographic Page was presented after all comparisons had been submitted. It captured the two pieces of demographic information considered most relevant for this topic: age and gender. See Figure 15. After submitting the demographic information, the participant was taken to the conclusion page.

Figure 15: Demographic Page (Phase Three)

The age distribution of the participants is available in Figure 16. Note that the minimum required age was eighteen. The gender distribution is available in Figure 17.

Figure 16: Age Distribution
Figure 17: Gender Distribution

Conclusion Page

The conclusion page very simply thanked participants for their participation and allowed them to restart the process if an additional participant was available. See Figure 18.

Figure 18: Conclusion Page (Phase Three)

Results

Surveys for the final phase were taken until at least two usable samples were received for each seed of the survey (see Appendix D for the full list of seeds). Responses were discarded unless they were part of a survey that was fully completed by a person aged eighteen or older, for IRB reasons.
The initial two comparisons for every survey were the same and were used to ensure participants were familiar with the format of the survey, and thus were not included in the results. The result was a total of eighty completed surveys by participants between ages eighteen and seventy-two.

The response data was aggregated into a comma-separated value (CSV) file to be analyzed. After organizing all the gathered data into this format, the statistical analyses provided below were produced using a script written in Python. In addition, an a priori significance threshold of 0.05 was used to determine whether to reject the null hypothesis in each evaluation.

Difference in Image Quality Preference

Using a chi-squared goodness of fit test to analyze which image quality was preferred (as seen in Table 8) yields a p-value of approximately 1.36045x10^-5. This value is well below the threshold of 0.05, indicating that the null hypothesis can be rejected. Image quality clearly affected the results, with higher-quality images being preferred overall.

Table 8: Image Quality Selection
Quality | Observed | Expected
H | 721 | 643
L | 565 | 643
Total | 1286 | 1286
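The tests in this section and the next can be reproduced with scipy. Below is a minimal sketch using the counts from Table 8 and, further down, Table 13; this reconstructs the reported analyses from the published tables and is not the author's actual analysis script:

    # A minimal sketch of the chi-squared tests reported in this chapter,
    # using scipy. Reconstructed from the published tables; not the author's
    # actual analysis script.
    from scipy.stats import chisquare, chi2_contingency

    # Goodness of fit: observed image-quality selections vs a uniform
    # expectation (Table 8).
    observed = [721, 565]
    expected = [643, 643]
    stat, p = chisquare(f_obs=observed, f_exp=expected)
    print(f"image quality GoF: p = {p:.5e}")  # ~1.36045e-05

    # Independence: image-quality selection split by gender (Table 13);
    # chi2_contingency takes the raw contingency counts.
    table_13 = [[329, 242],   # female: H, L
                [392, 323]]   # male:   H, L
    stat, p, dof, expected_counts = chi2_contingency(table_13)
    print(f"quality vs gender: p = {p:.5f}")  # ~0.34406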
Difference in Rating Range Preference

Using a chi-squared goodness of fit test to analyze which rating range was preferred (as seen in Table 9) yields a p-value of approximately 5.60206x10^-11. This value is well below the threshold of 0.05, indicating that the null hypothesis can be rejected. Recipe rating clearly affected the results, with high (H) rated recipes being chosen most often, medium (M) next most often, and low (L) least often.

Table 9: Rating Range Selection
Rating | Observed | Expected
H | 519 | 428
M | 446 | 429
L | 321 | 429
Total | 1286 | 1286

I performed additional analyses comparing each pair of rating ranges to confirm this finding. In Table 10, a p-value of 8.39440x10^-12 is below the threshold of 0.05, confirming that high (H) ratings were more frequently chosen than low (L) ratings.

Table 10: Rating Comparison High vs Low
Rating | Observed | Expected
H | 519 | 420
L | 321 | 420
Total | 840 | 840

In Table 11, a p-value of 0.01721 is below the threshold of 0.05, confirming that high (H) ratings were more frequently chosen than medium (M) ratings.

Table 11: Rating Comparison High vs Medium
Rating | Observed | Expected
H | 519 | 482
M | 446 | 483
Total | 965 | 965

In Table 12, a p-value of 7.55631x10^-6 is below the threshold of 0.05, confirming that medium (M) ratings were more frequently chosen than low (L) ratings.

Table 12: Rating Comparison Medium vs Low
Rating | Observed | Expected
M | 446 | 384
L | 321 | 383
Total | 767 | 767

Difference in Image Quality Preference Based on Gender

Using a chi-squared test for independence to analyze which quality of image was preferred, separated by gender (as seen in Table 13), yields a p-value of approximately 0.34406. This value is above the threshold of 0.05, indicating that the null hypothesis cannot be rejected. Thus, the conclusion is that gender did not have a meaningful impact on which quality of image would be chosen.

Table 13: Image Quality Selection by Gender
Gender | H | L | Both
Female | 329 | 242 | 571
Male | 392 | 323 | 715
Both | 721 | 565 | 1286

Difference in Rating Range Preference Based on Gender

Using a chi-squared test for independence to analyze which range of rating was preferred, separated by gender (as seen in Table 14), yields a p-value of approximately 0.87992. This value is above the threshold of 0.05, indicating that the null hypothesis cannot be rejected. Thus, the conclusion is that gender did not have a meaningful impact on which range of rating would be chosen.

Table 14: Rating Range Selection by Gender
Gender | H | M | L | All
Female | 234 | 198 | 139 | 571
Male | 285 | 248 | 182 | 715
Both | 519 | 446 | 321 | 1286

Time Spent Related to Gender

Using a one-way ANOVA to analyze the difference between gender and the average time to respond (as seen in Table 15) yields a p-value of approximately 0.69072. This value is above the threshold of 0.05, indicating that the null hypothesis cannot be rejected. Thus, the conclusion is that gender did not have a meaningful impact on the average time to respond.

Table 15: Time Spent Related to Gender
Source | DF | Sum Sq | Mean Sq | F | PR(>F)
Gender | 1.0 | 29.371638 | 29.371638 | 0.158375 | 0.690723
Residual | 1284.0 | 238125.302791 | 185.455843 | NaN | NaN

Difference in Image Quality Preference Based on Age

Using a chi-squared test for independence to analyze which image quality was preferred by participants, based on age (as seen in Table 16), yields a p-value of approximately 0.03527. This value is below the threshold of 0.05, indicating that the null hypothesis can be rejected. Thus, the conclusion is that age had a meaningful impact on preferred image quality. The minimum accepted age was eighteen, and based on the age distribution of the participants, the age ranges were normalized into equal-sized bins for the analysis.

Table 16: Image Quality Preference by Age
Age Range | H | L | Both
[18.0, 28.8] | 213 | 185 | 398
(28.8, 39.6] | 233 | 180 | 413
(39.6, 50.4] | 124 | 67 | 191
(50.4, 61.2] | 72 | 52 | 124
(61.2, 72.0] | 79 | 81 | 160
All | 721 | 565 | 1286

Difference in Rating Range Preference Based on Age

Using a chi-squared test for independence to analyze which rating range was preferred, based on age (as seen in Table 17), yields a p-value of approximately 0.02189. This value is below the threshold of 0.05, indicating that the null hypothesis can be rejected. Thus, the conclusion is that age had a meaningful impact on the preferred rating range. The same equal-sized age bins were used.

Table 17: Rating Range Preference by Age
Age Range | H | M | L | Both
[18.0, 28.8] | 170 | 141 | 87 | 398
(28.8, 39.6] | 180 | 136 | 97 | 413
(39.6, 50.4] | 60 | 80 | 51 | 191
(50.4, 61.2] | 47 | 34 | 43 | 124
(61.2, 72.0] | 62 | 55 | 43 | 160
All | 519 | 446 | 321 | 1286

Time Spent Related to Age

Using a one-way ANOVA to analyze the difference between age and the average time to respond (as seen in Table 18) yields a p-value of approximately 0.000006. This value is below the threshold of 0.05, indicating that the null hypothesis can be rejected. Thus, the conclusion is that age had a meaningful impact on the average time to respond.

Table 18: Time Spent Related to Age
Source | DF | Sum Sq | Mean Sq | F | PR(>F)
Age | 1.0 | 3799.854671 | 3799.854671 | 20.818916 | 0.000006
Residual | 1284.0 | 234354.819758 | 182.519330 | NaN | NaN

The impact of age on time spent is more visible in Figure 19, where a trend can be seen indicating that the average time to choose increases as age increases.

Figure 19: Average Seconds by Age Range
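Tables 15 and 18 follow the layout of a statsmodels ANOVA table. Below is a minimal sketch that produces that layout, along with the equal-width age binning behind Tables 16, 17, and Figure 19; it is reconstructed with synthetic data, and the column names are illustrative:

    # A minimal sketch of the one-way ANOVA layout seen in Tables 15 and 18,
    # plus the equal-width age bins behind Tables 16, 17, and Figure 19.
    # Reconstructed with synthetic data; column names are illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)
    df = pd.DataFrame({
        "age": rng.integers(18, 73, size=1286),      # participant age
        "seconds": rng.gamma(2.0, 6.0, size=1286),   # time taken to choose
    })

    # Time-to-respond vs age as a single regressor (DF = 1.0, as in Table 18).
    model = ols("seconds ~ age", data=df).fit()
    print(sm.stats.anova_lm(model, typ=1))  # df, sum_sq, mean_sq, F, PR(>F)

    # Five equal-width bins over [18, 72]: [18.0, 28.8], (28.8, 39.6], ...,
    # (61.2, 72.0], matching the Age Range rows in Tables 16 and 17.
    edges = np.linspace(18, 72, 6)
    df["age_range"] = pd.cut(df["age"], bins=edges, include_lowest=True)
    print(df.groupby("age_range", observed=True)["seconds"].mean())  # cf. Figure 19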
Image and Rating Difference Comparisons

One of the questions I sought to answer with this experiment was whether a higher-quality picture associated with a lower-rated recipe could overcome the difference in rating. The finding in Table 19 suggests that no such interaction was found; however, I conducted further analysis on this subject.

To analyze this difference, I generated two values from the data. The Image Difference is calculated by subtracting the unselected image's quality value from the selected image's quality value, and the Rating Difference is similarly calculated by subtracting the unselected recipe's rating from the selected recipe's rating. For example, consider a comparison where the first recipe had a visible rating of 3.45 stars and an image with an average rating of 4.2, and the second recipe had a visible rating of 2.78 stars and an image with an average rating of 3.5. If the participant selected the first recipe, the calculated Rating Difference would be 3.45 - 2.78 = 0.67 and the calculated Image Difference would be 4.2 - 3.5 = 0.7. If the second recipe had been selected, the calculated Rating Difference would be 2.78 - 3.45 = -0.67, and the calculated Image Difference would be 3.5 - 4.2 = -0.7.

These values indicate two things about each comparison. First, the sign: a positive value indicates that the chosen image or rating was the higher of the two presented, while a negative value indicates that it was the lower. Second, the magnitude: values close to zero, whether positive or negative, indicate a small difference between the presented options (image or rating), while larger absolute values indicate a larger difference, with the value being the actual gap between the two. For example, an Image Difference of 1.5 indicates that the chosen recipe's image was rated 1.5 "stars" higher, as determined in the preliminary surveys, than the other recipe's image. A Rating Difference of -0.5 indicates that the selected recipe was presented with a rating 0.5 stars lower than the other recipe's.
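As an illustration of this calculation, the short sketch below derives both difference values and the quadrant labels used in Table 20 and Figure 20. The data frame, its column names, and the treatment of zero differences are assumptions made for the example; the two rows reuse the worked example above.

```python
# Illustrative computation of Image Difference, Rating Difference, and
# quadrant labels; the rows reuse the worked example from the text.
import pandas as pd

df = pd.DataFrame({
    "selected_image":    [4.2, 3.5],    # image quality of the chosen recipe
    "unselected_image":  [3.5, 4.2],
    "selected_rating":   [3.45, 2.78],  # visible rating of the chosen recipe
    "unselected_rating": [2.78, 3.45],
    "seconds":           [11.2, 14.9],  # time taken to decide (placeholder)
})

df["image_diff"] = df["selected_image"] - df["unselected_image"]
df["rating_diff"] = df["selected_rating"] - df["unselected_rating"]

def quadrant(row):
    # Q1: higher image, higher rating; Q2: higher image, lower rating;
    # Q4: lower image, higher rating; Q3: lower image, lower rating.
    if row["image_diff"] >= 0:
        return "Q1" if row["rating_diff"] >= 0 else "Q2"
    return "Q4" if row["rating_diff"] >= 0 else "Q3"

df["quadrant"] = df.apply(quadrant, axis=1)

# Per-quadrant counts and average seconds, as summarized in Table 20.
print(df.groupby("quadrant").agg(count=("seconds", "size"),
                                 avg_seconds=("seconds", "mean")))
```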
Comparing these two values then gives summary details about each selection. I plotted the difference values in a scatterplot with the Image Difference on the y-axis and the Rating Difference on the x-axis. Additionally, a size component was added to each point to indicate the number of seconds taken to provide the response (see Figure 20).

Each quadrant in the chart represents a different fundamental scenario. Q1 represents selections where the chosen recipe had both a better-quality image and a higher rating than the opposing recipe. In the opposite corner, Q3 contains selections where the chosen recipe had both a lower-quality image and a lower rating. Between these two opposing quadrants, Q1 was selected far more often than Q3, with almost three times as many selections.

The other quadrants represent the more interesting comparisons. Q2 holds selections where the chosen recipe had a higher-quality image but a lower rating. Q4 holds selections where the chosen recipe had a lower-quality image but a higher rating. These two quadrants have much closer counts, with Q4 receiving only twenty-two more selections than Q2. While those counts are quite similar, one interesting difference appears in the timing: as Table 20 shows, Q3 had a higher average time to select than the other quadrants. The averages for Q1, Q2, and Q4 were within about two tenths of a second of each other, while the average for Q3 was over two full seconds higher.

Overall, the results of this comparison suggest that there is not a clear preference for image over rating. In some situations, the image might be sufficient to cause a participant to choose the lower-rated recipe, while in others it is not. Additional factors, such as personal preferences and personality, seem to account for a significant amount of the variability in which recipe a given person will prefer. This agrees with the finding described in Table 19.

Table 20: Image Difference vs Rating Difference Quadrant Counts
Quadrant    Image Diff.    Rating Diff.    Count    Avg. Seconds
Q1          +              +               474      12.592
Q2          +              -               315      12.421
Q4          -              +               337      12.394
Q3          -              -               160      14.807

Figure 20: Image Difference vs Rating Difference

Conclusion

Several interesting observations were made throughout this project. I enjoyed learning more about human personality, preferences, and perceptions. Eating food, and by association preparing it through recipes, is fundamental to the human experience, yet it embodies broad variety. Taste preferences vary widely across people, and that was visible in the results I received.

One of the first things I noticed while gathering the data for phase one of the project was that people have an instinctual and personal reaction to images of food. Based on their individual experiences and preferences, the same image of food might provoke disgust in one person and desire in another. These reactions can extend to physiological responses such as salivation [39]. Given this intensely personal relationship with food, gathering this information was an interesting experience. My own preferences were often affirmed by others, yet I also found examples where my personal choice differed from the consensus of the participants.

The first and second phases of the project focused on judging a single image of food with nothing but the name of the recipe for context. One clear takeaway from these phases is that personal biases are plainly visible. Some participants rated all of the images quite low, others rated them all quite high, and still others fell closer to the center. Thankfully, accounting for these biases was a natural part of the analysis, and the averages of the ratings provided a solid empirical basis for the relative quality of each image, allowing me to capture more detail in the third phase of the project.

Despite this, it became clear in phase three that personal opinions about the quality of an image, or about other unrelated parts of the recipes, played a large part in the selection of a recipe, even though I went to great lengths to gather similar recipes to compare. After the survey, I received anecdotal reports that varied from people admitting they always preferred the best image, to one woman reporting that she explicitly did not pick the best image if it was too high quality, because it seemed unattainable to her.
Some participants chose very quickly and clearly did not pay much heed to the ingredients or instructions, while one man (a former chef) told me that he cared much more about the ingredient list than the images, as he had strong preferences about certain ingredients. These examples show that the experience of choosing a particular recipe involves a multitude of factors that vary in importance from person to person.

This concept extends to other e-commerce applications as well. Just as some participants cared only about the image or the rating while others cared deeply about specific ingredients, the purchaser of an online product will care about different things based on their own disposition and needs. Some products will be purchased based on the image alone, some based on the peer-provided rating, and still others might be sold because of details contained deep in the product description.

The concrete takeaways from this thesis include some interesting results. As expected, recipes with higher-quality images were selected more frequently, reinforcing the idea that a high-quality image is very influential. Likewise, highly rated recipes were selected more frequently, reinforcing the idea that peer opinion is highly valued when selecting an option: high-rated recipes were selected most frequently, medium-rated recipes received the next most selections, and low-rated recipes received the fewest.

Interestingly, gender did not appear to have any pronounced effect on preference for either image quality or rating, nor on the average time to respond. Age, however, had an impact on all three. Preference for the lower-quality image increased with age, to the point that participants in the highest age range selected more recipes with low-quality images than with high-quality images. Likewise, preference for lower ratings increased with age, shrinking the gaps between the rating ranges; in the second-to-oldest age range, more low-rated recipes were selected than medium-rated recipes.

Regarding the question I set out to answer: can a high-quality image cause someone to choose a lower-rated recipe over a higher-rated one, even with a difference as great as half a star? The answer is yes, sometimes. There are several examples of exactly this behavior, but there seem to be just as many, even slightly more, where the higher-rated recipe was selected instead. Personal preferences and additional factors appear to play just as important a role when it comes to recipes.

References

[1] R. Zinko et al., "A picture is worth a thousand words: how images influence information quality and information load in online reviews," Electronic Markets, vol. 30, no. 4, p. 775, 2020.

[2] M. Kolhar, "E-commerce Review System to Detect False Reviews," Science and Engineering Ethics, vol. 24, no. 5, pp. 1577-1588, 2018.

[3] I. Garnefeld et al., "Online reviews generated through product testing: can more favorable reviews be enticed with free products?," Journal of the Academy of Marketing Science, vol. 49, no. 4, pp. 703-722, 2021.

[4] R. A. King, P. Racherla and V. D. Bush, "What we know and don't know about online word-of-mouth: A review and synthesis of the literature," Journal of Interactive Marketing, vol. 28, no. 3, pp. 167-183, 2014.

[5] C. P. Furner, R. Zinko and Z. Zhu, "Examining the role of mobile self-efficacy in the relationship between information load and purchase intention in mobile reviews," International Journal of E-Services and Mobile Applications, vol. 10, no. 4, pp. 40-60, 2018.
Zhu, "Examining the role of mobile self-efficacy in the relationship between informaiton load and purchase intention in mobile reviews," International Journal of E-Services and Mobile Applications, vol. 10, no. 4, pp. 40-60, 2018. 57 [6] J. Lee, J. N. Lee and H. Shin, "The long tail or the short tail: The category-specific impact of eWOM on sales distribution," Decision Support Systems, vol. 51, no. 3, pp. 466-479, 2011. [7] E. M. Steffes and L. E. Burgee, "Social ties and online word of mouth," Internet Research, vol. 19, no. 1, pp. 42-59, 2009. [8] P. Zhang, S. Bin and G. Sun, "Electronic word-of-mouth marketing in e-commerce based on online product reviews," International Journal of u-and e-Service, Science and Technology, vol. 8, no. 8, pp. 253-262, 2015. [9] N. Hu, L. Liu and J. Zhang, "Do online reviews affect product sales? The role of reviewer characteristics and temporal effects," Information Technology and Management, vol. 9, no. 3, pp. 201-214, 2008. [10] F. Zhu and X. Zhang, "Impact of online consumer reviews on sales: The moderating role of product and consumer characteristics," Journal of Marketing, vol. 74, no. 2, pp. 133-148, 2010. [11] D. Park, J. Lee and I. Han, "The effect of on-line consumer reviews on consumer purchasing intention: The moderating role of involvement," International Journal of Electronic Commerce, vol. 11, no. 4, pp. 125-148, 2007. [12] C. Dellarocas, "Strategic Manipulation of Internet Opinion Forums: Implications for Consumers and Firms," Management Science, vol. 52, no. 10, pp. 1577-1593, 2006. 58 [13] R. Zinko et al., "Seeing is Believing: The Effects of Images on Trust and Purchase Intent in eWOM for Hedonic and Utilitarian Products," Journal of Organizational and End User Computing, vol. 2021, no. 2, pp. 85-104, 2021. [14] C. P. Furner, R. Zinko and Z. Zhu, "Electronic word-of-mouth and information overload in an experiential service industry," Journal of Service Theory and Practice, vol. 26, no. 6, pp. 788-810, 2016. [15] P. L. Broilo, L. B. Espartel and K. Basso, "Pre-purchase information search: too many sources to choose," Journal of Research in Interactive Marketing, vol. 10, no. 3, pp. 193-211, 2016. [16] A. Y. Chua and S. Banerjee, "Analyzing users' trust for online health rumors," in International Conference on Asian Digital Libraries, 2015. [17] A. R. Dennis and J. S. Valacich, "Rethinking media richness: Towards a theory of media synchonicity," in Proceedings of the Hawaii International Conference on System Sciences, 1999. [18] R. L. Daft, R. H. Lengel and L. K. Trevino, "Message equivocality, media selection, and manager performance: Implications for information systems," MIS Quarterly, vol. 11, pp. 355-366, 1987. [19] G. Giordano and C. P. Furner, "Individual determinants of media choice for deception," Journal of Information Technology Management, vol. 12, no. 1, pp. 3051, 2009. 59 [20] C. P. Furner and R. Zinko, "Willingness to pay and disposition toward paying for apps: The influence of application reviews," International Journal of E-Services and Mobile Applications, vol. 10, no. 1, pp. 13-33, 2018. [21] B. Gao, N. Hu and B. I, "Follow the herd or be myself? An analysis of consistency in behavior of reviewers and helpfulness of their reviews," Decision Support Systems, vol. 95, pp. 1-11, 2017. [22] D. Yin, S. Mitra and H. Zhang, "When do consumers value positive vs. negative reviews? An empirical investigation of confirmation bias in online word of mouth," Information Systems Research, vol. 27, no. 1, p. 131, 2016. [23] E. 
[23] E. Hoffman and T. Daugherty, "Is a Picture Always Worth a Thousand Words? Attention to Structural Elements of eWOM for Consumer Brands Within Social Media," Advances in Consumer Research, vol. 41, p. 1, 2013.

[24] B. Bickart and R. M. Schindler, "Internet forums as influential sources of consumer information," Journal of Interactive Marketing, vol. 15, no. 3, pp. 31-40, 2001.

[25] T. Sun, S. Youn, G. Wu and M. Kuntaraporn, "Online word-of-mouth (or mouse): An exploration of its antecedents and consequences," Journal of Computer-Mediated Communication, vol. 11, no. 4, pp. 1104-1127, 2006.

[26] L. J. Albert, N. Aggarwal and T. R. Hill, "Influencing customers' purchase intent through firm participation in online consumer communities," Electronic Markets, vol. 24, no. 4, pp. 285-295, 2014.

[27] Y. Li and X. Wang, "Seeking health information on social media: A perspective of trust, self-determination, and social support," Journal of Organizational and End User Computing, vol. 30, no. 1, pp. 1-22, 2018.

[28] R. C. Mayer, J. H. Davis and F. D. Schoorman, "An integrative model of organizational trust," Academy of Management Review, vol. 20, no. 3, pp. 709-734, 1995.

[29] J. O. Drake and C. P. Furner, "Screening job candidates with social media: A manipulation of disclosure requests," Journal of Organizational and End User Computing, vol. 30, no. 4, pp. 63-84, 2020.

[30] Y. Bart, V. Shankar, F. Sultan and G. L. Urban, "Are the drivers and role of online trust the same for all web sites and consumers? A large-scale exploratory empirical study," Journal of Marketing, vol. 69, no. 4, pp. 133-152, 2005.

[31] M. Keith, J. S. Babb, C. P. Furner, A. Abdullat and P. Lowrey, "Limited information and quick decisions: Consumer privacy calculus for mobile applications," AIS Transactions on Human-Computer Interaction, vol. 8, no. 3, pp. 88-130, 2016.

[32] Q. Xu, "Should I trust him? The effects of reviewer profile characteristics on eWOM credibility," Computers in Human Behavior, vol. 33, pp. 136-144, 2014.

[33] X. Liu and S. Guo, "The Influence of Picture Information on Consumers' Purchase Decision," IOP Conference Series: Earth and Environmental Science, vol. 252, no. 4, p. 42047, 2019.

[34] R. J. Gerrig, P. G. Zimbardo, A. J. Campbell, S. R. Cumming and F. J. Wilkes, Psychology and Life, Pearson Higher Education AU, 2011.

[35] T. Mei, X.-S. Hua and S. Li, "Contextual in-image advertising," in Proceedings of the 16th ACM International Conference on Multimedia, 2008.

[36] K. C. Berridge, "Food reward: Brain substrates of wanting and liking," Neuroscience and Biobehavioral Reviews, vol. 20, no. 1, pp. 1-25, 1996.

[37] K. C. Berridge, "'Liking' and 'wanting' food rewards: Brain substrates and roles in eating disorders," Physiology & Behavior, vol. 97, no. 5, pp. 537-550, 2009.

[38] S. Peciña and K. S. Smith, "Hedonic and motivational roles of opioids in food reward: Implications for overeating disorders," Pharmacology, Biochemistry and Behavior, vol. 97, no. 1, pp. 34-46, 2010.

[39] C. A. Hardman, J. Scott, M. Field and A. Jones, "To eat or not to eat. The effects of expectancy on reactivity to food cues," Appetite, vol. 76, pp. 153-160, 2014.

[40] M. Armendariz, Focus On Food Photography for Bloggers (Focus On Series): Focus on the Fundamentals, Routledge, 2012. |
Format | application/pdf |
ARK | ark:/87278/s6qzsaqj |
Setname | wsu_smt |
ID | 117611 |
Reference URL | https://digital.weber.edu/ark:/87278/s6qzsaqj |