
The Dark Side of Personalization: Online Privacy Concerns influence Customer Behavior

© 2013 · Academic Paper · 61 Pages

Summary

“Online Privacy Fears Stoked By Google, Twitter, Facebook Data Collection Arms Race”, “Your E-Book Is Reading You”, “‘Instant personalization’ brings more privacy issues to Facebook”. These are only a few recent examples of media headlines that deal with the issue of online privacy and personalization.
Scholars and managers have repeatedly stated the benefits of personalization, which means targeting products and services to individual customers and constitutes a key element of an interactive marketing strategy. To estimate the needs and wants of customers accurately, it is necessary to gather a significant amount of information. Privacy concerns may arise when personal information about customers is gathered. If they do, personalization can backfire by making clients reluctant to use the service or, even worse, by fostering a negative attitude towards the company.
A recent survey by Opera Software (2011) found that Americans fear online privacy violations more than job losses or declaring personal bankruptcy. This has induced politicians to introduce regulations and laws that address online privacy and safeguard consumers against online monitoring and intrusion into confidential user information. However, online privacy remains a complicated issue for both managers and politicians, because new personalization technology emerges at a much faster pace than political regulations and guidelines.

This is the first study that establishes a link between different types of data collection, data usage, and concerns for information privacy. It also analyses the impact of privacy concerns on the value, risk and usability perception of personalization, and on users’ willingness to transact with the website. Further, it develops a conceptual framework and tests it by collecting responses to a questionnaire from an online crowdsourcing sample recruited via Amazon Mechanical Turk.

Excerpt

Table Of Contents



List of Figures

List of Abbreviations

1 Introduction

2 Literature Background
2.1 Personalization vs. Customization
2.2 Personalization in an Online Marketing Environment

3 Conceptual Framework and Hypotheses
3.1 Privacy Concerns and CFIP
3.2 Control of Personal Data and CFIP
3.3 Data Gathering Method – Overt and Covert Approach
3.4 Use of Data – Authorized Primary Use or Unauthorized Secondary Use
3.5 Willingness to Transact
3.6 Customers’ Value of Online Personalization
3.7 Risk Beliefs of Online Personalization
3.8 Perceived Usefulness of Online Personalization
3.9 Moderating Role of Trust Beliefs between Use of Data and CFIP

4 Research Design
4.1 Data Collection Process
4.2 Sample Description
4.3 Questionnaire Design
4.4 Measures
4.5 Scale Validity and Reliability
4.6 Data Analysis and Results
4.7 Model Evaluation
4.8 Main Effects and Path Coefficients
4.9 Indirect Effects
4.10 Moderation Analysis

5 Discussion and Conclusion
5.1 Theoretical Implications
5.2 Managerial Implications
5.3 Limitations and Future Research

Appendices

References

List of Figures

Figure 1: Conceptual Model and Hypotheses

Figure 2: Convergent Validity and Cronbach’s Alpha

Figure 3: Main Effects and Path Coefficients

List of Abbreviations

illustration not visible in this excerpt

1 Introduction

“Online Privacy Fears Stoked By Google, Twitter, Facebook Data Collection Arms Race” (Menn, 2012), “Your E-Book Is Reading You” (Alter, 2012), “‘Instant personalization’ brings more privacy issues to Facebook” (Keane, 2010). These are only a few recent examples of media headlines dealing with the issue of online privacy and personalization. Scholars and managers have repeatedly stated the benefits of personalization, which means targeting products and services to individual customers and constitutes a key element of an interactive marketing strategy (Montgomery & Smith, 2009). To be able to estimate the needs and wants of customers accurately, it is necessary to gather a significant amount of information. Privacy concerns may arise when personal information about customers is gathered. If they do, personalization can backfire by making clients reluctant to use the service or, even worse, by fostering a negative attitude towards the company. A recent survey by Opera Software (2011) found that Americans fear online privacy violations more than job losses or declaring personal bankruptcy. This has induced politicians to introduce regulations and laws that address online privacy and safeguard consumers against online monitoring and intrusion into confidential user information (Los Angeles Times, 2011). However, online privacy remains a complicated issue for both managers and politicians, because new personalization technology emerges at a much faster pace than political regulations and guidelines.

Online users can only perceive privacy if they are able to control their personal data. Prior literature has identified two prerequisites that determine users’ control of information privacy: awareness of information collection and of information usage (Sheehan & Hoy, 1999). Over the last two decades, the effects of privacy concerns have been investigated comprehensively (Yuan, 2011). Although insights into the social-psychology perspective on customers’ information privacy concerns may be interesting for managers, most of the previous research focuses on general privacy concerns rather than on information privacy (Laudon & Traver, 2008). Even less literature sheds light on the topic of information privacy in the context of personalization. A deeper understanding of the dimensions of control and their consequences is essential to fully understand the process of online personalization. Especially if marketers intend to deliver the best online personalization experience to their customers, these settings are of vital interest. Therefore, the goal of this study is to provide answers to the following research questions:

- What are the effects of different data collection methods (overt/covert) regarding private customer data on privacy concerns?
- How do different purposes of data usage (primary/unauthorized secondary) influence concerns for information privacy, and is this relationship influenced by the trust that users have in the online merchant?
- Do users perform a risk-value analysis when personalization is applied by the online merchant?
- Do increased privacy concerns impact the evaluation of personalization and, as a consequence, users’ willingness to transact?

These insights will provide marketers and advertising strategists with practical advice for implementing an optimal personalization application for customers. Aiming to find answers to the research questions above, the paper will be structured as follows: First, the theoretical basis will be provided by a review of the existing literature, and a conceptual model will be developed. As a conclusion of the literature review, hypotheses will be presented that create the basis for the experimental setting. Second, the conducted experiment, in which the model and the hypotheses were tested, will be described. Third, the results will be discussed and interpreted in detail. Finally, the last section will present theoretical and managerial implications, insights into the limitations of the current study, and suggestions for future research.

2 Literature Background

2.1 Personalization vs. Customization

Personalization occurs when a company tailors its product or service offerings to the individual tastes of its customers based on personal and preference information (Chellappa & Sin, 2005). To understand the principle of ‘personalization’, it has to be clearly distinguished from ‘customization’, which is defined as the users’ ability to adapt certain criteria of the product offering in order to better fit their individual needs (Laudon & Traver, 2008). While in customization the wish to adapt is initiated from the customer side, in personalization the marketer adapts the product or service to the customer by anticipating the customer’s needs and wants (Chaffey, 2007; Montgomery & Smith, 2009). As such, researchers refer to customization as pull marketing and personalization as push marketing (Milne & Rohm, 2000). Hence, with respect to customization, users provide personal data voluntarily, while personalization requires firms to either ask for information or monitor and analyze customers’ behavior in order to adapt the product to individual needs.

2.2 Personalization in an Online Marketing Environment

Before personalization can take place, databases of collected personal consumer information have to be created. Data collection from online users is one of the fastest growing segments of online business (Angwin, 2010; Sipior, Ward, & Mendoza, 2011). Companies are able to collect an enormous amount of customer information, which can be used to deliver online experiences tailored specifically to the needs of each individual user (Ashworth & Free, 2006; Culnan, 1993; Pitta, Franzak, & Laric, 2003). This information can be divided into two groups: personally identifiable information and anonymous information. The former refers to data that enables the identification, contact and discovery of an individual, while the latter refers to data that describes the individual but cannot be used to identify a specific person. Further, three subcategories of information types exist: contact information (name, address, phone, e-mail address), profile information (age, ethnicity, gender), and behavioral information (browsing and purchase history) on a single website or across multiple websites (Chaffey, 2007; Chellappa & Sin, 2005; Federal Trade Commission, 2000). The Internet enables firms to gather user data via multiple methods, which can be broadly classified as overt and covert information gathering techniques. Overt collection occurs when firms ask users to answer direct questions. In this approach, users are aware of the fact that their data is gathered and used by the online company (Montgomery & Smith, 2009). In contrast, covert collection occurs when firms gather data without the users’ awareness, i.e. companies monitor and track the users’ online behavior without explicitly mentioning it to them (Xu, Luo, Carroll, & Rosson, 2011).
The information gathered by these techniques can be used by the online merchant to personalize the users’ online experience with features such as personalized advertisements or indices of product websites in order to enhance convenience and search efficiency (Laudon & Traver, 2008; Tam & Ho, 2006).
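The information taxonomy described above (personally identifiable vs. anonymous data, with contact, profile, and behavioral subcategories) can be sketched as a simple data model. All field names below are illustrative assumptions, not taken from the study:

```python
from dataclasses import dataclass, field

@dataclass
class ContactInfo:
    """Personally identifiable: enables identification and contact."""
    name: str = ""
    email: str = ""

@dataclass
class ProfileInfo:
    """Describes the individual without necessarily identifying them."""
    age: int = 0
    gender: str = ""

@dataclass
class BehavioralInfo:
    """Browsing and purchase history, on one site or across sites."""
    pages_viewed: list = field(default_factory=list)
    purchases: list = field(default_factory=list)

@dataclass
class UserRecord:
    contact: ContactInfo
    profile: ProfileInfo
    behavior: BehavioralInfo

    def is_personally_identifiable(self) -> bool:
        # In this sketch, only contact data makes a record identifiable;
        # profile and behavioral data alone are treated as anonymous.
        return bool(self.contact.name or self.contact.email)
```

Under this (simplified) rule, a record holding only profile and behavioral data would count as anonymous information, mirroring the distinction drawn in the text.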

Despite the improved online experience, a conflict prevails with online personalization: Do users want to share private information in order to get an automatically adapted web service, or do they have too many concerns about the risk of losing anonymity online? Whether users want to cede control of personal information to benefit from a personalized web experience can be described as a risk-value calculation. With an increasing number of data-gathering companies, users’ concerns for information privacy are on the rise. In a recent study, TRUSTe (2012) – the biggest online consumer advisor – found that 91 percent of U.S. adults are worried about privacy online. Moreover, 53 percent do not disclose any personal information to businesses online because of mistrust, and a further 88 percent tend to avoid firms that do not protect consumer privacy. Therefore, in order to gain deeper insight into these privacy issues, more research has to focus on privacy concerns during the personalization process.

3 Conceptual Framework and Hypotheses

3.1 Privacy Concerns and CFIP

Since the landmark article ‘The Right to Privacy’ in 1890, in which privacy was formulated as the “right to be let alone” (Warren & Brandeis, 1890, p. 193), privacy concerns have soared almost every time a new technology emerged with improved possibilities to capture, save, analyze and exchange detailed personal data (Culnan, 1993). This was particularly the case when web-based marketing emerged in 1995, reducing the cost of gathering private information to a minimum (Laudon & Traver, 2008). Over time, the definition of privacy changed or was supplemented with extensions. One of these is information privacy, a subset of general privacy (Laudon & Traver, 2008). Recent definitions have focused on the users’ ability to control the dissemination and use of their personal information (Phelps, Nowak, & Ferrell, 2000).

Smith et al. (1996) developed the construct of concerns for information privacy (CFIP). It is a second-order, formative construct that consists of four dimensions: collection, unauthorized secondary use, improper access, and errors (Van Slyke, Shim, Johnson, & Jiang, 2006). The first dimension deals with concerns about companies gathering personal information. The second dimension centers on concerns about the use of the gathered data for a secondary purpose that has not been authorized by the user. Improper access, the third dimension, reflects the individual’s concerns about gathered data being accessible to unauthorized third parties. Finally, the fourth dimension, errors, centers on concerns about accuracy, that is, whether the gathered information really reflects the individual (Smith, Milberg, & Burke, 1996).
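As a rough illustration (not the study’s actual measurement model), the four CFIP dimensions could be scored from Likert-scale questionnaire items as follows; the item values, the 7-point scale, and the equal dimension weights are all assumptions:

```python
# Hypothetical 7-point Likert responses, grouped by CFIP dimension
# (Smith, Milberg, & Burke, 1996).
responses = {
    "collection":      [6, 5, 7],  # concerns about data being gathered
    "secondary_use":   [7, 6, 6],  # unauthorized secondary use
    "improper_access": [5, 6, 5],  # access by unauthorized third parties
    "errors":          [4, 5, 4],  # accuracy of the stored data
}

def dimension_mean(items):
    """Average of the items measuring one dimension."""
    return sum(items) / len(items)

def cfip_score(responses):
    # CFIP is a second-order formative construct: the overall score is
    # formed from its dimension scores; equal weights are assumed here.
    dims = [dimension_mean(items) for items in responses.values()]
    return sum(dims) / len(dims)

score = cfip_score(responses)  # 5.5 on the 1-7 scale for this respondent
```

In the actual study, such dimension scores would feed into the structural model rather than being collapsed into a single average; the sketch only shows how the formative, multi-dimensional structure differs from a flat item mean.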

Studies on the effect of privacy concerns and personalization have not included CFIP in their models. However, although research on the effects of CFIP in the context of e-commerce and personalization has been limited, many studies have focused on similar issues relevant to the given CFIP dimensions. The antecedent factors influencing CFIP that prior research has analyzed so far can be grouped into four categories: individual factors, social and legal norms, transaction and corporate factors, and information characteristics (Yuan, 2011). Research has also analyzed the consequences of privacy concerns. These literature findings can be grouped into beliefs, attitudes, intended and actual behavior (Yuan, 2011). A detailed overview of the prior research on CFIP and its findings can be found in appendix A and B. Summing up, privacy concerns influence human thoughts, opinions and actions. In general, privacy is seen as a precious good, which consumers value more than the disclosure of information. Connecting it to this study, CFIP should also influence users’ perception of personalization: Personal values, such as privacy concerns, affect the value a user associates with the result of personalization (Awad & Krishnan, 2006). Therefore, a higher level of privacy concerns should result in a lower value of personalized service. Due to the nature of personalization, users have to give up a certain amount of privacy so that the merchant is able to adapt the offering to the individual taste (Chellappa & Sin, 2005). However, prior research has analyzed the effect of loss aversion, stating that people overvalue what they already have compared to things that they might attain (Novemsky & Kahneman, 2005). In other words, gains are valued less than losses. Hence, users should value advantages of personalization less than the lost personal privacy. Due to this, CFIP should have a negative impact on the value of personalization. More formally:

H1: Online users’ Concerns for Information Privacy have a negative influence on Customers’ Value of Online Personalization.

Not only the value of personalization but also the risks related to it are affected by CFIP. Several privacy-related risks accompany e-commerce and personalization, for example the risk of privacy loss due to data collection, the risk of improper access by third parties, or the risk of unauthorized secondary use of the information (Van Slyke, Shim, Johnson, & Jiang, 2006). Hence, if concerns for information privacy increase, users should experience more risk. The higher an individual’s concerns about information privacy, the more risk is perceived during personalization. Stated as a hypothesis:

H2: Online users’ Concerns for Information Privacy have a positive influence on Risk Beliefs of Online Personalization.
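The loss-aversion argument behind H1 can be illustrated with a simplified, linear prospect-theory value function. The benefit and loss magnitudes and the loss-aversion coefficient of 2.25 (the estimate commonly cited from Tversky and Kahneman’s work) are assumptions for illustration only:

```python
def subjective_value(x, loss_aversion=2.25):
    # Losses are weighted more heavily than equal-sized gains,
    # so "gains are valued less than losses".
    return x if x >= 0 else loss_aversion * x

# A user who gains personalization benefit worth +3 units while giving up
# privacy worth -2 units ends up with a negative net subjective value:
net = subjective_value(3) + subjective_value(-2)  # 3.0 - 4.5 = -1.5
```

Even though the objective gain (+3) exceeds the objective loss (-2), the weighted loss dominates, which is the mechanism by which CFIP is argued to depress the perceived value of personalization.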

3.2 Control of Personal Data and CFIP

In the current literature, privacy is widely perceived as control of information. According to Sheehan and Hoy (2000), two dimensions determine users’ control of information privacy: awareness of information collection and information usage. While awareness simply refers to whether users know that private data is gathered about them, usage refers to how and for what purpose the gathered data is deployed. Surprisingly, research has never concentrated on the collection and use of data as antecedents influencing CFIP in the context of personalization. A deeper analysis will provide a better understanding of the antecedents of CFIP and personalization in order to maximize user satisfaction and website opportunities.

3.3 Data Gathering Method – Overt and Covert Approach

Nowadays, the use of customer information is one of the most important success factors in e-business. Nevertheless, the challenge of accumulating this customer data in a way customers feel comfortable with is still prevalent (Awad & Krishnan, 2006). Personal information can be gathered in two ways: overtly and covertly, i.e. with or without the knowledge of the user. Montgomery and Smith (2009) define this overt/covert approach as active (informing the consumer or posing direct questions) versus passive (making inferences based on transaction, clickstream or e-mail data) learning about customers. The type of data gathering has a direct connection to the control of personal information. Knowledge that a website is collecting information about users for personalization – i.e. an overt approach – is therefore an elementary prerequisite for control. Conversely, if users do not know that data about them is being collected, they have no control over it.

Research that includes the overt vs. covert approach in combination with online personalization has been very limited. Xu et al. (2009), in a study on personalized mobile marketing, analyzed how covert or overt personalization influences the perceived benefits and risks of information disclosure. They found that personalization increases the perceived value of information disclosure through both collection methods and that perceived risk [value] has a negative [positive] impact on the value of information disclosure. Most striking is that personalization is only positively related to perceived risk of information disclosure when the data is gathered covertly, because there was no significant increase in perceived risk in the overt condition.

Simply telling the user about the data collection process might show similar effects as privacy seals or privacy policies. Several studies found that informing users about how the web merchant deals with privacy helps to decrease privacy concerns (Andrade, Kaltcheva, & Weitz, 2002; Nam, Song, Lee, & Park, 2006; Wirtz, Lwin, & Williams, 2007). Informing the user about the data collection enables active two-way communication between the merchant and the user, which is an important antecedent of trust (O'Malley, Patterson, & Evans, 1997; Pitta, Franzak, & Laric, 2003). By offering a transparent, overt approach, it is ensured that users know about the data being gathered, feel more in control of the gathering process, and hence trust the web partner more. The website might thus be able to decrease concerns by letting users participate in the personalization process.

Furthermore, users might feel a loss of privacy and even harm or betrayal when they find out that data about them was gathered without their consent (Cespedes & Smith, 1993). A personalized interface is the result of this data gathering process, so the nature of personalization allows users to realize that the website gathered private information about them. Therefore, consumers might feel a breach of trust if the website gathers data covertly and automatically shows a highly personalized interface, which results in higher privacy concerns (Montgomery & Smith, 2009).

To conclude, if users know about the gathering of their personal data, they can influence the process, decide whether they want to disclose their data, and should gain more control over the personalization. Therefore, the following hypotheses are introduced:

H3a: Overt Data Collection has a negative impact on online users’ Concerns for Information Privacy.

H3b: Covert Data Collection has a positive impact on online users’ Concerns for Information Privacy.

3.4 Use of Data – Authorized Primary Use or Unauthorized Secondary Use

The second dimension of users’ control of information privacy is information usage (Sheehan & Hoy, 1999). Two key types of data usage exist: primary use and secondary use. Primary use of information can be defined as a company’s use of the accumulated personal data – previously authorized by the customer – to improve sales and customer service, inventory and personnel planning, and other corporate operations (Culnan, 1993). Secondary use represents the use of the same information for a different purpose than the original reason for collecting it (Culnan, 1993). This is, in most cases, not authorized by the user. One reason why companies apply personal information to an unauthorized use is that they can gain a strategic advantage through effective secondary use (Porter & Millar, 1985). Analyzing and managing customer data is a critical success factor for all e-businesses (Awad & Krishnan, 2006). Unauthorized secondary use can happen internally – within different departments of the data-gathering company – or externally – through disclosure of the personal data to a third party (Van Dyke, Midha, & Nemati, 2007). However, although the differences between primary and secondary use of information are well-defined in theory, in practice this distinction does not always exist. Websites gather data to enhance the user’s online experience; nevertheless, no general rules exist that regulate which methods enhance and which degrade the users’ online experience (Sipior, Ward, & Mendoza, 2011).

Companies can even obtain customer data from firms that specialize in gathering personal data or from online advertising networks like AdWords or DoubleClick (Laudon & Traver, 2008; Liao, Liu, & Chen, 2011). To get an idea of the dimensions, one needs to understand that an average U.S. resident is profiled in about 100 different databases (VanHoose, 2003). Furthermore, about one quarter of all websites participate in ‘cookie sharing’[1] among companies (Sipior, Ward, & Mendoza, 2011). From a legal point of view, secondary use of information is valid, and it is common practice online to share data between firms (O'Malley, Patterson, & Evans, 1997). However, if customers realize that their private data is not kept confidential, trust declines and the willingness to disclose information decreases (Pitta, Franzak, & Laric, 2003). This act of data collection can create an image of corporate ‘dataveillance’, which leads to increased privacy concerns (Ashworth & Free, 2006; Culnan, 1993; Foxman & Kilcoyne, 1993).

Furthermore, it is not only the release of private data that fosters privacy concerns, but also the opportunistic behavior of companies: it is quite common for companies to sell private information and make money on that data. Customers experience a misuse of their private information while not even getting a share of the earnings (Dinev & Hart, 2006). If new technology is applied opportunistically, personal privacy will be threatened, which can result in further anxieties and a feeling of unfair treatment (McCreary, 2008; Nowak & Phelps, 1992; Preston, 2004).

Another reason for increased privacy concerns relates to the user’s control of data. When a user keeps his private data to himself, he has total control. This situation changes if he discloses his data to a second party; nevertheless, it is still obvious who has access to the data. However, it becomes more complicated to control the access and use of the data when a third party comes into play: the user can easily lose track of which parties have access to his private data; moreover, the third party can spread the information even further. In this situation, the customer no longer has any control over his personal data, because it is not transparent with whom the data has been shared. Hence, this process should result in higher privacy concerns.

Summing up, the more parties have unauthorized access to the user’s private data, the more privacy concerns should evolve. Therefore, the following hypothesis can be formulated:

H4a: Primary Use of Data has a negative impact on online users’ Concerns for Information Privacy.

H4b: Secondary Use of Data has a positive impact on online users’ Concerns for Information Privacy.

3.5 Willingness to Transact

The Internet enables users to conduct a ‘second exchange’ (Li, Sarathy, & Xu, 2010), i.e. consumers can decide to ‘pay’ with their personal data in order to acquire personalized products or services online. In this study, this decision is represented by the behavioral intention of users to participate in a personalization process. The dependent variable represents the user’s wish to disclose private information in order to transact on the Internet. When using personalization, users are engaging in a ‘privacy calculus’, i.e. a cost-benefit analysis between the value and the risks of personalization (Malhotra, Kim, & Agarwal, 2004; Li, Sarathy, & Xu, 2010; Xu, Zhang, Shi, & Song, 2009). Therefore, willingness to transact represents the individual’s assessment of the utility of the information disclosure weighed against the potential risks. Customers will accept a loss of privacy as long as a positive net result is achieved by the disclosure of their private information (Chellappa & Sin, 2005).
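The privacy calculus described above amounts to a net-utility comparison. A minimal sketch, assuming a linear trade-off with illustrative weights (the study itself estimates these relationships with a structural model, not this formula):

```python
def willingness_to_transact(value, risk, value_weight=1.0, risk_weight=1.0):
    """Net utility of disclosing personal data for personalization."""
    return value_weight * value - risk_weight * risk

def will_disclose(value, risk, **weights):
    # Customers accept a loss of privacy as long as the net result
    # is positive (Chellappa & Sin, 2005).
    return willingness_to_transact(value, risk, **weights) > 0
```

A more privacy-concerned user can be modeled with a larger `risk_weight`: the same offer that clears the bar for one user (`will_disclose(5, 3)`) can fail for another (`will_disclose(5, 3, risk_weight=2.0)`).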

3.6 Customers’ Value of Online Personalization

Individuals are likely to give up a degree of privacy if they get something in return to compensate for it (Xu, Luo, Carroll, & Rosson, 2011). Moreover, the perceived value of the outcome of the information disclosure is potentially related to the willingness to give information. The value of online personalization and its advantages for users can be grouped into three different categories: product, convenience and quality of service, and relationships. The most obvious advantage is the adapted product or service the online company offers. It is aligned to meet specific preferences and needs by incorporating the data the user supplied (Chellappa & Sin, 2005) and therefore has a positive influence on willingness to transact. Ansari and Mela (2003) showed that click-through rates could be increased by 62 percent when using personalized e-mails. The second group of advantages is convenience and quality of the related service: a personalized website decreases transaction and search time, because customer preferences are already known and do not have to be entered each time the customer returns to the website. This implies a more convenient service for the customer, and perceived usefulness is increased (Davis, 1989; Hui, Teo, & Lee, 2007; Lee & Lee, 2009; Wolfinbarger & Gilly, 2001). Furthermore, the website will only provide online services, advertisements and recommendations that are relevant and anticipated for the user (Laudon & Traver, 2008; Phelps, Nowak, & Ferrell, 2000; Vesanen, 2007). Howard and Kerin (2004) found that simply inserting the user’s name in personalized product recommendations significantly increased response rates. Hence, this process will increase the quality of decision making (Lee & Lee, 2009).
The third group of benefits is relationship-related: an interactive two-way communication is enabled and a relationship is created between the company and the customer (Rayport & Jaworski, 2003; Sheehan & Hoy, 2000; Smith, Milberg, & Burke, 1996). This connotes an intangible benefit for the customer, which reduces the perceived risk and offers an intrinsic social benefit of relationship participation (O'Malley, Patterson, & Evans, 1997). Summing up, it can be stated that personalization increases customer satisfaction and customer value (Ariely, 2000; Montgomery & Smith, 2009; Turban, 2008; Vesanen, 2007). Therefore, the following hypothesis can be drawn:

H5: Customers’ Value of Online Personalization has a positive influence on the users’ Willingness to Transact.

3.7 Risk Beliefs of Online Personalization

Risk evaluation and the fear of losing control over the collection, protection, and use of private information are always present online (Van Slyke, Shim, Johnson, & Jiang, 2006). Generally, risk can be defined as a subjective assessment of the possibility of loss (Dinev & Hart, 2006). In e-commerce, prior literature has identified three types of risk: economic risk, personal risk, and privacy risk (Van Slyke, Shim, Johnson, & Jiang, 2006). While economic risks concentrate on potential monetary loss and personal risks focus on receiving insecure products or services or unsatisfying work results, privacy risks deal with the possible loss of private information or identity theft (Liao, Liu, & Chen, 2011). Research across different merchants, product types and cultures has been conducted on the influence of risk on willingness to engage in online transactions (Jarvenpaa, Tractinsky, & Vitale, 2000; Kimery & McCord, 2002; McKnight, Choudhury, & Kacmar, 2002). However, most of these studies focused on general risk or economic loss instead of the numerous sources of privacy risk, e.g. unwanted personalized advertisements, unauthorized access, or identity theft. Only a small number of studies have so far dealt with the influence of loss of privacy on the willingness to transact: Malhotra et al. (2004) found that risk beliefs have a negative impact on behavioral intentions; Li et al. (2010) demonstrated the same effect for privacy risk beliefs on behavioral intentions; Cocosila et al. (2009) showed that perceived privacy risk, mediated by perceived psychological risk, has a negative impact on behavioral intention to use; Van Slyke et al. (2006) showed a negative impact of risk perception on willingness to transact; Dinev and Hart (2006) found a negative influence of perceived Internet privacy risk on willingness to provide personal information to transact on the Internet. However, none of these studies analyzed the risks associated with online personalization.
These can be classified into five categories: usability-, relationship-, monetary-, nonmonetary-, and unauthorized-use-related risks.

First, users are more or less compelled to disclose personal information, because some websites work only with restrictions or not at all if users do not register, which signifies problems with usability (Sipior, Ward, & Mendoza, 2011). Second, when users are aware of the fact that they are being monitored, they adapt their browsing behavior and avoid publishing sensitive information, which negatively influences the relationship between the two parties (Hui, Teo, & Lee, 2007; Laudon & Traver, 2008). Moreover, customers could even become aggressive and be incited to retaliatory behavior if they feel threatened or harmed (McCreary, 2008; Lee & Lee, 2009). Third, consumers can be confronted with monetary drawbacks, because firms might demand extra fees for personalizing the content (Vesanen, 2007). Fourth, users are concerned about the loss or misuse of nonmonetary goods and valuables – in connection with personalization, the loss of privacy or sensitive data (Malhotra, Kim, & Agarwal, 2004). Concerns about loss of privacy are the most salient reason for refusing to establish a relationship with merchants (Dinev & Hart, 2006; Liao, Liu, & Chen, 2011; Phelps, D'Souza, & Nowak, 2001). Fifth, the risk of unauthorized use, such as identity theft via spyware or hacker attacks, is constantly present (Chaffey, 2007; Heffes, 2005; McCreary, 2008; Pitta, Franzak, & Laric, 2003). All these risks can stimulate fear of loss of privacy in the user’s mind (Lee & Lee, 2009). Users should therefore be reluctant to disclose private information, and their willingness to transact with a personalized website should decrease. More formally stated:

H6: Risk Beliefs of Online Personalization have a negative influence on users’ Willingness to Transact.

3.8 Perceived Usefulness of Online Personalization

Similar to willingness to transact, value and risk should also influence the perceived usefulness of personalization. Perceived usefulness describes the users’ impression of whether personalization improves their online experience; in this context, whether it makes online shopping more effective, faster, or more convenient. The difference between the value of personalization and its usefulness thus lies in the specific, relevant benefits personalization offers the consumer in this scenario. While the Consumers’ Value of Online Personalization construct measures the consumers’ attitude towards personalization in general, the Usefulness of Online Personalization construct measures the applicability and relevance of personalization to the specific online shopping scenario. Hence, it is hypothesized that if users value personalization in general, the perceived usefulness for the online shopping task will be higher. Conversely, if users are more afraid of the risks of personalization, perceived usefulness will decrease. More formally:

H7: Consumers’ Value of Online Personalization has a positive influence on the users’ Perceived Usefulness of Personalization.

H8: Risk Beliefs of Online Personalization have a negative influence on users’ Perceived Usefulness of Personalization.

Marketers wish to deliver a product or service that is automatically adapted to the specific individual needs of the customer. For this to work, however, users must recognize the advantages offered by personalization. If the personalization application is not helpful and makes interaction with the website less convenient, users do not perceive it as useful. Nunes and Kambil (2001) present the results of a survey in which only around six percent of online users preferred personalization over customization. The reason is that with customization, users can adapt the product or service to their needs whenever a need for change arises, making it more relevant to them; personalization does not offer this option. Lee and Lee (2009) found a large positive influence of the usefulness of personalization on customers’ intention to use. Hence, if the perceived usefulness of personalization is high, users should be more willing to transact. Conversely, if personalization is perceived as redundant, users’ willingness to use the service should be low. Personalization therefore has to be useful, convenient, and, above all, relevant for online users. The following hypothesis can thus be formulated:

H9: Perceived Usefulness of Personalization has a positive influence on users’ Willingness to Transact.

3.9 Moderating Role of Trust Beliefs between Use of Data and CFIP

As in the offline world, online relationships rely on trust, commitment, and mutual benefits (Morgan & Hunt, 1994; O'Malley, Patterson, & Evans, 1997). Furthermore, successful relationship building requires the processing of information (O'Malley, Patterson, & Evans, 1997). Trust has been found to affect the attitude a consumer has towards a merchant (Moorman, Zaltman, & Deshpande, 1992). More specifically, trust, being closely related to privacy, is a crucial element for transactions that involve risk, such as online transactions (Kim, Tao, Shin, & Kim, 2010; McKnight, Choudhury, & Kacmar, 2002; Reichheld & Schefter, 2000). This paper defines trusting beliefs as confidence that the firm deals responsibly with the collected customer data. If trust in the online merchant is high, users should expect merchants to use the gathered information in a sensible and intelligent way. Hence, the more trust a user has in the website, the more positively he will evaluate the secondary use of information. If the user trusts the merchant to refrain from opportunistic behavior and to strive for mutual benefit, he could even perceive the secondary use of information as an additional personalization service by the merchant. In other words, people may trust in the merchant’s ability to decide on the secondary use of information for the user’s welfare.

Furthermore, people disclose more information about themselves if they trust the addressee (Pitta, Franzak, & Laric, 2003). This can be traced back to the weaker risk beliefs users hold when they trust a merchant. Prior studies found that trust can reduce concerns about disclosing personal information and increase willingness to transact (Liu, Marchewka, Lu, & Yu, 2005). Conversely, if the user does not trust the merchant and finds that data is illegitimately shared with a third party, this perception of misuse of his private data can even increase privacy concerns and decrease his willingness to transact (Liao, Liu, & Chen, 2011; Pitta, Franzak, & Laric, 2003). Therefore, the following hypothesis is stated:

H10: Trust Beliefs in the Online Merchant moderate the relationship between Use of Data and Concerns for Information Privacy.
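The moderation stated in H10 can be illustrated with a minimal, hypothetical sketch: if trust moderates the link between use of data and CFIP, the association between a secondary-use dummy and privacy concerns should be weaker among high-trust respondents. The simulated data, the effect sizes, and the median-split approach below are purely illustrative assumptions, not the study's analysis (which would typically use an interaction term in a regression model).

```python
import random
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Simulated respondents: use_of_data (0 = primary, 1 = secondary use),
# trust (1-7 Likert), and a CFIP score built with an interaction term,
# so secondary use raises privacy concerns mainly when trust is low.
random.seed(1)
respondents = []
for _ in range(500):
    use = random.randint(0, 1)
    trust = random.randint(1, 7)
    cfip = 3.0 + use * (1.5 - 0.2 * trust) + random.gauss(0, 0.3)
    respondents.append((use, trust, cfip))

# Median split on trust, then compare the use-of-data -> CFIP link per group.
med = sorted(t for _, t, _ in respondents)[len(respondents) // 2]
low = [(u, c) for u, t, c in respondents if t < med]
high = [(u, c) for u, t, c in respondents if t >= med]
r_low = pearson_r([u for u, _ in low], [c for _, c in low])
r_high = pearson_r([u for u, _ in high], [c for _, c in high])
print(f"r(use, CFIP) | low trust:  {r_low:.2f}")
print(f"r(use, CFIP) | high trust: {r_high:.2f}")
```

Under these assumed parameters, the correlation in the low-trust group clearly exceeds the one in the high-trust group, which is the pattern H10 predicts.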

The conceptual model and the hypotheses are visualized in Figure 1. The next section of the paper will present the conducted research and the results.


Figure 1 - Conceptual Model and Hypotheses

4 Research Design and Results

To test the hypotheses of the conceptual framework, an online survey was developed and conducted among a sample recruited via Amazon Mechanical Turk. Next, the data collection process, sample, questionnaire design, and measurement instruments are discussed in detail.

4.1 Data Collection Process

Before the final data collection, a pre-test with 56 Mechanical Turk users was conducted to ensure that respondents understood the different scenarios and that the manipulations worked correctly. The analysis of these responses showed that participants had a clear understanding of the respective scenario they read, and that the scenarios were rated as realistic and easy to imagine. A significant difference between the scenario groups was observed.
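A manipulation check of this kind typically compares the mean check ratings between scenario groups, for instance with Welch's t statistic. The sketch below illustrates the computation; the ratings are invented for illustration and are not the study's pre-test data.

```python
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples
    with possibly unequal variances."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Illustrative 7-point manipulation-check ratings from two groups
# (e.g. "The shop announced that it collects my data"): made-up numbers.
overt = [6, 5, 6, 7, 5, 6, 6, 5, 7, 6]
covert = [3, 2, 4, 3, 2, 3, 4, 3, 2, 3]
t = welch_t(overt, covert)
print(f"t = {t:.2f}")  # t = 9.09, far above conventional thresholds
```

A t value well above roughly 2 (for samples of this size) indicates that the groups perceived the manipulation differently, i.e. that the manipulation worked.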

4.2 Sample Description

A sample of 186 responses was obtained by recruiting participants via Amazon Mechanical Turk, a web service that works as an online panel by linking researchers and individual participants. Recent research states that Mechanical Turk samples yield results similar to those of college-student and urban samples (e.g., Fagerlin et al., 2007; Oppenheimer, Meyvis, & Davidenko, 2009). However, 23 responses had to be deleted because the questionnaire was not completed, and one outlier in CFIP was removed. After cleaning the data, 162 complete surveys remained. Participants completed all procedures online. Of these respondents, 40.5% were male. The age of the participants was grouped as follows: 17.8% were 18-24, 33.7% were 25-34, 17.2% were 35-44, 22.1% were 45-54, 5.5% were 55-64, and 3.7% were 65 years old or older. In terms of education, the following distribution was obtained: 1.8% had some school education but no degree, 12.9% had a high school degree, 38.0% had some college education but no college degree, 33.1% had a Bachelor’s degree, 8.6% had a Master’s degree, 1.8% had a professional degree, and 3.7% had a doctorate.
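The cleaning steps described above (dropping incomplete questionnaires, then removing a CFIP outlier) can be sketched as follows. The record format, the simulated scores, and the |z| > 3 outlier cutoff are assumptions for illustration; the paper does not specify its outlier criterion.

```python
from statistics import mean, stdev

# Hypothetical raw sample mirroring the reported counts:
# 162 usable responses, 23 incomplete ones, and 1 extreme CFIP outlier.
raw = (
    [{"complete": True, "cfip": 4.0 + 0.01 * i} for i in range(162)]
    + [{"complete": False, "cfip": None}] * 23
    + [{"complete": True, "cfip": 25.0}]  # extreme CFIP outlier
)

# Step 1: drop questionnaires that were not completed.
complete = [r for r in raw if r["complete"]]

# Step 2: remove CFIP outliers via a z-score cutoff (assumed |z| > 3).
m = mean(r["cfip"] for r in complete)
s = stdev(r["cfip"] for r in complete)
clean = [r for r in complete if abs(r["cfip"] - m) / s <= 3]

print(len(raw), len(complete), len(clean))  # → 186 163 162
```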

4.3 Questionnaire Design

The questionnaire was divided into seven sections. It started with a section measuring trust beliefs regarding the online merchant. After that, participants were randomly assigned to one of four scenarios in a 2 (Data Collection: overt vs. covert) x 2 (Use of Data: primary vs. secondary use) between-subjects design. To manipulate the conditions, participants were asked to put themselves in the role of a fictional character who bought a DVD online at the online merchant Best Buy. In the overt data collection condition, participants were informed that Best Buy gathers and saves customer information to improve its personalization services, while in the covert condition this text was replaced by a standard text promoting free shipping and the customer service hotline. All groups were then told that, a week later, they saw personalized advertisements and recommendations online. In the primary use condition, these appeared on the Best Buy homepage, while in the secondary use condition they appeared on the site of the third-party retailer Amazon. Images of the manipulations can be found in Appendix C. The manipulation was followed by the measures of the CFIP construct, then by sections on the value and risk beliefs of online personalization, and after that by two sections on the usefulness of online personalization and willingness to transact. The questionnaire ended with the more sensitive covariate questions on gender, age, and education.
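The random assignment to the four cells of the 2 x 2 design can be sketched as below. The condition labels and the uniform assignment rule are illustrative assumptions; survey platforms often implement this step themselves.

```python
import random

# The four cells of the 2 (Data Collection) x 2 (Use of Data) design.
CONDITIONS = [
    ("overt", "primary"),
    ("overt", "secondary"),
    ("covert", "primary"),
    ("covert", "secondary"),
]

def assign_condition(rng=random):
    """Uniformly assign one participant to a scenario cell."""
    return rng.choice(CONDITIONS)

# Simulate 1000 assignments: cells end up roughly equal in size.
random.seed(42)
counts = {c: 0 for c in CONDITIONS}
for _ in range(1000):
    counts[assign_condition()] += 1
print(counts)
```

Simple random assignment yields only approximately equal cell sizes; with small samples, block randomization (cycling through a shuffled list of the four cells) keeps the cells exactly balanced.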

[...]


[1] Cookies can be shared and combined with other cookies, public records, and survey data on the basis of an individual’s unique identifier, for example the social security number.

Details

Pages
Type of Edition
Original edition
Year
2013
ISBN (PDF)
9783954895618
ISBN (Softcover)
9783954890613
File size
2.1 MB
Language
English
Publication date
2013 (June)
Keywords
Online Consumer Trust Web Marketing Concerns for Information Privacy Personal Data Security Unauthorized Secondary Use

Author

Jörg Ziesak, M.Sc., was born in Bielefeld in 1986. He graduated in 'International Business' with a specialization in 'Strategic Marketing' from Universiteit Maastricht in 2012, attaining the academic title 'Master of Science' with the distinction 'cum laude'. Already during his studies, the author gained hands-on experience in the online industry. The ever-changing world of online business and the boundless opportunities to adapt products and services online motivated him to work on the topic of this book.