Help your users protect themselves from family member fraud

  • Eric Goldman

1 Nov 2015   ::   Security   ::   #fraud #social media #accounts #password management #redaction #journal article



Please cite this article using the original journal publication and not this website:

Goldman, E. H. (2015). Help Your Users Protect against Family Member Fraud. ISSA Journal, 13(11), 31-35.

DOI/FullText at:


Social networks and similar service providers must take proactive actions to protect their users from fraud attempts and account abuse perpetrated by friends and family members. With easy physical access and intimate knowledge, these threat actors can impact not just the victim, but other users and the overall quality and reputation of your service. While service providers cannot force technical controls, such as screen saver passwords, on their users, they can adopt techniques and strategies to reduce their potential exposure and to help their users to take proactive measures to protect themselves and practice good security hygiene.


There is ever-increasing media attention on offensive cyberattacks by nation states, invasive spying (by companies and countries), and high-tech attacks once thought to exist only in sci-fi movies. Even outside of the critical infrastructure community, there is a rallying cry for the CISO to devote serious attention to nation-state actors and organized crime. This is the reality of our interconnected world, where even a seemingly low-value target can serve as a pivot point to attack others. While such threats deserve our attention, security teams must also be cognizant of less sophisticated, less technical threats to their organizations and customers.

The other insider threat

In recent years, the security world has shifted from focusing on the network perimeter to a more holistic approach that addresses all layers of the stack, as well as the human element. As noted in the Verizon 2015 Data Breach Investigations Report,1 insider threats (intentional privilege misuse or data theft) and insider vulnerabilities (users targeted through phishing and pretexting as part of reconnaissance or to gain an internal foothold) are concerns that require action. In response, SIEM and DLP software and service offerings have taken center stage in the past few years. Organizations are also investing in the development of better policies and governance processes.

Social network operators and other services that leverage user-to-user interactions must also consider an entirely different set of scenarios, with their own class of threat actors: the friends and family of your service end-users. Many common technical controls implemented inside the enterprise, such as password-protected screen savers, will not be enabled by your service users. Compounding the impact of less rigorously controlled systems, friends and family members – unlike some hacker on the other side of the world – often enjoy the benefit of easy physical access to, and intimate personal knowledge of, their victims.

According to a report by the American Bankers Association,2 financial fraud against elderly victims is often committed by a family member. The elderly are not alone, however. A 2011 study3 estimated that approximately 500,000 children under the age of eighteen have had their identity misused by a parent. Such statistics underreport the actual incidence of account hijacking, misuse, and other types of fraud perpetrated by someone close. Some victims may never notice the abuse, and if they do, they are unlikely to report the perpetrator. In many cases, victims fear the potential embarrassment for themselves or their family, and they want to shield the perpetrator from penalties. Could you send your child or parent to jail?

Why you should care

It is unwise to dismiss these types of abuse as the “users’ problems” for which you are not responsible. Service providers invest vast resources to craft each interaction, maximizing tracking, monetization, and other components along the user journey. When an account is misused, it wastes computing resources and skews the metrics you and your partners rely upon. Further, a user preoccupied with recovering from identity theft or other misuse has little time left over to actually use your service, which impacts the bottom line. In addition, a hijacked account could lead to abuse of other users, who may then decrease their usage or leave your service entirely. A single incident could snowball further and become a media nightmare, even if clearly the result of a user’s disregard for his own cyber safety. Consider the numerous potential (even if frivolous) lawsuits and other legal issues, including incident reporting requirements, that can apply based on locality and sector.4 Even when no legal action is required, significant resources may be needed to clean up the damage and restore user trust in your service. Therefore, service providers must put themselves in their end users’ shoes and ensure their security programs cover friend and family member fraud.

What can you do?

In this article, we will look at some approaches and practical controls to help blunt the special edge available to would-be fraudsters among a user’s friends and family. First, we will look at ways of reducing the potential exposure of data and making access difficult in spite of physical proximity. We will also consider how we can deter attacks and make it easier for users to self-identify misuse or hijacking. Beyond controls, we will discuss managing incentives and user-experience tweaks that make it easier for users to make smart personal security choices. You will then be better prepared to keep your users happy and active, and your service profitable.

Start with the data

To address these risks, start by rethinking confidentiality. Just because data belongs to a given user does not mean it should always be displayed to that user. Consider national ID numbers, like US Social Security numbers or UK National Insurance numbers; legitimate users already know such information, and there is normally no need for these values to be echoed back. This applies to other types of information as well, such as bank and card account numbers; it is a VISA recommendation5 not to include the complete card account number on statements. Even in a read-only format, unnecessary access to personally identifiable information or financial information enables identity theft and pretexting, which can be used in further attacks against your users.

When cases arise where displaying high-risk information is needed, such as helping a user identify and manage multiple bank accounts or verifying data in his profile, consider either requiring re-authentication before access or implementing redaction (data masking) such as showing only the last four digits (e.g., XXX-XX-1234). An alternative strategy is to allow users to assign meaningful nicknames that are shown instead of account numbers or other key identifiers. If using redaction, ensure your redaction strategy is consistent within and across all applications to prevent an attacker from eventually learning the full value by snooping around in enough places. Even in cases where a full value may be needed, consider implementing a “toggle” feature that defaults to a redacted view and has a time-based reset, so that the full value is only shown when needed. This helps to prevent someone nearby from shoulder surfing, and also limits the risk that a user will print the information and leave the hard copy easily accessible (the same theory applies to digital screen shots left on the user’s virtual desktop).
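The masking strategy above can be sketched as a small helper that redacts all but the trailing characters while preserving separators, so the masked value still looks like the field the user expects. The function name and the four-visible-characters default are illustrative choices, not part of any standard:

```python
def mask_identifier(value, visible=4, mask_char="X"):
    """Mask all but the last `visible` alphanumeric characters of an
    identifier, keeping separators so the format stays recognizable
    (e.g. a US SSN becomes XXX-XX-1234)."""
    total = sum(ch.isalnum() for ch in value)  # count only maskable characters
    seen = 0
    out = []
    for ch in value:
        if not ch.isalnum():
            out.append(ch)  # keep '-' and ' ' so the user recognizes the field
            continue
        seen += 1
        out.append(ch if seen > total - visible else mask_char)
    return "".join(out)
```

Applying the same helper everywhere (statements, profile pages, support tools) keeps the redaction consistent, so an attacker cannot assemble the full value from different screens.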

More broadly, consider scenarios where a user may want to legitimately extract information from your service. Where possible, provide purpose-built export, save, and print functions that provide a minimal or user-customizable amount of information. This can be accomplished, for example, by linking to a separate, server-generated PDF; for web apps you can provide print-specific CSS rules6 that can hide or remove information when using the browser’s native print functionality. By making it easy for your users to extract the information safely, you reduce the likelihood of users, intentionally or unintentionally, copying-and-pasting high-risk information that may then be stored or printed somewhere with poor security controls.
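An export allowlist is one simple, server-side way to implement such purpose-built extraction. The field names below are placeholders; a real service would derive the allowlist from its own data-classification policy:

```python
# Hypothetical field names; a real service would derive this allowlist
# from its own data-classification policy.
EXPORTABLE_FIELDS = {"name", "email", "account_nickname"}

def safe_export(profile, requested=None):
    """Return only allowlisted fields, optionally narrowed further by the
    user's request. High-risk fields (national IDs, full account numbers)
    never leave the server, no matter what is requested."""
    allowed = EXPORTABLE_FIELDS if requested is None else EXPORTABLE_FIELDS & set(requested)
    return {k: v for k, v in profile.items() if k in allowed}
```

Because the filter runs server-side, the generated PDF or printable view simply never contains the sensitive values, rather than relying on the client to hide them.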

Limit the window of opportunity

Once a user is authenticated, it is important to start thinking about when to end access or require re-authentication. Many service providers cannot justify bank-like security with a ten-minute-maximum session lifetime; however, you should still consider some reasonable maximum, as well as other factors that should invalidate a user’s session, such as a data breach or a sign-on from a new location. When terminating a session, ensure the termination applies to all open pages and that those pages are redirected to a page that displays no confidential or personal information, to reduce unintended access after a user ends his session. In addition, make it easy for a user to explicitly log out by providing an easy-to-identify log-out link on each page or screen. It should also always be clear to the user when he is actively logged in and with which of his accounts, so that he can be sure he is not giving someone else unintended access. On the more technical side, applications should be tested for unnecessary local data storage and checked for common session vulnerabilities such as “back and refresh” attacks,7 which allow someone with physical access to re-initialize a session thought to be terminated.
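The lifetime cap and the "invalidate everywhere" operation can be sketched as follows. This is an in-memory toy: the eight-hour maximum and class names are assumptions, and a production deployment would use a shared session store so invalidation reaches every application node:

```python
import time

class SessionStore:
    """In-memory sketch of a session registry; production systems would
    use a shared store so invalidation reaches every node."""
    MAX_LIFETIME = 8 * 3600  # assumed maximum session lifetime, in seconds

    def __init__(self):
        self._sessions = {}  # session_id -> (user_id, created_at)

    def create(self, session_id, user_id, now=None):
        self._sessions[session_id] = (user_id, now if now is not None else time.time())

    def is_valid(self, session_id, now=None):
        entry = self._sessions.get(session_id)
        if entry is None:
            return False
        _, created = entry
        if (now if now is not None else time.time()) - created > self.MAX_LIFETIME:
            del self._sessions[session_id]  # hard cap reached; force re-authentication
            return False
        return True

    def terminate_all_for_user(self, user_id):
        """Invalidate every open session for a user, e.g. after a breach,
        a password change, or a sign-on from a new location."""
        self._sessions = {sid: e for sid, e in self._sessions.items() if e[0] != user_id}
```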

Some websites provide long-lasting sessions that persist even after the browser is closed; such settings should only be considered when sufficient fraud and abuse monitoring controls are available. Home users are unlikely to explicitly log-off from a service, but they are likely to share devices or allow friends and family members to use their computers. Long-lasting sessions therefore make users easy pickings for friends who want to hijack their accounts. Thus, if long-lasting sessions are used, consider re-authentication or another form of identity verification after a predefined amount of time has passed, and before permitting access to high-risk information or enabling certain actions. Consider a social networking site: to balance security and user experience, users are allowed read-only access to messages, but are required to re-authenticate before sending a message after a given window of inactivity. Note: some activities should always require re-authentication without regard to time, such as changing a password or mailing address.
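The read-versus-write balance described above can be expressed as a small step-up policy function. The action names, the always-re-authenticate set, and the fifteen-minute window are illustrative assumptions, not values from any standard:

```python
import time

# Hypothetical risk classification; each service would define its own tiers.
ALWAYS_REAUTH = {"change_password", "change_mailing_address"}
REAUTH_WINDOW = 15 * 60  # assumed inactivity window, in seconds

def needs_reauth(action, last_auth, read_only, now=None):
    """Decide whether to demand fresh credentials before an action."""
    if action in ALWAYS_REAUTH:
        return True   # high-impact actions always require re-authentication
    if read_only:
        return False  # e.g. reading messages stays frictionless
    current = now if now is not None else time.time()
    return current - last_auth > REAUTH_WINDOW  # writes need a recent log-on
```

Keeping the policy in one function makes it easy to audit which actions can be performed on a long-lived session and which always demand proof of identity.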

Make detection easy

Complementary detective measures should also be implemented, depending on the potential risk of a given action. For example, if key information such as a password is changed or if a financial transaction is executed, an email and/or SMS message should be sent to the user so that he is aware of the activity and can hopefully respond if the action is fraudulent. In the case of a family member misusing the account, the perpetrator may also have access to the victim’s email account or phone and can delete the evidence. Therefore, to build a layered defense, you can provide users an online and non-clearable (entries can expire over time) history log of purchases, profile information changes, etc. For particularly high-impact changes, such as a change of billing address, it may also be prudent to include an offline confirmation or at least a reminder message to the user upon the next few subsequent log-ons. Such procedures increase the likelihood of detection, thus enabling the victim to identify and respond to the account abuse.
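A non-clearable history log can be as simple as an append-only structure that exposes no delete operation to the user and prunes entries only by age. The 90-day retention period here is an arbitrary example:

```python
import time

class ActivityLog:
    """Append-only per-user history; entries expire by age, but there is
    deliberately no way for the user (or an attacker with the user's
    session) to delete them."""
    RETENTION = 90 * 24 * 3600  # assumed retention period, in seconds

    def __init__(self):
        self._entries = []  # (timestamp, user_id, event)

    def record(self, user_id, event, now=None):
        self._entries.append((now if now is not None else time.time(), user_id, event))

    def history(self, user_id, now=None):
        """Return the user's still-retained entries, oldest first."""
        current = now if now is not None else time.time()
        return [(t, e) for t, uid, e in self._entries
                if uid == user_id and current - t <= self.RETENTION]
```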

The password problem

Even without direct or physical access, a family member can leverage intimate knowledge about the victim in order to compromise your users’ accounts. Passwords still remain the most common form of authentication. If given the choice, a user is likely to use simple passwords that can be guessed by someone who knows him. To address this, and in the hopes of thwarting technical attacks, service providers often implement draconian password complexity requirements on users; however, burdensome complexity leads to users writing down their passwords in insecure places (physical sticky note or digitally, e.g., mypassword.txt) which are easily accessible by family members. Perhaps we should reconsider such requirements? In a study of 5,000 participants,8 Komanduri et al. found that long, simple passwords (16 characters, no complexity requirements) were easier for users to remember versus shorter, but more complex, passwords (8 characters, upper, lower, number, special character) and provided similar resiliency against attack. Beyond making passwords easier to remember without being written down, you can also provide actionable tips to users during registration such as “do not use a password you use on other sites.” You can also help your users by directing them to secure password management software, which reduces the burden of remembering so many passwords and precludes the need for password reuse. While this may seem unconventional, ISPs and banks routinely provide their users with anti-malware software. Do not assume users are lazy or do not want to protect themselves; instead, consider that they are simply not aware of tools that can make it easier to be secure.
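A length-first policy in the spirit of the Komanduri et al. finding could accept long, simple passphrases outright while still permitting shorter passwords that mix character classes. The specific thresholds below are illustrative, not recommendations taken from the study:

```python
def password_acceptable(pw):
    """Length-first check: accept long, simple passphrases outright, or
    shorter passwords only when they mix character classes."""
    if len(pw) >= 16:
        return True, "ok"
    if (len(pw) >= 8
            and any(c.isupper() for c in pw)
            and any(c.islower() for c in pw)
            and any(c.isdigit() for c in pw)
            and any(not c.isalnum() for c in pw)):
        return True, "ok (complex)"
    return False, "use 16+ characters, or 8+ mixing upper, lower, digit, and symbol"
```

Returning a human-readable reason alongside the verdict lets the registration page explain the policy instead of silently rejecting the password.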

Beyond password choice, intimate knowledge comes in handy during password/account recovery. In 2008, the email of US vice presidential candidate Sarah Palin was hacked because the answers to her recovery questions were easily researchable.9 For a family member, often no research is needed. Service providers should exercise care when providing recovery questions, or alternatively, allow users to select their own questions while providing explicit guidance to create questions that would be hard even for a family member to answer. For some services, it may be possible to utilize non-public, activity-based questions such as “Who was the last person to whom you sent a private message?” Research by Dandapat et al.10 evaluated using such activity for authentication purposes. The study suggests that users typically do not share or expose such details; however, because this approach requires recall of actions the user may not purposefully remember, it should not be the only option for reset.

As with other high-risk actions, any attempted or successful reset should be accompanied by a confirmation email, SMS, etc. Because of easy physical access for a family member, a user-accessible history of such changes should be maintained online. Otherwise, a victim may think he simply forgot his own password on his next log-on attempt, when in reality a friend broke into the account and also deleted the email confirmation. A best practice also requires the user to confirm any type of reset or recovery through an out-of-band channel within a given period, usually by clicking a link or entering a one-time code. This limits attackers who may have local access in the browser, but do not have immediate access to the user’s phone or email.

Use psychology and incentives

In addition to technical controls and design decisions, it is imperative to encourage good security behaviors and gain user buy-in for these practices. Users typically think that companies just get hacked and that they play no part in protecting themselves. For example, a user may wonder why he is forced to change his password every 90 days. If you are not a bank or some service where your users would likely accept such enterprise controls, you can still encourage good behavior by providing the right incentives. For example, from March 10 through April 10, 2015, Hilton offered members 1,000 Hilton HHonors Bonus Points to voluntarily upgrade to more secure passwords before a mandatory deadline.11 Using incentives of real value (e.g., provide an entry into a contest) or simple gamification (i.e., just another way to earn virtual points), you can prod users to perform good security actions, such as changing their passwords periodically, on a voluntary basis. While not all users will value these incentives, many will, resulting in fewer vulnerable users. When forced action does not make sense, instead provide your users with a good value proposition for performing desired security behaviors.

Beyond buy-in, the way you communicate with users is also important. You should always be clear and purposeful with your security messaging and ensure it is properly integrated into the user journey throughout your service. For example, consider the difference between these two prompts provided before a session timeout: (a) “Click the OK button to keep your session active” or (b) “You have been inactive for some time; to remain signed-in click OK. If you are done, click Cancel to log out and protect your account.” The second message provides more information, is more impactful, and encourages secure behavior.

You can also implement reminders strategically throughout the user journey. For example, after log-on you can provide a banner message reminding users to explicitly log out when they are done or a reminder not to share passwords after they perform a password reset. A more dynamic example includes displaying a message upon log-on, after X days, prompting the user to change his password when the service does not enforce mandatory password changes. You can include the option to “Dismiss this message for 10 days.” While some users will always dismiss the message or disable it completely, some will perform the behavior, which serves to reduce your service’s overall risk. Again, note that such a banner message should have a clear purpose and be persuasive (e.g., “Your password has not been changed in over 90 days, click this link and change your password now to stay safe and secure.”). Facebook, often bemoaned for poorly communicating privacy and security controls to its users, has deployed more accessible user guides12 in order to better engage its users.


When service providers sit down to perform their annual risk assessments and planning, they should seriously consider the impact of family member fraud and account misuse. Often, we tend to focus on hacktivists or well-funded computer experts who wield advanced 0-day exploits. However, consider the relative ease of attack for friends and family members; it takes little technical skill to wait until someone leaves the room in order to gain unauthorized access. The individual impact from any such abuse may initially seem low, but on social networks or services with increasingly social components, such as shopping or review sites, the impact can weave its way through the community. Technology alone will not be the solution. Advanced behavioral and environmental analytics may fall short when the attacker is on the victim’s normal PC and all he is doing is sending messages, not trying to abuse an API or crash the server. A one-time PIN over SMS does nothing to stop an attacker who has both the victim’s laptop and phone in his hands.

Countering the threat from the enemy one seat over requires combining well-honed technology with a little bit of planning and psychology. Think twice about what information you present and how the designed use case can become an abuse case. Remember to always think about how instructions are given and how they can be used to help users make smart, informed decisions about their behavior. Teach your users good security behaviors, and then ensure that you align behavioral objectives with the right incentives so they heed your lessons well.

References and Footnotes

  1. Verizon. “2015 Data Breach Investigations Report.” 2015
  2. Leslie Callaway and Jerry Becker. “Stopping the Financial Abuse of Seniors.” ABA Bank Compliance, July-August 2011: 10-17
  3. ID Analytics. “ID Analytics Study Finds Six Million U.S. Parents and Children Inappropriately Sharing Identity Information.” September 2011
  4. The European Union and individual countries will be introducing more in-depth legislation in the coming months, for example: Hunton & Williams LLP, “New Dutch Law Introduces General Data Breach Notification Obligation and Higher Sanctions.” June 2015
  5. VISA. “Primary Account Number Truncation on Cardholder Statements.” April 2009
  6. See the W3C CSS Specs.
  7. For more information, see “Demystifying Authentication Attacks.”
  8. Saranga Komanduri, et al. “Of Passwords and People: Measuring the Effect of Password-composition Policies.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2011. 2595-2604
  9. Kim Zetter. “Palin E-Mail Hacker Says It Was Easy.” Wired. September 2008
  10. Sourav Kumar Dandapat, et al. “ActivPass: Your Daily Activity is Your Password.” Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. New York, NY, USA: ACM, 2015. 2325-2334
  11. See the Hilton HHonors promotion announcement.