Fair Lending Regulations: How to Protect Consumers and Your Company

As a business owner, you want to connect your goods and services with people who want or need them, creating a mutually beneficial relationship between consumers and your business. In recent years, targeted marketing has become the standard in achieving this goal. This marketing approach shows ads to a subset of people based on data that has been gathered about them, instead of showing ads to a large audience. Any time that people are sorted based on traits, there’s a risk that prejudice could influence the results. How can you be sure that your targeted marketing doesn’t cross the line into discrimination?

Regulating Practices in an Evolving Market

Until the national popularity of catalog shopping ramped up in the 1870s, most Americans did their shopping in brick-and-mortar stores or open markets. Discrimination against consumers went largely unchecked; business owners raised or lowered their prices based on race, background, gender, or other external factors. Catalog shopping brought consumers an anonymity that reduced this kind of prejudicial behavior, but it didn’t eliminate it. Leaving it to consumers to navigate a flawed system was not sufficient protection against discrimination, so regulation was needed to create a fair market for everyone.

Two federally enforced laws were introduced to promote fair lending and marketing practices: the Equal Credit Opportunity Act (ECOA), which governs credit, and the Fair Housing Act (FHA), which governs housing. These laws stipulate that consumers cannot be discriminated against on the basis of race, color, religion, national origin, sex, marital status, age, and other protected characteristics. Ethical companies align their marketing practices with these laws, but certain marketing campaigns run the risk of becoming discriminatory if companies aren’t vigilant.

Targeted Marketing

With advances in technology, including big data and sophisticated data analysis, the anonymity once enjoyed by those browsing the internet is gone. Companies now know your age, your gender, your income, where you live, and your shopping habits, among much else. Armed with that information, they can send you advertisements for the products and services you are most likely to purchase.

One technique that powers targeted marketing is cross-site tracking, something every internet user is familiar with: you browse for a product on Amazon or elsewhere, and the next thing you know, the exact product you were looking at shows up in your social media feed as an advertisement. For some, this can feel a bit eerie, like we’re being watched. And the truth is, we are.

For advertisers, there’s seemingly no downside. It’s an efficient and effective way to advertise to the people most likely to purchase from you. And because it’s based on data, it couldn’t be considered discriminatory, could it? Therein lies the pitfall. 

Issues with Algorithms and Discrimination

A recent real-world example illustrates the risk. In 2022, the Department of Justice reached a settlement with Meta over allegations of discriminatory advertising on its platforms. The settlement resolved a lawsuit alleging that Meta’s housing advertising system used data such as race, color, and religion to determine who saw the ads, in direct violation of the Fair Housing Act. The advertising system relied on a machine-learning algorithm that looked for Facebook users with traits similar to an advertiser’s source audience: in other words, the people the advertiser believes are most likely to be interested in its product. Meta had until December 2022 to develop a new system for housing ads that did not rely on these prohibited characteristics.
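To see how a lookalike system can go wrong even when protected traits are never an explicit input, here is a minimal, hypothetical sketch in Python (invented feature names and data, not Meta’s actual system). It ranks candidate users by how many profile features they share with a seed audience; if one of those features, such as ZIP code, correlates with a protected characteristic, the resulting audience can still be skewed.

```python
# Toy "lookalike" selector: rank candidates by feature overlap with a seed
# audience profile. All feature names and values below are hypothetical.

def feature_overlap(user, seed_profile):
    """Count how many feature values the user shares with the seed profile."""
    return sum(1 for k, v in seed_profile.items() if user.get(k) == v)

def lookalike_audience(candidates, seed_profile, top_n):
    """Return the ids of the top_n candidates most similar to the seed."""
    ranked = sorted(candidates,
                    key=lambda u: feature_overlap(u, seed_profile),
                    reverse=True)
    return [u["id"] for u in ranked[:top_n]]

# "zip" is not a protected trait, but it can proxy for one: selecting on it
# concentrates the audience geographically, and geography can track race.
seed = {"interest": "home_improvement", "zip": "04101", "age_band": "30-45"}
candidates = [
    {"id": "a", "interest": "home_improvement", "zip": "04101", "age_band": "30-45"},
    {"id": "b", "interest": "home_improvement", "zip": "90210", "age_band": "30-45"},
    {"id": "c", "interest": "gardening",        "zip": "04101", "age_band": "60+"},
]
print(lookalike_audience(candidates, seed, 2))  # ['a', 'b']
```

The point of the sketch is that nothing in the code mentions race, yet the output audience inherits whatever demographic pattern the seed audience and its proxy features carry.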

Another example involves Amazon and its experimental hiring algorithm. The tool used AI to rank job applicants from least to most desirable based on patterns observed in resumes submitted over a 10-year period. Because the tech industry is male-dominated, most of those resumes came from men, so Amazon’s hiring AI learned that male candidates were preferable and elevated their resumes above those of women. This is plainly a discriminatory means of sorting candidates. The Amazon team edited the programs in an effort to make them gender-neutral before any litigation was pursued, but the project was eventually abandoned.
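The mechanism behind that failure is easy to reproduce in miniature. The sketch below uses invented data and tokens (it is not Amazon’s actual system) to “train” keyword weights from past hiring outcomes; because the history is skewed, a token associated with female applicants ends up penalized, and an otherwise similar resume scores lower.

```python
# Toy illustration of historical-data bias: weight each resume token by
# (hires containing it) minus (rejections containing it), then score new
# resumes with those weights. All data here is invented.
from collections import defaultdict

def train_keyword_weights(labeled_resumes):
    """labeled_resumes: list of (token_list, hired_bool) pairs."""
    weights = defaultdict(int)
    for tokens, hired in labeled_resumes:
        for token in set(tokens):
            weights[token] += 1 if hired else -1
    return weights

def score(resume_tokens, weights):
    return sum(weights[t] for t in resume_tokens)

# Skewed history: the hires happen to share one hobby token, and the one
# resume containing "womens" happens to be a rejection.
history = [
    (["python", "football"], True),
    (["python", "football"], True),
    (["python", "womens"],   False),
]
w = train_keyword_weights(history)
print(score(["python", "football"], w))  # 3
print(score(["python", "womens"], w))    # 0
```

The model never sees a “gender” field, yet it reproduces the imbalance in its training data, which is exactly the trap the Amazon team ran into.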

What Your Company Can Do

As of right now, it’s largely left up to companies to determine how best to comply with laws against discriminatory marketing practices. But there are steps you can take.

  • Make sure you understand how your targeted advertising works and how you are employing it. This applies equally to companies that outsource their marketing.
  • Monitor the terms used to filter your advertising audience, and review any reports documenting who that audience is.
  • Determine whether any platform you use employs algorithms that could use prohibited characteristics to target (or exclude) consumers.
  • Ensure that any AI-assisted tools you use are fully understood and tested before you implement them in your company.
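The second and third steps above can be partially automated. Here is a minimal sketch of such an audit; the term lists are illustrative only, not a legal compliance checklist, and a real review should involve counsel. It flags targeting keys that name, or commonly proxy for, characteristics protected under the ECOA and FHA.

```python
# Illustrative audit of ad-targeting filter keys. The two term sets below are
# examples for this sketch, not an exhaustive or authoritative list.
PROHIBITED_TERMS = {"race", "color", "religion", "national_origin",
                    "sex", "gender", "marital_status", "age"}
PROXY_TERMS = {"zip_code", "language", "parental_status"}  # possible proxies

def audit_targeting(filter_keys):
    """Return (prohibited, suspect) targeting keys found in an ad filter set."""
    keys = set(filter_keys)
    return sorted(keys & PROHIBITED_TERMS), sorted(keys & PROXY_TERMS)

flagged, suspect = audit_targeting(["interest", "age", "zip_code"])
print(flagged)  # ['age']
print(suspect)  # ['zip_code']
```

A check like this catches the obvious cases; the algorithmic cases, where a neutral-looking feature stands in for a protected one, still require human review of who the resulting audience actually is.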

The technology available to companies today can be an amazing resource for reaching people and growing a business. Awareness of the hidden pitfalls of discriminatory marketing lets us keep using exciting new strategies while still upholding the principles and laws of inclusion and fairness.

Are you ready to give your company the boost it needs with top-tier marketing? Reach out to Seapoint Digital today, and trust us to guide you through the ever-changing technological landscape.