What I Talk About When I Talk About Running
One September day at 5:20 AM, after tying my shoes and starting my music and fitness apps, I set off on my usual run. Three kilometers into the city, I turned left onto a mostly deserted road. It is a secondary service road close to the highway; I like running there in the morning, because in the afternoon it fills up with people on bikes and rollerblades. That day, however, I noticed something different.
On the asphalt, I discovered spray-painted marks advertising a newly opened sports shop. They disappeared after a few weeks, washed away by the rain. At the time, I did not pay much attention to them.
Until one day I mentioned it to a friend, who reminded me of the Strava case: a fitness app that gave away the locations of secret US army bases.
Data sources you may not realize are helping with advertising
You can read about the scandal here. Such a base is hidden from every possible internet source, including Google Maps. But even if a place officially does not exist and looks like a blank space on the map, once you run a couple of laps around it, it becomes immediately apparent that something is there.
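To make the mechanism concrete, here is a minimal sketch (with hypothetical coordinates and thresholds) of how aggregated GPS traces betray a location: bin the fixes into a coarse grid and keep the cells with enough activity to light up on a public heatmap.

```python
import math
from collections import Counter

def activity_hotspots(gps_points, cell_deg=0.001, min_hits=50):
    """Bin GPS fixes into a coarse lat/lon grid (~100 m cells) and keep
    cells with enough activity to show up on an aggregated heatmap."""
    grid = Counter(
        (math.floor(lat / cell_deg), math.floor(lon / cell_deg))
        for lat, lon in gps_points
    )
    return {cell for cell, hits in grid.items() if hits >= min_hits}

# Even a "blank spot" on the map lights up once enough laps are logged there:
laps = [(34.0001, 71.0002)] * 60   # hypothetical coordinates
print(activity_hotspots(laps))     # -> {(34000, 71000)}
```

No single runner reveals anything here; it is the aggregation across many anonymous traces that makes the hidden place visible.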
Let’s think about other data sources:
- Fitness applications - we have just covered those, and I think you can now imagine how banners or spray-painted ads end up on particular streets.
- Have you ever installed a small, somewhat useful mobile app with an extensive list of permissions? Be prepared for ultrasonic beacons that enable cross-channel marketing or check whether a given ad is being displayed to you on the TV.
- Expressing your personal thoughts on social media - we all do this all the time, and that data fuels the targeting engines of Facebook, Twitter, and other platforms every day.
- Offline attribution - beyond beacons, WiFi and Bluetooth connectivity hands companies a little bit of data every single time.
- Third-party, cross-channel data sets (e.g., a sports shop exchanging data with a ticketing company).
Advertising is so common nowadays that we have gotten used to it on a subconscious level. We do not mind if, as the result of a suitable targeting algorithm, we receive a more personalized offer. That's good, right? We give away a little, and in exchange we gain much, much more. A fair business transaction, right?
Permanent Surveillance to Which We Have Agreed
Involuntary user tracking is one thing, but we also give away a lot of information voluntarily. Sometimes even too much.
However, in many cases that is not the companies' fault. We, as users of the internet, are all guilty of many "small crimes", sharing unnecessarily detailed information about ourselves or our relatives (one of many examples is mentioned here).
The latest essay by the well-known privacy activist Aral Balkan (even though it is snarky, ironic, and bitter) explains it very well: in many cases it is our own fault - the users'.
If the 20th century was about coal, the 21st century is about data
Aral, mentioned in the previous paragraph, coined the term surveillance capitalists. And there is nothing new under the sun here: data has become the currency of the 21st century.
That's why so many companies collect and harvest user details in their Data Management Platforms. In many cases, they are not collecting the private data itself; they are collecting metadata - bits and pieces of information that we leave behind, again, in many cases voluntarily.
Why does metadata matter?
If you think it cannot be dangerous, read the slide cropped in the following tweet and let it sink in for a bit:
Nowadays (to be brutally honest, in the past too - it was just more expensive) we are able to infer meaning from metadata. Even though we do not hand over the exact information, most of our behavior can be reconstructed by algorithms. But hey! They claim algorithms are blind, fair, and not vulnerable to social profiling.
Algorithms are fair and square, right?
Well, not exactly. Let’s step back a bit.
Here is an example from a book titled Weapons of Math Destruction, written by Cathy O'Neil and published in 2016, which describes the societal impact of flawed machine learning models. It explores how some big data algorithms are increasingly used in ways that reinforce preexisting inequality. It was longlisted for the 2016 National Book Award for Nonfiction.
Targeting and Advertising
The author of the book mentioned above is an American mathematician with a Ph.D. in the field. She worked at a company called Intent Media, and in the 4th chapter of the book she explains how targeting in advertising flourishes and benefits from math - especially statistics and probability models. That's nothing new under the sun - if you have worked in IT, you knew that already (and I am not even mentioning ad tech).
Targeting campaigns were the foundation of that company's business. Everything was based on the unverified intuition that personalized advertisements would be well received, even expected, by end users. As the rising awareness of privacy issues shows, that assumption does not hold at all.
Good Models vs. Bad Models
Humans are driven by emotions. That is how we have operated for millennia. Advertising plays with that and, in some cases, even exploits it.
This is where bad models come in: models that focus on pain, unawareness, and naivety, and promise a solution. The key point is that they are motivated by financial incentives, not by real help and real need.
We can agree that convincing people who have no money and plenty of debt to take out yet another loan (with very high interest, tight deadlines, and aggressive debt collection) is morally wrong. But there are more subtle cases.
Unlike in the EU, in the USA there is no free higher education. That introduced a fascinating niche of "diploma mills": lesser-known universities that promise a valuable diploma and skills in exchange for tuition. The existence of those institutions is not a problem in itself; what is problematic is how they acquire new students.
Math, through complicated models, fuels aggressive ads that bring leads and potential customers to these shady businesses.
The essence of targeting for those companies is building models that prey on poorer and less aware people.
You can segment zip codes and target your flyers and door-to-door visits only at regions with lower income and lower financial potential. As a supporting service, you can offer a very "preferential" loan. But those companies went a step further: they have mastered online advertising and tracking. In 2017, the University of Phoenix spent 50 million dollars solely on a Google ad campaign, targeting a weaker segment of citizens and adding a vision of the American dream and social advancement after graduation. And that is only the tip of the iceberg if you look into the giants of that industry, like Corinthian Colleges.
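To show how crude such segmentation can be, here is a minimal sketch - all zip codes and income figures below are made up for illustration - of filtering a campaign down to lower-income areas:

```python
# Hypothetical median household incomes (USD) by zip code - not real data.
MEDIAN_INCOME_BY_ZIP = {
    "10001": 72000,
    "10452": 31000,
    "85004": 45000,
    "63136": 34000,
}

def target_segment(income_by_zip, income_cap=40000):
    """Return zip codes under the income cap - exactly the kind of crude
    filter a predatory campaign can run at scale, online or offline."""
    return sorted(z for z, income in income_by_zip.items() if income < income_cap)

print(target_segment(MEDIAN_INCOME_BY_ZIP))   # -> ['10452', '63136']
```

A few lines of filtering are enough; the "model" behind a predatory campaign does not need to be sophisticated, only cheap to run at scale.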
Targeting those ads at wealthier customers would be a waste of money, so this model fits them very well and yields the most significant ROI. In their training materials for sellers and marketers, they say:
We are focusing on people living on an hourly basis (…). Their decisions about starting their education or staying at the university are emotionally motivated, not based on logic. Pain is a better incentive in the short term.
The internet and technology gave such companies an unusual weapon for tracking, measuring, and iterating on their message. Every time you got a flyer and threw it into the garbage can, you gave the marketers a valuable piece of information: this does not work at all. Keyword targeting, tracking, enriching user data with third-party datasets, data management platforms - everything brings tremendous value to that kind of operation.
Enter “Ethical Online Advertising.”
So let's go back to the initial problem. Can we provide a pleasant user experience in advertising? Can we do targeting without tracking users?
Yes and no. It depends on what level of tracking is acceptable to you.
A perfect example is the DuckDuckGo search engine, which serves advertising based solely on the keywords of your search. No tracking, with full respect for your privacy - you can learn more about that here.
You may question how they make money, but it is not a problem for them - their CEO explained it on Quora:
It’s actually a big myth that search engines need to track your personal search history to make money or deliver quality search results. Almost all of the money search engines make (including Google) is based on the keywords you type in, without knowing anything about you, including your search history or the seemingly endless amounts of additional data points they have collected about registered and non-registered users alike.
Their primary business model is keyword-based advertising. After searching for a particular thing, you may see additional suggestions based solely on the typed keyword. They do not store your search history, they do not leak your search results, and they do not intrusively spy on users via trackers.
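As an illustration of how stateless, keyword-only ad selection can work - the ads and keywords here are invented, and this is not DuckDuckGo's actual implementation - a toy matcher needs nothing beyond the query itself:

```python
# Hypothetical keyword-to-ad inventory; no user id, history, or profile exists.
AD_INVENTORY = {
    "running shoes": "Spring sale: trail runners -20%",
    "mortgage": "Compare fixed-rate mortgages",
    "flights": "Weekend flight deals",
}

def ads_for_query(query):
    """Return ads whose keyword appears in the query - stateless by design:
    the same query always yields the same ads, for every user."""
    q = query.lower()
    return [ad for keyword, ad in AD_INVENTORY.items() if keyword in q]

print(ads_for_query("best running shoes for winter"))
# -> ['Spring sale: trail runners -20%']
```

The point of the sketch is what is absent: there is no user object anywhere in the function signature, so there is nothing to track.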
Another business model they have mentioned is anonymous (non-tracking) affiliations, which are harder to monetize but still provide a non-negligible portion of their overall revenue.
Those models can be summarized as:
- Focusing on interest-based advertising instead of hyper-targeted advertising.
- Selling ads directly based on those interests, avoiding user surveillance and tracking.
- Anonymous affiliations, which can be a viable alternative to showing too many ads on your pages.
Another possibility is to explore your own domain and business. One great example is provided by Jodel, an anonymous social network built on geolocation. They created a digital ad-column mechanism on their platform: you can pin your post - say, a local ad for a promotion in a nearby pub - and thereby rent that space for a while to gain views and attention from others. Everything aligned with the spirit of the product.
You may wonder how users reacted, and that's an excellent question: in such a situation, it is always a bet how the community will respond to such changes, especially when the platform previously had no ads and no user tracking (that is the whole point of being anonymous). The best way is always to test the hypothesis with the community and sincerely ask for feedback, as Jodel did here.
The Ongoing Ad Quality Issue
The whole discussion around online advertising and targeting is related to a more profound problem. Ads started as a workaround for the lack of a business model for content delivery in the digital space. It is a simplification, but instead of buying a newspaper, you could read the same articles on the internet for free. However, those articles have authors with real needs.
Now we cannot remove the advertising business - not because we lack alternatives (paywalls, crowdfunding, patronage), but because most of our lives have moved online, and advertising is rooted there so deeply that we cannot get rid of it (e.g., online shopping).
That's why organizations like the IAB run multiple initiatives focused on better ads - less intrusive, trusted, high-quality, and relevant ones. To name the most important: the new ad portfolio.
Advertising systems and platforms compliant with the style mentioned above matter both to businesses with an established advertising layer and to new ones struggling to decide whether that revenue model is right for them. Still, ad quality remains an issue, and unethical targeting is just one piece of a bigger puzzle.
Are you considering advertising as a revenue stream for your product?
Your business cannot afford to leave money on the table, but you are struggling because you do not know how ads will affect its reputation.
As a seasoned team of IT experts with extensive knowledge of ad tech, we can help you determine that. Together we can build a sustainable and ethical revenue pipeline for your company. Hire us!