Elliot Figueroa

Does Airbnb know your DAC Score?

(Digital Analysis Composite)



How private is your social media account, really?


There have been recent reports of companies like Airbnb and Clearview AI using artificial intelligence (AI) and scraping technologies to create risk assessments of potential customers or to help solve crimes. These technologies are a clear example of technological development outpacing consumer-protection legislation, but do we really need the government to protect us from ourselves?



If you don't know how AI and scraping technologies work, don't worry; let me explain. Companies like Trooly, a subsidiary of Airbnb, have developed software that searches the internet, social media, and public records to create a trait assessment. The software looks for things like involvement in questionable sites, criminal records, negative postings, patterns of narcissism, or perceived untrustworthiness. The AI also looks at relationships and professional history, along with memberships in organizations and donations to political parties. Once all the data is combined, the software creates a trait analysis, a Digital Analysis Composite (DAC score), which can then be compared quantitatively against other potential customers' scores and used to produce a guest-host compatibility factor (GHOSTCOMP). Airbnb uses the DAC score and GHOSTCOMP to determine whether a booker is at risk of reserving the property for the wrong reasons. Think of these scores as you would a credit score: instead of measuring your creditworthiness, they measure the worthiness of your digital profile.
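To make the idea concrete, here is a minimal sketch of what a trait-based composite score could look like. Everything in it, the trait names, the weights, and both formulas, is an assumption for illustration; the actual Trooly/Airbnb model has never been published.

```python
# Hypothetical sketch of a trait-based composite score.
# Trait names, weights, and formulas are illustrative assumptions;
# the real Trooly/Airbnb model is not public.

TRAIT_WEIGHTS = {
    "criminal_record":           -0.40,
    "questionable_sites":        -0.20,
    "negative_postings":         -0.15,
    "narcissism_pattern":        -0.15,
    "perceived_trustworthiness":  0.10,
}

def dac_score(traits: dict[str, float]) -> float:
    """Combine normalized trait signals (0.0-1.0) into one composite
    number, scaled onto a 0-100 range like a credit score."""
    raw = sum(TRAIT_WEIGHTS[name] * traits.get(name, 0.0)
              for name in TRAIT_WEIGHTS)
    # raw falls in roughly [-0.9, 0.1]; shift and scale it to 0-100.
    return round((raw + 0.9) * 100, 1)

def ghostcomp(guest_score: float, host_score: float) -> float:
    """Toy guest-host compatibility factor: closer scores score higher."""
    return round(100 - abs(guest_score - host_score), 1)

guest = dac_score({"negative_postings": 0.3, "perceived_trustworthiness": 0.8})
host = dac_score({"perceived_trustworthiness": 0.9})
print(guest, host, ghostcomp(guest, host))  # 93.5 99.0 94.5
```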



The same technology is currently in beta testing by Clearview AI with over 600 law enforcement agencies to help solve crimes. In Clearview's case, however, scraping technologies are used to build a massive database of over 3 billion online images, along with information about the people in them. The company scrapes public profiles from social media sites like Twitter, LinkedIn, YouTube, and Facebook and pairs them with trait analysis information. It then uses facial recognition to give law enforcement leads on where to locate suspects. In an interview with Kashmir Hill, a technology reporter for the New York Times, one law enforcement officer told a chilling story. Police were looking for a suspect without any leads on his whereabouts. When they submitted his picture to the Clearview software, the result was a photo of him in the background of another person's online image. That image led them to a gym where the suspect was a member, and they were able to apprehend him.
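Under the hood, this kind of lookup is typically an embedding search: each face is converted into a numeric vector, and a query face is matched against the database by vector similarity. The sketch below shows only that matching step, under stated assumptions; Clearview's actual pipeline is proprietary, and embed_face here is a hypothetical stand-in for a trained face-embedding model.

```python
import numpy as np

# Hypothetical sketch of a face-lookup step. Clearview's real system
# is proprietary; embed_face() is a placeholder for any face-embedding
# network, and the "database" is a plain in-memory array.

def embed_face(image) -> np.ndarray:
    """Placeholder: a real system maps a face image to a fixed-length
    vector using a trained neural network."""
    raise NotImplementedError

def find_matches(query_vec: np.ndarray, db_vecs: np.ndarray,
                 db_urls: list[str], top_k: int = 5):
    """Return the source URLs whose stored face vectors are most
    similar to the query, ranked by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    db = db_vecs / np.linalg.norm(db_vecs, axis=1, keepdims=True)
    sims = db @ q                       # cosine similarity per stored face
    best = np.argsort(sims)[::-1][:top_k]
    return [(db_urls[i], float(sims[i])) for i in best]
```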


As beneficial as this technology seems on the surface, there is a reason Clearview has been operating secretly, under the radar. Software that performs facial recognition by scraping public profiles and combining them with personal information is controversial at best, and most advocates of privacy law would consider it illegal and an invasion of consumers' rights. LinkedIn, Facebook, Twitter, and Google have all sent cease-and-desist demands to Clearview. Their argument is that although Clearview claims a First Amendment right to collect public information, the process of scraping violates the terms of service those platforms set for their users. In an article for CNET, Tiffany C. Li, a privacy attorney, stated, "It's really frightening if we get into a world where someone can say, 'The First Amendment allows me to violate everyone's privacy.'"


Even if the courts were to accept Clearview's First Amendment argument, the process of scraping could still violate biometrics law. Speaking to CNET, Albert Fox Cahn, a civil rights and technology attorney, said, "The way First Amendment analysis works is that just because you're protected under one law doesn't mean that you're protected under all laws." He continued: "Another way to think about this is: the First Amendment protects your right to burn the flag, but it doesn't protect you from being charged with arson."


At the heart of this privacy matter also lies the issue of AI making psychological judgments. As software evolves and begins to learn from patterns, it becomes more evident that, even if unintended, the results will contain some form of bias. It is true that technology should by design be less biased than a human, but AI only knows what it learns. Consider the bias, or in technical terms the ranking and weighting, that an algorithm places on its results. If the software is written to treat women differently than men, for example, because the programmer believes that to be correct, the software will only interpret results as the code instructs. Take this scenario, for instance: an algorithm is built to identify the best-suited candidate for an administrative assistant position, using historical Labor Department data on which gender has most often filled the role. Because a large portion of administrative assistant positions have historically been held by women, the algorithm places more weight on female applicants, making it more difficult for a male candidate to be selected.
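A toy version of that hiring algorithm makes the mechanism plain. The data, feature, and weights below are invented for illustration; the point is only how a historical base rate, fed in as a feature, skews otherwise equal candidates.

```python
# Toy illustration of how historical base rates leak into a ranking
# model. All numbers here are invented for this example.

# Hypothetical historical share of hires by gender for the role,
# standing in for Labor Department statistics.
HISTORICAL_HIRE_RATE = {"female": 0.90, "male": 0.10}

def candidate_score(years_experience: float, gender: str) -> float:
    """Naive score mixing a legitimate signal (experience) with the
    historical base rate -- the base rate is the source of the bias."""
    experience_signal = min(years_experience / 10, 1.0)
    prior = HISTORICAL_HIRE_RATE[gender]   # biased feature
    return 0.5 * experience_signal + 0.5 * prior

# Two equally experienced candidates receive very different scores.
print(candidate_score(8, "female"))  # 0.85
print(candidate_score(8, "male"))    # 0.45
```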


The same can be said for AI used to determine a potential customer's DAC score. Thor Benson, writing for Inverse magazine, notes that if variables in psychology are not carefully considered, "depression, anxiety or any number of other psychological struggles could be judged to be undesirable simply because the algorithm thinks they're dangerous based on how they express themselves on social media."
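One way that failure mode can arise is a naive filter that treats mental-health language itself as a danger signal. The keyword list and rule below are invented for illustration, not a description of any real product.

```python
# Toy example of the failure mode Benson describes: a crude risk
# filter that conflates mental-health language with dangerousness.

RISK_KEYWORDS = {"hopeless", "anxious", "panic", "can't cope"}

def flags_as_risky(post: str) -> bool:
    """Flag a post if it contains any 'risk' keyword -- treating an
    expression of anxiety or depression as evidence of danger."""
    text = post.lower()
    return any(kw in text for kw in RISK_KEYWORDS)

print(flags_as_risky("Feeling anxious about my exam tomorrow"))  # True
print(flags_as_risky("Had a great weekend hiking"))              # False
```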


So how do we protect ourselves if legislation and precedent are not strong enough to do so? The answer is simple: "POSTER BEWARE." It's common practice for people to Google themselves, or to Google others in hopes of finding clues about a potential date or job candidate, or out of sheer curiosity. This practice, even without powerful software or massive databases, can yield impressive results. Keep in mind, however, that the more public your digital footprint is, the easier it is for a person to create a DAC score of their own, based on their opinion of your posts and available content. If a potential employer sees a pattern of questionable social activities, excessive drinking, extreme political affiliations, or inappropriate behavior, it is less likely you will get an invitation to interview. The question still remains, then: even without the technology, how private have we made our own privacy?
