Lessons Learned from Parler and how to manage user-generated content in iOS apps and web services

If you’ve been paying attention to the news lately in the United States, then you’ve likely heard about Parler. Parler was a social network based around the idea of not moderating or removing user content based on beliefs or ideologies [source] [source 2]. While Parler’s official rules stated that death threats and illegal activity were not allowed on the platform, the company stood by a very hands-off approach to moderation, as was evident from countless posts that remained up even while in violation of its own policies.

This social network was purportedly used to help incite acts of violence. Its apps were banned in early January 2021 by both Apple and Google; several days later, its website and service were removed by Amazon Web Services (AWS).

We are going to take a look at how this service was created, how it functioned, and the technical failures that caused it to go out with such a resounding bang as opposed to the quiet snuffing out of a candle.

While reading this, please be aware: We have made every effort to ensure the accuracy of the details of this article at the time of publication, but the accusations and details of the Parler app are still developing. Additional items and information are likely to come to light after this article is published.

MartianCraft and the writers of this article are not taking a stance on any position; however, we find the technical aspects of this story too interesting to forgo discussion. We have always been firm and adamant champions of not cutting corners when it comes to technology, especially regarding privacy. As more and more information comes to light in the wake of Parler’s implosion, it is becoming clear that the entire platform was built on a house of cards of cut technical corners and gross mismanagement of users’ private and personal data.

Lessons Learned: The technical side

The technical collapse of Parler was not in how it was run but in what happened when it was shut down. The final swan song of the network that promised no moderation was the leak of 70 TB of user-created data, including direct messages, personal information, government IDs, photographs, videos, resurrected deleted messages, and, worst of all, full and complete metadata on all of the above. Many in the media are referring to the leak as a hack, and that’s fair in the same sense that you can say someone hacked your Facebook account when you left yourself logged in at the local library. Parler’s technical missteps, oversights, and downright corner cutting may have saved the company some time and money in deploying the platform, but in the end it simply threw its users to the wolves.

The devil is in the details

When users capture photos and videos with their devices, a lot of information is captured behind the scenes and attached to the media, known as “metadata”. This metadata includes very sensitive information such as the geographical location where the media was captured, derived from the device’s onboard GPS, cellular, and Wi-Fi antennas. When users upload their media to your social network, it is your job either to have a blanket policy of stripping this data so that other users cannot see it, or to inform the user that it is present and give them the option to consent to sharing it. In the case of Parler we are not sure whether users were made aware that the metadata would be retained, but it is clear Parler was not stripping it out.

When the individual known as @donk_enby on Twitter, who was credited with enabling the scraping of all (or a majority) of Parler’s publicly available data, started reviewing the obtained media, she found that it contained the original, unprocessed, raw files with all associated metadata. This metadata makes the job of law enforcement or bad actors incredibly easy. For example, in the case of the attack on the United States Capitol Building, it means law enforcement can easily figure out which Parler users were present and on site, which can be viewed as helpful. In other situations it can be incredibly dangerous, such as when a user shares media of their home or family members. When this media contains the user’s geolocation, they are unknowingly telling everyone in the world where they live or which places they frequent.

At this point in history it is well-known industry practice to strip this metadata from media when a user chooses to share it. At MartianCraft we feel it is the responsibility of the client application to perform this task so that the data never reaches your servers, where leaks are most common and most devastating. Parler failed miserably at protecting its users’ privacy by not considering this attack vector, or by choosing to cut yet another corner.
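As a rough illustration of the client-side approach we recommend, here is a minimal sketch of stripping location metadata from an image before upload on iOS. It assumes the photo arrives as raw Data; the function name and the exact property dictionaries removed are our own illustrative choices, not Parler’s code or a complete solution (video requires a similar pass over its own metadata).

```swift
import Foundation
import ImageIO

// A minimal sketch: re-encode the image through ImageIO with the sensitive
// property dictionaries set to kCFNull, which removes them from the output file.
func strippedImageData(from original: Data) -> Data? {
    guard let source = CGImageSourceCreateWithData(original as CFData, nil),
          let type = CGImageSourceGetType(source),
          CGImageSourceGetCount(source) > 0 else { return nil }

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, type, 1, nil) else {
        return nil
    }

    // Passing kCFNull for a property dictionary removes it from the destination image.
    let sanitized: [CFString: Any] = [
        kCGImagePropertyGPSDictionary: kCFNull as Any,
        kCGImagePropertyExifDictionary: kCFNull as Any,
        kCGImagePropertyIPTCDictionary: kCFNull as Any,
        kCGImagePropertyTIFFDictionary: kCFNull as Any
    ]

    CGImageDestinationAddImageFromSource(destination, source, 0, sanitized as CFDictionary)
    return CGImageDestinationFinalize(destination) ? (output as Data) : nil
}
```

Uploading the returned data instead of the original means the GPS coordinates never leave the device, let alone reach your servers.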

Soft deletion of user-removed content

Using a social network can at times be regrettable. Emotions can get the best of users, and they may share something they later wish to remove all traces of. While anyone who has used the Internet should know that once something is posted it is forever immortalized, there is still a desire to remove content from a platform with the basic expectation that it will actually be removed. It turns out the content users posted to Parler and then removed was not actually removed. Instead, the platform operators made the decision to “soft delete” content. In the software world this means marking content as deleted rather than actually deleting it. You can think of it as the Trash or Recycle Bin on your computer: the content is moved to a location from which it may be deleted at a later time, but it is still accessible.

While this is a common and helpful practice in software, users should be aware of what is happening and be given the option to permanently remove their content. Since we do not have personal experience using Parler, we are not aware whether users were told at any point in the user experience that content they chose to remove could actually be restored. As publicly available content was scraped from the social network, it was discovered that some of it was “deleted” content, meaning content the user had personally and intentionally removed. The content was marked as hidden and omitted from search results, but it was still there. This is a massive breach of user privacy. At minimum, users should be made aware that their content is not actually being removed; better still, it should be moved to a publicly inaccessible location and later fully purged.
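To make the distinction concrete, here is a minimal sketch, using hypothetical types rather than Parler’s actual schema, of what soft deletion looks like next to a hard delete that honors the user’s request.

```swift
import Foundation

struct Post {
    let id: UUID
    var body: String
    var deletedAt: Date?   // non-nil means "soft deleted": hidden, but still stored
}

final class PostStore {
    private var posts: [UUID: Post] = [:]

    func add(_ post: Post) { posts[post.id] = post }

    // Soft delete: useful for undo windows, but the content remains on the server
    // and will surface in any backup, scrape, or data leak.
    func softDelete(_ id: UUID) { posts[id]?.deletedAt = Date() }

    // Hard delete: what a user reasonably expects "delete" to mean. A real service
    // also needs to purge caches, CDN copies, and backups on a defined schedule.
    func hardDelete(_ id: UUID) { posts[id] = nil }

    // Every read path must filter soft-deleted rows, or "deleted" content leaks
    // through any endpoint that forgets the check.
    func visiblePosts() -> [Post] { posts.values.filter { $0.deletedAt == nil } }
}
```

If a service does choose soft deletion for undo windows or legal holds, it needs a scheduled purge, and it must tell users that this is what “delete” actually means.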

A key tenet of good software is software the user can trust. Transparency about where user data is stored, and how it can be accessed, is critical. In fact, there are even regulations such as the General Data Protection Regulation and the California Consumer Privacy Act that define how user data must be treated. When building a platform where user data will be collected, both directly and indirectly, it is of the utmost importance to be clear about how that data will be used and to respect the user’s request to actually delete it. In the case of Parler, the implementation strategy they chose may have further assisted in indicting their users, all while claiming privacy as paramount. In the case of a social network you may be looking to build, such an implementation could impact a user’s safety and well-being.

It begins and ends with obfuscation

One of the primary ways Parler was able to be compromised was reportedly the ease with which its API format could be deciphered and reverse engineered. It is relatively easy for developers to add an additional layer of obfuscation to endpoints, which makes that process far harder and more time intensive for anyone trying to scrape the service.

As an example, you can have a service that looks like socialapp.com/postID=00001, which would return the first post ever submitted to the platform, followed by postID=00002, and so on. This makes it very easy for someone to come along and grab a copy of every single post available. In practice, creating a harder-to-guess API scheme would add a higher level of security while costing the development team nothing more than a token amount of time. The API URL may instead look like socialapp.com/content=d534328c-5b44-11eb-ae93-0242ac130002.
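The difference between the two schemes is easy to see in code. This short sketch uses the hypothetical socialapp.com endpoints from above; the point is simply that a sequential identifier invites enumeration while a random UUID does not.

```swift
import Foundation

// Sequential IDs: knowing one post ID tells you every other post ID.
let sequentialID = 1
let nextGuess = String(format: "postID=%05d", sequentialID + 1)   // "postID=00002", trivially enumerable

// Opaque IDs: effectively unguessable, though still no substitute for authentication.
let opaqueID = UUID().uuidString.lowercased()
let endpoint = "https://socialapp.com/content=\(opaqueID)"
```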

It hopefully goes without saying that URL endpoints should also have some level of authentication attached to them, returning data only to users who have permission to view it. All too often, apps can access all of the data and merely control how it is displayed; just because the front end doesn’t surface certain data does not mean it is protected, so the same safety nets must be applied to the server (backend) itself.
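Here is a minimal sketch of what that server-side safety net might look like, with all types and names hypothetical: the backend refuses to return a direct message unless the caller is authenticated and is actually a participant, no matter what the client UI does or does not display.

```swift
import Foundation

enum APIError: Error { case unauthorized, notFound }

struct Session { let userID: UUID }

struct DirectMessage {
    let id: UUID
    let participants: Set<UUID>
    let body: String
}

struct MessageStore {
    var messages: [UUID: DirectMessage] = [:]

    // Authorization happens on the backend, not in the client's display logic.
    func message(_ id: UUID, for session: Session?) throws -> DirectMessage {
        guard let session = session else { throw APIError.unauthorized }   // no anonymous access
        guard let message = messages[id] else { throw APIError.notFound }
        guard message.participants.contains(session.userID) else {         // only participants may read
            throw APIError.unauthorized
        }
        return message
    }
}
```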

The need for user and content moderation

Any social app built for users to post their own content should have some mechanism to turn off comments by a user, or to disable that user entirely, in the event that they are abusing the system, abusing other users, or causing other harm. When it comes to user-generated content, things can get out of control rather quickly. As a service owner, you should treat this as a constant, high priority: keep your users safe, keep your platform safe, and above all else ensure that what users post is not illegal.
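What that mechanism looks like will vary by platform, but the underlying data model can be simple. The sketch below uses hypothetical names to show per-account moderation states that let you mute comments, suspend temporarily, or ban outright from day one.

```swift
import Foundation

// Hypothetical per-account moderation states; the important part is that the
// capability to restrict or disable an account exists before you need it.
enum AccountStatus {
    case active
    case commentsDisabled          // may browse, but not post or comment
    case suspended(until: Date)    // temporary time-out
    case banned                    // removed from the platform entirely
}

struct Account {
    let id: UUID
    var status: AccountStatus = .active

    var mayPost: Bool {
        switch status {
        case .active:
            return true
        case .commentsDisabled, .banned:
            return false
        case .suspended(let until):
            return Date() >= until
        }
    }
}
```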

The phrase Section 230 has become a highly discussed topic over the last several months. Formally codified as Section 230 of the Communications Act of 1934 at 47 U.S.C. § 230, it reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This small snippet of US law provides immunity to hosting providers and services for the content that is posted onto them.

For example, an individual is responsible for anything on their own computer, such as child pornography. If you were to send an illegal file to someone via Gmail, it would then also reside on Gmail’s servers. Without Section 230, Gmail could be held liable for that content, since it would then be hosting it. Section 230 says that a content provider is not responsible for content created by users on its platform. Once a provider starts to moderate content, it takes on more responsibility for that content. Section 230 allows content providers to take a more passive role in content moderation and, in turn, be afforded certain protections. Removing Section 230 protections would make the content host fully responsible for everything its users create, which would likely spell the end of user comments, social media, picture and file sharing, and just about anything else created by the end user.

If you find this confusing, you are not alone; Section 230 has been read many different ways by nearly everyone who looks at it, including lawmakers. In reality, no one seems entirely sure what protections Section 230 provides, or under what circumstances, when it comes to moderation, curation, and user-generated content.

The first line of defense is to ensure that your app or service has a clear Terms of Service (or Terms of Use) agreement that all users accept when joining. This agreement is the legal foundation for taking care of your users and your platform, and you should then use it to moderate the content that users post.

Moderation can come in several different forms:

A lot of moderation tactics aren’t scalable, for instance flagging posts or having users email you about a particular post they believe goes against the terms of service. While this works well for smaller social networks, as soon as your network begins to blossom you are going to need to figure something else out. This is especially true for free-to-use services like Twitter, Facebook, and Parler. These services often don’t bring in enough revenue to pay for the literal army of moderators that would be required to manage the content the service produces. Users create an almost unbelievable amount of content: roughly 6,000 tweets per second, 350 million Facebook images per day, and 95 million Instagram photos per day. Some platforms recruit members of their own community as moderators to serve this role, but all too often those moderators reflect the community’s trends and decline to moderate the “hive mind” mentality the platform has naturally evolved toward.

Another method of moderation is to have a community manager who actively looks at the content being posted to see if anything is trending, being re-posted, or being talked about in large numbers. Social networks like micro.blog use this approach, and it helps establish a rapport with users in an attempt to keep them posting things that align with the goals and terms of the social network. This isn’t extremely scalable either, but coupled with user flagging it can catch many things that might go against the terms of service.

A more heavy-handed approach to moderation is to restrict certain keywords in posts or, as the user types, to ask them whether it is something they really want posted to their account. Keyword limitation has many well-known flaws, and you will often see users easily bypassing the system by replacing key letters with * or using purposeful misspellings. Parler’s CEO has recently stated that when and if Parler returns, it will implement keyword-based algorithmic moderation [source].
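A deliberately naive sketch shows why. The word list and function below are hypothetical, but the bypasses are exactly the ones users reach for in practice.

```swift
import Foundation

let blockedTerms = ["badword"]   // hypothetical blocklist

func violatesKeywordFilter(_ post: String) -> Bool {
    let lowered = post.lowercased()
    return blockedTerms.contains { lowered.contains($0) }
}

print(violatesKeywordFilter("this contains badword"))    // true: caught
print(violatesKeywordFilter("this contains b*dword"))    // false: bypassed with a substituted letter
print(violatesKeywordFilter("this contains baadword"))   // false: bypassed with a purposeful misspelling
```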

Regardless of which content moderation scheme you pick, at the end of the day the user’s account isn’t truly their account. It lives on your service, and you are responsible for ensuring the integrity of your service and the safety of your users. Do not be afraid to make sure that users who violate your terms of service multiple times are not allowed to keep abusing your platform. Legal counsel should be engaged to help you draft a Terms of Service agreement that is enforceable in the event that users abuse the platform or cause a safety issue.

Removal from the App Store

At MartianCraft, we’ve helped build several social networking platforms over the years. We often talk to our clients about the App Store Review Guidelines and, in turn, the requirements for moderating user-generated content. How Parler was able to bypass these basic rules, especially when the entire platform was pitched on a modus operandi of extremely limited moderation, is not known at this time. We would certainly be very interested to see its approval audit logs and whether the company received any pushback from Apple.

Apple issued a statement regarding the removal: “We have always supported diverse points of view being represented on the App Store, but there is no place on our platform for threats of violence and illegal activity,” an Apple representative said. “Parler has not taken adequate measures to address the proliferation of these threats to people’s safety. We have suspended Parler from the App Store until they resolve these issues.”

These guidelines have been in place since the App Store launched back in 2008. They have constantly evolved as the App Store has become more complex, and they serve as a way for Apple to keep its own users safe by moderating the apps that are available on its platform.

Section 1.2 of the guidelines spells out what is required of apps that host user-generated content. Regardless of the moderation scheme you choose from the previous section, it’s important to note that if content is offensive and a user reports it, there must be a mechanism in place to remove that content from the app and filter it so that it isn’t visible to other users.

This section of the App Store Review Guidelines was cited in Parler’s removal, along with several other safety violations from the first section of the guidelines.

1.2 User-Generated Content

Apps with user-generated content present particular challenges, ranging from intellectual property infringement to anonymous bullying. To prevent abuse, apps with user-generated content or social networking services must include:

- A method for filtering objectionable material from being posted to the app
- A mechanism to report offensive content and timely responses to concerns
- The ability to block abusive users from the service
- Published contact information so users can easily reach you

Apps with user-generated content or services that end up being used primarily for pornographic content, Chatroulette-style experiences, objectification of real people (e.g. “hot-or-not” voting), making physical threats, or bullying do not belong on the App Store and may be removed without notice. If your app includes user-generated content from a web-based service, it may display incidental mature “NSFW” content, provided that the content is hidden by default and only displayed when the user turns it on via your website.

Source

Removal from AWS and the reliance on third-party systems

Amazon removed Parler’s web and API presence from its platform a few days later, and it similarly issued a statement to Parler and the public regarding the decision to remove the platform from its hosting service.

“[W]e cannot provide services to a customer that is unable to effectively identify and remove content that encourages or incites violence against others … Because Parler cannot comply with our terms of service and poses a very real risk to public safety, we plan to suspend Parler’s account effective Sunday, January 10th, at 11:59PM PST.”

While we typically wouldn’t recommend that clients looking to build social networks maintain a backup web host alongside a provider like AWS (which hosts Twitter in its entirety), it is worth considering as a thought exercise. While MartianCraft works with a variety of clients across a spectrum of industries, we do have the luxury of being able to decline highly polarizing projects, or those that go against our values.

When your entire platform rests on a single vendor, even one as large and fault-tolerant as AWS, you are putting all of your eggs into one basket. If you are building a platform that is by nature designed to be a safe haven for speech deemed too controversial for other platforms [source], it may be worth considering some deeper redundancy. If a client were asking for our recommendation for a platform such as Parler, it would be safe to say we would recommend multiple CDN systems across various vendors, including several spread across multiple countries. One has to imagine a backup host in Switzerland or another country of that nature would not have been impacted as much. Even if all traffic needed to be piped overseas for a period of time, that is better than a multi-week total outage, which can effectively be a death blow to a new social network.

In Closing

Was Parler doomed to implode and fail? It’s hard to say; the platform itself was built on a very flimsy motto to begin with. To purposefully build a social network with the sole intention of hands-off moderation, while targeting users who are being banned by other platforms, is a tremendous wager. There is a huge and undeniable market for free, encrypted, and secure speech, and frustration with content moderation is growing across the world. Is a total lack of moderation the answer? If forced to predict, we at MartianCraft would say no, but we have been wrong with technical predictions in the past. What is more assured, and proven time and time again throughout history, is that when you cut corners with your technical approach, things will fail, and when they do, they will do so in spectacular fashion. Parler will go down in technical history, if for nothing else, for a complete failure to protect its users’ data, especially given the company’s entire stated policy.

We leave you today with a more lighthearted thought. Michael Crichton’s character John Hammond in Jurassic Park, who famously and repeatedly remarked “We spared no expense,” had his creation taken down after hiring only a single software developer, underpaying him, and ignoring his complaints and requests for additional compensation, leaving him to cut corners and ultimately betray his employer.

MartianCraft is a US-based mobile software development agency. For nearly two decades, we have been building world-class and award-winning mobile apps for all types of businesses. We would love to create a custom software solution that meets your specific needs. Let's get in touch.