Lifestyle

Long arm of big tech?

Wednesday, February 17th, 2021 14:05

By Allan Adalla

As social media becomes part of our daily lives, tech giants have found themselves in entanglements with the law, government officials and their own customers, who have had their accounts suspended or banned.

A user who incites violence, organises attacks or deliberately shares misinformation faces this penalty under the security policies governing the various platforms. Consumer tech expert and founding editor of Gadgets Africa, Saruni Maina, says these social networking platforms are responsible for all the content circulating on them.

On one hand, they are to blame whenever they allow damaging information to go public. He suggests that instead of shutting down user accounts, they should simply remove the misinformation.

“It’s a polarising issue. The tech companies also have a hand in the content that goes viral on their platforms. They should simply censor what users consume instead of banning them from accessing their accounts. When terrorists try using any social media platform to air their views or, in a more absurd way, ‘show their work’, the information should immediately be taken down,” he says.

On the other hand, Saruni says that social media consumers need to draw a line where moderation becomes censorship.

Stella Magana, a cybersecurity researcher at SheHacks, a community of women in cybersecurity in Kenya, says renowned names, along with fake accounts, have been purged from social media platforms in a bid to maintain the integrity of the services.

"Some prominent names have been liquidated from various social media platforms with others suspended indefinitely, in name of propagating harmful agendas that translate to real-life danger, riots and loss of life. This begs the question, what is the thin line between policing the masses and allowing for necessary revolutions take place?” she asks.

According to Twitter, most accounts are suspended because they are spammy, or just plain fake, and they introduce security risks for Twitter and all of its users. If the company suspects an account has been hacked or compromised, it may suspend it until it can be secured and restored to the account owner to reduce potentially malicious activity caused by the compromise. Similarly, it will suspend the account if it has been reported to violate rules surrounding abuse.

Dorothy Ooko, Google’s Head of Communications and Public Affairs for Africa, says the company is committed to free expression and access to information, but that does not mean anything goes on YouTube.

Google’s development team systematically reviews all its policies to ensure they are current, keep the community safe and do not stifle YouTube’s openness.

“The safety of our users has always been a priority. Since our earliest days, we’ve had Community Guidelines, or content policies, that govern what videos may stay on the site, and which we rigorously enforce. Our policies prohibit, among other things, gratuitous violence, nudity, dangerous and illegal activities, and hate speech,” she says, adding that they use a combination of technology and people to enforce these guidelines.

“We are always working to invest in and improve on our processes and technology to enforce our guidelines. Our content policies have had to evolve to tackle evolving threats. We go to great lengths to make sure content that breaks our rules isn’t widely viewed, or even viewed at all, before it’s removed,” Ooko explains, adding that Google recognises that dealing with these issues responsibly is a critical part of the role they play in society.

Facebook has been keen on curbing misinformation since the novel coronavirus was declared a public health emergency. Janet Kemboi, communications manager at Facebook Eastern Africa, says the company started removing false claims about Covid-19 vaccines in December.

Between March and October, the company removed more than 12 million pieces of such content from Facebook and Instagram, work made possible by more than 80 fact-checking partners covering more than 60 languages around the world.

“Our data shows how our efforts are working. In April (2020) alone, we put warning labels on about 50 million pieces of content based on around 7,500 fact-checks from partners. Ninety-five per cent of the time, people who saw the label didn’t click to view. Between March and October (2020), we put these warning labels on 167 million pieces of content.

"Another key piece to this work is effort we are making to connect people with authoritative sources of information. We have connected over two billion people to resources from health authorities through our Covid-19 Information Center and pop-ups on Facebook and Instagram with over 600 million people clicking to learn more.”

Magana is, however, quick to warn of the consequences of big tech suspending and banning user accounts. “These moves by tech giants have consequences, and some things are best left to stabilise. How far can and will they go to contain public opinion? Not so far, since technology is always changing and there just might be a ‘new next big thing’,” Magana says in conclusion.
