While everyone seems to acknowledge that social media played a pivotal role in the 2016 elections, a deeper analysis is needed. Both Trump and Clinton used social media extensively, but they used it differently. Those differences are crucial in understanding how Hillary Clinton might have used social media to change the outcome of the election.
We tend to think about social media in too broad a sweep. Not only do we need to look at how the candidates used Facebook versus Twitter (and other social media platforms), we also need to look at how they used each platform differently from one another.
Within each social media platform are a host of different functions and a wide range of audiences to target. Not only do candidates need to consider all of the different functions and audiences, but they need to consider which functions within each platform are the most useful for reaching which target audience.
Candidates must consider their goals for using each social media platform and each function within them. Among these goals are (in no particular order):
Organizing and mobilizing your base (distributed staff, volunteers and committed voters)
Marketing ideas (both your vision for the country and policy solutions)
Writing your own narrative
Writing your opponent's narrative
Creating real accessibility, transparency and accountability to the voters
What should immediately jump out from this list is that social media can be used for many of the essential goals of a campaign. That means if a campaign is not developing the comprehensive strategies, tactics, staffing and organization of staff to achieve all of these goals, they are leaving value on the table. There is much more in this list than can be covered in a single article, so what follows should be considered only a first swipe at the whole analysis.
One of the apparent differences in the way Trump used social media compared to Clinton is that he focused heavily on marketing the idea of his presidency, especially at the vision level, to the whole country (the outside game). By contrast, Clinton tended to focus on using social media to organize her supporters, focusing less on marketing ideas and more on getting people to show up for events and getting them to say they are “with her” (the inside game).
On an idea level alone, Trump's grand message was about what he was going to do for America, while Hillary's was about what America would do for her. If her slogan had been reversed, for example, to read "She's with Me," it might have resonated as broadly as "Make America Great Again" did.
It is ironic to note that the Trump formulation went against the JFK frame of “ask not what your country can do for you, but what you can do for your country” even while his policies were about government doing less for the country. And Clinton’s formulation took the JFK formulation of what you can “do for your country” and made it “what can you do for Hillary?”
These messaging differences matter tremendously when your primary points of contact with voters are 140-character tweets and 420-character Facebook wall posts. Sloganeering works on social media, but only if the slogan resonates with your audience. What we saw in this election was one candidate using effective slogans that were not backed up with substance and another candidate who was all substance with no effective slogans. What Clinton left on the table was the opportunity to create a powerful slogan that both resonated with voters and was backed up by substance.
On the message delivery side of social media, Clinton left huge opportunities on the table. In particular, her campaign’s under-utilization of Facebook Live is substantial. This is a classic case of trying to get a grip on technology that launches after a campaign is in full swing. While Facebook Live was initially launched for verified users on August 5, 2015, it was not made accessible to everyone until April 6, 2016.
Consider the 2008 Obama campaign as an example of the challenge presented by technology launched after a traditional presidential campaign is launched. About five weeks before Election Day in 2008, volunteer developers gave the Obama campaign an iPhone app. When asked why the campaign did not develop its own iPhone app, its point staffer for mobile strategy, Scott Goodstein, reminded everyone that when the budget for the campaign was developed (at the beginning of the campaign cycle), the iPhone did not exist.
Traditionally, presidential campaign budgets are set up front and fundraising is designed to meet the costs laid out in them. This is perhaps why Donald Trump’s non-traditional campaign was more willing and able to embrace Facebook Live than the Clinton campaign. As a result, Trump used Live to flood voters across the country with notifications about 21 broadcasts in the last four days of the campaign (amassing 45 million views), while Clinton used that Facebook function 3 times in the same stretch (with only 14 million views).
And Clinton left even more on the table when she missed the opportunity to simulcast her big GOTV concert to Jay Z, Beyoncé, LeBron James, Lady Gaga and Katy Perry's Facebook pages. Had she used a platform like Shindig, she could have pushed the feed out to all of their Facebook Live channels and their 135-million-plus page fans. Had she done that, she would have reached more people with that one Live broadcast than Trump did with all 21 of his final-stretch broadcasts. That is a lot of social media value left on the table.
Shortly after the 2008 Obama campaign, I hosted a panel of experts at the Center for American Progress to assess his use of digital strategy. That campaign was being hyped for its groundbreaking use of social media, mobile, email, web and online ads. But the consensus of the expert panel was that the campaign could have done better. Such a pronouncement is always easy in hindsight. But, as we saw in 2012, the Obama re-election campaign learned its lessons from 2008 and improved their campaign, elevating their former Chief Information Officer Michael Slaby to be Chief Integration Officer (as in data integration). From that lesson learned we got Narwhal and the birth of Big Data for Presidential elections.
But both the 2008 and 2012 Obama campaigns used social media as much to organize their supporters as to market their ideas to voters across the country. They used a resonating slogan that was backed up by substance. And it worked. It was the combination of the inside and outside games that created Obama's success.
Earlier in the 2016 campaign, it seemed that Trump was all about the social media outside game, using it to market his vision to the whole country, but with very little apparent inside game organizing his supporters on the ground. Meanwhile, Clinton was hard at work developing and implementing a comprehensive inside game, using social media and video to talk to her supporters, while struggling to resonate outside of her base in the face of Trump's ability to define her to the country in his outside game. And then, seemingly out of nowhere, emerged Trump's inside game in the final stretch. It took nearly everyone by surprise (maybe even Trump himself). And with his ultimate combination of an outside and inside game, driven to a significant degree by social media, Trump won.
The Boston Police Department is taking heat from civil liberty groups for plans to spend up to $1.4 million on new software that scours social media and the internet for potential threats.
The attack Monday on the Ohio State University campus is just the latest illustration of why local law enforcement authorities need every tool they can muster to stop terrorism and other violence before it starts, according to Boston Police Commissioner William Evans.
Monitoring technology can quickly mine the internet, from chat rooms to social media to blog posts, for certain keywords and phrases. It can track postings in a certain geographic area, send alerts to police about potentially dangerous postings and more. Law-enforcement officials say the technology allows them to more quickly and efficiently spot possible red flags in near real-time.
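The filtering described above, matching watch-list keywords and restricting hits to a geographic area, can be sketched in a few lines. Everything here is an illustrative assumption: the watch list, the post data model, and the function names are invented for the example and are not taken from any actual monitoring product.

```python
# Minimal sketch of keyword/geography filtering of public posts.
# Watch terms and data model are hypothetical.

WATCH_TERMS = {"attack", "bomb"}  # hypothetical watch list

def in_region(post, bbox):
    """True if the post's coordinates fall inside a
    (lat_min, lat_max, lon_min, lon_max) bounding box."""
    lat_min, lat_max, lon_min, lon_max = bbox
    return lat_min <= post["lat"] <= lat_max and lon_min <= post["lon"] <= lon_max

def flag_posts(posts, terms=WATCH_TERMS, bbox=None):
    """Return posts containing any watch term, optionally restricted
    to a geographic bounding box, for human review."""
    hits = []
    for post in posts:
        text = post["text"].lower()
        if any(term in text for term in terms):
            if bbox is None or in_region(post, bbox):
                hits.append(post)
    return hits

posts = [
    {"text": "Great concert tonight", "lat": 42.36, "lon": -71.06},
    {"text": "planning an attack", "lat": 42.35, "lon": -71.07},
]
boston_bbox = (42.2, 42.4, -71.2, -70.9)  # rough illustrative box
print(flag_posts(posts, bbox=boston_bbox))
```

Real systems layer ranking, entity resolution and analyst alerting on top of this, but the core remains a match-and-filter pipeline of this shape.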
Officials say the Ohio State suspect may have been inspired by the Islamic State terror group. A Facebook post by the suspect Abdul Razak Ali Artan before the attack suggested he was angry over what he perceived as mistreatment of Muslims, but didn’t express loyalty to a specific group or ideology, according to people familiar with the case.
Sharing Islamic State propaganda by itself isn’t a crime. But if someone is making threatening posts, police might then use informants or other means, including more surveillance or seeking court permission to monitor phones or computers, to gauge how serious the person is. “The more you know about someone, the more you can make informed decisions about how many resources to put into those people,” said Edward Davis, the Boston police commissioner during the 2013 Boston marathon bombings.
It is hard to say whether monitoring would have made a difference in thwarting the Boston bombers, who were allegedly motivated by online anti-U.S. jihadist teachings, because the bombers weren’t very active on social media, said Mr. Davis.
As has often been said, with great power comes great responsibility. As we saw in the recent election, social media is a great example of a powerful medium that can change minds and change lives but can also lend credibility to false or misleading information.
As someone diagnosed with Parkinson’s disease (PD) nine years ago, I’ve been thrilled at seeing social media’s growing power as an agent for good. As our advocacy community has grown, social media has allowed for more information to be circulated in the PD community than ever before, and has become a vital link through which we share experiences, raise awareness about quality of life issues, point people to clinical trials, spread knowledge about cutting-edge research and, maybe most importantly, raise critical dollars to fund it. Connecting our community more tightly together has underscored the important role each of us can play in finding an eventual cure.
A downside to the awesome power of this platform comes from not knowing or perhaps not caring about the source of information shared on social media. Just as “fake news” has flourished in an environment where speed, rather than accuracy, is what counts, patients — who are understandably vulnerable to hopeful reports about their disease — must recognize that not everything they read is equally credible. In my years of advocating for PD-related causes, hundreds of so-called “miracles” have been announced, all of which have proven to have disappointing results.
Recently there was a flurry of media coverage and excitement in the patient community about a small, open-label study in which people with PD were given a cancer drug called nilotinib. The majority reported major improvement in their PD symptoms. Recognizing a good story, highly respected media outlets including NPR featured hopeful accounts of patients who saw nilotinib as a miracle drug. Social media channels lit up with the news, causing many in the PD community to ask their physicians to prescribe it without knowing whether the drug would be safe or effective for long-term use in PD. One comment that caught my attention on social media: “I don’t understand what the problem is. It works. Give it to everybody with PD.”
Somehow, lost in all the hype was the fact that the study included only 12 people with PD given nilotinib for six months (one of whom died during the study) — not a scientifically valid sample size or study design. Also lost in translation was the fact that nilotinib, when used in cancer, carries a black-box warning regarding life-threatening risks associated with the drug. Even as someone with no scientific background but a keen interest in research, I wouldn't have given these results a second look. Instead, I was frustrated and saddened to learn that so many people had pinned their hopes on such flimsy research.

Sound, responsible science needs to be held to the highest standards, and I wish that more attention had been paid to the study's early stage; if it had, I believe the response would have been more measured. There is a purpose and a reason that protocols for human research have developed, and that is to balance the need to move as quickly as possible toward effective treatments against the need to ensure the safety of everyone who takes a drug.
The research community is working on behalf of all of us, and we owe them our gratitude. Nonetheless, scientists must be vigilant in their commitment to present promising early information only alongside a full accounting of possible risk.
As patients, we fervently wish that finding the “miracle cure” were easy. But in our heart of hearts, we know there is much more involved in finding out if a treatment is effective in slowing or stopping the progression of a disease. We, too, have a responsibility to educate ourselves, ask the tough questions, and balance the hype even as we allow ourselves cautious optimism about possible steps forward.
The PD community has accomplished so much: We are close to unraveling the underlying cellular mechanisms of PD, and to vastly improved solutions for diagnosing, treating and managing the disease. Breakthroughs are sure to come if we stay invested in the process; we each have an important part to play. Nevertheless, our failure to date to find a cure leaves us vulnerable, not just in our bodies, but also to false hope.
I ask the PD community — including family and friends — to take a more tempered approach to what we read and to the standards by which we judge it, especially in the online world. Our need is great, and our responsibility is greater. To the media and the grassroots social media community, please, dig deeper and make sure what you promote is the real deal. We want to believe you.
In line with the central government's Digital India initiative, the Goa Government will on Monday launch its ambitious scheme providing limited free talk time and Internet data to the youth in the state.
“Goa Yuva Sanchar Yojana will be unveiled on Monday. It will be launched across the state providing 100 minutes talk time and 3 GB Internet data free with a SIM card (every month) for youths aged between 16-30 years,” Chief Minister Laxmikant Parsekar told PTI.
He said the government has tied up with private mobile services provider Vodafone for the scheme which will benefit nearly 1.25 lakh youth. Parsekar said the state Cabinet on November 25 had granted its approval to the programme which is in line with the Digital India initiative of the Narendra Modi Government.
The Chief Minister said the government has made a provision to discontinue the scheme for any beneficiary found to be misusing it.
The Digital India initiative is a flagship programme of the Modi Government that seeks to transform the country into a digitally-empowered society and knowledge economy. Launched in July last year, the programme aims to provide a much-needed thrust to areas like broadband highways, universal access to mobile connectivity, public Internet access programme and e-Governance.
Authors: Yiping Huang, Yan Shen, Jingyi Wang and Feng Guo, Peking University
Internet finance in China has developed rapidly over the past several years. Internet finance embraces a wide range of activities, including third-party payment, online lending, direct sales of funds, crowd-funding, online insurance and banking and digital money. Public sentiment towards internet finance has moved the full gamut from fever pitch to fear. What is the future of internet finance?
Optimists argue that internet finance represents a third type of financial intermediation, after direct and indirect financing, which could completely revamp traditional financial industries.
Others point out that internet finance is mainly a Chinese phenomenon and is a product of regulatory arbitrage, which could evaporate once financial regulations tighten to levels equivalent with those in advanced economies. From mid-2015, as internet finance grew and the associated risks escalated — evidenced by growing numbers of problematic peer-to-peer (P2P) lending platforms — these pessimistic assessments gained traction among financial industry practitioners, scholars and even government officials.
Qilun Wu, a famous financial commentator on Chinese microblogging website Weibo, has argued that P2P lending is a Ponzi scheme. Many share Wu’s opinion. But is this really the case?
The rise of internet finance in China has been triggered by at least three factors. First, repressive financial policy produces an undersupply of financial services, especially for small- and medium-sized enterprises (SMEs) and low-income households. This leaves a hole in the financial market. Second, regulatory tolerance has provided space for internet finance to emerge and grow. And third, IT tools, especially mobile terminals and big data analysis, increasingly offer effective ways for internet finance to increase its efficiency and control financial risk.
A distinct advantage of internet finance over traditional finance is that it substantially reduces due-diligence costs. As a result, financial transactions that would otherwise be commercially unviable in the traditional financial industry are enabled. More importantly, the long-tail feature of internet technology implies that, once the system is established, the marginal cost of servicing additional customers is close to zero. So, the internet has a natural fit with the evolution of more inclusive finance in China. In this sense, internet finance is indeed a real innovation and not a passing bubble as some fear.
This suggests that internet finance has the potential to add real value to financial transactions, especially via enabling commercially profitable transactions from demand for credit that previously was commercially unviable. Whether this potential can turn into real business is dependent on several conditions, the most critical of which is the internet’s ability to continue to reduce information asymmetry for financial transactions.
For internet finance to work, institutional market participants need to have sufficient understanding of finance, access to internet channels, a quality dataset and data analysis capacities. Against these straightforward criteria, many practitioners of internet finance in China today are probably not ideally qualified. This, rather than the core of the industry itself, is why the internet finance industry is suffering from bubbles and scandal at present.
Internet finance in China fills an important gap in the market by extending financial services to customers who are insufficiently serviced by the traditional financial industry. And, it facilitates financial transactions in general by lowering costs and reducing risks through better use of customer analytics data — by reducing information asymmetry. On these two counts, internet finance offers genuine innovation.
If these features can be further improved and strengthened, internet finance should survive, especially as a form of more inclusive finance, whether or not regulations are tightened significantly. It is even possible that China is leading a new product cycle globally in this pioneering area.
As with any young but promising new industry, the risks are high for China's internet finance sector. Many investors chase quick money through either blind optimism or Ponzi schemes. At the end of the 20th century, the United States also experienced a collapse of its internet bubble, but there was real and lasting innovation: a number of global leaders such as Google and Amazon rose from the ashes. At this stage, there are no guarantees that any given internet finance firm will survive, just as only select internet-based IT platforms survived 15 years ago.
To ensure the healthy development of internet finance, important conditions need to be met in at least three areas. The first requires there to be a set of ‘infrastructure’ facilities. At the minimum, this requires a network of mobile terminals or the ability to analyse available data, or both. Strictly speaking, big data still does not exist in China. The government may need to introduce a policy framework to both make useful data publicly accessible and safeguard individuals’ privacy. A credible and integrated credit reporting system for individuals and SMEs would also be valuable for internet finance credit allocation decisions.
The second condition concerns regulation of financial qualifications for industry participants. The essence of internet finance is the financial transaction. So it is vital for internet finance professionals to have a good understanding of finance, especially the related risks. A lot of problems in the internet finance sector were created by professionals who did not understand or respect basic financial rules and principles.
The third condition relates to a regulatory framework that strikes a balance between encouraging innovation and controlling risks. Internet finance is still finance, and financial transactions need to be appropriately regulated. Both too little and too much regulation could hinder the otherwise beneficial evolution of internet finance.
Yiping Huang, Yan Shen, Jingyi Wang and Feng Guo are research fellows at the Institute of Internet Finance, Peking University.
Here’s what you don’t want to do late on a Sunday night. You do not want to type seven letters into Google. That’s all I did. I typed: “a-r-e”. And then “j-e-w-s”. Since 2008, Google has attempted to predict what question you might be asking and offers you a choice. And this is what it did. It offered me a choice of potential questions it thought I might want to ask: “are jews a race?”, “are jews white?”, “are jews christians?”, and finally, “are jews evil?”
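Mechanically, this kind of suggestion feature is prefix completion over logged queries, ranked by some popularity signal. The sketch below uses a toy query log with made-up, neutral queries; Google's actual ranking draws on far more signals than raw frequency, so this is only the skeleton of the idea.

```python
from collections import Counter

# Hypothetical query log: query -> how often it was typed.
query_log = Counter({
    "are cats nocturnal": 50,
    "are cats smart": 30,
    "weather tomorrow": 100,
})

def suggest(prefix, log=query_log, k=4):
    """Return up to k logged queries that start with `prefix`,
    most frequent first."""
    matches = [(q, n) for q, n in log.items() if q.startswith(prefix)]
    matches.sort(key=lambda item: -item[1])
    return [q for q, _ in matches[:k]]

print(suggest("are cats"))  # most popular completions of the prefix
```

The troubling behaviour the article documents follows directly from this design: whatever queries are most popular for a prefix, however hateful, become the suggestions, unless the system explicitly filters them.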
Are Jews evil? It’s not a question I’ve ever thought of asking. I hadn’t gone looking for it. But there it was. I press enter. A page of results appears. This was Google’s question. And this was Google’s answer: Jews are evil. Because there, on my screen, was the proof: an entire page of results, nine out of 10 of which “confirm” this. The top result, from a site called Listovative, has the headline: “Top 10 Major Reasons Why People Hate Jews.” I click on it: “Jews today have taken over marketing, militia, medicinal, technological, media, industrial, cinema challenges etc and continue to face the worlds [sic] envy through unexplained success stories given their inglorious past and vermin like repression all over Europe.”
Google is search. It’s the verb, to Google. It’s what we all do, all the time, whenever we want to know anything. We Google it. The site handles at least 63,000 searches a second, 5.5bn a day. Its mission as a company, the one-line overview that has informed the company since its foundation and is still the banner headline on its corporate website today, is to “organise the world’s information and make it universally accessible and useful”. It strives to give you the best, most relevant results. And in this instance the third-best, most relevant result to the search query “are Jews… ” is a link to an article from stormfront.org, a neo-Nazi website. The fifth is a YouTube video: “Why the Jews are Evil. Why we are against them.”
The sixth is from Yahoo Answers: “Why are Jews so evil?” The seventh result is: “Jews are demonic souls from a different world.” And the 10th is from jesus-is-saviour.com: “Judaism is Satanic!”
There’s one result in the 10 that offers a different point of view. It’s a link to a rather dense, scholarly book review from thetabletmag.com, a Jewish magazine, with the unfortunately misleading headline: “Why Literally Everybody In the World Hates Jews.”
I feel like I’ve fallen down a wormhole, entered some parallel universe where black is white, and good is bad. Though later, I think that perhaps what I’ve actually done is scraped the topsoil off the surface of 2016 and found one of the underground springs that has been quietly nurturing it. It’s been there all the time, of course. Just a few keystrokes away… on our laptops, our tablets, our phones. This isn’t a secret Nazi cell lurking in the shadows. It’s hiding in plain sight.
Stories about fake news on Facebook have dominated certain sections of the press for weeks following the American presidential election, but arguably this is even more powerful, more insidious. Frank Pasquale, professor of law at the University of Maryland, and one of the leading academic figures calling for tech companies to be more open and transparent, calls the results “very profound, very troubling”.
He came across a similar instance in 2006 when, “If you typed ‘Jew’ in Google, the first result was jewwatch.org. It was ‘look out for these awful Jews who are ruining your life’. And the Anti-Defamation League went after them and so they put an asterisk next to it which said: ‘These search results may be disturbing but this is an automated process.’ But what you’re showing – and I’m very glad you are documenting it and screenshotting it – is that despite the fact they have vastly researched this problem, it has gotten vastly worse.”
And ordering of search results does influence people, says Martin Moore, director of the Centre for the Study of Media, Communication and Power at King’s College, London, who has written at length on the impact of the big tech companies on our civic and political spheres. “There’s large-scale, statistically significant research into the impact of search results on political views. And the way in which you see the results and the types of results you see on the page necessarily has an impact on your perspective.” Fake news, he says, has simply “revealed a much bigger problem. These companies are so powerful and so committed to disruption. They thought they were disrupting politics but in a positive way. They hadn’t thought about the downsides. These tools offer remarkable empowerment, but there’s a dark side to it. It enables people to do very cynical, damaging things.”
Google is knowledge. It’s where you go to find things out. And evil Jews are just the start of it. There are also evil women. I didn’t go looking for them either. This is what I type: “a-r-e w-o-m-e-n”. And Google offers me just two choices, the first of which is: “Are women evil?” I press return. Yes, they are. Every one of the 10 results “confirms” that they are, including the top one, from a site called sheddingoftheego.com, which is boxed out and highlighted: “Every woman has some degree of prostitute in her. Every woman has a little evil in her… Women don’t love men, they love what they can do for them. It is within reason to say women feel attraction but they cannot love men.”
Next I type: “a-r-e m-u-s-l-i-m-s”. And Google suggests I should ask: “Are Muslims bad?” And here’s what I find out: yes, they are. That’s what the top result says and six of the others. Without typing anything else, simply putting the cursor in the search box, Google offers me two new searches and I go for the first, “Islam is bad for society”. In the next list of suggestions, I’m offered: “Islam must be destroyed.”
Jews are evil. Muslims need to be eradicated. And Hitler? Do you want to know about Hitler? Let’s Google it. “Was Hitler bad?” I type. And here’s Google’s top result: “10 Reasons Why Hitler Was One Of The Good Guys” I click on the link: “He never wanted to kill any Jews”; “he cared about conditions for Jews in the work camps”; “he implemented social and cultural reform.” Eight out of the other 10 search results agree: Hitler really wasn’t that bad.
A few days later, I talk to Danny Sullivan, the founding editor of SearchEngineLand.com. He’s been recommended to me by several academics as one of the most knowledgeable experts on search. Am I just being naive, I ask him? Should I have known this was out there? “No, you’re not being naive,” he says. “This is awful. It’s horrible. It’s the equivalent of going into a library and asking a librarian about Judaism and being handed 10 books of hate. Google is doing a horrible, horrible job of delivering answers here. It can and should do better.”
He’s surprised too. “I thought they stopped offering autocomplete suggestions for religions in 2011.” And then he types “are women” into his own computer. “Good lord! That answer at the top. It’s a featured result. It’s called a ‘direct answer’. This is supposed to be indisputable. It’s Google’s highest endorsement.” That every woman has some degree of prostitute in her? “Yes. This is Google’s algorithm going terribly wrong.”
I contacted Google about its seemingly malfunctioning autocomplete suggestions and received the following response: “Our search results are a reflection of the content across the web. This means that sometimes unpleasant portrayals of sensitive subject matter online can affect what search results appear for a given query. These results don’t reflect Google’s own opinions or beliefs – as a company, we strongly value a diversity of perspectives, ideas and cultures.”
Google isn’t just a search engine, of course. Search was the foundation of the company but that was just the beginning. Alphabet, Google’s parent company, now has the greatest concentration of artificial intelligence experts in the world. It is expanding into healthcare, transportation, energy. It’s able to attract the world’s top computer scientists, physicists and engineers. It’s bought hundreds of start-ups, including Calico, whose stated mission is to “cure death” and DeepMind, which aims to “solve intelligence”.
And 20 years ago it didn’t even exist. When Tony Blair became prime minister, it wasn’t possible to Google him: the search engine had yet to be invented. The company was only founded in 1998 and Facebook didn’t appear until 2004. Google’s founders Sergey Brin and Larry Page are still only 43. Mark Zuckerberg of Facebook is 32. Everything they’ve done, the world they’ve remade, has been done in the blink of an eye.
But it seems the implications of the power and reach of these companies are only now seeping into the public consciousness. I ask Rebecca MacKinnon, director of the Ranking Digital Rights project at the New America Foundation, whether it was the recent furore over fake news that woke people up to the danger of ceding our rights as citizens to corporations. “It’s kind of weird right now,” she says, “because people are finally saying, ‘Gee, Facebook and Google really have a lot of power’ like it’s this big revelation. And it’s like, ‘D’oh.’”
MacKinnon has a particular expertise in how authoritarian governments adapt to the internet and bend it to their purposes. “China and Russia are a cautionary tale for us. I think what happens is that it goes back and forth. So during the Arab spring, it seemed like the good guys were further ahead. And now it seems like the bad guys are. Pro-democracy activists are using the internet more than ever but at the same time, the adversary has gotten so much more skilled.”
Last week Jonathan Albright, an assistant professor of communications at Elon University in North Carolina, published the first detailed research on how rightwing websites had spread their message. “I took a list of these fake news sites that was circulating, I had an initial list of 306 of them and I used a tool – like the one Google uses – to scrape them for links and then I mapped them. So I looked at where the links went – into YouTube and Facebook, and between each other, millions of them… and I just couldn’t believe what I was seeing.
“They have created a web that is bleeding through on to our web. This isn’t a conspiracy. There isn’t one person who’s created this. It’s a vast system of hundreds of different sites that are using all the same tricks that all websites use. They’re sending out thousands of links to other sites and together this has created a vast satellite system of rightwing news and propaganda that has completely surrounded the mainstream media system.”
He found 23,000 pages and 1.3m hyperlinks. “And Facebook is just the amplification device. When you look at it in 3D, it actually looks like a virus. And Facebook was just one of the hosts for the virus that helps it spread faster. You can see the New York Times in there and the Washington Post and then you can see how there’s a vast, vast network surrounding them. The best way of describing it is as an ecosystem. This really goes way beyond individual sites or individual stories. What this map shows is the distribution network and you can see that it’s surrounding and actually choking the mainstream news ecosystem.”
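The link-mapping step Albright describes – scraping pages for hyperlinks and turning them into a site-to-site graph – can be sketched in a few lines. This is a minimal illustration, not the tool he actually used; the site names and HTML snippets below are hypothetical.

```python
# Sketch of link extraction and site-to-site graph building.
# The domains and HTML here are invented for illustration.
from html.parser import HTMLParser
from urllib.parse import urlparse
from collections import defaultdict

class LinkExtractor(HTMLParser):
    """Collect href values from every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outbound_domains(html):
    """Return the domains a page links out to."""
    parser = LinkExtractor()
    parser.feed(html)
    return [urlparse(link).netloc
            for link in parser.links if urlparse(link).netloc]

# Build a directed "who links to whom" graph from crawled pages.
graph = defaultdict(set)
pages = {
    "fakesite-a.example": '<a href="http://fakesite-b.example/story">x</a>'
                          '<a href="https://www.youtube.com/watch?v=1">y</a>',
    "fakesite-b.example": '<a href="http://fakesite-a.example/">back</a>',
}
for site, html in pages.items():
    for domain in outbound_domains(html):
        graph[site].add(domain)

print(dict(graph))
```

On a real crawl, the `pages` dict would be filled by fetching each site’s HTML, and the resulting graph could be fed into a network-analysis tool to see which domains sit at the centre of the ecosystem.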
Like a cancer? “Like an organism that is growing and getting stronger all the time.”
Charlie Beckett, a professor in the school of media and communications at LSE, tells me: “We’ve been arguing for some time now that plurality of news media is good. Diversity is good. Critiquing the mainstream media is good. But now… it’s gone wildly out of control. What Jonathan Albright’s research has shown is that this isn’t a byproduct of the internet. And it’s not even being done for commercial reasons. It’s motivated by ideology, by people who are quite deliberately trying to destabilise the internet.”
Albright’s map also provides a clue to understanding the Google search results I found. What these rightwing news sites have done, he explains, is what most commercial websites try to do. They try to find the tricks that will move them up Google’s PageRank system. They try and “game” the algorithm. And what his map shows is how well they’re doing that.
That’s what my searches are showing too. That the right has colonised the digital space around these subjects – Muslims, women, Jews, the Holocaust, black people – far more effectively than the liberal left.
“It’s an information war,” says Albright. “That’s what I keep coming back to.”
But it’s where it goes from here that’s truly frightening. I ask him how it can be stopped. “I don’t know. I’m not sure it can be. It’s a network. It’s far more powerful than any one actor.”
So, it’s almost got a life of its own? “Yes, and it’s learning. Every day, it’s getting stronger.”
The more people who search for information about Jews, the more people will see links to hate sites, and the more they click on those links (very few people click on to the second page of results) the more traffic the sites will get, the more links they will accrue and the more authoritative they will appear. This is an entirely circular knowledge economy that has only one outcome: an amplification of the message. Jews are evil. Women are evil. Islam must be destroyed. Hitler was one of the good guys.
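The circular dynamic described above – higher rank draws more clicks, and clicks in turn feed rank – can be illustrated with a toy simulation. The starting scores and the 90/10 click split are illustrative assumptions, not a model of Google’s actual algorithm; the point is only that a small initial lead compounds.

```python
# Toy model of the rank/click feedback loop: the top-ranked site
# captures most clicks, and every click raises its score further.
def simulate(scores, rounds, clicks_per_round=100):
    scores = list(scores)
    n = len(scores)
    for _ in range(rounds):
        top = scores.index(max(scores))
        for i in range(n):
            # "Very few people click on to the second page": the top
            # result takes the overwhelming majority of clicks.
            share = 0.9 if i == top else 0.1 / (n - 1)
            scores[i] += clicks_per_round * share
    return scores

# Two sites start almost level; the small head start compounds.
start = [105.0, 100.0]
end = simulate(start, rounds=50)
print(f"leader's share: {start[0]/sum(start):.3f} -> {end[0]/sum(end):.3f}")
```

Under these assumptions the leading site’s share of the total grows round after round, which is the amplification the paragraph describes.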
And the constellation of websites that Albright found – a sort of shadow internet – has another function. More than just spreading rightwing ideology, they are being used to track and monitor and influence anyone who comes across their content. “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go.”
Cambridge Analytica, an American-owned company based in London, was employed by both the Vote Leave campaign and the Trump campaign. Dominic Cummings, the campaign director of Vote Leave, has made few public announcements since the Brexit referendum but he did say this: “If you want to make big improvements in communication, my advice is – hire physicists.”
Steve Bannon, founder of Breitbart News and the newly appointed chief strategist to Trump, is on Cambridge Analytica’s board and it has emerged that the company is in talks to undertake political messaging work for the Trump administration. It claims to have built psychological profiles using 5,000 separate pieces of data on 220 million American voters. It knows their quirks and nuances and daily habits and can target them individually.
“They were using 40-50,000 different variants of ads every day that were continuously measuring responses and then adapting and evolving based on that response,” says Martin Moore of King’s College London. Because they have so much data on individuals and they use such phenomenally powerful distribution networks, they allow campaigns to bypass a lot of existing laws.
“It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius or even a single demographic. Fake news is important but it’s only one part of it. These companies have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”
Did such micro-targeted propaganda – currently legal – swing the Brexit vote? We have no way of knowing. Did the same methods used by Cambridge Analytica help Trump to victory? Again, we have no way of knowing. This is all happening in complete darkness. We have no way of knowing how our personal data is being mined and used to influence us. We don’t realise that the Facebook page we are looking at, the Google page, the ads that we are seeing, the search results we are using, are all being personalised to us. We don’t see it because we have nothing to compare it to. And it is not being monitored or recorded. It is not being regulated. We are inside a machine and we simply have no way of seeing the controls. Most of the time, we don’t even realise that there are controls.

But we don’t know what choices they are making. Neither Google nor Facebook makes its algorithms public. Why did my Google search return nine out of 10 search results that claim Jews are evil? We don’t know and we have no way of knowing. Their systems are what Frank Pasquale describes as “black boxes”. He calls Google and Facebook “a terrifying duopoly of power” and has been leading a growing movement of academics who are calling for “algorithmic accountability”. “We need to have regular audits of these systems,” he says. “We need people in these companies to be accountable. In the US, under the Digital Millennium Copyright Act, every company has to have a spokesman you can reach. And this is what needs to happen. They need to respond to complaints about hate speech, about bias.”
Is bias built into the system? Does it affect the kind of results that I was seeing? “There’s all sorts of bias about what counts as a legitimate source of information and how that’s weighted. There’s enormous commercial bias. And when you look at the personnel, they are young, white and perhaps Asian, but not black or Hispanic and they are overwhelmingly men. The worldview of young wealthy white men informs all these judgments.”
Later, I speak to Robert Epstein, a research psychologist at the American Institute for Behavioral Research and Technology, and the author of the study that Martin Moore told me about (and that Google has publicly criticised), showing how search-rank results affect voting patterns. On the other end of the phone, he repeats one of the searches I did. He types “do blacks…” into Google.
“Look at that. I haven’t even hit a button and it’s automatically populated the page with answers to the query: ‘Do blacks commit more crimes?’ And look, I could have been going to ask all sorts of questions. ‘Do blacks excel at sports’, or anything. And it’s only given me two choices and these aren’t simply search-based or the most searched terms right now. Google used to use that but now they use an algorithm that looks at other things. Now, let me look at Bing and Yahoo. I’m on Yahoo and I have 10 suggestions, not one of which is ‘Do black people commit more crime?’
“And people don’t question this. Google isn’t just offering a suggestion. This is a negative suggestion and we know that negative suggestions, depending on lots of things, can draw between five and 15 times more clicks. And this is all programmed. And it could be programmed differently.”
What Epstein’s work has shown is that the contents of a page of search results can influence people’s views and opinions. The type and order of search rankings was shown to influence voters in India in double-blind trials. There were similar results relating to the search suggestions you are offered.
“The general public are completely in the dark about very fundamental issues regarding online search and influence. We are talking about the most powerful mind-control machine ever invented in the history of the human race. And people don’t even notice it.”
Damien Tambini, an associate professor at the London School of Economics, who focuses on media regulation, says that we lack any sort of framework to deal with the potential impact of these companies on the democratic process. “We have structures that deal with powerful media corporations. We have competition laws. But these companies are not being held responsible. There are no powers to get Google or Facebook to disclose anything. There’s an editorial function to Google and Facebook but it’s being done by sophisticated algorithms. They say it’s machines not editors. But that’s simply a mechanised editorial function.”
And the companies, says John Naughton, the Observer columnist and a senior research fellow at Cambridge University, are terrified of acquiring editorial responsibilities they don’t want. “Though they can and regularly do tweak the results in all sorts of ways.”
Certainly the results about Google on Google don’t seem entirely neutral. Google “Is Google racist?” and the featured result – the Google answer boxed out at the top of the page – is quite clear: no. It is not.
But the enormity and complexity of having two global companies of a kind we have never seen before influencing so many areas of our lives is such, says Naughton, that “we don’t even have the mental apparatus to even know what the problems are”.
And this is especially true of the future. Google and Facebook are at the forefront of AI. They are going to own the future. And the rest of us can barely start to frame the sorts of questions we ought to be asking. “Politicians don’t think long term. And corporations don’t think long term because they’re focused on the next quarterly results and that’s what makes Google and Facebook interesting and different. They are absolutely thinking long term. They have the resources, the money, and the ambition to do whatever they want.
“They want to digitise every book in the world: they do it. They want to build a self-driving car: they do it. The fact that people are reading about these fake news stories and realising that this could have an effect on politics and elections, it’s like, ‘Which planet have you been living on?’ For Christ’s sake, this is obvious.”
“The internet is among the few things that humans have built that they don’t understand.” It is “the largest experiment involving anarchy in history. Hundreds of millions of people are, each minute, creating and consuming an untold amount of digital content in an online world that is not truly bound by terrestrial laws.” The internet as a lawless anarchic state? A massive human experiment with no checks and balances and untold potential consequences? What kind of digital doom-mongerer would say such a thing? Step forward, Eric Schmidt – Google’s chairman. They are the first lines of the book, The New Digital Age, that he wrote with Jared Cohen.
We don’t understand it. It is not bound by terrestrial laws. And it’s in the hands of two massive, all-powerful corporations. It’s their experiment, not ours. The technology that was supposed to set us free may well have helped Trump to power, or covertly helped swing votes for Brexit. It has created a vast network of propaganda that has encroached like a cancer across the entire internet. This is a technology that has enabled the likes of Cambridge Analytica to create political messages uniquely tailored to you. They understand your emotional responses and how to trigger them. They know your likes, dislikes, where you live, what you eat, what makes you laugh, what makes you cry.
And what next? Rebecca MacKinnon’s research has shown how authoritarian regimes reshape the internet for their own purposes. Is that what’s going to happen with Silicon Valley and Trump? As Martin Moore points out, the president-elect claimed that Apple chief executive Tim Cook called to congratulate him soon after his election victory. “And there will undoubtedly be pressure on them to collaborate,” says Moore.
Journalism is failing in the face of such change and is only going to fail further. New platforms have put a bomb under the financial model – advertising – resources are shrinking, traffic is increasingly dependent on them, and publishers have no access, no insight at all, into what these platforms are doing in their headquarters, their labs. And now they are moving beyond the digital world into the physical. The next frontiers are healthcare, transportation, energy. And just as Google is a near-monopoly for search, its ambition to own and control the physical infrastructure of our lives is what’s coming next. It already owns our data and with it our identity. What will it mean when it moves into all the other areas of our lives?
“At the moment, there’s a distance when you Google ‘Jews are’ and get ‘Jews are evil’,” says Julia Powles, a researcher at Cambridge on technology and law. “But when you move into the physical realm, and these concepts become part of the tools being deployed when you navigate around your city or influence how people are employed, I think that has really pernicious consequences.”
Powles is shortly to publish a paper looking at DeepMind’s relationship with the NHS. “A year ago, 2 million Londoners’ NHS health records were handed over to DeepMind. And there was complete silence from politicians, from regulators, from anyone in a position of power. This is a company without any healthcare experience being given unprecedented access into the NHS and it took seven months to even know that they had the data. And that took investigative journalism to find it out.”
The headline was that DeepMind was going to work with the NHS to develop an app that would provide early warning for sufferers of kidney disease. And it is, but DeepMind’s ambitions – “to solve intelligence” – go way beyond that. The entire history of 2 million NHS patients is, for artificial intelligence researchers, a treasure trove. And their entry into the NHS – providing useful services in exchange for our personal data – is another massive step in their power and influence in every part of our lives.
Because the stage beyond search is prediction. Google wants to know what you want before you know yourself. “That’s the next stage,” says Martin Moore. “We talk about the omniscience of these tech giants, but that omniscience takes a huge step forward again if they are able to predict. And that’s where they want to go. To predict diseases in health. It’s really, really problematic.”
For the nearly 20 years that Google has been in existence, our view of the company has been inflected by the youth and liberal outlook of its founders. Ditto Facebook, whose mission, Zuckerberg said, was not to be “a company. It was built to accomplish a social mission to make the world more open and connected.”
It would be interesting to know how he thinks that’s working out. Donald Trump is connecting through exactly the same technology platforms that supposedly helped fuel the Arab spring; connecting to racists and xenophobes. And Facebook and Google are amplifying and spreading that message. And us too – the mainstream media. Our outrage is just another node on Jonathan Albright’s data map.
“The more we argue with them, the more they know about us,” he says. “It all feeds into a circular system. What we’re seeing here is a new era of network propaganda.”
We are all points on that map. And our complicity, our credulity, being consumers not concerned citizens, is an essential part of that process. And what happens next is down to us. “I would say that everybody has been really naive and we need to reset ourselves to a much more cynical place and proceed on that basis,” is Rebecca MacKinnon’s advice. “There is no doubt that where we are now is a very bad place. But it’s we as a society who have jointly created this problem. And if we want to get to a better place, when it comes to having an information ecosystem that serves human rights and democracy instead of destroying it, we have to share responsibility for that.”
Are Jews evil? How do you want that question answered? This is our internet. Not Google’s. Not Facebook’s. Not rightwing propagandists’. And we’re the only ones who can reclaim it.
Paytm has become the go-to digital payments app for many people in India after the demonetisation drive, and the company has added thousands of merchants to its platform within days of Prime Minister Narendra Modi’s announcement of scrapping Rs. 1000 and old Rs. 500 currency notes. From vegetable vendors to grocery stores, everyone is joining the Paytm cashless payments platform.
(Also see: What Is Paytm, and How to Use Paytm Wallet?)
But there is still some confusion about what happens after a merchant receives a payment from a customer via Paytm. How should merchants transfer money from their Paytm Wallet to bank accounts? Scroll down to find out the simple process to transfer money from your Paytm to bank accounts.
In order to transfer money from your Paytm Wallet to a bank account, you need the name, account number, and IFSC code of the bank account holder. However, there is a Rs. 20,000 limit on transactions (Rs. 50,000 for merchants) if you haven’t completed your KYC (Know Your Customer) process. If you need to transfer more than the limit, you will need to get in touch with Paytm to get your KYC done.
(Also see: How to Use Digital Wallets Without Sharing Your Mobile Number)
Doing so is also quite simple: first find a Paytm KYC centre close to you, and provide the relevant RBI-approved documents (Aadhaar card, passport, voter ID card, driving license, or NREGA job card). You can also type in your Aadhaar number, and then request a visit at your address. While the PAN card number is not necessary, it becomes mandatory if you want to transfer more than Rs. 50,000 in a single transaction. You can conduct unlimited transfers after the KYC is done.
(Also see: Paytm Waives Off Merchant Fees on Offline Transactions)
The KYC process can take up to 48 hours, but you can shorten it by providing your Aadhaar card as proof of identity, as the biometric verification “results in instant completion of process,” says Paytm.
How to transfer money from Paytm to any bank account using Paytm app
Open the Paytm app on your smartphone and tap the Passbook icon
Here, select the Send Money to Bank option
Tap on Transfer
Enter the amount, account holder’s name, bank account number, and IFSC code
Hit the Send button
How to transfer money from Paytm Wallet to bank account using Paytm desktop website
Open Paytm.com website and login to your account
Roll the mouse cursor over your name at the top-right of the screen and click on Paytm Wallet
In this window, select the Transfer to Bank option and type in the requisite details
Hit the Send Money button
If you are a new Paytm user without KYC, you have to wait three days to transfer money from your Paytm Wallet to a bank account. On the other hand, those who have completed the KYC process can start bank transfers immediately.
Transferring money from your Paytm Wallet to a bank account will be free of charge till December 31 for merchants who have completed KYC. Other customers, as well as merchants who have not yet completed the KYC process, will have to pay a 1 percent fee. You can send a minimum of Rs. 100 to your bank account via the service. Every Paytm user can transfer up to Rs. 5,000 at a time, with Rs. 25,000 the monthly limit. If you are a merchant, you can transfer up to Rs. 50,000, with the limit going up to Rs. 1 lakh if you are a user who has completed KYC.
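The limits listed above can be summarised as a simple validation check. This is an illustrative sketch of the rules as reported here – the account categories and the function are hypothetical, not Paytm’s actual API – and the Rs. 1 lakh figure follows the article’s reading that it applies once KYC is complete.

```python
# Hypothetical validation of wallet-to-bank transfer limits,
# using the figures quoted in the article (all amounts in rupees).
MIN_TRANSFER = 100  # minimum per transfer

LIMITS = {
    # account type: (per-transaction limit, monthly limit or None)
    "user":     (5_000,   25_000),
    "merchant": (50_000,  None),   # monthly cap not stated in the article
    "kyc_user": (100_000, None),   # Rs. 1 lakh once KYC is done
}

def can_transfer(amount, account_type, sent_this_month=0):
    """Check a wallet-to-bank transfer against the published limits."""
    per_txn, monthly = LIMITS[account_type]
    if amount < MIN_TRANSFER or amount > per_txn:
        return False
    if monthly is not None and sent_this_month + amount > monthly:
        return False
    return True

print(can_transfer(4_000, "user"))                          # within limits
print(can_transfer(4_000, "user", sent_this_month=23_000))  # monthly cap hit
```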
Lenovo’s Phab 2 Pro smartphone may be one of the first smartphones to be Tango-ready right out of the box. The company, however, has plans to roll out the Tango augmented reality functionality to the Moto Z (Review) as well.
At an event in Chicago, Aymar de Lencquesaing, Chairman and President – Motorola, and SVP, Lenovo Mobile Business, has confirmed that the Moto Z may soon receive Tango functionality. Lencquesaing added that the company will likely launch a Tango module to enable the Moto Z to have Tango functionality.
“The tablet folks did a phablet and worked with Google, the Tango team, to come out with a Tango phablet. Going forward, we’ll have to address as a group how do we reconcile the products that are at the fringe? We’re likely to have a Tango module to basically enable the Z to have Tango functionality,” Lencquesaing was quoted as saying at the event.
Unfortunately, Lencquesaing didn’t announce a timeline for the availability of the Tango module for the Moto Z.

To recall, Google’s Project Tango focuses on machine vision, which means that a camera and sensor setup provides motion tracking, depth perception, and area learning. These features enable augmented reality (AR) on a device, making applications like indoor navigation, search, and gaming possible. The Lenovo Phab 2 Pro sports a total of four cameras – an 8-megapixel front camera, a 16-megapixel rear RGB camera, a depth-sensing infrared camera with an imager and an emitter, as well as a motion tracking camera.

The Moto Z smartphone launched in India back in October and was priced at Rs. 39,999. The highlight of the smartphone is its support for Moto Mods, which connect to the rear of the smartphone via the 16-dot connector interface.
OnePlus, the Chinese manufacturer that took the smartphone world by storm with its OnePlus One, has now announced the all new OnePlus 3T. The device’s specs promise an all new user experience, according to oneplusstore.in. One of the most anticipated phones in the gizmo world, it promises improved software, a better interface and the much-needed Dash Charge feature, which lets the phone charge in 30 minutes.
The fast charging feature is probably most essential for gamers and live streamers. With a bountiful 6GB of RAM, the processor gets bumped up to a Qualcomm Snapdragon 821 clocked at 2.35GHz, as opposed to the OnePlus 3’s 820. We can be sure that the mobile can handle multiple apps running simultaneously like a champ.
The cameras are impressive in OnePlus devices, and that continues here as the device sports 16-megapixel cameras on both the back and front! The rear camera can shoot in 4K resolution, too, with Optical Image Stabilisation to compensate for shaky hands!
The 3T runs on Oxygen OS, based on Android Marshmallow.
Other usual features remain – fingerprint sensor, large 3,400mAh battery, and dual SIM card slots. Both SIM slots support 4G, of course, and the phone comes in gunmetal and soft gold colours, with the latter only available in 64GB.
The 5.5-inch screen is protected with Corning Gorilla Glass 4. It is an Optic AMOLED display, an improvement over the Super AMOLED on the OnePlus 3. This flagship killer comes at a price of Rs. 29,999 for the 64GB variant and Rs. 34,999 for the 128GB variant. Will this device justify the company’s tagline ‘Never Settle’? Only time will tell.
The holiday season is here, and you know what that means: gift purchases. But what about the people you hate, or just want to annoy?
For your hate-buying pleasure, we present the worst tech gifts of 2016. Spend wisely!
1. Stikbox selfie stick phone case
It looks deeply ugly and adds more than a half-inch of thickness to the phone, but that’s a small price to pay for having a selfie stick on hand everywhere you go.
2. Bruno smart trash can
Its smart features only work with Bruno bags, which cost way more than regular kitchen trash bags and are only available online.
3. Designed by Apple in California
This costly coffee table book contains virtually no information about Apple or its design; it’s just nice photos of Apple’s products over the past twenty years.
It’s a smart wearable for dogs that displays colors signifying the dog’s emotion, based on its heart rate and heart rate variability measurements.
Jealous of your cousin’s beautiful kitchen? Ruin its aesthetics with the Flatev, a gigantic, ugly machine that offers its user a ridiculously inconvenient way to make “homemade” tortillas.
Destroy a marriage by gifting the Smartress, a mattress that uses sensors to determine whether it’s being used for sex and then sends notifications via a mobile app.
Unlike some of these products, it’s not totally stupid – it is a functional umbrella, and it has a notify feature that could help prevent you from leaving it in restaurants or cabs, which is nice.
8. Belty Good Vibes
Belty has a variety of health-tracking features that will keep your friends distracted – hopefully distracted enough that they don’t notice all the giggling happening around them.
9. June Intelligent Oven
It takes away the skill development that home cooking normally builds, ensuring that its owner is totally dependent on an expensive piece of hardware to create home-cooked food.
10. Ampy Move
This portable battery pack claims it can recharge your phone by generating electricity from your movements as you exercise. It would make an amazing gift if it actually worked, but most reviewers seem to agree it doesn’t.
11. Samsung Galaxy Note 7
The phone is a literal deathtrap, and even if it doesn’t explode, it’ll cause the recipient a huge inconvenience considering it’s banned on all flights.