How Algorithms Influence Us Every Day

Computer algorithms have become an increasingly integral part of our personal lives and of how we interact with and view society as a whole. Because of algorithms, we get the latest news stories and information via Facebook and Twitter. We get Google search results that cater to our personal preferences. We’re able to take the easiest driving routes with the help of GPS, and we buy products we might not have known we wanted thanks to recommendation functions.

Because algorithms play such a large part in our lives, we may overlook both their pervasiveness and the way our views are shaped by this seemingly subtle force. A recent article by Michele Willson of Curtin University in Perth, Australia, suggests the omnipresence of algorithms raises interesting questions about day-to-day human experience, reports Science Daily.

One key aspect that Willson examines is the reduction of the human experience into data.

“Time, bodies, friendships, transactions, sexual preferences, ethnicity, places and spaces are all translated into data for manipulation and storage within a technical system or systems. On that basis alone, questions can be posed as to… how people see and understand their environment and their relations (when all is reducible to malleable discrete but combinable units),” says Willson.

But isn’t the thought of humans as data an affront to our uniqueness? If our personalities can be neatly packaged into bits of data, we might wonder if there’s anything that makes us distinctly special. Can our identities be observed simply as clusters of objective personality traits? Can we be so easily boxed up based on aesthetic preferences, ethnicities, sexuality and places of residence?

Not only can algorithms affect the way we view ourselves; they can also shape our socio-political perceptions, forcing us to examine how we view our relationships with others.

Algorithms aren’t perfect

Recent research has shown that there is considerable bias in algorithms, reports The New York Times. These biases span everything from gender to class to race.

For example, Google’s online advertising system shows ads for high-income jobs to men far more often than to women, according to researchers at Carnegie Mellon University. If you search for a name more characteristically associated with African-Americans than with white Americans, you’re more likely to find arrest records, according to a study from Harvard University. And University of Washington researchers discovered that if you type “CEO” into a Google Images search, only 11 percent of the images show women, even though 27 percent of United States CEOs are female.

Demographics as pure data could be seen as problematic for individuality and identity; after all, we live in a country where being unique is considered a right. But there are some positive aspects to these attempts to quantify human experience and preference via algorithms; just look at the success rate of marriages that began in the realm of online dating.

But they do make good matchmakers

Recent research suggests that couples who met online report higher marriage satisfaction, reports Time. In an eHarmony-funded study of 20,000 Americans who were married between 2005 and 2012, published in the Proceedings of the National Academy of Sciences, 35 percent of those marriages began with internet dating. It turns out that 6 percent of marriages that began online ended in divorce or separation, as opposed to 8 percent of marriages that began outside the digital world.

Aside from bringing love into many people’s lives, preference algorithms help us discover a wide array of media catered to the individual. The recommendation functions of Amazon and streaming services like Netflix and Hulu expose audiences to unexplored worlds in film, music and books. Just like your GPS app can make your drive easier, media-oriented algorithms can reveal untouched terrain by leading the user down roads that might otherwise remain less traveled. By guessing your ideal destination and refining your personalized path, algorithms can enhance how you experience media by recognizing your preferences, even if you’re unaware of them.
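The article doesn’t describe how these recommendation functions work internally, but one common approach is item-based collaborative filtering: suggest items similar to ones a user already liked, where “similar” is computed from how other users rated them. A minimal sketch, using an invented ratings matrix and made-up item names (real systems work with millions of sparse ratings):

```python
import math

# Toy user–item ratings (0 = not rated). All names and numbers here are
# invented for illustration only.
ratings = {
    "alice": {"film_a": 5, "film_b": 4, "film_c": 0, "film_d": 0},
    "bob":   {"film_a": 4, "film_b": 5, "film_c": 1, "film_d": 1},
    "carol": {"film_a": 1, "film_b": 0, "film_c": 5, "film_d": 4},
}

def item_vector(item):
    # An item's profile: how every user rated it.
    return [user_ratings[item] for user_ratings in ratings.values()]

def cosine(u, v):
    # Cosine similarity between two rating vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(user):
    # Recommend the unrated item whose rating profile is most similar
    # to the user's favourite item.
    rated = {i for i, r in ratings[user].items() if r > 0}
    favourite = max(rated, key=lambda i: ratings[user][i])
    unrated = [i for i in ratings[user] if i not in rated]
    return max(unrated, key=lambda i: cosine(item_vector(favourite), item_vector(i)))

print(recommend("alice"))  # → film_d
```

Because bob (who, like alice, loved film_a and film_b) rated film_d no worse than film_c, film_d edges it out; alice never told the system she wanted it, which is exactly the “preferences you’re unaware of” effect described above.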

None of this is to say that algorithms are an absolute evil that must be eradicated, or that they are a be-all and end-all that can save us from loneliness. And of course, examples abound of how algorithms affect our everyday world, from national security to financial analytics. But nonetheless, examining the algorithms in our everyday lives gives us a chance to contemplate broader questions about our own humanity.

Why Some People Can Multitask Online and Others Can’t

The internet may be the most comprehensive source of information ever created, but it’s also the biggest distraction.

Set out to find an answer on the web and it’s all too easy to find yourself flitting between multiple tabs, wondering how you ended up on a page so seemingly irrelevant to the topic you started on.

Past research has shown that we have a very limited capacity to perform two or more tasks at the same time and brainpower suffers when we try.

But my new study suggests that some people are better at multitasking online than others. Being able to switch between multiple web pages and successfully find what you want all comes down to how good your working memory is.

Working memory is the part of the brain responsible for storing and processing information, making decisions, and planning. It governs the attention we give to information, and the quality and quantity of what is stored and processed in both short-term and long-term memory.

Many psychologists describe working memory as the ability to retain a specific amount of information while dealing with intervening information or tasks.

Previous studies have suggested that working memory plays an important role in multitasking. For example, one study showed interruptions reduced people’s ability to multitask.

This suggests our working memory can only hold a limited amount of information at any one time, limiting our capacity to think about multiple things at once.

My new research focuses on, among other things, how people’s different levels of working memory influence their multitasking behavior while using the web.

I assessed the working memory of 30 students using an automated operation span test that asked them to remember a series of written characters while solving maths questions.
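For readers unfamiliar with the paradigm, an operation-span task interleaves a processing demand (verifying simple sums) with a storage demand (remembering letters), then scores how many letters are recalled in their original order. A minimal sketch, with simplified stimuli and scoring that only approximate the actual automated test used in the study:

```python
import random

def operation_span_trial(set_size, rng):
    # One trial: each letter to remember is preceded by a sum to verify.
    # The second element of each problem marks whether the shown equation
    # is actually correct (participants must answer true/false).
    problems, letters = [], []
    for _ in range(set_size):
        a, b = rng.randint(1, 9), rng.randint(1, 9)
        shown = a + b + rng.choice([0, 1])  # sometimes deliberately off by one
        problems.append((f"{a} + {b} = {shown}", shown == a + b))
        letters.append(rng.choice("BCDFGHJKLNPQRSTVXZ"))
    return problems, letters

def score_recall(presented, recalled):
    # Partial-credit scoring: count letters recalled in the correct
    # serial position.
    return sum(p == r for p, r in zip(presented, recalled))

rng = random.Random(0)
problems, letters = operation_span_trial(4, rng)
print(score_recall(letters, letters))  # perfect recall scores the set size, 4
```

The maths problems exist purely to occupy working memory between letters, so that recall reflects storage under load rather than simple short-term memory.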

I then asked them to use the web to research four topics of their choice, two they had prior knowledge of and two they didn’t.

This was particularly important as research has shown that having prior knowledge of a subject means you can study it with less effort from your working memory.

I found that participants with high working memory switched between their information topics and web search results more often than those with low working memory.

This seemed to enable them to test and retest different strategies for finding the answers they wanted, and meant they could divide their attention more effectively between different tasks.

The people with high working memory also reported that they were able to coordinate existing and new knowledge, juggle multiple topics and deal with interruptions more easily. They were also better at trying different strategies, such as using different search engines, formulating search queries, evaluating webpages and saving results.

What’s more, those with low working memory capacity thought the previously unfamiliar topics they were researching became more complex as they went on.

They also reported that they could not generate more strategies to complete the task, or evaluate and judge the content of the webpages they were looking at in the same way as they did for the topics they had prior knowledge of.

Attention limits

This research confirms previous studies suggesting that people with low working memory have a more limited ability to keep their attention on relevant information.

More specifically, it also suggests that people with low working memory cannot easily give their attention to different pieces of information in a way that would allow them to effectively multitask. This is especially true for topics they have no prior knowledge of.

What all this means is that people with low working memory abilities probably don’t find multitasking as easy as they would like. Luckily, there are ways to expand your working memory capacity through practice and exercise.

For example, Cogmed Working Memory Training involves completing tasks such as remembering visual sequences for rewards, and has been linked with enhancements in working memory in children and adults.

But technology has the greatest impact when it is designed around its users’ abilities and limitations – not when people have to train themselves to use it. For example, elderly people or people with cognitive impairments such as dementia often see a decline in their working memory.

My research shows that these people will have to work harder when they search for information on the web, especially for topics they have no prior knowledge of.

Understanding this could help lead to better website or browser designs for these groups, and help build their confidence online.

Computers Will Commit More Crimes Than Humans by 2040

Humans seem to have committing crimes pretty well figured out, as is evident to anyone who takes even a casual glance at the news on a daily basis. But experts say computers and robots will one-up the human race in committing crimes by the year 2040. What’s more: humans basically won’t be able to do anything to stop artificial intelligence entities from wreaking criminal havoc.

Independent cyber-defense adviser and researcher Cameron Brown told Raconteur last week that the rise in cyber-crime will largely be motivated by (what else?) money. Brown said increasing opportunities to earn a living through cyber-crime “will propel the disenfranchised and those in lower income bands to pursue a life of crime given the low risk and potential high yields.”

Experts expressed concern about already existing ransomware technology, which could soon be used against everyday people in their homes. “Ransomware-like attacks will become more prevalent with the integration of internet of things (IoT) and smart sensors into our daily lives,” Liviu Arsene, a senior e-threat analyst with Bitdefender, told Raconteur. Arsene said we can expect cyber criminals to use ransomware technology to hack smart appliances in homes or offices, holding access to them ransom. In these scenarios, people who can’t pay up would be cut off from utilities, home security, and smart cars.

All of this is also a point of major concern for anti-terrorism. Tracy Fellows, the chief strategy and innovation officer at The Future Laboratory, told Raconteur, “Futurists have been forecasting a sharp rise in lone-wolf terror attacks for years. But once robots can be hacked to become suicide-bombing machines, lone-robot attacks could become rife too.”

It might sound like a Philip K. Dick novel, but these are possibilities that are already in the works, and 2040 is less than 24 years away.

Has Technology Made Us All Bad Spellers?

Today, 285 spellers will compete to be the 2016 Scripps National Spelling Bee Champion. The ecstasy and the agony of becoming America’s best speller under 15 will be split into five rounds, beginning with today’s preliminaries and ending with Thursday’s primetime final on ESPN. You’ll see tweens do something much harder than anything you’ve had to do in years: spell words out loud without spell-check or autocorrect.

While academics aren’t unified in the opinion that technology has weakened the English language, the common assumption is that we’re getting worse at spelling. A handful of researchers have found that programs like your iPhone’s autocorrect encourage poor spelling: a 2005 Harvard study compared 37 subjects working on a task with spell-check to 28 working without it, and found the spell-check group essentially relied on the technology to do the spelling for them. This wasn’t so much a matter of not knowing how to spell as of laziness and confidence that they didn’t need to.

“As with most ‘effects’ research on media, there can only be correlation, not causation,” Alice Daer, a digital content strategist and former Arizona State English professor, tells Inverse. “Any changes that coincide with social technologies are equivalent to the changing of language that happens organically, with or without the technology.”

While not much evidence is out there to support the claim that technology is causing a decline in language standards, the argument is still a popular one. President Barack Obama recently told Rutgers graduates that his generation “were certainly better spellers” because they didn’t have spell-check.

If that’s the argument, then it’s not the rise of smartphone autocorrect that has caused this dramatic shift; spell-check has been a helping hand to bad spellers since it was invented in the 1970s. And there have certainly always been people bad at spelling: Ernest Hemingway and F. Scott Fitzgerald both turned in manuscripts riddled with errors, and even the Founding Fathers couldn’t get the hang of its versus it’s in the Constitution. The difference now is that, with the proliferation of mass-produced and easily shared communication, it’s just a lot more obvious who is spelling incorrectly.

“My opinion is that, generally, with every technology and media trend, there are doomsayers who argue that the sky is falling,” Danielle DeVoss, a professor of professional writing at Michigan State University tells Inverse. “Twenty-five years ago when you wrote a letter to your grandma, if you spelled something wrong, she’d be your only audience. Today if you post a status message on your grandma’s Facebook wall, a heck of a lot more people will see your spelling mistake.”

While experts like DeVoss and Daer say that this area of study needs more research before anything can be declared concretely true, there is a possibility that autocorrect is having a strong influence on language: not on our spelling abilities, but on the “life” and “death” of certain words. In a 2012 study published in Nature, a team of researchers argued that autocorrect, because it steers us toward using certain words, creates a sort of standardization that stamps out variety. It’s not just misspellings that “die” with autocorrect, but variations like “Xray” and older words like “roentgenogram.”

“We observed a correspondence between the tipping point of word adoption, a type of lifespan, and the characteristic human generational timescale, both of which are roughly 40 years,” lead study author Alexander Peterson tells Inverse. “Spell-checkers impose basic principles of reinforced selection, so that words deemed ‘more correct’ — by predetermined cultural or editorial standards — have a higher chance of reproducing.”

What is certain is that technology has made us take our relationship with spelling for granted. This is what makes this week’s Scripps National Spelling Bee all the more impressive.

These young competitors are so good that this year Scripps has decided to, in its own words, “raise the bar even higher,” in the hope that the bee can do away with tied matches. Words like scherenschnitte and nunatak (last year’s winning words) weren’t deemed hard enough to stop these young spelling champs. The competitors of today are spitting out words that dwarf those of yesteryear: in 1940 the winning word was therapy, while in 1967 it was Chihuahua.

Has the proliferation of texting and tweeting hurt the 2016 competitors’ ability to contend with the very best? If anything, it seems like their generation is kicking ass while spelling names.


The U.S. Postal Service is Testing a Service That Will Email You Your Mail

You’ve got E-mail. The U.S. Postal Service is currently testing a service that will email people photos of the envelopes of their letter-size mail. It sounds pretty invasive, but the service, Informed Delivery, will only send daily photos of your mail’s exterior, nothing inside.

So far Informed Delivery has been operating in seven Virginia zip codes, and will now be making its way to the New York City metro area. For now it’s free, but you need to sign up online to start getting photos of your letters. It only works for individuals and doesn’t yet apply to packages. So why would you want the exterior of your mail photographed and emailed to you?

While it seems like a pretty new service, our mail has been photographed for some time. It’s been an effective way for the Postal Service to sort and track mail, but it’s also been an important tool for lawsuits. The tracking program debuted back in 2001, shortly after an anthrax attack killed five people.

This seems to be as far as virtual mail will go, but who knows: maybe snail mail will be a thing of the past sooner rather than later.

HP Presents The World’s Widest Curved All-in-One PC

We do not often look beyond Apple when it comes to computers, but we have to admit that we are intrigued by the new HP Envy Curved All-in-One PC. The soon-to-be-released computer is the world’s widest curved all-in-one PC, featuring a 21:9 aspect ratio, a high-resolution WQHD (3440×1440) panel, and a 34-inch diagonal screen. The specs include a brand-new 6th-generation Intel Core processor, up to 16GB of RAM, and NVIDIA GeForce 960A graphics, with the built-in Bang & Olufsen audio as the stand-out feature. Find out more here.

Microsoft Introduces the Surface 3

Microsoft continues to chase the crown with the latest addition to the company’s line of in-house tablets. Although these are not the only tablets on the market running Windows software, this will be the first to run the full version, making the device more like a laptop than any of its predecessors or competitors. Users will receive the added bonus of a free upgrade to the upcoming Windows 10 operating system. At 8.1 mm deep, the Surface 3 is still a bit chunkier than the iPad Air 2, which boasts an impressive 6.1 mm, although it does provide standardized USB ports.

The Surface 3 will go on the market May 5 at Microsoft retailers worldwide for an attractive $499.99. In the meantime, watch the promotional advertisement below.

[youtube id="vPto6XpRq-U" width="600" height="350"]

Microsoft Launches Windows 10

Microsoft’s latest operating system had its date with destiny earlier today at the company’s Redmond, Washington headquarters, where it rolled out what it’s calling “the next chapter” in Windows: Windows 10. During the live-streamed keynote, several new features were unveiled, many aimed at removing the cumbersomeness that gripped Windows 8, including an overhaul that takes into account the way people use devices today. By creating a responsive experience that feels familiar across all mediums, from personal computers and tablets to smartphones and gaming platforms, Windows 10 will let apps work in a similar fashion on each of these devices. With Android and Apple devices retaining the lion’s share of the mobile and tablet market, the move looks to help Microsoft gain a bigger piece of the pie.

In a play meant to contend with Apple’s Siri, Windows 10 will debut its own voice-activated assistant, called Cortana, which will be integrated into the system’s new-look web browser. Capable of acknowledging commands, Cortana serves as a digital assistant of sorts, available via voice control or at the click of a key.

Additionally, Microsoft announced that Windows 10 will be released as a free upgrade for anyone owning a computer or gadget that’s currently running Windows 8.1 or 7, the two previous versions of the software. With these recent developments, Windows 10 hopes to be a reinvigorated desktop experience.

Check out more of what the operating system has to offer in terms of key features below.

[youtube id="teoZk3QEc40" width="600" height="350"]

Intel Compute Stick Turns HDMI Display into Computer

In the modern business age – where time is always of the essence – facility, versatility and reliability are three key qualities demanded of consumer technology. Intel – a brand noted for its top-tier standards when it comes to electronic devices – has just unveiled its newest generation of computing, simply titled the Intel Compute Stick, which turns any HDMI display into a fully functional computer. The ultra-portable stick packs the speed and power of a quad-core Intel Atom processor along with built-in wireless connectivity, on-board storage, and a micro SD card slot for any extra storage: all the perks of a desktop computer in a gadget that can fit inside your pocket.

Look out for the Intel Compute Stick as it is set to launch sometime later this year.

The Duet Display Turns Your iPad Into a Second Monitor

New app Duet allows you to use your iPad or iPhone as an external monitor for your Mac. Created by former Apple engineers, it drives the Retina display flawlessly at 60 frames per second, with zero lag. For those looking to increase their screen space, power, productivity and speed, Duet is available for download from its website now.

[youtube id="mVYimOiEya8" width="600" height="350"]