This time of year is our season for continuing education, when we travel to conferences to hear from a variety of experts on issues that impact our clients’ lives. Last week I attended the National Association of Personal Financial Advisors (NAPFA) fall conference and learned more about artificial intelligence (AI) and big data: how both are impacting us today, and where they may be headed in the future.
Before leaving for the conference I was already interested in the topic, due to an unsettling article forwarded by one of our clients from The Atlantic. Particularly concerning was the idea that we are becoming increasingly dependent on applications and networked machines, rather than our own reasoning skills, when making decisions and even planning our lives. Amazon recommends books for us and Netflix tells us what movies we will enjoy, but when might we become dependent on similar algorithms to tell us what college to attend, what career to choose, and even who we should marry?
Questions like this are usually laughed off as being far beyond what AI is capable of now, and the blueberry muffin vs. chihuahua picture is frequently used as an example of AI’s inability to discern at the same level as humans. Yet algorithms have become excellent at sorting through large stores of data, detecting patterns to determine facts about us, and even predicting what we’ll do next. You may already have heard the 2012 story about Target exposing a teenage girl’s pregnancy well before her parents had any idea. Based on the shopping data Target had compiled from millions of customers, if a woman purchased unscented moisturizer, a very large handbag (one that could double as a diaper bag) and a colorful rug, among other things, Target could predict she was pregnant and would begin sending her coupons to encourage her to shop for all her future maternity needs there.
At NAPFA, Dr. Jennifer Golbeck, Director of the Social Intelligence Lab at the University of Maryland and Eric Rosenbach, former Chief of Staff to the Secretary of Defense and current Co-Director of the Belfer Center for Science and International Affairs at Harvard Kennedy School, shared examples of how far we’ve come in AI since the Target story, and what it means.
Facebook has been in the news for its inability to protect user data, and Cambridge Analytica, after positioning itself as a research firm, used the site to acquire detailed information about users, so people have become more careful about limiting their profiles to friends and changing their passwords. But there are still other ways information is being collected. For example, when you ‘like’ something on Facebook, that information is always public, and it spreads like a virus as your friends see what you ‘like’ and many ‘like’ it too. From Facebook likes alone, algorithms can determine seemingly random facts like your IQ (based in part on whether you like curly fries!) and whether your parents divorced before you turned 25. While this is interesting, Dr. Golbeck provided an example of a far more positive way the data can be utilized.
Using Facebook posts and ‘likes’, researchers were able to determine with 85% accuracy whether individuals who had announced plans to pursue sobriety through AA or a similar program would remain sober. Building on those data patterns, AI-powered apps have been developed to help alcoholics and addicts avoid relapse, and some have performed so well in research funded by the National Institutes of Health and the National Science Foundation that additional trials are underway.
Of course, Facebook isn’t the only application that is collecting data about us. While Amazon Alexa is not supposed to be listening to your conversations without your waking it with the voice trigger “Alexa,” there are numerous examples of the device recording and sending information out that doesn’t fit this explanation. We were given one by our son for Christmas, and we’ve even had ‘her’ speak out in the middle of a conversation my husband and I were having in our kitchen with an “I’m sorry, I don’t believe I know the answer to that question.” As Rosenbach said in his presentation, Google is collecting information and forwarding it, and there is no guarantee about when it is and isn’t happening. Neither Dr. Golbeck nor Mr. Rosenbach has an Amazon Alexa or Google Home as a result.
While Alexa listens in on conversations because she’s been invited, there are apps on your phone that have had microphones running in the background 24/7. While this sounded a bit like an AI conspiracy theory to me, I learned that Alphonso software uses your phone’s microphone to capture audio signals from your TV programs and combines that data with your GPS information to promote products to you, like an ad for a cruise to Greece the morning after you watched the Travel Channel. Android phones seem more susceptible to this because of the number of apps supporting the technology, and both speakers said they use Apple phones as a result.
The last example of how much the world knows about us, whether we want it to or not, is the application Crystal Knows. Launched as a start-up in 2015 as a solution for fostering better work relationships through email communication, it scrapes the web for a person’s online information, analyzes it, and assigns each person one of 64 personality types. Using this information, it can coach you on everything from how to write an email that gets your desired outcome from a person, to whether the two of you would work well together. Remember, this isn’t based on a personality test you’ve taken; it’s based only on what you’ve done on the web.
In her summary, Dr. Golbeck described algorithms as “Weapons of Math Destruction” (borrowing the title of Cathy O’Neil’s book) and agreed they are being used by many organizations in ways that are not transparent to us now.
So as an ordinary citizen who is not trying to hide a Breaking Bad operation, how concerned should you be? Mr. Rosenbach insisted that our day-to-day lives are of no concern to 99% of the data collectors out there, but unlike me, he kept the Amazon Alexa he received as a Christmas gift in the box. If you are concerned, there are some things you can do to limit the availability of your information.
- Clean up your Facebook timeline to reduce the information the app holds about you. Golbeck wipes her timeline clean every two weeks, and recommends you download an archive of your history first if there is information you want to save. Posts can be deleted in three steps:
- From your News Feed, click your name in the top left corner.
- Scroll down, hover over the post, and click the menu icon in the top right corner.
- Select ‘Delete’ or ‘Hide from timeline’ from the dropdown menu.
- Use a browser extension like Ghostery to block ads and limit the tracking technology that’s able to gather your data.
- Go through the privacy settings on all the applications on your phone and revoke their ability to track your location. Turn off microphone features such as Siri to prevent them from picking up your voice when you don’t intend for them to be listening.
Slate Magazine wrote an excellent series of articles in February about cybercrime self-defense in their Futurography project, and I highly recommend you read through them if you’d like to go into more detail on the topic.
On a more upbeat note, there are ways AI can provide significant benefits to us in the future.
Losing the ability to drive has always been a depressing milestone for people as they age. Taking away a grandparent’s car keys when you know they are a danger to themselves and others may not be necessary in the future, thanks to the advent of autonomous cars; Ford is already testing the concept with Argo AI. Johns Hopkins has been using data and working with the Department of Defense to determine the most successful ways to treat Post-Traumatic Stress Disorder. And social media data has been used to predict with 80% accuracy whether a new mother will suffer from postpartum depression, giving doctors an opportunity to intervene.
Ready or not, AI has come a long way from telling blueberry muffins and chihuahuas apart!