Julian Sanchez joins us this week for a discussion about online privacy in the era of mass data collection. When we’re online, what kind of data are we creating, and who’s watching us?
We talk about data mining, ad blockers, the internet of things, developer keys, passwords and fingerprint security, encrypted messaging apps, and more.
Show Notes and Further Reading
Other Free Thoughts episodes about online privacy:
- “Deconstructing the Surveillance State” with Julian Sanchez
- “The CIA Listens to Free Thoughts” with Patrick Eddington
Sanchez mentions the browser add-ons Ghostery and NoScript and the Tails operating system.
He also mentions The Art of Invisibility: The World’s Most Famous Hacker Teaches You How to Be Safe in the Age of Big Brother and Big Data (2017) by Kevin Mitnick.
Aaron Powell: Welcome to Free Thoughts. I’m Aaron Powell.
Trevor Burrus: And I’m Trevor Burrus.
Aaron Powell: And joining us today is our colleague Julian Sanchez. He’s a senior fellow at the Cato Institute. Welcome back to Free Thoughts, Julian.
Julian Sanchez: Thanks for having me.
Aaron Powell: I wanted to talk today about digital privacy from less of a policy standpoint and more of a technological standpoint. To do that, maybe we’ll start with the question, when we’re online [00:00:30] going about our regular day of browsing the internet and watching Netflix and shopping and doing whatever else we do, what kind of data are we creating and having recorded places that we might not necessarily think about?
Julian Sanchez: Let me answer that in a broad, 30,000-foot way and then in a more specific way. In a broad sense, if you assume that essentially every imaginable piece of data about what you’re doing online [00:01:00] is being recorded somewhere, you will more often than not be right, at least for some segment of the sites you’re looking at. It’s probably being tracked by more entities than you would guess, and that’s because, in part, the business model of the internet has become surveillance. We use all these free services that operate on the premise that they’re going to be able to make their revenue not from payments from their users but by [00:01:30] selling ads or information that can be used to profile and track people. Also, in part, because the default in a sense has changed. For most of human history, tracking and recording details about any event was an extra step that had to be taken; it was abnormal.
Almost everything you did left no permanent, structured, centralized record: your conversations, [00:02:00] what you were reading on a day-to-day basis. When you talk about online activity, though, it’s already happening on a data-manipulation device, so you’re already halfway there. Additionally, data storage costs have plummeted. When I was a kid in the early 1980s, the amount of storage that comes by default on my iPhone would have cost you about the same as a pretty nice car. We’ve [00:02:30] hit the point where storing data is in a sense as cheap as throwing it away. In some cases keeping data is cheaper than throwing it away, even if you don’t know yet what you want to do with it.
Aaron Powell: Keeping it is cheaper than throwing it away.
Julian Sanchez: There’s essentially an incentive to stockpile it on the theory that even if you don’t have a use for it now, you might have a use for it at some point in the future. And indeed, because computer processing power has grown, it’s increasingly useful [00:03:00] to have all sorts of information that before would just have been clutter that you couldn’t have done anything with. So that’s the general answer.
Trevor Burrus: Google has all this information on me … is it granular to the point that if someone really wanted to figure out Trevor Burrus and what he likes, you could actually do that, if you were an employee at Google and had access to all these things they have?
Julian Sanchez: They’re pretty stringent, at least one hopes they are, they say they are, about [00:03:30] controlling who has access to that kind of stuff directly. Most of this is about automated systems that are deciding what kind of ads you get, but I think in principle that’s absolutely the case. We’ll talk about specifics in a second. This is probably the point for full disclosure, to note that I think Google and some of the other technology companies we’re going to be talking about either have been or are donors in some amount to Cato. I try not to keep too much track of that, precisely because these are the kinds of conversations I’d worry about it coloring, but if that’s something you [00:04:00] find relevant, take that with as much salt as you think appropriate. I think these [inaudible 00:04:09] have an incredibly intimate picture of us.
Facebook, I know, has run experiments just to see what they could deduce, and found that very often, just from the social graph, they could tell someone’s sexual orientation regardless of whether they’d actually identified [00:04:30] themselves as gay or straight, based on patterns of who their friends were. Unsurprisingly, if you’re gay you probably have a lot more gay friends than most straight people do. Also, by looking at frequency of communication, or the frequency with which people looked at other people’s pages, they could predict when someone was going to break up or get divorced, and predict who they were then going to start a new relationship with, often before any of that was public and probably in some cases before the [00:05:00] folks themselves were conscious of it.
There was a notorious case reported in the New York Times a couple of years back, and this isn’t really about online activity, this is about data mining more generally, in which a father angrily complained to Target that they had started sending his teenage daughter marketing materials for maternity clothes and baby formula, and said, look, this girl’s 15 years old, why would she need this stuff? It turned out she was in fact pregnant and [00:05:30] that the company had gotten very good at detecting purchase patterns that might not be intuitive to someone who hasn’t looked at huge amounts of aggregate data, but that turned out, in tandem, to very strongly predict pregnancy. Things, again, that might not be obvious, like someone has switched from scented to unscented hand lotion; that’s very common when someone knows they’re pregnant.
There’s a series of other changes in purchases that, again, might not be obvious. But if you [00:06:00] have a huge data set, and at Target you actually have people who then register for baby showers, that gives you the ability to look back and say: okay, having this crunched not by a human brain but by high-powered computers, can we find patterns that, with a reasonably high degree of accuracy, correlate with this person being pregnant, or being about to publicize that they’re pregnant? If you think about a company like Google, [00:06:30] again, sometimes I joke that if you lived in a country where the government had the kind of detailed information about you that Google does, you would truly think that was a police state. We hope this is information they’re using for our benefit and to keep providing us useful services and [inaudible 00:06:45].
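The kind of purchase-pattern scoring Sanchez describes can be sketched in a few lines of Python. The items and weights below are invented for illustration; a real retailer would learn them from millions of transactions rather than hand-code them:

```python
# Hypothetical signals: each maps a purchase to a learned weight.
# The items and the numbers are made up for illustration only.
PREGNANCY_SIGNALS = {
    "unscented lotion": 0.30,
    "prenatal vitamins": 0.90,
    "large tote bag": 0.10,
    "mineral supplements": 0.25,
}

def pregnancy_score(basket):
    """Combine per-item signals with a naive noisy-OR: each matched
    purchase shaves a fraction off the remaining uncertainty."""
    score = 0.0
    for item in basket:
        weight = PREGNANCY_SIGNALS.get(item, 0.0)
        score += (1.0 - score) * weight
    return score
```

A retailer would flag a customer once the combined score crosses some threshold; the point Sanchez makes is that no single purchase is telling on its own, only the pattern in tandem.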
It is a little sobering to recognize that, in addition to the contents of messages you’re sending, that’s the obvious stuff, they just have an incredible amount of information about [00:07:00] what you’re thinking about day to day, in a way that’s almost a map of your brain. What medical conditions are you searching for, what political topics are you curious about, what YouTube videos are you watching? But also maybe much more granular information. When we visit a webpage, of course, [inaudible 00:07:23] to realize that that website probably has some way of tracking that [00:07:30] at least a person from a particular internet protocol address visited their page, and if you’ve logged in, they may have more granular information than that about who exactly you are.
Aaron Powell: Here’s an interesting application of that kind of thing. We’re all familiar [00:08:30] with Google’s type-in-the-words-you-see-in-this-image test, designed to defeat spam bots and make sure you’re a person. Google fairly recently rolled out a new version of it where it’s just a check box, and you click it, and it decides-
Trevor Burrus: I was wondering about this, it must be doing something on your computer.
Aaron Powell: It’s exactly that, things like mouse movement and other patterns. It knows that robots move in certain ways and humans move in other ways, and so it can tell.
Trevor Burrus: It’s sort of like the replicant test.
Aaron Powell: Right.
Julian Sanchez: One of the [00:09:00] obvious uses of this is tracking engagement. A news site might want to be able to tell advertisers how much time people really spend on the page: does it look like they’re actually reading it? Also telling the difference between an automated scraper, some type of bot that may be ripping the content off your site or just probing your system for holes, versus an actual human being engaging in a normal way with the site. [00:09:30] For security purposes that can be extremely useful. Increasingly, some of the most sophisticated sites are doing exactly that sort of tracking, to tell the difference between a human who’s actually reading the page and interacting with it the way a human would, and an automated system that of course is not reading the page because it’s just scanning everything quickly.
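A toy version of that human-versus-bot distinction, assuming all we get is a list of recorded cursor positions (real systems use far richer signals: timing, scrolling, touch data, and so on):

```python
import statistics

def looks_automated(points):
    """Heuristic sketch: scripted cursors tend to move in perfectly even
    steps; human movement is jittery, so its step lengths vary."""
    # Distance covered between each pair of consecutive samples.
    lengths = [
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    ]
    if len(lengths) < 2:
        return True  # too little movement to look human
    # Near-zero variance in step length reads as mechanical.
    return statistics.pstdev(lengths) < 1e-6
```

A perfectly linear, evenly-spaced path gets flagged; a wobbly human trace does not. Production systems obviously use subtler statistics, but the principle is the one Sanchez describes.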
That kind of information can provide pretty granular data, and in principle you might imagine being able to use it for more specific kinds of fingerprinting. [00:10:00] Oftentimes, data about what pages you’re looking at, and how that might connect to other pages you’re looking at, is not just something that can be seen by the page you’re visiting. If you load up a page on the web, very often you’ll see maybe a flash video or a flash ad, or other kinds of images that are advertisements for different sites. Usually those [00:10:30] ads are not loading from the site you’re visiting; they’re loading from some third-party site. Essentially, imagine that square of space on the web page is saying, pull in content from somewhere else on the internet, and that’s what’s displaying. But your computer is connecting to that somewhere else the ad is loading from.
That other site may have the ability to track your activity across a series of [00:11:00] different pages. So even if you’re not logged in to a site, because you are connecting to a series of different sites, the advertisers may still be able to profile you and connect your activity on one part of the internet with what you’re doing at a different part of the internet, in a way that is in fact tied to your real identity. That means even if you think you’re operating anonymously, look, I just loaded up this page in the New York Times, I didn’t log in or anything, I didn’t give them any information, [00:11:30] if you have not taken steps to counter this, it’s very likely that any number of advertising networks and data brokers are still able to add that particular web page visit to the bigger profile they have about you.
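In miniature, that cross-site correlation works as below. `AdNetwork` is a made-up name, and in practice the stable identifier is a third-party cookie or a browser fingerprint rather than a plain string:

```python
class AdNetwork:
    """Sketch of a third-party ad server embedded on many unrelated sites."""

    def __init__(self):
        self.profiles = {}  # cookie id -> pages where its ad was loaded

    def serve_ad(self, cookie_id, embedding_page):
        # Every ad request reveals the page that embedded it, so one
        # network present on many sites sees your path across all of them.
        self.profiles.setdefault(cookie_id, []).append(embedding_page)
        return "<ad content>"
```

The visitor never logs in anywhere, yet because both pages pull their ads from the same network, the two visits end up in one merged profile keyed to the same identifier.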
Aaron Powell: This sounds creepy, partly because the use of this stuff, advertising and commerce, is turning us into money for these various outlets. It [00:12:00] also sounds profoundly useful. Let’s say I had access to the data that Google is gathering on me. I can imagine Google could tell me, you may be coming down with … you don’t know it, but you’ve been showing symptoms of this thing going wrong, or your mental state is this-
Trevor Burrus: Depression, suicide, things like that.
Aaron Powell: Is all this stuff being kept just for advertisers, or are we moving in a direction where we could personally gain more [00:12:30] use from this kind of big-data mining about ourselves as individuals?
Julian Sanchez: There’s an entire movement called the Quantified Self movement that is precisely about people who enjoy what’s called life logging, really granular measurement of one’s own activity.
Trevor Burrus: Like step counting [crosstalk 00:12:54] the first level.
Julian Sanchez: We strap on Fitbits or other kinds of biometric devices [00:13:00] that will tell us how much we walked today, how many calories we might have burned. It may be awkward if you’re wearing it during intercourse, for example, and that just shows up as a strange spike in your [crosstalk 00:13:13].
Trevor Burrus: There was just a murderer caught just by a-
Aaron Powell: Just a Fitbit. He claimed his wife … there was information found on his dead wife’s Fitbit that contradicted his story about her location and activities, and that was the thing that broke it for them, charging him with murder.
Julian Sanchez: [00:13:30] I imagine that’s the kind of thing we’re going to see happening with increasing frequency, just because the ubiquity of networked sensing devices is growing at a very rapid rate. There are benefits to this, both personal and social. At the personal level, it may be very useful to learn facts about when you tend to overeat or, on the flip side, whether in fact you’re getting enough calories. Or what are the [00:14:00] conditions under which you actually exercise as much as you want to, or just how you are spending your time. The first thing you read in a lot of management books, the habits-of-effective-people type of book, is that nobody is actually accurate when relying on their own memory about how they spend the time in their day. It can be very useful to realize, gosh, when I think I’m taking a five-minute break to check email or look at the news, I’m actually losing half an hour.
[00:14:30] That can all be useful personally, and it can be useful socially. Tracking appliance usage through smart appliances, the internet of things, can enable more efficient, greener energy use, so that we do not need to burn as much fossil fuel or generate as much energy to supply our needs. Medical researchers find big data analysis enormously useful; it can help them look for patterns of either interactions with medical conditions [00:15:00] or trends in disease propagation. The reason this is all being done is that it is profitable to someone, which is at least of benefit to them, and often beneficial more generally, either socially or to individuals. Take the [inaudible 00:15:21] case: Amazon obviously benefits when they can sell me products because they know my reading and music listening and viewing habits.
[00:15:30] It’s also useful to me that I get an email that is not just a random list of bestsellers but says, there are books coming out by these authors who you either like or are likely to like, because you like these other things. That’s handy; I’ve certainly bought books or music on that basis that I might not otherwise have been aware of. So yeah, there is utility to all this, and that’s part of the reason we accept [00:16:00] it. The reason to be cautious, though, is just that there is nothing intrinsic to how this operates, precisely because so much of the data gathering is invisible. There’s nothing that guarantees that it’s being used for your benefit. And when I say it’s invisible, I mean you can find out; there’s a plugin, and I think we’re going to talk a little bit more about privacy defense technology.
There’s a plugin called Ghostery [00:16:30] that helps block the gathering of information by third-party websites. One of the things Ghostery can do is, when you load a webpage, tell you all the different entities that have trackers on that page, that are monitoring your activity in some way, if only the fact that you loaded that page. You will see the same names popping up again and again. Take that as a sign that that entity is very likely [00:17:00] to be able to correlate your visits to any page that has that company’s tracker on it.
Aaron Powell: For any Trump supporters out there, if you want something legit to be mad at the mainstream media about, it’s the megabytes and megabytes of trackers that they’re chewing up your data plan with whenever you visit newyorktimes.com or other major newspapers. It’s astonishing how many there are if you install something that tells you.
Julian Sanchez: Although almost everyone [00:17:30] does that; it’s borderline ubiquitous. Think about email, and I think Cato may do this, sorry to our marketing people, but very often when you get a marketing email it will contain a little invisible image called a tracking pixel, which essentially works the same way as ads on websites you visit. That is, it is loading that image from a third-party site, from either Cato’s or [inaudible 00:17:58] marketing company’s [00:18:00] site, that is linked to a unique identifier. Essentially it’s a way of saying, we know this person opened this email at this time. Which is helpful, because you can have a unique ID associated with a link in the email, so you know, did they open this, and then if they opened it, did they act?
How many people just deleted it without reading it? You can shut this off; most email clients have an option somewhere to disable it. Usually [00:18:30] it says load remote images. It’s one of those things that you may have seen in your settings, but if you don’t know why it’s there, you might just think it’s something to save time loading stuff. It’s actually a privacy feature. It’s usually not marked that way, so even if you’ve noticed it in your settings, if this isn’t something you are pretty focused on, you might not have realized that it’s not just a download-saving feature but a privacy [00:19:00] feature.
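The mechanics of a tracking pixel are simple enough to sketch. The domain and URL layout below are invented for illustration:

```python
import uuid

def add_tracking_pixel(html_body, recipient, sent_log):
    """Append an invisible 1x1 image whose URL is unique per recipient.
    When the mail client fetches it, the sender learns the mail was opened."""
    token = uuid.uuid4().hex
    sent_log[token] = recipient  # remember who this image was sent to
    pixel = ('<img src="https://track.example.com/open/%s.gif" '
             'width="1" height="1">' % token)
    return html_body + pixel

def record_image_request(token, sent_log, opens):
    """What the tracking server does when that image URL is requested."""
    if token in sent_log:
        opens.add(sent_log[token])
```

Blocking remote images, as Sanchez suggests, works precisely because the open is only reported when that unique image URL actually gets fetched.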
Trevor Burrus: I want to ask about some things that are not, well, they’re still the internet, everything is the internet now it seems, the internet of things, but-
Aaron Powell: You want to ask about the Juicero juice maker?
Trevor Burrus: Of course, the Juicero that needs to know that you’re … what is it, that you’re Wi-Fi connected?
Aaron Powell: It’s the juice maker that only works if you’re connected to Wi-Fi?
Trevor Burrus: Yes, of course. But no, I want to ask about Alexa and your smart TV, the whole idea that Alexa is always listening, or your TV is always listening, and more things are going to be listening to us [00:19:30] as time goes on. Is this something that they’re also recording? Do we think that they’re listening to the point that they have our entire conversations, the ones we have in my living room, somewhere on a database at Google headquarters, or is it just listening for its name?
Julian Sanchez: In theory, if you are technically inclined, I suppose you could run Wireshark on your local network and look at the patterns of traffic between the-
Trevor Burrus: Wireshark, [00:20:00] I don’t know what that is.
Julian Sanchez: It’s a nerdy thing that 99% of the people listening to this will not be equipped to use, so it doesn’t matter. It is a piece of software that can scrutinize network traffic pretty granularly. If you wanted to see what all the devices on your Wi-Fi network are doing, how often they’re transmitting data and where they’re sending it, it can be used that way. Security people often use it for diagnostic purposes. [00:20:30] In theory, my understanding is that a device like the Amazon Echo, Alexa, is mostly just listening for its name, and then when it hears its name, it starts transmitting back to Amazon. If you don’t know how to use something like Wireshark, well, you have to trust that’s in fact how it’s working. I know there is a relatively recent case where [00:21:00] Amazon was basically fighting with the government over an attempt to obtain their logs of someone’s Alexa traffic for use in a murder investigation.
Trevor Burrus: That could help with a timeline, was it in the house, was [crosstalk 00:21:16].
Julian Sanchez: It can be useful for a number of reasons, including: is this alibi plausible, was he in fact in town when he said he was out of town?
Aaron Powell: [inaudible 00:21:26]
Julian Sanchez: Did he say, [00:21:30] “Alexa, purchase the wrench,” and …
Trevor Burrus: It’s interesting but that brings us to the-
Julian Sanchez: I apologize, by the way, to everyone who’s listening to this out loud in their living room and …
Trevor Burrus: You just turned on Alexa.
Julian Sanchez: [crosstalk 00:21:45]
Aaron Powell: To be fair we do it, “Hey Siri,” and “Okay Google.”
Trevor Burrus: Yeah, so we can say, “Alexa, order a sharp knife,” and maybe that just happened. Anyway, we brought up what the government might be doing with some of these things, which of course [00:22:00] is the interesting question here. Some people are very concerned with corporations too, I think that’s less true of libertarians, but we might want to be concerned about what corporations are doing with our data. Then we have the government, and if they want this data, or try to get it for various reasons, they could do a lot of stuff to us. As you said, it’s almost like a totalitarian state with how much data they have on us. [00:22:30] Is that something that concerns you? Which, I know, of course it is, but is it just something that’s happening?
Julian Sanchez: Absolutely. People will always say, why aren’t you more worried about all these companies that have these vast streams of data about us? I usually say to them: well, Google has never tried to, you know, blackmail Martin Luther King into committing suicide. In terms of the track record, more generally, yeah, companies are gathering this data because they want to make money, and [00:23:00] this is not true in a blanket way, but by and large the most pernicious thing they’re doing with it is trying to sell us stuff. Whereas if we look at the history of government intelligence agencies, we see much more pernicious types of surveillance: surveillance of political activists and civil rights leaders for purposes of harassment, surveillance for the purpose of political manipulation, [00:23:30] public opinion manipulation.
I think there are democratic reasons to be more concerned about that kind of surveillance, just in terms of motives, and also, of course, in terms of the kind of power they’re able to exercise. Google can’t really throw you in jail. That said, current legal doctrines are such that for a lot of types of data, if a company like Google or Amazon has it, the government has pretty easy access [00:24:00] to it, under what’s known as the third-party doctrine, which was established in the late 1970s, before the internet or mass data mining was a thing. The idea is that, with probably the exception of the contents of your interpersonal communications, that is to say, exempting the contents of a voice call or a video chat or [00:24:30] maybe an email exchange, the data these companies have about you and your activities, which is just part of their business records, is not something you have a Fourth Amendment interest in. It can be obtained by voluntary disclosure or by a simple subpoena.
This is the reason that the NSA’s infamous bulk telephone records collection program was seen by [00:25:00] the secret FISA court as not a violation of the Fourth Amendment. Because according to this doctrine, information of the type that is kept in your phone bill, and in the call detail records maintained by the phone company, is not information you have any Fourth Amendment interest in. It doesn’t require any kind of particularized search warrant, and so, the courts held, there was no constitutional obstacle to saying: fine, we want every American’s call history [00:25:30] to be stored for five years for future analysis. That’s not a Fourth Amendment search as far as that legal doctrine is concerned. So even if you’re not especially concerned about corporate uses of this data, it is worth knowing about the symbiotic relationship between these companies and the surveillance state.
One of the early Snowden revelations of how Section 702 of the FISA Amendments Act was being used involved [00:26:00] a program called PRISM, which was specifically about the partnership of the government with the major technology platforms, the communication platforms and technology companies. Basically all of the big ones were in there, so you had Facebook and Google and Yahoo and Microsoft; AOL, I think, was there. The government understands that there is this very useful symbiosis, where companies are gathering very large amounts of data for [00:26:30] their own business purposes. It either makes them a profit, or helps them serve their users better, or helps them secure their own services, as in the case we discussed earlier, where you might want to profile someone’s activity to tell whether it’s a human being and not some kind of bot that might be used by a scraper or hacker.
Then, because they’ve gathered these massive amounts of data, under current legal doctrines the government has access to it, subject to a much lower standard [00:27:00] of scrutiny. The real rate-limiting factor there tends to be the extent to which the companies are willing and able to fight back. Sometimes, especially when what’s going on is semi-public, they will be more vigorous in trying to resist overbroad requests for information. When it’s entirely secret, though, the incentives are a little bit different.
The question is, [00:27:30] are you, as a company with a fiduciary responsibility to shareholders, going to spend time and money on very expensive lawyers to challenge a government request in front of a secret court that is probably not going to be that amenable to your challenge, and for which you don’t get to claim any credit later? You don’t get to say, you see, the other companies, they just gave up all this information, but we fought back; [00:28:00] shouldn’t you be happy about that and give us your patronage? It’s all secret, so it’s something that you are willing to do only out of pure public-spiritedness, which is not how a lot of companies work.
Aaron Powell: Government can be a threat in the sense that if private companies are gathering this data, it’s accessible in some way to the government should they want it. Can government also help? You hear a lot of calls for [00:28:30] passing laws that would limit the amount of data that companies can collect about us, or limit how long they can store it, or require data collection to be opt-in instead of opt-out. Do you think those kinds of laws or regulations would be valuable?
Julian Sanchez: I would say I’m a little bit of two minds on this. I will say I think it is a little bit troubling from a libertarian perspective [00:29:00] how much data is being collected about people that they really don’t recognize is being gathered, or how it’s being used. We may give one site some information on the premise that it’s being shared internally for some reason that is useful to our reason for going to that site or for using that service. But usually, somewhere on page 12 of the 30-page, highly [00:29:30] legalistic terms of service that every single site or service you visit is going to have, there’s something describing their ability to share more broadly than that. Very often our computers are transmitting data that we’re not even aware is being sent. You visit a site, and by default your computer is sending some information about the configuration of your browser.
What operating system are you using? What browser are you using, what plug-ins do you have installed? [00:30:00] Which, very often, at least in combination with an IP address giving your rough location, is going to be enough to more or less identify you, by fingerprinting that particular configuration in that particular place. So I’m not automatically, in principle, averse to the idea that it might be appropriate to say, look, we should ensure that when people [00:30:30] are handing over this information, they’re doing so genuinely consensually, not because they are not sufficiently technically sophisticated to understand what they’re transmitting to companies.
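The passive fingerprinting Sanchez describes amounts to hashing together whatever the browser volunteers. The field list here is illustrative; real fingerprinting studies (the EFF’s Panopticlick, for instance) use many more signals:

```python
import hashlib

def browser_fingerprint(user_agent, plugins, timezone, screen_size, ip_prefix):
    """Fold passively observable configuration details into one stable ID.
    No cookie is needed: the same machine keeps producing the same hash."""
    material = "|".join([
        user_agent,
        ",".join(sorted(plugins)),  # order-independent plugin list
        timezone,
        screen_size,
        ip_prefix,  # rough location, e.g. the network prefix of the visitor
    ])
    return hashlib.sha256(material.encode("utf-8")).hexdigest()[:16]
```

The same configuration always yields the same identifier, which is why clearing cookies alone doesn’t defeat this kind of tracking.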
On the other hand, I am reluctant to endorse what would be likely to come out of any actual political process, for a bunch of reasons. One, there are different [00:31:00] cases that we have different intuitions about, I think, in terms of what kinds of information it is useful for a site to collect and store. I did just say that, in a sense, people might have less meaningful knowledge of and consent to what they’re agreeing to, because no one can wade through all these legalistic privacy agreements. At the same time, if you make someone opt into every single benign use that might be made of [00:31:30] their data, that equally is going to add so much friction that you end up foreclosing good and benign uses of information. And I think the most significant issue here is that you’re likely to end up with a scenario where a lot of functions just move offshore.
You just end up with a lot of advertisers operating [00:32:00] in parts of the world that aren’t subject to US jurisdiction, or traffic being directed to sites that are operating outside US jurisdiction. It’s not really clear how you mitigate that without just balkanizing the internet and saying, well, you can’t link now to sites outside the US, or you can’t have advertisements from companies outside the US. It’s a thorny problem. I [00:32:30] don’t give that notion the kind of automatic rejection; I think there is some kind of case to be made for it, just on the grounds that people should meaningfully consent to disclosure of information about themselves. But it is difficult to see, as a practical matter, how you achieve that without a lot of other baggage [00:33:00] and without trivial circumvention.
Aaron Powell: Then if private companies are going to continue to collect this data, because it’s valuable to them and often central to their business model, and we’re skeptical of getting the government involved in protecting our privacy online, we turn to other steps that we as individuals can take to protect ourselves. You mentioned Ghostery, which is an ad blocker, and ad blockers have gotten more [00:33:30] popular; now there are rumors that Google is going to bake one into an upcoming version of their Chrome desktop browser. Are ad blockers a good way to protect ourselves?
Julian Sanchez: I think one step in a suite of things you might want to be doing to protect your online privacy is to have something like Ghostery or NoScript, or a range of other privacy-protecting plug-ins, added to your browser. [00:34:00] Privacy or anonymity online is a sort of subset of security more broadly construed; they often tend to go together and complement each other, and how much is appropriate for you is a relative question. If you want to ask whether a particular location is secure, you need to know: is it Fort Knox or is it your private home? [00:34:30] The level of security that’s more than adequate for a private home is going to be wildly inadequate for Fort Knox, because the question is what you are defending against.
If it’s a question of, I don’t want to be casually tracked by advertisers or companies, then yeah, that may be the primary thing you want to do in terms of your online privacy or anonymity. More broadly, you might have different needs if you are [00:35:00] a journalist or an activist or an academic who’s communicating with people in parts of the world with more repressive governments. Step one, though, certainly might be to do something like that.
Trevor Burrus: Use the ad-blocker? What about something like passwords? Should you be using really long passwords or different passwords for everything?
Julian Sanchez: It’s worth noting, because we’re in public policy, we’re talking about governments, [inaudible 00:35:32] the NSA, that at least in the short term, in terms of the practical impact on ordinary people who aren’t activists or academics or journalists, the most realistic near-term privacy threat is some kind of criminal hacker stealing your information and leaking stuff online. We’ve seen, of course, high-profile examples of this happening in the last few years, ranging from celebrities [00:36:00] having their photos leaked to companies having their internal documents published. In terms of the way that happens, we hear a lot of focus from security folks on what are called zero-day exploits, meaning some new vulnerability in software that has never before been disclosed and so hasn’t been patched.
The truth is most breaches are not the result of some zero-day exploit. Much more common is either [00:36:30] old vulnerabilities that have long since been patched but the system hasn't been updated, because somebody just hasn't bothered to go to the newest version of the software that is secure against publicly known security holes, or password phishing and password guessing. People use weak passwords and they use the same password across multiple sites. The easiest way to avoid this is to use a password manager, something like 1Password. [00:37:00] I use 1Password, but there's a whole array of these, most of which are pretty good.
Trevor Burrus: What do those do?
Julian Sanchez: The idea there is that they will generate very strong and long passwords that a human being might have trouble memorizing, and store them or automatically fill them in. You have a little app, either on your phone or your desktop, that plugs into your web browser and ensures [00:37:30] that you've got a strong, hard-to-crack, unique password for every site, so that you're not compromised across all the sites you use if one of them is breached, and so that it can be the kind of password you wouldn't want to try to memorize. The downside of this, of course, is if the master password file itself is cracked, and usually those are stored encrypted … You need to memorize at least one strong [00:38:00] password, which is the one you're using to encrypt all those. Two, I guess, because there's also the one you're using to encrypt the device itself.
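To make the idea concrete, here is a minimal Python sketch of the kind of per-site password generation a manager like 1Password performs. This is an illustration, not the tool's actual implementation: the site names are hypothetical, and a real manager also encrypts the stored vault with your master password.

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Generate a random password from letters, digits, and punctuation,
    roughly the kind a password manager produces for each site."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A distinct password per site, so a breach of one site
# doesn't compromise your accounts anywhere else.
vault = {site: generate_password() for site in ("example-bank", "example-mail")}
for site, password in vault.items():
    print(site, "->", len(password), "characters")
```

Using `secrets` rather than `random` matters here: `secrets` draws from the operating system's cryptographically secure randomness source, which is what you want for credentials.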
You do have a central store there, but there are not a lot of practical cases where that is breached where, even without a password manager, you're not already essentially done. For example, someone might have a key logger installed on your computer, so they're able to see when you unlock your [00:38:30] password manager and decrypt that file, and then they are able to steal the file. Of course, under those conditions, if you don't have a password manager, they're still able to see what you're typing and steal all your passwords. On the whole I think those are a great tool, and it's probably the simplest practical thing you can do to make yourself more secure, not necessarily [inaudible 00:38:52] government, but in general, against attackers of any stripe.
I will say, if you don't want to use one of those, it's not that hard [00:39:00] to have better passwords. One thing you can do is use a phrase instead of a word. Most sites now will let you pick a pretty long password. Some weird string of six, seven, or eight characters with all sorts of special characters is probably not as strong as just a string of five English words in a row. If you can't do that, if it won't let you use something that long, [00:39:30] you can create mnemonics to make things more memorable. "Mary had a little lamb, its fleece was white as snow, and everywhere that Mary went, the lamb was sure to go." The first letter of every word there is like 22 characters; then maybe throw a couple of digits at the end and you've got something that's pretty … I wouldn't use that one, because I just used that specific example on a podcast where we're talking about security and intelligence.
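The mnemonic trick Sanchez describes can be sketched in a few lines of Python. The trailing digits are an arbitrary illustration, and as he says, don't actually use this particular rhyme now that it's been said on a podcast:

```python
def mnemonic_password(phrase: str, suffix: str = "42") -> str:
    """Take the first letter of each word in a memorable phrase,
    preserving capitalization, and append a couple of digits."""
    letters = "".join(word[0] for word in phrase.split())
    return letters + suffix

rhyme = ("Mary had a little lamb its fleece was white as snow "
         "and everywhere that Mary went the lamb was sure to go")

# 22 first letters plus 2 digits: gibberish to an attacker,
# but reconstructable from a rhyme you already know.
print(mnemonic_password(rhyme))
```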
Aaron Powell: Everyone’s going to have the same password, Julian. Look what you did.
Julian Sanchez: You can pick a very memorable phrase [00:40:00] and then use the first letter or the last letter of each word to create a string that is gibberish and very long, but that you're not going to forget any time soon. It would even be safe to write down. Many people will make fun of the idiot who wrote his password on a Post-it. If you write down "Bank of America" and then the exact password, that's not a great idea. In general, though, physical spaces provide a fair amount of security, [00:40:30] if you've got a very secure password, and the attack surface is someone who has physical access to the location where it's stored-
Trevor Burrus: You might have other problems.
Julian Sanchez: It's not a great idea if it's your office, where lots of people have access, but if it's your home or your wallet … you can write it down in a more obscured way. I might write down "river" to remind me of "bank," and then "Mary" to remind me of the first letter of [00:41:00] "lamb." You want to write it down so that you remember what's associated with each site, without actually writing it in a way that would be useful to an attacker if it were stolen.
Trevor Burrus: What about something like using your fingerprint for your phone? Because we had this with San Bernardino, the question of whether the government can force you to put your fingerprint on a phone, and we have some searches at the TSA where they're going to search the phone. Should you be using a fingerprint or should you be using a passcode, or does that really matter?
Julian Sanchez: The first thing to say is that, for the way [00:41:30] in practice most people use devices, that device, your smartphone, is sort of the master key to everything else, unless you are willing to do a lot of stuff manually that most people are not. It probably has stored credentials for all your other sites, so if someone has access to your phone, they have access to essentially every other secure site you're using and probably all your email. Frankly, if they've got access to your email, they've got access to everything else, because just about every site has some [00:42:00] sort of password reset function, which means you can reset the password by having it send an email. Even if they have two-factor authentication, the most popular form of two-factor authentication, meaning they're using a password plus something else so guessing your password isn't enough, is a text to that phone. The single biggest security hole in most people's lives is their phone. That should certainly have a very strong passcode. Don't keep it [00:42:30] un-passcoded, certainly, and don't even use a four or five or six digit-
Trevor Burrus: You can only use four for an iPhone though.
Julian Sanchez: You can change it, you have to go into the settings to say you want a long-form passcode, but it's absolutely worth doing. The fingerprint, I guess, is one of these trade-off things. If your threat model is primarily the police or the government, one trade-off here is that …
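The arithmetic behind "a long-form passcode is worth it" is simple to check. A short Python sketch comparing brute-force search spaces (the lengths chosen are just illustrative):

```python
def search_space(alphabet_size: int, length: int) -> int:
    """Number of possible passcodes an attacker must try in the worst case."""
    return alphabet_size ** length

four_digit = search_space(10, 4)    # digits only: 10,000 combinations
six_digit = search_space(10, 6)     # digits only: 1,000,000
alnum_ten = search_space(62, 10)    # upper + lower + digits, length 10

print(f"4-digit PIN: {four_digit:,}")
print(f"6-digit PIN: {six_digit:,}")
print(f"10-char alphanumeric: {alnum_ten:,}")
```

The alphanumeric passcode's space is hundreds of trillions of times larger than a four-digit PIN's, which is why, as Sanchez notes later, the fingerprint reader that makes a long passcode tolerable in practice can be a net security win.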
Trevor Burrus: If you’re a drug dealer [00:43:00] is when it adds up, so all the drug dealers [crosstalk 00:43:02]
Julian Sanchez: I don't know whether this is a very popular podcast with narcotics traffickers, but if your threat model is primarily the government, a lot of courts are holding that, under the Fifth Amendment privilege against self-incrimination, you cannot be compelled to cough up or enter your passcode or password, but you can be compelled, because it's not testimony, to give them a fingerprint, and I think legally that's [00:43:30] very plausibly the right answer. If that's your threat model, that may create a problem. Now, this is mitigated to some extent even in that scenario because, one, you only have so many tries of the fingerprint on the phone before it requires the full passcode.
It also requires the full passcode if the phone has been powered down or if it hasn't been unlocked in 48 hours. In [00:44:00] a lot of scenarios, even if it has fingerprint unlock enabled, by the time someone is legally compelling you to use it, it's not relevant, because the passcode is at that point required anyway. You can mitigate this further by using a non-standard finger, so I won't say which one I use. Most people use their thumb; if you pick a different one, it might be a little less accurate, but it also means that if someone's pushing [00:44:30] your thumb on the thing, whether the government or a mugger, in an attempt to get it unlocked-
Aaron Powell: You couldn’t be compelled to tell them which finger you used.
Julian Sanchez: Presumably not.
Trevor Burrus: Say, "Look, it's just not working, man."
Julian Sanchez: I don't think the courts have tested that one, but these things are not perfect, so would someone know whether you were just sweaty and it didn't quite read right, or whether it looks like I was really using my pinky? With that said, I think on the whole, [00:45:00] for most people, the fingerprint reader is a security benefit, because it makes it practical to use a long, strong passcode if you only have to enter it when you reboot your phone, which most people don't do on a daily basis, as opposed to every time you want to use the phone. If you need to punch in something every time you use your phone … most people are just not going to, in any practical way, use some 25-character [00:45:30] complicated thing.
Even for other security solutions, like those visual patterns you see on Android phones where you basically draw a little picture, in theory the number of different combinations is huge. In practice, almost everyone uses a much narrower range of the possible things you might draw; lots of people just use the shape of their first initial, [00:46:00] so that might not be as secure as you think. Also, there are other attacks on a passcode that is being frequently entered, so if you have to enter it on a device that you are using in public all the time, the odds grow that either a person looking over your shoulder, or a camera that, applying some kind of [inaudible 00:46:28] software analytics, is able [00:46:30] to figure it out from the movements of your fingers.
Aaron Powell: One of the waiters at Mar-a-Lago.
Julian Sanchez: Yes, one of the waiters at Mar-a-Lago is going to be able to use that passcode. In terms of practical scenarios, there are a lot of ways your fingerprint can be obtained, and there are a lot of ways a code you are entering frequently can also be obtained if you're doing so in public on a regular basis. So to the extent the [00:47:00] fingerprint lets you choose a stronger code, and you still need to enter the code whenever the phone is shut down, I think that ends up being a security benefit.
Aaron Powell: The other one that we hear a fair amount about is encrypted messaging apps. The libertarianism.org team, there's six of us, and we're using a messaging app to talk instead of emails, because emails get hoarded. We use Telegram, not just because it's the messaging app of choice for ISIS, but [00:47:30] …
Julian Sanchez: Do you like it? Is that their motto?
Trevor Burrus: That’s how he knows [crosstalk 00:47:35]
Aaron Powell: I don’t believe so but does that mean our communications are safe? Are these apps a good way to go?
Julian Sanchez: It depends what your threat model is. If your threat model is "I think that I am specifically being targeted by one of the more sophisticated state intelligence services," they are probably going to compromise an end [00:48:00] point. That is to say, they are probably going to hack one of the end devices.
Aaron Powell: So they get into my computer.
Julian Sanchez: Right. At which point the encryption [inaudible 00:48:10] it doesn’t matter.
Trevor Burrus: We talked about this with Patrick Eddington … some of the stuff where they're probably going to pretend to be the cleaning crew at Cato and put a USB in Aaron's computer. Is that what you mean by hacking the end device?
Julian Sanchez: There are other ways people can be hacked; they don't even need physical entry.
Trevor Burrus: Okay, [00:48:30] so even remotely.
Julian Sanchez: If there's a vulnerability, they're able to install a key logger remotely. That said, that's a pretty small percentage of people, and certainly of US citizens in terms of US intelligence. Although if you are a businessperson who travels abroad, you may well be a target of Chinese or Russian intelligence for economic espionage reasons, so it's not a perfect solution. [00:49:00] That is not to say don't do it. It's absolutely worth doing, because that's the extreme case. You may have to take other measures to avoid those kinds of worst-case targeting scenarios, but absolutely, I think it makes a lot of sense to use secure chat as much as possible. I personally use Signal, and I'm hearing a lot of buzz recently about an app called Wire, which I just installed.
The problem [00:49:30] is that it's so recent that not a lot of other people, except for extreme security nerds, are on it yet, but both of those are pretty good. I think most of the security folks I know tend to prefer Signal to Telegram, but there are various trade-offs. Signal is based on your phone number, and this is one of the security-anonymity trade-offs: it will secure the content of the communication, but you do have to [00:50:00] give out information about your phone number to the people you want to communicate with. It's not great as a totally anonymous form of communication, whereas Wire, which is newer, operates on a username basis; you can just give out your username. Both are end-to-end encrypted, meaning the company does not have access to the contents of the messages, so they can't be turned over to governments, and not just the US government.
Any government that tries to force them to hand over the contents of your messages [00:50:30] will come up empty-handed, and Signal just doesn't keep much in the way of metadata for very long, meaning they don't even have a log of who is communicating. One thing I will say is that it is worth it, both for your own purposes and because it provides a kind of herd immunity, to make this stuff your default and not just something you use for particular applications. I was talking to a friend who [00:51:00] is a journalist and was very proud about having just installed Signal: "Now, finally, I can have secure communications with sources." Are you using this as your default for all your communications with your sources? "No, I only use it when I need to discuss something sensitive."
I said, you're just wasting everyone's time, because if you're communicating with all of your sources regularly through unencrypted email, and then suddenly you switch to Signal for one source, and then you publish a story [00:51:30] that has some classified fact in it, it's not going to take a super genius to figure out what's happened there. The security comes from always using the secure technologies, so nothing stands out in a way that would reveal something about the activity, at least in the use case of a journalist, where the fact of a different kind of communication would [00:52:00] be enough to, let's say, lead a leak investigation to the leaker. There's a more general herd effect, too, just because a lot of governments look for encrypted traffic as a sign that you must be up to no good, so of course-
Trevor Burrus: So if everyone’s just using it all the time then it looks like-
Julian Sanchez: All of us use encryption basically every day. If you have a modern smartphone, it is encrypted. If you ever connect to a site that you have to log into with a password, that part at least is encrypted. In general, you're using [00:52:30] encryption all the time, because otherwise all of your traffic could be siphoned up and read by someone else. It's just that it's usually not visible to the user. The whole point is that it has to be seamless enough that you don't have to be actively engaged beyond just confirming that, yes, that little lock icon is there. More often, if something's wrong, a "not secure" icon is there to tell you maybe you shouldn't input your password on this site. [00:53:00] To the extent that you are not unusual, either in terms of your own activity or in terms of the general population, for using encryption, that makes it less useful as an indicator of wrongdoing.
Trevor Burrus: What's coming? I've always been scared of asking this question of our tech policy friends, who are looking to see what sorts of things are coming, both in terms of surveillance fears and new tech to [00:53:30] keep our own privacy. Is this going to become better … is security going to be better, maybe more secure, in ten years, or is it going to be less-
Julian Sanchez: It's very hard to predict, because we're in a constant arms race between the data gatherers, whether it's for marketing purposes, intelligence purposes, or criminal purposes, and people who want to try to keep things secure. I think [00:54:00] the trend is definitely in favor of the data gatherers, even though it's become a lot easier to keep communications secure. It still takes a fair amount of effort if you want to be really untraceable or invisible. There's a new book out by the former hacker Kevin Mitnick called The Art of Invisibility, and one of the things that jumps out is that if you really want to be robustly invisible, it takes a really dispiriting [00:54:30] amount of effort. There are two trends that are worth watching in the future. The big one is what's sometimes called the internet of things, but more generally the fact that basically everything has a computer embedded in it now, and sensor-enabled networked computing devices are essentially ubiquitous.
They are in our cars, they're in our kitchen appliances, they may be in our [00:55:00] bodies if you have a pacemaker, they're in sex toys now. Almost anything you can name has, and increasingly will have, a networked computer in it, and often a sensor in it, and this is going to be enormously convenient in a lot of ways, but it does mean it's going to be a lot harder to have that assurance of robust security. It's no longer [00:55:30] enough to encrypt your communications; you have to worry that your television might be able to hear the pattern of your keystrokes, from which you can very often determine what is being typed, because human beings have hands that are structured in a particular way. We don't type different letter combinations with equal speed, and so this is a real attack that we've seen people actually be able to use.
[00:56:00] If you can hear the sound of those keystrokes and you know this is someone typing English, you can very often reconstruct what is being typed from the pattern of sounds. When that becomes a realistic attack vector, it's not just "can I securely encrypt this transmission" but "am I aware of every sensor around me when I am using that device?" And that is going to become increasingly [00:56:30] impractical in a lot of ways, especially if you want, for example, to do something or send something connected with whatever is outside your home, where it's less easy to associate it with you individually. The other thing I lose sleep over is what is presaged by the Apple versus FBI fight, the fight they had over the San Bernardino shooter's phone and whether Apple would help them crack the encryption on it. That was portrayed really as part [00:57:00] of the crypto wars.
In a more fundamental way, I saw that as the first salvo in maybe a new fight over government access to developer keys. We have these arguments about whether certain kinds of encrypted software and communications platforms should be built with government back doors. Basically, most modern computer systems [00:57:30] used by most human beings have a kind of back door already: it's called the update system. To keep your phone and your laptop and your Alexa and your smart TV secure as new vulnerabilities are discovered, it needs to accept updates, to add new features but also to ensure that vulnerabilities are patched. The way devices regulate access and ensure that what they are getting is really an update and not a piece of spyware is that there are cryptographic [00:58:00] keys held by the developers, used to authenticate that, yes, this is really a new update from Apple, this is really an update from Microsoft.
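The authenticate-before-install flow Sanchez describes can be sketched as follows. This is a toy, not how any real update system works: Apple and Microsoft use asymmetric signatures, where the developer signs with a private key and every device verifies with a public key, whereas the shared HMAC key below just stands in for the "developer key" to show the shape of the check.

```python
import hashlib
import hmac

# Hypothetical key standing in for the developer's signing key.
DEVELOPER_KEY = b"hypothetical-developer-key"

def sign_update(payload: bytes) -> bytes:
    """Developer side: produce an authentication tag for an update payload."""
    return hmac.new(DEVELOPER_KEY, payload, hashlib.sha256).digest()

def verify_update(payload: bytes, signature: bytes) -> bool:
    """Device side: accept the update only if the tag checks out."""
    expected = hmac.new(DEVELOPER_KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

update = b"new firmware v2"
signature = sign_update(update)

print(verify_update(update, signature))              # genuine update
print(verify_update(b"tampered firmware", signature))  # rejected
```

The point of the sketch is the trust relationship: whoever holds the signing key decides what a device will accept as a legitimate update, which is exactly why compelled disclosure of developer keys would be, as Sanchez says, a back door to everything.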
You saw, in one of the legal briefs in the Apple-FBI case, a footnote from the FBI saying that if Apple didn't want to be forced to write code themselves to help, the FBI would be willing to just have them hand over their encryption keys and write the code itself. They thought Apple would not want to have to do that, and of course that was an understatement, [00:58:30] because Apple understands that that would be catastrophic both for security and for the ecosystem of trust that software depends on. As soon as people recognize that keeping your software up to date may mean installing spyware that some government has forced the developer to turn over, you have an unsettling scenario where people become wary [00:59:00] of installing updates, which creates other kinds of security problems.
Sooner or later, I think some government, if not ours then China, is going to start trying to demand things like access to developer keys, which is to say the back doors to all these devices we're relying on. At which point we're going to have an interesting discussion about models [00:59:30] for securing devices against that kind of attack, which is a hard one. One solution, if you are a very hardcore privacy person and nerdy enough to be willing to slog through using command lines for a lot of stuff: you might know about a very secure operating system called Tails. It stands for The Amnesic Incognito, [01:00:00] I forget what the L is for, System. The idea here is that this is an operating system that you keep installed on a USB key and that is basically amnesiac; it starts afresh each time. But it's also an open-source product, which means that when a new version of Tails is published and posted online, the source code is posted online.
You can confirm that the new version you're downloading manually is in fact the same [01:00:30] as the widely published general release version, that everyone can look at the source code, and that they can confirm it doesn't have any spyware in it. That is secure against that kind of attack: you are manually downloading the software, and you have a way of confirming that what you're downloading matches source code that has been publicly vetted. That's a lot less convenient than "your phone has a new update, are you ready to install it?" Now, of course, Apple is trying to do a lot of this stuff behind the scenes in terms of verifying [01:01:00] the authenticity of the update, but by taking you out of the loop, you are in some sense required to trust Apple, required to trust Microsoft, required to trust Android about what you're getting. It remains the case that serious security and privacy protection are achievable, but they often come at a significant cost in effort and [01:01:30] usability.
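The manual verification Sanchez describes can be sketched as a checksum comparison. This is a simplification under stated assumptions: the Tails project actually publishes OpenPGP signatures rather than bare hashes, and the downloaded bytes and published digest below are hypothetical placeholders.

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hash the bytes you downloaded so they can be compared against
    the digest the project publishes alongside the release."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for the downloaded image and the digest
# published on the project's site.
downloaded = b"tails-release-image-bytes"
published_digest = sha256_hex(b"tails-release-image-bytes")

if sha256_hex(downloaded) == published_digest:
    verdict = "update matches the published release"
else:
    verdict = "mismatch: do not install"
print(verdict)
```

The inconvenience Sanchez notes is visible even in the sketch: you, not the device, are responsible for fetching the published digest over a trustworthy channel and doing the comparison before installing anything.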
The people I know who spend the most energy making sure that they have the capacity to remain secure and anonymous online are also people who spend a lot more time thinking about it, worrying about it, and learning how to use tools to do it than normal people want to. I think one of the promising trends we are seeing is that things like Signal and Wire [01:02:00] and all these user applications are trying to solve the problem of making it possible for ordinary non-super-geeks to achieve levels of privacy that you used to have to know how to compile your own software to practically use. But there are still a lot of places where there is that trade-off: if you don't want to roll [01:02:30] your own, essentially, if you don't want to personally get involved in manually confirming the security of something, that essentially means trusting some third party. That is always the weak point in any security situation.
Aaron Powell: Thanks for listening. This episode of Free Thoughts was produced by Tess Terrible and Evan Banks. To learn more, visit us at www.libertarianism.org.