Spend any time on social media and odds are that you’ve interacted with at least one bot account; given how advanced they’ve become, you might not even have noticed. Paul interviews bot programmer Max Sklar about why bots are a big part of the future of online interaction and why that’s not necessarily a bad thing. They also discuss machine learning, artificial intelligence, and the deurbanization of New York City.
What is a geofence? Do we have an obligation to give data to apps for their user research? What is machine learning? And who, or what, is a Marsbot?
0:00:04 Paul Matzko: This is Building Tomorrow, I’m Paul Matzko, and I’ve got bots on my brain. Doctors tell me I should survive because these are Twitter bots. Ask most folks what they think of when you say those two words, Twitter bots, and the odds are pretty good they’ll say something negative about Russian election interference, bots, worms, trolls and the like. We don’t like bots, but that might be because we only really take notice of bots when they are a problem. Most bots just do their thing, they make our lives a little bit more frictionless, and we never even notice. Our guest today is the creator of one of these bots. I wanted to talk to him about the rise of the bots and what that means for the future of the internet. Max Sklar is a software engineer at Foursquare, developer of the Marsbot app, and host of the Local Maximum podcast, which really has a lot of great content for folks with an interest in machine learning and in emerging tech more broadly. Welcome to the show, Max.
0:01:05 Max Sklar: Thank you for having me on.
0:01:07 Paul Matzko: Okay, let’s start with Marsbot, this app you developed for Foursquare. What does it do?
0:01:12 Max Sklar: Well, there are actually two Marsbots. This is part of our internal Labs team, which we have at Foursquare, where we get to go off the mainline roadmap of the company, I would say, and work on creative projects that showcase the technology of the company. So the first one was developed in 2016, and I did this with a small number of people, including the founder of Foursquare, Dennis Crowley. That one was basically, you would walk around your city or your town or whatever, and based on where you go, it would text you information. And so it sounds very simple, but there’s a lot of tech under the hood there that makes it all work. First of all is the core technology we have, which is determining where someone is and when they stop, given their coordinates and all the information on their phone, and then it’s also the extensive database of places and what goes on in those places, the aggregation of reviews. So one of the things that I like is, if you walk into a cafe, it will immediately text you what the best thing there to order is, and so that’s given me some good experiences from time to time.
0:02:28 Max Sklar: And so that seemed magical at times, and of course, now in 2020 it seems a little tougher to use. But the one we’re working on now, which actually was put off during the lockdown, we originally wanted to launch this in March and now we’re launching it soon, sometime in the next few weeks, this is sort of Marsbot for AirPods. It’s Marsbot 2, and this is an audio version, and we’re using a concept called geofences. In our case, when somebody walks by a geofence, say a particular storefront in New York City or somewhere else, we can trigger a sound. So some of it is just text-to-speech, it just tells you some of the same information we normally have as you walk by, I don’t know, the deli or whatever, but we also allow people to leave clips. And so as you’re walking through the city, you can hear what everyone else said, or you can leave something and people hear what you said. What are people gonna do with it? I’m not so sure yet. I know people are gonna get in trouble with it.
0:03:34 Max Sklar: I’m excited to find out how people get in trouble with it, but…
0:03:38 Max Sklar: I think there’ll be some pretty cool creative things coming out of that.
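The geofence trigger Max describes can be sketched in a few lines: check whether the user’s latest coordinate falls inside a storefront’s radius, and fire a clip only on entry, not on every update. This is an illustrative sketch, not Foursquare’s actual code; the `Geofence` class, the 50-meter radius, and the sample clip are all invented for the example.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class Geofence:
    """A circular fence around a storefront with an audio clip attached."""

    def __init__(self, name, lat, lon, radius_m, clip):
        self.name, self.lat, self.lon = name, lat, lon
        self.radius_m, self.clip = radius_m, clip
        self.inside = False  # track state so we only trigger on entry

    def update(self, lat, lon):
        """Return the clip to play if the user just entered the fence, else None."""
        now_inside = haversine_m(lat, lon, self.lat, self.lon) <= self.radius_m
        entered = now_inside and not self.inside
        self.inside = now_inside
        return self.clip if entered else None

fence = Geofence("deli", 40.7128, -74.0060, 50, "try the pastrami")
```

In a real system the hard part is the one Max points at: turning noisy phone sensor readings into a reliable "the user is here and has stopped" signal before a check like this ever runs.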
0:03:41 Paul Matzko: I don’t know, I think it’d be kind of fun to walk around New York City and have ghosts just randomly utter swear words as I walk by different stores. [chuckle]
0:03:49 Max Sklar: I mean that could happen. I haven’t seen that yet, but it’s… Yeah I think people will do things with sound effects or…
0:03:57 Paul Matzko: Okay.
0:03:58 Max Sklar: Yeah. Some time… Right now, people are not sure, they’re just still giving you information on what to order, which is okay…
0:04:06 Paul Matzko: Yeah.
0:04:07 Max Sklar: But it’s not as useful when you’re just walking by and not actually in there.
0:04:11 Paul Matzko: Right, is there… Can you customize the sounds for yourself? I mean like decide which sounds you wanna hear, and which categories of sounds you don’t?
0:04:20 Max Sklar: Yeah, there’s actually a settings panel, and that’s one of the early pieces of feedback we got. It’s one of those pages with switches or toggles on there where you can toggle all the broad categories, I’m sure you know the interface, where you can pick which of these you want to hear.
0:04:39 Paul Matzko: Right, right. That’s cool. I was actually thinking about this the other day. So the town I live in has what they call a museum in the streets, and as you walk by historic houses, you can, I think, dial a number. Museums often have the software too; you have an audio guide, you can put the number in, and it’ll give you a little pre-recorded thing about that object. But you can imagine those things are expensive to produce. They take lots of effort and time and money, and they’ve only been used in these very particular non-organic settings; they’re produced top down, not organically, at a museum or a historical site, that kind of thing. But I can imagine that technology being used for… You could go on an organic, community-driven tour of just a street: walk down a street and hear things about it. Right now, you’re using it for commerce, for driving people into stores and whatnot, but you could use that same technology for all sorts of other cool applications, I imagine.
0:05:47 Max Sklar: Yeah, anyone could record stuff. Through the app, we only allow five-second clips, but we can allow people to make channels and have clips of any length, and you could put the geofences anywhere. So if somebody wants to do a tour of a street, that should be pretty easy to do. Right now, everyone hears everything, which is probably fine: in your town in Maine, if somebody puts a walking tour on a street and you turn on Marsbot audio, you wouldn’t get too much extra pollution. But I’m sure in the future we’ll have to say, “Okay, I just wanna follow this specific channel right now, or this specific person.”
0:06:26 Paul Matzko: Now, I watched the presentation you gave, I think it was about Marsbot One, but I think it applies to both and you talked about bots as a huge growth opportunity. And why is that? Where are the economic growth gains coming from using bots, or does it just shift growth from other areas? Like why is it actually growth as opposed to a zero sum transfer?
0:06:49 Max Sklar: Right. Well, I think the one you’re talking about is from 2016. And every once in a while, different aspects of technology come up, in this case, user interface… There’s a little bit of AI in there, where with Marsbot you could actually text back and forth a little bit, and people do, although it’s not really that intelligent; some bots are much more intelligent. But we have some cool things in there. So I think the conversation at that point, and probably still today, was really around, “In what case is the phone too heavyweight of a user interface?” Because I still feel it now, or probably feel it even more now, that I can get things done on my desktop machine faster than on my phone. The phones have made great strides, the apps, the interfaces, all that. Certain apps are better on the phone. But for the most part, they’re equal, or maybe a little bit less. And sometimes it feels like a lot to open an app and do all these things for something very simple, when you could just tell it what you want it to do.
0:08:02 Max Sklar: And I don’t think we’ve really cracked that enough. And so I feel like you had this wave of apps in 2016, 2017, and people are still trying to figure it out like, “How can we make our lives easier on these phones?” I think that nothing has been a killer app yet. Although there has been some pretty impressive bot‐like interfaces, living within Slack, for example.
0:08:34 Paul Matzko: So we’re talking about then… I think I saw one of your demonstrations with a smartwatch. Essentially SMS messages being pushed through a smartwatch, so you don’t have to pull out your phone and check the store. It just pops right up on your wrist. Is that where the ease comes in?
0:08:52 Max Sklar: Sure, sure. I mean, it’s… Sometimes there’s ease and sometimes I don’t want things popping up on my watch, so it makes it very difficult. You go back and forth between “Do not disturb” and “Yes, please disturb.” Giving me information about what to order when I walk into a place, I love to have that on my watch, I could check it and then boom, I can do it. But… Or sometimes texts, sometimes I don’t want texts. It depends.
0:09:24 Paul Matzko: Back to the big picture here, you said that bots, and I think in the same presentation can form emotional connections with users.
0:09:32 Max Sklar: Yes.
0:09:33 Paul Matzko: I think that you’re pushing against something with a negative cultural connotation. I mean, when most folks hear bot right now, they’re likely to think of things like, I don’t know, Russian election interference or… Negative things, bot…
0:09:45 Max Sklar: Yeah. Or even like a phone…
0:09:46 Paul Matzko: Is not a warm, fuzzy word.
0:09:47 Max Sklar: A phone tree or something like that.
0:09:49 Paul Matzko: Yeah. Yeah.
0:09:49 Max Sklar: That’s a very bad interface.
0:09:51 Paul Matzko: It’s like, “I want a real person, don’t give me a bot, right?” On…
0:09:54 Max Sklar: Right.
0:09:54 Paul Matzko: On the helpline. So, convince a skeptical member of the public that an internet with more bots won’t be a bad thing.
0:10:02 Max Sklar: Well, it’s… It depends what kind of bot you wanna build. And it’s basically in the hands of the person building it, on whether that bot is gonna be a good guy or a bad guy.
0:10:13 Paul Matzko: Yeah.
0:10:14 Max Sklar: And I think that if it’s sort of an entertainment type of app with maybe some good information and some good utility, where you have fun doing it, where you could say, “Hey, do X, Y and Z,” and then it does X, Y and Z and gives you a joke or a pun or something that’s related, people would like that. Whereas if it stumbles and doesn’t do that, and then it poorly tries to text you stuff to try to increase “engagement” at random times, then it might not be so great. So it’s purely in the hands of the designers, the developers and the consumers to figure out what they’re gonna let into their life and not let into their life. I’ve been pretty bad with that myself, but I feel like I have more control in terms of what I’m building.
0:11:17 Paul Matzko: Yeah, it’s a reminder that the underlying technologies you’re talking about are not limited to Marsbot. Most of our apps send us push notifications, and we should, in theory, want them to be smarter about how they do so. I don’t want my McDonald’s loyalty app to send me a “You want some fries?” at 2:00 AM; I want it at 11:00 AM, right before lunch. Not that it always seems to get the message.
0:11:45 Max Sklar: Right.
0:11:46 Paul Matzko: Random things will pop up like…
0:11:47 Max Sklar: Oh, yeah. I should point out, if you go back into old talks and stuff, I have a paper on that from around 2014 on the timeliness and seasonality of places. Actually, I have a video on that too from the…
0:12:02 Paul Matzko: Yeah, the burgers…
0:12:04 Max Sklar: Yeah.
0:12:04 Paul Matzko: And… I forgot what you said, you did burgers and, like, bagels or doughnuts or something.
0:12:09 Max Sklar: Yeah.
0:12:09 Paul Matzko: Yeah, yeah that was cool.
0:12:11 Max Sklar: Yeah, pancakes yeah…
0:12:11 Paul Matzko: Yeah, yeah, that was really neat. So, I mean, it’s stuff that we should expect all of our apps to eventually… They already are adopting these technologies and tools.
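The timeliness-and-seasonality idea they’re referring to can be illustrated with a toy sketch: tally a venue category’s historical check-ins by hour, and only send a notification when the current hour carries a meaningful share of that activity. The data, function names, and 8% threshold below are all invented for illustration; they are not from Max’s paper.

```python
from collections import Counter

def hourly_popularity(checkin_hours):
    """Turn a list of historical check-in hours (0-23) into a per-hour share."""
    counts = Counter(checkin_hours)
    total = sum(counts.values())
    return {h: counts.get(h, 0) / total for h in range(24)}

def should_notify(checkin_hours, current_hour, threshold=0.08):
    """Notify only if this hour historically carries at least `threshold` of activity."""
    return hourly_popularity(checkin_hours)[current_hour] >= threshold

# Burger-joint check-ins cluster around lunch and dinner:
burger_hours = [11, 12, 12, 12, 13, 13, 18, 19, 19, 20]
```

With this data, a fries prompt at noon passes the threshold and one at 2:00 AM does not, which is exactly the behavior Paul is asking his loyalty app for.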
0:12:22 Max Sklar: Right.
0:12:23 Paul Matzko: Yeah.
0:12:24 Max Sklar: I could give you like a bad example.
0:12:26 Paul Matzko: Sure.
0:12:27 Max Sklar: So, a few months ago I signed up for the New York City COVID information line, and this was really pushed on us. During that time, like April, May, we were all hungry for as much information as possible on what was going on. But then I just felt like it was recycling the same empty phrases, like, “Stay six feet apart, blah, blah, blah.” Every single day, it was just texting more of the same five different cliches about the virus.
0:13:04 Paul Matzko: Yeah.
0:13:04 Max Sklar: And it was just… It was so tiring.
0:13:06 Paul Matzko: Yeah, yeah. And you eventually wear your users out, and they don’t use it when they would actually like it. Now, it does seem like Twitter, in particular, has a complicated relationship with bots. They do periodic sweeps of bot accounts and the like. How do you prevent them, and I guess social media in general, from doing that to apps that are better intentioned? The good apps; let’s just say the apps that aren’t being created by Russian intelligence agencies.
0:13:34 Max Sklar: Yeah, the whole topic of social media censorship, or really moderation, is tough. Moderation versus censorship: is it good or is it bad? We have a Twitter bot out for Foursquare called Swarming Now, which is basically showcasing all the places that a lot of people are visiting right now and putting up some pictures. And so, sometimes you see pictures of people having a good time all around the world, which is a lot of fun. In normal times, there’d be a lot of sports events and things like that.
0:14:12 Max Sklar: And we’ve had to do our own moderation, first of all, ’cause sometimes the pictures people put up on Foursquare were not very good. And so, we had to… I think we got dinged on Twitter for that and we had to import a library that checks photos for certain things that you might not wanna be posting. And also, just looking at bad photos that aren’t gonna be photos of the event that’s happening on the place that’s there.
0:14:44 Max Sklar: And then, we have some spam problems where our numbers are off. I think we’ve been getting lots of ridiculous numbers from Saudi Arabia recently, where the events are correct but the counts aren’t; I don’t believe there are 1,000 people at a Starbucks, and somebody pointed that out. So every once in a while, there’s sort of a spam problem. We have our own internal problems, but then I want Twitter to know that we’re a well-meaning bot that has good information. I like following it and seeing what big events are going on around the world.
0:15:25 Max Sklar: I often call it The Good News International News Network. It’s basically all the biggest things happening in the world, but usually tilted towards good news, whereas your normal news station is usually bad news. You could try to filter out the good stuff from the bad stuff using machine learning and user-generated signals, and that works to an extent. But the spammers present what I call an adversarial problem, meaning the better you get at it, the better the opponents get at gaming your system. So whether they’re one step ahead or one step behind, they’re gonna be close enough that you’re gonna see some problems.
0:16:16 Paul Matzko: So, with Marsbot, either version, and Foursquare too, there’s lots of information sharing going on about your location and your preferences; they’re building a little profile of users. And metadata can tell a company a lot about people. That’s the point whistleblowers like Edward Snowden and others have made in the past. I’m not a privacy absolutist by any means, but I think it matters.
0:16:44 Paul Matzko: The difference between opt-in and opt-out, the difference between state actors and private or corporate actors, that’s meaningful to me. So I don’t come to it from all privacy, all the time. It’s fine for people to give over their privacy in exchange for services voluntarily, but where do you personally draw the line on how much information you’d like to share with app developers, including apps like your own, and why?
0:17:14 Max Sklar: Well, on the Foursquare side, I’m not an expert on the entire suite of Foursquare apps and what they do, so I don’t speak for the company, but I will say there’s a specific set of guidelines as to what’s okay and what’s not okay. And I usually agree with the decisions they make; I can’t say I agree with every decision. But for me, and I’m pretty sure this is something that we do as a company as well, it has to be something that’s useful, that I’m getting value back from.
0:17:58 Max Sklar: So in the example of Foursquare, okay, it’s using this information to build a profile of me, and then it’s gonna use it to give me recommendations. And so, location permissions: it makes sense to give location permissions to an app that gives you suggestions or does maps or something like that. So if an app is asking for all sorts of permissions that it doesn’t need, that’s a red flag sometimes.
0:18:33 Max Sklar: How do I feel about ads? Some people don’t like having advertising targeted at them. That’s something I don’t mind; I feel there should just be the ability to opt out of it. But to me, that’s not a big privacy concern; I don’t feel the ads are gonna come back to hurt me in any way, it’s just giving me better things. But one thing I really care about, in terms of policy, is encryption: making sure that companies and apps can appropriately encrypt their data and not have the government say, “Oh, you can only use a weaker form of encryption.” That’s something that keeps coming up. It seems like governments around the world want to do this because they want to be able to get into things if they have to, but that makes everything less secure down the line, and that makes all the information that you upload to the cloud less secure. And I think that’s a problem.
0:19:57 Paul Matzko: I think we all get on the basic human level, if a stranger came up to you as you’re walking down the sidewalk in New York City, and let’s just… We’ll posit in a non‐threatening way, so you’re not worried about you’re about to be mugged. And so a stranger comes up to you and says, “Hey”, and you can tell they’re from out of the city, they got an “I love New York” t‐shirt on and a lady Liberty foam headband or whatever, they’re an outsider tourist, and they say, “Hey, could you tell me where is the nearest coffee shop? Or is there a coffee shop you like in the area?” You would feel some obligation, just on a human level, to give them a good answer, I mean, you could lie to them for fun.
0:20:39 Max Sklar: Sure.
0:20:39 Paul Matzko: And send them somewhere terrible, but you would… You have an ethical human obligation to give that information to them to make their life better.
0:20:50 Max Sklar: You want people to have a good impression. You don’t want people to go home and say, “Oh, these New Yorkers, they’re even worse than I heard about”.
0:20:57 Paul Matzko: Yeah, yeah, right. And so, out of pride, out of shared humanity, whatever it is, most of the time, most of us do that, we give away information, make other people’s lives better. Does that ethical obligation or neighbourly obligation of some kind, does that extend to the digital space? Do you see an ethical obligation to share our information with… Not with Foursquare in particular, but with apps, and like Foursquare, like Google Maps, like Rever. Should we think of that as putting some sort of obligation on us?
0:21:30 Max Sklar: Oh, interesting. So I thought you were gonna go in a different direction with that question, which is the ethical obligation in terms of the people who are creating the apps, because that I totally agree with.
0:21:43 Paul Matzko: Yeah.
0:21:43 Max Sklar: I don’t…
0:21:44 Paul Matzko: I think we spend a lot of time thinking about that.
0:21:46 Max Sklar: Yeah.
0:21:46 Paul Matzko: We spend less time thinking about this, is the… Yeah.
0:21:48 Max Sklar: I don’t think… No. You know what, I don’t… My initial reaction to that is to say no, I don’t really feel that we have an obligation to give data to apps or to contribute to an online community. I guess if you make extensive use of something, it’s nice to give back, but I feel like that goes down too much of a slippery slope for me where you have all of these apps demanding your time and attention and input, and it’s like you’ve gotta be able to say… You’ve gotta be able to draw the line at some point.
0:22:33 Paul Matzko: Right.
0:22:34 Max Sklar: It’s like saying in Times Square, do you have to give a coffee recommendation to every single person that you see, it would just drive you crazy.
0:22:42 Paul Matzko: Yeah.
0:22:42 Max Sklar: I mean, and that’s why we ignore a lot of the people on the street sometimes.
0:22:46 Max Sklar: Because otherwise, it would take me an hour just to get from point A to point B as every… I always…
0:22:53 Paul Matzko: Yeah, yeah.
0:22:53 Max Sklar: I was on the street the other day, and somebody said “Hello,” and so I said “Hello,” thinking they could be asking me for directions or something. And then they said, “Can I ask you a question?” That’s when I know: run. If there are two introductions, there is nothing good that can come out of that conversation.
0:23:11 Paul Matzko: And were they also wearing a white shirt and a black tie? Or… [chuckle]
0:23:15 Max Sklar: No, no, it’s just… It was just a random person on the street.
0:23:19 Paul Matzko: Yeah, yeah.
0:23:20 Max Sklar: Living here, I’ve seen that before. I know that’s their whole story, and at some point at the end of it, there’s some convoluted reason why the money in my pocket has to go into his pocket. [chuckle]
0:23:34 Paul Matzko: Yeah, yeah, yeah. Well, so it’s interesting, I mean, this is something I have to talk to…
0:23:40 Max Sklar: But I see your point that sometimes apps and people online do the same exact thing.
0:23:45 Paul Matzko: Yeah. Yeah. That’s true, ’cause people can weaponize that. Whether or not people should feel it, people do feel some sense of obligation to be hospitable. You could just always ignore it whenever someone says “Hello.” But at some point you say “Hello” back, because it’s like, “Well, I guess I kinda owe it to them to respond. They said hello to me, I should say hello back.”
0:24:09 Max Sklar: Right.
0:24:10 Paul Matzko: For whatever reason. And people weaponize that sense of obligation to try to extract money, or proselytize.
0:24:19 Max Sklar: I’ve seen that a lot while traveling. I do think people have an obligation, when they post stuff on Twitter or whatever, to not… I don’t know, not ruin someone’s day all the time. That seems to be like 90% of posting these days anyways.
0:24:36 Paul Matzko: That’s true. Yeah, think about the impact on others; that’s always a good rule of thumb. Okay, one last thing here while we’re talking about bots and the like: you have a patent on some of the technology. From what I could tell, it’s called “System and method for providing recommendations with a location-based service.”
0:24:57 Max Sklar: Oh, that’s…
0:24:58 Paul Matzko: What makes…
0:24:58 Max Sklar: Yeah, wow.
0:25:00 Paul Matzko: What makes that method so different from other methods? ’Cause you have to show a difference from something else.
0:25:05 Max Sklar: I’m impressed you’re going back into my Google scar…
0:25:07 Paul Matzko: I’m digging in, man. [chuckle]
0:25:08 Max Sklar: Yeah, so there are two patents that we have with Foursquare. One is not out yet, but it’s related to attribution, which was an interesting product that I worked on for a couple of years. The one that you mentioned is for just the basic recommender system, back when Foursquare was the city guide, when it was a single app. This is like 2011–2012; we were actually using people’s location and their check-ins, which at the time meant we only knew someone’s location if they physically opened the app, checked in, and shared with their friends, but now it’s more automatic. That was all brand new; the smartphone had only been out for a few years, and so there’s a patent related to that. The actual patent law part of the patent, I don’t under… I’ve gotten these things in the mail where they interview me, and then I got the patent in the mail and there’s “Here’s your invention.” And I read it, and I have no idea what I invented.
0:26:15 Max Sklar: I have no idea what this patent lawyer language is all about, but I understand how it works. It feels almost obvious now to have location-based context in all of your apps, but that was not so obvious back in 2011–2012, or it was just becoming more obvious. The way I understand it, a patent has to have some novel take on something, and there was a lot of brand new territory to cover there. And so that’s the extent of my knowledge of patent law, but that’s sort of what I feel is going on there.
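The basic check-in recommender Max describes, suggesting places based on where you and other users have checked in, can be sketched as a simple co-occurrence scorer. Everything here, the data, the function, and the scoring rule, is a made-up toy for illustration, not the patented method.

```python
from collections import defaultdict

def recommend(checkins, user, top_n=3):
    """Score venues the user hasn't visited by how much the visiting
    users' check-in histories overlap with the user's own.

    checkins: dict mapping user name -> set of venue names they checked into.
    """
    mine = checkins[user]
    scores = defaultdict(int)
    for other, places in checkins.items():
        if other == user:
            continue
        overlap = len(mine & places)  # shared-taste signal
        for venue in places - mine:   # only suggest new-to-me venues
            scores[venue] += overlap
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [venue for venue, _ in ranked[:top_n]]

# Toy check-in histories:
checkins = {
    "paul": {"cafe", "deli"},
    "max": {"cafe", "deli", "ramen"},
    "ana": {"museum"},
}
```

Here "paul" gets "ramen" first, because the one user who has been to the ramen place shares both of his check-ins, while "museum" scores zero overlap.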
0:26:58 Paul Matzko: It doesn’t sound like it necessarily played a huge role in driving what you do. Do you have any opinions about the state of tech patent law?
0:27:12 Max Sklar: Yeah. Everybody I talk to in tech hates the patent trolls, where somebody comes along and says, “Oh, I have a patent on all digital commerce, so here’s my lawyer, give me all your money,” and they shake people down. So that’s certainly not good. I feel like companies have these patents for defensive purposes, so that if you have a patent on something, then somebody else with a patent can’t just come along and claim all the rights, and you have to work out some deal. But it’s clearly broken. It’s not the case that only people who are against all IP, like certain libertarians, think there’s a problem.
0:28:11 Max Sklar: I think everybody thinks that there’s some problem with the patent system right now. I can’t say I have personal experience dealing with this situation, but as a podcaster I’m worried about who can come along and say, “Oh, this is copyrighted, or this person didn’t say it’s okay for that,” even though I have a small podcast. I do it as a labour of love. I do it for free; I give people information for free; I don’t make very much off of it right now. And so it’s a huge liability with very little upside. That seems wrong.
0:28:53 Paul Matzko: Yeah, yeah, we wanna encourage more such projects, not discourage them. The one example that comes to mind as you were talking is the guy, a patent troll basically, who patented the “buy now” button. Everyone has some version of the buy now button: you just click the button and boom, it pays with your pre-approved payment card and ships to your default address. But someone had filed a patent, and as I recall, not the person who actually designed any of those systems; they were not an engineer. It was just some lawyer who said, “Oh, that’s patentable, I’ll file a patent for that idea,” and who goes around and collects rents off of companies that have a buy it now button. Which is not the point; the point is to encourage actual innovation.
0:29:48 Max Sklar: Yeah, yeah. It’s sad when you see something like that, you feel like someone’s like… There are some people who are trying to build stuff and in some ways improve the state of the world, not always successful, but at least trying. And then there’s someone who’s trying to creep off that. And it’s sort of like… And a lot of these patent trolls have their own… If you read some of their legal documents they have their own like, “Well, we’re the ones encouraging innovation because we’re upholding the patent laws.” Ugh.
0:30:19 Paul Matzko: Well, I think Tyler Cowen, the economist at the Marginal Revolution website, has a nice shorthand for it: you need some patent system to encourage innovation, because if there’s no patent system at all, it discourages innovation; people are always worried, and they won’t wanna share ideas ’cause they might get stolen. But if you have too strong of a patent system, it also discourages innovation. So there’s a sweet spot in the curve, and I don’t think I’ve ever interviewed anyone who thinks we’re at the sweet spot. Everyone recognises that something’s wrong. And in this case too, with the internet, there’s the whole process. I mean, you filed for that patent… I’m sure it was years before it was approved.
0:31:07 Max Sklar: Yeah, it was Foursquare did that. Yeah, yeah.
0:31:09 Paul Matzko: Yeah. And in emerging tech, years might as well be forever. An app can be dead in years; you don’t move in that time frame.
0:31:18 Max Sklar: Right. But the benefit they get is that, first of all, as soon as you file for it, you start the clock. So even if it’s not approved, you still have some protections as soon as you file for it, I believe. And the idea, I think, for a lot of these companies is just to build up a library of patents over time.
0:31:52 Paul Matzko: Why don’t we turn now to something you’ve referenced a few times as we were talking about bots. I know it’s something you spend a lot of time talking about on the podcast; I listened to a few episodes about it. Machine learning. You’re a machine learning specialist. That means something to me when I say it, but I’m not sure it does for a non-technical audience, those in our audience for whom this might be the first time they’ve actually heard from someone in that field. What is machine learning, in a non-technical sense?
0:32:24 Max Sklar: Okay. I’ve tried to explain this a bunch of times on the podcast, so let me try my best; every time it comes out a little differently. Well, just think about what learning is before we talk about machines. Learning implies that you’re getting better over time by receiving new data, I guess that would be in computer speak, but in non-computer speak, you learn through experience, you learn through information and lessons. Sometimes it’s more just experience, trial and error; sometimes it’s more book-learning and actually understanding what’s going on. But learning implies that you’re gonna be getting better and better at something over time, and understanding more and more over time in some cases, not in all cases. So machine learning just tries to automate that process: it’s basically building software that gets better over time as it gets more experience, which in the language of software means more data.
0:33:26 Max Sklar: And so that’s the broad goal of machine learning, the language of machine learning is statistics, probability, more precisely, I use Bayesian probability theory a lot, on the show, it’s sort of… I evangelize that, as sort of the basis for how we know what’s true and what’s not true, or what’s more likely to be true and what’s less likely to be true. Because the language of probability, or particularly, Bayesian inference, is the idea that you have some set of options that you’re unsure about, which is the right one, and which is not the right one, and then as you get more information, you update that over time, you get better and better. And so Bayesian inference provides the equations for that, and so that is a great way, in my opinion, to approach machine learning. And so there’s a bunch of different sub tasks that are usually involved in machine learning, and an important… I’m not gonna go through all of them now, I’m not gonna give you the whole course, but I think an important one to know is, your basic supervised learning, where you’re basically trying to classify something, or pick a number.
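The updating process Max describes, starting with a set of options you’re unsure about and revising as information arrives, can be sketched in a few lines of Python. This is a toy illustration with made-up numbers (a coin that is either fair or biased), not code from Foursquare or the show:

```python
# Bayesian updating: start with a prior over hypotheses, multiply each by
# the likelihood of the new observation, and renormalise so the
# probabilities sum to one.

def bayes_update(prior, likelihoods):
    """Return the posterior after observing one piece of evidence.

    prior       -- dict mapping hypothesis -> prior probability
    likelihoods -- dict mapping hypothesis -> P(evidence | hypothesis)
    """
    unnormalised = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two hypotheses about a coin: fair, or biased toward 80% heads.
posterior = {"fair": 0.5, "biased": 0.5}

# Observe three heads in a row; each head is more likely under "biased",
# so the posterior shifts toward it with every observation.
for _ in range(3):
    posterior = bayes_update(posterior, {"fair": 0.5, "biased": 0.8})

print(round(posterior["biased"], 3))
```

After three heads, belief in the biased coin rises from 50% to roughly 80%, which is the "get better and better as you get more information" dynamic in miniature.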
0:34:45 Max Sklar: So for example, if I see a tweet or an email, I wanna classify it as spam or not spam, that would be a very simple machine learning algorithm, or if I get information on a piece of real estate. Or a house, and I know its location, its square footage, etcetera, etcetera, all this stuff, I wanna be able to output the price that I think it’s gonna go for, and then I wanna get better at that over time, as I receive information about that market. And so all of that falls under the purview of machine learning, in fact, those two examples are actually supervised learning, ’cause that’s where you’re actually trying to learn a specific value.
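The house-price example of supervised learning can be sketched concretely: given labelled examples (square footage in, sale price out), fit a model that predicts the label for a new input. The numbers below are invented purely for illustration:

```python
# Supervised learning in miniature: learn price from square footage
# with ordinary least squares. Training data is (input, label) pairs.
import numpy as np

sqft  = np.array([800.0, 1200.0, 1500.0, 2000.0])
price = np.array([160_000.0, 240_000.0, 300_000.0, 400_000.0])

# Design matrix with an intercept column; solve for [slope, intercept].
X = np.column_stack([sqft, np.ones_like(sqft)])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
slope, intercept = coef

# Predict the price of an unseen 1000 sqft house.
predicted = slope * 1000 + intercept
print(round(predicted))
```

As more sales data arrives, you would refit and the estimates of slope and intercept would keep improving, which is the "gets better with more experience" loop.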
0:35:33 Paul Matzko: I think we get that intuitively. Even in the pre‐digital world, folks were familiar with weather forecasts, it’s a thing we like to complain about, but we all get this is a fundamentally unstable thing, lots of variables go into whether it’s gonna rain at 12:00 PM tomorrow, and so you have to deal with it in probabilities. I think that’s probably the way in which most non‐technical folks interact with probability: what are the odds it’s gonna rain, is it a 30% chance versus a 70% chance? I think we get intuitively that as you get more information, as you get closer to the event, your information is more reliable, you have more of it and it’s less in the future, so your specificity can go up the closer you get to the event. So applying that to machine learning, I think some of the earliest machine learning was probably done by the National Weather Service, by NOAA, back before it had been rolled out for consumer internet purposes over the last 10, 20 years.
0:36:43 Max Sklar: Yeah, well, that’s interesting. I should look into that. I haven’t looked into that. I know some of the earliest examples of Bayesian inference, which I consider machine learning, are… First of all, if you saw Imitation Game, Bletchley Park, with Alan Turing, and all that, cracking the German codes in World War II, that could be seen as a giant Bayesian inference engine, trying to figure out what the settings of these machines are. And then it’s related to actuaries and insurance and a lot of different fields, but when I’m working with machine learning now, it’s usually on the recommender system side, or the trying‐to‐discriminate‐between‐data side, or a bunch of different things.
0:37:42 Paul Matzko: Yeah. And here I was getting all excited that your work in machine learning was gonna defeat the Nazis. No, no. So I have read, you have to feed data in to train the machines, so machine learning needs data sets, and the more data… As you go from no data to more data, that’s better, you get more accurate, you get better probabilities. But I have read there are sharply diminishing returns on the size of a data set once you get past a certain point. Is that correct? And why or why not?
0:38:21 Max Sklar: So the answer is that, it depends. So I could give you a very simple statistical algorithm, this is so simple that people usually don’t consider it machine learning, but it’s… Machine learning is just a more complicated version of that, but I’m sure you’ve seen in school, probably in high school, like a linear regression, like a line of best fit. A good example is, if I have a bunch of points on a graph of people’s heights and weights in a class, and then I could fit a line to that data, assuming that it’s linear data, at some point, you get a line that’s as good as it’s gonna be, and then getting more data is not going to…
0:39:02 Max Sklar: Is not gonna improve the slope and location of that line ’cause there’s only like two parameters in that, there’s only like the slope of the line and the intercept of the line. So at some point, you learn where it is, and then you’re sort of done. But the more complicated of a model that you have, the more, well, you can overfit. So, for example, instead of a line, let’s say you allow some like crazy polynomial curve, and then it goes up and down and up and down, and tries to hit every single point that you’ve graphed. But it actually doesn’t tell you anything about the new points that’s coming in. It doesn’t generalise. You just overfit. You memorise the data essentially, is what it’s doing. It’s not really telling you anything about future data that’s coming in.
0:39:50 Max Sklar: So that’s a problem when you have more complicated models. And so, there are ways to fix that. One of them is to just get a lot more data. And so, sometimes you can wring more information out of more data by making a more complicated model, if you can anticipate where the complexity is going to give you a better result. And in some cases you can, and in some cases you can’t.
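The overfitting point Max makes, a crazy polynomial curve hitting every training point but telling you nothing about new data, can be demonstrated in a few lines. The data here is invented: four roughly linear points and one held-out point:

```python
# Overfitting sketch: a degree-3 polynomial passes through all four
# training points exactly, but a straight line (2 parameters) generalises
# better when the underlying relationship really is linear.
import numpy as np

x_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.array([0.1, 0.9, 2.1, 2.9])   # roughly y = x, with noise

line  = np.polyfit(x_train, y_train, deg=1)  # 2 parameters: slope, intercept
curve = np.polyfit(x_train, y_train, deg=3)  # 4 parameters: hits every point

# Evaluate both models on a held-out point that lies on y = x.
x_new, y_new = 4.0, 4.0
err_line  = abs(np.polyval(line,  x_new) - y_new)
err_curve = abs(np.polyval(curve, x_new) - y_new)

# The interpolating curve "memorised" the training data, so it misses
# the new point badly; the simple line stays close.
print(err_line < err_curve)
```

Getting a lot more training data is one fix: with enough points, the wiggly curve can no longer chase the noise and is forced back toward the real trend.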
0:40:21 Max Sklar: Like a lot of times in marketing data, people try to do things that are very complex in terms of trying to figure out what’s going on. So one example of something that is complex is image recognition, because the type of complexity is really crazy, where you have different levels of abstraction as you zoom out, like edges, and crosses, and swirls, to oh, that’s a nose, that’s a mouth, that’s a face, oh that’s one particular person, that’s my friend.
0:41:00 Max Sklar: So those models can be very complicated. But sometimes things like marketing data where you have like age, gender, demographics, things like that, there’s not… There’s only so much that you could do with it. And so the more data you get might not be as useful. So it really depends on the situation and what you’re trying to learn.
0:41:19 Paul Matzko: I’m always struck by how that question of what’s complicated and simple is not always what non‐specialists intuitively find simple or complex. A really complex one is the meme: is it a muffin or a Chihuahua? Image recognition: is it a hot dog or someone’s knees? That’s really complex, but we don’t think of it as complex.
0:41:43 Max Sklar: It’s very simple for humans, because, I think, a majority of our brain is actually dedicated to understanding images and doing image recognition. So we’re really good at it, we can look at something and immediately tell what it is. Machines are becoming pretty good at it as well. Not as good as us. But it’s actually very complicated to write those algorithms.
0:42:12 Max Sklar: So my former professor at NYU, Yann LeCun, developed convolutional neural nets. He’s now at Facebook. And that’s sort of a big algorithm in terms of image recognition. And now, we have people developing, like training algorithms on really a crazy number of machines, a crazy number of parameters, for things like self‐driving cars, stuff like that, where it’s, yeah, it’s a very hard problem. But if you think about it, if you think about it a little more deeply, it is very complex, because what’s an image really, as far as the computer is concerned? An image is just a matrix of numbers. Each number represents a colour.
0:42:56 Max Sklar: So how do you turn a matrix of… I could see a cat and say cat, but how do you take a matrix of numbers and say cat by doing some calculations on those numbers? It almost seems like an impossible problem when you first look at it.
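The "matrix of numbers" point is easy to make concrete. Below is a toy version of the kind of operation convolutional nets build on: sliding a small filter over a tiny grayscale image to detect a vertical edge. The image and filter are invented for illustration, this is not how a full network recognises a cat:

```python
# An image, as far as the computer is concerned, really is a matrix of
# numbers. Here a 4x4 "image" is dark (0) on the left and bright (1) on
# the right, and a 2x2 filter responds to the dark-to-bright boundary.
import numpy as np

image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A filter that fires on a left-to-right step up in brightness.
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)

# Slide the filter over every 2x2 patch and record its response.
rows, cols = image.shape
kr, kc = kernel.shape
response = np.zeros((rows - kr + 1, cols - kc + 1))
for i in range(response.shape[0]):
    for j in range(response.shape[1]):
        response[i, j] = np.sum(image[i:i+kr, j:j+kc] * kernel)

# The response peaks exactly where dark meets bright.
print(response[0])
```

A convolutional net stacks many learned filters like this, so that early layers find edges, middle layers find noses and mouths, and late layers find whole faces, the abstraction hierarchy Max describes.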
0:43:11 Paul Matzko: Yeah, it’s crazy we can do it at all.
0:43:13 Max Sklar: Yeah.
0:43:13 Paul Matzko: Not that we do it poorly.
0:43:15 Max Sklar: Right.
0:43:15 Paul Matzko: Yeah. So what’s the relationship then between… So we’ve been talking about machine learning, and we know that’s, I think all of our listeners will know, that’s somehow related to artificial intelligence. What is that relationship?
0:43:28 Max Sklar: Yeah, so my view is… The way I’ve used the terms is that artificial intelligence is a broader term than machine learning. Where AI is a very amorphous term in computer science because AI is sort of anything that is intelligent. And what is intelligence? Well, I’ve seen a number of definitions, but one is like the ability to perform well in a variety of different circumstances. So you have narrow AI, where it’s like, I’m good at one thing, but it’s still, I could still get lots of instances at that one thing.
0:44:11 Max Sklar: So for example, if I’m good at telling what email is spam, I could still be a good example of AI if I can hit at all those different examples of spam, but it’s still narrow AI. And then there’s general AI, which is, something that is as smart as a person, we could say, you could reason with it, you can speak to it, and it could learn a lot of different things. And so, right now, we’re only at narrow and we will be for quite some time, although we’re widening it a little bit.
0:44:44 Max Sklar: But I think any… Look, if you think about it, like if you want to build something that’s intelligent, machine learning almost certainly has to be involved in this day and age. It’s very difficult to build something at the forefront of AI. I haven’t seen an example where you’re not using some type of machine learning, because otherwise you’re left with something like expert systems and decision trees, something that maybe seems smart but really isn’t, under the hood. So yeah, one definition I’ve heard of AI is just stuff that computer scientists haven’t figured out yet. I think this has been said for decades: once someone figures it out, it’s not considered AI anymore. Maybe 50 years ago playing tic‐tac‐toe would be considered AI, and now that’s just code that I wrote, that an undergrad could write.
0:45:46 Paul Matzko: It’s a moving bar, for sure. And we’ve had episodes on the show, we’ve had some guests who are all across the range, when it comes to AI pessimism versus optimism. In fact, one of our guests, they’re not… It’s not entirely clear… Not everyone agrees on what it means, what artificial intelligence means, or even really what consciousness is, it’s one thing to assume that getting faster and better at recognising patterns, is that the same…
0:46:21 Max Sklar: Note, I haven’t used the word consciousness, this whole time, so we could talk about that.
0:46:25 Paul Matzko: So that’s not… It’s not clear what exactly we’re talking about, between pattern recognition, artificial intelligence and consciousness, and sometimes it’s easy to lump those things together and not tease them out, and again, that’s a big conversation. There are people who don’t think consciousness can be replicated, and there are some who do, and if you count artificial intelligence as true consciousness, a machine becoming conscious, or full artificial intelligence, then whether or not you’ll be able to achieve that is unclear. So there are people who are optimistic that we’ll get there, that we’ll get to a very full, robust AI that becomes truly conscious, and there are those who don’t. There’s also a related question, which is, if we can get to a superintelligent AI, we achieve the singularity, which can be either a very bad or a very good thing. So there’s all these different, I think, feelings about what the future of artificial intelligence looks like, optimistic versus pessimistic. Where would you place yourself in that spectrum, and how would you apply that to some of these hot button questions in the AI industry?
0:47:43 Max Sklar: So I would place myself as being cautiously optimistic. In following the AI news over the years and the way the media portrays it, there are some cases of over‐hype, so that does happen, but it is pretty powerful technology, otherwise I wouldn’t be in it. And I think it could, and has, solved a lot of really tough problems, it’s made a lot of things you wouldn’t even think of a lot more efficient, and so I think it’s been a huge plus in the world, although there have been some downsides as well. So let’s take some of these things one at a time. I have addressed the consciousness issue a little bit on the show, but not a whole episode for it. I sort of put consciousness and intelligence in two different buckets. I don’t think we really understand what consciousness is, and so I think something possibly could be intelligent but not conscious, and probably the other way around too, but I haven’t seen a definition or explanation of consciousness that I’ve been happy with, where the person who’s explaining it to me really understands it.
0:49:02 Max Sklar: And I also talk about subjectivity, where I feel like as humans, we have a subjective experience of the world, we’re experiencing the world, whereas a machine is not actually experiencing the world, things are just happening internally. There are probably people who philosophically believe, “Oh no, that’s just an illusion.” I don’t buy that, I’m having trouble buying that, and that could be a whole philosophical discussion in its own right. But in terms of just the practical applications of AI, I’m always interested in getting out of the city a little bit and seeing AI in farming. This was a while ago, but I saw some startups that were taking pictures of all the crops and immediately determining where the problems are and which needed which type of intervention, and so on and so forth.
0:50:03 Max Sklar: And there’s AI in education and in health care, and I want to… I want a lot more statistical inference in health care, I feel like a lot of the times when I go to the doctor, it’s sort of guess work still, which blows my mind. I know that there’s a lot of studies underlying everything, but I feel like I should type in my symptoms into a box and it should know me and it could give me a bunch of percentages on what may or may not be wrong, and it still shocks me that there’s nothing that does… Even basic information about certain things is very hard to come by.
0:50:55 Max Sklar: So I think that there’s a lot of promise of this technology that I’m really excited about, not to mention, self‐driving cars, which I think, will change the world for the better, not everyone agrees with me. But I just feel that being able to improve human mobility to that extent, will just make people a lot more free to do things in their lives, it’ll encourage people interacting with each other more, it’ll increase the number of potential lifestyles that people might have, and it’ll just make the world a lot more efficient, in a lot of ways. So it’s… We could explore some downsides too, certainly, loss of privacy is a downside, even if you think something is private, it could be easy to use machine learning to piece together what’s going on. You can use machine learning to try to detect who wrote what based on the patterns of writing. So that alone, there’s a lot that we can watch out for there.
0:52:21 Paul Matzko: This is interesting. As you were talking about the medical example, I was put in mind of the forms of learning that doctors utilise when they’re trying to diagnose you, which are variations of everything we’ve been discussing. Sometimes they use a decision tree process. Do you have this symptom? Yes, then go to this branch. Do you have this symptom? No, okay, then it goes to this branch, and you just follow that little decision tree down and, bam, here’s your diagnosis. Or they do a lot of pattern recognition, the chihuahua‐or‐a‐muffin type stuff. I have looked at tens of thousands of heart… What do you call it? Echocardiograms or MRIs, and I’ve looked at so many that I can very quickly identify something irregular, a tumor or a growth or a mass or something. Those are things that you would expect machines to do someday, even if they are somewhat still hit or miss now. I know there are some projects that use machine learning to look at some of these medical scans, I forget which type, and try to outperform doctors, and it’s been mixed results as I recall. But you would think those are things machines can do.
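The "follow the branches" process Paul describes is literally how a hand-built decision tree works: a chain of yes/no questions ending in an answer. A minimal sketch, with symptoms and outcomes invented purely for illustration and not medical advice:

```python
# A hand-built decision tree is just nested if/else questions.
# Each question splits the cases into branches until a leaf (an answer)
# is reached. Machine learning versions learn these splits from data.

def diagnose(symptoms):
    """Walk a tiny hypothetical decision tree over a set of symptom strings."""
    if "fever" in symptoms:
        if "cough" in symptoms:
            return "possible respiratory infection"
        return "possible viral infection"
    if "rash" in symptoms:
        return "possible allergic reaction"
    return "no match in this tree"

print(diagnose({"fever", "cough"}))
```

The expert-system version has the questions written by hand, as here; the machine learning version chooses the questions and branch order automatically from labelled examples, which is what Max contrasted with "real" learning systems earlier.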
0:53:37 Max Sklar: Yeah, if I remember it correctly, the radiology stuff was doing pretty well, and some of the cancer detection algorithms doing image recognition were doing pretty well, but I’d have to look at the specific examples. One of the fun episodes that we did was one where we saw an article on smart toilets, which was… I like the idea, but the way that they described how this particular solution worked sounded totally impractical. But I like the idea that you have something in your house that tells you, “Hey, something’s wrong with you. Go get it checked out,” and then you don’t even have to think about it.
0:54:15 Paul Matzko: Well, they’re using that for… I forget which university it was, I just saw an article, I want to say Syracuse, but some university is checking the sewer mains for evidence of COVID. And so what they can do is, rather than having to very expensively test every student multiple times a week, they test the sewer mains of every building on campus, every dorm in particular, to see if they’re getting evidence from the sewage that students have COVID in their system. And where they detect it, then they can shut down that dorm and test just those students multiple times for the next two weeks or something. So I thought that’s very clever, ’cause essentially what we’re talking about is that rather than testing it at the toilet, you’re testing it at the sewer junction pipe. But we should do more of that. That’s cool.
0:55:06 Max Sklar: Yeah, yeah. Now, over the last six months, I don’t know what people are thinking in terms of beating this virus. It doesn’t seem to be science‐based at all or logic‐based at all. It’s almost like just do something. So I don’t know how to get… There has to be a… I don’t know if we’re ever gonna get a social change where people are gonna be more data‐driven in the way they run their own lives, and in the way that they run policy. It almost sounds like too much to ask for after my most recent experiences.
0:55:51 Paul Matzko: On the topic of COVID in New York City, so we did an episode, a couple episodes back about de‐urbanisation, and New York City was the example we talked about, ’cause the number I saw was that in two weeks in March, the population in New York City decreased by 5% as people fled. And lots of like Wall Street financial industry type folks who, yeah, they went out to the summer home in New Hampshire or Maine or whatever, and the like. And of course, there’s lots of downstream effects from that. They’re at the top of the economic pyramid, and so if they spend less on… They’re not going into the office as much anymore. That means they’re not going out to eat for lunch as much anymore.
0:56:34 Max Sklar: We have like a 200‐person office at Foursquare, and I go and I’m the only one there.
0:56:39 Paul Matzko: Wow, wow. So you’ve seen it. What do you think about this whole de‐urbanisation argument? There’s been backlash against it, and Jerry Seinfeld was defending the city, saying New York is always gonna be New York and whatnot. As someone in New York City…
0:56:49 Max Sklar: Right, versus James Altucher wrote the original article.
0:56:54 Paul Matzko: Yeah, yeah, James Altucher. So as someone on the ground, what’s your impression of this conversation?
0:56:58 Max Sklar: Yeah, so it’s interesting ’cause we do prediction panels on the Local Maximum every year or so. I have a tech retreat, which just turned out to be a way to get away with my friends, and I have them do predictions. And so, we were talking about this stuff way back in 2018 where it was like, “Okay, we’re going to have, hopefully, improved transportation.” Some people say that transportation hasn’t improved very much over the previous decade, but if we do ever get to something, maybe not true self‐driving cars, but something close to it, then certainly the outer suburbs, there are various names for them, become more appealing. And if there’s more access to broadband, better software, better hardware to work remotely, okay, that makes those areas more appealing. But I, at the time, saw this as a slow process, and it…
0:58:07 Max Sklar: I’d still feel like there is a good reason to have a city. I mean, maybe not right now. I feel like right now in New York, you get all the downsides without much of the upsides. But the upside used to be, you could meet with all sorts of different interesting people every day, and there’s a lot more that you do in person than over Zoom. Over Zoom, you don’t really have the in‐depth conversations that you want to have, you get distracted by stuff in your house and you don’t have the accidental cross‐talk that you would otherwise get. And I think that there are some companies that are going to be better remote, or at least remote in smaller teams, and then some companies will find that they’re better off with everyone being in the same place. That includes a lot of creative companies, or a lot of companies where sharing ideas very rapidly and very haphazardly is actually better. And so there’ll sort of be a great switch where there’ll be people coming in and people going out. If the cities handle this correctly, they can sort of rebrand and say, “Okay, we’re losing some companies here, but maybe we can gain X, Y and Z from the resources that are now available, if we make these changes.” I’m not very optimistic about the leadership in New York City right now, but things change, so maybe we’ll be able to change that soon.
0:59:44 Paul Matzko: Yeah, now, I did find it fascinating. I just saw another visualisation of a data set. The machine learning and tracking of people’s behaviour through the pandemic has given us information that would have been unthinkable even just a few years ago. I’m talking here about how early on in the virus we were able to assess changing consumer behaviour by doing things like tracking phone locations and restaurant check‐in data. I think OpenTable published some data early on, where you could see that people responded to the virus before they were told they had to.
1:00:29 Max Sklar: Sure, I looked at some of that data. I actually pulled some Foursquare data as well.
1:00:34 Paul Matzko: Yeah, maybe actually, I saw the update from you, I’m not sure, and we’ve now seen that even when they relax the shutdowns, it doesn’t track with the official stuff. Basically, consumers shut down before they were told to shut down, and sometimes they’ve actually relaxed their behaviour before they were told they could. And you can really see that in the data. Did anything about that surprise you as someone working in the field?
1:00:58 Max Sklar: Not really. When I pulled some data, I found pretty much what I would have expected. It took a few weeks to get people to really stop moving, which is… I don’t think that’s people making rational decisions. I think that was just fear and propaganda spreading. I’m not saying it was irrational to be inside in New York in March and April, and then we slowly start coming back, but some of that’s seasonal, and yeah, a lot of people are still not here. If you look at the data, there’s nothing surprising about what I’ve seen. I think the open question is, how do you get people to come back if you decide it’s time to come back? I feel like at some point, you just have to let people make their own decisions, and then maybe there’ll be a small group of people, like anything, where there are early adopters who come into the city and say, “Hey, the city as it is in September 2020, this is good, I could make good use of this.” For whatever reason. I mean, think about it, it’s very easy to rent an apartment here right now. Things are pretty much empty. You won’t really have the bar and restaurant scene outside for another couple of months, but I don’t know, there’s a lot of empty space, empty in terms of not a lot of people, but a lot of the city, a lot of stuff, is still here as well.
1:02:44 Paul Matzko: Right, right, yeah.
1:02:45 Max Sklar: I don’t know, I’d find the people who could make good use of that is… I’m just, this is stream of consciousness, but… I’m just…
1:02:51 Paul Matzko: Yeah, yeah, there’s people that are gonna see that as a moment of opportunity, yeah.
1:02:55 Max Sklar: Yeah, I hope so. I’m sure there are people who are… I’m sick of it. I wanna see people again.
1:03:03 Paul Matzko: Yeah, yeah, I’m sure. You don’t move to New York City voluntarily if you’re not at some level social, if you don’t like people around. I guess you might have to, but that’s why people move to Maine. They don’t move to Maine for the social life, they move to New York City for that.
1:03:23 Max Sklar: Yeah, I feel like if I lived in Maine, it would be less… It wouldn’t be sad not to see as many people because you’d be out in nature, and you’d be… And there’d be all sorts of things you can do. It gets really sad in the city when you can’t see anyone. I mean I had an apartment in Brooklyn that I moved out of, I was there for six years, and it was a nice studio apartment, but I didn’t realise I’d have to basically be in that one room for a month when I moved in there.
1:04:01 Paul Matzko: Yeah, there was a great article, I forget where now, where they talked about how architects are starting to rethink a lot of stuff. I mean, obviously, the open office plan is an easy one, now you want people to be in cubicles so they’re safe from contagion spread, so they’re starting to rethink stuff like that. But the other one was houses. The open concept floor plan has been hot for decades now, but if everyone’s at home all the time, the kids are over there doing remote learning, both parents are working remotely trying to do Zoom conferences, and you’ve got an open concept set‐up, very quickly the noise becomes a factor. People just drive each other crazy. Yeah, being stuck in one studio room is rough, month after month.
1:04:56 Max Sklar: Yeah. I’m trying to have a set‐up in my life eventually where I have enough space to do everything, office space in the house, everything, but it’s tough to have the means to do that. Not everyone will be able to do that. Yeah. That’s interesting. I think also, I like the compromise, which I wanted before this happened, where you can work from home one or two days a week. That way, being set up to work from home is expected, and people are not gonna come in sick. I remember one situation in particular in the last decade where I felt pressured to come into a company when I was sick, and I look back at that now, I’m like, “Oh, that’s so ridiculous.”
1:05:46 Paul Matzko: Yeah, hopefully those norms will change and even it’d be great even after we don’t all have to wear masks, if people who are sick and symptomatic wear masks, that would be, like in Japan, people who are coughing feel obligated to wear a mask just socially. If we can keep some little bits and pieces of that, that would probably be salutary.
1:06:05 Max Sklar: Yeah, in that case, if everyone’s set up to work from home, even if it’s not as good…
1:06:11 Paul Matzko: Five days a week. Yeah.
1:06:13 Max Sklar: Yeah, yeah, yeah, but then if you get sick, okay, no problem. I’ll just work from home every day this week. It’s alright.
1:06:20 Paul Matzko: That flexibility would be good. Well, before I let you go, Max, are there any new projects you’d like to plug? Obviously, our listeners should listen to the Local Maximum podcast when they get a chance. Is there anything else you’d like to mention?
1:06:35 Max Sklar: Yeah. Obviously, Marsbot for AirPods, I don’t know when that’s gonna drop. We’ve been trying to launch it for six months now, but I think we’ll get that out soon. That’s something to play around with, but definitely check out the Local Maximum on your podcast app, and also the website is localmaxradio.com. I’ve been adding a lot of new stuff to the website, including a few articles under questions like: what is machine learning? What is the Local Maximum? If you’re interested in learning about that stuff, that’s a good place to go. But there’s a lot of stuff that you can search for, whether it’s guests or topics, and I cover a wide range. Sometimes it’s mathematics. Sometimes it’s low‐level technology. Sometimes it’s just talking about emerging technology with people. Sometimes I’ll just talk about current events and things like that. If you enjoyed hearing from me today, if you wanna hear a little more, check out the Local Maximum. Go to localmaxradio.com.
1:07:35 Paul Matzko: That’s great. Thank you so much for your time Max.
1:07:37 Max Sklar: Thank you for having me. This has been a really great conversation and I’ve done a lot of these and I feel like we covered a lot of ground and we got a lot of good things out there today.
1:07:54 Paul Matzko: We’ve covered a lot of territory today, and it’s left me about as bullish about the possibility of using bots for good instead of for ill as I am bearish about the future of legacy cities like New York. If you’re interested in that topic, do be sure to go back two or three episodes and find our episode on de‐urbanization with economist Peter Van Doren for a much more in‐depth take. As always, until next time, be well. This episode of Building Tomorrow was produced by Landry Ayres for libertarianism.org. If you’d like to learn more about libertarianism, check out our online encyclopedia or subscribe to one of our half dozen podcasts.