
Wicked Good Development Episode 16: Ted Neward's Philosophy 101

Wicked Good Development is dedicated to the future of open source. It's a space to learn about the latest in the developer community and talk shop with open source software innovators and experts in the industry.

What does philosophy have to do with software development? More than you might think! In this episode, hosts Kadi and Omar sit down with technologist Ted Neward and developer advocate Steve Poole to discuss how philosophy is at the heart of everything. Ted provides great insight into how his background has influenced his outlook on software development, and why developers should be asking themselves the hard questions.

As Ted puts it, "It is the hard questions that are usually the good ones that will lead you to a positive outcome."

Listen to the episode


 

Wicked Good Development is available wherever you find your podcasts. Visit our page on Spotify's anchor.fm.

Show notes

Hosts

  • Kadi Grigg

  • Omar Torres

Panelists

  • Ted Neward, Technologist, Philosopher, and Co-Founder of Solidify. Twitter: @TedNeward

  • Steve Poole, Developer Advocate. Twitter: @spoole167

Transcript

 

Kadi Grigg 00:10
Hi, my name's Kadi Grigg, and welcome to another episode of Wicked Good Development, where we talk shop with OSS innovators and experts in the industry, and dig into what's really happening in the developer community.


Omar Torres 00:20
Hola. My name is Omar, and I'll be your cohost today. We hope to bring you up to speed on the latest in the community through our conversations.


Kadi Grigg 00:28
In today's episode, we're featuring Ted Neward. Ted, before we jump into the conversation, and Steve, we'll get to you as well, can you give us a little bit of background on who you are and why you're here today?


Ted Neward 00:41
Well, my name's Ted Neward. I call myself a computational philosopher; that's my sort of self-given title. I've been in the industry now for 30 years, been programming since 1978 with an Apple II Plus with 48K of RAM and not one, but two floppy drives. That was the hot setup back then. I've spent a lot of time as an architect, an instructor, a CTO of startups. I've worked at companies as large as Rocket Mortgage, and all of it I've done with a degree in international relations.


Kadi Grigg 01:22
Just shows the world you can do anything with that degree.


Ted Neward 01:24
You know, it's literally the degree that takes pretty much all the liberal arts and smashes them together, with the possible exception of art history; we didn't really study much in that area. But just about every other liberal arts degree: economics, sociology, philosophy, even a tiny bit of archaeology, political science, history, all of that kind of mashed together. And I mean, I took some fun classes in college, including one that was the political science of national conflict. The final in that class consisted of one question, two words: why war? I had fun in college. I had so much fun in college.


Kadi Grigg 02:16
The possible responses are endless there. I can only imagine.


Omar Torres 02:20
Steve, I know we've had you on before, but can you introduce yourself again and maybe give a little spiel as to why you're here today?


Steve Poole 02:29
Yeah. Okay. So my name is Steve Poole. I'm obviously the spare wheel here. I'm a DevRel person at Sonatype. I've been doing Java and things like that for, well, not quite as long as Ted, I think, but pretty close. And what can I say? I just like helping people be successful. That's always been the thing I care about: whatever it takes, however we can help developers move forward and be just a bit more successful. For me, the enjoyment has always come when you help somebody move forward, help them solve a problem, learn something new. You know, that's the fun. And I did that working on products, and now I do that in DevRel, and it's just the best fun ever. Plus you get to go to conferences and things like that.


Ted Neward 03:16
You know, the funny thing, Steve, is that DevRel in many ways is a lot like international relations, because it's a smorgasbord of all these different things, right? It's a little engineering, it's a little sales, it's a little marketing, it's a little education; all these different things brought together into a collective whole. That's part of the reason I like hanging out with developer advocates: you often get a very broad and rich collection of conversations, but also of experience. I mean, your average DevRel team is made up of a much more deeply diverse collection of people than what you get with your average engineering team.


Steve Poole 04:05
Yeah. And we're all opinionated and vocal, because that's part of the job as well.


Ted Neward 04:10
That's the best part. Being all opinionated and stuff.


Steve Poole 04:14
Yes, yes. I mean, on the last podcast we had, which was about what was happening at Devoxx Poland, and which you can go watch, there were some opinions expressed, and fun stuff, because we enjoy this. I mean, it's a hard job, but at the end of the day, it's such a fun thing to do.


Ted Neward 04:34
Yeah. It is.


Steve Poole 04:35
We can't recommend it enough. And by the way, we're hiring. Got that bit out of the way.


Kadi Grigg 04:39
So let's dive in. Ted, you kind of walked us through some of this in the beginning, when you were explaining a little bit of your background and your interest in this. But I want to understand: how did you transition from having a degree in international relations to all of a sudden having a computer science background, giving all these keynotes about different things across the globe? That's a major shift. So how did that come about?


Ted Neward 05:05
You know, Kadi, I haven't had that question in probably 20 years; I used to get it all the time in interviews, right? It was not the easiest thing in the world to start, because when I would go into interviews someplace, they would look at the resume, they would see "graduated 1995, Bachelor of Arts in International Relations," and they would say, "Okay, why do you want a job as a programmer?" In many respects, what they were asking is: how can you prove to us that you're a developer when, in fact, you have no academic credentials to speak of? While I was in college, I took a couple of CS courses, mostly just crashed them, because I wanted to see... When I was in college, about a year and a half, two years before I graduated, I and a buddy were sitting down looking through the paper, you know, the want ads, looking for jobs.


Ted Neward 06:02
And, you know, one roommate was actually the son of an IBM research fellow, so his career path was pretty clear: he was going to be a programmer, and he just wanted to get out of school with the degree that required the fewest credits, which, as it turned out, was comparative literature. And his comment was, "Look, I've talked with my dad's coworkers at IBM, and at the end of the day, people in the industry don't care what your degree is in as long as you have a degree." Now, this was the 1993-94 timeframe, right? I'm not saying that's true today. And so he got his comp lit degree and left, and I think went into educational software, games and whatnot. But while we were in school, the other thing, as I said, is that people want to see that you can write code.


Ted Neward 06:53
And so the three of us were actually working on a board game, a war game, actually; the game was called Supremacy. For anybody who remembers this from the nineties (Steve's nodding vigorously), it's basically Risk plus economics and nuclear weapons. And it's for up to six players. We had a hell of a time trying to get six people to show up at the apartment all at the same time; as I mentioned, we were in college. And so we had this idea that we would build this game, and then we would run like five telephone lines into the apartment and everybody would be able to dial in. Because again, remember, this was the '93, '94 timeframe. We didn't have this ubiquitous WiFi that all you Gen Zers enjoy... back in my day.


Kadi Grigg 07:43
Is this a LAN party?


Ted Neward 07:45
No, this was even before LAN parties.


Ted Neward 07:48
This is... Steve's gonna love this. This was a BBS.


Steve Poole 07:51
Yes.


Ted Neward 07:52
You know what that is, Kadi? Do you even know? You have no idea.


Kadi Grigg 07:55
I have no idea. No.


Ted Neward 07:57
Bulletin board system. Go Google it, look it up. It's in your history textbook somewhere. Anyway, we're building this game, and it was Windows programming. So I was helping build this, and when I took a couple of CS classes, I realized that I knew more about programming than the people who were graduating with the CS degree, because I was doing it for fun, and they were doing just the assignments they needed to do to get the degree, and then they would graduate. And that was a really interesting revelation to me. The other thing, as I said, is that the roommate and I were looking for jobs, and he saw a position looking for a Windows developer to work on medical billing software, you know, for doctors, rates and so forth.


Ted Neward 08:43
And he's like, "Oh dude, you should apply for this." I'm like, "Come on, man, they're looking for professional programmers." He's like, "Come on, what have you got to lose?" And I'm like, "I don't know." And then, guys, and Steve, Omar, you'll know what I'm talking about when I say this: the heinous thing that he did next, he double dog dared me to interview. Now, Kadi, I don't know if this is true amongst college-age women, but amongst college-age men, a double dog dare, I mean, this is threatening everything that you hold dear. Your manhood is at stake in so many ways. And so I interviewed, and out of like 40 applicants, I was one of like three finalists, which just blew me away. It absolutely stunned me that I was being seriously considered for this position.


Ted Neward 09:42
I ended up not getting it, because I was not a professional developer, and they could spot that fairly quickly. But the fact that I got that far on just what I had self-taught over the previous couple of years in C++ and whatnot... and again, just as a contextual note, back then C++ was the hot thing. It was '93; it was the sexy thing, very much like Java in '96, very much like C# in 2002, 2003. The fact that I knew anything about it was kind of intriguing to folks. And it really brought home the idea that, at that time, if you could write code, people wanted to hire you, period, full stop, regardless of what your background was in.


Ted Neward 10:33
And that really was what set me on this entirely different arc. Because prior to that, I was seriously contemplating a career in either the intelligence space, working for the CIA or the DIA as an analyst, or the ambassadorial corps. I mean, think about that, listening audience: Ted Neward sitting there sipping mai tais with the Prime Minister of Australia. We'd have been at war in like two weeks. You think I'd be any better than Boris? Come on, man, give me your vote. We'll get to that later.


Omar Torres 11:12
Ted, could you talk about your evolution from the story you just told to what you said earlier, I think maybe before we even started the call, about you being a computational philosopher? What does that mean to you? Can you say a little more about that?


Ted Neward 11:26
Philosophy is a really interesting space that I have, in the last decade, decade and a half, really rediscovered, or discovered, I should say. Because back in college, philosophy was just really annoying, right? Philosophers ask all these really annoying questions, like "prove you exist without using any of your senses," right? Questions like that would just drive people nuts. And that's really what we knew of philosophers. That, and they were old Greek dudes running around in togas doing bizarre things, right? Trying to figure out, if you wanted to move the earth, how long of a lever you would need. Just really bizarre things. And what I've discovered in the three decades since is that philosophy is really at the root of everything. And I can prove it to you, because it's called the Wikipedia philosophy game. Have you ever heard of it?


Kadi Grigg 12:32
No, but I'm intrigued.


Ted Neward 12:33
Take any subject, anything, right? Any subject, any person, any topic, anything that's in Wikipedia. Go to that page, click on the first link. Then from that page, click on the first link. Then from that page, click on the first link. And if you do this often enough, eventually you will end up at Philosophy. Because if we can prove it, it became a science, and if we can't prove it, it became a religion. Philosophy is really at the heart of everything. Aristotle, remember, was the first natural philosopher, and much of what he studied turned into the sciences as we know them. And philosophy, in many cases... I mean, the word means love of learning, love of knowledge, right? Depending on how you want to translate the Greek. And as I got deeper into development, particularly as people started asking me for higher and higher level opinions, particularly when we would get down to the subject of doing keynotes and whatnot, it really begins to become more of a philosophical inquiry, right?
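
An editorial aside: the "game" Ted describes is a simple algorithm, so here's a minimal, hedged sketch of it in Python. It assumes the requests package, and it uses a crude regex rather than the game's full rules (which also skip parenthesized and italicized links), starting its search at the first paragraph tag so it roughly begins at the article prose.

    # A rough sketch of the Wikipedia philosophy game: follow the first
    # article link on each page and see if you land on Philosophy.
    # Deliberately simplified; real pages need smarter parsing.
    import re
    import requests

    WIKI = "https://en.wikipedia.org"

    def first_article_link(title: str) -> str | None:
        html = requests.get(f"{WIKI}/wiki/{title}", timeout=10).text
        # Crude heuristic: start at the first paragraph tag, roughly where
        # the article prose begins, then take the first wiki link that
        # isn't a special page (no ':' in the title).
        prose = re.search(r"<p[ >]", html)
        start = prose.start() if prose else 0
        match = re.search(r'href="/wiki/([^"#:]+)"', html[start:])
        return match.group(1) if match else None

    def play(start_title: str, max_hops: int = 50) -> None:
        title = start_title
        for hop in range(max_hops):
            print(f"{hop:2d}: {title}")
            if title == "Philosophy":
                print("Reached Philosophy!")
                return
            next_title = first_article_link(title)
            if next_title is None:
                break
            title = next_title
        print("Gave up.")

    play("Computer_science")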


Ted Neward 13:42
And many of the tools that I used back in college to study things like why nations go to war involved creating models, not models in the sense of software models as we think of them, but models of thinking, right? When we start thinking about object-oriented programming, and then we start thinking about functional programming, for example, those are really just two models of how to think about code, right? And so if you want to analyze the models, you are essentially now engaging in the act of philosophy. You're trying to ask certain questions: what does functional programming do well, and what does object-oriented programming do poorly, or vice versa? Because functional programming is not the be-all end-all that some of the functional folks want it to be, particularly if you're trying to deal with mutable state, and frankly, the world is mutable.


Ted Neward 14:34
I mean, we can get into some really interesting debates about data, the notion of data. Does data ever change, or is it simply the tuple we're dealing with? The fact that the four of us are on this call at this moment in time will be immutable for that moment in time, but the next moment in time that may actually be a different set of data. Does that mean that the data changed, or that we're simply constantly aggregating new sets of data? And if that's the case, then this really gets to the whole notion of event streaming, right? That databases really shouldn't be mutable, but should in fact capture the state of the world at any given moment in time, right? And that we should never delete data, because at that time the data was correct.


Ted Neward 15:22
But then we also run into scenarios where what we discover is that our understanding of the data was incorrect. Because remember, the Greeks used to have this belief that there were certain things that were the smallest you could get, right? Those were what they called atoms; that's why we use that term in modern science. And that everything was made up of four elements: earth, air, fire, and water. Right? Well, okay, today our knowledge is different. It's better. So again, if we're trying to model knowledge in a database, does that mean we want to go back and change that data? Or do we want to represent what we incorrectly understood back then? These are all philosophical questions, right? This is all a notion of: do we want our database to represent truth, or do we want our database to represent our understanding of the truth? And we are right back to wearing togas out there in Athens, arguing with one another, or hiring sophists to do it for us, right? That's where we get the word sophistry. They were literally the first lawyers: you could pay somebody to go and argue your point in the square, which was the Greek form of democracy. Those were the sophists, right?
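
An editorial aside: Ted's point about never deleting data, and recording corrections instead of editing history, is essentially event sourcing. Here is a minimal sketch of the idea; the names are illustrative, not from any particular framework.

    # Append-only event log: a correction is itself a new event, so the
    # log still records what we believed (incorrectly) at the time.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class Event:
        recorded_at: datetime
        fact: str                       # what we believed
        supersedes: str | None = None   # earlier fact this corrects

    class EventLog:
        def __init__(self) -> None:
            self._events: list[Event] = []

        def append(self, fact: str, supersedes: str | None = None) -> None:
            # Never update or delete; only append.
            self._events.append(Event(datetime.now(timezone.utc), fact, supersedes))

        def current_beliefs(self) -> list[str]:
            superseded = {e.supersedes for e in self._events if e.supersedes}
            return [e.fact for e in self._events if e.fact not in superseded]

        def full_history(self) -> list[Event]:
            return list(self._events)

    log = EventLog()
    log.append("Matter is made of earth, air, fire, and water")
    log.append("Matter is made of atoms",
               supersedes="Matter is made of earth, air, fire, and water")

    print(log.current_beliefs())    # only the corrected view
    print(len(log.full_history()))  # but both events remain on record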


Steve Poole 16:34
It sounds like you've just labeled blockchain as a philosophy.


Ted Neward 16:38
Don't get me started on blockchain, Steve, just don't, 'cause you'll have to...


Steve Poole 16:45
But then everything…


Ted Neward 16:46
Well, see, the thing is, philosophically, right, blockchain assumes that it can be the source of truth, right? And this gets into that interesting philosophical notion of: is there such a thing as objective truth, right? A college buddy of mine and I and our medieval studies professor were literally the only three in the medieval studies lab one day, because nobody else showed up, and we three spent the entire time, plus another hour at the coffee house on campus, arguing over whether or not there was a notion of universal truth. Is there in fact a notion of universal truth, or is all truth merely my and your perception of it? And therein lie some of the roots of conflict, right? I mean, we can start to look at some of the current scenarios, but even questions of physics, right?


Ted Neward 17:46
Is the speed of light a constant? Einstein's later theories showed that light was, in fact, potentially relative in certain scenarios, that parallel lines could in fact end up intersecting. And when you can't even trust physics to be true 100% of the time, can we really argue that anything has universal truth? This is what philosophers do, right? And that's why, if we can prove it, it becomes a science; if we can't, it becomes a religion, right? I mean, the flat earthers right now are a religion, whether they want to admit it or not. If they could prove it... that's the thing, right? Tell a flat earther to find the edge of the planet and look down. Until they can do that, it remains a religion. Computational philosophy, well, the other half of this is: computational philosophy is how we apply philosophy to computing, to programming, to all the things that the four of us make our money around. And that's the heart of the term.


Kadi Grigg 18:55
We were speaking with a senior security researcher yesterday, Ax Sharma, who was reporting about protestware. That's something new, I think, in the development community, and something I personally had never seen or heard of until yesterday, and it kind of boggled my brain a little bit: developers are now almost self-sabotaging, putting up separate messages about, you know, "get better news information," et cetera. What are your thoughts on these types of activities?


Ted Neward 19:22
So, A: I'm not familiar with the term protestware either, so if I start making assumptions that are incorrect based on the conversation you had yesterday, stop me, just so we can stay as accurate as possible. But part of the thing, and this is where we're going to dive a little bit into some of the economics: during the Great Resignation a year or two ago, and even before then, there were efforts around unionization at various companies, right? We're seeing unions in many respects: Starbucks is looking at union efforts, Trader Joe's is looking at union efforts, and Google has been staring down some union efforts. And this is something the programming space has talked about for quite a while, right? Originally the conversation was that we should have some sort of governing body, so that not just anybody can call themselves a programmer, but only people who are in fact trained as engineers, right?


Ted Neward 20:25
In many other spaces, I can't just put on a white lab coat and call myself a doctor. There are certain exams and certain things that have to be proven in order for me to make that assertion, whereas anybody today can call themselves a software engineer, and we should change that. The unionizing effort now, though, is much more about individual developers and programmers asserting their own power in many respects. And so, for example, there was an effort at Microsoft to resist the fact that some Microsoft software was being used to help INS here in the US. Many people on the left side of the political spectrum despise INS, in many cases because they don't really seem to have a need to follow our concept of civil rights. They can detain you for any reason. They can deny you your one phone call.


Ted Neward 21:21
If you live within 50 miles of the border, any border here in the US, the INS can absolutely set those protections aside. And so many people at Microsoft, many people at Google and some other companies were saying, "No, man, that's crap, and I will not, as an individual, allow my work to support a cause that I do not support." Now, that in and of itself raises some really interesting questions, because if I'm paid for the work, do I still have ownership? Do I still have any sort of association with the work? If I do, then that raises some interesting intellectual property concerns, especially around things like legal liability. If I do not, then technically the only option I really have as a worker is to quit and not work for that company anymore. Which, to some degree, argues partly in favor of the union efforts, because that was the same argument made 150 years ago: well, if I don't want to lose a finger in the machine, my only option is to quit.


Ted Neward 22:31
Well, unions were designed to protect people's physical safety and allow them to have a comfortable place in which to work and earn a living, and so on and so forth. But now we can make this argument, and I'm squinting pretty hard here, right? We've spent a lot of years recently talking about mental health, the importance of mental health, and work-life balance. And if working for a company is going to lop off pieces of my soul, and if we value mental health, doesn't that mean we should be thinking about these same kinds of efforts? So does that mean that Microsoft has to check in with every single one of their developers every time they want to sell software to somebody? I mean, if I'm a Democrat and Microsoft wants to sell Microsoft Word to the Republicans, do I get to veto that sale?


Ted Neward 23:30
There's a really interesting balance of power that goes on around some of this. And I remember, not too long ago, when we really thought the internet was going to be this great democratizing force, with social media and the Arab Spring and all of that stuff. And then we discovered that social media was actually being weaponized in several ways. So social media went from "oh my God, this is the most amazing thing ever, it's going to democratize the universe" to "oh my God, social media is the weapon of tyrants everywhere." And protestware... I mean, this is one of the interesting and frustrating things about philosophers: I didn't answer your question, because I don't know the answer to that question, because the questions in some cases are more interesting than the answers, right? I think the really interesting question is: how do each of us feel about protestware? And what does that say about our relationship to our employers, to our fellow man, to our fellow developers, et cetera? Because the answers to those questions, in some cases, are going to be the ones that decide whether the industry is shifting fundamentally, or whether this is just yet another interesting footnote in the history of computer science.


Steve Poole 24:53
Yeah. So I think the particular thing we're talking about here, protestware, the specifics are: you're using a piece of code, a dependency, and it's been running forever, and one day it doesn't do what you thought anymore. And it does something not necessarily benign, but something like put up a protest message, right? Or in some cases, specifically to do with Ukraine, it was about trying to give people in Russia access to information they weren't normally able to access. So it's not malware, but it's unexpected and possibly unwanted side effects. And the interesting thing is how this gets at a common understanding that we've all had about open source, which was that it was for the good of everybody. So whenever you have an instance where somebody changes their code for bad reasons, like malware, or actually in this case for protestware, suddenly you have to check your assumptions about how and why you're using open source. Because that idea that it was there forever and you could use it forever, always for good: well, no, not so much now.


Ted Neward 26:11
Well, see, here's the funny thing about open source, Steve. There are so many different threads firing inside my head right now; they're battling for control of the mutex that is my mouth. A: open source. We don't really think about open source the way that the original advocates of open source thought about it, because realistically speaking, if you don't like the protest, you can fork a copy of the open source and you can now support it yourself, right? That was always the model, right? That's why it would be free forever: because I could see the code, I could get my own copy of the code. That's not the way that most of us interact with open source, though. Most of us treat open source as a product, not as an open source project.


Ted Neward 27:05
And now this starts to get into some interesting assumptions that are being turned on their head when I say that. If I say open source is a product, there are certain things we assume about a product. We do not expect Microsoft Word to flash up a "Slava Ukraini" splash screen when I start Microsoft Word. But if they did, how many of us would be upset, and how many of us would just say, "okay, whatever," and move on? Do they have a right to put that message up? There are a lot of people who will argue they absolutely do not, and that would be an interesting one to try in the courts. But the other thing is, much of the open source space... what a lot of people really don't think about is something that economists frequently do.


Ted Neward 27:58
It used to be that five years ago, if I said the words "supply chain," most people would have no idea what I was referring to. Now, of course, worldwide, with the pandemic disruption, the shortage of labor, et cetera, everybody's getting comfortable with it. But open source is part of the supply chain of your software. And one of the things, I mentioned McDonald's earlier, one of the things that made McDonald's successful as a global enterprise is that they took control of their supply chain. They make their own buns. They raise their own cows. They farm their own lettuce and tomatoes and so forth. When you drive out of McDonald's with that machine-assembled burger, every aspect of what you're about to eat, including the cardboard packaging, came from suppliers that McDonald's owns, right?


Ted Neward 28:57
They own their entire supply chain. And it's funny, because just yesterday a tweet came across, a comic: two people in hard hats, obviously working on some construction site, and a picture of a screwdriver with a $2 price tag on it. And the one hard hat is looking at the other and saying, "Really? You bought the screwdriver instead of making it yourself?" Because we do that all the time as developers, you know: why do you insist on building everything? Well, there is an argument to be made here that if you build it, it's part of your supply chain, and therefore you have full control over it, and you would never be a victim of the protestware. The other side of this is: if I depend on a particular package and they decide to put up protestware, and especially if they decide to do it in a public manner, am I legally liable?


Ted Neward 29:51
Forget "Slava Ukraini" for a second. Let's suppose they put up a neo-Nazi flag, right? Am I legally liable for displaying it? Because anytime we talk about something that could be used for good, it can also be used for bad; we have to acknowledge that. Am I legally liable now for antisemitic messaging from my software? Yeah, Kadi's like, "oh geez." These are the kinds of things, and this is partly why so many companies... I mean, developers get so frustrated all the time: why can't I just use an open source package? Because that's an additional element of the supply chain that we need to make sure we understand, including the legal liability implications of using it. Because that is a big deal. That is a very, very big deal.
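
An editorial aside: one concrete, if partial, defense against a dependency that silently changes, whether for malware or protestware, is to pin every third-party artifact to a known hash and fail loudly on any mismatch. A small stdlib-only sketch; the artifact name and digest below are hypothetical placeholders, not real values.

    # Pin each vendored artifact to an expected SHA-256 digest, so a
    # dependency that changes out from under you fails at build time.
    import hashlib
    from pathlib import Path

    # Hypothetical lockfile: artifact name -> expected SHA-256 hex digest.
    PINNED = {
        "left-pad-1.3.0.tgz": "3cf3b4f8c2a0e7a3a3f0b7f1c9d8e6a5b4c3d2e1f0a9b8c7d6e5f4a3b2c1d0e9",
    }

    def verify(path: Path) -> None:
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        expected = PINNED.get(path.name)
        if expected is None:
            raise RuntimeError(f"{path.name}: not in the lockfile, refusing to use it")
        if digest != expected:
            raise RuntimeError(f"{path.name}: hash changed! expected {expected}, got {digest}")
        print(f"{path.name}: OK")

    # verify(Path("vendor/left-pad-1.3.0.tgz"))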


Steve Poole 30:40
Yeah, I mean, you touched there on something that shows that the line between protestware and...


Ted Neward 30:48
Malware, it's a very, very thin line. It has more to do with the intent of the person writing the ware than it does with the actual ware itself, right? Because again, and personally my politics are left of center, but this is something that most NRA gun rights activists will tell you: guns don't kill people, people kill people. That's true, because the gun is just a tool. And believe me, I definitely want to see more guns off the streets of the United States, and I will probably lose a few listeners for saying that. But the point is that anything we build, anything we do... if I create a knife, I can use it to cut my steak during dinner, and I can use it to stab somebody.


Ted Neward 31:37
Anything can really be used for both benign and malicious purposes. And so, in many respects, when we talk about viruses: another word for virus is self-modifying code, and another word for self-modifying code is Lisp. This is a tool that can be used for both good and bad things. I mean, another name for a virus is a production patch: we change the running software and keep it going. And this is part of the reason why, personally, I think as an industry we need to stop spending so much time churning out all these candidates who know how to do big-O notation around algorithms and whatnot, and start churning out people who have a deeper background in philosophy. My personal preference: the United States would pass a law that philosophy is taught in kindergarten, because children are the world's most natural philosophers. If you've ever been around a five-year-old, you know it, because they just keep saying why, why, why. And that is exactly what philosophers do. It's why they're so annoying.


Kadi Grigg 32:48
I told my sister I wouldn't pick up her kid anymore from daycare because of that.


Ted Neward 32:52
But Kadi, that's the frustrating thing: we shut that down, right? As parents, we shut that down because we don't have the answers to those questions. And we think as parents that if we don't have the answers, we're somehow failing our children.


Kadi Grigg 33:07
Yeah, that's true. And the thing is, if you admit you don't know, you're not far off. I think it's okay to admit you don't know, right?


Ted Neward 33:15
And that's the thing: admitting that we don't know is absolutely the first step to knowledge. "I only know that I know nothing," as Socrates said.


Steve Poole 33:26
Yeah. So, Kadi, you've got to think about it: you're missing the opportunity to spend time with a five-year-old philosopher.


Kadi Grigg 33:33
It's true.


Ted Neward 33:35
You know, that's the thing that I regret in many respects. My kids are 29 and 22... yeah, 22. And I regret that in many cases I did that parental thing of shutting them down when they were young and asking those questions, because we do stifle children's natural curiosity about things. Now, in some cases we do it because we're trying to keep them alive: no, don't touch that hot cookie sheet. But actually, my eldest, and I blogged about this a long, long time ago on a long-since-defunct blog, my eldest was like five or six, and we were taking cookies out of the oven, and he wanted cookies. So he was reaching, and it's like, "no, don't touch that, it's hot." And he was reaching; "don't touch that." And the third time, it's like, okay.


Ted Neward 34:27
And he touched it and burned his finger, right? Just a little blister-type burn. And I blogged about the fact that this is what parents refer to as natural consequences, right? We have a different acronym for it today: FAFO, I think, is how it goes, right? F*ck around and find out. And I got a number of people who were like, "oh my God, you're a horrible parent for allowing your child to burn himself." It's like, you know, that is one of the ways we learn: through natural consequences. You can't be around your kid all the time, et cetera. And this actually goes back, again, to philosophy. There are really two kinds of knowledge that we have. One is knowledge that we've accumulated from our own experience.


Ted Neward 35:18
I touched the cookie sheet, I burned my finger. The other is knowledge that we've acquired from other people, right? I personally have never taken crack or heroin, because I assume it's bad, right? I didn't have to experience it in order to discover how terrible it is. And with children, we want to allow them to learn from experience, as long as the experience doesn't hurt them too badly. But if you think about it, and this goes back, Kadi, to when you and I were in Kraków at the conference: conferences are an attempt to teach people through our experiences as speakers, so that they don't have to burn themselves on a cookie sheet. And yet in some cases we try to replicate the experience by showing them demos, right? Which, is that really replicating the experience? Or is that in fact just saying, "look, there's a pothole here, I can show you the pothole"? I mean, this is what I mean: philosophy underlies everything. It's everywhere.


Omar Torres 36:26
Is there anywhere right now in software development where we should be applying more philosophical thinking? My question stems from: is there anywhere in software development where we're heading towards one of those natural consequences? Or maybe we should be trying to watch out, or maybe some people are saying "watch out," but we're waiting for that natural consequence before we have a reaction.


Ted Neward 36:54
So, Omar, let me ask you a question, 'cause that's what us philosophers do. We ask questions.


Omar Torres 37:00
Right? Okay, yes.


Ted Neward 37:01
You get in your car, you drive down the road, and you see a bicyclist in front of you. All of a sudden something happens and the bicyclist goes down, and the rider is right there in the lane of traffic. You can choose to swerve left, because if you keep going, you'll kill them, right? You'll just run them right over; you're traveling fast enough. If you swerve left, though, there is a strong chance you'll create a five-car pile-up, which potentially could be fatal. You can't tell. What's your choice? This is known as the trolley problem, by the way.


Kadi Grigg 37:40
I always struggle with this one. Ever since college, the trolley problem is brutal.


Ted Neward 37:44
Okay? But here's where things get even more interesting. You are now the developer writing the software for the self-driving automobile.


Ted Neward 37:57
And you are faced with that choice. Which do you choose? And just for bonus points: are you in fact legally liable if you write the code to run the bicyclist over, accepting that one certain fatality is better than potentially five fatalities? Either way you go, are you liable? Are you, the developer of the software? Because you are making that decision ahead of time, a decision this computer will just faithfully execute, right? You can make the argument, in the heat of the moment, that you didn't have time to really sit down and analyze the problem, right? Even the most intricate analysis of the trolley problem loses sight of the fact that you have to make a split-second decision. But as developers, we don't; we actually get to decide that long in advance. So if you write the code to say, "okay, I'm going to go with the known as opposed to the unknown," does that make you legally liable?


Ted Neward 38:59
And does that make you a murderer?
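
An editorial aside before the conversation continues: Ted's point is that the developer makes the "split-second" choice months in advance, in code. Here is a deliberately tiny sketch of what deciding ahead of time looks like; nothing here reflects any real autonomous-vehicle stack, and the names and the policy are purely illustrative.

    # The "decision in advance": a fixed policy, written long before the
    # moment of crisis, that the vehicle will faithfully execute.
    from dataclasses import dataclass

    @dataclass
    class Option:
        action: str
        expected_fatalities: float  # an estimate the car must somehow produce

    def choose(options: list[Option]) -> Option:
        # One quiet line in an office encodes the whole ethical choice:
        # minimize expected fatalities, whatever that costs.
        return min(options, key=lambda o: o.expected_fatalities)

    decision = choose([
        Option("continue straight", expected_fatalities=1.0),
        Option("swerve left", expected_fatalities=2.5),  # possible pile-up
    ])
    print(decision.action)  # -> "continue straight"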


Omar Torres 39:01
That's a great question.


Ted Neward 39:02
If you're getting the impression that I'm thinking that AI and machine learning desperately need ethical oversight, you would be right.


Omar Torres 39:09
That's right.


Ted Neward 39:10
That's the most obvious case, right?


Omar Torres 39:13
Okay.


Ted Neward 39:14
That is. And, you know, when I was at Rocket Mortgage, one of the teams I was leading was working on a problem in AI and machine learning. And we actually consulted with a team inside Rocket Mortgage specifically geared towards thinking about the morals and ethics of anything data science and machine learning related. Because, if for no other reason: Amazon got into trouble recently. They created a system that would analyze resumes to determine whether or not they should proceed with a candidate based on the resume. And do you know what Amazon discovered? That the best chances of getting hired at Amazon were for people who were white males. Yeah, Kadi's eyebrows just hit the ceiling.


Kadi Grigg 40:06
Wow. I wasn't expecting that.


Ted Neward 40:08
Well, because here's the thing: when you do machine learning, it's a giant pattern-recognition machine. It was recognizing what was already true at Amazon. It had nothing to do with skills; it had to do with the fact that most of the developers at Amazon are white males. And so the machine learning said, "oh, well, given that that's what leads to success, then this is clearly the best." And Amazon very quickly shut that program down, because that's not what they want to do. But that's the danger of machine learning: it can tell us the way things are now, but it can't necessarily infer the way we would like them to be. And that's great for image recognition, right? Here are all these images of wiener dogs, and here are images of hot dogs, and we can learn the subtle differences between the two and eventually create an algorithm that recognizes wiener dogs as opposed to hot dogs, right?
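
An editorial aside: as a toy illustration of the pattern-recognition trap Ted describes, consider a naive "model" that scores candidates by similarity to past hires. Because the fabricated history below skews toward one group, the score reproduces that skew regardless of skill; none of this data comes from any real hiring system.

    # Fabricated history: candidate attributes -> whether they were hired.
    history = [
        ({"demographic": "group_a", "skill": "high"}, True),
        ({"demographic": "group_a", "skill": "medium"}, True),
        ({"demographic": "group_a", "skill": "low"}, True),
        ({"demographic": "group_b", "skill": "high"}, False),
        ({"demographic": "group_b", "skill": "high"}, False),
    ]

    def score(candidate: dict) -> float:
        # Naive pattern matcher: fraction of attribute values shared with
        # past *hires*. It has no idea which attributes *should* matter.
        hires = [feats for feats, hired in history if hired]
        matches = sum(
            sum(1 for k, v in candidate.items() if feats.get(k) == v)
            for feats in hires
        )
        return matches / (len(hires) * len(candidate))

    # A highly skilled candidate from the underrepresented group scores
    # lower than a low-skilled candidate who "looks like" past hires.
    print(score({"demographic": "group_b", "skill": "high"}))  # ~0.17
    print(score({"demographic": "group_a", "skill": "low"}))   # ~0.67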


Ted Neward 41:06
But it can't then look at a Doberman and say, "oh, that's a dog too." See, this gets back to the notion of what we do with models, because all models are broken, but some of them are useful. How do we validate that the model we are building is not, in fact, broken so badly that it's not useful? And so software and machine learning can give us insights that get us to conclusions we've already raised, but they can't necessarily get us to new conclusions. That's why I actually hate the phrase "artificial intelligence," because even if we could get a machine, classic science fiction trope, even if we could get a machine to be fully sentient... and by the way, we'll have to debate what that term "sentient" really means. Do you have to feel emotions in order to be alive?


Ted Neward 41:59
This is something that we've debated in both the philosophical and the religious space. Do animals have souls? That was a major debate in religious circles a couple hundred years ago. We know that animals can feel things, but what about plants? I mean, that's one of the reasons vegetarians don't eat animals: they say, "I don't want to eat anything that can feel pain." Well, hate to tell you this: plants can feel pain. We have actually discovered, through deep analysis of some of the electrical activity in a plant, that they can feel pain, right? So does that mean we stop eating plants too? What do we end up eating, Soylent Green for everybody?


Ted Neward 42:44
So, to go back to your original question, Omar: the whole machine learning space is definitely a place that needs ethical oversight. It needs philosophical inquiry. I think most things in software require some philosophical underpinning. And I think the best thing software developers can do is learn elements of philosophy. But it's also going to be the most frustrating thing, because once you start down this path, it calls into question so many things, and you really start looking at the world differently. It's a drug. And once you take it, you never look at the world the same way. You really don't.


Omar Torres 43:25
Steve, do you have any other questions? I'm only looking at the time; I actually really enjoy this conversation, so...


Steve Poole 43:32
No, we could go on forever. But the thing Ted's touching on, the thing that gets me, is that when we started way back, software was sort of contained, and now the world runs on software across the board; everything relies on it. And software and developers haven't really grown up to that. That is a challenge. I mean, we talked about protestware, but this idea of "I own a piece of software and whatever it does is up to me": that was okay when you had five people using it, but not when it could be embedded anywhere. And the same with the AI, all this sort of stuff. It's all run ahead of us without, I don't think morals exactly, but some education for the people who write code: that they have responsibilities, and they need to think more about what they're writing. You know, that comes out when we talk about security vulnerabilities; we see how often good intentions get subverted into bad things, because people were going, "oh, nobody will ever use this for bad purposes." And nowadays, if a bad guy can get to it, they'll subvert it. You know, that's life.


Ted Neward 44:46
Well, and the thing is, Steve, this is where history comes into play, because it's not like this is the first time we've ever had this particular dilemma. Remember, Robert Oppenheimer, as soon as they discovered that Germany was not in fact working on a nuclear weapon, and as soon as we had taken the heavy-water plants that Germany would have relied on, he wanted to stop the development of the atomic weapon. And the US Army said no. And one of the things I found personally fascinating: when I was at UC Davis, one of the classes we had was Physics 137, the physics of nuclear weapons. It was taught by two grizzled old physicists, one of whom had actually been at White Sands, New Mexico, working on the project. And he said that the day they set off the first weapon, there was actually a betting pool going amongst the scientists as to whether or not the weapon would work.


Ted Neward 45:48
And then he paused and he said: actually, there were three categories. About half of the physicists thought it would work as expected. Basically two-thirds of the remaining half thought that it wouldn't work. And there was a vocal contingent that thought that once the reaction started, it would never stop; that it would just keep creating chain reactions and turn the entire earth into a nuclear fireball. The earth would basically turn into another sun. Yeah. You could have heard a pin drop in the classroom, until a young woman at the very front said, "Then why did you let them set it off?" And he'd obviously gotten that question before, because he just looked at her and said, "Young lady, this was a project run by the United States Army. We were not in control. It was not our choice. The army wanted their weapon."


Steve Poole 46:45
Yep.


Ted Neward 46:45
And, you know, nuclear power is one of the most powerful things that we have on this planet, right? We can use it to create vast amounts of energy in a controlled manner. We can also use it to create a really, really big bomb. And every technological development that mankind has ever come to has in fact been something that we could use as either a boon to mankind or a weapon. Anything, right? Even if it's just: I invent this really, really cool thing, and now I weaponize it by taking it away from the people who need it. As we're seeing now with grain and farming and whatnot, right? We've optimized the agricultural space so much that now Ukraine is under conflict, the fields are under attack, and that's disrupting the supply chain.


Ted Neward 47:38
And now there's a good chunk of the world that's potentially going to starve, because we've done all this optimization, and now food is itself a weapon. It's always been a weapon, but we're seeing it play out here in the 21st century. This isn't a new problem, and it isn't necessarily one that we as software developers can solve. Because that's the other great problem we have: thinking that software can solve everything, right? I remember talking with a startup here in Seattle that was going to solve the problem of corruption in India by creating a social media site where you could report corruption, and therefore people could root out corrupt agents. And it's like, yeah, and how do you think that's not going to be corrupted? Because the first thing I'm going to do is falsely report the one non-corrupt official. I mean, bribes are frankly a fact of life in many governments, and you could argue many western governments as well.


Ted Neward 48:37
It's just that the bribes are different and squirreled away more. At the end of the day, there are all these things that we can do. Social media can be weaponized, but it can also be a tremendous power for good. Atomic power can obviously be weaponized, but it can also be used for a tremendous amount of good. This is not a technological problem; this is a human problem. And one of the things developers would be really, really well served to understand is that you cannot solve human problems with technology. Technology can only help humans solve human problems.


Omar Torres 49:16
Only for the sake of timing, I was going to say: how can you close this out in terms of what developers can be thinking about, or what should they be thinking about, as they keep doing the work, keep inventing, keep innovating? What would be some of the last words you could leave for developers?


Ted Neward 49:41
So, A: understand that whatever you build will be used in ways that you didn't intend, right? That's true both from a philosophical perspective and from a security perspective. Anytime you build an API, remember that there are always two clients for it: one, the front end that you build, and the other, telnet. There will always be attackers who will try to attack your API through something other than the front end you built for it. Your software will be used in ways that you cannot imagine. And if you really have a concern about that, then you should build in safeguards, such as, for example, encrypting data at rest if you don't want data to be used potentially maliciously. And don't try to anticipate all the failure scenarios; don't try to anticipate how data will be used, right? You can't possibly imagine all the creative ways in which data could be used for malicious purposes, unless you're an attacker yourself.
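
An editorial aside: encrypting data at rest, which Ted offers as an example safeguard, can be as simple as the sketch below, using the cryptography package's Fernet recipe. The key handling here is deliberately naive; in practice the key would live in a key-management service, never next to the data.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in production: fetch from a key manager
    fernet = Fernet(key)

    # Encrypt before writing to disk or a database column...
    token = fernet.encrypt(b"ssn=123-45-6789")

    # ...so a stolen disk or database dump yields only ciphertext.
    print(token[:20], b"...")

    # Decrypt only in the code paths that genuinely need the plaintext.
    print(fernet.decrypt(token))  # b'ssn=123-45-6789'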


Ted Neward 50:42
You really just don't have the breadth of experience. Just accept that it's okay to say "I don't know," but you can still take steps to correct for that: like encrypting data at rest, like thinking about how the keys will be managed, like wondering what happens if a system administrator decides to go rogue, right? How do we protect our systems against the sysadmin? Which is an interesting question to ask, and yet this is exactly what led us to the point where system administrators can no longer see passwords in a properly managed environment. By asking a number of these questions, by engaging in some of this philosophical inquiry, you're going to come up with a lot of questions that don't have good answers, or don't have answers at all. And that's okay, right? That's part of philosophy. Because if you ask a question and realize there is no answer, and realize that's a problem, now you are potentially doing something that nobody has done before, at least successfully.


Ted Neward 51:45
Because if they had, there would be an answer. And now you could potentially be on the cusp of doing something really, really useful, if you can figure out a way to, in fact, allow us to secure systems without requiring passwords. Hey, that's a great thing, and we are now starting to see a rise of passwordless forms of authentication and security, right? If you can think of ways to prevent people from being able to do malicious things with software by requiring a second form of authentication, there's 2FA right there, and so on and so forth. It begins with asking those questions, and it begins with asking those questions of people who are not you and are not like you, because you want a diverse perspective. One of the reasons Amazon ran into trouble with their resume system is that they tested it on themselves, and because their team was made up of white guys, the software worked great for white guys.


Ted Neward 52:49
Facial recognition software has this exact same problem. This is where diversity of teams really comes into play, particularly diversity of thought, right? If you really want to test whether your software is accessible, hire somebody onto your team who is hard of sight, or colorblind, or has some sort of motion disorder such that they can't really control the mouse, if you're truly committed to doing software that is accessible to everyone, as opposed to just, "hey, do we have these things? Go through the checklist, we're good." This notion of asking yourself not "how do I satisfy the requirements?" but "what are we building? Who is it for? Why do they want it? Why do they care? Why should they care?" These are hard questions to answer, but as philosophers will tell you, it's the hard questions that are usually the good ones, and they will lead to positive outcomes. So it begins with learning some basics of philosophy and then asking hard questions. And that's about as concrete as I can get.
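
A final editorial aside: as a tiny illustration of the second-factor and passwordless direction Ted mentions, here is a time-based one-time password (TOTP) sketch using the pyotp library. The secret is generated on the spot for the example; a real system provisions it once per user and stores it server-side.

    import pyotp

    secret = pyotp.random_base32()  # shared once with the user's authenticator app
    totp = pyotp.TOTP(secret)

    code = totp.now()               # what the user's phone would display
    print("current code:", code)

    # At login, the server checks the submitted code alongside the
    # password (or instead of one, in passwordless schemes).
    print("verified:", totp.verify(code))        # True
    print("stale code:", totp.verify("000000"))  # almost certainly False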


Omar Torres 54:06
Excellent. Well, thank you so much for being on, Ted. I feel like the conversation has been enlightening, at least to me; I've learned a lot, so it feels good. Steve, any final words? Or no?


Steve Poole 54:19
No, no. It's been... we could go on forever, but let's not.


Omar Torres 54:23
Okay.


Kadi Grigg 54:26
Thanks for listening to another episode of Wicked Good Development, brought to you by Sonatype. This show was co-produced by Kadi Grigg and Omar Torres, and made possible in partnership with our collaborators. Let us know what you think and leave us a review on Apple Podcasts or Spotify. If you have any questions or comments, please feel free to leave us a message. If you think this was valuable content, share this episode with your friends. Till next time.

 


Written by Kadi Grigg

Kadi has been passionate about the DevOps / DevSecOps community since her days of working with COBOL development and mainframe solutions. At Sonatype, she collaborates with developers and security researchers and hosts Wicked Good Development, a podcast about the future of open source. When she's not working with the developer community, she loves running, traveling, and playing with her dog Milo.