Friday, September 7, 2018

Not that multiverse

I recently started reading the book "Fooled by Randomness" by Nassim Taleb. So far it is not a book I would recommend to most people (the person who suggested I read it said he usually recommends people start with Taleb's most recent book, Antifragile). The author covers very interesting content, but not in a way that is easy to follow or digest. This is the first of probably (hopefully?) a series of posts trying to translate the subject of Taleb's book into an easier-to-digest format.

While I lived in Ann Arbor during graduate school, there was a turn I had to drive about once a month. The unfortunate thing about this turn was that it was a left turn immediately after taking a left at a light. The two were so close together that I had to make a decision: either move into the middle lane of the road, which was a left-turn lane for traffic coming from the opposite direction, or remain in the line of traffic and wait for any oncoming traffic to clear.

After a few times taking the turn, I wondered which of the two not-great options I should choose going forward. It seemed to me that I could either risk a low likelihood of a head-on collision in the middle lane, or a relatively higher likelihood of being rear-ended by staying in my lane. I settled on staying in my lane and risking being rear-ended because of how much more destructive head-on collisions are.

A few years after making the decision, I made the left turn and waited for the oncoming traffic to clear as usual. The person driving behind me saw my brake lights and stopped. Unfortunately, the person behind them didn't, and bumped the middle car into mine. It was fairly minor damage all around, but given what happened, it is easy to wonder whether I actually made the right choice.

One of Taleb's messages is that there is complexity in judging the quality of a decision based on its random outcome. For the person who bought a lottery ticket and won, it looks in hindsight like a good decision. However, we should still advise each person not to buy lottery tickets, because in most versions of the universe the individual you are talking to does not win.
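The lottery argument can be made concrete with a small Monte Carlo sketch: simulate many "versions of the universe" and judge the decision by what happens across all of them, not by any single outcome. The ticket price, jackpot, and odds below are all invented for illustration.

```python
import random

# Hypothetical lottery: every number here is made up for illustration.
TICKET = 2.0
JACKPOT = 10_000_000.0
P_WIN = 1 / 20_000_000  # one-in-twenty-million odds

def simulate_universes(n_universes, seed=0):
    """Net result of buying one ticket in each of n 'versions of the universe'."""
    rng = random.Random(seed)
    return [(JACKPOT if rng.random() < P_WIN else 0.0) - TICKET
            for _ in range(n_universes)]

outcomes = simulate_universes(1_000_000)
winners = sum(1 for o in outcomes if o > 0)

# The analytic expected value is P_WIN * JACKPOT - TICKET = -$1.50 per ticket.
# A few universes may contain a jackpot winner, but in the overwhelming
# majority the buyer is simply $2 poorer -- so "buy a ticket" is a bad
# decision both before and after someone happens to win.
expected_value = P_WIN * JACKPOT - TICKET
```

Judging the decision by the average over all simulated universes, rather than by the one universe you happened to land in, is exactly the mental shift Taleb is asking for.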

This notion of "most versions of the universe" is a useful one when talking about randomness, since it lets you still give weight to things that didn't happen. And while it can be a good idea to update your estimates of probabilities as you get new information, the fundamentals of a decision are the same before an event as they are after it. As an example, after being rear-ended I did conclude that maybe I should be a bit more aggressive in taking my turn between oncoming traffic. But the fundamentals of my decision didn't change because of it.

Tuesday, April 17, 2018

Soft vs. Hard constraints

Last week, at a meeting to prepare for an on-site kickoff with a client, I was asked if I had any real-life examples of the "squishy rules" I wanted to discuss with the customer. At first nothing came to mind, but my airline helpfully solved that problem for me on my way to the kickoff.

After my first flight had departed, my second flight was cancelled. I found myself in the customer service line behind several other people, all trying to satisfy their constraints and priorities in the best way possible (meetings scheduled the next day, no private jets available, how far they were willing to drive a rental car). What struck me was how much those constraints and priorities varied among the four people ahead of me in line. Some were fine with getting in the next night; others (like me) were willing to give up anything except being on time the next day.

Now, you may have noticed above that I combined constraints and priorities into a single list. When I booked my flight, I chose to fly to the actual city I was headed to. Once that flight was cancelled, I had a choice to make. What had been two hard constraints -- arriving in time for the kickoff and flying into that particular city -- now left me with zero "feasible solutions": I could either miss one day of the one-and-a-half-day kickoff, or I needed to fly to a different city. Now, some very creative people in this situation will fly to some other middle city and then on to their destination. But my airline either didn't or couldn't suggest those options, and if you had asked me before the cancellation whether I would consider a three-leg trip, I would have given a flat no. So if I had no feasible solutions, what could I do?

Well, this happens a lot. People will often list their preferences as needs until pressed, and as long as a feasible solution exists, the difference never has to become obvious. One of the people ahead of me in line chose not to give up any of their hard constraints, which meant there were still no options available -- until the goal itself changed: never mind, they didn't need to go to that city after all. When there are no feasible solutions, something has to give; knowing which of your rules to turn "squishy" is the key to still achieving your goal.

In my case, I flew to a neighboring city instead. In fact, my boss had flown directly to that alternate city and planned from the start to drive the remaining distance -- he had never made flying to the final city a constraint. As a result of this experience, I also finally bought some summer plane tickets I had been putting off for weeks. I am now flying to an airport two hours away for less than half the price of tickets to the actual city.
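In optimization terms, the distinction is between a hard constraint, which filters options out entirely, and a soft constraint, which merely adds a penalty to the objective. Here is a toy Python sketch of my rebooking situation; all the flights, prices, and the $100 "drive penalty" are invented for illustration.

```python
# Each rebooking option: (description, arrives_on_time, city, price_in_dollars)
options = [
    ("red-eye arriving tomorrow night",         False, "final",    150),
    ("morning flight to neighboring city",      True,  "neighbor", 260),
    ("three-leg routing through middle cities", True,  "final",    480),
]

def feasible(opts, on_time_hard=True, city_hard=True, legs_hard=True):
    """Treat every requirement as a hard constraint and filter options."""
    out = []
    for desc, on_time, city, price in opts:
        if on_time_hard and not on_time:
            continue                      # must be on time
        if city_hard and city != "final":
            continue                      # must fly to the final city
        if legs_hard and "three-leg" in desc:
            continue                      # no three-leg trips
        out.append(desc)
    return out

# With all three requirements hard, there are zero feasible solutions --
# exactly the situation at the customer service counter.
no_options = feasible(options)

# Soften "fly to the final city" into a $100 penalty (the drive),
# keep "arrive on time" hard, and pick the cheapest remaining option.
def soft_cost(opt, city_penalty=100):
    desc, on_time, city, price = opt
    return price + (0 if city == "final" else city_penalty)

best = min((o for o in options if o[1]), key=soft_cost)
# The neighboring-city flight wins: even after paying for the drive,
# it beats the painful three-leg routing.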

Have you ever realized you were overconstraining your problem? Which constraints turned out to be a lot squishier than you realized?

Saturday, February 3, 2018

You can't inspect in quality

This is just a short post on applying industrial engineering principles to daily life.

At some point in my education, someone told me that it is impossible to inspect in quality. At the time it made sense from what I knew about inspections: people are bad at noticing rare events.

Since then I have found a semi-common situation at home where attempting to inspect in quality is both tempting and a bad idea: cleaning up broken glass. I have no idea how other people do it, but the system I have found to avoid the unpleasant outcome of stepping on glass is to clean extremely thoroughly twice, and only then to conduct my first inspection. If I find any glass, I assume there are several more pieces I missed and do another cleaning pass.
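A back-of-the-envelope model shows why extra cleaning passes beat extra inspections. Assume each pass independently finds each remaining shard with probability p; the 20 shards and p = 0.6 below are invented numbers, not measurements.

```python
def expected_remaining(shards, p, passes):
    """Expected shards still on the floor after `passes` cleaning passes."""
    return shards * (1 - p) ** passes

def p_truly_clean(shards, p, passes):
    """Probability that every original shard was picked up in some pass."""
    return (1 - (1 - p) ** passes) ** shards

# With 20 shards and p = 0.6, two passes still leave 3.2 shards on average,
# and the floor is truly clean only ~3% of the time. An inspection that finds
# nothing mostly reflects the inspector's own miss rate, not a clean floor.
after_two = expected_remaining(20, 0.6, 2)   # 3.2
clean_two = p_truly_clean(20, 0.6, 2)        # ~0.03
clean_four = p_truly_clean(20, 0.6, 4)       # ~0.60
```

This is also why "found a shard during inspection" should trigger another full cleaning pass: the inspection is just one more imperfect pass, and finding one shard is strong evidence several others survived too.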

Do you have any tips to speed up this process? Thoughts on other scenarios where it is tempting to try to "inspect in" quality? Leave your thoughts in the comments!

Saturday, October 14, 2017

Solving the “real” problem

When I learned about the field of operations research in undergrad, I assumed people would write down their objective and constraints, get the optimal solution, and then do whatever the model told them to. In my first year of grad school, I took a class from an adjunct professor who explained that the hardest part of working in OR was convincing people to implement the output of the model. Basically, "decision makers" (a.k.a. people who did not know the math) would not believe the output of the model, so we had to design things so they could follow all the steps in our analysis.

I internalized that people would have reasons not to believe the model, but for a long time I continued to believe it was mostly because of mistakes people made. You would build them a beautiful model, and then they would see the solution and realize that they had forgotten to give you important constraints. Or they would see the result, deem it too weird, and insist on a sub-optimal solution that looked more like what they had been doing. Over time I developed a more complete list of reasons people would not trust a model, but I still fundamentally thought of the models as right.

At some point, though, that changed. I stopped thinking of people as the problem. I started this blog under the premise that not solving the right problem (a type 3 error) is avoidable, but that it takes careful study to get to the problem you should solve. Even now, I find it challenging to really talk about that mental shift. In fact, this particular blog post has been sitting in purgatory since July while I figured out just the right way to convey the distinction.

But yesterday I read the HBR article “Are you solving the right problems?”, in which the author describes reframing a problem not as simply redefining the “real” problem, but as recognizing that there is a better problem to solve. Realizing that you could be solving a better problem is not a simple process; it often requires attempting to solve other problems first. Even the notion of a “better” problem is not straightforward: it may have to do with the intractability of your current problem, or with the realization that your first solution does not achieve what you thought it would.

If you are reading this and have a problem that could use some reframing, feel free to reach out to me or leave a comment here. Oftentimes, just explaining the situation to someone a bit further from the problem is all it takes to shift your context.

Thursday, August 24, 2017

Influencer book review: part 2

When I need to explain why I think industrial engineering is different from other engineering disciplines, I often point to the fact that we see the whole world as systems. While all engineers need to model pieces of the world, in my opinion industrial engineers take a wider view, largely because they consider people as part of the system. Suddenly the way you approach problems is different, because you can't just expect that a person will do what you tell them. You have to accept the reality that people will do what they're going to do, and your job is to design a system where what they decide to do is what you need.

In "Influencer" the authors start by suggesting you identify your goals explicitly enough that they are actionable. You then identify the moments at which people have a choice to support that goal or not. Finally, you use "influence" to help them choose to support that goal. The authors then spend the majority of the book on the six sources of influence: personal, social, and structural motivation and ability -- a two-by-three grid of ways to encourage a person to make the choice that aligns with the goal. The book itself is full of examples of what each of these looks like, but you can think of times you were personally unmotivated to do homework, or felt unable to do something you believed you should.

And this brings us back to industrial engineers who see the world as systems. In the first chapters, when the authors referred to structural motivation and ability, I had no clue how those were supposed to be different from the personal and social categories. But when I got to the actual chapters, I recognized exactly the mindset industrial engineers use to effect change. We try to design changes to be almost easier than doing things the old way. I remember trying to figure out how on earth you are supposed to have an orderly office supply drawer, and the flash of insight when, looking at pictures, I realized the first step was to have about a third of the items I presently had. The system (a jumble of career-fair pens and highlighters and countless other trinkets) made it impossible for me to have an organized drawer. It wasn't that I was personally unable to do this; it was the structure itself that made my goal impossible. This became particularly clear after I fixed the system (removed most of the contents of the drawer) and was able to organize the remainder.

In the book the authors do actually make the connection that industrial engineers have been the pioneers of structural ability. However, the goal of the book really is to give the reader a framework to effect change using all six approaches at once. The main premise of the book is that most people trying to effect change use only one or two approaches, which is simply not enough. I think the underlying reason it is not enough is that people are different. Some people will happily change their behavior if it will get them a bonus, while others will only do it if it would be embarrassing not to. Therefore, by encouraging a certain behavior "on all fronts," as it were, you can hope to actually reach everyone.

Wednesday, July 19, 2017

Book review: Influencer

For years I have been picking up highly recommended books on business-y topics that seemed interesting. However, until a couple of months ago, when I found a position as a data scientist at Mashey, reading most of them had never topped my to-do list. One of those books was "Influencer: The New Science of Leading Change."

The book impresses me by carefully articulating both how little it takes to really create change and the complexity behind those small differences. I have noticed that it really does seem to be the small things that determine outcomes, but I had not fully articulated what made that set of small things special. In the book the authors describe those things as "crucial behaviors." They give examples of settings where a leader was able to identify and change one or two problem-specific behaviors, which stopped the spread of disease, got inner-city kids successfully through college, and more.

While many business books lay out a roadmap to success that may or may not work, this one covers an approach that feels very familiar to me. As an example, I have found that while the everyday interactions matter for my kids, there are pretty infrequent "critical moments" where if I notice and take the opportunity, I can teach them something really important. Further, the whole premise of this blog is that if you can identify the right problem to solve, you will be much more successful in your projects.

I am actually jumping the gun a bit in posting this, since I've only made it through part 1 of the book so far, but the framework is genuinely inspiring to me. Hit me up in a month and I can give you the complete run-down.

Thursday, April 27, 2017

The research process

I am naturally a big-picture person who learned during grad school to also have a more detail-oriented mode. I had not yet gotten there in my second year, when my advisor pointed out that I had a habit of working out the bigger picture and then immediately jumping to trying to prove something (though not necessarily the right something). For the next two years I kept a post-it note reading "Big -> medium -> details" on my monitor.

Eventually I didn't need the note any more, but the smooth transition between levels of research focus stayed present in my mind as I continued my PhD. The summer before my last year, I took some time to work on an independent project. I had my initial ideas of what the big picture was, but discovered while working out the details that there were interesting high-level concepts I would not have come up with without going through the math. I realized that research is not just a one-shot transition through the levels; ideally, you traverse the range of focus levels a number of times to finish a project.

Which brings us to the xkcd picture at the top. I like the idea of the "research focus knob" because there is no way to get from the big picture to the details without going through the intermediate levels. More than that, I think it makes clear that your goal is not to go in a straight line from big picture to details, but instead to pick the right level of the research problem at every point in the project.

Thoughts or questions are welcome!