Tuesday, April 17, 2018

Soft vs. Hard constraints

Last week, at a meeting to prepare for an on-site kickoff with a client, I was asked if I had any real-life examples of the "squishy rules" I wanted to discuss with the customer. At first nothing was coming to mind, but my airline helpfully solved that problem for me on my way to the kickoff.

After my first flight had taken off, my second flight was cancelled. I found myself in the customer service line behind several other people also trying to figure out how to satisfy their constraints and priorities in the best way possible (scheduled meetings the next day, no private jets, how far they were willing to drive a rental car). What struck me was how much those constraints and priorities varied among the four people ahead of me in line. Some were fine with getting in the next night; others (like me) were willing to give up anything except being on time the next day.

Now, you may have noticed above that I combined constraints and priorities into a single list. When I booked my flight, I chose to fly to the actual city I was headed to. Once that flight was cancelled, I had a choice to make. What used to be two hard constraints now gave me zero "feasible solutions" -- I could either miss one day of the one-and-a-half-day kickoff, or I could fly to a different city. Now, some very creative people find themselves in this situation and fly to some other middle city and then on to their destination. But my airline either didn't or couldn't suggest those options, and if you had asked me before the cancellation whether I would consider a 3-leg trip, I would have given a flat no. So if I had no possible solutions, what could I do?

Well, this happens a lot. People will often list their preferences as needs until pressed, and as long as a feasible solution exists, the difference never has to become obvious. One of the people ahead of me in line chose not to give up any of their hard constraints, which meant there were still no options available. It was obvious that something had to give, unless the goal itself had changed ("never mind, I didn't need to go to that city after all"). But knowing which of your rules to turn "squishy" is the key to still achieving your goal.

In my case, I flew to a neighboring city instead. In fact, my boss had flown directly to my alternate city and planned from the start to drive the remaining distance -- he had never made flying to the final city a constraint. As a result of this experience, I also finally bought some summer plane tickets I had been putting off buying for weeks. I am now flying into the airport two hours away for less than half the price of the tickets to the actual city.
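If you want to see the same idea in (toy) optimization terms, here is a minimal sketch. The options, numbers, and penalty weight are all made up; the point is only that treating both rules as hard constraints leaves nothing feasible, while turning one of them "squishy" with a penalty still gives you a workable answer.

```python
# A toy sketch of hard vs. soft constraints (options and numbers invented).
options = [
    {"name": "next-day flight to the destination city", "on_time": False, "drive_hours": 0},
    {"name": "same-night flight to a neighboring city", "on_time": True,  "drive_hours": 2},
]

# Hard version: both rules are non-negotiable, so nothing is feasible.
hard_feasible = [o for o in options if o["on_time"] and o["drive_hours"] == 0]
print(hard_feasible)  # []

# Soft version: keep "be on time" hard, but make "fly to the final city"
# squishy by charging a penalty per hour of driving instead of forbidding it.
def cost(option, penalty_per_drive_hour=10):
    if not option["on_time"]:
        return float("inf")  # the constraint I refused to relax
    return penalty_per_drive_hour * option["drive_hours"]

print(min(options, key=cost)["name"])  # the neighboring-city flight, drive and all
```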

Have you ever realized you were overconstraining your problem? Which constraints turned out to be a lot squishier than you realized?

Saturday, February 3, 2018

You can't inspect in quality

This is just a short post on applying industrial engineering principles to daily life.

At some point in my education, someone told me that it is impossible to inspect in quality. At the time it made sense from what I knew about inspections: people are bad at noticing rare events.

Since then I have found a semi-common application at home where attempting to inspect in quality is both tempting and a bad idea... cleaning up broken glass. I have no idea how other people do it, but the system I have found to avoid the unpleasant outcome of stepping on glass is to clean extremely thoroughly twice and only then to conduct my first inspection. If I find any glass, I assume there are several more pieces I missed and do another cleaning pass.
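For anyone who likes to see the arithmetic, here is a toy calculation with completely made-up numbers (20 shards, and a 70% chance of finding any given shard on each pass) showing why I don't trust a single pass plus a quick look:

```python
# Toy numbers: 20 shards, each cleaning pass independently finds a given
# shard with probability 0.7. What is the chance something is still there?

def prob_any_shard_left(shards=20, find_prob=0.7, passes=1):
    p_one_survives = (1 - find_prob) ** passes    # one shard escapes every pass
    return 1 - (1 - p_one_survives) ** shards     # at least one of them escapes

for n in range(1, 5):
    print(n, "passes:", round(prob_any_shard_left(passes=n), 3))
# Roughly 0.999, 0.85, 0.42, and 0.15 -- one pass almost guarantees a stray
# shard, which is why I clean twice before the inspection even starts.
```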

Do you have any tips to speed up this process? Thoughts of other scenarios where it is tempting to try to "inspect in" quality? Leave your thoughts in the comments!

Saturday, October 14, 2017

Solving the “real” problem

In undergrad, when I learned about the field of operations research, I assumed people would write down their objective and constraints, get the optimal solution, and then do whatever the model told them to. My first year of grad school I took a class from an adjunct professor who explained that the hardest part of working in OR was convincing people to implement the output of the model. Basically, "decision makers" (aka people who did not know math) would not believe the output of the model, so we had to design things so they could follow all the steps in our analysis.

I internalized that people would have reasons not to believe the model, but for a long time I continued to believe it was mostly because of mistakes people made. You would build them a beautiful model, and then they would see the solution and realize that they had forgotten to give you important constraints. Or they would see the result, decide it was too weird, and insist on a sub-optimal solution that looked more like what they had been doing. Over time I developed a more complete list of reasons people would not trust a model, but I still fundamentally thought of the models as right.

At some point, though, that changed. I stopped thinking of people as the problem. I started this blog under the premise that not solving the right problem (type 3 error) was avoidable, but that it took careful study to get to the problem you should solve. Even now, I have continued to find it challenging to really talk about that mental shift. In fact, this particular blog post has been sitting in purgatory since July while I figured out just the right way to convey the distinction.

But yesterday I read the HBR article "Are you solving the right problems?", in which the author describes reframing a problem not as simply redefining the "real" problem, but as recognizing that there is a better problem to solve. Realizing that you could be solving a better problem is not a simple process. It often requires attempting to solve other problems first. Even the notion of a "better" problem is not straightforward. It may have to do with the intractability of your current problem, or the realization that your first solution does not achieve what you thought it would.


If you are reading this and have a problem that could use some reframing, feel free to reach out to me or leave a comment here. Oftentimes, just explaining the situation to someone a bit further from the problem is all it takes to shift your context.

Thursday, August 24, 2017

Influencer book review: part 2

When I need to explain why I think industrial engineering is different from other engineering disciplines, I often point to the fact that we see the whole world as systems. While all engineers need to model pieces of the world, in my opinion industrial engineers take a wider view largely because they consider people as part of the system. Suddenly the way you approach problems is different, because you can't just expect that a person will do what you tell them. You have to accept the reality that people will do what they're going to do, and your job is to design a system where what they decide to do is what you need.

In "Influencer" the authors start by suggesting you identify very explicitly what your goals are in a way that they are actionable. You then identify the moments at which people have a choice to support that goal, or not. Finally, you use "influence" to help them choose to support that goal. The authors then spend the majority of the book on the "6 keys of influence" which are ensuring there are personal, social, and structural: motivation and ability, which are encouraging the person to make the choice which aligns with the goal. The book itself is full of examples of what each of these look like, but you can think of times you were personally unmotivated in doing homework or felt unable to do something you ​believed​ you should.

And this brings us back to industrial engineers who see the world as systems. In the first chapters, when the authors referred to structural motivation and ability, I had no clue how that was supposed to be different from the personal and social categories. But when I got to the actual chapters, I recognized exactly the mindset industrial engineers use to effect change. We try to design changes so that the new way is easier than doing things the old way. I remember trying to figure out how on earth you are supposed to have an orderly office supply drawer, and the flash of insight when I was looking at pictures and realized the first step was to have about 1/3 of the items I presently had. The system (a jumble of career fair pens and highlighters and countless other trinkets) made it impossible for me to have an organized drawer. It wasn't that I was personally unable to do this; it was the structure itself that made my goal impossible. This became particularly clear after I fixed the system (removed most of the contents of the drawer) and was able to organize the remainder.

In the book the authors do actually make the connection that industrial engineers have been the pioneers of structural ability. However, the goal of the book really is to give the reader a framework to effect change using all 6 approaches at once. The main premise of the book is that most people trying to effect change use only one or two approaches, which is simply not enough. I think the underlying reason it is not enough is that people are different. Some people will happily change their behavior if it will get them a bonus, while others will only do it if it would be embarrassing not to. Therefore, by encouraging a certain behavior "on all fronts," so to speak, you can hope to actually reach everyone.

Wednesday, July 19, 2017

Book review: Influencer

For years I have been picking up highly recommended books on business-y topics that seemed interesting. However, reading most of them never topped my to-do list until a couple of months ago, when I found a position as a data scientist at Mashey. One of those books was called "Influencer: The New Science of Leading Change."

The book impresses me by carefully articulating both how little it takes to really create change and the complexity behind those small differences. I have noticed that it really does seem to be the small things that determine outcomes, but I had not fully articulated what made that set of small things special. In the book the authors describe those things as "crucial behaviors." They give examples of settings where a leader was able to articulate and change one or two problem-specific behaviors, which stopped the spread of disease in one case, got inner-city kids successfully through college in another, and so on.

While many business books lay out a roadmap to success that may or may not work, this one covers an approach that feels very familiar to me. As an example, I have found that while the everyday interactions matter for my kids, there are pretty infrequent "critical moments" where if I notice and take the opportunity, I can teach them something really important. Further, the whole premise of this blog is that if you can identify the right problem to solve, you will be much more successful in your projects.

I actually am jumping the gun a bit on posting this since I've only made it through part 1 of the book so far, but the framework is genuinely inspiring to me. Hit me up in a month and I can give you the complete run-down.

Thursday, April 27, 2017

The research process

[Image: xkcd comic of a "research focus knob" that sweeps from the big picture down to the details]
I am naturally a big-picture person who learned during grad school to also have a more detail-oriented mode. I had not gotten to that point by my second year, and my advisor pointed out that I had a habit of working out the bigger picture and then immediately jumping to trying to prove something (though not necessarily the right something). For the next two years I kept a post-it note reading "Big -> medium -> details" on my monitor.

Eventually I didn't need the note any more, but the smooth transition between levels of research focus stayed present in my mind as I continued my PhD. The summer before my last year I took some time to work on an independent project. I had my initial ideas of what the big picture was, but discovered in working out the details that there were interesting high-level concepts I would not have come up with without going through the math. I realized that research is not just a one-shot pass through the levels; ideally you traverse the range of focus levels a number of times before a project is finished.

Which brings us to the picture at the top from xkcd. I like the idea of the "research focus knob" because there is no way to get from the big picture to the details without going through the intermediary levels. More than that, I think it makes it clearer that your goal is not to just go in a straight line from big picture to the details, but instead to pick the right level of the research problem at every point in the project.

Thoughts or questions are welcome!

Wednesday, February 22, 2017

Stochastic Optimization as a mindset

Stochastic optimization (finding the best solution when you have randomness) is a tricky topic. As an example, think about picking the fastest route for driving home from work. Depending on the traffic, different paths might be fastest. It might make sense at first to say "I want to pick the path that will be the fastest today." But when the outcome is uncertain, it is usually impossible to know which path that is until after you have actually driven home.

There are a number of different ways people handle this issue in optimization. For driving home, it probably makes sense to pick the route with the lowest expected transit time. Over years of driving that route, some days will take longer than you'd like, but in the long run you'll come out ahead. Other times, when you have a dinner appointment, it might make sense to pick the route with the lowest probability of taking more than 25 minutes so you are not late. And if you were managing the electric grid and trying to avoid power outages, you would account for the uncertainty in a different way entirely.
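To make that concrete, here is a minimal simulation sketch. The route names and travel-time distributions are invented, and I'm using plain Monte Carlo sampling rather than any particular solver; the point is only that the two objectives pick different routes.

```python
# Two made-up routes: the highway is faster on average but occasionally jams;
# the back roads are slower but steady.
import numpy as np

rng = np.random.default_rng(0)
routes = {
    "highway":    np.clip(rng.normal(loc=21, scale=6, size=100_000), 1, None),
    "back roads": np.clip(rng.normal(loc=23, scale=1, size=100_000), 1, None),
}

for name, times in routes.items():
    print(name,
          "expected minutes:", round(times.mean(), 1),
          "P(more than 25 min):", round((times > 25).mean(), 3))
# The expected-time objective prefers the highway (~21 vs ~23 minutes), but the
# "don't be late to dinner" objective prefers the back roads (~0.02 vs ~0.25
# chance of blowing past 25 minutes).
```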

As you build a model of the world, different objectives will compress or amplify the effects of uncertainty. Frequently, when you are driving somewhere, there will be several routes with basically the same expected transit time. But the likely worst case (say, the average of the worst 5% of driving times) will often be very different across routes. Under an expected-value objective, the uncertainties are a relatively small issue because they get averaged away. Under a worst-case-style objective, extreme outcomes have a much bigger effect, and therefore so does the uncertainty.
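Here is the same kind of sketch for the "same average, different tails" situation, again with invented distributions: one steady route and one that has a 5% chance of a serious jam.

```python
# Two invented routes with (nearly) identical average times but very different tails.
import numpy as np

rng = np.random.default_rng(1)
steady = rng.normal(loc=22, scale=1, size=100_000)
spiky = np.where(rng.random(100_000) < 0.05,     # 5% chance of a bad jam...
                 rng.normal(60, 5, 100_000),     # ...which costs about an hour
                 rng.normal(20, 1, 100_000))     # otherwise a touch faster

def worst_5pct_avg(times):
    cutoff = np.quantile(times, 0.95)
    return times[times >= cutoff].mean()

for name, times in [("steady", steady), ("spiky", spiky)]:
    print(name,
          "mean:", round(times.mean(), 1),
          "avg of worst 5%:", round(worst_5pct_avg(times), 1))
# Both average roughly 22 minutes, but the worst-5% average is about 24 for the
# steady route and about 60 for the spiky one -- an expected-value objective
# cannot tell them apart, while a tail-focused objective can.
```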

Since models are just attempts to represent reality in a useful way, which model to use for a stochastic problem depends a lot on your best guess of the costs of uncertainty. If you use an expected-value objective but care a lot about the worst-case tails, you are going to have a bad time when you implement your solution. On the other hand, if you optimize for the worst case in a low-stakes situation, like how much inventory to order for a promotion, your company probably will not stay in business very long.

While I have been focusing on either the worst case or the expected value as an objective, there are countless ways to design your stochastic optimization model. The short version is that in the field we are typically trying to reduce the random outcome of our decisions to a single number, which allows us to pick the "optimal" solution. While there are ways to optimize over "multiple objectives," they still tend to rely on either subjective decision making or weighting the objectives to obtain a single number.
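As a sketch of what "weighting the objectives to obtain a single number" can look like, here is one simple blend of the two summaries above. The route summaries and the weights are arbitrary illustrative numbers, not a recommendation.

```python
# Each route is summarized by (average minutes, average of worst 5%), made up here,
# and the two numbers are collapsed into one score with a single tail weight.
routes = {"steady": (22.5, 24.0), "spiky": (22.0, 59.0)}

def blended(mean_time, tail_time, tail_weight):
    # tail_weight = 0 reproduces the expected-value objective;
    # tail_weight = 1 reproduces the worst-5% objective.
    return (1 - tail_weight) * mean_time + tail_weight * tail_time

for w in (0.0, 0.3, 1.0):
    best = min(routes, key=lambda r: blended(*routes[r], tail_weight=w))
    print(f"tail_weight={w}: pick the {best} route")
# With no weight on the tail the spiky route wins; any real weight on the
# worst days flips the choice to the steady route.
```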

I welcome comments either on the blog or directly to me. I'm getting this topic ready for a short talk, and I think this will be only my second talk on optimization to a room full of not-optimization people.