## Why Agile Works

Everything happens for a reason.

### Truth

Imagine you are standing on the side of a hill on a foggy day; you can see no farther than the length of your own arms. You are trying to reach the top of the hill. What strategy would you use to get there most efficiently?

This is the classic hill-climbing problem. Whatever strategy you use, you risk settling on a local optimum instead of the global maximum.

We can mitigate this limitation by occasionally using simulated annealing, which allows the search algorithm to break out of a local maximum by first making things worse.
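
For contrast with plain hill climbing, here is a minimal simulated-annealing sketch in Python; the objective function, step size, and cooling schedule are all invented for illustration:

```python
import math
import random

def simulated_annealing(f, x, step=0.1, temp=1.0, cooling=0.995, iters=5000):
    """Maximize f by local search, occasionally accepting worse moves.

    The schedule (temp, cooling) and step size are illustrative; real
    applications tune them to the problem.
    """
    best_x, best_y = x, f(x)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept a worse move with a
        # probability that shrinks as the temperature cools.
        if delta > 0 or random.random() < math.exp(delta / temp):
            x = candidate
            if f(x) > best_y:
                best_x, best_y = x, f(x)
        temp *= cooling
    return best_x, best_y

# A bumpy one-dimensional "hill": plain hill climbing started near
# x = 1.26 gets stuck on a local peak (height ~1.4); annealing can
# cross the valley and reach the global peak at x = 0 (height 3).
bumpy = lambda x: -x**2 + 3 * math.cos(5 * x)
print(simulated_annealing(bumpy, 1.26))
```

Without the `math.exp(delta / temp)` escape clause, this is exactly the greedy hill climber described above.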

Agile promotes an incremental, iterative development process to battle the uncertain nature of software development. Sounds similar to hill climbing, right? If they are similar, where is the simulated annealing in Agile? Where do you see Agile getting worse before getting better?

The truth is that Agile is a lousy way of doing optimization. It creates false hope and leads to chaos. Why? Because it almost certainly converges on a local optimum: it is optimizing for the wrong top of the hill. Your company is not making enough money? What does that have to do with Agile? When did Agile ever say anything about productivity, making more sales, or simply cutting expenses?

You think the development methodology is the problem? Horribly wrong. Agile is no more successful at project delivery than any other development methodology. When did you last hear a success story about Agile giving a firm a second life? IBM pushed Agile; look at how it lives now.

If you believe Agile can save a development team that keeps making bad decisions, you probably have a better chance of success believing in hill-climbing algorithms.

## What isn't work for a developer

As a developer, the two most important things are coding and talking to the client (or a proxy for the client, such as a business analyst). However, a lot of things look like work but contribute little to the product you are working on. It is easy to feel good about yourself by spending time on them, but that is mostly for personal interest rather than for growing your product. Chances are, you are being distracted from far more important things that need to be done excellently.

Think broadly: you certainly won't have time for the things that look like work but aren't, which is far better than not having time for the things that don't look like work but are.

Confusing what is work with what isn't is like fighting without arms.

### Things that look like work but aren't

• Corporate affairs, hosting the annual party, working on office event logistics - these make people feel good about contributing to the wider community and being appreciated by senior managers. But you are probably confusing doing grunt work with leadership.

• Winning awards - external validation is helpful when you run out of other options, a bit like proving to your mother that you are not wasting your time on meaningless stuff. It may only indicate how good you are at marketing, not how much progress you are making.

• Speaking at external events not relevant to your domain - good for personal interest, but mostly a waste of time for growing your product.

• Happy hours with the team - it might sound fun, but random topics in a large group are rarely useful.

• Reading news, following friends on Facebook, or watching TikTok - recreation. Rarely useful, not to mention full of false news.

### Things that don't look like work, but are

• Keeping your clients regularly updated - developers instinctively feel their most important job is coding. Fail to keep clients updated, and you are likely to solve the wrong problem for long enough that you get no appreciation for your hard work.

• Keeping your corporate stakeholders regularly updated - people feel this is kissing up, but you won't be able to take bold initiatives without enough support from your stakeholders. You can have great ideas for driving the product, but people will not necessarily support you just because the idea is good.

• Keeping regular sessions with your mentor - share what you feel is important, seek suggestions, and gather feedback.

• Planning - not limited to drawing architecture diagrams. It could also be a presentation, a one-to-one meeting, or a product roadmap.

## Of Point Spreads and Predictions Using a Kalman Filter

posted from - http://www.rawbw.com/~deano/articles/kalman.html, 20 January, 1997

### Introduction: Why Do We Need Predictions?

In the work I do, I find little reason to estimate point spreads or to predict scores. This is because those methods are mostly used to try to win in Las Vegas, an objective I have never had. No, my goal is and always has been to figure out how to construct a good team, “basketball engineering,” as I’ve pitched it to a few people.

This objective is carried out by studying how individuals work or by studying how teams work, with the intention that the two approaches will merge and lead to the same conclusions.

On the team side, one of my focuses has been to understand how points scored and points allowed relate to win/loss records: if a team averages 3 points per game more than their opponents, what is their expected winning percentage? I have, in fact, gained numerous insights through the development of the  method, which answers questions just like this. This method, in combination with matchup probabilities, also does a good job of prediction and has been used in one public study showing that there is no distinct added home court advantage in the playoffs. Methods like this, however, do not explicitly account for one piece of information that seems important:  strength of schedule.

Let me first hedge a little and say: There is actually no hard evidence proving that a 10 point win over a strong opponent means anything more than a 10 point win over a weak opponent. Intuitively, we believe that this _must be true_ and I have little doubt that an empirical study would show that it is true (something that you yourself can do). What I will present here relies on this unproven belief and, for those theoretical types out there, actually proves it if you read carefully and think about it.

In large studies, the strength of opponents balances out or can be factored out in some way. In small studies, the strength of opponents does not balance out, which is a primary motivation for this work. For example, during the 1992-93 season, Michael Jordan missed 4 games and the Bulls went 1-3 in those games. Given that the Bulls were 57-25 on the season, this immediately implies that Jordan is _much better_ than his replacement, B.J. Armstrong. But does it? The Bulls’ three losses were all to playoff teams, including one to the Knicks who had the best record in the regular season that year. In addition, none of these losses was by more than 6 points. The Bulls’ one win was a 28 point blowout. How do we put all this together to create some picture of the relative value of Armstrong to Jordan? We can do it in our minds, but we wouldn’t all agree. Or we can use a mathematical method whose basis we can agree upon.

Let me now take a quick diversion into the benefits of numerical methods whose results are consistent from person to person, not subjective. If you don’t want to hear me sound off like Billy Graham after a physics class, I suggest you just leap past my preachings to the rest of the article.

Are you sure you want to read this? It could be _worse_ than Billy Graham on physics. It could be Bill Clinton on health care! It could be Newt Gingrich on ethics! It could be Bob Dole on anything! Last chance. Click here or forever hold your peace…

For many issues, it is fine that we make subjective judgments and that we don’t all agree on those judgments. Argument is underrated in this world, as long as it’s rational and we don’t start killing each other because you think Jordan is 20 points better than Armstrong and I think he’s only 10 points better. (No subtle reference to the Middle East intended there.) I make my living off people disagreeing and, no, I am _not_ a lawyer.

But when things have to get done, we have to agree on some basic rules. We have to have a consistent set of methods for characterizing the truth. Usually, if we cannot agree on the big picture, we can start by looking at the details. We may not agree how much better Jordan is than Armstrong, but we can agree that if Armstrong took Jordan’s place for four games and the Bulls won all four, that is an _indication_ that Armstrong isn’t as bad as Jordan. We should also agree that if all four games were against very weak teams, then that indication isn’t as strong. Finally, we should agree that if all four games were against strong teams, then we have reason to wonder what’s going on – Armstrong is beginning to look pretty good.

If we can find a mathematical method that adequately characterizes those details we agree upon, we have made a step towards agreeing upon the big picture. Often, there are several mathematical methods that can characterize agreed-upon details. Sometimes those methods disagree on the big picture. Many times they don’t. The more information that they account for, the more likely they are to agree upon the big picture.

Of course, these methods can still be wrong. Some of the best models in environmental engineering agree on many things, but they can’t predict real circumstances very well. It frustrates me to no end when people argue over methods whose predictions are all pretty close to one another but that are also all quite far from predicting reality. That’s why I am not trying to place the method of this article in competition with other similar ones which use the same information to get similar results. All of the methods have about equal value for doing what we want: taking scores and assigning “ratings” based on those scores, the opponents, and whether the game was at home or on the road. They all make similar predictions. They all eliminate a large part of the subjectivity. Arguing over what is left of the subjectivity within the methods is foolish and left to people who like to call themselves fools.

Now back to our regularly scheduled article …

On the individual side of my research, I also have never taken explicit account of the strength of opponents. Specifically, Michael Jordan drives and jukes against the toughest defenders every night, just as Joe Dumars tries to contain the toughest offensive players every night. Direct measurements of what they do then show Jordan and Dumars to be somewhat worse than they actually are. What I will present here is the skeleton of a method that can account for this bias.

### The Method: A Kalman Filter

The method I will present here to handle varying strengths of opponents is called a statistical filter.  Statistical filters are methods for estimating _something_ using statistics. In this case, the filters can be used to estimate the strength of a team using a team’s game-to-game progression of points scored, points allowed, whether they were at home or on the road, and who they played against. There are several methods out there that take only this information to produce rankings and/or predict scores – Doug Norris used to have one but I can’t find him on the web anymore, ESPNet has one, and World Wide Rankings and Ratings (WWRR) has four or five. I believe that Doug’s method and the ones from WWRR are “original”, meaning that they dreamed them up on their own, a feat for which I applaud them until my hands turn red. However, an “optimal” technique for using this information has been around for a long time. This technique is called a Kalman Filter. Even though I said the Kalman Filter is “optimal”, I am not claiming a Kalman Filter is any “better” than anything else. Every paper that has ever been written about the Kalman Filter has stated that it is “optimal”, so I’m just regurgitating.

Kalman filters are used by NASA to predict the path of missiles and planes. They are also used to predict weather. They are used on Wall Street. Recently, they were introduced to environmental law by yours truly. A Kalman filter is clearly a very practical tool and it only makes sense that it has applications in basketball. It was actually used in a football prediction program when it was introduced to me about five years ago.

As another illustration of how one might use the Kalman Filter, I present the following chicken scratch:

For this strip, I owe a debt of gratitude to Scott Adams, writer of Dilbert. No, he didn’t draw this, nor did he write the dialog. Actually he didn’t do diddley except get popular enough so that you could tell what I drew even though I am a lousy artist.

### So How Does a Kalman Filter Work in Basketball?

Conceptually, we know that a good offense on average will do relatively better against a poor defense and relatively worse against a good defense. Let’s start with that concept and attach some numbers. Last year’s Utah Jazz had a good offense on average, with an offensive rating of 111.7 in a league where the average rating was 105.9. They played against the following “good defensive teams” a total of 18 times: Chicago (2), Miami (2), New York (2), Portland (4), San Antonio (4), and Seattle (4). These teams had a weighted average defensive rating of 101.5 (weighted by games played against Utah). The Jazz played against the following “poor defensive teams” a total of 16 times: Charlotte (2), Dallas (4), LA Clippers (4), Milwaukee (2), Philadelphia (2), and Toronto (2). These teams had a weighted average defensive rating of 109.6.

According to my methods, the Jazz offensive rating should have been about 107 against the good defensive teams (their rating dropped) and about 115 against the poor defensive teams. In actuality, the Jazz ratings were 109.3 and 112.0, respectively. The results aren’t as good as I had hoped, but this sample was small and I have not done an extensive analysis to determine whether my methods work on larger samples than those here. I believe they will work, but I’d like to ultimately check… unless someone else would like to do it ( Ask me!).

One of the methods says to predict the Jazz offense vs. Team _B_ defense as

    Jazz Off. Rating vs. Tm _B_ Def. = (Jazz Off. Rtg × _B_ Def. Rtg) / League Avg. Rtg    (1)


Technical note: Mathematically, this relationship says that Utah’s offensive performance is _linearly_ related to both the average Utah offense and to the opposing defense. “Linearly” means that if the average Jazz offense improves by 10% then the Jazz offense vs. Team B’s defense also improves by 10%, not 8% or 50%. I only introduce this because a Kalman Filter is strictly only “optimal” if this relationship is linear. The second method I introduce is not linear. (For people with a statistics background, note that a Kalman Filter is optimal only if offensive ratings and defensive ratings are Gaussian distributed. As I showed in Basketball’s Bell Curve, this is also essentially true. The success of the Correlated Gaussian method substantiates this.)
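
Equation (1) is simple enough to sketch in a few lines of Python; the function name is mine, and the figures are the Jazz numbers quoted above:

```python
def predicted_off_rating(off_rtg, opp_def_rtg, league_avg=105.9):
    """Equation (1): an offense's expected rating against a given defense,
    linear in both the offense's and the defense's average rating."""
    return off_rtg * opp_def_rtg / league_avg

# Utah's 111.7 offense vs. the weighted-average "good" (101.5) and
# "poor" (109.6) defenses from the text:
print(round(predicted_off_rating(111.7, 101.5), 1))  # -> 107.1 ("about 107")
print(round(predicted_off_rating(111.7, 109.6), 1))  # -> 115.6 ("about 115")
```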

### Hypothetical Example

Here is how the Kalman Filter will work for a game where team _A_ plays at team _B_:

1. Evaluate the offensive and defensive ratings for both teams. If possible, evaluate team A’s ratings on the road and team B’s ratings at home. If this is not possible, take team A’s ratings and make each of them worse by 1 point per 100 possessions; take team B’s ratings and make them better by 1 point per 100 possessions. For example, if team _A_ is Utah and team _B_ is Seattle, we evaluate Utah’s ratings as 110.7 (=111.7-1.0) and 105.5 (=104.5+1.0) and Seattle’s ratings as 109.7 (=108.7+1.0) and 99.6 (=100.6-1.0).
2. Predict how team _A_ will do against team _B_ using the equation above. Using the Utah-Seattle example, we find that Utah’s offense vs. Seattle defense should have a rating of 104.1 (=110.7*99.6/105.9) and that Utah’s defense vs. Seattle should have a rating of 109.3 (=105.5*109.7/105.9).
3. After the game, input the actual ratings. In this example, let’s assume that Seattle won 99-94 with 88 possessions each. Utah’s actual ratings were then 106.8 (=94/88*100) offensively and 112.5 (=99/88*100) defensively.
4. Adjust the offensive and defensive ratings for both teams according to these formulas (slight revision on 11/16/97), which essentially tell you how strongly to weight the game results. Here, Utah’s offense exceeded predictions, but their defense was worse than predicted, so we would adjust their offensive rating upward and their defensive rating downward. Similarly, Seattle’s offensive rating goes up and their defensive rating goes down. (I will attach numbers in the real example on the Bulls below.) Note that if we had not accounted for quality of competition, it would look like Utah’s offense got worse because it only scored 106.8 points per 100 possessions compared to 111.7 against the league. But by recognizing that Seattle is a good opponent and that Utah is on the road, Utah’s offense actually did well.

(Points can be used in place of ratings above. I like to use ratings rather than points because they do not fluctuate as much as points. But, in terms of ease of use, points are preferable because no calculation of possessions is necessary. Specifically, we could have used Utah’s points per game on the road, Seattle’s points per game at home, the league average of points per game, and the final score of the game to replace Utah’s road ratings, Seattle’s home ratings, the league average rating, and the final ratings of the game.)
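
Steps 1-3 of the procedure can be sketched in a few lines of Python. This is a minimal sketch assuming the article's 1-point home-court shift and the 105.9 league average; the helper names are mine:

```python
LEAGUE_AVG = 105.9  # league average rating used in the article

def adjust_for_home(off_rtg, def_rtg, at_home):
    # Step 1: if home/road splits are unavailable, shift each rating by
    # 1 point per 100 possessions (the home team's offense gets better,
    # its defensive rating gets lower, i.e. better).
    shift = 1.0 if at_home else -1.0
    return off_rtg + shift, def_rtg - shift

def predict(off_rtg, opp_def_rtg):
    # Step 2: equation (1), an offense against a particular defense.
    return off_rtg * opp_def_rtg / LEAGUE_AVG

def actual_rating(points, possessions):
    # Step 3: points per 100 possessions.
    return points / possessions * 100

# Utah (road) at Seattle (home), with the article's season ratings:
utah_off, utah_def = adjust_for_home(111.7, 104.5, at_home=False)  # 110.7, 105.5
sea_off, sea_def = adjust_for_home(108.7, 100.6, at_home=True)     # 109.7, 99.6
print(round(predict(utah_off, sea_def), 1))  # Utah offense vs. Seattle: 104.1
print(round(predict(sea_off, utah_def), 1))  # Utah defense vs. Seattle: 109.3
print(round(actual_rating(94, 88), 1))       # actual Utah offense: 106.8
print(round(actual_rating(99, 88), 1))       # actual Utah defense: 112.5
```

Step 4, the Kalman update itself, depends on the linked formulas, which weight the gap between actual and predicted ratings.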

### The Bulls Example

As an example of this entire method, let’s return to the four Bulls games that Jordan missed where they went 1-3 against fairly tough competition. (Note of 11/16/97: The numbers have been revised below due to a fix in the variance of the predicted rating.)

#### Game 1 at Boston

1. Chicago’s offensive and defensive ratings during the ‘92-93 season were 110.8 and 104.2, respectively. Their first game without Jordan was against Boston in the Garden, so we are approximating Chicago’s ratings as 109.8 and 105.2 for this game. Boston’s season ratings were 106.7 and 105.8, which get adjusted to 107.7 and 104.8 for this home game.
2. The predicted ratings for this game are 108.5 for Chicago and 106.8 for Boston: Chicago is the predicted winner. This uses the league average rating of 106.1.
3. In reality, Boston beat Chicago 101-96. With a pace of 96.2 possessions, this means that Chicago’s actual offensive and defensive ratings were 99.8 and 105.0, respectively. The offense was worse, but the defense was actually slightly better than predicted.
4. Using a prior variance of 20 for both Chicago’s ratings and the Celtics’ ratings, the variance of the expected ratings is about 40 [=(20*20 + 109.8²*20 + 104.8²*20)/(106.1²) for the offense, =(20*20 + 104.2²*20 + 107.7²*20)/(106.1²) for the defense]. In general, ratings fluctuate from game to game with a standard deviation of about 12 (or a variance of 150). Hence, the Kalman weight is 0.2145 [=41/(41+150)]. The updated road offensive rating for the Bulls is then 107.9 [=109.8+0.2145(99.8-108.5)]. The variance on this new estimate is 15.7 [=(1-0.2145)*20], only a slight drop from before. For the defense, the updated rating is 104.8 [=105.2+0.2145(105.0-106.8)], a slight improvement. The variance decreased to 15.8.
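
As a check, the Game 1 arithmetic can be reproduced in a few lines. The variance formula below is reverse-engineered from the bracketed expressions in step 4 (the author's actual formulas are behind a link), so treat it as an assumption:

```python
LEAGUE_AVG = 106.1  # league average rating for the '92-93 season

def predicted_variance(off, var_off, deff, var_def):
    # Variance of the predicted rating off*deff/LEAGUE_AVG. This form is
    # reconstructed from the bracketed arithmetic in step 4 above, not
    # taken from the author's linked formulas.
    return (var_off * var_def + off**2 * var_def + deff**2 * var_off) / LEAGUE_AVG**2

def kalman_update(prior, prior_var, predicted, pred_var, actual, score_var=150.0):
    # One Kalman step: the weight grows with our uncertainty in the
    # prediction and shrinks with the game-to-game score variance.
    k = pred_var / (pred_var + score_var)
    return prior + k * (actual - predicted), (1 - k) * prior_var

# Game 1: Chicago (road ratings 109.8 / 105.2) at Boston (home 107.7 / 104.8)
pred_off = 109.8 * 104.8 / LEAGUE_AVG                   # predicted offense, ~108.5
var_off = predicted_variance(109.8, 20.0, 104.8, 20.0)  # ~41
new_off, new_var = kalman_update(109.8, 20.0, pred_off, var_off, 99.8)
print(round(new_off, 1), round(new_var, 1))  # 107.9 15.7
```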

#### Game 2 vs New York

1. As mentioned above, Chicago’s offensive and defensive ratings during the ‘92-93 season were 110.8 and 104.2, respectively. Their second game was against New York at home, so we are approximating Chicago’s ratings as 111.8 and 103.2 for this game. New York’s season ratings were 104.4 and 98.1, which get adjusted to 103.4 and 99.1 for this game in Chicago.
2. The predicted ratings for this game are 104.4 for Chicago and 100.6 for New York: Chicago again is the predicted winner.
3. In reality, New York beat Chicago 104-98. With a pace of 88.7 possessions, this means that Chicago’s actual offensive and defensive ratings were 110.5 and 117.2, respectively. The offense was better, but the defense was much worse than predicted.
4. Again using a prior variance of 20 for both Chicago’s ratings and the Knicks’ ratings, the variance of the expected ratings is about 39 [=(20*20 + 111.8²*20 + 99.1²*20)/(106.1²) for the offense, =(20*20 + 103.2²*20 + 103.4²*20)/(106.1²) for the defense]. Again with a score variance of 150, the Kalman weight is 0.2092 [=39.7/(39.7+150), roundoff differences]. The updated _home_ offensive rating for the Bulls is then 113.1 [=111.8+0.2092(110.5-104.4)]. The variance on this new estimate is 15.8 [=(1-0.2092)*20]. For the defense, since it played so poorly, the updated rating jumps from 103.2 to 106.6 [=103.2+0.2092(117.2-100.6)]. The variance of the Chicago defensive estimate decreased to 16.0.

#### Game 3 vs San Antonio

1. Going into the second home game without Jordan, the Bulls’ offensive rating at home is 113.1 and their defensive rating is 106.6. Their opponent, San Antonio, had season ratings of 107.8 and 105.1, which get adjusted to 106.8 and 106.1 for this game in Chicago.
2. The predicted offensive ratings for this game are 113.1 for Chicago and 107.3 for San Antonio: Chicago should win by about 6.
3. In the game, San Antonio won 107-102, using 93.0 possessions. This means that Chicago’s actual offensive and defensive ratings were 109.7 and 115.1, respectively. It was a bad game for the Bulls on both ends, not by a terrible amount, but bad enough to lose a game they should have won.
4. From the previous home game, our uncertainties in the Chicago offensive and defensive ratings are 15.8 and 16.0, respectively. The variances of the predicted offensive and defensive ratings are 38.6 [=(15.8*20 + 113.1²*20 + 106.1²*15.8)/(106.1²)] and 36.4 [=(16.0*20 + 106.6²*20 + 106.8²*16.0)/(106.1²)]. With a score variance of 150, the Kalman weight for the offensive estimate is 0.2045 [=38.6/(38.6+150)] and that for the defensive estimate is 0.1952 [=36.4/(36.4+150)]. The updated _home_ offensive rating for the Bulls is then 112.4 [=113.1+0.2045(109.7-113.1)]. The variance on this new estimate is 12.6 [=(1-0.2045)*15.8]. For the defense, the updated rating once again gets worse, going from 106.6 to 108.1 [=106.6+0.1952(115.1-107.3)]. The variance of the Chicago defensive estimate decreased to 12.8 [=(1-0.1952)*16.0].

#### Game 4 vs Dallas

1. Going into the last home game without Jordan, the Bulls’ offensive rating at home is 112.4 and their defensive rating is 108.1. Their opponent, Dallas, had the worst season ratings of anyone in the league at 98.2 and 113.2, which get adjusted to 97.2 and 114.2 for this game in Chicago.
2. The predicted offensive ratings for this game are 121.0 for Chicago and 99.0 for Dallas: Chicago should blow out Dallas by more than 20 points (depending on pace)….
3. …And they did, winning 125-97, using 91.8 possessions. This means that Chicago’s actual offensive and defensive ratings were 136.2 and 105.7, respectively. The Bulls exceeded offensive expectations, but allowed Dallas a few points extra.
4. From the previous home game, our uncertainties in the Chicago offensive and defensive ratings are 12.6 and 12.8, respectively. The variances of the predicted offensive and defensive ratings are 37.0 [=(12.6*20 + 112.4²*20 + 114.2²*12.6)/(106.1²)] and 31.6 [=(12.8*20 + 108.1²*20 + 97.2²*12.8)/(106.1²)]. With a score variance of 150, the Kalman weight for the offensive estimate is 0.1980 [=37.0/(37.0+150)] and that for the defensive estimate is 0.1738 [=31.6/(31.6+150)]. The updated _home_ offensive rating for the Bulls is then 115.4 [=112.4+0.1980(136.2-121.0)]. The variance on this new estimate is 10.1 [=(1-0.1980)*12.6]. For the defense, the updated rating once again gets worse, going from 108.1 to 109.2 [=108.1+0.1738(105.7-99.0)]. The variance of the Chicago defensive estimate decreased to 10.6 [=(1-0.1738)*12.8].

The above calculations are duplicated below, allowing you to change the two somewhat subjective parameters in the procedure: the variance in the prior estimate of all ratings (which I set to 100) and the variance of the game ratings (which I also set to 100). The variance of the game ratings is quite consistent with my records. The variance of the prior estimates states how sure we are with those estimates; since we are not sure of these estimates due to Jordan’s absence, I set these relatively high. Feel free to vary these parameters below to see the effects.

[An interactive table appeared here in the original page: it let the reader vary the prior variance and the score variance and showed the Bulls’ updated home and road offensive and defensive ratings after each of the four games without Jordan.]
Even though not many games were played, we can already get some idea that the Bulls were not as good without Jordan. The offense went up slightly, but not enough to be certain about, and the defense went down quite a bit. For that season, my numbers had Armstrong’s offense being just as efficient as Jordan’s, but his defense being considerably worse, so this Kalman result is consistent with that. Overall, these few games indicated that the Bulls’ expected winning percentage went from about 0.762 to about , or a loss of an additional  games over the course of a season. This seems small to me based on the difference in talent between Jordan and Armstrong, but seems about right given the Bulls’ performances in the games he missed, which is the only information the method uses.

This raises the issue of uncertainty. These four games cannot present a perfect picture of the difference between Armstrong and Jordan. Just the noise of basketball – players getting hurt, teams playing back to back nights, Dennis Rodman “not being interested” – prevents us from being sure about _any_ rating. The Kalman Filter “knows this” and tells us roughly how sure we should be with the ratings it gives us. With the parameters above, our final variances in the offensive and defensive ratings of the Bulls at home are and , respectively. These have gone down about  from our prior estimate, so we feel only somewhat more confident about the estimate than before. But it gives us a foothold for other comparisons we might make between Jordan and Armstrong.

(Technical remark: I could have estimated the prior Bulls’ offensive and defensive ratings differently, for instance, by just using those games in which Jordan played. The prior ratings are the ‘null hypothesis’ we are testing against, as traditional statisticians phrase it; our null hypothesis would then have been that not having Jordan made no difference in the Bulls and we were seeing if we could disprove this hypothesis.)

### Conclusions

This Kalman Filter is a powerful tool for evaluating situations where strength of opponents is important. This is actually quite common in basketball, where teams don’t play a fully balanced schedule, some teams certainly playing a more difficult schedule than others, even over the course of an entire season. I hope to use it quite a bit, though it is still a little labor intensive for me to implement.

There are a couple of weaknesses of the filter that I will mention here at the end. First, the reason I hadn’t really introduced it before was that I never saw it as a very good predictor when someone like Jordan was missing from a team. Because the filter looks only at teams, it cannot account for teams that change, as when a player is injured. When people put out team ratings, what are they really measuring if significant players miss a few games? We know that significant players make a difference in our predictions, but methods like this don’t explicitly account for those players’ absence or presence. I took advantage of this “weakness” above by turning it around and using the method to identify the difference between the Bulls with Jordan and the Bulls without Jordan.

A second weakness is also a strength. The Kalman Filter’s generality of applicability (to other fields) is great, but it also implies that it doesn’t have built in a lot of the details of those fields. I had to build a simple model to “predict” basketball games to use in the Kalman Filter. This simple model is not precisely what happens in basketball; a more complex model may be more accurate, but then it becomes much more difficult to implement in a Kalman Filter.

Finally, this method has the weakness that it says that a team that blows out another team always improves its overall rating. Unless you read Can the Bulls Be Perfect?, you are probably wondering “How is that a weakness?”. A recent finding I made in writing that article indicated that a blowout doesn’t necessarily make you a better team and can actually imply that you’re not as good. This was a very unusual result, but one that I cannot dismiss. I also think that it can be built into the Kalman Filter. The thoughts on how to do that will have to wait a while because they are technical enough that most people won’t want to hear them. Besides, this article is long enough.

In trying to end on a positive note, I want to mention that this method holds a key to defensive ratings. Because good defensive players are often assigned to guard the best players, their defensive numbers may not look very good unless we take into account the quality of the players they have to guard. Doug Steele does something like this in his defensive ratings, but he has indicated to me that it is a lot of work. Hopefully, this is an easier way to do it.

### Kalman Filter References

A reference on the history of the Kalman Filter is this military page. The military does use Kalman Filters for a lot, so they should know about it.

Another reference for the Kalman Filter is this fairly technical paper by two people from North Carolina. I found this paper to be very useful to refresh my memory on this topic. If you know the Kalman Filter well, this paper is too trivial for you. If you don’t know it and are not technically inclined, this paper is probably too advanced, but the example is still pretty good.

### Acknowledgements

Most importantly, I owe thanks to Dick Donald for introducing me to this topic many years ago. Second, I want to thank George Pinder for reviving my need to know this stuff and to one of his students, Graciela Herrera, for helping me to relearn it quickly. Finally, I make a second mention of this University of North Carolina paper, by Welch and Bishop, who did a good job with it. I hope that this work adequately reflects these people’s abilities to teach it.

## Git Branching Strategy

A collection of git branching strategies.

• Git Flow Workflow One of the first formal proposals for structuring Git branches. Ideal for projects that have a scheduled release cycle. Source: git flow workflow

• A Simple Git Branching Model A simple git branching model. Source: a simple git branching model

1. master must always be deployable
2. all changes made through feature branch (pull request + merge)
3. rebase to avoid/resolve conflicts; merge into master
• GitLab Flow A simpler branching alternative to the git flow workflow. Source: GitLab Flow

• GitHub Flow A lightweight, branch-based workflow. Source: GitHub Flow

• Release Flow A branch-based workflow promoted by Microsoft. Source: Release Flow

## Get Things Done in the Shortest Time

Some examples of people accomplishing ambitious things in very little time.

Constraints foster creativity (if the right people are on the team). When JFK announced (in 1961) the USA’s intention to put a man on the moon before the end of the decade, he set a time constraint that could not be moved without considerable global embarrassment. In less than 9 years, it was accomplished. Perhaps the formula is: a deadline + the right people + leadership support.

• BankAmericard. Dee Hock was given 90 days to launch the BankAmericard (which became the Visa card), starting from scratch. He did. In that period, he signed up more than 100,000 customers. Source: Electronic Value Exchange.

• P-80 Shooting Star. Kelly Johnson and his team designed and delivered the P-80 Shooting Star, the first jet fighter used by the USAF, in 143 days. Source: Skunk Works.

• Marinship. “Shipyard construction was begun promptly after a telegram from the United States Maritime Commission was received by the W. A. Bechtel Company. The telegram was received on 2 March 1942, the Sausalito site selected on 3 March, and a proposal to build the shipyard was presented in Washington, DC on 9 March. Ten minutes into the presentation U. S. Maritime Commission administrators told the W.A. Bechtel Company to build the shipyard. Physical construction began on 28 March. Construction start was delayed two weeks to allow the 42 families living on Pine Point, which was scheduled to be demolished to build the shipyard, to move.” The first ship was completed on September 15 of that year, 197 days after receiving the telegram. Source: Marinship on the Fast Track.

• The Spirit of St. Louis. In 1927, Donald Hall and Charles Lindbergh designed and built _Spirit_ in 60 days. “To determine the amount of fuel the plane would need, Lindbergh and Hall drove to the San Diego Public Library at 820 E St. Using a globe and a piece of string, Lindbergh estimated the distance from New York to Paris. It came out to 3,600 statute miles, which Hall calculated would require 400 gallons of gas.” Source: Ryan Airlines gave Lindbergh wings.

• The Eiffel Tower. The Eiffel Tower was built in 2 years and 2 months; that is, in 793 days. When completed in 1889, it became the tallest building in the world, a record it held for more than 40 years. It cost about $40 million in 2019 dollars. Source: Eiffel’s Tower.

• Treasure Island. In 1935, San Francisco decided to commemorate the completion of the Golden Gate and Bay Bridges by building a new island as a home for the Golden Gate International Exposition. Treasure Island, a 400 acre man-made island in the middle of the San Francisco Bay, was the result. Construction started in 1935 and was complete by March 1937. Source: San Francisco Fair: Treasure Island.

• Apollo 8. On August 9 1968, NASA decided that Apollo 8 should go to the moon. It launched on December 21 1968, 134 days later. Source: Apollo Spacecraft Chronology.

• The Alaska Highway. Starting in 1942, 1,700 miles of highway were built over the course of 234 days, connecting eastern British Columbia with Fairbanks, Alaska. Source: The Alaska Highway.

• Disneyland. Walt Disney’s conception of “The Happiest Place on Earth” was brought to life in 366 days. Source: Under Construction: A look inside Walt Disney’s Disneyland.

• The Empire State Building. Construction was started and finished in 410 days. Source: Empire State Building.

• The Berlin Airlift. On 24 June 1948, the Soviet Union initiated a blockade of Berlin. Two days later, the Berlin Airlift commenced. Over the following 463 days, the US, the UK, and France flew 277,000 flights with 300 aircraft to deliver the supplies required to support 2.2 million Berlin residents. On average, a supply aircraft landed every 2 minutes for 14 months. Source: The Candy Bombers.

• The Pentagon. The construction of the world’s largest office building was led by Brehon Somervell. The decision to proceed with the project was made on a Thursday evening. Initial drawings were completed that Sunday. Construction started two months later, on September 11 1941, and was finished on January 15 1943, 491 days later. When asked when something was needed, Somervell’s go-to response was “the day before yesterday”. Source: The Pentagon.

• Boeing 747. Boeing decided to start the 747 program in March 1966. The first 747 was completed on September 30 1968, about 930 days later. Source: Boeing 747: A History.

• The New York Subway. The first contract was awarded on February 21 1900. 28 stations opened and general operation commenced on October 27 1904, 4.7 years later. In April 2000, the MTA decided to build the Second Avenue Subway. The first phase, with 3 stations, opened on January 1 2017. Source: The New York Times.

• TGV. On April 30 1976, the French government approved a plan to build a high-speed rail link between Paris and Lyon, the first high-speed rail line in Europe. This line was to use completely new electric locomotives, also to be developed in France as part of the project. The ensuing line opened on September 26 1981, 1,975 days later. On September 24 1996, the California High-Speed Rail Authority was formed. The completion of the first phase of California’s high-speed rail project, a line connecting San Francisco and Anaheim, is currently estimated to happen in 2033, 37 years (i.e. around 13,000 days) after the authority was formed. Source: On the Fast Track.

• USS Nautilus. The US decided to build the world’s first nuclear submarine in July 1951. It entered service on September 30 1954, 1,173 days later. Source: Cold War Submarines.

• JavaScript. Brendan Eich implemented the first prototype of JavaScript in 10 days, in May 1995. It shipped in beta in September of that year. Source: Brendan Eich’s history of the language.

• Unix. Ken Thompson wrote the first version in three weeks. Source: UNIX: A History and a Memoir.

• Shenzhen. In one year, between 1998 and 1999, Shenzhen added 1 million residents (a 22% increase), growing from 4.4 million to 5.4 million people. Source: PopulationStat.

• Amazon Prime. Amazon started to implement the first version of Amazon Prime in late 2004 and announced it on February 2 2005, six weeks later. Source: The making of Amazon Prime.

• Luckin Coffee. Luckin Coffee was founded in October 2017. Their first stores opened on January 1, 2018. On September 3 2018, 245 days later, they passed 1,000 directly-operated stores in China. Source: Why is Luckin Coffee the best experimental field for Tencent Smart Retail?

• Van Ness Bus Lane. San Francisco proposed a new bus lane on Van Ness in 2001. Its opening was recently delayed to 2021, yielding a project duration of around 7,300 days. “The project has been delayed due to an increase of wet weather since the project started,” said Paul Rose, a San Francisco Municipal Transportation Agency spokesperson. The project will cost $310 million, i.e. $100,000 per meter. The Alaska Highway, mentioned above, constructed across remote tundra, cost $793 per meter in 2019 dollars.

## People in Curve

If we plot one person's skills on a graph, across the range from strength to weakness, the result could well be a bell curve: assume most of the person's skills sit at an average level, with a few skills out on either long tail.

When two people are put together, if they have a similar set of strengths and weaknesses, they have a higher chance of resonating: strengths become stronger, weaknesses become weaker.

However, when two people are too distant from each other, the pairing might end up weakening the strengths and reinforcing the weaknesses.

This is a static view; the reality is likely more dynamic in the long run.

The curves start where they are, but it is up to the people to draw the future.
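As a toy illustration, the two-curve picture above can be sketched numerically. Everything here is an assumed model (Gaussian-shaped skill curves, simple addition standing in for "resonance"), not a claim about real teams:

```python
import numpy as np

def skill_curve(x, center, spread=1.0):
    # bell curve: skill strength across a range of skills,
    # peaking at the skills the person is best at
    return np.exp(-((x - center) ** 2) / (2 * spread ** 2))

x = np.linspace(-5, 5, 1001)
alice = skill_curve(x, center=0.0)
similar_partner = skill_curve(x, center=0.3)   # strengths mostly overlap
distant_partner = skill_curve(x, center=3.0)   # strengths barely overlap

# "resonance": where both curves are high, the pair's combined
# curve peaks well above either individual curve
pair_similar = alice + similar_partner
pair_distant = alice + distant_partner
print(pair_similar.max(), pair_distant.max())
```

With these assumptions the similar pair's combined peak is markedly higher than the distant pair's, which matches the resonance intuition; a fuller model would also have to capture how the weaknesses combine.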

## Email Writing

I receive a couple of emails daily. If not handled carefully, I could end up spending a lot of time reading emails where I cannot add value, or wasting other people’s time. Below is a good example of how to make an email more crisp.

From: Jack Dorsey
To: All Employees
Date: October 13, 2015

Team,

We are moving forward with a ~~restructuring of our workforce~~ cutting our staff so we can put our company on a stronger path to ~~grow~~ spend the money better. Emails like this are usually riddled with corporate speak so I’m going to give it to you straight.

The team has been working around the clock to ~~produce a streamlined roadmap for~~ simplify our plans for Twitter, Vine, and Periscope and they are shaping up to be strong. The roadmap is focused on ~~the experiences which will have the greatest impact~~ doing stuff we hope people will like. We launched the first of these experiences last week with Moments, ~~a great beginning, and a bold peek into~~ a pretty big gamble on the future of ~~how people will see what’s going on in the world~~ Twitter.

The roadmap is also a plan to change how we work, and what we need to do that work. Product and Engineering are going to ~~make the most significant structural changes to reflect our plan ahead~~ bear the brunt. ~~We feel strongly that Engineering will move much faster with a smaller and nimbler team~~ We’ve got way too many engineers ~~while remaining the biggest percentage of our workforce~~. And the rest of the organization ~~will be streamlined in parallel~~ and once we’ve cut that group we’ll have too many of everybody else.

So we have made an extremely tough decision: we plan to ~~part ways with~~ fire up to 336 people from across the company. ~~We are doing this with the utmost respect for each and every person.~~ But it’s not their fault; we hired them when we shouldn’t have. Twitter will ~~go to great lengths to take care of each individual by providing generous exit packages~~ give them decent severance and help finding a new job.

Let’s take this time to express our gratitude to all of those who ~~are leaving us~~ we are firing. ~~We will honor them by doing our best~~ Letting them go will make it easier for us to serve all the people that use Twitter. ~~We do so with a more purpose-built team which we’ll continue to build strength into over time, as we are now enabled to reinvest in our most impactful priorities~~ Having shed the people we don’t need, we’ll have the money to hire the people we really want.

Thank you all for your trust and understanding here. This isn’t easy. But it is right. ~~The world~~ Our shareholders needs a strong Twitter, and this is another step to get there. As always, please ~~reach out to~~ contact me directly with any ideas or questions.

Jack

I spent some quality time with my daughter last weekend, and it is always interesting to observe how my baby girl tries to convey and comprehend complicated information with a limited vocabulary.

I’m sure no expert, but it feels like the baby is running a very sophisticated map-reduce process when it comes to communication. When the baby speaks, there is a high chance she is not speaking her whole mind, naturally avoiding anything too complicated: that is the map. And she uses her own way to simplify the world into something she can understand: that is the reduce.
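Taken literally, her strategy can even be sketched as code. The simplification table and the five-letter cutoff below are purely invented for illustration:

```python
from functools import reduce

# A toy, literal rendering of the baby's map-reduce process
simplify = {"approximately": "about"}

def baby_map(word):
    # map: swap a complicated word for a simpler one, or drop it
    # entirely ("not speaking her mind") when no mapping exists
    if word in simplify:
        return simplify[word]
    return word if len(word) <= 5 else None

def baby_reduce(acc, word):
    # reduce: fold the surviving words into one short message
    return acc if word is None else f"{acc} {word}".strip()

summary = reduce(baby_reduce, map(baby_map, "approximately ten fingers".split()), "")
print(summary)  # → about ten
```

The complicated input comes out as a shorter, lossy message, which is roughly what the conversation below shows.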

This also poses a challenge for me, as I need to be very selective about which words to use when trying to get a message into her mind.

For example, this is a typical conversation I had in the weekend:

me: hey, baby, can you count from 1 to 10?

baby: 1, 2, 3, 4, 5, ........

me: hey, girl, how many hands do you have?

baby spent some quality time looking at her hands, and I could literally see her mentally counting 1, 2, … then she proudly called out with strong confidence: ~three

I could be overthinking this. But I feel this scenario does not only apply to babies. When people deal with more complexity than they can handle, I see chances that they use a similar strategy. Either they are not speaking their mind, or they draw wrong conclusions based on misleading facts, or, even worse, cause cascading damage.

I guess it takes practice to speak and write precisely. Just some interesting observations to keep note of here. I might come back to this later if I can find an achievable solution.

## News Sites I Follow

This is a curated list of sources I follow regularly. Sites are put into 3 categories: Recommend, Watch, Remove.

• TBD

## Learn to Speak & Write

The two most important things for most people to learn are: how to speak, and how to write. If you are able to do both well, you are invincible; you will be able to influence people’s minds and their behavior. Unfortunately, there isn’t much people can learn in public. However, there seem to be some patterns; below is a list of lessons I learned myself. Hope it is useful. Most apply to both speaking and writing.

1. deliver the most precise and concise message, using the fewest words.

2. stay on message; always have a purpose.

3. never be afraid of silence, or of not being able to use many fancy words. being able to explain things briefly is good.

4. state the conclusion before describing the solution.

5. when in conflict, the longer you keep the ball in play, the more you learn from each other.

6. frame the problem in a way that people care about, so that the solution hits them.

7. keep sentences short; don’t use a long sentence unless really necessary (or in a way you are confident you can master).

8. state the problem instead of describing the solution.