
Between low stakes live and microstakes online, I have won somewhere around 15k in profit playing poker.

I used to have a lengthier post full of calculation and balance. This is a much shorter post, more practical and generalized.

should I be playing

I recently came across a description of poker tilt in terms of Kahneman’s two systems of thinking. The first system is automatic, fast, and emotional. The second is deliberate, slow, and conscious. We use the first system when we are tilted, abandoning rational thought and becoming disillusioned gamblers.

I think about 10% of players are winners in the long run. In a small enough sample, anyone can be winning. Looking at this variance calculator, we’d need hundreds of thousands of hands to get an approximate win rate. And even then, a theoretically winning player could still be losing. Therefore, poker makes an excellent hobby but a poor source of income.
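
The "hundreds of thousands of hands" claim can be sanity-checked with a back-of-the-envelope normal approximation. The numbers below (a standard deviation of 100bb per 100 hands, and wanting the win rate pinned down to ±5bb/100) are hypothetical but typical figures, not taken from any particular calculator.

```rust
// Rough estimate of how many hands are needed before a measured win
// rate is statistically meaningful, assuming results per 100 hands are
// approximately normal with the given standard deviation.
fn hands_for_confidence(sd_bb_per_100: f64, margin_bb_per_100: f64) -> f64 {
    // 95% confidence: 1.96 standard errors must fit inside the margin.
    let z = 1.96;
    let samples_of_100 = (z * sd_bb_per_100 / margin_bb_per_100).powi(2);
    samples_of_100 * 100.0
}

fn main() {
    // sd of 100bb/100, win rate pinned to +/- 5bb/100
    let hands = hands_for_confidence(100.0, 5.0);
    println!("hands needed: {:.0}", hands); // on the order of 150k hands
}
```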

Lastly, money. How much money should I have to play? We can't make any bets if we don't have money. To estimate the risk of ruin, I personally vouch for just taking a sample of your last n sessions, getting a mean and variance, and seeing how many standard deviations we are from the mean. We answer questions like “are these games too big for me?”. The approximation is F(n) = e^(-2nm/v), where n is the bankroll, m the mean profit per session, and v the per-session variance.
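
A quick sketch of that formula; the bankroll, win rate, and standard deviation below are made-up illustrative numbers.

```rust
// Risk of ruin per the formula in the text: F(n) = e^(-2nm/v), where n
// is the bankroll, m the mean profit per session, and v the
// per-session variance.
fn risk_of_ruin(bankroll: f64, mean: f64, variance: f64) -> f64 {
    (-2.0 * bankroll * mean / variance).exp()
}

fn main() {
    // Hypothetical: $5000 bankroll, $50/session mean, $500/session sd.
    let ror = risk_of_ruin(5000.0, 50.0, 500.0 * 500.0);
    println!("risk of ruin: {:.1}%", ror * 100.0); // e^-2, about 13.5%
}
```

A deeper bankroll or a higher win rate shrinks the exponent, so the risk falls off exponentially.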

This is a thread from Linus, one of the greatest cash game players of all time, before his ascent to greatness. To me, he just seems indomitable even during his rise, ignoring the naysayers, always inquisitive and enjoying the game. And that’s how I think the game should be played.

pot odds

People say poker is a game for the math-inclined, but the only equation you need is pot odds.

  • Pot odds is risk/reward, or b/(p+2b); this equation is really only applicable on rivers. For example, when villain bets half pot, the pot odds are 0.25: if our call wins more than a fourth of the time, it’s printing. Sometimes a villain will give us a great discrepancy between pot odds and how often we should be calling. These villains are called fish.
  • Much less useful than pot odds, but when constructing a bluff on a river with a range advantage, the theoretical bluff-to-value ratio should be b/(p+b). I mainly use this equation to keep myself from over-bluffing. For example, a half-pot sized bet should have a bluff-to-value ratio of 1 to 3. Caveats: we need to have a range advantage, and this equation doesn’t account for villain having traps and raising. Consequences: this ignores exploit sizings … the greater the bet size, the greater the EV gained; however, even though a bigger bet theoretically contains more bluffs, when you actually have the nuts, villain is never calling your 3x shove on the river. (This video on trapping frequency is also a relevant watch.)
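
Both equations are two-liners; the half-pot numbers below match the examples above (b is the bet size, p the pot before the bet).

```rust
// Pot odds: the equity needed for a call to break even.
fn pot_odds(b: f64, p: f64) -> f64 {
    b / (p + 2.0 * b)
}

// Theoretical bluff-to-value ratio for a river bet with range advantage.
fn bluff_to_value(b: f64, p: f64) -> f64 {
    b / (p + b)
}

fn main() {
    // Half-pot bet: need to win over 25% of the time to call, and
    // roughly 1 bluff for every 3 value bets when betting ourselves.
    println!("pot odds: {}", pot_odds(0.5, 1.0));          // 0.25
    println!("bluff:value: {}", bluff_to_value(0.5, 1.0)); // ~0.333
}
```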

preflop

  • In low stack-to-pot ratio situations, we want a hand like AQo or 77 for immediate showdown value. In high stack-to-pot ratio situations, we can consider hands like 98s or A7s to cooler/stack someone.
  • We should play more hands in position and less hands out of position. Ideally, in terms of where we sit, we want (1) aggressive players to the right of us so we can react and (2) passive players to the left of us so our aggression can go unchecked.
  • I just checked my preflop values for a site, my VPIP (voluntarily put in pot %) is 26 and my PFR (preflop raise %) is 18. I don’t think these numbers will deviate very much. A low VPIP/PFR is easier to play; the solver tells us a range advantage equals carte blanche to start blasting. You can consider playing even less hands due to rake; I looked at some solver outputs for 5-7x opens in casino rake environments, and there are situations when we should be folding even JJ and TT to a single early position raise.
  • Deviating preflop from optimal solver results is fine if you have a plan. For example, when people behind you only 3bet with premiums, it’s fine to just call in position. I also employ a light 3bet squeeze when there is a raise from a weak range and too many callers, specifically isolating a weak player.

general

There are some wizards of the game, but I think a general sense of how to play optimally is good enough.

  • In general, tight aggressive is the correct style to play.
  • People at my stake tend to call with draws and raise with made hands. Bet big to get calls from draws. When bluffing rivers after draws miss, we don’t need a large sizing to get those missed draws to fold.
  • Raises on the river are severely under-bluffed.
  • Most EV is won by being in position on fish, usually a LAG type of player.
  • There are some BXB (bet flop, check turn, bet river) and BB lines that are going to be profitable against most population villains, especially on boards like dry paired boards (villain unlikely to have connected) or blind vs blind (villain’s range too wide). However, I personally like to be villain-specific when going for these red-line exploits.
  • Raises are going to generate more folds than bets. A small raise can be really effective against a weak polar range.
  • Bigger bets are an exploit against villains who over-call.
  • Some players will respond to absolute-sized bets more so than relative-sized bets. E.g., a 300 dollar bet into a 600 dollar pot might be very big for someone. Sometimes on rivers, you might need to size down to get called.
  • I have a smaller sizing when I am trying to be raised or if villain is under-calling. I have a larger sizing to get called by lower equity hands or to generate folds.
  • More important than balance is knowing your image and knowing your opponent.

multiway

Many hands are played multi-way with nonstandard sizing.

  • Use smaller bet sizing and bluff less in general multi-way. Facing a half pot bet against one other player, we should call 66% of the time; against 5 players, we should call 20% of the time. Realistically, this number ought to be less. Half pot is big multi-way, but over-fold against even bigger bets.
  • If we have a very strong hand and the board is likely to be bet, we just want to check (to raise) to cooler someone. If it’s unlikely to be bet, we want to bet ourselves. If we have a good hand (but not a very strong hand), we can consider betting ourselves so we can react to a raise. In general, a lot of fishy villains telegraph hand strength from bet sizing or raises. You want to play a reactionary game against fishy players. Let them act, and then you get to react almost perfectly.
  • The person immediately calling a bet in a multi-way pot has a strong range. In most situations, I usually just fold middle pair or worse.
  • Obviously, shift your range to be value heavy against villains who over-call. But since people over-fold to raises, it’s often good to check-raise when we block a really strong hand on boards where villain is capped or doesn’t want to put more money in.
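
The 66%/20% calling figures fall out of a simple model: a bluff of size b into pot p must get through all n players, so the required defense splits across the group. A sketch, assuming each player defends equally:

```rust
// For a bluff to break even it needs everyone to fold b/(p+b) of the
// time. If each of n players folds with probability x, all fold with
// probability x^n, so each player only needs to call 1 - (b/(p+b))^(1/n).
fn call_frequency(b: f64, p: f64, n: f64) -> f64 {
    1.0 - (b / (p + b)).powf(1.0 / n)
}

fn main() {
    // Half-pot bet heads-up vs. five-way, matching the numbers above.
    println!("{:.0}%", call_frequency(0.5, 1.0, 1.0) * 100.0); // ~67%
    println!("{:.0}%", call_frequency(0.5, 1.0, 5.0) * 100.0); // ~20%
}
```

The equal-defense assumption is a simplification; in practice position and range composition shift the burden.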

Actual strategies are villain specific and more nuanced. But this should be a good start.

live tells

I have been getting better at live tells. Here are some examples:

  • Villain reaching for chips defensively when you start to bet is weak.
  • A fishy player might say something about how weak they are; statements like “I’m on a draw” or “I was afraid you’d snap call” are usually signs of strength.
  • I used to think checking their cards is a sign of weakness (check suits/draws). However, there are some really good players that just do this too.
  • A player breathes heavily after both a bluff and a value bet. Someone with a value bet might start to become more comfortable after a minute. Someone nervous might have really stiff actions and speech.
  • A player making a nervous move when they think you are not watching is much more indicative of weakness than if they act in an obvious manner.
  • I have some subtle fake tells that I give out: fumbling chips and stopping, holding chips tightly, holding my breath, intermittently gulping. I don’t do anything special when I am bluffing.
  • Lastly, never believe what people have unless they show.

other

  • Be aware of collaboration. When two players always enter the pot together and there are suspicious betting patterns (to bully a third person out of the pot) or they show down very weird hands, it could be a sign.
  • There could also be signs of cheating. I have been cheated twice in home games (once in Asia and once in NYC); I don’t play home games anymore. I think the most likely sources of cheating are dealer card mechanics or marked cards.
  • PT4 is pretty good software to collect your hands. There are nice discord channels you can find to talk about hands.

E-commerce is growing exponentially. I frequently see overnight successes of such companies through viral marketing; globalization and software have made it trivial for someone to open a store and saturate a product.

I spent about 1000 USD with 300 in annual recurring costs (which I eventually stopped) selling on Shopify with ads on Facebook Ads, earning about 200 annually in revenue. I also ran some other experiments trying to move bulk on Amazon and setting up ads on Tiktok. I lost money, and I consider it just education fees.

I have another product on an unmentioned platform, with about 1000-3000 in annual revenue, netting half of that as profit (this online store is ongoing).

Here are some things learned:

  • All businesses are just trying to answer one question: “How do I get someone to give me money?” I’m very logical, so I have always thought the answer to that question was the exchange of utility. However, I have learned that in markets where there is a general surplus, the choices we make are often emotional.
  • Everything we buy is a choice, and almost everything we buy is due to interruption marketing. Large aggregators of data make extremely accurate guesses on what we might like, and money is spent so we make decisions we might not otherwise have made.
  • A good market fit means you’ll pay less to get someone to buy something. A good brand or story means you can get someone to pay more to buy something.

It was kind of costly to run these experiments, but it’s educational to see the systems that sellers use. What surprised me the most in all of this was that a disproportionate amount of money is spent on acquisition (and not on the actual product). A pair of shoes can sell for $100 on a shelf but cost only $10 to make. With future advances in automation, AI content, and consumer data collection, it can cost even less.

I’ve been getting some FOMO and mixed thoughts on BTC. This is just a quick rundown to help me think through whether I should invest.

The weakening fiat

The argument I see the most often for crypto is that it’s a hedge (reserve asset) against a diluted fiat currency.

The US is using an increasing proportion of capital to service debt year over year. We are not in a recession yet (debt:gdp – note 2008 and 2020), which means that the US could potentially increase the money supply even more. In the short term, increasing the money supply increases liquidity and asset prices rise. However, on a longer horizon, historically, when inflation is high, people like to buy supply-limited stores of value like precious metals or real estate.

Money will flow from the stock market into government securities such as t-bills as the US services its increasing debt and manages interest rates. As the US spends more and more money on foreign affairs and the military to preserve its status as a leading superpower (e.g. billions of dollars for Ukraine), it could be reminiscent of a downfall similar to that of the Dutch in the 1720s or the UK in the 20th century. (1)

Is crypto a reserve asset? I don’t think so yet. Within a few years, there will be overall inflation and circumstantial defaults/liquidity risks. I think it unlikely that crypto will be a hedge against inflation (like gold) in the current cycle, as people will sell crypto in times of less liquidity. Crypto peaked in Dec 2017, dropping from around 20k to 3k. I think we are at a similar point in the cycle.

Instability and exodus

The global narrative of crypto is strengthened by globalization and pessimism. The wealthy will seek less tax and safety, while the less affluent seek freedom and reduced income inequality. In recent history, crypto has been on the forefront of many monetary exoduses due to national pressures. For reasons mentioned above (more investment into T-bills and less into the stock market, higher taxes, etc.), companies may also want to move outside of the US to seek more benefits.

Will there be a lot of struggles with the nation-state in the future? Maybe (1) (2). Maybe not. Hopefully it won’t be violent, as there is a general surplus. Moreover, great social inequality already exists in the modern era without violent revolutions.

What about nations? Historically speaking, changes in the global order result in war; if this is no exception, what would happen to the value of crypto in the wake of fallen nations?

A risky investment

I see a lot of staunch advocates on Twitter and Reddit saying that it’s risky to not hold crypto, that fiat money will be worthless in the future. After reading many articles and news on crypto, buying some on an exchange, and looking into some crypto projects, I still think that it’s a risky investment.

On the consumer level, it’s risky because your access to it involves remembering a secret recovery phrase. Most entities like BlackRock and Fidelity just offer it through an ETF, which I think is a substandard way of owning the asset: not only is there a 1.5% fee, it’s not FDIC insured, and you don’t get rights to the underlying asset.

It’s also risky because of the prevalence of “dumb money”. The FTX exchange collapsed as a result of fraud and holding illiquid, volatile assets (a liquidity mismatch not unlike Silicon Valley Bank’s). There was also the Terra/Luna fraud incident. Right now, the price of Solana is buoyed in part by rug-pull NFT schemes. I was unaware of the extent of crypto-related scams until I conducted my research.

It’s risky for the nation-state as well. There is a lot of regulation on crypto, with countries like China, Bangladesh, Qatar, and Morocco banning it outright. Even many countries that have not banned crypto hold oppressive laws, perhaps under the guise of preventing money laundering.

The adoption of crypto

I think many countries hold public and private positions on crypto. For example, the government of Bhutan has been secretly mining BTC; if a small country like Bhutan has been mining BTC privately, and if you look at the current very high hash-rate and NVIDIA stock prices, you can only imagine the amount of mining done by larger countries.

In September 2023, 29% of all people in India had crypto, 18% in HK, and 13% of Americans, growing at around 34% year over year. The numbers of active and new addresses are increasing as well. I think this gives teeth to the idea that shutting it down in one country does not kill the narrative.

Tiny data exploration

I downloaded the entire blockchain of an altcoin and did some pool mining.

I also did some really rough multiple linear regression just to get a general idea of what determines, or is well-correlated with, BTC price. I created a tiny dataset consisting of a bunch of factors from the first half of 2022. I had some prices of securities, some momentum indicators, FRED economic data including GDP and liquidity, some free sentiment data, google trends data, general blockchain data like hash-rates, etc. In my data, I found that Coinbase stock price and google trends for the phrase ‘crypto’ were the features most strongly correlated with BTC price.
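
The core of that kind of feature screening is just pairwise correlation. A minimal Pearson correlation sketch; the two series below are made-up numbers for illustration, not real BTC or Coinbase prices.

```rust
// Pearson correlation coefficient between two equal-length series.
fn pearson(x: &[f64], y: &[f64]) -> f64 {
    let n = x.len() as f64;
    let mx = x.iter().sum::<f64>() / n;
    let my = y.iter().sum::<f64>() / n;
    // covariance numerator and the two variance terms
    let cov: f64 = x.iter().zip(y).map(|(a, b)| (a - mx) * (b - my)).sum();
    let vx: f64 = x.iter().map(|a| (a - mx).powi(2)).sum();
    let vy: f64 = y.iter().map(|b| (b - my).powi(2)).sum();
    cov / (vx.sqrt() * vy.sqrt())
}

fn main() {
    // Two hypothetical, strongly co-moving series.
    let btc = [40.0, 42.0, 41.0, 45.0, 47.0];
    let coin = [180.0, 190.0, 186.0, 205.0, 214.0];
    println!("correlation: {:.3}", pearson(&btc, &coin));
}
```

Correlation of course says nothing about direction of causation, which is why "determines (or is strongly correlated with)" is the honest phrasing.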

I’ll look at some other things in the future to get my feet wetter.

I also moved a small amount of BTC to cold storage, more so just going through the motions to see what it’s like.

Conclusion

Even though I don’t think BTC is the reserve asset people hype it up to be, I am bullish on the idea. The price will fluctuate as people sell and buy in different market conditions. Crypto sentiment and macroeconomic conditions will likely be the most significant factors influencing price. And staying informed is prudent.

I just wanted to grow something.

Peppers are pretty cool; they are easy to grow, self-pollinate, require little space, are productive, and take about 3 or 4 months from seedling to fruit.

I enjoy spicy food, and so I also plan on making some hot sauce and pickled peppers.

Things learned:

  • Even though I live in an excellent pepper-growing climate, cold 50-degree nights contributed to stunted growth.
  • Broad mite infestation needs to be taken care of immediately. A month before fruiting, I discovered some curls on the leaves on my pepper plants. Broad mites are terrible because they suck the sap out of your leaves and are pretty much invisible. This product: [flying skull nuke-em] is only somewhat useful.
  • Some people recommend watering mature plants twice a week, but I should have watered only once a week. Over-watering plus the overall lack of indoor air circulation resulted in plant edema.
  • I probably also didn’t need to go so deep into growing peppers. I consumed tens if not hundreds of hours of pepper content. There are videos on variables affecting germination rates, fertilizers, etc. However, in the end, it really doesn't matter that much, as mother nature will take care of most of it.

varieties

Some varieties I grew: Jalapeno, Cayenne, Jimmy Nardello, Tabasco, Black Pearl, Cayenetta, Santaka, Fish, Jigsaw.

I ended up sharing a lot of my seedlings with friends.

I made a tool to save hands and suggest strategies for playing different hands. This project no longer exists.

Demo:

The first deployment of this was over on GCP. I redid it on Heroku. I used cookiecutter django for easy setup, with mailgun/maildev, sentry, postgres, redis (managed by heroku).

I bought a domain (dysk.app).

I gave up working on this project because the more I learned about poker, the less sense what I was working on made. I wanted to create an app that saves key poker hands and would, in theory, help me play hands like a flow chart.

There are also excellent existing free-tier apps that do what my app set out to do or could pivot to. Why create a lesser version of something that's already so good and available free?

  • GTO Wizard (free) 100bb pre-flop charts and simple solutions.
  • PokerTracker (paid) for poker hand bookkeeping and statistics
  • Poker Bankroll Tracker (free) to record key hands, do bankroll management
  • WASM postflop (free) GTO solver
  • Some MDA apps I think the author doesn’t want me sharing, plus some MDA data I was parsing myself (parsing over 1M hands to see where we can exploit).

[This post was originally written in 2022.]

I spent 90% of my time doing stuff that I ended up scrapping.

Products like Zapier are restrictive (no external libs, at least at the time of writing this) and also very expensive; the cost is on another order of magnitude.

I also could have set up automation with mac crons. However, this comes with a hard restriction: my machine can't be shut down or sleeping.

There are out-of-the-box solutions like AI Platform Notebooks or Colab for writing and running code with live machines. Google also offers other managed products like Cloud Composer, not to mention the plethora of other non-Google solutions out there. These are great, I even tried a few of these out, but for one reason or the other (wrong use case, too many bells and whistles, expensive, restrictive, etc.), I decided to play with other toys.

I almost went with Papermill as writing code in Jupyter is easy and fast. Papermill gives .ipynb files a level of productionization by allowing parameterization and execution. You can run something like papermill gs://bucket-name/input.ipynb gs://bucket-name/output.ipynb -f parameters.yaml to run and store (integrations include Google Cloud Storage).

There are also best practices that go with productionizing Jupyter (testbook for tests, nbdime for diffs, etc.). Jupyter is super easy to write in, and it supports many different programming languages. This is essentially the process that Netflix and Bilibili use (+ this talk and this post).

The tradeoffs for using Jupyter are speed, size (install notebook/kernel, papermill, apis), and money (VMs can be expensive). I created an MVP for this (pubsub -> papermill).

In the end, the stuff that was productionized was just a rust API, a tiny CLI tool in rust with reqwest and clap that is basically a wrapper around another library.

// Endpoint and OrderBody are types from the wrapped library
fn buy_asps(reqs: Endpoint, body: OrderBody) {
    let _response = reqs.post_request("/orders", body);
}

When we get an event from PubSub, we trigger this business logic with a command like cargo run -- --date 20210517 --function account --is_live. There is enough flexibility to run different functions on different dates, using a live account or a paper account, and even more, depending on the PubSub signal. The signal is created by Cloud Scheduler, which is basically a cron. All of this is controlled in GCP's UI. The binary gets run by Cloud Functions.
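
The real binary uses clap for this; below is a dependency-free sketch of the same flag handling with std::env::args, with flag names taken from the example invocation (the parsing logic itself is my assumption, not the original tool's).

```rust
use std::collections::HashMap;

// Parse "--key value" pairs plus the boolean "--is_live" switch.
// Assumes every non-boolean "--key" flag is followed by its value.
fn parse_flags(args: &[String]) -> (HashMap<String, String>, bool) {
    let mut flags = HashMap::new();
    let mut is_live = false;
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "--is_live" => is_live = true, // boolean switch, no value
            key if key.starts_with("--") => {
                flags.insert(
                    key.trim_start_matches("--").to_string(),
                    args[i + 1].clone(),
                );
                i += 1; // skip the consumed value
            }
            _ => {}
        }
        i += 1;
    }
    (flags, is_live)
}

fn main() {
    let args: Vec<String> = std::env::args().skip(1).collect();
    let (flags, is_live) = parse_flags(&args);
    println!("flags: {:?}, live: {}", flags, is_live);
}
```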

Disclosure: this is no longer running.

This project costs less than 30 cents a year and is basically just a lambda function. It was mostly an exploratory project to learn about different tooling.

After some notes in rust, I created a rust app deployed on Google Kubernetes Engine (GKE).

Here is some of the tooling/stack I went with. I started with the basic diesel example in actix/examples/diesel to bring up something with basic get/post features.

  • Dev Tooling
    • Use systemfd and cargo-watch to automatically rebuild your code and watch for changes. systemfd works by creating a parallel process and then works in conjunction with cargo watch to reload your app whenever you save. Sometimes the reload doesn't work as intended; I had a weird bug where I had to restart whenever I added a new endpoint.
  > cargo install systemfd cargo-watch
  > systemfd --no-pid -s http::5000 -- cargo watch -x run
  

I also created a tiny frontend to support the functionality. I also added auth so that not just anyone can write to it. Ask me for the auth token if you'd like to play with it!

  • Create:
  > curl -S -X POST --header "Content-Type: application/json" --data '{"text":"Hello World!"}' http://localhost:8080/post --header 'Authorization: Bearer ######'
  
  {"id":"3afdebd0-673f-4a93-96f0-69e2ab99c756","text":"Hello World!"}
  
  • Get:
  > curl -X GET http://localhost:8080/post/3afdebd0-673f-4a93-96f0-69e2ab99c756 --header 'Authorization: Bearer ######'
  
  {"id":"3afdebd0-673f-4a93-96f0-69e2ab99c756","text":"Hello World!"}
  
  • List:
  > curl -X GET http://localhost:8080/post/list --header 'Authorization: Bearer ######'
  
  [{"id": "3afdebd0-673f-4a93-96f0-69e2ab99c756", "text": "Hello World!"}]
  
  • Delete
> curl -X DELETE http://localhost:8080/post/3afdebd0-673f-4a93-96f0-69e2ab99c756 --header 'Authorization: Bearer ######'

{"id":"3afdebd0-673f-4a93-96f0-69e2ab99c756","text":"Hello World!"}

deployment

We dockerize our application and load it into gcp's container registry. Cool!

> docker run gcr.io/rust-post/rust-post-crud:v1
Starting server at: 127.0.0.1:8080

I clicked some buttons in the gcp UI, mapped the LoadBalancer IP to an A record of my domain, and it just worked, live on rust.bwang.io.


Below are some notes I took when I was looking into rust, no idea where else to post it.

Rust comes included with some nice modern tooling. Cargo is the dependency manager and build tool. Rustfmt is like gofmt, opinionated coding style across developers.

I thought that it could also be a good opportunity to document and learn.

1. Rust

I'm following the docs from doc.rust-lang.org with references from other parts of the internet. There are also rustlings and rust-lang/examples that I'm looking at for code examples of popular tooling.

1.1 Hello World

Do cargo new to initiate a project, and cargo run (builds if there are diffs, then runs) to see what's up. The Cargo.toml file is called the manifest. You can use it as a dependency manager, add meta information, specify build details (path, tests, etc.), and more. The cargo command takes advantage of this manifest file to coordinate more complex projects (as opposed to just using rustc).

Other information:

  • Use cargo check to see if your code compiles (faster than actually building). Debug builds are stored in ./target/debug; release builds are stored in ./target/release.
  • The main function is the entry point into a program.
  • Use let to assign variables. Variables are immutable by default; use mut to make a variable mutable. Apparently there are a lot of nice things for handling reads by reference in rust. We'll see about that later.
  • std::io::Result is the Result type specialized to I/O errors. Result has two variants (the enums Ok and Err); .expect() returns the Ok value or panics with the given message on an Err.
  • std::cmp::Ordering is another enum that returns Less, Greater, or Equal when you compare two values. The match expression uses arms, similar to a case statement. This seems to be a common pattern: you can combine Result and match for error handling. See below (note that parse converts a string to the annotated type, in this case u32):
  // example 1
  let guess: u32 = match guess.trim().parse() {
      Ok(num) => num,
      Err(_) => continue,
  };
  

1.2 Rust Concepts

1.2.1 Mutability

In rust, variables are immutable by default. (Aside: however, because rust allows variable shadowing, we can bind the same name twice to different values at different memory locations; we're basically creating a new variable.) Mutating (mut) an instance in place may be faster than creating a new instance at a different address, but creating another instance can make the code clearer.
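
The shadowing aside can be made concrete; `shadowed` here is just a made-up helper to show the difference.

```rust
// Shadowing vs. mutation: each `let` creates a brand-new binding (even
// with a different type), while `mut` changes the same variable in place.
fn shadowed() -> String {
    let x = 5;
    let x = x + 1;             // shadowing: a new variable; the old x is gone
    let x = format!("{}!", x); // shadowing can even change the type
    x
}

fn main() {
    println!("{}", shadowed()); // 6!

    let mut y = 5;
    y += 1; // mutation: same variable, must keep the same type
    println!("{}", y); // 6
}
```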

Constants (const) are a little different from immutable variables (let). Constant types must be annotated and are evaluated at compile time, whereas a let binding can hold a value computed at run time.
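
A tiny illustration of that distinction (the names are arbitrary):

```rust
// Constants require a type annotation and a compile-time value.
const MAX_POINTS: u32 = 100_000;

fn main() {
    // A let binding can hold something only knowable at run time.
    let arg_count = std::env::args().count();
    println!("max: {}, args: {}", MAX_POINTS, arg_count);
}
```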

1.2.2 Data types

Rust is a statically typed language so we know all the types of variables at compile time. Even when converting a variable to a different type (example 1), we need to annotate it.

Rust has four primitive scalar types: integers (default i32), floats (default f64), bools, and characters. You can do basic mathematical operations on the number types. char literals (4-byte Unicode scalar values) are specified with single quotes; string literals use double quotes.

Rust has two primitive compound types: tuples (fixed length, assorted types), arrays (fixed length, single type). Note that arrays are different from vectors (variable length).

// example 2

// tuples
let x: (i32, f64, u8) = (500, 6.4, 1);
let first = x.0;

// arrays
let mut y: [i32; 3] = [0; 3];
let first = y[0];

1.2.3 Functions

Functions are pretty standard. Statements end with semicolons and don't return values; expressions have no trailing semicolon and evaluate to a value, and a block {} is itself an expression. Functions return their last expression implicitly.

// example 3

fn function(_x: i32) -> i32 {
    let y = {
        let x = 3;
        x + 1
    };

    y // returns y implicitly
}

1.2.4 Ownership

Ownership is one of the key concepts of rust.

For starters, the stack is a region of memory that's last in, first out, where everything has a fixed size. When you exit a function, you pop frames off the stack, and all the variables with them. (C++ uses RAII; rust's analogue is the drop function.) The heap is for data with unknown size at compile time (for example, a String); allocating returns information (a pointer, size, etc.) that is stored on the stack.

For rust, each value has an owner, and there can only be one owner at a time. When the owner goes out of scope, the value goes with it. Let's say we have two variables referencing the same string:

// example 4

// s1 is moved here and is no longer valid afterwards
let s1 = String::from("hello");
let s2 = s1;

// s1 stays valid: the data is explicitly cloned
let s1 = String::from("hello");
let s2 = s1.clone();

Traditionally, s1 and s2 would point to the same data. To ensure memory safety, rust no longer considers s1 valid after s2 is created. There are no double-free errors when there's a one-to-one relationship between references and resources. If we really want to, we can call .clone() on heap variables to copy the data.

Likewise, for exiting and entering functions, the ownership of a heap variable changes, and the previous variable is decommissioned.

A stack value, for example an integer or a memory address, can still be used after being passed along: these types implement the Copy trait and are copied rather than moved, so drop never runs for them.

// example 5

fn main() {
  let s = String::from("hello");
  takes_ownership(s);
  // s is no longer valid here

  let s1 = String::from("hello");
  borrows_reference(&s1);
  // s1 is still valid here
}

fn takes_ownership(some_string: String) {
  println!("{}", some_string);
}

fn borrows_reference(some_string: &String) -> usize {
  some_string.len()
}

But how do we use a heap variable after it enters a function? We can borrow the variable via a reference. Because memory addresses (references) live on the stack and are Copy, ownership of the resource stays in the main function and nothing gets decommissioned. Pretty cool design.

1.2.5 Slice Type

This piece of code returns the first word in a string:

// example 6

fn first_word(s: &String) -> &str {
    let bytes = s.as_bytes();
    for (i, &item) in bytes.iter().enumerate() {
        if item == b' ' {
            return &s[0..i];
        }
    }
    &s[..]
}

fn main() {
    let mut s = String::from("hello world");
    let word = first_word(&s); // immutable reference
    s.clear(); // error!
    println!("the first word is: {}", word);
}

It turns the string into an array of bytes, looks for the b' ' byte, and returns a slice of the string. Notice that in the example above, &s is an immutable reference to the string, while s.clear() takes a mutable reference (it modifies the value) and therefore fails to compile.

Other information:

  • Rust uses snake case for functions and variable names.
  • Double slashes // denote the start of a comment.
  • Control flow is also pretty self-explanatory. Conditions must evaluate to a bool, similar to golang. Rustaceans prefer for loops for safety and conciseness.
  • References are not mutable by default, but they can be made mutable with &mut ...
  • “Only one person borrow at a time” to ensure no data races. You can have multiple immutable references to data, but you can only have one mutable reference to a piece of data. You also can't have a mutable reference while you have an immutable one.
  // example 7
  
  let mut s = String::from("hello");
  let r1 = &mut s;
  let r2 = &mut s;
  
  // fails, simultaneous borrow
  println!("{}, {}", r1, r2);
  
  • Dangling references: you can't have references to nothing; compile error.

1.3 Structs

Structs are similar to tuples. Here's how you would create an instance of a struct:

// example 8

struct User {
    username: String,
    email: String,
    sign_in_count: u64,
    active: bool,
}

let mut user1 = User {
    email: String::from("someone@example.com"),
    username: String::from("someusername123"),
    active: true,
    sign_in_count: 1,
};

user1.email = String::from("anotheremail@example.com");

If the instance is mutable, all the fields of the struct are mutable. You can also have tuple structs: no field names, just types, and instances have the type defined by the struct.
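
A minimal tuple-struct example:

```rust
// Tuple structs: same field types, but Color and Point are still
// distinct types to the compiler.
struct Color(i32, i32, i32);
struct Point(i32, i32, i32);

fn main() {
    let black = Color(0, 0, 0);
    let origin = Point(0, 0, 0);
    // fields are accessed by index, like tuples
    println!("{} {}", black.0, origin.2);
}
```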

1.3.1 Example with Structs

// example 9

#[derive(Debug)]
struct Rectangle {
    width: u32,
    height: u32,
}

impl Rectangle {
    fn area(&self) -> u32 {
        self.width * self.height
    }

    fn can_hold(&self, other: &Rectangle) -> bool {
        self.width > other.width && self.height > other.height
    }
}

fn main() {
    let rect1 = Rectangle {
        width: 30,
        height: 50,
    };
    let rect2 = Rectangle {
        width: 10,
        height: 40,
    };
    println!("rect1 is {:#?}", rect1);
    println!(
        "The area of the rectangle is {} square pixels.",
        rect1.area()
    );

    println!("Can rect1 hold rect2? {}", rect1.can_hold(&rect2));
}

We need to add #[derive(Debug)] so that Rectangle gets the Debug trait for printing structs. Traits are like interfaces. In this example, we also added two methods; the latter takes another Rectangle as a parameter to show the syntax.

Other information:

  • {:#?} for pretty formatting structs
  • If you need to dereference in C/C++, you use the -> operator, but in Rust, referencing and dereferencing in method calls is automatic.
  • Functions in an impl block that don't take &self are called associated functions (e.g. String::from); you call them with :: syntax.
  • You can split up impl blocks

1.4 Enums

Here's how the standard library defines the enum for IpAddr:

// example 10

struct Ipv4Addr {
    // --snip--
}

struct Ipv6Addr {
    // --snip--
}

pub enum IpAddrKind {
    V4(Ipv4Addr),
    V6(Ipv6Addr),
}

enum IpAddr {
    V4(String),
    V6(String),
}

enum Message {
    Quit,
    Move { x: i32, y: i32 },
    Write(String),
    ChangeColor(i32, i32, i32),
}

impl Message {
    fn call(&self) {
        //
    }
}

let m = Message::Write(String::from("hello"));
m.call();

With this, we can define a function to take any IpAddrKind, like so: fn route(ip_kind: IpAddrKind) {}. We can also create instances of specific IpAddrKinds: let four = IpAddrKind::V4;. Within an Enum, we can have a wide variety of types, like Message, given above. You can also define methods on enums.

You can use the match control flow operator on enums like in the random number guessing example. This allows the compiler to confirm all possible cases are handled.

1.4.1 Option

There's a useful Option enum that you can use to represent nullable values. The <T> syntax denotes that the Some variant of the Option enum can hold one piece of data of any type. If we use None rather than Some, we need to tell Rust the concrete type of the Option<T>.

// example 11

enum Option<T> {
    Some(T),
    None,
}

let x: i8 = 5;
let y: Option<i8> = Some(5);
let sum = x + y;

The above code won't compile because you can't add an i8 to a value that might not be an i8.

Other information:

  • A common pattern to handle nulls is to use match like so:
  // example 12
  
  fn plus_one(x: Option<i32>) -> Option<i32> {
  	match x {
  		None => None,
  		Some(i) => Some(i + 1),
  	}
  }
  
  let five = Some(5);
  let six = plus_one(five);
  let none = plus_one(None);
  
  • You have to cover all the cases when matching enum, else compile error.
  • You can use _ to match any values that aren't specified before it.
  • The following two pieces of code are the same. You can use if let to match one value. You can also pair if let with else to specify non-trivial handling for the _ case.
  // example 13
  
  let some_u8_value = Some(0u8);
  match some_u8_value {
  	Some(3) => println!("three"),
  	_ => (),
  }
  
  if let Some(3) = some_u8_value {
  	println!("three");
  }
  

1.5 Project Management

This is chapter 7 in the rust-lang book. Rust comes with a module system to manage your code's organization. Cargo.toml defines a package and contains information on how to build crates. The top-level module is usually main.rs or lib.rs, depending on whether you're writing a program or a library.

Packages are a cargo feature that lets you build, test, and share crates. You create a package with the command cargo new; it contains a Cargo.toml that describes how to build its crates. A crate is a tree of modules that compiles to a single library or executable. Everything is private by default, including functions, modules, and structs.

1.5.1 Use

Here's an example of an actual project I found on github that uses mod and use; use brings a module into scope.

Use the as keyword to alias a new name, for example use std::io::Result as IoResult;. The name brought into the new scope is private by default. Use pub use to make it public (a re-export).

You can use nested paths to put a bunch of use things together like so: use std::{cmp::Ordering, io}; or use std::io::{self, Write}; (self references itself). You can also use the glob operator to bring all public items into scope like so: use std::collections::*;

Put external packages into Cargo.toml under [dependencies]. A bunch of them are available at crates.io. The standard library (e.g. use std::collections::HashMap;) is always available without adding it as a dependency.

  • Start relative paths from the parent module with the super keyword, e.g:
  // example 14
  
  fn serve_order() {}
  
  mod back_of_house {
      fn fix_incorrect_order() {
          cook_order();
          super::serve_order();
      }
      fn cook_order() {}
  }
  
  • It's not idiomatic to bring the function to scope, only the module that has the function.

1.6 Standard Library

1.6.1 Vector Example

  • Create: let v: Vec<i32> = Vec::new();. or let v = vec![0];
    • vec! is a macro for convenience
  • Add: v.push(5);
    • note that push uses a mutable reference, so while this is happening you can't hold another reference
    • Remember that variables are immutable by default
  • Drop: go out of scope
  • Get:
  // example 15
  
  let v = vec![1, 2, 3, 4, 5];
  let third: &i32 = &v[2];
  println!("The third element is {}", third);
  
  match v.get(2) {
  	Some(third) => println!("The third element is {}", third),
    None => println!("There is no third element."),
  }
  
  • Iterating: for i in &v {} or for i in &mut v {} to change elements

You can use vectors in conjunction with enums to store data of multiple types:

// example 16

enum SpreadsheetCell {
	Int(i32),
	Float(f64),
	Text(String),
}
let row = vec![
	SpreadsheetCell::Int(3),
	SpreadsheetCell::Float(10.12),
];

There is also an introduction to strings and hashmaps in the rust-lang book but I figured one is enough. I can just google as I go along. I didn't expect this book to be this fucking long.

1.7 Errors

There are two classes of errors:

  1. Recoverable errors

    • handle with Result<T, E> like how we have in the past.
  2. Unrecoverable errors

    • calls panic!

You can use unwrap to directly access the Ok value of a Result (it panics on Err). Likewise, expect is the same but lets you supply a custom panic message.

Another super common pattern in rust is error propagation. The functions below are equivalent. We can place the ? operator after a Result value: on Err it returns early from the function with that error, else it evaluates to the Ok value.

// example 17

use std::fs::File;
use std::io::{self, Read};

fn read_username_from_file() -> Result<String, io::Error> {
    let f = File::open("hello.txt");

    let mut f = match f {
        Ok(file) => file,
        Err(e) => return Err(e),
    };

    let mut s = String::new();
    match f.read_to_string(&mut s) {
        Ok(_) => Ok(s),
        Err(e) => Err(e),
    }
}

fn read_username_from_file() -> Result<String, io::Error> {
    let mut f = File::open("hello.txt")?;
    let mut s = String::new();
    f.read_to_string(&mut s)?;
    Ok(s)
}

fn read_username_from_file() -> Result<String, io::Error> {
    let mut s = String::new();
    File::open("hello.txt")?.read_to_string(&mut s)?;
    Ok(s)
}

Other information:

  • Attempting to access information that doesn't exist, for example, beyond the end of a vector, will also call panic!
  • Common pattern: error.kind() returns an ErrorKind enum which you can use to handle different types of errors.
  // example 18
  
  let f = File::open("hello.txt");
  
  let f = match f {
    Ok(file) => file,
    Err(error) => match error.kind() {
      ErrorKind::NotFound => match File::create("hello.txt") {
        Ok(fc) => fc,
        Err(e) => panic!("Problem creating the file: {:?}", e),
  	  },
      other_error => {
        panic!("Problem opening the file: {:?}", other_error)
  	  }
  	},
  };
  

1.8 Generics, Traits, Lifetimes

Duplicating code is added work, looks shitty, and can lead to errors. One way to remove duplication is to write reusable functions. But in a typed language, what if you wanted one function to handle multiple types? In the signature we can use a generic type: fn get_some<T>(list: &[T]) -> &T {. We can also use generic types in structs, with one or multiple type parameters, like so:

// example 19

// one type parameter: x and y must be the same type
struct Point<T> {
    x: T,
    y: T,
}

// two type parameters: x and y can differ
struct Point<T, U> {
    x: T,
    y: U,
}

Like structs, enums (such as Option) can also hold generic data types.

Traits are a collection of methods defined for an unknown type; it's like generic types but for functions (it's really similar to interfaces, but there are a few differences). Here's an example in rust-by-example that I think is pretty good. impl Trait is straightforward for me, but there are more complex things you can do, described here. There's a where clause and a + syntax.

The scope for which a reference is valid (its lifetime) is inferred in Rust most of the time. Generic lifetime annotations exist to prevent dangling references, where a program references data other than the data it intends to. The longest function below doesn't compile, but longest_2 does. Why? The compiler can't tell whether the returned reference borrows from x or from y, so it can't work out the return lifetime on its own.

// example 20

fn main() {
    let string1 = String::from("abcd");
    let string2 = "xyz";

    let result = longest(string1.as_str(), string2);
    println!("The longest string is {}", result);
}

// won't compile: missing lifetime specifier on the return type
fn longest(x: &str, y: &str) -> &str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

fn longest_2<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() {
        x
    } else {
        y
    }
}

There are also rust lifetime elision rules/exceptions, so I'll just omit those in my notes for now. If I find something interesting, I might write about it later.

Other information

  • For generics, the function body must be valid for every type the bounds allow, else you get a compile error
  • Rust uses monomorphization, which is the process of filling in concrete types at compile time
  • Most people use the name 'a when denoting a lifetime

1.9 Tests

In this format:

fn prints_and_returns_10(a: i32) -> i32 {
    println!("I got the value {}", a);
    10
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn this_test_will_pass() {
        let value = prints_and_returns_10(4);
        assert_eq!(10, value);
    }
}

Do cargo test -- --test-threads=2 --show-output to run all tests in the project with 2 threads. At the top level, it's customary to create a tests directory next to src. Cargo will know to look for integration tests in that directory.

1.10 Sample Projects

Reading from this, just writing some stuff down, nothing too comprehensive.

1.10.1 Grep Project

  • use std::env::args to read command line arguments
  • separate concerns by keeping main.rs thin and the business logic in lib.rs
  • use eprintln! to print errors to stderr

[This post was originally written in 2020.]

This post is primarily dedicated to making a CI/CD pipeline: a pretty generic boilerplate pipeline that uses Cloud Build, Google's serverless CI/CD platform. I'm building modularly so that my next project can reuse it. We're using Cloud Source Repositories to host repos, but the pattern is all the same. All these CI pipelines follow the same structure:

  • There's a steps file to define steps (e.g. steps to test a pipeline)
  • There's a trigger to invoke some steps (e.g. pushing to a branch)

My trigger is defined as any changes to my master branch, and my cloudbuild.yml file has pretty self-explanatory steps: run a test, build docker image, push docker image (and optionally deploy docker image).
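A hypothetical cloudbuild.yml sketch of those steps (the builder images and test command are assumptions, not my actual file):

```yaml
steps:
  # run tests
  - name: 'python:3.8'
    entrypoint: 'python'
    args: ['-m', 'pytest']
  # build the docker image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  # push the docker image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
```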

This tutorial, which is what I'm generally following, has a separate branch that has a separate cloudbuild.yaml file to manage deployment. This way, the CI step which is generally managed by the developer is decoupled from the CD step, which is generally managed by automated systems. The CD step uses the kubectl builder to automatically link a cluster to deploy, which is a nice abstraction. I made some custom modifications to conform to the way I want feature-branch tested and the final result deployed.

Feature branch runs test and builds docker image.


Master branch runs test, builds, pushes image, and deploys


Cool! Everything works like a charm. You can also do more fancy things; if you have more complex docker images, you can simply layer on top of the GCP builder images. My A-records are mapped to the LoadBalancer IPs.

adding a service

I'm using postgresql as the backend hosted on Cloud SQL; it's a managed database service that does scaling, security, and backups for you. For postgres tooling, everything I'm using is pretty standard: psycopg2/sqlalchemy/alembic via the flask ecosystem.


Anyways, everything works great! I can also see it exposed on a lightweight front end I made.

I am using cloudsqlproxy to run everything locally against a production database, and alembic makes it really easy to sync your data models up.

Note that sqlalchemy expires objects on commit; after the data is committed to the database, the object's attributes are marked stale and get refreshed from the database on next access. It doesn't matter too much since you do business logic before you commit anyway.

Cool! I’m ready to hire some engineers now for my project (joking).

Other random stuff:

This is just me trying out Nomad and Waypoint. This post was originally written in 2020.

nomad

Nomad acts as a server/system to manage applications on your infrastructure. (Alternative: k8s + terraform.) Also, according to the intro talk, it's a really good platform for serverless compute.

What nomad does better than kubernetes:

  • true serverless: maximizes utilization, minimizes resources needed
  • can run on windows/legacy systems; nomad isn't limited to containers
  • light: single binary

We can run a simple local server/client with nomad agent -dev -config config.hcl.

Clients are basically worker nodes, and servers make sure there's high availability (everything keeps running across regions); our servers and clients form a cluster of virtual machines that run instances of consul and nomad, etc. We can port-forward (tunnel-through-iap) virtual machine ports to local ports (similar to kubectl port-forward).

I'm connected to my nomad cluster locally and I can schedule a docker image simply with nomad commands: run/plan/stop, etc. It's like kubectl apply -f but with a .nomad file.
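A minimal .nomad job sketch for something like that hello-world run (all names are placeholders):

```hcl
job "hello-world" {
  datacenters = ["dc1"]

  group "web" {
    task "server" {
      driver = "docker"

      config {
        image = "hello-world:latest"
      }
    }
  }
}
```

Then nomad plan hello.nomad to preview, and nomad run hello.nomad to schedule it.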


I scheduled a hello-world image. I see that it's running in hashistack-client-1. I see that consul automatically picked this guy up in services; the service mesh is all mapped out, etc.

Kubernetes is probably still better in most cases. In terms of other alternatives, there's also rancher, docker swarm, mesos, etc.

waypoint

Hashicorp also announced waypoint; it's a tool to build and deploy applications.


  deploy {
    use "exec" {
      command = ["kubectl", "apply", "-f", "<TPL>"]
      template {
        path = "./example-nodejs-exec.yml"
      }
    }
  }

It’s like a makefile that also connects to k8s, AWS, GCP, and also nomad.

Other things:

  • people need to use google cloud shell or other cloud terminal equivalents more when writing tutorials like they do here
  • api if you just want nomad to be some compute
  • Nomad Links: Good list of tooling, example with binary instead of docker, a nice templating walkthrough, a library of reference nomad files