A co-worker of mine used to keep all his notes in one large notepad file. He literally used Windows Notepad. Then, when he needed to check his notes, he would simply do keyword searches on the file. It went back years and proved to be a better reference than a fancy knowledge base. This file is a similar dump for myself. It includes everything I read, notes, and cheatsheets. Organized with markdown, but it may move to Emacs org-mode format in the future. Creative Commons license.
https://slife.org/japanese-proverbs/
成功する事よりも、失敗しない事の方が重要だ。- Seikou suru koto yori mo, shippai shinai koto no hou ga juuyou da.
Meaning: It’s more important to not fail than to succeed.
“I constantly see people rise in life who are not the smartest, sometimes not even the most diligent, but they are learning machines. They go to bed every night a little wiser than they were when they got up and boy does that help, particularly when you have a long run ahead of you.” ― Charles T. Munger
Simple tools for complex interactions.
Tools over process.
Layers
import numpy as np

class Layer_Dense:
    def __init__(self, n_inputs, n_neurons):
        self.weights = 0.01 * np.random.randn(n_inputs, n_neurons)
        self.biases = np.zeros((1, n_neurons))

    def forward(self, inputs):
        self.output = np.dot(inputs, self.weights) + self.biases
one_hot = [0, 1, 0, 0, 0, 0]
Activation Functions
softmax activation outputs a probability distribution (non-negative values that sum to 1)
example output: raw outputs [0.9, 1.2, 0.6] become soft_max = [0.324, 0.437, 0.240]
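A minimal numpy sketch of softmax (names are illustrative): exponentiate the raw outputs, then normalize so they sum to 1.
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability, then exponentiate
    exp_values = np.exp(logits - np.max(logits))
    # Normalize so the outputs sum to 1
    return exp_values / np.sum(exp_values)

raw_outputs = np.array([0.9, 1.2, 0.6])
print(softmax(raw_outputs))  # ~[0.324 0.437 0.240], sums to 1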
Loss functions
Categorical cross-entropy is used to compare ground truth probabilities (y, targets) against predictions (y-hat)
import math

""" one hot encoding """
target_values = [1, 0, 0]
softmax_output = [0.7, 0.1, 0.2]

""" note: natural log: if e**x == b then log(b) == x """
loss = -(math.log(softmax_output[0]))
Generic loss class
import numpy as np

class Loss:
    def calculate(self, output, y):
        sample_losses = self.forward(output, y)
        data_loss = np.mean(sample_losses)
        return data_loss
Cross Entropy Loss class
for classification problems
# Common loss class
class Loss:
    # Calculates the data and regularization losses
    # given model output and ground truth values
    def calculate(self, output, y):
        # Calculate sample losses
        sample_losses = self.forward(output, y)
        # Calculate mean loss
        data_loss = np.mean(sample_losses)
        # Return loss
        return data_loss

# Cross-entropy loss
class Loss_CategoricalCrossentropy(Loss):
    # Forward pass
    def forward(self, y_pred, y_true):
        # Number of samples in a batch
        samples = len(y_pred)
        # Clip data to prevent division by 0
        # Clip both sides to not drag mean towards any value
        y_pred_clipped = np.clip(y_pred, 1e-7, 1 - 1e-7)
        # Probabilities for target values -
        # only if categorical labels
        if len(y_true.shape) == 1:
            correct_confidences = y_pred_clipped[
                range(samples),
                y_true
            ]
        # Mask values - only for one-hot encoded labels
        elif len(y_true.shape) == 2:
            correct_confidences = np.sum(
                y_pred_clipped * y_true,
                axis=1
            )
        # Losses
        negative_log_likelihoods = -np.log(correct_confidences)
        return negative_log_likelihoods
accuracy - how often the largest confidence is the correct class
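A short numpy sketch of that accuracy definition (variable names assumed): take the argmax of each softmax row and compare against the targets.
import numpy as np

softmax_outputs = np.array([[0.7, 0.1, 0.2],
                            [0.1, 0.5, 0.4],
                            [0.02, 0.9, 0.08]])
class_targets = np.array([0, 1, 1])

# Largest confidence per sample vs. ground truth
predictions = np.argmax(softmax_outputs, axis=1)
accuracy = np.mean(predictions == class_targets)
print(accuracy)  # 1.0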
gradient - in gradient descent, the gradient is the partial derivative of the loss function with respect to the weights
another way of looking at it: the gradient measures the impact that each weight (x) has on the loss (y), as the sketch below illustrates
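A tiny numerical illustration of that idea (the loss function and starting point are chosen arbitrarily): approximate d(loss)/d(w) with a finite difference to see the impact a weight has on the loss, then step against it.
def loss(w):
    # Toy loss: quadratic in a single weight, minimized at w = 3
    return (w - 3.0) ** 2

w, eps = 1.0, 1e-6
# Finite-difference approximation of the partial derivative
grad = (loss(w + eps) - loss(w)) / eps
print(grad)  # ~ -4.0, so increasing w decreases the loss

# One gradient descent step: move against the gradient
learning_rate = 0.1
w = w - learning_rate * grad  # w moves from 1.0 toward 3.0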
https://www.instructables.com/Understanding-how-ECDSA-protects-your-data/
Allows verification of authenticity without compromising security. It is practically impossible to forge a signature. It does not encrypt the data, but it ensures the data has not been tampered with.
Algo (high level)
choose a random point on the curve: the point of origin
generate a random number: the private key
apply the curve equation to the private key and the point of origin: the public key
Sign the file
equation(private key, hash of the file) -> signature
the signature is divided into R and S
Verification
equation(S, public key) == R
ECDSA signs a hash of the data; the article's examples use SHA1 hashes
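A sketch of that sign/verify round trip with the third-party Python `ecdsa` package (the package and curve choice are my assumptions, not from the article):
from ecdsa import SigningKey, NIST256p

# Private key: a random number on the chosen curve
sk = SigningKey.generate(curve=NIST256p)
# Public key: private key applied to the curve's generator point
vk = sk.get_verifying_key()

message = b"contents of the file"
signature = sk.sign(message)          # sign with the private key
print(vk.verify(signature, message))  # True; raises BadSignatureError if forged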
transaction input and output - there will be a difference between them which is the miner fee
transactions form a chain
users keys can unlock previous output in the chain proving ownership
change address - an address controlled by the sender that receives the unspent remainder of a transaction
UTXO - unspent transaction output; the set of all UTXOs acts as a database of spendable coins
Solidity fallback function:
It is called when a non-existent function is called on the contract.
It is required to be marked external.
It has no name.
It has no arguments.
It cannot return anything.
It can be defined only once per contract.
If not marked payable, it will throw an exception if the contract receives plain ether without data.
In short: no name, no arguments, no return values; external and payable; defined once; called when a non-existent function is called.
Externally owned accounts are those that have a private key; having the private key means control over access to funds or contracts.
A contract account has smart contract code, which a simple EOA can’t have. Furthermore, a contract account does not have a private key. Instead, it is owned (and controlled) by the logic of its smart contract code: the software program recorded on the Ethereum blockchain at the contract account’s creation and executed by the EVM.
account addresses are derived directly from private keys: a private key uniquely determines a single Ethereum address, also known as an account.
There is no encryption as part of the Ethereum protocol—all messages that are sent as part of the operation of the Ethereum network can (necessarily) be read by everyone. As such, private keys are only used to create digital signatures for transaction authentication.
Starting with a private key in the form of a randomly generated number k, we multiply it by a predetermined point on the curve called the generator point G to produce another point somewhere else on the curve, which is the corresponding public key K: K = k * G
the generator point is always the same for all Ethereum users
Ethereum only uses uncompressed public keys; therefore the only prefix that is relevant is (hex) 04.
The test most commonly used for a hash function is the empty input. If you run the hash function with an empty string as input you should see the following results:
Keccak256("") = c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470
SHA3("") = a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a
Ethereum uses Keccak-256, even though it is often called SHA-3 in the code.
Ethereum addresses are unique identifiers that are derived from public keys or contracts using the Keccak-256 one-way hash function.
We use Keccak-256 to calculate the hash of this public key:
Keccak256(K) = 2a5bc342ed616b5ba5732269001d3f1ef827552ae1114027bd3ecf1f086ba0f9
Then we keep only the last 20 bytes (least significant bytes), which is our Ethereum address:
001d3f1ef827552ae1114027bd3ecf1f086ba0f9
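A quick Python check of those values: hashlib's sha3_256 reproduces the SHA3("") result, while Keccak-256 needs a third-party helper (eth_utils here is an assumption):
import hashlib
from eth_utils import keccak  # assumed third-party helper; hashlib has no Keccak-256

# Standard SHA3-256 matches the SHA3("") value above
print(hashlib.sha3_256(b"").hexdigest())
# a7ffc6f8bf1ed76651c14756a061d662f580ff4de43b49fa82d80a4b80f8434a

# Keccak-256 (pre-standardization padding) gives the other value
print(keccak(b"").hex())
# c5d2460186f7233c927e7db2dcc703c0e500b653ca82273b7bfad8045d85a470

# Address derivation: keccak of the 64-byte public key, keep the last 20 bytes
# address = keccak(public_key_bytes)[-20:].hex()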
A transaction is a serialized binary message that contains the following data:
Nonce
A sequence number, issued by the originating EOA, used to prevent message replay
Gas price
The amount of ether (in wei) that the originator is willing to pay for each unit of gas
Gas limit
The maximum amount of gas the originator is willing to buy for this transaction
Recipient
The destination Ethereum address
Value
The amount of ether (in wei) to send to the destination
Data
The variable-length binary data payload
v,r,s
The three components of an ECDSA digital signature of the originating EOA
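Purely illustrative: the same fields sketched as a Python dict (all values made up; a real transaction is RLP-encoded binary, and v, r, s are appended by signing):
unsigned_tx = {
    "nonce": 0,                    # sequence number from the originating EOA
    "gas_price": 20_000_000_000,   # wei the originator pays per unit of gas
    "gas_limit": 21_000,           # max gas the originator is willing to buy
    "to": "0x001d3f1ef827552ae1114027bd3ecf1f086ba0f9",  # recipient address
    "value": 10**18,               # 1 ether, in wei
    "data": b"",                   # variable-length binary payload
}
# Signing appends the ECDSA components v, r, s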
Computer programs
Smart contracts are simply computer programs. The word “contract” has no legal meaning in this context.
Immutable
Once deployed, the code of a smart contract cannot change. Unlike with traditional software, the only way to modify a smart contract is to deploy a new instance.
Deterministic
The outcome of the execution of a smart contract is the same for everyone who runs it, given the context of the transaction that initiated its execution and the state of the Ethereum blockchain at the moment of execution.
EVM context
Smart contracts operate with a very limited execution context. They can access their own state, the context of the transaction that called them, and some information about the most recent blocks.
Decentralized world computer
The EVM runs as a local instance on every Ethereum node, but because all instances of the EVM operate on the same initial state and produce the same final state, the system as a whole operates as a single "world computer".
0x0 - the special contract creation address
contracts only run if they are called by a transaction
contract execution is atomic: it either completes fully or reverts
To delete a contract, you execute an EVM opcode called SELFDESTRUCT. That operation costs “negative gas,” a gas refund, thereby incentivizing the release of network client resources from the deletion of stored state.
function syntax:
function FunctionName([parameters]) {public|private|internal|external} [pure|view|payable] [modifiers] [returns (return types)]
estimating gas cost:
var contract = web3.eth.contract(abi).at(address); var gasEstimate = contract.myAweSomeMethod.estimateGas(arg1, arg2, {from: account});
To obtain the gas price from the network you can use:
var gasPrice = web3.eth.getGasPrice();
And from there you can estimate the gas cost:
var gasCostInEther = web3.utils.fromWei((gasEstimate * gasPrice), 'ether');
This type of attack can occur when a contract sends ether to an unknown address. An attacker can carefully construct a contract at an external address that contains malicious code in the fallback function.
The first is to (whenever possible) use the built-in transfer function when sending ether to external contracts.
The second technique is to ensure that all logic that changes state variables happens before ether is sent out of the contract (or any external call).
A third technique is to introduce a mutex.
The current conventional technique to guard against under/overflow vulnerabilities is to use or build mathematical libraries that replace the standard math operators addition, subtraction, and multiplication (division is excluded as it does not cause over/underflows and the EVM reverts on division by 0).
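The idea behind those libraries, sketched in Python by emulating a uint256 that refuses to wrap (illustrative only; in Solidity this is SafeMath-style code, or the checked arithmetic built into Solidity >= 0.8):
UINT256_MAX = 2**256 - 1

def safe_add(a: int, b: int) -> int:
    # Revert (raise) on overflow instead of silently wrapping like the EVM
    c = a + b
    if c > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return c

print(safe_add(1, 2))        # 3
# safe_add(UINT256_MAX, 1)   # raises OverflowError instead of wrapping to 0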
ERC20
The ERC20 Interface in Solidity:
contract ERC20 {
    function totalSupply() constant returns (uint theTotalSupply);
    function balanceOf(address _owner) constant returns (uint balance);
    function transfer(address _to, uint _value) returns (bool success);
    function transferFrom(address _from, address _to, uint _value) returns (bool success);
    function approve(address _spender, uint _value) returns (bool success);
    function allowance(address _owner, address _spender) constant returns (uint remaining);
    event Transfer(address indexed _from, address indexed _to, uint _value);
    event Approval(address indexed _owner, address indexed _spender, uint _value);
}
data structures
mapping(address => uint256) balances;
mapping (address => mapping (address => uint256)) public allowed;
transfer - wallet-to-wallet direct transfer of tokens, uses the 'transfer' function
approve and transferFrom - two transactions: the owner approves a spender, then the spender calls transferFrom (see the sketch below)
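A toy Python model of the two mappings and the two-step approve/transferFrom flow (illustrative only, not full Solidity semantics):
balances = {"alice": 100, "bob": 0}
allowed = {}  # owner -> spender -> approved amount

def approve(owner, spender, value):
    allowed.setdefault(owner, {})[spender] = value

def transfer_from(spender, owner, to, value):
    # Spender may move at most the approved amount of the owner's tokens
    assert allowed[owner][spender] >= value and balances[owner] >= value
    allowed[owner][spender] -= value
    balances[owner] -= value
    balances[to] = balances.get(to, 0) + value

approve("alice", "exchange", 40)               # transaction 1: owner approves
transfer_from("exchange", "alice", "bob", 25)  # transaction 2: spender moves tokens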
https://bitcrowd.dev/folding-sections-of-markdown-in-vim/
https://neovim.io/doc/user/fold.html
config/nvim/lua/config/options.lua
local vim = vim
local opt = vim.opt
opt.foldmethod = "expr"
opt.foldexpr = "nvim_treesitter#foldexpr()"
format a file: gg=G
indent a visual selection: < or >
M-x projectile-discover-projects-in-directory
create a file: SPC-. then type the filename
open neotree: SPC-o-p
buffers - hold data, usually a file's contents
snapshot:
pip3 freeze > requirements.txt
install:
pip3 install --upgrade pip && pip3 install -r requirements.txt
lint:
pylint --disable=R,C app/main.py
format:
black app/*
test:
python -m pytest -vv --cov=main test_main.py
mkcd(){
  mkdir -p "$1"
  cd "$1"
}
mkpyvenv(){
  venv=$(basename "$(pwd)")
  python3 -m venv ".${venv}"
  source ".${venv}/bin/activate"
}
src(){
  venv=$(basename "$(pwd)")
  source ".${venv}/bin/activate"
}
docker init - create a Dockerfile
docker build -t myimage .
docker run -d --name mycontainer -p 80:80 myimage
docker rm mycontainer
ollama list - list all models
ollama run codellama:7b - download and run the 7 billion parameter codellama model
ollama run codellama:7b-python - fine tuned model for python
import ollama
from langchain.agents import Tool, initialize_agent

# Define the function to interact with Ollama's local service
def ollama_local_query(prompt: str) -> str:
    # Initialize Ollama client; adjust the host to your local server's URL
    client = ollama.Client(host="http://localhost:11434")
    # Send the prompt and get the response (model name is an example)
    response = client.generate(model="codellama:7b", prompt=prompt)
    # Extract the text from the response
    return response["response"]

# Create a Tool instance for Ollama
ollama_tool = Tool(
    name="OllamaTool",
    func=ollama_local_query,
    description="Tool to interact with Ollama language model",
)

# Initialize the LangChain agent with the Ollama tool
# (most langchain versions also require an llm= argument here)
agent = initialize_agent(
    tools=[ollama_tool], agent="zero-shot-react-description", verbose=True
)
tailwindcss -w -i styles/main.css -o static/css/main.css
embedding - numerical representation of content as a vector that encapsulates its semantic content, for processing
semantic similarity - distance between two vectors
word2vec - learns embeddings by predicting surrounding words https://arxiv.org/abs/1901.09813
embeddings are usually produced by transformer-based models
= "i am a query"
text = embeddings.embed_query(text)
query_result = ["some","list","of","documents"]
words = embeddings.embed_documents(words) doc_vectors
vector distance: Euclidean distance or cosine similarity
Vector Euclidean distance
from scipy.spatial.distance import pdist, squareform
import numpy as np
import pandas as pd

X = np.array(doc_vectors)
dists = squareform(pdist(X))
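The other distance from the note above, cosine similarity, is a one-liner with numpy (a small sketch; query_result and doc_vectors as in the embedding example):
import numpy as np

def cosine_similarity(a, b):
    # 1.0 = same direction (similar meaning), 0.0 = orthogonal
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# e.g. compare the query vector against each document vector
sims = [cosine_similarity(query_result, v) for v in doc_vectors]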
vectorstores - LangChain components for storing and querying vectors
similar_vectors = vector_store.query(query_vector, k)
retriever - a LangChain component that queries the vector database on a given index
custom retriever:
from langchain.schema import Document, BaseRetriever

class MyRetriever(BaseRetriever):
    def get_relevant_documents(self, query: str, **kwargs) -> list[Document]:
        # Implement your retrieval logic here
        # Retrieve and process documents based on the query
        # Return a list of relevant documents
        relevant_documents = []
        # Your retrieval logic goes here…
        return relevant_documents
DocArray - in memory vector store
Steps to set up a chatbot with LangChain (a sketch follows this list):
set up a document loader
store documents in a vector store
set up the chatbot to retrieve from the vector store
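A minimal sketch of those three steps using the classic LangChain API (module paths, class names, and the OpenAI/DocArray choices vary by version; treat this as an assumption):
from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI

# 1. document loader
docs = TextLoader("notes.txt").load()
# 2. store documents in a vector store
db = DocArrayInMemorySearch.from_documents(docs, OpenAIEmbeddings())
# 3. the chatbot retrieves from the vector store
qa = RetrievalQA.from_chain_type(llm=OpenAI(), retriever=db.as_retriever())
print(qa.run("What do my notes say about embeddings?"))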
alembic downgrade base
alembic upgrade head - upgrade to latest version
alembic revision --autogenerate -m "some comments"
dis.dis(...)
dir(...)
help(...)
identity - location in memory of an object
id() - location in memory
type()
mutable - dict,list
immutable - strings, tuples, integers (*reassignment changes identity)
Dunder methods - Double UNDERscore methods, e.g. __add__; also called magic methods
help(object.method)
File iterate through file
with open('somefile', 'r') as my_file:
    for line in my_file:
        print(line)
coerce booleans
ex.
bool(0) -> False
bool(4) -> True
singleton - one copy, example None
a = None
b = None
id(a)
id(b)
will be the same value
PEP8 prefer 4 spaces to indent code
define a list: names = ['Evee', 'Harper', 'Lily', 'Linus']
A for loop can have an else clause. Code in the else clause will execute if the for loop did not hit a break statement.
positive = False
for num in items:
    if num < 0:
        break
else:
    positive = True
set default method
count = {}
for name in names:
    count.setdefault(name, 0)
    count[name] += 1
def funcname(arg):
body
stride
every_other_name[0:4:2]
Files
fin = open('/etc/passwd')
for line in fin:
    print(line)
with open('/tmp/names.txt', 'w') as fout:
    fout.write('Evee\n')
Using file as seq
def add_numbers(filename):
    with open(filename) as fin:
        return add_nums_to_seq(fin)

def add_nums_to_seq(seq):
    results = []
    for num, line in enumerate(seq):
        results.append('{0}-{1}'.format(num, line))
    return results
Object - groups together state and methods
Class - defines objects
class Animal:
generator example
xs = (x for x in range(4))
xs.__next__()
objects store their type in their __class__ attr
type()
issubclass()
isinstance()
dir()
hasattr(...)
getattr(...)
prefer EAFP: easier to ask for forgiveness than permission
globals() - introspect the global namespace
globals()['foo'] = 'bar' <-- the globals dict IS the global namespace
locals() - introspect the local namespace
f-strings - PEP 498, ex. f"{name}"
inspect module
django-admin startproject myproject
python manage.py runserver
python manage.py startapp
python manage.py showmigrations
python manage.py migrate
python manage.py makemigrations
python manage.py sqlmigrate
python manage.py createsuperuser
settings.py - INSTALLED_APPS[] list of apps
urls.py - routing
rule of thumb: each layer's width a power of 2, decreasing
model performance / dealing with overfitting:
use a dropout layer
start with 20 - 30%
dropout will result in more training epochs
make more data
for image data (see the sketch below):
keras.preprocessing.image.ImageDataGenerator(....)
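A short Keras sketch of both tips, a dropout layer plus image augmentation (layer sizes and augmentation parameters are illustrative):
import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),  # start with 20 - 30%
    tf.keras.layers.Dense(10, activation='softmax'),
])

# "make more data" for images: random transformations of existing samples
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=15, horizontal_flip=True, zoom_range=0.1
)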
example of loading train and test data sets from tf datasets:
import tensorflow_datasets as tfds

mnist_train, info = tfds.load('mnist', split='train', as_supervised=True, with_info=True)
mnist_test = tfds.load('mnist', split='test', as_supervised=True)
use Keras to retrieve data:
tf.keras.utils.get_file(fn, url, cache_dir=cache_dir, cache_subdir=cache_subdir)
read csv using pandas:
import pandas as pd  # <-- pd by convention

column_names = ['col1', 'col2']  # assuming first row in csv is not col names
some_dataframe = pd.read_csv(path_to_csv, names=column_names)
example model layers:
new_model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer((None, 1)),
    tf.keras.layers.Conv1D(30, 6, padding='causal', activation='relu'),
    tf.keras.layers.LSTM(68),
    tf.keras.layers.Dense(32, activation='relu'),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)
])
curl api.ipify.org - return external ip address
mix phx.new statmeet
cd statmeet
edit config/dev.exs
mix deps.get
mix ecto.create
mix phx.gen.auth Accounts User users
mix phx.gen.html Markups Note notes contents:text
Example of adding Hello page
lib/
"/", AppWeb do
scope ...
"/somepath", HelloController, :index
get
end
lib/
defmodule HelloWeb.HelloController do
  use HelloWeb, :controller

  def index(conn, _params) do
    render(conn, "index.html")
  end
end
lib/
defmodule HelloWeb.HelloView do
use HelloWeb, :view
end
lib/
<section class="phx-hero">
<h2>Hello World, from Phoenix!</h2>
</section>
mix phx.gen.schema Markups markups content:text
mix phx.gen.html Catalog Product products title:string description:string price:decimal views:integer
mix ecto.gen.migration update_notes
edit file in priv/repo/migrations/
mix ecto.migrate
self()
spawn_link(fn -> raise "oops" end)
spawn(…)
send(pid, {:get, :hello, self()})
flush()
{:ok, pid} = Agent.start_link(fn -> %{} end)
Agent.update(pid, fn map -> Map.put(map, :hello, :world) end)
Agent.get(pid, fn map -> Map.get(map, :hello) end)
opts \\ [] - optional parameter that will default to an empty list
""" Cheat sheet """
some_data = pd.read_csv(some_file_path)

""" show cols """
some_data.columns

""" Select a column into a pd series """
some_series = some_data.Price

""" Selecting features; note features assigned to "X" var by convention """
some_features = ['price', 'amount', 'sku']
X = some_data[some_features]

""" Get shape; notice it is an attribute """
some_df.shape

""" Selecting """
some_df.loc[(some_df.some_col == 'apples') & (some_df.another_col == 'fuji')]
some_df.loc[(some_df.some_col == 'apples') | (some_df.some_col == 'grapes')]
some_df.loc[some_df.some_col.isin(['apples', 'oranges'])]

""" Mapping: the fun passed to map() expects a single value from a Series,
returns a new Series. """
some_mean = some_df.some_feature.mean()
some_df.some_feature.map(lambda p: p - some_mean)

""" Apply: apply() calls a fun on each row """
def remean_feature(row):
    row.some_feature = row.some_feature - some_mean
    return row

some_df.apply(remean_feature, axis='columns')

""" Mappings as built-ins """
some_mean = some_df.some_feature.mean()
some_df.some_feature - some_mean

""" Combine info """
some_df.some_feature_1 + " - " + some_df.some_feature_2

""" Sorting and grouping """
some_df.groupby('feature')
some_df.groupby('feature').another_feature.min()
some_df.groupby(['feature_a', 'feature_b']).apply(lambda df: df.loc[df.feature_c.idxmax()])
some_df.groupby(['feature_a']).feature_b.agg([len, min, max])

""" Convert Pandas datatype """
some_df.some_feature.astype('float64')

""" Select NaN entries """
some_df[pd.isnull(some_df.some_feature)]

""" Filling in NaN """
some_df.some_feature.fillna("unknown")

""" Rename a column """
some_df.rename(columns={'from_name': 'to_name'})

""" Joining dataframes """
pd.concat([some_df_a, some_df_b])
left_df = some_df_a.set_index(['title', 'date'])
right_df = some_df_b.set_index(['title', 'date'])
left_df.join(right_df, lsuffix='_l', rsuffix='_r')

""" Misc useful funs """
X = pd.get_dummies(train_data[features])
.notnull()
pd.set_option('max_rows', 5)
X.describe()
X.head()
some_df.some_feature.mean()
some_df.some_feature.unique()
some_df.some_feature.value_counts()
some_df.some_feature.dtype
some_df.dtypes
""" Create a decision tree using scikit """
from sklearn.tree import DecisionTreeRegressor
some_model = DecisionTreeRegressor(random_state=1)
some_model.fit(X, y)
some_model.predict(X)

""" Scikit calc MAE """
from sklearn.metrics import mean_absolute_error
predict = some_model.predict(X)
mean_absolute_error(y, predict)

""" scikit create a test split """
from sklearn.model_selection import train_test_split
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)

""" Then define some model on the train_X and train_y data """
mean_absolute_error(val_y, some_predictions)

""" Random Forests """
from sklearn.ensemble import RandomForestRegressor
forest_model = RandomForestRegressor(random_state=1)
forest_model.fit(train_X, train_y)
some_predictions = forest_model.predict(val_X)

""" Imputation """
from sklearn.impute import SimpleImputer
some_imputer = SimpleImputer()
imputed_X_train = pd.DataFrame(some_imputer.fit_transform(X_train))
imputed_X_valid = pd.DataFrame(some_imputer.transform(X_valid))

""" Imputation removed column names; put them back """
imputed_X_train.columns = X_train.columns
imputed_X_valid.columns = X_valid.columns

print("MAE from Approach 2 (Imputation):")
print(score_dataset(imputed_X_train, imputed_X_valid, y_train, y_valid))
""" Categorical Variables one hot is usiually best, from kaggle """
= (X_train.dtypes == 'object')
s = list(s[s].index)
object_cols
from sklearn.preprocessing import OneHotEncoder
""" Apply one-hot encoder to each column with categorical data """
= OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_encoder = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols]))
OH_cols_train = pd.DataFrame(OH_encoder.transform(X_valid[object_cols]))
OH_cols_valid
""" One-hot encoding removed index; put it back """
= X_train.index
OH_cols_train.index = X_valid.index
OH_cols_valid.index
""" Remove categorical columns (will replace with one-hot encoding) """
= X_train.drop(object_cols, axis=1)
num_X_train = X_valid.drop(object_cols, axis=1)
num_X_valid
""" Add one-hot encoded columns to numerical features """
= pd.concat([num_X_train, OH_cols_train], axis=1)
OH_X_train = pd.concat([num_X_valid, OH_cols_valid], axis=1)
OH_X_valid
""" Ensure all columns have string type """
= OH_X_train.columns.astype(str)
OH_X_train.columns = OH_X_valid.columns.astype(str)
OH_X_valid.columns
""" sklearn pipelines """
from sklearn.pipeline import Pipeline
= Pipeline(steps=[
categorical_transformer 'imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
(
])
""" sklearn cross-validation """
from sklearn.model_selection import cross_val_score
= -1 * cross_val_score(my_pipeline, X, y,
scores =5,
cv='neg_mean_absolute_error')
scoring
"""
XGBoost rules of thumb:
- n_estimators usually between 100 - 1000
- In general, a small learning rate and large number of estimators
will yield more accurate XGBoost models
"""
from xgboost import XGBRegressor
my_model = XGBRegressor()
my_model.fit(X_train, y_train)

my_model = XGBRegressor(n_estimators=500)
my_model.fit(X_train, y_train,
             early_stopping_rounds=5,
             eval_set=[(X_valid, y_valid)],
             verbose=False)
module plugs must have two funcs: init, call
defmodule NothingPlug do
  def init(opts) do
    opts
  end

  def call(conn, _opts) do
    conn
  end
end
go get -u ./...
Used for integrating applications with other services.
Important! Register redirect URLs immediately.
Redirection attack: an access token can be intercepted by an attacker.
Don't register multiple redirect URLs to compensate for starting from multiple states in the app. Instead, use the state parameter.
https://scrimba.com/learn/learnreact
global var ReactDOM
ReactDOM.render(<h1>Hello</h1>, document.getElementById("root"))
All React components must act like pure functions with respect to their props.
Pascal case component names, not camel case
Wrap it in angle brackets in ReactDOM.render()
Components can have parent child relationship
function Navbar() {
return (
<div></div>
)
}
function MainContent() {
return (
<div></div>
)
}
ReactDOM.render(
    <div>
        <Navbar />
        <MainContent />
    </div>,
    document.getElementById("root")
)
Like html with some differences.
html => JSX
class => className
create a react project
npx create-react-app hello
add a router
npm install react-router-dom@6
install material ui: https://mui.com/getting-started/installation/
Use mustache syntax
import React from "react"
import ReactDOM from "react-dom"
function App() {
const userName = "me"
return (
<h1>Hello {userName} !</h1>
)
}
ReactDOM.render(<App />, document.getElementById("root"))
// Somewhat bogus example
export default function Contact(props) {
return (
<div className="contact-card">
<img src={props.image}/>
<h3>{props.name}</h3>
<div className="info-group">
<p>{props.phone}</p>
</div>
<div className="info-group">
<img src="./images/mail-icon.png" />
<p>{props.email}</p>
</div>
</div>
)
}
function App() {
    return (
        <div className="contacts">
            <Contact
                image="./images/mr-whiskerson.png"
                name="Mr. Whiskerson"
                phone="(212) 555-1234"
                email="mr.whiskaz@catnap.meow"
            />
            <Contact
                image="./images/fluffykins.png"
                name="Fluffykins"
                phone="(212) 555-2345"
                email="fluff@me.com"
            />
        </div>
    )
}
// contrived example from scrimba
const jokesData = [
    {
        setup: "I got my daughter a fridge for her birthday.",
        punchline: "I can't wait to see her face light up when she opens it."
    },
    {
        setup: "How did the hacker escape the police?",
        punchline: "He just ransomware!"
    }
]

export default function Joke(props) {
    return (
        <div>
            {props.setup && <h3>Setup: {props.setup}</h3>}
            <p>Punchline: {props.punchline}</p>
            <hr />
        </div>
    )
}

export default function App() {
    const jokeElements = jokesData.map(joke => {
        return <Joke setup={joke.setup} punchline={joke.punchline} />
    })
    return (
        <div>
            {jokeElements}
        </div>
    )
}
Do not use "()" to call the function in JSX; pass the function reference.
import React from "react"

export default function App() {
    function handleClick() {
        console.log("I was clicked!")
    }
    function handleOnMouseOver() {
        console.log("MouseOver")
    }
    return (
        <div className="container">
            <img
                src="https://picsum.photos/640/360"
                onMouseOver={handleOnMouseOver}
            />
            <button onClick={handleClick}>Click me</button>
        </div>
    )
}
React.useState()
const [someval,setterFunc] = React.useState(someInitVal)
Converting a Function to a Class
You can convert a function component like Clock to a class in five steps:
Create an ES6 class, with the same name, that extends React.Component.
Add a single empty method to it called render().
Move the body of the function into the render() method.
Replace props with this.props in the render() body.
Delete the remaining empty function declaration.
In applications with many components, it’s very important to free up resources taken by the components when they are destroyed.
We want to set up a timer whenever the Clock is rendered to the DOM for the first time. This is called “mounting” in React.
We also want to clear that timer whenever the DOM produced by the Clock is removed. This is called “unmounting” in React.
used for side effects, i.e. API calls
https://github.com/pmndrs/react-three-fiber
https://threejs.org/docs/index.html#manual/en/introduction/Creating-a-scene
scene -> camera -> renderer -> attach to canvas
geometry -> mesh -> scene.add()
frustum - the name of a 3D shape that is like a pyramid with the tip sliced off; think of "frustum" as another 3D shape alongside sphere, cube, and prism.
mesh = geometry + material + orientation
Example set camera width to canvas
function render(time) {
    time *= 0.001;
    const canvas = renderer.domElement;
    camera.aspect = canvas.clientWidth / canvas.clientHeight;
    camera.updateProjectionMatrix();
    ...
npx create-react-app my-app
cd my-app
npm install three @react-three/fiber
npm install --save husky lint-staged prettier
npm run start
## initialize minimum node project with a minimal package.json
npm init -y
## install a package
npm i <some package>
npm i express
https://datatracker.ietf.org/doc/html/rfc7519
https://www.bezkoder.com/jwt-json-web-token/
https://www.bezkoder.com/react-express-authentication-jwt/
https://blog.galmalachi.com/react-nodejs-and-jwt-authentication-the-right-way
JWT is typically in header
x-access-token: [header].[payload].[signature]
USE HTTPS
JWT does NOT secure your data. JWT does not hide, obscure, secure data at all.
The purpose of JWT is to prove that the data is generated by an authentic source.
So, what if there is a Man-in-the-middle attack that can get JWT, then decode user information? Yes, that is possible, so always make sure that your application has the HTTPS encryption.
Where to store the token, per client platform:
Browser: Local Storage
iOS: Keychain
Android: SharedPreferences
Header
Payload
Signature
{
"typ": "JWT",
"alg": "HS256"
}
What is stored in the token
iss: issuer
iat: time issued
exp: expiration
{
"userId": "some user",
"username": "anon",
"email": "anon@anon.io",
// standard fields
"iss": "zKoder, author of bezkoder.com",
"iat": 1570238918,
"exp": 1570238992
}
const data = Base64UrlEncode(header) + '.' + Base64UrlEncode(payload);
const hashedData = Hash(data, secret);
const signature = Base64UrlEncode(hashedData);
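The same construction and verification round trip in Python using the PyJWT library (library choice is my assumption, not from the article):
import time
import jwt  # PyJWT

secret = "my-secret"
payload = {
    "userId": "some user",
    "iat": int(time.time()),         # time issued
    "exp": int(time.time()) + 3600,  # expires in an hour
}

token = jwt.encode(payload, secret, algorithm="HS256")
# token is the "header.payload.signature" string described above

decoded = jwt.decode(token, secret, algorithms=["HS256"])  # verifies signature and exp
print(decoded["userId"])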
https://www.bezkoder.com/node-js-jwt-authentication-mysql/
User Registration
POST api/auth/signup => Check and save user to db
<= Register successfully message
User Login
POST api/auth/signin => Authenticate
<= Create JWT string with secret, return {token, user info, authorities}
Access Resource
JWT on x-access-token header => Check JWT signature, get user info and authenticate
<= return content based on authorization
User Login
POST api/auth/signin => Authenticate
<= Create JWT string with secret, return {token, *refreshToken, user info, authorities}
Access Resources with Expired Token
Request data with JWT in the header => Validate and throw TokenExpiredError
<= Return Token Expired message
Token Refresh
POST api/auth/refreshToken => Verify Refresh Token
<= return {new token, refreshToken}
program:
piece of code that lives on the blockchain
programs are stateless
programs interact with accounts for data
accounts
stores data
users can have 1,000s of accounts
Configuring Solana
Install rust https://doc.rust-lang.org/book/ch01-01-installation.html
Install Solana: https://docs.solana.com/cli/install-solana-cli-tools#use-solanas-install-tool
set Solana network to localhost
solana config set --url localhost
start a local Solana node
solana-test-validator
Install mocha, anchor, npm anchor, npm solana/web3.js
npm install -g mocha
cargo install --git https://github.com/project-serum/anchor anchor-cli --locked
npm install @project-serum/anchor @solana/web3.js
Create a project
anchor init myproject --javascript
Generate local Solana wallet
solana-keygen new
Get public key for local wallet
solana address
airdrop sol
solana airdrop 5 93SAmhpBneKq6UybsFbn5gf9kzAcooCz732bGaGiBehg --url https://api.devnet.solana.com
nanoservices - a service whose overhead outweighs its utility
block template:
block_type label_one label_two {
key = value
embedded_block {
key = value
}
}
var.somevariable
local.someobject.somevar
module.someobject.somevar
provisioners are a last resort; prefer Puppet, Chef, or Ansible
local - executes on local server
remote - executes on remote server
can happen at creation or destruction
example file provisioner with heredoc syntax:
provisioner "file" {
content = <<EOF
access_key =
secret_key =
EOF
destination = "/home/aws-user/.s3cfg"
}
example random int
resource "random_integer" "rand"{
min = 10000
max = 99999
}
merge() - takes two maps and merges them.
terraform init
terraform plan
terraform apply
variable precedence (lowest to highest): environment variables, file, command line
use workspaces as recommended by HashiCorp
terraform workspace new Development
terraform plan -out dev.tfplan
terraform apply "dev.tfplan"
use vars from workspace
locals {
  env_name = lower(terraform.workspace)

  common_tags = {
    Environment = local.env_name
  }
}
3 options: variables file, env var, secrets management
use env vars for credentials by simply exporting and referencing them
"bucket" {
module = "some-bucket"
name = ".\\Modules\somefiles"
source
}
"aws_s3_bucket_object" {
resource = module.bucket.bucket_id
bucket ...]
[ }
registry.terraform.io
code and deploy without needing to worry about infra and scale
only available for node.js and python as of 4/7/22
PAAS - good for web apps
supports more languages than Compute Services
Postgres and MySQL
Cloud Storage - == S3, blob storage
Persistent Disk - block storage, up to 16 persistent disks per VM
Filestore - == NAS
Cloud Spanner - distributed transaction support; depends on TrueTime, 200ms clock drift globally
M.P.R. or N.C.S.
Moving -> Network
Processing -> Compute
Remembering -> Storage
working with buckets
gsutil
gsutil ls gs://some_bucket
gsutil mb -l somelocation gs://some_bucket
gsutil label get gs://some_bucket
gsutil label get/set …
gsutil label ch -l "label:value" gs://somebucket
gsutil versioning get gs://somebucket
gsutil versioning set on gs://somebucket
Use ls -a to see versioning
-a Includes non-current object versions / generations in the listing (only useful with a versioning-enabled bucket). If combined with -l option also prints metageneration for each listed object.
gsutil ls -a
gsutil acl ch -u AllUsers:R gs://somebucket/someobject
gcloud
gcloud compute machine-types list --filter f1-micro
gcloud config get-value project
gcloud services list
gcloud compute instances list
gcloud compute instances create somevm
gcloud compute instances delete somevm
gcloud init
gcloud config list
gcloud config configurations create SOMECONFIG
gcloud config configurations activate SOMECONFIG
gcloud config set|unset
gcloud config get-value
gcloud compute machine-types list --filter="NAME:f1-micro"
gcloud compute machine-types list --filter="NAME:f1-micro AND ZONE:us-east*"
gcloud compute instances list
gcloud compute ssh myhappyvm
curl -H "Metadata-Flavor: Google" metadata.google.internal/computeMetadata/v1/
curl -H "Metadata-Flavor: Google" metadata.google.internal/computeMetadata/v1/instance
gcloud config get project
gcloud config set compute/region us-east
gcloud config set compute/zone us-east1-b
gsutil mb -p playground-someproject -c Standard -l us -b on gs://challengevmbucket
gcloud compute instances create challengevm --preemptible --no-restart-on-failure --maintenance-policy=terminate --machine-type=f1-micro
AAA Data flow
-> AuthN: authentication
-> AuthZ: authorization; IAM Identity and Access Management
authz hierarchy
organization
folders
project
-> Acct: accounting; the system records failed logins; GCS Object Lifecycle management
Least Privilege
defense in depth
fail securely
https://owasp.org/Top10/A04_2021-Insecure_Design/#secure-design
primitive roles
- viewer: read-only
- editor: viewer + change
- owner: editor + access and billing management
predefined roles - used for specific GCP resources
custom role
project or organization level
user
serviceAccount
group
domain
allAuthenticatedUsers - *makes the resource public to any authenticated Google account; DON'T USE
allUsers - anon
collection of accounts and service accounts
every group has an email addr
binds members to roles
attach policies to resource
Managing policies
gcloud [GROUP] add-iam-policy-binding [RESOURCE-NAME] --role [ROLE-ID-TO-GRANT] --member user: [USER-EMAIL]
gcloud [GROUP] remove-iam-policy-binding [RESOURCE-NAME] --role [ROLE-ID-TO-REVOKE] --member user: [USER-EMAIL]
commands missing:
autoscaling
firewall rules
service accounts