PostList

Wednesday, November 16, 2022

Deploy and Connect to Azure SQL Edge on Mac M1 Chip

 

Problem:
SQL Server is not available for Macs with the Apple M1 (ARM) chip.

Solution:
Use Azure SQL Edge instead; its Docker image runs on ARM64.

1. Deploy an Azure SQL Edge instance using its Docker image

1.1. From a terminal, run:

docker run --cap-add SYS_PTRACE -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=YourPassword' -p 1433:1433 --name azuresqledge -d mcr.microsoft.com/azure-sql-edge

Note: replace "YourPassword" with your own password.
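
To check that the container is up before connecting, you can run the following (the name azuresqledge matches the --name used above):

docker ps --filter name=azuresqledge

docker logs azuresqledge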


2. Connect to the Azure SQL Edge instance using Azure Data Studio


2.1. Click Create a connection



2.2. Enter the connection info below.

Server: localhost

User name: sa

Password: the password you set in step 1.1
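
Alternatively, if you have the sqlcmd command-line tool installed, you can verify the connection from a terminal. A minimal check, assuming the container from step 1.1 is running and "YourPassword" is the password you set there:

sqlcmd -S localhost,1433 -U sa -P 'YourPassword' -Q 'SELECT @@VERSION'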



ref:
- https://hub.docker.com/_/microsoft-azure-sql-edge
- https://medium.com/geekculture/docker-express-running-a-local-sql-server-on-your-m1-mac-8bbc22c49dc9
- https://medium.com/geekculture/how-to-install-sql-server-in-mac-m1-41121e110214

Tuesday, November 15, 2022

Git commit and push

git init -b main

git add .

git status

git commit -m "initial commit"

git push
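
Note: a plain git push only works if the local repository already has a remote with an upstream branch. For a brand-new repository, add the remote first (the URL below is a placeholder) and set the upstream on the first push:

git remote add origin https://github.com/<your-account>/<your-repo>.git

git push -u origin main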

Monday, November 14, 2022

Create a remote Git repository from a local one


1. Run the following commands:

gh auth login

gh repo create 
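
Running gh repo create with no arguments starts an interactive prompt. A non-interactive sketch, using flags documented in the gh manual (the repository name here is just an example):

gh repo create my-repo --private --source=. --push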


ref:

1. https://cli.github.com/manual/gh_repo_create

2. https://docs.github.com/en/get-started/importing-your-projects-to-github/importing-source-code-to-github/adding-locally-hosted-code-to-github


Install gh on macOS

Problems this solves:

- command not found: gh

- command not found: brew


1. Press Command+Space, type Terminal, and press the Enter/Return key.


2. Copy and paste the following command into the Terminal app:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then press the Enter/Return key and wait for the command to finish.


3. Copy and paste the following command into the Terminal app:

echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile


4. Copy and paste the following command:

brew install gh


5. If gh is still not found, load Homebrew into the current shell session:

eval "$(/opt/homebrew/bin/brew shellenv)"


6. Then run the install again:

brew install gh
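

7. To confirm that both tools are now on your PATH:

brew --version

gh --version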

 

ref: 

1. https://macappstore.org/gh/

2. https://apple.stackexchange.com/questions/148901/why-does-my-brew-installation-not-work


Monday, June 17, 2019

Tutorial. Guestbook with Istio 1.1.7 and Visualization




# Installing Istio 1.1.7 on Minikube
minikube start --memory=8192
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.1.7 sh -
cd ./istio-*
export PATH=$PWD/bin:$PATH

for i in install/kubernetes/helm/istio-init/files/crd*yaml; do kubectl apply -f $i; done
kubectl apply -f install/kubernetes/istio-demo.yaml
kubectl get pods -n istio-system 

# wait until all containers are running. Keep checking.
kubectl get pods -n istio-system

# create an ingress gateway
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')

export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')

export INGRESS_HOST=$(minikube ip)

export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT

# Create Guestbook app configurations
mkdir Guestbook
cd Guestbook
mkdir Istio-1.1.7

# ./redis-master.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

# ./redis-slave.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 1 # haan: was 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

# ./frontend.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80

# ./Istio-1.1.7/guestbook-gateway.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: guestbook-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "*"
  gateways:
  - guestbook-gateway
  http:
  - match:
    - uri:
        exact: /frontend
    - uri:
        exact: /
    route:
    - destination:
        host: frontend
        port:
          number: 80
  - route:
    - destination:
        host: frontend
        port:
          number: 80





# Inject Istio to the Guestbook app
istioctl kube-inject -f redis-master.yaml > Istio-1.1.7/redis-master-istio.yaml

istioctl kube-inject -f redis-slave.yaml > Istio-1.1.7/redis-slave-istio.yaml

istioctl kube-inject -f frontend.yaml > Istio-1.1.7/frontend-istio.yaml

# Apply the Istio-injected Guestbook

kubectl apply -f Istio-1.1.7/redis-master-istio.yaml

kubectl apply -f Istio-1.1.7/redis-slave-istio.yaml

kubectl apply -f Istio-1.1.7/frontend-istio.yaml
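
# (optional check) each guestbook pod should eventually show 2/2 in the READY
# column, i.e. the app container plus the injected istio-proxy sidecar
kubectl get pods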


# Apply Gateway for the Guestbook
kubectl apply -f Istio-1.1.7/guestbook-gateway.yaml

# check the access point
echo $GATEWAY_URL 
192.168.64.40:31380

# visit the link through a browser
# leave a message
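
# (optional) the same check from the terminal; an HTTP 200 here means the
# ingress gateway is routing to the guestbook frontend
curl -s -o /dev/null -w "%{http_code}\n" http://$GATEWAY_URL/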



# Collect Metrics
# save as metrics.yaml
# Configuration for metric instances
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: doublerequestcount
  namespace: istio-system
spec:
  compiledTemplate: metric
  params:
    value: "2" # count each request twice
    dimensions:
      reporter: conditional((context.reporter.kind | "inbound") == "outbound", "client", "server")
      source: source.workload.name | "unknown"
      destination: destination.workload.name | "unknown"
      message: '"twice the fun!"'
    monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a Prometheus handler
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  compiledAdapter: prometheus
  params:
    metrics:
    - name: double_request_count # Prometheus metric name
      instance_name: doublerequestcount.instance.istio-system # Mixer instance name (fully-qualified)
      kind: COUNTER
      label_names:
      - reporter
      - source
      - destination
      - message
---
# Rule to send metric instances to a Prometheus handler
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler
    instances: [ doublerequestcount ]

# save as log-entry.yaml


# Configuration for logentry instances
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: newlog
  namespace: istio-system
spec:
  compiledTemplate: logentry
  params:
    severity: '"warning"'
    timestamp: request.time
    variables:
      source: source.labels["app"] | source.workload.name | "unknown"
      user: source.user | "unknown"
      destination: destination.labels["app"] | destination.workload.name | "unknown"
      responseCode: response.code | 0
      responseSize: response.size | 0
      latency: response.duration | "0ms"
    monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a stdio handler
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: newloghandler
  namespace: istio-system
spec:
  compiledAdapter: stdio
  params:
    severity_levels:
      warning: 1 # Params.Level.WARNING
    outputAsJson: true
---
# Rule to send logentry instances to a stdio handler
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: newlogstdio
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: newloghandler
    instances:
    - newlog
---


kubectl apply -f metrics.yaml
kubectl apply -f log-entry.yaml


kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &
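
# (optional) with the port-forward running, Prometheus can also be queried over
# its HTTP API; the Mixer prometheus adapter normally prefixes metric names with
# "istio_", so the metric defined above should show up as istio_double_request_count
curl -s 'http://localhost:9090/api/v1/query?query=istio_double_request_count'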

# query logs


kubectl logs -n istio-system -l istio-mixer-type=telemetry -c mixer | grep "newlog" | grep -v '"destination":"telemetry"' | grep -v '"destination":"pilot"' | grep -v '"destination":"policy"' | grep -v '"destination":"unknown"'



# Visualization

# Install Kiali

bash <(curl -L https://git.io/getLatestKialiOperator)

# Setup Port
kubectl port-forward svc/kiali 20001:20001 -n istio-system

# Access to Kiali
https://localhost:20001/kiali

For Chrome: if access is blocked for security reasons, open chrome://flags/#allow-insecure-localhost and enable "Allow invalid certificates for resources loaded from localhost".
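
If the Kiali page does not load at all, it is worth confirming that the Kiali pod and service exist before port-forwarding (the operator typically labels the pod app=kiali):

kubectl get pods -n istio-system -l app=kiali

kubectl get svc -n istio-system kiali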




# Screenshots of the visualization







Tutorial. Guestbook with Istio 1.0.0


# Installing Istio 1.0.0 on Minikube

minikube start --memory 6144
curl -L https://git.io/getLatestIstio | ISTIO_VERSION=1.0.0 sh -
cd ./istio-*
export PATH=$PWD/bin:$PATH
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
sleep 5
kubectl apply -f install/kubernetes/istio-demo.yaml
kubectl get pods -n istio-system

# wait until all containers are running. Keep checking.
kubectl get pods -n istio-system

# create an ingress gateway

export GATEWAY_URL=$(minikube ip):$(kubectl get svc istio-ingressgateway -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')

# create a logging configuration
# save below as new_metrics.yaml

# Configuration for metric instances
apiVersion: "config.istio.io/v1alpha2"
kind: metric
metadata:
  name: doublerequestcount
  namespace: istio-system
spec:
  value: "2" # count each request twice
  dimensions:
    reporter: conditional((context.reporter.kind | "inbound") == "outbound", "client", "server")
    source: source.workload.name | "unknown"
    destination: destination.workload.name | "unknown"
    message: '"twice the fun!"'
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: prometheus
metadata:
  name: doublehandler
  namespace: istio-system
spec:
  metrics:
  - name: double_request_count # Prometheus metric name
    instance_name: doublerequestcount.metric.istio-system # Mixer instance name (fully-qualified)
    kind: COUNTER
    label_names:
    - reporter
    - source
    - destination
    - message
---
# Rule to send metric instances to a Prometheus handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: doubleprom
  namespace: istio-system
spec:
  actions:
  - handler: doublehandler.prometheus
    instances:
    - doublerequestcount.metric
---
# Configuration for logentry instances
apiVersion: "config.istio.io/v1alpha2"
kind: logentry
metadata:
  name: newlog
  namespace: istio-system
spec:
  severity: '"warning"'
  timestamp: request.time
  variables:
    source: source.labels["app"] | source.workload.name | "unknown"
    user: source.user | "unknown"
    destination: destination.labels["app"] | destination.workload.name | "unknown"
    responseCode: response.code | 0
    responseSize: response.size | 0
    latency: response.duration | "0ms"
  monitored_resource_type: '"UNSPECIFIED"'
---
# Configuration for a stdio handler
apiVersion: "config.istio.io/v1alpha2"
kind: stdio
metadata:
  name: newhandler
  namespace: istio-system
spec:
  severity_levels:
    warning: 1 # Params.Level.WARNING
  outputAsJson: true
---
# Rule to send logentry instances to a stdio handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: newlogstdio
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: newhandler.stdio
    instances:
    - newlog.logentry

---

# apply the logging
kubectl apply -f new_metrics.yaml
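
# (optional check) the mixer objects created above can be listed through their
# CRDs; the resource names below follow the kinds used in new_metrics.yaml
kubectl get metrics.config.istio.io,rules.config.istio.io,logentries.config.istio.io -n istio-system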

kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') 9090:9090 &

# query logs
kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') -c mixer | grep \"instance\":\"newlog.logentry.istio-system\"


# Create Guestbook app configurations
mkdir Guestbook
cd Guestbook
mkdir Istio-1.0.0

# ./redis-master.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

# ./redis-slave.yaml

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
  labels:
    app: redis
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 1 # haan: was 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

# ./frontend.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
  labels:
    app: guestbook
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80

# ./Istio-1.0.0/guestbook-gateway.yaml

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: guestbook-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
spec:
  hosts:
  - "*"
  gateways:
  - guestbook-gateway
  http:
  - match:
    - uri:
        exact: /frontend
    - uri:
        exact: /
    route:
    - destination:
        host: frontend
        port:
          number: 80
  - route:
    - destination:
        host: frontend
        port:
          number: 80


# Inject Istio to the Guestbook app
istioctl kube-inject -f redis-master.yaml > Istio-1.0.0/redis-master-istio.yaml

istioctl kube-inject -f redis-slave.yaml > Istio-1.0.0/redis-slave-istio.yaml

istioctl kube-inject -f frontend.yaml > Istio-1.0.0/frontend-istio.yaml

# Apply the Istio-injected Guestbook

kubectl apply -f Istio-1.0.0/redis-master-istio.yaml

kubectl apply -f Istio-1.0.0/redis-slave-istio.yaml

kubectl apply -f Istio-1.0.0/frontend-istio.yaml


# Apply Gateway for the Guestbook
kubectl apply -f Istio-1.0.0/guestbook-gateway.yaml

# check the access point
echo $GATEWAY_URL 
192.168.64.40:31380

# visit the link through a browser
# leave a message



# query logs
kubectl -n istio-system logs $(kubectl -n istio-system get pods -l istio-mixer-type=telemetry -o jsonpath='{.items[0].metadata.name}') -c mixer | grep \"instance\":\"newlog.logentry.istio-system\"