Monday, June 30, 2025

Sync data from Informix to Oracle using Debezium/Kafka Connect/Kafka/JDBC Sink

The following commands and configuration details are helpful when setting up Debezium to sync data from Informix to an Oracle database.

1. Docker Compose file for Kafka Connect, Kafka, and ZooKeeper

Copy the Informix connector and streaming JAR files into the connectors directory in the same location as the Docker Compose file.

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.5.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
    ports:
      - "2181:2181"
  kafka:
    image: confluentinc/cp-kafka:7.5.0
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  connect:
    image: my-debezium-connect  # Custom image you’ll build below
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8083:8083"
    environment:
      BOOTSTRAP_SERVERS: kafka:9092
      GROUP_ID: connect-cluster
      CONFIG_STORAGE_TOPIC: connect-configs
      OFFSET_STORAGE_TOPIC: connect-offsets
      STATUS_STORAGE_TOPIC: connect-status
      KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
      CONFIG_STORAGE_REPLICATION_FACTOR: 1
      OFFSET_STORAGE_REPLICATION_FACTOR: 1
      STATUS_STORAGE_REPLICATION_FACTOR: 1
      VALUE_CONVERTER_SCHEMAS_ENABLE: "false"
      KEY_CONVERTER_SCHEMAS_ENABLE: "false"
    volumes:
      - ./connectors:/kafka/connectors

2. Build the Docker Compose images

sudo docker compose build --no-cache

3. Stop the Docker Compose containers

sudo docker compose down

4. Start the Docker Compose containers

sudo docker compose up -d

5. Check the Kafka Connect container logs

sudo docker compose logs -f connect

6. Informix source connector JSON (informix-source.json)

{
  "name": "informix-source-connector",
  "config": {
    "connector.class": "io.debezium.connector.informix.InformixConnector",
    "tasks.max": "1",
    "database.hostname": "172.27.xx.xx",
    "database.port": "1528",
    "database.user": "xxxx",
    "database.password": "xxxx@23",
    "database.dbname": "xxxx",
    "database.server.name": "informix",
    "database.informixserver":"debezium_test",
    "topic.prefix": "informixcdc",
    "table.include.list": "table1",
    "snapshot.mode": "schema_only",
    "name": "informix-source-connector",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.informix"
  }
}
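Debezium publishes each captured table to a topic named <topic.prefix>.<schema>.<table>; for the configuration above that is informixcdc.informix.table1, which is the topic the Oracle sink subscribes to later. A small illustrative helper (not part of Debezium) to predict the name:

```python
def debezium_topic(prefix: str, schema: str, table: str) -> str:
    """Build the Kafka topic name Debezium derives for a captured table."""
    return f"{prefix}.{schema}.{table}"

# The "topics" value used by the Oracle sink connector below:
print(debezium_topic("informixcdc", "informix", "table1"))
# informixcdc.informix.table1
```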

7. Deploy Informix source connector to Kafka connect

curl -X POST http://localhost:8083/connectors -H "Content-Type: application/json" -d @informix-source.json | jq

8. Check the status of the connector

sudo curl http://localhost:8083/connectors/informix-source-connector/status | jq

9. List the connector plugins installed in Kafka Connect

curl -s http://localhost:8083/connector-plugins | jq

10. Delete Informix source connector

sudo curl -X DELETE http://localhost:8083/connectors/informix-source-connector

11. Log in to the Kafka Connect container

sudo docker exec -it cdc1-connect-1 sh

12. Oracle JDBC sink connector JSON (oracle-sink.json)

{
  "name": "oracle-sink-connector",
  "config": {
    "connector.class": "io.debezium.connector.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "informixcdc.informix.table1",

    "connection.url": "jdbc:oracle:thin:@//172.27.xx.xxx:1521/PDB_TST",
    "connection.username": "TEST",
    "connection.password": "test123",
    "connection.driver": "oracle.jdbc.OracleDriver",

    "insert.mode": "upsert",
    "delete.mode": "none",

    "primary.key.fields": "mobile_no",
    "primary.key.mode": "record_value",
    "table.name.format": "table1",

    "auto.evolve": "false",
    "auto.create": "false",
    "dialect.name": "OracleDatabase",

    "schema.evolution":"basic"

  }
}

13. Deploy Oracle sink connector

curl -X POST -H "Content-Type: application/json" --data @oracle-sink.json http://localhost:8083/connectors

14. Check Oracle sink connector status

curl http://localhost:8083/connectors/oracle-sink-connector/status | jq

15. Restart Oracle sink connector

curl -X POST http://localhost:8083/connectors/oracle-sink-connector/tasks/0/restart

16. Check configuration of Oracle connector

curl http://localhost:8083/connectors/oracle-sink-connector/config | jq
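The status checks above can also be scripted. A minimal sketch using only the Python standard library (it assumes the Connect REST API is reachable at localhost:8083, as in the compose file):

```python
import json
from urllib.request import urlopen

CONNECT_URL = "http://localhost:8083"  # assumption: Connect REST API address

def is_healthy(status: dict) -> bool:
    """True when the connector and every one of its tasks report RUNNING."""
    if status.get("connector", {}).get("state") != "RUNNING":
        return False
    return all(t.get("state") == "RUNNING" for t in status.get("tasks", []))

def check_connector(name: str) -> bool:
    """Fetch /connectors/<name>/status and evaluate its health."""
    with urlopen(f"{CONNECT_URL}/connectors/{name}/status", timeout=5) as resp:
        return is_healthy(json.load(resp))
```

is_healthy() treats a connector as healthy only when the connector itself and every task report RUNNING; a FAILED task is the usual signal to go back to the Connect container logs.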

17. Log in to the Kafka container and check the first 5 messages in the topic

docker exec -it cdc1-kafka-1 sh -c "kafka-console-consumer --bootstrap-server localhost:9092 --topic informixcdc.informix.table1 --from-beginning --max-messages 5"

/bin/kafka-console-consumer   --bootstrap-server kafka:9092   --topic informixcdc.informix.smgt004   --from-beginning   --max-messages 5

18. Retrieve details of the Topic

/bin/kafka-topics --bootstrap-server kafka:9092 --describe --topic informixcdc.informix.smgt004




Thursday, February 20, 2025

Check certificate expiry using Python

The following Python script can be used to check HTTPS certificate expiry.

import ssl
import socket
import requests
from datetime import datetime

# Configuration
HOST = "mobitel.lk"  # Change to your domain
PORT = 443
THRESHOLD_DAYS = 30  # Alert before expiry (days)
API_URL = "https://your-api.com/alert"  # REST API to notify

def get_ssl_expiry(host, port):
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as ssock:
            cert = ssock.getpeercert()
            expiry_date = datetime.strptime(cert['notAfter'], "%b %d %H:%M:%S %Y %Z")
            return expiry_date

def send_alert():
    data = {"message": f"SSL certificate for {HOST} is expiring soon!"}
    response = requests.post(API_URL, json=data)
    print(f"Alert sent: {response.status_code}")

# Check expiry
expiry_date = get_ssl_expiry(HOST, PORT)
days_left = (expiry_date - datetime.now()).days
if days_left <= THRESHOLD_DAYS:
    send_alert()
    print(f"Certificate expiring in {days_left} days. Alert sent!")
else:
    print(f"Certificate is valid for {days_left} more days.")
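To monitor several domains with the same threshold, the check can be factored into small helpers; a sketch along the lines of the script above (the commented host list is illustrative):

```python
import ssl
import socket
from datetime import datetime

THRESHOLD_DAYS = 30

def should_alert(expiry: datetime, now: datetime,
                 threshold_days: int = THRESHOLD_DAYS) -> bool:
    """True when the certificate expires within the threshold window."""
    return (expiry - now).days <= threshold_days

def cert_expiry(host: str, port: int = 443) -> datetime:
    """Fetch the peer certificate and parse its notAfter timestamp."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as ssock:
            cert = ssock.getpeercert()
    return datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")

# Illustrative host list -- replace with your own domains:
# for host in ("mobitel.lk", "example.com"):
#     print(host, cert_expiry(host))
```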

Thursday, July 25, 2024

Using the Gemini Flash LLM to process images

You can use the following code to process images using the Google Gemini LLM.

import streamlit as st
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel("gemini-1.5-flash")

def get_gemini_response(prompt, image):
    # Send both prompt and image when a prompt is given, otherwise image only
    if prompt != "":
        response = model.generate_content([prompt, image])
    else:
        response = model.generate_content(image)
    return response.text

st.set_page_config(page_title="Generative AI")
st.header("Gemini Image App")
prompt = st.text_input("input : ", key="input")
uploaded_file = st.file_uploader("Choose an image...", type=["jpg", "jpeg", "png"])
image = None

if uploaded_file is not None:
    image = Image.open(uploaded_file)
    st.image(image, caption="Uploaded Image", use_column_width=True)
submit = st.button("Tell me about image")

if submit:
    if image is None:
        st.warning("Please upload an image first.")
    else:
        response = get_gemini_response(prompt, image)
        st.subheader("The response is ")
        st.write(response)

Refer to the GitLab source for more information:

https://gitlab.com/sujithdc/gemini-ai-text-image

Gemini Pro AI Text Processing

You can use the following Python code to process text using the Google Gemini Pro LLM. To execute the code, get an API key from "https://aistudio.google.com/app/apikey" and install the required Python packages using "pip install <library>".

from dotenv import load_dotenv
load_dotenv()
import streamlit as st
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel("gemini-pro")

def get_gemini_response(question):
    response = model.generate_content(question)
    return response.text

st.set_page_config(page_title="Generative AI")
st.header("Gemini AI App")
question = st.text_input("input : ", key="input")
submit = st.button("Ask Question")

if submit:
    response = get_gemini_response(question)
    st.subheader("Gemini Response")
    st.write(response)

Refer to the GitLab source for more information:

https://gitlab.com/sujithdc/gemini-ai-text-image

Friday, August 25, 2023

Mongo Shell Commands

The following commands can be used in the MongoDB shell.

show dbs

  • list of all the databases

use shops

  • switch to the specified database, creating it (shops) if it does not exist

db.createCollection("products")

  • creating the collection (products)

show collections

  • display a list of all the collections 

 db.products.insertOne({"name":"prod_name1"})

  • insert a document into a collection

db.products.insertMany([{"name":"prod_name2"},{"name":"prod_name3"}])

  • insert many products in one command

db.products.insertOne({"name":"test_prod4","vendor":{"name":"vendor_1","address":"address_1"}})

  • insert a document containing an embedded (nested) object

db.student.insertOne({"name":"test_prod5","vendor":{"name":"vendor_1","address":{"address_line1":"line1","address_line2":"line2","city":"city"}}})

  • insert a document with nested objects several levels deep

db.products.find()

  • retrieve all documents

db.products.update({name:"test_prod1"},{$set:{price:"120.00"}})

  • update a document in the "products" collection. updates first document only for name:"test_prod1" 

db.student.update({"name":"test_6"},{$set:{age:26}},{multi:true})

  • update all matching documents ({multi:true})

db.student.update({_id:ObjectId("64dc5856e4cebb07084d8c45")},{$unset:{price:"120.00"}})

  • update the document matched by its ObjectId and remove the price field

db.teaches.update({"th_name":"test_th_2"},{$set:{student:[{"std_name":"test_1"},{"std_name":"test_2"}]}})

  • adding the Student object field

db.student.find({$and:[{"name":"test_4"},{"age":"40"}]})

  • AND operation. returns all objects with name =test_4 and age=40

db.student.find({$or:[{"name":"test_4"},{"name":"test_2"}]})

  • OR operation and returns name = test_2 or test_4

db.student.drop()

  • drop student collection

db.teaches.remove({"name":"t_name1"})

  • remove document name =t_name1

db.teaches.deleteOne({"name":"t_name2"})

  • remove one document with name=t_name2

db.student.aggregate([{$lookup:{from:"teaches",localField:"std_name",foreignField:"student.std_name",as:"studentDetails"}}])

  • aggregation operation in MongoDB to join the "student" collection with the "teaches" collection

db.mycol.find().pretty() 

  • return results in formatted way
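The same operations can be scripted from Python with pymongo (assumes a local mongod and "pip install pymongo"); the filter documents mirror the shell syntax exactly, so small helpers keep them readable:

```python
def and_filter(*conds: dict) -> dict:
    """Build an $and filter document, as in db.student.find({$and:[...]})."""
    return {"$and": list(conds)}

def or_filter(*conds: dict) -> dict:
    """Build an $or filter document, as in db.student.find({$or:[...]})."""
    return {"$or": list(conds)}

# With pymongo the shell commands translate one-to-one, e.g.:
# from pymongo import MongoClient
# db = MongoClient("mongodb://localhost:27017")["shops"]
# db.products.insert_one({"name": "prod_name1"})
# db.student.find(and_filter({"name": "test_4"}, {"age": "40"}))
```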



Wednesday, July 19, 2023

#!/bin/bash

ps -ef | grep "java -jar" > /apps/appcheck

x=$(grep -c "TestApp.jar" /apps/appcheck)
echo "$x"

        if [ "$x" -eq 0 ]
        then
                date
                echo ""
                echo "starting... check logs"
                cd /apps/sujith/test
                nohup java -jar -Xms128m -Xmx256m TestApp.jar 2>> TestApp.log &
        elif [ "$x" -eq 1 ]
        then
                date
                echo "stopping..."
                grep "TestApp.jar" /apps/appcheck | awk '{print $2}' | xargs kill -9
                echo "stopped"
                date
                echo "starting..."
                cd /apps/sujith/test
                nohup java -jar -Xms128m -Xmx256m TestApp.jar 2>> TestApp.log &
        else
                echo "error occurred. check."
        fi


Tuesday, June 27, 2023

kubernetes pod sample files

simple nginx pod

apiVersion: v1
kind: Pod
metadata:
    name: nginx-pod
    labels:
        app: nginx
        tier: dev
spec:
    containers:
        - name: nginx-container
          image: nginx

multiple containers in one pod

apiVersion: v1
kind: Pod
metadata:
  name: nginx-caching-server
  labels:
    purpose: demonstrate-multi-container-pod
spec:
  containers:
  - name: nginx-container1
    image: nginx

  - name: busybox-container2
    image: busybox
    command:
      - sleep
      - "3600"

replica controller

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
spec:
  replicas: 3
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
  selector:
    app: nginx-app

replica set

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs
spec:
  replicas: 3
  template:
    metadata:
      name: nginx-pod
      labels:
        app: nginx-app
        tier: frontend
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: nginx-app
    matchExpressions:
      - {key: tier, operator: In, values: [frontend]}

daemon set
a daemon set guarantees that one instance is up and running on every node

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-ds
spec:
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: gcr.io/google-containers/fluentd-elasticsearch:1.20
  selector:
    matchLabels:
      name: fluentd

run pods only on selected nodes using a daemon set (nodeSelector disktype: ssd)

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
spec:
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
      nodeSelector:
        disktype: ssd
  selector:
    matchLabels:
      name: nginx


batch execution Job. It prints 9 down to 1 when executed.

apiVersion: batch/v1
kind: Job
metadata:
  name: countdown
spec:
  template:
    metadata:
      name: countdown
    spec:
      containers:
      - name: counter
        image: centos:7
        command:
         - "/bin/bash"
         - "-c"
         - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
      restartPolicy: Never

nginx deployment pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    app: nginx-app
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
  selector:
    matchLabels:
      app: nginx-app

redis deployment using Recreate
With "Recreate", all existing pods are terminated at once before the new ones are started.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  labels:
    app: redis

spec:
  replicas: 10
  selector:
    matchLabels:
      app: redis
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis-container
          image: redis:5.0

redis deployment using RollingUpdate
With "RollingUpdate", pods are replaced gradually: new pods come up while old pods go down, bounded by maxSurge and maxUnavailable.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
  labels:
    app: redis

spec:
  replicas: 15
  selector:
    matchLabels:
      app: redis
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 2
  minReadySeconds: 10
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis-container
          image: redis:6.0 #Upgrade to 6.0 --> 6.0.16 --> 6.2.6
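With replicas: 15, maxSurge: 2 and maxUnavailable: 2, the rollout is bounded: never more than 17 pods in total, never fewer than 13 ready. A quick arithmetic check of that window:

```python
def rollout_bounds(replicas: int, max_surge: int, max_unavailable: int) -> tuple[int, int]:
    """Return (minimum ready pods, maximum total pods) during a RollingUpdate."""
    return replicas - max_unavailable, replicas + max_surge

low, high = rollout_bounds(15, 2, 2)
print(low, high)  # 13 17
```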

volume usage
a volume defined at the pod level can be mounted by every container in the pod

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-pod
spec:
  volumes:
  - name: logs
    emptyDir: {}

  containers:
  - name: app-container
    image: alpine
    command: ["/bin/sh"]
    args: ["-c", "while true; do date >> /var/log/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs
      mountPath: /var/log
     
  - name: log-exporter-sidecar
    image: nginx
    ports:
      - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /usr/share/nginx/html

volume "emptyDir" usage

apiVersion: v1
kind: Pod
metadata:
  name: nginx-emptydir
spec:
  containers:
  - name: nginx-container
    image: nginx
    volumeMounts:
    - name: test-vol
      mountPath: /test-mnt
  volumes:
  - name: test-vol
    emptyDir: {}

nginx deployment with 2 replicas

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80