Snippets Collections
As blockchain adoption surges across various industries, businesses are increasingly exploring decentralized technologies to elevate their online gaming platforms. One standout approach is integrating blockchain with a BET365 clone script—a customizable software solution that replicates the features and functionalities of globally renowned platforms. But to make this integration seamless and future-ready, choosing the right blockchain network becomes critical.
In this forum discussion, we’ll explore the technical suitability of leading blockchain networks such as Ethereum, Solana, and Polygon for scalable applications based on a BET365 clone script. We’ll examine each network’s scalability, transaction speed, fees, ecosystem maturity, and compatibility with smart contract integration.

Why Blockchain Integration Matters in a BET365 Clone Script
Before diving into network comparisons, it’s essential to understand why blockchain is being integrated with a BET365 clone script in the first place. The key benefits include:

Transparency: Every transaction is immutably recorded, enabling trust among users.
Smart Contracts: Automate payouts, manage game rules, and reduce human intervention.
Security: Decentralized infrastructure reduces single points of failure.
Crypto Payments: Offer support for various cryptocurrencies, facilitating global transactions.

With these enhancements, a BET365 clone script becomes more than just a clone—it becomes a next-gen platform powered by decentralized finance (DeFi).

Ethereum: The Pioneer With Trade-offs
Pros:
Vast developer community and extensive documentation
Secure and time-tested smart contract capabilities (Solidity-based)
Large ecosystem with DeFi and NFT integration potential
Cons:
Scalability bottlenecks: Ethereum handles ~15 TPS (transactions per second), which could lead to latency during peak usage
High gas fees: Unpredictable and expensive transaction costs can frustrate users and impact platform profitability

Verdict: While Ethereum remains the industry standard for smart contracts, it’s better suited for projects prioritizing security and interoperability over transaction speed. A BET365 clone script running on Ethereum may require Layer-2 solutions (e.g., Arbitrum or Optimism) to address scalability and cost concerns.

Solana: High-Performance for Real-Time Interactions
Pros:
High throughput (up to ~65,000 TPS, theoretical)
Exceptionally low transaction fees (~$0.00025 per transaction)
Ideal for real-time applications due to its Proof-of-History consensus
Cons:
Network downtime has occurred in the past
Smaller developer ecosystem compared to Ethereum

Verdict: Solana is a top choice for platforms that need high-frequency interactions with minimal latency. A BET365 clone script deployed on Solana can efficiently handle a large user base, ensuring seamless transactions even under heavy load. Its scalability makes it ideal for launching platforms where rapid user interactions are crucial.

Polygon: The Best of Ethereum, Without the Bottlenecks
Pros:
Built as a Layer-2 solution for Ethereum, maintaining compatibility
Significantly reduced transaction fees
Scalable infrastructure (up to 7,000 TPS)
Supported by major DeFi projects and exchanges
Cons:
Layer-2 architecture may introduce centralization concerns for some use cases
Occasionally relies on Ethereum mainnet, which could inherit its latency

Verdict: Polygon strikes a balance between Ethereum's security and Solana's scalability. It’s ideal for businesses seeking faster deployment and cost-efficiency while maintaining compatibility with Ethereum tooling. For developers using a BET365 clone script, Polygon offers a seamless path to build user-centric platforms without compromising performance.

Technical Considerations When Choosing a Blockchain for Your BET365 Clone Script
When choosing the right blockchain, consider these core technical criteria:
Smart Contract Support: Solidity is the dominant language. Ethereum and Polygon support it natively, while Solana uses Rust.
EVM Compatibility: If your BET365 clone script is EVM-compatible, networks like Polygon and BNB Chain offer smooth integration (see the sketch after this list).
Transaction Finality: Look for networks that ensure rapid confirmation times to enhance user experience.
Development Ecosystem: A vibrant developer community means more tools, libraries, and support.
Infrastructure Maturity: Uptime, node availability, and integration with oracles (like Chainlink) are vital for real-world data feeds.
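
As a rough illustration of the EVM-compatibility point above, here is a minimal Python sketch (not part of the original script) using the web3.py library; the Polygon RPC URL is only an example public endpoint. Because EVM chains expose the same JSON-RPC interface, the identical code targets Ethereum, Arbitrum, or BNB Chain simply by swapping the URL.

# Minimal connectivity check against an EVM-compatible chain (illustrative sketch only).
# Assumes web3.py v6+ (pip install web3); the RPC URL is an example, not from the original post.
from web3 import Web3

rpc_url = "https://polygon-rpc.com"  # swap for an Ethereum or other EVM endpoint as needed
w3 = Web3(Web3.HTTPProvider(rpc_url))

if w3.is_connected():
    print("Connected. Latest block:", w3.eth.block_number)
    print("Current gas price (wei):", w3.eth.gas_price)
else:
    print("Could not reach the RPC endpoint.")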


Conclusion: The Right Blockchain Can Supercharge Your BET365 Clone Script
To sum it up:
Use Ethereum for security, rich features, and when you're targeting users familiar with DeFi ecosystems.

Go with Solana if you need lightning-fast processing, high throughput, and near-zero fees.

Choose Polygon for scalability, low cost, and Ethereum compatibility—ideal for mainstream adoption.

Ultimately, the choice depends on your platform’s priorities: speed, cost, developer familiarity, or ecosystem integration.
Looking to Build a Blockchain-Integrated BET365 Clone Script?
  
If you're considering launching a scalable, future-ready platform using blockchain, Coinsqueens offers industry-leading development solutions. Our BET365 clone script is customizable, secure, and built to integrate seamlessly with blockchain networks like Ethereum, Solana, and Polygon. With full-stack blockchain development expertise, we help businesses launch fast, scale faster, and stay ahead in a rapidly evolving space.

For more info:
Call/Whatsapp - +91 87540 53377
Email: sales@coinsqueen.com
Visit https://www.coinsqueens.com/blog/bet365-clone-script 

To disable Codeium suggestions in VS Code: open Settings (Ctrl + ,), search for ".codeiumconfig", open the first option, and click "Edit in settings.json". In settings.json, locate the codeium.enableConfig entry and add these two lines:

    // (part of the surrounding settings block was not visible in the original snippet)
    "autoSuggestions": false,
    "autoComplete": false,
import $ from 'jquery';

class Masthead {
	constructor(element) {
		this.$element = $(element);
		this.$slider = this.$element.find('.masthead--slider');

		this.init();
	}

	init() {
		const itemCount = this.$slider.find('.masthead--slider-item').length;

		if (itemCount > 1) {
			this.$slider.addClass('owl-carousel');

			this.$slider.owlCarousel({
				loop: true,
				dots: true,
				animateIn: true,
				items: 1,
				onInitialized: (event) => this.firstSlideClass(event),
				onTranslated: (event) => this.firstSlideClass(event),
			});
		} else {
			this.$slider.removeClass('owl-carousel');
		}
	}

	firstSlideClass(event) {
		// Get all real (non-cloned) items
		const $realItems = this.$slider.find('.owl-item:not(.cloned)');
		const $allItems = this.$slider.find('.owl-item');

		// Get the first real item DOM reference
		const $firstRealSlide = $realItems.first();

		// Get the index of current slide
		const currentIndex = event.item.index;

		// Get the current DOM element
		const $currentItem = $allItems.eq(currentIndex);

		// Remove first-slide from all
		// $allItems.removeClass('first-slide');

		// Compare: if the current item is the original first slide (or a clone of it)
		const firstSlideContent = $firstRealSlide.html();
		const currentContent = $currentItem.html();

		// Use HTML comparison or some unique attribute to detect match
		if (firstSlideContent === currentContent) {
			$currentItem.addClass('first-slide');
		}
	}
}

$('[data-masthead]').each((index, element) => new Masthead(element));
T1. Write a Python program to import and export data using pandas library functions.
import pandas as pd
csv_file = r"\sample_data.csv"
csv_data = pd.read_csv(csv_file, sep=",")
print("CSV data imported successfully:")
print(csv_data)
excel_file = r"\sample.xlsx"
excel_data = pd.read_excel(excel_file)
print("Excel data imported successfully:")
print(excel_data)
# Export the imported data to new files to demonstrate export
csv_data.to_csv("exported_data.csv", index=False)
excel_data.to_excel("exported_data.xlsx", index=False)
print("Data exported to exported_data.csv and exported_data.xlsx")

T2. Demonstrate the following data pre-processing techniques on the given dataset 2
a. Standardization
b. normalization
c. summarization
d. de-duplication
e. Imputation

Program:
import pandas as pd
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.impute import SimpleImputer
# Sample Data with Missing Values and Duplicates
data = {
 'Name': ['Alice', 'Bob', 'Charlie', 'Alice'],
 'Age': [25, 30, 35, 25],
 'Salary': [50000, 60000, None, 50000],
 'City': ['New York', 'Los Angeles', 'Chicago', 'New York']
}
# Create DataFrame
df = pd.DataFrame(data)
# a. Standardization
scaler = StandardScaler()
df[['Age', 'Salary']] = scaler.fit_transform(df[['Age', 'Salary']])
print("\nStandardized Data:\n", df)
# b. Normalization
normalizer = MinMaxScaler()
df[['Age', 'Salary']] = normalizer.fit_transform(df[['Age', 'Salary']])
print("\nNormalized Data:\n", df)
# c. Summarization
summary = df.describe()
print("\nData Summary:\n", summary)
# d. De-duplication
df_deduplicated = df.drop_duplicates()
print("\nDe-duplicated Data:\n", df_deduplicated)
# e. Imputation
imputer = SimpleImputer(strategy='mean')
df[['Salary']] = imputer.fit_transform(df[['Salary']])
print("\nData with Imputed Values:\n", df)

T3. Implement the Find-S algorithm and the Candidate Elimination algorithm
def find_s_algorithm(examples):
    # Start with the most specific hypothesis
    hypothesis = ['0'] * len(examples[0][0])

    for attributes, label in examples:
        if label == 'Yes':  # Only consider positive examples
            for i in range(len(hypothesis)):
                if hypothesis[i] == '0':
                    hypothesis[i] = attributes[i]
                elif hypothesis[i] != attributes[i]:
                    hypothesis[i] = '?'  # Generalize
    return hypothesis

def candidate_elimination_algorithm(examples):
    num_attributes = len(examples[0][0])
    # Start with most specific S and most general G
    S = ['0'] * num_attributes
    G = [['?' for _ in range(num_attributes)]]

    for instance, label in examples:
        if label == 'Yes':
            # Remove from G any hypothesis inconsistent with the instance
            G = [g for g in G if consistent(g, instance)]

            for i in range(num_attributes):
                if S[i] == '0':
                    S[i] = instance[i]
                elif S[i] != instance[i]:
                    S[i] = '?'
        else:  # label == 'No'
            G_new = []
            for g in G:
                for i in range(num_attributes):
                    if g[i] == '?':
                        if S[i] != instance[i]:
                            g_new = g.copy()
                            g_new[i] = S[i]
                            if g_new not in G_new:
                                G_new.append(g_new)
            G = G_new
    return S, G

def consistent(hypothesis, instance):
    for h, x in zip(hypothesis, instance):
        if h != '?' and h != x:
            return False
    return True

# Each row is a tuple (attributes, label)
dataset = [
    (['Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'], 'Yes'),
    (['Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'], 'Yes'),
    (['Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'], 'No'),
    (['Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'], 'Yes'),
]

# Find-S Output
hypothesis = find_s_algorithm(dataset)
print("Final hypothesis from Find-S:", hypothesis)

# Candidate Elimination Output
S, G = candidate_elimination_algorithm(dataset)
print("Final specific hypothesis (S):", S)
print("Final general hypotheses (G):", G)


T4. Demonstrate regression techniques to predict the responses at unknown locations by fitting linear and polynomial regression surfaces. Extract error measures and plot the residuals. Further, add a regularizer and demonstrate the reduction in variance (Ridge and LASSO).

import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
# In cmd, if needed: pip install numpy matplotlib seaborn scikit-learn
# 1. Generate synthetic data
np.random.seed(42)
X = 2 - 3 * np.random.normal(0, 1, 100)
y = X**3 + X**2 + np.random.normal(0, 5, 100)
X = X.reshape(-1, 1)

# 2. Split the dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# 3. Linear Regression
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
y_pred_lin = lin_reg.predict(X_test)
rmse_lin = np.sqrt(mean_squared_error(y_test, y_pred_lin))
r2_lin = r2_score(y_test, y_pred_lin)

# 4. Polynomial Regression
poly = PolynomialFeatures(degree=3)
X_train_poly = poly.fit_transform(X_train)
X_test_poly = poly.transform(X_test)

poly_reg = LinearRegression()
poly_reg.fit(X_train_poly, y_train)
y_pred_poly = poly_reg.predict(X_test_poly)
rmse_poly = np.sqrt(mean_squared_error(y_test, y_pred_poly))
r2_poly = r2_score(y_test, y_pred_poly)

# 5. Residual Plot for Polynomial Regression
residuals = y_test - y_pred_poly
plt.figure(figsize=(8, 5))
sns.residplot(x=y_pred_poly, y=residuals, lowess=False, color='g')
plt.title("Residual Plot - Polynomial Regression")
plt.xlabel("Predicted")
plt.ylabel("Residuals")
plt.axhline(0, color='red', linestyle='--')
plt.show()

# 6. Ridge Regression
ridge = Ridge(alpha=1)
ridge.fit(X_train_poly, y_train)
y_pred_ridge = ridge.predict(X_test_poly)
rmse_ridge = np.sqrt(mean_squared_error(y_test, y_pred_ridge))
r2_ridge = r2_score(y_test, y_pred_ridge)

# 7. Lasso Regression
lasso = Lasso(alpha=0.1)
lasso.fit(X_train_poly, y_train)
y_pred_lasso = lasso.predict(X_test_poly)
rmse_lasso = np.sqrt(mean_squared_error(y_test, y_pred_lasso))
r2_lasso = r2_score(y_test, y_pred_lasso)

# 8. Plotting all models
X_range = np.linspace(X.min(), X.max(), 100).reshape(-1, 1)
X_range_poly = poly.transform(X_range)

plt.figure(figsize=(10, 6))
plt.scatter(X, y, label="Original Data", alpha=0.6)
plt.plot(X_range, lin_reg.predict(X_range), label="Linear", color="blue")
plt.plot(X_range, poly_reg.predict(X_range_poly), label="Polynomial (deg 3)", color="green")
plt.plot(X_range, ridge.predict(X_range_poly), label="Ridge", color="purple")
plt.plot(X_range, lasso.predict(X_range_poly), label="Lasso", color="orange")
plt.title("Regression Models Comparison")
plt.xlabel("X")
plt.ylabel("y")
plt.legend()
plt.grid(True)
plt.show()

# 9. Print performance
print("Model Performance Summary:\n")
print(f"Linear Regression     -> RMSE: {rmse_lin:.2f}, R²: {r2_lin:.2f}")
print(f"Polynomial Regression -> RMSE: {rmse_poly:.2f}, R²: {r2_poly:.2f}")
print(f"Ridge Regression      -> RMSE: {rmse_ridge:.2f}, R²: {r2_ridge:.2f}")
print(f"Lasso Regression      -> RMSE: {rmse_lasso:.2f}, R²: {r2_lasso:.2f}")

T5. Demonstrate the capability of PCA and LDA in dimensionality reduction.
# Import necessary libraries
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target
target_names = iris.target_names

# Apply PCA to reduce dimensions to 2
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

# Apply LDA to reduce dimensions to 2
lda = LinearDiscriminantAnalysis(n_components=2)
X_lda = lda.fit_transform(X, y)

# Plot PCA results
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
colors = ['navy', 'turquoise', 'darkorange']
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
    plt.scatter(X_pca[y == i, 0], X_pca[y == i, 1], alpha=0.8, color=color, label=target_name)
plt.title('PCA on Iris Dataset')
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')
plt.legend()

# Plot LDA results
plt.subplot(1, 2, 2)
for color, i, target_name in zip(colors, [0, 1, 2], target_names):
    plt.scatter(X_lda[y == i, 0], X_lda[y == i, 1], alpha=0.8, color=color, label=target_name)
plt.title('LDA on Iris Dataset')
plt.xlabel('Linear Discriminant 1')
plt.ylabel('Linear Discriminant 2')
plt.legend()

plt.tight_layout()
plt.show()

# Explained variance for PCA
explained_variance_ratio = pca.explained_variance_ratio_
print("Explained variance by PCA components:", explained_variance_ratio)


T6. KNN
# Import necessary libraries
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Initialize the K-NN classifier with K=5
knn = KNeighborsClassifier(n_neighbors=5)

# Fit the model
knn.fit(X_train_scaled, y_train)

# Predict on the test set
y_pred = knn.predict(X_test_scaled)

# Evaluation
print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nClassification Report:\n", classification_report(y_test, y_pred))

# Confusion Matrix
conf_matrix = confusion_matrix(y_test, y_pred)
sns.heatmap(conf_matrix, annot=True, cmap="Blues", xticklabels=iris.target_names, yticklabels=iris.target_names)
plt.xlabel("Predicted")
plt.ylabel("Actual")
plt.title("Confusion Matrix - KNN")
plt.show()


T7. Apply a suitable classifier model to classify credit status as good or bad on the German credit dataset (German credit dataset.csv), and create a confusion matrix to measure the accuracy of the model (using Logistic Regression/SVM/Naïve Bayes).
Dataset -> https://online.stat.psu.edu/stat857/node/215/
# Step 1: Import libraries
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, accuracy_score, ConfusionMatrixDisplay

# Step 2: Load the German Credit dataset
df = pd.read_csv("German credit dataset.csv")

# Step 3: Preprocess the data
# Encode categorical columns
df_encoded = df.copy()
label_encoders = {}

for column in df_encoded.select_dtypes(include=['object']).columns:
    le = LabelEncoder()
    df_encoded[column] = le.fit_transform(df_encoded[column])
    label_encoders[column] = le

# Step 4: Split data into features (X) and target (y)
# Assuming 'Creditability' or similar is the target column; adjust if needed
target_column = 'Creditability'  # Update this if your dataset has a different column
X = df_encoded.drop(target_column, axis=1)
y = df_encoded[target_column]

# Step 5: Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 6: Train a classifier (e.g., Logistic Regression)
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Step 7: Make predictions
y_pred = model.predict(X_test)

# Step 8: Evaluate the model
cm = confusion_matrix(y_test, y_pred)
accuracy = accuracy_score(y_test, y_pred)

print("✅ Confusion Matrix:\n", cm)
print("\n🎯 Accuracy Score:", accuracy)

# Optional: Display confusion matrix visually
ConfusionMatrixDisplay(confusion_matrix=cm).plot()
plt.show()


T8. Apply a train/test split and develop a regression model to predict the sold price of players using imb381ipl2013.csv. Build a correlation matrix between all the numeric features in the dataset, visualize it as a heatmap, and report the RMSE on train and test data.

# Import necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Load the dataset
file_path = "imb381ipl2013.csv"  # Replace with your file path if needed
data = pd.read_csv(file_path)

# Display basic information and head of the dataset
print("Dataset Info:")
print(data.info())
print("\nFirst 5 Rows:")
print(data.head())

# Check for missing values and drop rows with NaN
data.dropna(inplace=True)

# Define target variable (Sold Price) and numeric features
# (keep only numeric columns so LinearRegression can be fit directly;
#  the target column is assumed to be named 'Sold Price')
numeric_data = data.select_dtypes(include=[np.number])
X = numeric_data.drop(columns=['Sold Price'])
y = numeric_data['Sold Price']

# Split the data into training and testing sets (80-20 split)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Build the Linear Regression model
model = LinearRegression()
model.fit(X_train, y_train)

# Make predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)

# Calculate RMSE for train and test data
rmse_train = np.sqrt(mean_squared_error(y_train, y_train_pred))
rmse_test = np.sqrt(mean_squared_error(y_test, y_test_pred))

print(f"\nRMSE on Training Data: {rmse_train:.2f}")
print(f"RMSE on Test Data: {rmse_test:.2f}")

# Build correlation matrix for numeric features
numeric_features = data.select_dtypes(include=[np.number])
correlation_matrix = numeric_features.corr()

# Plot heatmap of the correlation matrix
plt.figure(figsize=(10, 8))
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', fmt='.2f', linewidths=0.5)
plt.title('Correlation Matrix of Numeric Features')
plt.show()

T11. For the glass identification dataset, fit random forest classifier to classify glass type

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.preprocessing import StandardScaler
import seaborn as sns
import matplotlib.pyplot as plt

# Load dataset
# Download from: https://archive.ics.uci.edu/ml/datasets/glass+identification
# Assuming the file is named 'glass.csv' with proper column names
column_names = ['RI', 'Na', 'Mg', 'Al', 'Si', 'K', 'Ca', 'Ba', 'Fe', 'Type']
data = pd.read_csv('glass.csv', names=column_names)

# Features and target
X = data.drop('Type', axis=1)
y = data['Type']

# Normalize features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Train-test split
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.2, random_state=42)

# Random Forest Classifier
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Predictions
y_pred = clf.predict(X_test)

# Evaluation
print("Accuracy:", accuracy_score(y_test, y_pred))
print("\nClassification Report:\n", classification_report(y_test, y_pred))
print("\nConfusion Matrix:\n", confusion_matrix(y_test, y_pred))

# Optional: Plot confusion matrix heatmap
plt.figure(figsize=(8,6))
sns.heatmap(confusion_matrix(y_test, y_pred), annot=True, fmt="d", cmap='Blues')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('Confusion Matrix')
plt.show()


T12. Implement the K-Means clustering algorithm using Python. You may use a library such as scikit-learn for this purpose.

# Import necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler

# Generate sample data using make_blobs
# Create 300 samples with 3 cluster centers
X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=0.60, random_state=42)

# Standardize the data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Visualize the raw data
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], s=50, c='gray', marker='o')
plt.title('Generated Raw Data')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()

# Apply K-Means Clustering
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
kmeans.fit(X_scaled)

# Get the cluster labels and cluster centers
y_kmeans = kmeans.labels_
centers = kmeans.cluster_centers_

# Visualize the clusters
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=y_kmeans, s=50, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1], c='red', marker='X', s=200, label='Centroids')
plt.title('K-Means Clustering Results')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()
plt.show()

# Print cluster centers and inertia
print("Cluster Centers (after scaling):\n", centers)
print(f"Inertia (Sum of Squared Distances): {kmeans.inertia_:.2f}")

# Calculate the optimal number of clusters using the Elbow Method
inertia_values = []
k_range = range(1, 11)

for k in k_range:
    kmeans = KMeans(n_clusters=k, random_state=42, n_init=10)
    kmeans.fit(X_scaled)
    inertia_values.append(kmeans.inertia_)

# Plot the Elbow Method
plt.plot(k_range, inertia_values, marker='o')
plt.title('Elbow Method to Determine Optimal k')
plt.xlabel('Number of Clusters (k)')
plt.ylabel('Inertia (Sum of Squared Distances)')
plt.show()


T13. Implement the Agglomerative Hierarchical clustering algorithm using Python. Utilize linkage methods such as 'ward', 'complete', or 'average'.

# Import necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# Generate sample data using make_blobs
# Create 300 samples with 3 cluster centers
X, y_true = make_blobs(n_samples=300, centers=3, cluster_std=0.70, random_state=42)

# Standardize the data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Plot the raw data
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], s=50, c='gray', marker='o')
plt.title('Generated Raw Data')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()

# Define linkage methods to be used
linkage_methods = ['ward', 'complete', 'average']

# Plot dendrograms for different linkage methods
plt.figure(figsize=(15, 5))
for i, method in enumerate(linkage_methods):
    plt.subplot(1, 3, i + 1)
    Z = linkage(X_scaled, method=method)
    dendrogram(Z, truncate_mode='level', p=5)
    plt.title(f'Dendrogram using {method.capitalize()} Linkage')
    plt.xlabel('Data Points')
    plt.ylabel('Distance')

plt.tight_layout()
plt.show()

# Apply Agglomerative Clustering using 'ward' linkage
n_clusters = 3  # Number of clusters
model = AgglomerativeClustering(n_clusters=n_clusters, linkage='ward')
y_pred = model.fit_predict(X_scaled)

# Plot the clusters
plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=y_pred, cmap='viridis', s=50)
plt.title('Agglomerative Hierarchical Clustering (Ward Linkage)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.show()

[
    {
        "id": "1a995dd199999201",
        "type": "tab",
        "label": "Flow 6",
        "disabled": false,
        "info": "",
        "env": []
    },
    {
        "id": "805da3545a35d191",
        "type": "inject",
        "z": "1a995dd199999201",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "1",
        "crontab": "",
        "once": true,
        "onceDelay": "1",
        "topic": "",
        "payload": "",
        "payloadType": "date",
        "x": 190,
        "y": 100,
        "wires": [
            [
                "89542d9b10fef801",
                "6f99897aef86caef"
            ]
        ]
    },
    {
        "id": "89542d9b10fef801",
        "type": "http request",
        "z": "1a995dd199999201",
        "name": "",
        "method": "GET",
        "ret": "obj",
        "paytoqs": "ignore",
        "url": "https://api.openweathermap.org/data/2.5/weather?q=London&appid=8ff56d883da02c95c015b78fbb6dd8ed",
        "tls": "",
        "persist": false,
        "proxy": "",
        "insecureHTTPParser": false,
        "authType": "",
        "senderr": false,
        "headers": [],
        "x": 410,
        "y": 100,
        "wires": [
            [
                "f62fc4e2650c38ac",
                "2f73374e96d77a29",
                "fe5a67f4e6f8058c",
                "cdaa8f80ebb3ea0d"
            ]
        ]
    },
    {
        "id": "f62fc4e2650c38ac",
        "type": "debug",
        "z": "1a995dd199999201",
        "name": "debug 4",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "false",
        "statusVal": "",
        "statusType": "auto",
        "x": 660,
        "y": 100,
        "wires": []
    },
    {
        "id": "2f73374e96d77a29",
        "type": "change",
        "z": "1a995dd199999201",
        "name": "",
        "rules": [
            {
                "t": "set",
                "p": "payload",
                "pt": "msg",
                "to": "payload.name",
                "tot": "str"
            }
        ],
        "action": "",
        "property": "",
        "from": "",
        "to": "",
        "reg": false,
        "x": 460,
        "y": 220,
        "wires": [
            [
                "5b84d6a82970ab89"
            ]
        ]
    },
    {
        "id": "fe5a67f4e6f8058c",
        "type": "change",
        "z": "1a995dd199999201",
        "name": "",
        "rules": [
            {
                "t": "set",
                "p": "payload",
                "pt": "msg",
                "to": "payload.main.temp",
                "tot": "str"
            }
        ],
        "action": "",
        "property": "",
        "from": "",
        "to": "",
        "reg": false,
        "x": 460,
        "y": 320,
        "wires": [
            [
                "64fac722a91457fc"
            ]
        ]
    },
    {
        "id": "cdaa8f80ebb3ea0d",
        "type": "change",
        "z": "1a995dd199999201",
        "name": "",
        "rules": [
            {
                "t": "set",
                "p": "payload",
                "pt": "msg",
                "to": "payload.main.humidity",
                "tot": "str"
            }
        ],
        "action": "",
        "property": "",
        "from": "",
        "to": "",
        "reg": false,
        "x": 460,
        "y": 420,
        "wires": [
            [
                "dc4b2487c962c02e"
            ]
        ]
    },
    {
        "id": "5b84d6a82970ab89",
        "type": "ui_text",
        "z": "1a995dd199999201",
        "group": "0fdf39278f746926",
        "order": 0,
        "width": 0,
        "height": 0,
        "name": "",
        "label": "city",
        "format": "{{msg.payload}}",
        "layout": "row-spread",
        "className": "",
        "style": false,
        "font": "",
        "fontSize": 16,
        "color": "#000000",
        "x": 670,
        "y": 220,
        "wires": []
    },
    {
        "id": "64fac722a91457fc",
        "type": "ui_text",
        "z": "1a995dd199999201",
        "group": "0fdf39278f746926",
        "order": 1,
        "width": 0,
        "height": 0,
        "name": "",
        "label": "temperature",
        "format": "{{msg.payload}}",
        "layout": "row-spread",
        "className": "",
        "style": false,
        "font": "",
        "fontSize": 16,
        "color": "#000000",
        "x": 690,
        "y": 320,
        "wires": []
    },
    {
        "id": "dc4b2487c962c02e",
        "type": "ui_text",
        "z": "1a995dd199999201",
        "group": "0fdf39278f746926",
        "order": 2,
        "width": 0,
        "height": 0,
        "name": "",
        "label": "humidity",
        "format": "{{msg.payload}}",
        "layout": "row-spread",
        "className": "",
        "style": false,
        "font": "",
        "fontSize": 16,
        "color": "#000000",
        "x": 680,
        "y": 420,
        "wires": []
    },
    {
        "id": "fbf47a74ddea9456",
        "type": "ui_slider",
        "z": "1a995dd199999201",
        "name": "",
        "label": "slider",
        "tooltip": "",
        "group": "0fdf39278f746926",
        "order": 3,
        "width": 0,
        "height": 0,
        "passthru": true,
        "outs": "all",
        "topic": "topic",
        "topicType": "msg",
        "min": "1",
        "max": "100",
        "step": 1,
        "className": "",
        "x": 670,
        "y": 280,
        "wires": [
            [
                "64fac722a91457fc"
            ]
        ]
    },
    {
        "id": "95f7fb543be84820",
        "type": "ui_button",
        "z": "1a995dd199999201",
        "name": "",
        "group": "0fdf39278f746926",
        "order": 4,
        "width": 0,
        "height": 0,
        "passthru": false,
        "label": "reset",
        "tooltip": "",
        "color": "",
        "bgcolor": "",
        "className": "",
        "icon": "",
        "payload": "0",
        "payloadType": "num",
        "topic": "topic",
        "topicType": "msg",
        "x": 690,
        "y": 380,
        "wires": [
            [
                "dc4b2487c962c02e"
            ]
        ]
    },
    {
        "id": "6f99897aef86caef",
        "type": "function",
        "z": "1a995dd199999201",
        "name": "function 3",
        "func": "msg.payload = Math.random()*30;\nreturn msg;",
        "outputs": 1,
        "timeout": 0,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 140,
        "y": 280,
        "wires": [
            [
                "ade13a3d209e1dd6",
                "da8cd534482d8752"
            ]
        ]
    },
    {
        "id": "da8cd534482d8752",
        "type": "ui_gauge",
        "z": "1a995dd199999201",
        "name": "",
        "group": "0fdf39278f746926",
        "order": 6,
        "width": 0,
        "height": 0,
        "gtype": "gage",
        "title": "gauge",
        "label": "units",
        "format": "{{value}}",
        "min": 0,
        "max": "100",
        "colors": [
            "#00b500",
            "#e6e600",
            "#ca3838"
        ],
        "seg1": "",
        "seg2": "",
        "diff": false,
        "className": "",
        "x": 250,
        "y": 420,
        "wires": []
    },
    {
        "id": "ade13a3d209e1dd6",
        "type": "ui_chart",
        "z": "1a995dd199999201",
        "name": "",
        "group": "0fdf39278f746926",
        "order": 5,
        "width": 0,
        "height": 0,
        "label": "chart",
        "chartType": "line",
        "legend": "false",
        "xformat": "HH:mm:ss",
        "interpolate": "linear",
        "nodata": "",
        "dot": false,
        "ymin": "1",
        "ymax": "100",
        "removeOlder": 1,
        "removeOlderPoints": "",
        "removeOlderUnit": "3600",
        "cutout": 0,
        "useOneColor": false,
        "useUTC": false,
        "colors": [
            "#1f77b4",
            "#aec7e8",
            "#ff7f0e",
            "#2ca02c",
            "#98df8a",
            "#d62728",
            "#ff9896",
            "#9467bd",
            "#c5b0d5"
        ],
        "outputs": 1,
        "useDifferentColor": false,
        "className": "",
        "x": 230,
        "y": 340,
        "wires": [
            []
        ]
    },
    {
        "id": "0fdf39278f746926",
        "type": "ui_group",
        "name": "reddy",
        "tab": "62b00e22779b2a07",
        "order": 1,
        "disp": true,
        "width": 6,
        "collapse": false,
        "className": ""
    },
    {
        "id": "62b00e22779b2a07",
        "type": "ui_tab",
        "name": "suhas",
        "icon": "dashboard",
        "disabled": false,
        "hidden": false
    }
]
[
    {
        "id": "7b56d60993240319",
        "type": "tab",
        "label": "Flow 5",
        "disabled": false,
        "info": "",
        "env": []
    },
    {
        "id": "591d73438d3b1756",
        "type": "mqtt in",
        "z": "7b56d60993240319",
        "name": "subscriber",
        "topic": "test 1",
        "qos": "2",
        "datatype": "auto-detect",
        "broker": "7c853a4f0832bb4a",
        "nl": false,
        "rap": true,
        "rh": 0,
        "inputs": 0,
        "x": 240,
        "y": 180,
        "wires": [
            [
                "1848ca99d0e87843"
            ]
        ]
    },
    {
        "id": "1848ca99d0e87843",
        "type": "debug",
        "z": "7b56d60993240319",
        "name": "debug 3",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "false",
        "statusVal": "",
        "statusType": "auto",
        "x": 520,
        "y": 180,
        "wires": []
    },
    {
        "id": "7c853a4f0832bb4a",
        "type": "mqtt-broker",
        "name": "suhas",
        "broker": "127.0.0.1",
        "port": 1883,
        "clientid": "",
        "autoConnect": true,
        "usetls": false,
        "protocolVersion": 4,
        "keepalive": 60,
        "cleansession": true,
        "autoUnsubscribe": true,
        "birthTopic": "",
        "birthQos": "0",
        "birthRetain": "false",
        "birthPayload": "",
        "birthMsg": {},
        "closeTopic": "",
        "closeQos": "0",
        "closeRetain": "false",
        "closePayload": "",
        "closeMsg": {},
        "willTopic": "",
        "willQos": "0",
        "willRetain": "false",
        "willPayload": "",
        "willMsg": {},
        "userProps": "",
        "sessionExpiry": ""
    }
]
[
    {
        "id": "31e5d1095b1eb838",
        "type": "tab",
        "label": "Flow 4",
        "disabled": false,
        "info": "",
        "env": []
    },
    {
        "id": "add8abb43c706953",
        "type": "mqtt out",
        "z": "31e5d1095b1eb838",
        "name": "",
        "topic": "",
        "qos": "2",
        "retain": "",
        "respTopic": "",
        "contentType": "",
        "userProps": "",
        "correl": "",
        "expiry": "",
        "broker": "7c853a4f0832bb4a",
        "x": 450,
        "y": 160,
        "wires": []
    },
    {
        "id": "1f712c7d35bd3b86",
        "type": "inject",
        "z": "31e5d1095b1eb838",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "test",
        "payload": "hello",
        "payloadType": "str",
        "x": 180,
        "y": 160,
        "wires": [
            [
                "add8abb43c706953"
            ]
        ]
    },
    {
        "id": "7c853a4f0832bb4a",
        "type": "mqtt-broker",
        "name": "suhas",
        "broker": "127.0.0.1",
        "port": 1883,
        "clientid": "",
        "autoConnect": true,
        "usetls": false,
        "protocolVersion": 4,
        "keepalive": 60,
        "cleansession": true,
        "autoUnsubscribe": true,
        "birthTopic": "",
        "birthQos": "0",
        "birthRetain": "false",
        "birthPayload": "",
        "birthMsg": {},
        "closeTopic": "",
        "closeQos": "0",
        "closeRetain": "false",
        "closePayload": "",
        "closeMsg": {},
        "willTopic": "",
        "willQos": "0",
        "willRetain": "false",
        "willPayload": "",
        "willMsg": {},
        "userProps": "",
        "sessionExpiry": ""
    }
]
[
    {
        "id": "008f2793d8e0b213",
        "type": "tab",
        "label": "Flow 3",
        "disabled": false,
        "info": "",
        "env": []
    },
    {
        "id": "a31c98c0e6a8f44c",
        "type": "inject",
        "z": "008f2793d8e0b213",
        "name": "",
        "props": [
            {
                "p": "payload"
            },
            {
                "p": "topic",
                "vt": "str"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "num",
        "payload": "[10,20,30]",
        "payloadType": "json",
        "x": 160,
        "y": 180,
        "wires": [
            [
                "f38eccc2aa9fc71a"
            ]
        ]
    },
    {
        "id": "f38eccc2aa9fc71a",
        "type": "function",
        "z": "008f2793d8e0b213",
        "name": "function 2",
        "func": "var num = msg.payload;\nvar sum = 0;\nfor (var i in num){\n    sum+=num[i];\n}\nmsg.payload = `sum is ${sum}`\nreturn msg;",
        "outputs": 1,
        "timeout": 0,
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 420,
        "y": 180,
        "wires": [
            [
                "42a45807b1f2b163"
            ]
        ]
    },
    {
        "id": "42a45807b1f2b163",
        "type": "debug",
        "z": "008f2793d8e0b213",
        "name": "debug 2",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "false",
        "statusVal": "",
        "statusType": "auto",
        "x": 620,
        "y": 180,
        "wires": []
    }
]
[
    {
        "id": "72c82f5f84d17524",
        "type": "tab",
        "label": "Flow 2",
        "disabled": false,
        "info": "",
        "env": []
    },
    {
        "id": "inject_num1",
        "type": "inject",
        "z": "72c82f5f84d17524",
        "name": "Inject Number 1",
        "props": [
            {
                "p": "payload"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "5",
        "payloadType": "num",
        "x": 140,
        "y": 160,
        "wires": [
            [
                "change_num1"
            ]
        ]
    },
    {
        "id": "inject_num2",
        "type": "inject",
        "z": "72c82f5f84d17524",
        "name": "Inject Number 2",
        "props": [
            {
                "p": "payload"
            }
        ],
        "repeat": "",
        "crontab": "",
        "once": false,
        "onceDelay": 0.1,
        "topic": "",
        "payload": "7",
        "payloadType": "num",
        "x": 140,
        "y": 220,
        "wires": [
            [
                "change_num2"
            ]
        ]
    },
    {
        "id": "change_num1",
        "type": "change",
        "z": "72c82f5f84d17524",
        "name": "Set msg.num1",
        "rules": [
            {
                "t": "set",
                "p": "num1",
                "pt": "msg",
                "to": "payload",
                "tot": "msg"
            }
        ],
        "x": 330,
        "y": 160,
        "wires": [
            [
                "function_sum"
            ]
        ]
    },
    {
        "id": "change_num2",
        "type": "change",
        "z": "72c82f5f84d17524",
        "name": "Set msg.num2",
        "rules": [
            {
                "t": "set",
                "p": "num2",
                "pt": "msg",
                "to": "payload",
                "tot": "msg"
            }
        ],
        "x": 330,
        "y": 220,
        "wires": [
            [
                "function_sum"
            ]
        ]
    },
    {
        "id": "function_sum",
        "type": "function",
        "z": "72c82f5f84d17524",
        "name": "Add Two Numbers",
        "func": "var a = msg.num1 || flow.get(\"num1\");\nvar b = msg.num2 || flow.get(\"num2\");\n\nflow.set(\"num1\", a);\nflow.set(\"num2\", b);\n\nif (a !== undefined && b !== undefined) {\n    var s = a + b;\n    msg.payload = `sum of ${a} and ${b} is ${s}`;\n    return msg;\n}\nreturn null;\n",
        "outputs": 1,
        "timeout": "",
        "noerr": 0,
        "initialize": "",
        "finalize": "",
        "libs": [],
        "x": 550,
        "y": 190,
        "wires": [
            [
                "debug_sum"
            ]
        ]
    },
    {
        "id": "debug_sum",
        "type": "debug",
        "z": "72c82f5f84d17524",
        "name": "Show Sum",
        "active": true,
        "tosidebar": true,
        "console": false,
        "tostatus": false,
        "complete": "payload",
        "targetType": "msg",
        "statusVal": "",
        "statusType": "auto",
        "x": 740,
        "y": 190,
        "wires": []
    }
]
import pandas as pd

# Import data from a CSV file
data = pd.read_csv('input_data.csv')
print("Imported Data:")
print(data.head())

# Export data to a new CSV file
data.to_csv('exported_data.csv', index=False)
print("\nData exported to 'exported_data.csv'")
$("div [widget='widget']").css("border", "1px solid red").css("padding-top", "20px").css("position", "relative").each(function(i, obj){
    var scope = angular.element(obj).scope();
    var widget = scope.widget;
    var elem = $("<div style='position: absolute; top: 1px; left: 1px'><a target='_blank' href='/$sp.do?id=widget_editor&sys_id="+ widget.sys_id+"'> "+ widget.name +"</a>&nbsp;&nbsp;</div>");
    var printScope = $("<a href='javascript:void(0);'>Print scope</a>").on('click', function(){ console.info(scope); });
    elem.append(printScope);
    $(this).append(elem);
    });
adb kill-server
adb start-server
adb usb
adb tcpip 5555
adb connect <DEVICE_LOCAL_IP>
Join the best Optometry College in Madurai! Boston Institute offers expert training, top placements, and modern facilities to shape your future in eye care. Enroll today for a brighter tomorrow!
Statistics is a fundamental subject in various academic disciplines, including mathematics, economics, business, and social sciences. It involves data collection, analysis, interpretation, and presentation, making it an essential skill for students pursuing research and analytical careers. However, many students struggle with statistics assignments due to the complexity of concepts such as probability distributions, hypothesis testing, regression analysis, and data visualization.

Challenges in Statistics Assignments

Statistics assignments require precision, critical thinking, and a deep understanding of formulas and methodologies. Some common challenges students face include:

Complex Theories and Formulas – Understanding statistical formulas like standard deviation, chi-square tests, and ANOVA can be daunting for many students (a short illustrative sketch follows this list).

Data Interpretation – Analyzing large datasets and deriving meaningful insights requires both technical knowledge and logical reasoning.

Software Proficiency – Many assignments require using software like SPSS, R, Python, or Excel, which can be overwhelming for beginners.

Time Constraints – Students often juggle multiple subjects and deadlines, making it difficult to allocate enough time for statistics assignments.
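
As a quick, hypothetical illustration of the kind of computation these formulas involve, the short Python sketch below uses NumPy and SciPy; the sample data is made up for demonstration only.

# Illustrative sketch of standard deviation, a chi-square test, and one-way ANOVA
# (hypothetical data; requires numpy and scipy)
import numpy as np
from scipy import stats

scores = np.array([72, 85, 90, 66, 78, 88, 95, 70])
print("Sample standard deviation:", np.std(scores, ddof=1))

# Chi-square goodness-of-fit test against a uniform expectation
observed = np.array([18, 22, 20, 40])
chi2, p = stats.chisquare(observed)
print("Chi-square:", chi2, "p-value:", p)

# One-way ANOVA across three groups
g1, g2, g3 = [70, 75, 80], [65, 72, 68], [85, 90, 88]
f_stat, p_anova = stats.f_oneway(g1, g2, g3)
print("ANOVA F:", f_stat, "p-value:", p_anova)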

How Seeking Help Can Improve Academic Performance

Getting statistics assignment help (https://myassignmenthelp.com/statistics_assignment_help.html) allows students to enhance their learning experience and develop a structured approach to problem-solving. Some benefits of seeking help include:

Conceptual Clarity – Expert guidance helps in understanding statistical theories more effectively.

Error Reduction – Professional insights minimize calculation and interpretation errors.

Efficient Time Management – With proper guidance, students can complete assignments efficiently and focus on other academic responsibilities.

Exposure to Advanced Techniques – Learning from experts familiarizes students with modern statistical tools and methodologies.

Enhancing Your Statistical Skills

To excel in statistics, students should focus on strengthening their analytical skills, practicing regularly, and utilizing reliable resources. Engaging in online courses, academic forums, and practice exercises can significantly improve proficiency in statistics.

For those struggling with assignments, seeking assistance from platforms like Myassignmenthelp.com can be beneficial. While external support can provide clarity, developing a strong conceptual foundation remains crucial for long-term academic success.
20 GROUPS OF AI TOOLS YOU NEED TO KNOW AND MASTER IN 2025
A. AI for writing, speaking & effective communication
1. Thinking & problem solving:
ChatGPT, Gemini, Meta AI, DeepSeek, Copilot
2. Summarizing, translating & processing long documents:
Claude, Qwen, Wordtune, ChatGPT
3. Content writing & editing:
Writesonic, Grammarly, DeepAI
4. Creating slides & presentations:
SlidesAI, Gamma.app, Copilot, SlideGo
5. Boosting personal productivity:
Copilot, ExcelGPT, Notion AI, Taskade AI
(Helps improve work efficiency, reporting, and planning)
B. Image, video & personal branding design
6. Creative image generation:
MidJourney, DALL·E 3, Diffusion, OpenArt
7. Professional graphic design:
Leonardo AI, Adobe Firefly, Designs AI
8. Advanced photo editing:
Remini, Canva AI, DeepImage
9. Avatars & personalized images:
StarryAI, Fotor, Creatify
10. Visual brand building:
Looka, Brandmark, Logo AI
(Helps women create their own logos and brand identities)
C. AI video & music production
11. Professional AI video creation:
Synthesia, HeyGen, VideoGen, TopView, Pictory
12. Short-form TikTok/Reels videos:
Fliki, Steve.ai, Veed.io, Short
13. Simple, easy-to-use video editing:
Capcut, Pictory, VideoGen
14. AI music composition:
Soundraw, Suno, iLoveSong
15. Podcasts & synthetic voices:
ElevenLabs, Play.ht, Voicemaker
(For podcast content and presentation videos)
D. For programming & systems
16. Writing code & programming:
Replit, Github Copilot, Codeium
17. Building AI chatbots:
Manychat, Chatbase, Botpress
(Suitable for automated customer support)
18. Data analysis & AI for Excel:
SheetAI, MonkeyLearn, ExcelGPT
(For office and accounting work)
E. Multi-purpose, all-in-one tools
19. Website & landing page builders:
Durable AI, 10Web, Framer AI
(For launching online businesses quickly)
20. All-in-one AI ecosystems:
Notion AI, Taskade, FlowGPT, AIToolsKit
(Combining multiple tools in one platform)
const express = require('express');
const jwt = require('jsonwebtoken');

const app = express();
const PORT = 3000;
const SECRET_KEY = 'secret';

app.use(express.json());


app.post('/login', (req, res) => {
  const { username, password } = req.body;

  if (username === 'user' && password === '123') {
    const token = jwt.sign({ username }, SECRET_KEY);
    res.json({ token });
  } else {
    res.status(401).json({ message: 'Invalid credentials' });
  }
});


function auth(req, res, next) {
  const token = req.headers.authorization?.split(' ')[1];
  if (!token) return res.sendStatus(401);

  jwt.verify(token, SECRET_KEY, (err, user) => {
    if (err) return res.sendStatus(403);
    req.user = user;
    next();
  });
}


app.get('/protected', auth, (req, res) => {
  res.json({ message: 'Welcome!', user: req.user });
});

app.listen(PORT, () => {
  console.log(`Server running on http://localhost:${PORT}`);
});

npm init -y
npm install express jsonwebtoken body-parser

npm init -y
npm install express ejs node-fetch@2   # node-fetch v3+ is ESM-only; v2 can be loaded with require()

project-folder/
├── views/
│   └── posts.ejs
├── app.js

app.js

const express = require('express');
const fetch = require('node-fetch');
const app = express();
const PORT = 3000;

// Set EJS as the view engine
app.set('view engine', 'ejs');

// Route to fetch API data and render
app.get('/', async (req, res) => {
  try {
    const response = await fetch('https://jsonplaceholder.typicode.com/posts');
    const posts = await response.json();

    res.render('posts', { posts: posts.slice(0, 5) }); // limit to 5 posts
  } catch (error) {
    res.status(500).send('Error fetching data');
  }
});

app.listen(PORT, () => {
  console.log(`Server running at http://localhost:${PORT}`);
});


views/posts.ejs

<!DOCTYPE html>
<html>
<head>
  <title>Posts Table</title>
  <style>
    table {
      width: 80%;
      border-collapse: collapse;
      margin: 20px auto;
    }
    th, td {
      padding: 10px;
      border: 1px solid #ccc;
      text-align: left;
    }
    th {
      background-color: #f4f4f4;
    }
  </style>
</head>
<body>
  <h2 style="text-align:center;">Posts from API</h2>
  <table>
    <thead>
      <tr>
        <th>ID</th>
        <th>Title</th>
        <th>Body</th>
      </tr>
    </thead>
    <tbody>
      <% posts.forEach(post => { %>
        <tr>
          <td><%= post.id %></td>
          <td><%= post.title %></td>
          <td><%= post.body %></td>
        </tr>
      <% }) %>
    </tbody>
  </table>
</body>
</html>
function* evenNumberGenerator() {
  let num = 0;
  while (true) {
    yield num;
    num += 2;
  }
}

const evenGen = evenNumberGenerator();

console.log(evenGen.next().value); // 0
console.log(evenGen.next().value); // 2
console.log(evenGen.next().value); // 4
console.log(evenGen.next().value); // 6
console.log(evenGen.next().value); // 8
function delayStep(stepName) {
  return new Promise((resolve) => {
    setTimeout(() => {
      console.log(`${stepName} completed`);
      resolve();
    }, 1000);
  });
}


async function runSteps() {
  await delayStep("Step 1");
  await delayStep("Step 2");
  await delayStep("Step 3");
  console.log("All steps completed");
}

runSteps();
function step1(callback) {
  setTimeout(() => {
    console.log("Step 1 completed");
    callback();
  }, 1000);
}

function step2(callback) {
  setTimeout(() => {
    console.log("Step 2 completed");
    callback();
  }, 1000);
}

function step3(callback) {
  setTimeout(() => {
    console.log("Step 3 completed");
    callback();
  }, 1000);
}


step1(() => {
  step2(() => {
    step3(() => {
      console.log("All steps completed");
    });
  });
});
<!DOCTYPE html>
<html>
<head>
  <title>Fetch API Example</title>
</head>
<body>
  <h1>Posts</h1>
  <div id="posts"></div>

  <script>
    fetch('https://jsonplaceholder.typicode.com/posts')
      .then(response => response.json())
      .then(data => {
        const postsDiv = document.getElementById('posts');
        data.slice(0, 5).forEach(post => {
          const postElement = document.createElement('div');
          postElement.innerHTML = `<h3>${post.title}</h3><p>${post.body}</p>`;
          postsDiv.appendChild(postElement);
        });
      })
      .catch(error => console.error('Error fetching data:', error));
  </script>
</body>
</html>
const path = require('path');

const filePath = '/users/student/projects/app/index.js';

console.log('Directory Name:', path.dirname(filePath));
console.log('Base Name:', path.basename(filePath));
console.log('Extension Name:', path.extname(filePath));
console.log('Join Paths:', path.join('/users', 'student', 'docs'));
console.log('Resolve Path:', path.resolve('app', 'index.js'));
console.log('Is Absolute:', path.isAbsolute(filePath));
const http = require('http');

const server = http.createServer((req, res) => {
  const { url } = req;

  if (url === '/html') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<h1>Welcome to the HTML response</h1>');
  } else if (url === '/json') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ message: 'This is a JSON response', status: 'success' }));
  } else if (url === '/text') {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('This is a plain text response.');
  } else if (url === '/js') {
    res.writeHead(200, { 'Content-Type': 'application/javascript' });
    res.end('console.log("JavaScript response from server");');
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Resource not found');
  }
});

server.listen(3000, () => {
  console.log('Server running at http://localhost:3000');
});
const fs = require('fs');

fs.writeFile('async.txt', 'This is written using writeFile (Async)', (err) => {
  if (err) throw err;
  console.log('File created and written successfully.');

  fs.readFile('async.txt', 'utf8', (err, data) => {
    if (err) throw err;
    console.log('File content:', data);

    fs.appendFile('async.txt', '\nThis is an additional line (Async)', (err) => {
      if (err) throw err;
      console.log('Content appended.');

      fs.unlink('async.txt', (err) => {
        if (err) throw err;
        console.log('File deleted.');
      });
    });
  });
});
const fs = require('fs');

fs.writeFileSync('sync.txt', 'This is written using writeFileSync');
console.log('File created and written successfully.');

const data = fs.readFileSync('sync.txt', 'utf8');
console.log('File content:', data);

fs.appendFileSync('sync.txt', '\nThis is an additional line.');
console.log('Content appended.');

fs.unlinkSync('sync.txt');
console.log('File deleted.');
const os = require('os');

console.log("Hostname:", os.hostname());
console.log("User Info:", os.userInfo());
console.log("Home Directory:", os.homedir());
console.log("Uptime (secs):", os.uptime());
console.log("Total Memory:", os.totalmem());
console.log("Free Memory:", os.freemem());
console.log("Network Interfaces:", os.networkInterfaces());
console.log("Platform:", os.platform());
console.log("Architecture:", os.arch());
console.log("CPU Info:", os.cpus());
const { MongoClient } = require('mongodb');

const url = 'mongodb://127.0.0.1:27017';
const dbName = 'company';
const client = new MongoClient(url);

async function run() {
  try {
    await client.connect();
    console.log("Connected to MongoDB");

    const db = client.db(dbName);
    const users = db.collection('Users');

    await users.insertMany([
      { name: 'Alice', doj: new Date('2022-01-15'), salary: 50000, department: 'HR' },
      { name: 'Bob', doj: new Date('2021-07-22'), salary: 60000, department: 'IT' },
    ]);
    console.log("Users inserted.");

    const allUsers = await users.find({}).toArray();
    console.log("All Users:", allUsers);

    await users.updateOne({ name: 'Alice' }, { $set: { salary: 55000 } });
    console.log("Updated Alice's salary.");

    await users.deleteOne({ name: 'Bob' });
    console.log("Bob removed from Users.");

  } finally {
    await client.close();
    console.log("MongoDB connection closed.");
  }
}

run().catch(console.dir);
<?php
// Postback handler: reject requests that do not carry the shared secret
if ($_POST['password'] != 'secretpassword') {
    echo "Access Denied.";
    exit;
}
// Cast the incoming values to numbers so raw POST data never reaches the SQL string
$payout = (float) $_POST['payout'];
$tracking_id = (int) $_POST['tracking_id'];
mysqli_query($db, "UPDATE users SET total = total + $payout WHERE tracking_id = $tracking_id");
?>
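A safer variant of the same postback update keeps POST data out of the SQL string entirely by using a prepared statement. This is only a sketch, assuming $db is an open mysqli connection and the same users table as above:

<?php
// Hypothetical prepared-statement version of the postback update (assumes $db is a mysqli connection)
if ($_POST['password'] != 'secretpassword') {
    echo "Access Denied.";
    exit;
}

$stmt = mysqli_prepare($db, "UPDATE users SET total = total + ? WHERE tracking_id = ?");
// "d" binds payout as a double, "i" binds tracking_id as an integer
mysqli_stmt_bind_param($stmt, "di", $_POST['payout'], $_POST['tracking_id']);
mysqli_stmt_execute($stmt);
mysqli_stmt_close($stmt);
?>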
#include <stdio.h>
#define SUCCESS 1
#define FAILED 0
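
/* Recursive descent parser for the grammar:
   E  -> T E'        E' -> + T E' | ε
   T  -> F T'        T' -> * F T' | ε
   F  -> ( E ) | i
*/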

const char *cursor;
char input[64];

int E(), Edash(), T(), Tdash(), F();

int main() {
    printf("Enter the string: ");
    scanf("%s", input);
    cursor = input;

    printf("\nInput         Action\n");
    printf("-------------------------------\n");

    if (E() && *cursor == '\0') {
        printf("-------------------------------\n");
        printf("String is successfully parsed\n");
    } else {
        printf("-------------------------------\n");
        printf("Error in parsing String\n");
    }
    return 0;
}

int E() {
    printf("%-15s E -> T E'\n", cursor);
    return T() && Edash();
}

int Edash() {
    if (*cursor == '+') {
        printf("%-15s E' -> + T E'\n", cursor);
        cursor++;
        return T() && Edash();
    }
    printf("%-15s E' -> ε\n", cursor);
    return SUCCESS;
}

int T() {
    printf("%-15s T -> F T'\n", cursor);
    return F() && Tdash();
}

int Tdash() {
    if (*cursor == '*') {
        printf("%-15s T' -> * F T'\n", cursor);
        cursor++;
        return F() && Tdash();
    }
    printf("%-15s T' -> ε\n", cursor);
    return SUCCESS;
}

int F() {
    if (*cursor == '(') {
        printf("%-15s F -> ( E )\n", cursor);
        cursor++;
        if (E() && *cursor == ')') {
            cursor++;
            return SUCCESS;
        }
        return FAILED;
    } else if (*cursor == 'i') {
        printf("%-15s F -> i\n", cursor);
        cursor++;
        return SUCCESS;
    }
    return FAILED;
}
#include <stdio.h>
#include <string.h>

char prod[2][10] = { "S->aA", "A->b" };
char nonTerminals[2][10] = { "S", "A" };
char terminals[3][10] = { "a", "b", "$" };

char table[3][4][15]; // 3 rows (S, A + header), 4 columns (a, b, $, + header)

int getRow(char nt) {
    switch (nt) {
        case 'S': return 1;
        case 'A': return 2;
    }
    return 0;
}

int getCol(char t) {
    switch (t) {
        case 'a': return 1;
        case 'b': return 2;
        case '$': return 3;
    }
    return 0;
}

int main() {
    // Initialize table with empty strings
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 4; j++)
            strcpy(table[i][j], " ");

    // Fill headers
    strcpy(table[0][0], " ");
    strcpy(table[0][1], "a");
    strcpy(table[0][2], "b");
    strcpy(table[0][3], "$");

    strcpy(table[1][0], "S");
    strcpy(table[2][0], "A");

    // Fill table using FIRST sets
    strcpy(table[getRow('S')][getCol('a')], "S->aA");
    strcpy(table[getRow('A')][getCol('b')], "A->b");

    // Print the table
    printf("Predictive Parsing Table:\n");
    printf("-----------------------------------------\n");

    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 4; j++) {
            printf("%-12s", table[i][j]);
        }
        printf("\n-----------------------------------------\n");
    }

    return 0;
}
#include <stdio.h>
#include <ctype.h>
#include <string.h>
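
/* Tiny lexer: scans the input one character at a time, buffering alphanumeric runs and
   emitting KEYWORD / NUMBER / IDENTIFIER tokens, plus OPERATOR and SEPARATOR tokens for
   single characters. */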

char keywords[10][10] = {"int", "float", "char", "if", "else", "while", "do", "return", "for", "void"};

int isKeyword(char *word) {
    for (int i = 0; i < 10; i++) {
        if (strcmp(keywords[i], word) == 0)
            return 1;
    }
    return 0;
}

void lexer(char *code) {
    int i = 0;
    char ch, buffer[20];
    int bufferIndex = 0;

    while ((ch = code[i++]) != '\0') {
        if (isalnum(ch)) {
            buffer[bufferIndex++] = ch;
        } else {
            if (bufferIndex != 0) {
                buffer[bufferIndex] = '\0';
                bufferIndex = 0;

                if (isKeyword(buffer))
                    printf("[KEYWORD => %s]\n", buffer);
                else if (isdigit(buffer[0]))
                    printf("[NUMBER => %s]\n", buffer);
                else
                    printf("[IDENTIFIER => %s]\n", buffer);
            }

            if (ch == ' ' || ch == '\n' || ch == '\t')
                continue;

            if (ch == '+' || ch == '-' || ch == '*' || ch == '/' || ch == '=')
                printf("[OPERATOR => %c]\n", ch);

            if (ch == '(' || ch == ')' || ch == ';' || ch == '{' || ch == '}')
                printf("[SEPARATOR => %c]\n", ch);
        }
    }
}

int main() {
    char code[1000];
    printf("Enter code (e.g., int a = 10;):\n");
    fgets(code, sizeof(code), stdin);

    printf("\n--- Lexical Tokens ---\n");
    lexer(code);
    return 0;
}
// Simulate a function that returns a promise
function fetchStudentData(id) {
  return new Promise((resolve) => {
    setTimeout(() => {
      resolve({ id, name: "John Doe", gpa: 3.7 });
    }, 2000);
  });
}

// Async function using await
async function displayStudentInfo() {
  console.log("⏳ Fetching student data...");
  const student = await fetchStudentData(101);
  console.log("📄 Student Info:", student);
}

displayStudentInfo();
const { MongoClient } = require('mongodb');

const url = 'mongodb://127.0.0.1:27017';
const dbName = 'college';
const client = new MongoClient(url);

async function run() {
  try {
    await client.connect();
    console.log("✅ Connected to MongoDB");

    const db = client.db(dbName);
    const students = db.collection('Students');

    // CREATE
    await students.insertMany([
      { name: 'John', rollNo: 101, section: 'A', gpa: 3.7 },
      { name: 'Emma', rollNo: 102, section: 'B', gpa: 3.9 },
    ]);
    console.log("📝 Students inserted.");

    // READ
    const allStudents = await students.find().toArray();
    console.log("📄 All Students:", allStudents);

    // UPDATE
    await students.updateOne({ rollNo: 101 }, { $set: { gpa: 3.8 } });
    console.log("🔧 GPA updated for Roll No 101.");

    // DELETE
    await students.deleteOne({ rollNo: 102 });
    console.log("🗑️ Student with Roll No 102 deleted.");
    
  } finally {
    await client.close();
    console.log("🔒 Connection closed.");
  }
}

run().catch(console.error);
#include <stdio.h>
#include <ctype.h>
#include <string.h>
#include <stdbool.h>

bool is_identifier(const char *token) {
    if (!isalpha(token[0]) && token[0] != '_')
        return false;
    for (int i = 1; token[i] != '\0'; i++) {
        if (!isalnum(token[i]) && token[i] != '_')
            return false;
    }
    return true;
}

bool is_constant(const char *token) {
    int i = 0, dot_count = 0;
    if (token[i] == '-' || token[i] == '+') i++;

    if (token[i] == '\0') return false;

    for (; token[i] != '\0'; i++) {
        if (token[i] == '.') {
            if (++dot_count > 1) return false;
        } else if (!isdigit(token[i])) {
            return false;
        }
    }
    return true;
}

bool is_operator(const char *token) {
    const char *operators[] = {"+", "-", "*", "/", "=", "==", "!=", "<", ">", "<=", ">="};
    for (int i = 0; i < sizeof(operators) / sizeof(operators[0]); i++) {
        if (strcmp(token, operators[i]) == 0)
            return true;
    }
    return false;
}

int main() {
    char token[100];
    printf("Enter a token: ");
    scanf("%s", token);

    if (is_operator(token))
        printf("Operator\n");
    else if (is_constant(token))
        printf("Constant\n");
    else if (is_identifier(token))
        printf("Identifier\n");
    else
        printf("Unknown\n");

    return 0;
}
#include <stdio.h>
#include <string.h>
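
/* Accepts strings of the form a^n b^n (n >= 1): a two-state scan rejects any 'a' that
   appears after a 'b', and counters confirm the numbers of a's and b's are equal. */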

int isValidString(char *str) {
    int count_a = 0, count_b = 0;
    int state = 0; // q0

    printf("\nTransition Table\n");
    printf("Current State | Input | Next State\n");
    printf("-------------------------------\n");

    for (int i = 0; str[i] != '\0'; i++) {
        char ch = str[i];

        printf("q%d            |  %c    | ", state, ch);

        if (state == 0) {
            if (ch == 'a') {
                count_a++;
                printf("q0\n");
            } else if (ch == 'b') {
                state = 1;
                count_b++;
                printf("q1\n");
            } else {
                printf("Reject (Invalid character)\n");
                return 0;
            }
        } else if (state == 1) {
            if (ch == 'b') {
                count_b++;
                printf("q1\n");
            } else if (ch == 'a') {
                printf("Reject (a after b)\n");
                return 0;
            } else {
                printf("Reject (Invalid character)\n");
                return 0;
            }
        }
    }

    // Final validation
    if (state == 1 && count_a == count_b && count_a > 0) {
        printf("\nThe string is valid (a^n b^n)\n");
        return 1;
    } else {
        printf("\nThe string is invalid (a^n b^n)\n");
        return 0;
    }
}

int main() {
    char input[100];
    printf("Enter the string: ");
    scanf("%99s", input); // Prevent buffer overflow
    isValidString(input);
    return 0;
}
:root {
  --color-main: #333333; 
  --color-alert: #ffecef;
}

.error { 
	color: var(--color-alert); 
}
/* Use CSS attribute selectors to display the URL when an <a> element has no text content
   but its href attribute is set. */

a[href^="http"]:empty::before {
    content: attr(href);
}
/* Pseudo-element reference (usage sketch below):

   ::first-line    Selects the first line of content in an element. Typically applied to
                   paragraphs (for example, p::first-line). Useful for first-line run-in effects.

   ::first-letter  Selects the first letter of an element. Typically applied to paragraphs or
                   headings (for example, p::first-letter). Useful for initials and drop caps.

   ::before        Inserts generated content before an element's content. Has all sorts of uses:
                   opening quote marks for pull quotes, separators between navigation bar links,
                   and much more.

   ::after         Just like ::before, with all the same uses, but generates content after an
                   element's content.

   ::selection     Changes the appearance of text the user has selected. */
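A small usage sketch for the pseudo-elements above; the .article and .pullquote class names are placeholders, not part of the original snippet:

/* Drop cap on the first letter of each article paragraph */
.article p::first-letter {
  float: left;
  font-size: 2.5em;
  line-height: 1;
  margin-right: 0.1em;
}

/* Small-caps run-in effect on the first line */
.article p::first-line {
  font-variant: small-caps;
}

/* Decorative quote marks around pull quotes */
.pullquote::before { content: "\201C"; }
.pullquote::after  { content: "\201D"; }

/* Highlight color for selected text */
::selection {
  background: #ffecb3;
  color: #333333;
}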
a:has( > img ) { 
	border: 1px solid #000; 
}
header img:only-child { 
	width: 100%; height: auto; 
}
.nav li:not(:last-child) {
  border-right: 1px solid #666;
}
/* list with not selector */

.post > *:not( img ):not( video ) {
    margin-left: auto;
    margin-right: auto;
    max-width: 50rem;
    padding-left: 5%;
    padding-right: 5%;
}