Recent posts

MSE vs RMSE: Differences and Use Cases

Both MSE (Mean Squared Error) and RMSE (Root Mean Squared Error) are metrics used to evaluate predictive models, especially in regression. They have different characteristics and suit different scenarios.

1. MSE (Mean Squared Error)

Definition: MSE = (1/n) Σ (yᵢ - ŷᵢ)², where yᵢ is the true value, ŷᵢ is the predicted value, and n is the number of samples.

Characteristics: Squaring the differences penalizes large errors more heavily. Units are the square of the original data's units. Smooth and differentiable, which makes it convenient as an optimization objective during model training.

Use Cases: Model training / loss function: commonly used as a loss function in machine learning, e.g., in LSTM, Transformer, and linear regression models. Penalizing large errors: useful in applications sensitive to large mistakes, such as stock price or weather pr...
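As a quick illustration of the definitions above (the data points are made up for the example, not taken from the post):

```python
import math

def mse(y_true, y_pred):
    # Mean of squared residuals; units are the square of the data's units
    return sum((y - yh) ** 2 for y, yh in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Square root restores the original units
    return math.sqrt(mse(y_true, y_pred))

y_true = [3.0, 5.0, 2.5]
y_pred = [2.5, 5.0, 4.0]
print(mse(y_true, y_pred))   # 0.8333...
print(rmse(y_true, y_pred))  # 0.9128...
```

Note how the single 1.5-unit miss dominates the score: squaring makes MSE sensitive to outliers, while RMSE reports the same penalty in the original units.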

Object-Relational Mapping (ORM)

What is ORM? ORM stands for Object-Relational Mapping. It allows you to interact with a relational database using objects in your code instead of raw SQL queries. Essentially, it maps database tables to classes and rows to objects.

How it Works

Database table → Class
Row in table → Instance of class (object)
Columns → Attributes of the object
SQL queries → Methods on objects

Example: Suppose you have a users table in your database:

id | name  | email
1  | Alice | alice@mail.com
2  | Bob   | bob@mail.com

from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablenam...
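To make the table → class, row → object mapping concrete without depending on SQLAlchemy, here is a toy sketch using Python's stdlib sqlite3. The `User` class and `all_users` helper are illustrative stand-ins for what a real ORM generates, not SQLAlchemy's API:

```python
import sqlite3

class User:
    """Each instance corresponds to one row; columns become attributes."""
    def __init__(self, id, name, email):
        self.id, self.name, self.email = id, name, email

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?, ?)",
    [(1, "Alice", "alice@mail.com"), (2, "Bob", "bob@mail.com")],
)

def all_users(conn):
    # SQL query -> list of objects (this is the mapping an ORM automates)
    return [User(*row) for row in conn.execute("SELECT id, name, email FROM users")]

users = all_users(conn)
print(users[0].name)   # Alice
```

A real ORM like SQLAlchemy automates this mapping (and adds sessions, relationships, and query building), but the core idea is exactly this translation layer.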

Transformer and Positional Encoding

Positional Encoding vs Transformer Encoder: understanding the difference between positional encoding and the encoder in a Transformer model.

Core Difference

Concept | Meaning
Positional Encoding | Adds position information to input tokens
Encoder (Transformer Encoder) | Learns relationships and representations using attention

1. Positional Encoding (PE)

Transformers don't understand order by default (unlike LSTMs). Positional encoding tells the model: "This token is at position 1, 2, 3…"

Why Needed

Without positional encoding: [x1, x2, x3] = [x3, x1, x2] (order is lost)
With positional encoding: [x1+p1, x2+p2, x3+p3] (order is preserved)

Formula (Sinusoidal) ...
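The sinusoidal formula is truncated in the excerpt; a minimal sketch, assuming the standard definition PE(pos, 2i) = sin(pos / 10000^(2i/d_model)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model)):

```python
import math

def positional_encoding(pos, d_model):
    # Even dimensions use sin, odd dimensions use cos, at geometrically
    # spaced frequencies; each position gets a unique vector.
    pe = []
    for i in range(d_model):
        freq_exp = (i // 2 * 2) / d_model          # the "2i" in the exponent
        angle = pos / (10000 ** freq_exp)
        pe.append(math.sin(angle) if i % 2 == 0 else math.cos(angle))
    return pe

print(positional_encoding(0, 4))  # [0.0, 1.0, 0.0, 1.0]
```

This vector is added element-wise to the token embedding (the x1+p1 in the text above), so the same token at different positions produces different encoder inputs.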

LSTM vs Transformer for Time Series Prediction

Stacked LSTM vs Transformer for Time Series Prediction

1. Stacked LSTM

LSTM is a type of Recurrent Neural Network (RNN)...

Architecture

Input sequence → LSTM layer 1 → LSTM layer 2 → Dense → Output

Mathematics

LSTM uses gates to control information flow:

Forget gate \(f_t\):
\[ f_t = \sigma(W_f \cdot [h_{t-1}, x_t] + b_f) \]

Input gate \(i_t\) and candidate state \(\tilde{C}_t\):
\[ i_t = \sigma(W_i \cdot [h_{t-1}, x_t] + b_i) \]
\[ \tilde{C}_t = \tanh(W_C \cdot [h_{t-1}, x_t] + b_C) \]

Cell state update \(C_t\):
\[ C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \]

Output gate \(o_t\) and hidden state \(h_t\):
\[ o_t = \sigma(W_o \cdot [h_{t-1}, x_t] + b_o) \]
\[ h_t = o_t \odot \tanh(C_t) \]

Where \(x_t\) is the input, \(h_...
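The gate equations above can be traced with a single scalar cell step. All weights are set to 0.5 purely for illustration (a real LSTM uses learned weight matrices and vector states):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def lstm_step(x_t, h_prev, c_prev, w):
    # Each gate sees the previous hidden state and the current input
    f = sigmoid(w["Wf_h"] * h_prev + w["Wf_x"] * x_t + w["bf"])          # forget gate
    i = sigmoid(w["Wi_h"] * h_prev + w["Wi_x"] * x_t + w["bi"])          # input gate
    c_tilde = math.tanh(w["Wc_h"] * h_prev + w["Wc_x"] * x_t + w["bc"])  # candidate state
    c = f * c_prev + i * c_tilde                                         # cell state update
    o = sigmoid(w["Wo_h"] * h_prev + w["Wo_x"] * x_t + w["bo"])          # output gate
    h = o * math.tanh(c)                                                 # hidden state
    return h, c

w = {k: 0.5 for k in
     ["Wf_h", "Wf_x", "bf", "Wi_h", "Wi_x", "bi",
      "Wc_h", "Wc_x", "bc", "Wo_h", "Wo_x", "bo"]}
h, c = lstm_step(1.0, 0.0, 0.0, w)
print(h, c)
```

Note how the forget gate \(f_t\) scales the old cell state while the input gate \(i_t\) scales the candidate: this additive update is what lets gradients flow over long sequences.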

AI Chatbot Pipeline Documentation

Pipeline Overview

User Query
↓
Encoder (Transformer)
↓
Vector Search
↓
FAQ priority match?
↓
Website content match?
↓
Answer synthesis (RAG)

This is a retrieval-first pipeline designed to be safe and efficient: grounding answers in retrieved content reduces hallucination.

Tech Stack

FastAPI – API
Sentence-Transformer – encoder
FAISS – vector search
Any LLM – for final answer synthesis (optional)
FAQs stored separately from website content

Project Structure

app/
├── main.py
├── embeddings.py
├── vector_store.py
├── rag.py
├── data/
│   ├── faqs.json
│   ├── website_chunks.json

Load Encoder (Transformer)

# embeddings.py
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def encode(text: str):
    return...
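A sketch of the vector-search step with the FAQ-priority threshold, using plain cosine similarity in place of FAISS. The 3-dimensional vectors stand in for Sentence-Transformer embeddings, and `search`, `faq_vectors`, and the 0.7 threshold are all illustrative assumptions, not the post's actual code:

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy embeddings standing in for encoded FAQ entries
faq_vectors = {
    "reset password": [0.9, 0.1, 0.0],
    "refund policy": [0.0, 0.8, 0.6],
}

def search(query_vec, store, threshold=0.7):
    # FAQ priority match: return the best FAQ above the threshold,
    # else None so the pipeline falls through to website content
    name, vec = max(store.items(), key=lambda kv: cosine(query_vec, kv[1]))
    return name if cosine(query_vec, vec) >= threshold else None

print(search([1.0, 0.0, 0.0], faq_vectors))  # reset password
```

In the real pipeline, FAISS (e.g. an inner-product index over normalized embeddings) replaces the linear scan, but the threshold-then-fall-through control flow is the same.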

JS Array & Matrix Concepts

JavaScript Array & Matrix Concepts Demo

This demo covers key JavaScript array and matrix operations used in signal processing and general programming.

1. Array Manipulation (1D)

// Original binary array
let inputBits = [0, 1, 1, 0, 1, 0];

// map(): Convert bits to BPSK symbols (0 -> -1, 1 -> 1)
let inputSymbols = inputBits.map(bit => bit === 0 ? -1 : 1);
console.log('map():', inputSymbols); // [-1, 1, 1, -1, 1, -1]

// filter(): Split even and odd indices
let s1 = inputSymbols.filter((_, i) => i % 2 === 0); // Even indices
let s2 = inputSymbols.filter((_, i) => i % 2 === 1); // Odd indices
console.log('filter() s1:', s1); // [-1, 1, 1]
console.log('filter() s2:', s2); // [1, -1, -1]

// reduce(): Sum of all bits
let sumBits = inputBits.reduce((sum, b) => sum + b, 0);
console.log('reduce() sum:', sumBits); // 3

// some() / every(): Check conditions
let allValid = inputBits.every(b => b === 0 || b === 1);
console.log('ev...

Event Delegation in Javascript

Event Delegation is a powerful technique in JavaScript that lets you manage events efficiently by attaching a single event listener to a parent element rather than adding separate listeners to individual child elements. This approach is widely used in modern web applications, including frameworks like React, to improve performance and scalability. Instead of one listener per child, Event Delegation attaches one listener to a common ancestor (the parent container) and handles events for all child elements that trigger it, relying on event bubbling to carry each event up to the ancestor.

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Event Delegation Bit Validator</title>
<style>
body { font-family: Arial, sans-serif; background: #f4f6f9; margin: 40px; }
.container { max-width: 600px; margin: auto; background: white; padding: 30px; border-radius: 8px; ...

