Posts

Top 100+ Techniques for Verbal Ability & Reading Comprehension (VARC) - CAT Exam

I. Reading Comprehension (RC) Techniques

1. Skim First, Read Later: Quickly skim the passage to get a general idea before diving into details.
2. Identify the Main Idea: Focus on the overall theme of the passage.
3. Look for Transition Words: Words like "however", "thus", and "therefore" signal shifts in arguments.
4. Understand the Author's Tone: Identify whether it is analytical, critical, optimistic, etc.
5. Focus on Opening and Closing Sentences: These often contain key information.
6. Analyze Passage Structure: Determine whether the passage follows an argumentation, cause-and-effect, or comparison structure.
7. Underline Keywords: Underline or note important facts and figures while reading.
8. Read Paragraph by Paragraph: Break the passage into manageable sections.
9. Map the Passage: Create a mental map of the passage to remember key points.
10. Identify Factual vs. Inferential Information: Recognize what is stated explicitly vs. what is implied.
11. Avoid Assumptions: Base…

PySpark: Load and Transform Guidewire Tables with Datatype Auto-Detection

Continuing from the earlier post, Guidewire Self-Managed H2 Tables: Ingest into Databricks Delta Tables. The published PySpark notebook loads the Guidewire CSV files, creates Delta tables with auto-detected datatypes, and, as a final step, records the entire load-and-transformation process in a separate Delta processing-log table. The code is available on Databricks.

1. Create New Guidewire Claim Tables from CSV File, Auto-Detect Data Type, V1.0 Template. The code logic begins below:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StringType, IntegerType, FloatType, BooleanType, TimestampType
from pyspark.sql.utils import AnalysisException

# Initialize Spark session (already initialized in Databricks)
spark = SparkSession.builder.getOrCreate()

# Define the new database name
new_database = "gw_claimcenter_raw_db_2"

# Step 1: Create a new database if it…
```
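The processing-log step is cut off in this preview. Purely as an illustration of the idea, here is a minimal sketch of how a separate Delta processing-log table could be created and appended to after each table load; the table name file_processing_log, its columns, and the log_load helper are assumptions for this sketch, not the notebook's actual code:

```python
from datetime import datetime
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical log table name; the real notebook may use a different one.
log_table = "gw_claimcenter_raw_db_2.file_processing_log"

# Create the Delta log table once, if it does not already exist.
spark.sql(f"""
    CREATE TABLE IF NOT EXISTS {log_table} (
        file_name    STRING,
        table_name   STRING,
        row_count    BIGINT,
        status       STRING,
        processed_at TIMESTAMP
    ) USING DELTA
""")

def log_load(file_name: str, table_name: str, row_count: int, status: str) -> None:
    """Append one processing record (success or failure) to the Delta log table."""
    record = [(file_name, table_name, row_count, status, datetime.now())]
    schema = "file_name STRING, table_name STRING, row_count BIGINT, status STRING, processed_at TIMESTAMP"
    spark.createDataFrame(record, schema).write.format("delta").mode("append").saveAsTable(log_table)
```

Wrapping each CSV load in a try/except and calling log_load with a "SUCCESS" or "FAILED" status would give an auditable record of every run.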

Guidewire Self-Managed H2 Tables: Ingest into Databricks Delta Tables

Prerequisites:
1. Import the sample data in all Guidewire xCenters.
2. Create an account in Databricks Community Edition (this edition of Databricks is free).

Step 1: Query the H2 tables.
Step 2: Save the result sets into CSV files and rename them with the clm_ prefix.
Step 3: Upload all these CSVs into the Databricks FileStore: click Create Table, and in the pop-up that opens, upload all the CSV files.
Step 4: Check in Catalog --> FileStore that all the clm_-prefixed files are available.
Step 5: The ingested clm_-prefixed CSV files are created as Delta tables in Databricks and migrated to a new Guidewire raw database; a sketch of how this loop could continue appears after the snippet. The PySpark code begins as follows:

```python
import os
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import col

# Initialize Spark session (usually pre-initialized in Databricks)
spark = SparkSession.builder.appName("CreateTablesFromCSV").getOrCreate()

# Define the base path for FileStore
base_path = "dbfs:/FileStore/"

# Define…
```
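The preview cuts off after base_path is defined. As an illustration only of the pattern described in Step 5, here is a minimal sketch of how such a loop might discover the clm_ CSV files and register each one as a Delta table. It reuses spark, os, and base_path from the snippet above; the raw_db name and the use of dbutils.fs.ls (available inside Databricks notebooks) are assumptions, not the post's actual code:

```python
# Hypothetical continuation: discover clm_ CSVs in FileStore and register
# each one as a Delta table in a raw database (names are assumptions).
raw_db = "gw_claimcenter_raw_db"
spark.sql(f"CREATE DATABASE IF NOT EXISTS {raw_db}")

for entry in dbutils.fs.ls(base_path):  # dbutils exists in Databricks notebooks
    name = entry.name
    if name.lower().startswith("clm_") and name.lower().endswith(".csv"):
        # Derive the table name from the file name: clm_claim.csv -> clm_claim
        table_name = os.path.splitext(name)[0].lower()
        df = (
            spark.read
            .option("header", "true")
            .option("inferSchema", "true")  # let Spark auto-detect datatypes
            .csv(entry.path)
        )
        df.write.format("delta").mode("overwrite").saveAsTable(f"{raw_db}.{table_name}")
```

The inferSchema option is what gives the auto-detected datatypes: Spark samples the CSV and assigns integer, double, boolean, or timestamp types instead of treating every column as a string.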

Insurance Claim Domain - Fact, Dimension, and Data Mesh Tables

Insurance claim team related fact and dimension tables with data mesh. Read the full article here: Full Article

Introduction

In the context of insurance claims, both fact and dimension tables can be managed using different types of Slowly Changing Dimensions (SCDs):
1. SCD Type 1: Overwrites old data with new data.
2. SCD Type 2: Maintains historical data by adding new rows for changes.

Example: Dimension Table for Insurance Claims

1. Dimension Table Using SCD Type 1

An SCD Type 1 dimension table for insurance claims keeps only the latest state of each attribute, overwriting previous values when changes occur.

SQL Script for SCD Type 1 Dimension Table

```sql
-- Create a Dimension Table for Claims Adjuster (SCD Type 1)
CREATE TABLE IF NOT EXISTS dbo.AdjusterDimension_SCD1 (
    AdjusterKey INT IDENTITY(1,1) PRIMARY KEY,
    AdjusterID INT,
    AdjusterName NVARCHAR(100),
    AdjusterRegion NVARCHAR(50)
);

-- Insert or update AdjusterDimension_SCD1 table based on raw data
MERGE INTO dbo.AdjusterDime…
```
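The MERGE statement is cut off in this preview. To make the SCD Type 2 idea from the introduction concrete, here is a minimal sketch written as Spark SQL in PySpark, matching the blog's Databricks examples rather than the T-SQL above; the table adjuster_dimension_scd2, its columns, and the sample values are all hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# SCD Type 2 keeps history: every change adds a new row, and the current
# row is marked with IsCurrent = true. All names here are hypothetical.
spark.sql("""
    CREATE TABLE IF NOT EXISTS adjuster_dimension_scd2 (
        AdjusterKey    BIGINT GENERATED ALWAYS AS IDENTITY,  -- surrogate key (recent Databricks runtimes)
        AdjusterID     INT,
        AdjusterName   STRING,
        AdjusterRegion STRING,
        EffectiveDate  DATE,
        EndDate        DATE,
        IsCurrent      BOOLEAN
    ) USING DELTA
""")

# Step 1: close out the current row for an adjuster whose region changed.
spark.sql("""
    UPDATE adjuster_dimension_scd2
    SET EndDate = current_date(), IsCurrent = false
    WHERE AdjusterID = 101 AND IsCurrent = true
""")

# Step 2: insert the new version of that adjuster as the current row.
spark.sql("""
    INSERT INTO adjuster_dimension_scd2
        (AdjusterID, AdjusterName, AdjusterRegion, EffectiveDate, EndDate, IsCurrent)
    VALUES (101, 'Sample Adjuster', 'North Region', current_date(), NULL, true)
""")
```

Contrast this with the SCD Type 1 script above, where the same change would simply overwrite AdjusterRegion in place and the previous value would be lost.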