Posts

Showing posts with the label Guidewire Data Management

SQL Query Interview Questions with Code: 50-plus Q&A

Here are SQL-focused interview questions with only the relevant SQL code:

1. Find the second highest salary from an Employee table.

SELECT MAX(Salary) AS SecondHighestSalary
FROM Employees
WHERE Salary < (SELECT MAX(Salary) FROM Employees);

Using ROW_NUMBER() (the alias SalaryRank avoids the reserved keyword RANK):

WITH RankedSalaries AS (
  SELECT Salary, ROW_NUMBER() OVER (ORDER BY Salary DESC) AS SalaryRank
  FROM Employees
)
SELECT Salary AS SecondHighestSalary
FROM RankedSalaries
WHERE SalaryRank = 2;

---

2. Write a query to calculate a running total of sales.

SELECT
  OrderID,
  OrderDate,
  Amount,
  SUM(Amount) OVER (ORDER BY OrderDate) AS RunningTotal
FROM Orders;

---

3. Retrieve customers who placed no orders using a LEFT JOIN.

SELECT c.CustomerID, c.CustomerName
FROM Customers c
LEFT JOIN Orders o ON c.CustomerID = o.CustomerID
WHERE o.OrderID IS NULL;

---

4. Write a query to find the top 3 highest salaries.

SELECT DISTINCT Salary
FROM Employees
ORDER BY Salary DESC
LIMIT 3;

Using DENSE_RANK():

WIT...
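As a quick way to sanity-check these queries on Databricks, here is a minimal PySpark sketch that runs the first query against a tiny made-up Employees table; the names and salary figures are invented for illustration, not part of the original post.

# Minimal sketch: verify the second-highest-salary query on sample data.
# The Employees rows below are made up for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data standing in for the Employees table
employees = spark.createDataFrame(
    [("Ana", 90000), ("Raj", 75000), ("Mei", 90000), ("Tom", 60000)],
    ["Name", "Salary"],
)
employees.createOrReplaceTempView("Employees")

# Same subquery pattern as in question 1 above
second_highest = spark.sql("""
    SELECT MAX(Salary) AS SecondHighestSalary
    FROM Employees
    WHERE Salary < (SELECT MAX(Salary) FROM Employees)
""")
second_highest.show()  # Expect 75000 for this sample data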

Guidewire in-house data analytics products and their use-case scenarios

Guidewire Data Studio

Guidewire Data Studio is part of the Guidewire Data Platform, an enterprise-grade, cloud-native big data solution designed specifically for property and casualty (P&C) insurers. Here's an overview of its details and how one can learn about it:

Details:

Purpose: Guidewire Data Studio is aimed at helping insurers unlock the potential of their data by providing tools for data ingestion, curation, and analysis. It enables insurers to ingest data from both internal and external sources, unify and transform this data, and then use it for advanced analytics, reporting, and decision-making.

Key Features:

Data Ingestion: Collects data in near real time from various systems, including Guidewire and third-party applications.
Curation: Uses multiple curation engines to prepare data for use, whether in real-time or batch mode.
Data Lake: Stores both raw and curated data in a scalable environment.
Data Catalog: Manages metadata to make data discoverable, secure...

Guidewire Claim Insights in Databricks and Tableau - How many claims are there?

In this blog post, I'm sharing Guidewire claims insights: how many claims were created per city and per state, using Databricks. The notebook is available here: Claim happened from most US State Insights data analytics Template V1.0 - Databricks

SQL sample code:

select AX.CITY, count(AX.CLAIM_NUMBER) as TOTAL_CLAIMS_PER_CITY
from (
  select distinct C.CLAIMNUMBER as CLAIM_NUMBER, ST.NAME as STATE, ADDR.CITY
  from guidewire_raw_db.cc_claim C
  left join guidewire_raw_db.cc_contact CONT on C.INSUREDDENORMID = CONT.ID
  left join guidewire_raw_db.cc_address ADDR on CONT.PRIMARYADDRESSID = ADDR.ID
  left join guidewire_raw_db.cctl_state ST on ADDR.STATE = ST.ID
) AX
group by AX.CITY
order by 2 desc

The result sets from this SQL query feed into Tableau Public Edition. The Tableau Public view is available here: https://public.tableau.com/views/Guidewire_Insurance_Claims_Insights_one/InsuranceClaimsPerCity?:language=en-US&:sid=&:redirect=auth&:display_count=n&...
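For readers who prefer the DataFrame API over SQL, here is a minimal PySpark sketch of the same claims-per-city aggregation; it assumes the guidewire_raw_db tables from the query above already exist in the Databricks catalog.

# Minimal sketch: the claims-per-city aggregation via the DataFrame API,
# assuming the guidewire_raw_db tables from the SQL above are registered.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

claims = spark.table("guidewire_raw_db.cc_claim").alias("C")
contacts = spark.table("guidewire_raw_db.cc_contact").alias("CONT")
addresses = spark.table("guidewire_raw_db.cc_address").alias("ADDR")
states = spark.table("guidewire_raw_db.cctl_state").alias("ST")

claims_per_city = (
    claims
    .join(contacts, F.col("C.INSUREDDENORMID") == F.col("CONT.ID"), "left")
    .join(addresses, F.col("CONT.PRIMARYADDRESSID") == F.col("ADDR.ID"), "left")
    .join(states, F.col("ADDR.STATE") == F.col("ST.ID"), "left")
    .select("C.CLAIMNUMBER", "ST.NAME", "ADDR.CITY")
    .distinct()
    .groupBy("CITY")
    .agg(F.count("CLAIMNUMBER").alias("TOTAL_CLAIMS_PER_CITY"))
    .orderBy(F.desc("TOTAL_CLAIMS_PER_CITY"))
)
claims_per_city.show()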

Guidewire Self-Managed Data Integration with Business Context

In Guidewire, we have four major centers:

1. Contact Manager
2. Policy Center
3. Billing Center
4. Claim Center

Business-context-wise, following the same order is the best way to remember the business and data integration paths. When loading sample data into individual instances, load it in the same order (1, 2, 3, 4). After the sample data is loaded, stop the server, and only then set up the integration between the centers, in both directions:

1. Contact Manager integrates with all remaining xCenters, and the reverse direction is also possible.
2. Policy Center integrates with BillingCenter and ClaimCenter, and vice versa.
3. BillingCenter and ClaimCenter cannot be integrated with each other!

Coming to the business context, here is how to easily recall the sequence above:

1. Contact Manager: It is used to create diverse contacts, broadly of Person / Company types. Contact Manager works with AddressBookUIDs and PublicIDs; any business contact is im...

PySpark load and transform of Guidewire tables with datatype auto-detection

Continuing from the blog post --> Guidewire Self Managed version H2 Tables ingest into Databricks Delta Tables

The published PySpark notebook loads the Guidewire CSV files, creates Delta tables with auto-detected datatypes, and, as a final step, records the whole load-and-transform process in a separate Delta file-processing log table. The code is available on Databricks: Create New Guidewire Claim Tables from CSV File Auto-Detect Data Type V1.0 Template.

Code logic below:

1. Create New Guidewire Claim Tables from CSV File Auto-Detect Data Type V1.0 Template

from pyspark.sql import SparkSession
from pyspark.sql.types import StringType, IntegerType, FloatType, BooleanType, TimestampType
from pyspark.sql.utils import AnalysisException

# Initialize Spark session (already initialized in Databricks)
spark = SparkSession.builder.getOrCreate()

# Define the new database name
new_database = "gw_claimcenter_raw_db_2"

# Step 1: Create a new database if it ...
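Since the notebook code is cut off above, here is a minimal sketch of the core pattern it describes: reading a CSV with Spark's schema inference and writing it out as a Delta table. The file and table names below are placeholders for illustration, not the notebook's actual values.

# Minimal sketch: read a Guidewire CSV with schema inference and save it
# as a Delta table. Paths and names are placeholders, not the notebook's.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

new_database = "gw_claimcenter_raw_db_2"
spark.sql(f"CREATE DATABASE IF NOT EXISTS {new_database}")

# inferSchema lets Spark auto-detect column datatypes from the CSV contents
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("dbfs:/FileStore/clm_example.csv")  # hypothetical file name
)

# Write the auto-typed DataFrame out as a managed Delta table
(
    df.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable(f"{new_database}.clm_example")  # hypothetical table name
)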

Guidewire Self Managed version H2 Tables ingest into Databricks Delta Tables

Prerequisites:

1. Import sample data in all Guidewire xCenters.
2. Create an account in Databricks Community Edition; in this edition, Databricks is free.

Step 1: Query the H2 tables.
Step 2: Save the result sets into CSV files and rename them with the prefix clm_.
Step 3: Upload all these CSVs into the Databricks FileStore. Click Create Table; the upload dialog will open, where you can upload all the CSV files.
Step 4: Check it in Catalog --> FileStore: all the clm_-prefixed files are available!
Step 5: The ingested clm_-prefixed CSV files are created as Delta tables in Databricks and migrated to a new Guidewire raw database.

The PySpark code is below:

import os
from pyspark.sql import SparkSession
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import col

# Initialize Spark session (usually pre-initialized in Databricks)
spark = SparkSession.builder.appName("CreateTablesFromCSV").getOrCreate()

# Define the base path for FileStore
base_path = "dbfs:/FileStore/"

# Define ...
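The notebook code is truncated above as well, so here is a rough sketch of Step 5 under stated assumptions: it runs inside a Databricks notebook (where dbutils is available), the clm_-prefixed CSVs sit directly under dbfs:/FileStore/, and the target database name is illustrative rather than the notebook's actual value.

# Rough sketch of Step 5: loop over the clm_-prefixed CSVs in FileStore
# and register each one as a Delta table. The database name is assumed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("CreateTablesFromCSV").getOrCreate()

base_path = "dbfs:/FileStore/"
raw_db = "guidewire_raw_db"  # hypothetical target database
spark.sql(f"CREATE DATABASE IF NOT EXISTS {raw_db}")

# List the uploaded files and keep only the clm_-prefixed CSVs
for entry in dbutils.fs.ls(base_path):
    name = entry.name
    if name.startswith("clm_") and name.endswith(".csv"):
        table_name = name[:-len(".csv")].lower()
        df = (
            spark.read
            .option("header", "true")
            .option("inferSchema", "true")  # auto-detect column datatypes
            .csv(entry.path)
        )
        df.write.format("delta").mode("overwrite").saveAsTable(f"{raw_db}.{table_name}")
        print(f"Created Delta table {raw_db}.{table_name}")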