
Web Scraping and Data Aggregation of Regulatory Content

Activity: Web Scraping and Data Aggregation of Regulatory Content

Client: Insurers

These engagements, undertaken for multiple insurance companies providing cover to financial institutions, involved the ingestion and aggregation of structured (CSV/JSON/XML), semi-structured (HTML) and unstructured (PDF) data originating from a Federal Agency and additional third-party sources. The ESILAB database supporting this capability currently stores more than 650 million data points covering 2018 to the present and is updated daily via web scraping.

Once aggregated and converted into XML/JSON, the data is used by insurers to calculate premium pricing for potential clients and brokers against objective criteria, including ownership, relationships, financial performance, investment strategy, investment portfolio composition, and current and previous exposure to criminal, civil and regulatory proceedings at both the entity and employee level. The system also allows a potential insured to be benchmarked against a basket of insureds with a similar profile. Multiple RESTful web services, each configured by the clients using no-code tools, deliver data to downstream systems.
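For illustration only, the sketch below shows how a daily ingestion job of this general kind might pull mixed-format feeds and normalise them into flat records ahead of storage. The feed registry, URLs, field handling and function names are hypothetical placeholders, not details of the actual ESILAB pipeline; HTML and PDF sources would additionally need dedicated parsers and are omitted for brevity.

"""Minimal sketch of a daily ingestion job for mixed-format regulatory feeds.

Assumptions: the FEEDS registry, URLs and record shapes below are illustrative
placeholders, not the client system's real sources or schema.
"""
import csv
import io
import json
import xml.etree.ElementTree as ET

import requests

# Hypothetical feed registry: source URL plus its declared format.
FEEDS = [
    {"url": "https://agency.example.gov/filings.csv", "format": "csv"},
    {"url": "https://agency.example.gov/actions.json", "format": "json"},
    {"url": "https://thirdparty.example.com/entities.xml", "format": "xml"},
]


def parse_feed(raw: bytes, fmt: str) -> list[dict]:
    """Convert one raw payload into a list of flat dict records."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw.decode("utf-8"))))
    if fmt == "json":
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    if fmt == "xml":
        root = ET.fromstring(raw)
        return [{field.tag: field.text for field in record} for record in root]
    raise ValueError(f"unsupported format: {fmt}")


def run_daily_ingestion() -> list[dict]:
    """Fetch every registered feed and return normalised records."""
    records = []
    for feed in FEEDS:
        response = requests.get(feed["url"], timeout=30)
        response.raise_for_status()
        for row in parse_feed(response.content, feed["format"]):
            # In a real pipeline each record would be keyed by entity and
            # date before being written to the aggregation store.
            records.append({"source": feed["url"], **row})
    return records


if __name__ == "__main__":
    print(f"ingested {len(run_daily_ingestion())} records")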
