The growing volume of data available to the insurance industry means more information for making informed business decisions, both at the level of executives and at the micro level of quantitative analysts. Truly benefiting from this data, however, depends on using the right data wrangling tools. Properly executed, wrangling yields insights that improve both analytical inquiries and the quality of their results.
Decision-Making and Justifying the Decisions
More data does indeed mean more power in financial markets, and particularly in the insurance industry, where key decisions are made at a much more micro level.
In years gone by, decisions, and the rationale behind them, came from a much more macro level.
As data has become more prevalent in our day-to-day lives, so has the need to understand it and “do something with it.” Decision-making has therefore shifted down the chain, yet the rationale given to regulators or boards of directors is still framed at the macro level. The result is a disconnect: many small micro-level decisions end up aggregated into a “one size fits all” macro-level rationale.
Scarce Data in Insurance
In the insurance industry, and in particular in specialty insurance markets such as Lloyd’s of London, data has always been scarce. Of course, this scarcity is at the essence of why such types of insurance exist: insurance is intended to collect premium from many parties to spread the risk of concentrated exposure in any one area.
The difference from retail insurance, however, is that a retail insurer may have hundreds of thousands of prospective policyholders wanting a very generic policy. Specialty insurance, by contrast, has very few potential policyholders, each wanting coverage for specific needs, such as protecting an oil rig in the Gulf of Mexico from the threat of hurricanes.
Historically, specialty markets have used sparsely available data and judgement to assess risk. The key factor has always been imperfect information: its value is the difference between an insurer making a profit and suffering huge losses.
Quants and Data
Publicly available data is giving quants more information with which to analyse opportunities in the market. As a result, entities that hold a large proportion of the market share (and therefore own a large share of market data that they keep confidential) no longer hold as much of an advantage over participants with a smaller market share.
Let’s take the example of the oil rig again. Only a finite number of hurricanes have ripped through the Gulf of Mexico, and each new oil rig is built to withstand greater hurricane force than its predecessors. Historical loss data therefore reflects older construction standards, which impairs its use within the models and places increasing importance on cutting through the data to understand what it is supposed to represent. So, on the face of it, quants could make quite bold recommendations that are in fact flawed once the detail behind them is investigated.
Using Tools to Understand Data
More data can mean more power only if we are able to understand and use that data. Solutions like Trifacta address this by analyzing large amounts of data quickly and using visual displays to glean insights easily, in turn supporting well-informed business decisions.
Creating homogeneous groups of data with enough volume to be credible will always be a difficult task. But as an industry we should use the tools available to us, like Trifacta, to understand the data before we provide macro-level rationales for micro-level decision-making.
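To make the grouping task concrete, here is a minimal sketch of the idea: bucketing policies into homogeneous groups and flagging which groups have enough volume to be statistically credible. All field names and the credibility threshold below are illustrative assumptions, not part of Trifacta or any specific actuarial standard.

```python
from collections import Counter

# Illustrative policy records; in practice these would come from a
# wrangled dataset with many more attributes.
policies = [
    {"peril": "hurricane", "region": "Gulf of Mexico"},
    {"peril": "hurricane", "region": "Gulf of Mexico"},
    {"peril": "hurricane", "region": "Gulf of Mexico"},
    {"peril": "earthquake", "region": "California"},
]

# Hypothetical minimum policy count for a group to be treated as credible.
CREDIBILITY_THRESHOLD = 3

def credible_groups(policies, threshold=CREDIBILITY_THRESHOLD):
    """Count policies per (peril, region) group and keep only the groups
    whose volume meets the credibility threshold."""
    counts = Counter((p["peril"], p["region"]) for p in policies)
    return {group: n for group, n in counts.items() if n >= threshold}

print(credible_groups(policies))
# {('hurricane', 'Gulf of Mexico'): 3}
```

Here the hurricane group clears the threshold while the single earthquake policy does not, which is exactly the sparse-data problem specialty markets face: many groups never reach credible volume, so judgement must fill the gap.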