How fast are lifelib models now?
Feb 26, 2022
Almost a year ago, this post was published. The post is about testing the speed of models in the fastlife library. Since then, lifelib has been updated, and two new libraries, basiclife and savings, have succeeded older libraries such as fastlife and simplelife.
basiclife and savings include models that run significantly faster than the fastlife models. This post presents the speed-test results of the models included in basiclife and savings. The hardware environment is also updated from the one used for the previous test, and so is the version of Python.
Models in basiclife and savings
The basiclife and savings libraries include two types of models: single-processing models and parallel-processing models. Here we measure the speed of the parallel-processing models. The parallel-processing models are reimplementations of the corresponding single-processing models and produce the same results. They run faster than the single-processing models because vectorized formulas in the models operate on numpy arrays or pandas Series and DataFrames representing all the model points in scope. The models to test are:
- BasicTerm_ME in basiclife, a traditional basic term product
- CashValue_ME in savings, a savings product with account value
Here are some major specs of the machine used for this test and the results.
- CPU: 12th Gen Intel Core i7-12700KF
- OS: Windows 11
- Memory: 64GB
| Model | Python ver. | # Model points | # Steps | # Calcs | Run time (Sec.) |
|-------|-------------|----------------|---------|---------|-----------------|
All the runs are significantly improved from the last test performed with fastlife. The most significant factor contributing to the improvement is the elimination of callbacks passed to the apply method of Series. All the element-wise operations on Series in the models this time are carried out by methods natively provided by pandas.
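The difference can be reproduced with a small sketch: the same element-wise operation written once as a Python callback passed to Series.apply and once with pandas' native vectorized methods. The operation itself is illustrative, not taken from the models:

```python
import time
import numpy as np
import pandas as pd

s = pd.Series(np.random.rand(1_000_000))

# Element-wise via a Python callback: pandas calls back into the
# Python interpreter once per element.
t0 = time.perf_counter()
with_apply = s.apply(lambda x: max(x - 0.5, 0.0))
t_apply = time.perf_counter() - t0

# The same operation with native pandas methods: a single vectorized
# call with no per-element Python callback.
t0 = time.perf_counter()
native = (s - 0.5).clip(lower=0.0)
t_native = time.perf_counter() - t0
```

With a million elements, the native version typically runs one to two orders of magnitude faster, because it avoids crossing the Python/pandas boundary for every element.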
As discussed previously, the models don't take advantage of multiple cores, although the CPU has 20 logical cores, so the utilization of the CPU was around 7-10% during any of the runs. If the cores were fully utilized, the speed should be close to 20 times the results.
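A full multi-core run is beyond the models as they stand, but the idea of spreading model points across cores can be sketched with the standard library. Everything here is hypothetical: run_chunk stands in for a projection over one slice of model points, and the "fork" start method assumes a POSIX system:

```python
import multiprocessing as mp
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def run_chunk(chunk):
    # Stand-in for a projection over one slice of model points
    # (a real run would build the model and project cashflows here).
    return float(np.sum(chunk * 1.05))

def run_parallel(model_points, n_workers=4):
    # Split the model point array into one chunk per worker and run
    # the chunks on separate processes, ideally one per core.
    chunks = np.array_split(model_points, n_workers)
    # The "fork" start method is an assumption: POSIX only.
    ctx = mp.get_context("fork")
    with ProcessPoolExecutor(max_workers=n_workers, mp_context=ctx) as pool:
        return sum(pool.map(run_chunk, chunks))
```

Each chunk runs in its own process, so CPU-bound work is not serialized by the GIL; the cost is that the model state must be rebuilt or shared in each worker.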
The machine difference has a material impact too. As mentioned on lifelib's site, BasicTerm_ME takes 3.91 seconds there, while the test above shows the same runs take 1.5 or 0.88 seconds, although this comparison also includes the impact of the difference in Python versions.
It is also worth noting that the models run 40-70% faster with Python 3.10 than with Python 3.9. There is an ongoing initiative to make Python faster, and each Python release is expected to gain some performance improvement, as the figures show. Python 3.11 is expected to be faster than Python 3.10. The links below detail the initiative, supported by Guido van Rossum.
- Making CPython faster, LWN.net
- Guido van Rossum’s Ambitious Plans for Improving Python Performance – The New Stack
- Software at Scale 34 - Faster Python with Guido van Rossum
- Python programming: We want to make the language twice as fast, says its creator, ZDNet
BasicTerm_ME does not calculate premium rates internally. It reads the rates from an Excel file generated by another model. If premium rates were to be calculated in the model, another projection would need to be carried out to generate the rates, which would increase the run time materially.
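The external rate lookup can be sketched as a vectorized join, with a small DataFrame standing in for the table that would in practice come from pd.read_excel. The column names and rates are illustrative, not BasicTerm_ME's actual inputs:

```python
import pandas as pd

# Hypothetical premium rate table, standing in for the Excel file
# produced by the other model.
rate_table = pd.DataFrame({
    "age_at_entry": [30, 40, 50],
    "policy_term": [10, 10, 10],
    "premium_rate": [0.0010, 0.0015, 0.0025],
})

model_points = pd.DataFrame({
    "age_at_entry": [40, 30, 50],
    "policy_term": [10, 10, 10],
    "sum_assured": [200_000, 100_000, 300_000],
})

# One merge attaches a rate to every model point at once, instead of
# running a separate projection to derive the rates.
joined = model_points.merge(rate_table, on=["age_at_entry", "policy_term"])
joined["premium"] = joined["sum_assured"] * joined["premium_rate"]
```

Reading precomputed rates and joining them in keeps the run to a single projection, which is one reason the measured run times stay low.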
Also, reserve and capital calculations are not reflected in the model. The specs of reserve and capital calculations vary depending on regulatory or accounting requirements.