Hi, I am looking for documentation about your package: what can be done with it and how. Is there such a document somewhere? I have not been able to locate one so far. Thanks for your help.
Implementation of a genetic algorithm for feature selection in neural networks, as proposed in the paper "Genetic algorithm-based heuristic for feature selection in credit risk assessment".
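A minimal sketch of what genetic-algorithm feature selection for a neural network can look like; the binary feature masks, population size, mutation rate, and the use of scikit-learn's MLPClassifier as the fitness model are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: genetic-algorithm feature selection for a neural network.
# Chromosomes are binary feature masks; fitness is cross-validated accuracy
# of a small MLP on the selected features. Hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(12, X.shape[1]))      # initial population of feature masks
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]      # keep the fittest half
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, X.shape[1])            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(X.shape[1]) < 0.05         # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```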
Project that predicts credit default or credit risk using artificial neural network algorithms. It helps banks and financial institutions assign a credit score to a customer profile/portfolio and decide whether or not to sanction a loan.
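A hedged illustration of the kind of model such a project might use; the applicant features, network shape, and approval threshold below are hypothetical placeholders, not taken from the project.

```python
# Sketch: scoring credit default risk with a small neural network.
# Features and data are synthetic placeholders, not the project's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Hypothetical applicant features: income, debt ratio, age, number of late payments.
X = rng.normal(size=(1000, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0.8).astype(int)  # 1 = default

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=1))
model.fit(X_train, y_train)

# Predicted probability of default doubles as a credit score (lower = better).
pd_scores = model.predict_proba(X_test)[:, 1]
decisions = np.where(pd_scores < 0.2, "sanction loan", "decline")
print(decisions[:10])
```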
Enhancing credit analysis using geospatial techniques. The predictive models are built on logistic regression and decision tree algorithms and produce an estimated default probability for each applicant. The models are trained on normalised data to cover a wide range of real-life scenarios. The predicted probability separates good and bad customers by classifying them into four categories. The output of the predictive models feeds a Tableau business dashboard. Classification ability and model performance were measured with statistical metrics: Gini, KS, and AUROC.
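A brief sketch of how the probability-of-default models and the quoted metrics fit together; the data and the four-category cut-offs are illustrative assumptions. Gini is derived from AUROC as 2·AUROC − 1, and KS is the maximum separation between the score distributions of defaulters and non-defaulters.

```python
# Sketch: default probability via logistic regression and a decision tree,
# scored with AUROC, Gini and KS. Data and risk-grade cut-offs are illustrative.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=8, weights=[0.85], random_state=2)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=2)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=2))]:
    model.fit(X_train, y_train)
    pd_hat = model.predict_proba(X_test)[:, 1]        # estimated probability of default
    auroc = roc_auc_score(y_test, pd_hat)
    gini = 2 * auroc - 1                              # Gini coefficient from AUROC
    ks = ks_2samp(pd_hat[y_test == 1], pd_hat[y_test == 0]).statistic
    # Illustrative four-category risk grades based on the predicted probability.
    grades = np.digitize(pd_hat, bins=[0.05, 0.15, 0.40])  # 0 = best ... 3 = worst
    print(f"{name}: AUROC={auroc:.3f} Gini={gini:.3f} KS={ks:.3f}, "
          f"grade counts={np.bincount(grades, minlength=4)}")
```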
openLGD is a Python-powered library for the statistical estimation of Credit Risk Loss Given Default (LGD) models. It can be used both as a standalone library and in a federated learning context where data remain on distinct (separate) servers.
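openLGD's own API is not reproduced here; as a rough indication of what statistical LGD estimation involves, the sketch below regresses realised losses on loan characteristics with scikit-learn, using synthetic placeholder data and an illustrative choice of risk drivers.

```python
# Sketch: a generic statistical LGD (Loss Given Default) estimation, NOT openLGD's API.
# Synthetic defaulted-loan data; collateral ratio and seniority are illustrative drivers.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 1500
collateral_ratio = rng.uniform(0, 1.5, n)        # collateral value / exposure at default
senior = rng.integers(0, 2, n)                   # 1 = senior claim
# Realised LGD: lower when collateral is high or the claim is senior; bounded to [0, 1].
lgd = np.clip(0.8 - 0.4 * collateral_ratio - 0.15 * senior + rng.normal(0, 0.1, n), 0, 1)

X = np.column_stack([collateral_ratio, senior])
X_train, X_test, y_train, y_test = train_test_split(X, lgd, test_size=0.3, random_state=3)

model = LinearRegression().fit(X_train, y_train)
pred = np.clip(model.predict(X_test), 0, 1)      # keep predictions inside the LGD range
print("mean absolute error:", np.abs(pred - y_test).mean().round(3))
```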