mirekphd
  • Member for 4 years, 10 months
  • Last seen more than a month ago
About

My decade-long tenure in the rather conservative, old-fashioned insurance industry, within the EU's heavily restrictive regulatory environment, has proven surprisingly versatile and interesting.

In my former role as a Data Scientist / AI/ML Engineer, I created a company-wide internal Python ML library and built fully automated modeling pipelines (Papermill + Scrapbook) spanning data mining, feature engineering (maintaining offline feature stores), feature selection (e.g. SHAP and varimp), distributed/multi-device hyperparameter tuning (Optuna, Ray Core/Tune), model reproducibility and automated validation (MLflow), and monitoring of post-production feature and model performance (MinIO, MLflow, k8s CronJobs, Grafana). This complete solution for building and productionizing ML models was used in the main areas of the business (such as risk and demand models), in a paradigm shift away from the decades-old generalized linear models that used to reign supreme in the insurance industry.
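The tuning stage of such a pipeline boils down to a search loop over a parameter space scored by a validation objective. A minimal, stdlib-only random-search sketch of that loop (this is the pattern tools like Optuna automate with smarter samplers; the toy objective and search space here are illustrative assumptions, not the production code):

```python
import random

def objective(params):
    # Toy objective: minimize (x - 3)^2.
    # Stands in for a model's validation loss given hyperparameters.
    x = params["x"]
    return (x - 3.0) ** 2

def random_search(n_trials=200, seed=42):
    """Draw n_trials parameter sets, score each, keep the best."""
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        # Sample a candidate from the (hypothetical) search space.
        params = {"x": rng.uniform(-10.0, 10.0)}
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss
```

A library like Optuna replaces the uniform sampling with adaptive samplers and adds pruning, storage, and parallelism, but the trial/objective contract stays the same.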

Wearing my MLOps/AIOps hat, I develop universal and customized Docker containers (incl. GPU-enabled) for data scientists working on BI and AI/ML models, with IDEs such as Jupyter Notebook/Lab, VSCode Server (with coding and AI extensions such as Continue) and legacy RStudio Server, specialized MLOps frameworks such as MLflow, in-house data lakes (MinIO), and open-source databases: SQL (MariaDB/Postgres), NoSQL (Redis/Cassandra) and vector (ChromaDB, RediSearch).

I also develop and maintain in production custom apps with RESTful APIs for deploying ML models and their features (using Python, Flask/FastAPI, gunicorn, Redis, MinIO, git, and Bash). In addition, I maintain AI workloads on on-prem GPUs: pre-trained LLMs, VLMs and SSMs served via APIs (OpenAI-compatible and custom) using vLLM, Triton Inference Server with the Python backend, and llama-server; chatbots (Streamlit and Gradio); and agentic clients and servers (FastMCP, LangChain/LangGraph).
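Such model-scoring APIs reduce to a simple contract: POST a JSON payload of features, receive a JSON score back. A stdlib-only sketch of that contract (the `predict` function is a stand-in assumption; in production this logic sits behind Flask/FastAPI and gunicorn as described above):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

def predict(features):
    # Stand-in scoring function; a real service would invoke a loaded model.
    return {"score": sum(features) / max(len(features), 1)}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and score it.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(predict(payload.get("features", []))).encode()
        # Return the prediction as a JSON response.
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=0):
    """Start the scoring server on a background thread; port=0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), PredictHandler)
    Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A framework like FastAPI adds request validation, OpenAPI docs, and async workers on top of this same request/response shape.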

Wearing my DevOps hat, I orchestrate two types of ML containers (stateful for ML model development, stateless for production deployment) on several hosted and on-prem k8s/OKD servers, and automate multi-stage builds, package/library/extension updates, security scans, and staging/production deployments using Docker/Compose, MicroK8s, Jenkins pipelines (Groovy, webhooks), OKD builders, Bash, Python, build and deployment configs, and CVE scanners (Clair/Grype/Xray).

I also act as Linux sysadmin for the CI/CD and build servers (Ubuntu, Docker/Compose, MicroK8s, Jenkins, NGINX, Postgres, Grype) and fulfill k8s/OKD cluster-admin and business-admin roles for the compute clusters (k8s/OKD CLIs, Kustomize, Helm, Python, Bash), in both the data science / ML development environments and the ML model staging and production environments, servicing two hundred data scientists, over a thousand features, and dozens of custom ML models and pre-trained AI models.

Badges
This user doesn’t have any gold badges yet.
This user doesn’t have any silver badges yet.
4 bronze badges
Posts

This user hasn’t posted yet.