- Annapolis, MD
- https://twitter.com/winglian
559 contributions in the last year
Contribution activity
May 2023
Created 4 repositories
Created a pull request in OpenAccess-AI-Collective/axolotl that received 6 comments
add support for optimum bettertransformers
https://pytorch.org/blog/out-of-the-box-acceleration/ testing initial support for gpt-neox arch
+109 −48 • 6 comments
Opened 26 other pull requests in 3 repositories
OpenAccess-AI-Collective/axolotl
24 merged
- refactor conversation plucking in sharegpt
- add py310 to the test matrix
- new hf_use_auth_token setting so login to hf isn't required
- cuda properly compiled bitsandbytes for qlora support
- automated testing in github actions
- falcon: sane starter defaults and add lora support
- add example for falcon support
- Truthy validation
- load the tokenizer separately from the model
- Qlora fixes
- shard fix
- fixes w/ example for super basic lora starter
- fix cd within flash-attn
- add missing file
- add discord link to #axolotl-help channel
- cfg.cfg fix, also de-dupe lora module list
- fix tuple add to list
- attempt to find linear modules for qlora
- Dev to main
- Qlora
- lots of various improvements
- Mpt triton
- Jeopardy bot!
- merge dev branch for various fixes
lm-sys/FastChat
1 open
HazyResearch/flash-attention
1 open
Reviewed 16 pull requests in 1 repository
OpenAccess-AI-Collective/axolotl
- Feat: Update validate_config and add tests
- Feat: Update actions version
- Feat: Add warning for trust_remote_code
- Fix: Remove base class inherit for CompletionPrompter
- refactor: prompter
- refactor: change 4bit nomenclature to gptq
- load the tokenizer separately from the model
- Feat: Add cfg.lora_target_linear
- fixes w/ example for super basic lora starter
- Qlora
- Feat: Rewrite Readme
- fix: handles AutoTokenizer from untrusted source
- lots of various improvements
- Feat: Set half using cfg.fp16 for 4bit
- Fix: Save adapter for lora
- Add eval_batch_size for evaluation
Created an issue in OpenAccess-AI-Collective/axolotl that received 4 comments
disable checkpoint for wandb_log_model:
update all the configs / examples and change wandb_log_model: checkpoint => wandb_log_model:
this will prevent uploading obscenely large artifacts…
4 comments
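The change proposed in the issue above would look roughly like this in an axolotl YAML config (a sketch based on the issue text; the empty value disabling uploads is an assumption):

```yaml
# before: every checkpoint is uploaded as a W&B artifact
wandb_log_model: checkpoint

# after: leave the value empty so no model artifacts are uploaded
wandb_log_model:
```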
Opened 15 other issues in 4 repositories
OpenAccess-AI-Collective/axolotl
9 open • 3 closed
- update references to previous repo location
- custom prompt strategies improvement
- qlora save peft on final callback
- load tokenizer separately and before the models
- add pre-commit hook with pylint, flake8 and black
- add bitsandbytes build with cuda library in base docker image
- save steps enhancement
- when hashing the dataset cache, be sure to use the tokenizer as part of the hash key
- optionally save as safetensors.
- run model.train() on models before training
- early stopping callback requires load_best_model_at_end to be True
- issues to fix reported from discord