The Wayback Machine - https://web.archive.org/web/20220811005248/https://github.com/open-mmlab/mmaction2/issues/19

Roadmap of MMAction2 #19

Open
hellock opened this issue Jul 13, 2020 · 38 comments
Labels: good first issue, help wanted

hellock (Member) commented Jul 13, 2020

We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.

You can either:

  1. Suggest a new feature by leaving a comment.
  2. Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to every feature request, so vote for the ones you want most!)
  3. Tell us that you would like to help implement one of the features in the list or review the PRs. (This is the best thing we could hear!)
@hellock hellock pinned this issue Jul 13, 2020
@hellock hellock added good first issue help wanted labels Jul 13, 2020
d-li14 commented Jul 14, 2020

I suppose it would be interesting to add CSN and X3D from FAIR to the supported model family.
I also have an interest in helping implement/review them if time permits.

hellock (Member, Author) commented Jul 14, 2020

I suppose it would be interesting to add CSN and X3D from FAIR to the supported model family.
I also have an interest in helping implement/review them if time permits.

CSN is in the plan for the next release. It would be great if you could help with the implementation of X3D.

Amazingren commented Jul 15, 2020

I strongly recommend adding support for the FineGym99 dataset with the video dataset_type; it would make it more convenient for users to validate ideas for fine-grained action recognition or localization tasks. I hope this comes true in the not-so-distant future!

irvingzhang0512 (Contributor) commented Jul 16, 2020

It would be nice if MMAction2 could support the AVA dataset and spatio-temporal action detection models.

q5390498 commented Jul 20, 2020

It would be nice if MMAction2 could provide some pretrained backbone models for users, such as ResNet3dSlowFast.

hellock (Member, Author) commented Jul 21, 2020

It would be nice if MMAction2 could support the AVA dataset and spatio-temporal action detection models.

Yes, it is in the plan.

hellock (Member, Author) commented Jul 21, 2020

It would be nice if MMAction2 could provide some pretrained backbone models for users, such as ResNet3dSlowFast.

There are already lots of pretrained models in the model zoo.

@innerlee innerlee mentioned this issue Jul 27, 2020
IDayday commented Jul 27, 2020

It would be better if the model could output a video format such as MP4. I have tried demo.py; it returns only text.

dreamerlin (Collaborator) commented Aug 3, 2020

demo.py now supports outputting video and GIF formats.
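A sketch of how such an invocation might look (the config, checkpoint, video, and label-map paths below are illustrative placeholders, not files guaranteed to exist in your checkout; the extension of the output filename is what selects video vs. GIF output):

```shell
# Run the recognition demo and write the prediction overlaid on the input as a
# video; use a .gif extension on --out-filename to get a GIF instead.
# All paths are placeholders for illustration.
python demo/demo.py \
    configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py \
    checkpoints/tsn_r50_kinetics400.pth \
    demo/demo.mp4 \
    tools/data/kinetics/label_map_k400.txt \
    --out-filename demo/demo_out.mp4
```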

innerlee (Contributor) commented Aug 3, 2020

@dreamerlin could you please consolidate all the feature requests into one grand post here, so that we can easily track their status? 🏃

tianyuan168326 commented Aug 8, 2020

Introducing the Multigrid or mixed-precision training strategies would be helpful for faster prototype iteration.

JJBOY commented Aug 8, 2020

In the action localization task, you provide the code to compute the AUC metric for action proposal evaluation.
Could you also provide the classification results needed to compute mAP?

IDayday commented Aug 31, 2020

Can it be used to recognize real-time video from a webcam or a similar source? Thanks.

makecent (Contributor) commented Oct 1, 2020

There are many trained models in the Model Zoo, but all of them are just used to test the performance of the proposed works. Do you plan to make them available for backbone pre-training? Say I want to use an I3D pre-trained on Kinetics-400 as the pre-trained backbone of my own model. It seems that we don't have much choice of pre-trained backbones besides a ResNet-50 on ImageNet.

dreamerlin (Collaborator) commented Oct 1, 2020

There are many trained models in the Model Zoo, but all of them are just used to test the performance of the proposed works. Do you plan to make them available for backbone pre-training? Say I want to use an I3D pre-trained on Kinetics-400 as the pre-trained backbone of my own model. It seems that we don't have much choice of pre-trained backbones besides a ResNet-50 on ImageNet.

To use a pre-trained model for the whole network, add the link to the pre-trained model as load_from in the new config. See Tutorial 1: Finetuning Models # Use Pre-Trained Model and the example. To use backbone pre-training, change the pretrained value in the backbone dict; unexpected keys will be ignored.
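The two options can be sketched as an MMAction2-style config fragment (the checkpoint URL and the model settings below are illustrative placeholders, not real Model Zoo entries):

```python
# Option 1: initialise the whole network from a checkpoint via `load_from`;
# keys that do not match the new model are ignored, so a recognizer checkpoint
# can still initialise a model with a different head.
load_from = 'https://example.com/checkpoints/i3d_r50_kinetics400.pth'  # placeholder URL

# Option 2: initialise only the backbone by setting `pretrained` inside the
# backbone dict; the head is then trained from scratch.
model = dict(
    type='Recognizer3D',
    backbone=dict(
        type='ResNet3d',
        depth=50,
        pretrained='https://example.com/checkpoints/i3d_r50_kinetics400.pth'),  # placeholder URL
    cls_head=dict(
        type='I3DHead',
        num_classes=10,      # your own task's number of classes
        in_channels=2048))
```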

makecent (Contributor) commented Oct 2, 2020

There are many trained models in the Model Zoo, but all of them are just used to test the performance of the proposed works. Do you plan to make them available for backbone pre-training? Say I want to use an I3D pre-trained on Kinetics-400 as the pre-trained backbone of my own model. It seems that we don't have much choice of pre-trained backbones besides a ResNet-50 on ImageNet.

To use a pre-trained model for the whole network, add the link to the pre-trained model as load_from in the new config. See Tutorial 1: Finetuning Models # Use Pre-Trained Model and the example. To use backbone pre-training, change the pretrained value in the backbone dict; unexpected keys will be ignored.

Wow! Fantastic! I think you should mention this feature somewhere in the docs, in case others like me don't know that they can directly use the pre-trained weights of the whole model for the backbone.

vikizhao156 commented Nov 1, 2020

Could you please support X3D?

dreamerlin (Collaborator) commented Nov 3, 2020

Could you please support X3D?

Here are the X3D config files: https://github.com/open-mmlab/mmaction2/tree/master/configs/recognition/x3d

ahkarami commented Nov 21, 2020

Could you please add Video Action/Activity Temporal Segmentation models?

ahkarami commented Nov 21, 2020

Also, could you please add video models on the MovieNet dataset?

mikeyEcology commented Dec 11, 2020

Hi, I'm struggling to train a model on a dataset structured like the AVA dataset. Does anyone have a config file that they have used for this type of dataset and would be willing to share? There is code to create an AVA dataset, but I haven't been able to find any config files. Otherwise, is there a different framework I can use to train when I have bounding boxes in the training data?
Thank you

@innerlee innerlee mentioned this issue Dec 14, 2020
15 tasks
wwdok (Contributor) commented Dec 14, 2020

Recently I learned about action localization/detection/segmentation (they seem to be the same thing). It seems it can generate a caption-like file, which I found very interesting and practical. I would really appreciate it if MMAction2 could have an action localization demo and more docs about it, thanks!

irvingzhang0512 (Contributor) commented Dec 18, 2020

Very happy to have a spatio-temporal action detection model today... Two related features would be very helpful:

  1. A spatio-temporal action detection online/video demo.
  2. Training spatio-temporal action detection models with custom categories (e.g. choose sit/stand/lie and ignore all other categories).
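For the custom-categories request, one possible config-level sketch, assuming a `custom_classes` option on the AVA-style dataset (the label ids, paths, and the option name itself are illustrative assumptions, not guaranteed API):

```python
# Hypothetical config fragment: restrict an AVA-style dataset to a few
# categories. `custom_classes` lists the AVA label ids to keep (e.g. ids
# standing in for sit/stand/lie here); all other categories would be ignored
# during training and evaluation. Paths are placeholders.
data = dict(
    train=dict(
        type='AVADataset',
        ann_file='data/ava/annotations/ava_train_v2.1.csv',
        pipeline=[],                   # data-loading pipeline omitted for brevity
        custom_classes=[11, 12, 14],   # placeholder label ids to keep
        num_classes=4))                # 3 kept classes + 1 background slot
```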
FarzanehAskari commented Dec 22, 2020

Do you have a plan to add flow models for TSN and I3D?

jin-s13 (Contributor) commented Jan 6, 2021

How about adding some models for temporal action segmentation?

jayleicn commented Jan 15, 2021

Thanks for the great repo! Do you have plans to add S3D and S3D-G from https://arxiv.org/abs/1712.04851? They achieve better performance than the I3D model while running much faster. Here is a reproduced implementation of the S3D model: https://github.com/kylemin/S3D. And for the S3D-G model: https://github.com/antoine77340/S3D_HowTo100M/blob/master/s3dg.py, https://github.com/tensorflow/models/blob/master/research/slim/nets/s3dg.py

sijun-zhou commented Feb 24, 2021

Thanks in advance for this great, steadily progressing repo.

Recently I saw that in the AVA-Kinetics challenge, the new method 'Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization' performed very well, leading the second place by nearly 6 percent in the 2020 competition. I think it is a good candidate to enrich the spatio-temporal action localization area of MMAction2.

Will you consider including this network?
I have also opened a request in #641

tianxianhao commented Feb 25, 2021

Could you please add the algorithm proposed in the paper introducing the AVA dataset [1]? It would be helpful as a comparison experiment for spatio-temporal action localization when using the AVA dataset. The model consists of Faster R-CNN and I3D.

Reference:
[1] Gu C., Sun C., Ross D. A., et al. AVA: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 6047-6056.

f-guitart commented Apr 21, 2021

Is there any plan or current work for multi-modal action classification?

irvingzhang0512 (Contributor) commented Apr 22, 2021

Maybe MMAction2 could support some of the models and datasets from PyTorchVideo.

SubarnaTripathi commented May 14, 2021

Do you plan to support the Action Genome dataset and model?

rlleshi (Contributor) commented May 28, 2021

Add output predictions as JSON in long_video_demo.py (currently, only video output is supported). #862

I have implemented this but need to polish it so that it is clean and consistent with the rest of the codebase. I will open a PR in the future.

Deep-learning999 commented Jun 5, 2021

I hope to have support for the Kinetics-TPS, FineAction, and MultiSports datasets, along with pre-trained models, training, and a web video inference demo.

Deep-learning999 commented Aug 7, 2021

I hope to use PoseC3D to realize skeleton-based spatio-temporal action detection.

connor-john commented Jan 21, 2022

Add demo scripts for temporal action detection models.

This was mentioned in #746; any progress?

@kennymckormick kennymckormick unpinned this issue Feb 15, 2022
@kennymckormick kennymckormick pinned this issue Feb 15, 2022
baigwadood commented Jun 30, 2022

I hope to have a webcam demo for PoseC3D in the near future.

abdulazizab2 commented Jul 21, 2022

Do you plan to add new models for spatio-temporal action detection?

ACRN (Actor-Centric Relation Network) is great. However, ACAR adopts that previous work and builds on it with better results.
