Posts in category: open-source (1)
Juni_DEV
The paper is about 11 pages excluding the Appendix and roughly 37 pages in total; since the Appendix is packed with experiments, this post runs a bit long.

OpenVLA
- 7B-parameter open-source VLA model
- trained on 970K robot episodes from the Open X-Embodiment dataset
- built on a Llama 2 LM + a visual encoder (pretrained features from DINOv2 + SigLIP)
- fully open-source; models can be downloaded and fine-tuned from HF

Background
There are two key reasons preventing the widespread use of existin..
Paper Review
2025. 6. 19. 12:54
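Since the excerpt notes that the OpenVLA checkpoints can be downloaded and fine-tuned from HF, here is a minimal loading sketch. It assumes the `openvla/openvla-7b` repo id and the `predict_action` / `unnorm_key` interface exposed via `trust_remote_code` as described on the model card; the prompt wording and input image here are placeholders for illustration.

```python
# Minimal sketch: load OpenVLA-7B from Hugging Face and predict one action.
# Assumes the repo id "openvla/openvla-7b" and the predict_action/unnorm_key
# helpers provided by the model's remote code (per its model card).
import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor

processor = AutoProcessor.from_pretrained("openvla/openvla-7b", trust_remote_code=True)
vla = AutoModelForVision2Seq.from_pretrained(
    "openvla/openvla-7b",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).to("cuda:0")

# A single camera frame; replace with your own observation.
image = Image.open("frame.png").convert("RGB")
prompt = "In: What action should the robot take to pick up the red block?\nOut:"

# Returns a 7-DoF end-effector action, un-normalized with BridgeData V2 statistics.
inputs = processor(prompt, image).to("cuda:0", dtype=torch.bfloat16)
action = vla.predict_action(**inputs, unnorm_key="bridge_orig", do_sample=False)
print(action)
```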