Tags
- Python
- Github
- timechat
- error
- 코딩테스트
- Server
- long video understanding
- sliding video q-former
- transference
- leetcode
- Linux
- vision-language-action
- Anaconda
- CNN
- jmeter
- Kaggle
- quantification
- 용어
- hackerrank
- 백준
- Artificial Intelligence
- q-former
- memory bank
- MySQL
- multimodal machine learning
- autogluon
- LeNet-5
- tensorflow
- timestamp-aware frame encoder
- ma-lmm
Juni_DEV

The paper runs about 11 pages excluding the appendix, roughly 37 pages in total; since much of the experimental material sits in the appendix, the write-up is fairly long.

OpenVLA
- 7B-parameter open-source VLA model
- Trained on 970K robot episodes from the Open X-Embodiment dataset
- Built on a Llama 2 LM plus a visual encoder (pretrained features from DINOv2 + SigLIP)
- Fully open source; models can be downloaded and fine-tuned from HF

Background
There are two key reasons preventing the widespread use of existin..
Paper Review
2025. 6. 19. 12:54