Tags
- multimodal machine learning
- Reinforcement Learning
- transference
- quantification
- Artificial Intelligence
- Kaggle
- Anaconda
- autogluon
- vision-language-action
- Github
- sliding video q-former
- jmeter
- CS285
- ma-lmm
- Baekjoon
- CNN
- MySQL
- tensorflow
- long video understanding
- error
- Python
- hackerrank
- Server
- memory bank
- Linux
- terminology
- LeNet-5
- leetcode
- deeprl
- coding test
List: robotics (1)
Juni_DEV
The paper runs about 11 pages excluding the appendix, roughly 37 pages in total; the appendix contains a lot of experimental material, so it is a fairly long read.

OpenVLA
- 7B-parameter open-source VLA model
- trained on 970K robot episodes from the Open X-Embodiment dataset
- built on a Llama 2 LM plus a visual encoder (pretrained features from DINOv2 + SigLIP)
- fully open-source; models can be downloaded and fine-tuned from HF

Background
There are two key reasons preventing the widespread use of existin..
Paper Review
2025. 6. 19. 12:54