High-Quality CV Paper Translations
## AlexNet
[#61 AlexNet: The Pioneering Work of CNNs](https://wanghao.blog.csdn.net/article/details/128503264)
## VGGNet
[#1 VGG](https://wanghao.blog.csdn.net/article/details/120094424)
## GoogLeNet Series
[#2 GoogLeNet](https://wanghao.blog.csdn.net/article/details/120110029)
[#3 Inception V2](https://wanghao.blog.csdn.net/article/details/120148613)
[#4 Inception V3](https://wanghao.blog.csdn.net/article/details/120156107)
[#62 Inception-v4](https://wanghao.blog.csdn.net/article/details/128522765)
## ResNet
[#5 ResNet](https://wanghao.blog.csdn.net/article/details/120178266)
## DenseNet
[#10 DenseNet](https://wanghao.blog.csdn.net/article/details/120347118)
## Swin Transformer
[#16 Swin Transformer](https://wanghao.blog.csdn.net/article/details/120724040)
[#49 Swin Transformer V2: Scaling Up Capacity and Resolution](https://wanghao.blog.csdn.net/article/details/127135297)
## MAE
[#21 MAE: Masked Autoencoders Are Scalable Vision Learners](https://wanghao.blog.csdn.net/article/details/121605608)
## CoAtNet
[#22 CoAtNet: Marrying Convolution and Attention for All Data Sizes](https://wanghao.blog.csdn.net/article/details/121993729)
## ConvNeXt V1 & V2
[#25 ConvNeXt: Outperforming Transformers, the New Hope for CNNs](https://wanghao.blog.csdn.net/article/details/122451111)
[#64 ConvNeXt V2 Paper Translation: ConvNeXt V2 Meets MAE](https://wanghao.blog.csdn.net/article/details/128541957?spm=1001.2014.3001.5502)
## MobileNet Series
[#26 MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://wanghao.blog.csdn.net/article/details/122692846)
[#27 MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://wanghao.blog.csdn.net/article/details/122729844)
[#28 Searching for MobileNetV3](https://wanghao.blog.csdn.net/article/details/122779006)
## MPViT
[#29 MPViT: Multi-Path Vision Transformer for Dense Prediction](https://wanghao.blog.csdn.net/article/details/122782937)
## ViT
[#30 Vision Transformer](https://wanghao.blog.csdn.net/article/details/123695223)
## SWA
[#32 SWA: Averaging Weights Leads to Wider Optima and Better Generalization](https://wanghao.blog.csdn.net/article/details/124409374)
## EfficientNet Series
[#34 EfficientNetV2: Faster, Smaller, Stronger (Paper Translation)](https://wanghao.blog.csdn.net/article/details/117399085)
## MobileViT
[#35 MobileViT: A Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://wanghao.blog.csdn.net/article/details/124546928)
## EdgeViTs
[#37 EdgeViTs: Competing with Lightweight CNNs on Mobile Devices Using Vision Transformers](https://wanghao.blog.csdn.net/article/details/124730330)
## MixConv
[#38 MixConv: Mixed Depthwise Convolutional Kernels](https://wanghao.blog.csdn.net/article/details/124779609)
## RepLKNet
[#39 RepLKNet Scales Kernels Up to 31x31: Revisiting Large Kernel Design in CNNs](https://wanghao.blog.csdn.net/article/details/124875771)
## TransFG
[#40 TransFG: A Transformer Architecture for Fine-Grained Recognition](https://wanghao.blog.csdn.net/article/details/124919932)
## ConvMAE
[#41 ConvMAE: Masked Convolution Meets Masked Autoencoders](https://wanghao.blog.csdn.net/article/details/124988783)
## MicroNet
[#42 MicroNet: Image Recognition with Extremely Low FLOPs](https://wanghao.blog.csdn.net/article/details/125177445)
## RepVGG
[#46 RepVGG: Making Convolution Great Again](https://wanghao.blog.csdn.net/article/details/126446922)
## MaxViT
[#48 MaxViT: Multi-Axis Vision Transformer](https://wanghao.blog.csdn.net/article/details/127064117)
## MAFormer
[#53 MAFormer: A Transformer Network with Multi-Scale Attention Fusion for Visual Recognition](https://wanghao.blog.csdn.net/article/details/127492341)
## GhostNet Series
[#56 GhostNet: More Features from Cheap Operations](https://wanghao.blog.csdn.net/article/details/127981705)
[#57 RepGhost: A Hardware-Efficient Ghost Module via Re-parameterization](https://wanghao.blog.csdn.net/article/details/128090737)
## DeiT Series
[#58 DeiT: Training Data-Efficient Image Transformers & Distillation Through Attention](https://wanghao.blog.csdn.net/article/details/128180419)
## MetaFormer
[#59 MetaFormer Is Actually What You Need for Vision](https://wanghao.blog.csdn.net/article/details/128281326)
## RegNet
[#60 RegNet: Designing Network Design Spaces](https://wanghao.blog.csdn.net/article/details/128339572)
# Attention Mechanisms
[#23 NAM: Normalization-based Attention Module](https://wanghao.blog.csdn.net/article/details/122092352)
# Object Detection
[#6 SSD: Paper Translation and Code Collection](https://wanghao.blog.csdn.net/article/details/105788036)
[#7 CenterNet](https://wanghao.blog.csdn.net/article/details/105593958)
[#8 M2Det](https://wanghao.blog.csdn.net/article/details/105593927)
[#9 YOLOX](https://wanghao.blog.csdn.net/article/details/119535667)
[#11 Microsoft's Dynamic Head Sets a New COCO Record: 60.6 AP](https://wanghao.blog.csdn.net/article/details/120365468)
[#12 Sparse R-CNN: End-to-End Object Detection with Learnable Proposals](https://wanghao.blog.csdn.net/article/details/120413137)
[#13 CenterNet2 Paper Analysis: Up to 56.4 mAP on COCO](https://wanghao.blog.csdn.net/article/details/120464708)
[#14 UMOP](https://wanghao.blog.csdn.net/article/details/120506747)
[#15 CBNetV2](https://wanghao.blog.csdn.net/article/details/120583025)
[#19 SE-SSD Paper Translation](https://wanghao.blog.csdn.net/article/details/120875331)
[#24 YOLOR: A Unified Network for Multiple Tasks](https://wanghao.blog.csdn.net/article/details/122357992)
[#31 Exploring Plain Vision Transformer Backbones for Object Detection](https://wanghao.blog.csdn.net/article/details/123960815)
[#36 CenterNet++ for Object Detection](https://wanghao.blog.csdn.net/article/details/124623781)
[#45 YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors](https://wanghao.blog.csdn.net/article/details/126302859)
# Pedestrian Attribute Recognition
[#66 A Survey of Pedestrian Attribute Recognition (Part 1)](https://wanghao.blog.csdn.net/article/details/128736760)
[#66 A Survey of Pedestrian Attribute Recognition (Part 2)](https://wanghao.blog.csdn.net/article/details/128736732)
# Pedestrian Tracking
[#47 BoT-SORT: Robust Associations Multi-Pedestrian Tracking](https://wanghao.blog.csdn.net/article/details/126890651)
[#65 SMILEtrack: Similarity Learning for Multiple Object Tracking](https://blog.csdn.net/hhhhhhhhhhwwwwwwwwww/article/details/128615947)
[#70 DeepSORT: Paper Translation](https://wanghao.blog.csdn.net/article/details/129003397)
# OCR
[#20 Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition](https://wanghao.blog.csdn.net/article/details/121313548)
[#44 DBNet: Real-Time Scene Text Detection with Differentiable Binarization](https://wanghao.blog.csdn.net/article/details/125513523)
# Super-Resolution
[#33 SwinIR](https://wanghao.blog.csdn.net/article/details/124434886)
# Low-Light Enhancement
## RetinexNet
[#52 RetinexNet: Deep Retinex Decomposition for Low-Light Enhancement](https://wanghao.blog.csdn.net/article/details/127400091)
[#50 Toward Fast, Flexible, and Robust Low-Light Image Enhancement](https://wanghao.blog.csdn.net/article/details/127211265)
# NLP
[#17 TextCNN](https://wanghao.blog.csdn.net/article/details/120729088)
[#18 BERT Paper Translation](https://wanghao.blog.csdn.net/article/details/120864338)
# Multimodal
[#43 CLIP: Learning Transferable Visual Models from Natural Language Supervision](https://wanghao.blog.csdn.net/article/details/125452516)
# Knowledge Distillation
[#54 Knowledge Distillation: Distilling the Knowledge in a Neural Network](https://wanghao.blog.csdn.net/article/details/127808674)
# Pruning
[#55 Pruning: Learning Efficient Convolutional Networks Through Network Slimming](https://wanghao.blog.csdn.net/article/details/127871910)
# Smart City
[#51 Spatial-Temporal Interactive Dynamic Graph Convolutional Network for Traffic Forecasting](https://wanghao.blog.csdn.net/article/details/127306179)