conda activate live
cd LIVE
# Please modify the parameters accordingly.
python main.py --config --experiment --signature --target --log_dir
# Here is a simple example:
python main.py --config config/base.yaml --experiment experiment_5x1 --signature smile --target figures/smile.png --log_dir log/
《Multimodal Token Fusion for Vision Transformers》
GitHub:
github.com/yikaiw/TokenFusion
《PointAugmenting: Cross-Modal Augmentation for 3D Object Detection》
GitHub:
github.com/VISION-SJTU/PointAugmenting
《Fantastic Questions and Where to Find Them: FairytaleQA -- An Authentic Dataset for Narrative Comprehension》
GitHub:
github.com/uci-soe/FairytaleQAData
《LUNAR: Unifying Local Outlier Detection Methods via Graph Neural Networks》
GitHub:
github.com/agoodge/LUNAR
First, extract data.zip.
To replicate the results on the HRSS dataset with neighbour count k = 100 and the "Mixed" negative sampling scheme, run main.py with the matching dataset, sampling, and k arguments (a command sketch follows below).
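A minimal command sketch; the flag names (--dataset, --samples, --k) are assumptions inferred from the description above, so check the repository's README or main.py --help for the exact interface:
# assumed argument names; verify against the LUNAR repository
python main.py --dataset HRSS --samples MIXED --k 100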
You can either pretrain the model yourself or use the pretrained CDLM model weights and tokenizer files, which are available on Hugging Face.
Then, use:
from transformers import AutoTokenizer, AutoModel

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('biu-nlp/cdlm')
model = AutoModel.from_pretrained('biu-nlp/cdlm')
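Once loaded, the model can be used through the standard transformers encoding API; a minimal sketch, with an illustrative input sentence:
# illustrative usage: encode a sentence and take the contextual embeddings
inputs = tokenizer("Cross-document language modeling example.", return_tensors="pt")
outputs = model(**inputs)
embeddings = outputs.last_hidden_state  # shape: (batch, seq_len, hidden_dim)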
《Continual Learning for Task-Oriented Dialogue Systems》
GitHub:
github.com/andreamad8/ToDCL
《Torsional Diffusion for Molecular Conformer Generation》
GitHub:
github.com/gcorso/torsional-diffusion
《MMChat: Multi-Modal Chat Dataset on Social Media》
GitHub:
github.com/silverriver/MMChat
《Can CNNs Be More Robust Than Transformers?》
GitHub:
github.com/UCSC-VLAA/RobustCNN
《Revealing Single Frame Bias for Video-and-Language Learning》
GitHub:
github.com/jayleicn/singularity
《Progressive Distillation for Fast Sampling of Diffusion Models》
GitHub:
github.com/Hramchenko/diffusion_distiller
《Neural Basis Models for Interpretability》
GitHub:
github.com/facebookresearch/nbm-spam
《Scalable Interpretability via Polynomials》
GitHub:
github.com/facebookresearch/nbm-spam
《Infinite Recommendation Networks: A Data-Centric Approach》
GitHub:
github.com/noveens/infinite_ae_cf
《The GatedTabTransformer. An enhanced deep learning architecture for tabular modeling》
GitHub:
github.com/radi-cho/GatedTabTransformer
Usage:
import torch
import torch.nn as nn
from gated_tab_transformer import GatedTabTransformer

model = GatedTabTransformer(
    categories = (10, 5, 6, 5, 8),  # tuple containing the number of unique values within each category
    num_continuous = 10,            # number of continuous values
    transformer_dim = 32,           # dimension, paper set at 32
    dim_out = 1,                    # binary prediction, but could be anything
    transformer_depth = 6,          # depth, paper recommended 6
    transformer_heads = 8,          # heads, paper recommends 8
    attn_dropout = 0.1,             # post-attention dropout
    ff_dropout = 0.1,               # feed forward dropout
    mlp_act = nn.LeakyReLU(0),      # activation for final mlp, defaults to relu, but could be anything else (selu, etc.)
    mlp_depth = 4,                  # mlp hidden layers depth
    mlp_dimension = 32,             # dimension of mlp layers
    gmlp_enabled = True             # gmlp or standard mlp
)

x_categ = torch.randint(0, 5, (1, 5))  # category values, from 0 - max number of categories, in the order as passed into the constructor above
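The snippet above stops before the forward pass. A minimal completion sketch, assuming the model follows the calling convention of lucidrains' tab-transformer-pytorch (categorical and continuous tensors passed separately); verify against the repository's README:
x_cont = torch.randn(1, 10)    # assumed: 10 continuous features, already normalized
pred = model(x_categ, x_cont)  # assumed forward signature; output shape (1, dim_out)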