
Data Science by ODS.ai 🦜

Channel address: @opendatascience
Categories: Technologies
Language: English
Subscribers: 51.75K
Description from channel

The first Telegram Data Science channel. Covering technical and popular material on anything related to Data Science: AI, Big Data, Machine Learning, Statistics, general Math, and the applications of the former. To reach the editors, contact: @haarrp

Ratings & Reviews

2.67 · 3 reviews

Reviews can be left only by registered users. All reviews are moderated by admins.

5 stars: 1
4 stars: 0
3 stars: 0
2 stars: 1
1 star: 1


Latest Messages

2024-04-12 11:17:06
ControlNet++: Improving Conditional Controls with Efficient Consistency Feedback

Proposes an approach that improves controllable generation by explicitly optimizing pixel-level cycle consistency

proj: https://liming-ai.github.io/ControlNet_Plus_Plus/
abs: https://arxiv.org/abs/2404.07987
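
A minimal sketch of the pixel-level cycle-consistency idea, assuming a hypothetical conditional generator and condition extractor (the names below are placeholders, not the authors' code):

import torch.nn.functional as F

def cycle_consistency_loss(generate, extract_condition, cond_image, prompt):
    # generate: conditional image generator (e.g. a ControlNet-style pipeline)
    # extract_condition: model that recovers the condition (edges, segmentation, depth, ...)
    generated = generate(prompt, cond_image)      # image produced under the spatial condition
    recovered = extract_condition(generated)      # condition re-estimated from the output
    return F.mse_loss(recovered, cond_image)      # penalize pixel-level disagreement

The training signal thus rewards generations whose extracted condition matches the input condition pixel by pixel.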

@opendatascience
13.2K views · 08:17
2024-04-09 11:06:13
PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models

Significantly improves fine-tuning performance by simply changing the initialization of LoRA's A and B matrices from Gaussian/zero to principal components.

On GSM8K, Mistral-7B fine-tuned with PiSSA achieves an accuracy of 72.86%, outperforming LoRA’s 67.7% by 5.16%.

Github: https://github.com/GraphPKU/PiSSA
Paper: https://arxiv.org/abs/2404.02948
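
A minimal sketch of the initialization idea, assuming a plain weight matrix W and rank r (an illustration of the principal-component init, not the authors' implementation):

import torch

def pissa_init(W: torch.Tensor, r: int):
    # Factor W into its top-r principal components plus a frozen residual.
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = S[:r].sqrt()
    A = U[:, :r] * sqrt_S              # (out, r) adapter factor, trainable
    B = sqrt_S[:, None] * Vh[:r, :]    # (r, in) adapter factor, trainable
    W_res = W - A @ B                  # frozen residual; W_res + A @ B == W at init
    return A, B, W_res

Training then updates only A and B, so the adapter starts along the directions that carry most of W's energy rather than from random/zero factors.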

@opendatascience
12.6K views · edited · 08:06
2024-04-02 16:00:16
Position: Analyst/Researcher for AI Team at Cyber.fund

About Cyber.fund:
Cyber.fund is a pioneering $100mm research-driven fund specializing in the realm of web3, decentralized AI, autonomous agents, and self-sovereign identity. Our legacy is built upon being the architects behind monumental projects such as Lido, p2p.org, =nil; foundation, Neutron, NEON, and early investments in groundbreaking technologies like Solana, Ethereum, EigenLayer among 150+ others. We are committed to advancing the frontiers of Fully Homomorphic Encryption (FHE) for Machine Learning, privacy-first ML (Large Language Models), AI aggregations, and routing platforms alongside decentralized AI solutions.

Who Are We Looking For?
A dynamic individual who straddles the worlds of business acumen and academic rigor with:
- A robust theoretical foundation in Computer Science and a must-have specialization in Machine Learning.
- An educational background from a technical university, with a preference for PhD holders from prestigious institutions like MIT or MIPT.
- A track record of publications in the Machine Learning domain, ideally at the level of NeurIPS.
- Experience working in startups or major tech companies, ideally coupled with a background in angel investing.
- A profound understanding of algorithms, techniques, and models in ML, with an exceptional ability to translate these into innovative products.
- Fluent English, intellectual curiosity, and a fervent passion for keeping abreast of the latest developments in AI/ML.

Responsibilities:
1) Investment Due Diligence: Conduct technical, product, and business analysis of potential AI/ML investments. This includes market analysis, engaging with founders and technical teams, and evaluating the scalability, reliability, risks, and limitations of products.

2) Portfolio Company (Portco) Support: Provide strategic and technical support to portfolio companies in AI/ML. Assist in crafting technological strategies, hiring, industry networking, identifying potential project challenges, and devising solutions.

3) Market and Technology Research: Stay at the forefront of ML/DL/AI trends (e.g., synthetic data, flash attention, 1bit LLM, FHE for ML, JEPA, etc.). Write publications, whitepapers, and potentially host X spaces/streams/podcasts on these subjects (in English). Identify promising companies and projects for investment opportunities.

How to Apply?
If you find yourself aligning with our requirements and are excited by the opportunity to contribute to our vision, please send your CV to sg@cyber.fund. Including a cover letter, links to publications, open-source contributions, and other achievements will be advantageous.

Location:
Location is flexible, but the candidate should be within the time zones ranging from EET to EST (Eastern Europe to the East Coast of the USA).

This is not just a job opportunity; it's a call to be part of a visionary journey reshaping the landscape of AI and decentralized technology. Join us at Cyber.fund and be at the forefront of the technological revolution.
12.7K views · 13:00
2024-03-10 09:34:10
LLMs are in their childhood years

Source.
15.1K views · 06:34
2023-10-02 08:25:01
Well, AI can learn that humans might be deceiving it.

Update: as our readers noted, the post was originally written by Denis here.
It was then retweeted by Yudkowsky and spread on X.
13.6K views · 05:25
Here are some very interesting notes about how generation behaves for Stable Diffusion models trained on different datasets but given the same noise. It seems very counterintuitive!

https://twitter.com/mokadyron/status/1706618451664474148
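
A rough way to reproduce the setup with diffusers: fix the generator seed so two checkpoints start from the same initial noise (the model ids here are common examples, not necessarily the ones from the thread):

import torch
from diffusers import StableDiffusionPipeline

prompt = "a photo of an astronaut riding a horse"

for model_id in ("runwayml/stable-diffusion-v1-5", "stabilityai/stable-diffusion-2-1-base"):
    pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
    generator = torch.Generator("cuda").manual_seed(42)   # identical initial latent noise for both models
    image = pipe(prompt, generator=generator).images[0]
    image.save(model_id.split("/")[-1] + ".png")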
13.1K views · 10:50
2023-09-14 20:22:31
Introducing Würstchen: Fast Diffusion for Image Generation

A diffusion model whose text-conditional component works in a highly compressed latent space of images.

Why does this matter? Compressing the data cuts the computational cost of both training and inference by orders of magnitude.

Training on 1024×1024 images is far more expensive than on 32×32. Other models typically use comparatively modest compression, in the range of 4x to 8x spatial compression.

Thanks to the new architecture, a 42x spatial compression is achieved!

HF: https://huggingface.co/blog/wuertschen

Paper: https://arxiv.org/abs/2306.00637

Docs: https://huggingface.co/docs/diffusers/main/en/api/pipelines/wuerstchen

Demo: https://huggingface.co/spaces/warp-ai/Wuerstchen
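
A back-of-the-envelope illustration of why the spatial compression factor matters (the 42x figure is from the post; the 8x comparison point is an illustrative assumption about typical VAE-style latents):

image_hw = 1024
for factor in (8, 42):   # a typical VAE-style 8x compression vs. Würstchen's reported 42x
    latent_hw = image_hw // factor
    print(f"{factor}x -> {latent_hw}x{latent_hw} latent grid, "
          f"{latent_hw * latent_hw} positions instead of {image_hw * image_hw} pixels")

At 42x the diffusion model works over roughly a 24x24 grid, which is where the order-of-magnitude savings in training and inference come from.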

ai_machinelearning_big_data
13.6K views · 17:22
StableRep: Synthetic Images from Text-to-Image Models Make Strong Visual Representation Learners

In a ground-breaking exploration of visual representation learning, researchers have leveraged synthetic images produced by leading text-to-image models, specifically Stable Diffusion, achieving promising results. The study uncovers two key insights - firstly, when configured correctly, self-supervised methods trained on synthetic images can match or even outperform those trained on real images. This suggests an exciting avenue for efficient and effective representation learning, reducing the need for extensive real image datasets.

Secondly, the researchers have devised a novel approach called StableRep, a multi-positive contrastive learning method that treats multiple images, generated from the same text prompt, as mutual positives. The compelling finding is that StableRep, trained solely with synthetic images, outperforms representations learned by prominent methods such as SimCLR and CLIP, even when these used real images. In a striking demonstration, when language supervision is added, StableRep trained with 20M synthetic images outperforms CLIP trained with a whopping 50M real images. These findings not only underscore the potential of synthetic data but also pave the way for more efficient, large-scale visual representation learning.

Paper link: https://arxiv.org/abs/2306.00984

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-stablerep
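
A minimal sketch of a multi-positive contrastive loss in the spirit of StableRep, where samples sharing a prompt id are mutual positives (an illustrative formulation, not the paper's exact objective):

import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(embeddings, prompt_ids, temperature=0.1):
    # embeddings: (N, D) features; prompt_ids: (N,) ints, same id = generated from the same prompt
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.T / temperature
    logits.fill_diagonal_(float("-inf"))                    # a sample is never its own positive
    positives = prompt_ids[:, None] == prompt_ids[None, :]
    positives.fill_diagonal_(False)                         # mask of same-prompt pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    loss = -(log_prob * positives).sum(1) / positives.sum(1).clamp(min=1)
    return loss.mean()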

#deeplearning #cv #nlp #stablediffusion #texttoimage #syntheticdata
3.2K views · 04:50
The effectiveness of MAE pre-pretraining for billion-scale pretraining

Revolutionizing the current pretrain-then-finetune paradigm of computer vision, this research has introduced an innovative pre-pretraining stage. Utilizing the Masked Autoencoder (MAE) technique for model initialization, this pre-pretraining strategy scales with the size of both the model and the data. This makes it an ideal tool for training next-generation foundation models, even on the grandest scales.

The robustness of the pre-pretraining technique is demonstrated by consistent improvements in model convergence and downstream transfer performance across diverse model scales and dataset sizes. The authors measured the effectiveness of pre-pretraining on a wide array of visual recognition tasks, and the results have been promising. The largest model achieved unprecedented results on iNaturalist-18 (91.3%), 1-shot ImageNet-1k (62.1%), and zero-shot transfer on Food-101 (96.0%), underlining the tremendous potential of proper model initialization, even when handling web-scale pretraining with billions of images.

Paper link: https://arxiv.org/abs/2303.13496

A detailed unofficial overview of the paper: https://andlukyane.com/blog/paper-review-mae-pretrain
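
A minimal sketch of one MAE-style pre-pretraining step, assuming generic encoder/decoder callables (their signatures here are placeholders, not the paper's code):

import torch
import torch.nn.functional as F

def mae_step(encoder, decoder, patches, mask_ratio=0.75):
    # patches: (batch, num_patches, patch_dim) flattened image patches
    B, N, D = patches.shape
    num_keep = int(N * (1 - mask_ratio))
    perm = torch.rand(B, N, device=patches.device).argsort(dim=1)   # random patch shuffle per sample
    keep_idx, masked_idx = perm[:, :num_keep], perm[:, num_keep:]

    visible = torch.gather(patches, 1, keep_idx[..., None].expand(-1, -1, D))
    latent = encoder(visible)                          # encode only the visible patches
    reconstruction = decoder(latent, masked_idx)       # predict the masked patches
    target = torch.gather(patches, 1, masked_idx[..., None].expand(-1, -1, D))
    return F.mse_loss(reconstruction, target)          # pixel reconstruction loss on masked patches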

#deeplearning #cv #pretraining #selfsupervisedlearning
2.5K views · 04:53