
Spark in me

Channel address: @snakers4
Categories: Technology
Language: Russian
Subscribers: 2.68K
Channel description:

Lost like tears in rain. DS, ML, a bit of philosophy and math. No bs or ads.

Ratings and Reviews

2.50 (2 reviews)

Only registered users can rate the snakers4 channel and leave a review. All reviews are moderated.

5 stars: 0
4 stars: 0
3 stars: 1
2 stars: 1
1 star: 0


Last 10 messages

2022-02-14 13:12:41 Smart shop MVP in Moscow, Russia
Looks like it will work only for low traffic
616 views · Alexander, edited 10:12
2022-02-08 12:15:25 imodels: leveraging the unreasonable effectiveness of rules


Looks like a cool EDA / model-based data-exploration tool for tabular data:

- Illustration
- https://bair.berkeley.edu/blog/2022/02/02/imodels/
- https://github.com/csinva/imodels

Not another 1 trillion param neural network, or AI fairness or policy bs.
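
Haven't run this exact snippet, but the repo advertises sklearn-compatible rule-based models; a minimal sketch, assuming the `RuleFitClassifier` entry point from the README (the dataset choice is mine):

```python
from imodels import RuleFitClassifier  # other rule models live in the same namespace
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Drop-in sklearn-style estimator: the fitted model is a small set of readable rules
model = RuleFitClassifier()
model.fit(X_train, y_train)
print(model)  # for most imodels estimators, printing shows the learned rules
print(model.score(X_test, y_test))
```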
170 views · Alexander, edited 09:15
2022-01-30 13:59:11 Digest 2022-01

# Hardware

Using AI to Manage Internal SSD Parameters - https://thessdguy.com/using-ai-to-manage-internal-ssd-parameters/
Solidigm, SK hynix’ New SSD/Flash Subsidiary - https://thessdguy.com/solidigm-sk-hynix-new-ssd-flash-subsidiary/
EPYC 7773X benchmarks published: 64 cores, 768 MB of L3 cache - https://geekr.vercel.app/post/645203
Micron’s Tiny Little 2TB SSD - https://thessdguy.com/microns-tiny-little-2tb-ssd/
3090 Ti crazy prices - https://habr.com/ru/news/t/645881/

# Code

Object ownership across programming languages - https://codewithoutrules.com/2017/01/26/object-ownership/
Integrate-first approach - https://unstructed.tech/2022/01/10/integrate-first-approach/
Docker vs. Singularity for data processing: UIDs and filesystem access - https://pythonspeed.com/articles/containers-filesystem-data-processing/
Some more info about it - https://www.reddit.com/r/docker/comments/7y2yp2/why_is_singularity_used_as_opposed_to_docker_in/
Memory location matters for performance - https://pythonspeed.com/articles/performance-memory-locality/
Погромист ("crashgrammer"): my most epic failures of my entire career - https://habr.com/ru/post/646393/
3 Things You Might Not Know About Numbers in Python - https://davidamos.dev/three-things-you-might-not-know-about-numbers-in-python/
The fastest way to read a CSV in Pandas - https://pythonspeed.com/articles/pandas-read-csv-fast/ (see the sketch below)
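
On the last link: one well-known speedup here, and if memory serves one the article benchmarks, is the Arrow-backed parser that landed in pandas 1.4; a minimal sketch ("data.csv" is a placeholder):

```python
import pandas as pd

# pandas >= 1.4 with pyarrow installed: parse the CSV with Arrow's
# multithreaded reader instead of the default single-threaded C engine.
df = pd.read_csv("data.csv", engine="pyarrow")
```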

#digest
534 views · Alexander, 10:59
2022-01-30 13:58:35 Digest 2022-01

# Blogs

My first impressions of web3 - https://moxie.org/2022/01/07/web3-first-impressions.html
Dependency Risk and Funding - https://lucumr.pocoo.org/2022/1/10/dependency-risk-and-funding/
Tech questions for 2022 - https://www.ben-evans.com/benedictevans/2022/1/2/2022-questions
5 dirty tricks in competitive Data Science that nobody will tell you about in polite society - https://habr.com/ru/post/600067/
Proof of stake is a scam and the people promoting it are scammers - https://yanmaani.github.io/proof-of-stake-is-a-scam-and-the-people-promoting-it-are-scammers/
Bitcoin will never be a stable currency - https://yanmaani.github.io/bitcoin-will-never-be-a-stable-currency/
Understanding the SSH Encryption and Connection Process - https://www.digitalocean.com/community/tutorials/understanding-the-ssh-encryption-and-connection-process
How does Ethereum work? - https://habr.com/ru/post/407583/
New data: What developers look for in future job opportunities - https://stackoverflow.blog/2021/12/07/new-data-what-developers-look-for-in-future-job-opportunities/
On fake cryptocurrencies - https://habr.com/ru/post/544700/
Journalism, media, and technology trends and predictions 2022 - https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2022
Seoul Robotics launches Level 5 Control Tower to enable autonomous mobility - https://www.therobotreport.com/seoul-robotics-launches-level-5-control-tower-to-enable-autonomous-mobility/
How no-code AI development platforms could introduce model bias - https://venturebeat.com/2022/01/06/how-no-code-ai-development-platforms-could-introduce-model-bias/
Please stop calling admins "DevOps" - https://habr.com/ru/post/646581/
Fast subsets of large datasets with Pandas and SQLite - https://pythonspeed.com/articles/indexing-pandas-sqlite/
Secure your GitHub account with GitHub Mobile 2FA - https://github.blog/2022-01-25-secure-your-github-account-github-mobile-2fa/
One machine can go pretty far if you build things properly - https://rachelbythebay.com/w/2022/01/27/scale/
ML and NLP Research Highlights of 2021 - https://ruder.io/ml-highlights-2021/

#digest
463 views · Alexander, 10:58
2022-01-30 13:57:59 data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language
- [Illustration](https://scontent-arn2-1.xx.fbcdn.net/v/t39.2365-6/271815807_4636921079718503_8613393990345138136_n.gif?_nc_cat=107&ccb=1-5&_nc_sid=ad8a9d&_nc_ohc=yn27DielBOYAX8rk045&_nc_ht=scontent-arn2-1.xx&oh=00_AT8ueSOOllDdunQw26KIBUYwyoOq_b1leSPKrmSfZoeazA&oe=61F26871)
- [Link](https://ai.facebook.com/blog/the-first-high-performance-self-supervised-algorithm-that-works-for-speech-vision-and-text/)
- These are actually 3 separate models (!) - marketing lies as usual
- No clear indication, but the NLP model uses 16 GPUs, others - not specified
- The first high-performance self-supervised algorithm that works for speech, vision, and text
- Trained by predicting the model representations of the full input data given a partial view of the input
- Standard Transformer architecture with a modality-specific encoding
- The encoding of the unmasked training sample is parameterized by an exponentially moving average of the model parameters
- Training targets based on the output of the top K blocks of the teacher network for time-steps which are masked in student mode
- We apply a normalization to each block before averaging the top K blocks
- For speech representations, we use instance normalization
- For NLP and vision, parameter-less layer normalization works well
- 800 epochs, 86M parameters and 307M parameters
- Smooth L1 loss
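
A minimal sketch of the teacher-student mechanics described above (EMA weight tracking plus smooth-L1 regression onto the averaged, normalized top-K teacher blocks); all names and defaults here are illustrative, not the paper's code:

```python
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    # The teacher starts as a copy of the student and receives no gradients.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

@torch.no_grad()
def ema_update(teacher, student, tau=0.999):
    # Teacher weights track an exponential moving average of the student's.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(tau).add_(s_p, alpha=1 - tau)

def data2vec_loss(student_out, teacher_blocks, k=8):
    # Target: average of the normalized top-K teacher block outputs on the
    # unmasked input; the student predicts it from the masked input. K is
    # a hyper-parameter.
    target = torch.stack(
        [F.layer_norm(h, h.shape[-1:]) for h in teacher_blocks[-k:]]
    ).mean(0)
    return F.smooth_l1_loss(student_out, target)
```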


HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units
- arXiv:2106.07447 - https://arxiv.org/abs/2106.07447
- Offline clustering step to provide aligned target labels for a BERT-like prediction loss
- Applying the prediction loss over the masked regions only
- Relies on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels
- Acoustic unit discovery models to provide frame-level targets
- How to mask and where to apply the prediction loss:
- p% of the timesteps are randomly selected as start indices, and spans of l steps are masked
- cross-entropy losses over masked and unmasked timesteps, combined as a weighted sum with parameter α
- α = 1 is more resilient to the quality of cluster targets, which is demonstrated in our experiments
- Multiple clusterings, iterative refinement starting with MFCC
- Convolutional waveform encoder, a BERT encoder, a projection layer and a code embedding layer
- BASE, LARGE, and X-LARGE - 95M, 317M, 964M
- ![image](https://user-images.githubusercontent.com/12515440/150782226-92accb43-380a-4e0f-91f5-86fdba4624ce.png)
- Convolutional encoder generates a feature sequence at a 20ms framerate for audio sampled at 16kHz (CNN encoder down-sampling factor is 320x)
- After pre-training, CTC loss for ASR fine-tuning of the whole model weights except the convolutional audio encoder, which remains frozen
- CTC target vocabulary includes 26 English chars + space + apostrophe + CTC blank
- 960h of LibriSpeech + 60kh of Libri-light
- First iteration labels: 960 hour LibriSpeech training set, k-means clustering with 100 clusters on 39-dimensional MFCC features, which are 13 coefficients with the first and the second-order derivatives
- For the subsequent iterations, k-means clustering with 500 clusters on the latent features from the HuBERT model pre-trained in the previous iteration
- MiniBatchKMeans
- BASE - two iterations on the 960h on 32 GPUs (batch size of at most 87.5 seconds of audio per GPU), 250k steps
- LARGE and X-LARGE for one iteration on 60kh on 128 and 256 GPUs, respectively, for 400k steps
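
A minimal sketch of the first-iteration target generation described above (39-dim MFCCs clustered into 100 units with MiniBatchKMeans); `librosa` is my choice for feature extraction here, not necessarily the authors':

```python
import librosa
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def mfcc39(path):
    # 13 MFCCs + first- and second-order deltas = 39-dim frame features
    y, sr = librosa.load(path, sr=16000)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    feats = np.concatenate(
        [m, librosa.feature.delta(m), librosa.feature.delta(m, order=2)]
    )
    return feats.T  # (n_frames, 39)

wav_paths = ["utt.wav"]  # placeholder: your 16 kHz training audio
X = np.concatenate([mfcc39(p) for p in wav_paths])
km = MiniBatchKMeans(n_clusters=100, batch_size=10_000).fit(X)
frame_targets = km.predict(mfcc39("utt.wav"))  # pseudo-labels for the BERT-style loss
```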


#digest
392 views · Alexander, 10:57
2022-01-30 13:57:59 Digest 2022-01

# Speech

AI that understands speech by looking as well as hearing - https://ai.facebook.com/blog/ai-that-understands-speech-by-looking-as-well-as-hearing

HuBERT: Self-supervised representation learning for speech recognition, generation, and compression - https://ai.facebook.com/blog/hubert-self-supervised-representation-learning-for-speech-recognition-generation-and-compression

# ML

Graph neural networks - https://dyakonov.org/2021/12/30/gnn/
A Gentle Introduction to Graph Neural Networks - https://distill.pub/2021/gnn-intro/
GPT-3, Foundation Models, and AI Nationalism - https://lastweekin.ai/p/gpt-3-foundation-models-and-ai-nationalism
The Illustrated Retrieval Transformer - https://jalammar.github.io/illustrated-retrieval-transformer/
You get what you measure: New NLU benchmarks for few-shot learning and robustness evaluation - https://www.microsoft.com/en-us/research/blog/you-get-what-you-measure-new-nlu-benchmarks-for-few-shot-learning-and-robustness-evaluation/
Azure AI milestone: New foundation model Florence v1.0 advances state of the art, topping popular computer vision leaderboards - https://www.microsoft.com/en-us/research/blog/azure-ai-milestone-new-foundation-model-florence-v1-0-pushing-vision-and-vision-language-state-of-the-art/
Language modelling at scale: Gopher, ethical considerations, and retrieval - https://deepmind.com/blog/article/language-modelling-at-scale
Sequence-to-sequence learning with Transducers - https://lorenlugosch.github.io/posts/2020/11/transducer/
A contemplation of logsumexp - https://lorenlugosch.github.io/posts/2020/06/logsumexp/ (see the sketch at the end of this digest)
Meta claims its AI improves speech recognition quality by reading lips - https://venturebeat.com/2022/01/07/meta-claims-its-ai-improves-speech-recognition-quality-by-reading-lips/
Training 100B models is fucking hard - https://github.com/bigscience-workshop/bigscience/blob/master/train/lessons-learned.md
Scaling Vision with Sparse Mixture of Experts - https://ai.googleblog.com/2022/01/scaling-vision-with-sparse-mixture-of.html
Model interpretation and data shift diagnostics: LIME, SHAP, and Shapley Flow - https://habr.com/ru/company/ods/blog/599573/
A ConvNet for the 2020s - https://arxiv.org/pdf/2201.03545.pdf
LaMDA: Towards Safe, Grounded, and High-Quality Dialog Models for Everything - https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html
Separating Birdsong in the Wild for Classification - https://ai.googleblog.com/2022/01/separating-birdsong-in-wild-for.html
Accurate Alpha Matting for Portrait Mode Selfies on Pixel 6 - https://ai.googleblog.com/2022/01/accurate-alpha-matting-for-portrait.html
The Gradient Update #16: China's World-leading Surveillance Research and a ConvNet for the 2020s - https://thegradientpub.substack.com/p/the-gradient-update-16-chinas-world
Does Gradient Flow Over Neural Networks Really Represent Gradient Descent? - http://www.offconvex.org/2022/01/06/gf-gd/
Does Your Medical Image Classifier Know What It Doesn’t Know? - https://ai.googleblog.com/2022/01/does-your-medical-image-classifier-know.html
Introducing Text and Code Embeddings in the OpenAI API - https://openai.com/blog/introducing-text-and-code-embeddings/
Steering Towards Effective Autonomous Vehicle Policy - https://thegradient.pub/engaging-with-disengagement/

Introducing StylEx: A New Approach for Visual Explanation of Classifiers
- https://ai.googleblog.com/2022/01/introducing-stylex-new-approach-for.html
- tl;dr: very cool, but most likely requires a lot of compute
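
On the logsumexp link above, the whole trick fits in a few lines; a minimal sketch:

```python
import numpy as np

def logsumexp(x):
    # Shift by the max so exp() cannot overflow:
    # log sum_i exp(x_i) = m + log sum_i exp(x_i - m), with m = max(x)
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

print(logsumexp(np.array([1000.0, 1000.0])))  # ~1000.6931; the naive form overflows to inf
```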
437 views · Alexander, 10:57
2022-01-28 14:40:56 Yet Another Farcical Self-Delusion?

I do not work with text embedding models currently, but such threads are utterly hilarious:

https://twitter.com/Nils_Reimers/status/1487014195568775173?s=20&t=z8jAsiDgoASIOqppzhNnkQ

If anyone does, please tell me whether this take is biased. But when I worked with such models, the low-key public models published by FAIR / Google worked decently, so idk.

If OpenAI in reality is as useful as Tesla car service ... well you know =)

I can only add that when we were looking for some base compact multi-language transformer model for fine-tuning ... the best we found was dated ~2019, which I find fucking hilarious.

But ofc there were several people re-uploading the most popular models from 2018 by the hundreds... claiming to make them more compact ... just by cutting unnecessary embeddings for a given language.

#no_bs
586 views · Alexander, 11:40
2022-01-25 15:46:36 Stellar No BS Articles

ConvMixer: Patches Are All You Need?

- An extremely simple model (30 lines of code in a readable format, <10 lines in non-readable format) - patch layer + some wide conv layers
- Claims competitive quality for its simplicity
- Unlike effnets / regnets / nasnets, no huge compute was poured into its design

- https://github.com/locuslab/convmixer
- https://arxiv.org/pdf/2201.09792v1.pdf

Looks like a resurgence of plain, working ideas: ConvNeXt, RepVGG, and now this.
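
For scale, the whole model really does fit on a page; a sketch close to the repo's readable version (PyTorch >= 1.9 for padding="same"):

```python
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer(dim, depth, kernel_size=9, patch_size=7, n_classes=1000):
    return nn.Sequential(
        # patch embedding: a single strided conv
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(), nn.BatchNorm2d(dim),
        *[nn.Sequential(
            # depthwise "spatial mixing" conv with a residual connection
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(), nn.BatchNorm2d(dim))),
            # pointwise "channel mixing" conv
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(), nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)), nn.Flatten(),
        nn.Linear(dim, n_classes),
    )
```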

#no_bs
422 views · Alexander, edited 12:46
2022-01-20 17:13:52 Even Better High Quality Ukrainian TTS

The same model, but it sounds much better

Link - https://github.com/snakers4/silero-models#models-and-speakers
327 views · Alexander, 14:13