
Spark in me

Channel address: @snakers4
Category: Technology
Language: Russian
Subscribers: 2.68K
Channel description:

Lost like tears in rain. DS, ML, a bit of philosophy and math. No bs or ads.

Latest messages (12)

2022-01-02 12:15:00 Digest 2021-12

# Code

Yet another old TRANSFORMERS FROM SCRATCH post - http://peterbloem.nl/blog/transformers
Быстрый поиск похожих слов на SQL - https://antonz.ru/similar-words/
The Parallelism Blues: when faster code is slower - https://pythonspeed.com/articles/parallelism-slower/
This exists - Визуальное программирование на языке ДРАКОН - https://habr.com/ru/post/345320/
Using Alpine can make Python Docker builds 50× slower - https://pythonspeed.com/articles/alpine-docker-python/
Introducing stack graphs - https://github.blog/2021-12-09-introducing-stack-graphs/
A reverse chronology of some Python features - https://snarky.ca/a-reverse-chronology-of-some-python-features/
Precise code navigation for Python, and code navigation in pull requests - https://github.blog/2021-12-09-precise-code-navigation-python-code-navigation-pull-requests/
A brief history of code search at GitHub - https://github.blog/2021-12-15-a-brief-history-of-code-search-at-github/
WSL 2 GPU Support for Docker Desktop on NVIDIA GPUs - https://www.docker.com/blog/wsl-2-gpu-support-for-docker-desktop-on-nvidia-gpus/
Reducing Pandas memory usage #3: Reading in chunks - https://pythonspeed.com/articles/chunking-pandas/
Technical interviews via Codespaces - https://github.blog/2021-12-16-technical-interviews-via-codespaces/
GitHub Codespaces - https://docs.github.com/en/codespaces
A Scalable Approach for Partially Local Federated Learning - https://ai.googleblog.com/2021/12/a-scalable-approach-for-partially-local.html
Log libraries and the tendency to open holes in things - https://rachelbythebay.com/w/2021/12/18/log/
Where’s that log file? Debugging failed Docker builds - https://pythonspeed.com/articles/debugging-docker-build/
Fix the unit test and open a giant hole everywhere - https://rachelbythebay.com/w/2021/12/24/mkdir/
Some sanity for C and C++ development on Windows - https://nullprogram.com/blog/2021/12/30/
Loading NumPy arrays from disk: mmap() vs. Zarr/HDF5 - https://pythonspeed.com/articles/mmap-vs-zarr-hdf5/

#digest
2022-01-02 12:14:35 Digest 2021-12

# ML / Papers

Evaluating Syntactic Abilities of Language Models - https://ai.googleblog.com/2021/12/evaluating-syntactic-abilities-of.html
Efficiently and effectively scaling up language model pretraining for best language representation model on GLUE and SuperGLUE - https://www.microsoft.com/en-us/research/blog/efficiently-and-effectively-scaling-up-language-model-pretraining-for-best-language-representation-model-on-glue-and-superglue/

Improving Vision Transformer Efficiency and Accuracy by Learning to Tokenize - https://ai.googleblog.com/2021/12/improving-vision-transformer-efficiency.html
- TokenLearner is a learnable module that takes an image-like tensor (i.e., input) and generates a small set of tokens.
- Saves memory and computation by half or more w/o loss of accuracy
- Inserting TokenLearner after the initial quarter of the network (at 1/4) achieves almost identical accuracies as the baseline
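The bullets above describe the mechanism only loosely. A minimal NumPy sketch of TokenLearner-style spatial pooling: each output token is a learned weighted average over all spatial positions. In the paper the attention maps come from a small convolutional block; here a single random linear map stands in for those learned weights, purely for illustration.

```python
import numpy as np

def token_learner(x, num_tokens=8, rng=None):
    """TokenLearner-style spatial pooling sketch.

    x: (H, W, C) image-like feature tensor.
    Returns (num_tokens, C): each token is a learned weighted
    spatial average of the input features.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    H, W, C = x.shape
    # Stand-in for the paper's conv-based attention block:
    # one linear map from channels to per-token attention logits.
    w = rng.standard_normal((C, num_tokens)) * 0.02
    logits = x.reshape(H * W, C) @ w            # (H*W, num_tokens)
    attn = np.exp(logits - logits.max(axis=0))
    attn = attn / attn.sum(axis=0)              # normalize over space
    tokens = attn.T @ x.reshape(H * W, C)       # (num_tokens, C)
    return tokens

# A 32x32 feature map with 16 channels collapses to just 8 tokens:
tokens = token_learner(np.ones((32, 32, 16)), num_tokens=8)
print(tokens.shape)  # (8, 16)
```

The compute saving comes from the sequence-length reduction: downstream transformer layers attend over `num_tokens` items instead of `H*W`.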

General and Scalable Parallelization for Neural Networks - https://ai.googleblog.com/2021/12/general-and-scalable-parallelization.html
The Death of Feature Engineering is Greatly Exaggerated - https://petewarden.com/2021/12/11/the-death-of-feature-engineering-is-greatly-exaggerated/
A Fast WordPiece Tokenization System - https://ai.googleblog.com/2021/12/a-fast-wordpiece-tokenization-system.html - but why?
More Efficient In-Context Learning with GLaM - https://ai.googleblog.com/2021/12/more-efficient-in-context-learning-with.html - new 1T param MOE model
Interpretable Deep Learning for Time Series Forecasting - https://ai.googleblog.com/2021/12/interpretable-deep-learning-for-time.html
Why you should be using active learning to build ML - https://humanloop.com/blog/why-you-should-be-using-active-learning
Training Machine Learning Models More Efficiently with Dataset Distillation - https://ai.googleblog.com/2021/12/training-machine-learning-models-more.html
Farcical Self-Delusion - https://blog.piekniewski.info/2021/12/18/farcical-self-delusion/
How a Kalman filter works, in pictures - https://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/
AI and the Future of Work: What We Know Today - https://thegradient.pub/artificial-intelligence-and-work-two-perspectives/
WebGPT: Improving the factual accuracy of language models through web browsing - https://openai.com/blog/improving-factual-accuracy/#samples
Facebook AI’s WMT21 News Translation Task Submission - http://arxiv.org/abs/2108.03265

#digest
2021-12-31 13:25:08 Hardware News 2021 (RU)

- No new low-end Tensor Core Nvidia GPUs presented

- Oh, and fully vendor-locked PCs from Intel and Nvidia are to be expected in the near future (in their wet dreams) =)

- Also WSL2, Microsoft ARM app and Windows for Android

- Lots of news about Intel (two major releases) and AMD (no major releases)

- AMD to "catch up" to Intel in 2022

- 3080 Ti, mining only GPUs, hash rate limits => no real changes

- Russian tech mumbo jumbo (Mail => VK => Gazprom)

What a brave new world!
2021-12-31 10:29:10 A Small New Year Miracle

Someone from the chat reposted our VAD on HackerNews - https://news.ycombinator.com/item?id=29734797

And it got some traction: only +35, which seems rather modest compared even to Habr or Reddit.

But I took a look at Github this morning, and:

+50 stars
+435 views from hackernews
+313 unique visitors

PS

Previously, whenever I posted anything on HN, it either immediately got banned or was shadow-banned and got no traction whatsoever (talk about free speech, lol). Now a mere repost without any copywriting, just a title and a link, gains nonzero traction.

A virtually identical post from me is just dead (even if it is not directly banned).
2021-12-30 18:12:20 I enabled reactions.

Initially I liked the fact that Telegram had no like button ... but maybe reactions are still in beta ... because of ads?

On the other hand, reactions in Slack are useful, so let's see!
2021-12-30 09:19:51 https://telegra.ph/Itogi-2021-12-29
2021-12-29 15:11:16
Happy New Year!

In 2022 we wish you rapid subscriber growth, high reach, a high-quality active audience and, of course, happiness and health.

Our traditional gift is a New Year card showing how this year went for your channel.

See you in 2022,
The TGStat.ru team
2021-12-28 10:45:51 The Fall of Independent Science Channels on YouTube?

Watching videos from 3B1B or Kurzgesagt has been a long-time diversion of mine (I share the most epic videos). In spreading the "truth" (i.e. a balanced, rational, scientific vision of the world based on the consensus view of science and technology experts), they mostly did something very similar to the popular science magazines of the USSR.

Also, SciShow basically built a media company out of it (and its CEO / founder / host Hank Green ... is a communist in the USA, lol, though there is some controversy there as well).

We mostly lost faith in the future, communism, space, technology, etc. (though if you squint you can see miracles every day). It is a trickle, but such content is always welcome ... until the corporate sell-outs start to emerge.

Well, the positive thing is that I am not the only one who noticed that Veritasium sold out. The bad thing is that this is a trend: SmarterEveryDay basically does ads for the American military.

So if the two biggest channels are sell-outs and / or openly promote questionable values, then what can be said about the rest?

I still respect some of the nerdier guys who keep their content strictly scientific, e.g. making precious metals out of road dirt, that sort of thing.

PS

This very video shills yet another paid interface built over Open VPN or something. What an irony. Though 95% of YouTube can be blamed for this or Raid Shadow Legends. xD