Juicy Data

Logo of the Telegram channel @juicydata — Juicy Data
Channel address: @juicydata
Categories: Technology
Language: Russian
Country: Russia
Subscribers: 544
Channel description:

Your guide to the DataScience world!

Ratings and Reviews

3.00 · 2 reviews

Only registered users can rate the juicydata channel and leave a review. All reviews are moderated.

5 stars: 0
4 stars: 0
3 stars: 2
2 stars: 0
1 star: 0


Recent posts

Jun 9, 2020
Connected Papers: Explore in a visual graph

Connected Papers is a unique, visual tool to help researchers and applied scientists find and explore papers relevant to their field of work.

Connected Papers is useful to:
- Get a visual overview of a new academic field
- Create the bibliography to your thesis
- Discover the most relevant prior and derivative works

Website: https://www.connectedpapers.com/

BlogPost: https://bit.ly/3f32h8t
#connectedpapers #research #arxiv
1.4K views · 06:21 (edited)

May 1, 2020
AI21 Labs Asks: How Much Does It Cost to Train NLP Models?

AI21 Labs Co-CEO, Stanford University Professor of Computer Science (emeritus), and AI Index initiator Yoav Shoham compared three different-sized Google BERT language models on the 15 GB Wikipedia and Book corpora, evaluating both the cost of a single training run and a typical, fully-loaded model cost.

The team estimated fully-loaded cost to include hyperparameter tuning and multiple runs for each setting:

- $2.5k — $50k (110 million parameter model)
- $10k — $200k (340 million parameter model)
- $80k — $1.6m (1.5 billion parameter model)
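The ranges above boil down to simple GPU-time arithmetic. A minimal sketch of that back-of-envelope calculation — the hourly rate, GPU count, run length, and run count below are illustrative assumptions, not figures from the paper:

```python
# Back-of-envelope estimate of a fully-loaded training cost, in the spirit
# of the AI21 analysis. All rates and run counts here are made-up examples.

def training_cost(gpu_hour_price, num_gpus, hours_per_run, num_runs):
    """Total cost of `num_runs` training runs (e.g. a hyperparameter sweep)."""
    return gpu_hour_price * num_gpus * hours_per_run * num_runs

# A single run: 8 GPUs at $3/hour for 100 hours.
single = training_cost(3.0, 8, 100, 1)         # $2,400

# Fully loaded: 20 runs for tuning and repeats of the same setup.
fully_loaded = training_cost(3.0, 8, 100, 20)  # $48,000

print(f"single run: ${single:,.0f}, fully loaded: ${fully_loaded:,.0f}")
```

The gap between a single run and the fully-loaded figure is exactly why the paper's low and high estimates differ by an order of magnitude.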

Paper: https://arxiv.org/pdf/2004.08900.pdf

BlogPost: https://bit.ly/2SllsBK
#ai21 #nlp #gpu #bert #google
1.5K views · 06:21

Apr 30, 2020
Jukebox: A Generative Model for Music, Open-Sourced by OpenAI

Jukebox is a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. The model behind it is a VQ-VAE.

Samples: https://jukebox.openai.com/
Paper: https://cdn.openai.com/papers/jukebox.pdf
Code: https://github.com/openai/jukebox/

BlogPost: https://openai.com/blog/jukebox/
#openai #jukebox #VQVAE
994 views · 18:53

Apr 30, 2020
Determined: Deep Learning Training Platform

The platform aims to help deep learning teams train models more quickly, easily share GPU resources, and effectively collaborate.

Some of the benefits:
- high-performance distributed training
- intelligent hyperparameter optimization
- flexible GPU scheduling
- built-in experiment tracking, metrics storage, and visualization
- automatic fault tolerance for DL training jobs
- integrated support for TensorBoard and GPU-powered Jupyter notebooks
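To make the hyperparameter-optimization bullet concrete, here is a minimal random-search sketch. It only illustrates the general idea; Determined's actual searcher is adaptive and is configured through its own experiment config, not this code, and `objective` is a stand-in for a real validation metric:

```python
import random

# Minimal random search over two hyperparameters. A real objective would
# train a model and return a validation metric; this one is synthetic.

def objective(lr, batch_size):
    # Peak at lr=0.01, batch_size=64; higher is better.
    return -(lr - 0.01) ** 2 - 0.0001 * (batch_size - 64) ** 2

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {"lr": rng.uniform(1e-4, 1e-1),
                  "batch_size": rng.choice([16, 32, 64, 128])}
        score = objective(**params)
        if best is None or score > best[0]:
            best = (score, params)
    return best

score, params = random_search(200)
print("best score:", score, "with", params)
```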

Website: https://determined.ai/developers/
GitHub: https://github.com/determined-ai/determined
#dl #determined #gpu #deeplearning
843 views · 07:02

Nov 18, 2019
baikal - a graph-based functional API for building complex scikit-learn pipelines

baikal is a graph-based, functional API for building complex machine learning pipelines of objects that implement the scikit-learn API. It is mostly inspired by the excellent Keras API for deep learning, and borrows a few concepts from the TensorFlow framework and the (perhaps lesser-known) graphkit package.

GitHub: https://github.com/alegonz/baikal
#sklearn #scikit #api #baikal
1.7K views · 07:52

Oct 29, 2019
Free GPUs for ML/DL Projects from Gradient

Gradient Community Notebooks from Paperspace offers a free GPU you can use for ML/DL projects with Jupyter notebooks.

Main advantages (over Colab):
- Faster storage than Colab, which uses Google Drive;
- Gradient guarantees the entire session, while Colab instances can be shut down (preempted) mid-session, leading to potential loss of work;
- A large repository of ML templates;
- Ability to add more storage and higher-end dedicated GPUs from the same environment

Gradient: https://gradient.paperspace.com/free-gpu
#hardware #gpu #freegpu #cloud
1.7K views · 08:55 (edited)

Oct 23, 2019
Solving classic unsupervised learning problems with deep neural networks

Discusses ideas from two recent papers: "Learning gradient-based ICA by neurally estimating mutual information" and "Gradient-Based Training of Slow Feature Analysis and Spectral Embeddings".

Blogpost: http://bit.ly/33R99An
#deeplearning #embeddings #unsupervisedlearning
1.2K views · 07:27

Oct 19, 2019
NERD: Evolution of Discrete Data with Reinforcement Learning

A toy project aimed at evolving sequences with an algorithm that combines a genetic algorithm and reinforcement learning. The aim of the project was to evolve SMILES chemical molecules from scratch.
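The genetic-algorithm half of that combination can be sketched in a few lines. This toy evolves a character sequence toward a fixed target string with match-count fitness; NERD's actual algorithm additionally guides evolution with a learned RL reward, and the target and alphabet below are illustrative, not taken from the project:

```python
import random

# Toy genetic algorithm evolving a character sequence toward a target.
# Fitness is just the number of matching positions, purely for illustration.

TARGET = "CCO"        # ethanol in SMILES notation, used as a fixed target
ALPHABET = "CNO()=1"  # a tiny, made-up SMILES-like alphabet

def fitness(seq):
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq, rng, rate=0.3):
    return "".join(rng.choice(ALPHABET) if rng.random() < rate else c
                   for c in seq)

def evolve(pop_size=50, generations=100, seed=0):
    rng = random.Random(seed)
    pop = ["".join(rng.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]          # keep the fitter half
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=fitness)

print(evolve())  # with this toy setup, evolution reaches the target
```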

GitHub: https://github.com/Gananath/NERD
Blog: https://gananath.github.io/nerd.html
#rl #project #dl #nerd
1.1K views · 12:32

Oct 17, 2019
Clash of Frameworks: PyTorch vs TensorFlow

A researcher at Cornell University compared references to TensorFlow and PyTorch in public sources over the past year. PyTorch is growing rapidly within the research community, while TensorFlow maintains an edge in industry, according to the report.

Blogpost: http://bit.ly/2oDPsNB
#research #analysis #pytorch #tf #tensorflow
901 views · 05:54

Oct 16, 2019
Solving Rubik’s Cube with a Robotic Hand

OpenAI trained a pair of neural networks to solve the Rubik’s Cube with a human-like robot hand. The nets are trained entirely in simulation, using the same reinforcement learning code as OpenAI Five paired with a new technique called Automatic Domain Randomization (ADR).
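The core loop of ADR — widen the randomization ranges whenever the policy succeeds in the current distribution — can be sketched roughly as follows. The `evaluate` function and all numbers here are made-up stand-ins; in OpenAI's system the signal is the policy's measured success rate, and the real algorithm also narrows ranges when performance drops:

```python
import random

# Minimal sketch of Automatic Domain Randomization: each randomization
# range grows whenever performance in the current distribution exceeds a
# threshold, so the simulated domain gets progressively harder.

def evaluate(ranges, rng):
    # Hypothetical stand-in: success gets less likely as ranges widen.
    difficulty = sum(hi - lo for lo, hi in ranges.values())
    return rng.random() > difficulty / 10.0

def adr_step(ranges, rng, threshold=0.7, delta=0.05, trials=50):
    successes = sum(evaluate(ranges, rng) for _ in range(trials))
    if successes / trials > threshold:
        # Policy is doing well: widen every range to harden the domain.
        for name, (lo, hi) in ranges.items():
            ranges[name] = (lo - delta, hi + delta)
    return ranges

rng = random.Random(0)
ranges = {"cube_mass": (0.9, 1.1), "friction": (0.95, 1.05)}
for _ in range(20):
    adr_step(ranges, rng)
print(ranges)  # ranges widen for as long as the policy keeps succeeding
```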

Paper: http://bit.ly/2OREuyp
All Videos: http://bit.ly/35FRzRk

Blogpost: https://openai.com/blog/solving-rubiks-cube/
#openai #rl #adr
837 views · 08:06 (edited)

Oct 15, 2019
ML Career Advice and Reading Papers

Andrew Ng's one-hour CS230 lecture on deep learning has been summarised into a concise five-minute read, written to retain what was said and to make it easy to look back and implement.

The material will help you navigate a career in ML/DL and learn how to read research papers.

CS 230 Lectures: http://cs230.stanford.edu/lecture/

Blogpost: http://bit.ly/35AWNy4
#beginner #dl #deeplearning #cs230
791 views · 09:51

Oct 12, 2019
Microsoft open sources SandDance, a visual data exploration tool

By using easy-to-understand views, SandDance helps you find insights about your data, which in turn helps you tell stories supported by data, build cases based on evidence, test hypotheses, dig deeper into surface explanations, support purchasing decisions, or relate your data to a wider, real-world context.

Preview: https://sanddance.js.org/
Git: https://github.com/Microsoft/SandDance

Blogpost: http://bit.ly/31bWO7Y
#js #sandance #microsoft
859 views · 10:10

Oct 11, 2019
PyTorch Mobile: Deployment on iOS and Android

Support (and corresponding packages) for deploying PyTorch models directly to mobile devices for inference was added in the PyTorch 1.3 release.

Overview: https://pytorch.org/mobile/home/

iOS: https://pytorch.org/mobile/ios/
Android: https://pytorch.org/mobile/android/

#pytorch #package #python #mobile
762 views · 15:03 (edited)

Oct 9, 2019
Image Deduplicator (imagededup)

imagededup is a Python package that simplifies the task of finding exact and near duplicates in an image collection, making use of CNNs and different hashing algorithms.
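The hashing side of that idea can be sketched without the package itself. This stdlib-only toy computes an average hash (one bit per pixel) and flags pairs within a small Hamming distance; it is not the imagededup API — the real package works on image files and also offers CNN-embedding methods — and the "images" below are just flat lists of made-up grayscale values:

```python
# Toy hashing-based near-duplicate detection, illustrating the idea behind
# perceptual hashing. An "image" here is a flat list of grayscale pixels.

def average_hash(pixels):
    """1 bit per pixel: is the pixel brighter than the image's mean?"""
    mean = sum(pixels) / len(pixels)
    return tuple(p > mean for p in pixels)

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

def find_near_duplicates(images, max_distance=2):
    hashes = {name: average_hash(px) for name, px in images.items()}
    names = sorted(hashes)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if hamming(hashes[a], hashes[b]) <= max_distance]

images = {
    "cat.png":      [10, 200, 30, 220, 15, 210, 25, 205, 12],
    "cat_copy.png": [11, 198, 32, 219, 14, 212, 24, 206, 13],  # near-dup
    "dog.png":      [200, 10, 220, 30, 210, 15, 205, 25, 212],
}
print(find_near_duplicates(images))  # [('cat.png', 'cat_copy.png')]
```

Small pixel perturbations leave the brighter-than-mean pattern intact, which is why the copy hashes to the same bits while the inverted image does not.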

Code: https://github.com/idealo/imagededup

Docs: https://idealo.github.io/imagededup/
#imagededup #cnn #hashing #lib #python
765 views · 06:44

Oct 8, 2019
The Paths Perspective on Value Learning

A new post on Distill on how Temporal Difference learning merges paths of experience for greater statistical efficiency.
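The update the post analyzes is easy to show on a toy problem. This tabular TD(0) sketch runs a deterministic three-state chain (the chain, rewards, and learning rate are illustrative choices, not from the post): each state's value is nudged toward reward plus the next state's value, so estimates learned on one trajectory are reused by every path passing through a shared state:

```python
# Tabular TD(0) on a deterministic chain A -> B -> C -> end.
# The only reward (+1) is received on leaving C; gamma = 1.

REWARDS = {"A": 0.0, "B": 0.0, "C": 1.0}
NEXT = {"A": "B", "B": "C", "C": None}

def td0(episodes=500, alpha=0.1):
    V = {s: 0.0 for s in REWARDS}
    for _ in range(episodes):
        s = "A"
        while s is not None:
            nxt = NEXT[s]
            # Bootstrap: target = reward + current estimate of next state.
            target = REWARDS[s] + (V[nxt] if nxt is not None else 0.0)
            V[s] += alpha * (target - V[s])
            s = nxt
    return V

V = td0()
print(V)  # all three values approach 1.0 as the reward propagates back
```

A Monte Carlo learner would instead wait for the full return of each episode; bootstrapping through shared states is what the post calls merging paths.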

Blogpost: https://distill.pub/2019/paths-perspective-on-value-learning/
#distill #research #rl #qlearning
673 views · 12:47

Oct 1, 2019
TensorFlow 2.0.0 is out

TensorFlow 2.0 focuses on simplicity and ease of use, featuring updates like:

- Easy model building with Keras and eager execution;
- Robust model deployment in production on any platform;
- Powerful experimentation for research;
- API simplification by reducing duplication and removing deprecated endpoints.

TF2.0 Guide: https://www.tensorflow.org/guide/effective_tf2
Installation: https://www.tensorflow.org/install

BlogPost: https://telegra.ph/TensorFlow-20-is-now-available-10-01
#google #tf #tensorflow #tf2
765 views · 06:51 (edited)

Sep 12, 2019
CTRL - A Conditional Transformer Language Model

CTRL is a 1.6 billion-parameter language model with powerful and controllable artificial text generation that can predict which subset of the training data most influenced a generated text sequence.

Paper: https://einstein.ai/presentations/ctrl.pdf
Code: https://github.com/salesforce/ctrl

BlogPost: http://bit.ly/2mesY47
#cntrl #nlp #ai #salesforce #transformer
831 views · 07:28

Sep 6, 2019
STEGASURAS - Neural Steganography with GPT-2

A service based on the EMNLP paper "Neural Linguistic Steganography", hiding secret messages in natural language via arithmetic coding and GPT-2. Using arithmetic coding in reverse enables extremely efficient steganography, and when combined with modern language models like GPT-2 it allows for convincing cover text generations that encode the hidden message.
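The underlying trick — the cover text's word choices *are* the message — can be shown with a toy. Here the "language model" at each slot is a hard-coded pair of equally plausible words and one secret bit selects which word to emit; the real system replaces this with arithmetic coding over GPT-2's next-token probabilities, so this is only a sketch of the concept, not the paper's method:

```python
# Toy linguistic steganography: each sentence slot offers two plausible
# words, and one hidden bit picks which word appears in the cover text.

SLOTS = [
    ["The", "A"],
    ["quick", "swift"],
    ["fox", "hare"],
    ["jumped", "leapt"],
]

def encode(bits):
    """Turn a bit sequence into an innocuous-looking sentence."""
    assert len(bits) == len(SLOTS)
    return " ".join(slot[b] for slot, b in zip(SLOTS, bits))

def decode(text):
    """Recover the bits from the word chosen at each slot."""
    return [slot.index(word) for slot, word in zip(SLOTS, text.split())]

secret = [1, 0, 1, 1]
cover = encode(secret)
print(cover)                    # "A quick hare leapt"
assert decode(cover) == secret  # the receiver recovers the bits exactly
```

Arithmetic coding generalizes this beyond one bit per fixed slot: likelier tokens consume fewer message bits, which is what makes the GPT-2 version both fluent and efficient.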

Paper: https://arxiv.org/abs/1909.01496
Code: https://github.com/harvardnlp/NeuralSteganography

Demo: https://steganography.live
#gpt #gpt2 #ai #nlp #steganography
863 views · 11:29 (edited)

Sep 4, 2019
Neural Structured Learning in TensorFlow

TensorFlow introduces Neural Structured Learning (NSL), a new learning paradigm that trains neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit, as represented by a graph, or implicit, as induced by adversarial perturbation.

NSL Overview: http://bit.ly/2lYh3au
GitHub: http://bit.ly/2lDaLwx

BlogPost: https://telegra.ph/Neural-Structured-Learning-in-TensorFlow-09-04
#nsl #tf #tensorflow #ai
781 views · 13:22

Sep 3, 2019
AI Cheatsheets

Cheatsheets for TensorFlow, PyTorch, Keras and other popular libraries, in presentation and PDF formats.

The developers are also working on an interactive shell/Python console for writing and executing machine learning/deep learning code.

Git: https://github.com/kailashahirwar/cheatsheets-ai
Site: http://www.aicheatsheets.com/
#keras #pytorch #tf #cheatsheet
788 views · 14:24