From faster machine start-up to increased visibility of your workflows, Spell continues to streamline your Machine Learning pipeline.
Spell automatically syncs uncommitted changes and uses those changes as a git patch within your run. We provide a link so you can download the changes from your web console. Iterate and run your code faster while still getting all the benefits of reproducibility with Spell. Read more in our docs.
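Conceptually, the synced patch is just a unified diff between your last commit and your working tree (the same format `git diff` produces). As a rough illustration of the format, not of Spell's implementation, Python's standard-library `difflib` can generate one; the file name and contents below are made up:

```python
import difflib

# Pretend these are the committed and working-tree versions of a file.
committed = ["learning_rate = 0.01\n", "epochs = 10\n"]
uncommitted = ["learning_rate = 0.001\n", "epochs = 10\n", "batch_size = 64\n"]

# A git patch is a unified diff; difflib emits the same +/- hunk format.
patch = "".join(difflib.unified_diff(
    committed, uncommitted,
    fromfile="a/train_config.py", tofile="b/train_config.py",
))
print(patch)
```

Downloading the patch from the web console and running `git apply` on it reproduces exactly the uncommitted state the run used.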
Perform actions on multiple runs at once in the web console. Stop, kill, archive, and add labels with bulk actions.
Easily filter by label by clicking on a label or using the filter menu dropdown. …
We’ve made a number of updates to Jupyter workspaces that make it easier for you to do your work without leaving your Jupyter workspace environment.
Pins make it easy to find your most important runs by bringing pinned runs to the top of your runs list. If you’re part of a team, the runs you pin will only be visible to you.
You can also organize runs with labels, so they’re easy to spot. Labels are shared throughout an organization, so you and your teammates can group related runs under shared labels.
We’ve added filters to the runs list to help you sift through all your experiments. You can narrow down your runs by duration, repository, date created, status, and more!
We’ve heard from a number of you that you love the hardware graphs we provide and wish we had more. We now have disk usage graphs for run containers, so you’ll be able to see how much disk your code is using. …
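For a sense of what the disk usage graphs measure, here is a minimal sketch of sampling disk usage from inside a container with Python's standard-library `shutil` (illustrative only; this is not how Spell collects its metrics):

```python
import shutil

def disk_usage_percent(path="/"):
    """Return the percentage of the filesystem backing `path` that is in use."""
    usage = shutil.disk_usage(path)  # (total, used, free) in bytes
    return 100.0 * usage.used / usage.total

pct = disk_usage_percent("/")
print(f"disk used: {pct:.1f}%")
```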
When we launched AWS Dedicated Clusters, we already had our sights set on GCP next. This past month, we officially launched support for running Spell in your own GCP account. This means you can use Spell to manage your own dedicated GCP cluster, with support for the following features:
…work in your own cluster (and are easier to navigate than GCP’s default file system).
Last month, we announced our GitHub app. Now, we’ve made it easier than ever to select a private repo to start your run from. Use the web selector at web.spell.run/createrun, or the
--github-url flag if you’re launching your run from the CLI. …
The increasing availability of powerful compute resources is one of the main drivers of the rise of machine learning applications. But the cost of computation is also one of the main reasons people hesitate to run too many experiments. Building machine learning models requires a lot of trial and error, and it can take time and money. This is where Spot Instances can help.
Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud, at discounts of up to 90% compared to On-Demand instance prices. Because they use spare EC2 capacity, Spot Instances don’t have the guaranteed availability of On-Demand instances and can be terminated with only a two-minute warning. …
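AWS delivers that two-minute warning through the instance metadata service: `http://169.254.169.254/latest/meta-data/spot/instance-action` returns 404 until an interruption is scheduled, then a small JSON document. A minimal sketch of the decision logic, with the HTTP call factored out so the example runs anywhere (the simulated responses below are made up):

```python
import json

def should_checkpoint(status_code, body):
    """Return True if a spot instance-action metadata response
    signals a pending interruption (stop or terminate)."""
    if status_code != 200:
        return False  # 404 means no interruption is scheduled
    action = json.loads(body)
    return action.get("action") in ("stop", "terminate")

# Simulated metadata responses:
print(should_checkpoint(404, ""))  # no notice yet -> False
print(should_checkpoint(200, '{"action": "terminate", "time": "2019-06-01T12:00:00Z"}'))  # -> True
```

Polling this endpoint every few seconds and checkpointing model state when it fires is the standard way to make training runs robust to spot terminations.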
Our new features in June make experiments much cheaper, save models and run outputs 50% faster, and put more of your work at your fingertips on the web.
Read on for more!
Save 80% or more on your cloud hardware costs with spot instances. Users on our Teams plan can now run regular runs, Jupyter workspaces, and distributed runs on spot instances. Simply select the “spot” option when creating a new machine type to save on cost.
by Kathryn Lawrence
TL;DR (or if you don’t care how it’s built, and just want to read some articles about the future written by AI): Yes, the magazine can write itself, but it still needs a human publisher. Check it out at montag.xyz
The source material for this Spell-enabled project is MONTAG Magazine, an online and print magazine of stories about how technology will change the way we live. MONTAG started in 2017 as a brainchild of Grover, a Berlin-based startup offering consumer electronics rentals on a flexible monthly basis. …
At Spell, we’re always looking for ways to make your machine learning workflow faster and easier. This month, we’re announcing one big new feature for Teams as well as a new pricing structure that is simpler and more transparent.
Interested in trying our Teams features? Email us at email@example.com.
The distributed run feature is particularly useful for doing distributed deep learning. Distributed training runs your code across multiple machines in parallel to get you results even faster.
When you write code for distributed training using the Horovod framework, which works with TensorFlow, Keras, PyTorch, and MXNet, we can easily send your code to run across however many machines you want. …
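At its core, Horovod-style data parallelism has each worker compute gradients on its own data shard, then allreduce-averages those gradients so every worker applies the same update. A toy pure-Python sketch of that averaging step (not actual Horovod code; the gradient values are made up):

```python
# Each "worker" computes a gradient on its own data shard; Horovod's
# ring-allreduce then averages them element-wise so all workers stay in sync.
def allreduce_average(worker_grads):
    n = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(dim)]

grads = [
    [0.2, -0.4],  # gradient from worker 0's shard
    [0.4, -0.2],  # gradient from worker 1's shard
]
avg = allreduce_average(grads)
print(avg)  # roughly [0.3, -0.3]
```

Because each worker sees a different slice of the data per step, averaging gradients across N machines is statistically similar to training with an N-times-larger batch, which is where the speedup comes from.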
It’s April and spring has come to New York City and Spell. We have a bunch of big new features we’re launching this month. Head to spell.run to test them out and tell us what you think!
Since we announced Dedicated Clusters in January, we’ve been onboarding customers and getting them started with our cluster management tool. Based on their feedback, we’re rolling out new features and UI updates that make it easier to manage your own dedicated AWS cluster using Spell.
With this month’s release, we now support:
In addition to our existing grid and random hyperparameter strategies, Spell is excited to announce support for Bayesian search. One of our awesome winter interns, Nikhil Bhatia, built a workflow that performed Bayesian searches. Based on that work, he began building native Spell support, which we are now launching to all users.
Hyperparameters are any model parameters that are not optimized in the learning process, such as the learning rate or the number of nodes in a specific layer of a neural network. They are notoriously hard to tune, often requiring a large amount of expensive trial and error. Two common methods involve sampling the parameter space: a ‘grid’ search (also known as a sweep), where you train the model with every combination of preselected values for each parameter, or a ‘random’ search, where you sample each parameter from a range of potential values. …
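The grid and random strategies can be sketched with the Python standard library; the parameter names and ranges below are hypothetical examples, not Spell defaults:

```python
import itertools
import random

# A small hypothetical search space.
space = {"learning_rate": [0.1, 0.01, 0.001], "num_nodes": [32, 64, 128]}

# Grid search: train once per combination of the preselected values
# (3 x 3 = 9 trials here).
grid_trials = [dict(zip(space, values))
               for values in itertools.product(*space.values())]

# Random search: sample each parameter independently from its range.
def random_trial(rng):
    return {
        "learning_rate": 10 ** rng.uniform(-3, -1),  # log-uniform over [1e-3, 1e-1]
        "num_nodes": rng.choice([32, 64, 128]),
    }

rng = random.Random(0)
random_trials = [random_trial(rng) for _ in range(9)]
print(len(grid_trials))  # 9 combinations
```

Bayesian search improves on both by fitting a surrogate model to the results of earlier trials and using it to pick the next point to evaluate, so it typically needs far fewer trials to find a good configuration.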