All Posts

Process Hundreds of GB of Data in the Cloud with Polars

Local machines can struggle to process large datasets due to memory and network limitations. Coiled Functions provide a cost-effective, cloud-based way to handle such datasets, overcoming the constraints of local hardware for complex data processing tasks. Pairing them with a library like Polars adds optimized query execution on top, so the data gets processed faster and more efficiently.

Code snippet of using the coiled.function decorator to run a query with Polars on a large VM in the cloud

Read more ...

Processing Terabyte-Scale NASA Cloud Datasets with Coiled

We show how to run existing NASA data workflows on the cloud, in parallel, with minimal code changes using Coiled. We also discuss cost optimization.

Comparing cost and duration between running the same workflow locally on a laptop, running on AWS, and running with cost optimizations on AWS.

Read more ...

How to Run Your Jupyter Notebook on a GPU in the Cloud

You can often significantly accelerate the time it takes to train your neural network by using advanced hardware, like GPUs. In this example, we’ll go through how to train a PyTorch neural network on a GPU in the cloud using Coiled notebooks.

Snippet of using `coiled notebook start --vm-type g5.xlarge --region us-west-2` to show how to start a Jupyter notebook on a GPU.

Read more ...

Ten Cents Per Terabyte

The optimal cost of cloud computing

Back-of-the-envelope calculation for a simple workload bound primarily by network bandwidth. Calculation is 1 TB / (60 MB/s) / 3600 s/hr * $0.02 / hr ≈ $0.10
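The arithmetic above can be checked directly. The 60 MB/s of usable bandwidth and the $0.02/hr instance price are the post's assumptions, not fixed facts:

```python
# Back-of-the-envelope check: cost to move 1 TB through a single small VM,
# assuming ~60 MB/s of usable network bandwidth and a $0.02/hr instance.
terabyte_mb = 1_000_000        # 1 TB in MB (decimal units)
bandwidth_mb_s = 60            # assumed usable bandwidth, MB/s
price_per_hour = 0.02          # assumed instance price, $/hr

hours = terabyte_mb / bandwidth_mb_s / 3600   # ~4.6 hours to stream 1 TB
cost = hours * price_per_hour                 # ~$0.09, i.e. roughly ten cents
```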

Read more ...

TPC-H Benchmarks for Query Optimization with Dask Expressions

Dask-expr is an ongoing effort to add a logical query optimization layer to Dask DataFrames. We now have the first benchmark results to share, comparing it against the current DataFrame implementation.

Read more ...

Coiled observability wins: Chunksize

Distributed computing is hard; distributed debugging is even harder. Dask tries to simplify this process as much as possible. Coiled collects additional observability data from your Dask clusters and processes it to help users better understand their workflows.


Read more ...

Parallel Serverless Functions at Scale

The cloud offers amazing scale, but it can be difficult for Python data developers to use. This post walks through how to use Coiled Functions to run your existing code in parallel on the cloud with minimal code changes.

Comparing code runtime between a laptop, single cloud VM, and multiple cloud VMs in parallel
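In spirit, the parallel pattern described above is a cloud-backed version of Python's executor map. A purely local analogue follows, with `concurrent.futures` standing in for Coiled Functions; this is an analogy, not the Coiled API, and `process` and the file names are placeholders:

```python
# Local analogy for running the same function over many inputs in parallel.
# With Coiled Functions, each call would run on its own cloud VM instead of
# a local thread, with minimal changes to the code.
from concurrent.futures import ThreadPoolExecutor

def process(filename):
    # placeholder for the real per-file work
    return len(filename)

files = ["2019.parquet", "2020.parquet", "2021.parquet"]
with ThreadPoolExecutor() as pool:
    sizes = list(pool.map(process, files))
```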

Read more ...

Processing a 250 TB dataset with Coiled, Dask, and Xarray

We processed 250TB of geospatial cloud data in twenty minutes on the cloud with Xarray, Dask, and Coiled. We do this to demonstrate scale and to think about costs.

County-level heat map of the continental US showing mean depth to soil saturation (in meters) in 2020.

Read more ...

Reduce training time for CPU-intensive models with scikit-learn and Coiled Functions

You can use Coiled Run and Coiled Functions for easily running scripts and functions on a VM in the cloud.

Code snippet adding coiled.function decorator to scikit-learn model training.

Read more ...

Fine Performance Metrics and Spans

While it’s trivial to measure the end-to-end runtime of a Dask workload, the next logical step - breaking down this time to understand if it could be faster - has historically been a much more arduous task that required a lot of intuition and legwork, for novice and expert users alike. We wanted to change that.

Populated Fine Performance Metrics dashboard

Read more ...

Data-proximate Computing with Coiled Functions

Coiled Functions make it easy to improve performance and reduce costs by moving your computations next to your cloud data.


Read more ...

Dask, Dagster, and Coiled for Production Analysis at OnlineApp

We show a simple integration between Dagster and Dask+Coiled. We discuss how this made a common problem, processing a large set of files every month, really easy.

Conceptual diagram showing how to use Dagster with Coiled and Dask.

Read more ...

Process Hundreds of GB of Data with DuckDB in the Cloud

DuckDB is a great tool for running efficient queries on large datasets. When you want cloud data proximity or need more RAM, Coiled makes it easy to run your Python function in the cloud. In this post we’ll use Coiled Functions to process the 150 GB Uber-Lyft dataset on a single machine with DuckDB.

Code snippet of using the coiled.function decorator to run a query with DuckDB on a large VM in the cloud.

Read more ...

High Level Query Optimization in Dask

Dask DataFrame doesn’t currently optimize your code for you (like Spark or a SQL database would). This means that users waste a lot of computation. Let’s look at a common example which looks ok at first glance, but is actually pretty inefficient.

Read more ...

Easy Heavyweight Serverless Functions

What is the easiest way to run Python code in the cloud, especially for compute jobs?

Read more ...

How to Train a Neural Network on a GPU in the Cloud with Coiled Functions

We recently pushed out two new, experimental features: coiled run and coiled functions, which builds on coiled run. We are excited about both of them because they:

Read more ...

Dask performance benchmarking put to the test: Fixing a pandas bottleneck

Getting notified of a significant performance regression the day before release sucks, but quickly identifying and resolving it feels great!

Read more ...

Coiled notebooks

We recently pushed out a new, experimental notebooks feature for easily launching Jupyter servers in the cloud from your local machine. We’re excited about Coiled notebooks because they:

Read more ...

Utilizing PyArrow to improve pandas and Dask workflows

Get the most out of PyArrow support in pandas and Dask right now

Read more ...

Distributed printing

Dask makes it easy to print whether you’re running code locally on your laptop, or remotely on a cluster in the cloud.


Read more ...

Observability for Distributed Computing with Dask

Debugging is hard. Distributed debugging is hell.

When dealing with unexpected issues in a distributed system, you need to understand what happened and why, how interactions between individual pieces contributed to the problem, and how to avoid it in the future. In other words, you need observability. This article explains what observability is, how Dask implements it, what pain points remain, and how Coiled helps you overcome them.

The Coiled metrics dashboard provides observability into a Dask cluster and its workloads.

Read more ...

GIL monitoring in Dask

New in version 2023.4.1: support for GIL contention monitoring.

Dashboard of Event Loop and GIL contention

Read more ...

Performance testing at Coiled

At Coiled we develop Dask and automatically deploy it to large clusters of cloud workers (sometimes 1000+ EC2 instances at once!). In order to avoid surprises when we publish a new release, Dask needs to be covered by a comprehensive battery of tests — both for functionality and performance.

Nightly tests report

Read more ...

How well does Dask run on Graviton?

ARM-based processors are known for matching the performance of x86-based instance types at a lower cost, since they consume far less energy for the same performance. It’s not surprising, then, that some companies, like Honeycomb, are switching their entire infrastructure to ARM.

bar chart of AWS cost vs. processor type

Read more ...

Upstream testing in Dask

Dask has deep integrations with other libraries in the PyData ecosystem like NumPy, pandas, Zarr, PyArrow, and more. Part of providing a good experience for Dask users is making sure that Dask continues to work well with this community of libraries as they push out new releases. This post walks through how Dask maintainers proactively ensure Dask continuously works with its surrounding ecosystem.

Read more ...

Burstable vs non-burstable AWS instance types for data engineering workloads

There are many instance types to choose from on AWS. In this post, we’ll look at one choice you can make—burstable vs non-burstable instances—and show how the “cheaper” burstable option can end up being more expensive for data engineering workloads.
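To make the "cheaper can be more expensive" point concrete, here is an illustrative calculation. Every number in it (the prices, the 20% baseline, the $0.05 surplus-credit charge) is an assumption for the sketch, not a quote from current AWS pricing:

```python
# Illustrative comparison of a burstable vs non-burstable instance under a
# sustained 100% CPU data-engineering workload. All numbers are assumptions.
burstable_price = 0.0416   # $/hr, a t3.medium-class instance (assumed)
baseline = 0.20            # assumed baseline CPU share per vCPU
vcpus = 2
surplus_charge = 0.05      # assumed $ per vCPU-hour consumed above baseline

# Running flat-out, each vCPU spends (1 - baseline) of its time above baseline
surplus_vcpu_hours = vcpus * (1 - baseline)   # 1.6 surplus vCPU-hr per hour
burstable_effective = burstable_price + surplus_vcpu_hours * surplus_charge

non_burstable_price = 0.096   # $/hr, an m6i.large-class instance (assumed)
```

Under these assumptions the burstable instance costs about $0.12/hr effective, more than the non-burstable one, despite the lower sticker price.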


Read more ...

Shuffling large data at constant memory in Dask

With release 2023.2.1, dask.dataframe introduces a new shuffling method called P2P, making sorts, merges, and joins faster and using constant memory. Benchmarks show impressive improvements:

P2P shuffling uses constant memory while task-based shuffling scales linearly.

Read more ...

Just in time Python environments

Docker is a great tool for creating portable software environments, but we found it’s too slow for interactive exploration. We find that clusters depending on docker images often take 5+ minutes to launch. Ouch.


Read more ...

How many PEPs does it take to install a package?

A few months ago we released package sync, a feature that takes your Python environment and replicates it in the cloud with zero effort.

Read more ...

Scaling Hyperparameter Optimization With XGBoost, Optuna, and Dask

XGBoost is one of the most well-known libraries among data scientists, having become one of the top choices among Kaggle competitors. It is performant in a wide array of supervised machine learning problems, implements scalable training through the rabit library, and integrates with many big data processing tools, including Dask.


Read more ...

Handling Unexpected AWS IAM Changes

The cloud is tricky! You might think the rules that determine which IAM permissions are required for which actions will continue to apply in the same way. You might think they’d apply the same way to different AWS accounts. Or that if these things aren’t true, at least AWS will let you know. (I did.) You’d be wrong!

Read more ...

AWS Cost Explorer Tips and Tricks

Spending time in AWS Cost Explorer is one of the best ways to understand what’s going on in your AWS account. It’s one of the few places in the AWS Console where you can get a global view of your account or even of your entire organization.


Read more ...

Automated Data Pipelines On Dask With Coiled & Prefect

Dask is widely used among data scientists and engineers proficient in Python for interacting with big data, doing statistical analysis, and developing machine learning models. Operationalizing this work has traditionally required lengthy code rewrites, which makes moving from development to production hard. This gap slows business progress and increases risk for data science and data engineering projects in an enterprise setting. The need to remove this bottleneck has prompted the emergence of production deployment solutions that allow code written by data scientists and engineers to be deployed directly to production, unlocking the power of continuous deployment for pure-Python data science and engineering.


Read more ...