Ruby on Rails Multithreading – Core Topics
What is Multithreading?
🧠 Detailed Explanation
Multithreading is the ability of a program to perform multiple tasks at the same time by using threads. Each thread acts like a mini-program that runs alongside others inside the main application.
In Ruby, a Thread is an object that can run code concurrently with other threads. This means while one thread is doing a slow task (like calling an API or reading a file), another thread can keep doing something else — making your program feel faster and more responsive.
Threads are created using the Thread.new method:
Thread.new do
puts "I'm running in a separate thread!"
end
Rails applications use a multithreaded web server called Puma. It allows multiple threads to handle incoming web requests at the same time — which improves performance, especially for APIs and background operations.
However, when using threads, you have to be careful with shared data. If two threads try to change the same thing at the same time, it can cause bugs. That’s why developers use tools like Mutex to make sure only one thread can access a resource at a time.
Overall, multithreading is a great way to make Ruby apps handle more work in less time — especially when doing many I/O-heavy tasks like calling APIs, processing jobs, or streaming data.
💡 Examples
1. Basic Thread Usage in Ruby
thread = Thread.new do
puts "Running in a separate thread!"
end
thread.join # Wait for the thread to finish
This code creates a new thread that prints a message; join makes the main program wait for the thread to finish.
2. Running Two Threads at the Same Time
t1 = Thread.new { puts "Thread 1 is working..." }
t2 = Thread.new { puts "Thread 2 is working..." }
t1.join
t2.join
Both threads execute independently, and join ensures the main program waits for them to complete.
3. Fetching Data from Two APIs in Parallel
result1 = nil
result2 = nil
t1 = Thread.new { result1 = call_api("/api/data1") }
t2 = Thread.new { result2 = call_api("/api/data2") }
t1.join
t2.join
puts "Data1: #{result1}, Data2: #{result2}"
Using threads helps reduce the total time spent waiting on external API responses.
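If you want a runnable version of this pattern, here is a minimal sketch using Ruby’s built-in Net::HTTP — the example.com URLs are placeholder endpoints standing in for whatever call_api would hit:
require "net/http"
# Placeholder URLs — swap in your real endpoints.
urls = ["https://example.com/api/data1", "https://example.com/api/data2"]
threads = urls.map do |url|
  Thread.new { Net::HTTP.get(URI(url)) } # each GET runs while the others wait
end
bodies = threads.map(&:value) # value joins the thread and returns its result
puts "Fetched #{bodies.map(&:bytesize).inspect} bytes"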
4. Using Threads in Rails Controller (only for lightweight tasks)
def notify
Thread.new do
NotificationMailer.welcome_email(current_user).deliver_now
end
render json: { message: "Notification is being sent." }
end
This sends an email without blocking the user’s request, but background jobs are better for long tasks.
5. Safe Access with Mutex
mutex = Mutex.new
counter = 0
threads = 5.times.map do
Thread.new do
mutex.synchronize do
counter += 1
end
end
end
threads.each(&:join)
puts counter
Mutex prevents multiple threads from changing counter at the same time, avoiding race conditions.
🔁 Alternative Concepts
- ✅ Background Jobs (Sidekiq, DelayedJob)
- ✅ Async/Await (in JS)
- ✅ Fibers (lightweight Ruby concurrency)
❓ General Questions & Answers
Q1: What is a thread in Ruby?
A: A thread is a lightweight way to run code in parallel within a Ruby program. It allows you to perform multiple operations at once, such as downloading files while still responding to user input.
Q2: What is multithreading?
A: Multithreading is the technique of running multiple threads simultaneously. This helps you improve performance by allowing tasks like file reading, API calls, or heavy computation to run concurrently without blocking each other.
Q3: Why should I use threads in Ruby?
A: Threads are useful when you want to make your application faster by doing several tasks in parallel. For example, you can call two APIs at the same time or send multiple emails without waiting for each one to finish before starting the next.
Q4: Is using threads in Rails safe?
A: Yes, Rails supports multithreading, especially when using servers like Puma. But you need to manage shared data carefully using tools like Mutex to avoid race conditions.
Q5: What is the difference between a process and a thread?
A: A process is an independent program with its own memory space. A thread is a lightweight unit within a process that shares memory with other threads. Threads are faster and use fewer resources than processes.
🛠️ Technical Questions & Answers
Q1: How does Ruby manage threads internally?
A: Ruby uses native system threads (since Ruby 1.9+), but due to the Global Interpreter Lock (GIL), only one thread can run Ruby code at a time. However, Ruby can switch between threads, making it efficient for I/O-bound tasks.
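You can see this with a quick benchmark — a minimal sketch using only the standard library, where sleep stands in for I/O that releases the lock:
require "benchmark"
# Four 1-second "I/O waits" run in separate threads. Sleeping threads
# release the GIL, so the total is roughly 1 second, not 4.
elapsed = Benchmark.realtime do
  threads = 4.times.map { Thread.new { sleep 1 } }
  threads.each(&:join)
end
puts format("Elapsed: %.2fs", elapsed) # => ~1.00s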
Q2: Can multiple threads modify the same variable?
A: Yes, but it’s not safe without proper synchronization. You should use a Mutex to avoid race conditions.
mutex = Mutex.new
counter = 0
threads = 10.times.map do
Thread.new do
mutex.synchronize { counter += 1 }
end
end
threads.each(&:join)
puts counter
✅ Output: Always safely increments to 10.
Q3: How does Puma utilize threads in a Rails API?
A: Puma is a multi-threaded web server. In config/puma.rb, you can define how many threads to run per worker:
# config/puma.rb
threads 5, 5
workers 2
This will run 2 workers, each with 5 threads — meaning up to 10 concurrent requests handled simultaneously.
Q4: How do you safely exit a thread in Ruby?
A: You can call Thread#exit to stop a thread early:
t = Thread.new do
loop do
puts "Working..."
sleep 1
end
end
sleep 3
t.exit
puts "Thread stopped."
Q5: What’s the difference between Thread#join and Thread#value?
A: join waits for a thread to finish. value also waits, then returns the result the thread’s block evaluated to.
t = Thread.new { 5 * 5 }
puts t.value # => 25
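One related detail: both join and value re-raise an exception that killed the thread, so value is a convenient way to wait and collect a result in one call. A small sketch:
t = Thread.new { 10 + 10 }
puts t.value # => 20 (value joins first, then returns the block's result)
failing = Thread.new do
  Thread.current.report_on_exception = false # silence the default stderr report
  raise "boom"
end
begin
  failing.value # the thread's exception is re-raised here
rescue => e
  puts "Thread failed with: #{e.message}"
end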
✅ Best Practices with Examples
1. Use threads for I/O-bound tasks, not CPU-heavy work
Threads are great when your app spends time waiting (e.g. HTTP requests, reading files), not when it’s doing heavy math.
# ✅ Good for threads
Thread.new { call_external_api }
# ❌ Bad for threads
Thread.new { complex_matrix_calculation }
2. Always use join or value to wait for threads
This ensures your main program waits for threads to finish before continuing.
t = Thread.new { sleep 2; puts "Done!" }
t.join
puts "Finished safely"
3. Use Mutex to protect shared resources
If two threads update the same variable, you must synchronize them to avoid race conditions.
mutex = Mutex.new
counter = 0
threads = 3.times.map do
Thread.new do
mutex.synchronize { counter += 1 }
end
end
threads.each(&:join)
4. Avoid long-running tasks inside controllers
Use background jobs (like Sidekiq) for anything that takes time. Threads should only be used for very quick operations.
# ❌ Bad: Long job in controller
Thread.new { UserMailer.bulk_report(user).deliver_now }
# ✅ Better: Use ActiveJob or Sidekiq
ReportJob.perform_later(user.id)
5. Monitor and log thread activity in production
Always track thread usage to avoid leaks or stuck threads. Add logging inside threads for traceability.
Thread.new do
Rails.logger.info \"Thread started at #{Time.now}\"
# work
Rails.logger.info \"Thread ended at #{Time.now}\"
end
🌍 Real-world Scenario
Imagine you’re building a Rails application that fetches pricing data from 3 different external APIs to show the best deal to the user. If you call each API one after the other, it might take 3–4 seconds total. But with multithreading, you can call all 3 APIs at the same time — reducing the wait time to about 1 second.
result1 = nil
result2 = nil
result3 = nil
t1 = Thread.new { result1 = fetch_price_from_site1 }
t2 = Thread.new { result2 = fetch_price_from_site2 }
t3 = Thread.new { result3 = fetch_price_from_site3 }
[t1, t2, t3].each(&:join)
render json: {
site1: result1,
site2: result2,
site3: result3
}
✅ This makes your app feel much faster and more responsive to the user — without using any external job queues. It’s perfect for I/O-heavy operations where data needs to be fetched or updated in real time.
Why Does It Matter in a Web App?
🧠 Detailed Explanation
In a web app, your users expect the site to feel fast and responsive. If your app takes too long to respond, users get frustrated and may leave. That’s why it’s important to use tools and techniques that make your app run smoothly behind the scenes.
Things like multithreading, background jobs, and non-blocking code help your app do multiple things at once — without freezing the user experience.
For example, when someone places an order:
- ✅ Save the order to the database
- ✅ Send a confirmation email
- ✅ Notify the warehouse
Only the first step has to finish before responding — the email and warehouse notification can run in the background. This not only improves speed but also scales better — your app can handle more users without slowing down.
💡 Examples
1. Without Threads or Background Jobs
def create_order
order = Order.create!(order_params)
OrderMailer.confirmation(order).deliver_now
Warehouse.notify(order.id)
render json: { message: "Order placed!" }
end
Problem: The user has to wait for the email and notification to finish before they get a response.
2. With Background Job (Faster UX)
def create_order
order = Order.create!(order_params)
OrderJob.perform_later(order.id)
render json: { message: "Order placed!" }
end
# In app/jobs/order_job.rb
class OrderJob < ApplicationJob
def perform(order_id)
order = Order.find(order_id)
OrderMailer.confirmation(order).deliver_now
Warehouse.notify(order.id)
end
end
Benefit: The user sees the success message immediately. The rest happens in the background.
3. Thread-Based API Call Handling
def fetch_data
result1, result2 = nil, nil
t1 = Thread.new { result1 = call_api1 }
t2 = Thread.new { result2 = call_api2 }
t1.join
t2.join
render json: { api1: result1, api2: result2 }
end
Benefit: Both API calls happen at the same time, reducing total wait time for the response.
🔁 Alternative Concepts
- ✅ Background Jobs: Use tools like Sidekiq, Resque, or DelayedJob to process heavy tasks later without slowing down the user.
- ✅ Asynchronous Processing: Instead of waiting for something to finish, let it run in the background and notify the user when it's done.
- ✅ Turbo Streams (Rails Hotwire): Send partial page updates from the server to the browser in real-time without needing full page reloads.
- ✅ JavaScript Fetch + Spinner: Use frontend fetch calls to request data while showing a loading spinner, so users feel something is happening.
- ✅ Caching: Use tools like Redis or Memcached to save frequent results and reduce unnecessary work.
These methods help you avoid long response times and improve your web app’s overall speed and user experience — just like threads, but sometimes more maintainable.
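For the caching idea in the list above, here is a minimal sketch with Rails.cache — PriceFetcher is a hypothetical slow service call, and the store behind Rails.cache (memory, Redis, Memcached) depends on your configuration:
# Runs the block only on a cache miss; later calls reuse the stored value.
price = Rails.cache.fetch("best_price/#{product.id}", expires_in: 5.minutes) do
  PriceFetcher.lookup(product) # hypothetical slow external call
end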
❓ General Questions & Answers
Q1: Why should I care about performance in my web app?
A: Because users expect web apps to be fast. If your app takes too long to respond, they may leave. Performance impacts user experience, SEO, and overall success.
Q2: What happens if I don't use threads or background jobs?
A: Your app will process everything in a single thread, one after another. This means slow tasks like email sending or API calls can block the entire response, making the app feel slow.
Q3: Is using threads safe in a web app?
A: Yes — if used correctly. Rails apps with servers like Puma support multithreading, but you must handle shared data carefully using mutexes or thread-safe patterns.
Q4: When should I use background jobs instead of threads?
A: If the task is long-running or needs retries (like sending emails or generating reports), use background jobs. Threads are better for short, quick tasks within the request cycle.
Q5: Will my users know if I use threads or background jobs?
A: Not directly. But they will notice that your app is faster, more responsive, and feels better to use. That’s the real benefit — a smoother experience.
🛠️ Technical Questions & Answers
Q1: What actually slows down a web app?
A: Slow I/O operations — like database queries, sending emails, or calling external APIs. These tasks block the main thread unless moved to background jobs or handled asynchronously.
Q2: How do background jobs improve performance?
A: They offload time-consuming tasks to run later, outside of the request-response cycle. This way, the user gets a quick response, while the job continues in the background.
# Controller
UserSignupJob.perform_later(user.id)
# Job
class UserSignupJob < ApplicationJob
def perform(user_id)
user = User.find(user_id)
UserMailer.welcome_email(user).deliver_now
end
end
Q3: Can I use threads in Rails safely?
A: Yes. Rails with Puma supports multithreading. But if you're modifying shared data (like variables, files, or memory), use a Mutex to prevent race conditions.
mutex = Mutex.new
count = 0
5.times.map do
Thread.new do
mutex.synchronize { count += 1 }
end
end.each(&:join)
Q4: How do I decide between threads and background jobs?
A: Use threads for lightweight parallelism inside a single request. Use background jobs (like Sidekiq or ActiveJob) for long tasks, retries, or system-wide async jobs.
Q5: What Rails server supports multithreading?
A: Puma is the default server in Rails and supports concurrent threads per worker. You can configure it in config/puma.rb:
# config/puma.rb
threads 4, 8
workers 2
This gives you 2 workers, each handling 4–8 threads — great for handling many simultaneous web requests.
✅ Best Practices
1. Always move slow tasks to background jobs
Things like sending emails, processing files, or making third-party API calls should never block your user request.
# ❌ Bad: Sends email during request
UserMailer.welcome_email(user).deliver_now
# ✅ Good: Use background job
UserMailer.welcome_email(user).deliver_later
2. Use threads wisely for lightweight parallelism
Threads are useful for fast operations you want to run in parallel, but avoid them for anything too heavy or long.
t1 = Thread.new { fetch_profile }
t2 = Thread.new { fetch_settings }
[t1, t2].each(&:join)
3. Don’t block the main thread unnecessarily
If you're reading files, calling services, or running slow loops, offload them or make them async when possible.
4. Use Sidekiq or ActiveJob for job retries & reliability
Background jobs with retry capabilities (like in Sidekiq) make your app fault-tolerant and scalable.
5. Use logging to debug slow or long-running processes
Track how long tasks take using logs so you can spot bottlenecks easily.
Rails.logger.info \"Started API call...\"
response = fetch_api
Rails.logger.info \"Finished in #{Time.now - start_time}s\"
🌍 Real-world Scenario
Imagine you’re building an e-commerce site. When a user places an order, the app must:
- ✅ Save the order in the database
- ✅ Send a confirmation email
- ✅ Notify the warehouse system
- ✅ Update inventory
If you try to do all of this inside the request-response cycle, the user might wait 5–10 seconds before the page loads — which feels slow and frustrating.
Instead, you can use:
- Background jobs to send the email and notify the warehouse
- Threads to call two APIs in parallel
- Caching to reduce how often you query inventory counts
This approach makes your app feel faster, more modern, and scalable under heavy traffic. That’s why smart architecture choices really matter in a web app.
Rails Threading Model Overview
🧠 Detailed Explanation
Ruby on Rails can handle multiple requests at the same time using threads — but only when it runs on a multithreaded server like Puma. This means your app can serve many users without creating a separate process for each one.
A thread is like a mini-worker inside your app that can run code independently. For example, while one thread is loading data from the database, another can serve a different user’s request.
Rails is mostly thread-safe, but you must make sure your own code is thread-safe too. For example:
- ❌ Don’t use global/shared variables without protection
- ✅ Use Mutex if multiple threads touch the same data
Most production Rails apps use Puma, which runs multiple threads per worker. This allows it to handle many requests quickly and efficiently — with less memory compared to spinning up multiple processes.
In short: Rails threading helps your app scale better and feel faster — but only if you write safe, clean code that works well with concurrency.
💡 Examples
1. Rails with Puma - Configuring Threads
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
Why: This sets both the minimum and maximum thread count for each Puma worker. More threads mean more concurrent requests handled.
2. Handling Multiple Requests in Parallel
When your app receives multiple HTTP requests, Puma can handle them using threads without blocking:
# Request 1 (User A)
GET /products
# Request 2 (User B)
POST /checkout
Result: Both requests are processed at the same time using different threads.
3. Unsafe Thread Access (Don't Do This)
# This is bad - global state across threads
$cart_total = 0
Thread.new { $cart_total += 50 }
Thread.new { $cart_total += 100 }
Problem: Both threads can change the variable at the same time, causing incorrect results.
4. Safe Access Using Mutex
mutex = Mutex.new
total = 0
threads = 2.times.map do
Thread.new do
mutex.synchronize { total += 50 }
end
end
threads.each(&:join)
puts total # Always 100
Why: Mutex ensures only one thread updates the variable at a time.
🔁 Alternative Concepts
- 🧵 Process-based concurrency (e.g., Unicorn): Instead of threads, some servers (like Unicorn) use multiple processes to handle requests. Each process is isolated and safer for non-thread-safe code but consumes more memory.
- 🔧 Background Jobs: For non-urgent or heavy tasks (e.g., sending emails, processing images), tools like Sidekiq or DelayedJob use separate threads or processes outside the main app flow.
- 🧠 Async Ruby (e.g., Ractors or Fibers): Ruby 3 introduced Ractors for true parallelism and Fibers for lightweight concurrency. These are alternatives for advanced concurrency models, but less common in Rails.
- 🌐 Horizontal Scaling (More Servers): Instead of adding threads to one server, deploy more servers behind a load balancer (e.g., AWS, Heroku Dynos) to scale out.
- 🧩 Serverless Architectures: Offload tasks to serverless platforms like AWS Lambda for specific endpoints or background functions. These isolate workloads and reduce long-running thread usage.
While Rails threading is powerful, combining it with background jobs, horizontal scaling, and proper task delegation creates a much more resilient architecture.
❓ General Questions & Answers
Q1: What is threading in Rails?
A: Threading allows your Rails app to handle multiple tasks at once. For example, it can respond to multiple users at the same time instead of processing one request after another.
Q2: Is Rails thread-safe?
A: Yes — mostly. The Rails framework is thread-safe in production mode, but your own code (especially shared variables or class-level data) must also be written in a thread-safe way.
Q3: Do I need to configure anything to use threads in Rails?
A: If you’re using the default Puma server, threading is already supported. You can configure the number of threads in config/puma.rb.
Q4: Will using threads make my app faster?
A: It can! Especially under load, threading helps your app handle more users without spinning up new processes. It reduces memory usage and increases efficiency.
Q5: What’s the difference between threads and background jobs?
A: Threads run inside the current app process during the request. Background jobs run outside the main request cycle using tools like Sidekiq. Jobs are better for long-running tasks like email or file processing.
🛠️ Technical Questions & Answers
Q1: How does Puma handle threading in Rails?
A: Puma spins up multiple threads inside each worker. Each thread can process one HTTP request at a time. This allows Puma to serve many concurrent requests with fewer resources.
# config/puma.rb
threads 5, 5 # min, max threads per worker
workers 2 # number of forked processes (optional)
Q2: How do I make code thread-safe in Rails?
A: Avoid global/shared variables. If multiple threads need to update a shared resource, wrap the update with a Mutex to prevent race conditions.
mutex = Mutex.new
value = 0
Thread.new { mutex.synchronize { value += 1 } }
Q3: Can I use database connections in threads?
A: Yes, but be cautious. Each thread must check out its own DB connection. Puma handles this, but your connection pool size must be high enough.
# config/database.yml
production:
pool: 10 # Match or exceed max threads
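If you spawn threads yourself (outside Puma’s request threads), check connections out explicitly so they are returned to the pool. A minimal sketch using ActiveRecord’s built-in helper — User stands in for one of your models:
threads = 3.times.map do
  Thread.new do
    # Checks a connection out of the pool and returns it when the block exits.
    ActiveRecord::Base.connection_pool.with_connection do
      User.count # any query
    end
  end
end
threads.each(&:join)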
Q4: What are some common threading issues in Rails apps?
A: Common issues include:
- Race conditions on shared data
- Exceeding the database pool size
- Using non-thread-safe gems
Q5: How do I debug thread-related bugs in production?
A: Use logging and monitoring tools like:
- Logging from inside threads: Rails.logger.info("Thread started...")
- Application Performance Monitoring tools (New Relic, Skylight)
- Tagging requests with IDs to see overlapping thread activity
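Rails can also tag every log line with the request ID for you — one line of standard configuration:
# config/application.rb
config.log_tags = [:request_id]
# Each log line is then prefixed with the request's UUID, e.g.:
# [a1b2c3d4-...] Processing by ProductsController#index as HTML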
✅ Best Practices
1. Match database pool size to thread count
Each thread needs a DB connection. If your thread count is 5, set your database pool size to at least 5 per worker.
# config/database.yml
production:
pool: 5
2. Avoid shared/global state across threads
Using class variables or globals can lead to race conditions.
# ❌ Bad
@@counter = 0
# ✅ Good
Thread.current[:counter] = 0
3. Use Mutex for thread-safe blocks
If you must share data between threads, protect it with a Mutex.
mutex = Mutex.new
count = 0
threads = 5.times.map do
Thread.new do
mutex.synchronize { count += 1 }
end
end
threads.each(&:join)
4. Don’t use sleep to manage concurrency
It’s unreliable and blocks the thread. Use background jobs or async mechanisms instead.
5. Use thread-safe gems only
Not all gems are thread-safe. Always check documentation or avoid them in threaded environments.
6. Test under concurrent load
Use tools like Apache Bench or JMeter to simulate concurrent traffic in staging before production deployment.
7. Monitor Puma performance
Use metrics/logs to ensure threads aren’t getting blocked or piling up under load.
🌍 Real-world Scenario
Imagine you're running a high-traffic online booking platform — users are searching, booking, and receiving confirmation emails at the same time.
Without multithreading, each request waits for the previous one to finish. If user A is booking a ticket, user B’s request to search for trains is delayed.
But with Puma and Rails threading:
- 🧵 User A’s booking request runs in Thread 1
- 🧵 User B’s search request runs in Thread 2
- 🧵 User C’s confirmation email dispatch runs in Thread 3
This not only improves performance and responsiveness but also reduces the number of servers required — saving infrastructure cost.
Bottom line: Rails threading helps apps scale without needing a separate process for every request, especially when combined with caching and background jobs.
Thread.new and Thread Lifecycle
🧠 Detailed Explanation
In Ruby, when you use Thread.new, you create a new thread — like starting a new worker that can run code at the same time as your main program.
This is useful when you want to do multiple things at once, like making two API calls or processing background tasks.
Threads in Ruby follow a simple lifecycle — they are born, they work, and then they finish. Here's how it goes:
- 🆕 New: Created using Thread.new.
- 🏃 Running: Starts executing code inside the thread immediately.
- 😴 Sleeping: If you use sleep or wait for a resource, the thread sleeps temporarily.
- ⛔ Blocked: If it's trying to access something that's locked (like a Mutex), it gets blocked.
- ✅ Dead: Once the thread completes its task or raises an error, it dies (finishes).
You can use join to wait for a thread to finish before continuing the rest of your program. You can also use status or alive? to check what the thread is doing.
Example in simple words: You and your friend are cooking together. You say, "You boil water, I’ll chop vegetables." That’s like creating a new thread — both tasks happen at the same time 🧠⚙️
💡 Examples
1. Creating a Simple Thread
thread = Thread.new do
puts "This is running in a new thread!"
end
thread.join # Wait for the thread to finish
What it does: This creates a new thread that prints a message. The main program waits until the thread finishes using join.
2. Multiple Threads at the Same Time
t1 = Thread.new { puts "👷 Thread 1 is working" }
t2 = Thread.new { puts "👷 Thread 2 is working" }
t1.join
t2.join
Why it matters: Threads can run code in parallel, helping your app do multiple things at once (like sending emails and saving data).
3. Sleep to Simulate Delay
Thread.new do
puts "Starting..."
sleep 2
puts "Done after 2 seconds"
end.join
Thread state: During sleep, the thread is not doing anything. It's "sleeping" and will "wake up" after the delay.
4. Check Thread Status
thread = Thread.new { sleep 1 }
puts thread.status # "sleep"
puts thread.alive? # true
thread.join
puts thread.status # false (dead)
Why it's useful: You can check what a thread is doing, or whether it's finished — helpful in debugging.
5. Thread Finishing Early
t = Thread.new do
raise "Oops!" # Error will kill this thread
end
t.join rescue puts "Thread crashed 😵"
What happens: The thread crashes, but your program handles it gracefully using rescue.
🔁 Alternative Concepts
- 🧵 Background Jobs (e.g., Sidekiq, ActiveJob): Instead of running logic inside Thread.new, you can move long-running tasks (like sending emails or processing files) to background jobs. These are more stable and scalable. Example: UserMailer.welcome_email(user).deliver_later
- 🧠 Fibers: Fibers are lighter than threads and allow you to pause/resume execution. Great for non-blocking I/O, but less common in everyday Rails apps.
- 🔒 Forked Processes (Unicorn, Passenger): Instead of threads, some servers use multiple processes. These are isolated from each other — safer, but heavier on memory.
- 📡 Evented I/O (Async gem): Instead of traditional threads, async Ruby uses event loops (like JavaScript) for concurrency. This is great for high-performance I/O-bound tasks, but requires different programming patterns.
- 🚫 Avoid Threading in Certain Cases: If your code depends heavily on shared state or complex global variables, threading may be risky. Instead, consider external job queues or services.
Threads are powerful for short-lived, concurrent operations inside your app, but when reliability, retries, and memory isolation are important, background jobs or external processes are a better fit.
❓ General Questions & Answers
Q1: What does Thread.new do in Ruby?
A: It creates a new thread of execution. This means the code inside Thread.new { ... } runs concurrently with the rest of your program.
Q2: Do threads run at the same time in Ruby?
A: Concurrently, yes — but not truly in parallel on standard Ruby (MRI). Since Ruby 1.9, threads are native OS threads, but the Global Interpreter Lock means only one thread executes Ruby code at a time. JRuby and TruffleRuby can run threads in true parallel.
Q3: When should I use Thread.new?
A: Use it when you want to do something in the background — like calling multiple APIs, logging, or small tasks that shouldn’t block the user.
Q4: Is it safe to use threads in Rails?
A: Yes, if you’re careful. Avoid sharing variables across threads without protection (use a Mutex), and don’t use it for big jobs — background workers are better.
Q5: Do I need to call join on a thread?
A: Not always. If you want to wait for a thread to finish before continuing, use join. Otherwise, your main program might finish before the thread is done.
🛠️ Technical Questions & Answers
Q1: What happens if I don’t call join on a thread?
A: The main program may finish before the thread completes, and the thread might never finish its work. Use join if the result of the thread matters.
Thread.new { sleep(1); puts "Thread done" }
puts "Main done"
# Output might skip "Thread done"
Q2: How can I check the status of a thread?
A: You can use status and alive?.
t = Thread.new { sleep(2) }
puts t.status # "sleep"
puts t.alive? # true
t.join
puts t.status # false
Q3: Can I safely update a shared variable in threads?
A: Only if you protect it using a Mutex to prevent race conditions.
mutex = Mutex.new
counter = 0
threads = 10.times.map do
Thread.new { mutex.synchronize { counter += 1 } }
end
threads.each(&:join)
puts counter # ✅ Always 10
Q4: What’s the difference between Thread.exit and raise?
A: Thread.exit stops the thread normally. raise throws an exception inside the thread — it can crash the thread or be rescued.
Thread.new do
raise "Oops!" # 🚫 Crashes the thread
end
Thread.new do
Thread.exit # ✅ Gracefully ends
end
Q5: Are threads in Ruby parallel on multi-core CPUs?
A: In MRI Ruby (default), no. Threads are concurrent but not parallel due to the Global Interpreter Lock (GIL). In JRuby or TruffleRuby, they can run in true parallel.
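A quick experiment makes this concrete. In this minimal sketch (exact numbers vary by machine), CPU-bound threads give roughly no speedup on MRI:
require "benchmark"
def cpu_work
  1_000_000.times { |i| Math.sqrt(i) }
end
serial = Benchmark.realtime { 4.times { cpu_work } }
threaded = Benchmark.realtime do
  4.times.map { Thread.new { cpu_work } }.each(&:join)
end
puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
# On MRI both are about the same; on JRuby the threaded run can be ~4x faster.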
✅ Best Practices
1. Always join or monitor threads if results matter
If you create a thread that does important work, make sure to call join or store the result. Otherwise, the thread may be lost when the main program ends.
t = Thread.new { do_something }
t.join # Wait for it to finish
2. Use Mutex to protect shared data
When multiple threads write to the same variable or resource, wrap the logic in mutex.synchronize to avoid race conditions.
mutex = Mutex.new
count = 0
Thread.new { mutex.synchronize { count += 1 } }
3. Don’t use threads for long-running tasks in Rails
Instead, use background job systems like Sidekiq, Resque, or DelayedJob for better control, retries, and memory management.
4. Avoid using global variables across threads
They may be accessed simultaneously, causing bugs. Instead, use local variables or thread-safe storage.
5. Handle exceptions inside threads
Threads can crash silently. Always use a begin...rescue block inside the thread to catch and log errors.
Thread.new do
begin
risky_code
rescue => e
puts "Error in thread: #{e.message}"
end
end
6. Keep threads lightweight and short
Threads are best used for quick, isolated tasks. Heavy logic should be offloaded to jobs or external services.
7. Clean up or monitor active threads if used in production
Unmanaged threads can leak memory or hang. Track them with logs or thread pools if you're using many.
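Ruby’s built-in Thread.list gives you a cheap way to watch for leaks — a rough sketch you might call from a periodic health check:
# Logs how many threads are alive; a steadily growing count suggests a leak.
sleeping = Thread.list.count { |t| t.status == "sleep" }
Rails.logger.info "Live threads: #{Thread.list.size} (#{sleeping} sleeping)"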
🌍 Real-world Scenario
Imagine you're building a Rails-based e-commerce site. When a user places an order, your app needs to:
- 💳 Process payment
- 📦 Generate shipping label
- 📧 Send confirmation email
While payment must be handled immediately (in the main thread), the shipping label and email can be done asynchronously. Using Thread.new, you could offload those tasks temporarily without blocking the user:
def complete_order
process_payment(current_user)
Thread.new { ShippingService.generate_label(order) }
Thread.new { OrderMailer.confirmation(order).deliver_now }
render json: { message: "Order placed!" }
end
⚠️ However, while this works for light workloads or quick demos, production systems should use background jobs like Sidekiq to handle these tasks more reliably.
Takeaway: Thread.new is great for quick async tasks, but for anything critical, heavy, or long-running — use dedicated job queues.
Thread Methods: join, sleep, kill, status
🧠 Detailed Explanation
Ruby gives you tools to manage threads after you create them. These four common methods help you control how threads behave and when they start or stop:
- join – "Wait for me": When you call join on a thread, Ruby pauses the main program until that thread finishes its work. Example: “I’ll wait until you finish downloading before I continue.”
- sleep – "Take a break": This pauses a thread for a set time. It’s like asking the thread to rest for a while before continuing. Example: sleep(2) means “wait for 2 seconds.”
- kill – "Stop now!": This forcefully stops a thread right away. It's like canceling a task in the middle. Be careful — it can leave things incomplete or cause issues.
- status – "What's it doing?": This tells you what state the thread is in: "run", "sleep", false (if finished), or nil (if it crashed). Useful for debugging or checking if a task is still running.
These methods give you more control and visibility when your app is doing multiple things at once — like downloading, emailing, or calculating in the background.
💡 Examples
1. Using join to wait for a thread
t = Thread.new do
sleep 1
puts "Thread finished!"
end
puts "Main waiting..."
t.join
puts "Main done!"
Output: The main program waits until the thread prints "Thread finished!" before continuing.
2. Using sleep to pause a thread
Thread.new do
puts "Starting..."
sleep 2
puts "Woke up after 2 seconds"
end.join
Tip: Sleep helps simulate waiting for something, like an API response or file download.
3. Using kill to stop a thread early
t = Thread.new do
loop do
puts "Still working..."
sleep 1
end
end
sleep 3
t.kill
puts "Thread was killed"
Warning: kill immediately stops the thread — use it carefully to avoid unfinished work.
4. Checking the status of a thread
t = Thread.new { sleep 2 }
puts "Status now: #{t.status}" # sleep
sleep 3
puts "Status later: #{t.status}" # false (finished)
Why it's useful: Use status to check if a thread is still running or has ended.
🔁 Alternative Concepts
- 🧱 Thread.join vs. Background Jobs (e.g., Sidekiq): Instead of waiting with join, Rails apps commonly use job queues like Sidekiq to handle work after a response is sent. This avoids blocking the user. Example: MyMailerJob.perform_later(user.id)
- ⏳ sleep vs. Scheduling Tools: Rather than using sleep to delay actions, you can use tools like the whenever gem or cron to schedule jobs at the right time without blocking a thread.
- 🛑 kill vs. Graceful Timeouts: Instead of forcefully killing threads, use timeouts (like Timeout.timeout) to gently stop long tasks if they exceed a limit.
require 'timeout'
Timeout.timeout(5) do
  # long-running task
end
- 🧪 status vs. Observability Tools: Tools like New Relic, Skylight, or Scout help monitor running threads and background jobs in production — without manually checking status.
- 🔁 Thread Pools (e.g., Concurrent Ruby): Instead of creating one-off threads, libraries like concurrent-ruby give you thread pools, futures, and promises — safer and more efficient for concurrent Ruby apps.
These alternatives provide safer and more scalable ways to manage background work in modern Ruby and Rails applications.
❓ General Questions & Answers
Q1: What does join do exactly?
A: join makes the main program wait for a thread to finish before moving on. It's like saying, “I’ll pause here until you're done.”
Q2: Is using sleep the same as waiting for a task to finish?
A: Not quite. sleep just pauses a thread for a set time — it doesn’t care what the thread is doing. join actually waits for the thread’s task to finish.
Q3: Should I use kill to stop a thread?
A: Only if you must. kill immediately stops a thread, which can cause problems if the thread was writing to a file or handling a transaction. Prefer safe exits when possible.
Q4: How do I check if a thread is still running?
A: Use thread.status to get the thread’s state ("run", "sleep", false, or nil), or thread.alive? to get true or false.
Q5: Why would I use sleep in a Rails app?
A: In most cases, you shouldn’t. sleep is okay in quick tests or simulations, but for real delays or background tasks, use job queues like Sidekiq or scheduling tools like whenever.
🛠️ Technical Questions & Answers
Q1: What’s the difference between Thread#status and Thread#alive?
A:
- status gives the current state of the thread: "run", "sleep", false (finished), or nil (crashed).
- alive? returns true if the thread is still running or sleeping.
t = Thread.new { sleep 1 }
puts t.status # => "sleep"
puts t.alive? # => true
t.join
puts t.status # => false
puts t.alive? # => false
Q2: What happens if I call join on a dead thread?
A: Nothing bad — Ruby will just skip waiting because the thread is already finished.
t = Thread.new { puts "Done" }
t.join
# Later in the code
t.join # This has no effect, because it's already joined.
Q3: How do I handle exceptions inside a thread?
A: Wrap your thread block in a begin...rescue to catch errors and avoid silent crashes.
Thread.new do
begin
raise "Boom!"
rescue => e
puts "Caught error: #{e.message}"
end
end.join
Q4: Can I use sleep for retries?
A: Yes, but use it carefully. You can retry something after a short delay, but in production apps, it's better to use a backoff pattern or job retries.
tries = 0
begin
tries += 1
raise "Fail" if tries < 3
puts "Success!"
rescue
sleep 1
retry
end
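The backoff pattern mentioned above just means waiting longer after each failure. A minimal sketch, where risky_call is a placeholder for your flaky operation:
tries = 0
begin
  tries += 1
  risky_call # hypothetical operation that sometimes fails
rescue
  if tries < 3
    sleep(2**tries) # exponential backoff: 2s, then 4s
    retry
  else
    raise # give up after 3 attempts
  end
end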
Q5: How do I properly stop a thread without using kill?
A: Use a flag variable to signal the thread to stop on its own (called graceful shutdown).
stop = false
thread = Thread.new do
until stop
puts "Working..."
sleep 1
end
end
sleep 3
stop = true
thread.join
puts "Thread ended gracefully"
✅ Best Practices with Examples
1. Use join only when needed
Only block the main thread with join if the result from the thread is essential. Otherwise, let it run freely or use background jobs.
thread = Thread.new { do_important_work }
thread.join # ✅ Only if the result matters before continuing
2. Never depend on sleep for timing critical code
sleep is okay for testing or simulating delay but not reliable for production logic or scheduling.
# ❌ Not reliable
sleep(5)
send_email()
# ✅ Better
EmailJob.set(wait: 5.seconds).perform_later(user_id)
3. Avoid using kill — prefer graceful shutdowns
Use flags or signals to safely stop threads instead of terminating them mid-operation, which can cause data loss or corruption.
stop = false
t = Thread.new { loop { break if stop } }
stop = true
t.join
4. Monitor thread health using status
status helps in debugging or checking if a thread finished, crashed, or is still running.
if thread.status.nil?
puts "Thread crashed 😵"
elsif thread.status == "sleep"
puts "Thread is sleeping..."
end
5. Avoid blocking threads in Rails controllers
Never use sleep or long-running threads in request/response cycles. It slows down users. Offload tasks to background workers.
# ❌ Bad
Thread.new { sleep 10 }
# ✅ Good
NotificationJob.perform_later(user_id)
🌍 Real-world Scenario
Imagine you're building a Rails app that allows users to upload large files and immediately convert them to multiple formats (e.g., PDF, PNG, DOCX). Instead of doing it all in one go and delaying the response, you want to process each conversion task in a background thread.
def convert_file(file)
formats = %w[pdf png docx]
threads = formats.map do |format|
Thread.new do
begin
sleep 2 # Simulate time-consuming conversion
puts "Converted to #{format.upcase}"
rescue => e
puts "Error: #{e.message}"
end
end
end
threads.each(&:join)
puts "All conversions complete!"
end
✅ You used join to wait for all formats to finish.
✅ sleep simulates a delay (or could be replaced by a real conversion method).
✅ You could monitor each thread with status to ensure it's still running.
❌ Avoid using kill unless one of the conversions must be forcibly stopped (like due to a virus scan failure).
In a real production setup, you’d likely move this logic to a background worker (like Sidekiq), but this example shows how you can manage thread lifecycles manually if needed — especially for quick, non-critical concurrent tasks.
Thread-local variables (Thread.current)
🧠 Detailed Explanation
Thread-local variables are special variables that belong to a single thread only. In Ruby, you can create thread-local variables using Thread.current.
Think of Thread.current like a tiny "backpack" that each thread carries — it can store small pieces of information safely without sharing with other threads.
For example, if two users make two requests at the same time in a Rails app, you can store user-specific information (like :user_id) inside Thread.current — and it won’t mix up between the users!
This makes your code much safer when multiple things happen at once, like when using a multithreaded server (e.g., Puma).
Summary:
- Each thread has its own Thread.current.
- Variables stored there are private to that thread.
- It's great for things like request IDs, user session info, or temporary settings during a single request.
💡 Examples
1. Storing a value in Thread.current
Thread.new do
Thread.current[:user_id] = 42
puts "Inside thread: #{Thread.current[:user_id]}" # => 42
end.join
puts "Outside thread: #{Thread.current[:user_id]}" # => nil
Why: Each thread has its own storage. :user_id is set only inside that specific thread.
2. Using Thread.current to track request ID in a Rails app
# At the beginning of request
Thread.current[:request_id] = SecureRandom.uuid
# In the middle of code (logging for example)
logger.info "Processing request #{Thread.current[:request_id]}"
# After request finished
Thread.current[:request_id] = nil
Why: Safely track which request is being processed without confusing two users' requests at the same time.
3. Different threads have different Thread.current variables
t1 = Thread.new do
Thread.current[:counter] = 100
sleep 1
puts "Thread 1 counter: #{Thread.current[:counter]}" # => 100
end
t2 = Thread.new do
Thread.current[:counter] = 200
sleep 1
puts "Thread 2 counter: #{Thread.current[:counter]}" # => 200
end
t1.join
t2.join
Why: Even though they use the same key (:counter), each thread has its own independent value!
4. Cleaning up after the thread is done
Thread.new do
Thread.current[:cache] = { data: "Important" }
# Use the data
puts "Data: #{Thread.current[:cache][:data]}"
# Clean up
Thread.current[:cache] = nil
end.join
Best practice: Always clean up thread-local variables to avoid memory leaks in long-lived threads (like in server apps).
🔁 Alternative Concepts
- 🛡️ Request-specific storage (Rails RequestStore): Instead of manually using Thread.current, gems like request_store automatically create a thread-safe place to store data for each HTTP request. Example: RequestStore.store[:user_id] = current_user.id — easier and safer than directly managing Thread.current.
- 🧠 Service Objects or Context Passing: Instead of saving data globally in a thread, you can explicitly pass important data through methods or service objects. Example: UserService.new(user_id: 5).perform — this keeps your code cleaner and easier to test.
- 🔄 Middleware storage: For web apps, you can use Rack middleware to set and clean per-request variables automatically at the beginning and end of every request. Good for things like logging, authentication tokens, request IDs.
- 🧹 Using Background Jobs: If you need to pass thread-specific information between systems (like from a web request to a background job), pass it explicitly as job parameters — don’t rely on Thread.current.
- 💬 Database/Session storage for long-term data: If you need to persist user-specific data beyond a single thread or request, store it in the database or user session instead of in memory.
Summary: Use Thread.current for temporary, thread-specific, short-lived data — but for larger applications, prefer safer patterns like RequestStore, context objects, or middleware.
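Rails also ships its own wrapper for this pattern: ActiveSupport::CurrentAttributes (Rails 5.2+) gives you a per-thread attribute store that Rails resets automatically between requests. A minimal sketch:
# app/models/current.rb
class Current < ActiveSupport::CurrentAttributes
  attribute :user_id, :request_id
end
# Set it once, e.g. in a controller or middleware:
Current.user_id = current_user.id
# Read it anywhere during the same request:
Rails.logger.info "Acting user: #{Current.user_id}"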
❓ General Questions & Answers
Q1: What is Thread.current in Ruby?
A: Thread.current is a special Ruby object that refers to the thread that is currently running. You can use it like a small "private storage box" to keep variables that are safe and separate for each thread.
Q2: Why would I need thread-local variables?
A: When you have multiple threads doing work at the same time (like handling multiple web requests), thread-local variables keep each thread’s data isolated, so they don't accidentally mix up information.
Q3: Is Thread.current shared across threads?
A: No! Each thread has its own Thread.current. A variable stored in one thread’s Thread.current is not visible or accessible to other threads.
Q4: When is it a bad idea to use Thread.current?
A:
- When you have long-living threads that keep growing their local data (memory leak risk).
- When you need to share data between multiple threads — then you should use databases, Redis, or job queues instead.
Q5: Does Rails automatically use Thread.current internally?
A: Yes! Rails internally uses Thread.current for things like request ID tracking, cache keys, and error reporting. Gems like request_store also rely on Thread.current behind the scenes.
🛠️ Technical Questions & Answers
Q1: How do I set and read a thread-local variable?
A: You set it like a hash: Thread.current[:key] = value. You read it the same way: Thread.current[:key].
Thread.current[:user_id] = 123
puts Thread.current[:user_id] # => 123
Q2: What happens if two threads set the same key?
A: No conflict. Each thread has its own Thread.current — so even if they use the same key like :user_id, they’re separate.
t1 = Thread.new { Thread.current[:user_id] = 1; sleep(1); puts Thread.current[:user_id] }
t2 = Thread.new { Thread.current[:user_id] = 2; sleep(1); puts Thread.current[:user_id] }
t1.join
t2.join
# Output: 1 and 2 independently
Q3: How do I clear a thread-local variable?
A: Simply set it to nil when you’re done to avoid memory leaks.
Thread.current[:user_id] = nil
Q4: Is Thread.current safe inside a Rails controller?
A: Generally yes, because each web request gets its own thread in Rails (especially with the Puma server). However, you must clean up thread-local variables at the end of each request if you set your own values manually.
Q5: Can Thread.current be used in background jobs?
A: Technically yes, but it’s better to pass job-specific data through job arguments instead of relying on thread-local storage. Thread.current should mostly be used for request-scoped or short-lived tasks.
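In practice, “pass it through job arguments” looks like this — AuditJob is a hypothetical example:
class AuditJob < ApplicationJob
  def perform(user_id, request_id)
    # The data arrives explicitly — no reliance on the enqueuing thread's state.
    Rails.logger.info "[#{request_id}] auditing user #{user_id}"
  end
end
# Enqueue with the request-scoped values captured at call time:
AuditJob.perform_later(current_user.id, Thread.current[:request_id])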
✅ Best Practices with Examples
1. Keep thread-local variables short-lived
Only store data needed for a short time (like during a request), then clean it up to avoid memory leaks.
Thread.current[:user_id] = current_user.id
# ... use it temporarily
Thread.current[:user_id] = nil # Clean up after use
2. Always reset thread-local variables after web requests
Especially important in Rails apps running with thread pools (e.g., Puma). Use Rails middleware or custom helpers to clean.
3. Prefer RequestStore for Rails applications
Instead of directly using Thread.current, prefer gems like request_store for web-request-specific data. It's safer and automatically cleaned up.
RequestStore.store[:user_id] = current_user.id
4. Don't use Thread.current for cross-thread communication
Thread.current is isolated. To share data between threads, use thread-safe queues (like Ruby’s Queue class) instead.
q = Queue.new
Thread.new { q.push("data") }
Thread.new { puts q.pop }
5. Namespace your thread keys clearly
Use clear key names to avoid confusion. Example: :current_user_id instead of :id.
Thread.current[:current_user_id] = user.id
🌍 Real-world Scenario
In a typical Rails web application using the Puma server, each incoming request is handled in its own thread. Suppose you want to track a request ID across all log entries for easier debugging.
You can generate a unique ID at the start of the request and store it in Thread.current[:request_id]. Then, your logging system can automatically pull the request ID from Thread.current without needing to manually pass it everywhere.
# Middleware: set request_id
class RequestIdMiddleware
def initialize(app)
@app = app
end
def call(env)
Thread.current[:request_id] = SecureRandom.uuid
@app.call(env)
ensure
Thread.current[:request_id] = nil # Clean up
end
end
# Logger
logger.info "Processing request #{Thread.current[:request_id]}"
✅ Benefits:
- Each request gets its own clean, isolated request ID
- Easy to track logs by request
- No risk of one user’s ID leaking into another user’s thread
🔥 This is a real pattern used inside popular gems like request_store and services like Sentry, New Relic, and Datadog tracing for Rails apps.
Key differences: Concurrency vs Parallelism
🧠 Detailed Explanation
Concurrency means working on multiple tasks at the same time — but not necessarily doing them exactly at the same instant. It’s about managing and switching between tasks efficiently.
Parallelism means doing multiple tasks at exactly the same time — truly running side-by-side — using multiple CPU cores.
🔥 Simple way to remember:
- Concurrency: One worker juggling many balls quickly.
- Parallelism: Many workers each juggling their own ball at the same time.
In Ruby and Rails:
- Concurrency happens mostly using threads.
- Parallelism happens using multiple processes (like Sidekiq workers or multiple Puma workers).
Both concurrency and parallelism help make web applications faster and better at handling many users at once!
💡 Examples
1. Concurrency: Time-sharing with threads (single CPU core)
t1 = Thread.new do
3.times { puts "T1 working..."; sleep(0.5) }
end
t2 = Thread.new do
3.times { puts "T2 working..."; sleep(0.5) }
end
t1.join
t2.join
Explanation: Even if there’s only one CPU core, Ruby switches between T1 and T2 quickly. It "pretends" to do both tasks at the same time (this is concurrency).
2. Parallelism: Running on multiple CPU cores (true parallel)
# Run with Parallel gem or Process.fork
require 'parallel'
Parallel.each([1, 2], in_processes: 2) do |i|
puts "Process #{i} is working..."
sleep(2)
end
Explanation: Each process runs on a different CPU core — both tasks literally happen at the same time.
3. Real-life analogy
- Concurrency: 1 cashier handling 5 customers by switching between them quickly.
- Parallelism: 5 cashiers serving 5 customers at the same time.
4. Rails Server Example
Rails with Puma is concurrent — it handles many web requests by juggling them between threads. If you run multiple Puma workers, it becomes parallel because workers can run on different CPU cores.
🔁 Alternative Concepts
- 🧵 Fibers (Lightweight Concurrency): Instead of full threads, Ruby supports fibers — tiny, lightweight units of work. They allow you to pause and resume blocks of code manually, providing concurrency without heavy threads. Example: using Fiber.yield to pause inside a block.
- 🔄 Async/Await Style (concurrent-ruby, Async gem): Instead of traditional threads, you can use async programming where tasks declare when they wait and resume. This saves resources and improves performance without many threads. Example: the Async gem helps you perform concurrent HTTP requests very easily.
- 📦 Multiprocessing (Process.fork in Ruby): Instead of just multiple threads, you can create completely separate processes using fork. Each process has its own memory space and can truly work in parallel on multiple cores. Note: processes are heavier than threads.
- 🌐 Event-driven I/O (like Node.js style): Some systems (like Node.js) use event loops to handle thousands of concurrent network requests with a single thread by being non-blocking. Ruby has gems like EventMachine that support this style.
- 🐙 Actor Model (like Celluloid in Ruby): Actors are independent units that send messages to each other. Instead of sharing memory, they communicate asynchronously, making concurrency much easier and safer.
Summary: You don’t always need "true parallelism" — often good concurrency with lightweight tools is enough to make your Rails apps fast and responsive!
❓ General Questions & Answers
Q1: Can a program be concurrent but not parallel?
A: Yes! Concurrency means a program can manage multiple tasks at once (like juggling), even if it’s only using one CPU core and tasks are not truly running at the same time.
Q2: Can a program be parallel but not concurrent?
A: Yes, in rare cases. A purely parallel program might split one single task across multiple cores without switching between different tasks. But usually, real-world programs are both concurrent and parallel at the same time.
Q3: Which one is better — concurrency or parallelism?
A: It depends!
- Concurrency is better when you have lots of tasks waiting (like many users making web requests).
- Parallelism is better when you need pure speed for heavy tasks (like processing huge files, images, videos).
Q4: Does Rails use concurrency or parallelism?
A: Rails mainly uses concurrency with threaded servers like Puma. If you configure multiple workers (processes), then you also get parallelism.
Q5: How does a web server handle concurrency?
A: The server uses threads (or async I/O) to handle many incoming requests without blocking. While one thread waits for a database or API, another thread can work — this keeps the server fast and responsive.
🛠️ Technical Questions & Answers
Q1: How does Ruby handle concurrency internally?
A: In Ruby (especially MRI Ruby), concurrency is mostly implemented using native operating system threads. However, Ruby uses a Global Interpreter Lock (GIL), meaning only one thread can execute Ruby code at a time — even if you have multiple threads.
Thread.new { heavy_ruby_task }
Thread.new { another_heavy_ruby_task }
# They share CPU time but don't execute Ruby code truly in parallel
Note: If threads are waiting (like for HTTP requests or DB queries), they can still release the GIL and allow others to run.
Q2: What is the GIL (Global Interpreter Lock)?
A: The GIL is a mechanism that prevents multiple native threads from executing Ruby code at the same time in MRI Ruby. It simplifies memory management but limits full parallelism for CPU-heavy tasks.
Other Ruby implementations: JRuby and TruffleRuby don’t have a GIL and allow real parallel execution!
Q3: Can I achieve real parallelism in Ruby?
A: Yes, by using multiple processes instead of threads.
Gems like Parallel or background systems like Sidekiq (running separate processes) can use multiple CPU cores fully.
Parallel.each([1,2,3,4], in_processes: 4) do |number|
puts "Processing number #{number}"
end
Q4: Is concurrency still useful even if GIL exists?
A: Absolutely! Concurrency shines when threads wait for I/O (like database, API calls, file access). Threads can switch and continue useful work while waiting, making web servers fast even with GIL.
Q5: What happens if I don’t manage threads properly?
A:
- Memory leaks (if threads never finish).
- Race conditions (if threads change shared data without locks like Mutex).
- Hard-to-debug deadlocks (two threads waiting for each other forever).
Tip: Always use join or a proper thread-pool manager (like concurrent-ruby) if you manually create threads.
✅ Best Practices with Examples
1. Prefer concurrency for I/O-bound tasks
Use threads to handle tasks like web requests, file reading, and APIs — where most time is spent waiting.
Thread.new { call_external_api }
Thread.new { read_big_file }
2. Use parallelism for CPU-heavy tasks
If you are doing heavy computations (image processing, big math), use multiple processes to fully utilize CPU cores.
Parallel.each(big_data, in_processes: 4) { |chunk| process(chunk) }
3. Always manage thread lifecycle (join or pool)
Don’t let loose threads leak memory. Always join them or use a pool such as Concurrent::FixedThreadPool from the concurrent-ruby gem.
threads = 5.times.map { Thread.new { work } }
threads.each(&:join)
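For more than a handful of threads, a pool from the concurrent-ruby gem is safer than spawning them by hand — a minimal sketch:
require "concurrent"
pool = Concurrent::FixedThreadPool.new(5) # at most 5 worker threads
20.times do |i|
  pool.post { puts "task #{i} running" } # queued tasks wait for a free worker
end
pool.shutdown             # stop accepting new work
pool.wait_for_termination # block until queued tasks finish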
4. Use Mutex or thread-safe structures when sharing data
Protect shared resources to avoid race conditions.
mutex = Mutex.new
mutex.synchronize { shared_counter += 1 }
5. Prefer battle-tested gems for concurrency control
Gems like concurrent-ruby, Sidekiq, and Parallel handle concurrency and parallelism reliably and safely.
6. In Rails, use Puma (multi-threaded) and multiple workers if needed
Optimize Rails servers by balancing threads and processes depending on your app's workload.
# config/puma.rb
threads 5, 5
workers 2
🌍 Real-world Scenario
Imagine you have a Rails application that serves thousands of users every day. Each user makes API requests that involve fetching user profiles, posts, and notifications.
Concurrency:
Your Rails app uses a threaded server like Puma. When 100 users send requests at the same time, Rails uses multiple threads to handle requests concurrently — without needing 100 CPU cores.
Each thread quickly switches between I/O tasks like database queries or external API calls.
Parallelism:
For background tasks (like sending bulk emails or processing images), you use a background job system like Sidekiq or Resque. These jobs run in parallel processes using multiple CPU cores to complete work faster.
Example Setup:
- Puma server: 5 threads, 2 workers → concurrency + parallelism
- Sidekiq: 10 parallel background workers → parallelism
- External APIs: Use threads to call multiple services at the same time → concurrency
Result:
🔥 Your app stays fast and responsive under high load,
🔥 Background jobs complete faster,
🔥 Your server resources are used efficiently without needing thousands of cores.
How Rails handles concurrency
🧠 Detailed Explanation
Rails handles concurrency by allowing multiple users to use the app at the same time without making them wait for each other.
It does this by using threads inside a web server like Puma.
When many users send requests, Puma creates a new thread for each request. While one thread is waiting (like for a database or external API), another thread can continue working. This way, Rails juggles many users together smoothly.
Even though Ruby has something called the Global Interpreter Lock (GIL) that limits true parallel execution, Rails still benefits because the GIL is released during I/O operations (like database, file read, or network calls).
In short: Rails can handle lots of users at the same time by using threads smartly — and it feels fast even when many things happen at once!
💡 Examples
1. Handling multiple requests at the same time (Puma Server)
When a Rails app runs under Puma with multiple threads, it can process several HTTP requests at the same time using different threads.
# config/puma.rb
threads_count = 5
threads threads_count, threads_count
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { "development" }
Meaning:
- Puma can run up to 5 threads.
- Each thread can handle 1 active request.
2. Concurrency while waiting for slow external services
def call_external_services
  t1 = Thread.new { ServiceA.fetch_data }
  t2 = Thread.new { ServiceB.fetch_data }
  # Thread#value implicitly joins, so no separate join calls are needed
  render json: { service_a: t1.value, service_b: t2.value }
end
Explanation: While waiting for ServiceA or ServiceB responses, the Rails app is free to switch and do other work.
3. Database connection pool concurrency
Since each thread may access the database, Rails manages a connection pool to ensure threads safely share database connections.
# config/database.yml
production:
  pool: 10
  checkout_timeout: 5
Meaning: Up to 10 threads can use the database at the same time; a thread that can't get a free connection waits up to 5 seconds (the checkout_timeout) before raising an error.
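To see how busy the pool actually is, you can inspect it at runtime; a small sketch, assuming Rails 5.1+ (which added ConnectionPool#stat):
# In a console or a health-check endpoint
stats = ActiveRecord::Base.connection_pool.stat
# e.g. { size: 10, connections: 3, busy: 2, dead: 0, idle: 1, waiting: 0, checkout_timeout: 5 }
Rails.logger.info "DB pool: #{stats.inspect}"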
4. Concurrent caching
Rails can handle caching in a thread-safe way, so multiple threads can safely read/write cache entries at the same time.
Rails.cache.fetch("user_#{user.id}") do
expensive_user_query(user)
end
Why it’s safe: Rails cache stores are designed to handle multiple concurrent threads accessing the cache.
🔁 Alternative Concepts
- 🔄 Async Programming (without threads)
Instead of using threads, you can use event-driven, asynchronous code. Gems like async and async-http allow Ruby to handle thousands of concurrent operations inside a single thread using non-blocking I/O (see the sketch after the summary below).
Use case: Building lightweight, high-concurrency APIs without thread overhead.
- 🌿 Multiprocessing (processes instead of threads)
Instead of threads inside a process, you can run multiple processes. Each Rails worker (like in Puma or Unicorn) runs independently, using different CPU cores.
Benefit: Real parallelism without GIL problems.
- ⚡ Actor-based Concurrency (e.g., Celluloid, Karafka)
Use "actors" (small independent units) that send messages to each other instead of sharing memory between threads.
Advantage: No race conditions, easier to reason about complex systems.
- 📦 Serverless Functions
Instead of worrying about concurrency inside Rails, some architectures use serverless services (like AWS Lambda) to independently handle each request in a separate lightweight process.
Best for: Highly scalable microservices.
Summary:
Rails uses threads by default, but depending on the app's needs, you can switch to async, multi-process, or actor-based models for even better concurrency handling.
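As a taste of the async alternative above, here is a minimal sketch using the async gem's v2-style API (fetch_a and fetch_b are placeholder methods for non-blocking I/O calls):
require 'async'

Async do |task|
  a = task.async { fetch_a }  # both child tasks run concurrently
  b = task.async { fetch_b }  # on a single thread via an event loop
  puts a.wait, b.wait         # wait returns each task's result
end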
❓ General Questions & Answers
Q1: Does concurrency really make my Rails app faster?
A: Yes! Concurrency lets Rails handle multiple users at the same time without making them wait in line — especially when requests involve slow I/O like database queries or API calls.
Q2: What happens if I have no concurrency?
A: Without concurrency, Rails would process one user at a time. If one user’s request is slow, everyone else waits — which would make your app feel very slow under load.
Q3: Are Rails threads safe?
A: Mostly yes — Rails itself and popular gems are designed to be thread-safe. But you must be careful when writing your own code that shares variables or data across threads. Use Mutex or thread-local variables when needed.
Q4: How many threads should I configure in Puma?
A: It depends on your server resources and app type. A common setup is 5–16 threads per Puma worker. More threads help concurrency but also use more memory.
# Example in config/puma.rb
threads 5, 5
workers 2
Q5: Do I need to write thread code manually in my Rails app?
A: Not usually! Puma handles the threading for incoming web requests automatically. You only write manual threading code if you're doing something like parallel API calls inside a controller action.
🛠️ Technical Questions & Answers
Q1: How does Puma server handle concurrency in Rails?
A: Puma starts multiple threads inside each worker process. Each thread handles a separate HTTP request. While one thread waits (for database, API), another thread can pick up and process another request — without blocking.
# config/puma.rb
threads 5, 5
workers 2
Meaning: Up to 5 concurrent requests per worker.
Q2: How does Rails handle thread safety?
A: Rails itself is designed to be thread-safe, meaning internal Rails operations won’t break when accessed by multiple threads.
However, your own code must also avoid modifying shared variables without protection like Mutex
.
mutex = Mutex.new
mutex.synchronize { shared_variable += 1 }
Q3: What is the role of the database connection pool?
A: Each Rails thread that touches the database needs a separate database connection. Rails manages this with a "connection pool" to make sure there are enough DB connections for concurrent threads.
# config/database.yml
pool: 10
Tip: Always make sure your pool size ≥ Puma max threads.
Q4: What about Ruby's Global Interpreter Lock (GIL)?
A: Ruby’s GIL (in MRI) prevents multiple threads from executing Ruby code simultaneously. But when a thread is waiting (e.g., for I/O), another thread can run. So concurrency is still highly beneficial for Rails apps that do lots of I/O.
Q5: Can threads in Rails share variables?
A: Technically yes, but it’s risky.
Threads should avoid sharing data unless absolutely necessary, and when they do, they should use synchronization tools like Mutex to prevent race conditions.
✅ Best Practices with Examples
1. Configure Puma threads properly
Match your Puma thread settings with your server size and expected load. More threads = more concurrent requests, but also more memory used.
# config/puma.rb
threads 5, 16
workers 2
Tip: Keep database.yml pool size ≥ max Puma threads.
2. Keep controller actions fast
Controller actions should be lightweight — avoid heavy computations inside them. Use background jobs (like Sidekiq) for slow tasks.
def create_order
OrderCreationJob.perform_later(order_params)
render json: { status: "Order is being processed." }
end
3. Use thread-safe caches and stores
Always use thread-safe options for caches and shared memory. Rails’ Rails.cache is thread-safe by default.
Rails.cache.fetch("user_#{user.id}") do
expensive_query(user)
end
4. Use Mutex when modifying shared variables manually
Protect critical sections of code to prevent race conditions when multiple threads change the same variable.
mutex = Mutex.new
mutex.synchronize do
counter += 1
end
5. Avoid unnecessary thread creation manually
Rely on Puma’s built-in threading model for incoming requests. Create manual threads only when absolutely needed (e.g., parallel API calls).
t1 = Thread.new { call_api_1 }
t2 = Thread.new { call_api_2 }
[t1, t2].each(&:join) # don't overuse, and always join what you create
6. Monitor and Tune
Use monitoring tools like New Relic, Skylight, or Datadog to watch thread utilization, response time, and deadlocks. Tune thread and worker settings based on traffic patterns.
🌍 Real-world Scenario
Imagine you built a Rails API for a food delivery app. Users place orders, browse restaurants, and check their order status. Thousands of users are online at the same time, especially during peak meal hours.
If Rails could handle only one request at a time, users would experience huge delays. But thanks to Puma’s multi-threaded server:
- Each incoming request (like placing an order) is handled by a separate thread.
- While one user waits for a restaurant list to load, another user’s payment request is already processing.
- Background jobs (e.g., sending confirmation emails) are processed by Sidekiq using multiple processes, in parallel.
Result:
- Users don’t feel any delay even if many requests happen at once.
- The Rails app uses server resources efficiently without needing hundreds of CPUs.
- Slow external services (like payment gateways or map APIs) don’t block other users because threads release control while waiting.
Typical Setup:
- Puma configured with 5–16 threads per worker and 2–4 workers.
- Sidekiq handling background jobs for emails, notifications, and analytics.
- Redis or PostgreSQL with tuned connection pools for concurrent access.
Is Rails thread-safe?
🧠 Detailed Explanation
Yes, Rails can be thread-safe, but it depends on two things: the server you use (like Puma) and how you write your code.
When you use a thread-based server like Puma, it creates multiple threads to handle many user requests at the same time.
Rails supports this by design — it can serve several users at once without crashing or mixing up their data.
But to be truly thread-safe, your own code must also be careful. If two users share the same variable or data in memory, they might change it at the same time and cause bugs. That’s called a race condition.
To be safe:
- Never use shared global variables across requests
- Use thread-local storage like Thread.current[:user_id] if needed
- Use Mutex when multiple threads might change the same thing
Summary: Rails is ready for threads, but you must avoid shared data problems in your app code. If done right, your Rails app will safely serve many users at once — fast and smooth!
💡 Examples
1. Safe Code (Thread-safe variable access)
# Using local variables in controller
def index
result = SomeService.call(params[:id])
render json: { data: result }
end
This is safe because result is scoped to this request only. Each user gets their own copy — no overlap.
2. Unsafe Code (Shared class variable)
class MyService
@@count = 0
def self.increment
@@count += 1
end
end
This is not safe! If two threads call increment at the same time, they could both change @@count and mess it up.
3. Fix Unsafe Code Using Mutex
class MyService
@@count = 0
@@mutex = Mutex.new
def self.increment
@@mutex.synchronize do
@@count += 1
end
end
end
This version uses Mutex to make sure only one thread changes the count at a time.
This protects shared data and avoids race conditions.
4. Thread-local variables
# Set user ID in a thread-safe way
Thread.current[:user_id] = current_user.id
This variable is visible only to the current thread. If another request runs in a different thread, it won’t see or affect this data.
5. Avoid Global Variables
# ❌ Not safe!
$global_data = []
def store(data)
$global_data << data
end
Global variables like $global_data are shared between all threads. If two users update it at once, things can break. Avoid them in web apps.
🔁 Alternative Concepts
- 🔄 Process-based Concurrency (Unicorn or Passenger)
Instead of threads, some servers like Unicorn or Passenger run multiple separate processes. Each process handles one request at a time, so there are no shared memory issues.
Benefit: Easy to avoid thread safety bugs. Downside: Uses more memory.
- 📦 Background Jobs (e.g., Sidekiq)
For long-running or risky tasks (like sending emails, analytics, or batch updates), move the work to a background job. These run outside the web request and avoid threading issues in the controller.
Example: Use perform_later to send email instead of calling it inline (a sketch follows the summary below).
- 🧵 Async Ruby / Evented I/O
Tools like async or falcon use event loops instead of threads to manage concurrency. Great for high-performance APIs without the complexity of threading.
- 🔐 Immutable Data
Another way to avoid thread bugs is to design your code so nothing gets changed (immutable). For example, return new objects instead of updating the same ones in place.
Summary:
Threads are powerful, but there are other concurrency models — like processes, background jobs, and async event loops — that may be simpler and safer depending on your app.
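A minimal sketch of the background-job alternative mentioned above, using ActiveJob (WelcomeEmailJob is a hypothetical name; the mailer call mirrors the earlier examples):
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    user = User.find(user_id)
    UserMailer.welcome_email(user).deliver_now
  end
end

# In the controller: returns immediately; the job backend does the work
WelcomeEmailJob.perform_later(current_user.id)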
❓ General Questions & Answers
Q1: Is Rails thread-safe by default?
A: Yes, the Rails framework is designed to be thread-safe. But your application code must also follow thread-safe practices — especially when sharing variables or using background tasks.
Q2: How do I know if my code is thread-safe?
A: If your code uses shared state (like class variables or global variables) across threads without protection, it’s not thread-safe.
Thread-safe code uses local variables, thread-local storage (Thread.current), or synchronization (e.g., Mutex).
Q3: What happens if I write unsafe code?
A: You might face strange bugs — like users seeing each other's data, broken counters, or missing records. These are called “race conditions,” and they are hard to find and debug.
Q4: Does using Puma mean my app is already thread-safe?
A: Puma supports threads, but your code must be thread-safe to avoid problems. Rails won’t magically fix unsafe patterns — it just gives you the tools to run multiple requests safely.
Q5: Should I always use threads manually in Rails?
A: No. Let Puma handle web request threads. Use manual threads only for specific needs (like parallel API calls), and even then, be very careful with shared data.
🛠️ Technical Questions & Answers
Q1: What is thread safety in the context of Rails?
A: Thread safety means multiple threads can access your Rails app at the same time without corrupting data or causing bugs. It requires the framework (Rails) and your app code to avoid conflicts over shared resources.
Q2: How does Rails manage threads internally?
A: Rails uses thread-local variables and avoids shared global state. With a threaded server like Puma, each web request is handled in its own thread, and Rails ensures per-request isolation for most built-in features.
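One built-in tool for this per-request isolation is ActiveSupport::CurrentAttributes (Rails 5.2+): it gives you state that is isolated per thread and reset by Rails between requests. A minimal sketch:
# app/models/current.rb
class Current < ActiveSupport::CurrentAttributes
  attribute :user
end

# Set it in a before_action; each request's thread sees only its own value,
# and Rails clears it automatically when the request finishes
Current.user = current_user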
Q3: What are some unsafe patterns in threaded Rails apps?
A: Common unsafe patterns include:
- Using class variables (@@count) without synchronization
- Using global variables ($value) across requests
- Modifying shared in-memory objects across threads
# ❌ Unsafe
@@shared_counter = 0
def increase
@@shared_counter += 1
end
Q4: How do I make code thread-safe in Rails?
A: Use Mutex to synchronize access, avoid global state, and prefer local or thread-local variables.
mutex = Mutex.new
mutex.synchronize do
shared_data += 1
end
Or use thread-local storage for data unique to each thread:
Thread.current[:user_id] = current_user.id
Q5: What should my database pool size be for threads?
A: Your config/database.yml pool size should be greater than or equal to the maximum number of threads per Puma worker. Otherwise, threads will wait for a free connection, causing performance issues.
# config/database.yml
pool: 16
✅ Best Practices with Examples
1. Use a thread-safe web server like Puma
Puma supports multithreading and is the default server in Rails. You can control thread usage in its config file.
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
preload_app!
2. Avoid using class variables or global state
Class variables are shared across threads and can lead to race conditions.
# ❌ Not thread-safe
class UserMailer
@@counter = 0
end
# ✅ Thread-safe
class UserMailer
def initialize
@counter = 0
end
end
3. Use Mutex to protect shared resources
If multiple threads are accessing or modifying the same variable, use a mutex to prevent race conditions.
mutex = Mutex.new
count = 0
threads = 3.times.map do
Thread.new do
mutex.synchronize do
count += 1
end
end
end
threads.each(&:join)
puts count # => 3
4. Use ActiveRecord connection pooling properly
Each thread should check out its own DB connection. Use connection pools for safety.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.create(name: "Threaded User")
end
end
5. Monitor thread usage in production
Log and trace thread activity to detect slow or stuck threads.
Thread.new do
Rails.logger.info "Thread started: #{Time.now}"
perform_work
Rails.logger.info "Thread ended: #{Time.now}"
end
6. Use background job systems for heavy tasks
Don’t do long-running tasks in controllers with threads. Use Sidekiq or ActiveJob.
# ❌ Not ideal
Thread.new { UserMailer.bulk_email.deliver_now }
# ✅ Better approach
BulkEmailJob.perform_later(user_ids)
🌍 Real-world Scenario
Imagine you’re building a Rails API for a stock trading app that fetches real-time data from three different stock exchanges. If you call each exchange one by one, it could take 3–4 seconds per request — slowing down the user experience.
By using threads inside your controller, you can call all APIs at the same time and return a response in under 1 second — without needing background jobs.
def live_prices
  prices = {}
  mutex = Mutex.new
  threads = []
  threads << Thread.new do
    price = fetch_nasdaq_price                    # slow call runs outside the lock
    mutex.synchronize { prices[:nasdaq] = price } # lock only the shared write
  end
  threads << Thread.new do
    price = fetch_nyse_price
    mutex.synchronize { prices[:nyse] = price }
  end
  threads << Thread.new do
    price = fetch_tsx_price
    mutex.synchronize { prices[:tsx] = price }
  end
  threads.each(&:join)
  render json: prices
end
✅ This is thread-safe because:
- Each thread fetches data independently, outside the lock.
- Mutex ensures the shared hash prices is updated safely.
- join ensures all threads finish before returning a response.
In production, Rails with Puma can run multiple threads simultaneously per worker — so this technique helps you improve response time without extra infrastructure.
Configuring Thread Safety with config.threadsafe!
🧠 Detailed Explanation
In older versions of Rails (like Rails 3), you had to tell Rails to get ready for handling multiple requests at the same time using:
config.threadsafe!
This line went in your environment file (like config/environments/production.rb).
It made Rails prepare for "multithreading" — which means handling more than one request at the same time.
When you enable thread safety:
- Rails loads everything in memory ahead of time.
- It turns off features like auto-reloading code (which isn’t safe with threads).
Today, in modern Rails (Rails 5 and newer), you don’t need to use config.threadsafe! anymore.
Rails is already designed to be thread-safe by default if you're using a proper web server like Puma.
So if you're on a newer Rails version: you don't need to write config.threadsafe! at all. Just make sure your server supports threads.
Understanding this helps you know how Rails behaves under the hood when handling many users at the same time.
💡 Examples
1. Using config.threadsafe! in Rails 3
In Rails 3, you had to turn on thread safety manually. You did this by adding this line to your environment file:
# config/environments/production.rb
# This tells Rails to work in thread-safe mode
config.threadsafe!
This made Rails load everything upfront (called "eager loading") and disabled code reloading during a request, which is not safe when multiple threads are used.
2. Thread-safe behavior in Rails 5 and newer (no config.threadsafe! needed)
In modern Rails (5+), thread safety is already built-in. You just need to configure your server to use multiple threads. For example, if you use Puma (default server for Rails), you can set how many threads it should use:
# config/puma.rb
# Use 5 threads per Puma worker
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# Use 2 Puma workers (separate processes)
workers 2
This setup allows Rails to handle 10 requests at the same time (2 workers × 5 threads each).
3. Example of thread-safe code in your Rails app
When your app is thread-safe, it can handle multiple users at the same time. But you need to be careful with shared data.
For example, if two users try to change the same variable, you must use a Mutex to make it safe.
mutex = Mutex.new
count = 0
5.times.map do
Thread.new do
mutex.synchronize do
count += 1
end
end
end.each(&:join)
puts count # => Output will be 5, safely updated
Without the mutex, the final number could be wrong because multiple threads might change the variable at the same time.
4. Avoiding thread-unsafe code
Let’s say you define a class variable (which is shared across requests). That’s not thread-safe:
# ❌ Not thread-safe
class Counter
@@count = 0
def self.increment
@@count += 1
end
end
Now here’s a safer version using instance variables and a mutex:
# ✅ Thread-safe
class Counter
def initialize
@count = 0
@mutex = Mutex.new
end
def increment
@mutex.synchronize { @count += 1 }
end
end
This version ensures that only one thread at a time can update the counter — avoiding errors.
🔁 Alternative Concepts
- Use a multithreaded server like Puma
- Use ActiveJob and background workers for long-running tasks
- Process-based servers like Unicorn (but with less concurrency)
❓ General Questions & Answers
Q1: What is config.threadsafe! used for?
A: It was used in older versions of Rails (like Rails 3) to prepare the application to handle multiple requests at the same time using threads.
When you added config.threadsafe!, Rails would:
- Load all your application code in memory up front (this is called "eager loading").
- Turn off automatic class reloading (which is not safe for threaded environments).
- Enable internal caching to improve performance.
Q2: Do I need to use config.threadsafe! in Rails 6 or 7?
A: No. In Rails 5 and newer, thread safety is built in automatically.
You don’t need to call config.threadsafe! manually anymore.
Instead, just make sure:
- Your web server supports threads (e.g., use Puma).
- Your code does not use shared global variables or class variables.
Q3: What happens if my app is not thread-safe?
A: If your app is not thread-safe, then multiple users accessing your app at the same time may accidentally change the same data. This can cause serious bugs like:
- Incorrect values saved in the database
- App crashes or random behavior
- Security issues due to shared data leaks
Q4: What is the difference between threads and processes?
A: A process is like a full program with its own memory space. A thread is a smaller unit inside a process that shares memory with other threads.
Example: If 5 users visit your app:
- With processes (like Unicorn), each user gets a separate program (more memory).
- With threads (like Puma), one program handles all users by switching between threads (less memory, faster).
Threads share memory, so they need synchronization tools (like Mutex) to keep things safe.
Q5: What’s the benefit of thread safety in Rails?
A: Thread safety makes your Rails app faster and able to serve more users at the same time without crashing or slowing down. It:
- Reduces the number of servers you need
- Makes better use of CPU and memory
- Improves performance and scalability
🛠️ Technical Questions & Answers
Q1: How do I configure my Rails app to be thread-safe in modern versions?
A: In Rails 5 and above, you don't need to call config.threadsafe!. Thread safety is enabled by default.
You just need to:
- Use a thread-safe web server like Puma
- Set threads properly in config/puma.rb
- Ensure your own code is thread-safe (no shared mutable state)
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
workers 2
preload_app!
This setup allows Rails to handle 2 workers × 5 threads = 10 concurrent requests.
Q2: What kind of code is not thread-safe in Rails?
A: Code that uses shared class variables, global variables, or caches without protection. For example:
# ❌ Not thread-safe
class Tracker
@@counter = 0
def self.increase
@@counter += 1
end
end
This can lead to race conditions when accessed by multiple threads at the same time.
✅ Solution: Use instance variables and synchronize access with Mutex.
class Tracker
def initialize
@counter = 0
@mutex = Mutex.new
end
def increase
@mutex.synchronize { @counter += 1 }
end
end
Q3: How do I use ActiveRecord safely with threads?
A: Each thread should check out its own database connection from the connection pool. Use ActiveRecord::Base.connection_pool.with_connection.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.create(name: "Threaded User")
end
end
This ensures that the thread doesn’t interfere with others and avoids connection leaks.
Q4: How do I know if my app is thread-safe?
A: Use tools like rack-mini-profiler, NewRelic, or Scout to trace requests. You can also:
- Use concurrency testing tools like Apache Bench (ab) or wrk
- Write integration tests that simulate multiple users accessing your app at the same time
Q5: How does config.threadsafe! actually change Rails behavior internally?
A: In older Rails (e.g., 3.x), this setting:
- Enabled ActionController::Base.allow_concurrency = true
- Turned on cache_classes = true
- Turned off auto-reloading of files
✅ Best Practices with Examples
1. Use a Threaded Server (like Puma)
Puma is the default server in modern Rails and supports multithreading. It can handle multiple requests at the same time with fewer resources.
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
workers 2
preload_app!
This means: 2 workers × 5 threads = 10 requests handled in parallel.
2. Avoid Global or Class Variables
Shared class variables can be changed by multiple threads at the same time, leading to bugs. Instead, use instance variables and keep data local to the request.
# ❌ Not thread-safe
@@shared_counter += 1
# ✅ Thread-safe
@counter ||= 0
@counter += 1
3. Use Mutex for Shared Resources
If you must share data between threads, wrap access with a Mutex to make it safe.
mutex = Mutex.new
count = 0
threads = 3.times.map do
Thread.new do
mutex.synchronize do
count += 1
end
end
end
threads.each(&:join)
puts count # => 3
4. Use Database Connection Pool Properly
Each thread should use its own connection from the ActiveRecord connection pool.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.create(name: "From Thread")
end
end
Never share a DB connection across threads manually.
5. Use Background Jobs for Long Tasks
Avoid running long tasks (like sending emails) inside threads in controllers. Use Sidekiq or ActiveJob instead.
# ❌ Not ideal: Thread in controller
Thread.new { UserMailer.notify(user).deliver_now }
# ✅ Better
NotificationJob.perform_later(user.id)
6. Set eager_load = true in Production
This loads all classes at startup and prevents them from being reloaded during a request, which is important for thread safety.
# config/environments/production.rb
config.eager_load = true
7. Log Thread Activity (Optional)
For debugging or monitoring, log the start and end of threads to trace issues.
Thread.new do
Rails.logger.info "Thread started: #{Time.now}"
# do some work...
Rails.logger.info "Thread ended: #{Time.now}"
end
🌍 Real-world Scenario
Imagine you have a Rails 3 e-commerce application where hundreds of users visit your site during a sale. The app uses a single-threaded server and handles one request at a time. As traffic increases, users start to see slow response times and timeouts.
To improve performance, your team decides to enable multithreading. You update the server to Puma and configure Rails like this:
# config/environments/production.rb
config.threadsafe!
# config/puma.rb (for newer Rails apps)
threads 5, 5
workers 2
With this setup:
- Rails loads all code once at startup (eager loading)
- No auto-reloading happens during requests (which is unsafe for threads)
- Each Puma worker can now handle 5 requests at the same time
After deploying this change, performance improves:
- Users see faster page loads
- The app handles more traffic without crashing
- You save server cost because fewer processes are needed
🔁 In modern Rails (5+), this happens automatically — no need to call config.threadsafe!
You only need to configure your server correctly and write thread-safe code.
Writing Thread-Safe Code in Rails
🧠 Detailed Explanation
When your Rails app uses a multi-threaded server like Puma, it can handle more than one request at the same time. This is great for speed — but it can cause problems if two users try to use the same data at the same time.
For example, if two people visit your site and both hit a method that changes a shared variable (like a counter), they might change it at the same time — and the result could be wrong or random.
This is called a race condition. It happens when multiple threads (users) try to use or change the same thing at once.
To avoid this, your code should be thread-safe. That means:
- ✅ Each thread works with its own data
- ✅ Shared data is protected using tools like Mutex
- ✅ Avoid using global variables (like $global) or class variables (like @@counter)
Instead of storing shared values in memory, you should:
- Use local variables inside methods (they're safe)
- Use the database or Redis to store shared values
- Use Rails.cache.increment to safely update counters
So, thread-safe code simply means writing code in a way that each request (thread) runs independently and doesn’t break when many people use your app at once.
💡 Examples
1. ❌ Not Thread-Safe: Using a global variable
This is a global variable. It’s shared across your whole app. If two users change it at the same time, it can break.
$counter = 0
def increase
$counter += 1
end
❗ Problem: If multiple threads call increase, they will use and change the same $counter, which can give the wrong number.
2. ❌ Not Thread-Safe: Using a class variable
Class variables (with @@) are also shared between all users and threads.
class Cart
@@total_items = 0
def self.add_item
@@total_items += 1
end
end
❗ Problem: Two users adding items at the same time may cause @@total_items to update incorrectly.
3. ✅ Thread-Safe: Use local variables
Local variables (defined with regular =) live only inside the method. Each thread has its own copy, so it's safe.
def calculate_total
total = 0
items.each do |item|
total += item.price
end
total
end
✅ Safe: Each request gets its own total variable — no sharing, no conflict.
4. ✅ Thread-Safe: Use Mutex to protect shared data
If you really need to share something (like a counter), wrap it in a Mutex to make sure only one thread can change it at a time.
class SafeCounter
@count = 0
@mutex = Mutex.new
def self.increment
@mutex.synchronize do
@count += 1
end
end
end
✅ Safe: Mutex locks the code so only one user at a time can change the counter.
5. ✅ Use Rails.cache or Redis to store shared data
Instead of keeping shared data in memory, store it in something external like Rails.cache (uses Redis or Memcached).
Rails.cache.write("views", 0)
Rails.cache.increment("views") # safely increases the counter
✅ Safe: Each thread works through Redis, which is built to handle many users safely.
🔁 Alternative Concepts
- Use ActiveJob or Sidekiq for async tasks
- Use Redis or external caches for shared counters
- Store state in the database rather than memory
❓ General Questions & Answers
Q1: What does “thread-safe code” mean?
A: Thread-safe code means your code will work correctly even if many people are using your app at the same time.
When a Rails server like Puma handles multiple requests at once using threads, those threads can sometimes access or change the same data.
If your code is not careful, that shared data can get corrupted or behave unpredictably.
So, thread-safe code avoids this by:
- Not using shared/global variables
- Keeping data local to each thread
- Using locks (like Mutex) for shared access
Q2: Why should I avoid global or class variables?
A: Because global variables ($var) and class variables (@@var) are shared across all threads.
If two users try to change the same variable at the same time, it can lead to bugs — like counting wrong, losing data, or showing the wrong result.
Example: If one user adds an item to their cart, and another user does it at the same time, they might accidentally change the same variable and break each other’s carts.
Q3: Are local variables thread-safe?
A: Yes ✅. Local variables (created inside a method using just =) are safe because each thread (user request) gets its own copy. They are not shared between users or threads.
Example:
def calculate
total = 0
total += 5
end
The total here is used only in this method and disappears after the request — so it’s safe.
Q4: How do I safely use shared data in threads?
A: If you must use shared data, use a Mutex to control access. This tells one thread to wait while another is using the data.
Example:
@mutex = Mutex.new
@mutex.synchronize do
@counter += 1
end
This makes sure only one thread at a time updates @counter, so it doesn’t break.
Q5: Where should I store shared counters or totals safely?
A: You should store them in a place made for multiple users — like:
- Rails.cache (works with Redis or Memcached)
- Database (e.g., a table with counter columns)
- Redis counter keys
Example:
Rails.cache.increment("total_views") # thread-safe counting
🛠️ Technical Questions & Answers
Q1: How can I protect a shared variable in Ruby?
A: You can use a Mutex (a mutual exclusion lock) to make sure only one thread can use the variable at a time.
mutex = Mutex.new
shared_count = 0
threads = 5.times.map do
Thread.new do
mutex.synchronize do
shared_count += 1
end
end
end
threads.each(&:join)
puts shared_count # Output will always be 5
✅ Safe: The mutex.synchronize block makes sure no two threads update shared_count at the same time.
Q2: How do I use Redis or Rails.cache for shared counters?
A: Redis and Rails.cache are both thread-safe stores. You can safely increase counters using increment.
# config/environments/production.rb
# config.cache_store = :redis_cache_store  (use Redis or Memcached as the cache store)
Rails.cache.write("page_views", 0)
Rails.cache.increment("page_views")
✅ This is safer than using in-memory Ruby variables, because Redis handles concurrent requests properly.
Q3: Can I use ActiveRecord in multiple threads?
A: Yes — but each thread must get its own database connection from the connection pool.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.create(name: "Threaded user")
end
end
✅ Safe: This ensures that each thread checks out a clean connection and avoids connection leaks or conflicts.
Q4: How can I test if my code is thread-safe?
A: You can simulate multiple requests at once using tools like the following (example command after this list):
- Apache Bench (ab)
- wrk
- Rails system tests that run parallel sessions
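For example, a quick smoke test with ab against a local server:
# 200 total requests, 20 in flight at once
ab -n 200 -c 20 http://localhost:3000/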
Q5: Are controller instance variables thread-safe?
A: Yes ✅. In Rails, every request gets its own instance of the controller, so instance variables like @user are not shared between threads.
class DashboardController < ApplicationController
def show
@user = current_user # this is safe
end
end
❗ But class variables (like @@user) are not safe, because they're shared across threads.
✅ Best Practices with Examples
1. ❌ Avoid global ($var) and class variables (@@var)
These are shared by all threads. If two users change them at the same time, they may cause bugs or incorrect results.
# ❌ Not thread-safe
$counter = 0
@@cart_items = []
Use local or instance variables instead:
# ✅ Thread-safe
def add_to_cart
cart_items = []
cart_items << "item"
end
2. ✅ Use Mutex to lock shared resources
If you must share a variable (like a counter), wrap it in a Mutex so only one thread can update it at a time.
mutex = Mutex.new
total = 0
threads = 3.times.map do
Thread.new do
mutex.synchronize do
total += 1
end
end
end
threads.each(&:join)
puts total # Always 3
3. ✅ Use thread-safe caches like Redis or Rails.cache
For shared counters or values across users, use Rails.cache or Redis — they’re built for concurrency.
Rails.cache.write("likes", 0)
Rails.cache.increment("likes")
4. ✅ Use the database for shared application state
The database is already safe for multiple users. For shared data like user scores or page views, store them in the DB, and make the update itself atomic so two requests can't read the same old value.
# ✅ Atomic, thread-safe increment (no read-modify-write race)
User.increment_counter(:score, 1)
# or: User.where(id: 1).update_all("score = score + 1")
5. ✅ Use ActiveRecord::Base.connection_pool.with_connection in threads
When using threads that access the database, always check out a proper connection from the pool.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.create(name: "Thread-safe user")
end
end
6. ✅ Use local variables inside controller actions
Controller instance variables like @user are safe because each request has its own controller object.
class ProfilesController < ApplicationController
def show
@user = current_user # thread-safe
end
end
7. ✅ Log thread activity during debugging
You can add logs to help trace which thread is doing what — useful in production debugging.
Thread.new do
Rails.logger.info "Thread started: #{Time.now}"
perform_task
Rails.logger.info "Thread ended: #{Time.now}"
end
🌍 Real-world Scenario
Imagine you're building a Rails web app for an online quiz game. Every time a user answers a question correctly, you want to increase the total score for the team.
You first try something like this:
# ❌ Not thread-safe
class Scoreboard
@@team_score = 0
def self.add_point
@@team_score += 1
end
end
This works fine when only one person is playing. But during a live competition, 100 users are answering questions at the same time. Suddenly, you notice:
- Points are being lost or skipped
- @@team_score is sometimes lower than expected
- Some requests crash randomly
Why? Multiple threads were updating @@team_score at the same time — causing a race condition.
✅ The Fix: Use Redis or a thread-safe cache to store and update the score:
# Using Rails.cache (thread-safe with Redis or Memcached)
Rails.cache.write("team_score", 0) unless Rails.cache.exist?("team_score")
Rails.cache.increment("team_score")
Now, even if 100 players score at the same time, the total is always correct.
This is a real example of how writing thread-safe code avoids bugs and keeps your app stable under heavy traffic.
threads min, max in puma.rb
🧠 Detailed Explanation
When a Rails app runs on the Puma web server, it can handle multiple requests at the same time using something called threads.
In your config/puma.rb file, you will see a line like this:
threads MIN, MAX
This line controls how many threads Puma will use to handle incoming requests:
- MIN is the smallest number of threads Puma keeps ready, even when traffic is low.
- MAX is the highest number of threads Puma can use when traffic is high.
🧠 Think of threads like people standing at a help desk:
- If you set threads 1, 5, Puma starts with 1 person ready to help users.
- As more users come in, Puma adds more people — up to 5 — to help at the same time.
This helps your app handle busy times better — without using too much memory when things are quiet.
For example:
threads 5, 5
👆 This means Puma will always keep exactly 5 threads running — even when the app is idle.
Another example:
threads 1, 8
👆 This means Puma starts with 1 thread and increases to 8 threads only if needed.
The more threads you allow, the more users your app can serve at once — but too many threads can use up your memory or database connections, so be careful!
💡 Examples
1. Fixed thread count (same min and max)
# config/puma.rb
threads 5, 5
✅ This means Puma will always use exactly 5 threads per worker — no more, no less. Each thread can handle 1 request at a time.
So if you get 5 user requests at once, each one is handled in parallel. If 6 people visit your site at the same time, the 6th will wait until a thread is free.
2. Dynamic scaling: fewer threads when idle, more under load
threads 1, 8
✅ This means Puma will start with 1 thread when things are quiet, and grow to 8 threads if traffic increases. It saves memory when not needed and helps scale up when more users come.
3. Using ENV variables for flexibility
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
✅ This lets you set the thread count using environment variables (from Heroku, Docker, etc.). If no ENV variable is set, it uses 5 by default.
This is a best practice for making your app easier to configure in different environments (development, staging, production).
4. Using different min and max values from ENV
min_threads = ENV.fetch("RAILS_MIN_THREADS") { 2 }
max_threads = ENV.fetch("RAILS_MAX_THREADS") { 8 }
threads min_threads, max_threads
✅ This allows Puma to scale threads between 2 and 8 based on traffic. It’s ideal for apps with variable load.
5. Matching threads with database pool
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# config/database.yml
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
✅ Always match your thread count with the database connection pool size. Otherwise, your app might crash under load because threads can’t get DB access.
🔁 Alternative Concepts
- Use more workers (processes) if you have many CPUs
- Combine threads + workers for balanced concurrency
- Use async methods or background jobs for long tasks
❓ General Questions & Answers
Q1: What do the numbers in threads min, max mean?
A: These numbers tell Puma how many threads it should use to process requests.
- min: the minimum number of threads Puma keeps ready (even if traffic is low)
- max: the maximum number of threads Puma can grow to if traffic gets busy
Q2: Why should I set both min and max?
A: Setting both helps Puma scale smartly:
- If your app is idle, Puma won’t use too many resources (min threads)
- If your app is busy, Puma can handle more users at once (up to max threads)
Q3: What if I set both min and max to the same number?
A: Then Puma will always use that exact number of threads, no more and no less.
This is useful when you know your app has consistent traffic, or when you want predictable resource usage.
Q4: How many threads should I use for my app?
A: It depends on your app:
- If your app waits for external things (like databases or APIs), use more threads (e.g., 5–16)
- If your app does heavy computations, use fewer threads to avoid overloading your CPU
Tip: Start with threads 5, 5 and adjust based on testing.
Q5: Do threads affect the database?
A: Yes. Each thread can open a database connection, so your database.yml must allow enough connections.
If you allow 5 threads, your database pool should also allow at least 5 connections:
# config/database.yml
pool: 5
If you don’t match these values, you might get "too many connections" or timeout errors.
🛠️ Technical Questions & Answers
Q1: How do I configure thread values using environment variables?
A: Using environment variables makes your thread settings flexible for different environments like development, staging, or production.
# config/puma.rb
min_threads = ENV.fetch("RAILS_MIN_THREADS") { 1 }
max_threads = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads min_threads, max_threads
✅ This allows you to change thread settings without editing code. Just set the variables when you deploy:
# in terminal or .env file
RAILS_MIN_THREADS=2
RAILS_MAX_THREADS=8
Q2: What happens if I set more Puma threads than my database pool?
A: You’ll run into connection errors. Each Puma thread may need its own database connection. If your thread count is higher than your database connection pool, some threads will wait or fail.
# config/puma.rb
threads 5, 5
# config/database.yml
pool: 3 # ❌ Too small
✅ Solution: Match your thread count to your DB pool:
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
Q3: Can I use threads with multiple Puma workers?
A: Yes! Threads handle multiple requests per process. Workers multiply your processes.
Example: If you set threads 5, 5 and workers 2, you get 2 × 5 = 10 total concurrent requests.
threads 5, 5
workers 2
This gives better performance on multi-core CPUs, where each worker runs on a separate core.
Q4: How can I monitor or test Puma threads in production?
A: You can use monitoring tools like:
- NewRelic
- Scout APM
- Custom logs with Rails.logger.info to track thread usage
Thread.new do
Rails.logger.info "Thread #{Thread.current.object_id} started"
# do work
end
Q5: Is there a way to simulate high traffic and test thread behavior locally?
A: Yes! Use command-line tools like:
- ab (Apache Bench)
- wrk
- siege
Example with ab:
ab -n 100 -c 10 http://localhost:3000/
This simulates 100 requests with 10 running at the same time — useful to see how your thread settings perform.
✅ Best Practices with Examples
1. ✅ Use ENV variables to configure threads
This makes your app flexible for different environments (development, staging, production) without changing code.
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# .env or hosting environment
RAILS_MAX_THREADS=8
2. ✅ Match threads to database pool size
If you allow more threads than your database pool can handle, your app may throw connection errors.
# Puma threads
threads 5, 5
# Database pool (match this number)
pool: 5
3. ✅ Start with threads 5, 5 for most apps
This is a safe, recommended starting point. It gives enough concurrency without using too many resources.
threads 5, 5
You can increase this later after performance testing.
4. ✅ Combine threads + workers for best performance
Threads = concurrency in one process. Workers = multiple processes. Use both for high-traffic apps.
threads 5, 5
workers 2
This setup can handle up to 10 requests at once (2 workers × 5 threads each).
5. ✅ Monitor thread usage in production
Use tools like NewRelic, Scout APM, or simple logs to make sure threads are not overused or idle.
Rails.logger.info "Running in Thread ID: #{Thread.current.object_id}"
6. ✅ Use lower min threads if memory is limited
If you’re on a small server, keep the min threads low to save memory:
threads 1, 5
7. ✅ Load test before increasing thread count
More threads ≠ better performance. If your app does a lot of database or heavy CPU work, too many threads can slow it down.
Always test with tools like ab or wrk.
🌍 Real-world Scenario
A growing startup launched a Rails-based API used by mobile apps for ride booking. During development, they had this in their puma.rb:
threads 1, 1
workers 1
Everything seemed fine — until they launched their app and got hit with 100 users at once. Suddenly:
- Requests were slow
- Many users saw timeouts
- The server CPU usage was low, but performance was terrible
Problem: Only one thread could handle one request at a time. All others were forced to wait in line.
✅ The team updated their configuration like this:
# config/puma.rb
threads 5, 16
workers 2
That gave them up to 2 × 16 = 32 concurrent threads, with each worker process able to grow its threads as needed. They also updated their database pool size to match.
# config/database.yml
pool: 16
🔧 After deployment:
- Response times dropped by 70%
- No more timeout errors
- The app scaled smoothly during peak hours
🧠 This example shows why tuning threads min, max based on real traffic is important — too few and your app feels broken, too many and you may waste resources.
Difference Between Workers and Threads in Puma
🧠 Detailed Explanation
Puma is the default web server for Rails. It is designed to handle many users at the same time by using two things: workers and threads.
✅ Workers are full copies of your app running in separate processes (just like running the app multiple times). Each worker uses its own memory and runs independently.
✅ Threads are smaller units inside each worker. Each thread can handle one user request at a time. Threads share memory with their worker and are lighter than workers.
🧠 Simple Analogy:
- 🔪 Worker = One kitchen
- 👨🍳 Thread = A chef in that kitchen
So when you write this in your puma.rb:
workers 2
threads 5, 5
That means:
- 2 workers = 2 separate app processes
- Each can handle up to 5 threads = 5 user requests at once
- Total = 2 × 5 = 10 requests at the same time
This setup helps your Rails app serve more users smoothly, especially during high traffic. Workers use more memory but give stability; threads are fast and use less memory.
💡 Examples
1. Basic Example: 1 worker, 5 threads
# config/puma.rb
workers 1
threads 5, 5
✅ This means:
- Only 1 worker (1 process)
- Can handle up to 5 requests at the same time
2. Scaled Example: 2 workers, 5 threads
workers 2
threads 5, 5
✅ This setup means:
- 2 separate processes (workers)
- Each process can handle 5 requests at the same time
- Total = 10 simultaneous user requests
3. Dynamic Threads with Environment Variables
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads threads_count, threads_count
preload_app!
✅ This allows you to set thread and worker counts dynamically using environment variables, like:
RAILS_MAX_THREADS=8
WEB_CONCURRENCY=3
You can scale up or down without changing code — great for Docker, Heroku, or CI/CD environments.
4. Matching threads with database pool
If each worker uses 5 threads, you must ensure your database has enough connections.
# config/puma.rb
workers 2
threads 5, 5
# config/database.yml
pool: 5
❌ Careful: the pool setting applies per process, so pool: 5 actually covers each worker's 5 threads. What can run out is the database server's total connection limit: 2 workers × 5 threads = up to 10 connections.
✅ Keep the per-process pool ≥ max threads per worker, and make sure your database server accepts workers × threads connections in total:
# config/database.yml
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
5. Max performance on a 4-core server
workers 4
threads 5, 10
✅ Uses 4 CPU cores (1 worker per core) and allows 5–10 threads per worker. This setup can serve up to 40 requests at the same time (4 × 10) — perfect for busy production apps.
🔁 Alternative Concepts
- Use Sidekiq for background jobs
- Use caching to reduce thread load
- Use async JavaScript to offload API load
❓ General Questions & Answers
Q1: What is the difference between a worker and a thread?
A: A worker is a full copy of your app running in its own process. It uses more memory and can crash/restart independently. A thread is like a helper inside a worker — it runs in the same process and can handle one user request at a time.
💡 Easy analogy: A worker is like a restaurant kitchen. A thread is a chef in that kitchen. More workers = more kitchens. More threads = more chefs.
Q2: Which one should I increase — workers or threads?
A: It depends:
- If your server has more CPU cores → increase workers
- If your app waits on APIs or databases (I/O-bound) → increase threads
Q3: Are workers or threads better for scaling?
A: Workers help with stability and crash protection (each runs separately). Threads help with performance and memory savings (they share the same app process).
👉 Use workers when you want isolation. Use threads when you want to handle more requests with fewer resources.
Q4: Can I run Puma with just threads and no workers?
A: Yes ✅. For example:
workers 0
threads 5, 5
This means: one process with 5 threads. It’s great for development or low-traffic apps.
Q5: How do I know how many threads or workers I need?
A: There’s no single answer — but a good starting point is:
- Use workers = number of CPU cores
- Use threads = 5 (or up to your DB pool limit)
🛠️ Technical Questions & Answers
Q1: How do I calculate total concurrency using workers and threads?
A: Total concurrency = workers × max threads
.
This is the number of simultaneous requests your app can serve.
# Example
workers 3
threads 5, 10
# Total = 3 × 10 = 30 concurrent requests
✅ Use this to size your app correctly based on traffic and CPU cores.
Q2: Can I preload the app for all workers to save memory?
A: Yes! Use preload_app! in your puma.rb file.
It loads your Rails app once, then forks workers. This saves memory using Copy-On-Write.
# config/puma.rb
preload_app!
✅ Especially useful when using 2+ workers in production.
Q3: How do I prevent database connection errors with threads?
A: Make sure the DB connection pool is equal to or greater than your max thread count per worker.
# puma.rb
threads 5, 5
# database.yml
pool: 5 # ✅ Should match threads
❌ If you use more threads than DB pool connections, some threads will wait or error out.
Q4: Can I change workers and threads without restarting the server?
A: You can’t change workers live, but threads can scale within their min-max range dynamically.
For full concurrency changes, you must restart the Puma server.
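If Puma was started with a control server or state file, a phased restart replaces workers one at a time so requests keep flowing while new code loads; for example:
bundle exec pumactl phased-restart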
Q5: How do I set workers and threads using environment variables for deployment?
A: This makes your app portable and CI/CD-friendly. Example:
# config/puma.rb
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
preload_app!
# .env or hosting env
WEB_CONCURRENCY=4
RAILS_MAX_THREADS=8
✅ Now you can change scaling on the fly by updating your deployment settings.
✅ Best Practices with Examples
1. ✅ Match thread count with database pool size
Every Puma thread may use one database connection. If you don’t match your DB pool size with threads, some requests will fail.
# config/puma.rb
threads 5, 5
# config/database.yml
pool: 5 # ✅ Match this to max threads
2. ✅ Use 1 worker per CPU core
Each Puma worker is a separate process. To make full use of your server, match workers to available cores.
# On a 4-core server:
workers 4
3. ✅ Combine threads + workers for best performance
Threads = handle multiple requests in each worker. Workers = isolate failures, use CPU cores better.
workers 2
threads 5, 10
# Can handle up to 2 × 10 = 20 requests at once
4. ✅ Use preload_app!
to save memory with workers
This loads your app once, then forks workers — reducing memory usage by sharing loaded code (Copy-on-Write).
# config/puma.rb
preload_app!
5. ✅ Use ENV variables to manage workers and threads
This allows easy configuration for different environments (local, staging, production) without changing code.
# config/puma.rb
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
6. ✅ Log thread usage for debugging
You can log the thread ID to see if threads are working as expected under load.
Rails.logger.info "Running in thread: #{Thread.current.object_id}"
7. ✅ Use fewer threads for CPU-heavy apps, more for I/O-heavy apps
- CPU-heavy tasks (math, file processing): use fewer threads to avoid CPU contention - I/O-heavy apps (APIs, DB, external requests): use more threads to stay responsive
🌍 Real-world Scenario
A SaaS company launched a Rails app that helps users generate invoices. They deployed it on a server with 4 CPU cores and 2GB RAM.
Initially, their puma.rb
looked like this:
workers 1
threads 5, 5
This worked fine for low traffic. But as their user base grew, problems appeared:
- Pages started loading slowly during peak hours
- Background jobs started to pile up
- CPU usage remained low, meaning Puma wasn’t using the full power of the server
What they changed:
# config/puma.rb
workers 4 # one worker per CPU core
threads 5, 10 # dynamic scaling based on load
preload_app!
They also updated their database pool to support more concurrent threads:
# config/database.yml
pool: 10
✅ Result:
- The app could handle 40 requests at the same time (4 workers × 10 threads)
- Response time improved by 60%
- No more dropped requests during high traffic
This shows how the right combination of workers (for CPU) and threads (for concurrency) can dramatically improve app performance and stability.
Preload App and Copy-on-Write Optimization in Puma
🧠 Detailed Explanation
When your Rails app runs on Puma, it can create multiple workers. These workers are like separate copies of your app that run at the same time to handle more users.
Normally, each worker loads the app by itself. But if you use preload_app! in your puma.rb file, it tells Puma:
“Load the app once first, then copy that into each worker instead of loading it again.”
This helps save memory using a smart feature called Copy-on-Write (CoW).
🧠 What is Copy-on-Write?
It means workers share memory from the main app process — until they need to change something. If they change it, only then is new memory used.
✅ Result: your app uses much less memory because most of it is shared.
When should you use preload_app!?
- When you use workers in Puma (multiple processes)
- When you want to reduce memory usage
- When you want your app to boot faster and scale better
🧪 Without it, each worker loads everything separately = more memory used.
🚀 With it, workers reuse the same memory = faster and lighter app.
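To verify the saving on your own server, compare each Puma process's resident memory (RSS) with and without preload_app!; for example:
# Linux/macOS: list Puma processes with their memory use (RSS, in KB)
ps ax -o pid,ppid,rss,command | grep [p]uma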
💡 Examples
1. Without preload_app! (more memory used)
# config/puma.rb
workers 2
threads 5, 5
# No preload_app! used
❌ Each worker loads the entire Rails app separately. If your app uses 400MB of memory, this means 400MB × 2 = 800MB used.
2. With preload_app! (memory shared)
# config/puma.rb
workers 2
threads 5, 5
preload_app!
✅ Now the app loads once, and both workers use the same memory unless they change it. Memory usage drops — e.g., from 800MB to 500MB total.
3. Full optimized configuration with environment variables
# config/puma.rb
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
preload_app!
✅ This is a best-practice setup for production. You can easily change the number of workers or threads without editing code — just set ENV variables.
# In terminal, .env file, or platform config
WEB_CONCURRENCY=3
RAILS_MAX_THREADS=6
4. When preload_app! is unnecessary
# config/puma.rb
workers 0
threads 5, 5
# preload_app! is not needed
✅ If you’re not using any workers (just threads), then preload_app! doesn’t do anything — because there's nothing to preload for multiple processes.
5. Logging to confirm memory sharing
You can add logging to see which memory is used in the parent vs workers:
before_fork do
puts "Master process: preloading app..."
end
on_worker_boot do
puts "Worker process started"
end
✅ This helps confirm that the app is loaded before forking (Copy-on-Write will be used).
🔁 Alternative Concepts
- Using a single worker and more threads (no forking)
- JRuby mode (threads only, no fork, so no CoW)
- Process managers like Phusion Passenger (built-in CoW support)
❓ General Questions & Answers
Q1: What is preload_app!
in Puma?
A: It’s a setting in config/puma.rb
that tells Puma to load the Rails app before creating worker processes.
This means all the workers can share the loaded memory instead of loading the app again from scratch.
Q2: What is Copy-on-Write (CoW)?
A: Copy-on-Write is a memory-saving trick used by operating systems. When a process (like Puma) is copied (forked), the new process doesn't copy all the memory — it shares it until something is changed. So multiple Puma workers can reuse the same memory until they actually need to change it.
Q3: Does preload_app!
work without multiple workers?
A: No. It only makes sense when you're using multiple workers
because workers are separate processes.
If you only use threads (inside one process), there’s no benefit from preloading the app — there’s nothing to share.
Q4: How much memory can preload_app! actually save?
A: It depends on how big your Rails app is. For a medium-sized app with gems and libraries, preload_app! can save 20–40% of RAM. For example, instead of using 1.2GB with 3 workers, you might only use 700–800MB.
Q5: Do I need to configure anything else for Copy-on-Write to work?
A: No special configuration is needed, but:
- You must use preload_app!
- Your system should support Copy-on-Write (Linux/macOS do)
- It works best if you avoid loading lots of mutable data before forking (see the optional tweak below)
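One optional tweak on that last point, sketched below for Ruby 2.7+ (an assumption about your Ruby version): compacting the heap in the master before forking leaves fewer fragmented pages for the GC to dirty later, which improves sharing.
# config/puma.rb
before_fork do
  GC.compact if GC.respond_to?(:compact) # Ruby 2.7+ only
end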
🛠️ Technical Questions & Answers
Q1: How do I enable preload_app!
in Puma?
A: Just add this line in your config/puma.rb
file:
preload_app!
✅ Make sure you also use workers
, since preload only matters when processes are forked.
workers 2
threads 5, 5
preload_app!
Q2: How can I check if Copy-on-Write is working?
A: Monitor your memory before and after enabling preload_app!
:
- Use htop or top in your terminal
- Compare total memory used by Puma workers
✅ You should see less memory used with preload enabled, because workers are sharing memory.
Q3: What is the role of before_fork
and on_worker_boot
when using preload?
A: These hooks help you control what happens before and after forking workers:
before_fork do
puts "Loading app before forking..."
end
on_worker_boot do
ActiveRecord::Base.establish_connection
end
✅ before_fork
runs once (in master), on_worker_boot
runs in each worker — ideal for reconnecting to the database.
Q4: Will preload_app!
work in development?
A: Not really. Development mode in Rails uses class reloading, which doesn’t play well with preload.
You should use preload_app!
only in production.
# config/environments/production.rb
config.eager_load = true
Q5: How does Copy-on-Write behave with background jobs like Sidekiq?
A: preload_app!
only affects Puma (the web server).
Sidekiq runs in its own process and doesn’t fork workers, so Puma’s preload (and Copy-on-Write) doesn’t apply to it. Sidekiq simply loads the full Rails environment once when it boots.
✅ Best Practices with Examples
1. ✅ Always use preload_app!
in production if using multiple workers
It saves memory by sharing loaded code between worker processes.
# config/puma.rb
workers 2
threads 5, 5
preload_app!
2. ✅ Use on_worker_boot
to reconnect to the database after forking
When you preload the app, the DB connection is opened in the master process. Each worker needs its own fresh connection.
on_worker_boot do
ActiveRecord::Base.establish_connection
end
3. ✅ Combine preload_app!
with eager_load = true
This ensures all app classes are loaded before forking, so memory is shared more effectively.
# config/environments/production.rb
config.eager_load = true
4. ✅ Monitor memory usage before and after enabling preload
Use tools like htop
, top
, or AWS CloudWatch to compare memory usage with and without preload_app!
.
📉 You should see a noticeable reduction in RAM usage per worker.
5. ✅ Don’t preload in development
Preloading interferes with Rails class reloading in development, so it should only be used in production.
6. ✅ Set worker and thread count using environment variables for flexibility
This allows you to change scaling settings without touching code.
# config/puma.rb
workers ENV.fetch("WEB_CONCURRENCY") { 2 }
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
preload_app!
# .env or server config
WEB_CONCURRENCY=3
RAILS_MAX_THREADS=6
🌍 Real-world Scenario
A small SaaS company deployed their Rails application using Puma with the following configuration:
# config/puma.rb
workers 3
threads 5, 5
# preload_app! was NOT included
The app worked fine for a few users, but as traffic grew, the team noticed:
- High memory usage — each Puma worker was using 400MB+
- CPU usage was normal, but the server crashed due to memory exhaustion
- Response time was slower than expected after deploys
What they changed:
# config/puma.rb
workers 3
threads 5, 5
preload_app!
on_worker_boot do
ActiveRecord::Base.establish_connection
end
✅ After adding preload_app!
, the master process loaded the app once and forked it into 3 workers.
Thanks to Copy-on-Write (CoW), all workers shared the same memory unless modified.
Results:
- Memory usage dropped by 35% (from ~1.2GB to ~780MB total)
- Faster boot time after deployment
- No crashes during peak usage
🧠 This real-world fix shows how adding one line — preload_app!
— can dramatically improve memory performance in production without changing any business logic.
Avoiding Blocking Operations in Threads
🧠 Detailed Explanation
In a Rails app using the Puma server, multiple threads are used to handle many user requests at the same time. But each thread can only handle one thing at a time.
If a thread gets stuck doing something slow — like:
- Waiting for a slow API
- Sleeping with sleep
- Reading or writing large files
— it can’t serve anyone else until that work finishes.
❌ Too many blocked threads = slow app. If all threads are blocked, no users can be served — even if the server looks idle.
✅ That’s why it’s important to avoid blocking operations inside threads — especially in your Rails controllers or background jobs.
🧠 Tip: Instead of making a thread wait, let it finish fast and move long tasks to:
- A background job system (like Sidekiq)
- A non-blocking HTTP or file reader
- A database-level job queue
This keeps your app fast and your threads free to help other users immediately.
💡 Examples
1. ❌ Blocking code using sleep
Thread.new do
sleep(10)
puts "Finished!"
end
❌ This thread will do nothing for 10 seconds. During that time, it can’t serve any user request. If you have only 5 threads and all of them sleep, your app will freeze.
2. ❌ Calling a slow API inside a thread
Thread.new do
result = Net::HTTP.get(URI("https://slowapi.com/data"))
puts result
end
❌ This thread will stay blocked while waiting for a slow network response. If it takes 5 seconds, that thread is unusable for 5 seconds.
3. ✅ Better: Move to a background job
Use Sidekiq
or ActiveJob
to move blocking work outside the web thread.
# In controller
def notify
NotificationJob.perform_later(current_user.id)
render json: { status: "Queued" }
end
# app/jobs/notification_job.rb
class NotificationJob < ApplicationJob
def perform(user_id)
user = User.find(user_id)
NotificationMailer.welcome(user).deliver_now
end
end
✅ Threads stay free to serve users — email sending happens in the background.
4. ✅ Use a non-blocking HTTP client (httpx)
require 'httpx'
# one call waits for its response, but httpx can run several URLs concurrently (see the sketch below)
response = HTTPX.get("https://api.fast.com")
puts response.to_s
✅ Libraries like httpx
or typhoeus
allow multiple HTTP calls to happen at the same time without freezing the thread.
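For instance, a hedged sketch (the URLs are placeholders): passing several URLs to one HTTPX.get call lets httpx run the requests concurrently and return all the responses together.
require 'httpx'

# Both requests are in flight at the same time; the call returns when both finish.
responses = HTTPX.get("https://api.example.com/a", "https://api.example.com/b")
responses.each { |res| puts res.status }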
5. ❌ Reading a large file inside a thread
Thread.new do
File.read("big_file.csv") # ⛔ blocks while reading
end
✅ Solution: Use a background job or stream it in small chunks using IO.foreach
or async libraries.
🔁 Alternative Concepts
- Use background jobs (Sidekiq, ActiveJob)
- Use non-blocking I/O libraries like `httpx`, `async-http`
- Break long tasks into smaller async operations
❓ General Questions & Answers
Q1: What is a blocking operation?
A: A blocking operation is something that makes a thread stop and wait — it can’t do anything else until that task is done. Examples of blocking operations include:
- Calling sleep
- Waiting for a slow HTTP API
- Reading a large file
- Waiting for a heavy database query to finish
Q2: Why are blocking operations a problem in Rails (Puma)?
A: Puma uses threads to serve requests. If a thread is blocked, it’s stuck and can’t serve other requests.
If too many threads get blocked, your app feels slow or stops responding.
✅ You should avoid blocking tasks in your controller actions and move them to background jobs instead.
Q3: What’s the difference between blocking and non-blocking code?
A:
- Blocking code: Waits until a task finishes before moving on (e.g. `sleep 5`, `Net::HTTP.get`)
- Non-blocking code: Keeps the program moving, even if a task is still in progress (e.g. using `httpx`, background jobs)
Q4: How can I avoid blocking threads in Rails?
A: Use these techniques:
- ✅ Move slow tasks to background jobs (Sidekiq, ActiveJob)
- ✅ Use non-blocking libraries (e.g., `httpx` instead of `Net::HTTP`)
- ✅ Avoid
sleep
or large file processing in controllers
Q5: How do I know if my code is blocking threads?
A: Signs of blocking threads:
- Web requests become slow
- Timeouts under load
- CPU usage low, but app is unresponsive
start = Time.now
do_something_slow
puts "Took #{Time.now - start} seconds"
🛠️ Technical Questions & Answers
Q1: How do I prevent blocking when calling APIs in Rails?
A: Use an asynchronous (non-blocking) HTTP client instead of the default one. For example:
# ❌ Blocking (Net::HTTP or HTTP gem)
response = HTTP.get("https://slow-api.com")
# ✅ Concurrent (httpx gem): several URLs in one call
require 'httpx'
responses = HTTPX.get("https://slow-api.com/a", "https://slow-api.com/b")
✅ With httpx
, multiple requests are multiplexed in one call instead of each request tying up a thread while it waits.
Q2: How do I handle long tasks without blocking the request thread?
A: Offload the task to a background job using Sidekiq or ActiveJob.
# Controller (non-blocking)
MyJob.perform_later(user.id)
render json: { status: "queued" }
# Job
class MyJob < ApplicationJob
def perform(user_id)
# slow task here
end
end
Q3: Can I safely use Thread.new in Rails?
A: You can, but it’s risky in web servers like Puma — especially if you’re doing anything slow inside it.
# ❌ Bad: this blocks the thread
Thread.new do
sleep 10
end
# ✅ Better: just queue a job
SlowTaskJob.perform_later(data)
Only use Thread.new
for lightweight, short-lived work — or avoid it entirely in production.
Q4: How can I stream large files without blocking threads?
A: Use IO.foreach
to read files line by line or stream content using send_data
.
# Streaming CSV instead of reading entire file
def download
response.headers['Content-Type'] = 'text/csv'
self.response_body = Enumerator.new do |yielder|
yielder << "Name,Email\n"
User.find_each do |user|
yielder << "#{user.name},#{user.email}\n"
end
end
end
Q5: How do I find what’s blocking my Puma threads?
A: Use tools like:
- Skylight or NewRelic to trace slow requests
- Insert Time.now logs at the start and end of actions
- Log thread IDs:
puts "Thread: #{Thread.current.object_id}"
✅ These help you find which actions or code lines are keeping threads busy too long.
✅ Best Practices with Examples
1. ✅ Never use sleep
in threads handling web requests
sleep
pauses the thread, which means it can't respond to any user during that time.
# ❌ Bad
Thread.new do
sleep(10)
end
# ✅ Better: use a background job to delay
SomeJob.set(wait: 10.seconds).perform_later
2. ✅ Use background jobs for long or slow tasks
If your controller needs to send emails, run reports, or talk to other services — do it in a background job.
# Controller
ReportJob.perform_later(current_user.id)
# Job
class ReportJob < ApplicationJob
def perform(user_id)
ReportMailer.send_to(User.find(user_id)).deliver_now
end
end
3. ✅ Use non-blocking HTTP clients
Prefer httpx
, typhoeus
, or other async libraries instead of blocking HTTP libraries like Net::HTTP.
require 'httpx'
response = HTTPX.get("https://api.example.com")
4. ✅ Stream large data instead of loading all at once
Reading or sending large files can block a thread. Instead, stream the response.
# Rails example
def download_csv
self.response_body = Enumerator.new do |yielder|
yielder << "Name,Email\n"
User.find_each do |user|
yielder << "#{user.name},#{user.email}\n"
end
end
end
5. ✅ Use logging and profiling to detect blocking
Log how long each action takes and which thread is used. Helps you find bottlenecks.
start = Time.now
puts "Started in thread #{Thread.current.object_id}"
# do something
puts "Finished in #{Time.now - start} seconds"
6. ✅ Keep controller actions fast (under 100ms if possible)
Your controller methods should do the minimum needed. Heavy logic = background job.
# ✅ Keep it simple
def create
UserSignupJob.perform_later(params[:user])
render json: { status: "Queued" }
end
🌍 Real-world Scenario
A small e-commerce startup built a Rails app using the Puma web server. It handled customer orders and sent confirmation emails right from the controller using a background thread:
def checkout
# Save order
@order = Order.create!(order_params)
# Send confirmation email in a thread
Thread.new do
OrderMailer.confirmation(@order).deliver_now
end
render json: { status: "Order placed" }
end
It worked well during development. But once traffic increased (around 100 concurrent users), customers reported:
- Checkout requests were taking too long
- Some orders timed out or failed silently
- Logs showed threads were piling up
Root cause: Each Thread.new
was blocking a Puma thread while it waited for the mailer to finish.
With only 5 threads configured, most were stuck sending emails, leaving no threads to serve new requests.
Fix: They moved the email logic to Sidekiq, a background job processor:
# Controller
OrderMailerJob.perform_later(@order.id)
render json: { status: "Order placed" }
# app/jobs/order_mailer_job.rb
class OrderMailerJob < ApplicationJob
def perform(order_id)
OrderMailer.confirmation(Order.find(order_id)).deliver_now
end
end
Results:
- Response times dropped from 2.5s to 300ms
- No more timeout errors
- Puma threads stayed free to handle new user requests
✅ This scenario shows how avoiding blocking work inside threads can dramatically improve your app's responsiveness and scalability.
Sidekiq (Multi-threaded Job Processor)
🧠 Detailed Explanation
Sidekiq is a background job tool for Ruby and Rails. It helps you move slow tasks — like sending emails or processing files — out of your controller and into the background.
This means your app can respond to users quickly, and the heavy work happens separately.
💡 For example: When a user signs up, instead of making them wait for an email to send, you can use Sidekiq to do that in the background.
How does it work?
Sidekiq uses a fast memory store called Redis to store job data.
Then it starts a process that reads jobs from Redis and runs them.
✅ Unlike some job systems that fork a separate process for every job (like Resque, which is heavy), Sidekiq can run many jobs at the same time using threads inside one process. This makes it very fast and memory-efficient.
🚀 With default settings, one Sidekiq process can run 10 jobs at once (Sidekiq 7 lowered the default to 5) — all in the background, without blocking your Rails server.
🧠 In short: Sidekiq is what you use when you want to keep your app fast and offload slow work — and it does that using multithreading and Redis.
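For context, a job can also be written against Sidekiq’s native API instead of ActiveJob. A minimal sketch (the class name and arguments are illustrative; older Sidekiq versions use include Sidekiq::Worker):
class HardWorker
  include Sidekiq::Job

  def perform(name, count)
    puts "Working on #{name}, #{count} times"
  end
end

# Enqueue: the arguments are serialized to JSON and pushed into Redis,
# where one of Sidekiq's threads picks the job up.
HardWorker.perform_async("report", 3)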
💡 Examples
1. Enqueue a job in Rails:
# Controller
WelcomeEmailJob.perform_later(current_user.id)
2. Define the job with Sidekiq:
# app/jobs/welcome_email_job.rb
class WelcomeEmailJob < ApplicationJob
queue_as :default
def perform(user_id)
user = User.find(user_id)
UserMailer.welcome_email(user).deliver_now
end
end
3. Configure Sidekiq in your Rails app:
# config/application.rb
config.active_job.queue_adapter = :sidekiq
4. Start Sidekiq from terminal:
bundle exec sidekiq
🔁 Alternative Concepts
- Resque (process-based background job system)
- DelayedJob (simple, but slower, DB-based job queue)
- GoodJob (threads + DB-based, no Redis required)
❓ General Questions & Answers
Q1: What is Sidekiq used for?
A: It runs background jobs like sending emails, generating reports, or syncing data — without slowing down the main app.
Q2: Is Sidekiq thread-safe?
A: Yes. Sidekiq uses threads to process jobs concurrently and requires thread-safe code in your jobs.
🛠️ Technical Questions & Answers
Q1: How many jobs can Sidekiq run at once?
A: By default, Sidekiq runs 10 threads (Sidekiq 7 lowered the default to 5), so it can process that many jobs at the same time.
# Start Sidekiq with 20 threads instead of 10:
bundle exec sidekiq -c 20
✅ This is useful if your jobs are I/O-bound and you want more concurrency.
Q2: Where does Sidekiq store the jobs?
A: Sidekiq uses Redis as its job queue.
When you call perform_later
, the job is pushed into Redis, and Sidekiq pulls jobs from Redis to process.
# Redis must be running in the background
redis-server
Q3: How do I retry failed jobs?
A: Sidekiq automatically retries failed jobs several times with increasing delays (exponential backoff).
class MyJob < ApplicationJob
retry_on SomeError, wait: :exponentially_longer, attempts: 5
end
✅ You can customize retry rules per job.
Q4: How do I make Sidekiq use multiple queues?
A: Define custom queues and assign jobs to them. Sidekiq can prioritize them.
# In a job
class CriticalJob < ApplicationJob
queue_as :critical
def perform; end
end
# In sidekiq.yml
:queues:
- critical
- default
- low
✅ This way, urgent jobs are handled before less important ones.
Q5: How do I ensure a Rails job runs with Sidekiq?
A: You must set the job adapter in application.rb
:
# config/application.rb
config.active_job.queue_adapter = :sidekiq
Then use perform_later
to enqueue jobs.
✅ Best Practices with Examples
1. ✅ Keep jobs small and focused
Don’t do too much in one job. A job should handle a single task like sending one email or updating one record.
# ❌ Bad: does too much
def perform(user_id)
user = User.find(user_id)
user.send_invoice
user.send_notification
user.log_event
end
# ✅ Better: split into separate jobs
InvoiceJob.perform_later(user_id)
NotificationJob.perform_later(user_id)
EventLogJob.perform_later(user_id)
2. ✅ Always use perform_later
in controllers
Never call perform_now
in controllers — it runs the job immediately and defeats the purpose of background processing.
# ✅ Good
WelcomeJob.perform_later(current_user.id)
# ❌ Bad
WelcomeJob.perform_now(current_user.id)
3. ✅ Use queues to organize job priorities
Assign critical jobs to high-priority queues and process them first.
class AlertJob < ApplicationJob
queue_as :critical
end
# sidekiq.yml
:queues:
- critical
- default
- low
4. ✅ Handle errors gracefully
Don’t let jobs crash silently. Use retry_on
or rescue blocks to handle known issues.
retry_on Net::OpenTimeout, wait: 10.seconds, attempts: 3
rescue_from(SomeAPI::Error) do |error|
Rails.logger.error("Failed: #{error.message}")
end
5. ✅ Use Sidekiq’s Web UI to monitor jobs
Mount the dashboard to view retries, queues, and job status.
# config/routes.rb
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
6. ✅ Limit memory usage with lightweight code
Sidekiq reuses threads in the same process, so avoid loading huge files or allocating unnecessary objects in jobs.
7. ✅ Avoid database queries in loops
Fetch all records once, then loop — don’t query inside the loop.
# ❌ Bad
ids.each do |id|
User.find(id).notify
end
# ✅ Good
User.where(id: ids).find_each(&:notify)
🌍 Real-world Scenario
A Rails app sent welcome emails directly from the controller. As traffic grew, email sending delayed user experience.
They switched to using Sidekiq:
WelcomeEmailJob.perform_later(current_user.id)
✅ Response time dropped from 2s to 300ms, and emails were sent reliably in the background. Sidekiq processed 10 jobs in parallel using threads — keeping the app fast and users happy.
ActiveJob with Multi-threaded Adapters
🧠 Detailed Explanation
ActiveJob is a built-in feature in Rails that lets you run code in the background instead of during the web request. This helps keep your app fast and responsive.
For example, instead of sending an email while the user waits, you can use a background job to send it later.
✅ The great thing about ActiveJob is that it works with many background job systems (called adapters) like:
- Sidekiq (multi-threaded and uses Redis)
- GoodJob (multi-threaded and stores jobs in the database)
- DelayedJob (single-threaded and database-based)
If you use a multi-threaded adapter, your jobs can run at the same time — not one-by-one. This is faster and great for handling lots of work like sending emails, exporting data, or sending notifications.
🧠 With ActiveJob + a multi-threaded adapter like Sidekiq
or GoodJob
, your app becomes more scalable:
- It can run 5, 10, or even 20 jobs at the same time
- Your users don’t have to wait for slow tasks
- You can easily monitor, retry, and manage jobs
In short: ActiveJob makes it easy to write background jobs. Multi-threaded adapters make those jobs run faster and in parallel 🚀
💡 Examples
1. Enqueue a job using ActiveJob:
# app/controllers/users_controller.rb
def create
UserMailerJob.perform_later(current_user.id)
render json: { message: "Email is being sent." }
end
2. Define the job:
# app/jobs/user_mailer_job.rb
class UserMailerJob < ApplicationJob
queue_as :default
def perform(user_id)
UserMailer.welcome(User.find(user_id)).deliver_now
end
end
3. Use a threaded adapter like Sidekiq:
# config/application.rb
config.active_job.queue_adapter = :sidekiq
🔁 Alternative Concepts
- Use Sidekiq directly for advanced features
- Use GoodJob for database-based threaded job processing (no Redis)
- Use DelayedJob if thread safety isn’t required
❓ General Questions & Answers
Q1: What is ActiveJob used for?
A: It provides a consistent API for running background jobs, so you can switch job adapters (e.g. from Sidekiq to GoodJob) without rewriting your job code.
Q2: Is ActiveJob thread-safe?
A: Yes, as long as you use a multi-threaded adapter (like Sidekiq or GoodJob). Your job code must also be thread-safe (no shared global state).
🛠️ Technical Questions & Answers
Q1: How do I enable a multi-threaded adapter like Sidekiq in ActiveJob?
A: Set your Rails app to use Sidekiq as the ActiveJob adapter:
# config/application.rb
config.active_job.queue_adapter = :sidekiq
Then start Sidekiq:
bundle exec sidekiq
Q2: How can I control how many threads run at once?
A: You can change the number of threads with the -c
(concurrency) option:
bundle exec sidekiq -c 15
✅ This will allow Sidekiq to run up to 15 jobs at the same time in one process.
Q3: How do I define an ActiveJob for threaded processing?
A: Use the standard perform_later
method and Sidekiq will handle the threading.
# app/jobs/report_job.rb
class ReportJob < ApplicationJob
queue_as :default
def perform(user_id)
user = User.find(user_id)
ReportGenerator.run(user)
end
end
Q4: Can I use other multi-threaded adapters besides Sidekiq?
A: Yes. GoodJob is another adapter that supports multi-threading but doesn’t require Redis (it uses your database).
# config/application.rb
config.active_job.queue_adapter = :good_job
✅ GoodJob is perfect if you want multithreading but want to avoid Redis.
Q5: What happens if a job fails?
A: ActiveJob with Sidekiq automatically retries jobs. You can also control retries manually:
class MyJob < ApplicationJob
retry_on SomeAPI::TimeoutError, wait: 5.seconds, attempts: 3
def perform(id)
# some risky work
end
end
✅ You can retry specific exceptions with custom wait time and number of attempts.
✅ Best Practices with Examples
1. ✅ Use perform_later
instead of perform_now
perform_now
runs the job immediately (blocking the thread), while perform_later
runs it in the background.
# ✅ Good: background execution
WelcomeEmailJob.perform_later(user.id)
# ❌ Bad: runs immediately
WelcomeEmailJob.perform_now(user.id)
2. ✅ Keep job logic focused and short
A job should do only one thing, such as sending one email or exporting one report — not multiple unrelated tasks.
# ✅ Good: single responsibility
class WelcomeEmailJob < ApplicationJob
def perform(user_id)
UserMailer.welcome(User.find(user_id)).deliver_now
end
end
3. ✅ Ensure your job code is thread-safe
When using a multi-threaded adapter like Sidekiq or GoodJob, don’t use shared mutable variables (like class variables or global variables).
# ✅ Safe
def perform(user_id)
user = User.find(user_id)
logger.info "Job for user #{user.id}"
end
4. ✅ Use queues to control job priorities
Assign jobs to queues like :default
, :mailers
, or :critical
. Then configure Sidekiq or GoodJob to prioritize those.
class NotificationJob < ApplicationJob
queue_as :critical
end
5. ✅ Handle retries for network or API failures
Use ActiveJob’s built-in retry system for known issues like timeouts or connection errors.
retry_on Net::ReadTimeout, wait: 10.seconds, attempts: 3
6. ✅ Use environment variables to control concurrency
Set thread counts based on environment. Example for Sidekiq:
# Procfile
sidekiq: bundle exec sidekiq -c ${SIDEKIQ_CONCURRENCY:-10}
7. ✅ Monitor jobs with a dashboard
Use Sidekiq Web UI or GoodJob’s dashboard to view running, queued, and failed jobs.
# config/routes.rb
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
🌍 Real-world Scenario
A Rails app for a tutoring platform used ActiveJob with DelayedJob. Under high load, jobs queued up because DelayedJob used single-threaded workers.
✅ They switched to ActiveJob + Sidekiq (multi-threaded) and configured it to use 20 threads:
bundle exec sidekiq -c 20
Result: background tasks like sending lesson reminders and generating PDFs ran in parallel — reducing wait time and improving scalability.
Job Retries, Deadlocks & Race Condition Handling
🧠 Detailed Explanation
When you run background jobs in Rails using Sidekiq or ActiveJob, sometimes jobs fail. This can happen for many reasons — like a slow internet connection, a locked database row, or two jobs trying to update the same data at once.
There are 3 common problems:
- 🔁 Retries: Jobs that fail are automatically retried a few times. This is helpful if the problem was temporary, like a timeout or network error.
- 🔒 Deadlocks: When two jobs try to access the same row in the database at the same time and both get stuck waiting.
- ⚠️ Race Conditions: When two jobs or users try to change the same thing at once, and the result is wrong (like lost data or double updates).
✅ Rails and Sidekiq have ways to deal with these problems:
- You can tell a job to retry when specific errors happen
- You can use with_lock to make sure only one thread updates a record at a time
- You can use checks or keys to prevent the same job from running twice
These tools help your app stay reliable — even if jobs fail, or users do the same action at the same time.
💡 Examples
1. Job Retry with Exponential Backoff
class PaymentJob < ApplicationJob
retry_on ActiveRecord::Deadlocked, wait: :exponentially_longer, attempts: 5
def perform(order_id)
Order.find(order_id).charge_customer
end
end
2. Preventing Race Conditions Using with_lock
user = User.find(user_id)
user.with_lock do
user.points += 10
user.save!
end
3. Custom Retry on API Timeout
retry_on Net::ReadTimeout, wait: 15.seconds, attempts: 3
🔁 Alternative Concepts
- Use optimistic locking (lock_version) to prevent race conditions
- Split large jobs into smaller, isolated jobs
- Use external services with idempotent APIs to prevent duplicates on retry
❓ General Questions & Answers
Q1: Why do jobs fail?
A: Common reasons include API timeouts, DB deadlocks, or temporary service outages.
Q2: What’s the benefit of automatic retries?
A: Many failures are temporary. Retry gives them another chance without human intervention.
Q3: What’s the risk of too many retries?
A: It can overwhelm the system or repeat an action (e.g., charge customer twice). Use idempotency keys to avoid duplication.
🛠️ Technical Questions & Answers
Q1: How do I retry a failed job only when a specific error happens?
A: Use retry_on
in ActiveJob and specify the error type.
class SyncJob < ApplicationJob
retry_on Net::ReadTimeout, wait: 10.seconds, attempts: 3
def perform(id)
ExternalAPI.sync(id)
end
end
✅ This will retry 3 times with a 10-second wait if a timeout error occurs.
Q2: How can I avoid a database deadlock when updating records?
A: Wrap your database update in with_lock
. This creates a database-level lock.
user = User.find(id)
user.with_lock do
user.points += 5
user.save!
end
✅ This ensures only one job can update the user’s points at a time.
Q3: How do I prevent the same job from running twice at the same time?
A: Use an idempotency key (like a unique job ID) or check for a job log before proceeding.
return if JobLog.exists?(key: "user_#{user_id}_sync")
JobLog.create!(key: "user_#{user_id}_sync")
# perform task
Q4: How do I log and retry a job that failed due to a race condition?
A: Catch the error, log it, and retry manually.
class SafeJob < ApplicationJob
  def perform(record_id)
    attempts = 0
    begin
      record = Record.find(record_id)
      record.with_lock { record.update!(status: "done") }
    rescue ActiveRecord::StaleObjectError
      attempts += 1
      raise if attempts >= 3 # give up instead of looping forever
      Rails.logger.warn("Race condition for record #{record_id}, retrying...")
      retry
    end
  end
end
✅ This catches concurrency issues and retries the job gracefully.
Q5: What happens when retries are exhausted?
A: Sidekiq moves the job to the "Dead" set. You can view and retry it from the Web UI manually.
# Mount Sidekiq UI in routes.rb
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
✅ This helps you monitor failed jobs and recover them.
✅ Best Practices with Examples
1. ✅ Use retry_on
for expected temporary failures
Retry jobs automatically when errors like timeouts or deadlocks occur.
class ApiSyncJob < ApplicationJob
retry_on Net::ReadTimeout, wait: 10.seconds, attempts: 3
def perform(user_id)
ExternalApi.sync(User.find(user_id))
end
end
2. ✅ Wrap updates in with_lock
to avoid race conditions
Prevent two jobs from updating the same record at once.
order = Order.find(id)
order.with_lock do
order.update!(status: "paid")
end
3. ✅ Ensure idempotency — never run the same job twice accidentally
Use a unique key or a database check.
return if JobLog.exists?(key: "user_#{user_id}_email_sent")
JobLog.create!(key: "user_#{user_id}_email_sent")
Mailer.send_email(user_id)
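Note that exists?-then-create! still leaves a tiny race window between the check and the insert. A hedged refinement, assuming a unique database index on JobLog’s key column, lets the database enforce the check atomically:
begin
  JobLog.create!(key: "user_#{user_id}_email_sent") # raises if the key already exists
  Mailer.send_email(user_id)
rescue ActiveRecord::RecordNotUnique
  # Another worker got there first; skip quietly.
end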
4. ✅ Rescue and log unknown exceptions
This avoids silent failures and helps with debugging.
def perform(id)
begin
process_record(id)
rescue StandardError => e
Rails.logger.error "Job failed: #{e.class} - #{e.message}"
raise e
end
end
5. ✅ Don’t retry forever — limit retries
Retry only a few times to prevent overloading or duplicate actions.
retry_on ActiveRecord::Deadlocked, attempts: 3, wait: :exponentially_longer
6. ✅ Use ActiveJob's discard_on
to skip non-recoverable errors
If an error should never retry (like a record not found), discard it.
discard_on ActiveRecord::RecordNotFound
7. ✅ Monitor job failures using the Sidekiq Web UI
Mount the UI and check the “Dead” queue regularly.
# config/routes.rb
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
🌍 Real-world Scenario
A payment processing job in a Rails app failed due to DB deadlocks during high traffic. Users were charged twice when the job retried without checking.
Fix:
- They added retry_on ActiveRecord::Deadlocked with a max of 3 attempts
- They added with_lock and a DB-level unique constraint on the transaction ID
- They started logging all retries and alerts for jobs in the "dead" set
✅ Result: no duplicate charges, faster error recovery, and easier debugging.
Connection Pool Per Thread (Database Connection Management)
🧠 Detailed Explanation
In a Rails application, every time your code talks to the database, it needs a database connection. But creating a new connection each time would be slow and wasteful — so Rails uses something called a connection pool.
A connection pool is a set of open database connections that your app can reuse. When a request or background job needs to use the database, it “checks out” a connection from the pool, uses it, and then “returns” it back.
If your Rails app runs with multiple threads (like with Puma or Sidekiq), each thread needs its own database connection. So the connection pool must be large enough to give one connection to every active thread.
✅ For example, if your Puma server is running with 5 threads, your connection pool size must be at least 5. If Sidekiq is running with 10 threads too, then the pool should be even bigger to avoid errors.
❗ If all connections are being used and a thread tries to get one, it will have to wait.
If it waits too long, Rails will raise an error like: ActiveRecord::ConnectionTimeoutError
🔧 You can set your connection pool size in config/database.yml
using the pool:
setting.
In short: Each thread needs its own database connection. Make sure your pool size is big enough to support all the threads your app is using (web + background).
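You can also inspect the pool at runtime. A quick sketch from a Rails console (Rails 5.1+; the numbers below are illustrative):
ActiveRecord::Base.connection_pool.stat
# => { size: 10, connections: 4, busy: 2, dead: 0, idle: 2, waiting: 0, checkout_timeout: 5 }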
💡 Examples
1. Setting the connection pool size
# config/database.yml
production:
adapter: postgresql
pool: 10
✅ This allows up to 10 threads to use a database connection at the same time.
2. Adjusting pool size based on Puma threads
# puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# database.yml
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
✅ Always set the pool size to match or exceed the max number of Puma threads.
🔁 Alternative Concepts
- Use separate database roles for background jobs (read/write split; see the sketch below)
- Use connection reapers to clean up stale connections (e.g., in long-lived jobs)
- Scale vertically (more connections) or horizontally (read replicas)
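For the read/write split mentioned above, Rails 6+ has built-in support. A minimal sketch, assuming your database.yml defines a primary_replica entry:
# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # Reads can be routed to the replica; writes always go to the primary
  connects_to database: { writing: :primary, reading: :primary_replica }
end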
❓ General Questions & Answers
Q1: What is a connection pool?
A: A group of open, reusable database connections. Each thread checks one out as needed.
Q2: What happens if the pool is too small?
A: Threads will wait. If they wait too long, Rails raises a ConnectionTimeoutError
.
🛠️ Technical Questions & Answers
Q1: How do I set the connection pool size in Rails?
A: You set the pool size in config/database.yml
like this:
production:
adapter: postgresql
pool: 10
timeout: 5000
username: myuser
password: mypass
database: myapp_production
✅ This allows up to 10 concurrent threads to talk to the database.
Q2: How do I know how many connections I need?
A: Add up all the threads your app uses:
- Puma threads (e.g. threads 5, 5)
- Sidekiq threads (e.g. -c 10)
Example: If Puma has 5 threads and Sidekiq has 10, your pool should be at least 15.
Q3: What happens if all connections in the pool are used?
A: The thread waits for a free connection. If it waits too long, Rails raises:
ActiveRecord::ConnectionTimeoutError
✅ Solution: increase your pool size or reduce thread count.
Q4: How does Rails manage connections per thread?
A: Rails uses Thread.current
to assign and store the connection for each thread. This means every thread gets its own safe connection from the pool.
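A quick sketch you can run in a Rails console to see this (the printed IDs will differ on your machine):
threads = 2.times.map do
  Thread.new do
    ActiveRecord::Base.connection_pool.with_connection do |conn|
      puts "Thread #{Thread.current.object_id} got connection #{conn.object_id}"
    end
  end
end
threads.each(&:join)
# Each thread prints a different connection object: one checkout per thread.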
Q5: Do I need to configure Sidekiq separately for connection pool?
A: No. Sidekiq uses Rails’ ActiveRecord config. Just make sure your pool
is large enough to handle its concurrency.
# If Sidekiq runs with -c 10
# then database.yml should have:
pool: 10
✅ Best Practices with Examples
1. ✅ Match your connection pool size to total thread count
The number of database connections should be at least equal to the number of threads used by Puma and background job processors like Sidekiq.
# Puma (config/puma.rb)
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# Sidekiq (Procfile)
sidekiq: bundle exec sidekiq -c 10
# database.yml
pool: 15
2. ✅ Use environment variables for pool size
This makes it easy to scale up or down without changing code.
# database.yml
pool: <%= ENV.fetch("DB_POOL") { 10 } %>
# .env or server settings
DB_POOL=20
3. ✅ Release connections manually in long-running scripts
Long-running rake tasks or custom scripts should clean up connections when done.
ActiveRecord::Base.connection_pool.with_connection do
# your long-running code
end
4. ✅ Monitor connection usage in production
Use tools like Skylight, Scout, or NewRelic to watch for connection saturation and pool exhaustion.
5. ✅ Don’t exceed your database server’s connection limit
Postgres typically supports ~100–200 connections max. Add all app, Sidekiq, and admin connections together.
# Check PostgreSQL max connections
SHOW max_connections;
6. ✅ Use connection reapers in persistent jobs
For very long-lived background workers or streaming servers, call ActiveRecord::Base.clear_active_connections!
periodically.
7. ✅ Separate connection pools for different environments (optional)
You can isolate web and background processes by running them with different database users and connection limits if needed.
🌍 Real-world Scenario
A Rails app using Puma had threads 5, 5
and pool: 5
— everything worked fine.
But they added Sidekiq with 10 threads and started seeing ConnectionTimeoutError
during high traffic.
Fix:
- Increased pool to 15 in database.yml
- Ensured Sidekiq and Puma weren’t starving the same pool
- Used connection_pool monitoring to identify leaks
✅ Result: all background jobs and web requests ran smoothly without connection errors.
ActiveRecord::Base.connection_pool.with_connection
🧠 Detailed Explanation
In a Rails app, you use the database a lot — reading users, saving orders, updating records. Every time you do this, Rails uses a database connection from a shared pool of connections.
Normally, Rails manages this automatically for you — especially during web requests. But if you create your own thread or background task, Rails doesn’t manage connections for that thread.
This is where ActiveRecord::Base.connection_pool.with_connection
is useful.
✅ It lets you “check out” a connection from the pool, use it safely inside a block of code, and then “check it back in” when you’re done — even if there’s an error.
🔒 Without this, you risk keeping a connection locked, which means other threads or jobs might get stuck waiting for a connection.
Example:
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.all.each do |user|
puts user.email
end
end
end
☑️ That thread can now use the database safely and won’t cause connection leaks.
In short: Use with_connection
in threads or long scripts to safely borrow and return a DB connection.
💡 Examples
1. Basic usage in a background thread
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.find_each do |user|
puts user.name
end
end
end
2. In a custom rake task
namespace :data do
task :cleanup do
ActiveRecord::Base.connection_pool.with_connection do
OldRecord.delete_all
end
end
end
3. Handling errors gracefully
ActiveRecord::Base.connection_pool.with_connection do
begin
do_something
rescue => e
Rails.logger.error e.message
end
end
🔁 Alternative Concepts
- ActiveRecord::Base.connection — manually grabs a connection (you must release it yourself)
- establish_connection — rebinds a new DB connection (used in special setups)
❓ General Questions & Answers
Q1: Why use with_connection
?
A: It safely handles checking out and checking in database connections to avoid leaks or pool exhaustion.
Q2: When should I use it?
A: In custom threads, background workers, or long-running scripts that bypass standard Rails request handling.
🛠️ Technical Questions & Answers
Q1: What does with_connection
actually do?
A: It checks out a database connection from the pool for the current thread, runs your code inside the block, and then returns the connection automatically — even if there's an error.
ActiveRecord::Base.connection_pool.with_connection do
puts User.count
end
Q2: When do I need to use it?
A: You need it when you run code in a custom thread, non-Rails script, or long-running process. In those cases, Rails won’t manage connections for you.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
puts Order.count
end
end
Q3: What happens if I don’t use it in a thread?
A: The thread might keep the connection forever, blocking other threads from accessing the database. This leads to:
ActiveRecord::ConnectionTimeoutError
Fix: Always wrap your thread logic in with_connection
.
Q4: Can I use with_connection
inside a Sidekiq job?
A: Sidekiq already manages database connections for each thread. But if you spawn a new thread inside a job, you should use with_connection
there.
class ExportJob < ApplicationJob
def perform
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
export_large_report
end
end
end
end
Q5: Can I nest with_connection
?
A: Yes. If you're already inside a with_connection
block and call it again, Rails will reuse the same connection. It’s safe to nest.
ActiveRecord::Base.connection_pool.with_connection do
puts "Level 1"
ActiveRecord::Base.connection_pool.with_connection do
puts "Level 2"
end
end
✅ Best Practices with Examples
1. ✅ Always use with_connection
in custom threads
Rails doesn't manage connections automatically inside manually created threads — use with_connection
to avoid connection leaks.
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
puts User.count
end
end
2. ✅ Use in rake tasks or long-running scripts
Scripts that access the database directly (outside a web request) should wrap DB calls with with_connection
.
# lib/tasks/cleanup.rake
task cleanup: :environment do
ActiveRecord::Base.connection_pool.with_connection do
Order.where("created_at < ?", 30.days.ago).delete_all
end
end
3. ✅ Always release connections in non-web environments
If you're writing a script or background service, wrap everything inside with_connection
to make sure connections are released.
# inside a standalone Ruby file
ActiveRecord::Base.connection_pool.with_connection do
puts Product.count
end
4. ✅ Don’t use ActiveRecord::Base.connection
directly unless necessary
Using connection
directly requires you to manually release the connection — it’s safer to use with_connection
.
# ❌ Risky: must manually return the connection
conn = ActiveRecord::Base.connection
# do work...
conn.close # You must remember to do this
5. ✅ Handle exceptions inside the block safely
Even if an error occurs inside the with_connection
block, Rails will release the connection automatically.
ActiveRecord::Base.connection_pool.with_connection do
begin
risky_operation
rescue => e
Rails.logger.error "Job failed: #{e.message}"
end
end
🌍 Real-world Scenario
A developer added a custom thread to stream analytics from the database. They used ActiveRecord::Base.connection
without releasing it.
After a few hours, the app ran out of DB connections and crashed.
✅ They fixed it by wrapping the work in with_connection
:
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
AnalyticsStream.start
end
end
Result: no leaks, no crashes, and proper resource management.
Avoiding ActiveRecord::ConnectionTimeoutError
🧠 Detailed Explanation
In a Rails app, the database is used by many parts of your code at the same time — like web requests, background jobs, or custom threads. Rails uses a shared set of database connections, called a connection pool, to manage this safely.
When something (like a thread or a job) wants to talk to the database, it borrows a connection from this pool. If all the connections are busy, and a new thread can't get one in time, Rails will raise this error:
ActiveRecord::ConnectionTimeoutError
This means: "I'm waiting for a database connection, but none are free."
Common reasons:
- 🔁 Too many threads running at once (more than the pool size)
- 🔓 Code is holding a connection too long (slow DB queries or leaks)
- 🧵 You started a custom thread and didn’t manage the connection properly
How to fix it:
- ✅ Increase your pool size in config/database.yml
- ✅ Use with_connection when creating threads
- ✅ Don’t keep connections open longer than necessary
- ✅ Monitor connection usage in production
In short: This error means you're asking for more database connections than your app has available. You can fix it by tuning pool size, reducing usage, or improving connection handling.
💡 Examples
1. Setting pool size in database.yml
production:
adapter: postgresql
pool: 15
timeout: 5000
2. Matching Puma thread count with DB pool
# config/puma.rb
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
threads threads_count, threads_count
# config/database.yml
pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
3. Using with_connection
in threads
Thread.new do
ActiveRecord::Base.connection_pool.with_connection do
User.count
end
end
🔁 Alternative Concepts
- Use read replicas to offload read-heavy queries
- Queue background jobs instead of doing heavy DB work in controllers
- Use database connection reapers for long-lived processes
❓ General Questions & Answers
Q1: What causes this error?
A: When a thread tries to get a connection but all are already checked out and none return in time.
Q2: What’s the default timeout?
A: 5 seconds by default. You can change it with the checkout_timeout setting (in seconds) in database.yml; after that wait, Rails raises the error.
🛠️ Technical Questions & Answers
Q1: How can I debug what’s holding connections?
A: Use tools like NewRelic, Skylight, or enable SQL logs to identify long-running queries or threads that don’t release connections.
Q2: What if my pool is large but I still get timeouts?
A: You might have connection leaks. Make sure you’re using with_connection
for all custom threads and scripts.
Q3: What should I monitor?
A: Monitor:
- Current pool usage
- Threads waiting on DB
- Query execution time
✅ Best Practices
- ☑️ Set DB pool size based on total threads (Puma + Sidekiq)
- ☑️ Use with_connection for manual threads
- ☑️ Don’t hold connections longer than needed
- ☑️ Move heavy DB work to background jobs
- ☑️ Use read-only DB replicas if available
🌍 Real-world Scenario
A SaaS platform started seeing random ConnectionTimeoutError
during traffic spikes.
Investigation showed that Sidekiq and Puma shared a pool of just 5 connections, while Sidekiq was using 10 threads.
Fix:
- Increased pool: to 15 in database.yml
- Used with_connection in custom background threads
- Monitored connections in production using Skylight
✅ Result: No more connection timeouts, and improved system stability during peak usage.
Mutex, Monitor, Queue – Ruby Thread Synchronization
🧠 Detailed Explanation
When you use threads in Ruby (or in Rails with things like Puma or background jobs), they can all run at the same time. If these threads try to change the same variable or use the same resource at once, it can cause bugs — this is called a race condition.
To stop this from happening, Ruby gives us tools to control how threads behave. These are:
- 🔐 Mutex — Think of it as a lock. Only one thread can get in at a time. Great for protecting shared variables.
- 🛎️ Monitor — Similar to Mutex, but it works inside a class and lets you lock multiple methods safely.
- 📦 Queue — A thread-safe list where one thread can put data and another thread can take it. Great for passing work from one thread to another.
Why are these important?
- They help avoid errors like updating the same data twice
- They make your app faster and safer in multi-threaded environments
- They prevent your app from crashing or giving wrong results
✅ In short: Use Mutex, Monitor, or Queue when threads share or exchange data — they help your code run safely and correctly.
✅ Best Implementation with Detailed Explanation
🔐 1. Using Mutex
for shared counters (thread-safe increment)
Mutex
is perfect for situations where multiple threads need to safely update a shared variable like a counter. Without it, two threads may try to update at the same time and overwrite each other.
mutex = Mutex.new
counter = 0
threads = 10.times.map do
Thread.new do
100.times do
mutex.synchronize do
counter += 1
end
end
end
end
threads.each(&:join)
puts "Final Count: #{counter}" # ✅ Always 1000
Explanation: mutex.synchronize
ensures that only one thread at a time can change the counter
. This avoids race conditions.
🛎️ 2. Using Monitor
in class-based design
When writing a class with thread-safe methods, use Ruby's built-in MonitorMixin
to keep synchronization clean and encapsulated.
require 'monitor'
class ThreadSafeList
include MonitorMixin
def initialize
super()
@items = []
end
def add(item)
synchronize do
@items << item
end
end
def size
synchronize { @items.size }
end
end
Explanation: synchronize
wraps access to shared resources inside the class, just like mutex.synchronize
, but it's cleaner for OOP design.
📦 3. Using Queue
for communication between threads
If you have a producer-consumer scenario (one thread produces tasks, another consumes them), use Queue
. It's thread-safe and handles locking internally.
queue = Queue.new
producer = Thread.new do
5.times do |i|
queue << "Job #{i}"
sleep(0.1)
end
end
consumer = Thread.new do
5.times do
job = queue.pop
puts "Processing #{job}"
end
end
[producer, consumer].each(&:join)
Explanation: Queue
handles locking and synchronization under the hood, so you don’t need to manage a Mutex
manually.
💡 Tip: Always choose the right tool:
- Use Mutex for critical sections (modify shared values)
- Use Monitor in reusable or object-oriented components
- Use Queue for safely passing data between threads
✅ Following these patterns ensures your multi-threaded Ruby or Rails code is safe, stable, and free of race conditions or deadlocks.
💡 Examples
1. Using Mutex
mutex = Mutex.new
counter = 0
threads = 5.times.map do
Thread.new do
mutex.synchronize do
counter += 1
end
end
end
threads.each(&:join)
puts counter
2. Using Monitor
require 'monitor'
class SafeCounter
include MonitorMixin
def initialize
super()
@count = 0
end
def increment
synchronize { @count += 1 }
end
end
3. Using Queue
queue = Queue.new
producer = Thread.new do
  5.times { |i| queue << "Job #{i}" }
  queue.close # tells consumers no more jobs are coming
end
consumer = Thread.new do
  while job = queue.pop # returns nil once the queue is closed and empty
    puts "Processing #{job}"
  end
end
[producer, consumer].each(&:join)
🔁 Alternative Concepts
- Thread::SizedQueue – queue with capacity limits
- ConditionVariable – more control over thread pausing/wakeup (see the sketch below)
- Actor models (Celluloid, Ractors – for advanced parallelism)
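As a taste of ConditionVariable, here is a minimal sketch: one thread waits until another signals that shared data is ready.
mutex = Mutex.new
cond  = ConditionVariable.new
ready = false

waiter = Thread.new do
  mutex.synchronize do
    cond.wait(mutex) until ready # releases the lock while waiting, reacquires on wakeup
    puts "Data is ready!"
  end
end

signaler = Thread.new do
  mutex.synchronize do
    ready = true
    cond.signal # wakes one waiting thread
  end
end

[waiter, signaler].each(&:join)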
❓ General Questions & Answers
Q1: What is a race condition?
A: When two threads try to change the same data at the same time and cause incorrect or unpredictable results.
Q2: Why use Mutex?
A: It locks code so that only one thread can enter at a time — preventing data corruption.
Q3: Can I use these in Rails background jobs?
A: Yes, especially if multiple jobs might access the same shared object or cache.
🛠️ Technical Questions & Answers
Q1: What is a race condition, and how does Mutex fix it?
A: A race condition happens when two threads try to change the same value at the same time, and the result is unpredictable or incorrect.
Example:
counter = 0
threads = 2.times.map do
Thread.new { 1000.times { counter += 1 } }
end
threads.each(&:join)
puts counter # ❌ Might not be 2000
Fix with Mutex:
mutex = Mutex.new
counter = 0
threads = 2.times.map do
Thread.new do
1000.times do
mutex.synchronize { counter += 1 }
end
end
end
threads.each(&:join)
puts counter # ✅ Always 2000
Q2: What is the difference between Mutex and Monitor?
A: Mutex locks a specific section of code. Monitor is used inside a class to lock instance methods and internal state more easily.
require 'monitor'
class Counter
include MonitorMixin
def initialize
super()
@value = 0
end
def increment
synchronize { @value += 1 }
end
end
✅ synchronize
wraps any method so only one thread can run it at a time.
Q3: When should I use Queue?
A: Use Queue when you want threads to pass data between each other safely (producer/consumer pattern).
queue = Queue.new
# One thread produces
producer = Thread.new do
3.times do |i|
queue << "Job #{i}"
sleep(0.1)
end
end
# Another thread consumes
consumer = Thread.new do
3.times do
job = queue.pop
puts "Processing #{job}"
end
end
[producer, consumer].each(&:join)
✅ Queue is already thread-safe, so no need to lock manually.
Q4: What if a thread crashes while holding a Mutex?
A: Ruby automatically releases the lock if an exception happens inside a mutex.synchronize
block. So it’s safe.
mutex = Mutex.new
mutex.synchronize do
raise "error"
end
puts "Other threads can still get the lock"
Q5: Can I use these in Rails apps?
A: Yes! You can use them in service objects, background jobs, or threaded web servers (like Puma) to protect shared memory or manage coordination between jobs.
✅ Best Practices with Examples
1. ✅ Use Mutex
when updating shared data between threads
Always wrap shared variable updates in a mutex block to prevent race conditions.
mutex = Mutex.new
total = 0
threads = 5.times.map do
Thread.new do
100.times do
mutex.synchronize { total += 1 }
end
end
end
threads.each(&:join)
puts total # ✅ Always 500
2. ✅ Use Queue
for safe thread communication
When passing work between threads, use Queue to ensure safe, lock-free coordination.
jobs = Queue.new
5.times { |i| jobs << "Task #{i}" }
worker = Thread.new do
until jobs.empty?
task = jobs.pop
puts "Processing #{task}"
end
end
worker.join
3. ✅ Keep mutex.synchronize
blocks short
Don’t do slow operations (like sleep or I/O) while holding a lock — it blocks other threads.
mutex.synchronize do
update_user_balance # ✅ good
end
# ❌ avoid:
mutex.synchronize do
sleep(2) # bad – blocks others
end
4. ✅ Use MonitorMixin
for thread-safe classes
Encapsulate synchronization inside your class logic for cleaner code.
require 'monitor'
class SafeLog
include MonitorMixin
def initialize
super()
@lines = []
end
def log(text)
synchronize { @lines << text }
end
end
5. ✅ Use SizedQueue
if you want to limit capacity
Use this to throttle producers or control memory usage.
queue = SizedQueue.new(2)
producer = Thread.new do
5.times do |i|
queue.push("Item #{i}")
puts "Produced Item #{i}"
end
end
consumer = Thread.new do
5.times do
puts "Consumed #{queue.pop}"
end
end
[producer, consumer].each(&:join)
6. ✅ Avoid nested locks unless absolutely necessary
Nested mutex.synchronize
can lead to deadlocks if not handled carefully.
# ❌ Risky
mutex1.synchronize do
mutex2.synchronize do
# deadlock danger
end
end
🌍 Real-world Scenario
A Rails app processed orders in multiple background threads. Sometimes, two threads charged the same customer twice because they updated the same order status at the same time.
Fix:
# Controller
OrderProcessorJob.perform_later(order.id)
# app/jobs/order_processor_job.rb
class OrderProcessorJob < ApplicationJob
  MUTEX = Mutex.new # one lock shared by every thread in this worker process

  def perform(order_id)
    order = Order.find(order_id)
    MUTEX.synchronize do
      order.reload
      order.update!(status: "paid") unless order.paid?
    end
  end
end
✅ This ensured only one thread at a time could update an order, eliminating the duplicate charges. (Note: a Mutex only coordinates threads inside one process; across multiple worker processes, a database-level with_lock is the safer tool.)
Preventing Race Conditions (Mutex & Synchronization Techniques)
🧠 Detailed Explanation
A race condition is a problem that happens when two or more threads try to use or change the same thing at the same time. Because threads run in parallel, you don’t know which one finishes first — and that can lead to wrong results.
For example:
- 🧵 Thread A and Thread B both try to increase a counter
- They both read the value as
5
- Each adds
1
, then saves it - Final result =
6
(not7
!)
✅ To fix this, we use something called a lock. A lock makes sure that only one thread can run a piece of code at a time.
In Ruby/Rails, the common tools are:
- 🔐 Mutex: Wrap code so only one thread can use it
- 🛎️ Monitor: Use in classes to make methods thread-safe
- 📦 Queue: Lets threads safely pass data to each other
- 🧾 ActiveRecord#with_lock: Locks a row in the database so only one update can happen at a time
- 🔄 Optimistic locking: Detects if another thread changed the data before saving
🧠 Think of it like taking turns. One thread “goes in,” does its job, and comes out — then the next thread goes.
If you don’t do this, your app might save wrong values, charge a user twice, or delete the wrong thing — which is why **preventing race conditions is very important** in multithreaded apps.
✅ Best Implementation with Detailed Explanation
🔐 1. Use Mutex
for in-memory shared data between threads
When multiple Ruby threads access and modify the same variable (e.g., a counter or cache), use Mutex
to prevent race conditions.
mutex = Mutex.new
count = 0
threads = 10.times.map do
Thread.new do
100.times do
mutex.synchronize do
count += 1
end
end
end
end
threads.each(&:join)
puts "Final count: #{count}" # ✅ Always 1000
Why this works: Only one thread can execute the count += 1
line at a time, so we avoid overwriting each other’s changes.
🧾 2. Use ActiveRecord#with_lock
for thread-safe DB record updates
In a Rails app, if two jobs or requests try to update the same user record at the same time, we may end up with stale data or duplicate updates. with_lock
adds a row-level DB lock to prevent this.
user = User.find(1)
user.with_lock do
user.balance += 100
user.save!
end
Why this works: The database won’t allow other threads or processes to access this row until the current thread finishes the block.
🔄 3. Use optimistic locking
to detect and resolve conflicts
Optimistic locking uses a special column (like lock_version
) to detect if another update happened before the save. Rails raises an error if the record was changed by someone else before your save.
# migration
add_column :users, :lock_version, :integer, default: 0
# controller or job
user = User.find(1)
user.balance += 50
user.save! # Fails if another thread saved it first
Why this works: If two updates happen close together, one of them will fail — allowing you to retry or alert.
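The error raised is ActiveRecord::StaleObjectError. A common follow-up, sketched here with an assumed retry limit of 3, is to rescue it and try again:
attempts = 0
begin
  user = User.find(1)   # re-running the block loads a fresh copy
  user.balance += 50
  user.save!
rescue ActiveRecord::StaleObjectError
  attempts += 1
  retry if attempts < 3 # give up (and re-raise) after a few conflicts
  raise
end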
⚙️ 4. Use update_counters
or increment!
for atomic DB changes
For simple numeric updates, use Rails built-in atomic methods that generate safe SQL like UPDATE ... SET count = count + 1.
User.increment_counter(:login_count, user.id)
# OR
user.increment!(:login_count)
Why this works: These methods avoid race conditions by letting the database handle the increment in a single atomic step.
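update_counters can also bump several columns in one atomic statement. A small sketch (the api_hits column is assumed for illustration):
# UPDATE users SET login_count = login_count + 1, api_hits = api_hits + 5 WHERE id = ...
User.update_counters(user.id, login_count: 1, api_hits: 5)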
Summary of when to use what:
- Use Mutex for thread safety in Ruby (in-memory operations)
- Use with_lock for critical DB updates on shared records
- Use optimistic locking when updates rarely conflict
- Use increment! or update_counters for atomic counters
✅ Implementing these correctly avoids hard-to-debug concurrency bugs and makes your Rails app safe and scalable.
💡 Examples
1. Without Mutex (unsafe):
counter = 0
threads = 5.times.map do
Thread.new { 1000.times { counter += 1 } }
end
threads.each(&:join)
puts counter # ❌ Might not be 5000
2. With Mutex (safe):
mutex = Mutex.new
counter = 0
threads = 5.times.map do
Thread.new do
1000.times do
mutex.synchronize { counter += 1 }
end
end
end
threads.each(&:join)
puts counter # ✅ Always 5000
3. Preventing race condition in DB update:
user = User.find(id)
user.with_lock do
user.balance += 100
user.save!
end
🔁 Alternative Concepts
- ActiveRecord#with_lock – for locking rows in the database
- Optimistic Locking – uses version columns to detect changes
- Atomic updates – like increment_counter in Rails
❓ General Questions & Answers
Q1: What is a race condition?
A: A bug where two threads change the same data at the same time, causing unpredictable or wrong results.
Q2: Why use Mutex?
A: Mutex locks the code so only one thread can access it at a time — it stops threads from clashing.
🛠️ Technical Questions & Answers
Q1: What is a race condition in Ruby or Rails?
A: A race condition happens when two or more threads try to change the same data at the same time. This can lead to incorrect values or unexpected behavior.
Example (Problem):
counter = 0
threads = 2.times.map do
Thread.new do
1000.times { counter += 1 }
end
end
threads.each(&:join)
puts counter # ❌ Output is unpredictable (might be 1400, not 2000)
Solution using Mutex
:
mutex = Mutex.new
counter = 0
threads = 2.times.map do
Thread.new do
1000.times do
mutex.synchronize { counter += 1 }
end
end
end
threads.each(&:join)
puts counter # ✅ Always 2000
Q2: How do you prevent race conditions in a Rails model update?
A: Use with_lock
for row-level locking in the database.
user = User.find(1)
user.with_lock do
user.balance += 100
user.save!
end
✅ This makes the database wait — no other process can change this record until the first update finishes.
Q3: What is optimistic locking and when should I use it?
A: Use optimistic locking when data is unlikely to conflict. Rails checks if the record was changed before you saved it using a lock_version
column.
Migration:
add_column :users, :lock_version, :integer, default: 0
Example:
user = User.find(1)
user.balance += 50
user.save! # Will raise error if someone else saved the record first
Q4: What’s the difference between Mutex and Monitor?
A: Mutex
is a simple locking tool. Monitor
is a built-in Ruby module that makes class methods thread-safe.
require 'monitor'
class SafeStore
include MonitorMixin
def initialize
super()
@items = []
end
def add(item)
synchronize { @items << item }
end
end
Q5: How does a Queue help avoid race conditions?
A: Queue
is a thread-safe way to pass data between threads. It handles its own locking.
queue = Queue.new
producer = Thread.new { queue << "task1" }
consumer = Thread.new { puts queue.pop } # pop blocks until data arrives
[producer, consumer].each(&:join) # ✅ No race condition here
✅ Best Practices with Examples
1. ✅ Always use Mutex
to protect shared variables between threads
If multiple threads access or modify the same variable, use mutex.synchronize
to prevent race conditions.
mutex = Mutex.new
total = 0
threads = 10.times.map do
Thread.new do
100.times do
mutex.synchronize { total += 1 }
end
end
end
threads.each(&:join)
puts total # ✅ Always 1000
2. ✅ Use ActiveRecord#with_lock
when updating critical records in Rails
This ensures only one thread or process can update the same database record at a time.
user = User.find(1)
user.with_lock do
user.balance += 100
user.save!
end
3. ✅ Keep locked sections of code as short as possible
Don’t put slow operations like network requests or sleeps inside a mutex lock. This will block other threads.
mutex.synchronize do
update_local_variable
end
sleep(2) # Do outside the lock
4. ✅ Use Queue
when passing data between threads
Queue
is thread-safe and does not need manual locking.
queue = Queue.new
producer = Thread.new do
5.times { |i| queue << "Job #{i}" }
end
consumer = Thread.new do
5.times { puts "Processing #{queue.pop}" }
end
[producer, consumer].each(&:join)
5. ✅ Use optimistic locking for low-conflict updates
This avoids locking unless someone else changed the same record before you.
# Add in migration
add_column :users, :lock_version, :integer, default: 0
# Usage
user = User.find(1)
user.points += 10
user.save! # Raises error if record was updated since it was loaded
6. ✅ Avoid nesting multiple Mutex locks
Nesting multiple locks can lead to deadlocks if not carefully managed.
# ❌ Risky
mutex1.synchronize do
mutex2.synchronize do
# could deadlock
end
end
7. ✅ Monitor logs and thread behavior in production
Use tools like Skylight, Scout, or logs to detect thread blocking, long queries, or connection issues.
🌍 Real-world Scenario
A financial Rails app allowed users to top up their balance. Occasionally, two jobs ran at the same time and credited the same user twice.
Fix: They wrapped the update in with_lock
to ensure one job finishes before the other starts:
user = User.find(params[:id])
user.with_lock do
user.update!(balance: user.balance + 100)
end
✅ Result: No more double credits — safe and accurate transactions.
Avoiding Deadlocks (Mutex & Synchronization Techniques)
🧠 Detailed Explanation
A deadlock happens when two threads are each waiting for the other to finish — but neither one ever does. It’s like two people holding one key each, and refusing to move until they get the other person’s key. Result? They’re stuck forever. 🔒
Here’s how it can happen in Ruby:
- 🧵 Thread A locks Resource 1, then tries to lock Resource 2
- 🧵 Thread B locks Resource 2, then tries to lock Resource 1
- ⛔ Now each thread is stuck waiting for the other
This is called a deadlock, and it can freeze your whole program — especially in Rails apps using background jobs or multiple threads.
To prevent deadlocks, follow these simple ideas:
- ✅ Always lock things in the same order in every thread
- ✅ Keep locked code short — don’t put sleep or API calls inside
- ✅ Always unlock your Mutex (use ensure in Ruby)
- ✅ If working with database records, use with_lock to safely lock rows
- ✅ If you can’t get a lock quickly, use try_lock to skip and try later
🧠 Think of it like this: If you follow rules for "who picks what first", and never hold the key for too long, everyone gets through safely.
💡 Deadlocks are rare, but when they happen — they’re hard to debug. So writing code that prevents them from the start is always best!
✅ Best Implementation with Detailed Explanation
🔁 1. Lock resources in a consistent order
The most common cause of deadlocks is when two threads lock resources in a different order. You can avoid this by always locking in the same order — for example, sort objects by ID or object ID.
mutex_a = Mutex.new
mutex_b = Mutex.new
# Safe method that locks in a consistent order
def safely_lock_both(m1, m2)
  locks = [m1, m2].sort_by(&:object_id)
  locks.each(&:lock)
  begin
    # ✅ safe work here
    puts "Both locks acquired safely"
  ensure
    locks.reverse_each(&:unlock)
  end
end
# Threads that follow safe order
t1 = Thread.new { safely_lock_both(mutex_a, mutex_b) }
t2 = Thread.new { safely_lock_both(mutex_b, mutex_a) }
[t1, t2].each(&:join)
Why this works: By always acquiring locks in the same order, we prevent circular waiting — the key cause of deadlocks.
🧪 2. Use try_lock
if locking order isn’t guaranteed
If you cannot control lock order, you can try to acquire a lock and skip or retry if it’s not available.
mutex = Mutex.new
if mutex.try_lock
begin
puts "Lock acquired"
# do work
ensure
mutex.unlock
end
else
puts "Lock not acquired – skipping or retrying later"
end
Why this works: It avoids getting stuck by skipping the critical section if another thread holds the lock.
🧼 3. Always release locks with ensure
Even if an error occurs inside your critical section, you must unlock to prevent blocking other threads.
mutex = Mutex.new
mutex.lock
begin
puts "Doing safe work"
raise "Some error"
ensure
mutex.unlock
puts "Lock released safely"
end
Why this works: ensure
makes sure that the lock is released even if something goes wrong — preventing unintentional deadlocks.
🧾 4. Use with_lock
in Rails for DB record safety
Use ActiveRecord#with_lock
to avoid database-level deadlocks when multiple processes try to update the same rows.
# Safe transactional update
user = User.find(1)
user.with_lock do
user.balance += 100
user.save!
end
Why this works: The database puts a row-level lock, and ensures other transactions wait instead of clashing.
⚠️ 5. Don’t do slow things (like API calls or sleep) inside a lock
The longer you hold a lock, the more likely you block other threads, increasing the risk of deadlocks.
# ❌ Bad
mutex.synchronize do
sleep(2) # This blocks other threads unnecessarily
end
# ✅ Better
mutex.synchronize { update_data }
sleep(2) # Do slow stuff outside the lock
✅ Summary: Best Ways to Prevent Deadlocks
- 🔁 Lock resources in the same order
- 🧪 Use try_lock for optional/non-blocking work
- 🧼 Always unlock using ensure
- 🧾 Use with_lock for database safety
- 🚫 Avoid slow operations inside locked code
Following these practices keeps your multithreaded or background jobs stable, fast, and free from mysterious freezes and stuck jobs.
💡 Examples
❌ Problem: Nested Mutex causing deadlock
mutex1 = Mutex.new
mutex2 = Mutex.new
# Thread 1
t1 = Thread.new do
  mutex1.synchronize do
    sleep(0.1)
    mutex2.synchronize { puts "Thread 1 finished" }
  end
end
# Thread 2
t2 = Thread.new do
  mutex2.synchronize do
    sleep(0.1)
    mutex1.synchronize { puts "Thread 2 finished" }
  end
end
[t1, t2].each(&:join) # joining makes the deadlock visible; MRI aborts with a fatal deadlock error
Result: Both threads wait forever — 🔒 deadlock.
✅ Solution: Always lock in the same order
def safe_method(mutex1, mutex2) # pass the mutexes in; a def can't see outer locals
  locks = [mutex1, mutex2].sort_by(&:object_id)
  locks.each(&:lock)
  begin
    # Safe work here
  ensure
    locks.reverse_each(&:unlock)
  end
end
Why it works: All threads acquire locks in the same order — avoiding circular waits.
🔁 Alternative Concepts
- MonitorMixin – safer object-level locking
- Try-lock patterns – skip if unable to acquire lock
- Concurrent Ruby gems – provide deadlock-free primitives
❓ General Questions & Answers
Q1: What causes deadlocks in multithreading?
A: Locking resources in inconsistent order across threads or not releasing locks properly.
Q2: How can I detect a deadlock?
A: You may notice threads stop progressing or logs freeze. Use Ruby debuggers or logs to identify where the block happens.
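One hedged approach is a watchdog thread that periodically dumps every thread's status and the top of its backtrace (the interval and output format here are arbitrary). Note that if every thread ends up blocked, MRI itself aborts with a fatal deadlock error.
Thread.new do
  loop do
    Thread.list.each do |t|
      # a long-lived "sleep" status stuck on a lock or join hints at a deadlock
      puts "#{t.object_id} (#{t.status}): #{t.backtrace&.first}"
    end
    sleep 30
  end
end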
🛠️ Technical Questions & Answers
Q1: What causes a deadlock in Ruby threads?
A: Deadlocks happen when two or more threads hold different locks and each tries to acquire the other’s lock. Since they wait on each other, they freeze forever.
Example (Deadlock):
mutex_a = Mutex.new
mutex_b = Mutex.new
# Thread 1
t1 = Thread.new do
  mutex_a.synchronize do
    sleep(0.1)
    mutex_b.synchronize { puts "Thread 1 done" }
  end
end
# Thread 2
t2 = Thread.new do
  mutex_b.synchronize do
    sleep(0.1)
    mutex_a.synchronize { puts "Thread 2 done" }
  end
end
[t1, t2].each(&:join) # each thread waits forever for the other's lock
❌ Both threads get stuck forever
Q2: How can I prevent deadlocks in Ruby?
A: Lock resources in the same order in all threads. That way, threads never wait on each other in a loop.
def safe_lock(*locks)
locks.sort_by(&:object_id).each(&:lock)
yield
ensure
locks.reverse_each(&:unlock)
end
safe_lock(mutex_a, mutex_b) do
puts "Safe execution"
end
✅ Always acquiring locks in the same order avoids circular waiting.
Q3: What is try_lock
and how does it help?
A: try_lock
tries to get the lock and returns immediately. If the lock isn’t available, it skips instead of waiting — reducing deadlock risk.
if mutex.try_lock
begin
puts "Got the lock!"
ensure
mutex.unlock
end
else
puts "Couldn't get the lock – skipping"
end
✅ Use try_lock
when locking is optional or when you want to avoid blocking behavior.
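If skipping isn't acceptable, a small helper can retry try_lock a few times with a short pause before giving up. A sketch with assumed attempt and wait values:
def with_retries(mutex, attempts: 3, wait: 0.05)
  attempts.times do
    if mutex.try_lock
      begin
        return yield
      ensure
        mutex.unlock
      end
    end
    sleep wait # brief pause before the next attempt
  end
  nil # never got the lock; the caller decides what that means
end

with_retries(mutex) { puts "Got the lock eventually" }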
Q4: How does Rails handle database deadlocks?
A: Rails can hit database deadlocks when multiple transactions try to update rows in different orders. Use with_lock
to apply row-level locking in a safe way.
user = User.find(1)
user.with_lock do
user.balance += 100
user.save!
end
✅ This ensures that the database locks the row and waits for the lock to clear — avoiding simultaneous writes.
Q5: How do I detect or debug deadlocks?
A: Use logging, thread dumps, or tools like Thread.list
and caller
in Ruby to see where threads are stuck.
Thread.list.each do |t|
puts "#{t.inspect} - #{t.status}"
end
🔍 Look for threads that are “sleep” or “blocked” for a long time with no output — those are likely stuck.
✅ Best Practices with Examples
1. ✅ Always lock in a consistent order
If you need to lock multiple resources, always acquire them in the same order (e.g. by ID or name) to avoid circular waits between threads.
mutex_a = Mutex.new
mutex_b = Mutex.new
def safe_lock(m1, m2)
[m1, m2].sort_by(&:object_id).each(&:lock)
yield
ensure
[m1, m2].reverse.each(&:unlock)
end
safe_lock(mutex_a, mutex_b) do
# ✅ Safe critical section
end
2. ✅ Keep lock durations short
Don’t perform long operations (e.g. sleep
, API calls, or file I/O) while holding a lock — this blocks others and increases deadlock risk.
mutex.synchronize do
update_cache # ✅ Quick and safe
end
sleep(2) # ⛔ Do outside the lock
3. ✅ Always release locks using ensure
If an error happens while a lock is held, you still need to release it. ensure
guarantees the lock is released no matter what.
mutex.lock
begin
do_something_important
ensure
mutex.unlock
end
4. ✅ Use try_lock
for optional locking
If locking is optional (not critical), use try_lock
so the thread doesn’t wait forever if the lock is taken.
if mutex.try_lock
begin
process_job
ensure
mutex.unlock
end
else
puts "Job skipped to avoid blocking"
end
5. ✅ Use with_lock
for safe DB record updates
In Rails, when updating shared database records, use with_lock
to lock the row and prevent conflicts from other threads or jobs.
user = User.find(params[:id])
user.with_lock do
user.points += 50
user.save!
end
6. ✅ Avoid nested locks unless absolutely needed
Nesting locks (locking one inside another) increases deadlock risk — only do it if you're following consistent lock order rules.
# ❌ Risky if not ordered carefully
mutex1.synchronize do
mutex2.synchronize do
# Could deadlock if thread order is reversed elsewhere
end
end
7. ✅ Log blocking and thread status in production if using many threads
Use logs or thread inspection to monitor if threads are stuck too long. This helps detect deadlocks early.
Thread.list.each do |t|
puts "#{t.inspect} — #{t.status}"
end
🔍 Status like "sleep" or "blocked" can reveal stuck threads.
🌍 Real-world Scenario
A Rails app had two jobs — one updated User and the other updated Transaction. Sometimes they ran together and locked rows in different order, causing database-level deadlocks.
Fix:
- Added with_lock on both models
- Ensured the locking order was always User → Transaction
✅ Result: Deadlocks stopped. Jobs completed successfully even under load.
Synchronized Blocks: mutex.synchronize { ... }
🧠 Detailed Explanation
When you write a Ruby or Rails app that uses threads, those threads can run at the same time and try to change the same variable or resource. If they do that at the exact same time, it can cause bugs — like missing updates, wrong values, or even crashes.
To stop this from happening, we use something called a mutex — short for mutual exclusion. It works like a lock that only lets one thread enter a block of code at a time.
How it works:
- ✅ You create a mutex: mutex = Mutex.new
- ✅ You wrap your code like this: mutex.synchronize { ... }
- 🚪 Only one thread can enter that block — others must wait their turn
This block is called a synchronized block. It helps make your code thread-safe, meaning it works properly even when many threads run at once.
Why use it?
- 🔒 To protect shared variables like counters or caches
- 🧵 To prevent race conditions between threads
- ✅ To make your app more stable and safe in multi-threaded environments
💡 You don’t need to worry about locking or unlocking — Ruby’s synchronize
does that for you. Even if there’s an error, the mutex will unlock safely.
🧠 Think of it like this: One thread walks into a room, closes the door, finishes the work, and then lets the next thread come in.
✅ Best Implementation with Detailed Explanation
🔐 1. Safely update a shared variable using mutex.synchronize
When multiple threads try to change the same variable (e.g. a counter or cache), it can lead to a race condition. mutex.synchronize
wraps the code and ensures only one thread changes the variable at a time.
mutex = Mutex.new
counter = 0
threads = 5.times.map do
Thread.new do
100.times do
mutex.synchronize do
counter += 1
end
end
end
end
threads.each(&:join)
puts counter # ✅ Always 500
Why this works: The mutex acts like a door — only one thread can go inside and update the variable. Others wait their turn.
🧠 2. Protect access to shared resources like hashes
If multiple threads are reading/writing to the same hash or array, you must lock the write operation to avoid corruption or crashes.
CACHE = {}
MUTEX = Mutex.new
def safely_cache(key, value)
MUTEX.synchronize do
CACHE[key] = value
end
end
Why this works: The hash is not thread-safe by default. synchronize
makes sure only one thread writes at a time.
⏱️ 3. Keep the synchronized block short and fast
Don’t put anything slow (e.g., network requests, file I/O, or sleep) inside a synchronize
block — it can cause other threads to wait too long.
# ✅ Good
mutex.synchronize { update_db_counter }
sleep(2) # done outside the lock
📝 4. Synchronize logging from multiple threads
If many threads write to the console or log file, use a mutex to avoid jumbled output.
log_mutex = Mutex.new
5.times.map do |i|
Thread.new do
log_mutex.synchronize { puts "Thread #{i} reporting in" }
end
end.each(&:join)
Why this works: It prints log lines one at a time instead of mixing them together.
📦 5. Encapsulate Mutex
inside a class
If you're writing a class that multiple threads will use, include the Mutex
as an instance variable and wrap all mutating methods.
class SafeCounter
def initialize
@count = 0
@mutex = Mutex.new
end
def increment
@mutex.synchronize { @count += 1 }
end
def value
@mutex.synchronize { @count }
end
end
counter = SafeCounter.new
threads = 10.times.map { Thread.new { 100.times { counter.increment } } }
threads.each(&:join)
puts counter.value # ✅ Always 1000
✅ Summary of best ways to use mutex.synchronize:
- 🔁 Use it to guard shared data or mutable state (variables, hashes, DB counters)
- ⏱️ Keep blocks small and fast
- 💥 synchronize releases the lock even if an exception is raised, but rescue errors you expect so the thread doesn’t die silently
- 🔐 Let Ruby manage lock/unlock automatically via synchronize (don’t use lock/unlock unless necessary)
✅ This is one of the easiest and safest ways to avoid race conditions in multithreaded Ruby and Rails applications.
💡 Examples
1. Thread-safe counter:
mutex = Mutex.new
count = 0
threads = 5.times.map do
Thread.new do
100.times do
mutex.synchronize do
count += 1
end
end
end
end
threads.each(&:join)
puts count # ✅ Always 500
2. Shared cache:
CACHE = {}
MUTEX = Mutex.new
def safe_write(key, value)
MUTEX.synchronize do
CACHE[key] = value
end
end
🔁 Alternative Concepts
- MonitorMixin — OOP-style synchronization for classes
- Queue — thread-safe data passing without manual locking
- concurrent-ruby — higher-level primitives (e.g. Concurrent::AtomicFixnum) for advanced use
❓ General Questions & Answers
Q1: What is a synchronized block?
A: It’s a section of code protected by a Mutex
, so only one thread can enter it at a time.
Q2: Do I need to unlock manually?
A: No. synchronize
handles locking and unlocking automatically, even if an error happens.
🛠️ Technical Questions & Answers
Q1: What is a synchronized block in Ruby?
A: A synchronized block is a section of code that only one thread can run at a time. It is wrapped in mutex.synchronize { ... }
to prevent race conditions.
mutex = Mutex.new
mutex.synchronize do
# Only one thread can run this code at a time
shared_data += 1
end
✅ This keeps shared data safe when accessed by multiple threads.
Q2: What happens if multiple threads try to enter the same synchronize
block?
A: The first thread enters and locks the block. Other threads wait until the mutex is released.
threads = 3.times.map do |i|
Thread.new do
mutex.synchronize do
puts "Thread #{i} is running"
sleep(1)
end
end
end
threads.each(&:join)
✅ Output is never mixed — each thread runs its block one by one.
Q3: Do I need to unlock the mutex manually?
A: No. The synchronize
method handles both locking and unlocking for you — even if an error happens.
mutex.synchronize do
risky_operation # If this fails, the lock still gets released
end
✅ This is safer than using mutex.lock
and mutex.unlock
manually.
Q4: Can I nest synchronized blocks?
A: Not with the same Mutex. Ruby's Mutex is not reentrant, so locking it again from the same thread raises ThreadError. If you need reentrant (nestable) locking, use Monitor instead. Nesting different mutexes can also deadlock if they aren't locked in a consistent order.
mutex.synchronize do
  # do something
  mutex.synchronize do
    # ❌ ThreadError: deadlock; recursive locking (Mutex is not reentrant)
  end
end
⚠️ Avoid locking different mutexes inside each other unless you're 100% sure of the locking order.
Q5: When should I use mutex.synchronize
in Rails?
A: Use it when you're writing multi-threaded code (e.g., background jobs, concurrent caching, or custom threads) and modifying shared data in memory.
CACHE = {}
MUTEX = Mutex.new
def safe_write(key, value)
MUTEX.synchronize do
CACHE[key] = value
end
end
✅ This ensures that two jobs don’t overwrite the same cache key at the same time.
✅ Best Practices with Examples
1. ✅ Always use mutex.synchronize
when accessing shared data
If multiple threads access or change the same variable, use a mutex to make sure only one thread does it at a time.
mutex = Mutex.new
counter = 0
threads = 10.times.map do
Thread.new do
100.times do
mutex.synchronize { counter += 1 }
end
end
end
threads.each(&:join)
puts counter # ✅ Always 1000
2. ✅ Keep the synchronized block short and fast
Don’t put slow operations like sleep
, file reads, or API calls inside a synchronized block. It slows all other threads.
# ✅ Good
mutex.synchronize do
update_score
end
sleep(1) # ⛔ Do outside the mutex
3. ✅ Don’t manually lock/unlock if you can use synchronize
synchronize
automatically handles unlocking, even if there's an error. It's safer than doing it yourself.
# ✅ Recommended
mutex.synchronize do
perform_work
end
# ❌ Risky (may forget to unlock)
mutex.lock
perform_work
mutex.unlock
4. ✅ Use a dedicated mutex for each shared resource
If you’re protecting more than one variable (e.g., multiple caches), use separate mutexes to avoid unnecessary blocking.
USER_MUTEX = Mutex.new
CACHE_MUTEX = Mutex.new
USER_MUTEX.synchronize { update_user_data }
CACHE_MUTEX.synchronize { update_cache }
5. ✅ Avoid nested synchronize
blocks on different mutexes
Nesting locks can lead to deadlocks if threads lock them in different orders.
# ❌ Risky if another thread locks these in opposite order
mutex1.synchronize do
mutex2.synchronize do
update_state
end
end
✅ Use consistent lock order across all threads or avoid nesting when possible.
6. ✅ Use synchronize
for safe thread logging
If multiple threads log to STDOUT or a file, the output can get jumbled. Use a mutex to serialize log writes.
LOG_MUTEX = Mutex.new
Thread.new do
LOG_MUTEX.synchronize { puts "Thread-safe log line" }
end
7. ✅ Use MonitorMixin
if building thread-safe classes
If you're creating an object that will be accessed by threads, consider using MonitorMixin
for built-in thread safety.
require 'monitor'
class SafeCounter
include MonitorMixin
def initialize
super()
@count = 0
end
def increment
synchronize { @count += 1 }
end
end
🌍 Real-world Scenario
In a Rails API, multiple threads were incrementing a daily_request_count
counter for each user.
Occasionally, the value jumped incorrectly due to threads updating it at the same time.
Fix:
MUTEX = Mutex.new # a constant, since a method body can't see outer local variables

def increment_request_count(user)
  MUTEX.synchronize do
    user.daily_request_count += 1
    user.save!
  end
end
✅ This ensured that only one thread could update the value at a time, fixing the inconsistency.
Testing Race Conditions (Debugging & Multithreaded Testing)
🧠 Detailed Explanation
A race condition is a bug that happens when two or more threads try to read or write the same variable at the same time. The problem? The final result depends on which thread runs first — and that changes every time your code runs.
These bugs are very hard to find because they don’t always happen. Sometimes your program works fine. Other times, things break or give the wrong result. That’s why we need to test for race conditions on purpose.
So how do you test for race conditions?
- 🧵 Create multiple threads
- ⏱ Add sleep or delays between steps to make timing issues more likely
- 🔁 Repeat your tests many times
- ✅ Check if the final result is correct (like a number adding up properly)
For example, if 5 threads each add 1 to a counter 100 times, the total should be 500. If it’s not — then you’ve found a race condition.
In Rails, you can also test race conditions in your models by writing custom specs that simulate multiple users or background jobs accessing the same data at once.
Why it’s important:
- 💰 In finance apps, race conditions can lose or double money
- 💬 In messaging apps, they can duplicate or drop messages
- 🛒 In shopping carts, they can charge the wrong amount
✅ Finding and fixing race conditions early makes your app more stable and safe — especially when you start using threads or background jobs.
💡 Examples
1. Simulating a race condition (unsafe code):
counter = 0
threads = 5.times.map do
Thread.new do
100.times do
val = counter
sleep(0.001) # simulate race
counter = val + 1
end
end
end
threads.each(&:join)
puts counter # ❌ Often not 500
2. Testing fixed code using Mutex
:
mutex = Mutex.new
counter = 0
threads = 5.times.map do
Thread.new do
100.times do
mutex.synchronize do
counter += 1
end
end
end
end
threads.each(&:join)
puts counter # ✅ Always 500
3. Detecting issues with RSpec and sleep
:
it "should be thread-safe" do
user = User.create!(balance: 0)
threads = 5.times.map do
Thread.new do
10.times do
user.with_lock do
user.balance += 1
user.save!
end
end
end
end
threads.each(&:join)
expect(user.reload.balance).to eq(50) # ✅ Confirm thread safety
end
🔁 Alternative Concepts
- Use Concurrent::AtomicFixnum (from concurrent-ruby) — sketched below
- Use database transactions with with_lock
- Use external tools like ThreadFuzz or Valgrind (for native code)
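For the first alternative, a minimal sketch using concurrent-ruby's atomic counter. No mutex is needed because each increment is a single atomic operation:
require 'concurrent'

counter = Concurrent::AtomicFixnum.new(0)
threads = 5.times.map do
  Thread.new { 100.times { counter.increment } }
end
threads.each(&:join)
puts counter.value # ✅ Always 500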
❓ General Questions & Answers
Q1: How can I trigger race conditions during testing?
A: Use Thread.new
to run code in parallel, and insert sleep
calls between reads and writes to simulate thread timing issues.
Q2: Are race conditions always visible?
A: No. They may only appear occasionally. That’s why running tests repeatedly or with randomized timing is useful.
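Putting both answers together, a simple repeated stress run (the counts are arbitrary) raises on the first lost update:
20.times do |run|
  counter = 0
  mutex = Mutex.new
  threads = 8.times.map do
    Thread.new { 1_000.times { mutex.synchronize { counter += 1 } } }
  end
  threads.each(&:join)
  raise "run #{run}: lost updates (got #{counter})" unless counter == 8_000
end
puts "All runs consistent ✅"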
🛠️ Technical Questions & Answers
Q1: What is a race condition in Ruby or Rails?
A: A race condition happens when two or more threads read/write shared data at the same time, and the result depends on who finishes first. This can lead to incorrect or inconsistent data.
counter = 0
threads = 2.times.map do
Thread.new do
100.times do
value = counter
sleep(0.001)
counter = value + 1
end
end
end
threads.each(&:join)
puts counter # ❌ Might not be 200
❗ This is a race condition because both threads might read the same value before updating it.
Q2: How do you test if code has a race condition?
A: Run the code in multiple threads, add artificial delays (like sleep
), and check if the final result is wrong.
it "is not thread-safe" do
total = 0
threads = 5.times.map do
Thread.new do
100.times do
value = total
sleep(0.001)
total = value + 1
end
end
end
threads.each(&:join)
expect(total).not_to eq(500) # ❗ Passes when updates are lost, demonstrating the race
end
Q3: How can I fix a race condition in my test or code?
A: Use a Mutex
to make the block of code thread-safe. It ensures only one thread can run that block at a time.
mutex = Mutex.new
total = 0
threads = 5.times.map do
Thread.new do
100.times do
mutex.synchronize { total += 1 }
end
end
end
threads.each(&:join)
puts total # ✅ Always 500
Q4: How do I test for race conditions in ActiveRecord models?
A: Use with_lock
to simulate concurrent DB updates and protect rows from being updated by multiple threads at once.
user = User.create!(balance: 0)
threads = 5.times.map do
Thread.new do
10.times do
user.with_lock do
user.balance += 1
user.save!
end
end
end
end
threads.each(&:join)
puts user.reload.balance # ✅ Should be 50
Q5: Can race conditions happen even if my tests pass once?
A: Yes. Race conditions are inconsistent. Run tests multiple times or increase thread count and add sleep
to increase the chance of failure during testing.
100.times do
run_race_test
end
✅ Repetition increases your chances of catching timing bugs.
✅ Best Practices
- ☑️ Use mutex.synchronize for shared variables
- ☑️ Use ActiveRecord#with_lock for DB row safety
- ☑️ Add sleep/delay in tests to expose issues
- ☑️ Use thread stress tests for critical sections
- ☑️ Run tests multiple times — race conditions are inconsistent!
🌍 Real-world Scenario
A Rails fintech app had two background jobs that adjusted the same user wallet at the same time. Occasionally, both read the same balance and saved it back — overwriting the other.
How they tested it:
- Simulated the jobs using threads and sleep to create conflict
- Verified that the final balance was incorrect
- Fixed it using user.with_lock to wrap all updates
✅ The fix ensured that only one job updated the balance at a time — no more lost funds.
Logging Thread Activity (Debugging Multithreaded Code)
🧠 Detailed Explanation
When your Ruby or Rails app uses multiple threads, they all run at the same time — doing work independently. That’s great for speed, but it can make debugging very hard.
To know what each thread is doing, you should log their activity. Logging helps you see:
- 🟢 When a thread starts
- 🔄 What step it is on
- 🚪 When it finishes
- ⛔ If it gets stuck or crashes
Each thread should log its own Thread ID, a timestamp, and a message like “started”, “doing X”, or “finished”. This way, you can follow the thread’s journey and debug if something goes wrong.
Why it's important:
- ✅ Helps you understand the order of execution
- ✅ Lets you catch delays or blocked threads
- ✅ Useful in testing and production monitoring
✅ Even simple puts logs with timestamps and thread IDs can help a lot. For production apps, use Rails’ Logger and tag each log with the thread.
🧠 Think of it like security cameras for your threads — they tell you who did what, and when.
💡 Examples
1. Basic thread logging with timestamps:
threads = 5.times.map do |i|
  Thread.new do
    puts "[#{Time.now}] Thread #{i} starting"
    sleep(rand(0.5..1.5))
    puts "[#{Time.now}] Thread #{i} finished"
  end
end
threads.each(&:join) # map collects the threads so join actually waits for them
2. Logging with thread ID and mutex:
mutex = Mutex.new
Thread.new do
mutex.synchronize do
puts "[#{Time.now}] Thread #{Thread.current.object_id} entered critical section"
sleep(1)
puts "[#{Time.now}] Thread #{Thread.current.object_id} exiting"
end
end.join
3. Log blocked thread statuses:
Thread.list.each do |t|
puts "Thread: #{t.object_id}, Status: #{t.status}, Alive: #{t.alive?}"
end
🔁 Alternative Concepts
- Rails.logger for production logging
- TaggedLogging to include thread IDs in Rails logs (see the sketch below)
- Logger.new("thread.log") for thread-specific log files
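A minimal sketch of the TaggedLogging idea mentioned above (assuming ActiveSupport is available, as in any Rails app): wrap a logger and tag each line with the current thread's id.
require 'active_support'
require 'active_support/tagged_logging'
require 'logger'

logger = ActiveSupport::TaggedLogging.new(Logger.new($stdout))

3.times.map do |i|
  Thread.new do
    logger.tagged("thread-#{Thread.current.object_id}") do
      logger.info("worker #{i} reporting in")
    end
  end
end.each(&:join)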
❓ General Questions & Answers
Q1: Why should I log thread activity?
A: Because it helps you see what each thread is doing, when, and whether it gets stuck or overlaps with others.
Q2: What should I log from each thread?
A: Log the thread ID, timestamp, step name (start/end), and any data being read/written.
🛠️ Technical Questions & Answers
Q1: How do I log which thread is doing what?
A: Use Thread.current
and log its object_id
or name
. Include a timestamp and a message.
puts "[#{Time.now}] Thread #{Thread.current.object_id} - Started"
# Do some work
puts "[#{Time.now}] Thread #{Thread.current.object_id} - Finished"
✅ This helps you track the flow of each thread independently.
Q2: Can I use Rails logger inside threads?
A: Yes. You can use Rails.logger
to log inside any thread. For clarity, tag the logs with the thread ID.
Rails.logger.info("[Thread #{Thread.current.object_id}] Job started")
# ... work ...
Rails.logger.info("[Thread #{Thread.current.object_id}] Job ended")
✅ Tagged logs help debug concurrency issues in background jobs (e.g., Sidekiq, Puma threads).
Q3: How can I monitor all threads in my app?
A: Use Thread.list
to get all current threads. You can then check their status, whether they’re alive, or sleeping.
Thread.list.each do |t|
puts "Thread ID: #{t.object_id}, Status: #{t.status}, Alive: #{t.alive?}"
end
✅ This helps find threads that are stuck or not doing anything.
Q4: How do I prevent log messages from mixing together?
A: Use a Mutex
around logging to serialize messages and avoid overlapping output from different threads.
log_mutex = Mutex.new
Thread.new do
log_mutex.synchronize do
puts "[#{Time.now}] Log safely from thread"
end
end
✅ This prevents log lines from multiple threads being jumbled.
Q5: Can I write thread logs to a file?
A: Yes. Use Ruby’s built-in Logger
and create a separate file or tag per thread.
require 'logger'
logger = Logger.new("log/thread_#{Thread.current.object_id}.log")
logger.info("Thread started work")
✅ This creates dedicated logs for each thread — useful for deep debugging or background job tracing.
✅ Best Practices
- ☑️ Always include timestamps and thread ID in logs
- ☑️ Use Logger instead of puts in real apps
- ☑️ Prefix logs with meaningful tags like [start], [end], or [error]
- ☑️ Avoid logging sensitive data from multiple threads
- ☑️ Use Thread.list to monitor thread health in production
🌍 Real-world Scenario
A Rails app using Sidekiq had a job that sometimes froze under load. The dev team added thread logging to see what was happening.
Fix:
- Logged every job's thread ID, start/end time, and job name
- Used Thread.list to print stuck jobs
- Identified a shared file write causing the hang — fixed with a mutex
✅ After adding proper logs and a mutex, the freeze was resolved and the team had full visibility into concurrent job flow.
Profiling & Memory Leaks with Threads
🧠 Detailed Explanation
When you use threads in a Ruby or Rails app, each thread uses memory and system resources. If threads don’t finish properly — or you create too many — your app can start using too much memory or slow down. This is called a memory leak.
Example:
- 🔄 You create a new thread inside a loop
- 🚫 You don’t stop the thread or wait for it to finish
- 💥 Threads pile up, and your app gets slower over time
To catch this, you can use profiling tools — they help you measure memory usage, track threads, and spot problems.
You can also log how many threads are running using Thread.list.size.
Why this matters:
- 🧠 Too many threads = high memory use
- ⛔ Leaked threads never get cleaned up
- 🐢 Memory leaks make apps slow or crash
How to fix it:
- ✅ Always call thread.join if you use Thread.new
- ✅ Don’t run infinite loops without a stop condition
- ✅ Use tools like memory_profiler or ObjectSpace to inspect memory and thread objects
- ✅ In production apps, prefer background workers (e.g. Sidekiq) or thread pools (see the pool sketch below)
🧠 Think of threads like people in a room — if you let too many in and don’t ask them to leave, it gets overcrowded. Profiling tools help you count them and see who’s still inside.
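As referenced in the fix list above, a bounded pool from concurrent-ruby caps how many threads exist at once. A sketch with an assumed pool size and a placeholder do_work task:
require 'concurrent'

pool = Concurrent::FixedThreadPool.new(5) # at most 5 worker threads, ever

100.times do |i|
  pool.post { do_work(i) } # do_work is a placeholder for your real task
end

pool.shutdown             # stop accepting new work
pool.wait_for_termination # block until queued work finishes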
💡 Examples
1. Detecting thread leaks with Thread.list
:
puts "Total threads: #{Thread.list.size}"
Thread.list.each do |t|
puts "Thread #{t.object_id} - Status: #{t.status}, Alive: #{t.alive?}"
end
2. Checking memory in a Rails app with memory_profiler:
require 'memory_profiler'
report = MemoryProfiler.report do
100.times { Thread.new { sleep(1) } }
end
report.pretty_print
3. Profiling long-lived threads with ObjectSpace:
require 'objspace'
ObjectSpace.each_object(Thread) do |thread|
puts "Thread #{thread.object_id}, Status: #{thread.status}"
end
🔁 Alternative Concepts
- derailed_benchmarks — for full Rails memory profiling
- heap-profiler — for tracking retained memory
- System tools: htop, ps aux, or top
❓ General Questions & Answers
Q1: What causes memory leaks with threads?
A: Threads that are not properly closed (e.g. infinite loops or blocking code) stay alive and keep consuming memory.
Q2: How do I identify leaking threads?
A: Monitor thread count over time. If it grows constantly, you may have a leak. Use Thread.list
or memory profilers.
🛠️ Technical Questions & Answers
Q1: What is a memory leak in Ruby threads?
A: A memory leak happens when threads are created but never finished or cleaned up. These threads stay in memory and keep growing in number, eventually slowing down or crashing the app.
Example:
loop do
Thread.new do
# never ends, never joined
sleep(10)
end
end
❌ This creates hundreds of threads that stay alive forever — leaking memory.
Q2: How can I detect leaking threads in Ruby?
A: Use Thread.list
to see all current threads.
If this list keeps growing or never shrinks, you likely have a thread leak.
puts "Thread count: #{Thread.list.size}"
Thread.list.each do |t|
puts "Thread #{t.object_id} – status: #{t.status}, alive: #{t.alive?}"
end
Q3: How do I fix or prevent memory leaks in threads?
A: Make sure every thread ends properly and is joined. Never leave threads running in the background unless necessary.
threads = 10.times.map do
Thread.new do
do_something
end
end
threads.each(&:join) # ✅ Wait for threads to finish
✅ join
ensures threads are cleaned up and don’t stay in memory.
Q4: What tools can I use to profile memory used by threads?
A: Use gems like memory_profiler
or ObjectSpace
to see memory usage and how many thread objects exist.
require 'memory_profiler'
MemoryProfiler.report do
10.times { Thread.new { sleep(1) } }
end.pretty_print
require 'objspace'
ObjectSpace.each_object(Thread) do |thread|
puts "Thread: #{thread.object_id} – status: #{thread.status}"
end
Q5: Can Puma or Sidekiq cause memory issues with threads?
A: Yes. If worker threads perform heavy work, hold large variables, or fail to release resources, they can cause memory leaks.
Solution:
- ☑️ Set Thread.current[:data] = nil to release memory manually
- ☑️ Use ensure blocks to clean up after jobs
- ☑️ Tune Puma's and Sidekiq's max thread pool settings (see the config sketch below)
✅ Best Practices
- ☑️ Use Thread#join to wait for threads to finish
- ☑️ Monitor Thread.list.size regularly in dev & prod
- ☑️ Use memory profilers to find retained objects
- ☑️ Avoid spawning threads in loops without limits
- ☑️ Use thread pools for controlled concurrency
🌍 Real-world Scenario
A Rails app used threads to handle background API calls in controllers. Over time, the app slowed down and used 2–3x memory.
Fix:
- Replaced raw Thread.new calls with Concurrent::Future
- Used MemoryProfiler to trace uncollected thread objects
- Called join explicitly and moved logic to Sidekiq
✅ Memory usage stabilized and performance returned to normal.