Davide Santangelo

Advanced Ruby Regular Expressions: Mastering Pattern Matching and Text Processing

Introduction

Regular expressions in Ruby represent one of the most powerful text processing capabilities available to developers. While many tutorials cover basic pattern matching, this comprehensive guide delves into advanced techniques, performance optimization strategies, and sophisticated use cases that push the boundaries of what's possible with Ruby's regex engine.

Ruby's regex implementation is built on the Onigmo library (a fork of Oniguruma), providing extensive Unicode support, advanced features, and excellent performance characteristics. This article explores the depths of Ruby's regex capabilities, from meta-programming with dynamic patterns to building complex parsers and analyzers.

Ruby's Regex Engine Architecture

The Onigmo Foundation

Ruby's regex engine is based on Onigmo, which provides several key advantages (a short demonstration follows the list):

  • Backtracking with optimization: Smart backtracking that avoids catastrophic performance issues
  • Unicode normalization: Full Unicode support with proper case folding and character classes
  • Named capture groups: Advanced grouping with semantic meaning
  • Conditional expressions: Pattern matching based on previous captures
  • Atomic grouping: Non-backtracking groups for performance optimization
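
A quick taste of a few of these features (a minimal sketch; the sample strings are purely illustrative):

# Named capture groups give matches semantic structure
m = "2024-01-15".match(/\A(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})\z/)
m[:year]  # => "2024"

# Backreference to a named group: the closing tag must mirror the opening tag
"<b>bold</b>".match?(/<(?<tag>\w+)>.*<\/\k<tag>>/)  # => true

# Atomic grouping: the engine never backtracks into (?>\d+), so this cannot match
"123".match(/(?>\d+)3/)  # => nil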

Compilation and Caching

Ruby automatically compiles and caches regex patterns, but understanding this process is crucial for optimization:

# Pattern compilation happens once
COMPILED_REGEX = /complex_pattern/i

# Dynamic patterns require recompilation
def dynamic_pattern(input)
  /#{Regexp.escape(input)}/i  # Compiled each time
end

# Optimization with memoization
class RegexCache
  def initialize
    @cache = {}
  end

  def pattern(key)
    @cache[key] ||= Regexp.new(key, Regexp::IGNORECASE)
  end
end
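
A quick usage sketch (the cache key here is just the raw pattern string):

cache = RegexCache.new
matcher = cache.pattern('error|warn')        # compiled on first request
cache.pattern('error|warn').equal?(matcher)  # => true, the same Regexp object is reused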

Advanced Pattern Construction Techniques

Dynamic Pattern Building

Creating patterns programmatically opens up powerful possibilities:

class AdvancedPatternBuilder
  def self.build_email_validator(domains: nil, allow_plus: true, strict_tld: false)
    # Drop '+' from the local part when plus addressing is not allowed
    local_part = allow_plus ? '[a-zA-Z0-9._%+-]+' : '[a-zA-Z0-9._%-]+'

    tld_part = strict_tld ? '(?:com|org|net|edu|gov)' : '[a-zA-Z]{2,}'

    if domains
      # A domain whitelist already includes each domain's TLD, so anchor on it directly
      domain_part = "(?:#{domains.map { |d| Regexp.escape(d) }.join('|')})"
      /\A#{local_part}@#{domain_part}\z/i
    else
      /\A#{local_part}@[a-zA-Z0-9.-]+\.#{tld_part}\z/i
    end
  end

  def self.build_log_parser(timestamp_format: :iso8601, severity_levels: %w[DEBUG INFO WARN ERROR])
    timestamp_pattern = case timestamp_format
    when :iso8601
      '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d{3})?Z?'
    when :syslog
      '\w{3}\s+\d{1,2}\s+\d{2}:\d{2}:\d{2}'
    else
      '[^\s]+'
    end

    severity_pattern = "(?:#{severity_levels.join('|')})"

    /^(?<timestamp>#{timestamp_pattern})\s+(?<severity>#{severity_pattern})\s+(?<message>.+)$/
  end
end

# Usage
email_regex = AdvancedPatternBuilder.build_email_validator(
  domains: %w[company.com subsidiary.net],
  allow_plus: false
)

log_regex = AdvancedPatternBuilder.build_log_parser(
  timestamp_format: :iso8601,
  severity_levels: %w[TRACE DEBUG INFO WARN ERROR FATAL]
)
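
Both builders return ordinary Regexp objects, so they work with match?, =~, and named captures as usual. A quick check (the sample addresses and log line are illustrative):

email_regex.match?('alice@company.com')    # => true
email_regex.match?('bob+tag@company.com')  # => false (plus addressing disabled)
email_regex.match?('carol@elsewhere.org')  # => false (domain not in the whitelist)

line = '2024-01-15T09:30:00.123Z ERROR payment gateway timed out'
log_regex.match(line)[:severity]  # => "ERROR"
log_regex.match(line)[:message]   # => "payment gateway timed out"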

Conditional Regex Patterns

Ruby supports conditional patterns that match based on previous captures:

# Match HTML tags with proper opening/closing
html_tag_pattern = /
  <(?<tag>\w+)(?:\s+[^>]*)?>  # Opening tag
  (?<content>.*?)             # Content
  <\/\k<tag>>                 # Closing tag matching opening
/xm

# Conditional matching based on context
phone_pattern = /
  (?<country>\+\d{1,3})?      # Optional country code
  (?<area>\(\d{3}\)|\d{3})    # Area code
  (?(<country>)               # If country code exists
    [-.\s]?                   # Optional separator
  |                           # Otherwise
    [-.\s]                    # Required separator
  )
  \d{3}[-.\s]?\d{4}          # Main number
/x
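
A couple of quick checks against illustrative sample strings:

html_tag_pattern.match('<em>important</em>')[:content]  # => "important"

phone_pattern.match('(555) 123-4567')[0]  # => "(555) 123-4567"
phone_pattern.match('(555)123-4567')      # => nil, the separator is required without a country code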

Advanced Matching Techniques

Lookahead and Lookbehind Assertions

Complex text processing often requires context-aware matching:

class AdvancedTextProcessor
  # Password validation with multiple requirements
  PASSWORD_REGEX = /
    \A
    (?=.*[a-z])         # Must contain lowercase
    (?=.*[A-Z])         # Must contain uppercase
    (?=.*\d)            # Must contain digit
    (?=.*[[:punct:]])   # Must contain punctuation
    (?!.*(.)\1{2,})     # No character repeated 3+ times
    .{8,}               # At least 8 characters
    \z
  /x

  # Extract fenced Markdown code blocks (language hint plus body).
  # Onigmo's lookbehind cannot contain unbounded patterns such as .*?, so excluding
  # blocks that sit inside HTML comments needs a separate pass rather than a lookbehind.
  CODE_BLOCK_REGEX = /
    ^```(\w+)?\n        # Opening fence with optional language
    (.*?)               # Code content
    \n```$              # Closing fence
  /mx

  # Match words not inside parentheses
  WORD_NOT_IN_PARENS = /
    (?<!\()             # Not preceded by opening paren
    \b\w+\b             # Word boundary
    (?![^()]*\))        # Not followed by closing paren without opening
  /x

  def self.extract_secure_passwords(candidates)
    # PASSWORD_REGEX is anchored with \A..\z, so test each candidate string as a whole
    candidates.select { |candidate| PASSWORD_REGEX.match?(candidate) }
  end

  def self.extract_code_blocks(markdown)
    markdown.scan(CODE_BLOCK_REGEX).map do |language, code|
      { language: language&.strip, code: code.strip }
    end
  end
end
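
A minimal usage sketch (the Markdown snippet and candidate passwords are illustrative):

doc = <<~MARKDOWN
  Some prose.

  ```ruby
  puts "hello"
  ```
MARKDOWN

AdvancedTextProcessor.extract_code_blocks(doc)
# => [{ language: "ruby", code: "puts \"hello\"" }]

AdvancedTextProcessor.extract_secure_passwords(['Tr0ub4dor&3', 'weakpass'])
# => ["Tr0ub4dor&3"]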

Atomic Grouping and Possessive Quantifiers

Preventing backtracking for performance optimization:

class PerformanceOptimizedRegex
  # Atomic grouping prevents backtracking
  EFFICIENT_NUMBER_MATCH = /
    (?>                     # Atomic group
      \d+                   # One or more digits
      (?:\.\d+)?            # Optional decimal part
    )
    (?:\s|$)               # Followed by space or end
  /x

  # Possessive quantifiers for greedy matching without backtracking
  GREEDY_WORD_MATCH = /\w++/  # Possessive quantifier

  # Complex pattern with atomic grouping
  URL_EXTRACTOR = /
    (?>https?:\/\/)         # Protocol (atomic); slashes must be escaped inside a \/...\/ literal
    (?>[a-zA-Z0-9.-]++)     # Domain (possessive)
    (?::\d+)?               # Optional port
    (?>\/[^\s]*)?           # Optional path (atomic)
  /x

  def self.benchmark_patterns(text, iterations = 1000)
    require 'benchmark'

    Benchmark.bm(20) do |x|
      x.report("Regular pattern:") do
        iterations.times { text.scan(/\d+(?:\.\d+)?/) }
      end

      x.report("Atomic grouping:") do
        iterations.times { text.scan(EFFICIENT_NUMBER_MATCH) }
      end
    end
  end
end
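
A quick scan with the atomic URL pattern (the text is illustrative):

text = 'Docs at https://example.com/guide and https://ruby-lang.org'
text.scan(PerformanceOptimizedRegex::URL_EXTRACTOR)
# => ["https://example.com/guide", "https://ruby-lang.org"]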

Advanced Capture and Replacement Strategies

Named Captures with Complex Processing

require 'time' # Time.parse lives in the time stdlib

class AdvancedTextReplacer
  LOG_PATTERN = /
    (?<timestamp>\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})
    \s+
    (?<level>\w+)
    \s+
    (?<logger>[\w.]+)
    \s+
    (?<message>.+)
  /x

  def self.process_log_entries(log_text)
    log_text.gsub(LOG_PATTERN) do |match|
      captures = Regexp.last_match

      # Process timestamp
      timestamp = Time.parse(captures[:timestamp])
      formatted_time = timestamp.strftime("%Y-%m-%d %H:%M:%S UTC")

      # Normalize log level
      level = captures[:level].upcase.ljust(5)

      # Truncate logger name
      logger = captures[:logger].split('.').last.ljust(15)

      # Process message
      message = captures[:message].gsub(/\s+/, ' ').strip

      "[#{formatted_time}] #{level} #{logger} - #{message}"
    end
  end

  def self.advanced_string_interpolation(template, data)
    # Support complex expressions in templates
    template.gsub(/\{\{(.+?)\}\}/) do |match|
      expression = $1.strip

      # Handle method calls
      if expression.include?('.')
        parts = expression.split('.')
        result = data[parts.first.to_sym]
        parts[1..-1].each { |method| result = result.send(method) }
        result.to_s
      else
        data[expression.to_sym].to_s
      end
    end
  end
end
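
A usage sketch (the log line and template data are illustrative; the timestamp is parsed in the local time zone):

entry = '2024-01-15 09:30:00 info app.services.PaymentService Payment    accepted'
AdvancedTextReplacer.process_log_entries(entry)
# => "[2024-01-15 09:30:00 UTC] INFO  PaymentService  - Payment accepted"

template = 'Hello {{name}}, you have {{count}} new messages'
AdvancedTextReplacer.advanced_string_interpolation(template, { name: 'Ada', count: 3 })
# => "Hello Ada, you have 3 new messages"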

Contextual Replacements with Callbacks

class ContextualReplacer
  def initialize
    @replacements = {}
    @context_stack = []
  end

  def define_replacement(pattern, &block)
    @replacements[pattern] = block
  end

  def process_with_context(text, initial_context = {})
    @context_stack = [initial_context]

    result = text.dup
    @replacements.each do |pattern, replacement_proc|
      result = result.gsub(pattern) do |match|
        current_context = @context_stack.last
        captures = Regexp.last_match

        replacement_proc.call(match, captures, current_context)
      end
    end

    result
  end
end

# Usage example
replacer = ContextualReplacer.new

replacer.define_replacement(/\$\{(\w+)\}/) do |match, captures, context|
  var_name = captures[1]
  context[var_name.to_sym] || match
end

replacer.define_replacement(/\@include\(([^)]+)\)/) do |match, captures, context|
  filename = captures[1]
  context[:includes] ||= []

  if context[:includes].include?(filename)
    "<!-- Circular include detected: #{filename} -->"
  else
    context[:includes] << filename
    "<!-- Content of #{filename} would be included here -->"
  end
end
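
Running the replacer over a template then looks like this (the template and context are illustrative):

replacer.process_with_context(
  'Deploying ${app} with @include(header.html)',
  { app: 'billing-service' }
)
# => "Deploying billing-service with <!-- Content of header.html would be included here -->"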

High-Performance Text Processing

Streaming Regex Processing

For large files, streaming processing prevents memory issues:

class StreamingRegexProcessor
  def initialize(pattern, chunk_size: 8192)
    @pattern = pattern
    @chunk_size = chunk_size
    @buffer = ""
    @matches = []
  end

  def process_file(filename)
    File.open(filename, 'r') do |file|
      while chunk = file.read(@chunk_size)
        @buffer += chunk
        extract_complete_matches
      end

      # Process remaining buffer
      extract_final_matches
    end

    @matches
  end

  private

  def extract_complete_matches
    # Find matches that don't span chunk boundaries
    last_newline = @buffer.rindex("\n")
    return unless last_newline

    complete_text = @buffer[0..last_newline]
    @buffer = @buffer[last_newline + 1..-1]

    complete_text.scan(@pattern) { |match| @matches << match }
  end

  def extract_final_matches
    @buffer.scan(@pattern) { |match| @matches << match }
  end
end
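
A usage sketch, assuming a hypothetical access.log and an IPv4-style pattern:

processor = StreamingRegexProcessor.new(/\b\d{1,3}(?:\.\d{1,3}){3}\b/, chunk_size: 64 * 1024)
ip_addresses = processor.process_file('access.log')  # scans line-complete chunks as it reads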

Parallel Regex Processing

require 'parallel'

class ParallelRegexProcessor
  def self.process_large_dataset(data, pattern, num_threads: 4)
    # Split data into chunks
    chunk_size = (data.length / num_threads.to_f).ceil
    chunks = data.each_slice(chunk_size).to_a

    # Process chunks in parallel
    results = Parallel.map(chunks, in_threads: num_threads) do |chunk|
      chunk.map { |item| item.scan(pattern) }.flatten
    end

    results.flatten
  end

  def self.concurrent_file_processing(filenames, pattern)
    Parallel.map(filenames, in_threads: 4) do |filename|
      {
        filename: filename,
        matches: File.read(filename).scan(pattern),
        processed_at: Time.now
      }
    end
  end
end

Unicode and Internationalization

Advanced Unicode Handling

class UnicodeRegexProcessor
  # Unicode property classes
  UNICODE_PATTERNS = {
    letters: /\p{Letter}+/,
    digits: /\p{Digit}+/,
    punctuation: /\p{Punctuation}+/,
    currency: /\p{Currency_Symbol}/,
    math_symbols: /\p{Math_Symbol}/,
    emoji: /\p{Emoji}/
  }.freeze

  # Language-specific patterns
  LANGUAGE_PATTERNS = {
    japanese: /[\p{Hiragana}\p{Katakana}\p{Han}]+/,
    arabic: /\p{Arabic}+/,
    cyrillic: /\p{Cyrillic}+/,
    greek: /\p{Greek}+/
  }.freeze

  def self.extract_by_script(text, script)
    pattern = LANGUAGE_PATTERNS[script]
    return [] unless pattern

    text.scan(pattern)
  end

  def self.normalize_unicode_text(text)
    # Normalize Unicode combining characters
    text.unicode_normalize(:nfc)
        .gsub(/\p{Mn}/, '') # Remove combining marks if needed
        .gsub(/\s+/, ' ')   # Normalize whitespace
        .strip
  end

  def self.extract_multilingual_emails(text)
    # Email pattern supporting international domain names
    pattern = /
      [\p{Letter}\p{Digit}._%+-]+     # Local part with Unicode
      @
      [\p{Letter}\p{Digit}.-]+        # Domain with Unicode
      \.
      \p{Letter}{2,}                  # TLD with Unicode
    /x

    text.scan(pattern)
  end
end
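
A quick demonstration (sample strings are illustrative):

UnicodeRegexProcessor.extract_by_script('Tokyo 東京 and Kyoto 京都', :japanese)
# => ["東京", "京都"]

UnicodeRegexProcessor.extract_multilingual_emails('Contact: müller@beispiel.de')
# => ["müller@beispiel.de"]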

Building Complex Parsers

Recursive Descent Parser with Regex

class ExpressionParser
  PATTERNS = {
    number: /\d+(?:\.\d+)?/,
    identifier: /[a-zA-Z_]\w*/,
    operator: /[+\-*/]/,
    lparen: /\(/,
    rparen: /\)/,
    whitespace: /\s+/
  }.freeze

  def initialize(input)
    @input = input
    @tokens = tokenize
    @position = 0
  end

  def parse
    result = parse_expression
    raise "Unexpected token at end" unless at_end?
    result
  end

  private

  def tokenize
    tokens = []
    position = 0

    while position < @input.length
      matched = false

      PATTERNS.each do |type, pattern|
        if match = @input[position..-1].match(/\A#{pattern}/)
          unless type == :whitespace
            tokens << { type: type, value: match[0], position: position }
          end
          position += match[0].length
          matched = true
          break
        end
      end

      unless matched
        raise "Unexpected character at position #{position}: #{@input[position]}"
      end
    end

    tokens
  end

  def parse_expression
    left = parse_term

    while current_token&.dig(:type) == :operator && %w[+ -].include?(current_token[:value])
      operator = advance[:value]
      right = parse_term
      left = { type: :binary, operator: operator, left: left, right: right }
    end

    left
  end

  def parse_term
    left = parse_factor

    while current_token&.dig(:type) == :operator && %w[* /].include?(current_token[:value])
      operator = advance[:value]
      right = parse_factor
      left = { type: :binary, operator: operator, left: left, right: right }
    end

    left
  end

  def parse_factor
    if current_token&.dig(:type) == :number
      { type: :number, value: advance[:value].to_f }
    elsif current_token&.dig(:type) == :identifier
      { type: :identifier, name: advance[:value] }
    elsif current_token&.dig(:type) == :lparen
      advance # consume '('
      expr = parse_expression
      expect(:rparen)
      expr
    else
      raise "Unexpected token: #{current_token}"
    end
  end

  def current_token
    @tokens[@position]
  end

  def advance
    token = current_token
    @position += 1
    token
  end

  def expect(type)
    token = advance
    raise "Expected #{type}, got #{token&.dig(:type)}" unless token&.dig(:type) == type
    token
  end

  def at_end?
    @position >= @tokens.length
  end
end
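
Parsing a small expression shows the nested AST the parser builds:

ast = ExpressionParser.new('2 + 3 * (rate - 1)').parse
ast[:operator]          # => "+"
ast[:right][:operator]  # => "*"
ast[:right][:right]     # => { type: :binary, operator: "-",
                        #      left: { type: :identifier, name: "rate" },
                        #      right: { type: :number, value: 1.0 } }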

Configuration File Parser

class ConfigurationParser
  SECTION_PATTERN = /^\[([^\]]+)\]$/
  KEY_VALUE_PATTERN = /^([^=]+)=(.*)$/
  COMMENT_PATTERN = /^\s*[#;]/
  CONTINUATION_PATTERN = /\\$/

  def self.parse_ini_file(content)
    result = {}
    current_section = nil
    continued_line = nil

    content.each_line.with_index do |line, line_number|
      line = line.strip

      # Handle line continuation
      if continued_line
        line = continued_line + line
        continued_line = nil
      end

      if line.match(CONTINUATION_PATTERN)
        continued_line = line.gsub(CONTINUATION_PATTERN, '')
        next
      end

      # Skip empty lines and comments
      next if line.empty? || line.match(COMMENT_PATTERN)

      # Parse section headers
      if section_match = line.match(SECTION_PATTERN)
        current_section = section_match[1].strip
        result[current_section] ||= {}
        next
      end

      # Parse key-value pairs
      if kv_match = line.match(KEY_VALUE_PATTERN)
        key = kv_match[1].strip
        value = parse_value(kv_match[2].strip)

        if current_section
          result[current_section][key] = value
        else
          result[key] = value
        end
      else
        raise "Parse error at line #{line_number + 1}: #{line}"
      end
    end

    result
  end

  # NOTE: `private` does not affect methods defined with `def self.`;
  # parse_value is simply an internal helper called from parse_ini_file.
  def self.parse_value(value_str)
    # Handle quoted strings
    if value_str.match(/^"(.*)"$/) || value_str.match(/^'(.*)'$/)
      return $1
    end

    # Handle boolean values
    return true if value_str.match(/^(true|yes|on)$/i)
    return false if value_str.match(/^(false|no|off)$/i)

    # Handle numbers
    return value_str.to_i if value_str.match(/^\d+$/)
    return value_str.to_f if value_str.match(/^\d+\.\d+$/)

    # Handle arrays
    if value_str.include?(',')
      return value_str.split(',').map(&:strip).map { |v| parse_value(v) }
    end

    # Return as string
    value_str
  end
end
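
A usage sketch with a small inline INI document:

config = ConfigurationParser.parse_ini_file(<<~INI)
  # Application settings
  [server]
  host = "0.0.0.0"
  port = 8080
  tls = true

  [logging]
  levels = debug, info, warn
INI

config['server']['port']     # => 8080
config['logging']['levels']  # => ["debug", "info", "warn"]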

Advanced Debugging and Profiling

Regex Debugging Tools

class RegexDebugger
  def self.debug_pattern(pattern, test_string)
    puts "Pattern: #{pattern.inspect}"
    puts "Test String: #{test_string.inspect}"
    puts "Options: #{pattern.options}"
    puts

    if match = pattern.match(test_string)
      puts "Match found!"
      puts "Full match: #{match[0].inspect}"
      puts "Position: #{match.begin(0)}..#{match.end(0)}"

      if match.names.any?
        puts "\nNamed captures:"
        match.names.each do |name|
          value = match[name]
          puts "  #{name}: #{value.inspect}"
        end
      end

      if match.captures.any?
        puts "\nNumbered captures:"
        match.captures.each_with_index do |capture, index|
          puts "  #{index + 1}: #{capture.inspect}"
        end
      end
    else
      puts "No match found."

      # Try to find partial matches
      puts "\nTrying to find partial matches..."
      pattern.source.each_char.with_index do |_char, index|
        begin
          partial_pattern = Regexp.new(pattern.source[0..index])
        rescue RegexpError
          next # prefixes with unbalanced groups etc. are not valid patterns on their own
        end

        if partial_match = partial_pattern.match(test_string)
          puts "Partial match up to position #{index}: #{partial_match[0].inspect}"
        end
      end
    end
  end

  def self.performance_analysis(pattern, test_strings, iterations = 1000)
    require 'benchmark'

    puts "Performance Analysis for: #{pattern.inspect}"
    puts "Test strings: #{test_strings.length}"
    puts "Iterations: #{iterations}"
    puts

    Benchmark.bm(20) do |x|
      x.report("match:") do
        iterations.times do
          test_strings.each { |str| pattern.match(str) }
        end
      end

      x.report("match?:") do
        iterations.times do
          test_strings.each { |str| pattern.match?(str) }
        end
      end

      x.report("scan:") do
        iterations.times do
          test_strings.each { |str| str.scan(pattern) }
        end
      end
    end
  end
end

Real-World Applications

Log Analysis System

class LogAnalyzer
  LOG_PATTERNS = {
    apache: /
      (?<remote_addr>\S+)\s+
      (?<remote_logname>\S+)\s+
      (?<remote_user>\S+)\s+
      \[(?<time_local>[^\]]+)\]\s+
      "(?<request>[^"]*)"\s+
      (?<status>\d+)\s+
      (?<body_bytes_sent>\d+)\s+
      "(?<http_referer>[^"]*)"\s+
      "(?<http_user_agent>[^"]*)"
    /x,

    nginx: /
      (?<remote_addr>\S+)\s+-\s+
      (?<remote_user>\S+)\s+
      \[(?<time_local>[^\]]+)\]\s+
      "(?<request>[^"]*)"\s+
      (?<status>\d+)\s+
      (?<body_bytes_sent>\d+)\s+
      "(?<http_referer>[^"]*)"\s+
      "(?<http_user_agent>[^"]*)"
    /x,

    rails: /
      (?<timestamp>\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})\s+
      (?<level>\w+)\s+
      (?<message>.+)
    /x
  }.freeze

  def initialize(log_type)
    @pattern = LOG_PATTERNS[log_type.to_sym]
    raise "Unknown log type: #{log_type}" unless @pattern
    @stats = Hash.new(0)
  end

  def analyze_file(filename)
    results = {
      total_lines: 0,
      parsed_lines: 0,
      errors: [],
      statistics: {}
    }

    File.foreach(filename).with_index do |line, line_number|
      results[:total_lines] += 1

      if match = @pattern.match(line)
        results[:parsed_lines] += 1
        update_statistics(match, results[:statistics])
      else
        results[:errors] << {
          line_number: line_number + 1,
          content: line.strip
        }
      end
    end

    results
  end

  private

  def update_statistics(match, stats)
    # MatchData#[] raises IndexError for names missing from the pattern
    # (e.g. the rails pattern has no :status group), so guard with match.names
    names = match.names

    # Status code distribution
    if names.include?('status') && (status = match[:status])
      stats[:status_codes] ||= Hash.new(0)
      stats[:status_codes][status] += 1
    end

    # User agent analysis
    if names.include?('http_user_agent') && (user_agent = match[:http_user_agent])
      stats[:user_agents] ||= Hash.new(0)
      browser = extract_browser(user_agent)
      stats[:user_agents][browser] += 1
    end

    # Request method analysis
    if names.include?('request') && (request = match[:request])
      method = request.split.first
      stats[:methods] ||= Hash.new(0)
      stats[:methods][method] += 1
    end
  end

  def extract_browser(user_agent)
    case user_agent
    when /Chrome/i then 'Chrome'
    when /Firefox/i then 'Firefox'
    when /Safari/i then 'Safari'
    when /Edge/i then 'Edge'
    else 'Other'
    end
  end
end
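
A usage sketch, assuming a hypothetical nginx access-log path (the counts are illustrative):

analyzer = LogAnalyzer.new(:nginx)
report = analyzer.analyze_file('/var/log/nginx/access.log')

report[:parsed_lines]               # number of lines that matched the nginx pattern
report[:statistics][:status_codes]  # e.g. { "200" => 1523, "404" => 12 }
report[:errors].first(3)            # unparsed lines with their line numbers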

Data Validation Framework

class DataValidator
  def initialize
    @rules = []
  end

  def add_rule(name, pattern, message = nil)
    @rules << {
      name: name,
      pattern: pattern,
      message: message || "#{name} validation failed"
    }
  end

  def validate(data)
    results = {
      valid: true,
      errors: [],
      warnings: []
    }

    @rules.each do |rule|
      field_value = data[rule[:name]]
      next if field_value.nil?

      unless rule[:pattern].match?(field_value.to_s)
        results[:valid] = false
        results[:errors] << {
          field: rule[:name],
          value: field_value,
          message: rule[:message]
        }
      end
    end

    results
  end

  def self.build_common_validators
    validator = new

    # Email validation
    validator.add_rule(
      :email,
      /\A[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}\z/,
      "Invalid email format"
    )

    # Phone number validation
    validator.add_rule(
      :phone,
      /\A(?:\+?1[-.\s]?)?\(?[0-9]{3}\)?[-.\s]?[0-9]{3}[-.\s]?[0-9]{4}\z/,
      "Invalid phone number format"
    )

    # Credit card validation (basic format)
    validator.add_rule(
      :credit_card,
      /\A(?:4[0-9]{12}(?:[0-9]{3})?|5[1-5][0-9]{14}|3[47][0-9]{13}|3[0-9]{13}|6(?:011|5[0-9]{2})[0-9]{12})\z/,
      "Invalid credit card number"
    )

    # Strong password validation
    validator.add_rule(
      :password,
      /\A(?=.*[a-z])(?=.*[A-Z])(?=.*\d)(?=.*[[:punct:]]).{8,}\z/,
      "Password must be at least 8 characters with uppercase, lowercase, number, and special character"
    )

    validator
  end
end
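
Putting the common validators to work (the sample record is illustrative):

validator = DataValidator.build_common_validators

result = validator.validate({
  email: 'dev@example.com',
  phone: '555-867-5309',
  password: 'Sup3r$ecret'
})

result[:valid]   # => true
result[:errors]  # => []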

Conclusion

Ruby's regular expression capabilities extend far beyond simple pattern matching. The advanced techniques covered in this article—from dynamic pattern construction and conditional matching to streaming processing and Unicode handling—provide powerful tools for sophisticated text processing applications.

Key takeaways for mastering Ruby regex:

  1. Understand the engine: Onigmo's features enable advanced pattern matching techniques
  2. Optimize for performance: Use atomic grouping, possessive quantifiers, and compilation caching
  3. Leverage named captures: Make patterns self-documenting and maintainable
  4. Handle Unicode properly: Use Unicode property classes for international text processing
  5. Build reusable components: Create pattern builders and processors for common tasks
  6. Profile and debug: Use tools to understand pattern performance and behavior

The combination of Ruby's expressive syntax and powerful regex engine makes it an excellent choice for complex text processing tasks. By mastering these advanced techniques, developers can build robust, efficient, and maintainable text processing applications that handle real-world complexity with elegance and performance.

Whether you're building log analyzers, data validators, configuration parsers, or complex text processors, these advanced regex techniques provide the foundation for sophisticated Ruby applications that excel at pattern matching and text manipulation.
