Job Description
JOB ID: 488824
TITLE: Splunk Administrator-Systems Engineer
LOCATION: San Jose, CA, USA
SKILLS: Splunk

=== JOB DETAILS ===
Date Posted: 09/19/2025
Hiring Organization: Rose International
Position Number: 488824
Industry: Retail
Job Title: Splunk Administrator-Systems Engineer
Job Location: San Jose, CA, USA, 95123
Work Model: Hybrid
Work Model Details: 3 days onsite; 2 days remote
Employment Type: Temporary
FT/PT: Full-Time
Estimated Duration (in months): 3
Min Hourly Rate ($): 60.00
Max Hourly Rate ($): 70.00
Must-Have Skills/Attributes: Splunk
Experience Desired: Splunk Enterprise administration at scale (multi-TB/day), 3+ years

=== JOB DESCRIPTION ===

Basic Qualifications
• 3–5+ years of hands-on Splunk Enterprise administration at scale (multi-TB/day), including indexer clustering, search head clustering (SHC), deployer/deployment server, and license management.
• Strong SPL and performance tuning (tstats, data models, accelerations, base/inline searches).
• Data onboarding expertise: forwarders/syslog/HEC; props/transforms; timestamping/line-breaking; field extractions; retention planning.
• Linux administration plus scripting (bash/Python); networking/TLS fundamentals.
• Experience operating NFS-backed indexers.
• Nice to have: Splunk Architect certification; ES/ITSI/MLTK/SOAR; familiarity with data-science/ML concepts.

The Most Important Ways the Person Doing the Job Should Spend Their Time Are…
• Keeping a multi-site Splunk Enterprise deployment (indexer clustering + SHC) healthy: upgrades/patching, daily/weekly health checks, capacity and license management, DR tests.
• Onboarding data cleanly and securely: forwarders/syslog/HEC; sourcetypes, props/transforms, timestamping/line-breaking, field extractions, retention.
• Improving performance and reliability: monitoring ingestion/search performance, queues, and storage/bucket health; removing bottlenecks; tuning searches and data models.
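As a rough illustration of the kind of SPL performance tuning the role calls for (the index and sourcetype names here are hypothetical): an event count over raw data such as `index=web_proxy sourcetype=access_combined | timechart span=1h count` scans every matching event, while an equivalent tstats search reads only indexed metadata:

```
| tstats count
    where index=web_proxy sourcetype=access_combined
    by _time span=1h
```

Against an accelerated data model, the same idea extends to `| tstats summariesonly=true count from datamodel=Web by _time span=1h`, trading raw-event flexibility for summary-backed speed.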
• Enabling users: creating/optimizing SPL searches, dashboards, and alerts; advising engineers, SREs, and SecOps on best practices and troubleshooting.

The Most Important Duties Are…
• Operate and harden a multi-site Splunk Enterprise environment (indexer clustering, SHC, deployer/deployment server, RBAC, app lifecycle).
• Monitor and tune ingestion, search, and storage (RF/SF validation; bucket health; NFS tuning; queue depths).
• Lead data onboarding projects across on-prem, SaaS, cloud (Azure/AWS), and Kubernetes sources; ensure auditability and compliance with data-handling policies.
• Build and optimize SPL, dashboards, and alerts; coach consumers on SPL and performance patterns (tstats, accelerations, base/inline searches).
• Maintain DR posture and execute/verify failovers.

What This Job Needs to Be Successful (Traits and Characteristics)
• 3–5+ years administering Splunk Enterprise at multi-TB/day scale, including indexer clustering and SHC in multi-site deployments.
• Expert SPL and performance tuning (tstats, data models/accelerations, search optimization).
• Deep data-onboarding skills (forwarders/syslog/HEC) and props.conf/transforms.conf mastery (timestamps, line-breaking, field extraction, value normalization).
• Strong Linux administration and scripting (bash, Python); networking/TLS fundamentals.
• Experience with NFS-backed indexers (operational tuning and gotchas).
• Clear communicator with a customer-enablement mindset; documents well; bias for automation.
• Nice to have: Splunk Architect certification; experience with ES, ITSI, MLTK, and SOAR; familiarity with data-science/ML concepts (to partner with teams, not to lead research).

The Simplest and Easiest Way to See That This Job Is Done Well Is…
• Cluster health green: RF/SF consistently met; successful failover tests.
• Low ingest error rate and low data latency to index; stable license utilization.
• Search KPIs: median and P95 search times within agreed SLOs; reduced scheduler skipped-search rates.
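A minimal sketch of the props.conf discipline described above, assuming a hypothetical JSON application log (the sourcetype name and timestamp layout are illustrative, not from the posting):

```
# props.conf -- onboarding a hypothetical JSON app log
[acme:app:json]
# one event per line; disable multi-line merging
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# anchor timestamp parsing to the event's own field
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
# guard against pathological event sizes
TRUNCATE = 10000
```

Getting these settings right at onboarding time is what keeps timestamps correct, avoids broken line-merging, and prevents the "unknown sourcetype" noise called out in the success criteria below.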
• Clean data: correct timestamps, few unknown sourcetypes, stable field-extraction accuracy.
• User outcomes: growing self-service usage, actionable dashboards/alerts, and satisfied internal customers (shorter MTTR for incidents).
• No audit/compliance exceptions related to Splunk data handling or access controls.

Environment (Context)
• ~14,000 employees; ~500 active Splunk users
• ~3 TB/day ingest from ~100 sources; NFS-backed storage
• Sources span on-prem apps/appliances/network devices, SaaS, private cloud/K8s, and Azure & AWS

To apply, submit your resume and a brief note describing one difficult Splunk performance issue you solved: symptoms → root cause → fix → before/after metrics.