The Databricks Developer will be responsible for designing, developing, and maintaining scalable data processing solutions on the Databricks platform, with a focus on integrating and transforming Customer datasets such as the Information Returns Master File (IRMF), Business Master File (BMF), and Individual Master File (IMF). This role requires advanced proficiency in Java and Apache Spark, and a deep understanding of big data processing, performance optimization, and secure data handling in a federal government environment.
Responsibilities
- Design, develop, and maintain scalable data pipelines using Apache Spark on Databricks
- Implement data processing logic in Java 8+, leveraging functional programming and OOP best practices
- Integrate with Customer data systems including IRMF, BMF, or IMF
- Optimize Spark jobs for performance, reliability, and cost-efficiency
- Collaborate with cross-functional teams to gather requirements and deliver data solutions
- Ensure compliance with data security, privacy, and governance standards
- Troubleshoot and debug production issues in distributed data environments
Requirements
- US Citizenship; Active IRS MBI Clearance; IRS laptop strongly preferred
- Bachelor's degree in Computer Science, Information Systems, or a related field
- 8+ years of professional experience demonstrating the required technical skills and responsibilities listed below:
- IRS Data Systems Experience
  - Hands-on experience working with IRMF, BMF, or IMF datasets
  - Understanding of Customer data structures, compliance, and security protocols
- Programming Language Proficiency
  - Strong expertise in Java 8 or higher
  - Experience with functional programming (Streams API, Lambdas)
  - Familiarity with object-oriented design patterns and best practices
- Apache Spark
  - Proficiency in Spark Core, Spark SQL, and the DataFrame/Dataset APIs
  - Understanding of RDDs and when to use them
  - Experience with Spark Streaming or Structured Streaming
  - Skilled in performance tuning and Spark job optimization
  - Ability to use the Spark UI to troubleshoot stages and tasks
- Big Data Ecosystem
  - Familiarity with HDFS, Hive, or HBase
  - Experience integrating with Kafka, S3, or Azure Data Lake
  - Comfort with Parquet, Avro, or ORC file formats
- Data Processing and ETL
  - Strong understanding of batch and real-time data processing paradigms
  - Experience building ETL pipelines with Spark
  - Proficient in data cleansing, transformation, and enrichment
- DevOps/Deployment
  - Experience with YARN, Kubernetes, or EMR for Spark deployment
  - Familiarity with CI/CD tools such as Jenkins or GitHub Actions
  - Monitoring experience with Grafana, Prometheus, Datadog, or Spark UI logs
- Version Control and Build Tools
  - Proficient in Git
  - Experience with Maven or Gradle
- Testing
  - Unit testing experience with JUnit or TestNG
  - Experience with Mockito or similar mocking frameworks
  - Data validation and regression testing for Spark jobs
- Soft Skills/Engineering Practices
  - Experience working in Agile/Scrum environments
  - Strong documentation skills (Markdown, Confluence, etc.)
  - Ability to debug and troubleshoot production issues effectively
Preferred Skills
- Experience with Scala or Python in Spark environments
- Familiarity with Databricks or Google Cloud Dataproc
- Knowledge of Delta Lake or Apache Iceberg
- Experience with data modeling and performance design for big data systems
About Us
For more than 20 years, NewGen Technologies has solved our clients’ toughest IT challenges with integrity, security, and outstanding service by delivering both technology and talent. We have helped secure borders, used artificial intelligence (AI) to fight terror, aided in the identification of criminals, and helped prevent crime through the introduction of biometrics. Our team of Highly Cleared Specialists has hard-to-find skills and expertise in a wide spectrum of technologies to provide solutions that transform business processes and solve problems of national significance. #CJ