Senior Data Engineer - Databricks

Job Level:  Vice President
Job Function:  Technical Product Management
Location:  Charlotte, NC, US, 28202

Employment Type:  Full Time
Requisition ID:  6204

 SMBC Group is a top-tier global financial group. Headquartered in Tokyo and with a 400-year history, SMBC Group offers a diverse range of financial services, including banking, leasing, securities, credit cards, and consumer finance. The Group has more than 130 offices and 80,000 employees worldwide in nearly 40 countries. Sumitomo Mitsui Financial Group, Inc. (SMFG) is the holding company of SMBC Group, which is one of the three largest banking groups in Japan. SMFG’s shares trade on the Tokyo, Nagoya, and New York (NYSE: SMFG) stock exchanges.

 

In the Americas, SMBC Group has a presence in the US, Canada, Mexico, Brazil, Chile, Colombia, and Peru. Backed by the capital strength of SMBC Group and the value of its relationships in Asia, the Group offers a range of commercial and investment banking services to its corporate, institutional, and municipal clients. It connects a diverse client base to local markets and the organization’s extensive global network. The Group’s operating companies in the Americas include Sumitomo Mitsui Banking Corp. (SMBC), SMBC Nikko Securities America, Inc., SMBC Capital Markets, Inc., SMBC MANUBANK, JRI America, Inc., SMBC Leasing and Finance, Inc., Banco Sumitomo Mitsui Brasileiro S.A., and Sumitomo Mitsui Finance and Leasing Co., Ltd.

 

The anticipated salary range for this role is between $142,000.00 and $196,000.00. The specific salary offered to an applicant will be based on their individual qualifications, experiences, and an analysis of the current compensation paid in their geography and the market for similar roles at the time of hire. The role may also be eligible for an annual discretionary incentive award. In addition to cash compensation, SMBC offers a competitive portfolio of benefits to its employees.

Role Description

The Databricks Developer is responsible for implementing, supporting, and enhancing the internal fraud detection platform by developing scalable data pipelines, integrating batch processing methods, and ensuring the platform aligns with the bank’s risk management, legal, and regulatory requirements for fraud detection and prevention.
This role requires deep functional and technical expertise in Databricks development, including strong development skills in PySpark and the Azure cloud ecosystem, as well as proven expertise in designing and managing CI/CD pipelines using tools such as Azure DevOps, GitHub, or similar.
The developer will work closely with business units and support teams to deliver the initial application and system enhancements, perform upgrades, and provide on-call user support. The ideal candidate holds a degree in Computer Science or a related field and has at least 5 years of professional experience in data engineering and cloud-based development.
Key Responsibilities:

Role Objectives: Delivery

•    Design, develop, and optimize large-scale batch data pipelines using Databricks and PySpark on the Azure cloud platform.
•    Lead technical architecture and implementation of Azure-based solutions, supporting cloud migration and consolidation initiatives.
•    Build and maintain ETL processes, ensuring seamless data integration and high data quality across diverse sources.
•    Develop orchestration workflows using Azure Functions, Azure Data Factory (ADF), Logic Apps, and other Azure services.
•    Design and manage CI/CD pipelines using tools such as Azure DevOps, GitHub, or similar.
•    Implement secure and scalable solutions leveraging Blob Storage, Key Vault, Managed Identities, and Azure DevOps.
•    Provide technical guidance and support for architectural decisions and platform enhancements.
•    Own end-to-end project delivery, working closely with business stakeholders, IT teams, and third-party vendors.
•    Incorporate a variety of data processing techniques, including batch and streaming workflows, while exposing and integrating APIs and external services into Databricks pipelines to enhance platform functionality and enable seamless data exchange across systems.
•    Review and contribute to core code changes, ensuring best practices and supporting production deployments.
•    Develop and implement disaster recovery strategies for cloud-based applications.

Qualifications and Skills

Required Qualifications:
•    Bachelor’s degree in Computer Science, Information Systems, or a related technical field.
•    Minimum 5 years of experience in data engineering, with a focus on Databricks, PySpark, and Azure.
•    Strong understanding of data integration, transformation, and migration strategies.
•    Experience with CI/CD pipelines and version control using Azure DevOps or GitHub.
•    Excellent problem-solving skills and ability to resolve moderately complex technical challenges.
•    Strong communication and collaboration skills.

Additional Requirements

SMBC’s employees participate in a hybrid workforce model that provides employees with an opportunity to work from home, as well as from an SMBC office. SMBC requires that employees live within a reasonable commuting distance of their office location. Prospective candidates will learn more about their specific hybrid work schedule during the interview process. Hybrid work may not be permitted for certain roles, including, for example, certain FINRA-registered roles for which in-office attendance for the entire workweek is required.

 

SMBC provides reasonable accommodations during candidacy for applicants with disabilities consistent with applicable federal, state, and local law. If you need a reasonable accommodation during the application process, please let us know at accommodations@smbcgroup.com.


Nearest Major Market: Charlotte