Cloud Data Specialist (Azure) - Vice President
Charlotte, NC, US, 28202
SMBC Group is a top-tier global financial group. Headquartered in Tokyo and with a 400-year history, SMBC Group offers a diverse range of financial services, including banking, leasing, securities, credit cards, and consumer finance. The Group has more than 130 offices and 80,000 employees worldwide in nearly 40 countries. Sumitomo Mitsui Financial Group, Inc. (SMFG) is the holding company of SMBC Group, which is one of the three largest banking groups in Japan. SMFG’s shares trade on the Tokyo, Nagoya, and New York (NYSE: SMFG) stock exchanges.
In the Americas, SMBC Group has a presence in the US, Canada, Mexico, Brazil, Chile, Colombia, and Peru. Backed by the capital strength of SMBC Group and the value of its relationships in Asia, the Group offers a range of commercial and investment banking services to its corporate, institutional, and municipal clients. It connects a diverse client base to local markets and the organization’s extensive global network. The Group’s operating companies in the Americas include Sumitomo Mitsui Banking Corp. (SMBC), SMBC Nikko Securities America, Inc., SMBC Capital Markets, Inc., SMBC MANUBANK, JRI America, Inc., SMBC Leasing and Finance, Inc., Banco Sumitomo Mitsui Brasileiro S.A., and Sumitomo Mitsui Finance and Leasing Co., Ltd.
Role Description
SMBC is looking for an Azure Cloud Data Engineer in the Production Support group with strong Azure Data Factory and Databricks knowledge. The candidate should be able to support Azure-based data integration and analytics pipelines built with Azure Data Factory and Azure Databricks, ensuring the uptime and performance of critical pipelines and workflows.
The role also requires knowledge of complex interface process development and the ability to troubleshoot issues.
Role Objectives
- Monitor, troubleshoot and support ADF pipelines and Databricks notebooks/jobs in Production.
- Extensive experience with cloud solutions, specifically in Azure.
- Experience with Azure cloud services such as Azure Data Factory, ADLS Gen2, Azure databases, Azure Functions, Databricks, or similar technologies.
- Good understanding of ETL/ELT
- Analyze pipeline failures, Spark job issues, data mismatches, cluster timeouts, resource unavailability, and latency bottlenecks.
- Perform root cause analysis for incidents and outages.
- Understand ADF components and architecture.
- Knowledge of data integration techniques and best practices.
- Experience with connection to various data sources and destinations.
- Ability to orchestrate complex data workflows and transformations.
- Monitoring and troubleshooting data pipeline executions.
- Familiarity with ADF data flow activities for data transformations.
- Version control and deployment management using Azure DevOps or similar tools
- Awareness of ADF Integration with Azure services like Azure Data Lake Storage, Azure Databricks, etc.
- Skills in implementing streaming and batch data ingestion using Delta Lake.
- Skills in implementing data pipelines and workflows in Databricks.
- Familiarity with Databricks notebooks for interactive data exploration and development.
- Integrating Databricks with Azure services such as ADLS Gen2 and Azure SQL Database.
- Monitoring and optimizing Databricks jobs for cost efficiency.
- Proficiency in Git for managing code repositories, including branching, merging and pull requests.
- Support CI/CD pipelines for deployment using Azure DevOps.
- Participate in on-call rotation and ensure business continuity via proper DR strategies.
- Experience with RDBMS systems such as Azure SQL Database and Oracle, and NoSQL databases such as MongoDB.
- Understanding of indexing, partitioning, and other optimization techniques.
- Experience with stored procedures, functions, and triggers.
- Ensure High Availability (HA) of ADF pipelines and auto-scaling/failover readiness of Databricks clusters.
- Manage alerts, incidents, and escalations using ServiceNow, Azure Monitor, Log Analytics, etc.
- Experience with Confluence, ServiceNow & JIRA.
- Review and provide feedback on core code changes and support production deployments.
- Knowledge of DataStage ETL applications would be a plus.
- Experience with Azure Monitor, Application Insights, and Log Analytics.
- Familiarity with cluster- and pipeline-level metrics and logs.
Qualifications and Skills
- Recommended years of experience: 7
- Strong hands-on experience with Azure Data Factory (ADF): pipeline orchestration, linked services, and integration runtimes.
- Experience with Azure Databricks: running and debugging notebooks, managing clusters, and reviewing Spark job logs.
- Proficient in SQL: writing and debugging queries and validating data.
- Good understanding of Azure services: ADLS, Key Vault, Azure Monitor, and Log Analytics.
- Familiarity with Azure DevOps pipelines and Git integrations.
- Scripting knowledge: Python, PowerShell, or Bash.
- Understanding of Spark concepts and Delta Lake (preferred).
- Ability to work on weekends for maintenance, production implementations, recovery tests, and system verifications/validations.
- Ability to address production issues from home outside of normal business hours.
Additional Requirements
SMBC’s employees participate in a Hybrid workforce model that provides employees with an opportunity to work from home, as well as from an SMBC office. SMBC requires that employees live within a reasonable commuting distance of their office location. Prospective candidates will learn more about their specific hybrid work schedule during their interview process. Hybrid work may not be permitted for certain roles, including, for example, certain FINRA-registered roles for which in-office attendance for the entire workweek is required.
SMBC provides reasonable accommodations during candidacy for applicants with disabilities consistent with applicable federal, state, and local law. If you need a reasonable accommodation during the application process, please let us know at accommodations@smbcgroup.com.