Date live:
Feb. 27, 2026
Business Area:
Markets Post Trade
Area of Expertise:
Technology
Reference Code:
JR-0000085995
Contract:
Permanent
At Barclays, we don’t just adapt to the future – we create it. Embark on a transformative journey as a Senior Data Engineer – VP, where you will play a pivotal role in architecting and implementing high‑performance data pipelines and data flows that enable low‑latency, high‑fidelity insights. This role requires a deep interest in large‑scale data ingestion and processing across diverse formats, sources, and frequencies. You should be proficient in handling heterogeneous datasets and committed to delivering high‑quality, reliable data products. Our goal is to construct an agentic, continuously flowing data ecosystem capable of powering advanced insight engines across Research.

Barclays is advancing into the agentic era by developing a robust ecosystem of talent, tooling, and modern technology to support the next generation of our data and AI capabilities. We are seeking a Senior Data Engineer for the Investment Bank’s Research Data & AI organization to help drive this transformation. Barclays Research aims to deliver differentiated, original, and data‑driven analysis for our clients. We are redesigning our Research data foundation to align with the demands of the agentic era, scaling the platform under Data Mesh principles to support distributed ownership, flexible consumption, and greater system autonomy.

Success in this role requires an experienced data engineer who has worked across the full spectrum of data ingestion paradigms, including real‑time streaming and batch processing, or who has a strong desire to operate across all of them. This position will directly influence the strategic direction of our ingestion architecture, optimizing the full data lifecycle from acquisition and landing to transformation, entitlement, distribution, and orchestration.
To be successful in this role, you should have experience with:
Extensive Python software engineering experience, including object‑oriented programming, package management, and scripting.
Strong proficiency in an OOP language such as Java or C++, with solid grounding in design principles, systems engineering, and large‑scale application development.
Deep knowledge of AWS, specifically core architecture services (S3, Route 53, EC2, EMR, EKS, Athena) and data‑processing services (AWS Glue, Glue Data Catalog, Amazon Kinesis, Lake Formation).
Extensive experience with distributed processing engines such as Spark, including optimization strategies, cluster‑level scaling, and operational maintenance of complex data environments.
Practical experience orchestrating enterprise data workflows using tools such as Airflow, Dagster, Prefect, or Autosys.
Experience designing, architecting, and deploying scalable, resilient, and portable data jobs in production environments.
Experience with large‑scale data quality frameworks such as Great Expectations or Deequ.
Familiarity with Data Mesh concepts—including Data Product lifecycle, Data Contracts, and Data Domain modeling.
Experience troubleshooting, debugging, and re‑architecting data pipelines to improve efficiency, maintainability, and adherence to engineering standards.
Some other highly valued skills may include:
Domain knowledge of financial‑sector data use cases and value creation within the Research and Alpha‑generation lifecycle.
Working knowledge of Apache Flink for real‑time distributed processing.
Experience working with unstructured data ingestion, curation, and storage.
Experience with data catalogs, ontology development, and taxonomy design.
Understanding of API and system‑level communication protocols (e.g., gRPC).
Experience with cloud‑native and containerization deployment models (e.g., Kubernetes).
You may be assessed on the key critical skills relevant for success in this role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job‑specific technical skills.
This role is based in Pune, IN.
Purpose of the role
To build and maintain the systems that collect, store, process, and analyse data – such as data pipelines, data warehouses, and data lakes – to ensure that all data is accurate, accessible, and secure.
Accountabilities
Vice President Expectations
All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.