🌟 Exciting Opportunity in Data Engineering 🌟

Are you a seasoned Data Engineer with expertise in Python, SQL, and data modeling? We're looking for someone like you to join our banking client! Job details below:

Location: Hybrid (1 day per week) - Downtown, Toronto
Contract Duration: 9 months

Candidate Requirements/Must-Have Skills:
- 8-10+ years using Python or other programming languages, with proficiency in package management, dependencies, and deployment.
- 8-10+ years using SQL for ETL and data analysis across platforms such as SQL Server and PostgreSQL.
- 8-10+ years in data engineering, collaborating with cross-functional data teams.
- Proven track record in data modeling, data warehousing, and database design.
- Expertise in designing and building ETL/ELT, data pipelines, or data engineering solutions.
- Strong familiarity with Linux tools and shell scripting.

Nice-to-Have Skills:
- Experience with cloud architecture and security (Azure, AWS, GCP).
- Knowledge of machine learning techniques, including NLP.
- Hands-on experience with Big Data tools (Hadoop, Hive, Spark, BigQuery) and object storage solutions (blob, MinIO, GCS).
- Understanding of Agile and Scrum methodologies, with experience in a Scrum environment using Jira and Confluence.
- Proficiency in Docker, CI/CD tools, Airflow, and Kubernetes.
- Fluency in French and/or Spanish.
- Contact center experience or familiarity with telephony data (Avaya, Genesys) and WFM data (Verint, Aspect).

If you're ready to take on a challenging role and make a significant impact in data engineering, we'd love to hear from you! Reach out or apply directly: [[email protected]]

#DataEngineering #Python #SQL #ETL #DataModeling #DataWarehousing #Cloud #BigData #MachineLearning #Agile #Scrum #DataPipelines #Hiring #Torontojobs #toronto
Shailja Sharma’s Post
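The posting above asks for SQL used for ETL across engines like SQL Server and PostgreSQL. As a rough illustration of that skill, here is a minimal extract-transform-load step written against Python's built-in sqlite3 so it runs anywhere; the table and column names (raw_payments, payments_clean) are hypothetical examples, not anything from the posting.

```python
# Minimal SQL-for-ETL sketch using Python's stdlib sqlite3.
# Table/column names are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: load a small batch of raw records (normally read from a source system).
cur.execute("CREATE TABLE raw_payments (id INTEGER, amount TEXT, currency TEXT)")
cur.executemany(
    "INSERT INTO raw_payments VALUES (?, ?, ?)",
    [(1, "19.99", "CAD"), (2, "5.00", "cad"), (3, None, "CAD")],
)

# Transform + Load: cast types, normalize case, and drop malformed rows, all in SQL.
cur.execute("""
    CREATE TABLE payments_clean AS
    SELECT id,
           CAST(amount AS REAL) AS amount,
           UPPER(currency)      AS currency
    FROM raw_payments
    WHERE amount IS NOT NULL
""")

rows = cur.execute(
    "SELECT id, amount, currency FROM payments_clean ORDER BY id"
).fetchall()
print(rows)  # → [(1, 19.99, 'CAD'), (2, 5.0, 'CAD')]
```

The same cast/normalize/filter pattern translates directly to SQL Server or PostgreSQL; only the dialect details change.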
-
Hiring!! Data Engineer

Responsibilities:
- Putting together large, intricate data sets to satisfy both functional and non-functional business needs.
- Identifying, designing, and implementing internal process improvements, such as redesigning infrastructure for greater scalability, improving data delivery, and automating manual procedures.
- Building the infrastructure required for effective extraction, transformation, and loading of data from a variety of sources, using AWS and SQL technologies.
- Reworking existing frameworks to maximise their effectiveness.
- Building analytical tools that make use of the data flow and offer practical insight into crucial company performance indicators such as operational effectiveness and customer acquisition.
- Supporting stakeholders, including the Executive, Product, Data, and Design teams, with data-related technical challenges and their data infrastructure needs.
- Remaining up to date with developments in technology and industry standards in order to produce higher-quality results.

Send CVs: [email protected]
-
#hiring Data Engineer / Long Term Project / Azure, London, United Kingdom, £630/day, contract #jobs #jobseekers #careers #Londonjobs #GreaterLondonjobs #ITCommunications #datascience #dataanalytics #machinelearning #bigdata #dataengineer

Apply: https://lnkd.in/gbyqgubq

- Inside IR35
- On site 3 days per week
- Sponsorship not offered

Are you a forward-thinking Data Engineer with a passion for turning data into actionable insights? Do you thrive on designing and building robust data pipelines? If so, we have an exciting opportunity for you!

The role responsibilities:
- Engineering Data Pipelines: Designing and building reliable data pipelines for data sourcing, processing, transformation, enrichment, and storage, making effective use of our data platform infrastructure.
- ETL Expertise: Using ETL tools to ingest and transform data from various sources, ensuring data quality and consistency.
- Technology Enthusiast: Staying updated on industry trends and exploring how emerging technologies can be leveraged to achieve our business objectives.
- Insightful Dashboards: Leveraging the ingested data to develop insightful dashboards that provide valuable business insights.
- Agile Collaboration: Actively participating in Agile planning activities, including story refinement, demos, and retrospectives, to drive continuous improvement.

The person we're looking for:
- PySpark and Python Skills: Strong competence in PySpark and Python to effectively engineer data pipelines and perform data analysis.
- Azure Proficiency: Hands-on experience with the Azure Data ecosystem, including Azure Databricks, Data Factory, Data Lake, and Synapse, is essential.
- SQL and Database Knowledge: A solid understanding of SQL and database design, coupled with expertise in data architectures such as databases and data lakes.
- API Integration: Demonstrable experience working with APIs to access and manage data from various sources.
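The sourcing → transformation → enrichment → storage pipeline shape described in this posting can be sketched as composed stages. This toy version is plain Python for portability; the role itself would express these stages in PySpark on Azure Databricks, and the stage names and records here are illustrative assumptions only.

```python
# Toy sketch of a sourcing → transform → enrich → store pipeline.
# Real pipelines of this shape would typically be PySpark jobs; the
# records and the "engaged" business rule below are hypothetical.
from typing import Iterable

Record = dict

def source() -> Iterable[Record]:
    # Stand-in for reading from a data lake or an upstream API.
    yield {"user": "a", "clicks": "3"}
    yield {"user": "b", "clicks": "bad"}  # malformed row, filtered out below
    yield {"user": "c", "clicks": "7"}

def transform(rows: Iterable[Record]) -> Iterable[Record]:
    # Cast types, dropping rows that fail a simple data-quality check.
    for r in rows:
        if r["clicks"].isdigit():
            yield {**r, "clicks": int(r["clicks"])}

def enrich(rows: Iterable[Record]) -> Iterable[Record]:
    # Add a derived field (hypothetical business rule).
    for r in rows:
        yield {**r, "engaged": r["clicks"] >= 5}

def store(rows: Iterable[Record]) -> list:
    # Stand-in for writing to a warehouse table or lake path.
    return list(rows)

result = store(enrich(transform(source())))
print(result)
```

Because each stage is a generator over records, stages stay independently testable and compose lazily, which mirrors how DataFrame transformations chain in PySpark.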
-
We are #hiring! Know anyone who might be interested?

Analytics Engineering Senior Specialist - You'll be responsible for:
- Working with data scientists and ML engineers to turn their research and proofs-of-concept into reproducible, fault-tolerant jobs that operate efficiently and scale to meet evolving data volumes and time requirements
- Identifying opportunities to enhance the functionality of our enterprise data platform to improve user experience and increase speed-to-market for our internal analytics teams, and taking ownership of working with cross-functional teams to address these opportunities
- Developing assets and utilities that improve the productivity of other data and analytics teams, including process automation tools, templates for common recurring analytics use cases, exposing internal services as APIs, etc.
- Providing thought leadership on SDLC best practices and developing frameworks/standards that are fit for purpose and address key concerns/pain points from multiple stakeholders
- Providing subject matter expertise across our tech stack to fellow data scientists and developers
- Ensuring stakeholders understand the trade-offs across various implementation options and securing alignment
- Coordinating external dependencies and requirements for a given implementation (Security, DBA, Operations, Legal, project managers, Change Management)
- Ensuring that work is done in a sustainable, maintainable manner to minimize technical debt
- Identifying strategic opportunities to refactor existing data products and services to address technical debt
Analytics Engineering Senior Specialist
interac.wd3.myworkdayjobs.com
-
Circle is hiring a remote Senior Data Engineer #Circle #remotework #remotejob #workfromhome #BigData #DataEngineering #ETL #DataGovernance #DataAnalytics #DataVisualization #DataModeling #DataWarehouse #SQL #NoSQL #MySQL #PostgreSQL #Cassandra #HBase #Redis #DynamoDB #Neo4j #WorkflowManagement #Airflow #Dagster #DBT #GoogleCloud #Snowflake #BigQuery #Databricks #PaymentsSystems #Credit #Banking #Blockchain #DataQuality #MicrosoftAzure #Java #Scala #Python #AWS #FinancialData #DataCompliance #OpenSourceTechnologies #SeniorDataEngineer #SeniorDataPlatformEngineer #DataEngineer #DataScienceEngineer
Senior Data Engineer Job at Circle | Himalayas
himalayas.app
-
#hiring FreeWheel - Sr Data Engineer - Python or Scala - REMOTE, Pittsburgh, United States, $97K, fulltime #opentowork #jobs #jobseekers #careers #Pittsburghjobs #Pennsylvaniajobs #ITCommunications

Apply: https://lnkd.in/gugqk3wF

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we're making it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and can insert advertisements around the world.

Job Summary
The FreeWheel Identity team is a global team whose mission is to power the identity and audience functionality in FreeWheel products. We operate at tremendous scale: 10+ TB ingested per day and datasets containing more than a trillion records. We build services ranging from real-time APIs that support millions of requests per second to big data ETL pipelines, serving both internal and external clients. As the team grows, we are looking for a savvy Senior Software Engineer to join the team and help build the next generation of the Audience & Identity data platform, which handles data at scale and uses state-of-the-art technologies to unlock the data's business value. If you're excited to work with a tightly knit team of data engineers solving hard problems the right way using cutting-edge data collection, transformation, analysis, and monitoring tools in the cloud, this opportunity is for you.

Job Description - Core Responsibilities:
- Collaborates with project stakeholders to identify product and technical requirements. Conducts analysis to determine integration needs.
- Designs and builds data processing pipelines using cutting-edge AWS services, third-party open-source/commercial software, and internally developed software to ingest, process, and deliver data at scale.
- Continually acquires new data sources to develop an increasingly rich dataset that characterizes audiences.
- Collaborates with the serving team, report team, and data engineers to translate complex product requirements into working, high-quality, cloud-native data solutions.
- Works with Lead Engineers and Architects to share and contribute to the broader technical vision.
- Continually works on improving the codebase and participates actively in all aspects of the team, including agile ceremonies.
- Displays in-depth knowledge of, and the ability to apply, process design and redesign skills.
- Presents and defends architectural, design, and technical choices to internal audiences.
- Oversees the documentation of all development activities.
- Provides guidance and support to other Engineers.
- Other duties and responsibilities as assigned.

Requirements: SQL, Spark, Snowflake, Databricks, Big Data, AWS, and Scala or Python experience are highly preferred.

https://www.jobsrmine.com/us/pennsylvania/pittsburgh/freewheel-sr-data-engineer-python-or-scala-remote/475716627
-
#hiring FreeWheel - Sr Data Engineer - Python or Scala - REMOTE, Philadelphia, United States, $97K, fulltime #opentowork #jobs #jobseekers #careers #Philadelphiajobs #Pennsylvaniajobs #ITCommunications

Apply: https://lnkd.in/gh68UGsi

FreeWheel, a Comcast company, provides comprehensive ad platforms for publishers, advertisers, and media buyers. Powered by premium video content, robust data, and advanced technology, we're making it easier for buyers and sellers to transact across all screens, data types, and sales channels. As a global company, we have offices in nine countries and can insert advertisements around the world.

Job Summary
The FreeWheel Identity team is a global team whose mission is to power the identity and audience functionality in FreeWheel products. We operate at tremendous scale: 10+ TB ingested per day and datasets containing more than a trillion records. We build services ranging from real-time APIs that support millions of requests per second to big data ETL pipelines, serving both internal and external clients. As the team grows, we are looking for a savvy Senior Software Engineer to join the team and help build the next generation of the Audience & Identity data platform, which handles data at scale and uses state-of-the-art technologies to unlock the data's business value. If you're excited to work with a tightly knit team of data engineers solving hard problems the right way using cutting-edge data collection, transformation, analysis, and monitoring tools in the cloud, this opportunity is for you.

Job Description - Core Responsibilities:
- Collaborates with project stakeholders to identify product and technical requirements. Conducts analysis to determine integration needs.
- Designs and builds data processing pipelines using cutting-edge AWS services, third-party open-source/commercial software, and internally developed software to ingest, process, and deliver data at scale.
- Continually acquires new data sources to develop an increasingly rich dataset that characterizes audiences.
- Collaborates with the serving team, report team, and data engineers to translate complex product requirements into working, high-quality, cloud-native data solutions.
- Works with Lead Engineers and Architects to share and contribute to the broader technical vision.
- Continually works on improving the codebase and participates actively in all aspects of the team, including agile ceremonies.
- Displays in-depth knowledge of, and the ability to apply, process design and redesign skills.
- Presents and defends architectural, design, and technical choices to internal audiences.
- Oversees the documentation of all development activities.
- Provides guidance and support to other Engineers.
- Other duties and responsibilities as assigned.

Requirements: SQL, Spark, Snowflake, Databricks, Big Data, AWS, and Scala or Python experience are highly preferred.

https://www.jobsrmine.com/us/pennsylvania/philadelphia/freewheel-sr-data-engineer-python-or-scala-remote/475716628