Applications are invited for the Compiler Software Engineer Intern position at McKesson, Bangalore. Check the eligibility and other details below!
About McKesson
Established in 1833, McKesson is a Fortune 10 global leader in healthcare supply chain management solutions, retail pharmacy, healthcare technology, community oncology, and specialty care. We partner with life sciences companies, manufacturers, providers, pharmacies, governments, and other healthcare organizations to help provide the right medicines, medical products, and healthcare services to the right patients at the right time, safely and cost-effectively.
Role and Responsibilities
As a Software Development Intern with a focus on Data Engineering and Data Engineering Platforms, your role and responsibilities might include the following:
Roles:
- Data Engineering Support:
  - Assist in the design, development, and maintenance of scalable data pipelines.
  - Help with the extraction, transformation, and loading (ETL) of data from various sources into data warehouses or data lakes.
- Data Platform Assistance:
  - Support the development and maintenance of data platforms that facilitate data storage, processing, and analytics.
  - Assist in integrating data platforms with other enterprise systems.
- Collaboration:
  - Work closely with senior data engineers, data scientists, and other cross-functional teams to understand data requirements and deliver solutions.
  - Participate in team meetings and contribute to project planning and progress updates.
- Learning and Development:
  - Continuously improve your understanding of data engineering concepts, tools, and technologies.
  - Take advantage of mentorship opportunities and training programs offered by the organization.
Responsibilities:
- Data Pipeline Development:
  - Write code to build and optimize data pipelines (a minimal illustrative sketch follows this list).
  - Ensure data integrity and consistency across all stages of the pipeline.
- Data Quality and Testing:
  - Implement data validation and quality checks.
  - Test data pipelines to ensure they are robust and reliable.
- Data Integration:
  - Assist in integrating diverse data sources, including APIs, databases, and flat files.
  - Help develop data ingestion processes.
- Documentation:
  - Document data workflows, pipeline structures, and data models.
  - Create user guides and technical documentation for future reference.
- Performance Tuning:
  - Monitor and optimize the performance of data pipelines and platforms.
  - Assist in troubleshooting and resolving performance issues.
- Tool and Technology Utilization:
  - Use data engineering tools and technologies such as SQL, Python, Apache Spark, and Hadoop.
  - Gain hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud Platform.
- Data Analysis and Reporting:
  - Support data analysis tasks and generate reports as needed.
  - Use data visualization tools to present insights and findings.
- Compliance and Security:
  - Ensure compliance with data governance and security policies.
  - Assist in implementing data protection measures and access controls.
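To give a flavor of the pipeline and data-quality work described above, here is a minimal, illustrative Python ETL sketch. The file name, table name, column names, and validation rules are hypothetical, chosen only to show the extract-transform-validate-load pattern; McKesson's actual stack and conventions may differ.

```python
# Minimal illustrative ETL sketch (hypothetical file/table/column names,
# not McKesson's actual stack). Uses only the Python standard library.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Extract: read raw rows from a flat file (one common source type)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: normalize types and drop rows that fail basic quality checks."""
    clean = []
    for row in rows:
        try:
            order_id = int(row["order_id"])
            product = row["product"].strip()
            amount = float(row["amount"])
        except (KeyError, ValueError):
            continue  # data-quality check: skip malformed or incomplete rows
        if amount < 0:
            continue  # data-quality check: reject negative amounts
        clean.append((order_id, product, amount))
    return clean

def load(rows: list[tuple], db_path: str = "warehouse.db") -> None:
    """Load: write validated rows into a warehouse table (SQLite stands in here)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, product TEXT, amount REAL)"
        )
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

if __name__ == "__main__":
    load(transform(extract("orders.csv")))  # hypothetical input file
```

In practice such pipelines would typically run under an orchestrator and target a real warehouse or lake, but the extract/transform/load separation and inline validation shown here are the core pattern an intern would work within.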
Desired Skill Sets
- Programming Languages: proficiency in languages such as Python and SQL, and possibly Java or Scala.
- Data Tools: familiarity with ETL tools, data warehouses (e.g., Snowflake, Redshift), and big data technologies (e.g., Hadoop, Spark); see the brief Spark sketch after this list.
- Cloud Platforms: experience with cloud data services (e.g., AWS S3, Google BigQuery, Azure Data Lake).
- Database Management: understanding of relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Data Modeling: knowledge of data modeling techniques and best practices.
- Version Control: proficiency with version control systems such as Git.
- Soft Skills: strong problem-solving skills, attention to detail, and the ability to work in a team-oriented environment.
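Since Spark appears in both the responsibilities and the skill sets, here is a brief, illustrative PySpark aggregation. The file and column names are hypothetical, and the snippet assumes a local pyspark installation; it simply shows the DataFrame read-filter-aggregate style this kind of role involves.

```python
# Illustrative PySpark aggregation (hypothetical file/column names;
# requires `pip install pyspark`).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("intern-demo").getOrCreate()

# Read a flat file into a DataFrame, inferring column types.
orders = spark.read.csv("orders.csv", header=True, inferSchema=True)

# Filter out rows failing a basic quality check, then aggregate per product.
summary = (
    orders.filter(F.col("amount") >= 0)
          .groupBy("product")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("num_orders"))
)

summary.show()  # in a real pipeline this would be written to a warehouse or lake
spark.stop()
```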
How to Apply?
Interested candidates can apply directly through this link.
Location
Bangalore, Karnataka.