Student Veterans of America Jobs

Welcome to SVA’s jobs portal, your one-stop shop for the most up-to-date employment opportunities. We have partnered with the National Labor Exchange to provide this information. You may be looking for part-time employment to supplement your income while you are in school. You might be looking for an internship to add experience to your resume. Or you may be completing your training, ready to start a new career. This site has all of those types of jobs.

Here are a few things you should know:
  • This site is mobile-friendly. You do not need a login or password to access information.
  • Jobs on this site are original and unduplicated and come from three sources: the Federal government, state workforce agency job banks, and corporate career websites. All jobs are vetted to ensure there are no scams, training schemes, or phishing.
  • The site is refreshed daily to remove out-of-date content.
  • The newest jobs are listed first, so use the search features to match your interests. You can look for jobs in a specific geographic location, by title or keyword, or you can use the military crosswalk. You may want to do something different from your military career, but you undoubtedly have skills from that occupation that map to a civilian job.

Job Information

AmeriHealth Caritas Services LLC Data Developer Lead in Newtown Square, Pennsylvania

DUTIES:
  • Analyze business requirements for the central cloud repository that stores data.
  • Architect data and design data patterns and data solutions on Azure Cloud, Azure Synapse, and Databricks using data modeling tools, including Erwin.
  • Design and build conceptual, logical, and physical data models in Erwin to solve business problems.
  • Design and develop Azure Data Factory (ADF) pipelines that integrate data into Data Lake 2.0 from multiple internal and external sources managed on Sybase, Oracle, SQL Server, and external files.
  • Build Databricks notebooks using the SQL, Python, Scala, or PySpark programming languages for data transformation and loading in the Azure Data Lake environment.
  • Perform proof-of-concept technical feasibility assessments for integrating new technical platforms into the Azure Data Lake platform.
  • Design, develop, and test technical solutions and ensure they align with the Enterprise Data Strategy throughout the Software Development Life Cycle (SDLC).
  • Manage and analyze large volumes of unstructured data from internal and external sources using Hive, Impala, Oozie, Spark (Scala), Sqoop, Flume, the Hadoop API, and HDFS to optimize data loads and data transformations.
  • Apply testing techniques, including unit, system, and regression testing, to verify that deployed components work as designed.
  • Deploy data science machine learning models to the Azure Data Lake environment using a Machine Learning Operations (MLOps) process.
  • Design and implement a data governance framework.
  • Analyze and implement data quality rules for Data Lake tables to monitor data quality dimensions and the health of the data, using tools including Informatica Data Quality (IDQ).
  • Harvest and capture technical and business metadata for data cataloging using tools including EDC and Axon.
  • Prepare and follow the change control process as part of the SDLC for production deployment.
  • Provide first-time support for deployed components.
  • Leverage and analyze EDWH data for regulatory reporting requirements and onboard data to Azure Data Lake 2.0 for advanced analytical use cases.

REQUIREMENTS: Bachelor's degree (or foreign equivalent) in Information Systems Management or a related field, as well as the following experience, which can be gained prior to, during, or after the Bachelor's degree:
  • 5 years of experience architecting databases to build analytical solutions for business problems;
  • 3 years of experience designing and building conceptual, logical, and physical data models for data integration platforms;
  • 2 years of experience designing, building, and orchestrating ETL pipelines in Azure or other cloud-based systems;
  • 2 years of experience using the Python and Spark programming languages to code Databricks notebooks per business requirements;
  • 2 years of experience implementing machine learning modeling for data science and monitoring model performance; and
  • 2 years of experience utilizing Hadoop (including Hive, Impala, and Oozie) to analyze large volumes of unstructured data from internal and external sources.

In lieu of a Bachelor's degree, the employer will accept a Master's degree (or foreign equivalent) in Information Systems Management or a related field plus 2 years of experience in the above, which can be gained prior to, during, or after the Master's degree.

Remote option up to 100% of the time. Please apply at http://careers.amerihealthcaritas.com. Job ID 34172
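For job seekers new to this stack, the sketch below gives a rough sense of the Databricks notebook work the duties describe: reading raw files from an Azure Data Lake path, applying simple PySpark transformations, and writing the result out for downstream analytics. It is illustrative only and does not come from the employer; the storage paths and column names (examplelake, MBR_ID, claim_date) are invented, and it assumes a Databricks environment where Delta Lake is available.

  # Minimal, hypothetical PySpark sketch of a transform-and-load notebook.
  # All paths and column names are invented for illustration.
  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.appName("claims_load").getOrCreate()

  # Read raw extracts landed in the lake's raw zone (hypothetical path).
  raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/claims/")

  # Basic cleansing: standardize a key column, drop rows missing it, derive
  # a partition column, and stamp each row with its load time for auditing.
  cleaned = (
      raw.withColumnRenamed("MBR_ID", "member_id")
         .filter(F.col("member_id").isNotNull())
         .withColumn("claim_year", F.year(F.col("claim_date")))
         .withColumn("load_ts", F.current_timestamp())
  )

  # Write to the curated zone as Delta, partitioned for analytical queries.
  (cleaned.write.format("delta")
          .mode("overwrite")
          .partitionBy("claim_year")
          .save("abfss://curated@examplelake.dfs.core.windows.net/claims/"))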
