Autonomous Systems Engineer – Computer Vision
McLean, Virginia (Greater Washington DC Area)
$140,000 – $180,000
This company is a leading innovator in the field of autonomous systems, with a mission to revolutionize industries by developing cutting-edge technologies that enable machines to perceive and interact with the world around them. They are seeking a talented and passionate Computer Vision Engineer specializing in autonomous systems to join their dynamic team.
As a Computer Vision Engineer specializing in autonomous systems, you will play a crucial role in the development and implementation of advanced computer vision algorithms and systems that enable autonomous machines to navigate, interpret, and interact with their environment. You will work closely with a team of interdisciplinary experts to design, optimize, and deploy computer vision solutions for real-world applications, spanning industries such as robotics, transportation, agriculture, and more.
Responsibilities
- Design and develop computer vision algorithms for object detection, recognition, tracking, and scene understanding.
- Implement and optimize computer vision systems for real-time performance on embedded platforms.
- Collaborate with cross-functional teams to integrate computer vision capabilities into autonomous systems.
- Conduct research and stay up-to-date with the latest advancements in computer vision and autonomous systems.
- Analyze and improve the performance of existing computer vision algorithms and systems.
Requirements
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field.
- Solid understanding of computer vision concepts, including image processing, feature extraction, object detection, and tracking.
- Proficiency in programming languages such as Python, C++, or similar.
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch) and libraries (e.g., OpenCV).
- Strong mathematical foundation in linear algebra, probability, and optimization.
- Experience with real-time computer vision systems and GPU programming is a plus.
- Familiarity with autonomous systems, robotics, or machine learning is highly desirable.
- Excellent problem-solving skills and the ability to work independently and collaboratively in a fast-paced environment.
Benefits
- Opportunity to work on groundbreaking technologies and shape the future of autonomous systems.
- Collaborative and innovative work environment.
- Professional development and growth opportunities.
- Flexible work hours and a healthy work-life balance.
How to Apply
Please register your interest by sending your resume to Nicolas Gonzales via the apply link on this page.
AI, Autonomy, Deep Learning, Machine Learning, Human Machine Learning, Computer Vision, TensorFlow, MXNet, Scikit-Learn, Multi-Robot Systems, Robotics Hardware, Robots, Robotics Software & Frameworks, ROS, C++, Python, Object Orientation, Perception, Reasoning, Development
Data Engineer Or Software Engineer: What Does Your Business Need? | Harnham US Recruitment post
We are in a time in which what we do with Data matters. Over the last few years, we have seen a rapid rise in the number of Data Scientists and Machine Learning Engineers as businesses look to find deeper insights and improve their strategies. But, without proper access to the right Data that has been processed and massaged, Data Scientists and Machine Learning Engineers would be unable to do their job properly. So who are the people who work in the background and are responsible for making sure all of this works? The quick answer is Data Engineers!… or is it? In reality, there are two similar, yet different, profiles who can help a company achieve its Data-driven goals.

Data Engineers

When people think of Data Engineers, they think of people who make Data more accessible to others within an organization. Their responsibility is to make sure the end user of the Data, whether it be an Analyst, a Data Scientist, or an executive, can get accurate Data from which the business can make insightful decisions. They are experts when it comes to Data modeling, often working with SQL. Frequently, "modern" Data Engineers work with a number of tools including Spark, Kafka, and AWS (or any cloud provider), whilst some newer Databases/Data Warehouses include MongoDB and Snowflake. Companies are choosing to leverage these technologies and update their stack because it allows Data teams to move at a much faster pace and deliver results to their stakeholders. An enterprise looking for a Data Engineer will need someone to focus more on their Data Warehouse and utilize their strong knowledge of querying information, whilst constantly working to ingest and process Data. Data Engineers also focus more on Data Flow and knowing how Data sets work in collaboration with one another.

Software Engineers – Data

Similar to Data Engineers, Software Engineers – Data (who I will refer to as Software Data Engineers in this article) also build out Data Pipelines.
These individuals might go by different names, like Platform or Infrastructure Engineer. They have to be good with SQL and Data Modeling, working with similar technologies such as Spark, AWS, and Hadoop. What separates Software Data Engineers from Data Engineers is the necessity to look at things from a macro level. They are responsible for building out the cluster manager and scheduler and the distributed cluster system, and for implementing code to make things function faster and more efficiently. Software Data Engineers are also better programmers. Frequently, they will work in Python, Java, Scala, and, more recently, Golang. They also work with DevOps tools such as Docker, Kubernetes, or some sort of CI/CD tool like Jenkins. These skills are critical, as Software Data Engineers are constantly testing and deploying new services to make systems more efficient. This is important to understand, especially when incorporating Data Science and Machine Learning teams. If Data Scientists or Machine Learning Engineers do not have strong Software Data Engineers in place to build their platforms, the models they build won't reach their full potential. They also have to be able to scale out systems as their platform grows in order to handle more Data, while finding ways to make improvements. Software Data Engineers will also look to work with Data Scientists and Machine Learning Engineers in order to understand the prerequisites of what is needed to support a Machine Learning model.

Which is right for your business?

If you are looking for someone who can focus extensively on pulling Data from a Data source or API, before transforming or "massaging" the Data and then moving it elsewhere, then you are looking for a Data Engineer. Quality Data Engineers will be really good at querying Data and Data Modeling, and will also be good at working with Data Warehouses and using visualization tools like Tableau or Looker.
If you need someone who can wear multiple hats and build highly scalable and distributed systems, you are looking for a Software Data Engineer. It's more common to see this role in smaller companies and teams, since Hiring Managers often need someone who can do multiple tasks due to budget constraints and the need for a leaner team. They will also be better coders and have some experience working with DevOps tools. Although they might be able to do more than a Data Engineer, Software Data Engineers may not be as strong when it comes to the nitty-gritty parts of Data Engineering, in particular querying Data and working within a Data Warehouse.

It is always a challenge knowing which type of job to recruit for. It is not uncommon to see job posts where companies advertise that they are looking for a Data Engineer, but in reality are looking for a Software Data Engineer or Machine Learning Platform Engineer. In order to bring the right candidates to your door, it is crucial to have an understanding of what responsibilities you are looking to have fulfilled. That's not to say a Data Engineer can't work with Docker or Kubernetes. Engineers are working in a time where they need to become proficient with multiple tools and constantly hone their skills to keep up with the competition. However, it is this demand to keep up with the latest tech trends and choices that makes finding the right candidate difficult. Hiring Managers need to identify which skills are essential for the role from the start, and which can be easily picked up on the job. Hiring teams should focus on an individual's past experience and the projects they have worked on, rather than looking at their previous job titles.

If you're looking to hire a Data Engineer or a Software Data Engineer, or to find a new role in this area, we may be able to help. Take a look at our latest opportunities or get in touch if you have any questions.
Computer Vision: An Overview | Harnham Recruitment post
Computer Vision is a field of Artificial Intelligence (AI) that trains computers and systems to glean meaningful information from digital images, videos and other visual inputs, and to act on or make recommendations based on that information. Computer Vision essentially allows computers to 'see', observe and understand. The process works to mimic human vision. Humans, however, have a head start, benefitting from a lifetime of context that underpins their vision, helping them to identify objects and their positioning and to recognise if something is wrong with an image.

How does it work?

Computer Vision requires lots of data to train systems. The data is analysed over and over until the system can make distinctions and recognise images. The process utilises two technologies to achieve this: a convolutional neural network (CNN) and a type of machine learning called deep learning.

Machine Learning uses algorithmic models that enable a computer to teach itself about the context of visual data and ultimately identify images unassisted, rather than being programmed to recognise an image. For example, instead of training systems to look for whiskers, long ears and a fluffy tail to recognise a bunny, programmers would feed the machine millions of photos of bunnies, and the model would learn on its own the features that make up a bunny and eventually be able to differentiate it from other images.

The CNN helps a machine or deep learning model to break down images to a pixel level, enabling it to 'look'. Pixels are labelled and then used to perform convolutions (a mathematical operation on two functions to produce a third function) and make predictions based on what the model is seeing. The CNN then checks the accuracy of its predictions in a series of repetitions until the predictions start to come true. The process can be likened to approaching a jigsaw puzzle.
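The convolution step described above can be sketched in a few lines of plain Python: a kernel of weights slides over a grid of pixel values, multiplying and summing at each position. The 3×3 vertical-edge kernel below is an illustrative assumption for the sketch; in a real CNN the kernel weights are learned during training rather than hand-written.

```python
def convolve2d(image, kernel):
    """Slide a kernel over a 2D grid of pixel values (valid padding, no stride)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Element-wise multiply the kernel with the image patch, then sum.
            acc = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
            row.append(acc)
        output.append(row)
    return output

# A dark-to-bright vertical edge in a tiny grayscale "image"...
image = [
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
    [0, 0, 10, 10],
]
# ...and a kernel that responds strongly to exactly that kind of edge.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
print(convolve2d(image, kernel))  # → [[30, 30], [30, 30]]
```

Stacking many such filters, interleaved with non-linearities and pooling, is what lets deeper layers of a CNN respond to progressively more complex patterns than a single edge.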
Neural networks view the image components, identify the edges and simple shapes, and then begin to fill in the rest of the information by using filtering and a series of actions through deep network layers, such as predicting. Through this, the network can start to piece all the parts of the image together.

What can it do?

Computer Vision is not a new technology; the first experiments with Computer Vision started in the 1950s to interpret typewritten and handwritten text. Nowadays Computer Vision has a number of already-established functions, including:

- Image classification – viewing an image and being able to classify it (a flower or a dog). It can also accurately predict that an image belongs to a certain class. For example, it could be used to recognise and filter images uploaded by social media users.
- Object tracking – the computer follows an object once it has been detected, often working with images in sequence or a video feed. For example, self-driving vehicles not only need to classify and detect objects such as people and other cars, but they also need to be able to track them in motion to avoid collisions.
- Content-based image retrieval – uses Computer Vision to browse, search and retrieve images from large data stores, based on the content of the images rather than the metadata tags associated with them.

These established tasks are being harnessed across numerous sectors and industries, often to enhance the consumer experience, reduce costs and increase security. A few notable examples include augmented reality, automotive, facial recognition and healthcare.

Advancements in the sector

Advances in the Computer Vision field have been astounding. Accuracy rates for object identification and classification have gone from 50 per cent to 99 per cent in less than a decade, and many of today's systems are more accurate than humans at quickly detecting and reacting to visual inputs.
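Object tracking of the kind described for self-driving vehicles is often built as "tracking by detection": each new detection is associated with the previously tracked box it overlaps most, measured by intersection over union (IoU). A minimal sketch follows; the `(x1, y1, x2, y2)` box format and the 0.5 threshold are illustrative assumptions, not details from the article.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamped to zero area when the boxes do not intersect.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_detection(tracked_boxes, new_box, threshold=0.5):
    """Associate a new detection with the best-overlapping tracked box, if any."""
    best_id, best_score = None, threshold
    for track_id, box in tracked_boxes.items():
        score = iou(box, new_box)
        if score > best_score:
            best_id, best_score = track_id, score
    return best_id  # None means the detection starts a new track

tracks = {1: (0, 0, 10, 10), 2: (50, 50, 60, 60)}
print(match_detection(tracks, (1, 1, 11, 11)))  # → 1 (same object, shifted slightly)
```

Production trackers add motion prediction and appearance features on top of this overlap test, but the association step is the same idea.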
New innovations that employ Computer Vision are appearing all the time, with industries utilising the technology to improve and advance their work. In the last couple of months an 'intelligent sensing solutions' company has launched a driver monitoring system (DMS), designed to indicate if a driver is drowsy or distracted. Research has revealed that 80 per cent of US accidents involve driver distraction in the three seconds before the collision. The system monitors the driver's state in real time, using AI and Computer Vision to track factors including gaze vector, blink rate, and eye openness for signs of drowsiness and distraction. It will also detect actions such as wearing a seatbelt, holding a cell phone, smoking and wearing a face mask.

The global market for AI in Computer Vision is expanding rapidly and is predicted to reach $73.7 billion by 2027, and we are likely to see it increasingly filter into our daily lives. If you're looking for your next Data & Analytics role, or to build out your data team, we can help. Take a look at our latest opportunities or get in touch with one of our expert consultants to find out more.
Sectors Being Transformed By Computer Vision | Harnham Recruitment post
Despite previously covering how Computer Vision functions, we didn't even scrape the surface when it comes to the wide-ranging capabilities of the technology, and how advancements in the sector are leading to ground-breaking developments across numerous industries. Think of a core industry and you are likely to find a Computer Vision application already in process; at the very least, plans for it to be implemented in the near future are in motion. This application of Computer Vision is resulting in improved processes, enhanced consumer experiences, reduced costs and increased security. So where is Computer Vision currently being put into action, and what are the real-world implications of its capabilities?

Agriculture

Farming is a notoriously time-intensive industry, with profits and success reliant on both the efficiency of processes and the health of crops and livestock. Monitoring the health and wellbeing of animals and plants is a full-time job that often relies on subjective human judgement. But, if warning signs are missed, this can have devastating consequences on yields and livelihoods. The combined use of automated technology such as drones, satellite images and remote sensors can gather huge amounts of data, which Computer Vision technologies can utilise to provide comprehensive, real-time monitoring of crop growth and quality as well as animal behaviours. And, crucially, all without manual intervention. For crops this may translate to information on soil conditions, irrigation levels, plant health, and local temperatures. This could have ground-breaking results for an industry where time really is money.

The ability to detect plant disease at an early stage should also not be overlooked. Automatic image-based plant disease severity estimation using deep Convolutional Neural Network (CNN) applications has been developed, for example, to identify apple black rot.
This will allow farmers to react to potential problem areas at an early stage, distribute available resources efficiently and hopefully avoid any yield loss.

Retail

In the retail world, alongside existing security cameras, Computer Vision algorithms can automatically evaluate video material and study customer behaviour. This was particularly relevant during COVID-19 restrictions, where the number of people in shops could be monitored automatically in line with the maximum number allowed. In some cases, this would then be attached to an alert system, such as a green or red light, to stop more customers from entering. Delving deeper, these same techniques can be used to analyse the routes customers take through a store or department, how long they stay at particular shelves and what they end up buying. These capabilities can then inform the design and structure of the store and an optimised placement of products.

Education

Remote learning has brought about numerous unique challenges. For teachers, keeping a class engaged in a lesson became a new challenge, mainly because it is very difficult to constantly monitor the engagement of a class and re-engage individual students. Engagement detection systems offer a potential solution, where Computer Vision and Deep Learning can detect less engaged students and notify the teacher. This helps with a teacher's limited time, signalling individual students needing extra attention or showing whether enough students are disengaging to make re-engaging the whole class worthwhile. In a similar vein, Computer Vision and Deep Learning can be used to train an AI model that is able to detect if pupils are looking at someone else's paper during an examination.

Augmented reality

Computer Vision is a core element of augmented reality apps.
This technology helps AR apps to detect physical objects (both surfaces and individual objects within a given physical space) in real time and use this information to place virtual objects within the physical environment. Ikea, for example, has used the technology to allow prospective customers to test out products at home using the Ikea Place app.

Facial recognition

Facial recognition technology is commonly used to match images of people's faces to their identities. This is crucial for biometric authentication, allowing mobile phone users to unlock their devices by showing their face. The camera sees the image and the phone analyses it to identify whether the person is authorised on this device, all in just a few seconds.

Healthcare

Image information accounts for 90 per cent of all medical data, making it a key element in the medical field. Many diagnoses are based on image processing, such as X-rays and MRI scans. And image segmentation helps in medical scan analysis; for example, computer vision algorithms can detect diabetic retinopathy, the fastest-growing cause of blindness. Cancer detection is another key example where the technology is being harnessed to diagnose different forms of cancer. This is particularly useful when examining areas that include tumours alongside sections that may appear tumorous but are benign. The computer vision algorithm identifies the tumours and is not confused by the normal areas that resemble them.

With the global Computer Vision market size on track to reach $41.11 billion by 2030, the capabilities that the technology can offer will continue to revolutionise numerous industries and bring about life-altering and potentially lifesaving solutions. Intrigued by Computer Vision and wondering how to break into the sector? Take a look at our latest Data Science jobs or get in touch with one of our expert consultants to find out more.
CAN’T FIND THE RIGHT OPPORTUNITY?
If you can’t see what you’re looking for right now, send us your CV anyway – we’re always getting fresh new roles through the door.