The Dialogue: IR35 - What You Need To Know

Posting date: 9/17/2020 10:57 AM
For the latest episode of The Dialogue, Associate Director Dan Lewis sat down with Andy Dagnall from contractor insurance specialists Kingsbridge to talk all things IR35.

Here's what we learnt:

- Understanding your current workforce and who your contractors are should be the first step for any business looking into IR35. 

- Businesses should be better prepared now that the reform has been delayed. Getting IR35 right in the current economic climate is much more important than it was the first time round.

- Blanket bans on contracting don't factor in the risks of projects not being completed or the loss of talent.

- The Public Sector tried blanket bans and they didn't work. They've now bounced back by embracing a new way of working.  

- There are online tools to help determine a contractor's status with regard to IR35, and Harnham has its own tool for this stage of the process.

- Collaboration will be key for both businesses and contractors. Working with third parties will help make the process easier. 

- It is almost essential that contractors take out insurance to mitigate the risk of a wrongful determination. 

- Status determination can change throughout a project. An inside determination at one stage of a project won't necessarily stay that way for the duration. 

- Contractors will have more trust in businesses that have a robust process in place. 

- Contractors can get a working practices review done now so they can approach recruiters or businesses with an understanding of where they are likely to fall under IR35.

- The most important thing in a status determination is being honest. 

- And much, much more. 

You can watch the full conversation below:

Related blog & news

With over 10 years' experience working solely in the Data & Analytics sector, our consultants are able to offer detailed insights into the industry.

Visit our Blogs & News portal or check out the related posts below.

Using A ‘Data First’ Approach In Your Data Science Job Search

The litany of past jobs, education, and business goals is quickly becoming a thing of the past. With millions out of work, the rise of remote working, and the continued high demand for those in the Data industry, CVs today must be much more dynamic. And, for the innovative Data Scientist, there are plenty of ways to stand out from the crowd. If you're interested in working in Data First environments, why not use the idea to inform your job search? Consider this: when you begin your search, you're using many of the same tactics you'd use in any role – assess, analyse, gather, improve, and control.

Want to Make a Career Change? Assess the Situation

While more Data professionals are staying in their roles longer than in previous years, there comes a time when it's right to make a change. Maybe you want a fresh start somewhere with more room for creative expression. Maybe you want to be a strong leader who wears a variety of hats, but you're dug into your current role. So, what do you do? If you're on the fence, you assess the situation. Here are a few steps you might follow:

- Review your CV – with which business processes and technical systems do you have experience?
- Determine which roles interest you and why – do you need more education or a new skill, or does the company you're interested in offer on-the-job training, upskilling, or reskilling?
- Research the role you're interested in – your desired outcome – and break down any challenges into actionable efforts you can make to get the job.
- Gather as much data as possible, not only to help with any adjustments to your CV but also to include in your cover letter.

Remember, what was once on paper and emailed or delivered can now be done by video. After all, your next goal is likely a Zoom interview. Maybe you've gone a bit further and created a mind map of your goals, experiences, and processes. If so, you've got the beginnings of your job search model.

Putting Your Job Search into Action to Improve Your Chances

You've gotten much of the preparatory legwork done. Now it's time to get things moving and test the market. Perhaps you've got a spreadsheet or are keeping a journal of companies you'd like to work for or roles you'd like to take on. You've researched their websites, found the hiring managers' information, and crafted your cover letter and CV to fit the job you're most interested in. Have you sent out your information or made connections in your field? Are you getting a good response or only a trickle? This is the improvement phase. This is where you tweak things. How?

- Have a friend or recruiter review your CV – are there typos? Do you meet the right qualifications? Are you over- or under-qualified? Is there anything they think you could add which might help?
- Did you follow the application instructions correctly? Did they request a video resume and you sent them paper, or vice versa?
- Have you addressed any feedback you've received from hiring managers, recruiters, friends, or colleagues?

When you figure out where and what to improve, you can move forward more proactively. Many businesses want a professional who can address issues specific to a project or difficult situation. Determine what needs to change and fix it with an eye toward improving the process in the future.
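If you are keeping that spreadsheet or journal of applications, the "improve" step can itself be treated as a small data exercise. The sketch below is purely illustrative – the file name, column names, and outcome labels are assumptions, not anything prescribed in this article – but it shows the kind of question worth asking: which channels are actually getting responses?

```python
import pandas as pd

# Hypothetical application log - file, column names and labels are illustrative.
# Each row is one application: the company, the channel used, and the outcome.
applications = pd.read_csv("job_search_log.csv", parse_dates=["date_applied"])

# Treat an interview or an offer as a response, then compare channels
# (e.g. referral vs job board vs direct application).
summary = (
    applications
    .assign(got_response=applications["outcome"].isin(["interview", "offer"]))
    .groupby("channel")["got_response"]
    .agg(applications="size", response_rate="mean")
)

print(summary.sort_values("response_rate", ascending=False))
```

Wherever the response rate is lowest is usually the first place to tweak the CV, the cover letter, or the channel itself.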
Taking Control of Your Job Search

Looking for a job can often feel like the employers have all the power, but that's not necessarily the case. When the reins are tightened and your focus is sharp, you have more control than you think. After all, you've done much of the legwork many won't think to do. You've assessed your situation, your processes, and your systems – networking, job boards, or contract work to try new things and gain new experience. You've determined your gaps and closed them. What you've learned along the way has helped you not only in your job search, but has given you insight into your ideal role.

If you're looking for a career change and have followed a Data First approach, your newly embedded processes and goals will carry over into whatever you choose to do. The best part? It drives your own ROI. The time invested in yourself and in conducting your job search with such focus can only bring about good things.

If you're looking for your next role in Data & Analytics, Harnham can help you search and prepare. Take a look at our latest opportunities or get in touch with one of our expert consultants to learn more.

From Broken Data Pipelines to Broken Data Headlines

This week's guest post is written by Moray Barclay.

Two things have caused the UK's Test & Trace application to lose 16,000 Covid-19 test results, both of which are close to my heart. The first is the application's data pipeline, which is broken. The second is a lack of curiosity. The former does not necessarily mean that a data application will fail. But when compounded by the latter, it is certain.

Data Pipelines

All data applications have several parts, including an interesting part (algorithms, recently in the news), a boring part (data wrangling, never in the news), a creative part (visualisation, often a backdrop to the news), and an enabling part (engineering, usually misunderstood by the news). Data engineering, in addition to the design and implementation of the IT infrastructure common to all software applications, includes the design and implementation of the data pipeline. As its name suggests, a data pipeline is the mechanism by which data is entered at one end of a data application and flows through the application via various algorithms to emerge in a very different form at the other end.

A well-architected data application has a single pipeline from start to finish. This does not mean that there should be no human interaction with the data as it travels down the pipeline, but it should be limited to actions which can do no harm. Human actions which do no harm include pressing buttons to start running algorithms or other blocks of code, reading and querying data, and exporting data to do manual exploratory or forensic analysis within a data governance framework.

The data pipeline for Test & Trace will look something like this:

- a patient manually fills out a web form, which automatically updates a patient list
- for each test, the laboratory adds the test result for that patient
- the lab sends an Excel file to Public Health England (PHE) with the IDs of positive patients
- PHE manually transpose the data in the Excel file to the NHS Test & Trace system
- the NHS T&T system pushes each positive patient's contact details to NHS T&T agents
- for each positive patient, an NHS T&T contact centre agent phones them.

This is not a single pipeline, because in the middle a human being needs to open up an editable file and transpose it into another file. The pipeline is therefore broken, splitting at the point at which the second Excel file is manually created. If you put yourself in the shoes of the person receiving one of these Excel files, you can probably identify several ways in which this manual manipulation of data could lead to harm.

And it is not just the data which needs to be moved manually from one side of the broken pipeline to the other; it is the associated data types, and CSV files can easily lose data type information. This matters. You may have experienced importing or exporting data with an application which changes 06/10/20 to 10/06/20. Patient identifiers should be of data type text, even if they consist only of numbers, for future-proofing. Real numbers represented in exponential format should, obviously, be of a numeric data type. And so on.
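To make the data-type point concrete, here is a minimal sketch of reading such a file with the types declared explicitly rather than guessed. The file name and column names are illustrative assumptions, not details of the actual Test & Trace system:

```python
import pandas as pd

# Hypothetical lab results file - file and column names are illustrative only.
# Declaring dtypes up front stops the importer from guessing: patient IDs stay
# as text (leading zeros survive), and dates are parsed day-first so that
# 06/10/20 is read as 6 October, not 10 June.
results = pd.read_csv(
    "pillar2_results.csv",
    dtype={"patient_id": "string", "test_result": "string"},
    parse_dates=["test_date"],
    dayfirst=True,
)

print(results.dtypes)
```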
One final point: the different versions of Excel (between the Pillar 2 laboratories and PHE) are a side-show, because otherwise this implies that had the versions been the same, then everything would have been fine. This is wrong. The BBC have today reported that "To handle the problem, PHE is now breaking down the test result data into smaller batches to create a larger number of Excel templates. That should ensure none hit their cap." This solves the specific Excel incompatibility problem (assuming the process of creating smaller batches is error-free) but has no bearing on the more fundamental problem of the broken data pipeline, which will remain until the manual Excel manipulation is replaced by a normal and not particularly complex automated process.

Curiosity

So where does curiosity fit in? The first thing that any Data Analyst does when they receive data is to look at it. This is partly a technical activity, but it is also a question of judgement, and it requires an element of curiosity. Does this data look right? What is the range between the earliest and the latest dates? If I graph one measurement over time (in this case positive tests over time), does the line look right? If I graph two variables (such as day of week versus positive tests), what does the scatter chart look like? Better still, if I apply regression analysis to the scatter chart, what is the relationship between the two variables, and within what bounds of confidence? How does that relate to the forecast? Why?

This is not about skills. If I receive raw data in CSV format, I would open it in a Python environment or an SQL database. But anyone given the freedom to use their curiosity can open a CSV file in Notepad and see that there are actually one million rows of data and not 65,000. Anyone given the freedom to use their curiosity can graph data in Excel to see whether it has strange blips. Anyone given the freedom to use their curiosity can drill down into anomalies. Had those receiving the data from the Pillar 2 laboratories been allowed to focus some of their curiosity on what they were receiving, they would have spotted pretty quickly that the 16,000 patient results were missing. As it was, I suspect they were not given that freedom: I suspect they were told to transpose as much data as they could as quickly as possible, for what could possibly go wrong?

Single Data Pipeline, Singular Curiosity: Pick At Least One

To reiterate, the current problems with T&T would never have arisen with a single data pipeline which excluded any manual manipulation in Excel. But knowing that the data pipeline was broken and that manual manipulation was by design part of the solution, the only way to minimise the risk was to encourage the people engaged in that manual process to engage their curiosity about the efficacy of the data they were manipulating. In their prototype phases – for that is the status of the T&T application – data projects will sometimes go wrong. But they are much more likely to go wrong if the people involved, at all levels, do not have enough time or freedom to think, to engage their curiosity, and to ask themselves "is this definitely right?"

You can view Moray's original article here.

Moray Barclay is an experienced Data Analyst working in hands-on coding, Big Data analytics, cloud computing and consulting.
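To close, the kind of first look Moray describes – counting the rows, checking the date range, graphing one measurement over time – takes only a few lines. The sketch below reuses the same illustrative file and column names as the earlier example; they are assumptions, not details of the real Test & Trace system:

```python
import pandas as pd

# Same illustrative file as the earlier sketch - the point is to look
# at the data before transposing it anywhere.
results = pd.read_csv(
    "pillar2_results.csv",
    dtype={"patient_id": "string", "test_result": "string"},
    parse_dates=["test_date"],
    dayfirst=True,
)

# Does this data look right?
print("Rows received:", len(results))          # 65,000 or 1,000,000?
print("Earliest test:", results["test_date"].min().date())
print("Latest test:", results["test_date"].max().date())
print("Duplicate patient IDs:", results["patient_id"].duplicated().sum())

# Positive tests per day - a sudden dip or blip is worth drilling into.
positives = results[results["test_result"] == "positive"]
daily_positives = positives.groupby(positives["test_date"].dt.date).size()
print(daily_positives)
```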

RELATED Jobs

- London | £65,000 - £85,000 per annum: This is an exciting new opportunity for an experienced PhD Data Scientist with Forecasting experience to work for a renewable energy company.

- London | £44,000 - £54,000 per annum + Benefits: This innovative, fast-growing Software/Tech start-up is looking for a Senior Customer Analyst to help them understand their customers' purchasing behaviour.

- Newbury, Berkshire | £50,000 - £70,000 per annum: Exciting new role within a leading Fintech.
