AIRB Modelling jobs

What We Do

We help the best talent in the AIRB Modelling market find rewarding careers.

Advanced Internal Ratings-Based (AIRB) Modelling refers to a set of Credit Risk measurement techniques that allow banks to develop their own models to quantify the regulatory capital they must hold against Credit Risk.

The development of PD (Probability of Default), EAD (Exposure at Default), and LGD (Loss Given Default) models within the banking sector is an important but very niche skillset, and Harnham have focused on sourcing, understanding, and assessing both candidate abilities and client needs in this space.
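To make the terminology concrete, here is a minimal, purely illustrative sketch of how the three model outputs fit together: expected loss is EL = PD × LGD × EAD, while the capital requirement K comes from the Basel II corporate risk-weight function. The sketch below omits the maturity adjustment for brevity, and all input figures are invented, not drawn from any of the roles above.

```python
# Illustrative sketch of how PD, LGD, and EAD combine under the AIRB
# approach. Follows the Basel II corporate risk-weight function but
# omits the maturity adjustment; all figures below are hypothetical.
from math import exp, sqrt
from scipy.stats import norm

def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Expected loss: EL = PD x LGD x EAD."""
    return pd * lgd * ead

def capital_requirement(pd: float, lgd: float) -> float:
    """Capital requirement K per unit of EAD (Basel II corporate
    formula, maturity adjustment omitted)."""
    # Asset correlation R falls as PD rises (Basel II corporate curve).
    decay = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * decay + 0.24 * (1 - decay)
    # PD conditional on a 99.9th-percentile draw of the systematic factor.
    cond_pd = norm.cdf(
        (norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r)
    )
    # Capital covers unexpected loss: conditional loss minus expected loss.
    return lgd * cond_pd - lgd * pd

# Hypothetical exposure: PD 1.5%, LGD 45%, EAD of £2m.
pd, lgd, ead = 0.015, 0.45, 2_000_000
print(f"Expected loss: £{expected_loss(pd, lgd, ead):,.0f}")
print(f"Required capital: £{capital_requirement(pd, lgd) * ead:,.0f}")
```

In practice, each of the three inputs is a model in its own right (scorecards, regressions, or machine learning pipelines), which is why PD, EAD, and LGD development and validation are distinct specialisms within Credit Risk teams.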

Latest Jobs

Salary

£30,000 - £37,000 per annum + competitive benefits package

Location

Edinburgh

Description

A challenging but rewarding Model Validation opportunity at a small but fast-growing challenger bank!

Salary

£32,000 - £37,000 per annum + industry-leading benefits

Location

Edinburgh

Description

A leading challenger bank needs a strong Modeller to help with their second line of defence!

Salary

£50,000 - £70,000 per annum + Comprehensive Benefits Package

Location

London

Description

If you're a Credit Risk Modeller looking to extend yourself into machine learning, then this is the role you've been waiting for!

Salary

£25,000 - £38,000 per annum + Competitive Benefits

Location

Edinburgh

Description

Looking for a Senior Credit Risk Analyst with experience in model development or validation, for a role offering high visibility and a varied workload.

Salary

£45,000 - £65,000 per annum + Comprehensive Benefits Package

Location

London

Description

Own the end-to-end validation of Credit Risk and Finance models at a growing London lender.

Salary

£30,000 - £45,000 per annum + Competitive Benefits

Location

London

Description

A great opportunity for a Credit Risk Modeller to join an established, growing bank offering unrivalled learning and growth opportunities in IRB modelling.

Harnham blog & news

With over 10 years' experience working solely in the Data & Analytics sector, our consultants are able to offer detailed insights into the industry.

Visit our Blogs & News portal or check out our recent posts below.

The fight for senior risk analysts

If you have had difficulty hiring a Senior Risk Analyst recently, and you're scratching your head as to why, this article should shed some light on the matter. Year on year we have seen demand for Senior Risk Analysts skyrocket, making them the most sought-after analysts in the ever-evolving world of risk. Since 2012, the growth of challenger banks and the subprime sector has made hiring candidates with risk/FS experience alongside SAS skills even more challenging. The growth in demand simply can't be matched by supply.

Risk analysts with 2-5 years' experience are the golden eggs of this rapidly growing and advancing market, and it seems that everyone wants them. If a strong Senior Risk Analyst comes on the market, they can have as many as 15 roles to consider at any one time. That is great for the candidate, but a recruitment nightmare for the companies trying to get them on board. I have seen the good, the bad, and the ugly when it comes to recruitment processes, and the five tips below will give you a huge advantage in securing your perfect candidate in the face of fierce competition!

1. You must have a slick and efficient recruitment process

The days of a 3+ stage recruitment process for Senior Analysts are over. Why do three one-hour interviews when you can cover it all in one stage and send a message of intent to the candidate? If the recruitment process is slow, unorganised, and laborious, a candidate will assume that this is what it is like to work for the company. Ultimately, the quicker the process, the better your chance of securing your perfect candidate.

2. Sell, sell, sell

At the end of the day, the purpose of an interview is for the candidate to show off their skills in front of a prospective employer. But as the interviewer, you have a duty to sell the role as much as possible, because if you don't, I can guarantee your competitors will. Another big sell is skipping any testing at the first stage; face-to-face interaction as a first stage is a great way to get a candidate engaged in a process. Sometimes there may be 2-3 stages before candidates have even met anyone at the company!

3. It's the little things that make a big difference

Believe it or not, some people don't like regular contact from recruiters requesting updates on their situation (I couldn't believe it!). One really nice touch I have seen work is a hiring manager calling their preferred candidate from their personal mobile in between interviews to check in and see how things were going. Little things like this can make a big difference in the long run, and only take a couple of minutes.

4. Best offer first

This is the most frustrating thing I come across in recruitment. Companies sift through a huge number of CVs sent through by recruiters and direct applicants, spend countless hours interviewing candidates, and when they finally find the perfect candidate, they under-offer to see whether they can get them slightly cheaper. It all comes back to intent: making your best possible offer first time sends a positive and decisive message to any prospective candidate.

5. Flexing on skills: have you considered it?

Although it may not always seem ideal at first, employers flexing on skills and experience is something I have seen work a number of times over the past 12 months. For example, a role may sit open for six months while the employer searches for their perfect candidate; in that time, they could have hired someone who didn't quite tick all the boxes, trained them up in three months, and saved themselves a lot of time and money! If you think a candidate can pick things up quickly, definitely consider them.

Hiring a Senior Risk Analyst is never going to be a seamless process, so don't make it harder for yourself!

Fighting Crime with Data: An Ethical Dilemma

Can you be guilty of a crime you've yet to commit? That's the premise of Steven Spielberg's 2002 sci-fi thriller 'Minority Report'. But could it actually be closer to reality than you think?

As technology has advanced, law enforcement has had to adapt. With criminals utilising increasingly sophisticated methods to achieve their goals, our police forces have had to continuously evolve their approach in order to keep up. New digital advances have refined crime-solving techniques to the point where they can even predict the likelihood of a specific crime occurring. But with our personal data at stake, where do we draw the line between privacy and public safety?

Caught on Camera

The digital transformation has led to many breakthroughs over the past few decades, from fingerprint analysis through to the advanced Machine Learning models now used to tackle Fraud and analyse Credit Risk. With an estimated one camera for every 14 individuals in the UK, CCTV coverage is particularly dense. And, with the introduction of AI technologies, its use in solving crimes is likely to increase even further.

IC Realtime's Ella uses Computer Vision to analyse what is happening within a video. With the ability to recognise thousands of natural language queries, Ella lets users search footage for exactly what they're after, from specific vehicles to clothes of a certain colour. With only the quality of CCTV cameras holding it back, we're likely to see technology like this become mainstream in the near future.

Some more widespread technologies, however, are already playing their part in solving crimes. Detectives are currently seeking audio recordings from an Amazon Echo thought to have been active during an alleged murder. However, as with previous requests for encrypted phone data, debate continues around what duty tech companies have to their customers' privacy.

Hotspots and Hunches

Whilst Big Data has been used to help solve crime for a while, it has only begun to play a preventive role over the past few years. By using Predictive Analytics tools such as HunchLab to counter crime, law enforcement services can:

- Direct resources to crime hotspots where they are most needed.
- Produce statistical evidence that can be shared with local and national-level politicians to help inform and shape policy.
- Make informed requests for additional funding where necessary.

Research has shown that, in the UK, these tools have been able to predict crime around ten times more accurately than the police. However, above and beyond the geographical and socioeconomic trends that drive these predictions, advances in AI have pushed things even further.

Often, after a mass shooting, it emerges that the perpetrators had spoken about their planned attack on social media. The social landscape is far too big for authorities to monitor everyone, and simply scanning for keywords can be misleading. IBM's Watson, however, can understand the sentiment of a post. This huge leap forward could be the answer to the sincere and fair policing of social media that we've yet to see.

Man vs Machine

Whilst our social media posts may be in the public domain, the question remains: how much of our data are we willing to share in the name of public safety? There is no doubt that advances in technology have left us vulnerable to new types of crime, from major data breaches to new ways of cheating the taxman. So there is an argument that we need to surrender some privacy in order to protect ourselves as well as others. But who do we trust with that data?

Humans are all susceptible to bias, and AI inherits the biases of its creators. Take a program like Boulder, a Santa-esque prototype that analyses the behaviour of people in banks, determining who is 'good' and who is 'bad'. Whilst it can learn the signs to look for, it is also making decisions based on how it has been taught 'bad' people might look or act. As such, is it any more trustworthy than an experienced security guard?

And if we set human bias aside, do we trust emotionless machines to make truly informed decisions? A study that applied Machine Learning to bail decisions found that the technology's recommendations would have resulted in 50% fewer reoffenders than the original judges' decisions. However, whilst the evidence suggests this may be the way forward, it is unlikely that society will accept such an important, life-changing decision being made by a machine alone.

There is no black and white when it comes to how we use data to prevent and solve crime. As a society, we are continuously pushing the boundaries and determining how much technology should impact the way we govern ourselves. If you can balance ethics with the evolution of technology, we may have a role for you. Take a look at our latest roles or contact one of our expert consultants to find out how we can help you.
