Annual Review 2018/2019

ANNEXES

ANNEX A — ABOUT THE BSB

What is the BSB?

The Banking Standards Board (BSB) was established in April 2015 with the aim of helping to raise standards of behaviour and competence across the UK banking sector. It is a non-statutory, voluntary membership body open to all banks and building societies operating in the UK.

The BSB was set up following the report by the UK Parliamentary Commission on Banking Standards[1] into the events that precipitated and exacerbated the financial crisis. Its Chairman, Dame Colette Bowe, was appointed by Bank of England Governor Mark Carney in November 2014 and its Board announced in April 2015. The new organisation opened its doors to membership from across the sector in January 2016. It is the boards of banks and building societies, with their primary responsibility for culture, that take the decision about whether to join the BSB.

The composition of the BSB’s Board reflects its remit and its focus on customer and societal outcomes. The organisation is funded by its members not to represent them, but to inform and challenge them; it speaks with and about, but never for, its member firms. A majority of BSB Board members are, accordingly, appointed from outside the sector. These non-practitioner members (who include the Chairman and Deputy Chairman, Sir Brendan Barber) bring expertise and authority to the Board and ensure its independence and impartiality. Practitioner members, meanwhile, are drawn from across the banking as well as the investor spectrum and bring both a professional and personal commitment to the role.

Given the diversity of a sector that spans banks and building societies, foreign branches and subsidiaries, and retail and investment firms, the BSB engages with a wide range of both firms and partner organisations. We have worked with professional bodies, training providers, industry bodies, other industry organisations (e.g. the Fixed Income, Currencies and Commodities Markets Standards Board), academic institutions, think tanks, regulators, policy makers and professional networks, both in the UK and globally. We aim to work creatively, effectively and efficiently. This includes avoiding duplication of what is already being done well, stepping in to inform, facilitate, innovate and challenge, and (equally importantly) stepping back again as appropriate to avoid fragmentation or complexity.


Trustworthiness and culture

The BSB does not exist to encourage trust in the banking sector, but to help raise the trustworthiness of the sector; a very different proposition. The onus is not on customers, members and clients to trust the firm, but on banks and building societies themselves to demonstrate through their actions that they are worthy of being trusted. These actions need to encompass the broad spectrum of competence, behaviour and culture.

When we talk about culture in this context, we mean the way that things get done within an organisation; the assumptions, values and expectations that shape the way in which people behave in a group (and which may be very different to how they behave in other groups, or on their own). These assumptions, values and expectations will influence the way in which people identify, communicate and respond to incentives.

Similar objectives, strategies, business models, responsibility maps and reward structures may produce different results in firms with different cultures. Understanding and managing the culture of the firm it leads is therefore a core responsibility of any board. It is one that cannot be delegated to regulators, policy makers or a specific function within the firm such as compliance, risk or human resources.

The responsibility that boards of banks and building societies have for managing culture is, given the size and inter-connectedness of the UK banking sector and its centrality to the economy and society, particularly important.

A successful, dynamic UK economy needs a secure, reliable and trustworthy banking sector. Higher standards of behaviour and competence matter also for the wellbeing and fulfilment of the many people who work in banks and building societies across every part of the UK. A banking sector that neither demonstrates nor aspires to high standards lets down its customers, members, investors and all who give the sector its ‘social licence’ to operate.

While every firm shares responsibility for the trustworthiness of the sector of which it is a part, the starting point for each individual firm and the challenges it faces will, of course, be different. For some, the challenge may be about changing and improving their culture; for others, maintaining that culture in a context of changing technology, markets, customer preferences, demographics, competition or other external factors. For others still, aligning local, national and global cultures, or creating the ‘right’ culture in a newly established firm (or not losing it as the firm expands or restructures). The particular combination of challenges facing each board and executive team will be as individual as the firm, and will itself change over time.

The BSB approach

Work to raise standards and demonstrate trustworthiness requires not only determination but also information. Boards and executive teams need evidence that can help them to identify what needs to be done, prioritise competing demands, determine actions and timescales, and establish a baseline against which progress can be measured.

While every firm is different, some of the issues each faces will nevertheless be shared. Banks and building societies can learn from each other, as well as from firms in other sectors, about how to address such issues in the most effective and efficient way; challenging themselves, and building better business models for their customers, members and clients.

The BSB’s assessment and policy work are intended to provide member firms with the evidence, support and challenge to help them achieve and maintain high standards of behaviour and competence, individually and collectively. Underpinning this dual approach is a framework of nine characteristics, both ethical and professional, that we would expect to lead to good outcomes for customers, members, clients, employees and investors, and for the economy and society as a whole; characteristics that we would therefore expect to be associated with any good culture in banking.

Our Assessment does not assess firms against a template of what a ‘good’ culture looks like. There is no uniquely good (or bad) organisational culture against which all others can be measured. Firms with very different cultures can produce equally good or bad outcomes for customers and clients and more broadly.

We do not, therefore, set out to measure or rank culture directly. Rather, we ask how far each of our nine characteristics is demonstrated by the firm, including relative to other firms. We would expect a firm that strongly exhibited our nine characteristics to be better equipped and more likely to serve its customers, members and clients well than one in which these elements were lacking.

Fig 53. BSB Assessment Framework
The nine characteristics against which firms are assessed are honesty, respect, openness, accountability, competence, reliability, responsiveness, personal and organisational resilience, and shared purpose.

If these characteristics appear obvious and fundamental, that is precisely as it should be. They are characteristics that customers, members or clients should be able to take for granted as being not only present, but present to a very high degree in any bank or building society that they entrust their money to or deal with, irrespective of the firm’s size, business model, market segment, age, ownership structure or location. Furthermore, given the importance of the banking sector to the economy and the systemic nature of the sector, the public as a whole also has the right to expect the same of every bank or building society operating in the UK, whether or not they engage with it personally and directly.

Assessing firms against our nine characteristics and exploring areas of both strength and weakness reveals issues relevant both to individual firms and to firms collectively. At the individual firm level, the results of the Assessment are given to each board and discussed with the firm. The BSB does not publish firms’ Assessment reports. It is the responsibility of each board and executive team to decide how to act on and share (e.g. with employees and regulators) the contents of their report.

Member firms join the BSB and engage in the Assessment in order to learn and continuously improve.

Participation in the Assessment, with its cross-firm benchmarking and detailed reporting, demands a readiness on the part of board members and the executives to be self-critical and to ask questions of themselves and their employees that may elicit unexpected and unwelcome answers. A far more comfortable option would be to avoid asking such questions in the first place. BSB membership is voluntary; it is also challenging.

While individual firm reports are owned by the firms concerned, the BSB is committed to publishing evidence of what it finds at the cross-firm level, and identifying the issues and themes that in turn inform its policy work.

The Assessment is, by its nature, undertaken with firms individually. The BSB’s policy work, in contrast, focuses on collective challenges that may not necessarily be shared by all firms but will be common to many.

Drawing on evidence from the Assessment and elsewhere, the BSB works with firms to understand better the issues identified and to develop appropriate policy responses, including good practice guidance. Good practice guidance, it should be noted, does not impose any legal or regulatory obligation, nor is it a standard that sets a minimum requirement; the BSB does not operate on a ‘comply or explain’ basis. Rather, good practice guidance allows member firms and others in the sector to reference their own policies and procedures against a statement of what ‘good’ looks like. Guidance is developed in partnership with BSB members and represents a pooling of knowledge and experience. This is an approach that, to be effective, requires firms to be genuinely committed to a culture of continuous improvement; a commitment consistent with the voluntary nature of BSB membership.

In developing its policy work the BSB aims to be innovative, challenging, objective and collaborative, making full use of the flexibility afforded to it by its non-regulatory status. We work not only with firms but also with a broad range of stakeholders from across industries, consumer organisations, civil society, non-profits, the public sector and academia.

ANNEX B — THE BSB ASSESSMENT OF CULTURE, BEHAVIOUR AND COMPETENCE

Overview of the BSB Assessment

The Assessment exercise asks how far the nine characteristics of the BSB framework are demonstrated within a firm. Repeated annually, it provides boards with an impartial, evidence-based picture of the culture of their firm; not only over time and across different business areas, but also relative to other firms. These multiple perspectives, combined with other internal and external data used by firms, equip boards and executive teams better to gauge progress, set priorities and learn from good practice both within the firm and (including through our policy work) across firms.

The Assessment approach was developed by the BSB, working with leading academics in the fields of organisational psychology and ethnography from the London Business School and the London School of Economics and with strategy consultants. It comprises both a quantitative element (generated from an employee Survey) and a qualitative dimension (including e.g. focus groups, interviews and board questions) that allows the Survey results and any broader themes to be explored in more detail.

All participating firms engage in the Survey, the data from which provides benchmarked results by firm and business area.

Each firm receives its own Survey results, including (provided that response rates were high enough to be statistically representative of the relevant populations, and that at least seven firms could be compared) a comparison of its scores on each characteristic and question with the range of scores of all participating firms. These comparisons are provided not only at firm level, but also (where relevant to the firm) for retail banking, investment banking and functions, and at the next level down (e.g. within retail banking, for retail branch, private banking, commercial banking etc.).

Comparisons in each case include a range and quartile against the equivalent category across all relevant firms, though without revealing the identity of any individual firm. The results for retail banking at Firm A, for example, were compared with those for retail banking at all assessed firms with a retail banking business.
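As a rough illustration of this kind of benchmarking (not the BSB's actual implementation), the sketch below uses pandas with hypothetical scores for seven firms to derive one firm's range, rank and quartile position for a single characteristic within a business area.

```python
import pandas as pd

# Hypothetical retail-banking scores for one characteristic across assessed firms.
scores = pd.Series(
    {"Firm A": 71.0, "Firm B": 76.5, "Firm C": 68.2, "Firm D": 80.1,
     "Firm E": 74.3, "Firm F": 69.9, "Firm G": 77.8},
    name="honesty_retail",
)

firm = "Firm A"
ranks = scores.rank(ascending=False)  # rank 1 = highest score
benchmark = {
    "score": scores[firm],
    "min": scores.min(),
    "max": scores.max(),
    "rank": int(ranks[firm]),
    # Quartile 1 = top quartile of the compared firms.
    "quartile": int(pd.qcut(ranks, 4, labels=[1, 2, 3, 4])[firm]),
}
print(benchmark)
```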

Firms that take part in both the Survey and the qualitative elements of the full Assessment receive a more extensive report containing fuller findings and analysis.

The consistency of the Survey questions (and the nine characteristics of the underlying framework) is central to enabling a dynamic picture to be built up over time, both at and within individual firms and across firms. The qualitative aspect of the Assessment may, in contrast, vary from year to year depending on the themes explored and the most appropriate way of approaching these.

In 2017, we amended a very small number of questions to reflect our learning from the first year of running the Survey. Where questions are amended, this is always made clear to avoid incorrect comparisons being drawn. We also asked, in 2018, an additional three questions exploring specific themes (shown in Annex C).

We continue to explore new measurement techniques, as well as whether and how to incorporate firm or third-party information into the Assessment, to ensure that the exercise remains valuable for firms and for their customers, members, clients and employees.

Assessment methodology

In 2018 the BSB conducted its third annual Assessment. This exercise asks how far the nine characteristics of the BSB framework are demonstrated within a firm, and uses both quantitative and qualitative evidence to inform the answer. Quantitative evidence is gathered through an Employee Survey, the results of which provide firms with comparative results both over time and against other firms; qualitative evidence was drawn in 2018 from written submissions by firms’ boards, interviews with non-executive directors and executives, and focus groups with employees.

26 member firms took part in 2018, up from 25 in 2017 and 22 in 2016. Of these 26, nine participated in the full Assessment (both quantitative and qualitative), and 17 in the Survey element only. The firms that took part in 2018 are shown below, with those that participated in the full Assessment highlighted in bold.

Atom Bank, Bank of Ireland, Barclays, Buckinghamshire Building Society, Cambridge and Counties Bank, Charity Bank, Citi, Co-op, CYBG, Ecology Building Society, Handelsbanken, HSBC, Lloyds Banking Group, Morgan Stanley, Nationwide, OneSavings Bank, Paragon Bank, Penrith Building Society, RBS, Redwood Bank, Santander, Standard Chartered, State Bank of India, Tesco Bank, Unity Trust Bank, Vanquis Bank.

The BSB Employee Survey

The BSB Employee Survey consists of 37 questions (Annex C). Questions 1 to 36 each correspond to one of the nine characteristics of the Assessment framework. Question 37 invites respondents to enter three words describing their firm in a free text box.

The questions explore employees’ perceptions, observations and beliefs about their firm’s culture, drawing on personal experience. Questions 1 to 36 are both positively and negatively framed to reduce the risk of acquiescence bias (the tendency of survey participants to agree with questions) and, apart from the first and last few questions, are presented in a randomised order.

The Survey is run in each firm in the same way, to reduce as far as possible any firm-specific framing effects that might bias answers. It is not carried out as part of or in conjunction with any other survey or exercise, and has a set appearance and format that positions it clearly as an externally run BSB survey. Employees completing the Survey do so completely anonymously; the BSB does not know respondents’ individual identities, and results are presented to firms in a way that avoids any risk of attribution.

In developing and refining the survey we conducted cognitive testing with a number of employees across business lines at a diverse set of firms.

Questions 1 to 36 of the survey use a five-point Likert scale, i.e. strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, strongly disagree. The answers to each question are converted into a score on a scale of 0 to 100 (with 100 representing a situation in which all respondents strongly agreed with a positively framed statement, or strongly disagreed with one that was negatively phrased)[2].

The scores for the individual questions relevant to each of the nine Assessment characteristics are then combined (giving equal weight to each of the relevant questions) to give a score of 0 to 100 for each characteristic.

The Survey can be run on either a census basis (i.e. sent to everyone in the firm) or on a sample basis.

In practical terms, the latter approach is used only in larger firms, as the sample required in a smaller firm would be close to the entire population of that firm. Some larger firms that began in 2016 with a sampling approach have subsequently decided to send the Survey to all of their employees.

Where a sample approach is used, the number of respondents needed is determined by what is required to provide statistically representative results, and comparisons with other firms, at the level of individual business lines and functions (i.e. not just at the level of the firm as a whole).

  [2] The scores on a scale of 0 to 100 are calculated by applying evenly spaced weights from 0 to 1 (i.e. 0, 0.25, 0.5, 0.75, 1) to the five possible Likert scale responses. The weighting is reversed for negatively framed questions. This means that a higher score in the results presented always indicates a more positive outcome, irrespective of whether the question is positively or negatively framed.
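As an illustration of the scoring just described, the following sketch (in Python, with hypothetical responses) converts five-point Likert answers into a 0 to 100 question score, reversing the weights for negatively framed questions, and combines question scores into a characteristic score with equal weights.

```python
import numpy as np
import pandas as pd

# Evenly spaced weights from 0 to 1 for the five Likert responses,
# as described in footnote [2] above.
WEIGHTS = {
    "Strongly disagree": 0.0,
    "Somewhat disagree": 0.25,
    "Neither agree nor disagree": 0.5,
    "Somewhat agree": 0.75,
    "Strongly agree": 1.0,
}

def question_score(responses, negatively_framed=False):
    """Convert the Likert responses to one question into a score from 0 to 100.

    The weighting is reversed for negatively framed questions, so a higher
    score always indicates a more positive outcome.
    """
    weights = pd.Series(responses).map(WEIGHTS)
    if negatively_framed:
        weights = 1.0 - weights
    return 100.0 * weights.mean()

def characteristic_score(question_scores):
    """Combine question scores into a characteristic score with equal weights."""
    return float(np.mean(question_scores))

# Hypothetical responses to a positively framed question.
example = ["Strongly agree", "Somewhat agree",
           "Neither agree nor disagree", "Somewhat agree"]
print(question_score(example))                   # 75.0
print(characteristic_score([75.0, 62.5, 80.0]))  # 72.5
```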

In 2018 the Survey was sent to 188,050 people at 26 firms and received 72,024 responses. This level of response means our results for individual firms have relatively small margins of error, i.e. we can be confident that the views expressed by the responding samples are unlikely to differ from the views of the populations they are meant to represent[3]. In 2017 the Survey was sent to 106,092 people at 25 firms and received 36,268 responses; in 2016, it was sent to 82,139 people at 22 firms and received 28,122 responses (in both years, again allowing high confidence in the results).

Each firm receives its own Survey results across different parts of its organisation, and is able to compare its scores for each characteristic and question with those of all other participating firms. Other firms are not identified in a firm’s own report.

The Survey results are provided on an interactive dashboard. Firms can see the Survey score for their whole organisation, with a rank and quartile position compared to other participating firms. They can also see the question and characteristic scores for each of their business areas, benchmarked against all other firms with comparable business areas. Firms are also able to analyse the data by some demographic characteristics, though the dashboard prevents the data being cut in any way that would create a risk of attribution of respondents.

With their results displayed in this way, boards and executive teams can see where specifically they are performing well against their peers, and where there is room for progress. Repeating this annually, using the same methodology, allows them also to gauge progress over time.

In any one year we may include additional Survey questions to gather more information on a particular theme. In 2018 we included three such questions, all of which were presented to respondents after questions 1 to 37 had been asked. Additional questions do not feed into the scores of firms or affect comparative rankings.

  [3] For the results we provide to individual firms at the firm level and for business areas within a firm, we aim for sample sizes sufficient to achieve a confidence level of 95% and a confidence interval (margin of error) of ±5%. In practice, the response rates achieved in 2018 mean that these desired levels are met for most firms. For the smallest firms or for smaller business areas, margins of error can be larger unless almost all employees respond.
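For illustration, the sketch below computes the sample size implied by these targets (95% confidence, ±5% margin of error) for a hypothetical population, using the standard formula for a proportion with a finite population correction; the exact calculation used by the BSB may differ.

```python
import math

def required_sample_size(population, z=1.96, margin_of_error=0.05, p=0.5):
    """Sample size for a proportion at a given confidence level and margin of
    error, with a finite population correction; p = 0.5 is the most
    conservative assumption about response variance."""
    n_infinite = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n_infinite / (1 + (n_infinite - 1) / population))

# Hypothetical populations: a large business area vs. a small firm.
print(required_sample_size(20_000))  # 377
print(required_sample_size(300))     # 169: close to a census for a small firm
```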

Gathering qualitative evidence

Nine firms took part in the qualitative aspect of the Assessment in 2018, and evidence for this was gathered in three ways: board questionnaires, interviews with non-executive directors and executives, and employee focus groups.

Board questionnaires

The BSB Chairman wrote to the chairs of each of the nine firms in 2018, asking the following questions.

  • Last year, you outlined for us your priorities on organisational culture for 2017/18. How do you feel you have progressed against these priorities?
  • Do your priorities remain the same for 2018/19, or have they changed, and why?

The boards’ responses provided firm-specific context to inform our interpretation of the wider evidence received.

Interviews with non-executive directors and executives

As part of our qualitative information gathering exercise, in 2018 we also interviewed 12 non-executive directors (NEDs) and 59 executives from across the nine participating firms. Interviews were semi-structured and covered:

  • culture change at the firm during the previous 12 months;
  • priorities for the coming 12 months;
  • questions relating to diversity and inclusion, customer focus, the relationship between control functions and business areas, the implications of possible future changes to the banking industry and a range of firm-specific themes; and
  • any other cultural or behavioural topics that the interviewee wished to discuss.

These interviews at a senior level helped us to understand the perspectives of individual NEDs and executives, including what they felt was going well and anything that they considered needed to change.

Focus groups

The third and very important aspect of the qualitative Assessment in 2018 (as in previous years) was hearing from employees about their firm’s culture through focus group discussions. In 2018, we also explored with focus groups the themes of diversity and inclusion, customer focus and the relationship between control functions and business areas.

Each focus group was firm-specific (i.e. participants were in each case from the same firm). Employees were from a mixture of junior and middle levels of seniority, and we asked in each case that no participant’s line manager was part of the same focus group. We conducted 89 focus groups in 2018, involving 837 employees from different business areas across the nine participating firms. The focus groups were facilitated by a third party; the BSB itself does not receive or hold information on the identities of the individuals who participate (other than the business area(s) or part(s) of the firm that each focus group is drawn from).

ANNEX C — 2018 BSB EMPLOYEE SURVEY QUESTIONS

HONESTY
  1. I believe senior leaders in my organisation mean what they say.
  2. In my organisation I see instances where unethical behaviour is rewarded.
  3. My colleagues act in an honest and ethical way.
  4. It is difficult to make career progression in my organisation without flexing my ethical standards.
RESPECT
  5. At my work I feel that I am treated with respect.
  6. At my work people seek and respect different opinions when making decisions.
  7. In my organisation Risk and Compliance are both respected functions.
  8. In my organisation we are encouraged to follow the spirit of the rules (what they mean, not just the words).
  9. I believe my organisation puts customers at the centre of business decisions.
OPENNESS
  10. In my experience, people in my organisation are truly open to review and feedback from external sources.
  11. In my organisation people are encouraged to provide customers with information in a way that helps them make the right decisions.
  12. In my experience, people in my organisation do not get defensive when their views are challenged by others.
  13. In my organisation I am encouraged to share learnings and good practices with others.
  14. If I raised concerns about the way we work, I would be worried about the negative consequences for me.
ACCOUNTABILITY
  15. In my experience, people in my area clearly understand the behaviour that is expected of them.
  16. I believe senior leaders in my organisation take responsibility, especially if things go wrong.
  17. I see people in my organisation turn a blind eye to inappropriate behaviour.
  18. I see people in my organisation try to avoid responsibility in case something goes wrong.
  19. I feel comfortable challenging a decision made by my manager.
COMPETENCE
  20. In my experience, people in my organisation have the skills and knowledge to do their jobs well.
  21. In my role, I am encouraged to continually learn new skills and improve my role-specific knowledge.
  22. I am confident in the ability of people in my area to identify risks.
RELIABILITY
  23. When my organisation says it will do something for customers, it gets done.
  24. I see the people I work with go the extra mile in order to meet the needs of our customers.
  25. When people in my organisation say they will do something, I can rely on them getting it done.
RESILIENCE
  26. In my experience, people in my organisation are good at dealing with issues before they become major problems.
  27. My organisation focuses primarily on short-term results.
  28. I often feel under excessive pressure to perform in my work.
  29. Working in my organisation has a negative impact on my health and wellbeing.
RESPONSIVENESS
  30. I believe that my organisation responds effectively to staff feedback.
  31. Our internal processes and practices are a barrier to our continuous improvement.
  32. I believe that my organisation responds effectively to customer feedback.
  33. I believe that my organisation encourages innovation in the best interests of our customers.
  34. I have observed improvements in the way we do things based on lessons learnt.
SHARED PURPOSE
  35. My organisation’s purpose and values are meaningful to me.
  36. There is no conflict between my organisation’s stated values and how we do business.
FREE TEXT QUESTION
  37. What 3 words would you use to describe your organisation?

Additional questions for 2018

SPEAKING UP

Q39 Have you wanted to raise concerns at your organisation over the last 12 months? (If yes, please select the one issue that concerned you most.)

  • No, I have not wanted to raise concerns at my organisation over the last 12 months
  • Yes, relating to actions not in the best interests of customers, clients or members
  • Yes, relating to actions that damage market integrity
  • Yes, relating to ignoring internal policies and procedures
  • Yes, relating to sexual harassment
  • Yes, relating to bullying
  • Yes, relating to discrimination
  • Yes, relating to something else [please specify]

Q39a [Only asked of respondents who indicated at Q39 that they had wanted to raise a concern]

Did you raise your concerns about the issue?

  • Yes
  • No
  • Prefer not to say

Q39b [Only asked of respondents who answered ‘yes’ to Q39a, i.e. who had raised their concerns]

Do you feel your concerns were (or are being) listened to and taken seriously?

  • Yes
  • No
  • Don’t know

Q39c [Only asked of respondents who answered ‘no’ to Q39a, i.e. who had not raised their concerns]

What was it that stopped you from raising concerns about the issue? (Please select one or more of the statements below.)

  • I did not know who to raise concerns to
  • I did not trust the process to keep my concerns secure and confidential
  • I felt that nothing would happen if I did raise concerns
  • I felt it would be held against me if I raised concerns
  • I felt it would make my manager or team look bad if I raised concerns
  • I felt it would make me look bad if I raised concerns
  • I did not raise concerns as no one else does this in my organisation
  • I did not raise concerns for other reasons (not covered above)

PERCEPTIONS OF GENDER EQUALITY

Q40 How far do you agree or disagree with the following statement:

People have equal opportunities in my organisation regardless of their gender.

  • Strongly agree
  • Somewhat agree
  • Neither agree nor disagree
  • Somewhat disagree
  • Strongly disagree

Q40a [Only asked of respondents who ‘somewhat disagreed’ or ‘strongly disagreed’ with the statement in Q40]

Which of the following statements do you feel best describe your organisation?

  • Men have greater opportunities in my organisation
  • Women have greater opportunities in my organisation
  • Other [please specify]

WELLBEING

Q29a [Only asked of respondents who ‘strongly agreed’ or ‘somewhat agreed’ to the statement in Q29: ‘Working in my organisation is having a negative impact on my health and well-being’]

In an earlier question you said that working in your organisation was having a negative impact on your health and well-being. Could you tell us what it is about working in your organisation that causes this?

(Your comments, along with all of the other survey responses we receive, will help us understand better the factors that may have a negative impact on health and well-being in your organisation and in the banking sector.)

ANNEX D — METHODOLOGY USED IN THIS ANNUAL REVIEW

Methodology for quantitative analysis

We use a series of regression analyses and additional tests to analyse the quantitative data obtained from the Survey.

Regressions

The 36 questions that form our core Survey are answered on a five-point Likert scale (strongly agree, somewhat agree, neither agree nor disagree, somewhat disagree, strongly disagree). This is an ordinal scale, i.e. one where responses can be sorted by a rank order. The primary regression model we apply to the data is therefore an ordered logit model.

Regressions are run for every question separately (so there are 36 different regressions for the 36 Survey questions), and at the level of the individual respondent. Using data from 2016, 2017 and 2018, we ran our models across a total of 136,414 lines of data.

To understand what explains the variation in responses to the Survey questions, we use the demographic and institutional data we gather from Survey respondents relating to gender, tenure, location, role type, business area and firm. We control for firm-specific effects in all our regression models by including (1, 0) dummy variables, which identify the firm a respondent is from. We do not, however, report individual firm-specific results in this public report.

We also include a year dummy variable to understand whether responses to our questions differ across years. The regressions we run are weighted so that the results are representative of the population of participating firms: some firms follow a sampling approach for the Survey while others send it to their entire populations, and sample sizes are in any case non-linearly related to population sizes. The weighting approach in our regressions accounts for both situations.
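A minimal sketch of one such question-level regression is shown below, in Python using statsmodels' OrderedModel, with hypothetical data and column names. Note that this simplified example does not apply the survey weights described above, so it illustrates the model structure rather than the BSB's actual estimation pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical respondent-level data: one row per respondent, with the Likert
# answer to one Survey question plus demographic/institutional fields.
rng = np.random.default_rng(0)
n = 1_000
df = pd.DataFrame({
    "q1": rng.integers(1, 6, n),            # 1 = strongly disagree ... 5 = strongly agree
    "line_manager": rng.integers(0, 2, n),  # (1, 0) dummy
    "gender": rng.choice(["F", "M"], n),
    "tenure": rng.choice(["<1yr", "1-5yrs", ">5yrs"], n),
    "firm": rng.choice(["firm_1", "firm_2", "firm_3"], n),  # firm-specific effects
    "year": rng.choice(["2016", "2017", "2018"], n),        # year dummy
})

# Ordered outcome: the five steps of the Likert scale.
endog = df["q1"].astype(pd.CategoricalDtype([1, 2, 3, 4, 5], ordered=True))

# Explanatory variables as (1, 0) dummies relative to a base category;
# no constant is added because the ordered model estimates thresholds instead.
exog = pd.get_dummies(
    df[["line_manager", "gender", "tenure", "firm", "year"]].astype(str),
    drop_first=True,
).astype(float)

# One such regression is run separately for each of the 36 core Survey questions.
# Survey weights are not applied in this simplified sketch.
result = OrderedModel(endog, exog, distr="logit").fit(method="bfgs", disp=False)
print(result.summary())
```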

Outputs and their interpretation

The coefficients for all variables are calculated and need to be interpreted relative to a base. The results for the variable ‘line manager’, for example, should be interpreted relative to not having line management duties. Figure 54 shows the explanatory variables in our models and the base in each case.

Fig 54. Making sense of the data (using Q1 as an example)

The coefficients for these ordered logit regressions are expressed as odds ratios. If, for example, the odds ratio on Q1 for the variable ‘line manager’ is 1.6, this means that — controlling for all the other factors in our model — the odds of answering more positively are 1.6 times greater for line managers than for those who do not have line management duties. Odds ratios greater than 1 imply that a more positive answer is more likely; odds ratios below 1 imply the opposite.

For ease of interpretation, we reverse the order of the Likert scale where a question is negatively phrased, so that higher odds ratios always imply a more positive likelihood for all questions.

Our regressions give us two pieces of information: an odds ratio (which gives a sense of the magnitude of the effect of each explanatory variable on the outcome of interest) and a p-value (which indicates whether the variable is statistically significant in explaining the outcome). To make patterns easier to observe across all the ordered logit regressions for our Survey questions, the results are presented in the visual format shown in figure 55. The size of each circle denotes the size of the variable’s impact: as the odds ratio increases above 1, the circle grows (a positive relationship between the explanatory variable and the outcome); correspondingly, as the odds ratio decreases from 1 towards 0, the circle also grows (a negative relationship between the explanatory variable and the outcome). The colour of each circle reflects whether the variable is statistically significant and, if so, in which direction (green for positive, red for negative, with darker shading indicating greater statistical significance). The results of the regressions across all our Survey questions are presented in this format in figure 55.
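Continuing the sketch above (and reusing its hypothetical `result` and `exog` objects), the odds ratios and p-values that would feed a chart like figure 55 can be read off the fitted model as follows.

```python
import numpy as np
import pandas as pd

# Assumes `result` and `exog` from the ordered logit sketch above.
# Exponentiating the coefficients gives odds ratios; threshold parameters are
# excluded by selecting only the exog columns. For negatively framed questions,
# the Likert scale would be reversed before fitting, so that higher odds ratios
# always indicate a more positive outcome.
summary = pd.DataFrame({
    "odds_ratio": np.exp(result.params[exog.columns]),
    "p_value": result.pvalues[exog.columns],
})

# In a chart like figure 55, the distance of the odds ratio from 1 would set
# the circle size, and the direction and p-value would set colour and shading.
summary["direction"] = np.where(summary["odds_ratio"] > 1, "positive", "negative")
summary["significant_5pct"] = summary["p_value"] < 0.05
print(summary.round(3))
```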

Fig 55. BSB Survey 2018 results by characteristic

Robustness checks

An important assumption of ordered logit models is that the relationship between an explanatory variable and the dependent variable should not change for the different categories (in this case, the steps of the Likert scale). This is known as the proportional odds or parallel lines assumption. Tests show that, in our regressions, this assumption is often violated and therefore, in order to establish greater confidence in our results, we conduct additional non-parametric tests and run further types of regression. If the results across all our regressions and tests are consistent, we can be reasonably confident of the direction of the relationship between an explanatory variable and the outcome of interest, as well as the broad size of the effect.

Fig 56. Series of regressions and tests used to confirm robustness of results

First, we run non-parametric tests (Mann-Whitney and χ²) to test for differences between distributions. As an example, for each Survey question we test whether the shift in the ordinal distribution from 2017 to 2018 is statistically significant or not.
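As an illustration only (with randomly generated responses rather than BSB data), a year-on-year comparison for a single question using scipy might look like this:

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

rng = np.random.default_rng(1)

# Hypothetical Likert responses (1-5) to one Survey question in two years.
responses_2017 = rng.integers(1, 6, size=2_000)
responses_2018 = rng.integers(1, 6, size=2_200)

# Mann-Whitney U: is the 2018 ordinal distribution shifted relative to 2017?
u_stat, u_p = mannwhitneyu(responses_2018, responses_2017, alternative="two-sided")

# Chi-square: do the proportions in each response category differ across years?
counts = np.array([
    np.bincount(responses_2017, minlength=6)[1:],
    np.bincount(responses_2018, minlength=6)[1:],
])
chi2, chi2_p, dof, _ = chi2_contingency(counts)

print(f"Mann-Whitney p = {u_p:.3f}, chi-square p = {chi2_p:.3f}")
```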

Second, we run generalised ordered logit regressions. This method has the advantage of freeing up the variables from the proportional odds assumption. It also allows us to see how the odds ratios vary at the different thresholds (the steps of the Likert scale). This type of model does, however, introduce greater complexity by generating four sets of coefficients for each regression, making it difficult to present results across our 36 core Survey questions in a way that allows the easy identification of patterns.

Third, we run a simpler logit model by collapsing the two most favourable response categories (strongly agree and somewhat agree for positively phrased questions) into one, the two least favourable response categories (strongly disagree and somewhat disagree for positively phrased questions) into one, and ignoring all neutral responses. While this is a simpler approach, it does not allow us to use the full richness of our data.
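A sketch of this collapsed binary logit, again on hypothetical data, is shown below; the two most and two least favourable categories are combined and neutral responses are dropped before fitting a standard logit with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1_000
df = pd.DataFrame({
    "q1": rng.integers(1, 6, n),            # 1-5 Likert codes, positively phrased question
    "line_manager": rng.integers(0, 2, n),  # (1, 0) dummy
})

# Collapse: 4-5 (agree) -> 1, 1-2 (disagree) -> 0, and drop neutral (3) responses.
collapsed = df[df["q1"] != 3].copy()
collapsed["favourable"] = (collapsed["q1"] >= 4).astype(int)

exog = sm.add_constant(collapsed[["line_manager"]].astype(float))
logit_result = sm.Logit(collapsed["favourable"], exog).fit(disp=False)

# Odds ratio for line managers vs. non-line-managers, as in the ordered models.
print(np.exp(logit_result.params["line_manager"]))
```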

We compare the results of all our regressions and tests. Where these are consistent, we can be reasonably confident of the direction and size of the relationship between an explanatory variable and the outcome of interest. In practice, we find that, for most major cases we explore, the regressions and tests validate each other.

ANNEX E — ADDITIONAL REGRESSION RESULTS BY BUSINESS AREA


Fig 57. BSB Survey 2018 — Retail Banking ordinal logit results
Fig 58. BSB Survey 2018 — Commercial Banking ordinal logit results
Fig 59. BSB Survey 2018 — Investment Banking ordinal logit results
Fig 60. BSB Survey 2018 — Functions ordinal logit results

Note: Firm-specific effects are controlled for in each of these figures but not shown.