

Posts Tagged ‘benchmarking’

Director’s blog: Using practice sharing to catalyse innovation

Thursday, May 12th, 2016



Catalysing innovation by showing a better way

One effective approach to triggering innovation is seeing ‘a better way’ – that is, a superior way of working (a business practice) that is relevant to your business. Benchmarking is a long-established approach for doing this, comparing your business against another in a structured way. Indeed, as part of Codexx – and previously when I worked at IBM – I have led multiple benchmarking-type assessments in business areas such as production, R&D, supply chain and innovation. These assessments were effective ways of comparing business areas against models of best practice and performance, identifying practice shortfalls and so driving focused improvements.

However, benchmarking comparisons cannot be applied so easily to more focused business areas, as there may be no relevant best practice model or comparison database in place. In such cases a more tailored approach is required, which I will refer to as ‘practice sharing’. This approach is less focused on numeric performance comparison and more on key practices. Unlike an unstructured visit to view another company – ‘industrial tourism’ – it is a structured approach.

Figure: The benchmarking continuum

Introducing practice sharing

I recently led a practice sharing programme between two major industrial businesses – one based in Denmark, the other in Germany – focusing on the development of specialist production equipment. This was by definition a niche area that was key to both businesses’ competitiveness. The programme had its beginnings in 2009, as part of a re-engineering programme that Codexx was supporting for the Danish company’s production maintenance organisation, covering eight factories. To help overcome resistance to change and to ‘open the eyes’ of the maintenance managers to the opportunities for improvement, we included a ‘benchmarking’ element in the re-engineering programme. Because the maintenance benchmarking tools in the market were overly focused on specific areas (such as cost or lean) and did not provide the wide view that was needed, we developed a best practices framework based on the eight ISO 9001 quality management principles.

We used this to perform assessment visits to the maintenance organisations of aerospace, automotive, plastics and white goods manufacturers across Europe. These visits provided benefits both for our client and for the companies being visited, who received a comparative report and the opportunity to visit our client in return. Importantly, the assessment team comprised the maintenance managers themselves, who used the framework to perform the assessment, supported by Codexx. This structured approach ensured that key relevant practices were reviewed and compared, and the comparative practice scoring was used to define an improvement path and to monitor progress through a number of subsequent self-assessments. The programme achieved its objective of catalysing the maintenance managers to seek out new practices as part of the re-engineering programme.


Developing the approach

This success led the Danish company to use a similar approach, with the support of Codexx, to review its development of production systems, working in 2013 with a major German company with whom it had an existing commercial relationship – but which was not a competitor. We called the approach ‘practice sharing’ rather than ‘benchmarking’ to make it less formal and more in the spirit of learning than of an audit – which helped in gaining the support and involvement of the German company. Codexx developed a practice sharing framework, again based around the ISO quality management principles, using a similar structure and assessment approach to the maintenance programme.

This framework was tested with the China-based operations of both companies and then finalised. We then performed a practice sharing assessment in 2014. This was considered valuable by both parties, and a subsequent practice sharing programme, focusing on another area of production systems, was performed with the same German company in 2015-16. The approach in these practice sharing programmes was similar:

1. A business area of interest to both parties was identified and a commitment to perform a practice sharing assessment was made.

2. A practice sharing framework was developed.

3. Each partner self-assessed itself against the practice sharing framework.

4. A 1-day practice sharing visit was made to each company, facilitated by Codexx. This included presentations on the development of the company, a tour of its operations and then a review of the self-assessment. The agenda was allowed to flex substantially to take account of interest areas that emerged.

5. A report of the practice sharing findings and outcomes was produced by Codexx and shared with both parties.

6. Each company took forward specific follow-up internal actions and agreed collaborations.

What are the benefits from practice sharing?

Based on my experience of working with this approach since 2009, I have seen the following benefits:

  • The approach provides a structure – missing from an ad hoc visit – that helps align and focus the discussion on relevant practice areas.
  • The self-assessment provides a clear and objective picture of current practices, including areas for improvement, which helps focus the discussion.
  • The programme provides a catalyst for improvement for each party.
  • It’s a time-effective and cost-effective process.
  • It’s not complex and is transparent to the participants and other users.

What’s needed for effective practice sharing?

  • Win-win: Unlike benchmarking, where your company is compared to a model of best practice by an external assessor, practice sharing requires a partner. To engage the partner there needs to be the potential for benefits for both parties: the practice area to be examined needs to be relevant, and each party needs to consider that it can learn something from the other.
  • A structure: A practice sharing framework is needed to provide a structured comparison and independent facilitation to ‘run the process’ with the goal of maximising and capturing the outcomes from the practice sharing. Both companies also need to agree to respect confidential information that might be shared in the programme.
  • Flexibility: The assessment visits need to be flexible and adapt to the interests of the participants. As a facilitator, I had to strike a balance between directing the discussion back to relevant areas whilst allowing deviations from the agenda that were clearly creating value. This is key, as the framework is in effect a ‘working hypothesis’ of what are the important practices. The reality will undoubtedly be somewhat different and thus the session has to seek to accommodate potentially valuable emergent discussions.
  • The right people: Both parties need to assemble a team of specialists in the area of interest who can participate effectively – technically, interpersonally and linguistically (for international comparisons, English is likely to be the common language).
  • The right attitude: Both parties need to be open and ready to ‘tell it as it is’, covering both strengths and weaknesses in their own practices. This is not a competition – it’s a collaboration.

I’d be interested in readers’ own experiences in this area – contact me here.

Alastair Ross

Codexx Associates Ltd


From benchmarking to best practice sharing

Tuesday, May 5th, 2015

by Alastair Ross and Martyn Luscombe


The term ‘benchmarking’ originally emanated from the practices of cobblers, measuring people’s feet on a bench to make a pattern for a shoe. It was appropriated by business in the 1980s when progressive companies began to formally ‘benchmark’ themselves against each other. Its purpose was to drive business improvement based on identification of superior practices in other organisations. Xerox was the first major business to popularise it and since then it has become a core tool in the business improvement toolkit. Whilst benchmarking can be a very effective tool in catalysing and focusing improvement, it can also be easy to misuse, with the result that the not insignificant resources required can fail to yield usable improvements.

Codexx has gained significant experience in benchmarking programmes since it was established in 2002, and our consultants have additional benchmarking experience gained from their work with previous employers such as IBM, Philips and Cranfield School of Management. Our experience includes the benchmarking of manufacturing practices and performance, supply chain management, R&D, general business innovation and law firm innovation. This has given us a good feel for both the strengths and weaknesses of benchmarking. In this short article we share some of the key lessons we have learned and our recommendations on how best to use benchmarking methods in your own improvement programmes.

To help establish a framework for assessing the use of benchmarking, it is important to recognise that in practice ‘benchmarking’ covers a wide range of approaches, each with its own strengths and weaknesses. We have divided benchmarking into three distinct types and evaluated each on its merits, based on our experience in applying them:

  • Quantitative benchmarking
  • Quantitative practice assessments
  • Qualitative practice assessments

In practice, many benchmarking assessments will be a hybrid of two or more types to suit specific requirements. A key success factor in benchmarking is not to lose sight of the goal: to use the assessment findings in a practical way to catalyse, focus and effectively execute changes in business practices that will yield performance improvement. The benchmarking findings (the ‘score’) should never be seen as an end in themselves – although, in our experience, that can all too easily happen, as managers use the findings simply to validate a past or planned action rather than to drive improvement.

1. Quantitative benchmarking

This type of benchmarking can be considered as the ‘classic’ benchmarking with its focus on comparing the selected business against a defined set of performance metrics and comparing the results with a database of other businesses on an anonymous basis.

Benchmarking exercises sometimes make the mistake of focusing solely on performance metrics, so that only part of the story is revealed. By also comparing business practices in an objective way (e.g. through the use of maturity grids), a more valuable assessment can be performed, helping to answer the ‘why?’ as well as the ‘what?’ of the scoring. For example, if a manufacturing company is shown to have high inventories compared to the average in its sector, weak practice scoring in the use of lean techniques and the use of large production batches would explain this poor performance and give management a list of areas for improvement.
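To make the maturity-grid idea concrete, the sketch below scores two practices against ordered level descriptors, from weakest to strongest. The practices, descriptors and scoring scale are invented for illustration – they are not taken from PROBE or any other real tool:

```python
# A maturity grid: each practice has ordered level descriptors, weakest first.
# Practices and descriptors here are illustrative, not from a real tool.
maturity_grid = {
    "lean practices": [
        "no formal improvement activity",
        "isolated improvement projects",
        "lean tools used in some areas",
        "lean embedded across the operation",
    ],
    "batch sizing": [
        "large fixed batches",
        "batches reviewed annually",
        "batches set by changeover analysis",
        "one-piece flow where feasible",
    ],
}

def score_practice(practice, observed_level):
    """Return a 1..4 maturity score: the position of the matching descriptor."""
    levels = maturity_grid[practice]
    return levels.index(observed_level) + 1

scores = {p: score_practice(p, obs) for p, obs in [
    ("lean practices", "isolated improvement projects"),
    ("batch sizing", "large fixed batches"),
]}
print(scores)  # {'lean practices': 2, 'batch sizing': 1}
# Low maturity scores like these would help explain, say, high inventories.
```

Read alongside the performance metrics, low practice scores of this kind answer the ‘why?’ behind a poor performance number.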

A good example of this type of benchmarking is the PROBE manufacturing benchmarking tool. This was developed by IBM Consulting and London Business School in 1992 as an objective way of assessing a manufacturing operation against world class standards. It was based on a model of World Class Manufacturing that drew heavily on lean thinking. The tool provided numerical scoring based on practice maturity and measurable performance achievements, and grew to a database of over 2,000 manufacturing sites. Codexx has been a licensed user of PROBE since 2002 and we have used it for multi-factory benchmarking on a regular basis.

An excellent example of a company that has successfully applied best practices benchmarking and improvement is Grundfos, the leading global pump manufacturer with factories in Europe, America and Asia and headquartered in Denmark. Grundfos started using best practice benchmarking using PROBE in 1996 to assess the practices and performance in its factories. This identified gaps with best practices including the need to utilise Lean manufacturing approaches. Grundfos commenced implementation of lean manufacturing techniques across their factories from 1997 and continued regular PROBE benchmarking supported by Codexx. In parallel it used EFQM to drive overall Business Excellence. Grundfos Denmark won the EFQM award in 2006. In the PROBE benchmarking assessment in 2008, their Danish factories scored at a world class practice level. The result has been clear improvement in key performance indicators such as on-time delivery, inventory turns and quality.


Figure 1: Example of practice v performance scatter for PROBE manufacturing benchmarking



Strengths:

  • Uses a structured approach and provides quantified comparisons with other organisations on an anonymous basis.
  • The large database of businesses benchmarked against the defined practices and performance provides an authoritative comparison.
  • It can be cost-effective – the licence or consulting fee will typically be less than the total cost of a ‘do it yourself’ approach.
  • The exercise can be performed without visiting other companies, so it does not require much time.


Weaknesses:

  • The underlying model on which the benchmark is based is key to its relevance, and its makeup should be clear to participants. It is all too easy for the model to be complex and opaque, leaving the validity and relevance of the final scoring unclear.
  • Performance benchmarking alone does not provide easily actionable improvements. An effective quantitative benchmarking tool will cover both practice and performance.
  • The scope of the benchmark and the practice and performance areas covered are pre-defined by the benchmarking model, leaving little or no room for flexibility.
  • Valid comparison rests on the assumption that all participating organisations define the compared metrics in the same way. It is up to the benchmarking assessors to ensure this is the case; where participating businesses self-assess, it cannot be guaranteed. The danger is that if the metric is ‘apples’, some organisations will actually be measuring ‘pears’ – and this is easily done. For example, one global engineering company found that each of its international subsidiaries measured ‘on-time delivery’ differently, so a benchmarking question asking for ‘on-time delivery performance’ could return widely varying percentages from across the company even if actual performance were identical.
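The ‘apples and pears’ problem with metric definitions can be illustrated with a small sketch. The order data and the two definitions below are invented for the example – no real benchmarking tool or company is implied:

```python
from datetime import date

# Each order: (date promised to the customer, date actually shipped,
#              date the customer originally requested). Invented data.
orders = [
    (date(2016, 3, 1),  date(2016, 3, 1),  date(2016, 2, 25)),
    (date(2016, 3, 5),  date(2016, 3, 5),  date(2016, 3, 5)),
    (date(2016, 3, 10), date(2016, 3, 12), date(2016, 3, 10)),
    (date(2016, 3, 15), date(2016, 3, 15), date(2016, 3, 1)),
]

def otd_vs_promise(orders):
    """'On time' = shipped on or before the date the company promised."""
    hits = sum(1 for promised, shipped, _ in orders if shipped <= promised)
    return 100.0 * hits / len(orders)

def otd_vs_request(orders):
    """'On time' = shipped on or before the date the customer first asked for."""
    hits = sum(1 for _, shipped, requested in orders if shipped <= requested)
    return 100.0 * hits / len(orders)

print(otd_vs_promise(orders))  # 75.0 - three of four met the promised date
print(otd_vs_request(orders))  # 25.0 - only one met the customer's request
```

The same four orders yield 75% or 25% ‘on-time delivery’ depending purely on the definition used – exactly the divergence the engineering company above discovered across its subsidiaries.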

2. Quantitative practice assessments

This benchmarking approach uses a defined framework of key practices with levels of maturity, i.e. from weak to strong practice. It is used when there is no large database of companies assessed against these practices, so quantitative benchmarking is not feasible – typically because the practice area is too narrow in nature or scope to have merited academic or consulting study and the development of a benchmarking programme. The approach suits niche assessments of specialist areas where a structured comparison against best practices is required. The lack of comparative data means that the process must involve a number of like-minded organisations, all seeking to learn how to improve performance in the area involved.

Codexx used this approach to help a client benchmark production maintenance practices as part of a re-engineering programme for maintenance across eight factories. The objectives of the benchmarking were:

1. To help ‘open the eyes’ of the maintenance managers to better practices in other maintenance organisations – and thus encourage change.
2. To provide a quantifiable comparison of practices and some key performance metrics with organisations with measurably better maintenance operations to provide a set of improvement targets.
3. To provide examples of improved practices which could be transferable to their business.
4. To learn about each company’s ‘journey’ to their current maintenance practices and benefit from their learning.

We initially looked to use a commercially available benchmarking tool for this work, but found none broad enough to meet the client’s needs (most were focused on specific aspects of maintenance, such as cost). We therefore developed an assessment framework built around a skeleton provided by the eight quality management principles of ISO 9001. Onto this skeleton we added maintenance-specific practices and performance measures, developing the assessment model collaboratively with the client’s re-engineering team of maintenance managers – with whom we were already working. The approach was used to benchmark five other European companies, using one-day visits – attended by the majority of the maintenance managers – supported by prior data sharing and preparation. Each participating firm was provided with an assessment report and overall scoring: it was important that each participating company gained value from the exercise. The assessment provided much insight to the maintenance managers and was used to develop improvement projects as part of the re-engineering programme. The scoring also provided a measurable set of practice and performance targets that were used over a three-year period to measure re-engineering progress, with self-assessments against the framework every 6-12 months (more information).


Figure 2: Extract from F4i showing questions on innovation practice maturity.


Another example of this approach is the Foundations for Innovation (F4i) assessment tool, developed by Codexx in 2006. This was born out of our recognition that there was no cohesive tool for assessing an organisation’s innovation ‘health’ in an objective way. F4i was developed with the support of Imperial College Business School, to bring academic rigour to the assessment tool. Based on an academically validated model of innovation, it uses 60 key innovation practices and performance metrics to enable a quantitative assessment of innovation in a business (see Figures 2 & 3). We have used F4i in innovation assessments of industrial and service organisations across Europe, the USA and China. F4i does not have a large database of companies – we focus primarily on how participating organisations score against the defined practices and, more importantly, on the underlying reasons for weak practices. We took a similar approach in developing an innovation assessment for the legal sector, building on our experience of working with major UK law firms on innovation since 2005. We used this to perform a study of innovation in 35 UK and German law firms, together with the business schools of Exeter and Leipzig universities (more information).


Strengths:

  • The assessment framework can be tailored precisely to the specific needs of the organisation (or group of companies) involved.
  • The assessment approach can likewise be tailored as required.
  • Learning is gained from the visits to other organisations.


Weaknesses:

  • It is typically a more expensive approach than Type 1, as all the development costs usually have to be borne by the lead partner(s).
  • There needs to be mutual value in the assessment for each participant; the lead organisation needs to recruit the involvement of the others and typically covers the costs of the framework, travel to the other companies, and the scoring and reporting.
  • The limited number of participants results in a limited data set, which restricts the ability to identify and validate relationships between practice and performance. Findings are thus indicative rather than definitive.
  • A custom assessment framework needs to be developed for the exercise, by one of the parties involved or by a supporting consultant.


Figure 3: Example of ‘traffic light’ scoring against overall F4i innovation model.
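The ‘traffic light’ presentation in Figure 3 can be sketched as a simple banding of practice-area scores. The thresholds, scale and area names below are illustrative assumptions, not F4i’s actual bands:

```python
def traffic_light(score, max_score=5.0):
    """Band a practice-area score into red / amber / green.
    Thresholds are illustrative assumptions, not F4i's actual bands."""
    fraction = score / max_score
    if fraction < 0.4:
        return "red"      # weak practice - priority for improvement
    if fraction < 0.7:
        return "amber"    # partial practice - monitor and develop
    return "green"        # strong practice

# Invented practice-area scores on a 0-5 scale
areas = {"strategy": 4.2, "culture": 2.9, "process": 1.5}
print({a: traffic_light(s) for a, s in areas.items()})
# {'strategy': 'green', 'culture': 'amber', 'process': 'red'}
```

The value of this presentation is that it turns a page of numbers into an at-a-glance improvement agenda: red areas get attention first.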


3. Qualitative practice assessments

This final type of ‘benchmarking’ is comparatively unstructured and primarily produces a qualitative assessment. One could argue that this is what normally happens in ad hoc and unstructured visits to other companies. Such informal benchmarking is sometimes referred to as ‘industrial tourism’, where the lack of any objective or structure results in limited value and few actionable outcomes. We are not advocating this. A structured qualitative practice assessment does have objectives and an assessment framework, but the framework is deliberately loose.

Codexx has facilitated a number of such assessments in a practice sharing programme on manufacturing technology, between a major Scandinavian manufacturer and a major German manufacturer. There was mutual interest in a specific area of production technology development; the companies did not compete directly in their customer markets but were mutual users of each other’s products. Our approach was to develop an overall practice framework (again based around the eight ISO quality management principles) and to develop specific practices around it in collaboration with the two companies. An assessment visit programme was arranged and the framework was used to scope and focus discussions. Key findings were documented for each company.


Strengths:

  • Suitable for use in highly specific business areas where an existing benchmarking tool is unlikely to be available.
  • Learning is gained from the visits to other organisations.
  • The flexible framework enables practice sharing visits and discussions to accommodate the evaluation of thinking, methods and experience that emerge as valuable to participants – material that would not easily be accommodated in a tight, prescriptive assessment format. In short, ‘best practices’ in this area are not well defined or understood, so the assessment approach needs to accommodate emergent practices that have proved valuable to at least one of the participating organisations.


Weaknesses:

  • Requires each participant to have areas of practice that the other partners perceive as progressive, so that every participant can learn from the assessment – not just teach. This means pre-work is typically required to engage participants and establish the potential for a mutually beneficial practice sharing arrangement.
  • There is a danger that the assessment becomes so flexible and loose that objective comparisons – and thus effective learning – become difficult.
  • Again, one party needs to take the lead in such a practice sharing programme.

Final words

Benchmarking is a proven way to compare an organisation’s effectiveness with others and do this in a way that provides objective findings that can be used to catalyse business improvement. However, sometimes traditional benchmarking can be expensive, too ‘one size fits all’ and its scoring opaque to participants.

Less formal methods, based around ‘practice sharing’ can be effective at achieving the required objectives of catalysing improvement – without the ‘heaviness’ of typical benchmarking – and enabling a more personalised fit to the requirements of the participants.

Overall, it is key that, whatever benchmarking approach is used, participants do not lose sight of the ultimate aims of the activity: to catalyse and focus improvement activities.

Product development – ten best practices

Friday, October 17th, 2014

Product development is a critical function for product-based businesses. The ability to develop new products with features and functionality that are valued by users, to bring them to market quickly, cost effectively and at acceptable quality and then to support and enhance them during their lifetime has always been a complex undertaking.

Today’s additional requirements to serve a global market, to exploit new technologies – including the opportunities provided by the internet – to meet ever higher customer expectations and to compete against new low cost, but increasingly high value, rivals from China and other emerging economies simply adds to the challenge.

How do companies ensure that their product development practices are good enough?

Over the past decade Codexx has worked with a number of clients on assessing and improving product development practices; most recently we performed a global R&D benchmarking assessment covering Europe, the US and China. In 2010-12 Codexx carried out two major research studies on what we call ‘the innovation journey’ for technology-based product businesses – from new ideas to value (e.g. products) in the marketplace. This covered 43 technology businesses in the UK and Denmark.*


Figure 1: An end-to-end product innovation process

From our experience, we have put together a ‘top ten’ list of practices that we consider have a key impact on the success of product development. In no particular order:

  •  Become a user. Get the ‘voice of the user’ into R&D and keep it there. This can be done in a number of ways: by ensuring that development engineers and product managers spend time with relevant users, or by nominating a ‘user advocate’ in the development team, or by regularly and systematically bringing field feedback into R&D. A measure of the importance of this practice is that our innovation journey study found that end-user involvement in new product development was the practice with the fourth highest correlation with innovation performance.
  • Effectively explore new ideas. This means using a rich approach to exploration that involves personnel from development, marketing and production as well as potential users and partners, to determine the potential user and business opportunities and also the risks of new concepts and to identify potential improvements. The use of rough-prototyping is very effective here to help evaluate the concept. This level of exploration ‘shakes up’ the concept to determine whether it has sufficient merits to consider taking it forward. In our innovation journey study of 2012-13, the effective use of rough prototyping for idea exploration was the practice with the second highest correlation with overall innovation performance.
  • Enhance existing products. It can be easy for R&D to over-focus on new products, at the expense of enhancing existing products – from an engineering perspective it can be seen as less challenging and less exciting. But enhancing an existing product instead of investing in a new one has many advantages – less technical risk, less investment, easier for sales personnel to sell (minimal training required). Enhancements can be through updated technology elements in an existing platform, through accessories (which can be provided by 3rd parties) or through software and services. This does not mean we are ‘anti-new products’, but simply that the development investment portfolio should give sufficient weight to existing product enhancement as well as new products.
  • Establish an ‘end-to-end’ product innovation process. Whilst most product businesses have a formal New Product Development (NPD) process, this covers only part of the journey from new ideas to products in the market. The NPD process needs to be complemented by a front-end process for idea generation and exploration (covering the key ‘fuzzy front end’ of product development), a back-end support and enhancement process, and a parallel process for capturing learning from the field (as shown in Figure 1). Front-end exploration projects will often ‘fail’ in the sense that the results are disappointing – an acceptable outcome, as not all new ideas should (or can) proceed. Engineers should be judged on whether they carry out good, disciplined investigations at this stage, not on whether the ideas work out; some element of waste is inevitable at the front end. Without such an end-to-end approach, the NPD process will waste resources on weak concepts and deliver sub-optimal products to the market. Product innovation must be guided by a clear stage-gate process with an objective go/no-go at every gate – especially, of course, in the fuzzy front end. Few companies are good at biting the bullet and stopping unpromising projects early. These processes also have a tendency to become more onerous with time, as extra checks are inserted every time a problem occurs, which can become a bureaucratic nightmare. Beware of making the process any more formal than is absolutely necessary.
  • Reduce development time and team size. A major proportion of development cost is engineering hours – a function of team size and project duration. So short development times and small teams have many benefits. Small teams are easier to coordinate, require less project management time and are thus able to make faster decisions. Shorter development cycles are less likely to require specification ‘resets’ due to competitive product announcements, new technologies, regulatory changes or delays from resource shortages. Frankly, long development cycles and large teams generate waste in lost time, rework and non-value-adding coordination. Philosophies such as ‘Lean Startup’ (see the book of the same name by Eric Ries) support fast development, as this gets products to market sooner, so that user-based learning can start. The mantra ‘keep it short and keep it small’ should be the guide for product development.


Figure 2: A system for innovation within an organisation

  • Employ platform thinking. This approach supports fast development time by introducing modular thinking into product design, to create a platform-based architecture. Thus new products become a mix of existing, evolved and new modules – for example a new mobile phone may have the same casing and screen, but updated processor, camera and new version of the operating system. This approach reduces development cost and risk and enables innovation to be focused on the new platform modules (rather than unnecessary re-invention). Platform thinking is well established in sectors such as automotive with cars such as the Volkswagen Golf and Audi A3 sharing much of the floorpan and chassis. Our innovation journey study showed that the separation of technology and product development (which is a key aspect of platform thinking) and the re-use of design or technology elements in new products were both top ten practices for correlation with overall innovation performance.
  • Ensure proven development practices are deployed. There are proven practices for the design of new products: documented user requirements – to provide a firm base for specification; QFD (Quality Function Deployment) – to build a functional specification aligned with user requirements; FMEA (Failure Mode and Effects Analysis) – to identify potential design weaknesses; design peer reviews – to provide an independent ‘fresh view’ on a design before it is frozen; and post-project reviews – to capture lessons learned and required improvements to working methods. These good practices are well known, so it can be surprising how poorly embedded they are in many product development teams. We have been in firms where these practices have simply ‘faded away’ or have only ‘lip service’ paid to them. It is the responsibility of development management to ensure that these proven practices are in place and used effectively.
  • Establish an innovative culture. We have worked in product development teams where employees have said ‘We don’t have time for innovation, we’re too busy developing products’. Of course, incremental product development is indeed one type of innovation. But the point is well made – employees need time, management support and encouragement to identify new ideas and concepts. This requires a supportive culture for innovation – a key element of the ‘climate’ for innovation within a firm, and one of the seven key practice areas required for an effective innovation system (see Figure 2). A key aspect of an innovative culture is a clear and overt tolerance for failure when new ideas don’t work out. Of course, this is not the same as a tolerance for bad work…
  • Don’t overload! Management ambition can lead to committed projects and a development roadmap that simply exceed development capacity. The result is a climate of project slippage and postponement, which further wastes development resources in re-planning and stop-start activities. Development resources should not be loaded beyond about 80% of capacity, both to avoid short-term overloads of some resources and to allow personnel time for new concept exploration, platform development and other improvements. Overloading inevitably leads to queueing, and further inefficiency arises when engineers have to swap from project to project.
  • Have an effective Go/No Go to market. Stage-gate management in product development is a proven and effective way of managing cost and risk exposure. Just as cost and risk exposure increase significantly once a new product concept is accepted for development, they increase again once a product is released to be taken to market, as large investments in manufacturing and sales are required. This is why the Go/No Go to market decision needs to involve all the key functions affected (such as marketing, sales, manufacturing and development) – it is not a decision for development management alone. Indeed, marketing involvement in sign-off of product release to market was the practice with the highest correlation with high innovation performance in our innovation journey study.
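The ‘don’t load beyond 80%’ guidance above can be illustrated with a simple queueing argument. The sketch below uses the standard M/M/1 queueing result (our assumption for illustration – the article does not prescribe a particular model): the average time work spends waiting, relative to the time it takes to do, is ρ/(1−ρ) at utilisation ρ, so delays grow non-linearly as loading approaches 100%.

```python
def relative_queue_delay(rho: float) -> float:
    """Mean waiting time as a multiple of service time for an M/M/1 queue.

    rho is resource utilisation (0 <= rho < 1). At rho = 0.8 work waits
    about 4x as long as it takes to do; at rho = 0.95, about 19x.
    """
    if not 0.0 <= rho < 1.0:
        raise ValueError("utilisation must be in [0, 1)")
    return rho / (1.0 - rho)

# Compare waiting penalties at increasing loads
for load in (0.5, 0.8, 0.9, 0.95):
    print(f"utilisation {load:.0%}: waiting ~ {relative_queue_delay(load):.1f}x service time")
```

Even as a rough simplification, this shows why the last 20% of nominal capacity is so expensive: moving from 80% to 95% loading roughly quintuples queueing delay, before counting the extra cost of re-planning and project-to-project switching.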

Thanks to Rick Mitchell, Visiting Professor of Innovation Management at the University of Cambridge for his contribution to this article.

*‘The Innovation Journey for technology-rich product businesses – Phase 2 – Final study report’, October 2013, Codexx Associates Ltd, The University of Exeter Business School, The University of Aalborg Business School. More information.

For further information on our innovation and product development solutions, contact us at www.codexx.com.

Codexx innovation solution selected for new benchmarking product

Monday, April 22nd, 2013


Codexx Associates Ltd and PROBE Network LLP agree to develop new innovation benchmarking product

Codexx Associates Ltd and Probe Network LLP today announced their agreement for the development of a new best practice benchmarking solution for business innovation. PROBE for Innovation Excellence will be based on the ‘Foundations for Innovation’ (F4i) assessment solution developed by Codexx and used in a number of business sectors including manufacturing, insurance and professional services.

The development of PROBE for Innovation Excellence is being supported by IEL Santa Catarina, Brazil (Euvaldo Lodi Institute of Santa Catarina). The first version will be available during the summer of 2013. Codexx and PROBE have had a long-standing relationship since 2002, as Codexx has been a user of PROBE’s benchmarking tools in its consulting work.

Announcing the agreement, Alastair Ross, Director of Codexx, said: “I am delighted to be working with PROBE on this exciting new product. PROBE’s expertise and global market reach in benchmarking will maximise the value of our investment in F4i.” Jeff Taylor, Senior Partner of PROBE Network LLP, said: “Innovation is a key driver of business success, so we are delighted to be working with Codexx on the development of a new PROBE tool that will spur innovation and act as a catalyst for overall business excellence.”

For further information contact Codexx at www.codexx.com.

Energizing Change

Copyright © Codexx
All rights reserved