Modern Economy
Vol. 7, No. 14 (2016), Article ID: 72795, 19 pages

Performance Evaluation of Technology Park Implementation Phase through Multicriteria Methodology for Constructivist Decision Aid (MCDA-C)

Maria Zenilda da Silva, Adenir Steimback, Ademar Dutra, Graciella Martignago, Vinicius Dezem

Universidade do Sul de Santa Catarina (UNISUL), Florianopolis, Brazil

Copyright © 2016 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

Received: August 8, 2016; Accepted: December 13, 2016; Published: December 16, 2016


This research aims to build a model for the performance evaluation of the implementation phase of a technology park through the Multicriteria Methodology for Constructivist Decision Aid (MCDA-C). The study is based on the constructivist paradigm, takes the form of a case study, is exploratory in nature, and follows a qualitative and quantitative approach. Data were collected through interviews, direct observation and document review. As results, the model made it possible to: i) identify four Areas of Concern, sixteen Fundamental Points of View (FPVs) and seventy-one descriptors (performance indicators) that make up the evaluation model; ii) demonstrate the performance profile of the current situation (status quo) of the Technology Park implementation process, which resulted in 62 points; and iii) explain a structured process for identifying strengths and opportunities for performance improvement. Thus, the research generated knowledge regarding the Technology Park implementation process, and the evaluation model is a tool to support management in decision-making activities.


Keywords: Performance Evaluation, Technology Park, Performance Indicators, MCDA-C Methodology

1. Introduction

Technology parks are innovative enterprises that have attracted the attention of governments in many countries, including Brazil, owing to the possibility of using them as platforms for the development and implementation of projects in the business, scientific and technological areas. Technology parks have emerged as effective mechanisms to foster interaction among various organizational actors and to promote innovation in the productive sector [1].

Technology parks have the role of: i) encouraging and managing the flow of knowledge and technology between universities and enterprises; ii) providing environments that enhance a culture of innovation, creativity and quality; and iii) facilitating the creation and growth of innovative enterprises by means of incubation and mechanisms for creating spin-offs [2].

The expansion of technology parks at the international level, first in the United States and Europe and later in Asia and Latin America, produced a variety of adjustments and experiments that modified and expanded their concept. This heterogeneity of enterprise models is reflected in the various terminologies used in the English language that came to be adopted in Brazil [3].

The development stages of these projects are complex and involve institutions of very distinct natures and large-scale investments. In addition, the local context in which the project is inserted is also crucial, because there are political and social dimensions that are typical of certain cities and can hardly be transposed to other contexts. Even so, it is possible to define common steps through which all technology park initiatives must pass to increase their chances of success [4].

ANPROTEC divides the development of a technology park into three phases: i) design and planning; ii) implementation; and iii) operation [5].

According to Soly et al. (2012) [6] and Spolidoro (1997) [7], the planning phase defines the area, the physical and service structures, and the organizational and legal model of the institution managing the project, and elaborates studies of social, economic and environmental impact. The implementation phase begins the construction of the physical infrastructure, the promotion and dissemination of the project to attract investors and companies, and the installation of the first organizations. The operation phase covers the occupation of the park by companies, project management and the services provided by the park to resident companies.

In Brazil, technology parks began to be created in 1984 in order to promote innovative entrepreneurship in their regions and to support the creation and growth of technology-based organizations and social enterprises. This process created an opportunity to appropriate the scientific and technological knowledge generated in research and development (R&D) institutions and to insert innovative products, services and processes in the market [8].

Evaluations of technology parks are needed for two main reasons: i) to assist in decision-making processes, both public and private, regarding the support to be directed to technology parks in order to induce and ensure the sustainability of these experiences; and ii) to promote the improvement of policies targeting this segment [3].

Initiatives to develop technology parks in the Brazilian context have been discussed in various forums and have received, to a greater or lesser degree, support, including financial support, from several public and private institutions. Given the large disbursement of financial resources required for their implementation, it is both necessary and appropriate to evaluate their efficacy [3].

Given this context, this study intends to answer the following research question: what are the criteria to be included in a performance evaluation model for the implementation stage of a Technology Park? To answer this question, the main objective of this research is to develop a model for the performance evaluation of the Sapiens Park implementation phase, based on the Multicriteria Methodology for Constructivist Decision Aid (MCDA-C).

The specific objectives are: i) to structure a set of indicators for evaluating the implementation phase of Sapiens Park, aligned with the perceptions and values of the decision maker; ii) to transform the identified indicators into instruments that provide global and local performance measurement, through the construction of cardinal scales and substitution rates; and iii) to test the proposed model in order to verify its adherence and applicability to the implementation phase of Sapiens Park.

Measuring the performance of technology parks is paramount and requires rigorous approaches. However, the bibliometric study carried out for the purposes of this research did not identify a concern of the scientific community with developing, through scientific methods, a structured process for evaluating the performance of technology park implementation [9].

In terms of relevance and contribution, the present research is justified by the following aspects: i) the theoretical contribution on the subject through the literature presented, expanding the studies on the evaluation of technology park performance; ii) the improvement of performance evaluation methodologies, by testing the consistency and adherence of the MCDA-C Methodology in the implementation stage of a technology park; iii) making available to Sapiens Park managers a scientifically grounded management support tool; and iv) assisting and facilitating the decision-making process in the implementation phase of the technology park.

By the end of this study, a model will have been created that allows decision makers to be familiar with: the critical factors for the success of the enterprise; the current performance level in each of these factors, including which factors present compromising characteristics and which provide a competitive edge; and how to use the available process to improve strategic actions.

In addition to this introductory section, this study has four more sections: 1) theoretical background; 2) research methodology; 3) presentation and analysis of the results; and 4) final considerations.

2. Theoretical Background

Performance evaluation is a management support tool that helps managers make informed, transparent decisions appropriate to each context. There is no effective management without a process for measuring organizational performance. Measurements are the starting point for the improvement of the organization itself, because they allow managers to compare planned work with executed work [10] [11].

The scientific literature shows a variety of concepts on the theme of performance evaluation. Because it is a multidisciplinary subject, there is no consensus on the most appropriate concept to represent this organizational construct. Table 1 presents a summary of the understanding of various authors on the subject.

Among the concepts presented in Table 1, this research adopts the theoretical affiliation proposed by Ensslin et al. (2010), which is related to the other concepts presented. Most performance evaluation concepts are sustained in the context of building and disseminating knowledge, through a management process linked to organizational strategies that considers the peculiarities, the external and internal environment, the context and the future expectations of the organization, serving as support for managers' decisions [12].

Table 1. Summary of performance evaluation concepts.

Source: Developed by the authors, 2016.

Performance evaluation, as a management process, is centered on the indicators and/or evaluation criteria that identify the performance level of the investigated organization and/or context. Specifically regarding technology parks, research conducted in databases covering the period from 2000 to 2013 identified several performance criteria and/or variables, which are summarized in Table 2 [13] [14].

Table 2 shows that the studies have sought to evaluate technology parks through different performance criteria. When focused on companies, the studies are based on factors such as: i) economic performance; ii) innovation performance (investments in R&D and production of patents); iii) job creation; iv) access to public funding; v) longevity of the business; and vi) knowledge management in technology park companies. When focused on the technology park as a whole, the studies have evaluated factors such as: i) the capacity to attract innovation actors to the region of the TP; ii) the global impact on employability in the region of the TP; iii) the creation of new enterprises; iv) the model of management and governance; v) the intensity of the relationship between university and company; and vi) knowledge management practices.

In light of the studies presented, it is observed that the criteria used to evaluate the development of technology parks are concentrated, for the most part, on the operation stage. It can thus be stated that there is a gap in the literature regarding performance evaluation in the implementation phase of technology parks. This gap is what motivates this research, which contributes to expanding performance evaluation research on technology parks and to broadening knowledge about these enterprises.

3. Research Methodology

The Research Methodology applicable to this project is composed of two parts. The first part is dedicated to the methodological framework of the research; and the second part presents the intervention instrument selected for the construction of the evaluation model of the Technology Park implementation phase, namely the Multicriteria Methodology for Constructivist Decision Aid (MCDA-C).

3.1. Methodological Framework of Research

The methodological framework comprises the following choices [35] :

1) Concerning the nature of the research, it is classified as applied, in the form of a case study, with a view to solving a real problem: building a model to evaluate the Technology Park implementation stage.

2) As to the nature of the objective, it is exploratory, since it promotes reflection and the generation of knowledge in the decision maker/manager of Sapiens Park. The aim is to deepen knowledge on the subject and thereby structure a set of criteria/performance indicators that enable performance to be evaluated from the perceptions and values of the decision maker.

Table 2. Summary of performance indicators presented in the literature.

Source: Developed by the authors, 2016.

3) Regarding the approach to the problem, it is qualitative-quantitative. The qualitative approach occurred when the concerns and values of the decision maker were considered, during the structuring phase of the evaluation model, specifically in the construction of the Primary Evaluation Elements (PEEs) and the cognitive maps. The quantitative phase occurred later, through the construction of the ordinal scales of the descriptors, the transformation of the ordinal scales into cardinal ones, the assignment of substitution rates, and the additive aggregation of the implementation phase performance of the evaluated park.

4) Regarding data collection, it involved primary and secondary data. Primary data were collected through semi-structured interviews, which sought to identify the concerns and preferences of the manager/decision maker for the structuring of the evaluation model. Secondary data consisted of the analysis of documents and standards used by the organization under study, related to the research topic.

3.2. Intervention Tool―MCDA-C Methodology

The consolidation of the MCDA-C Methodology as a scientific management tool dates from the 1980s. The scientific basis of the MCDA-C methodology arises from the works of Roy (1993) and Landry (1995), which define the limits of objectivity in decision support processes; from the works of Skinner (1986) and Keeney (1992), which recognize that attributes (objectives/criteria) are specific to each context and stem from the perceptions of the manager/decision maker; and from the work of Bana e Costa (1993), which explains the convictions of MCDA [12] [36] [37] [38] [39] [40].

The purpose of the MCDA-C methodology is achieved through three main phases, as shown in Figure 1: i) the Structuring Phase; ii) the Evaluation Phase; and iii) the Elaboration of Recommendations Phase.

As can be seen in Figure 1, the MCDA-C methodology, owing to its constructivist view, allows recursion in all phases and stages. Below is a summary of each phase, based on the following authors: Ensslin et al., 2016; Dutra et al., 2014; Lacerda et al., 2014; Ensslin et al., 2014; Della Bruna Junior, Ensslin, Ensslin, 2014; Marafon et al., 2013; Ensslin et al., 2012; Azevedo et al., 2013; Rosa et al., 2012; Ensslin et al., 2001; Ensslin, Dutra, Ensslin, 2000 [41]-[51].

Source: [41]

Figure 1. MCDA-C Methodology and its stages.

3.2.1. Structuring Phase

The central aim of the Structuring Phase is to organize, develop and expand the decision maker's knowledge of his/her decision-making context. To this end, several steps are taken: identification of the Primary Evaluation Elements (PEEs, hereinafter also referred to as concerns); orientation of the PEEs toward action; grouping into areas of interest; building the value tree; construction of descriptors (ordinal measurement scales) for the selected objectives; and setting of reference levels. The product of this phase is a hierarchical value structure that presents the aspects according to which the investigated context will be evaluated, as well as what will be considered in evaluating each aspect of the model.

3.2.2. Evaluation Phase

The Evaluation Phase aims to translate the ordinal qualitative model built in the Structuring Phase into a mathematical model, in which one can quantify the performance of the context locally (in each aspect of the model) or globally (overall evaluation of the context performance). To this end, the following steps are taken: construction of the value functions; construction of the compensation rates; identification of the status quo performance profile; and calculation of the performance evaluation of the context under analysis. The product of this phase is a multicriteria mathematical model that allows the calculation of the performance of the overall context or of its constituent parts.
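In generic form, the multicriteria additive model produced at the end of this phase can be written as follows (this is the standard formulation in the MCDA-C literature; the symbols are illustrative and not taken from the paper):

```latex
V(a) = \sum_{i=1}^{n} w_i \, v_i(a), \qquad \sum_{i=1}^{n} w_i = 1, \quad w_i > 0 ,
```

where $V(a)$ is the global performance of a potential action $a$, $v_i(a)$ is its value-function score on criterion $i$, and $w_i$ is the compensation (substitution) rate of criterion $i$.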

3.2.3. Recommendation Phase

The Recommendation Phase aims to offer information and actions that the decision maker may use to improve the performance of the analyzed context, focusing on performance improvement in the indicators included in the model. The knowledge generated here allows the decision maker to visualize, graphically and numerically, for every aspect (performance indicator), whether the performance is "excellent", "competitive" or "compromising".

It should be explained that the decision aid activity, which in this research is focused on management, is the central differential of the MCDA-C Methodology compared with other multicriteria methodologies. In this context, it is understood that: i) the decision maker is the central element, without whom the activity, and the MCDA methodology, lose their reason for being; ii) the central goal is to enable the actors involved in the decision-making process to generate learning, fostered by the degree of understanding developed during the process and informed by both the value system and the goals of the decision maker; and iii) the central focus of the MCDA-C Methodology is to develop a set of conditions and means ("keys") as a basis for decisions, according to what the decision maker believes to be most appropriate within a given context [36].

It is noteworthy that the model built and presented from the next section onward, based on the Multicriteria Methodology for Constructivist Decision Aid (MCDA-C), focuses on the Structuring and Evaluation Phases.

4. Presentation and Analysis of the Results

This section presents the results of the case study carried out at Sapiens Park, through the construction of a model for evaluating the project implementation phase in the light of the Multicriteria Methodology for Constructivist Decision Aid (MCDA-C), following the steps presented in Section 3.2, specifically the Structuring and Evaluation Phases.

4.1. Structuring Phase

Sapiens Park is an innovation park that seeks to apply scientific and empirical knowledge to generate something new for society. The project incorporates concepts and guidelines present in the boldest and most innovative projects in this area worldwide, such as the experience economy, the knowledge society, sustainable development, the digital convergence of sciences and technologies, economic globalization and the adoption of a continuous cycle of innovation. The project is being implemented in the metropolitan region of Florianópolis, the capital of the state of Santa Catarina, in a total area of 431.50 hectares [52] [53].

The Structuring Phase of the MCDA-C Methodology provides for the identification of the actors involved in the decision-making context, the definition of the problem label, the identification of the Primary Evaluation Elements (PEEs), the construction of the Fundamental Points of View Family (FPVF) and the preparation of descriptors.

Decisions usually result from various interactions between individuals and groups of influence, who are identified as actors. In the present study, the decision makers were the Executive Director of Sapiens Park and the author of this research; the author and coauthors of this study were the facilitators; and the professionals who work at Sapiens Park, the government, companies, universities and society in general were the acted [51].

The problem label summarizes the investigated context and must contain the focus of the work. It represents a fundamental step in the construction of the model, within which the problem is outlined and focused on its main concerns. The label was defined as "Evaluating the performance of the implementation phase of Sapiens Park" [51].

The PEEs are made up of the objectives, goals, values, actions, options and alternatives of the decision makers. All primary evaluation elements that come to mind must be expressed, so as to give greater quality to the structuring of the multicriteria evaluation model [40].

After several interactions with the decision makers, one hundred and fourteen PEEs were identified, the first ten of which are listed in Table 3, together with their concepts (present pole and opposite pole).

In the construction of the FPVF (Fundamental Points of View Family), all PEEs are used, through a process of grouping concepts of the same nature in a top-down movement. The concepts are grouped into common areas and/or subareas representing equivalent strategic concerns, which can be called "areas of concern". The grouping procedure identified four areas of concern, as shown in Figure 2 [51].

Table 3. Primary elements of evaluation―PEEs.

Source: Developed by the authors, 2016.

Source: Developed by the authors, 2016.

Figure 2. FPVFs containing the areas of concern, subareas and related concepts.

From the FPVFs, the construction of descriptors begins, identifying what is most appropriate to measure and, thus, building the ordinal scales that can measure the performance of the established strategic objectives. A descriptor defines a set of impact levels that serve as the basis for describing the possible performances of potential actions [40].

Seventy-one descriptors were identified and structured for evaluating the performance of the implementation phase of Sapiens Park. The number of descriptors per Area of Concern is distributed as follows: Structure, 34; Assets, 15; Clusters, 12; and Actors, 10. Table 4 shows the structuring of descriptor 1.1 Road System, linked to the Infrastructure subarea of the Structure area of the evaluation model.

It is observed in Table 4 that the descriptor was structured into five impact levels, where Level 2 was defined as "NEUTRAL" and Level 4 as "GOOD". The NEUTRAL level corresponds to the minimum acceptable performance, and the GOOD level corresponds to a performance that meets the expectations of the decision makers. The definition of these anchor levels enables the additive aggregation of the performance of each descriptor and/or area of concern [51].

To complete the construction of the descriptors, an understanding of the implementation phase context of Sapiens Park was developed by means of the seventy-one descriptors, organized as ordinal scales. For better understanding, Figure 3 shows the Hierarchical Value Structure and the construction of the first five descriptors linked to the Infrastructure subarea.

4.2. Evaluation Phase

The Evaluation Phase is intended to build a preference model, transforming the ordinal scales into cardinal ones. This transformation requires the participation of the decision makers to define the attractiveness between the levels of each scale, using the MACBETH software. Owing to its theoretical foundation, representativeness and practical recognition, this method has been the most widely used [12].

Table 4. Descriptor for FPV 1.1 Road System.

Source: Prepared by the authors, 2016.

Source: Developed by the authors, 2014.

Figure 3. Cut-out of the first five descriptors of the Infrastructure subarea.

For the construction of the mathematical model it is necessary, in line with MCDA-C, to establish the value functions, the substitution rates and the overall evaluation formula. The model is then operationalized by evaluating the current performance with its respective scores. The MACBETH software (Measuring Attractiveness by a Categorical Based Evaluation Technique) makes use of semantic judgments to identify the difference in attractiveness between two potential actions.

During this process, the facilitator asks the decision maker to indicate, for a given descriptor, the attractiveness of moving from a level x to another level y, choosing one of the semantic categories on the following scale: null, very weak, weak, moderate, strong, very strong, and extreme. This process is repeated for all pairs of descriptor performance levels, resulting in a judgment matrix built in the M-MACBETH software program.

Once the matrix is complete, the method proposes a numeric scale that satisfies all the semantic judgments of the decision maker, as well as the conditions required of a value function. In the MCDA-C, the value functions are anchored at the reference levels established when the descriptors were constructed, receiving a score of 0 for the lower level (neutral) and 100 for the upper level (good). This procedure allows a clearer visualization of performance at compromising levels (below the lower reference level), at market level (between the two reference levels), and at an excellent level (above the upper reference level).
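As an illustration of this anchoring step, the sketch below linearly rescales a value scale for a five-level descriptor so that NEUTRAL scores 0 and GOOD scores 100. The level names and raw values are invented for the example and do not come from the Sapiens Park model; in practice the raw scale would be produced by M-MACBETH from the judgment matrix.

```python
def anchor_scale(raw, neutral_level, good_level):
    """Linearly rescale a value scale so that the NEUTRAL impact level
    scores 0 and the GOOD impact level scores 100."""
    lo, hi = raw[neutral_level], raw[good_level]
    return {level: 100 * (v - lo) / (hi - lo) for level, v in raw.items()}

# Hypothetical raw scale values for the five impact levels of a descriptor
raw = {"N1": 0, "N2": 20, "N3": 45, "N4": 80, "N5": 100}
anchored = anchor_scale(raw, neutral_level="N2", good_level="N4")
```

With these numbers, N1 receives a negative score (compromising performance), N3 falls between 0 and 100 (market level), and N5 exceeds 100 (excellent performance), which is exactly the reading the anchoring procedure is meant to enable.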

Figure 4 shows the transformation of the ordinal scale into a cardinal one for the descriptor 1.1 Road System.

Source: Developed by the authors, 2016.

Figure 4. Transformation of the ordinal scale of descriptor 1.1.1 Road System.

Once the construction of the value functions for each descriptor is complete, the substitution rates are defined. These express the performance loss that a potential action must suffer in one criterion in order to compensate for a performance gain in another, according to the judgment of the decision makers. The substitution (or compensation) rates express how much each FPV or EPV contributes to the global model.

By means of an additive aggregation formula, which consists of the weighted sum of the scores obtained in each criterion multiplied by the corresponding substitution rate, we obtain the global performance of the model, in this case the implementation phase performance of Sapiens Park. Figure 5 shows the impact profile (status quo) of the Sapiens Park implementation phase.
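As a sketch of this aggregation step, the code below applies the weighted sum at the level of the four areas of concern. The substitution rates and local scores are hypothetical, chosen only so that the contributions reproduce the 29-point (Structure) and 9-point (Assets) figures and the 62-point total reported in the text; the real model aggregates the 71 descriptors hierarchically.

```python
# Hypothetical substitution rates for the four areas of concern (must sum to 1)
rates = {"Structure": 0.40, "Assets": 0.25, "Clusters": 0.20, "Actors": 0.15}
# Hypothetical local scores (0-100 value-function scale) for each area
scores = {"Structure": 72.5, "Assets": 36.0, "Clusters": 60.0, "Actors": 80.0}

def global_performance(scores, rates):
    """Additive aggregation: weighted sum of local scores by substitution rates."""
    assert abs(sum(rates.values()) - 1.0) < 1e-9, "substitution rates must sum to 1"
    return sum(rates[area] * scores[area] for area in rates)

print(round(global_performance(scores, rates), 2))  # 62.0 with these illustrative numbers
```

Note how the weighted contribution of each area (for instance, 0.40 x 72.5 = 29 points for Structure) is what the status quo profile in Figure 5 consolidates.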

It can be observed in Figure 5 that the status quo is shown for each FPV, i.e., for each subarea of concern. The performance of the set of descriptors of each FPV is consolidated into the performance of the FPV itself. The overall performance of the implementation phase of Sapiens Park reached 62 points, with the Assets area of concern showing the lowest performance (9 points) and Structure the best performance (29 points).

Source: Developed by the authors, 2014

Figure 5. Performance profile of the Sapiens Park implementation phase.

From the performance results evidenced by the constructed model, it is concluded that the implementation phase of Sapiens Park is behind schedule, a conclusion ratified by the managers of the venture for the following reasons: i) difficulties in the environmental licensing process, with a view to compliance with the Municipal Master Plan and the formalization of the business condominium; and ii) the lack of funding sources, especially after the 2008 global crisis, which slowed investment both from the managing organization and from the public and private sectors.

The evaluation model built with the MCDA-C methodology supports managers in the analysis of the implemented practices and of the results achieved in the implementation of Sapiens Park, providing subsidies to generate and prioritize the actions that best meet the objectives of the venture.

5. Final Considerations

The main objective of this research was to develop a model for the performance evaluation of the Sapiens Park implementation phase, based on the Multicriteria Methodology for Constructivist Decision Aid (MCDA-C). This objective was met through the construction of an evaluation model containing four areas of concern, sixteen Fundamental Points of View and seventy-one descriptors.

The operationalization of the model demonstrated that the implementation phase performance of Sapiens Park reached 62 points, with the Assets area of concern showing the lowest performance (9 points) and Structure the best performance (29 points).

Among the results, it is worth mentioning: i) the adherence and robustness of the MCDA-C Methodology in the construction of an evaluation model that incorporates the values and perceptions of the decision makers; ii) the implementation phase performance profile of Sapiens Park, which highlights opportunities for improvement and supports the venture's managers in the decision-making process; and iii) the generation of knowledge about the evaluated context, ensuring consistency and alignment with the theoretical affiliation adopted for this research.

The decision maker's participation in the whole process ensured, on the one hand, that everything developed corresponded to his perceptions and represented his values and preferences; on the other hand, his confidence in the model helped him to use it to ground and add transparency to his management. He thereby felt more comfortable justifying his choices and showing how his process was developed. Thus, the use of the MCDA-C methodology as the research instrument is justified for complex environments involving multiple actors with conflicting and only partially defined objectives.

A limitation of this research is the limited availability of the project managers' time to act as decision makers within the MCDA-C Methodology. As a suggestion for future research, we propose the proposition and monitoring of improvement actions for the Sapiens Park implementation phase, based on the evaluated performance.

Cite this paper

da Silva, M.Z., Steimback, A., Dutra, A., Martignago, G. and Dezem, V. (2016) Performance Evaluation of Technology Park Implementation Phase through Multicriteria Methodology for Constructivist Decision Aid (MCDA-C). Modern Economy, 7, 1687-1705.


  1. 1. Noce, A.F.S., et al. (2002) O processo de implantacao e operacionalizacao de um parque tecnologico: um Estudo de Caso.

  2. 2. IASP (2013) XXIII Seminário Nacional de Parques Tecnologicos e Incubadoras de Empresas e 30a Conferencia da IASP.

  3. 3. Vedovello, C.A., Judice, V.M.M. and Maculan, A.D. (2006) Revisao critica as abordagens a parques tecnologicos: Alternativas interpretativas as experiencias brasileiras recentes. Revista de Administracao e Inovacao, 3, 103-118.

  4. 4. Oliveira, F.H.P. (2008) O desafio de implantar parques tecnologicos. Instituto Inovacao, Belo Horizonte.

  5. 5. ANPROTEC. Associacao Nacional de Entidades Promotoras de Empreendimentos Inovadores. Portfolio de Parques Tecnologico no Brasil. Dez./2008.

  6. 6. Soly, B., et al. (2014) Os incentivos fiscais a inovacao tecnologica. In: Garcia, C., (Ed.), Lei do Bem: Como alavancar a inovacao com a utilizacao dos incentivos fiscais, Pillares, Sao Paulo, 39-78.

  7. 7. Spolidoro, R. (1997) A sociedade do conhecimento e seus impactos no meio urbano. In: Paladino, G.G. and Medeiros, L.A., Eds., Parques Tecnologicos e Meio Urbano: Artigos e debates, ANPROTEC, Brasilia.

  8. 8. Correia, A.M.M. and Gomes, M.L.B.G. (2012) Habitats de inovacao na economia do conhecimento: Identificando acoes de sucesso. Revista de Administracao e Inovacao—RAI, Sao Paulo, 9, 32-54.

  9. 9. Bigliardi, B., Dormio, A., Nosella, A. and Petroni, G. (2006) Assessing Science Parks’ Performances: Directions from Selected Italian Case Studies.Technovation, Elsevier.

  10. Dutra, A. and Ensslin, L. (2008) Ferramentas de avaliação do desempenho organizacional. In: Angeloni, M.T. and Mussi, C.C., Eds., Estratégias: Formulação, implementação e avaliação: O desafio das organizações contemporâneas, Saraiva, São Paulo.

  11. Harrington, H.J. (1993) Aperfeiçoando Processos Empresariais. Makron Books, São Paulo.

  12. Ensslin, L., Giffhorn, E., Ensslin, S.R., Petri, S.M. and Vianna, W.B. (2010) Avaliação do Desempenho de Empresas Terceirizadas com o uso da Metodologia Multicritério em Apoio à Decisão-Construtivista. Pesquisa Operacional, 30, 125-152.

  13. Martins, R., De Oliveira Lacerda, R. and Ensslin, L. (2013) Um estudo bibliométrico sobre avaliação de desempenho em instituições de ensino superior. Revista Eletrônica de Estratégia & Negócios, 6, 238-265.

  14. Ensslin, L., Dutra, A., Ensslin, S.R., Chaves, L.C. and Dezem, V. (2015) Research Process for Selecting a Theoretical Framework and Bibliometric Analysis of a Theme: Illustration for the Management of Customer Service in a Bank. Modern Economy, 6, 782-796.

  15. Colombo, M.G. and Delmastro, M. (2002) How Effective Are Technology Incubators? Evidence from Italy. Research Policy, 31, 1103-1122.

  16. Lofsten, H. and Lindelof, P. (2002) Science Parks and the Growth of New Technology-Based Firms—Academic-Industry Links, Innovation and Markets. Research Policy, 31, 859-876.

  17. Siegel, D., Westhead, P. and Wright, M. (2003) Assessing the Impact of University Science Parks on Research Productivity: Exploratory Firm-Level Evidence from the United Kingdom. International Journal of Industrial Organization, 21, 1357-1369.

  18. Appold, S. (2004) Research Parks and the Location of Industrial Research Laboratories: An Analysis of the Effectiveness of a Policy Intervention. Research Policy, 33, 225-243.

  19. Ferguson, R. and Olofsson, C. (2004) Science Parks and the Development of NTBFs-Location, Survival and Growth. Journal of Technology Transfer, 29, 5-17.

  20. Chan, K.F. and Lau, T. (2005) Assessing Technology Incubator Programs in the Science Park: The Good, the Bad and the Ugly. Technovation, 25, 1215-1228.

  21. Sanz, L. (2006) Strategigram: A Tool to Deepen Our Understanding of Science Park Strategies. BEI, Luxembourg.

  22. Link, A.N. and Scott, J.T. (2006) US University Research Parks. Journal of Productivity Analysis, 25, 43-55.

  23. Hansson, F. (2007) Science Parks as Knowledge Organizations—The “ba” in Action? European Journal of Innovation Management, 10, 348-366.

  24. Van Geenhuizen, M. and Soetanto, D.P. (2008) Science Parks: What They Are and How They Need to Be Evaluated. International Journal Foresight and Innovation Policy, 4, 90-111.

  25. Radosevic, S. and Myrzakhmet, M. (2009) Between Vision and Reality: Promoting Innovation through Technoparks in an Emerging Economy. Technovation, 29, 645-656.

  26. Squicciarini, M.G. (2009) Science Parks, Knowledge Spillovers, and Firms’ Innovative Performance: Evidence from Finland. Discussion Paper No. 32.

  27. Testa, M.G. and Luciano, E.M. (2012) Determinantes do sucesso de um Spin-off em parque tecnológico. R. Adm. FACES Journal, Belo Horizonte, 11, 69-83.

  28. Fukugawa, N. (2010) Assessing the Impact of Science Parks on Knowledge Interaction in the Regional Innovation System. Summer Conference 2010 on Opening up Innovation: Strategy, Organization and Technology, London, 16-18 June 2010.

  29. Zeng, S., Xie, X. and Tam, C. (2010) Evaluating Innovation Capabilities for Science Parks: A System Model. Technological and Economic Development of Economy, 16, 397-413.

  30. Jung Neto, R. and Paula, E.A.W. (2009) Indicadores de avaliação de desempenho para o parque científico e tecnológico da PUC-RS (Tecnopuc), na percepção de seus principais stakeholders. 19th Seminário Nacional de Parques, Florianópolis, 26-30 October 2009.

  31. Ondategui, J. (2009) Los parques científicos y tecnológicos en España: Retos y oportunidades. Dirección General de Investigación, Consejería de Educación, Madrid.

  32. Chen, C.-U. (2006) The Investigation for the Establishment of Science Parks: The Case of Taiwan. Journal of American Academy of Business, 8, 62-66.

  33. APTE, Asociación de Parques Científicos y Tecnológicos de España (2005) Estudio del impacto socioeconómico de los parques científicos y tecnológicos españoles. Ministerio de Educación y Ciencia, Madrid.

  34. Plonski, G.A., Lourenção, T.M. and Gargione, L.A. (2005) Fatores Críticos de Sucesso para Modelagem de Parques Tecnológicos Privados no Brasil. 14th Seminário Latino-Iberoamericano de Gestión Tecnológica, Salvador, 25-28 October 2005, 1-16.

  35. Richardson, R.J. (2010) Pesquisa Social: Métodos e técnicas. 3rd Edition, Atlas, São Paulo.

  36. Roy, B. (1993) Decision Science or Decision-Aid Science? European Journal of Operational Research, 66, 184-203.

  37. Landry, M. (1995) A Note on the Concept of “Problem”. Organization Studies, 16, 315-343.

  38. Skinner, W. (1986) The Productivity Paradox. Management Review, 75, 41-45.

  39. Keeney, R.L. (1992) Value-Focused Thinking: A Path to Creative Decision-Making. Harvard University Press, Cambridge.

  40. Bana e Costa, C.A. (1993) Três Convicções Fundamentais na Prática do Apoio à Decisão. Pesquisa Operacional, 13, 9-29.

  41. Ensslin, L., Dutra, A. and Ensslin, S. (2000) Constructivist Approach to the Management of Human Resources at a Governmental Agency. International Transactions in Operational Research, 7, 79-100.

  42. Ensslin, L., Dutra, A., Martins, R.P. and Dezem, V. (2016) Modelo Construtivista para Apoiar o Processo de Gestão da Universidade Federal de Tocantins. Iberoamerican Journal of Strategic Management, 15, 122-129.

  43. Dutra, A., Gamba Jr., J., Ripoll, M.F., Ensslin, S.R. and Lacerda, R.T.O. (2014) Multicriteria Performance Evaluation of Emergency Service Conducted by Military Fire Department in Santa Catarina-Brazil. La Pensée, 76, 94-108.

  44. Lacerda, R.T.O., Ensslin, L., Ensslin, S.R. and Dutra, A. (2014) A Constructivist Approach to Manage Business Process as a Dynamic Capability. Knowledge and Process Management, 21, 54-66.

  45. Ensslin, S.R., Ensslin, L., de Oliveira Lacerda, R.T. and de Souza, H.A. (2014) Disclosure of the State of the Art of Performance Evaluation Applied to Project Management. American Journal of Industrial and Business Management, 4, 677-687.

  46. Della Bruna Jr., E., Ensslin, L. and Ensslin, S.R. (2014) An MCDA-C Application to Evaluate Supply Chain Performance. International Journal of Physical Distribution & Logistics Management, 44, 597-616.

  47. Marafon, A.D., Ensslin, L., Ensslin, S.R., Rocha, S. and Medaglia, T.A. (2013) Modelo multicritério de apoio à decisão construtivista no processo de avaliação de fornecedores. Produção, 23, 402-421.

  48. Ensslin, L., Ensslin, S.R. and Pacheco, G.C. (2012) Um estudo sobre segurança em estádios de futebol baseado na análise bibliométrica da literatura internacional. Perspectivas em Ciência da Informação, UFMG, Belo Horizonte, 17, 71-91.

  49. Azevedo, R.C., Lacerda, R.T.O., Ensslin, L., Jungles, A.E. and Ensslin, S.R. (2013) Performance Measurement to Aid Decision Making in the Budgeting Process for Apartment Building Construction: A Case Study Using MCDA-C. Journal of Construction Engineering and Management, 139, 225-235.

  50. Rosa, F.S., Ensslin, S.R., Ensslin, L. and Lunkes, R.J. (2012) Environmental Disclosure Management: A Constructivist Case. Management Decision, 50, 1117-1136.

  51. Ensslin, L., Montibeller Neto, G.N. and Noronha, S.M. (2001) Apoio à decisão: Metodologias para estruturação de problemas e avaliação multicritério de alternativas. Insular, Florianópolis.

  52. CERTI (2013) Fundação Centros de Referência em Tecnologias Inovadoras.

  53. Sapiens Parque (2013)