Creative Education
Vol. 06 No. 02 (2015), Article ID: 53920, 17 pages
10.4236/ce.2015.62017

Adoption of Innovation within Universities: Proposing and Testing an Initial Model

Abdulrahman Hariri1, Paul Roberts2

1Quality Management, King Abdulaziz University, Jeddah, Saudi Arabia

2Warwick Manufacturing Group, The University of Warwick, Coventry, UK

Email: aaahariri@kau.edu.sa, Paul.Roberts@warwick.ac.uk

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 17 January 2015; accepted 5 February 2015; published 10 February 2015

ABSTRACT

This study discusses the need for improvement and innovation in universities so that they can serve students effectively and stay ahead of the competition. Many technologies and innovations are already used in universities. However, to diffuse or spread technologies or innovations effectively, it is important to understand the reasons leading to their adoption within universities. Based on a number of established theories and models of innovation and technology adoption and acceptance, this study proposes a theoretical model that helps explain the factors responsible for innovation adoption within universities. Measures for the study were adopted from previous studies, and an online questionnaire was used. Exploratory and confirmatory factor analyses were used to test and better understand the underlying structure of the proposed model, and the model's reliability and validity were examined. The initially proposed model appears to help explain the adoption of innovations within universities and is of value to researchers investigating adoption within universities.

Keywords:

Innovation Adoption, Innovation, Higher Education, Technology Adoption

1. Introduction

1.1. Innovation in Higher Education

Universities within the United Kingdom are facing many issues and challenges, including demands for accountability, the conflicting demands of teaching and research, budget cuts, a rapidly changing environment, advancements in technology, and many more. These issues and challenges have impacted staff members' ability to develop, improve, and innovate, and have thus restricted innovation within universities (Hariri & Roberts, 2014) .

A considerable amount of time has passed since the beginning of advances in information technology, particularly the diffusion and widespread use of the Internet around the world. This has certainly facilitated the diffusion of web-based approaches to learning (Rogers, 2003) . Nevertheless, the benefits realised from adopting, integrating, and using technologies to enhance learning are still minimal, and there has yet to be substantial improvement in teaching (Miller, Martineau, & Clark, 2000; Zemsky & Massy, 2004; Lonn & Teasley, 2009; Nachmias & Ram, 2009; Soffer, Nachmias, & Ram, 2010) . According to Miller et al. (2000) , technology is least diffused in the classroom. Personal computers, projectors, and other technologies are currently in use, but are these the only innovations that can enhance learning? Is it really plausible that technology-loving students are learning effectively from instruction methods that have been in use for decades or centuries? More innovative approaches and technologies are required to increase the quality of education. However, if such approaches exist, how can they be diffused across different departments or universities? According to Soffer, Nachmias, & Ram (2010) , the diffusion of innovations within universities does not necessarily mean their successful adoption or a significant impact on learning.

While the use of various innovations and technologies may help universities improve their services, it does not guarantee their adoption by staff members.

1.2. Adoption of Innovations

Innovation adoption is not a given: the assumption that creating a technology will lead to its adoption is false (Zemsky & Massy, 2004) . Providing or enabling the use of technologies (or innovations) is not sufficient to realise their benefits. The adoption and diffusion of innovations (including technologies) is a complex process that differs across groups of people and organisations (Rogers, 2003) .

Innovations and technologies disappear if they are not adopted (i.e. diffused). Thus, the process of getting individuals (e.g. staff members or students) to adopt and use these innovations or technologies is equally important (Rogers, 2003; Greenhalgh, Robert, Macfarlane, Bate, & Kyriakidou, 2005) .

Various factors influence the adoption of innovations, such as the characteristics of the innovation, the environment, and the adopter. Different theories and models have sought to explain adoption behaviour in various settings (Davis, 1989; Moore & Benbasat, 1991; Dooley, 1999; Karahanna, Straub, & Chervany, 1999; Venkatesh & Davis, 2000; Venkatesh, Morris, Davis, & Davis, 2003) .

The ability to evaluate the success of the various technologies and innovations used in universities depends largely on the number of adopters and how well they use them. Staff members need to buy in to the use of technologies or innovations to enhance learning. Similarly, where adoption is not mandated, students may need to understand how such technologies would enable them to learn before deciding whether to adopt and use them. For example, the success of online or distance education relies heavily on faculty engagement and participation (Tabata & Johnsrud, 2008) .

Understanding the factors responsible for the adoption of technologies or innovations within universities is necessary if their adoption and use are to enhance learning; a lack of such understanding can result in the failure of such adoptions (Zemsky & Massy, 2004) .

1.3. Model for Innovation Adoption in Higher Education

The Unified Theory of Acceptance and Use of Technology (UTAUT) integrates and validates many previous theories and models, such as the theory of reasoned action (TRA), the theory of planned behaviour (TPB), and the technology acceptance model (TAM) (Figure 1). Tests of the UTAUT model in different organisational settings have accounted for seventy per cent of the variance in the intention to use (Venkatesh, Morris, Davis, & Davis, 2003) . The model is considered one of the best for explaining the adoption or acceptance of technology (Jong & Wang, 2009) , and many others have used it and reported it to be adequately robust (e.g. El-Gayar & Moran, 2006; Oshlyansky, Cairns, & Thimbleby, 2007; Gogus, Nistor, & Lerche, 2012 ).

Figure 1. The Unified Theory of Acceptance and Use of Technology (UTAUT).

To date, the UTAUT model has been validated in education or higher education contexts only outside the UK (Jong & Wang, 2009; Yamin & Lee, 2010; Marques, Villate, & Carvalho, 2011; Gogus, Nistor, & Lerche, 2012; Oye, Iahad, & Rahim, 2012) . Furthermore, most of these studies used students as participants (e.g. El-Gayar & Moran, 2006; Jong & Wang, 2009; Sumak, Polancic, & Hericko, 2010; Yamin & Lee, 2010; Hsu, 2012; Lakhal, Khechine, & Pascot, 2013) . Thus, there is a need to understand and validate this model from the teacher's perspective; our study therefore focuses on the academic staff members of UK universities. Moreover, despite the UTAUT being a robust model for explaining technology or innovation adoption, it was not developed with the education context in mind. For instance, important factors that potentially influence innovation adoption within universities, such as students' requirements, expectations, and learning, have not previously been investigated.

Although the UTAUT integrated and tested many factors that influence innovation adoption, it failed to capture other important constructs supported by the innovation adoption literature, such as reinvention, results demonstrability, and trialability (Moore & Benbasat, 1991; Karahanna, Straub, & Chervany, 1999; Wejnert, 2002; Rogers, 2003; Suoranta, 2003; Odumeru, 2013) .

Moreover, it has been the norm in the technology adoption and acceptance field to test the UTAUT or similar models against a single innovation or technology (e.g. the use of an e-mail client, an e-learning system, etc.). Validating theories or models in this way may introduce unwanted effects specific to the chosen technology. Indeed, Tornatzky & Klein (1982) recommended that researchers look at multiple innovation characteristics, which allows for a better understanding of predictive power and any inter-relationships.

The theoretical model of our study may help explain innovation adoption within UK universities and is based on the UTAUT (Venkatesh, Morris, Davis, & Davis, 2003) and the diffusion of innovations theory (Moore & Benbasat, 1991; Rogers, 2003) . Additionally, as suggested by Tornatzky & Klein (1982) , this study investigates the adoption of different innovations, rather than a single innovation or technology, to reduce any unwanted moderating effects and to test the predictive power of the model.

The proposed model postulates ten constructs (Figure 2) that influence the intention to use innovations and their actual use. Literature support is given for the constructs themselves or for the factors they incorporate; for instance, perceived ease of use was incorporated into the effort expectancy construct.

Performance expectancy is the adopter's perception of how the adopted innovation will help achieve better job performance; it is similar to the relative advantage attribute (Rogers, 2003) . Effort expectancy is the perceived ease of using an innovation. Social influence is the degree to which peers influence the use of an innovation. Facilitating conditions is the perception that proper support is provided to help in using the innovation. Finally, behavioural intention is the readiness to use the innovation: the higher the intention to perform something (such as using an innovation), the more likely it is to take place (Ajzen, 1991; Table 1).

In addition to the above constructs, the following constructs were postulated to influence the intention to use an innovation: results demonstrability, visibility, trialability, and reinvention. Some of these proposed constructs have been empirically tested in the literature.

Results demonstrability is the visibility of the results of using the innovation. Visibility of the innovation, while closely related to the previous construct, concerns how visible the innovation itself is to others. Trialability is the ability to experiment with the innovation before its full adoption and use. Reinvention is the degree to which an innovation can be adapted, changed, or modified to suit the circumstances of the adopter or user.

Figure 2. Proposed model for innovation adoption in UK universities.

Table 1. Main constructs.

In addition to the above constructs that have been discussed and tested directly or indirectly (e.g. using similar constructs) in previous studies, we have proposed two new constructs: students’ requirements and expectations, and students’ learning.

Students’ requirements and expectations are expected to influence staff members’ decisions to adopt or use an innovation; such decisions should ideally take students’ requirements and expectations into consideration in order to meet or exceed them. Similarly, students’ learning is something universities should strive to improve continuously, so the adoption decision should also consider the degree to which an innovation can help improve students’ learning (Table 2).

The next section describes the data collection undertaken to test the proposed model.

2. Methods

Based on a number of innovation and technology adoption theories and models (e.g. Davis, 1985; Davis, Bagozzi, & Warshaw, 1989 ; Venkatesh, Morris, Davis, & Davis, 2003 ; El-Gayar & Moran, 2006 ; Oshlyansky, Cairns, & Thimbleby, 2007 ), this study aims to propose and validate a theoretical model that may help explain innovation adoption within UK universities. Moreover, the two newly proposed constructs, along with the previously studied constructs, will be tested in our study.

Data collected were aggregated and pooled across different innovations/technologies and organisations. Such aggregation is consistent with previous research (e.g. Compeau & Higgins, 1995; Venkatesh & Davis, 1996; Nistor, Wagner, Istvanffy, & Dragota, 2010 ). It minimises any influence that may result from testing the model against a single innovation or technology, allows a better understanding of the model's suitability, and helps explain adoption across multiple innovations.

In line with previous studies (e.g. Davis, 1985; Moore & Benbasat, 1991; Venkatesh, Morris, Davis, & Davis, 2003; Kijsanayotin, Pannarunothai, & Speedie, 2009 ), an online questionnaire survey was used because of its easy, low-cost distribution among staff members, most of whom have email accounts and are expected to use them.

Table 2. Additional constructs.

Different measures were adopted from various studies (most notably Moore & Benbasat, 1991; Venkatesh, Morris, Davis, & Davis, 2003 ) and modified to suit the present context. These measures are presented in Appendix 1.

This model was first validated within the University of Warwick, UK. Following previous research (Bryman, 2008; Cohen, Manion, & Morrison, 2011) , the survey questionnaire was pilot-tested within the university on 25 staff members selected from a population of over 130. Minor adjustments were made to the questions used, and reliability testing of the pilot data showed the instrument to be reliable. The researchers then proceeded to the next phase.

Over 17,754 staff members from 27 UK universities were invited to participate in our study. Considering staff members' busy schedules, we expected a low response rate and therefore drew a large sample. We received a total of 499 responses. The data were then screened, and responses with missing values or un-engaged answering patterns (e.g. selecting the same choice for every question) were removed, leaving 497 responses that were used to validate the proposed model.
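As an illustration of this screening step, the sketch below assumes the responses sit in a pandas DataFrame with one Likert-scale item per column; the file name and item prefixes are hypothetical.

```python
import pandas as pd

# Hypothetical file: one row per respondent, one column per Likert item.
df = pd.read_csv("responses.csv")

# Remove responses with missing values.
df = df.dropna()

# Remove "un-engaged" responses: answering the same choice for every
# question yields zero variance across a respondent's item scores.
item_cols = [c for c in df.columns if c.startswith(("PE_", "EE_", "SI_"))]  # hypothetical prefixes
df = df[df[item_cols].std(axis=1) > 0]

print(f"{len(df)} usable responses after screening")
```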

2.1. Demographics

Of the respondents, 61% were male and 38% female. The majority were aged between 30 and 50 years (59%), followed by those aged over 50 (39%) and those under 30 (1.8%). In terms of work experience, 63% of respondents had over 9 years, 21% had 5 to 9 years, and 15% had less than 5 years. In terms of educational level, 77% held a doctorate, 19% a master's degree, and 8% other qualifications.

Once the data were collected, exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) were performed to study the underlying relationships in the model and to test its reliability and validity.

2.2. Exploratory Factor Analysis

EFA helps investigate underlying structures based on the correlations between variables (Brace, Kemp, & Snelgar, 2012) . Using the SPSS software package, an EFA was carried out with the maximum likelihood extraction method and a Promax rotation (Hair, Black, Babin, & Anderson, 2010; Brace, Kemp, & Snelgar, 2012) . Maximum likelihood is appropriate here because it is consistent with the next stage of the analysis, which uses the same estimation technique.
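Outside SPSS, the same extraction and rotation choices can be reproduced with Python's factor_analyzer package. A minimal sketch, continuing the screening example above (the 11-factor setting mirrors the number of factors analysed in the Results section):

```python
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# Sampling adequacy: KMO should exceed the commonly cited 0.7 guideline.
kmo_per_item, kmo_total = calculate_kmo(df[item_cols])
print(f"KMO = {kmo_total:.3f}")

# Maximum likelihood extraction with an oblique Promax rotation,
# mirroring the SPSS settings used in the study.
fa = FactorAnalyzer(n_factors=11, method="ml", rotation="promax")
fa.fit(df[item_cols])

pattern_matrix = fa.loadings_  # rotated loadings, as in Appendix 2
```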

2.3. Confirmatory Factor Analysis

Assessing the CFA measurement model helps in reaching a better understanding of how well the measurement items reflect the latent variables (Byrne, 2010) . Moreover, the validity and reliability of the factors can be examined as part of the CFA.

CFA can be used in a confirmatory or an exploratory way (Byrne, 2010) . When used in a confirmatory way, the goal is usually to confirm predefined relationships. On the other hand, if there are no predefined relationships, or if the initial model was rejected by the researcher, CFA can be used in an exploratory way to explore and test various effects (Byrne, 2010) .
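The study fitted its CFA in AMOS; as an open alternative, a measurement model can be specified and fitted in Python with semopy. The two-construct specification below is purely illustrative, not the study's full model:

```python
import semopy

# '=~' defines a latent factor measured by its items (illustrative subset).
spec = """
PE =~ PE_1 + PE_2 + PE_4
EE =~ EE_1 + EE_2 + EE_3
"""
model = semopy.Model(spec)
model.fit(df)  # maximum likelihood estimation by default

# Goodness-of-fit indices (chi-square, CFI, TLI, RMSEA, etc.).
print(semopy.calc_stats(model).T)
```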

3. Results

The results of EFA and CFA are presented below.

3.1. Exploratory Factor Analysis (EFA)

A total of 11 factors were analysed using EFA, and the results of the analyses are presented in Appendix 2.

After the initial assessment and the removal of low-loading and non-loading items (Hair, Black, Babin, & Anderson, 2010) , the resulting model had a Kaiser-Meyer-Olkin (KMO) value of 0.824, above the acceptable value of 0.7. Communalities for all variables were sufficiently high (above 0.500). The adequacy of the variables and the model was also confirmed, as the reproduced matrix had only 2% non-redundant residuals greater than 0.05. The total variance explained by the tested model was 65%, which is considered substantial.
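Continuing the factor_analyzer sketch, these diagnostics can be read directly from the fitted solution (the 0.500 communality guideline is the one cited above):

```python
import pandas as pd

# Communalities: the share of each item's variance reproduced by the factors.
communalities = pd.Series(fa.get_communalities(), index=item_cols)
print("Items below the 0.500 guideline:", list(communalities[communalities < 0.5].index))

# Total variance explained (cumulative proportion after extraction).
ss_loadings, prop_var, cum_var = fa.get_factor_variance()
print(f"Total variance explained: {cum_var[-1]:.0%}")
```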

With respect to factor loadings, the items related to social influence were found to load on two different factors: social influence and image. The social influence construct integrated many similar concepts (Venkatesh, Morris, Davis, & Davis, 2003) , including subjective norms, social factors, and image. Some researchers have also warned that social influence should not be treated as a single construct (Martins & Kellermanns, 2004; Lakhal, Khechine, & Pascot, 2013) .

Moreover, trialability was found to load highly with facilitating conditions. This may be because individuals felt free to test innovations before using them. This perception is close to the definition given for facilitating conditions (see Venkatesh, Morris, Davis, & Davis, 2003 ), since individuals would feel less constrained and free to test innovations if proper facilitating conditions were in place.

A single Heywood case was found: the estimate for PE_3 was higher than 1. This was taken into account to avoid further issues in the subsequent stage.

Reliability and Validity

To minimise measurement error, the properties of the measures used in the study should be investigated to gain confidence in their effectiveness (Field, 2009) .

Cronbach’s alpha values were calculated and investigated (see Appendix 2). All values were well above the 0.7 cut-off point.
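Cronbach's alpha follows directly from the item variances and the variance of the summed scale. A sketch reusing the screened DataFrame df from the earlier example (the PE item grouping is hypothetical):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the columns of `items`, which measure one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical grouping of performance expectancy items.
print(f"alpha(PE) = {cronbach_alpha(df[['PE_1', 'PE_2', 'PE_3', 'PE_4']]):.2f}")
```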

In this study, two types of construct validity were investigated: convergent and discriminant validity. Convergent validity examines the degree to which items that theoretically belong to a single construct correlate with one another. Discriminant validity examines the degree to which the items or measures of a scale do not correlate highly with other constructs.

From the pattern matrix produced (Appendix 2), it can be seen that all constructs show high convergent validity, i.e. loadings above the 0.350 threshold (Hair, Black, Babin, & Anderson, 2010) . Moreover, although T (trialability) loads on two factors, its loading on FC (facilitating conditions) is high, indicating that the two are strongly related. Furthermore, since the researchers treated social influence as two different constructs, social influence and image, an inspection of the factor loadings for the items of both constructs shows high convergent validity.

Discriminant validity of the model is also demonstrated, since the measures belonging to each factor did not load on other factors simultaneously, with the exception of the cross-loading of T. Additionally, the factor correlation matrix (Appendix 2) shows no correlations higher than 0.7 between any of the constructs, which confirms the discriminant validity of all the constructs.

3.2. Confirmatory Factor Analysis (CFA)

Using the results of the previous EFA stage, the CFA measurement model was developed and assessed.

Based on experts’ recommendations (e.g. Byrne, 2010; Hair, Black, Babin, & Anderson, 2010 ) and following a number of iterations to explore and improve model fit, the resulting CFA model (Figure 3) was considered good, as it fits the data adequately. As can be seen in Table 3, the goodness of fit (GOF) indices indicate a good model fit.

Figure 3. Confirmatory factor analysis.

Table 3. CFA* goodness of fit indices.

Reliability and Validity

Although the reliability and validity of the proposed model were assessed in the previous EFA stage, it is important to re-examine them because of the changes (e.g. the addition and removal of items) introduced to the model, and to ensure that measurement errors were reduced. High reliability is argued to be linked with lower measurement error (Hair, Black, Babin, & Anderson, 2010) . Furthermore, to reflect latent factors appropriately, observed variables need to show evidence of reliability and validity (Straub, Boudreau, & Gefen, 2004; Schumacker & Lomax, 2010) .

Using the validity testing tool within the “Stats Tools Package” (Gaskin, 2012) and entering AMOS’s correlations and standardised regression tables into the tool, the reliability and validity results were calculated. These are shown in Appendix 3. The following points should be highlighted:

・ CR (Composite Reliability): measures the reliability of the factors and should ideally be above 0.75.

・ AVE (Average Variance Extracted): a measure of convergent validity that should be above 0.5 (Hair, Black, Babin, & Anderson, 2010) . It indicates how well the items explain the factor and is shown diagonally in bold.

・ MSV (Maximum Shared Squared Variance): the maximum squared correlation between the factor and the other factors in the model. It indicates how much of the factor is explained by items outside it (i.e. the items of other constructs).

・ ASV (Average Shared Squared Variance): similar to MSV, but based on the average of the squared inter-factor correlations. It indicates how much of the factor, on average, is explained by the items of other factors.

・ For discriminant validity, AVE should always be higher than MSV and ASV: the items belonging to a factor should explain it better than the items belonging to other factors (Straub, Boudreau, & Gefen, 2004) .

From the tables in Appendix 3 and Appendix 4, we can see that all constructs have high CR values. The high (above 0.50) AVE values indicate good convergent validity. Discriminant validity can be assessed by comparing the square root of the AVE for each construct (the diagonal in bold) with all inter-factor correlations (below the values in bold). All factors reveal adequate discriminant validity because all diagonal values (square roots of AVE) are greater than the correlations. Therefore, we conclude that adequate reliability and construct validity have been established.
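The quantities above follow directly from the standardised loadings and inter-factor correlations. The sketch below reproduces the calculations for a single construct using made-up illustrative values, not the study's actual estimates:

```python
import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)
    s = loadings.sum()
    return s**2 / (s**2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of the squared standardised loadings
    return (loadings**2).mean()

# Illustrative standardised loadings for one construct and its
# correlations with the other factors in the model.
loadings = np.array([0.82, 0.78, 0.74])
inter_factor_corr = np.array([0.41, 0.33, 0.25])

cr = composite_reliability(loadings)
ave = average_variance_extracted(loadings)
msv = (inter_factor_corr**2).max()   # maximum shared squared variance
asv = (inter_factor_corr**2).mean()  # average shared squared variance

print(f"CR={cr:.2f}, AVE={ave:.2f}, MSV={msv:.2f}, ASV={asv:.2f}")
# Convergent validity: AVE > 0.5; discriminant: AVE > MSV, ASV and
# sqrt(AVE) greater than every inter-factor correlation.
print("Convergent:", ave > 0.5,
      "| Discriminant:", ave > msv and np.sqrt(ave) > inter_factor_corr.max())
```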

3.3. Common Method Bias or Variance

Common method bias remains a threat to validity in certain research fields. Despite the majority of information systems (IS) research using a single data collection method, only a few studies have reported testing for it (Straub, Boudreau, & Gefen, 2004) .

Our study investigated whether common method variance was an issue using the common method factor approach, which is considered relevant for studies that do not measure a common factor explicitly (MacKenzie & Podsakoff, 2012) . Adding the common method factor indicated common method bias issues in some of the factors. This was also reflected in the reliability and validity tests run with the common method factor included.

In this study, common method bias might have affected some items because certain questions could have influenced respondents' answers to the next item (Straub, Boudreau, & Gefen, 2004) . Therefore, items or constructs impacted by this bias were dropped, and minor adjustments were made to the model, resulting in the following measurement model (Figure 4).

The error variance e31 had to be fixed rather than freely estimated, as it would otherwise have caused the regression weight for the SI_6 item to exceed 1.

3.4. Invariance Testing

When carrying out research that spans different groups (i.e. different countries, universities, etc.), it is vital to be conscious of, and to reduce, any bias that may result from the data collection method and/or respondents' characteristics (Cohen, Manion, & Morrison, 2011) . Hair, Black, Babin, & Anderson (2010) recommended establishing some form of metric invariance before examining the path estimates, something future studies may decide to pursue when building on the model presented in this study.

To assess and reduce such bias, measurement invariance across different groups (e.g. gender, age, experience, etc.) should be assessed. If testing the model across different groups shows a good GOF, this indicates configural invariance and that the groups are likely to be equivalent.
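Outside AMOS, configural invariance can be probed by fitting the same measurement model separately to each group and checking that fit remains acceptable. Continuing the earlier semopy sketch (the grouping column name is hypothetical):

```python
# Fit the same specification within each group; comparable, acceptable
# fit across groups supports configural invariance.
for group, subset in df.groupby("gender"):  # hypothetical demographic column
    m = semopy.Model(spec)
    m.fit(subset)
    stats = semopy.calc_stats(m)
    print(group,
          "CFI =", round(stats["CFI"].iloc[0], 3),
          "RMSEA =", round(stats["RMSEA"].iloc[0], 3))
```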

Using SPSS AMOS, groups were created from the categorical (e.g. demographic) data captured in the survey, and the model was tested across gender (male/female), age (30 - 50 years/over 50 years), education (MSc/doctorate), teaching hours per year (51 - 100/501 - 1000 hours), experience (medium/high), and country (England/Scotland/Wales). The model fit summary is presented in Table 4.

Based on the parameters reported above, we can say that the model is equivalent across different groups.

Moreover, using the Stats Tools Package (Gaskin, 2012) , a chi-square difference test was used to compare the unconstrained and fully constrained models, taking their degrees of freedom into account. For the fully constrained model, the regression weights were constrained and the factor variances were fixed to 1. The output of the comparison is given in Table 5.

As can be seen from Table 5, the p-value is greater than the 0.05 cut-off (Byrne, 2010) and therefore not significant, indicating that there are no significant differences between the groups at the model level.
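The model-level comparison in Table 5 reduces to a chi-square difference test. A minimal sketch with scipy, using placeholder chi-square and degrees-of-freedom values rather than the study's actual figures:

```python
from scipy.stats import chi2

# Placeholder fit statistics for the unconstrained and fully
# constrained multigroup models (not the study's actual values).
chi2_unconstrained, df_unconstrained = 850.0, 400
chi2_constrained, df_constrained = 905.0, 450

delta_chi2 = chi2_constrained - chi2_unconstrained
delta_df = df_constrained - df_unconstrained

# p > 0.05 implies no significant difference between the models,
# i.e. the groups are equivalent at the model level.
p_value = chi2.sf(delta_chi2, delta_df)
print(f"delta chi2 = {delta_chi2:.1f}, delta df = {delta_df}, p = {p_value:.3f}")
```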

Figure 4. Final measurement model.

Table 4. Multigroup invariance testing model fit.

Table 5. Invariance testing of the fully constrained and unconstrained model.

4. Discussion

Rapid changes have pressured universities around the world either to improve or to fall behind in an increasingly competitive race where funding is scarce. Universities need to improve to accommodate tech-addicted students. Various technologies and innovations, ranging from the Internet and email to learning management systems and others, have been implemented and are being used by staff members within universities. New knowledge is generated every second and, thanks to the Internet, its delivery and distribution costs have become cheap, if not free. Traditional preaching-style teaching methods have become obsolete for today's knowledge-creating and knowledge-consuming societies.

Thus, the introduction of innovations and technologies is not enough. Staff members must be encouraged to adopt such innovations and technologies, which may help enhance learning.

Acknowledging the need to understand the factors that lead to the adoption of innovations within universities, this study developed a theoretical model that may help explain the factors responsible for innovation and technology adoption within universities. For this purpose, the study used existing measures and adapted them accordingly; more specifically, measures were adapted to capture information related to different innovations. Additionally, new measures were developed for a number of the constructs investigated.

EFA and CFA were used to understand the underlying structure of the proposed model. Furthermore, several reliability and validity techniques were applied, and common method variance was examined. Lastly, configural invariance at the model level was established, since the proposed model performed well when tested against a number of groups defined by demographic or categorical information. Based on the results gained so far, the proposed model was found to fit the collected data adequately. Subsequent studies may focus on testing and exploring the various relationships in the model, as well as any mediation and moderation effects.

Although the proposed model needs further testing, a number of contributions have been made so far. First, in addition to the UTAUT's constructs, a number of new constructs were proposed, and the EFA and CFA results indicate that the proposed model fits the data. Second, although the UTAUT measures were modified to capture information related to multiple innovations, they remain reliable and valid. Third, this study proposed two additional constructs believed to be important for the adoption of innovations within universities: students' learning, and students' requirements and expectations. New measures were developed for both constructs and showed adequate reliability and validity. Future studies investigating the adoption of innovations within universities are likely to benefit from incorporating these measures, or from using the whole model as a starting point. Further research is required for a better understanding of the adoption of innovations within universities, to help diffuse the innovations, technologies, or approaches that may enhance learning within our universities.

As is the case with any research, this work has a number of limitations. First, despite the researchers' intention to study a larger sample, the response rate of this study was low (Hariri & Roberts, 2014) ; as a result, it is not possible to generalise the findings. Moreover, there is a possibility of inherent bias, in that only staff members who had time or were interested chose to participate. Furthermore, the questions used in this research relied on personal opinions and perceptions as reported by the participants, so responses may not reflect respondents' actual feelings and beliefs. Rather than relying fully on self-administered questionnaires, future research may consider another data collection method or a combination of methods, such as observations, action research, and/or collecting data at different points in time. Moreover, certain technologies or methods could be used within institutions to track and report staff members' usage of adopted innovations. Using different data collection methods could help establish whether the nature of the widely used questionnaire instrument is influencing or causing problems in researching and understanding the adoption of innovations and technologies. Other data collection methods may also be more accurate, especially in capturing actual adoption and use rather than self-reported information.

References

  1. Ajzen, I. (1991). The Theory of Planned Behavior. Organizational Behavior and Human Decision Processes, 50, 179 -211. http://dx.doi.org/10.1016/0749-5978(91)90020-T
  2. Brace, N., Kemp, R., & Snelgar, R. (2012). SPSS for Psychologists. Basingstoke: Palgrave Macmillan.
  3. Bryman, A. (2008). Social Research Methods (3rd ed.). Oxford, NY: Oxford University Press.
  4. Byrne, B. M. (2010). Structural Equation Modeling with Amos: Basic Concepts, Applications, and Programming (2nd ed.). New York: Taylor and Francis Group.
  5. Cohen, L., Manion, L., & Morrison, K. (2011). Research Methods in Education (7th ed.). New York: Routledge.
  6. Compeau, D. R., & Higgins, C. A. (1995). Computer Self-Efficacy: Development of a Measure and Initial Test. MIS Quarterly, 19, 189-211. http://dx.doi.org/10.2307/249688
  7. Davis, F. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13, 319-340. http://dx.doi.org/10.2307/249008
  8. Davis, F. D. (1985). A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Massachusetts Institute of Technology. http://hdl.handle.net/1721.1/15192
  9. Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User Acceptance of Computer Technology: A Comparison of Two Theoretical Models. Management Science, 35, 982-1003. http://dx.doi.org/10.1287/mnsc.35.8.982
  10. Dooley, K. E. (1999). Towards a Holistic Model for the Diffusion of Educational Technologies: An Integrative Review of Educational Innovation Studies. Educational Technology & Society, 2.
  11. El-Gayar, O. F., & Moran, M. (2006). College Students’ Acceptance of Tablet PCs: An Application of the UTAUT Model. Decision Sciences Institute (DSI), 2845-2850. http://www.homepages.dsu.edu/moranm/research/publications/dsi06-rip-tam-utaut.pdf
  12. Field, A. (2009). Discovering Statistics Using SPSS (3rd ed.). London: Sage Publications.
  13. Gaskin, J. (2012). Stats Wiki and Stats Tools Package. http://statwiki.kolobkreations.com/
  14. Gogus, A., Nistor, N., & Lerche, T. (2012). Educational Technology Acceptance across Cultures: A Validation of the Unified Theory of Acceptance and Use of Technology in the Context of Turkish National Culture. Turkish Online Journal of Educational Technology, 11, 394-408.
  15. Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P., & Kyriakidou, O. (2005). Diffusion of Innovations in Health Service Organisations: A Systematic Literature Review. Malden, MA: Blackwell.
  16. Hair, J., Black, W., Babin, B., & Anderson, R. (2010). Multivariate Data Analysis: A Global Perspective (7th ed.). London: Pearson Education.
  17. Hariri, A., & Roberts, P. (2014). Challenges and Issues Hindering Innovation in UK Universities. International Journal of Management and Marketing Academy, 2, 41-54.
  18. Hsu, H. (2012). The Acceptance of Moodle: An Empirical Study Based on UTAUT. Creative Education, 3, 44-46. http://dx.doi.org/10.4236/ce.2012.38B010
  19. Jong, D., & Wang, T. (2009). Student Acceptance of Web-Based Learning System. In Proceedings of the 2009 International Symposium on Web Information Systems and Applications (pp. 533-536). Nanchang: Academy Publisher.
  20. Karahanna, E., Straub, D. W., & Chervany, N. L. (1999). Information Technology Adoption across Time: A Cross-Sectional Comparison of Pre-Adoption and Post-Adoption Beliefs. MIS Quarterly, 23, 183-213. http://dx.doi.org/10.2307/249751
  21. Kijsanayotin, B., Pannarunothai, S., & Speedie, S. M. (2009). Factors Influencing Health Information Technology Adoption in Thailand’s Community Health Centers: Applying the UTAUT Model. International Journal of Medical Informatics, 78, 404-416. http://dx.doi.org/10.1016/j.ijmedinf.2008.12.005
  22. Lakhal, S., Khechine, H., & Pascot, D. (2013). Student Behavioural Intentions to Use Desktop Video Conferencing in a Distance Course: Integration of Autonomy to the UTAUT Model. Journal of Computing in Higher Education, 25, 93-121. http://dx.doi.org/10.1007/s12528-013-9069-3
  23. Lonn, S., & Teasley, S. D. (2009). Saving Time or Innovating Practice: Investigating Perceptions and Uses of Learning Management Systems. Computers & Education, 53, 686-694. http://dx.doi.org/10.1016/j.compedu.2009.04.008
  24. MacKenzie, S. B., & Podsakoff, P. M. (2012). Common Method Bias in Marketing: Causes, Mechanisms, and Procedural Remedies. Journal of Retailing, 88, 542-555. http://dx.doi.org/10.1016/j.jretai.2012.08.001
  25. Marques, B., Villate, J., & Carvalho, C. (2011). Applying the UTAUT Model in Engineering Higher Education: Teacher’s Technology Adoption. In 6th Iberian Conference on Information Systems and Technologies (CISTI). Chaves: Institute of Electrical and Electronics Engineers (IEEE). http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5974236
  26. Martins, L. L., & Kellermanns, F. W. (2004). A Model of Business School Students’ Acceptance of a Web-Based Course Management System. Academy of Management Learning & Education, 3, 7-26. http://dx.doi.org/10.5465/AMLE.2004.12436815
  27. Miller, J., Martineau, L., & Clark, R. (2000). Technology Infusion and Higher Education: Changing Teaching and Learning. Innovative Higher Education, 24, 227-241. http://dx.doi.org/10.1023/B:IHIE.0000047412.64840.1c
  28. Moore, G. C., & Benbasat, I. (1991). Development of an Instrument to Measure the Perceptions of Adopting an Information Technology Innovation. Information Systems Research, 2, 192-222. http://dx.doi.org/10.1287/isre.2.3.192
  29. Nachmias, R., & Ram, J. (2009). Research Insights from a Decade of Campus-Wide Implementation of Web-Supported Academic Instruction at Tel Aviv University. International Review of Research in Open and Distance Learning, 10, 1-16.
  30. Nistor, N., Wagner, M., Istvanffy, E., & Dragota, M. (2010). The Unified Theory of Acceptance and Use of Technology: Verifying the Model from a European Perspective. International Journal of Knowledge and Learning, 6, 185-199. http://dx.doi.org/10.1504/IJKL.2010.034753
  31. Odumeru, J. A. (2013). Going Cashless: Adoption of Mobile Banking in Nigeria. Arabian Journal of Business and Management Review, 1, 9-17.
  32. Oshlyansky, L., Cairns, P., & Thimbleby, H. (2007). Validating the Unified Theory of Acceptance and Use of Technology (UTAUT) Tool Cross-Culturally. Proceedings of the 21st BCS HCI Group Conference, 2, 83-86.
  33. Oye, N. D., Iahad, N. A., & Rahim, N. Z. (2012). The Impact of UTAUT Model and ICT Theoretical Framework on University Academic Staff: Focus on Adamawa State University, Nigeria. International Journal of Computers & Technology, 2, 102-111.
  34. Rogers, E. M. (2003). Diffusion of Innovations. New York: Free Press.
  35. Schumacker, R., & Lomax, R. (2010). A Beginner’s Guide to Structural Equation Modeling. New York: Routledge.
  36. Soffer, T., Nachmias, R., & Ram, J. (2010). Diffusion of Web Supported Instruction in Higher Education―The Case of Tel-Aviv University. Educational Technology & Society, 13, 212-223.
  37. Straub, D., Boudreau, M.-C., & Gefen, D. (2004). Validation Guidelines for IS Positivist Research. Communications of the Association for Information Systems, 13, 380-427.
  38. Sumak, B., Polancic, G., & Hericko, M. (2010). An Empirical Study of Virtual Learning Environment Adoption Using UTAUT. In Proceedings of the 2nd International Conference on Mobile, Hybrid, and On-Line Learning (ELML’10) (pp. 17-22). Saint Maarten: IEEE.
  39. Suoranta, M. (2003). Adoption of Mobile Banking in Finland. Jyväskylä Studies in Business and Economics, University of Jyväskylä. https://jyx.jyu.fi/dspace/bitstream/handle/123456789/13203/9513916545.pdf?sequence=1
  40. Tabata, L. N., & Johnsrud, L. K. (2008). The Impact of Faculty Attitudes toward Technology, Distance Education, and Innovation. Research in Higher Education, 49, 625-646. http://dx.doi.org/10.1007/s11162-008-9094-7
  41. Tornatzky, L. G., & Klein, K. J. (1982). Innovation Characteristics and Innovation Adoption-Implementation: A Meta- Analysis of Findings. IEEE Transactions on Engineering Management, 29, 28-45. http://dx.doi.org/10.1109/TEM.1982.6447463
  42. Venkatesh, V., & Davis, F. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46, 186-204. http://dx.doi.org/10.1287/mnsc.46.2.186.11926
  43. Venkatesh, V., & Davis, F. D. (1996). A Model of the Antecedents of Perceived Ease of Use: Development and Test. Decision Sciences, 27, 451-481. http://dx.doi.org/10.1111/j.1540-5915.1996.tb01822.x
  44. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27, 425-478.
  45. Wejnert, B. (2002). Integrating Models of Diffusion of Innovations: A Conceptual Framework. Annual Review of Sociology, 28, 297-326. http://dx.doi.org/10.1146/annurev.soc.28.110601.141051
  46. Yamin, M., & Lee, Y. (2010). Level of Acceptance and Factors Influencing Students’ Intention to Use UCSI University’s e-Mail System. 2010 International Conference on User Science and Engineering, Shah Alam, 13-15 December 2010, 26-31. http://dx.doi.org/10.1109/IUSER.2010.5716717
  47. Zemsky, R., & Massy, W. (2004). Thwarted Innovation: What Happened to e-Learning and Why. http://www.immagic.com/eLibrary/ARCHIVES/GENERAL/UPENN_US/P040600Z.pdf

Appendices

Appendix 1. Measures

Appendix 2. EFA results and reliability testing.

Extraction Method: Maximum Likelihood. *One or more communality estimates greater than 1 were encountered during iterations. The resulting solutions should be interpreted with caution.

Total variance explained.

Extraction Method: Maximum Likelihood. #When factors are correlated, sums of squared loadings cannot be added to obtain a total variance.

Pattern matrix.

Extraction Method: Maximum Likelihood. Rotation Method: Promax with Kaiser Normalization. a. Rotation converged in 7 iterations.

Factor correlation matrix.

Extraction Method: Maximum Likelihood. Rotation Method: Promax with Kaiser Normalization.

Appendix 3. Reliability and validity testing of the measurement model.

No Validity Concerns.

Appendix 4. Reliability and validity testing of the final measurement model.

No Validity Concerns.