Search results for: quantitative pairwise comparison
7703 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth, and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search of “digital health” returned 941 unique results, “e-health” returned 167 companies, while “telehealth” returned 427. We also searched the CBInsights database for similar information. After merging the results, removing duplicates, and cleaning up the database, we arrived at a list of 1464 digital health companies. A qualitative method will be used to complement the quantitative analysis. We will do an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010; 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10 employees. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” of patients, physicians, and payers, digital health companies extend this to “5P” by adding patents, which is the result of technology requirements (such as the development of artificial intelligence models), and platform, which is an effective value creation approach to bring the stakeholders together. Our case analysis will detail the 5P framework and contribute to the extant knowledge on business models in the healthcare industry. Keywords: digital health, business models, entrepreneurship opportunities, healthcare
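As an editorial illustration of the merge-and-deduplicate step described above, the following Python sketch assumes hypothetical CSV exports and column names; they are not the authors' actual Crunchbase or CBInsights files:
```python
import pandas as pd

# Assumed file and column names, for illustration only
crunchbase = pd.read_csv("crunchbase_digital_health.csv")
cbinsights = pd.read_csv("cbinsights_digital_health.csv")

combined = pd.concat([crunchbase, cbinsights], ignore_index=True)

# Normalize company names so "Acme Health Inc." and "acme health inc" collapse to one record
combined["name_key"] = combined["name"].str.lower().str.strip()
deduped = combined.drop_duplicates(subset="name_key").drop(columns="name_key")

print(f"{len(deduped)} unique digital health companies")  # the paper reports 1464
```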
Procedia PDF Downloads 187
7702 A Study on Human Musculoskeletal Model for Cycle Fitting: Comparison with EMG
Authors: Yoon- Ho Shin, Jin-Seung Choi, Dong-Won Kang, Jeong-Woo Seo, Joo-Hack Lee, Ju-Young Kim, Dae-Hyeok Kim, Seung-Tae Yang, Gye-Rae Tack
Abstract:
It is difficult to study the effect of various variables on cycle fitting through actual experiments. To overcome this difficulty, the forward dynamics of a musculoskeletal model was applied to cycle fitting in this study. The measured EMG data were compared with the muscle activities of the musculoskeletal model obtained through forward dynamics. EMG data were measured from five cyclists without musculoskeletal diseases during three minutes of pedaling at a constant load (150 W) and cadence (90 RPM). The muscles used for the analysis were the Vastus Lateralis (VL), Tibialis Anterior (TA), Biceps Femoris (BF), and Gastrocnemius Medial (GM). Pearson’s correlation coefficients of the muscle activity patterns, the peak timing of the maximum muscle activities, and the total muscle activities were calculated and compared. The BIKE3D model of AnyBody (Anybodytech, Denmark) was used for the musculoskeletal model simulation. The comparisons of the actual experiments with the simulation results showed significant correlations in the muscle activity patterns (VL: 0.789, TA: 0.503, BF: 0.468, GM: 0.670). The peak timings of the maximum muscle activities were distributed at particular phases. The total muscle activities were compared using normalized muscle activities, and the comparison showed about a 10% difference for the VL (+10%), TA (+9.7%), and BF (+10%), except for the GM (+29.4%). Thus, it can be concluded that the muscle activities of the model and the experiment were similar. The results of this study indicate that it is possible to apply simulations with a further improved musculoskeletal model to cycle fitting. Keywords: musculoskeletal modeling, EMG, cycle fitting, simulation
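As a hedged illustration of the pattern comparison described above, this Python sketch computes Pearson's r between a measured EMG envelope and a simulated activity curve for one muscle over a pedal cycle; the curves are synthetic placeholders, not the authors' data:
```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
phase = np.linspace(0, 2 * np.pi, 100)                       # one crank revolution
emg_vl = np.clip(np.sin(phase), 0, None) + 0.05 * rng.random(100)   # measured envelope (synthetic)
model_vl = np.clip(np.sin(phase - 0.1), 0, None)              # simulated VL activity (synthetic)

r, p = pearsonr(emg_vl, model_vl)
print(f"VL pattern correlation r = {r:.3f}")                  # the abstract reports 0.789 for VL

# Total-activity comparison on normalized curves
total_emg = np.sum(emg_vl / emg_vl.max())
total_model = np.sum(model_vl / model_vl.max())
print(f"relative difference = {100 * (total_model - total_emg) / total_emg:+.1f} %")
```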
Procedia PDF Downloads 573
7701 Comparison of Iodine Density Quantification through Three Material Decomposition between Philips iQon Dual Layer Spectral CT Scanner and Siemens Somatom Force Dual Source Dual Energy CT Scanner: An in vitro Study
Authors: Jitendra Pratap, Jonathan Sivyer
Abstract:
Introduction: Dual energy/Spectral CT scanning permits simultaneous acquisition of two x-ray spectra datasets and can complement radiological diagnosis by allowing tissue characterisation (e.g., uric acid vs. non-uric acid renal stones), enhancing structures (e.g. boost iodine signal to improve contrast resolution), and quantifying substances (e.g. iodine density). However, the latter showed inconsistent results between the 2 main modes of dual energy scanning (i.e. dual source vs. dual layer). Therefore, the present study aimed to determine which technology is more accurate in quantifying iodine density. Methods: Twenty vials with known concentrations of iodine solutions were made using Optiray 350 contrast media diluted in sterile water. The concentration of iodine utilised ranged from 0.1 mg/ml to 1.0mg/ml in 0.1mg/ml increments, 1.5 mg/ml to 4.5 mg/ml in 0.5mg/ml increments followed by further concentrations at 5.0 mg/ml, 7mg/ml, 10 mg/ml and 15mg/ml. The vials were scanned using Dual Energy scan mode on a Siemens Somatom Force at 80kV/Sn150kV and 100kV/Sn150kV kilovoltage pairing. The same vials were scanned using Spectral scan mode on a Philips iQon at 120kVp and 140kVp. The images were reconstructed at 5mm thickness and 5mm increment using Br40 kernel on the Siemens Force and B Filter on Philips iQon. Post-processing of the Dual Energy data was performed on vendor-specific Siemens Syngo VIA (VB40) and Philips Intellispace Portal (Ver. 12) for the Spectral data. For each vial and scan mode, the iodine concentration was measured by placing an ROI in the coronal plane. Intraclass correlation analysis was performed on both datasets. Results: The iodine concentrations were reproduced with a high degree of accuracy for Dual Layer CT scanner. Although the Dual Source images showed a greater degree of deviation in measured iodine density for all vials, the dataset acquired at 80kV/Sn150kV had a higher accuracy. Conclusion: Spectral CT scanning by the dual layer technique has higher accuracy for quantitative measurements of iodine density compared to the dual source technique.Keywords: CT, iodine density, spectral, dual-energy
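As an editorial sketch of the agreement analysis named above, the snippet below computes intraclass correlation coefficients between nominal iodine concentrations and values measured by each scan mode; the numbers are invented, and pingouin is simply one library that implements ICC, not necessarily the tool the authors used:
```python
import pandas as pd
import pingouin as pg

nominal     = [0.5, 1.0, 2.0, 3.0, 4.5, 5.0, 7.0, 10.0, 15.0]            # mg/ml
dual_layer  = [0.52, 1.03, 1.97, 3.05, 4.48, 5.02, 7.04, 9.95, 14.90]    # assumed readings
dual_source = [0.60, 1.15, 2.20, 3.30, 4.80, 5.40, 7.60, 10.80, 16.20]   # assumed readings

records = []
for i, c in enumerate(nominal):
    records += [
        {"vial": i, "method": "nominal", "iodine": c},
        {"vial": i, "method": "dual_layer", "iodine": dual_layer[i]},
        {"vial": i, "method": "dual_source", "iodine": dual_source[i]},
    ]
df = pd.DataFrame(records)

icc = pg.intraclass_corr(data=df, targets="vial", raters="method", ratings="iodine")
print(icc[["Type", "ICC", "CI95%"]])
```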
Procedia PDF Downloads 124
7700 AI-Driven Solutions for Optimizing Master Data Management
Authors: Srinivas Vangari
Abstract:
In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (Quantitative and Qualitative Analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.Keywords: artificial intelligence, master data management, data governance, data quality
Procedia PDF Downloads 23
7699 Effect of Diamagnetic Additives on Defects Level of Soft LiTiZn Ferrite Ceramics
Authors: Andrey V. Malyshev, Anna B. Petrova, Anatoly P. Surzhikov
Abstract:
The article presents results on the influence of diamagnetic additives on the defect level of ferrite ceramics. For this purpose, we use a previously developed method based on the mathematical analysis of experimental temperature dependences of the initial permeability. A phenomenological expression for the description of such dependence was suggested, and an interpretation of its main parameters was given. It was shown that the main criterion of the integral defect level of ferrite ceramics is the ratio of two parameters correlating with the elastic stress value in the material. Model samples containing a controlled number of intergranular phase inclusions served to prove the validity of the proposed method, as well as to assess its sensitivity in comparison with traditional XRD (X-ray diffraction) analysis. The broadening data of the diffraction reflections of the model samples served for this comparison. The defect level data obtained by the proposed method are in good agreement with the X-ray data, and the method showed high sensitivity. Therefore, the legitimacy of selecting the ratio of the β/α parameters of the phenomenological expression as a characteristic of the elastic state of the ferrite ceramics was confirmed. In addition, the obtained data can be used in the detection of non-magnetic phases and in testing the optimal sintering technology for the production of soft magnetic ferrites. Keywords: cure point, initial permeability, integral defect level, homogeneity
Procedia PDF Downloads 137
7698 Guidelines for Enhancing the Learning Environment by the Integration of Design Flexibility and Immersive Technology: The Case of the British University in Egypt’s Classrooms
Authors: Eman Ayman, Gehan Nagy
Abstract:
The learning environment has four main parameters that affect its efficiency: pedagogy, user, technology, and space. According to Morrone, enhancing these parameters so that they are adaptable to future developments is essential, and educational organizations will need to develop their learning spaces. Flexibility of design and immersive technology could be used as tools for this development. When flexible design concepts are used, learning spaces that can accommodate a variety of teaching and learning activities are created. To accommodate the various needs and interests of students, these learning spaces are easily reconfigurable and customizable. The immersive learning opportunities offered by technologies such as virtual reality, augmented reality, and interactive displays, on the other hand, transcend the confines of the traditional classroom. These technological advancements could improve learning. This thesis highlights the problem of the lack of innovative, flexible learning spaces in educational institutions. It aims to develop guidelines for enhancing the learning environment through the integration of flexible design and immersive technology. This research uses a mixed-methods approach, both qualitative and quantitative: the qualitative section is related to the literature review of theories and to case study analysis, while the quantitative section draws on the results of applied studies of the effectiveness of redesigning a learning space from its traditional current state into a flexible, technological, contemporary space that is adaptable to many changes and educational needs. The research findings establish the importance of flexibility in the interior design of learning spaces, as it enhances space optimization and the capacity to accommodate change, and they record the significant contribution of immersive technology in assisting the design process. The findings will be summarized through the questionnaire results and comparative analysis, which is the last step in finalizing the guidelines. Keywords: flexibility, learning space, immersive technology, learning environment, interior design
Procedia PDF Downloads 101
7697 Comparison Ileal and Excreta Digestibility of Protein Poultry by-Product Meal in 21 to 28 Days of Age Broiler Chicken
Authors: N. Mahmoudnia, M. Khormali
Abstract:
This experiment was conducted to determine the apparent protein digestibility of poultry by-product meal (PBPM) from two industrial poultry slaughterhouses in Ross 308 male broiler chickens using independent comparisons. The experiment consisted of seven dietary treatments and three replicates per treatment, with three broiler chickens per replicate, in a completely randomized design. Dietary treatments consisted of a control corn-soybean diet and levels of 3, 6, and 9% PBPM produced by slaughterhouse 1 and levels of 3, 6, and 9% PBPM produced by slaughterhouse 2. Chromic oxide was added to the experimental diets as an indigestible marker. The apparent protein digestibility of each diet was determined with two sample-collection methods, ileal and excreta, at 21-28 d of age. The results of this experiment showed that the use of PBPM had no significant effect on the performance of broiler chicks during the experimental period. The apparent protein digestibility of the PBPM groups was significantly higher than that of the control group based on the excreta sampling procedure (P<0.05). Using PBPM 2 significantly (P<0.05) decreased the apparent protein digestibility values based on the ileal sampling procedure vs. the control (85.21 vs. 90.14). Based on the results of this experiment, it is possible to use PBPM 1 in broiler chickens. Keywords: poultry by-product meal, apparent protein digestibility, independent comparison, broiler chicken
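The abstract does not spell out the calculation, but with an indigestible marker such as chromic oxide, apparent digestibility is commonly obtained from the marker-to-nutrient ratios in the diet and in the ileal or excreta sample. The sketch below shows that standard marker-ratio formula with invented numbers, purely as an illustration:
```python
def apparent_digestibility(cr_diet, cr_sample, cp_diet, cp_sample):
    """Apparent digestibility (%) via the indigestible-marker ratio method.

    cr_*: chromic oxide concentration (g/kg DM) in diet and in ileal/excreta sample
    cp_*: crude protein concentration (g/kg DM) in diet and in ileal/excreta sample
    """
    return 100.0 * (1.0 - (cr_diet / cr_sample) * (cp_sample / cp_diet))

# Illustrative values only (not from the study):
print(apparent_digestibility(cr_diet=3.0, cr_sample=9.0, cp_diet=215.0, cp_sample=70.0))
# -> about 89.1 %
```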
Procedia PDF Downloads 604
7696 Comparison between Deterministic and Probabilistic Stability Analysis, Featuring Consequent Risk Assessment
Authors: Isabela Moreira Queiroz
Abstract:
Slope stability analyses are largely carried out by deterministic methods and evaluated through a single safety factor. Although it is known that geotechnical parameters can present great dispersion, such analyses treat them as fixed and known. Probabilistic methods, in turn, incorporate the variability of the key input parameters (random variables), resulting in a range of safety factor values and thus enabling the determination of the probability of failure, which is an essential parameter in the calculation of risk (probability multiplied by the consequence of the event). Among the probabilistic methods, three are frequently used in the geotechnical community: FOSM (First-Order, Second-Moment), Rosenblueth (Point Estimates), and Monte Carlo. This paper presents a comparison between the results from deterministic and probabilistic analyses (FOSM, Monte Carlo, and Rosenblueth) applied to a hypothetical slope. The aim was to evaluate the behavior of the slope and to carry out the consequent risk analysis, which is used to calculate the risk and to analyze mitigation and control solutions. It can be observed that the results obtained by the three probabilistic methods were quite close. It should be noted that the calculation of risk makes it possible to prioritize the implementation of mitigation measures. Therefore, it is recommended to carry out a sound assessment of the geological-geotechnical model, incorporating uncertainty in feasibility, design, construction, operation, and closure by means of risk management. Keywords: probabilistic methods, risk assessment, risk management, slope stability
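As a hedged illustration of the Monte Carlo approach named above, the sketch below samples uncertain strength parameters, evaluates an infinite-slope safety factor, and estimates the probability of failure; the limit-equilibrium expression, distributions, and parameter values are assumptions for illustration, not the hypothetical slope analyzed in the paper:
```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Random variables (assumed distributions)
cohesion = rng.normal(15.0, 3.0, n)                 # effective cohesion, kPa
phi = np.radians(rng.normal(30.0, 2.5, n))          # friction angle, rad

# Deterministic inputs (assumed)
gamma, depth, beta = 18.0, 5.0, np.radians(35.0)    # unit weight kN/m3, depth m, slope angle

# Infinite-slope safety factor (dry slope, one simple limit-equilibrium model)
fs = (cohesion + gamma * depth * np.cos(beta) ** 2 * np.tan(phi)) / (
    gamma * depth * np.sin(beta) * np.cos(beta))

p_failure = np.mean(fs < 1.0)
print(f"mean FS = {fs.mean():.2f}, probability of failure = {p_failure:.4f}")
# risk = probability of failure x consequence (e.g., expected cost of the event)
```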
Procedia PDF Downloads 394
7695 Recommendations for Teaching Word Formation for Students of Linguistics Using Computer Terminology as an Example
Authors: Svetlana Kostrubina, Anastasia Prokopeva
Abstract:
This research presents a comprehensive study of the word formation processes in computer terminology within English and Russian languages and provides listeners with a system of exercises for training these skills. The originality is that this study focuses on a comparative approach, which shows both general patterns and specific features of English and Russian computer terms word formation. The key point is the system of exercises development for training computer terminology based on Bloom’s taxonomy. Data contain 486 units (228 English terms from the Glossary of Computer Terms and 258 Russian terms from the Terminological Dictionary-Reference Book). The objective is to identify the main affixation models in the English and Russian computer terms formation and to develop exercises. To achieve this goal, the authors employed Bloom’s Taxonomy as a methodological framework to create a systematic exercise program aimed at enhancing students’ cognitive skills in analyzing, applying, and evaluating computer terms. The exercises are appropriate for various levels of learning, from basic recall of definitions to higher-order thinking skills, such as synthesizing new terms and critically assessing their usage in different contexts. Methodology also includes: a method of scientific and theoretical analysis for systematization of linguistic concepts and clarification of the conceptual and terminological apparatus; a method of nominative and derivative analysis for identifying word-formation types; a method of word-formation analysis for organizing linguistic units; a classification method for determining structural types of abbreviations applicable to the field of computer communication; a quantitative analysis technique for determining the productivity of methods for forming abbreviations of computer vocabulary based on the English and Russian computer terms, as well as a technique of tabular data processing for a visual presentation of the results obtained. a technique of interlingua comparison for identifying common and different features of abbreviations of computer terms in the Russian and English languages. The research shows that affixation retains its productivity in the English and Russian computer terms formation. Bloom’s taxonomy allows us to plan a training program and predict the effectiveness of the compiled program based on the assessment of the teaching methods used.Keywords: word formation, affixation, computer terms, Bloom's taxonomy
Procedia PDF Downloads 25
7694 Weight Loss and Symptom Improvement in Women with Secondary Lymphedema Using Semaglutide
Authors: Shivani Thakur, Jasmin Dominguez Cervantes, Ahmed Zabiba, Fatima Zabiba, Sandhini Agarwal, Kamalpreet Kaur, Hussein Maatouk, Shae Chand, Omar Madriz, Tiffany Huang, Saloni Bansal
Abstract:
The prevalence of lymphedema in women in rural communities highlights the importance of developing effective treatment and prevention methods. Subjects with secondary lymphedema in California’s Central Valley were surveyed at 6 surgical clinics to assess demographics and symptoms of lymphedema. Additionally, subjects on semaglutide treatment for obesity and/or T2DM were monitored for their diabetes management, weight loss progress, and lymphedema symptoms and compared to subjects who were not treated with semaglutide. The subjects were followed for 12 months. Subjects who were treated with semaglutide completed pre-treatment questionnaires and follow-up post-treatment questionnaires at 3, 6, 9, and 12 months, along with medical assessment. The untreated subjects completed similar questionnaires. The questionnaires investigated subjective feelings regarding lymphedema symptoms and management using a Likert scale; quantitative leg measurements were collected, and blood work was reviewed at these appointments. Paired-difference t-tests, chi-squared tests, and independent-sample t-tests were performed. 50 subjects, aged 18-75 years, completed the surveys evaluating secondary lymphedema: 90% female, 69% Hispanic, 45% Spanish speaking, 42% disabled, 57% employed, 54% with an income below 30 thousand dollars, and an average BMI of 40. Both treatment and non-treatment groups noted that the most common symptoms were leg swelling (x̄=3.2, SD=1.3), leg pain (x̄=3.2, SD=1.6), loss of daily function (x̄=3, SD=1.4), and negative body image (x̄=4.4, SD=0.54). Subjects in the semaglutide treatment group with >3 months of treatment, compared to the untreated group, demonstrated the following: 55% of subjects in the treated group had a 10% weight loss vs. 3% in the untreated group (average BMI reduction of 11% vs. 2.5% in the untreated group, p<0.05), and improved subjective feelings about their lymphedema symptoms: leg swelling (x̄=2.4, SD=0.45 vs. x̄=3.2, SD=1.3, p<0.05), leg pain (x̄=2.2, SD=0.45 vs. x̄=3.2, SD=1.6, p<0.05), and heaviness (x̄=2.2, SD=0.45 vs. x̄=3, SD=1.56, p<0.05). Improvement in diabetes management was demonstrated by an average 0.9% decrease in A1C values compared to 0.1% in the untreated group, p<0.05. In comparison to untreated subjects, subjects treated with semaglutide showed a 6 cm decrease in the circumference of the leg, knee, calf, and ankle compared to 2 cm in untreated subjects, p<0.05. Semaglutide was shown to significantly improve weight loss, T2DM management, leg circumference, and the functional, physical, and psychosocial symptoms of secondary lymphedema. Keywords: diabetes, secondary lymphedema, semaglutide, obesity
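As a small illustration of the statistical comparisons mentioned above, this sketch runs a paired-difference t-test on pre- vs. post-treatment Likert scores within a treated group and an independent-samples t-test against an untreated group; the scores are simulated placeholders, not study data:
```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_swelling = rng.integers(3, 6, size=20)                           # 1-5 Likert at baseline
post_swelling = np.clip(pre_swelling - rng.integers(0, 3, size=20), 1, 5)
untreated = rng.integers(3, 6, size=20)

t_paired, p_paired = stats.ttest_rel(pre_swelling, post_swelling)    # within treated group
t_ind, p_ind = stats.ttest_ind(post_swelling, untreated)             # treated vs. untreated

print(f"paired: t = {t_paired:.2f}, p = {p_paired:.3f}")
print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
```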
Procedia PDF Downloads 64
7693 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube
Authors: Alak Kumar Patra, Nilanjan Mitra
Abstract:
Higher strength to weight ratio is the main advantage of sandwich composite structures. Interfacial delamination between the face sheet and core is a major problem in these structures. Many research works are devoted to improve the interfacial fracture toughness of composites majorities of which are on nano and laminated composites. Work on influence of multiwalled carbon nano-tubes (MWCNT) dispersed resin system on interface fracture of glass-epoxy PVC core sandwich composite is extremely limited. Finite element study is followed by experimental investigation on interface fracture toughness of glass-epoxy (G/E) PVC core sandwich composite with and without MWCNT. Results demonstrate an improvement in interface fracture toughness values (Gc) of samples with a certain percentages of MWCNT. In addition, dispersion of MWCNT in epoxy resin through sonication followed by mixing of hardener and vacuum resin infusion (VRI) technology used in this study is an easy and cost effective methodology in comparison to previously adopted other methods limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for maximum performance gain in interfacial fracture toughness. The results agree with finite element study, high-resolution transmission electron microscope (HRTEM) analysis and fracture micrograph of field emission scanning electron microscope (FESEM) investigation. Interface fracture toughness (GC) of the DCB sandwich samples is calculated using the compliance calibration (CC) method considering the modification due to shear. Compliance (C) vs. crack length (a) data of modified sandwich DCB specimen is fitted to a power function of crack length. The calculated mean value of the exponent n from the plots of experimental results is 2.22 and is different from the value (n=3) prescribed in ASTM D5528-01for mode 1 fracture toughness of laminate composites (which is the basis for modified compliance calibration method). Differentiating C with respect to crack length (a) and substituting it in the expression GC provides its value. The research demonstrates improvement of 14.4% in peak load carrying capacity and 34.34% in interface fracture toughness GC for samples with 1.5 wt% MWCNT (weight % being taken with respect to weight of resin) in comparison to samples without MWCNT. The paper focuses on significant improvement in experimentally determined interface fracture toughness of sandwich samples with MWCNT over the samples without MWCNT using much simpler method of sonication. Good dispersion of MWCNT was observed in HRTEM with 1.5 wt% MWCNT addition in comparison to other percentages of MWCNT. FESEM studies have also demonstrated good dispersion and fiber bridging of MWCNT in resin system. Ductility is also observed to be higher for samples with MWCNT in comparison to samples without.Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite
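As a hedged illustration of the compliance calibration described above, the sketch below fits C = k·a^n to compliance vs. crack-length data and evaluates G_C = n·P_c·δ_c/(2·b·a), which follows from G = (P²/2b)·dC/da for the fitted power law; all numbers are invented, not the authors' measurements:
```python
import numpy as np

a = np.array([30.0, 35.0, 40.0, 45.0, 50.0])        # crack length, mm
C = np.array([0.010, 0.014, 0.019, 0.025, 0.032])   # compliance, mm/N (assumed)

# Fit log(C) = log(k) + n*log(a)
n, log_k = np.polyfit(np.log(a), np.log(C), 1)
print(f"fitted exponent n = {n:.2f}")                # the abstract reports a mean of 2.22

# Assumed critical load/displacement at crack onset for one specimen
P_c, delta_c, b, a_c = 120.0, 3.5, 25.0, 40.0        # N, mm, specimen width mm, crack length mm
G_c = n * P_c * delta_c / (2.0 * b * a_c)            # N/mm, numerically equal to kJ/m^2
print(f"G_c = {G_c:.3f} kJ/m^2")
```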
Procedia PDF Downloads 304
7692 Determination of Phenolic Compounds in Apples Grown in Different Geographical Regions
Authors: Mindaugas Liaudanskas, Monika Tallat-Kelpsaite, Darius Kviklys, Jonas Viskelis, Pranas Viskelis, Norbertas Uselis, Juozas Lanauskas, Valdimaras Janulis
Abstract:
Apples are an important source of various biologically active compounds beneficial to human health. The phenolic compounds detected in apples are natural antioxidants and have antimicrobial, anti-inflammatory, anticarcinogenic, and cardiovascular protective activity. The quantitative composition of phenolic compounds in apples may be affected by various factors. It is important to investigate this in order to provide the consumer with apples, and products made from them, of high quality and well-characterized composition. The objective of this study was to evaluate the quantitative composition of phenolic compounds in apple fruits grown in different geographical regions. In this study, biological replicates of apple cv. 'Ligol', grown in Lithuania, Latvia, Poland, and Estonia, were investigated. Three biological replicates were analyzed, each of which contained 10 apples. Samples of lyophilized apple fruits were extracted with 70% ethanol (v/v) for 20 min at 40 °C using an ultrasonic bath. The ethanol extracts of apple fruits were analyzed by the high-performance liquid chromatography method. The study found that the geographical location of the apple trees had an impact on the composition of phenolic compounds in apples. The amount of quercetin glycosides varied from 314.78±9.47 µg/g (Poland) to 648.17±5.61 µg/g (Estonia). The same trend was also observed with flavan-3-ols (from 829.56±47.17 µg/g to 2300.85±35.49 µg/g), phloridzin (from 55.29±1.7 µg/g to 208.78±0.35 µg/g), and chlorogenic acid (from 501.39±28.84 µg/g to 1704.35±22.65 µg/g). It was observed that the amount of the investigated phenolic compounds tended to increase from apples grown in the southern location (Poland) (1701.02±75.38 µg/g) to apples grown in the northern location (Estonia) (4862.15±56.37 µg/g). Apples (cv. 'Ligol') grown in Estonia accumulated approximately 2.86 times more phenolic compounds than apples grown in Poland. Acknowledgment: This work was supported by a grant from the Research Council of Lithuania, project No. S-MIP-17-8. Keywords: apples, cultivar 'Ligol', geographical regions, HPLC, phenolic compounds
Procedia PDF Downloads 190
7691 Evaluation of Teaching Team Stress Factors in Two Engineering Education Programs
Authors: Kari Bjorn
Abstract:
Team learning has been studied and modeled as the double-loop model and its variations. Metacognition has also been suggested as a concept to describe the nature of team learning as more than a simple sum of the individual learning of the team members. Team learning has a positive correlation with both the individual motivation of its members and the collective factors within the team. The team learning of previously very independent members of two teaching teams is analyzed. Universities of applied sciences are training future professionals with ever more diversified and multidisciplinary skills, and the units of teaching and learning are increasingly large for several reasons. First, multidisciplinary skill development requires more active learning and richer learning environments and learning experiences; this occurs in student teams. Secondly, teaching multidisciplinary skills requires multidisciplinary and team-based teaching from the teachers as well. Team formation phases have been identified and widely accepted. Team role stress has been analyzed in project teams, which typically have a well-defined goal and organization. This paper explores the team stress of two teacher teams running two course units in parallel in engineering education. The first is Industrial Automation Technology and the second is Development of Medical Devices. The courses have separate student groups and are on different campuses, and both run in parallel within an 8-week period. Both are taught by a group of four teachers with several years of teaching experience, gained individually. The team role stress scale survey is administered to both teaching groups at the beginning and at the end of the course. The inventory of questions covers the factors of ambiguity, conflict, quantitative role overload, and qualitative role overload. Some comparison to the study on project teams can be drawn. The team development stages of the two teaching groups are different. Relating the team role stress factors to the development stage of the group can reveal the potential of management actions to promote team building and help understand the maturity of functional and well-established teams. Mature teams indicate higher job satisfaction and deliver higher performance. In particular, teaching teams, which deliver highly intangible learning outcomes, are sensitive to issues of job satisfaction and team conflict. Because team teaching is increasing, the paper provides a review of the relevant theories and initial comparative and longitudinal results of the team role stress factors applied to teaching teams. Keywords: engineering education, stress, team role, team teaching
Procedia PDF Downloads 229
7690 Performance Comparison of Wideband Covariance Matrix Sparse Representation (W-CMSR) with Other Wideband DOA Estimation Methods
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
In this paper, a performance comparison of the wideband covariance matrix sparse representation (W-CMSR) method with other existing wideband Direction of Arrival (DOA) estimation methods has been made. W-CMSR relies less on a priori information about the number of incident signals than the ordinary subspace-based methods. Consider the perturbation-free covariance matrix of the wideband array output. The diagonal covariance elements are contaminated by the unknown noise variance. The covariance matrix of the array output is conjugate symmetric, i.e., its upper right triangular elements can be represented by the lower left triangular ones. As the main diagonal elements are contaminated by the unknown noise variance, slide over them and align the lower left triangular elements column by column to obtain a measurement vector. Simulation results for W-CMSR are compared with simulation results of other wideband DOA estimation methods such as the coherent signal subspace method (CSSM), Capon, l1-SVD, and JLZA-DOA. W-CMSR separates two signals very clearly, whereas CSSM, Capon, l1-SVD, and JLZA-DOA fail to separate the two signals clearly, and a number of pseudo peaks exist in the spectrum of l1-SVD. Keywords: W-CMSR, wideband direction of arrival (DOA), covariance matrix, electrical and computer engineering
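As a rough illustration of the measurement-vector construction described above, this sketch builds a sample covariance matrix for a simulated 6-element array, skips the noise-contaminated main diagonal, and stacks the strictly lower-triangular elements column by column; the simulated data are placeholders, not the authors' scenario:
```python
import numpy as np

rng = np.random.default_rng(1)
M, snapshots = 6, 200
X = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
R = X @ X.conj().T / snapshots                      # Hermitian sample covariance, M x M

# Stack strictly lower-triangular entries column by column (diagonal skipped)
y = np.concatenate([R[k + 1:, k] for k in range(M - 1)])
print(y.shape)                                      # (M*(M-1)/2,) = (15,) measurement vector
```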
Procedia PDF Downloads 473
7689 A Construction Management Tool: Determining a Project Schedule Typical Behaviors Using Cluster Analysis
Authors: Natalia Rudeli, Elisabeth Viles, Adrian Santilli
Abstract:
Delays in the construction industry are a global phenomenon. Many construction projects experience extensive delays exceeding the initially estimated completion time. The main purpose of this study is to identify construction projects typical behaviors in order to develop a prognosis and management tool. Being able to know a construction projects schedule tendency will enable evidence-based decision-making to allow resolutions to be made before delays occur. This study presents an innovative approach that uses Cluster Analysis Method to support predictions during Earned Value Analyses. A clustering analysis was used to predict future scheduling, Earned Value Management (EVM), and Earned Schedule (ES) principal Indexes behaviors in construction projects. The analysis was made using a database with 90 different construction projects. It was validated with additional data extracted from literature and with another 15 contrasting projects. For all projects, planned and executed schedules were collected and the EVM and ES principal indexes were calculated. A complete linkage classification method was used. In this way, the cluster analysis made considers that the distance (or similarity) between two clusters must be measured by its most disparate elements, i.e. that the distance is given by the maximum span among its components. Finally, through the use of EVM and ES Indexes and Tukey and Fisher Pairwise Comparisons, the statistical dissimilarity was verified and four clusters were obtained. It can be said that construction projects show an average delay of 35% of its planned completion time. Furthermore, four typical behaviors were found and for each of the obtained clusters, the interim milestones and the necessary rhythms of construction were identified. In general, detected typical behaviors are: (1) Projects that perform a 5% of work advance in the first two tenths and maintain a constant rhythm until completion (greater than 10% for each remaining tenth), being able to finish on the initially estimated time. (2) Projects that start with an adequate construction rate but suffer minor delays culminating with a total delay of almost 27% of the planned time. (3) Projects which start with a performance below the planned rate and end up with an average delay of 64%, and (4) projects that begin with a poor performance, suffer great delays and end up with an average delay of a 120% of the planned completion time. The obtained clusters compose a tool to identify the behavior of new construction projects by comparing their current work performance to the validated database, thus allowing the correction of initial estimations towards more accurate completion schedules.Keywords: cluster analysis, construction management, earned value, schedule
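As a hedged illustration of the grouping step described above, the sketch below applies hierarchical clustering with complete linkage to synthetic progress curves (cumulative fraction complete at each tenth of the planned duration) and cuts the tree into four clusters; the curves are stand-ins for the 90-project database:
```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
tenths = np.linspace(0.1, 1.0, 10)
speeds = rng.uniform(0.6, 1.4, size=90)                       # assumed construction rhythms
curves = np.clip(np.outer(speeds, tenths) + rng.normal(0, 0.03, (90, 10)), 0, 1.3)

Z = linkage(curves, method="complete")    # complete linkage: distance of most disparate elements
labels = fcluster(Z, t=4, criterion="maxclust")
for c in range(1, 5):
    print(f"cluster {c}: {np.sum(labels == c)} projects")
```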
Procedia PDF Downloads 269
7688 Information Technology Capabilities and Organizational Performance: Mediating Role of Strategic Benefits of IT: A Comparison between China and Pakistan
Authors: Rehan Ullah
Abstract:
The primary purpose of the study is to observe the relationship that exists between the organizational information technology (IT) capabilities and the organizational performance in China and Pakistan. Nations like China and Pakistan utilize modern techno-how to enhance their production endeavors. Therefore, making a wide-ranging comparison of the manufacturing services between China and Pakistan was chosen due to numerous reasons. One reason for carrying out this comparison is to determine how IT of the two countries enhances organizational competency on small and medium-sized manufacturing enterprises (SMEs). The study hypothesized that organizational IT capabilities (IT infrastructure, IT competence) have a positive influence on organizational performance and the strategic benefits of IT have a mediating effect on the relationship between IT capability and organizational performance. To investigate the relationship between IT capabilities and organizational performance, surveys were sent to managers of small, medium-sized manufacturing organizations located in the southwestern region, Sichuan province of China, and Pakistani companies, which are located in Islamabad, Lahore, and Karachi. These cities were selected as typical representatives of each country. Organizational performance has been measured in terms of profitability, organizational success, growth, market share, and innovativeness. Out of 400 surveys distributed to different manufacturing organizations, 303 usable and valid responses were received that are analyzed in this research. The data were examined using SPSS and Smart PLS computer software. The results of the study, including the descriptive statistics of each variable, are used. The outer model has been measured with considerations to content validity, discriminant validity, and convergent validity. The path coefficients among the constructs were also computed when analyzing the structural model using the bootstrapping technique. The analysis of data from both China and Pakistan yields an identical but unique result. The results show that IT infrastructure, IT competence, strategic benefits of IT are all correlated to the performance of the organizations. Moreover, strategic benefits of IT have been proved to mediate the relationship between IT capabilities and organization performance. The author, concerning the role of IT on the performance of an organization, highlights the different aspects as well as its benefits in an organization. The overall study concludes several implications for both managers and academicians. It also provides the limitations of the study and offers recommendations for future studies and practice.Keywords: organizational performance, IT capabilities, IT infrastructure, IT competence, strategic benefits of IT, China, Pakistan
Procedia PDF Downloads 97
7687 Intensive Intercultural English Language for Enhanced School Community Engagement: An Exploratory Study Applied to Parents from Language Backgrounds Other Than English in a Regional Australian Primary School
Authors: Ann Dashwood
Abstract:
Using standard Australian English with confidence is a cultural expectation of parents of primary school aged children who want to engage effectively with their children’s teachers and school administration. That confidence in support of their children’s learning at school is seldom experienced by parents whose first language is not English. Sharing language with competence in an intercultural environment is the common denominator for meaningful communication and engagement to occur in a school community. Experience in relevant interactive sessions is known to enhance engagement and participation. The purpose of this paper is to identify interactional settings for which parents who are isolated from the daily use of functional Australian cultural language learned to engage more effectively in their children’s learning at school. The outcomes measured parents’ intercultural engagement with classroom teachers and attention to the school’s administrative procedures. The study used quantitative and qualitative methods. The principles of communicative task-based language learning combined with intercultural communication principles provided the theoretical base for intensive English task-based learning and engagement. The quantitative analysis examined data samples collected by classroom teachers and administrators and parents’ writing samples. Interviews and observations qualitatively informed the study. Currently significant numbers of projects are active in community centres and schools to enhance English language knowledge of parents from Language Backgrounds Other Than English (LBOTE). The study was significant to explore the effects of conducting intensive English with parents of varied English language backgrounds by targeting language use for social interactions in the community, specific engagement in school activities, cultural interaction with teachers and responsiveness to complying with school procedures.Keywords: engagement, intercultural communication, LBOTE, school community
Procedia PDF Downloads 111
7686 Transient and Persistent Efficiency Estimation for Electric Grid Utilities Based on Meta-Frontier: Comparative Analysis of China and Japan
Authors: Bai-Chen Xie, Biao Li
Abstract:
With the deepening of international exchanges and investment, the international comparison of power grid firms has become the focus of regulatory authorities. Ignoring the differences in the economic environment, resource endowment, technology, and other aspects of different countries or regions may lead to efficiency bias. Based on the Meta-frontier model, this paper divides China and Japan into two groups by using the data of China and Japan from 2006 to 2020. While preserving the differences between the two countries, it analyzes and compares the efficiency of the transmission and distribution industries of the two countries. Combined with the four-component stochastic frontier model, the efficiency is divided into transient and persistent efficiency. We found that there are obvious differences between the transmission and distribution sectors in China and Japan. On the one hand, the inefficiency of the two countries is mostly caused by long-term and structural problems. The key to improve the efficiency of the two countries is to focus more on solving long-term and structural problems. On the other hand, the long-term and structural problems that cause the inefficiency of the two countries are not the same. Quality factors have different effects on the efficiency of the two countries, and this different effect is captured by the common frontier model but is offset in the overall model. Based on these findings, this paper proposes some targeted policy recommendations.Keywords: transmission and distribution industries, transient efficiency, persistent efficiency, meta-frontier, international comparison
Procedia PDF Downloads 106
7685 Implementation Status of Industrial Training for Production Engineering Technology Diploma in Universiti Kuala Lumpur Malaysian Spanish Institute (UniKL MSI)
Authors: M. Sazali Said, Rahim Jamian, Shahrizan Yusoff, Shahruzaman Sulaiman, Jum'Azulhisham Abdul Shukor
Abstract:
This case study focuses on the role of Universiti Kuala Lumpur Malaysian Spanish Institute (UniKL MSI) in producing technologists in order to reduce the shortage of skilled workers, especially in the automotive industry. The study therefore seeks to examine the effectiveness of the Technical Education and Vocational Training (TEVT) curriculum of UniKL MSI in producing graduates who can immediately be productively employed by the automotive industry. The approach used in this study is a performance evaluation of students attending the Industrial Training Attachment (INTRA). The sample of the study comprises 37 students, 16 university supervisors, and 26 industrial supervisors. The research methodology involves the use of quantitative and qualitative methods of data collection through a triangulation approach. The quantitative data were gathered from the students, university supervisors, and industrial supervisors through a questionnaire, while the qualitative data were obtained from the students and university supervisors through interviews and observation. Both types of data were processed and analyzed in order to summarize the results in terms of frequency and percentage using a computerized spreadsheet. The results show that the industrial supervisors were satisfied with the students’ performance, while the university supervisors rated the UniKL MSI curriculum as moderately effective in producing graduates with appropriate skills and in meeting industrial needs. During the period of the study, several weaknesses in the curriculum were identified for further continuous improvement. Recommendations and suggestions for curriculum improvement include the enhancement of the technical skills and competences of students towards fulfilling the needs and demands of the automotive industries. Keywords: technical education and vocational training (TEVT), industrial training attachment (INTRA), curriculum improvement, automotive industry
Procedia PDF Downloads 370
7684 Emotion Expression of the Leader and Collective Efficacy: Pride and Guilt
Authors: Hsiu-Tsu Cho
Abstract:
Collective efficacy refers to a group’s sense of its capacity to complete a task successfully or to reach objectives. Little effort has been expended on investigating the relationship between the emotion expression of a leader and collective efficacy. In this study, we examined the impact of the different emotions and emotion expression of a group leader on collective efficacy and explored whether the emotion–expressive effects differed under conditions of negative and positive emotions. A total of 240 undergraduate and graduate students recruited using Facebook and posters at a university participated in this research. The participants were separated randomly into 80 groups of four persons consisting of three participants and a confederate. They were randomly assigned to one of five conditions in a 2 (pride vs. guilt) × 2 (emotion expression of group leader vs. no emotion expression of group leader) factorial design and a control condition. Each four-person group was instructed to get the reward in a group competition of solving the five-disk Tower of Hanoi puzzle and making decisions on an investment case. We surveyed the participants by employing the emotional measure revised from previous researchers and collective efficacy questionnaire on a 5-point scale. To induce an emotion of pride (or guilt), the experimenter announced whether the group performance was good enough to have a chance of getting the reward (ranking the top or bottom 20% among all groups) after group task. The leader (confederate) could either express or not express a feeling of pride (or guilt) following the instruction according to the assigned condition. To check manipulation of emotion, we added a control condition under which the experimenter revealed no results regarding group performance in maintaining a neutral emotion. One-way ANOVAs and post hoc pairwise comparisons among the three emotion conditions (pride, guilt, and control condition) involved assigning pride and guilt scores (pride: F(1,75) = 32.41, p < .001; guilt: F(1,75) = 6.75, p < .05). The results indicated that manipulations of emotion were successful. A two-way between-measures ANOVA was conducted to examine the predictions of the main effects of emotion types and emotion expression as well as the interaction effect of these two variables on collective efficacy. The experimental findings suggest that pride did not affect collective efficacy (F(1,60) = 1.90, ns.) more than guilt did and that the group leader did not motivate collective efficacy regardless of whether he or she expressed emotion (F(1,60) = .89, ns.). However, the interaction effect of emotion types and emotion expression was statistically significant (F(1,60) = 4.27, p < .05, ω2 = .066); the effects accounted for 6.6% of the variance. Additional results revealed that, under the pride condition, the leader enhanced group efficacy when expressing emotion, whereas, under the guilt condition, an expression of emotion could reduce collective efficacy. Overall, these findings challenge the assumption that the effect of expression emotion are the same on all emotions and suggest that a leader should be cautious when expressing negative emotions toward a group to avoid reducing group effectiveness.Keywords: collective efficacy, group leader, emotion expression, pride, guilty
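As an editorial sketch of the 2 x 2 between-subjects analysis described above, the snippet below runs a two-way ANOVA of collective efficacy on emotion type (pride/guilt) and leader expression (expressed/not expressed) with simulated scores; the data and the assumed interaction pattern are placeholders, not the study's measurements:
```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(7)
rows = []
for emotion in ("pride", "guilt"):
    for expressed in ("yes", "no"):
        if emotion == "pride" and expressed == "yes":
            shift = 0.4        # assumed: expression helps under pride
        elif emotion == "guilt" and expressed == "yes":
            shift = -0.4       # assumed: expression hurts under guilt
        else:
            shift = 0.0
        for _ in range(16):
            rows.append({"emotion": emotion, "expressed": expressed,
                         "efficacy": 3.5 + shift + rng.normal(0, 0.6)})
df = pd.DataFrame(rows)

model = ols("efficacy ~ C(emotion) * C(expressed)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # main effects and the interaction term
```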
Procedia PDF Downloads 333
7683 Spatial Variation of WRF Model Rainfall Prediction over Uganda
Authors: Isaac Mugume, Charles Basalirwa, Daniel Waiswa, Triphonia Ngailo
Abstract:
Rainfall is a major climatic parameter affecting many sectors such as health, agriculture, and water resources. Its quantitative prediction remains a challenge to weather forecasters, although numerical weather prediction models are increasingly being used for rainfall prediction. The performance of six convective parameterization schemes, namely the Kain-Fritsch scheme, the Betts-Miller-Janjic scheme, the Grell-Devenyi scheme, the Grell-3D scheme, the Grell-Freitas scheme, and the New Tiedtke scheme of the Weather Research and Forecasting (WRF) model, regarding quantitative rainfall prediction over Uganda is investigated using the root mean square error for the March-May (MAM) 2013 season. The MAM 2013 seasonal rainfall amount ranged from 200 mm to 900 mm over Uganda, with the northern region receiving a comparatively lower rainfall amount (200–500 mm); western Uganda (270–550 mm); eastern Uganda (400–900 mm); and the Lake Victoria basin (400–650 mm). A spatial variation in the rainfall amounts simulated by the different convective parameterization schemes was noted, with the Kain-Fritsch scheme overestimating the rainfall amount over northern Uganda (300–750 mm) but presenting comparable rainfall amounts over eastern Uganda (400–900 mm). The Betts-Miller-Janjic, Grell-Devenyi, and Grell-3D schemes underestimated the rainfall amount over most parts of the country, especially the eastern region (300–600 mm). The Grell-Freitas scheme captured the rainfall amount over the northern region (250–450 mm) but underestimated rainfall over the Lake Victoria basin (150–300 mm), while the New Tiedtke scheme generally underestimated the rainfall amount over many areas of Uganda. For deterministic rainfall prediction, the Grell-Freitas scheme is recommended for rainfall prediction over northern Uganda, while the Kain-Fritsch scheme is recommended over the eastern region. Keywords: convective parameterization schemes, March-May 2013 rainfall season, spatial variation of parameterization schemes over Uganda, WRF model
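As a small illustration of the evaluation metric named above, the snippet below computes the root mean square error between observed seasonal rainfall and each scheme's simulated totals at a handful of stations; the values are invented for illustration:
```python
import numpy as np

observed = np.array([420.0, 610.0, 350.0, 540.0, 780.0])                 # mm, MAM season (assumed)
simulated = {
    "Kain-Fritsch": np.array([500.0, 640.0, 430.0, 600.0, 820.0]),       # assumed model output
    "Betts-Miller-Janjic": np.array([360.0, 520.0, 300.0, 470.0, 650.0]),
}

for scheme, values in simulated.items():
    rmse = np.sqrt(np.mean((values - observed) ** 2))
    print(f"{scheme}: RMSE = {rmse:.1f} mm")
```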
Procedia PDF Downloads 316
7682 Delving into Market-Driving Behavior: A Conceptual Roadmap to Delineating Its Key Antecedents and Outcomes
Authors: Konstantinos Kottikas, Vlasis Stathakopoulos, Ioannis G. Theodorakis, Efthymia Kottika
Abstract:
Theorists have argued that Market Orientation is comprised of two facets, namely the Market Driven and the Market Driving components. The present theoretical paper centers on the latter, which to date has been notably under-investigated. The term Market Driving (MD) pertains to influencing the structure of the market, or the behavior of market players in a direction that enhances the competitive edge of the firm. Presently, the main objectives of the paper are the specification of key antecedents and outcomes of Market Driving behavior. Market Driving firms behave proactively, by leading their customers and changing the rules of the game rather than by responding passively to them. Leading scholars were the first to conceptually conceive the notion, followed by some qualitative studies and a limited number of quantitative publications. However, recently, academicians noted that research on the topic remains limited, expressing a strong necessity for further insights. Concerning the key antecedents, top management’s Transformational Leadership (i.e. the form of leadership which influences organizational members by aligning their values, goals and aspirations to facilitate value-consistent behaviors) is one of the key drivers of MD behavior. Moreover, scholars have linked the MD concept with Entrepreneurship. Finally, the role that Employee’s Creativity plays in the development of MD behavior has been theoretically exemplified by a stream of literature. With respect to the key outcomes, it has been demonstrated that MD Behavior positively triggers firm Performance, while theorists argue that it empowers the Competitive Advantage of the firm. Likewise, researchers explicate that MD Behavior produces Radical Innovation. In order to test the robustness of the proposed theoretical framework, a combination of qualitative and quantitative methods is proposed. In particular, the conduction of in-depth interviews with distinguished executives and academicians, accompanied with a large scale quantitative survey will be employed, in order to triangulate the empirical findings. Given that it triggers overall firm’s success, the MD concept is of high importance to managers. Managers can become aware that passively reacting to market conditions is no longer sufficient. On the contrary, behaving proactively, leading the market, and shaping its status quo are new innovative approaches that lead to a paramount competitive posture and Innovation outcomes. This study also exemplifies that managers can foster MD Behavior through Transformational Leadership, Entrepreneurship and recruitment of Creative Employees. To date, the majority of the publications on Market Orientation is unilaterally directed towards the responsive (i.e. the Market Driven) component. The present paper further builds on scholars’ exhortations, and investigates the Market Driving facet, ultimately aspiring to conceptually integrate the somehow fragmented scientific findings, in a holistic framework.Keywords: entrepreneurial orientation, market driving behavior, market orientation
Procedia PDF Downloads 387
7681 Pharmacokinetic Modeling of Valsartan in Dog following a Single Oral Administration
Authors: In-Hwan Baek
Abstract:
Valsartan is a potent and highly selective antagonist of the angiotensin II type 1 receptor and is widely used for the treatment of hypertension. The aim of this study was to investigate the pharmacokinetic properties of valsartan in dogs following oral administration of a single dose using quantitative modeling approaches. Forty beagle dogs were randomly divided into two groups. Group A (n=20) was administered a single oral dose of valsartan 80 mg (Diovan® 80 mg), and group B (n=20) was administered a single oral dose of valsartan 160 mg (Diovan® 160 mg) in the morning after an overnight fast. Blood samples were collected into heparinized tubes before and at 0.5, 1, 1.5, 2, 2.5, 3, 4, 6, 8, 12, and 24 h following oral administration. The plasma concentrations of valsartan were determined using LC-MS/MS. Non-compartmental pharmacokinetic analyses were performed using WinNonlin Standard Edition software, and modeling approaches were performed using maximum-likelihood estimation via the expectation maximization (MLEM) algorithm with sampling using ADAPT 5 software. After a single dose of valsartan 80 mg, the mean value of the maximum concentration (Cmax) was 2.68 ± 1.17 μg/mL at 1.83 ± 1.27 h. The area under the plasma concentration-versus-time curve from time zero to the last measurable concentration (AUC24h) was 13.21 ± 6.88 μg·h/mL. After dosing with valsartan 160 mg, the mean Cmax was 4.13 ± 1.49 μg/mL at 1.80 ± 1.53 h, and the AUC24h was 26.02 ± 12.07 μg·h/mL. The Cmax and AUC values increased in proportion to the increment in valsartan dose, while the pharmacokinetic parameters of elimination rate constant, half-life, apparent total clearance, and apparent volume of distribution were not significantly different between the doses. Valsartan pharmacokinetics fit a one-compartment model with first-order absorption and elimination following a single dose of valsartan 80 mg and 160 mg. In addition, high inter-individual variability was identified in the absorption rate constant. In conclusion, valsartan displays dose-dependent pharmacokinetics in dogs, and subsequent quantitative modeling approaches provided detailed pharmacokinetic information on valsartan. The current findings provide useful information in dogs that will aid the future development of improved formulations or fixed-dose combinations. Keywords: dose-dependent, modeling, pharmacokinetics, valsartan
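As a hedged illustration of the model named above, this sketch evaluates the one-compartment oral model with first-order absorption and elimination, C(t) = D·ka / (V/F·(ka − ke)) · (e^(−ke·t) − e^(−ka·t)), with illustrative parameter values rather than the fitted dog estimates:
```python
import numpy as np

def one_compartment_oral(t, dose, ka, ke, v_f):
    """Plasma concentration for first-order absorption/elimination; v_f is V/F."""
    return dose * ka / (v_f * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0.0, 24.0, 97)                                        # hours
c = one_compartment_oral(t, dose=80.0, ka=1.2, ke=0.25, v_f=25.0)     # mg, 1/h, 1/h, L (assumed)

auc_0_24 = np.sum((c[1:] + c[:-1]) * np.diff(t)) / 2                  # trapezoidal rule
print(f"Cmax ≈ {c.max():.2f} µg/mL at t ≈ {t[c.argmax()]:.1f} h")     # mg/L equals µg/mL
print(f"AUC(0-24 h) ≈ {auc_0_24:.1f} µg·h/mL")
```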
Procedia PDF Downloads 301
7680 Skin-Dose Mapping for Patients Undergoing Interventional Radiology Procedures: Clinical Experimentations versus a Mathematical Model
Authors: Aya Al Masri, Stefaan Carpentier, Fabrice Leroy, Thibault Julien, Safoin Aktaou, Malorie Martin, Fouad Maaloul
Abstract:
Introduction: During an 'Interventional Radiology (IR)' procedure, the patient's skin dose may become high enough for burns, necrosis, and ulceration to appear. In order to prevent these deterministic effects, an accurate calculation of the patient's skin-dose mapping is essential. For most machines, the 'Dose Area Product (DAP)' and fluoroscopy time are the only information available to the operator, and these two parameters are very poor indicators of the peak skin dose. We developed a mathematical model that reconstructs the magnitude (delivered dose), shape, and localization of each irradiation field on the patient's skin. In case a critical dose is exceeded, the system generates warning alerts. We present the results of its comparison with clinical studies. Materials and methods: Two series of comparisons between the skin-dose mapping of our mathematical model and clinical studies were performed. 1. First, clinical tests were performed on patient phantoms. Gafchromic films were placed on the table of the IR machine under PMMA plates (thickness = 20 cm) that simulate the patient. After irradiation, the film darkening is proportional to the radiation dose received by the patient's back and reflects the shape of the X-ray field. After film scanning and analysis, the exact dose value can be obtained at each point of the mapping. Four experiments were performed, constituting a total of 34 acquisition incidences covering all possible exposure configurations. 2. Second, clinical trials were conducted on real patients during 'Chronic Total Occlusion (CTO)' procedures, for a total of 80 cases. Gafchromic films were placed at the back of the patients. We compared the dose values, as well as the distribution and shape of the irradiation fields, between the skin-dose mapping of our mathematical model and the Gafchromic films. Results: The comparison of dose values shows a difference of less than 15%. Moreover, our model shows very good geometric accuracy: all fields have the same shape, size, and location (uncertainty < 5%). Conclusion: This study shows that our model is a reliable tool to warn physicians when a high radiation dose is reached. Thus, deterministic effects can be avoided.Keywords: clinical experimentation, interventional radiology, mathematical model, patient's skin-dose mapping.
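The abstract does not give the model's equations, so the following is only a hypothetical sketch of how an entrance-skin-dose map might be accumulated per irradiation event from the DAP, the field area at the skin plane, and the field position; the square-field assumption and the backscatter factor of 1.3 are illustrative assumptions, not the authors' model.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Exposure:
    dap_gy_cm2: float      # dose-area product reported by the machine (Gy*cm^2)
    field_area_cm2: float  # X-ray field area at the skin plane (cm^2)
    x_cm: float            # field centre on the skin map (cm)
    y_cm: float

def add_exposure(dose_map, exposure, pixel_cm=0.5, backscatter=1.3):
    """Accumulate one irradiation event onto a 2D entrance-skin-dose map (Gy).
    DAP is approximately invariant with distance, so incident air kerma at the
    skin is DAP divided by the field area at the skin plane; the backscatter
    factor converts incident air kerma to entrance skin dose."""
    skin_dose = exposure.dap_gy_cm2 / exposure.field_area_cm2 * backscatter
    half_side = np.sqrt(exposure.field_area_cm2) / 2.0  # assume a square field
    ny, nx = dose_map.shape
    y, x = np.mgrid[0:ny, 0:nx] * pixel_cm
    inside = (np.abs(x - exposure.x_cm) <= half_side) & (np.abs(y - exposure.y_cm) <= half_side)
    dose_map[inside] += skin_dose
    return dose_map

# Usage: a 40 cm x 40 cm skin map at 0.5 cm resolution and a single exposure.
dose_map = np.zeros((80, 80))
add_exposure(dose_map, Exposure(dap_gy_cm2=5.0, field_area_cm2=100.0, x_cm=20.0, y_cm=20.0))
print(f"Peak skin dose so far: {dose_map.max():.3f} Gy")  # 5/100 * 1.3 = 0.065 Gy
```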
Procedia PDF Downloads 146
7679 A Mixed Methods Study: Evaluation of Experiential Learning Techniques throughout a Nursing Curriculum to Promote Empathy
Authors: Joan Esper Kuhnly, Jess Holden, Lynn Shelley, Nicole Kuhnly
Abstract:
Empathy is a foundational nursing principle, inherent in the nurse's ability to form the relationships through which patients are cared for. Evidence supports including empathy in nursing and healthcare education, but there are limited data on which methods are effective for doing so. A growing body of evidence supports experiential and interactive learning methods as effective ways for students to gain insight and perspective from a personalized experience. The purpose of this project is to evaluate learning activities designed to promote the attainment of empathic behaviors across five levels of the nursing curriculum. Quantitative analysis will be conducted on data from pre- and post-learning activities using the Toronto Empathy Questionnaire. The main hypothesis, that simulation learning activities will increase empathy, will be examined using a repeated-measures Analysis of Variance (ANOVA) on pre- and post-test Toronto Empathy Questionnaire scores for three simulation activities (Stroke, Poverty, Dementia). Pearson product-moment correlations will be conducted to examine the relationships between continuous demographic variables, such as age, credits earned, and years practicing, and the dependent variable of interest, post-test Toronto Empathy scores. Krippendorff's method of content analysis will be conducted to quantify the incidence of empathic responses. The researchers will use Colaizzi's descriptive phenomenological method to describe the students' simulation experience and understand its impact on caring and empathy behaviors, employing bracketing to maintain objectivity. The results will be presented in answer to multiple research questions. The discussion will relate the results to educational pedagogy in the nursing curriculum as it pertains to the attainment of empathic behaviors.Keywords: curriculum, empathy, nursing, simulation
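A minimal sketch of the planned pre/post analysis, on simulated long-format data with hypothetical column names; it uses statsmodels' repeated-measures ANOVA and SciPy's Pearson correlation, and is not the authors' analysis script.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Simulated long-format data: each student has a Toronto Empathy Questionnaire (TEQ)
# score before and after each of the three simulations (column names are hypothetical).
rows = []
for student in range(30):
    for sim in ["Stroke", "Poverty", "Dementia"]:
        pre = rng.normal(45, 6)
        rows.append({"student": student, "simulation": sim, "time": "pre", "teq": pre})
        rows.append({"student": student, "simulation": sim, "time": "post",
                     "teq": pre + rng.normal(3, 4)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: does TEQ change from pre to post, and does the change
# differ across the three simulation activities?
print(AnovaRM(df, depvar="teq", subject="student", within=["time", "simulation"]).fit())

# Pearson correlation between a continuous demographic variable (here, simulated ages)
# and post-test TEQ scores for one simulation.
post = df[(df["time"] == "post") & (df["simulation"] == "Stroke")]
age = rng.normal(28, 7, size=len(post))
r, p = pearsonr(age, post["teq"])
print(f"age vs post-test TEQ: r = {r:.2f}, p = {p:.3f}")
```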
Procedia PDF Downloads 116
7678 Data Transformations in Data Envelopment Analysis
Authors: Mansour Mohammadpour
Abstract:
Data transformation refers to the modification of each point in a data set by a mathematical function. When transformations are applied, the measurement scale of the data is modified. Data transformations are commonly employed to put data into an appropriate form, which can serve various purposes in the quantitative analysis of the data. This study investigates the use of data transformations in Data Envelopment Analysis (DEA). Although data transformations are important options for analysis, they fundamentally alter the nature of the variable, making the interpretation of the results somewhat more complex.Keywords: data transformation, data envelopment analysis, undesirable data, negative data
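As a concrete illustration of the kinds of transformation discussed, the sketch below shows two standard options from the DEA literature: a translation that shifts a variable containing negative values so all entries become positive, and the reflection-style transformation often applied to undesirable outputs. Function names and the margin constant are illustrative; note that a translated variable is only interpretable under DEA models that are translation invariant in it.

```python
import numpy as np

def translate_nonpositive(values, margin=1.0):
    """Shift a variable containing negative or zero entries so that all entries become
    strictly positive: x -> x + |min(x)| + margin. Interpretable only under DEA models
    that are translation invariant in this variable (e.g., the additive model)."""
    values = np.asarray(values, dtype=float)
    shift = max(0.0, -values.min()) + margin
    return values + shift, shift

def reflect_undesirable(values, margin=1.0):
    """Turn an undesirable output (e.g., pollution) into a 'more is better' variable:
    x -> max(x) - x + margin."""
    values = np.asarray(values, dtype=float)
    return values.max() - values + margin

# Example: an output that can be negative (profit/loss) and an undesirable output (waste)
# across five decision-making units.
profit = [12.0, -3.5, 0.0, 7.2, -1.1]
waste = [4.0, 9.5, 2.1, 6.3, 3.0]
shifted_profit, shift = translate_nonpositive(profit)
good_waste = reflect_undesirable(waste)
print(f"shift = {shift}, shifted profit = {shifted_profit}")
print(f"reflected waste = {good_waste}")
```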
Procedia PDF Downloads 29
7677 Gender Differences in Walking Capacity and Cardiovascular Regulation in Patients with Peripheral Arterial Disease
Authors: Gabriel Cucato, Marilia Correia, Wagner Domingues, Aline Palmeira, Paulo Longano, Nelson Wolosker, Raphael Ritti-Dias
Abstract:
Women with peripheral arterial disease (PAD) present lower walking capacity than men. However, whether cardiovascular regulation also differs between sexes is unknown. Thus, the aim of this study was to compare walking capacity and cardiovascular regulation between men and women with PAD. A total of 23 women (66±7 yrs) and 31 men (64±9 yrs) were recruited. Patients performed a 6-minute walk test, and the onset claudication distance and total walking distance were measured. Additionally, cardiovascular regulation was assessed by arterial stiffness (pulse wave velocity and augmentation index) and heart rate variability (frequency domain). Independent t-tests or Mann-Whitney U tests were performed. In comparison with men, women presented lower onset claudication distance (108±66 m vs. 143±50 m; P=0.032) and total walking distance (286±83 m vs. 361±91 m, P=0.007). Regarding cardiovascular regulation, there were no differences in the heart rate variability indices SDNN (72±160 ms vs. 32±22 ms, P=0.587), RMSSD (75±209 ms vs. 25±22 ms, P=0.726), and pNN50 (11±17 ms vs. 8±14 ms, P=0.836) between women and men, respectively. Moreover, there were no differences in augmentation index (39±10% vs. 34±11%, P=0.103), pulse pressure (59±17 mmHg vs. 56±19 mmHg, P=0.593), or pulse wave velocity (8.6±2.6 m/s vs. 9.0±2.7 m/s, P=0.580). In conclusion, women have impaired walking capacity compared to men; however, sex differences were not observed in cardiovascular regulation in patients with PAD.Keywords: exercise, intermittent claudication, cardiovascular load, arterial stiffness
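A minimal sketch of the between-group comparison reported above, on simulated data whose means and standard deviations loosely mirror the total walking distances in the abstract; the normality check used to choose between the independent t-test and the Mann-Whitney U test is an assumption, since the abstract does not state how the choice was made.

```python
import numpy as np
from scipy.stats import shapiro, ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)

# Simulated total walking distances (m) in the 6-minute walk test; the means and SDs
# loosely mirror the values reported in the abstract (286±83 m vs. 361±91 m).
women = rng.normal(286, 83, size=23)
men = rng.normal(361, 91, size=31)

# Choose the test according to a normality check on each group (an assumed criterion).
if shapiro(women).pvalue > 0.05 and shapiro(men).pvalue > 0.05:
    stat, p = ttest_ind(women, men)     # independent-samples t-test
    test = "independent t-test"
else:
    stat, p = mannwhitneyu(women, men)  # non-parametric alternative
    test = "Mann-Whitney U test"
print(f"{test}: statistic = {stat:.2f}, p = {p:.4f}")
```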
Procedia PDF Downloads 395
7676 Using Industrial Service Quality to Assess Service Quality Perception in Television Advertisement: A Case Study
Authors: Ana L. Martins, Rita S. Saraiva, João C. Ferreira
Abstract:
Much effort has been placed on the assessment of perceived service quality. Several models can be found in the literature, but these are mainly focused on business-to-consumer (B2C) relationships. Literature on how to assess perceived quality in business-to-business (B2B) contexts is scarce, both conceptually and in terms of application. This research aims at filling this gap in the literature by applying INDSERV to a case study. Under this scope, the research analyzes the adequacy of the proposed assessment tool to contexts other than the one in which it was developed and, by doing so, analyzes the perceived quality of the advertisement service provided by a specific television network to its B2B customers. The INDSERV scale was adopted and applied to a sample of 33 clients via questionnaires adapted to interviews. Data were collected in person or by phone. Both quantitative and qualitative data collection were performed. Qualitative data analysis followed a content analysis protocol; quantitative analysis used hypothesis testing. The findings allowed the conclusion that the perceived quality of the television service provided by the television network is very positive, with Soft Process Quality being the parameter that reveals the highest perceived quality of the service, as opposed to Potential Quality. To this end, some comments and suggestions were made by the clients regarding each of these service quality parameters. Based on the hypothesis testing, it was noticed that only advertisement clients that have maintained a connection to the television network for 5 to 10 years show a significantly different perception of the TV advertisement service provided by the company as far as the Hard Process Quality parameter is concerned. Through content analysis of the collected data, it was possible to obtain the percentage of clients who share the same opinions and suggestions for improvement. Finally, based on one of the four service quality parameters in a B2B context, managerial suggestions were developed aiming at improving the perceived quality of the television network's advertisement service.Keywords: B2B, case study, INDSERV, perceived service quality
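As a sketch of the kind of hypothesis test behind the finding on relationship length, the snippet below compares hypothetical Hard Process Quality scores of clients with a 5-to-10-year connection to the network against the remaining clients; the group sizes, scores, and use of Welch's t-test are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

# Hypothetical mean Hard Process Quality scores (1-7 Likert scale) for clients grouped
# by the length of their relationship with the television network.
hpq_5_to_10_years = rng.normal(5.8, 0.6, size=12)
hpq_other_clients = rng.normal(5.1, 0.7, size=21)

# Welch's t-test (unequal variances) comparing the two client groups.
stat, p = ttest_ind(hpq_5_to_10_years, hpq_other_clients, equal_var=False)
print(f"t = {stat:.2f}, p = {p:.4f}")
```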
Procedia PDF Downloads 211
7675 Key Factors for Stakeholder Engagement and Sustainable Development
Authors: Jo Rhodes, Bruce Bergstrom, Peter Lok, Vincent Cheng
Abstract:
The aim of this study is to determine key factors and processes for multinationals (MNCs) to develop an effective stakeholder engagement and sustainable development framework. A qualitative multiple-case approach was used, and a triangulation method was adopted (interviews, archival documents, and observations) to collect data on three global firms (MNCs). Nine senior executives were interviewed for this study (three from each firm). An initial literature review was conducted to explore possible practices and factors related to sustainable development (the deductive approach). Interview data were analysed using NVivo to obtain appropriate nodes and themes for the framework. A comparison of the findings from the interview data with the themes and factors developed from the literature review, together with a cross-case comparison, was used to develop the final conceptual framework (the inductive approach). The results suggested that stakeholder engagement is a key mediator between the ‘stakeholder network’ (internal and external factors) and outcomes (corporate social responsibility, social capital, shared value, and sustainable development). Key internal factors such as human capital/talent, technology, culture, and leadership, and processes such as collaboration, knowledge sharing, and co-creation of value with stakeholders, were identified. These internal factors and processes must be integrated and aligned with external factors such as social, political, cultural, and environmental factors and NGOs to achieve effective stakeholder engagement.Keywords: stakeholder, engagement, sustainable development, shared value, corporate social responsibility
Procedia PDF Downloads 518
7674 Challenges and Professional Perspectives for Pedagogy Undergraduates with Specific Learning Disability: A Greek Case Study
Authors: Tatiani D. Mousoura
Abstract:
Specific learning disability (SLD) in higher education has been only partially explored in Greece so far. Moreover, research on professional perspectives for university students with SLD is scarcely encountered in the Greek literature. The present research examines perceptions of the hidden character of SLD, along with university policy towards it and the professional perspectives that result from this policy. This study uses the case of a Greek Tertiary Pedagogical Education Department (Early Childhood Education). Via mixed methods, data have been collected from different groups of people in the Pedagogical Department: students with SLD and without SLD, academic staff, and administration staff, which offers the opportunity for triangulation of the findings. The qualitative methods include ten interviews with students with SLD, 15 interviews with academic staff, and 60 hours of observation of the students with SLD. The quantitative methods include 165 questionnaires completed by third- and fourth-year students and five questionnaires completed by the administration staff. Thematic analyses of the interview data and descriptive statistics on the questionnaire data have been applied to process the results. The use of medical terms to define and understand SLD was common in the student cohort, regardless of whether students had an SLD diagnosis. However, this medical-model approach is far more dominant in the group of students without SLD, the majority of whom hold misconceptions at the definitional level. The academic staff group seems to lean towards a social approach concerning SLD; according to them, diagnoses may lead to social exclusion. The Pedagogical Department generally endorses the principles of inclusion and complies with the provision of oral exams for students with SLD. Nevertheless, in practice, there seems to be a lack of regular academic support for these students; when such support does exist, it is only through individual initiatives. With regard to their prospective profession, students with SLD can utilize their personal experience, as well as their empathy; these appear to be unique assets, in comparison with other educators, when it comes to teaching students in the future. In the Department of Pedagogy, provision for SLD remains sporadic; however, the vision of an inclusive department does exist. Based on their studies and their experience, pedagogy students with SLD claim that they have an experiential, internalized advantage for their future career as educators.Keywords: specific learning disability, SLD, dyslexia, pedagogy department, inclusion, professional role of SLDed educators, higher education, university policy
Procedia PDF Downloads 116