Search results for: performance validity indicators
520 Assessing the Impact of Physical Inactivity on Dialysis Adequacy and Functional Health in Peritoneal Dialysis Patients
Authors: Mohammad Ali Tabibi, Farzad Nazemi, Nasrin Salimian
Abstract:
Background: Peritoneal dialysis (PD) is a prevalent renal replacement therapy for patients with end-stage renal disease. PD patients often experience reduced physical activity and physical function, which can negatively affect dialysis adequacy and overall health outcomes. Although the benefits of maintaining physical activity in chronic disease management are well established, the specific interplay between physical inactivity, physical function, and dialysis adequacy in PD patients remains underexplored. Understanding this relationship is essential for developing targeted interventions to enhance patient care and outcomes in this vulnerable population. This study aims to assess the impact of physical inactivity on dialysis adequacy and functional health in PD patients. Methods: This cross-sectional study included 135 peritoneal dialysis patients from multiple dialysis centers. Physical inactivity was measured using the International Physical Activity Questionnaire (IPAQ), while physical function was assessed using the Short Physical Performance Battery (SPPB). Dialysis adequacy was evaluated using the Kt/V ratio. Additional variables such as demographic data, comorbidities, and laboratory parameters were collected to control for potential confounders. Statistical analyses were performed to determine the relationships between physical inactivity, physical function, and dialysis adequacy. Results: The study cohort comprised 70 males and 65 females with a mean age of 55.4 ± 13.2 years. A significant proportion of the patients (65%) were categorized as physically inactive based on IPAQ scores. Inactive patients demonstrated significantly lower SPPB scores (mean 6.2 ± 2.1) compared to their more active counterparts (mean 8.5 ± 1.8, p < 0.001). Dialysis adequacy, as measured by Kt/V, was found to be suboptimal (Kt/V < 1.7) in 48% of the patients.
There was a significant positive correlation between physical function scores and Kt/V values (r = 0.45, p < 0.01), indicating that better physical function is associated with higher dialysis adequacy, and a significant negative correlation between physical inactivity and physical function (r = -0.55, p < 0.01). Additionally, physically inactive patients had lower Kt/V ratios than their active counterparts (1.3 ± 0.3 vs. 1.8 ± 0.4, p < 0.05). Multivariate regression analysis revealed that physical inactivity was an independent predictor of reduced dialysis adequacy (β = -0.32, p < 0.01) and poorer physical function (β = -0.41, p < 0.01) after adjusting for age, sex, comorbidities, and dialysis vintage. Conclusion: This study underscores the critical role of physical activity and physical function in maintaining adequate dialysis in peritoneal dialysis patients. The findings suggest that interventions aimed at increasing physical activity and improving physical function may enhance dialysis adequacy and overall health outcomes in this population. Future research should focus on developing and evaluating exercise programs tailored for PD patients and on the mechanisms underlying these associations.
Keywords: inactivity, physical function, peritoneal dialysis, dialysis adequacy
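The correlation analyses reported above can be illustrated with a minimal sketch. The helper below computes a Pearson correlation coefficient from scratch; the five-patient SPPB/Kt/V sample is purely hypothetical and is not the study's data.

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical illustration: SPPB scores vs. Kt/V in five patients
sppb = [4, 6, 7, 9, 10]
ktv = [1.1, 1.3, 1.5, 1.7, 1.9]
print(round(pearson_r(sppb, ktv), 3))   # prints 0.993
```

Applied to the study's actual measurements, the same computation would yield the reported r = 0.45 (function vs. Kt/V) and r = -0.55 (inactivity vs. function).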
519 An Automated Magnetic Dispersive Solid-Phase Extraction Method for Detection of Cocaine in Human Urine
Authors: Feiyu Yang, Chunfang Ni, Rong Wang, Yun Zou, Wenbin Liu, Chenggong Zhang, Fenjin Sun, Chun Wang
Abstract:
Cocaine is the most frequently used illegal drug globally, with the global annual prevalence of cocaine use ranging from 0.3% to 0.4% of the adult population aged 15–64 years. The growing consumption of cocaine and associated drug crimes are a great concern; urine testing has therefore become an important noninvasive sampling approach, since cocaine and its metabolites (COCs) are usually present in urine in high concentrations and with relatively long detection windows. However, direct analysis of urine samples is not feasible because the complex urine matrix often causes low sensitivity and selectivity in the determination. Moreover, the presence of low doses of analytes in urine makes an extraction and pretreatment step important before determination; in cases of group drug taking in particular, the pretreatment step becomes even more tedious and time-consuming. Developing a sensitive, rapid, and high-throughput method for detecting COCs in the human body is therefore indispensable for law enforcement officers, treatment specialists, and health officials. In this work, a new automated magnetic dispersive solid-phase extraction (MDSPE) sampling method followed by high performance liquid chromatography-mass spectrometry (HPLC-MS) was developed for quantitative enrichment of COCs from human urine, using prepared magnetic nanoparticles as adsorbents. The nanoparticles were prepared by silanizing magnetic Fe3O4 nanoparticles and modifying them with divinyl benzene and vinyl pyrrolidone, which confer the ability to specifically adsorb COCs. This kind of magnetic particle facilitates the pretreatment steps through electromagnetically controlled extraction, achieving full automation. The proposed device significantly improved sample preparation efficiency, processing 32 samples in one batch within 40 minutes.
Optimization of the preparation procedure for the magnetic nanoparticles was explored, and the performance of the magnetic nanoparticles was characterized by scanning electron microscopy, vibrating sample magnetometry, and infrared spectroscopy. Several analytical parameters were studied, including the amount of particles, adsorption time, elution solvent, and extraction and desorption kinetics, and the proposed method was validated. The limits of detection for cocaine and its metabolites were 0.09-1.1 ng·mL-1, with recoveries ranging from 75.1 to 105.7%. Compared to traditional sampling methods, this method is time-saving and environmentally friendly. The proposed automated method was confirmed to be a highly effective approach for trace analysis of cocaine and its metabolites in human urine.
Keywords: automatic magnetic dispersive solid-phase extraction, cocaine detection, magnetic nanoparticles, urine sample testing
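The reported recoveries (75.1-105.7%) follow the standard spike-recovery calculation. A minimal sketch, with entirely hypothetical concentrations (the abstract does not publish its raw spiking data):

```python
def recovery_percent(measured_spiked, measured_blank, spiked_amount):
    """Spike recovery: fraction of a known added amount of analyte that the
    extraction procedure actually recovers, expressed as a percentage."""
    return (measured_spiked - measured_blank) / spiked_amount * 100

# Hypothetical figures: 10 ng/mL cocaine spiked into blank urine,
# 9.5 ng/mL measured after extraction, 0.2 ng/mL apparent blank signal
print(round(recovery_percent(9.5, 0.2, 10.0), 1))   # prints 93.0
```

A recovery near 100% indicates that the MDSPE step loses little analyte; values outside roughly 75-110% would usually prompt re-optimization of the extraction conditions.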
518 Comparison of Two Home Sleep Monitors Designed for Self-Use
Authors: Emily Wood, James K. Westphal, Itamar Lerner
Abstract:
Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available on the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to each other by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: Twenty-six participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake, combined N1/N2, N3, and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination.
Total sleep time (TST) and the number of awakenings were compared to subjects' sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen's Kappa. All analyses were performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M = 448 minutes for sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings; likewise, the mean absolute bias relative to the sleep logs was higher for the Z-Machine for both TST (p < 0.001) and awakenings (p < 0.04). There was some indication that these effects were stronger for the second night than the first. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 showed high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
Keywords: DREEM, EEG, sleep monitoring, Z-machine
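The epoch-by-epoch Cohen's Kappa agreement can be sketched as follows; the two ten-epoch hypnograms are invented for illustration and are far shorter than a real night of 30-second epochs.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two categorical scorers:
    kappa = (observed agreement - expected agreement) / (1 - expected)."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_exp = sum(freq_a[s] * freq_b.get(s, 0) for s in freq_a) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical 10-epoch hypnograms (W = wake, N2 = combined N1/N2, N3, R = REM)
dreem = ["W", "N2", "N2", "N3", "N3", "N3", "R", "R", "N2", "W"]
zmach = ["W", "N2", "N3", "N3", "N3", "N2", "R", "R", "N2", "W"]
print(round(cohens_kappa(dreem, zmach), 3))   # prints 0.73
```

Kappa near 1 indicates near-perfect agreement and 0 indicates chance-level agreement, which is why it is preferred over raw percent agreement for stage-imbalanced hypnograms.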
517 Effect of Supplementation of Hay with Noug Seed Cake (Guizotia abyssinica), Wheat Bran and Their Mixtures on Feed Utilization, Digestibility and Live Weight Change in Farta Sheep
Authors: Fentie Bishaw Wagayie
Abstract:
This study was carried out with the objective of studying the response of Farta sheep in feed intake and live weight change when fed on hay supplemented with noug seed cake (NSC), wheat bran (WB), and their mixtures. A 7-day digestibility trial and a 90-day feeding trial were conducted using 25 intact male Farta sheep with a mean initial live weight of 16.83 ± 0.169 kg. The experimental animals were arranged randomly into five blocks based on initial live weight, and the five treatments were assigned randomly to each animal in a block. The five dietary treatments comprised grass hay fed ad libitum (T1), grass hay ad libitum + 300 g DM WB (T2), grass hay ad libitum + 300 g DM (67% WB: 33% NSC mixture) (T3), grass hay ad libitum + 300 g DM (67% NSC: 33% WB) (T4), and 300 g DM/head/day NSC (T5). Common salt and water were offered ad libitum. The supplements were offered twice daily at 0800 and 1600 hours. The experimental sheep were kept in individual pens. Supplementation with NSC, WB, and their mixtures significantly increased (p < 0.01) total dry matter (DM) intake (665.84-788 g/head/day) and (p < 0.001) crude protein (CP) intake. Unsupplemented sheep consumed significantly more (p < 0.01) grass hay DM (540.5 g/head/day) than the supplemented treatments (365.8-488 g/head/day), except T2. Among supplemented sheep, T5 had significantly higher (p < 0.001) CP intake (99.98 g/head/day) than the others (85.52-90.2 g/head/day). Supplementation significantly improved (p < 0.001) the digestibility of CP (66.61-78.9%), but there was no significant difference (p > 0.05) in DM, OM, NDF, and ADF digestibility between supplemented and control treatments. The very low CP digestibility (11.55%) observed in the basal diet (grass hay) used in this study indicated that feeding grass hay alone could not provide nutrients even for the maintenance requirement of growing sheep.
Significant final and daily live weight gains (p < 0.001), in the range of 70.11-82.44 g/head/day, were observed in supplemented Farta sheep, whereas unsupplemented sheep lost 9.11 g/head/day. Numerically, among the supplemented treatments, sheep supplemented with the higher proportion of NSC in T4 (201 g NSC + 99 g WB) gained more weight than the rest, though not statistically significantly (p > 0.05). The absence of a statistical difference in daily body weight gain among all supplemented sheep indicated that supplementation with NSC, WB, and their mixtures had similar potential to provide nutrients. Generally, supplementation of NSC, WB, and their mixtures to the basal grass hay diet improved the feed conversion ratio, total DM intake, CP intake, and CP digestibility, and it also improved growth performance, with a similar trend for all supplemented Farta sheep over the control group. Therefore, from a biological point of view, to attain the required slaughter body weight within a short growing period, sheep producers can use any of the supplement types depending on local availability, but in the order of priority T4, T5, T3, and T2. However, based on partial budget analysis, supplementation with 300 g DM/head/day NSC (T5) could be recommended as profitable for producers with no capital limitation, whereas T4 supplementation (201 g NSC + 99 g WB DM/day) is recommended when there is capital scarcity.
Keywords: weight gain, supplement, Farta sheep, hay as basal diet
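The feed conversion ratio mentioned above is simply dry matter consumed per unit of live-weight gain. A minimal sketch using the intake and gain ranges reported in this abstract (pairing the endpoint values is illustrative only):

```python
def feed_conversion_ratio(dm_intake_g_per_day, gain_g_per_day):
    """Grams of dry matter consumed per gram of live-weight gain (lower is better)."""
    return dm_intake_g_per_day / gain_g_per_day

# Endpoint values from the ranges reported above (illustrative pairing only)
print(round(feed_conversion_ratio(788, 82.44), 2))   # prints 9.56
```

A lower ratio means the animal converts feed into body weight more efficiently, which is why supplemented treatments with similar gains but lower intakes rank higher economically.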
516 Melt–Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization
Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries
Abstract:
Currently, the textile industry plays an important role in the world economy, especially in developing countries. Dyes and pigments used in the textile industry are significant pollutants. Most of these are azo dyes, which contain an azo chromophore (-N=N-) in their structure. There are many methods for removing dyes from wastewater, such as chemical coagulation, flocculation, precipitation, and ozonation, but these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation has been widely used because the titanium dioxide semiconductor (TiO2) is non-toxic, insoluble, inexpensive, and highly reactive. Melt electrospinning is an attractive manufacturing process for producing thin fibers from polypropylene (PP). PP fibers have been widely used in filtration due to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance, and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and we evaluated the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater. A photocatalyzer material was prepared from nTi (titanium dioxide nanoparticles) and PP by a melt-electrospinning technique. The electrospinning parameters of pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments.
The photocatalytic performance of the nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, and 40 mg/L). nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800±0.4 nm. The process parameters were determined as a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot-coil temperatures of 260–300 ºC, and a flow rate of 0.07 mL/h. SEM results indicated that TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, reaching over 90% decolorization efficiency. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as photocatalyzers for wastewater treatment. The results showed that melt-electrospun nTi-loaded GA-treated composite and pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus warrant further investigation.
Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning
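Decolorization efficiency in such photocatalysis experiments is conventionally computed from the initial and residual dye concentrations. A minimal sketch; the residual concentration used here is hypothetical, chosen only to fall in the reported >90% range:

```python
def decolorization_efficiency(c0, c_t):
    """Percent of dye removed, from the initial (c0) and
    residual (c_t) dye concentrations in mg/L."""
    return (c0 - c_t) / c0 * 100

# Hypothetical residual after treating the 20 mg/L methyl orange batch
print(round(decolorization_efficiency(20.0, 1.5), 1))   # prints 92.5
```

In practice c0 and c_t would be read from UV-Vis absorbance of the dye solution before and after irradiation through the filter.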
515 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. Neurological disorders are one such category, rampant among the aged population and increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor, the most common movement disorder in such patients, affects the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson's disease patients, but tremor can also occur alone as pure (essential) tremor. Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In clinics, tremor is assessed through manual clinical rating tasks such as the Unified Parkinson's Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed the challenge of differentiating a Parkinsonian tremor from a pure tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition and coordinates with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device.
Four tasks were designed in accordance with the Unified Parkinson's Disease motor rating scale, which is used to assess rest, postural, intentional, and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based, and fast-Fourier-transform-based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantifying tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools to distinguish the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbors and Support Vector Machine classifiers.
Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
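The pipeline described above (extract features from the accelerometer signal, then classify with a nearest-neighbour rule) can be sketched in miniature. The features here (mean, RMS, zero-crossing rate), the synthetic sine-wave "tremor" signals, and the 1-NN classifier are simplified stand-ins for the paper's wavelet and cross-correlation features and its KNN/SVM models.

```python
from math import sin, pi, sqrt

def extract_features(signal):
    """Simple time-domain features from one accelerometer axis:
    mean amplitude, RMS energy, and zero-crossing rate."""
    n = len(signal)
    mean = sum(signal) / n
    rms = sqrt(sum(v * v for v in signal) / n)
    centered = [v - mean for v in signal]
    zcr = sum(centered[i] * centered[i + 1] < 0 for i in range(n - 1)) / (n - 1)
    return (mean, rms, zcr)

def knn_predict(train, query_features, k=1):
    """Majority vote among the k labelled feature vectors nearest to the query."""
    def dist(feats):
        return sqrt(sum((a - b) ** 2 for a, b in zip(feats, query_features)))
    nearest = sorted(train, key=lambda item: dist(item[0]))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Hypothetical single-axis recordings: a slower "essential-like" oscillation
# and a faster "parkinsonian-like" one, 100 samples each
slow = [sin(2 * pi * 3 * t / 100) for t in range(100)]
fast = [sin(2 * pi * 9 * t / 100) for t in range(100)]
train = [(extract_features(slow), "ET"), (extract_features(fast), "PT")]

query = [sin(2 * pi * 9 * t / 100 + 0.1) for t in range(100)]
print(knn_predict(train, extract_features(query)))   # prints PT
```

The zero-crossing rate acts as a crude frequency estimate here, so the faster query signal lands nearest the "PT" exemplar; the actual system replaces these toy features with the richer wavelet and cross-correlation descriptors described in the abstract.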
514 Quality by Design in the Optimization of a Fast HPLC Method for Quantification of Hydroxychloroquine Sulfate
Authors: Pedro J. Rolim-Neto, Leslie R. M. Ferraz, Fabiana L. A. Santos, Pablo A. Ferreira, Ricardo T. L. Maia-Jr., Magaly A. M. Lyra, Danilo A F. Fonte, Salvana P. M. Costa, Amanda C. Q. M. Vieira, Larissa A. Rolim
Abstract:
Initially developed as an antimalarial agent, hydroxychloroquine (HCQ) sulfate is often used as a slow-acting antirheumatic drug in the treatment of disorders of connective tissue. The United States Pharmacopeia (USP) 37 provides a reversed-phase HPLC method for quantification of HCQ. However, this method was not reproducible, producing asymmetric peaks in a long analysis time. Peak asymmetry may cause an incorrect calculation of the sample concentration, and the analysis time is unacceptable, especially for the routine of a pharmaceutical industry. The aim of this study was to develop a fast, easy, and efficient method for quantification of HCQ sulfate by High Performance Liquid Chromatography (HPLC) based on the Quality by Design (QbD) methodology. The method was optimized in terms of peak symmetry using the surface area graphic as the Design of Experiments (DoE) and the tailing factor (TF) as an indicator for the Design Space (DS). The reference method used was that described in USP 37 for quantification of the drug. For the optimized method, a 3³ factorial design was proposed, based on QbD concepts. The DS was created with the TF (in a range between 0.98 and 1.2) in order to demonstrate the ideal analytical conditions. Changes were made in the composition of the USP mobile phase (USP-MP): USP-MP: Methanol (90:10 v/v, 80:20 v/v, and 70:30 v/v), in the flow rate (0.8, 1.0, and 1.2 mL.min-1), and in the oven temperature (30, 35, and 40 ºC). The USP method required a long analysis time (40-50 minutes) and uses a high flow rate (1.5 mL.min-1), which increases the consumption of expensive HPLC-grade solvents. The main problem observed was the TF value (1.8), which would be acceptable only if the drug were not a racemic mixture, since co-elution of the isomers can make peak integration unreliable.
Therefore, optimization was undertaken in order to reduce the analysis time, aiming at better peak resolution and TF. For the optimized method, analysis of the surface-response plot made it possible to confirm the ideal analytical conditions: 45 ºC, 0.8 mL.min-1, and 80:20 USP-MP: Methanol. The optimized HPLC method enabled the quantification of HCQ sulfate with a high-resolution peak, showing a TF value of 1.17. This promotes good co-elution of the HCQ isomers, ensuring an accurate quantification of the raw material as a racemic mixture. The method also proved to be approximately 18 times faster than the reference method, using a lower flow rate, further reducing solvent consumption and, consequently, the analysis cost. Thus, an analytical method for the quantification of HCQ sulfate was optimized using the QbD methodology. This method proved to be faster and more efficient than the USP method regarding retention time and, especially, peak resolution. The higher resolution of the chromatogram peaks supports the implementation of the method for quantification of the drug as a racemic mixture, not requiring separation of the isomers.
Keywords: analytical method, hydroxychloroquine sulfate, quality by design, surface area graphic
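The 3³ factorial design described above enumerates every combination of the three factor levels. A short sketch of how the 27-run grid is generated (factor levels taken from the abstract; run order and any replication strategy are not specified there):

```python
from itertools import product

# The three factors and three levels described above (a 3^3 full factorial design)
methanol_pct = [10, 20, 30]      # % methanol mixed into the USP mobile phase
flow_ml_min = [0.8, 1.0, 1.2]    # flow rate, mL/min
oven_temp_c = [30, 35, 40]       # column oven temperature, deg C

runs = list(product(methanol_pct, flow_ml_min, oven_temp_c))
print(len(runs))   # prints 27
```

Measuring the tailing factor at each of these 27 conditions is what allows the surface-response plot, and hence the Design Space, to be constructed.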
513 Comparison of Equivalent Linear and Non-Linear Site Response Model Performance in Kathmandu Valley
Authors: Sajana Suwal, Ganesh R. Nhemafuki
Abstract:
Evaluation of ground response under earthquake shaking is crucial in geotechnical earthquake engineering. Damage due to seismic excitation is mainly correlated to local geological and geotechnical conditions. It is evident from past earthquakes (e.g., 1906 San Francisco, USA; 1923 Kanto, Japan) that local geology has a strong influence on the amplitude and duration of ground motions. Since then, significant studies have been conducted on ground motion amplification, revealing the importance of the influence of local geology on ground response. Observations from damaging earthquakes (e.g., Niigata and San Francisco, 1964; Irpinia, 1980; Mexico, 1985; Kobe, 1995; L'Aquila, 2009) showed that non-uniform damage patterns, particularly in soft fluvio-lacustrine deposits, are due to local amplification of seismic ground motion. Non-uniform damage patterns were also observed in Kathmandu Valley during the 1934 Bihar-Nepal earthquake and the recent 2015 Gorkha earthquake, seemingly due to the modification of earthquake ground motion parameters. In this study, site effects resulting from amplification in the soft soils of Kathmandu are presented. A large amount of subsoil data was collected and used to define an appropriate subsoil model for the Kathmandu valley. A comparative study of one-dimensional total-stress equivalent linear and non-linear site response was performed using four strong ground motions for six sites in the Kathmandu valley. In general, one-dimensional (1D) site-response analysis involves the excitation of a soil profile using the horizontal component and calculating the response at individual soil layers. In the present study, both equivalent linear and non-linear site response analyses were conducted using the computer program DEEPSOIL. The results show that there is no significant deviation between the equivalent linear and non-linear site response models until the maximum strain reaches 0.06-0.1%.
Overall, it is clearly observed from the results that the non-linear site response model performs better than the equivalent linear model. However, the significant deviation between the two models results from other influencing factors, such as the assumptions made in 1D site response analysis and the lack of accurate values for the shear wave velocity and nonlinear properties of the soil deposit. The results are also presented in terms of amplification factors, which are predicted to be around four times higher for the non-linear analysis than for the equivalent linear analysis. Hence, the nonlinear behavior of the soil underscores the urgent need to study the dynamic characteristics of the soft soil deposits and to develop site-specific design spectra for the Kathmandu valley, so that structures resilient to future damaging earthquakes can be built.
Keywords: deep soil, equivalent linear analysis, non-linear analysis, site response
512 Effects of a School-Based Mindfulness Intervention on Stress and Emotions on Students Enrolled in an Independent School
Authors: Tracie Catlett
Abstract:
Students enrolled in high-achieving schools are under tremendous pressure to perform at high levels inside and outside the classroom. Achievement pressure is a prevalent source of stress for students enrolled in high-achieving schools, and female students in particular experience stress more frequently and at higher levels than their male peers. The practice of mindfulness in a school setting is one tool that has been linked to improved self-regulation of emotions, increased positive emotions, and stress reduction. A mixed-methods randomized pretest-posttest no-treatment control trial evaluated the effects of a six-session mindfulness intervention taught during a regularly scheduled life skills period in an independent day school, one type of high-achieving school. Twenty-nine students in Grades 10 and 11 were randomized by class, with Grade 11 students in the intervention group (n = 14) and Grade 10 students in the control group (n = 15). Findings from the study produced mixed results. There was no evidence that the mindfulness program reduced participants' stress levels or negative emotions. In fact, contrary to what was expected, students in the intervention group experienced higher levels of stress and increased negative emotions at posttreatment compared to pretreatment. Neither the within-group nor the between-groups changes in stress level were statistically significant, p > .05, and the between-groups effect size was small, d = .2. The study found evidence that the mindfulness program may have had a positive impact on students' ability to regulate their emotions. The within-group comparison and the between-groups comparison at posttreatment found that students in the mindfulness course experienced statistically significant improvement in their ability to regulate their emotions at posttreatment, p = .009 < .05 and p = .034 < .05, respectively.
The between-groups effect size was medium, d = .7, suggesting that the positive differences in emotion regulation difficulties were substantial and have practical implications. The analysis of gender differences related to stress and emotions revealed that female students perceive higher levels of stress and report experiencing stress more often than males. There were no gender differences in the sources of stress experienced by the student participants: both females and males experience regular achievement pressures related to their school performance and worry about their future, college acceptance, grades, and parental expectations. Females reported an increased awareness of their stress and actively engaged in practicing mindfulness to manage it. Students in the treatment group expressed that the practice of mindfulness resulted in feelings of relaxation and calmness.
Keywords: achievement pressure, adolescents, emotion regulation, emotions, high-achieving schools, independent schools, mindfulness, negative affect, positive affect, stress
511 The Impact of Online Learning on Visual Learners
Authors: Ani Demetrashvili
Abstract:
As online learning continues to reshape the landscape of education, questions arise regarding its efficacy for diverse learning styles, particularly for visual learners. This abstract delves into the impact of online learning on visual learners, exploring how digital mediums influence their educational experience and how educational platforms can be optimized to cater to their needs. Visual learners comprise a significant portion of the student population, characterized by their preference for visual aids such as diagrams, charts, and videos to comprehend and retain information. Traditional classroom settings often struggle to accommodate these learners adequately, relying heavily on auditory and written forms of instruction. The advent of online learning presents both opportunities and challenges in addressing the needs of visual learners. Online learning platforms offer a plethora of multimedia resources, including interactive simulations, virtual labs, and video lectures, which align closely with the preferences of visual learners. These platforms have the potential to enhance engagement, comprehension, and retention by presenting information in visually stimulating formats. However, the effectiveness of online learning for visual learners hinges on various factors, including the design of learning materials, user interface, and instructional strategies. Research into the impact of online learning on visual learners encompasses a multidisciplinary approach, drawing from fields such as cognitive psychology, education, and human-computer interaction. Studies employ qualitative and quantitative methods to assess visual learners' preferences, cognitive processes, and learning outcomes in online environments. Surveys, interviews, and observational studies provide insights into learners' preferences for specific types of multimedia content and interactive features. 
Cognitive tasks, such as memory recall and concept mapping, shed light on the cognitive mechanisms underlying learning in digital settings. Eye-tracking studies offer valuable data on attentional patterns and information processing during online learning activities. The findings from research on the impact of online learning on visual learners have significant implications for educational practice and technology design. Educators and instructional designers can use insights from this research to create more engaging and effective learning materials for visual learners. Strategies such as incorporating visual cues, providing interactive activities, and scaffolding complex concepts with multimedia resources can enhance the learning experience for visual learners in online environments. Moreover, online learning platforms can leverage the findings to improve their user interface and features, making them more accessible and inclusive for visual learners. Customization options, adaptive learning algorithms, and personalized recommendations based on learners' preferences and performance can enhance the usability and effectiveness of online platforms for visual learners.
Keywords: online learning, visual learners, digital education, technology in learning
Procedia PDF Downloads 39
510 A Study of Status of Women by Incorporating Literacy and Employment in India and Some Selected States
Authors: Barnali Thakuria, Labananda Choudhury
Abstract:
Gender equality and women's empowerment is one of the eight Millennium Development Goals (MDGs). Literacy and employment are parameters that reflect the empowerment of women, but in a developing country like India, literacy and working status among females are not satisfactory. Both literacy and employment can be measured technically by Literate Life Expectancy (LLE) and Working Life Expectancy (WLE). One can also combine the two factors, literacy and work, to obtain a better new measure; the proposed indicator can be called literate-working life expectancy (LWLE). LLE gives the average number of years a person lives in a literate state under current mortality and literacy conditions, while WLE is defined as the average number of years a person lives in a working state if current mortality and working conditions prevail. Similarly, LWLE gives the expected number of years a person lives in both a literate and a working state. The situation of females cannot be assessed without comparing both sexes. In the present paper, an attempt has been made to estimate LLE and WLE for both sexes, based on the 2011 census, in India and in selected states from various zones of the country, namely Assam from the North-East, Gujarat from the West, Kerala from the South, Rajasthan from the North, Uttar Pradesh from the Centre and West Bengal from the East. Furthermore, we have developed a formula for the new indicator, Literate-Working Life Expectancy (LWLE), and applied the proposed index to India and the selected states for both males and females. Data have been extracted from SRS (Sample Registration System) based abridged life tables and the Census of India. The computation of LLE follows the method developed by Lutz, while WLE follows the method developed by Saw Swee Hock. By combining the two factors, literacy and employment, the new indicator LWLE follows the same life-table approach as LLE and WLE.
Contrasting results have been found in different parts of India. The results show that male LLE at birth is highest (lowest) in Kerala (Uttar Pradesh) with 61.66 (39.51) years. A similar situation is observed among females, with 62.58 years and 25.11 years respectively. But male WLE at birth is highest (lowest) in Rajasthan (Kerala) with 37.11 (32.64) years. The highest female WLE at birth is also observed in Rajasthan with 23.51 years, and the lowest in Uttar Pradesh with 11.76 years. It is also found that Kerala's performance is exceptionally good in terms of male LWLE at birth, while the lowest male LWLE at birth prevails in Uttar Pradesh. Female LWLE at birth is highest (lowest) in Kerala (Uttar Pradesh) with 19.73 (4.77) years. The value of the index increases as the number of factors involved in the life expectancy decreases. It is found that women are lagging behind in terms of both literacy and employment. The findings of the study will help planners take the necessary steps to improve the position of women.
Keywords: life expectancy, literacy, literate life expectancy, working life expectancy
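For readers who want the mechanics, the Sullivan-type weighting that underlies an indicator like LLE can be sketched in a few lines. This is our own minimal illustration of the approach attributed above to Lutz, with invented life-table numbers rather than the paper's data:

```python
# Hedged sketch of a Sullivan-type Literate Life Expectancy (LLE) computation:
# person-years lived in each age group (nLx from an abridged life table),
# weighted by the literacy rate of that group, divided by the radix l0.
# All numbers below are illustrative, not taken from the study.
def literate_life_expectancy(person_years, literacy_rates, radix=100000):
    """LLE at birth from abridged-life-table person-years and literacy rates."""
    assert len(person_years) == len(literacy_rates)
    return sum(L * r for L, r in zip(person_years, literacy_rates)) / radix

nLx = [480000, 1900000, 2300000, 2250000, 1100000]  # person-years per age group
lit = [0.00,   0.60,    0.85,    0.80,    0.55]     # literacy proportion per group
print(round(literate_life_expectancy(nLx, lit), 2))
```

WLE and LWLE follow the same pattern, with age-specific working rates (or joint literate-and-working rates) in place of the literacy rates.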
Procedia PDF Downloads 421
509 Genetically Modified Fuel-Ethanol Industrial Yeast Strains as Biocontrol Agents
Authors: Patrícia Branco, Catarina Prista, Helena Albergaria
Abstract:
Industrial fuel-ethanol fermentations are carried out under non-sterile conditions, which favors the development of microbial contaminants, leading to huge economic losses. Wild yeasts such as Brettanomyces bruxellensis and lactic acid bacteria are the main contaminants of industrial bioethanol fermentation, affecting Saccharomyces cerevisiae performance and decreasing ethanol yields and productivity. In order to control microbial contamination, the fuel-ethanol industry uses different treatments, including acid washing and antibiotics. However, these control measures carry environmental risks, such as acid toxicity and the rise of antibiotic-resistant bacteria. Therefore, it is crucial to develop and apply less toxic and more environmentally friendly biocontrol methods. In the present study, an industrial fuel-ethanol starter, S. cerevisiae Ethanol-Red, was genetically modified to over-express antimicrobial peptides (AMPs) with activity against fuel-ethanol microbial contaminants, and its biocontrol effect was evaluated during mixed-culture alcoholic fermentations artificially contaminated with B. bruxellensis. To achieve this goal, the S. cerevisiae Ethanol-Red strain was transformed with a plasmid containing the AMP-coding genes, i.e., partial sequences of TDH1 (925-963 bp) and TDH2/3 (925-963 bp), and a geneticin resistance marker. The biocontrol effect of the genetically modified strains against B. bruxellensis was compared with the antagonistic effect exerted by a strain carrying an empty plasmid (without the AMP-coding genes) and by the non-modified S. cerevisiae Ethanol-Red strain. For that purpose, mixed-culture alcoholic fermentations were performed in a synthetic must using the modified S. cerevisiae Ethanol-Red strains together with B. bruxellensis. Single-culture fermentations of B. bruxellensis were also performed as a negative control for the antagonistic effect exerted by the S. cerevisiae strains.
Results clearly showed an improved biocontrol effect of the genetically modified strains against B. bruxellensis when compared with the Ethanol-Red strain carrying the empty plasmid (without the AMP-coding genes) and with the non-modified Ethanol-Red strain. In mixed-culture fermentation with the modified S. cerevisiae strain, B. bruxellensis culturability decreased from 5×10⁴ CFU/mL on day 0 to less than 1 CFU/mL on day 10, while in single culture B. bruxellensis increased its culturability from 6×10⁴ to 1×10⁶ CFU/mL in the first 6 days and kept this value until day 10. In addition, the modified Ethanol-Red strain exhibited an enhanced antagonistic effect against B. bruxellensis compared with that induced by the non-modified Ethanol-Red strain. Indeed, the culturability loss of B. bruxellensis after 10 days of fermentation with the modified Ethanol-Red strain was 98.7% and 100% higher than that in fermentations performed with the non-modified Ethanol-Red and the empty-plasmid strain, respectively. Therefore, one can conclude that the genetically modified S. cerevisiae strain obtained in the present work may be a valuable solution for mitigating microbial contamination in fuel-ethanol fermentations, representing a much safer and more environmentally friendly preservation strategy than the antimicrobial treatments (acid washing and antibiotics) currently applied in the fuel-ethanol industry.
Keywords: antimicrobial peptides, fuel-ethanol microbial contaminations, fuel-ethanol fermentation, biocontrol agents, genetically-modified yeasts
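The scale of the reported kill can be restated as a log reduction. A minimal sketch, using only the CFU/mL figures quoted above; the helper itself is ours, not part of the study's methods:

```python
import math

# Log10 reduction between two viable-cell counts; larger means a stronger kill.
def log10_reduction(cfu_initial, cfu_final):
    return math.log10(cfu_initial / cfu_final)

# Mixed culture with the AMP-expressing strain: 5x10^4 CFU/mL on day 0 down to
# less than 1 CFU/mL on day 10, i.e. at least ~4.7 orders of magnitude.
print(log10_reduction(5e4, 1))
```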
Procedia PDF Downloads 99
508 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium
Authors: Joanna Cydejko, Paulina Mika
Abstract:
Hypersensitivity reactions are side effects of medications that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); in about 1% of people, the cause of the anaphylactic reaction cannot be identified despite detailed diagnostics. Contrast media can trigger anaphylactic reactions through mechanisms that are not fully understood; hypersensitivity reactions can occur via both immunological and non-immunological mechanisms. Symptoms of anaphylaxis occur within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize, or improve the visibility of, anatomical structures. In computed tomography, the preparations currently used are derivatives of the tri-iodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., osmolality, viscosity, low chemotoxicity and high hydrophilicity, influence how well the patient's body tolerates the substance. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present the number and severity of anaphylactic reactions that occurred in patients of all age groups undergoing diagnostic imaging with intravenous administration of contrast agents: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether the agents' different physicochemical properties had an impact on the incidence of acute complications.
Adverse reactions were classified according to the severity of the patient's condition and the diagnostic method used. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with a contrast agent. In the analysis period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with contrast medium were performed. The total number of acute complications was 21, of which 17 were complications of iodine-based contrast agents and 5 of gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Currently, contrast agents are considered to be among the best-tolerated preparations used in medicine. However, like any drug, they can be responsible for adverse reactions resulting from their toxic effects. The increase in the number of imaging tests performed with contrast agents has a direct impact on the number of adverse events associated with their administration. Although the risk of anaphylaxis is low, it should not be marginalized: the growing volume of radiological procedures performed with contrast agents makes knowledge of the rules of conduct in the event of hypersensitivity symptoms essential.
Keywords: anaphylactic, contrast medium, diagnostic, medical imaging
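As a quick arithmetic check, the per-modality counts quoted above imply the following complication rates per 10,000 examinations; the helper is our own illustration, using only the figures stated in the text:

```python
# Complication rate per 10,000 procedures from raw event and procedure counts.
def rate_per_10000(events, procedures):
    return 10000 * events / procedures

ct = rate_per_10000(17, 34053)   # iodinated-contrast reactions in CT
mri = rate_per_10000(5, 15279)   # gadolinium reactions in MRI
print(round(ct, 1), round(mri, 1))  # roughly 5.0 and 3.3 per 10,000
```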
Procedia PDF Downloads 63
507 Implementation of Active Recovery at Immediate, 12 and 24 Hours Post-Training in Young Soccer Players
Authors: C. Villamizar, M. Serrato
Abstract:
In the pursuit of athletic performance, physical training is fundamental; it imposes loads on the physiological and musculoskeletal systems of the body that are determined by the intensity and duration of exercise. Given these physical demands, both training and competition must be balanced against a post-exertion recovery process that favors overcompensation, which aims to restore and raise the energy potential and protein synthesis of the different tissues, allowing muscle function to return to baseline, pre-exercise states. If this recovery process is not performed, or is not carried out properly, the result is an increased state of fatigue. Active recovery is one of the strategies implemented in sport for a return to pre-exercise physiological states. However, there are concerns regarding possible negative effects, such as increasing the degradation of muscle glycogen and thus delaying its resynthesis. It is therefore necessary to investigate the effects of active recovery applied at different times after exertion. The aim of this study was to determine the effects of active recovery performed at three different times, immediately, at 12 hours and at 24 hours after exertion, on the biochemical marker creatine kinase in youth soccer players. A randomized controlled trial with allocation to three groups was performed: A, active recovery immediately after exertion; B, active recovery performed 12 hours after exertion; C, active recovery performed 24 hours after exertion. This study included 27 subjects belonging to a Colombian soccer team of the second division. Vital signs, weight, height, BMI, percentage of muscle mass, percentage of fat mass, and personal and family medical history were recorded. Velocity, explosive force and blood creatine kinase (CK) were tested before and after the interventions.
The SAFT90 protocol (Soccer-specific Aerobic Field Test) was applied to participants to generate fatigue. CK samples were taken one hour before the fatigue test, one hour after the fatigue protocol, and 48 hours after the initial CK sample. Mean age was 18.5 ± 1.1 years. Improvements in jumping and speed recovery were observed in all three groups (p < 0.05), but no statistically significant differences between groups were observed after recovery. In all participants, there was a significant increase in CK after SAFT90 in all groups (median 103.1-111.1). The CK measurement after 48 hours reflects recovery in all groups; however, group C showed a decline below baseline levels of -55.5 (-96.3/-20.4), which is a significant finding. Other research has shown that CK does not return quickly to baseline, but our study shows that active recovery favors the clearance of CK, and that performing recovery 24 hours after exertion generates the highest clearance of this biomarker.
Keywords: active recovery, creatine phosphokinase, post training, young soccer players
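The reported decline below baseline is a plain percent-change calculation. A minimal sketch, with hypothetical CK values chosen to reproduce a -55.5% change (the values are ours, not the study's):

```python
# Signed percent change of a follow-up measurement relative to baseline.
def percent_change(baseline, value):
    return 100 * (value - baseline) / baseline

ck_baseline = 200.0  # hypothetical pre-test CK, U/L
ck_48h = 89.0        # hypothetical CK 48 h after the initial sample, U/L
print(percent_change(ck_baseline, ck_48h))  # -55.5
```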
Procedia PDF Downloads 160
506 Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may result in discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. A commonly used control system in buildings is the passive friction damper, which adds energy dissipation through damping mechanisms induced by sliding friction between its surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a certain range of sliding forces; outside that range, their efficiency decreases. This implies that each passive friction damper is designed, built and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slipping state, where it dissipates energy through friction. The risk of the device's efficiency varying with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In this case, the expected forces in the building can change and thus considerably reduce the efficiency of a damper designed for a specific sliding force. It is also evident that when a seismic event occurs, the forces in each floor vary in time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force to keep the damper in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost, reduced-scale Semiactive Variable Friction Damper (SAVFD) to reduce vibrations of structures subjected to earthquakes.
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor, which is controlled by (3) an Arduino board that acquires accelerations or displacements from (4) sensors on the floors immediately above and below, and (5) a power supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally in shaking-table tests using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations in comparison to the uncontrolled structure.
Keywords: earthquake response, friction damper, semiactive control, shaking table
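As an illustration of what a decentralized semiactive friction law can look like, here is a minimal on-off sketch. It is our own example, not the controller implemented on the Arduino in this study, and the force limits are assumed values:

```python
# On-off semiactive friction law (illustrative): clamp hard only while the
# storey moves away from equilibrium (drift and drift velocity share a sign),
# so the device keeps slipping and dissipating energy; release on unloading
# to avoid locking up. Force limits are assumptions for this sketch.
N_MAX = 100.0  # maximum clamping (normal) force, N
N_MIN = 5.0    # residual clamping force, N

def clamping_force(drift, drift_velocity):
    """Commanded normal force on the friction pads for one storey."""
    if drift * drift_velocity > 0:  # loading phase: dissipate
        return N_MAX
    return N_MIN                    # unloading phase: release

print(clamping_force(0.01, 0.2))   # loading  -> 100.0
print(clamping_force(0.01, -0.2))  # unloading -> 5.0
```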
Procedia PDF Downloads 378
505 Managing Climate Change: Vulnerability Reduction or Resilience Building
Authors: Md Kamrul Hassan
Abstract:
Adaptation interventions are the common response to managing the vulnerabilities of climate change. The nature of an adaptation intervention depends on the degree of vulnerability and the capacity of a society. Coping interventions can take the form of hard adaptation, utilising technologies and capital goods like dykes, embankments, and seawalls, and/or soft adaptation, engaging knowledge and information sharing, capacity building, policy and strategy development, and innovation. Hard adaptation is quite capital intensive but provides immediate relief from climate change vulnerabilities. This type of adaptation is not real development, as the investment cannot improve the performance of a social or ecological system, only maintain its status quo, and it often leads to maladaptation in the long term. Maladaptation creates a two-way loss for a society: the interventions bring further vulnerability on top of the existing vulnerability, and additional investment is required to get rid of the consequences of the interventions. Hard adaptation is popular with vulnerable groups, but it focuses so much on immediate solutions that it often ignores environmental issues and the future risks of climate change. Soft adaptation, on the other hand, is education-oriented: vulnerable groups learn how to live with climate change impacts. Soft adaptation interventions build the capacity of vulnerable groups through training, innovation, and support, which can enhance the resilience of a system. In consideration of long-term sustainability, soft adaptation can contribute more to resilience than hard adaptation. Taking a developing society as the study context, this study aims to investigate and understand the effectiveness of the adaptation interventions of the coastal community of the Sundarbans mangrove forest in Bangladesh.
Drawing on semi-structured interviews with a range of Sundarbans stakeholders, including community residents, tourism demand- and supply-side stakeholders, and conservation and management agencies (e.g., government, NGOs and international agencies), together with document analysis, this paper reports several key insights regarding climate change adaptation. Firstly, while adaptation interventions may offer a short- to medium-term solution to climate change vulnerabilities, interventions need to be revised for long-term sustainability. Secondly, soft adaptation offers advantages in terms of resilience in a rapidly changing environment, as it is flexible and dynamic. Thirdly, there is a challenge in communicating with and educating vulnerable groups so that they understand the future effects of hard adaptation interventions (and the potential for maladaptation). Fourthly, hard adaptation can be used if the interventions do not degrade the environmental balance and if the investment in the interventions does not exceed their economic benefit. Overall, the goal of an adaptation intervention should be to enhance the resilience of a social or ecological system so that the system can withstand present vulnerabilities and future risks. In order to be sustainable, adaptation interventions should be designed in such a way that they can address the vulnerabilities and risks of climate change over a long-term timeframe.
Keywords: adaptation, climate change, maladaptation, resilience, Sundarbans, sustainability, vulnerability
Procedia PDF Downloads 194
504 Design of Nano-Reinforced Carbon Fiber Reinforced Plastic Wheel for Lightweight Vehicles with Integrated Electrical Hub Motor
Authors: Davide Cocchi, Andrea Zucchelli, Luca Raimondi, Tommaso Maria Brugo
Abstract:
The increasing attention given to the issues of environmental pollution and climate change is strongly stimulating the development of electrically propelled vehicles powered by renewable energy, in particular solar energy. Given the small amount of solar energy that can be stored and subsequently transformed into propulsive energy, it is necessary to develop vehicles with high mechanical, electrical and aerodynamic efficiencies along with reduced masses. The reduction of mass is of fundamental relevance especially for the unsprung masses, that is, the assembly of those elements that do not undergo a variation of their distance from the ground (wheel, suspension system, hub, upright, braking system). The reduction of unsprung masses is therefore fundamental in decreasing the rolling inertia and improving the drivability, comfort, and performance of the vehicle. This principle applies even more in solar-propelled vehicles equipped with an electric motor that is connected directly to the wheel hub. In this solution, the electric motor is integrated inside the wheel. Since the electric motor is part of the unsprung masses, the development of compact and lightweight solutions is of fundamental importance. The purpose of this research is the design, development and optimization of a CFRP 16-inch wheel hub motor for solar-propulsion vehicles that can carry up to four people. In addition to maximizing aspects of primary importance such as mass, strength, and stiffness, other innovative constructive aspects were explored. One of the main objectives has been to achieve high geometric packing in order to ensure a reduced lateral dimension without reducing the power exerted by the electric motor. In the final solution, it was possible to realize a wheel hub motor assembly contained completely inside the rim width, for a total lateral overall dimension of less than 100 mm.
This result was achieved by developing an innovative connection system between the wheel and the rotor with a double purpose: centering and transmission of the driving torque. This solution, with appropriate interlocking noses, allows the transfer of high torques and at the same time guarantees both the centering and the necessary stiffness of the transmission system. Moreover, to avoid delamination in critical areas, evaluated by means of FEM analysis using 3D Hashin damage criteria, electrospun nanofibrous mats have been interleaved between critical CFRP layers. In order to reduce rolling resistance, the rim has been designed to withstand high inflation pressure. Laboratory tests have been performed on the rim using the Digital Image Correlation (DIC) technique. The wheel has been tested for fatigue bending according to E/ECE/324 R124e.
Keywords: composite laminate, delamination, DIC, lightweight vehicle, motor hub wheel, nanofiber
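A Hashin-type check of the kind used in the FEM analysis can be sketched as follows; the fiber-tension index shown here uses illustrative ply stresses and assumed strengths, not the paper's values:

```python
# Hashin fiber-tension failure index for one ply (failure predicted when >= 1):
#   (s11/Xt)^2 + (t12^2 + t13^2)/S12^2
# s11: fiber-direction normal stress; t12, t13: shear stresses;
# Xt: fiber-direction tensile strength; S12: in-plane shear strength.
def hashin_fiber_tension(s11, t12, t13, Xt, S12):
    if s11 <= 0:
        return 0.0  # this criterion applies only in fiber tension
    return (s11 / Xt) ** 2 + (t12 ** 2 + t13 ** 2) / S12 ** 2

# Illustrative stresses (MPa) against assumed strengths Xt = 2000, S12 = 80 MPa
idx = hashin_fiber_tension(s11=1200.0, t12=40.0, t13=20.0, Xt=2000.0, S12=80.0)
print(idx < 1.0)  # True: no fiber-tension failure predicted for this ply
```

The full 3D Hashin criteria add analogous expressions for fiber compression and matrix tension/compression; in a delamination check, the matrix modes are the critical ones.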
Procedia PDF Downloads 214
503 p-Type Multilayer MoS₂ Enabled by Plasma Doping for Ultraviolet Photodetectors Application
Authors: Xiao-Mei Zhang, Sian-Hong Tseng, Ming-Yen Lu
Abstract:
Two-dimensional (2D) transition metal dichalcogenides (TMDCs), such as MoS₂, have attracted considerable attention owing to the unique optical and electronic properties related to their 2D ultrathin atomic-layer structure. MoS₂ is becoming prevalent in post-silicon digital electronics and in highly efficient optoelectronics due to its extremely low thickness and its tunable band gap (Eg = 1-2 eV). For low-power, high-performance complementary logic applications, both p- and n-type MoS₂ FETs (NFETs and PFETs) must be developed. NFETs with an electron accumulation channel can be obtained using unintentionally doped n-type MoS₂. However, the fabrication of MoS₂ FETs with complementary p-type characteristics is challenging due to the significant difficulty of injecting holes into its inversion channel. Plasma treatments with different species (including CF₄, SF₆, O₂, and CHF₃) have also been found to achieve the desired property modifications of MoS₂. In this work, we demonstrated a p-type multilayer MoS₂ enabled by selective-area doping using CHF₃ plasma treatment. Compared with single-layer MoS₂, multilayer MoS₂ can carry a higher drive current due to its lower bandgap and multiple conduction channels. Moreover, it has three times the density of states at its conduction band minimum. Large-area growth of MoS₂ films on a 300 nm thick SiO₂/Si substrate is carried out by thermal decomposition of ammonium tetrathiomolybdate, (NH₄)₂MoS₄, in a tube furnace. A two-step annealing process is conducted to synthesize the MoS₂ films. In the first step, the temperature is set to 280 °C for 30 min in an N₂-rich environment at 1.8 Torr, to transform (NH₄)₂MoS₄ into MoS₃. To further reduce MoS₃ into MoS₂, a second annealing step is performed at 750 °C for 30 min in a reducing atmosphere consisting of 90% Ar and 10% H₂ at 1.8 Torr.
The grown MoS₂ films are subjected to out-of-plane doping by CHF₃ plasma treatment using a dry-etching system (ULVAC original NLD-570). The radio-frequency power of this dry-etching system is set to 100 W and the pressure is set to 7.5 mTorr. The final thickness of the treated samples is obtained by etching for 30 s. Back-gated MoS₂ PFETs are presented with an on/off current ratio on the order of 10³ and a field-effect mobility of 65.2 cm²V⁻¹s⁻¹. The MoS₂ PFET photodetector exhibited ultraviolet (UV) photodetection capability with a rapid response time of 37 ms, and the generated photocurrent could be modulated by the back-gate voltage. This work suggests the potential application of the mild-plasma-doped p-type multilayer MoS₂ in UV photodetectors for environmental monitoring, human health monitoring, and biological analysis.
Keywords: photodetection, p-type doping, multilayers, MoS₂
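A field-effect mobility such as the 65.2 cm²V⁻¹s⁻¹ quoted above is conventionally extracted from the transconductance of the transfer curve, μ_FE = (L / (W·Cox·Vds))·gm. A hedged sketch, where the device geometry, transconductance, and oxide capacitance are illustrative assumptions chosen to land near that value, not the paper's measurements:

```python
# Standard back-gated FET field-effect mobility extraction:
#   mu_FE = (L / (W * Cox * Vds)) * gm
# gm in S, L and W in m, Cox in F/m^2, Vds in V; result in cm^2 V^-1 s^-1.
def field_effect_mobility(gm, L, W, Cox, Vds):
    mu_m2 = (L / (W * Cox * Vds)) * gm  # m^2 / (V s)
    return mu_m2 * 1e4                  # convert to cm^2 / (V s)

# Cox for a ~300 nm SiO2 back gate: eps_r * eps0 / t_ox ~ 1.15e-4 F/m^2
Cox = 1.15e-4
mu = field_effect_mobility(gm=1.5e-7, L=10e-6, W=20e-6, Cox=Cox, Vds=0.1)
print(round(mu, 1))  # ~65.2 with these illustrative numbers
```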
Procedia PDF Downloads 104
502 Bank Failures: A Question of Leadership
Authors: Alison L. Miles
Abstract:
Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during times of crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, and also to the debate on risk management and regulatory reform in the banking sector. In order first to test the proposition (p1) that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the 65 largest global banks and financial institutions according to The Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, were collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO and CFO. The results were then compared with leadership models to form a picture of leadership in the sector during the research period.
As this produced separate results that needed combining, the SPSS data editor was used to aggregate the results across the studies using the variables 'leadership style' and 'company financial performance', together with the size of the company. In order to test the proposition (p2) that there was a prevalent leadership style in the banks that failed, and the proposition (p3) that this was different from the style in those that did not, further quantitative analysis was carried out on the leadership styles of the chair, CEO and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included Lehman Bros, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis and Allied Irish. The findings show that although regulatory reform has been a key mechanism for controlling behavior in the banking sector, the leadership characteristics of those running the board are also a key factor. They add weight to the argument that if each crisis is met with the same pattern of popular fury at the financier, increased regulation, and a return to business as usual, the cycle of failure will always be repeated; viewed through a different lens, new paradigms can be formed and future crashes avoided.
Keywords: banking, financial crisis, leadership, risk
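The computer-aided keyword search described above can be sketched as a simple frequency count over document text; the style lexicons below are illustrative assumptions, not the study's actual coding frame:

```python
# Toy keyword-based coding of leadership style in a document (illustrative).
# Real coding frames are far richer; this only shows the counting mechanism.
STYLE_KEYWORDS = {
    "transformational": ["vision", "inspire", "innovate", "empower"],
    "transactional": ["target", "incentive", "compliance", "monitor"],
}

def classify_leadership_style(text):
    """Return the style whose keywords occur most often in `text`."""
    text = text.lower()
    counts = {style: sum(text.count(k) for k in kws)
              for style, kws in STYLE_KEYWORDS.items()}
    return max(counts, key=counts.get)

sample = "Our vision is to inspire and empower every employee to innovate."
print(classify_leadership_style(sample))  # transformational
```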
Procedia PDF Downloads 318
501 Optimum Drilling States in Down-the-Hole Percussive Drilling: An Experimental Investigation
Authors: Joao Victor Borges Dos Santos, Thomas Richard, Yevhen Kovalyshen
Abstract:
Down-the-hole (DTH) percussive drilling is an excavation method that is widely used in the mining industry due to its high efficiency in fragmenting hard rock formations. A DTH hammer system consists of a fluid-driven (air or water) piston and a drill bit; the reciprocating movement of the piston transmits its kinetic energy to the drill bit by means of stress waves that propagate through the drill bit towards the rock formation. In the literature on percussive drilling, the existence of an optimum drilling state (sweet spot) is reported in some laboratory and field experimental studies: an optimum rate of penetration is achieved for a specific range of axial thrust (or weight-on-bit), beyond which the rate of penetration decreases. Several authors advance different explanations as possible root causes of the sweet spot, but a universal explanation or consensus does not yet exist. The experimental investigation in this work was initiated with drilling experiments conducted at a mining site. A full-scale drilling rig (equipped with a DTH hammer system) was instrumented with high-precision sensors sampled at a very high rate (kHz). Data were collected while two boreholes were being excavated, and an in-depth analysis of the recorded data confirmed that optimum performance can be achieved for specific ranges of input thrust (weight-on-bit). The high sampling rate made it possible to identify the bit penetration at each single impact (of the piston on the drill bit) as well as the impact frequency. These measurements provide a direct method to identify when the hammer does not fire, drilling occurs without percussion, and the bit propagates the borehole by shearing the rock. The second stage of the experimental investigation was conducted in a laboratory environment with a custom-built apparatus dubbed Woody, which allows the drilling of shallow holes a few centimetres deep by successive discrete impacts from a piston.
After each individual impact, the bit angular position is incremented by a fixed amount, the piston is moved back to its initial position at the top of the barrel, and the air pressure and thrust are set back to their pre-set values. The goal is to explore whether the observed optimum drilling state stems from the interaction between the drill bit and the rock (during impact) or is governed by the overall system dynamics (between impacts). The experiments were conducted on samples of Calca Red, with a drill bit of 74 millimetres (outside diameter) and with weight-on-bit ranging from 0.3 kN to 3.7 kN. Results show that, under the same piston impact energy and a constant angular displacement of 15 degrees between impacts, the average drill bit rate of penetration is independent of the weight-on-bit, which suggests that the sweet spot is not caused by intrinsic properties of the bit-rock interface.
Keywords: optimum drilling state, experimental investigation, field experiments, laboratory experiments, down-the-hole percussive drilling
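The per-impact penetration and impact-frequency identification described above can be sketched in a few lines. This is a minimal illustration, not the study's code: the staircase displacement trace, sampling rate, hammer frequency, and step threshold below are all invented for demonstration.

```python
# Hedged sketch: identifying individual piston impacts and per-impact bit
# penetration from a high-rate bit-displacement record. The signal here is
# a synthetic noiseless staircase; real instrumented-rig data would need
# filtering before this step.

def detect_impacts(displacement, sample_rate_hz, step_threshold):
    """Return (impact_times, penetration_per_impact): an 'impact' is any
    sample-to-sample advance larger than step_threshold."""
    times, steps = [], []
    for i in range(1, len(displacement)):
        delta = displacement[i] - displacement[i - 1]
        if delta > step_threshold:
            times.append(i / sample_rate_hz)  # time of the impact [s]
            steps.append(delta)               # penetration at this impact [m]
    return times, steps

# Synthetic 1 kHz trace: the bit advances 0.5 mm every 50 ms (a 20 Hz hammer).
fs = 1000
trace, depth = [], 0.0
for i in range(1000):
    if i > 0 and i % 50 == 0:
        depth += 0.0005
    trace.append(depth)

times, steps = detect_impacts(trace, fs, step_threshold=1e-4)
freq = (len(times) - 1) / (times[-1] - times[0])  # impact frequency [Hz]
print(len(times), round(freq, 1))
```

A stretch of the record with no detected steps while thrust is applied would flag the non-percussive (shearing) regime mentioned in the abstract.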
Procedia PDF Downloads 90
500 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint
Authors: Juliane Spaak
Abstract:
A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming an asset’s revenue generation. Across the global real estate market, tenants increasingly demand that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the natural hazards likely within the asset’s expected lifespan, be they earthquakes, storms, fires, bushfires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.
Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient
Procedia PDF Downloads 73
499 Numerical Simulation of Two-Component Particle Flow in a Fluidized Bed
Authors: Wang Heng, Zhong Zhaoping, Guo Feihong, Wang Jia, Wang Xiaoyi
Abstract:
Flow of gas and particles in fluidized beds is complex and chaotic, which makes it difficult to measure and analyze experimentally. Some bed materials with poor fluidization performance are always fluidized together with a fluidizing medium, and the material and the medium differ in many properties, such as density, size, and shape. These factors make the dynamic process more complex and limit experimental research. Numerical simulation is an efficient way to describe gas-solid flow in a fluidized bed. One of the most popular numerical simulation methods is CFD-DEM, i.e., the computational fluid dynamics-discrete element method. In most studies, particle shapes are simplified as spheres; although sphere-shaped particles keep the particle calculations simple, the effects of different shapes are disregarded. In practical applications, however, two-component systems in fluidized beds contain both sphere-shaped and non-sphere-shaped particles, so their two-component flow needs to be studied. In this paper, the mixed flow of molded biomass particles and quartz in a fluidized bed was simulated. The integrated model was built on an Eulerian-Lagrangian approach that was modified to suit the non-sphere particles. The cylinder-shaped particles were constructed differently in the two numerical methods. In the CFD part, each cylinder-shaped particle was constructed as an agglomerate of fictitious small particles, meaning the small fictitious particles are gathered but not merged with each other. The diameter of a fictitious particle, d_fic, and its solid volume fraction inside a cylinder-shaped particle, α_fic (called the fictitious volume fraction), are introduced to modify the drag coefficient β via the volume fractions of the cylinder-shaped particles, α_cld, and the sphere-shaped particles, α_sph. 
In a computational cell, the void fraction ε can be expressed as ε = 1 − α_cld·α_fic − α_sph. The Ergun equation and the Wen and Yu equation were used to calculate β. In the DEM part, cylinder-shaped particles were built by the multi-sphere method, in which small sphere elements are merged with each other. A soft-sphere model was used to obtain the contact force between particles, and the total contact force on a cylinder-shaped particle was calculated as the sum of the forces on its small sphere particles. The model (size = 1 × 0.15 × 0.032 m³) contained 420,000 sphere-shaped particles (diameter = 0.8 mm, density = 1350 kg/m³) and 60 cylinder-shaped particles (diameter = 10 mm, length = 10 mm, density = 2650 kg/m³). Each cylinder-shaped particle was constructed from 2072 small sphere-shaped particles (d = 0.8 mm) in the CFD mesh and 768 sphere-shaped particles (d = 3 mm) in the DEM mesh. The lengths of the CFD and DEM cells are 1 mm and 2 mm, respectively. The superficial gas velocity was varied across models as 1.0 m/s, 1.5 m/s, and 2.0 m/s. The simulation results were compared with experimental results. The particles moved regularly in a fountain pattern, and the effect of superficial gas velocity on the cylinder-shaped particles was stronger than on the sphere-shaped particles. The results show that the present work provides an effective approach to simulating the flow of two-component particles.
Keywords: computational fluid dynamics, discrete element method, fluidized bed, multiphase flow
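The void-fraction expression and the Ergun / Wen and Yu drag switch described in the abstract can be sketched as follows. This is a hedged illustration: the constants follow the common Gidaspow-style formulation (Ergun below ε = 0.8, Wen and Yu above), and the gas properties and volume fractions are invented example values; the study's exact implementation may differ.

```python
import math

def void_fraction(alpha_cld, alpha_fic, alpha_sph):
    # epsilon = 1 - alpha_cld * alpha_fic - alpha_sph  (from the abstract)
    return 1.0 - alpha_cld * alpha_fic - alpha_sph

def drag_beta(eps, d_p, rho_g, mu_g, slip):
    """Interphase momentum-exchange coefficient beta [kg/(m^3 s)]."""
    if eps < 0.8:
        # Ergun (1952): dense regime
        return (150.0 * (1.0 - eps) ** 2 * mu_g / (eps * d_p ** 2)
                + 1.75 * (1.0 - eps) * rho_g * slip / d_p)
    # Wen & Yu (1966): dilute regime
    re = rho_g * eps * d_p * slip / mu_g
    cd = 0.44 if re >= 1000.0 else 24.0 / re * (1.0 + 0.15 * re ** 0.687)
    return 0.75 * cd * eps * (1.0 - eps) * rho_g * slip / d_p * eps ** -2.65

# Illustrative cell: 0.8 mm spheres in air with a 1.5 m/s slip velocity.
eps = void_fraction(alpha_cld=0.05, alpha_fic=0.6, alpha_sph=0.10)
beta = drag_beta(eps, d_p=0.8e-3, rho_g=1.2, mu_g=1.8e-5, slip=1.5)
print(round(eps, 2), round(beta, 1))
```

The denser the cell (lower ε), the larger β: the Ergun branch at ε = 0.5 returns a coefficient roughly an order of magnitude above the Wen and Yu branch at ε = 0.9 for the same slip velocity.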
Procedia PDF Downloads 327
498 Reliable and Error-Free Transmission through Multimode Polymer Optical Fibers in House Networks
Authors: Tariq Ahamad, Mohammed S. Al-Kahtani, Taisir Eldos
Abstract:
Optical communications technology has made enormous and steady progress for several decades, providing the key resource in our increasingly information-driven society and economy. Much of this progress has been in finding innovative ways to increase the data-carrying capacity of a single optical fiber. In this research article, we explore basic security and reliability issues in information transfer through the fiber infrastructure. Conspicuously, however, one potentially enormous source of improvement has been left untapped in these systems: fibers can easily support hundreds of spatial modes, but today’s commercial systems (single-mode or multi-mode) make no attempt to use these as parallel channels for independent signals. Bandwidth, performance, reliability, cost efficiency, resiliency, redundancy, and security are some of the demands placed on telecommunications today. Since their initial development, fiber optic systems have held the advantage in most of these requirements over copper-based and wireless telecommunications solutions. The largest obstacle preventing most businesses from implementing fiber optic systems was cost. With recent advancements in fiber optic technology and the ever-growing demand for more bandwidth, the cost of installing and maintaining fiber optic systems has been reduced dramatically. With so many advantages, including cost efficiency, fiber optic systems will continue to replace copper-based communications. This will also lead to an increase in the expertise and the technology needed for intruders to tap into fiber optic networks. As with every technology before it, fiber optics is subject to hacking and criminal manipulation. 
Research into fiber optic security vulnerabilities suggests that not everyone responsible for network security is aware of the different methods that intruders use to hack, virtually undetected, into fiber optic cables. With millions of miles of fiber optic cable stretching across the globe and carrying information including, but certainly not limited to, government, military, and personal information, such as medical records, banking information, driving records, and credit card information, awareness of fiber optic security vulnerabilities is essential and critical. Many articles and studies still suggest that fiber optics is expensive, impractical, and hard to tap; others argue that tapping is not only easily done but also inexpensive. This paper will briefly discuss the history of fiber optics, explain the basics of fiber optic technologies, and then discuss the vulnerabilities in fiber optic systems and how they can be better protected. Knowing the security risks and the options available may save a company a lot of embarrassment, time, and, most importantly, money.
Keywords: in-house networks, fiber optics, security risk, money
Procedia PDF Downloads 422
497 Let’s Work It Out: Effects of a Cooperative Learning Approach on EFL Students’ Motivation and Reading Comprehension
Authors: Shiao-Wei Chu
Abstract:
In order to enhance the ability of their graduates to compete in an increasingly globalized economy, the majority of universities in Taiwan require students to pass Freshman English in order to earn a bachelor's degree. However, many college students show low motivation in English class for several important reasons, including exam-oriented lessons, unengaging classroom activities, a lack of opportunities to use English in authentic contexts, and low levels of confidence in using English. Students’ lack of motivation in English classes is evidenced when students doze off, work on assignments from other classes, or use their phones to chat with others, play video games, or watch online shows. Cooperative learning aims to address these problems by encouraging language learners to use the target language to share individual experiences, cooperatively complete tasks, and build a supportive classroom learning community whereby students take responsibility for one another’s learning. This study includes approximately 50 student participants in a low-proficiency Freshman English class. Each week, participants will work together in groups of three or four students to complete various in-class interactive tasks. The instructor will employ a reward system that incentivizes students to be responsible for their own as well as their group mates’ learning. The rewards will be based on points that team members earn through formal assessment scores as well as assessment of their participation in weekly in-class discussions. The instructor will record each team’s week-by-week improvement; once a team meets or exceeds its own earlier performance, each of the team’s members will receive a reward from the instructor. This cooperative learning approach aims to stimulate EFL freshmen’s learning motivation by creating a supportive, low-pressure learning environment that builds learners’ self-confidence. 
Students will practice all four language skills; however, the present study focuses primarily on the learners’ reading comprehension. Data sources include in-class discussion notes, instructor field notes, one-on-one interviews, students’ midterm and final written reflections, and reading scores. Triangulation is used to determine themes and concerns, and an instructor-colleague analyzes the qualitative data to build interrater reliability. Findings are presented through the researcher’s detailed description. The instructor-researcher has developed this approach in the classroom over several terms, and its apparent success at motivating students inspired this research. The aims of this study are twofold: first, to examine the possible benefits of this cooperative approach in terms of students’ learning outcomes; and second, to help other educators adapt a more cooperative approach to their classrooms.
Keywords: freshman English, cooperative language learning, EFL learners, learning motivation, zone of proximal development
Procedia PDF Downloads 148
496 Coastal Modelling Studies for Jumeirah First Beach Stabilization
Authors: Zongyan Yang, Gagan K. Jena, Sankar B. Karanam, Noora M. A. Hokal
Abstract:
Jumeirah First beach, a 1.5 km segment of coastline, is one of the popular public beaches in Dubai, UAE. The stability of the beach has been affected by several coastal development projects, including The World, Island 2, and La Mer. A comprehensive stabilization scheme, comprising two composite groynes (of lengths 90 m and 125 m), modification of the northern breakwater of Jumeirah Fishing Harbour, and beach re-nourishment, was implemented by Dubai Municipality in 2012. However, the performance of the implemented stabilization scheme has been compromised by the La Mer project (built in 2016), which modified the wave climate at Jumeirah First beach. The objective of the coastal modelling studies is to establish the design basis for further beach stabilization scheme(s). Comprehensive coastal modelling studies were conducted to establish the nearshore wave climate, equilibrium beach orientations, and stable beach plan forms. Based on the outcomes of the modelling studies, a recommendation was made to extend the composite groynes to stabilize Jumeirah First beach. Wave transformation was performed following an interpolation approach, with wave transformation matrices derived from simulations of the possible range of wave conditions in the region. The Dubai coastal wave model was developed with MIKE21 SW. The offshore wave conditions were determined from PERGOS wave data at four offshore locations, with consideration of the spatial variation. The lateral boundary conditions corresponding to the offshore conditions, at the Dubai/Abu Dhabi and Dubai/Sharjah borders, were derived by applying the LitDrift 1D wave transformation module. The Dubai coastal wave model was calibrated against wave records at monitoring stations operated by Dubai Municipality. The wave transformation matrix approach was validated against nearshore wave measurements at a Dubai Municipality monitoring station in the vicinity of Jumeirah First beach. 
A typical one-year wave time series was transformed to seven locations in front of the beach to account for the variation in wave conditions, which are affected by adjacent and offshore developments. Equilibrium beach orientations were estimated by applying LitDrift to find the beach orientations with null annual littoral transport at the seven selected locations. The littoral transport calculations were compared with beach erosion/accretion quantities estimated from the beach monitoring program (twice a year, including bathymetric and topographic surveys). An innovative integral method was developed to outline the stable beach plan forms from the estimated equilibrium beach orientations, with a predetermined minimum beach width. The optimal lengths for the composite groyne extensions were recommended based on the stable beach plan forms.
Keywords: composite groyne, equilibrium beach orientation, stable beach plan form, wave transformation matrix
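The null-transport search behind the equilibrium orientations can be sketched as follows. LitDrift computes the annual littoral transport from the full transformed wave climate; in this hedged sketch a toy CERC-like transport function (an assumption, not the project's model) stands in for it, and a bisection finds the shoreline orientation at which the net annual transport vanishes. All directions and energies below are invented.

```python
import math

def annual_transport(orientation_deg, wave_climate):
    """Toy net annual littoral drift for a given shoreline orientation:
    each wave-climate component contributes in proportion to the sine of
    twice its angle of incidence (a CERC-like dependence)."""
    total = 0.0
    for wave_dir_deg, energy in wave_climate:
        incidence = math.radians(wave_dir_deg - orientation_deg)
        total += energy * math.sin(2.0 * incidence)
    return total

def equilibrium_orientation(wave_climate, lo=0.0, hi=90.0, tol=1e-6):
    """Bisection for the orientation with null annual transport,
    assuming exactly one sign change on [lo, hi]."""
    f_lo = annual_transport(lo, wave_climate)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if annual_transport(mid, wave_climate) * f_lo > 0.0:
            lo, f_lo = mid, annual_transport(mid, wave_climate)
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two illustrative wave components: (direction [deg], relative energy).
climate = [(30.0, 1.0), (60.0, 2.0)]
theta_eq = equilibrium_orientation(climate)
print(round(theta_eq, 2))
```

Repeating this search at each of the seven nearshore locations yields the set of equilibrium orientations from which the stable plan form is outlined.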
Procedia PDF Downloads 264
495 An Analysis of Gender Discrimination and Horizontal Hostility among Working Women in Pakistan
Authors: Nadia Noor, Farida Faisal
Abstract:
Horizontal hostility has been identified as a special type of workplace violence and refers to aggressive behavior inflicted by women on other women over gender issues, or on minority group members over minority issues. Many women, while eagerly wanting to succeed and investing vigorous effort to achieve success, harbor negative feelings toward other women succeeding in their careers. This phenomenon has been known as horizontal violence, horizontal hostility, lateral violence, indirect aggression, or, in Australian culture, the tall poppy syndrome. A 'tall poppy' is a visibly successful individual who attracts envy or hostility due to distinctive characteristics. Horizontal hostility therefore provides a theoretical foundation for examining why competition for the limited number of top-level management positions is fiercer among females than among males. In Pakistan, gender discrimination persists due to male dominance in society, and women do not enjoy basic equality rights in all aspects of life; they are oppressed at the social and organizational levels. As the government has been trying to enhance women's participation by providing more employment opportunities, the provision of a peaceful workplace is mandatory to enable aspiring females to achieve career success. This research study will help in understanding the antecedents, dimensions, and outcomes of the horizontal hostility that hinders the career success of competitive females. The present paper is a review paper, and various forms of horizontal hostility are discussed in detail. Different psychological and organizational-level drivers of horizontal hostility are explored through the literature. Psychological drivers include oppression, lack of empowerment, learned helplessness, and low self-esteem; organizational-level drivers include the sticky floor, the glass ceiling, a toxic work environment, and leadership role. 
Horizontal hostility among working women results in psychological and physical outcomes, including stress, low motivation, poor job performance, and intention to leave. The study recommends the provision of a healthy and peaceful work environment that will enable competent women to achieve career success. In this regard, concrete actions and effective steps are required to promote gender equality at the social and organizational levels. Government agencies need to ensure the enforcement of legal frameworks in order to provide a healthy working environment for women by reducing harassment and violence against them. Organizations must eradicate the drivers of horizontal hostility and provide women with a peaceful work environment, and training and mentoring must be provided to help women develop coping skills.
Keywords: gender discrimination, glass ceiling, horizontal hostility, oppression
Procedia PDF Downloads 134
494 Entrepreneurial Dynamism and Socio-Cultural Context
Authors: Shailaja Thakur
Abstract:
Managerial literature abounds with discussions of business strategies, success stories, and cases of failure, which indicate the parameters that should be considered in gauging the dynamism of an entrepreneur. Neoclassical economics has reduced entrepreneurship to a mere factor of production, driven solely by the profit motive, thus stripping the entrepreneur of all creativity and restricting his decision-making to mechanical calculations. His 'dynamism' is gauged simply by the amount of profit he earns, marginalizing any discussion of the means he employs to attain this objective. With theoretical backing, we have developed an Index of Entrepreneurial Dynamism (IED), giving weights to the different moves that the entrepreneur makes during his business journey. Strategies such as changes in product lines, markets, and technology are gauged as very important (weight of 4), while adaptations in terms of technology, raw materials used, and upgrades to the skill set are given a slightly lower weight of 3. Use of formal market analysis and diversification into related products are considered moderately important (weight of 2), and being a first-generation entrepreneur, employing managers, and having plans to diversify are taken to be only slightly important business strategies (weight of 1). The maximum that an entrepreneur can score on this index is 53. A semi-structured questionnaire is employed to solicit responses from the entrepreneurs on the various strategies they have employed during the course of their business. Binary as well as graded responses are obtained, weighted, and summed to give the IED. This index was tested on about 150 tribal entrepreneurs in Mizoram, a state of India, and was found to be highly effective in gauging their dynamism. The index has universal applicability but is devoid of the socio-cultural context, which is central to the success and performance of entrepreneurs. 
We hypothesize that a society that respects risk-taking, takes failures in its stride, glorifies entrepreneurial role models, and promotes merit and achievement is one that has a conducive socio-cultural environment for entrepreneurship. To obtain an idea of social acceptability, we put questions related to the social acceptability of business to another set of respondents from different walks of life: bureaucracy, academia, and other professional fields. A similar weighting technique is employed, and an index is generated. This index is used to discount the IED of the respondent entrepreneurs from that region/society. The methodology is being tested on samples of entrepreneurs from two very different socio-cultural milieus, a tribal society and a 'mainstream' society, with the hypothesis that entrepreneurs in the tribal milieu may show a higher level of dynamism than their counterparts in other regions. An entrepreneur who scores high on the IED and belongs to a society and culture that holds entrepreneurship in high esteem might not in reality be as dynamic as a person who shows similar dynamism in a relatively discouraging or even outright hostile environment.
Keywords: index of entrepreneurial dynamism, India, social acceptability, tribal entrepreneurs
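The weighting-and-summing step of the IED can be sketched as follows. The weight classes (4, 3, 2, 1) follow the abstract, but the item names, the response coding, and the respondent below are illustrative assumptions; only a subset of the instrument's items is listed (the full questionnaire's maximum is 53).

```python
# Hedged sketch of IED scoring: weights follow the abstract's four classes;
# the item list is an illustrative subset, not the actual questionnaire.
WEIGHTS = {
    # very important strategies (weight 4)
    "change_product_lines": 4, "change_markets": 4, "change_technology": 4,
    # important adaptations (weight 3)
    "adapt_technology": 3, "adapt_raw_materials": 3, "upgrade_skills": 3,
    # moderately important (weight 2)
    "formal_market_analysis": 2, "related_diversification": 2,
    # slightly important (weight 1)
    "first_generation": 1, "employs_managers": 1, "plans_to_diversify": 1,
}

def ied_score(responses):
    """Sum of weights over reported strategies; responses maps each item
    to 0/1 (binary) or a 0..1 grade, as the abstract allows both."""
    return sum(WEIGHTS[item] * grade for item, grade in responses.items())

respondent = {"change_markets": 1, "adapt_technology": 1,
              "formal_market_analysis": 1, "employs_managers": 1}
print(ied_score(respondent))
```

The social-acceptability index described above would then be computed the same way from the second respondent set and used to discount this score.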
Procedia PDF Downloads 258
493 Content Monetization as a Mark of Media Economy Quality
Authors: Bela Lebedeva
Abstract:
Characteristics of the Web as a channel of information dissemination, such as accessibility and openness, interactivity, and multimedia news, are becoming broader and reach the audience quickly, positively affecting the perception of content but blurring the understanding of journalistic work. As a result, audiences and advertisers continue migrating to the Internet. Moreover, online targeting allows monetizing not only the audience (as is customary for traditional media) but also the content and traffic, and more accurately. As users identify themselves with the qualitative characteristics of the new market, its actors are formed. A conflict of interests lies at the base of the economy of their relations, the problem of a traffic tax being one example. Meanwhile, content monetization also actualizes the fiscal interest of the state. The balance of supply and demand is often violated due to political risks, particularly under state capitalism, populism, and authoritarian methods of governing social institutions such as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has a better opportunity for discussion; however, the channel could have been much more successful under conditions of unlimited free speech. To avoid state pressure and censorship, its management decided to preserve at least its online performance, monetizing all of its content for the core audience. The study methodology was primarily based on analysis of journalistic content and on qualitative and quantitative analysis of the audience. Reconstructing the main events and relationships of actors in the market over the last six years, the researcher reached several conclusions. First, under conditions of content monetization, the capitalization of content quality will always strive toward the qualitative characteristics of the user, thereby identifying him. 
Vice versa, user demand generates high-quality journalism. The second conclusion follows from the first: the growth of technology, information noise, new political challenges, economic volatility, and the change of cultural paradigm all shape the content-payment model for the individual user. This model defines the user as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created; this feature is an indicator of the market as a self-regulated system. Monetized quality information is less popular than that of public broadcasting services, but its audience is able to make decisions. These very users sustain the niche sectors that have more potential for technological development, including ways of monetizing content. The third point of the study allows it to be developed in the discourse of media-space liberalization; this cultural phenomenon may open opportunities for developing the architecture of social and economic relations both locally and regionally.
Keywords: content monetization, state capitalism, media liberalization, media economy, information quality
Procedia PDF Downloads 250
492 A High Amylose-Content and High-Yielding Elite Line Is Favorable to Cook 'Nanhan' (Semi-Soft Rice) for Nursing Care Food Particularly for Serving Aged Persons
Authors: M. Kamimukai, M. Bhattarai, B. B. Rana, K. Maeda, H. B. Kc, T. Kawano, M. Murai
Abstract:
Most aged people older than 70 have some degree of difficulty in chewing and swallowing. According to the magnitude of this difficulty, gruel, "nanhan" (semi-soft rice), or ordinary cooked rice is generally served, particularly in sanatoriums and homes for the elderly in Japan. Nanhan is the name of a cooked rice used in Japan with a softness intermediate between gruel and ordinary cooked rice; it is boiled with an amount of water intermediate between those of the other two kinds of cooked rice. In the present study, nanhan was made at a rate of 240 g of water to 100 g of milled rice with an electric rice cooker. Murai developed a high-amylose-content, high-yielding elite line, 'Murai 79'. A sensory eating-quality test was performed on the nanhan and ordinary cooked rice of Murai 79 and the standard variety 'Hinohikari', a representative high-eating-quality variety in southern Japan. Panelists (6 to 14 persons) scored each cooked rice on six items, viz. taste, stickiness, hardness, flavor, external appearance, and overall evaluation. Grading (−3 to +3) was performed for each trait, with the value of the standard variety Hinohikari set at 0. Paddy rice produced in a farmer's field in 2013 and 2014 and in an experimental field of Kochi University in 2015 and 2016 was used for the sensory test. According to the results of the sensory eating-quality test for nanhan, Murai 79 was higher in overall evaluation than Hinohikari in all four years. The former was less sticky than the latter in the four years, but was statistically significantly harder throughout. In external appearance, the former was significantly higher than the latter in the four years. In taste, the former was significantly higher than the latter in 2014, but no significant difference was noticed between them in the other three years. There were no significant differences in flavor throughout the four years. 
Regarding amylose content, Murai 79 was higher than Hinohikari by 3.7% and 5.7% in 2015 and 2016, respectively. As for protein content, Murai 79 was higher than Hinohikari in 2015 but lower in 2016. Consequently, the nanhan of Murai 79 was harder and less sticky, keeping the shape of its grains, compared with that of Hinohikari, which may be due to its higher amylose content. Hence, the grains in nanhan made from Murai 79 may be recognized more easily in the mouth, which could make continuous mastication and deglutition easier, particularly for aged persons. Regarding ordinary cooked rice, Murai 79 was similar to or higher than Hinohikari in both overall evaluation and external appearance, despite its greater hardness and lower stickiness. Additionally, Murai 79 had a brown-rice yield 1.55 times that of Hinohikari, suggesting that it would enable the supply of inexpensive, high-quality rice for making nanhan, particularly for aged people in Japan.
Keywords: high-amylose content, high-yielding rice line, nanhan, nursing care food, sensory eating quality test
Procedia PDF Downloads 139
491 Guests’ Satisfaction and Intention to Revisit Smart Hotels: Qualitative Interviews Approach
Authors: Raymond Chi Fai Si Tou, Jacey Ja Young Choe, Amy Siu Ian So
Abstract:
Smart hotels can be defined as hotels with an intelligent system that, through digitalization and networking, integrates hotel management and service information. In addition, smart hotels feature high-end designs that integrate information and communication technology with hotel management, fulfilling guests' needs and improving the quality, efficiency, and satisfaction of hotel management. The purpose of this study is to identify factors that may influence guests' satisfaction and intention to revisit smart hotels, based on the service-quality measurement of the Lodging Quality Index and the extended UTAUT theory. The Unified Theory of Acceptance and Use of Technology (UTAUT) is adopted as a framework to explain technology acceptance and use. Since smart hotels are technology-based infrastructure hotels, UTAUT theory can serve as the theoretical background for examining guests' acceptance and use after staying in smart hotels. The UTAUT identifies four key drivers of the adoption of information systems: performance expectancy, effort expectancy, social influence, and facilitating conditions. The extended UTAUT expands the model to seven constructs for consideration: the four previously cited constructs of the UTAUT model together with three additional constructs, namely hedonic motivation, price value, and habit. Thus, the seven constructs from the extended UTAUT theory are adopted to understand guests' intention to revisit smart hotels. The service-quality model is also adopted and integrated into the framework to understand guests' intentions toward smart hotels. Few studies have examined the effect of service quality on guests' satisfaction and intention to revisit smart hotels. In this study, the Lodging Quality Index (LQI) is adopted to measure service quality in smart hotels. 
The UTAUT theory and the service-quality model are integrated because technological applications and services require more than one model to understand the complicated situation of customers' acceptance of new technology. Moreover, an integrated model can provide more insightful perspectives on the relationships among the constructs than could be obtained from a single model. For this research, ten in-depth interviews are planned. To confirm the applicability of the proposed framework and gain an overview of the guest experience of smart hotels from the hospitality industry, in-depth interviews with hotel guests and industry practitioners will be conducted. In terms of theoretical contribution, it is expected that integrating the UTAUT theory and the service-quality model will provide new insights into the factors that influence guests' satisfaction and intention to revisit smart hotels. Once this study identifies the influential factors, smart-hotel practitioners will understand which factors significantly influence their guests' satisfaction and intention to revisit. In addition, smart-hotel practitioners can provide an outstanding guest experience by improving their service quality along the dimensions identified by the service-quality measurement. This will benefit the sustainability of the smart-hotel business.
Keywords: intention to revisit, guest satisfaction, qualitative interviews, smart hotels
Procedia PDF Downloads 208