Search results for: “User acceptance of computer technology: A comparison of two theoretical models”
2402 Underrepresentation of Women in Management Information Systems: Gender Differences in Key Environmental Barriers
Authors: Asli Yagmur Akbulut
Abstract:
Despite a robust and growing job market and lucrative salaries, there is a global shortage of Information Technology (IT) professionals. To make matters worse, women continue to be underrepresented in the IT workforce and among IT degree holders. In today’s knowledge-based economy and society, it is extremely important to increase the presence of women in the IT field. In order to do so, it is necessary to reduce entry barriers and attract more women to pursue degrees in various IT fields, including the field of Management Information Systems (MIS). Even though MIS is considered to have a more feminine nature, women still tend to avoid majoring in this field. Unfortunately, there is a lack of research that investigates the specific factors that may deter women from pursuing a degree in MIS. To address this research gap, this study examined a set of key environmental barriers that might prevent women from pursuing an MIS degree and explored whether there were any gender differences between female and male students in terms of these key barriers. Based on a survey of 280 students enrolled in an introductory-level MIS course, the study empirically confirmed that there were significant differences between male and female students in terms of the key contextual barriers perceived. Female students demonstrated major concerns about gender-discrimination-related barriers, whereas male students were more concerned about negative social influences. Both male and female students were equally concerned about not being able to fit in well with other MIS majors. The findings have important implications for MIS programs, as the information gained can be used to design and implement specific intervention strategies to overcome the barriers and attract larger pools of women to the MIS discipline. The paper concludes with a discussion of the findings, implications, and future research directions.
Keywords: gender differences, MIS major, underrepresentation, women in IT
Procedia PDF Downloads 253
2401 Land Degradation Vulnerability Modeling: A Study on Selected Micro Watersheds of West Khasi Hills Meghalaya, India
Authors: Amritee Bora, B. S. Mipun
Abstract:
Land degradation is often used to describe the land environmental phenomena that reduce land’s original productivity both qualitatively and quantitatively. The study of land degradation vulnerability primarily deals with “Environmentally Sensitive Areas” (ESA) and the amount of topsoil loss due to erosion. In many studies, it is observed that the assessment of the existing status of land degradation is used to represent the vulnerability. Moreover, it is also noticed that in most studies, the primary emphasis of land degradation vulnerability is to assess its sensitivity to soil erosion only. However, the concept of land degradation vulnerability can have different objectives depending upon the perspective of the study. It shows the extent to which changes in land use and land cover can imprint their effect on the land. In other words, it represents the susceptibility of a piece of land to degrade its productive quality permanently or in the long run. It is also important to mention that land degradation vulnerability is not a single-factor outcome. It is a probability assessment to evaluate the status of land degradation and needs to consider both biophysical and human-induced parameters. To avoid the complexity of previous models in this regard, the present study has emphasized generating a simplified model to assess land degradation vulnerability in terms of current human population pressure, land use practices, and existing biophysical conditions. It is a “mixed-method” model, termed the land degradation vulnerability index (LDVi). It was originally inspired by the MEDALUS model (Mediterranean Desertification and Land Use), 1999, and Farazadeh’s 2007 revised version of it. It has followed the guidelines of the Space Application Center, Ahmedabad / Indian Space Research Organization for land degradation vulnerability.
The model integrates the climatic index (Ci), vegetation index (Vi), erosion index (Ei), land utilization index (Li), population pressure index (Pi), and cover management index (CMi) by giving equal weightage to each parameter. The final result shows that the very high vulnerability zone primarily indicates three prominent circumstances: land under continuous population pressure, a high concentration of human settlement, and a high amount of topsoil loss due to surface runoff within the study sites. As all the parameters of the model are amalgamated with equal weightage and further examined with the help of regression analysis, the LDVi model also provides a strong grasp of each parameter and how far each is competent to trigger the land degradation process.
Keywords: population pressure, land utilization, soil erosion, land degradation vulnerability
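The equal-weightage integration of the six sub-indices can be sketched as follows. This is a minimal illustration only: the geometric-mean combination and the 1-2 scoring range follow the MEDALUS convention that inspired the model, and the sample values are hypothetical, not the paper's stated formula or data.

```python
# Minimal sketch of an equal-weightage combination of the six
# LDVi sub-indices. The geometric mean and the 1-2 scoring range
# follow the MEDALUS convention; both are assumptions here.
def ldvi(ci, vi, ei, li, pi, cmi):
    indices = [ci, vi, ei, li, pi, cmi]
    product = 1.0
    for value in indices:
        product *= value
    # nth root of the product: each index carries equal weight
    return product ** (1.0 / len(indices))

# Hypothetical normalized scores (1 = least, 2 = most degraded):
print(round(ldvi(1.2, 1.5, 1.8, 1.4, 1.9, 1.3), 3))   # -> 1.496
```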
Procedia PDF Downloads 167
2400 Gas Monitoring and Soil Control at the Natural Gas Storage Site (Minerbio, Italy)
Authors: Ana Maria Carmen Ilie, Carmela Vaccaro
Abstract:
Gas migration through wellbore failure, in particular from abandoned wells, is repeatedly identified as the highest-risk mechanism. The vadose zone was subject to a monitoring system close to the wellbore at the Minerbio methane storage site. The new technology has been well developed and used with the purpose of providing reliable estimates of leakage parameters. Of these techniques, soil flux sampling at the soil surface via the accumulation chamber method, and soil flux sampling at a depth of 100 cm below the ground surface, have been important techniques for characterizing the gas concentrations at the gas storage site. We present results of soil gas concentrations of radon (Bq/m³), CO2 (%), CH4 (%), and O2 (%). Radon concentrations were measured with a Durridge Company, Inc. (USA) RAD7 instrument. For air and soil quality, we used an ETG Biogas instrument monitoring system with an NDIR CO2 and CH4 gas sensor and an electrochemical O2 gas sensor. The measurements started in September-October 2015, when no outliers were identified. The measurements continued in March, April, July, August, and September 2016, at almost the same times and in the same places around the gas storage site, with values measured for 15 minutes per sampling, to determine the gas concentrations, their distribution, and the relationship among gases and atmospheric conditions. At a depth of 100 cm, the maximum soil radon concentration was found to be 1770 ± 582 Bq/m³; the soil consists of 64.31% sand, 20.75% silt, and 14.94% clay, with 0.526 ppm of uranium. The maximum concentration (September 2016), in soil at 100 cm below the ground surface, with 83% sand, 8.96% silt, and 7.89% clay, was about 0.06% CH4, with 0.06% CH4 in the atmosphere at 40 °C. In the other months the values were in the range of 0.01% to 0.03% CH4.
Since no outliers were found at the gas storage site, soil-gas samples for isotopic analysis have not been taken.
Keywords: leakage gas monitoring, lithology, soil gas, methane
Procedia PDF Downloads 441
2399 Silver Nanoparticle Application in Food Packaging and Impacts on Food Safety and Consumer’s Health
Authors: Worku Dejene Bekele, András Marczika Csilla Sörös
Abstract:
Silver nanoparticles are silver metal with a size of 1-100 nm. The most common source of silver nanoparticles is inorganic salts. Silver can be ingested through our foods as nanoparticles and silver ions, whether as an additive, through migration from packaging, or, in some cases, as a pollutant. Silver nanoparticles are the most widely applied engineered nanomaterials, especially for their antimicrobial function. Ag nanoparticles offer different advantages for food safety, quality, and overall acceptability; however, they can affect the health of humans and animals, putting them at risk of health problems and environmental pollution. Silver nanoparticles have been used widely in food packaging technologies, especially in water treatment, meat and meat products, fruit, and many other food products, for the bio-preservation of food products. The primary goals of this review are to determine the safety and health impact of Ag nanoparticle application in food packaging, to analyze which human organs are most affected by this preservation technology, to assess the implications of nanoparticles for food safety, to determine the effects of nanoparticles on consumers' health, and to determine the impact of nanotechnology on product acceptability. Currently, much research has demonstrated that there is cause to believe that silver nanoparticles may have toxicological effects on biological organs and systems. Silver nanoparticles affect DNA expression, gastrointestinal barriers, the lungs, and other respiratory organs. Silver particles and ions can be highly toxic. In food-packaging applications, the food industry uses the finest particles, which can potentially affect the gastrointestinal tract (e.g., mucus production), DNA, the lungs, and other respiratory organs.
This review is targeted at demonstrating the knowledge gap in the industrial application of silver nanoparticles in food packaging and preservation and its health effects on the consumer.
Keywords: food preservatives, health impact, nanoparticle, silver nanoparticle
Procedia PDF Downloads 69
2398 Nanofluid-Based Emulsion Liquid Membrane for Selective Extraction and Separation of Dysprosium
Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari
Abstract:
Dysprosium is a rare earth element which is essential for many growing high-technology applications. Dysprosium, along with neodymium, plays a significant role in different applications such as metal halide lamps, permanent magnets, and the preparation of nuclear reactor control rods. The purification and separation of rare earth elements are challenging because of their similar chemical and physical properties. Among the various methods, membrane processes provide many advantages over conventional separation processes such as ion exchange and solvent extraction. In this work, the selective extraction and separation of dysprosium from aqueous solutions containing an equimolar mixture of dysprosium and neodymium by emulsion liquid membrane (ELM) was investigated. The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as carrier, and kerosene as base fluid, with a nitric acid solution as the internal aqueous phase. Factors affecting the separation of dysprosium, such as carrier concentration, MWCNT concentration, feed phase pH, and stripping phase concentration, were analyzed using the Taguchi method. The optimal experimental condition was obtained using analysis of variance (ANOVA) after 10 min of extraction. Based on the results, using MWCNT nanofluid in the ELM process increases extraction, owing to the higher stability of the membrane and enhanced mass transfer, and a separation factor of 6 for dysprosium over neodymium can be achieved under the optimum conditions. Additionally, the demulsification process was performed successfully, and the membrane phase was reused effectively under the optimum conditions.
Keywords: emulsion liquid membrane, MWCNT nanofluid, separation, Taguchi method
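The reported separation factor can be illustrated with a toy calculation. Defining the separation factor as the ratio of the two metals' distribution ratios is standard practice in solvent and membrane extraction, but the concentrations below are hypothetical, chosen only to reproduce a factor of about 6; they are not the paper's data.

```python
# Sketch of a separation-factor calculation of the kind behind
# "a separation factor of 6 for Dy over Nd". All concentrations
# are hypothetical illustration values.
def distribution_ratio(c_feed_initial, c_feed_final):
    """Ratio of extracted metal to metal remaining in the feed phase."""
    return (c_feed_initial - c_feed_final) / c_feed_final

d_dy = distribution_ratio(100.0, 25.0)   # Dy: 75% extracted -> D = 3.0
d_nd = distribution_ratio(100.0, 66.7)   # Nd: ~33% extracted -> D ~ 0.5
print(round(d_dy / d_nd, 1))             # separation factor, about 6
```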
Procedia PDF Downloads 288
2397 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial
Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa
Abstract:
Objective: To compare two protocols of Transcranial Magnetic Stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with Multiple Sclerosis (MS). Method: A clinical, crossover study, in which six adult individuals diagnosed with MS and spasticity in the lower limbs were randomized to receive one session of high-frequency (≥5 Hz) and one of low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot for the quadriceps muscle, with a one-week interval between the sessions. To assess spasticity, the Ashworth scale was applied, and the latency time (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps muscle were analyzed. Assessments were performed before and after each intervention. The difference between groups was analyzed using the Friedman test, with a significance level of 0.05 adopted. Results: All statistical analyses were performed using SPSS Statistics version 26, with the significance level established at p<0.05 and normality assessed with the Shapiro-Wilk test. Parametric data were represented as mean and standard deviation; non-parametric variables as median and interquartile range; and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed using the Ashworth scale for the 1 Hz (p=0.813) and 5 Hz (p=0.232) protocols for either limb. Motor evoked potential latency time: in the 5 Hz protocol, there was no significant change for the contralateral side from pre to post-treatment (p>0.05), and for the ipsilateral side, there was a decrease in latency time of 0.07 seconds (p<0.05); in the 1 Hz protocol, there was an increase of 0.04 seconds in the latency time (p<0.05) for the side contralateral to the stimulus, and for the ipsilateral side there was a decrease in the latency time of 0.04 seconds (p<0.05), with a significant difference between the contralateral (p=0.007) and ipsilateral (p=0.014) groups.
Central motor conduction time: in the 1 Hz protocol, there was no change for the contralateral side (p>0.05) or the ipsilateral side (p>0.05). In the 5 Hz protocol, there was a small decrease in latency time for the contralateral side (p<0.05), and for the ipsilateral side there was a decrease of 0.6 seconds in the latency time (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but it was observed that when the low-frequency protocol was performed, latency time increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.
Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation
Procedia PDF Downloads 89
2396 Gender and Asylum: A Critical Reassessment of the Case Law of the European Court of Human Rights and of United States Courts Concerning Gender-Based Asylum Claims
Authors: Athanasia Petropoulou
Abstract:
While there is a common understanding that a person’s sex, gender, gender identity, and sexual orientation shape every stage of the migration experience, theories of international migration had until recently not focused on exploring and incorporating a gender perspective in their analysis. In a similar vein, refugee law has long been the object of criticism for failing to recognize and respond appropriately to women’s and sexual minorities’ experiences of persecution. The present analysis attempts to depict the challenges faced by the European Court of Human Rights (ECtHR) and U.S. courts when adjudicating cases involving asylum claims with a gendered perspective. By providing a comparison between the adjudicating strategies of international and national jurisdictions, the article aims to identify common or distinctive approaches in addressing gender-based claims. The paper argues that, despite the different nature of the judicial bodies and the different legal instruments applied respectively, judges face similar challenges in this context and often fail to qualify and address the gendered dimensions of asylum claims properly. The ECtHR plays a fundamental role in safeguarding human rights protection in Europe, not only for European citizens but also for people fleeing violence, war, and dire living conditions. However, this role becomes more difficult to fulfill, not only because of the obvious institutional constraints but also because cases related to claims of asylum seekers concern a domain closely linked to State sovereignty. Amid the current “refugee crisis,” risk assessment performed by national authorities, as in the process of asylum determination, is shaped by wider geopolitical and economic considerations. The failure to recognize and duly address the gendered dimension of non-refoulement claims, one of the many shortcomings of these processes, is reflected in the decisions of the ECtHR. As regards U.S. case law, the study argues that U.S.
courts either fail to draw any connection between asylum claims and their gendered dimension or tend to approach gender-based claims through the lens of the “political opinion” or “membership of a particular social group” grounds of fear of persecution. This exercise becomes even more difficult given that U.S. asylum law inappropriately qualifies gender-based claims. The paper calls for more sociologically informed decision-making practices and for a more contextualized and relational approach in the assessment of the risk of ill-treatment and persecution. Such an approach is essential for unearthing the gendered patterns of persecution and addressing related claims effectively, thus securing the human rights of asylum seekers.
Keywords: asylum, European Court of Human Rights, gender, human rights, U.S. courts
Procedia PDF Downloads 108
2395 Hybrid Knowledge and Data-Driven Neural Networks for Diffuse Optical Tomography Reconstruction in Medical Imaging
Authors: Paola Causin, Andrea Aspri, Alessandro Benfenati
Abstract:
Diffuse Optical Tomography (DOT) is an emergent medical imaging technique which employs NIR light to estimate the spatial distribution of optical coefficients in biological tissues for diagnostic purposes, in a noninvasive and non-ionizing manner. DOT reconstruction is a severely ill-conditioned problem due to prevalent scattering of light in the tissue. In this contribution, we present our research in adopting hybrid knowledge-driven/data-driven approaches which exploit the existence of well-assessed physical models and build upon them neural networks integrating the availability of data. Namely, since in this context regularization procedures are mandatory to obtain a reasonable reconstruction [1], we explore the use of neural networks as tools to include prior information on the solution. 2. Materials and Methods The idea underlying our approach is to leverage neural networks to solve PDE-constrained inverse problems of the form q* = arg min_q D(y, ỹ), (1) where D is a loss function which typically contains a discrepancy measure (or data fidelity) term plus other possible ad-hoc designed terms enforcing specific constraints. In the context of inverse problems like (1), one seeks the optimal set of physical parameters q, given the set of observations y. Moreover, ỹ is the computable approximation of y, which may as well be obtained from a neural network, but also in a classic way via the resolution of a PDE with given input coefficients (forward problem, Fig.1 box ). Due to the severe ill-conditioning of the reconstruction problem, we adopt a two-fold approach: i) we restrict the solutions (optical coefficients) to lie in a lower-dimensional subspace generated by auto-decoder type networks.
This procedure forms priors of the solution (Fig.1 box ); ii) we use regularization procedures of the type q̂* = arg min_q D(y, ỹ) + R(q), where R(q) is a regularization functional depending on regularization parameters which can be fixed a priori or learned via a neural network in a data-driven modality. To further improve the generalizability of the proposed framework, we also infuse physics knowledge via soft penalty constraints (Fig.1 box ) in the overall optimization procedure (Fig.1 box ). 3. Discussion and Conclusion DOT reconstruction is severely hindered by ill-conditioning. The combined use of data-driven and knowledge-driven elements is beneficial and allows improved results to be obtained, especially with a restricted dataset and in the presence of variable sources of noise.
Keywords: inverse problem in tomography, deep learning, diffuse optical tomography, regularization
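A minimal numerical sketch of a regularized problem of the form (1): a quadratic data fidelity with a fixed Tikhonov regularizer R(q) = λ‖q‖². The linear toy forward operator and all values below are stand-ins for illustration; the paper's actual forward model is a PDE and its priors are network-based.

```python
import numpy as np

# Toy version of q_hat = argmin_q ||y - A q||^2 + lam * ||q||^2,
# i.e. data fidelity D plus Tikhonov regularizer R. A, y, and lam
# are synthetic stand-ins, not quantities from the paper.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))                # toy forward operator
q_true = rng.standard_normal(10)                 # "true" coefficients
y = A @ q_true + 0.01 * rng.standard_normal(20)  # noisy observations
lam = 0.1                                        # regularization weight

# Closed-form minimizer of the quadratic objective:
q_hat = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)
print(np.linalg.norm(q_hat - q_true))            # small reconstruction error
```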
Procedia PDF Downloads 74
2394 Efficiency of a Molecularly Imprinted Polymer for Selective Removal of Chlorpyrifos from Water Samples
Authors: Oya A. Urucu, Aslı B. Çiğil, Hatice Birtane, Ece K. Yetimoğlu, Memet Vezir Kahraman
Abstract:
Chlorpyrifos is an organophosphorus pesticide which can be found in environmental water samples. The efficiency and reuse of a molecularly imprinted polymer (chlorpyrifos-MIP) were investigated for the selective removal of chlorpyrifos residues. The MIP was prepared with UV-curing thiol-ene polymerization technology by using multifunctional thiol and ene monomers. The thiol-ene curing reaction is a radically induced process; however, unlike other photoinitiated polymerization processes, it is a free-radical reaction that proceeds by a step-growth mechanism involving two main steps: a free-radical addition followed by a chain transfer reaction. It assures very rapid formation of a uniform crosslinked network with low shrinkage, reduced oxygen inhibition during curing, and excellent adhesion. In this study, thiol-ene based UV-curable polymeric materials were prepared by mixing pentaerythritol tetrakis(3-mercaptopropionate), glyoxal bis diallyl acetal, polyethylene glycol diacrylate (PEGDA), and photoinitiator. Chlorpyrifos was added at a definite ratio to the prepared formulation. The chemical structure and thermal properties were characterized by FTIR and thermogravimetric analysis (TGA), respectively. The pesticide analysis was performed by gas chromatography-mass spectrometry (GC-MS). The influences of analytical parameters such as pH, sample volume, and analyte concentration were studied for the quantitative recovery of the analyte. The proposed MIP method was applied to the determination of chlorpyrifos in river and tap water samples. The use of the MIP provided a selective and easy solution for removing chlorpyrifos from the water.
Keywords: molecularly imprinted polymers, selective removal, thiol-ene, UV-curable polymer
Procedia PDF Downloads 302
2393 Comparison of Two Home Sleep Monitors Designed for Self-Use
Authors: Emily Wood, James K. Westphal, Itamar Lerner
Abstract:
Background: Polysomnography (PSG) recordings are regularly used in research and clinical settings to study sleep and sleep-related disorders. Typical PSG studies are conducted in professional laboratories and performed by qualified researchers. However, the number of sleep labs worldwide is disproportionate to the increasing number of individuals with sleep disorders like sleep apnea and insomnia. Consequently, there is a growing need for cheaper yet reliable means to measure sleep, preferably autonomously by subjects in their own homes. Over the last decade, a variety of devices for self-monitoring of sleep became available in the market; however, very few have been directly validated against PSG to demonstrate their ability to perform reliable automatic sleep scoring. Two popular mobile EEG-based systems that have published validation results, the DREEM 3 headband and the Z-Machine, have never been directly compared to one another by independent researchers. The current study aimed to compare the performance of the DREEM 3 and the Z-Machine to help investigators and clinicians decide which of these devices may be more suitable for their studies. Methods: 26 participants completed the study for credit or monetary compensation. Exclusion criteria included any history of sleep, neurological, or psychiatric disorders. Eligible participants arrived at the lab in the afternoon and received the two devices. They then spent two consecutive nights monitoring their sleep at home. Participants were also asked to keep a sleep log, indicating the time they fell asleep, the time they woke up, and the number of awakenings occurring during the night. Data from both devices, including detailed sleep hypnograms in 30-second epochs (differentiating Wake; combined N1/N2; N3; and Rapid Eye Movement sleep), were extracted and aligned upon retrieval. For analysis, the number of awakenings each night was defined as four or more consecutive wake epochs between sleep onset and termination.
Total sleep time (TST) and the number of awakenings were compared to subjects’ sleep logs to measure consistency with the subjective reports. In addition, the sleep scores from each device were compared epoch by epoch to calculate the agreement between the two devices using Cohen’s kappa. All analysis was performed using Matlab 2021b and SPSS 27. Results/Conclusion: Subjects consistently reported longer times spent asleep than the time reported by each device (M = 448 minutes for sleep logs compared to M = 406 and M = 345 minutes for the DREEM and Z-Machine, respectively; both ps < 0.05). Linear correlations between the sleep log and each device were higher for the DREEM than the Z-Machine for both TST and the number of awakenings; likewise, the mean absolute bias between the sleep logs and each device was higher for the Z-Machine for both TST (p<0.001) and awakenings (p<0.04). There was some indication that these effects were stronger for the second night compared to the first night. Epoch-by-epoch comparisons showed that the main discrepancies between the devices were in detecting N2 and REM sleep, while N3 had high agreement. Overall, the DREEM headband seems superior for reliably scoring sleep at home.
Keywords: DREEM, EEG, sleep monitoring, Z-Machine
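The epoch-by-epoch agreement measure used above can be sketched as follows. The stage codes and the two short hypnograms are illustrative toy data, not actual device output.

```python
import numpy as np

# Epoch-by-epoch agreement between two hypnograms via Cohen's kappa.
# Illustrative stage codes: 0 = Wake, 1 = N1/N2, 2 = N3, 3 = REM.
def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    stages = np.union1d(a, b)
    p_obs = np.mean(a == b)                    # observed agreement
    # chance agreement from each scorer's marginal stage frequencies
    p_chance = sum(np.mean(a == s) * np.mean(b == s) for s in stages)
    return (p_obs - p_chance) / (1.0 - p_chance)

# Two toy 30-second-epoch hypnograms (not real device output):
dreem = [0, 1, 1, 2, 2, 3, 3, 1, 0, 1]
zmach = [0, 1, 1, 2, 3, 3, 1, 1, 0, 1]
print(round(cohens_kappa(dreem, zmach), 3))    # -> 0.714
```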
Procedia PDF Downloads 107
2392 Performance Analysis of a Shell and Tube Heat Exchanger in the Organic Rankine Cycle Power Plant
Authors: Yogi Sirodz Gaos, Irvan Wiradinata
Abstract:
In the 500 kW Organic Rankine Cycle (ORC) power plant in Indonesia, an AFT-type (per the Tubular Exchanger Manufacturers Association – TEMA) shell and tube heat exchanger is used as a pre-heating system for the ORC’s hot water circulation system. The pre-heating source is waste heat recovered from brine water, which is tapped from a geothermal power plant. The brine water itself has a 5 MWth capacity, with an average temperature of 170 °C and a 7 barg working pressure. The aim of this research is to examine the performance of the heat exchanger in a 500 kW ORC power plant. The data for this research were collected during commissioning in the middle of December 2016. During commissioning, the inlet temperature and working pressure of the brine water to the shell and tube heat exchanger were 149 °C and 4.4 barg, respectively. Furthermore, the ΔT of the ORC system's hot water circulation across the heat exchanger was 27 °C, with an inlet temperature of 140 °C. The pressure in the hot circulation system dropped slightly from 7.4 barg to 7.1 barg. The flow rate of the hot water circulation was 80.5 m³/h. The presentation and discussion of a case study on the performance of the heat exchanger in the 500 kW ORC system is as follows: (1) the heat-exchange duty is 2,572 kW; (2) the log mean temperature difference of the heat exchanger is 13.2 °C; (3) the actual overall heat transfer coefficient is 1,020.6 W/m²·K; (4) the required overall heat transfer coefficient is 316.76 W/m²·K; and (5) the overdesign for this heat-exchanger performance is 222.2%. An analysis of the heat exchanger detailed engineering design (DED) is briefly discussed. To sum up, this research concludes that shell and tube heat exchanger technology demonstrated good performance as a pre-heating system for the ORC’s hot water circulation system.
Further research needs to be conducted to examine the performance of the heat exchanger system on the ORC’s hot water circulation system.
Keywords: shell and tube, heat exchanger, organic Rankine cycle, performance, commissioning
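The quoted performance figures can be cross-checked with a short calculation. Treating overdesign as the excess of the actual over the required overall coefficient is an assumption, but it is consistent with the quoted 222.2%; the implied area is a derived quantity, not a value stated in the abstract.

```python
# Cross-check of the performance figures quoted in the abstract.
# overdesign = (U_actual / U_required - 1) * 100% is assumed.
duty_kw = 2572.0      # heat-exchange duty, kW
lmtd_c = 13.2         # log mean temperature difference, deg C
u_actual = 1020.6     # actual overall coefficient, W/m2.K
u_required = 316.76   # required overall coefficient, W/m2.K

# Heat-transfer area implied by Q = U_required * A * LMTD:
area_m2 = duty_kw * 1e3 / (u_required * lmtd_c)
overdesign_pct = (u_actual / u_required - 1.0) * 100.0

print(round(area_m2, 1), round(overdesign_pct, 1))
```

The overdesign result reproduces the abstract's 222.2%, which confirms that the quoted duty, LMTD, and coefficients are mutually consistent.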
Procedia PDF Downloads 143
2391 Developing an Intelligent Table Tennis Ball Machine with Human Play Simulation for Technical Training
Authors: Chen-Chi An, Jun-Yi He, Cheng-Han Hsieh, Chen-Ching Ting
Abstract:
This research successfully developed an intelligent table tennis ball machine that simulates human play in all service situations. It is well known that an excellent ball machine can help the table tennis coach provide more efficient teaching, and give players good technical training and entertainment. An excellent ball machine should be able to serve all balls based on human play simulation, since conventional competitions all take place between people. In this work, two counter-rotating wheels are used to serve the balls, where changing the absolute rotating speeds of the two wheels and the difference in rotating speed between them adjusts the struck force and the rotating speed of the ball. The relationships between the absolute rotating speeds of the two wheels and the struck forces of the ball, as well as between the differences in rotating speed between the two wheels and the rotating speeds of the ball, were experimentally determined for technical development. The outlet speed, the ejected distance, and the rotating speed of the ball were measured by changing the absolute rotating speeds of the two wheels over a series of differences in rotating speed between the two wheels for calibration of the ball machine; the outlet speed and the ejected distance of the ball were further converted to the struck forces of the ball. In operation, the balls served by the intelligent ball machine were based on the received calibration curves with the help of the computer. The experiments used photosensitive devices to detect the outlet and rotating speeds of the ball. Finally, this research developed teaching programs for technical training using three ball machines and achieved more efficient training.
Keywords: table tennis, ball machine, human play simulation, counter-rotating wheels
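The wheel-speed relationships described above can be sketched with an idealized no-slip model: the ball's translation follows the mean of the two wheel surface speeds, and its spin follows their difference. The machine itself is calibrated experimentally, so the formulas and dimensions below are a first-order illustration, not the authors' calibration curves.

```python
# Idealized no-slip kinematics of a two-counter-rotating-wheel launcher.
# Wheel and ball radii are assumed illustrative dimensions.
def launch(omega_top, omega_bottom, wheel_radius, ball_radius):
    """Wheel speeds in rad/s; returns (ball speed m/s, ball spin rad/s)."""
    v_top = omega_top * wheel_radius          # surface speed, top wheel
    v_bottom = omega_bottom * wheel_radius    # surface speed, bottom wheel
    speed = (v_top + v_bottom) / 2.0          # mean surface speed -> translation
    spin = (v_top - v_bottom) / (2.0 * ball_radius)  # difference -> spin
    return speed, spin

# Equal wheel speeds: fast ball with essentially no spin.
print(launch(300.0, 300.0, 0.05, 0.02))
# Top wheel faster than bottom: same translation, added topspin.
print(launch(320.0, 280.0, 0.05, 0.02))
```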
Procedia PDF Downloads 429
2390 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder
Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi
Abstract:
With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such category is neurological disorders, which are rampant among the old-age population and increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder prevalent in such patients, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson’s disease patients, and tremor can also occur on its own (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In clinics, the assessment of tremor is done through a manual clinical rating task, such as the Unified Parkinson’s Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also affirmed the challenge of differentiating a Parkinsonian tremor from pure (essential) tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that keeps checking their health condition, coordinating with the clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for the automatic classification of tremor which can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device.
Four tasks were designed in accordance with the Unified Parkinson’s Disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast Fourier transform based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as an input feature for various classification tools for distinguishing the Parkinsonian and essential tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved with K-nearest neighbors and Support Vector Machine classifiers.
Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor
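The pipeline the abstract describes, frequency-domain features from a tri-axial wrist accelerometer fed to KNN and SVM classifiers, can be sketched as follows. This is a minimal illustration on synthetic signals; the tremor frequency bands, sampling rate and window length are assumptions for the sketch, not the study’s data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

fs = 100  # sampling rate in Hz (assumed)
rng = np.random.default_rng(0)

def make_window(freq, n=256):
    # synthetic tri-axial window: one dominant oscillation per axis plus noise
    t = np.arange(n) / fs
    return np.stack([np.sin(2 * np.pi * freq * t + p)
                     + 0.3 * rng.standard_normal(n) for p in (0, 1, 2)])

def features(win):
    # simple frequency-domain features: dominant frequency and mean spectral power per axis
    spec = np.abs(np.fft.rfft(win, axis=1))
    freqs = np.fft.rfftfreq(win.shape[1], 1 / fs)
    dom = freqs[spec[:, 1:].argmax(axis=1) + 1]  # skip the DC bin
    power = (spec ** 2).mean(axis=1)
    return np.concatenate([dom, power])

# class 0: Parkinsonian-like tremor (~4-6 Hz), class 1: essential-like (~8-12 Hz)
X = np.array([features(make_window(rng.uniform(4, 6))) for _ in range(40)]
             + [features(make_window(rng.uniform(8, 12))) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      stratify=y, random_state=1)
knn = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
svm = SVC().fit(Xtr, ytr)
print("KNN:", knn.score(Xte, yte), "SVM:", svm.score(Xte, yte))
```

On this cleanly separated synthetic data both classifiers score highly; real clinic recordings would of course require the richer wavelet and cross-correlation features the abstract lists.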
Procedia PDF Downloads 154
2389 Enhancing Inservice Education Training Effectiveness Using a Mobile Based E-Learning Model
Authors: Richard Patrick Kabuye
Abstract:
This study addresses the enhancement of in-service training programs as a tool for transforming the existing traditional approach of formal lectures/contact hours. This will be supported with a more versatile, robust, and remotely accessible means of mobile-based e-learning, as a support tool for the traditional means. A combination of various factors in education and the incorporation of an e-learning strategy proves to be a key factor in effective in-service education. These key factors need to be accounted for so as to maintain a credible co-existence of the programs with the prevailing social, economic and political environments. Effective in-service education focuses on immediate transformation of knowledge into practice over a good time period, active participation of attendees, and enabling pre-training planning, in-training assessment and post-training feedback analysis, which gives trainers knowledge of the applicability of the knowledge delivered. All the above require a more robust approach to attain success in implementation. Incorporating mobile technology in e-learning will enable the above to be brought together in a more coherent manner, as it is evident that participants have to take time off their duties to attend these training programs. Making the training mobile will save a lot of time, since participants would be in a position to follow certain modules while away from lecture rooms, get continuous program updates after completing the program, send feedback to instructors on knowledge gaps, and provide a conclusive evaluation of the entire program on a learn-as-you-work platform. This study will follow both qualitative and quantitative approaches in data collection, compounded by incorporating a mobile e-learning application built on Android.
Keywords: in-service, training, mobile, e-learning, model
Procedia PDF Downloads 219
2388 Visualization Tool for EEG Signal Segmentation
Authors: Sweeti, Anoop Kant Godiyal, Neha Singh, Sneh Anand, B. K. Panigrahi, Jayasree Santhosh
Abstract:
This work is about developing a tool for visualization and segmentation of electroencephalograph (EEG) signals based on frequency domain features. Changes in the frequency domain characteristics are correlated with changes in the mental state of the subject under study. The proposed algorithm provides a way to represent changes in mental state using the different frequency band powers in the form of a segmented EEG signal. Many segmentation algorithms with applications in brain-computer interfaces, epilepsy and cognition studies have been suggested in the literature and used for data classification, but the proposed method focuses mainly on better presentation of the signal, which makes it a good visualization tool for clinicians. The algorithm performs basic filtering using band-pass and notch filters in the range of 0.1-45 Hz. Advanced filtering is then performed by principal component analysis and a wavelet transform based de-noising method. Frequency domain features are used for segmentation, exploiting the fact that the spectral power of the different frequency bands describes the mental state of the subject. Two sliding windows are further used for segmentation: one provides the time scale and the other applies the segmentation rule. The segmented data is displayed second by second successively with different color codes, and the segment length can be selected as needed for the objective. The proposed algorithm has been tested on an EEG data set obtained from the University of California, San Diego’s online data repository. The proposed tool gives a better visualization of the signal in the form of segmented epochs of desired length representing the power spectrum variation in the data. The algorithm is designed in such a way that it takes the data points with respect to the sampling frequency for each time frame, so it can be extended for real-time visualization with a desired epoch length.
Keywords: de-noising, multi-channel data, PCA, power spectra, segmentation
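The filtering and sliding-window segmentation stages described above can be sketched in a few lines. This is a hedged illustration on a synthetic single-channel signal: the PCA/wavelet de-noising stage and the color-coded display are omitted, and the band edges, sampling rate and window length are assumptions, not the tool’s actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 256  # assumed sampling rate (Hz)
# synthetic one-channel "EEG": alpha-dominant (10 Hz) first half,
# beta-dominant (20 Hz) second half, plus broadband noise
t = np.arange(0, 10, 1 / fs)
x = (np.where(t < 5, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 20 * t))
     + 0.2 * np.random.default_rng(1).standard_normal(t.size))

# basic filtering stage from the abstract: band-pass 0.1-45 Hz
b, a = butter(4, [0.1, 45], btype="bandpass", fs=fs)
xf = filtfilt(b, a, x)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def dominant_band(seg):
    # label a window by the band with the largest spectral power
    f, p = welch(seg, fs=fs, nperseg=min(256, seg.size))
    powers = {name: p[(f >= lo) & (f < hi)].sum() for name, (lo, hi) in bands.items()}
    return max(powers, key=powers.get)

# sliding one-second windows, labelled second by second
labels = [dominant_band(xf[i * fs:(i + 1) * fs]) for i in range(10)]
print(labels)
```

In the actual tool each label would map to a color code in the display, and the epoch length would be user-selectable rather than fixed at one second.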
Procedia PDF Downloads 398
2387 An Evolutionary Perspective on the Role of Extrinsic Noise in Filtering Transcript Variability in Small RNA Regulation in Bacteria
Authors: Rinat Arbel-Goren, Joel Stavans
Abstract:
Cell-to-cell variations in transcript or protein abundance, called noise, may give rise to phenotypic variability between isogenic cells, enhancing the probability of survival under stress conditions. These variations may be introduced by post-transcriptional regulatory processes such as the stoichiometric degradation of target transcripts by non-coding small RNAs in bacteria. As a model system, we study the iron homeostasis network in Escherichia coli, in which the RyhB small RNA regulates the expression of various targets. Using fluorescence reporter genes to detect protein levels and single-molecule fluorescence in situ hybridization to monitor transcript levels in individual cells allows us to compare noise at both the transcript and protein levels. The experimental results and computer simulations show that extrinsic noise, through a feed-forward loop configuration, buffers the increase in variability introduced at the transcript level by iron deprivation, illuminating the important role that extrinsic noise plays during stress. Surprisingly, extrinsic noise also decouples the fluctuations of two different targets, in spite of RyhB being a common upstream factor degrading both. Thus, phenotypic variability increases under stress conditions by the decoupling of target fluctuations in the same cell rather than by an increase in the noise of each. We also present preliminary results on the adaptation of cells to prolonged iron deprivation in order to shed light on the evolutionary role of post-transcriptional downregulation by small RNAs.
Keywords: cell-to-cell variability, Escherichia coli, noise, single-molecule fluorescence in situ hybridization (smFISH), transcript
Procedia PDF Downloads 164
2386 Lighting Consumption Analysis in Retail Industry: Comparative Study
Authors: Elena C. Tamaş, Grațiela M. Țârlea, Gianni Flamaropol, Dragoș Hera
Abstract:
This article presents a comparative study of the electrical energy consumption for lighting in diverse types of large commercial buildings built in Romania after 2007, having 3, 4, 5 versus 8, 9, 10 operational years. Some buildings have building management systems (BMS) installed to monitor lighting performance from the opening days to the present, while others have chosen to implement only local meters. Firstly, for each analyzed building, the total required energy power and the energy consumption for lighting were calculated depending on the number of lamps, the unit power and the average daily running hours. All objects and installations were chosen depending on the destination/location of the lighting (exterior parking or access, interior or covered parking, building interior and building perimeter). Secondly, mechanical counters were installed on all lighting objects and installations, and digital meters were additionally installed on the ones linked to the BMS for better monitoring. Some efficient solutions are proposed to improve the power consumption: for example, operating only 1/3 of the lighting in the covered and exterior parking areas in those buildings where it can be done. This lighting share can be applied on each level, especially on night shifts. Another example is to use dimmers to reduce the light level depending on the work executed in the respective area, through which a 30% energy saving can be achieved. Using the right BMS to monitor the energy consumption according to the average daily operational hours, and replacing the non-performant lighting units with LED-technology or economical ones, might significantly increase the energy performance and reduce the energy consumption of the buildings.
Keywords: commercial buildings, energy performances, lighting consumption, maintenance
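The consumption bookkeeping described above reduces to lamp count × unit power × running hours, and the proposed measures scale that figure. A minimal sketch with entirely hypothetical numbers (lamp count, wattage and hours are illustrative, not figures from the study):

```python
# Hypothetical figures for one parking zone: the study computes lighting
# consumption from the number of lamps, the unit power and the average
# daily running hours.
lamps, unit_power_w, hours_per_day = 120, 58, 14
daily_kwh = lamps * unit_power_w * hours_per_day / 1000

# proposed measures from the abstract:
one_third_kwh = daily_kwh / 3         # run only 1/3 of the parking lighting
dimmed_kwh = daily_kwh * (1 - 0.30)   # ~30% saving from dimmers
print(daily_kwh, round(one_third_kwh, 2), round(dimmed_kwh, 2))
```

The same arithmetic, repeated per lighting destination (parking, interior, perimeter), yields the building-level totals compared in the study.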
Procedia PDF Downloads 261
2385 Co-Gasification of Petroleum Waste and Waste Tires: A Numerical and CFD Study
Authors: Thomas Arink, Isam Janajreh
Abstract:
The petroleum industry generates significant amounts of waste in the form of drill cuttings, contaminated soil and oily sludge. Drill cuttings are a product of off-shore drilling rigs, containing wet soil and total petroleum hydrocarbons (TPH). Contaminated soil comes from different on-shore sites and also contains TPH. The oily sludge is mainly residue or tank-bottom sludge from storage tanks. The two main treatment methods currently used are incineration and thermal desorption (TD). Thermal desorption is a method where the waste material is heated to 450ºC in an anaerobic environment to release volatiles; the condensed volatiles can be used as a liquid fuel. For the thermal desorption unit, dry contaminated soil is mixed with moist drill cuttings to generate a suitable mixture. Thermogravimetric analysis (TGA) of the TD feedstock found that less than 50% of the TPH is released; the discharged material is stored in landfill. This study proposes co-gasification of petroleum waste with waste tires as an alternative to thermal desorption. Co-gasification with a high-calorific material is necessary since the petroleum waste consists of more than 60 wt% ash (soil/sand), causing its calorific value to be too low for gasification. Since the gasification process occurs at 900ºC and higher, close to 100% of the TPH can be released, according to the TGA. This work consists of three parts: 1. a mathematical gasification model, 2. a reactive flow CFD model and 3. experimental work on a drop tube reactor. Extensive material characterization was done by means of proximate analysis (TGA), ultimate analysis (CHNOS flash analysis) and calorific value measurements (bomb calorimeter) for the input parameters of the mathematical and CFD models. The mathematical model is a zero-dimensional model based on Gibbs energy minimization together with Lagrange multipliers; it is used to find the product species composition (molar fractions of CO, H2, CH4, etc.)
for different tire/petroleum feedstock mixtures and equivalence ratios. The results of the mathematical model act as a reference for the CFD model of the drop-tube reactor. With the CFD model, the efficiency and product species composition can be predicted for different mixtures and particle sizes. Finally, both models are verified by experiments on a drop tube reactor (1540 mm long, 66 mm inner diameter, 1400 K maximum temperature).
Keywords: computational fluid dynamics (CFD), drop tube reactor, gasification, Gibbs energy minimization, petroleum waste, waste tires
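A zero-dimensional equilibrium model of this kind minimizes the total Gibbs energy of the product mixture subject to elemental balance constraints; here the constrained optimizer handles the Lagrange multipliers internally. The sketch below uses rough, purely illustrative Gibbs energies of formation and an assumed feed elemental composition, not the paper’s data:

```python
import numpy as np
from scipy.optimize import minimize

R, T = 8.314e-3, 1173.0  # kJ/(mol K), gasifier temperature ~900 C (assumed)

species = ["CO", "CO2", "H2", "H2O", "CH4"]
# rough standard Gibbs energies of formation at T (kJ/mol), illustrative only
g0 = np.array([-200.0, -396.0, 0.0, -190.0, 30.0])
# element balance matrix, rows: C, H, O
A = np.array([[1, 1, 0, 0, 1],
              [0, 0, 2, 2, 4],
              [1, 2, 0, 1, 0]], float)
# element totals from an assumed feed composition (mol of each species)
b = A @ np.array([1.0, 0.5, 1.0, 1.0, 0.2])

def gibbs(n):
    # total Gibbs energy of an ideal gas mixture with mole numbers n
    n = np.clip(n, 1e-12, None)
    return float(n @ (g0 + R * T * np.log(n / n.sum())))

res = minimize(gibbs, np.full(5, 0.5), method="SLSQP",
               bounds=[(1e-9, None)] * 5,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
x = res.x / res.x.sum()  # equilibrium mole fractions
print(dict(zip(species, x.round(3))))
```

The real model would evaluate temperature-dependent Gibbs energies for each species and sweep feedstock ratios and equivalence ratios; the constrained-minimization core stays the same.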
Procedia PDF Downloads 520
2384 725 Arcadia Street in Pretoria: A Pretoria Case Study Focusing on Urban Acupuncture
Authors: Konrad Steyn, Jacques Laubscher
Abstract:
South African urban design solutions are mostly aligned with European and North American models that are often not appropriate for addressing some of this country’s challenges, such as multiculturalism and decaying urban areas. Sustainable urban redevelopment in South Africa should be comprehensive in nature, sensitive in its manifestation, and robust and inclusive in order to achieve social relevance. This paper argues that the success of an urban design intervention is largely dependent on the public’s perceptions and expectations, and the way people participate in shaping their environments. The concept of sustainable urbanism is thus more comprehensive than, yet should undoubtedly include, methods of construction, material usage and climate control principles. The case study is a central element of this research paper. 725 Arcadia Street in Pretoria was originally commissioned as a food market structure. A starkly contrasting modernist building adjacent to the site forms the morphological background; built in 1969, it is a valuable part of Pretoria’s modernist fabric. It was realised early on that the project should not be a mere localised architectural intervention, but rather an occasion to revitalise the neighbourhood through urban regeneration. Because of the complex and comprehensive nature of the site and the rich cultural diversity of the area, a multi-faceted approach seemed the most appropriate response. The methodology for collating data consisted of a combination of literature reviews (regarding the historic original fauna and flora and current plants), observation (frequent site visits) and physical surveying at the neighbourhood level (physical location, connectivity to surrounding landmarks, as well as movement systems and pedestrian flows). This was followed by an exploratory design phase, culminating in the present redevelopment proposal.
Since built environment interventions are increasingly based on generalised normative guidelines, an approach focusing on urban acupuncture could serve as an alternative. Celebrating the specific urban condition, urban acupuncture offers an opportunity to influence the surrounding urban fabric and achieve urban renewal through physical, social and cultural mediation.
Keywords: neighbourhood, urban renewal, South African urban design solutions, sustainable urban redevelopment
Procedia PDF Downloads 496
2383 Investigation on Behaviour of Reinforced Concrete Beam-Column Joints Retrofitted with CFRP
Authors: Ehsan Mohseni
Abstract:
The aim of this thesis is to provide numerical analyses of reinforced concrete beam-column joints with/without CFRP (Carbon Fiber Reinforced Polymer) in order to achieve a better understanding of the behaviour of strengthened beam-column joints. A comprehensive literature survey prior to this study revealed that published studies are limited to a handful only; the results are inconclusive and some are even contradictory. Therefore, in order to improve on this situation, following that review, a numerical study was designed and performed as presented in this thesis. For the numerical study, the dimensions, end supports, and characteristics of the beam and column models were the same as those chosen in an experimental investigation performed previously, where ten beam-column joints were tested to failure. Finite element analysis is a useful tool in cases where analytical methods are not capable of solving the problem due to the complexities associated with it. The cyclic behaviour of FRP-strengthened reinforced concrete beam-column joints is such a case. The interaction of steel (longitudinal bars and stirrups), concrete and FRP, yielding of steel bars and stirrups, cracking of concrete, the redistribution of stresses as some elements unload due to crushing or yielding, and the confinement of concrete due to the presence of FRP are some of the issues that introduce complexity into the problem. Numerical solutions, however, can provide further information about the behaviour in lieu of costly experiments or complex closed-form solutions. This thesis presents the results of a numerical study on beam-column joints subjected to cyclic loads that are strengthened with CFRP wraps or strips in a variety of configurations. The analyses are performed with the Abaqus finite element program and are calibrated with the experiments.
A range of issues in beam-column joints, including the cracking load, the ultimate load, and the lateral load-displacement curves of the joints, is investigated. The numerical results for different configurations of strengthening are compared. Finally, the computed numerical results are compared with those obtained from experiments. The cracking load, the ultimate load and the lateral load-displacement curves obtained from the numerical analysis for all joints were in very good agreement with the corresponding experimental ones. The results obtained from the numerical analysis in most cases imply that this method is conservative and can therefore be used in design applications with confidence.
Keywords: numerical analysis, strengthening, CFRP, reinforced concrete joints
Procedia PDF Downloads 349
2382 Awareness among Medical Students and Faculty about Integration of Artifical Intelligence Literacy in Medical Curriculum
Authors: Fatima Faraz
Abstract:
BACKGROUND: While artificial intelligence (AI) provides new opportunities across a wide variety of industries, healthcare is no exception. AI can lead to advancements in how the healthcare system functions and improve the quality of patient care. Developing countries like Pakistan are lagging in the implementation of AI-based solutions in healthcare. This demands increased knowledge and AI literacy among healthcare professionals. OBJECTIVES: To assess the level of awareness among medical students and faculty about AI in preparation for teaching AI basics and data science applications in clinical practice in an integrated medical curriculum. METHODS: An online 15-question semi-structured questionnaire, previously tested and validated, was delivered to participants through convenience sampling. The questionnaire comprised 3 parts: the participant’s background knowledge, AI awareness, and attitudes toward AI applications in medicine. RESULTS: A total of 182 students and 39 faculty members from Rawalpindi Medical University, Pakistan, participated in the study. Only 26% of students and 46.2% of faculty members responded that they were aware of AI topics in clinical medicine. The major source of AI knowledge was social media (35.7%) for students and professional talks and colleagues (43.6%) for faculty members. 23.5% of participants answered that they personally had a basic understanding of AI. Students and faculty (60.1%) were interested in AI in the patient care and teaching domains. These findings parallel similar published AI survey results. CONCLUSION: This survey confirms interest among students and faculty in AI developments and technology applications in healthcare. Further studies are required in order to correctly fit AI into the integrated modular curriculum of medical education.
Keywords: medical education, data science, artificial intelligence, curriculum
Procedia PDF Downloads 101
2381 Renewable Energy Storage Capacity Rating: A Forecast of Selected Load and Resource Scenario in Nigeria
Authors: Yakubu Adamu, Baba Alfa, Salahudeen Adamu Gene
Abstract:
As the drive towards clean, renewable and sustainable energy generation is gradually reshaped by renewable penetration over time, energy storage has become an optimal solution for utilities looking to reduce transmission and capacity costs; capacity resources therefore need to be adjusted accordingly so that renewable energy storage has the opportunity to substitute for retiring conventional energy systems with higher capacity factors. Considering the Nigerian scenario, where over 80% of current Nigerian primary energy consumption is met by petroleum, electricity demand is set to more than double by mid-century relative to 2025 levels. With renewable energy penetration rapidly increasing, in particular biomass, hydro power, solar and wind energy, renewables are expected to account for the largest share of power output in the coming decades. Despite this rapid growth, the imbalance between load and resources has hindered the development of energy storage capacity; forecasting energy storage capacity will therefore play an important role in maintaining the balance between load and resources, including supply and demand. The degree to which this might occur, its timing and, more importantly, its sustainability are the subject matter of the current research. Here, we forecast the future energy storage capacity rating and evaluate the load and resource scenario in Nigeria. In doing so, we used scenario-based International Energy Agency models; the projected energy demand and supply structure of the country through 2030 is presented and analysed. Overall, this shows that in high renewable (solar) penetration scenarios in Nigeria, energy storage with 4-6 h duration can obtain over 86% capacity rating, with storage comprising about 24% of peak load capacity.
Therefore, the general takeaway from the current study is that most power systems currently in use have the potential to support fairly large penetrations of 4-6 hour storage as capacity resources prior to a substantial reduction in capacity ratings. The data presented in this paper are a crucial eye-opener for relevant government agencies towards developing these energy resources to tackle the present energy crisis in Nigeria. However, if the transformation of the Nigerian power system continues primarily through expansion of renewable generation, then longer-duration energy storage will be needed to qualify as a capacity resource. Hence, the analysis from the current study will help to determine whether and when long-duration storage becomes an integral component of the capacity mix that is expected in Nigeria by 2030.
Keywords: capacity, energy, power system, storage
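The notion of a capacity rating for 4-6 h storage can be illustrated with a toy peak-shaving calculation: a battery earns capacity credit to the extent that its power rating can hold the daily peak down within its energy limit. Everything below, the hourly load profile and the battery size, is an assumed illustration, not data from the study:

```python
import numpy as np

# assumed hourly load profile (MW) for one day, peaking in the afternoon
load = np.array([60, 55, 52, 50, 50, 55, 65, 75, 80, 85, 88, 90,
                 92, 95, 97, 100, 98, 96, 93, 88, 80, 72, 66, 62], float)

def peak_reduction(power_mw, duration_h):
    # lowest feasible peak target: the energy above the target must fit in
    # the battery (power * duration), and no hour may exceed the power rating
    best = 0.0
    for target in np.linspace(load.max(), load.min(), 501):
        excess = np.clip(load - target, 0.0, None)
        if (excess.max() <= power_mw + 1e-9
                and excess.sum() <= power_mw * duration_h + 1e-9):
            best = max(best, load.max() - target)
    return best

battery_mw = 10.0
rating_4h = peak_reduction(battery_mw, 4) / battery_mw  # capacity rating, 4 h
rating_6h = peak_reduction(battery_mw, 6) / battery_mw  # capacity rating, 6 h
print(rating_4h, rating_6h)
```

With this broad afternoon peak, the 4 h battery is energy-limited and earns slightly less than full credit, while the 6 h battery earns full credit; as storage penetration grows and the residual peak flattens, the same calculation yields the declining ratings the study anticipates.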
Procedia PDF Downloads 34
2380 Study on Natural Light Distribution Inside the Room by Using Sudare as an Outside Horizontal Blind in Tropical Country of Indonesia
Authors: Agus Hariyadi, Hiroatsu Fukuda
Abstract:
In a tropical country like Indonesia, especially in Jakarta, most of the energy consumption of buildings is for the cooling system; the second largest share is electric lighting. One passive design strategy that can be applied is optimizing the use of natural light from the sun. In this area, natural light is available almost every day throughout the year. Natural light has many effects on a building: it can reduce the need for electric lighting, but it also increases the external load. Another thing that has to be considered in the use of natural light is the visual comfort of occupants inside the room. Optimizing the effectiveness of natural light requires some modification of the facade design. An external shading device can minimize the external load introduced into the room, especially from direct solar radiation, which makes up 80% of the external energy load introduced into the building. It can also control the distribution of natural light inside the room and minimize glare in the perimeter zone of the room. One horizontal blind that can be used for this purpose is the sudare, a traditional Japanese blind that has long been used in Japanese traditional houses, especially in summer. In its original function, a sudare prevents direct solar radiation while still admitting natural ventilation. It has physical characteristics that can be utilized to optimize the effectiveness of natural light. In this research, different scales of sudare will be simulated using the EnergyPlus and DAYSIM simulation software. EnergyPlus is a whole-building energy simulation program that models both energy consumption (for heating, cooling, ventilation, lighting, and plug and process loads) and water use in buildings, while DAYSIM is a validated, RADIANCE-based daylighting analysis software that models the annual amount of daylight in and around buildings. The modelling will be done with the Ladybug and Honeybee plugins.
These are two open-source plugins for Grasshopper and Rhinoceros 3D that help explore and evaluate environmental performance and connect directly to the EnergyPlus and DAYSIM engines. Using the same model maintains consistency between the geometry used in EnergyPlus and in DAYSIM. The aim of this research is to find the facade design configuration that best reduces the external load from outside the building, minimizing the energy needed for the cooling system, while maintaining the natural light distribution inside the room to maximize visual comfort for occupants and minimize electrical energy consumption.
Keywords: facade, natural light, blind, energy
Procedia PDF Downloads 345
2379 From Intuitive to Constructive Audit Risk Assessment: A Complementary Approach to CAATTs Adoption
Authors: Alon Cohen, Jeffrey Kantor, Shalom Levy
Abstract:
The use of the audit risk model in auditing has faced limitations and difficulties, leading auditors to rely on a conceptual level of its application. The qualitative approach to assessing risks has resulted in divergent risk assessments, affecting the quality of audits and decision-making on the adoption of CAATTs. This study aims to investigate risk factors impacting the implementation of the audit risk model and propose a complementary risk-based instrument (KRIs) to form substantive risk judgments and mitigate the heightened risk of material misstatement (RMM). The study addresses the question of how risk factors impact the implementation of the audit risk model, improve risk judgments, and aid in the adoption of CAATTs. The study uses a three-stage scale development procedure involving a pretest and a subsequent study with two independent samples. The pretest involves an exploratory factor analysis, while the subsequent study employs confirmatory factor analysis for construct validation. Additionally, the authors test the ability of the KRIs to predict the audit efforts needed to mitigate the heightened RMM. Data was collected through two independent samples involving 767 participants. The collected data was analyzed using exploratory factor analysis and confirmatory factor analysis to assess scale validity and construct validation. The suggested KRIs, comprising two risk components and seventeen risk items, are found to have high predictive power in determining the audit efforts needed to reduce the RMM. The study validates the suggested KRIs as an effective instrument for risk assessment and decision-making on the adoption of CAATTs. This study contributes to the existing literature by implementing a holistic approach to risk assessment and providing a quantitative expression of assessed risks. It bridges the gap between intuitive risk evaluation and the theoretical domain, clarifying the mechanism of risk assessments.
It also helps improve the uniformity and quality of risk assessments, aiding audit standard-setters in issuing updated guidelines on CAATT adoption. A few limitations and recommendations for future research should be mentioned. First, the process of developing the scale was conducted in the Israeli auditing market, which follows the International Standards on Auditing (ISAs). Although ISAs are adopted in European countries, for greater generalization, future studies could focus on other countries that adopt additional or local auditing standards. Second, this study revealed risk factors that have a material impact on the assessed risk. However, there could be additional risk factors that influence the assessment of the RMM. Therefore, future research could investigate other risk segments, such as operational and financial risks, to bring a broader generalizability to our results. Third, although the sample size in this study fits acceptable scale development procedures and enables drawing conclusions from the body of research, future research may develop standardized measures based on larger samples to reduce the generation of equivocal results and suggest an extended risk model.
Keywords: audit risk model, audit efforts, CAATTs adoption, key risk indicators, sustainability
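The exploratory factor analysis stage of a scale development procedure like the one described can be sketched as follows. The synthetic responses below merely mimic the reported shape of the instrument, two latent risk factors behind seventeen items and a sample of 767, and are not the study’s data:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 767  # sample size reported in the abstract

# synthetic stand-in for 17 Likert-style risk items driven by 2 latent factors:
# items 1-9 load on factor 1, items 10-17 on factor 2
latent = rng.standard_normal((n, 2))
loadings = np.zeros((17, 2))
loadings[:9, 0] = rng.uniform(0.6, 0.9, 9)
loadings[9:, 1] = rng.uniform(0.6, 0.9, 8)
items = latent @ loadings.T + 0.5 * rng.standard_normal((n, 17))

fa = FactorAnalysis(n_components=2).fit(items)
est = fa.components_  # estimated (2 factors x 17 items) loading matrix
print(np.abs(est).round(2))
```

An EFA run like this suggests the number of components and the item-factor structure; the subsequent confirmatory factor analysis then tests that structure on an independent sample, as the study does.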
Procedia PDF Downloads 77
2378 Association Between Advanced Parental Age and Implantation Failure: A Prospective Cohort Study in Anhui, China
Authors: Jiaqian Yin, Ruoling Chen, David Churchill, Huijuan Zou, Peipei Guo, Chunmei Liang, Xiaoqing Peng, Zhikang Zhang, Weiju Zhou, Yunxia Cao
Abstract:
Purpose: This study aimed to explore the interaction of male and female age on implantation failure in in vitro fertilisation (IVF)/intracytoplasmic sperm injection (ICSI) treatments in couples following their first cycles, using the Anhui Maternal-Child Health Study (AMCHS). Methods: The AMCHS recruited 2042 infertile couples who were physically fit for IVF or ICSI treatment at the Reproductive Centre of the First Affiliated Hospital of Anhui Medical University between May 2017 and April 2021. This prospective cohort study analysed the data from 1910 cohort couples. A multivariate logistic regression model was used to identify the effect of male and female age on implantation failure after controlling for confounding factors. Male age and female age were each examined as continuous and categorical (20-<25, 25-<30, 30-<35, 35-<40, ≥40) predictors. Results: Logistic regression indicated that advanced maternal age was associated with increased implantation failure (P<0.001). There was evidence of an interaction between maternal age (30-<35 and ≥35) and paternal age (≥35) on implantation failure (p<0.05). Only when the male partner was ≥35 years old was increased maternal age associated with the risk of implantation failure. Conclusion: In conclusion, there was an additive effect of advanced parental age on implantation failure. The impact of advanced maternal age was only seen in the older paternal age group. The delay of childbearing in both men and women will be a serious public health issue that may contribute to a higher risk of implantation failure in patients needing assisted reproductive technology (ART).
Keywords: parental age, infertility, cohort study, IVF
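The modelling step, a logistic regression with a maternal-age × older-paternal-age interaction, can be sketched as below. The data are simulated under an assumed effect structure chosen only to mirror the reported finding (maternal age raises failure risk when the partner is ≥35); the coefficients and rates are illustrative, not from the AMCHS:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1910                        # number of analysed couples, per the abstract
f_age = rng.uniform(20, 45, n)  # maternal age (years)
m_age = rng.uniform(20, 50, n)  # paternal age (years)
older_m = (m_age >= 35).astype(float)

# assumed data-generating process: maternal age increases the log-odds of
# implantation failure only in the older paternal age group
logit = -1.0 + 0.08 * (f_age - 30) * older_m
failure = rng.random(n) < 1 / (1 + np.exp(-logit))

# main effects plus the interaction term
X = np.column_stack([f_age, older_m, f_age * older_m])
model = LogisticRegression(max_iter=1000).fit(X, failure)
print(model.coef_.round(3))  # the interaction coefficient comes out positive
```

A positive, significant interaction coefficient is what formalises the conclusion that the maternal-age effect is confined to couples with an older male partner; the actual study additionally adjusts for clinical confounders.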
Procedia PDF Downloads 154
2377 The Effect of Torsional Angle on Reversible Electron Transfer in Donor: Acceptor Frameworks Using Bis(Imino)Pyridines as Proxy
Authors: Ryan Brisbin, Hassan Harb, Justin Debow, Hrant Hratchian, Ryan Baxter
Abstract:
Donor-acceptor (DA) frameworks are crucial parts of any technology requiring charge transport. This type of behavior is ubiquitous across technologies, from semiconductors to solar panels. Currently, most DA systems involve metallic components, but research is being pursued to design fully organic DA systems to be used as both organic semiconductors and light-emitting diodes. These systems are currently composed of conductive polymers and salts. However, little is known about the effect that various physical aspects (size, torsional angle, electron density) have on the act of reversible charge transfer. Herein, the effect of torsional angle on reductive stability in bis(imino)pyridines is analyzed using a combination of single-crystal analysis and electrochemical peak current ratios from cyclic voltammetry. The computed free energies of reduction and electron attachment points were also investigated through density functional theory and natural ionization orbital theory to gain a greater understanding of the global effect torsional angles have on electron transfer in bis(imino)pyridines. The findings indicated that torsional angle is a multi-variable parameter affected by both local steric constraints and resonant electronic contributions. Torsional angles governed by local sterics demonstrated a negligible effect on electrochemical reversibility, while torsional angles affected by resonance were observed to significantly alter electrochemical reversibility.
Keywords: cyclic voltammetry, bis(imino)pyridines, structure-activity relationship, torsional angles
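Electrochemical reversibility is commonly judged by the ratio of anodic to cathodic peak currents, with a ratio near unity indicating a chemically reversible couple. A minimal sketch of that peak-ratio calculation on hypothetical, Gaussian-shaped sweep data (not the paper’s voltammograms):

```python
import numpy as np

def peak_current_ratio(i_forward, i_reverse):
    # |ip,a| / |ip,c|: a value near 1.0 suggests a reversible redox couple
    return np.max(np.abs(i_forward)) / np.max(np.abs(i_reverse))

# hypothetical current traces over the potential sweep (V); peak heights
# and positions are illustrative only
E = np.linspace(-0.5, 0.5, 200)
i_anodic = 1.00 * np.exp(-((E - 0.05) ** 2) / 0.01)
i_cathodic = -0.95 * np.exp(-((E + 0.05) ** 2) / 0.01)

ratio = peak_current_ratio(i_anodic, i_cathodic)
print(round(ratio, 3))
```

In practice the ratio is read from baseline-corrected experimental peaks at each scan rate; deviations from unity across scan rates are what signal the loss of reversibility the study correlates with torsional angle.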
Procedia PDF Downloads 238
2376 Clean Sky 2 Project LiBAT: Light Battery Pack for High Power Applications in Aviation – Simulation Methods in Early Stage Design
Authors: Jan Dahlhaus, Alejandro Cardenas Miranda, Frederik Scholer, Maximilian Leonhardt, Matthias Moullion, Frank Beutenmuller, Julia Eckhardt, Josef Wasner, Frank Nittel, Sebastian Stoll, Devin Atukalp, Daniel Folgmann, Tobias Mayer, Obrad Dordevic, Paul Riley, Jean-Marc Le Peuvedic
Abstract:
Electrical and hybrid aerospace technologies pose very challenging demands on the battery pack, especially with respect to weight and power. In the EU-funded Clean Sky 2 research project LiBAT, the consortium is currently building an ambitious prototype with state-of-the-art cells that shows the potential of an intelligent pack design with a high level of integration, especially with respect to thermal management and power electronics. For the latter, innovative multi-level-inverter technology is used to realize the required power conversion functions with reduced equipment. In this talk, the key approaches and methods of the LiBAT project will be presented and central results shown. Special focus will be placed on the simulation methods used to support the early design and development stages from an overall system perspective. The applied methods can efficiently handle multiple domains and deal with different time and length scales, thus allowing the analysis and optimization of overall- or sub-system behavior. It will be shown how these simulations provide valuable information and insights for the efficient evaluation of concepts. As a result, the construction and iteration of hardware prototypes have been reduced and development cycles shortened. Keywords: electric aircraft, battery, Li-ion, multi-level-inverter, Novec
Procedia PDF Downloads 166
2375 Status Quo Bias: A Paradigm Shift in Policy Making
Authors: Divyansh Goel, Varun Jain
Abstract:
Classical economics works on the principle that people are rational and analytical in their decision making and that their choices fall in line with the most suitable option according to the dominant strategy in a standard game-theory model. This model has failed on many occasions to predict the behavior and dealings of supposedly rational people, giving evidence of other underlying heuristics and cognitive biases at work. This paper probes the study of these factors, which fall under the umbrella of behavioral economics, and through them explores the solution to a problem that many nations presently face. There has long been a wide disparity between the number of people holding favorable views on organ donation and the actual number of people signing up for it. This paper, in its entirety, is an attempt to shape public policy so as to increase the number of organ donations that take place and close the gap between the people who believe in signing up for organ donation and the ones who actually do. The key assumption here is that in cases of cognitive dissonance, where people hold conflicting views, they have a tendency to go with the default choice. This tendency is a well-documented cognitive bias known as the status quo bias. The research in this project involves an analysis of mandated-choice models of organ donation with two case studies: the first of the opt-in system of Germany (where people have to explicitly sign up for organ donation), and the second of the opt-out system of Austria (where every citizen is an organ donor from birth and has to explicitly sign up for refusal). Additionally, a detailed analysis is presented of the experiment performed by Eric J. Johnson and Daniel G. Goldstein; their research, as well as independent experiments such as that by Tsvetelina Yordanova of the University of Sofia, yields similar results. The conclusion is that the general population has, by and large, no rigid stand on organ donation and is susceptible to the status quo bias, which in turn can determine whether a large majority of people consent to organ donation or not. Thus, in our paper, we throw light on how governments can use the status quo bias to drive positive social change by making policies in which everyone is by default marked an organ donor, which will, in turn, save the lives of people who would otherwise die on organ transplantation waitlists and save the economy countless hours of productivity. Keywords: behavioral economics, game theory, organ donation, status quo bias
Procedia PDF Downloads 300
2374 The Spread of Drugs in Higher Education
Authors: Wantana Amatariyakul, Chumnong Amatariyakul
Abstract:
The research aims to examine the spread of drugs in higher education, especially amphetamine, which is rapidly increasing in Thai society, together with its causes and effects, from a sociological perspective, in order to explain, prevent, control, and solve the problem. The students who participated in this research are regular students of Rajamangala University of Technology Isan, Khon Kaen Campus. The data were collected using questionnaires, group discussions, and in-depth interviews. The quantitative data were analyzed using frequency, percentage, mean, and standard deviation, while the qualitative data were analyzed using content analysis. The results showed that the majority of the sample group demonstrated knowledge and understanding of drug abuse. Despite some uncertainty, the majority of the sample presumed that amphetamine, marijuana, and kratom (Mitragyna speciosa Korth.) would be the drugs most likely to be abused. The most common reasons given for first drug use were, respectively, curiosity, persuasion by friends, and the desire to relax or escape problems in life. The adverse effects reported among drug users were deteriorating health, loss of money, and worsening mental states. The reasons respondents gave for avoiding drugs or refusing drugs offered by friends were, respectively: not wanting to disappoint or upset their family members, fear of rejection by family members, fear of being arrested by the police, and fear of losing their educational opportunity and ruining their future. Students therefore defended themselves against drug addiction by refusing to try any drugs. Besides this, knowledge about the danger and harm of drugs persuaded them to stay away from drugs. Keywords: drugs, higher education, drug addiction, spread of drugs
Procedia PDF Downloads 319
2373 Development and Experimental Evaluation of a Semiactive Friction Damper
Authors: Juan S. Mantilla, Peter Thomson
Abstract:
Seismic events may cause discomfort for building occupants, structural damage, or even building collapse. Traditional design aims to reduce the dynamic response of structures by increasing stiffness, thus increasing construction costs and design forces. Structural control systems arise as an alternative to reduce these dynamic responses. Commonly used control systems in buildings are passive friction dampers, which add energy dissipation through damping mechanisms induced by sliding friction between their surfaces. Passive friction dampers are usually implemented on the diagonals of braced buildings, but such devices have the disadvantage that they are optimal only for a range of sliding forces, and outside that range their efficiency decreases. This implies that each passive friction damper is designed, built, and commercialized for a specific sliding/clamping force, at which the damper shifts from a locked state to a slipping state, where it dissipates energy through friction. The risk of having the efficiency of the device vary with the sliding force is that the dynamic properties of the building can change as a result of many factors, including damage caused by a seismic event. In that case the expected forces in the building can change, considerably reducing the efficiency of a damper designed for a specific sliding force. Moreover, when a seismic event occurs, the forces on each floor vary over time, which means that the damper's efficiency is not optimal at all times. Semi-active friction devices adapt their sliding force to keep the damper in the slipping phase as much as possible; because of this, the effectiveness of the device depends on the control strategy used. This paper deals with the development and performance evaluation of a low-cost semiactive variable friction damper (SAVFD), in reduced scale, to reduce vibrations of structures subjected to earthquakes.
The SAVFD consists of (1) a hydraulic brake adapted to (2) a servomotor, which is controlled with (3) an Arduino board, acquires accelerations or displacements from (4) sensors on the floors immediately above and below, and is powered by (5) a supply that can be a pair of common batteries. A test structure, based on a benchmark structure for structural control, was designed and constructed. The SAVFD and the structure were experimentally characterized, and a numerical model of both was developed based on this dynamic characterization. Decentralized control algorithms were modeled and later tested experimentally on a shaking table using earthquake and frequency-chirp signals. The controlled structure with the SAVFD achieved reductions greater than 80% in relative displacements and accelerations compared to the uncontrolled structure. Keywords: earthquake response, friction damper, semiactive control, shaking table
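A decentralized semi-active friction control law of the kind described above can be sketched as follows: each damper commands its clamping force from local floor measurements only, so that the friction force scales with the local demand and the damper tends to stay in the slipping phase. The gain, friction coefficient, and actuator limit below are illustrative assumptions, not the paper's values.

```python
# Sketch of a decentralized semi-active friction control law: the clamping
# (normal) force is set from the drift between adjacent floors, and the
# resulting friction force opposes the relative sliding velocity.
# MU, GAIN, and N_MAX are illustrative assumptions.
import numpy as np

MU = 0.4        # friction coefficient of the brake surfaces (assumed)
GAIN = 800.0    # gain mapping interstory drift (m) to clamping force (N)
N_MAX = 200.0   # actuator limit on the clamping force (N)

def clamping_force(x_lower, x_upper):
    """Clamping-force command from the drift between adjacent floors."""
    drift = x_upper - x_lower
    return min(GAIN * abs(drift), N_MAX)

def friction_force(x_lower, x_upper, v_rel):
    """Friction force the damper applies, opposing the relative velocity."""
    N = clamping_force(x_lower, x_upper)
    return -MU * N * np.sign(v_rel)

# Example: 5 mm interstory drift, floors sliding apart
print(friction_force(0.000, 0.005, v_rel=0.02))  # → -1.6 (N, opposes motion)
```

Because the force command depends only on the two adjacent floors, each damper can run this law on its own Arduino-class controller without a global communication network, which is what makes the scheme decentralized.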
Procedia PDF Downloads 378