584 Frailty and Quality of Life among Older Adults: A Study of Six LMICs Using SAGE Data
Authors: Mamta Jat
Abstract:
Background: Increased longevity has raised the percentage of the global population aged 60 years or over. With this “demographic transition” towards ageing, an “epidemiologic transition” is also taking place, characterised by a growing share of non-communicable diseases in the overall disease burden. Many older adults are therefore ageing with chronic disease and high levels of frailty, which often results in lower quality of life. Although frailty is increasingly common in older adults, preventing, or at least delaying, the onset of late-life adverse health outcomes and disability is necessary to maintain the health and functional status of the ageing population. This study uses SAGE data to assess levels of frailty, their socio-demographic correlates, and their relation with quality of life in the LMICs of India, China, Ghana, Mexico, Russia and South Africa from a comparative perspective. Methods: The data come from the multi-country Study on Global AGEing and Adult Health (SAGE), which consists of nationally representative samples of older adults in six low- and middle-income countries (LMICs): China, Ghana, India, Mexico, the Russian Federation and South Africa. Only respondents aged 50 years and over are considered. A logistic regression model was used to assess the correlates of frailty. Multinomial logistic regression was used to study the effect of frailty on quality of life (QOL), controlling for the effect of socio-economic and demographic correlates. Results: Among all the countries, India has the highest mean frailty in males (0.22) and females (0.26), and China the lowest mean frailty in males (0.12) and females (0.14). The odds of being frail increase with age across all the countries. In India, China and Russia, the odds of frailty are higher among rural older adults, whereas in Ghana, South Africa and Mexico rural residence is protective against frailty. Among all countries, China has the highest percentage (71.46%) of frail people with low QOL, whereas Mexico has the lowest percentage (36.13%). The risk of having low and middle QOL is significantly (p<0.001) higher among frail elderly compared to non-frail elderly across all countries after controlling for socio-demographic correlates. Conclusion: Women and older age groups have higher frailty levels than men and younger adults in LMICs. Mean frailty scores demonstrated a strong inverse relationship with education and income gradients, with lower levels of education and wealth associated with higher levels of frailty. These patterns are consistent across all LMICs. These data support a significant role of frailty, with all other influences controlled, in low QOL as measured by the WHOQOL index. Future research needs to build on this evolving concept of frailty in an effort to improve quality of life for the frail elderly population in LMIC settings. Keywords: ageing, elderly, frailty, quality of life
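As an illustration of the modelling step described above, the following is a minimal sketch (not the authors' code; the variable names and simulated data are assumptions) of a multinomial logistic model of QOL category on a frailty indicator plus socio-demographic controls, from which relative-risk ratios of the kind reported can be read off.

```python
# Minimal sketch of the multinomial logistic model described above.
# Simulated, illustrative data only; variable names are assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "frail": rng.integers(0, 2, n),      # 1 = frail, 0 = non-frail
    "age": rng.integers(50, 90, n),      # respondents aged 50+
    "female": rng.integers(0, 2, n),
    "rural": rng.integers(0, 2, n),
})
# Simulated outcome: 0 = high QOL (reference), 1 = middle, 2 = low
latent = 0.8 * df["frail"] + 0.02 * (df["age"] - 50) + rng.normal(0, 1, n)
df["qol"] = pd.cut(latent, [-np.inf, 0.5, 1.2, np.inf], labels=[0, 1, 2]).astype(int)

X = sm.add_constant(df[["frail", "age", "female", "rural"]])
model = sm.MNLogit(df["qol"], X).fit(disp=False)
print(model.summary())
# np.exp(model.params) gives relative-risk ratios, e.g. the increased risk of
# low vs. high QOL associated with frailty after controlling for covariates.
```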
Procedia PDF Downloads 288
583 Implementing the WHO Air Quality Guideline for PM2.5 Worldwide can Prevent Millions of Premature Deaths Per Year
Authors: Despina Giannadaki, Jos Lelieveld, Andrea Pozzer, John Evans
Abstract:
Outdoor air pollution by fine particles ranks among the top ten global health risk factors that can lead to premature mortality. Epidemiological cohort studies, mainly conducted in the United States and Europe, have shown that long-term exposure to PM2.5 (particles with an aerodynamic diameter less than 2.5 μm) is associated with increased mortality from cardiovascular and respiratory diseases and lung cancer. Fine particulates can cause health impacts even at very low concentrations; previously, no concentration level had been defined below which health damage can be fully prevented. The World Health Organization ambient air quality guidelines suggest an annual mean PM2.5 concentration limit of 10 μg/m3. Populations in large parts of the world, especially in East and Southeast Asia and in the Middle East, are exposed to levels of fine particulate pollution that far exceed the World Health Organization guidelines. The aim of this work is to evaluate the implementation of recent air quality standards for PM2.5 in the EU, the US and other countries worldwide and to estimate what measures will be needed to substantially reduce premature mortality. We investigated premature mortality attributed to fine particulate matter (PM2.5) among adults ≥ 30 years and children < 5 years, applying a high-resolution global atmospheric chemistry model combined with epidemiological concentration-response functions. The latter are based on the methodology of the Global Burden of Disease for 2010, assuming a ‘safe’ annual mean PM2.5 threshold of 7.3 μg/m3. We estimate global premature mortality by PM2.5 at 3.15 million/year in 2010. China is the leading country with about 1.33 million, followed by India with 575 thousand and Pakistan with 105 thousand. For the European Union (EU) we estimate 173 thousand and for the United States (US) 52 thousand in 2010. Based on sensitivity calculations, we tested the gains from PM2.5 control by applying the air quality guidelines (AQG) and standards of the World Health Organization (WHO), the EU, the US and other countries. To estimate potential reductions in mortality rates, we take into consideration the deaths that cannot be avoided after the implementation of PM2.5 upper limits, due to the contribution of natural sources (mainly airborne desert dust) to total PM2.5 and therefore to mortality. The annual mean EU limit of 25 μg/m3 would reduce global premature mortality by 18%, while within the EU the effect is negligible, indicating that the standard is largely met and that stricter limits are needed. The new US standard of 12 μg/m3 would reduce premature mortality by 46% worldwide, 4% in the US and 20% in the EU. Implementing the WHO AQG of 10 μg/m3 would reduce global premature mortality by 54%, by 76% in China and by 59% in India. In the EU and US, mortality would be reduced by 36% and 14%, respectively. Hence, following the WHO guideline would prevent 1.7 million premature deaths per year. Sensitivity calculations indicate that even small changes at the lower PM2.5 standards can have major impacts on global mortality rates. Keywords: air quality guidelines, outdoor air pollution, particulate matter, premature mortality
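The attributable-mortality logic used in this kind of assessment can be sketched as follows. This is an illustrative simplification, with a single log-linear relative-risk function and invented grid-cell data; it is not the GBD 2010 integrated exposure-response functions nor the study's model output.

```python
# Illustrative sketch of PM2.5-attributable mortality: a relative risk RR(c)
# above a counterfactual concentration gives an attributable fraction
# AF = (RR - 1) / RR, which is applied to baseline deaths.
import numpy as np

def relative_risk(conc, threshold=7.3, beta=0.006):
    """Simple log-linear RR above the counterfactual threshold (a stand-in
    for the GBD 2010 integrated exposure-response functions)."""
    excess = np.maximum(conc - threshold, 0.0)
    return np.exp(beta * excess)

def attributable_deaths(conc, baseline_deaths, threshold=7.3, beta=0.006):
    rr = relative_risk(conc, threshold, beta)
    af = (rr - 1.0) / rr                      # population attributable fraction
    return af * baseline_deaths

# Hypothetical grid cells: annual-mean PM2.5 (ug/m3) and baseline adult deaths
pm25 = np.array([8.0, 35.0, 60.0, 95.0])
deaths = np.array([5e4, 2e5, 3e5, 1e5])

now = attributable_deaths(pm25, deaths).sum()
# Re-evaluate after capping concentrations at an air-quality standard,
# e.g. the WHO guideline of 10 ug/m3 (natural contributions would remain).
capped = attributable_deaths(np.minimum(pm25, 10.0), deaths).sum()
print(f"avoided premature deaths: {now - capped:,.0f} ({1 - capped / now:.0%})")
```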
Procedia PDF Downloads 310
582 Analysing the Stability of Electrical Grid for Increased Renewable Energy Penetration by Focussing on LI-Ion Battery Storage Technology
Authors: Hemendra Singh Rathod
Abstract:
Frequency is, among other factors, one of the governing parameters for maintaining electrical grid stability. The quality of an electrical transmission and supply system is mainly described by the stability of the grid frequency. Over the past few decades, energy generation by intermittent sustainable sources like wind and solar has increased significantly worldwide. Consequently, keeping the associated deviations in grid frequency within safe limits has been gaining importance so that the balance between demand and supply can be maintained. The lithium-ion battery energy storage system (Li-ion BESS) has been a promising technology for tackling the challenges associated with grid instability. BESS is, therefore, an effective response to the ongoing debate about whether it is feasible to have an electrical grid constantly functioning on one hundred percent renewable power in the near future. In recent years, large-scale manufacturing and capital investment in battery production processes have made Li-ion battery systems cost-effective and increasingly efficient. Li-ion systems require very little maintenance, are independent of geographical constraints, and are easily scalable. The paper highlights the use of stationary and moving BESS for balancing electrical energy, thereby maintaining grid frequency with a rapid response. Moving BESS technology, as implemented in the selected railway network in Germany, is considered here as an exemplary concept for demonstrating the same functionality in the electrical grid system. Further, applications of Li-ion batteries such as self-consumption of wind and solar parks and their ancillary services, storage of wind and solar energy during low demand, black start, island operation, and residential home storage offer a solution to effectively integrate renewables and support Europe’s future smart grid. The EMT software tool DIgSILENT PowerFactory was used to model an electrical transmission system with 100% renewable energy penetration. The stability of such a transmission system has been evaluated together with BESS within a defined frequency band. The transmission system operators (TSOs) have the superordinate responsibility for system stability and must also coordinate with the other European transmission system operators. Frequency control is implemented by the TSO by maintaining a balance between electricity generation and consumption. Li-ion battery systems are here seen as flexible, controllable loads and as flexible, controllable generation for balancing energy pools. Thus, with a Li-ion battery storage solution, frequency-dependent load shedding (the automatic, gradual disconnection of loads from the grid) and frequency-dependent electricity generation (the automatic, gradual connection of BESS to the grid) are used as security measures to maintain grid stability. The paper emphasizes the use of stationary and moving Li-ion battery storage for meeting the demands of maintaining grid frequency and stability in near-future operations. Keywords: frequency control, grid stability, li-ion battery storage, smart grid
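A minimal sketch of the frequency-dependent charging and discharging behaviour described above is given below: a simple droop controller with a deadband. The power rating, deadband and droop values are assumptions for illustration, and this is not the DIgSILENT PowerFactory model used in the paper.

```python
# Illustrative droop-with-deadband control for a Li-ion BESS: the unit charges
# when grid frequency rises above the deadband (surplus generation) and
# discharges when it falls below it (surplus load), within its rating.
def bess_power_setpoint(freq_hz, p_max_mw=10.0, f_nominal=50.0,
                        deadband_hz=0.02, droop_hz=0.2):
    """Return BESS active-power setpoint in MW.
    Positive = discharge (support the grid), negative = charge."""
    df = f_nominal - freq_hz
    if abs(df) <= deadband_hz:
        return 0.0                  # frequency within deadband: stay idle
    # proportional (droop) response, saturated at the converter rating
    p = p_max_mw * (df - deadband_hz * (1 if df > 0 else -1)) / droop_hz
    return max(-p_max_mw, min(p_max_mw, p))

for f in (49.75, 49.95, 50.00, 50.05, 50.30):
    print(f"{f:.2f} Hz -> {bess_power_setpoint(f):+.2f} MW")
```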
Procedia PDF Downloads 150
581 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the overall industrial-chain delivery link and accounts for a large proportion of total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points has become one of the trends in the development of modern logistics. Due to the discrete and heterogeneous nature and spatial distribution of customer demand, which lead to higher delivery failure rates and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of COVID-19, contactless delivery has become a new hotspot, which has also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that consider pick-up points and lockers. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We consider the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed integer linear programming (MILP) model that aims to minimize the total cost, which includes the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large-neighbourhood search algorithm to solve LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large-neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impact on the results and suggested helpful managerial insights for courier companies. Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
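For readers unfamiliar with the metaheuristic, the following is a compact, generic skeleton of an adaptive large-neighbourhood search of the kind referred to above. The destroy/repair operators, the simulated-annealing acceptance rule, and the weight-update scheme are placeholders supplied by the caller; this does not reproduce the authors' hybrid algorithm.

```python
# Generic ALNS skeleton: repeatedly destroy and repair the current solution,
# accept candidates with a simulated-annealing criterion, and adapt operator
# weights according to how often each operator produced improvements.
import math
import random

def alns(initial_solution, cost, destroy_ops, repair_ops,
         iters=5000, start_temp=100.0, cooling=0.999):
    best = current = initial_solution
    weights_d = [1.0] * len(destroy_ops)
    weights_r = [1.0] * len(repair_ops)
    temp = start_temp
    for _ in range(iters):
        d = random.choices(range(len(destroy_ops)), weights_d)[0]
        r = random.choices(range(len(repair_ops)), weights_r)[0]
        candidate = repair_ops[r](destroy_ops[d](current))
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate                 # simulated-annealing acceptance
        if cost(candidate) < cost(best):
            best = candidate
            score = 3.0                         # reward: new global best
        elif delta < 0:
            score = 2.0                         # reward: improved current solution
        else:
            score = 0.5                         # small reward: accepted/rejected move
        weights_d[d] = 0.8 * weights_d[d] + 0.2 * score   # adaptive weight update
        weights_r[r] = 0.8 * weights_r[r] + 0.2 * score
        temp *= cooling
    return best

# For LRP-PFHD, destroy operators would remove customers or close facilities,
# and repair operators would reinsert them while respecting locker capacities
# and the separate vehicle types for ordinary and refrigerated products.
```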
Procedia PDF Downloads 78
580 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics, and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge Industry 4.0 and 5.0 technology. AGILEHAND aims to create advanced technologies for autonomously sorting, handling, and packaging soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. Both projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them through interviews, questionnaires, literature review and case studies. Findings and results will be presented in the handbook “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement”. The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance on correctly adhering to European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS). Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 65
579 Virtual Experiments on Coarse-Grained Soil Using X-Ray CT and Finite Element Analysis
Authors: Mohamed Ali Abdennadher
Abstract:
Digital rock physics, an emerging field leveraging advanced imaging and numerical techniques, offers a promising approach to investigating the mechanical properties of granular materials without extensive physical experiments. This study focuses on using X-Ray Computed Tomography (CT) to capture the three-dimensional (3D) structure of coarse-grained soil at the particle level, combined with finite element analysis (FEA) to simulate the soil's behavior under compression. The primary goal is to establish a reliable virtual testing framework that can replicate laboratory results and offer deeper insights into soil mechanics. The methodology involves acquiring high-resolution CT scans of coarse-grained soil samples to visualize internal particle morphology. These CT images undergo noise reduction, thresholding, and watershed segmentation to isolate individual particles, preparing the data for subsequent analysis. A custom Python script is employed to extract particle shapes and conduct a statistical analysis of the particle size distribution. The processed particle data then serve as the basis for creating a finite element model comprising approximately 500 particles subjected to one-dimensional compression. The FEA simulations explore the effects of mesh refinement and friction coefficient on stress distribution at grain contacts. A multi-layer meshing strategy is applied, featuring finer meshes at inter-particle contacts to accurately capture mechanical interactions and coarser meshes within particle interiors to optimize computational efficiency. Despite the known challenges in parallelizing FEA, this study demonstrates that an appropriate domain-level parallelization strategy can achieve significant scalability, allowing simulations to extend to very high core counts. The results show a strong correlation between the finite element simulations and laboratory compression test data, validating the effectiveness of the virtual experiment approach. Detailed stress distribution patterns reveal that soil compression behavior is significantly influenced by frictional interactions, with frictional sliding, rotation, and rolling at inter-particle contacts being the primary deformation modes under low to intermediate confining pressures. These findings highlight that CT data analysis combined with numerical simulations offers a robust method for approximating soil behavior, potentially reducing the need for physical laboratory experiments. Keywords: X-Ray computed tomography, finite element analysis, soil compression behavior, particle morphology
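The image-processing chain described above (denoising, thresholding, watershed segmentation, and particle-size statistics) might look roughly as follows with scikit-image. This is an assumed sketch for illustration, not the authors' custom script; the filter choices and parameter values are placeholders.

```python
# Illustrative CT processing chain: denoise, threshold, watershed-segment
# touching grains, then extract an equivalent-diameter size distribution.
import numpy as np
from scipy import ndimage as ndi
from skimage import feature, filters, measure, restoration, segmentation

def segment_particles(ct_volume):
    """ct_volume: 3D numpy array of CT grey values."""
    denoised = restoration.denoise_tv_chambolle(ct_volume, weight=0.1)  # noise reduction
    binary = denoised > filters.threshold_otsu(denoised)                # thresholding
    distance = ndi.distance_transform_edt(binary)
    peaks = feature.peak_local_max(distance, labels=binary, min_distance=5)
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = segmentation.watershed(-distance, markers, mask=binary)    # split touching grains
    return labels

def particle_size_distribution(labels, voxel_size=1.0):
    props = measure.regionprops(labels)
    # equivalent spherical diameter per particle, for the size-distribution plot
    volumes = np.array([p.area for p in props]) * voxel_size ** 3
    return (6.0 * volumes / np.pi) ** (1.0 / 3.0)
```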
Procedia PDF Downloads 29
578 Hospital Malnutrition and its Impact on 30-day Mortality in Hospitalized General Medicine Patients in a Tertiary Hospital in South India
Authors: Vineet Agrawal, Deepanjali S., Medha R., Subitha L.
Abstract:
Background. Hospital malnutrition is a highly prevalent issue and is known to increase morbidity, mortality, length of hospital stay, and cost of care. In India, studies on hospital malnutrition have been restricted to ICU, post-surgical, and cancer patients. We designed this study to assess the impact of hospital malnutrition on 30-day post-discharge and in-hospital mortality in patients admitted to the general medicine department, irrespective of diagnosis. Methodology. All patients aged above 18 years admitted to the medicine wards, excluding medico-legal cases, were enrolled in the study. Nutritional assessment was done within 72 h of admission using the Subjective Global Assessment (SGA), which classifies patients into three categories: severely malnourished, mildly/moderately malnourished, and normal/well-nourished. Anthropometric measurements such as Body Mass Index (BMI), triceps skin-fold thickness (TSF), and mid-upper arm circumference (MUAC) were also performed. Patients were followed up during the hospital stay and 30 days after discharge through telephonic interview, and their final diagnosis, comorbidities, and cause of death were noted. Multivariate logistic regression and Cox regression models were used to determine whether nutritional status at admission independently impacted mortality at one month. Results. The prevalence of malnourishment by SGA in our study was 67.3% among 395 hospitalized patients, of whom 155 (39.2%) were moderately malnourished and 111 (28.1%) were severely malnourished. Of the 395 patients, 61 (15.4%) died: 30 died in the hospital and 31 died within 1 month of discharge from hospital. On univariate analysis, malnourished patients had significantly higher mortality (24.3% in the 111 category C patients) than well-nourished patients (10.1% in the 129 category A patients), with OR 9.17, p-value 0.007. On multivariate logistic regression, age and a higher Charlson Comorbidity Index (CCI) were independently associated with mortality. A higher CCI indicates a higher burden of comorbidities on admission, and the CCI in the expired patient group (mean = 4.38) was significantly higher than that of the surviving cohort (mean = 2.85). Though malnutrition significantly contributed to higher mortality on univariate analysis, it was not an independent predictor of outcome on multivariate logistic regression. Length of hospitalisation was also longer in the malnourished group (mean = 9.4 d) compared to the well-nourished group (mean = 8.03 d), with a trend towards significance (p = 0.061). None of the anthropometric measurements (BMI, MUAC, or TSF) showed any association with mortality or length of hospitalisation. Inference. The results of our study highlight the issue of hospital malnutrition in medicine wards and reiterate that malnutrition contributes significantly to patient outcomes. We found that SGA performs better than anthropometric measurements in assessing under-nutrition. We are of the opinion that the heterogeneity of the study population by diagnosis was probably the primary reason why malnutrition by SGA was not found to be an independent risk factor for mortality. Strategies to identify high-risk patients at admission and treat malnutrition in the hospital and post-discharge are needed. Keywords: hospitalization outcome, length of hospital stay, mortality, malnutrition, subjective global assessment (SGA)
Procedia PDF Downloads 149
577 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-Azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. Extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of this molecule, such as 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the presence of an aza group at the 2-position. However, while solution-phase studies suggest the relevance of these molecules, no experimental gas-phase studies have been reported yet. In the current investigation, we present the first gas-phase spectroscopic data for 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We have employed state-of-the-art laser spectroscopic methods such as fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is positioned at 33910 cm-1, whereas the origin band corresponding to the S1 ← S0 transition of 2,7-DAI-H2O is positioned at 33074 cm-1. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited-state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively demanding molecule to study. The ionization potential is reduced by 0.14 eV in the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, in comparison with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by 729 and 280 cm-1, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared (FDIR) and resonant ion-dip infrared (IDIR) spectroscopy and found at 3523 and 3467 cm-1, respectively. The lower value of νNH in the electronically excited state implies a higher acidity of the group compared to the ground state. Moreover, extensive computational analysis suggests that the energy barrier in the excited state decreases significantly as the number of catalytic solvent molecules (S = H2O, NH3) and their polarity increase. We found that ammonia is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores. Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy
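As a quick numerical cross-check of the quantities quoted above, the short script below restates the reported values and interconverts them (using 1 cm⁻¹ ≈ 1.2398 × 10⁻⁴ eV); it is purely illustrative and not part of the experiment.

```python
# Unit conversions and shifts for the reported spectroscopic quantities.
EV_PER_CM1 = 1.239841984e-4   # 1 cm^-1 expressed in eV (hc in eV.cm)

origin_bare = 33910           # S1 <- S0 origin of bare 2,7-DAI, cm^-1
origin_water = 33074          # S1 <- S0 origin of 2,7-DAI-H2O, cm^-1
print("cluster red-shift:", origin_water - origin_bare, "cm^-1")        # -836 cm^-1
print("bare origin in eV:", round(origin_bare * EV_PER_CM1, 3), "eV")    # ~4.20 eV

ip_bare, ip_cluster = 8.92, 8.78   # ionization potentials, eV
print("IP lowering by one water:", round(ip_cluster - ip_bare, 2), "eV")  # -0.14 eV

nh_ground, nh_excited = 3523, 3467  # N-H stretch, cm^-1 (FDIR vs. IDIR)
print("excited-state N-H shift:", nh_excited - nh_ground, "cm^-1")        # -56 cm^-1
```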
Procedia PDF Downloads 75
576 Improving Literacy Level Through Digital Books for Deaf and Hard of Hearing Students
Authors: Majed A. Alsalem
Abstract:
In our contemporary world, literacy is an essential skill that enables students to manage efficiently the many assignments they receive that require understanding of, and knowledge about, the world around them. In addition, literacy enhances student participation in society, improving their ability to learn about the world and interact with others, and facilitating the exchange of ideas and the sharing of knowledge. Therefore, literacy needs to be studied and understood in its full range of contexts. It should be seen as a set of social and cultural practices with historical, political, and economic implications. This study aims to rebuild and reorganize the instructional designs that have been used for deaf and hard-of-hearing (DHH) students in order to improve their literacy level. The most critical part of this process is the teachers; therefore, teachers will be the central focus of this study. Teachers’ main job is to increase students’ performance by fostering strategies through collaborative teamwork, higher-order thinking, and effective use of new information technologies. Teachers, as primary leaders in the learning process, should be aware of new strategies, approaches, methods, and frameworks of teaching in order to apply them to their instruction. Literacy, viewed more broadly, means the acquisition of adequate and relevant reading skills that enable progression in one’s career and lifestyle while keeping up with current and emerging innovations and trends. Moreover, the nature of literacy is changing rapidly. The notion of new literacy has changed the traditional meaning of literacy, namely the ability to read and write. New literacy refers to the ability to effectively and critically navigate, evaluate, and create information using a range of digital technologies. The term new literacy has received a lot of attention in the education field over the last few years. New literacy provides multiple ways of engagement, especially for those with disabilities and other diverse learning needs. For example, using a number of online tools in the classroom provides students with disabilities new ways to engage with the content, take in information, and express their understanding of this content. This study will provide teachers with high-quality training sessions to meet the needs of DHH students and so increase their literacy levels. This study will build a platform between regular instructional designs and digital materials that students can interact with. The intervention applied in this study will be to train teachers of DHH students to base their instructional designs on the Technology Acceptance Model (TAM). Based on the power analysis done for this study, 98 teachers need to be included. Teachers will be chosen randomly to increase internal and external validity, to provide a representative sample of the population this study aims to measure, and to provide a basis for further studies. This study is still in progress, and the initial results are promising, showing how students have engaged with digital books. Keywords: deaf and hard of hearing, digital books, literacy, technology
Procedia PDF Downloads 489
575 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review
Authors: R. Christopher, C. Brandt, N. Damons
Abstract:
Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that the collection and recording of data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the data regarding their applicability, validity, and reliability are scarce, inconsistent, and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, ScienceDirect, and PubMed, as well as grey literature, were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, sensitivity. All types of English-language studies dating back to the year 2000 were included. Two blinded, independent reviewers selected and appraised articles for inclusion on a 9-point scale and assessed the risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included in the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively. Seven studies were graded as low and three studies as high risk of bias. Only studies of high methodological quality (score > 9) were included in the analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The screening tools showed high reliability, sensitivity, and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, p = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed before implementation in evidence-based practice. Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity
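The pooling reported above (a weighted estimate with a confidence interval and an I² heterogeneity statistic) follows the standard inverse-variance scheme sketched below; the per-study values are hypothetical and this is not the review's dataset.

```python
# Inverse-variance pooling with Cochran's Q and I^2 for between-study heterogeneity.
import numpy as np

def pool_inverse_variance(estimates, std_errors):
    est = np.asarray(estimates, float)
    se = np.asarray(std_errors, float)
    w = 1.0 / se ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (est - pooled) ** 2)            # Cochran's Q
    df = len(est) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci, i2

# hypothetical per-study reliability estimates (e.g. ICCs) and standard errors
icc = [0.62, 0.71, 0.66]
se = [0.08, 0.10, 0.07]
pooled, ci, i2 = pool_inverse_variance(icc, se)
print(f"pooled = {pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, I^2 = {i2:.1f}%")
```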
Procedia PDF Downloads 179
574 Testing Depression in Awareness Space: A Proposal to Evaluate Whether a Psychotherapeutic Method Based on Spatial Cognition and Imagination Therapy Cures Moderate Depression
Authors: Lucas Derks, Christine Beenhakker, Michiel Brandt, Gert Arts, Ruud van Langeveld
Abstract:
Background: The method Depression in Awareness Space (DAS) is a psychotherapeutic intervention technique based on the principles of spatial cognition and imagination therapy with spatial components. The basic assumptions are that mental space is the primary organizing principle in the mind and that psychological issues can be treated by first locating and then relocating the conceptualizations involved. Most clinical experience was gathered over the last 20 years in the area of social issues (with the social panorama model). The latter work led to the conclusion that a mental object (image) gains emotional impact when it is placed more centrally, closer, and higher in the visual field, and vice versa. Changing the locations of mental objects in space thus alters the (socio-)emotional meaning of the relationships. The experience of depression seems always to be associated with darkness. Psychologists tend to see the link between depression and darkness as a metaphor; however, clinical practice hints at the existence of more literal forms of darkness. Aims: The aim of the method Depression in Awareness Space is to reduce the distress of clients with depression in clinical counseling practice, as a reliable alternative method of psychological therapy for the treatment of depression. The method aims at making dark areas smaller, lighter, and more transparent in order to identify the problem or the cause of the depression which lies behind the darkness. It was hypothesized that the darkness is a subjective side-effect of the neurological process of repression. After reducing the dark clouds, the real problem behind the depression becomes more visible, allowing the client to work on it and in that way reduce their feelings of depression; this makes repression of the issue obsolete. Results: Clients could easily get into their 'sadness' when asked to do so, and finding the location of the dark zones also proved straightforward. In a recent pilot study with five participants with mild depressive symptoms (measured on two different scales and tested against an untreated control group with similar symptoms), the first results were also very promising. If the mental-spatial approach to depression can be proven effective, this would be an important result; the Society of Mental Space Psychology is now seeking sponsorship for a scaled-up experiment. Conclusions: For spatial cognition and research into spatial psychological phenomena, the discovery of dark areas can be a step forward. Beyond pure scientific interest, this discovery has a clinical implication when darkness can be connected to depression. Also, darkness seems to be more than a metaphorical expression. Progress can be monitored with measurement tools that quantify the level of depressive symptoms and by reviewing the areas of darkness. Keywords: depression, spatial cognition, spatial imagery, social panorama
Procedia PDF Downloads 169
573 Stochastic Approach for Technical-Economic Viability Analysis of Electricity Generation Projects with Natural Gas Pressure Reduction Turbines
Authors: Roberto M. G. Velásquez, Jonas R. Gazoli, Nelson Ponce Jr, Valério L. Borges, Alessandro Sete, Fernanda M. C. Tomé, Julian D. Hunt, Heitor C. Lira, Cristiano L. de Souza, Fabio T. Bindemann, Wilmar Wounnsoscky
Abstract:
Nowadays, society is working toward reducing energy losses and greenhouse gas emissions, as well as seeking clean energy sources, as a result of the constant increase in energy demand and emissions. Energy loss occurs in the gas pressure reduction stations at the delivery points in natural gas distribution systems (city gates). Installing pressure reduction turbines (PRTs) parallel to the static reduction valves at the city gates enhances the energy efficiency of the system by recovering the enthalpy of the pressurized natural gas, obtaining shaft work from the pressure-lowering process and generating electrical power. Currently, the Brazilian natural gas transportation network extends over 9,409 km, while the system has 16 national and 3 international natural gas processing plants and more than 143 delivery points to final consumers. Thus, the potential for installing PRTs in Brazil is 66 MW of power, which could avoid the emission of 235,800 tons of CO2 per year and generate 333 GWh/year of electricity. On the other hand, the economic viability analysis of these energy efficiency projects is commonly carried out based on estimates of the project's cash flow obtained from forecasts of several variables. Usually, the cash flow analysis is performed using representative values of these variables, obtaining a deterministic set of financial indicators associated with the project. However, in most cases, these variables cannot be predicted with sufficient accuracy, resulting in the need to consider, to a greater or lesser degree, the risk associated with the calculated financial return. This paper presents an approach to the technical-economic viability analysis of PRT projects that explicitly considers the uncertainties associated with the input parameters of the financial model, such as gas pressure at the delivery point, the amount of energy generated by the PRT, and the future price of energy, among others, using sensitivity analysis techniques, scenario analysis, and Monte Carlo methods. In the latter case, estimates of several financial risk indicators, as well as their empirical probability distributions, can be obtained, yielding a methodology for the financial risk analysis of PRT projects. The results of this paper allow a more accurate assessment of the financial feasibility of potential PRT projects in Brazil. This methodology will be tested at the Cuiabá thermoelectric plant, located in the state of Mato Grosso, Brazil, and can be applied to study the potential in other countries. Keywords: pressure reduction turbine, natural gas pressure drop station, energy efficiency, electricity generation, Monte Carlo methods
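The Monte Carlo step can be illustrated with the minimal sketch below: uncertain inputs are sampled, a yearly cash flow is built, and the resulting NPV distribution yields risk indicators such as the probability of a negative return. The distributions, capital cost, energy yield, and prices are invented for the example and are not the study's inputs.

```python
# Monte Carlo cash-flow sketch for a pressure-reduction-turbine project.
import numpy as np

rng = np.random.default_rng(42)
n_sims, years, discount_rate = 10_000, 15, 0.10

capex = rng.triangular(1.8e6, 2.0e6, 2.4e6, n_sims)        # USD, installation cost
energy_mwh = rng.normal(5_000, 600, n_sims).clip(min=0)     # yearly generation
price = rng.normal(70, 15, (n_sims, years)).clip(min=0)     # USD/MWh, future energy price
opex = 0.03 * capex                                          # yearly O&M, fraction of capex

cash_flows = energy_mwh[:, None] * price - opex[:, None]     # shape (n_sims, years)
discount = (1 + discount_rate) ** -np.arange(1, years + 1)
npv = cash_flows @ discount - capex                          # net present value per draw

print(f"mean NPV: {npv.mean():,.0f} USD")
print(f"5th-95th percentile: {np.percentile(npv, 5):,.0f} .. {np.percentile(npv, 95):,.0f}")
print(f"probability of loss: {(npv < 0).mean():.1%}")
```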
Procedia PDF Downloads 113
572 Functional Analysis of Variants Implicated in Hearing Loss in a Cohort from Argentina: From Molecular Diagnosis to Pre-Clinical Research
Authors: Paula I. Buonfiglio, Carlos David Bruque, Lucia Salatino, Vanesa Lotersztein, Sebastián Menazzi, Paola Plazas, Ana Belén Elgoyhen, Viviana Dalamón
Abstract:
Hearing loss (HL) is the most prevalent sensorineural disorder, affecting about 10% of the global population, with more than half of cases due to genetic causes. About 1 in 500-1000 newborns presents congenital HL. Most patients are non-syndromic, with an autosomal recessive mode of inheritance. To date, more than 100 genes have been related to HL. Therefore, whole-exome sequencing (WES) has become a cost-effective alternative approach for molecular diagnosis. Nevertheless, new challenges arise from the detection of novel variants, in particular missense changes, which can lead to a spectrum of genotype-phenotype correlations that is not always straightforward. In this work, we aimed to identify the genetic causes of HL in isolated and familial cases by designing a multistep approach to analyze target genes related to hearing impairment. Moreover, we performed in silico and in vivo analyses in order to further study the effect of some of the novel variants identified on hair cell function using the zebrafish model. A total of 650 patients were studied by Sanger sequencing and gap-PCR in the GJB2 and GJB6 genes, respectively, diagnosing 15.5% of sporadic cases and 36% of familial ones. Overall, 50 different sequence variants were detected. Fifty of the undiagnosed patients with moderate HL were tested for deletions in the STRC gene by multiplex ligation-dependent probe amplification (MLPA), leading to a diagnosis in 6%. After this initial screening, 50 families were selected to be analyzed by WES, achieving a diagnosis in 44% of them. Half of the identified variants were novel. A missense variant in the MYO6 gene detected in a family with postlingual HL was selected for further analysis. Protein modeling with AlphaFold2 was performed, supporting its pathogenic effect. In order to functionally validate this novel variant, a knockdown phenotype rescue assay in zebrafish was carried out. Injection of wild-type MYO6 mRNA into embryos rescued the phenotype, whereas using the mutant MYO6 mRNA (carrying the c.2782C>A variant) had no effect. These results strongly suggest a deleterious effect of this variant on the mobility of stereocilia in zebrafish neuromasts, and hence on the auditory system. In the present work, we demonstrated that our algorithm is suitable for a sequential multigene approach to HL in our cohort. These results highlight the importance of a combined strategy for identifying candidate variants, as well as of in silico and in vivo studies to analyze and confirm their pathogenicity and to better understand the mechanisms underlying the pathophysiology of hearing impairment. Keywords: diagnosis, genetics, hearing loss, in silico analysis, in vivo analysis, WES, zebrafish
Procedia PDF Downloads 94
571 Study of Elastic-Plastic Fatigue Crack in Functionally Graded Materials
Authors: Somnath Bhattacharya, Kamal Sharma, Vaibhav Sonkar
Abstract:
Composite materials emerged in the middle of the 20th century as a promising class of engineering materials providing new prospects for modern technology. Recently, a new class of composite materials known as functionally graded materials (FGMs) has drawn considerable attention from the scientific community. In general, FGMs are defined as composite materials in which the composition or microstructure, or both, are locally varied so that a certain variation of the local material properties is achieved. This gradual change in composition and microstructure produces a gradient of properties and performance. FGMs are synthesized in such a way that they possess continuous spatial variations in the volume fractions of their constituents to yield a predetermined composition. These variations lead to the formation of a non-homogeneous macrostructure with continuously varying mechanical and/or thermal properties in one or more directions. Lightweight functionally graded composites with high strength-to-weight and stiffness-to-weight ratios have been used successfully in the aircraft industry and in other engineering applications such as the electronics industry and thermal barrier coatings. In the present work, elastic-plastic crack growth problems (using the Ramberg-Osgood model) in an FGM plate under cyclic load have been explored by the extended finite element method. Both edge- and centre-crack problems have been solved, additionally including holes, inclusions, and minor cracks, under plane stress conditions. Both soft and hard inclusions have been implemented in the problems. The validity of linear elastic fracture mechanics theory is limited to brittle materials. A rectangular plate of functionally graded material of length 100 mm and height 200 mm, with 100% copper-nickel alloy on the left side and 100% ceramic (alumina) on the right side, is considered in the problem. Exponential gradation of the material properties is imparted in the x-direction. A uniform traction of 100 MPa is applied to the top edge of the rectangular domain along the y-direction. In some problems, the domain contains a major crack along with minor cracks and/or holes and/or inclusions. The major crack is located at the centre of the left edge or at the centre of the domain. The discontinuities, i.e., minor cracks, holes, and inclusions, are added either singly or in combination with each other. On the basis of this study, it is found that minor cracks have the smallest effect on the domain's failure crack length, soft inclusions have a moderate effect, and holes have the largest effect. It is also observed that, in each case, crack growth before failure is greater when hard inclusions are present in place of soft inclusions. Keywords: elastic-plastic, fatigue crack, functionally graded materials, extended finite element method (XFEM)
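Exponential gradation of the kind mentioned above is commonly written as E(x) = E₀ exp(βx), with β chosen so that the modulus matches the two constituents at the plate edges. The short sketch below evaluates such a law element-wise; the modulus values are typical handbook figures assumed for illustration, not the paper's input data.

```python
# Exponential gradation of Young's modulus across the FGM plate width:
# E(x) = E_metal * exp(x/W * ln(E_ceramic / E_metal)), so E(0) = E_metal
# and E(W) = E_ceramic; each element column gets its own modulus.
import numpy as np

E_metal = 150e9      # Pa, assumed copper-nickel alloy modulus
E_ceramic = 380e9    # Pa, assumed alumina modulus
width = 0.1          # m, plate length in the gradation (x) direction

def graded_modulus(x):
    return E_metal * np.exp(x / width * np.log(E_ceramic / E_metal))

# modulus sampled at the centroids of, say, 10 element columns across the width
x_centroids = (np.arange(10) + 0.5) * width / 10
for x, e in zip(x_centroids, graded_modulus(x_centroids)):
    print(f"x = {x:.3f} m -> E = {e / 1e9:6.1f} GPa")
```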
Procedia PDF Downloads 389
570 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India
Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha
Abstract:
Landslides are among the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic imaging, the geometry, thickness, and depth of the failure zone of a landslide can be estimated. Landslides are a pertinent problem in the Nilgiris plateau, next only to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering that prevailed during Precambrian time deformed the rocks up to 45 m depth. Landslides are dominant in the southern and eastern parts of the plateau, which are comparatively smaller than the northern drainage basins and have a low drainage density; their coarse texture permits greater infiltration of rainwater. The northern part of the plateau, by contrast, has a high-density drainage pattern and fine texture with less infiltration than runoff, and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the factor of safety, the infinite slope model of Brunsden and Prior is used. The factor of safety (FS) can be expressed as the ratio of resisting forces to disturbing forces; if FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values of the litho-units measured from the 2D ERT image of the landslide zone. Relationships between the friction angle and various soil properties are established by simple regression analysis from the apparent resistivity data. An increase in water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. Time-lapse resistivity changes leading up to slope failure are tracked through a geophysical factor of safety, which depends on resistivity and site topography. The ERT technique infers soil properties at variable depths over wide areas, overcoming the limitation of the point information provided by rain gauges and porous probes. Monitoring slope stability with the ERT technique is non-invasive and low-cost and does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with permanent electrode networks should be installed to monitor the hydraulic precursors of landslide movement. Keywords: 2D ERT, landslide, safety factor, slope stability
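A minimal sketch of the infinite-slope factor-of-safety calculation referred to above is given below: FS is the ratio of resisting to disturbing forces, with cohesion and friction angle in practice derived from the resistivity-based soil properties and the saturation ratio m expressing how wet the slide zone is. The parameter values are illustrative, not taken from the Nilgiris site.

```python
# Infinite-slope factor of safety: resisting forces / disturbing forces.
import math

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, m, gamma_w=9.81):
    """c: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma: unit weight of soil (kN/m^3); z: depth of failure plane (m);
    beta_deg: slope angle (deg); m: fraction of z that is saturated (0..1)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    disturbing = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / disturbing

# increasing water content (m) lowers FS towards failure (FS < 1)
for m in (0.0, 0.5, 1.0):
    print(f"m = {m:.1f} -> FS = {factor_of_safety(10, 28, 19, 6, 30, m):.2f}")
```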
Procedia PDF Downloads 317
569 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior
Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli
Abstract:
The refurbishment of public buildings is one of the key factors of the energy efficiency policy of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. In more detail, the importance of using validated models is examined by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying and adapting predefined profiles, and the design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of uncertainty in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, and conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus the occupant cannot interact with it. In more detail, starting from the adopted schedules, which were created according to questionnaire responses and allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas demand. Then the different consumption entries are analyzed and, for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are performed for the optimal refurbishment solution, and the variation in the expected energy savings and global cost reduction is highlighted. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes. Keywords: energy simulation, modelling calibration, occupant behavior, university building
Procedia PDF Downloads 141
568 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches
Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli
Abstract:
Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. Constantly growing research interest has been focused on the study of the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts have been reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamical and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the sensitivity of iMSD to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as ‘blocked’ (i.e., trajectories corresponding to immobile granules), ‘confined/diffusive’ (i.e., trajectories corresponding to slowly moving granules in a defined region of the cell), or ‘drifted’ (i.e., trajectories corresponding to fast-moving granules). In cell-culture control conditions, the results show the following average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted. This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the relationship of glucagon granules with the cytoskeleton. For instance, if nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7%, while immobile granules increase to 56.0 ± 10.7% (with the remaining 40.4 ± 10.2% confined/diffusive). This result confirms the clear link between glucagon-granule motion and cytoskeletal structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D). Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR
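The trajectory classification described above rests on the time-averaged mean square displacement of each track. The following is an illustrative sketch in Python (the authors used a MATLAB script); the classification thresholds on the anomalous exponent are assumptions, not the study's actual criteria.

```python
# Compute the time-averaged MSD of a granule trajectory and classify its motion
# from the anomalous exponent alpha in MSD(t) ~ t^alpha
# (alpha << 1: blocked, alpha ~ 1: confined/diffusive, alpha > 1: drifted).
import numpy as np

def mean_square_displacement(xy, dt):
    """xy: (N, 2) array of granule positions; dt: frame interval (s)."""
    n = len(xy)
    lags = np.arange(1, n // 4)          # keep lags with enough averaging
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                    for lag in lags])
    return lags * dt, msd

def classify(tau, msd):
    alpha = np.polyfit(np.log(tau), np.log(msd), 1)[0]   # log-log slope
    if alpha < 0.3:
        return "blocked", alpha
    if alpha <= 1.2:
        return "confined/diffusive", alpha
    return "drifted", alpha

# toy trajectory: diffusion plus a slow drift, which should classify as 'drifted'
rng = np.random.default_rng(1)
steps = rng.normal(0, 0.02, (500, 2)) + np.array([0.005, 0.0])
tau, msd = mean_square_displacement(np.cumsum(steps, axis=0), dt=0.05)
print(classify(tau, msd))
```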
Procedia PDF Downloads 108
567 SME Internationalisation and Its Financing: An Exploratory Study That Analyses Government Support and Funding Mechanisms for Irish and Scottish International SMEs
Authors: L. Spencer, S. O’Donohoe
Abstract:
Much of the research to date on internationalisation relates to large firms, with much less known about how small and medium-sized enterprises (SMEs) engage in internationalisation. Given the crucial role of SMEs in contributing to economic growth, there is now an emphasis on the need for SMEs to internationalise. Yet little is known about how SMEs undertake and finance such expansion and whether internationalisation actually hinders or helps them in securing finance. The purpose of this research is to explore the internationalisation process for SMEs, the sources of funding used in financing this expansion, and the support received from state agencies in assisting their overseas expansion. A conceptual framework has been devised which marries the two strands of literature (internationalisation and financing the firm). The exploratory nature of this research dictates that the most appropriate methodology was semi-structured interviews with SME owners, bank representatives, and support agencies. In essence, a triangulated approach to the research problem facilitates assessment of the perceptions and experiences of firms, the state, and the financial institutions. Our sample is drawn from SMEs operating in Ireland and Scotland, two small but very open economies where SMEs are the dominant form of organisation. The sample includes a range of industry sectors. Key findings to date suggest that some SMEs are born global, others are born-again global, whilst a significant cohort can be classed as traditional internationalisers. Unsurprisingly, there is a strong industry effect, with firms in the high-tech sector more likely to be faster internationalisers than those in the traditional manufacturing sectors. Owner-managers’ own funds are deemed key to financing initial internationalisation, lending support to the financial growth life-cycle model, albeit more so for the faster internationalisers than for the slower cohort, who are more likely to deploy external sources, especially bank finance. Retained earnings remain the predominant source of ongoing financing for internationalising firms, but trade credit is often used and invoice discounting is utilised quite frequently. In terms of lending, asset-based lending backed by personal guarantees appears paramount for securing bank finance. Whilst a lack of diversified sources of funding for internationalising SMEs was found in both jurisdictions, there appears to be no evidence to suggest that internationalisation impedes firms in securing finance. Finally, state supports were cited as important to the internationalisation process; in particular, those provided by Enterprise Ireland were deemed very valuable. Considering the paucity of studies to date on SME internationalisation, and in particular on the funding mechanisms deployed, this study seeks to contribute to the body of knowledge in both the international business and finance disciplines. Keywords: funding, government support, international pathways, modes of entry
Procedia PDF Downloads 245
566 The Role of Formal and Informal Social Support in Predicting the Involvement of Mothers and Fathers of Young Children with Autism Spectrum Disorder
Authors: Adi Sharabi, Dafna Marom-Golan
Abstract:
Parents' involvement in the care of their children with Autism Spectrum Disorder (ASD) and its beneficial effect on the children's developmental and educational outcomes is well documented. At the same time, parents of children with ASD tend to experience greater psychological distress than parents of children with other developmental disabilities or with typical development. Positive social support is an important resource used by parents to reduce their psychological distress. The goal of the current research was to examine the contribution of formal and informal social support in explaining mothers' and fathers' involvement with their young children with ASD. The sample consisted of 107 parents living in Israel (61 mothers and 46 fathers) of children aged between 2 and 7, all diagnosed with ASD and attending special kindergartens or special day care for children with ASD. Parental involvement and perceived social support were assessed. Initial analysis focused on the relations between involvement, support, and demographic variables. In addition, analysis of variance (ANOVA) was conducted to test differences between mothers and fathers. Two hierarchical multiple regression analyses were performed to examine the predictors in the involvement model while controlling for group (mothers/fathers). Results indicate that mothers reported significantly higher levels of parenting involvement than fathers. Mothers reported higher levels of general involvement and of all sub-types of involvement. For example, mothers reported that they were more interested in, and attended more of, their child's educational program. They were also more collaborative in their child's educational-therapeutic program and socialized more with other parents of children from their child's kindergarten than fathers did. Mothers' involvement was found to be related to their informal support (non-formal relatives). Findings also reveal significant differences between mothers and fathers on the formal support subscale measuring specialized services. Fathers, more than mothers, reported receiving specialized-services support, such as from social workers or professional therapists. Separate hierarchical multiple regression analyses revealed a unique gender difference in the factors that explained parental involvement. Specifically, informal support made a unique positive contribution in explaining mothers', but not fathers', involvement. This study highlights the central role of mothers in maintaining constant contact with the educational system and the professionals who help care for their child with ASD. At the same time, this research emphasizes the crucial role of both mothers and fathers in their child's development and well-being at every developmental stage, particularly in early development. Further, different kinds of social support seem to relate to different kinds of parental involvement. It is in the best interest of educators and family therapists who work with families of children with ASD to support the cohesiveness of the family and the collaboration of the parents by understanding and respecting the way each member addresses the responsibilities of parenting a child with ASD, and her or his need for different types of social support.
Keywords: parental differences, parental involvement, social support, specialized support services
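As a concrete illustration of the analysis strategy, the sketch below shows a two-step hierarchical regression of parental involvement on a control block and then on the two support scales, with parent group entered as a control. It is a minimal Python/statsmodels sketch: the column names, the background covariate and the synthetic data are assumptions for illustration and do not reproduce the study's coding or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 107 parents, hypothetical variable names.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "parent": rng.choice(["mother", "father"], size=107),
    "child_age": rng.uniform(2, 7, size=107),
    "informal_support": rng.normal(size=107),
    "formal_support": rng.normal(size=107),
})
df["involvement"] = 0.4 * df["informal_support"] + rng.normal(size=107)

# Step 1: control block only; Step 2: add the two social-support scales.
step1 = smf.ols("involvement ~ C(parent) + child_age", data=df).fit()
step2 = smf.ols("involvement ~ C(parent) + child_age + informal_support + formal_support",
                data=df).fit()

print(f"R2 step1 = {step1.rsquared:.3f}, R2 step2 = {step2.rsquared:.3f}")
print(step2.params)   # unique contribution of each support type after the controls
```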
Procedia PDF Downloads 247
565 Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen
Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García
Abstract:
This paper focuses on the analysis of the motivations that lead people to undertake and consolidate their businesses. It is addressed within the framework of planned behavior theory, which recognizes the importance of the social environment and cultural values, both in the decision to undertake business and in business consolidation. Similarly, it is also based on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people's lives, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and the professional-personal project of each individual. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence, Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs, considering both the consolidated profile and the profile in transition. In order to obtain the information, the Questionnaire of Motivations and Conditioners of the Entrepreneurial Career (MCEC) was applied. It consists of 67 items and includes four scales (E1-Conflicts in conciliation, E2-Satisfaction with the career path, E3-Motivations to undertake, E4-Guidance needs). Cluster analysis (a mixed method, combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi square, p = 0.05) and the quantitative variables (ANOVA). The results allowed us to characterize three motivational profiles according to motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction, and orientation needs (in the entrepreneurial project and life-career). The first profile is formed by extrinsically motivated entrepreneurs, professionally satisfied and without conflict between vital roles. The second profile acts with intrinsic motivation, also associated with family models, and although it shows satisfaction with the professional career, it experiences high conflict between family and professional life. The third is composed of entrepreneurs with high extrinsic motivation, professional dissatisfaction and, at the same time, conflict in their professional life due to personal roles. Ultimately, the analysis allowed us to link the kinds of entrepreneurs to different levels of motivation, satisfaction, needs and articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. No associations related to gender, age, activity sector, environment (rural, urban, virtual), or the use of time for domestic tasks were identified. The model obtained and its implications for the design of training and guidance actions for entrepreneurs are also discussed.
Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment
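The mixed clustering referred to above, a hierarchical solution used to seed k-means, can be sketched in a few lines. This is a schematic Python version under assumed inputs: the four MCEC scale scores are replaced by synthetic standardized data, and three clusters are requested to mirror the three reported profiles.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Synthetic stand-in for 248 respondents x 4 scale scores (E1-E4); not the real MCEC data.
rng = np.random.default_rng(2)
X = StandardScaler().fit_transform(rng.normal(size=(248, 4)))

# Step 1: Ward hierarchical clustering provides an initial 3-group partition...
initial = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
# ...whose centroids seed the k-means step that refines the final profiles (the 'mixed' method).
centroids = np.vstack([X[initial == k].mean(axis=0) for k in (1, 2, 3)])
profiles = KMeans(n_clusters=3, init=centroids, n_init=1, random_state=0).fit_predict(X)

print(np.bincount(profiles))   # number of entrepreneurs assigned to each motivational profile
```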
Procedia PDF Downloads 301
564 Improvement of the Traditional Techniques of Artistic Casting through the Development of Open Source 3D Printing Technologies Based on Digital Ultraviolet Light Processing
Authors: Drago Diaz Aleman, Jose Luis Saorin Perez, Cecile Meier, Itahisa Perez Conesa, Jorge De La Torre Cantero
Abstract:
Traditional manufacturing techniques used in artistic contexts compete with highly productive and efficient industrial procedures. Craft techniques and their associated business models tend to disappear under the pressure of mass-produced products that compete in all niche markets, including those traditionally reserved for the work of art. The surplus value derived from the prestige of the author, the exclusivity of the product or the mastery of the artist does not seem to be a sufficient reason to preserve this productive model. In recent years, the adoption of open source digital manufacturing technologies in small art workshops can favor their permanence by offering great advantages such as easy accessibility, low cost, and free modification, adapting to the specific needs of each workshop. It is possible to use pieces modeled by computer and made with FDM (Fused Deposition Modeling) 3D printers that use PLA (polylactic acid) in artistic casting procedures. Models printed in PLA are limited to approximate minimum sizes of 3 cm, and the optimal layer height resolution is 0.1 mm. Due to these limitations, it is not the most suitable technology for artistic casting of smaller pieces. One alternative that overcomes the size limitation is selective laser sintering (SLS) printers. Another possibility is Direct Metal Laser Sintering (DMLS), in which a laser hardens metal powder layer by layer. However, due to its high cost, it is a technology that is difficult to introduce in small artistic foundries. Low-cost DLP (Digital Light Processing) printers can offer high resolution for a reasonable cost (around 0.02 mm on the Z axis and 0.04 mm on the X and Y axes) and can print models with castable resins that allow subsequent direct artistic casting in precious metals or adaptation to processes such as electroforming. In this work, the design of a DLP 3D printer is detailed, using backlit LCD screens with ultraviolet light. Its development is totally open source and it is proposed as a kit made up of electronic components, based on Arduino, and mechanical components that are easy to access on the market. The CAD files of its components can be manufactured on low-cost FDM 3D printers. The result costs less than 500 Euros, offers high resolution, and is an open design with free access that allows not only its manufacture but also its improvement. In future works, we intend to carry out different comparative analyses, which will allow us to accurately estimate the print quality, as well as the real cost of the artistic works made with it.
Keywords: traditional artistic techniques, DLP 3D printer, artistic casting, electroforming
Procedia PDF Downloads 142
563 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity of cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of the Tata Steel U.K Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
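The two-way coupling idea, passing updated thermal data between the fire and structural solvers and iterating until the exchanged quantities stop changing before advancing in time, can be outlined with a toy example. The sketch below is not the FDS-2-Abaqus program: the furnace ramp, the heat-transfer coefficient and the lumped panel heat capacity are invented stand-ins that only show the structure of the coupling loop.

```python
def cfd_heat_flux(t, surface_T):
    """Stand-in fire model: net heat flux to the panel surface (W/m^2) under a furnace ramp."""
    gas_T = 20.0 + 800.0 * min(t / 600.0, 1.0)     # assumed gas-temperature ramp, degC
    return 25.0 * (gas_T - surface_T)              # assumed convective coefficient, 25 W/m^2K

def fea_update(start_T, flux, dt, c_eff=8.0e3):
    """Stand-in structural/thermal solver: lumped heat capacity per unit area (J/m^2K)."""
    return start_T + flux * dt / c_eff

def two_way_coupling(t_total=600.0, dt=30.0, tol=0.5, max_iter=20):
    T, t = 20.0, 0.0
    while t < t_total:
        guess = T
        for _ in range(max_iter):
            flux = cfd_heat_flux(t + dt, guess)    # fire load evaluated at the current guess
            new = fea_update(T, flux, dt)          # FEA step restarts from the window's start state
            if abs(new - guess) < tol:             # exchanged interface temperature has converged
                break
            guess = new
        T, t = new, t + dt                         # advance to the next coupling window
    return T

print(f"toy surface temperature after the ramp: {two_way_coupling():.0f} degC")
```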
Procedia PDF Downloads 79
562 Potential of Aerodynamic Feature on Monitoring Multilayer Rough Surfaces
Authors: Ibtissem Hosni, Lilia Bennaceur Farah, Saber Mohamed Naceur
Abstract:
In order to assess water availability in the soil, it is crucial to have information about the distributed soil moisture content; this parameter helps to understand the effect of humidity on the exchange between soil, plant cover and atmosphere, in addition to fully understanding surface processes and the hydrological cycle. On the other hand, aerodynamic roughness length is a surface parameter that scales the vertical profile of the horizontal component of the wind speed and characterizes the surface's ability to absorb the momentum of the airflow. In numerous applications in surface hydrology and meteorology, aerodynamic roughness length is an important parameter for estimating momentum, heat and mass exchange between the soil surface and the atmosphere. In this respect, it is important to consider the impact of atmospheric factors in general, and natural erosion in particular, in the process of soil evolution, its characterization and the prediction of its physical parameters. The study of wind-induced movements over vegetated soil surfaces, whether spaced plants or full plant cover, is motivated by significant research efforts in agronomy and biology. The major known problem in this area concerns crop damage by wind, which is a growing field of research. Obviously, most soil surface models require information about the aerodynamic roughness length and its temporal and spatial variability. We have used a bi-dimensional multi-scale (2D MLS) roughness description in which the surface is considered as a superposition of a finite number of one-dimensional Gaussian processes, each one having a spatial scale, using the wavelet transform and the Mallat algorithm to describe natural surface roughness. We have introduced the multi-layer aspect of the humidity of the soil surface to take into account a volume component in the problem of radar signal backscattering. As humidity increases, the dielectric constant of the soil-water mixture increases, and this change is detected by microwave sensors. Nevertheless, many existing models in the field of radar imagery cannot be applied directly to areas covered with vegetation because of the vegetation backscattering. Thus, the radar response corresponds to the combined signature of the vegetation layer and the underlying soil surface. Therefore, the key issue in the numerical estimation of soil moisture is to separate the two contributions and calculate the scattering behaviours of the two layers by defining the scattering of the vegetation and of the soil below. This paper presents a synergistic methodology for estimating roughness and soil moisture from C-band radar measurements. The methodology adequately represents a microwave/optical model which has been used to calculate the scattering behaviour of the aerodynamic vegetation-covered area by defining the scattering of the vegetation and of the soil below.
Keywords: aerodynamic, bi-dimensional, vegetation, synergistic
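The multi-scale part of the surface description, decomposing a roughness profile into contributions at a finite number of spatial scales with the wavelet transform (Mallat algorithm), can be illustrated briefly. The sketch below is a 1D Python illustration on a synthetic profile with the PyWavelets library; the study's actual description is bi-dimensional, and the wavelet choice here (Daubechies-4) is an assumption.

```python
import numpy as np
import pywt

# Synthetic height profile standing in for one measured surface transect.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 1024)
profile = 2.0 * np.sin(2 * np.pi * x / 5.0) + 0.3 * rng.normal(size=x.size)

# Discrete wavelet decomposition: each detail level isolates roughness at one spatial scale.
coeffs = pywt.wavedec(profile, "db4", level=5)

# RMS of the detail coefficients gives a per-scale roughness "energy", the kind of
# quantity a multi-scale (MLS) Gaussian-process description is built on.
for level, detail in enumerate(coeffs[1:], start=1):
    print(f"level {level}: rms detail = {np.sqrt(np.mean(detail ** 2)):.3f}")
```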
Procedia PDF Downloads 269
561 Cognitive Deficits and Association with Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder in 22q11.2 Deletion Syndrome
Authors: Sinead Morrison, Ann Swillen, Therese Van Amelsvoort, Samuel Chawner, Elfi Vergaelen, Michael Owen, Marianne Van Den Bree
Abstract:
22q11.2 Deletion Syndrome (22q11.2DS) is caused by the deletion of approximately 60 genes on chromosome 22 and is associated with high rates of neurodevelopmental disorders such as Attention Deficit Hyperactivity Disorder (ADHD) and Autism Spectrum Disorders (ASD). The presentation of these disorders in 22q11.2DS is reported to be comparable to idiopathic forms and therefore presents a valuable model for understanding mechanisms of neurodevelopmental disorders. Cognitive deficits are thought to be a core feature of neurodevelopmental disorders, possibly manifesting in behavioural and emotional problems. There have been mixed findings in 22q11.2DS on whether the presence of ADHD or ASD is associated with greater cognitive deficits. Furthermore, the influence of developmental stage has never been taken into account. The aim was therefore to examine whether the presence of ADHD or ASD was associated with cognitive deficits in childhood and/or adolescence in 22q11.2DS. We conducted the largest study to date of this kind in 22q11.2DS. The same battery of tasks measuring processing speed, attention and spatial working memory was completed by 135 participants with 22q11.2DS. Wechsler IQ tests were completed, yielding Full Scale (FSIQ), Verbal (VIQ) and Performance IQ (PIQ). Age-standardised difference scores were produced for each participant. Developmental stages were defined as childhood (6-10 years) and adolescence (10-18 years). ADHD diagnosis was ascertained from a semi-structured interview with a parent. ASD status was ascertained from a questionnaire completed by a parent. Interaction and main effects of the presence or absence of ADHD or ASD, in childhood or adolescence, on cognitive performance were tested with 2x2 ANOVAs. Significant interactions were followed up with t-tests of simple effects. Adolescents with ASD displayed greater deficits on all measures (processing speed, p = 0.022; sustained attention, p = 0.016; working memory, p = 0.006) than adolescents without ASD; there was no difference between children with and without ASD. There were no significant differences on IQ measures. Both children and adolescents with ADHD displayed greater deficits on sustained attention (p = 0.002) than those without ADHD. There were no significant differences on any other measures for ADHD. The magnitude of cognitive deficit in individuals with 22q11.2DS varied by cognitive domain, developmental stage and presence of neurodevelopmental disorder. Adolescents with 22q11.2DS and ASD showed greater deficits on all measures, which suggests there may be a sensitive period in childhood for acquiring these domains, or may reflect increasing social and academic demands in adolescence. The finding of poorer sustained attention in children and adolescents with ADHD supports previous research and suggests a specific deficit which can be separated from processing speed and working memory. This research provides unique insights into the association of ASD and ADHD with cognitive deficits in a group at high genomic risk of neurodevelopmental disorders.
Keywords: 22q11.2 deletion syndrome, attention deficit hyperactivity disorder, autism spectrum disorder, cognitive development
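A minimal sketch of the 2x2 factorial analysis described above (developmental stage by diagnosis, with follow-up of a significant interaction) is given below in Python/statsmodels. The data frame is synthetic and the column names are assumptions; it only illustrates the analysis structure, not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from scipy import stats

# Synthetic stand-in: one age-standardised attention score per participant (n = 135).
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "stage": rng.choice(["child", "adolescent"], size=135),
    "asd": rng.choice(["ASD", "noASD"], size=135),
    "attention": rng.normal(size=135),
})

# 2x2 ANOVA: main effects of stage and ASD status, plus their interaction.
model = ols("attention ~ C(stage) * C(asd)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Follow-up simple effect (run when the interaction is significant):
adol = df[df["stage"] == "adolescent"]
print(stats.ttest_ind(adol.loc[adol["asd"] == "ASD", "attention"],
                      adol.loc[adol["asd"] == "noASD", "attention"]))
```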
Procedia PDF Downloads 151
560 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States involving motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections) - the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate application of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations which should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require costly engineering surveys and analysis, which limit the capacity of resource-constrained administrations to satisfy their community's needs for safe roadways adequately, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risk of accidents, using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and to estimate the probability of jaywalking and the risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards, given the availability of relevant GIS data.
Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
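One of the building blocks described above, checking whether an immovable structure obstructs the sight line between a road user and a crossing location, reduces to a simple geometric test. The sketch below uses Python with shapely and made-up 2D coordinates; the real framework layers this kind of test over 3D building, elevation and network data.

```python
from shapely.geometry import LineString, Point, Polygon

# Hypothetical layout (metres): a driver 30 m before the stop line, a pedestrian on the
# crosswalk of the cross street, and a corner-lot building footprint between them.
driver = Point(0.0, -30.0)
pedestrian = Point(12.0, 2.0)
building = Polygon([(5, -20), (25, -20), (25, -5), (5, -5)])

sight_line = LineString([driver, pedestrian])
print("sight line obstructed:", sight_line.intersects(building))

# In the full model this test is repeated over many approach positions, crossing points
# and structures (plus terrain), and the obstruction rate feeds an intersection risk score.
```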
Procedia PDF Downloads 109
559 Belarus Rivers Runoff: Current State, Prospects
Authors: Aliaksandr Volchak, Мaryna Barushka
Abstract:
The territory of Belarus is studied quite well in terms of hydrology, but runoff fluctuations over time require more detailed research in order to forecast future changes in river runoff. Generally, river runoff is shaped by natural climatic factors, but human-induced impact has lately become so large that it is comparable to natural processes in forming runoff. In Belarus, a heavy anthropogenic load on the environment was caused by large-scale land reclamation in the 1960s. The lands of southern Belarus were reclaimed most, which contributed to changes in runoff. Besides, global warming influences runoff. Today we observe an increase in air temperature, a decrease in precipitation, and changes in wind velocity and direction. These result from cyclic climate fluctuations and, to some extent, from the growth of the concentration of greenhouse gases in the air. Climate change affects Belarus's water resources in different ways: in the hydropower industry, other water-consuming industries, water transportation and agriculture, and through the risk of floods. In this research, we have assessed river runoff according to the scenarios of climate change and global climate forecasts presented in the 4th and 5th Assessment Reports of the Intergovernmental Panel on Climate Change (IPCC), later specified and adjusted by experts from Vilnius Gediminas Technical University with the use of a regional climatic model. In order to forecast changes in climate and runoff, we analyzed their changes from 1962 up to the present. This period is divided into two: from 1986 up to the present, compared with the changes observed from 1961 to 1985. Such a division is common world-wide practice. The assessment has revealed that, on average, changes in runoff are insignificant all over the country, with only a minor increase of 0.5 – 4.0% in the catchments of the Western Dvina River and the north-eastern part of the Dnieper River. However, changes in runoff have become more irregular, both in terms of catchment area and in inter-annual distribution over seasons and river lengths. Rivers in southern Belarus (the Pripyat, the Western Bug, the Dnieper, the Neman) experience a reduction of runoff all year round except for winter, when their runoff increases. The Western Bug catchment is an exception, because its runoff decreases all year round. Significant changes are observed in spring. The runoff of spring floods is reduced, but the flood comes much earlier. There are different trends in runoff changes in spring, summer and autumn. In summer in particular, we observe runoff reduction in the south and west of Belarus, with growth in the north and north-east. Our forecast of runoff up to 2035 confirms the trend revealed in 1961 – 2015. According to it, in the future there will be a strong difference between northern and southern Belarus and between small and big rivers. Although we predict only minor changes in runoff, it is quite possible that they will be uneven across seasons or particular months. Runoff may especially change in summer, while decreasing in the remaining seasons in the south of Belarus, whereas in the northern part the runoff is predicted to change insignificantly.
Keywords: assessment, climate fluctuation, forecast, river runoff
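The period-comparison logic used in the assessment (runoff from 1986 onwards measured against the 1961-1985 baseline) can be sketched for a single annual series. The example below is Python with a synthetic runoff series; the Kendall trend test at the end is an assumed, commonly used check for monotonic trends and is not named in the abstract.

```python
import numpy as np
from scipy import stats

# Synthetic annual runoff series (mm) for one gauging station, 1961-2015.
years = np.arange(1961, 2016)
rng = np.random.default_rng(5)
runoff = 220.0 + 0.2 * (years - 1961) + rng.normal(scale=25.0, size=years.size)

# Compare the recent period against the baseline, as in the assessment.
baseline = runoff[years <= 1985]
recent = runoff[years >= 1986]
change_pct = 100.0 * (recent.mean() - baseline.mean()) / baseline.mean()

# An assumed monotonic-trend check over the whole record.
tau, p = stats.kendalltau(years, runoff)
print(f"change vs 1961-1985 baseline: {change_pct:+.1f}%, Kendall tau = {tau:.2f} (p = {p:.3f})")
```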
Procedia PDF Downloads 121
558 An Emergentist Defense of Incompatibility between Morally Significant Freedom and Causal Determinism
Authors: Lubos Rojka
Abstract:
The common perception of morally responsible behavior is that it presupposes freedom of choice, and that free decisions and actions are not determined by natural events but by a person. In other words, the moral agent has the ability and the possibility of doing otherwise when making morally responsible decisions, and natural causal determinism cannot fully account for morally significant freedom. The incompatibility between a person's morally significant freedom and causal determinism appears to be a natural position. Nevertheless, some of the most influential philosophical theories of moral responsibility are compatibilist or semi-compatibilist, and they exclude the requirement of alternative possibilities, which contradicts the claims of classical incompatibilism. Compatibilists often employ Frankfurt-style thought experiments to prove their theory. The goal of this paper is to examine the role of imaginary Frankfurt-style examples in compatibilist accounts. More specifically, the compatibilist accounts defended by John Martin Fischer and Michael McKenna will be inserted into the broader understanding of a person elaborated by Harry Frankfurt, Robert Kane and Walter Glannon. Deeper analysis reveals that the exclusion of alternative possibilities based on Frankfurt-style examples is problematic and misleading. A more comprehensive account of moral responsibility and morally significant (source) freedom requires higher-order, complex theories of human will and consciousness, in which rational and self-creative abilities and a real possibility to choose otherwise, at least on some occasions during a lifetime, are necessary. Theoretical moral reasons and their logical relations seem to require a sort of higher-order agent-causal incompatibilism. The capacity for theoretical or abstract moral reasoning requires complex (strongly emergent) mental and conscious properties, among them an effective free will, together with first- and second-order desires. Such a hierarchical theoretical model unifies reasons-responsiveness, mesh theory and emergentism. It is incompatible with physical causal determinism, because such determinism only allows non-systematic processes that may be hard to predict, but not complex (strongly) emergent systems. An agent's effective will and conscious reflectivity are the starting point of a morally responsible action, which explains why a decision is 'up to the subject'. A free decision does not always have a complete causal history. This kind of emergentist source hyper-incompatibilism seems to be the best direction for the search for an adequate explanation of moral responsibility in the traditional (merit-based) sense. Physical causal determinism as a universal theory would exclude morally significant freedom and responsibility in the traditional sense, because it would exclude the emergence of, and supervenience by, the essential complex properties of human consciousness.
Keywords: consciousness, free will, determinism, emergence, moral responsibility
Procedia PDF Downloads 164
557 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test
Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi
Abstract:
Supported by neuroscientific findings, the so-called hub-and-spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely the semantic control process and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined by the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose, from the two lower options, the one that best fits the target word above, based on the semantic relation defined. We measured five types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the rest of the analysis. Using descriptive statistics, we determined the frequencies of correct and incorrect responses, and with this knowledge we could later adjust or remove items of questionable reliability. Reliability was tested using Cronbach's α, and all results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two groups (p > 0.05). Likewise, age had no effect on the results in a one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were examined with the nonparametric Spearman's rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a linear relationship between the measured semantic functions. A significance level of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase.
Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge
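The reliability and group-comparison steps reported above can be reproduced in outline with standard tools. The sketch below is Python with synthetic 0/1 item scores; the item counts, the grouping variable and the Cronbach's α helper are illustrative assumptions, not the Hungarian test's actual data.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: respondents x items matrix of 0/1 scores for one subtest."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return k / (k - 1) * (1.0 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

# Synthetic stand-in: 480 respondents x 10 items per subtest (e.g. associative relations).
rng = np.random.default_rng(6)
subtest_a = (rng.random((480, 10)) < 0.80).astype(int)
subtest_b = (rng.random((480, 10)) < 0.75).astype(int)
print("alpha =", round(cronbach_alpha(subtest_a), 2))

totals_a, totals_b = subtest_a.sum(axis=1), subtest_b.sum(axis=1)
sex = rng.choice([0, 1], size=480)                                  # hypothetical grouping variable
print(stats.mannwhitneyu(totals_a[sex == 0], totals_a[sex == 1]))   # gender comparison
print(stats.spearmanr(totals_a, totals_b))                          # between-subtest correlation
```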
Procedia PDF Downloads 103
556 Toward Understanding the Glucocorticoid Receptor Network in Cancer
Authors: Swati Srivastava, Mattia Lauriola, Yuval Gilad, Adi Kimchi, Yosef Yarden
Abstract:
The glucocorticoid receptor (GR) has been proposed to play important, but incompletely understood, roles in cancer. Glucocorticoids (GCs) are widely used as co-medication in various carcinomas, due to their ability to reduce the toxicity of chemotherapy. Furthermore, GR antagonism has proven to be a strategy to treat triple-negative breast cancer and castration-resistant prostate cancer. These observations suggest differential GR involvement in cancer subtypes. The goal of our study has been to elaborate the current understanding of GR signaling in tumor progression and metastasis. Our study involves two cellular models, non-tumorigenic breast epithelial cells (MCF10A) and Ewing sarcoma cells (CHLA9). In our breast cell model, the results indicated that the GR agonist dexamethasone inhibits EGF-induced mammary cell migration, and this effect was blocked when cells were stimulated with a GR antagonist, namely RU486. Microarray analysis of gene expression revealed that the mechanism underlying this inhibition involves dexamethasone-mediated repression of well-known activators of EGFR signaling, alongside enhancement of several of EGFR's negative feedback loops. Because GR acts primarily through composite response elements (GREs) or via a tethering mechanism, our next aim has been to find the transcription factors (TFs) that can interact with GR in MCF10A cells. The TF-binding motifs overrepresented at the promoters of dexamethasone-regulated genes were predicted using bioinformatics. To validate the predictions, we performed high-throughput protein complementation assays (PCA). For this, we utilized the Gaussia luciferase PCA strategy, which enabled analysis of protein-protein interactions between GR and the predicted TFs of mammary cells. A library comprising both nuclear receptors (estrogen receptor, mineralocorticoid receptor, GR) and TFs was fused to fragments of GLuc, namely GLuc(1)-X, X-GLuc(1), and X-GLuc(2), where GLuc(1) and GLuc(2) correspond to the N-terminal and C-terminal fragments of the luciferase gene. The resulting library was screened, in human embryonic kidney 293T (HEK293T) cells, for all possible interactions between nuclear receptors and TFs. By screening all of the combinations between TFs and nuclear receptors, we identified several positive interactions, which were strengthened in response to dexamethasone and abolished in response to RU486. Furthermore, the interactions between GR and the candidate TFs were validated by co-immunoprecipitation in MCF10A and in CHLA9 cells. Currently, the roles played by the uncovered interactions are being evaluated in various cellular processes, such as cellular proliferation, migration, and invasion. In conclusion, our assay provides an unbiased network analysis between nuclear receptors and other TFs, which can lead to important insights into transcriptional regulation by nuclear receptors in various diseases, in this case cancer.
Keywords: epidermal growth factor, glucocorticoid receptor, protein complementation assay, transcription factor
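The screening logic, calling a GR-TF pair a candidate interaction when the Gaussia luciferase complementation signal rises with dexamethasone and falls back with RU486, can be expressed as a small filtering step. The table below is invented example data and the fold-change thresholds are illustrative assumptions, not the study's criteria.

```python
import pandas as pd

# Hypothetical screen readout: mean luminescence per GR-TF pair and treatment.
data = pd.DataFrame({
    "pair":      ["GR-TF_A"] * 3 + ["GR-TF_B"] * 3,
    "treatment": ["vehicle", "dexamethasone", "RU486"] * 2,
    "signal":    [1.0, 4.2, 0.9, 1.0, 1.1, 1.0],
})

wide = data.pivot(index="pair", columns="treatment", values="signal")
wide["dex_fold"] = wide["dexamethasone"] / wide["vehicle"]
wide["ru486_fold"] = wide["RU486"] / wide["vehicle"]

# Candidate interactors: strengthened by the agonist, abolished by the antagonist.
hits = wide[(wide["dex_fold"] > 2.0) & (wide["ru486_fold"] < 1.2)]
print(hits)
```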
Procedia PDF Downloads 227
555 A Density Function Theory Based Comparative Study of Trans and Cis - Resveratrol
Authors: Subhojyoti Chatterjee, Peter J. Mahon, Feng Wang
Abstract:
Resveratrol (RvL), a phenolic compound, is a key ingredient in wine and tomatoes that has been studied over the years because of its important bioactivities, such as anti-oxidant, anti-aging and antimicrobial properties. Of the two isomeric forms of resveratrol, i.e. trans and cis, the health benefit is primarily associated with the trans form. Thus, studying the structural properties of the isomers will not only provide insight into understanding the RvL isomers, but will also help in designing parameters for differentiation in order to achieve 99.9% purity of trans-RvL. In the present study, a density functional theory (DFT) study is conducted, using the B3LYP/6-311++G** model, to explore the through-bond and through-space intramolecular interactions. Properties such as vibrational spectroscopy (IR and Raman), nuclear magnetic resonance (NMR) spectra, the excess orbital energy spectrum (EOES), energy-based decomposition analyses (EDA) and the Fukui function are calculated. It is found that although the structure of trans-RvL is C1 non-planar, the backbone non-H atoms are nearly in the same plane, whereas cis-RvL consists of two major planes, R1 and R2, that are not in the same plane. The absence of planarity gives rise to an H-bond of 2.67 Å in cis-RvL. Rotation of the C(5)-C(8) single bond in trans-RvL produces higher energy barriers, since it may break the (planar) fully conjugated structure, while such rotation in cis-RvL produces multiple minima and maxima depending on the positions of the rings. The calculated FT-IR spectrum shows very different spectral features for trans- and cis-RvL in the region 900 – 1500 cm-1, where the spectral peaks at 1138-1158 cm-1 are split in cis-RvL compared to a single peak at 1165 cm-1 in trans-RvL. In the Raman spectra, there is significant enhancement for cis-RvL in the region above 3000 cm-1. Further, the carbon chemical environment (13C NMR) of the RvL molecule exhibits a larger chemical shift for cis-RvL than for trans-RvL (Δδ = 8.18 ppm) at the carbon atom C(11), indicating that the chemical environment of this carbon in cis-RvL is more diverse than in the other isomer. The energy gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) is 3.95 eV for trans- and 4.35 eV for cis-RvL. A more detailed inspection using the recently developed EOES revealed that most of the large orbital energy differences, i.e. Δεcis-trans > ±0.30 eV, are contributed by the outer valence shell: MO60 (HOMO), MO52-55 and MO46. The active sites captured by the Fukui function (f+ > 0.08) are associated with the stilbene C=C bond of RvL, and cis-RvL is more active at these sites than trans-RvL, as the cis orientation breaks the large conjugation of trans-RvL so that the hydroxyl oxygens are more active in cis-RvL. Finally, EDA highlights the interaction energy (ΔEInt) of the phenolic compound, where the trans isomer is preferred over cis-RvL (ΔΔEi = -4.35 kcal.mol-1). Thus, these quantum mechanics results could help in unravelling the diversified beneficial activities associated with resveratrol.
Keywords: resveratrol, FT-IR, Raman, NMR, excess orbital energy spectrum, energy decomposition analysis, Fukui function
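For readers who want to see how an orbital-gap number of the kind quoted above is obtained, the sketch below shows a minimal B3LYP workflow with the open-source Psi4 package, using water as a small stand-in geometry because the resveratrol coordinates are too long to reproduce here. It illustrates only how a HOMO-LUMO gap is extracted; apart from the basis set, it is not the authors' computational setup.

```python
import psi4

# Small stand-in molecule; replace with the optimised trans- or cis-resveratrol geometry.
psi4.set_memory("1 GB")
psi4.geometry("""
0 1
O
H 1 0.96
H 1 0.96 2 104.5
""")
psi4.set_options({"basis": "6-311++G**"})

energy, wfn = psi4.energy("b3lyp", return_wfn=True)

eps = wfn.epsilon_a().to_array()          # alpha-spin orbital energies (hartree)
homo = eps[wfn.nalpha() - 1]
lumo = eps[wfn.nalpha()]
print(f"HOMO-LUMO gap = {(lumo - homo) * 27.2114:.2f} eV")   # 27.2114 eV per hartree
```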
Procedia PDF Downloads 194