Search results for: laminar and turbulent effect
945 Household Perspectives and Resistance to Preventive Relocation in Flood Prone Areas: A Case Study in the Polwatta River Basin, Southern Sri Lanka
Authors: Ishara Madusanka, So Morikawa
Abstract:
Natural disasters, particularly floods, pose severe challenges globally, affecting both developed and developing countries. In many regions, especially Asia, riverine floods are prevalent and devastating. Integrated flood management incorporates structural and non-structural measures, with preventive relocation emerging as a cost-effective and proactive strategy for areas repeatedly impacted by severe flooding. However, preventive relocation is often hindered by economic, psychological, social, and institutional barriers. This study investigates the factors influencing resistance to preventive relocation and evaluates the role of flood risk information in shaping relocation decisions through risk perception. A conceptual model was developed, incorporating variables such as Flood Risk Information (FRI), Place Attachment (PA), Good Living Conditions (GLC), and Adaptation to Flooding (ATF), with Flood Risk Perception (FRP) serving as a mediating variable. The research was conducted in Welipitiya in the Polwatta river basin, Matara district, Sri Lanka, a region experiencing recurrent flood damage. For this study, an experimental design involving a structured questionnaire survey was utilized, with 185 households participating. The treatment group received flood risk information, including flood risk maps and historical data, while the control group did not. Data were collected in 2023 and analyzed using independent sample t-tests and Partial Least Squares Structural Equation Modeling (PLS-SEM). PLS-SEM was chosen for its ability to model latent variables, handle complex relationships, and suitability for exploratory research. Multi-group Analysis (MGA) assessed variations across different flood risk areas. Findings indicate that flood risk information had a limited impact on flood risk perception and relocation decisions, though its effect was significant in specific high-risk areas. 
Place attachment was a significant factor influencing relocation decisions across the sample. One potential reason for the limited impact of flood risk information on relocation decisions could be the lack of specificity in the information provided. The results suggest that while flood risk information alone may not significantly influence relocation decisions, it is crucial in specific contexts. Future studies and practitioners should focus on providing more detailed risk information and addressing psychological factors such as place attachment to enhance preventive relocation efforts.
Keywords: flood risk communication, flood risk perception, place attachment, preventive relocation, structural equation modeling
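The treatment/control comparison described above rests on an independent-samples t-test. As a minimal sketch of that step (Welch's unequal-variance form; the 5-point risk-perception scores below are invented for illustration, not the study's data):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal variances."""
    n1, n2 = len(sample_a), len(sample_b)
    m1 = sum(sample_a) / n1
    m2 = sum(sample_b) / n2
    v1 = sum((x - m1) ** 2 for x in sample_a) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in sample_b) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)  # standard error of the mean difference
    return (m1 - m2) / se

# Hypothetical flood-risk-perception scores (1-5 scale), for illustration only
treatment = [4, 5, 3, 4, 5, 4]
control = [3, 3, 2, 4, 3, 3]
t_stat = welch_t(treatment, control)
```

The t statistic would then be compared against the t distribution (degrees of freedom via the Welch-Satterthwaite approximation) to obtain the p-value the abstract reports.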
Procedia PDF Downloads 339
944 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli.
However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.
Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
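Frontal alpha asymmetry, the most reliable TF signal in the review, is conventionally computed as the difference of log-transformed alpha-band power over homologous right and left frontal electrodes (often F4 and F3). A minimal sketch, with invented power values and assuming the common ln(right) − ln(left) convention:

```python
import math

def frontal_alpha_asymmetry(left_alpha_power, right_alpha_power):
    """Conventional FAA index: ln(right) - ln(left) alpha-band power.

    Alpha power is inversely related to cortical activity, so a positive
    score suggests relatively greater LEFT frontal activation, consistent
    with a positive/approach response to a stimulus."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# Invented alpha-band power values (arbitrary units) for illustration
faa = frontal_alpha_asymmetry(left_alpha_power=1.8, right_alpha_power=2.4)
```

Here the positive score would be read, under the convention above, as relatively greater left-frontal activation.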
Procedia PDF Downloads 114
943 Comparison of Gait Variability in Individuals with Trans-Tibial and Trans-Femoral Lower Limb Loss: A Pilot Study
Authors: Hilal Keklicek, Fatih Erbahceci, Elif Kirdi, Ali Yalcin, Semra Topuz, Ozlem Ulger, Gul Sener
Abstract:
Objectives and Goals: The stride-to-stride fluctuation in gait, known as gait variability, is a determinant of qualified locomotion. Gait variability is an important predictor of fall risk and is useful for monitoring the effects of therapeutic interventions and rehabilitation. The aim of the study was to compare gait variability in individuals with trans-tibial and trans-femoral lower limb loss. Methods: Ten individuals with traumatic unilateral trans-femoral limb loss (TF), 12 individuals with traumatic trans-tibial lower limb loss (TT), and 12 healthy individuals (HI) participated in the study. All participants were evaluated on a treadmill. Gait characteristics, including mean step length, step length variability, ambulation index, and time on each foot, were evaluated. Participants walked at their preferred speed for six minutes. Data from the 4th to the 6th minute were selected for statistical analyses to eliminate the learning effect. Results: There were differences between the groups in intact limb step length variation, time on each foot, ambulation index, and mean age (p < .05) according to the Kruskal-Wallis test. Pairwise analyses showed differences between TT and TF in residual limb variation (p = .041), time on intact foot (p = .024), time on prosthetic foot (p = .024), and ambulation index (p = .003) in favor of the TT group. There were differences between the TT and HI groups in intact limb variation (p = .002), time on intact foot (p < .001), time on prosthetic foot (p < .001), and ambulation index (p < .001) in favor of the HI group. There were differences between the TF and HI groups in intact limb variation (p = .001), time on intact foot (p = .01), and ambulation index (p < .001) in favor of the HI group.
There was a difference between the groups in mean age, as the HI group was younger (p < .05). The groups were similar in step length (p > .05) and, among individuals with lower limb loss, in duration of prosthesis use (p > .05). Conclusions: This pilot study provided basic data about gait stability in individuals with traumatic lower limb loss. The results showed that, to evaluate gait differences between different amputation levels, long-range gait analysis methods may be useful for obtaining more valuable information. On the other hand, the similarity in step length may result from effective prosthesis use or effective gait rehabilitation; indeed, all participants with lower limb loss had already been trained. The differences between TT and HI, and between TF and HI, may result from age-related features; an age-matched HI population is therefore recommended for future studies. Increasing the number of participants and comparing age-matched groups is also recommended to generalize these results.
Keywords: lower limb loss, amputee, gait variability, gait analyses
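The omnibus comparison above uses the Kruskal-Wallis test, a rank-based analogue of one-way ANOVA for three independent groups. A self-contained sketch of the H statistic (without the tie correction; the ambulation-index values below are invented for illustration):

```python
def ranks(values):
    """Average 1-based ranks, with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of ranks i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) for several samples."""
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    r = ranks(pooled)
    h, idx = 0.0, 0
    for g in groups:
        rank_sum = sum(r[idx:idx + len(g)])
        h += rank_sum ** 2 / len(g)
        idx += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Invented ambulation-index values for three groups (TF, TT, HI)
h = kruskal_h([70, 72, 75], [80, 82, 85], [90, 92, 95])
```

A large H (compared against a chi-squared distribution with groups − 1 degrees of freedom) indicates that at least one group's distribution differs, which is then followed up with the pairwise tests reported above.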
Procedia PDF Downloads 280
942 Non-Linear Transformation of Bulk Acoustic Waves at Oblique Incidence on Plane Solid Boundary
Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy
Abstract:
The transformation of two types of acoustic waves can occur at a flat interface between two solids at oblique incidence of longitudinal and shear bulk acoustic waves (BAW). This paper presents the results of experimental studies of the reflection and propagation of a longitudinal wave and the generation of second and third longitudinal and shear harmonics of BAW at oblique incidence of a longitudinal BAW on a flat rough boundary between two solids. The experimental sample was a rectangular isosceles pyramid made of D16 aluminum alloy, with a plane-parallel cylinder of the same alloy pressed to its base. A piezoelectric lithium niobate transducer with a resonance frequency of 5 MHz was secured to one face of the pyramid to generate a longitudinal wave. Longitudinal waves emitted by this transducer fell at an angle of 45° to the interface between the two solids and were reflected at the same angle. On the opposite face of the pyramid and on the flat side of the cylinder, a longitudinal transducer with a resonance frequency of 10 MHz or a shear transducer with a resonance frequency of 15 MHz was attached. These transducers also effectively received the signal at a frequency of 5 MHz. In the spectra of the transmitted and reflected BAW, shear and longitudinal waves were observed at a frequency of 5 MHz, as well as a longitudinal harmonic at 10 MHz and a shear harmonic at 15 MHz. The effect of reversible changes of the external pressure applied to the rough interface between the two solids on the first and higher harmonics of the BAW at oblique incidence of the longitudinal BAW was investigated experimentally. In the spectrum of the signal reflected from the interface, a decrease in the amplitudes of the first harmonics of the signal and a non-monotonic dependence of the second and third harmonics of the shear wave were observed as the static pressure applied to the interface increased.
In the spectrum of the transmitted signal, growth of the first longitudinal and shear harmonic amplitudes and a non-monotonic dependence (first an increase and then a decrease) of the amplitudes of the second and third longitudinal and shear harmonics with increasing external static pressure were observed. These dependences exhibited hysteresis under reversal of the external pressure. As the pressure applied to the boundary increased, the acoustic contact between the surfaces improved; this increased the energy of the transmitted elastic wave and decreased the energy of the reflected wave. The generation of the second longitudinal acoustic harmonic was associated with Hertzian nonlinearity at the interface of the two pressed rough surfaces, while the generation of the third harmonic was caused by shear hysteresis nonlinearity due to dry friction at the rough interface. This study was supported by the Russian Science Foundation (project №14-22-00042).
Keywords: generation of acoustic harmonics, hysteresis nonlinearity, Hertz nonlinearity, transformation of acoustic waves
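Identifying harmonic content like this amounts to reading single-frequency amplitudes out of the received waveform's spectrum. A toy sketch (synthetic signal with invented amplitudes; a single-bin DFT evaluation rather than the full spectral analysis an experiment would use):

```python
import math

def tone_amplitude(signal, sample_rate, freq):
    """Amplitude of one frequency component via a single DFT bin."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq * i / sample_rate)
             for i, s in enumerate(signal))
    im = -sum(s * math.sin(2 * math.pi * freq * i / sample_rate)
              for i, s in enumerate(signal))
    # Factor 2/n recovers the sinusoid's amplitude from the bin magnitude
    return 2 * math.sqrt(re * re + im * im) / n

# Synthetic received waveform: 5 MHz fundamental plus a weak 10 MHz second
# harmonic, sampled at 100 MHz (all values invented for illustration)
fs = 100e6
wave = [math.sin(2 * math.pi * 5e6 * i / fs)
        + 0.1 * math.sin(2 * math.pi * 10e6 * i / fs)
        for i in range(1000)]
a_fund = tone_amplitude(wave, fs, 5e6)    # amplitude at the fundamental
a_2nd = tone_amplitude(wave, fs, 10e6)    # amplitude at the second harmonic
```

Tracking such harmonic amplitudes while the static pressure is cycled is what reveals the non-monotonic, hysteretic dependences described above.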
Procedia PDF Downloads 378
941 Efficacy of Preimplantation Genetic Screening in Women with a Spontaneous Abortion History with Euploid or Aneuploid Abortus
Authors: Jayeon Kim, Eunjung Yu, Taeki Yoon
Abstract:
Most spontaneous miscarriages are believed to be a consequence of embryo aneuploidies. Transferring euploid embryos selected by PGS is expected to decrease the miscarriage rate. Current PGS indications include advanced maternal age, recurrent pregnancy loss, and repeated implantation failure. Recently, use of PGS for healthy women without the above indications, for the purpose of improving in vitro fertilization (IVF) outcomes, is on the rise. However, the beneficial effect of PGS in this population remains controversial, especially in women with a history of no more than 2 miscarriages or miscarriage of a euploid abortus. This study aimed to investigate whether the karyotyping result of the abortus is a good indicator for preimplantation genetic screening (PGS) in a subsequent IVF cycle in women with a history of spontaneous abortion. A single-center retrospective cohort study was performed. Women who had spontaneous abortions (fewer than 3) with dilatation and evacuation, and subsequent IVF from January 2016 to November 2016, were included. Their medical information was extracted from the charts. Clinical pregnancy was defined as the presence of a gestational sac with fetal heartbeat detected on ultrasound in week 7. Statistical analysis was performed using SPSS software. In total, 234 women were included; 121 of the 234 (51.7%) underwent karyotyping of the abortus, and 113 did not have the abortus karyotyped. Embryo biopsy was performed 3 or 5 days after oocyte retrieval, followed by embryo transfer (ET) in a fresh or frozen cycle. The biopsied materials were subjected to microarray comparative genomic hybridization. The clinical pregnancy rate per ET was compared between the PGS and non-PGS groups in each study group.
Patients were grouped by two criteria: karyotype of the abortus from the previous miscarriage (unknown fetal karyotype (n=89, Group 1), euploid abortus (n=36, Group 2), or aneuploid abortus (n=67, Group 3)), and pursuit of PGS in the subsequent IVF cycle (PGS group, n=105; non-PGS group, n=87). The PGS group was significantly older and had higher numbers of retrieved oocytes and prior miscarriages compared to the non-PGS group. There were no differences in BMI and AMH level between the two groups. In the PGS group, the mean number of transferable (euploid) embryos was 1.3 ± 0.7, 1.5 ± 0.5, and 1.4 ± 0.5, respectively (p = 0.049). In 42 cases, ET was cancelled because all biopsied embryos turned out to be abnormal. In all three groups (Groups 1, 2, and 3), clinical pregnancy rates were not statistically different between the PGS and non-PGS groups (Group 1: 48.8% vs. 52.2% (p=0.858); Group 2: 70% vs. 73.1% (p=0.730); Group 3: 42.3% vs. 46.7% (p=0.640), in the PGS and non-PGS groups, respectively). In both groups who had miscarried a euploid or an aneuploid abortus, the clinical pregnancy rate did not differ between IVF cycles with and without PGS. When comparing miscarriage and ongoing pregnancy rates, there were no significant differences between the PGS and non-PGS groups in any of the three groups. Our results show that routine application of PGS in women who have had fewer than 3 miscarriages would not be beneficial, even in cases where the previous miscarriage was caused by fetal aneuploidy.
Keywords: preimplantation genetic diagnosis, miscarriage, karyotyping, in vitro fertilization
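Comparing clinical pregnancy rates between the PGS and non-PGS arms is a two-proportion comparison. A minimal sketch of the pooled z statistic (the counts below are invented for illustration; the abstract reports only the rates):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-proportion z statistic for comparing two success rates."""
    p1, p2 = success_a / n_a, success_b / n_b
    p = (success_a + success_b) / (n_a + n_b)  # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p1 - p2) / se

# Invented counts: e.g. 21/43 clinical pregnancies with PGS vs. 24/46 without
z = two_proportion_z(21, 43, 24, 46)
```

With |z| well below 1.96, such a difference would not reach significance at the 5% level, matching the pattern of null results reported above.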
Procedia PDF Downloads 182
940 A Comparative Study in Acute Pancreatitis to Find out the Effectiveness of Early Addition of Ulinastatin to Current Standard Care in Indian Subjects
Authors: Dr. Jenit Gandhi, Dr. Manojith SS, Dr. Nakul GV, Dr. Sharath Honnani, Dr. Shaurav Ghosh, Dr. Neel Shetty, Dr. Nagabhushan JS, Dr. Manish Joshi
Abstract:
Introduction: Acute pancreatitis is an inflammatory condition of the pancreas that begins in pancreatic acinar cells and triggers local inflammation, which may progress to a systemic inflammatory response (SIRS), cause distant organ involvement and dysfunction, and end in multiple organ dysfunction syndrome (MODS). Aim: A comparative study in acute pancreatitis to find out the effectiveness of early addition of Ulinastatin to current standard care in Indian subjects. Methodology: A prospective observational study was done during a one-year study period (Dec 2018 - Dec 2019) to evaluate the effect of early addition of Ulinastatin to the current standard treatment and its efficacy in reducing early complications, analgesic requirement, and duration of hospital stay in patients with acute pancreatitis. Results: In the control group, 25 were males and 05 were females. In the test group, 18 were males and 12 were females. The majority were in the age group between 30 and 70 years, with >50% in the 30-50 years age group in both test and control groups. The VAS was median grade 3 in the control group compared to median grade 2 in the test group; pain persisted for the initial 2 days in the test group compared to 4 days in the control group, and analgesics were required for longer in the control group (median 6 days) than in the test group (median 3 days). On follow-up after 5 days, over a period of 2 weeks, none of the patients in the test group developed any new complication. In the control group, 8 patients developed pleural effusion, 04 pseudopancreatic cysts, 02 patients developed portal vein and splenic vein thrombosis, and 02 patients required ventilation for ARDS, all treated symptomatically; in the test group, 02 patients developed pleural effusions, 01 a pseudopancreatic cyst with splenic artery aneurysm, and 01 patient with AKI and MODS was symptomatically treated.
The duration of hospital stay was a median of 4 days (2-7 days) in the test group and 7 days (4-10 days) in the control group. All patients were able to return to normal work in an average of 5 days in the test group compared to 8 days in the control group; the difference was significant. Conclusion: The study concluded that early addition of Ulinastatin to the current standard treatment of acute pancreatitis is effective in reducing pain, early complications, and duration of hospital stay in Indian subjects.
Keywords: Ulinastatin, VAS - visual analogue score, AKI - acute kidney injury, ARDS - acute respiratory distress syndrome
Procedia PDF Downloads 122
939 Malnutrition Among Adult Hospitalized Orthopedic Patients: Nursing Role and Nutrition Screening
Authors: Ehsan Ahmed Yahia
Abstract:
Introduction: The nursing role in nutrition screening and assessment of hospitalized patients is important. Malnutrition is a common and costly problem, particularly among hospitalized patients, and can have an adverse effect on the healing process. The study's goal is to assess the prevalence of malnutrition among adult hospitalized orthopedic patients and to detect barriers to the nutrition screening process. Aim of the study: This study aimed to (a) assess the prevalence of malnutrition in hospitalized orthopedic patients and (b) evaluate the relationship between malnutrition and selected clinical outcomes. Material and Methods: This prospective field study was conducted over three months between 03/2022 and 06/2022 in selected orthopedic departments of a teaching hospital affiliated with Cairo University, Egypt, with a total of one hundred twenty (120) patients. Patients' assessment included checking for malnutrition using the Nutritional Risk Screening questionnaire. Patients at risk for malnourishment were defined as those with an NRS score ≥ 3. Clinical outcomes under consideration included 1) length of hospitalization, 2) mobilization after surgery and conservative treatment, and 3) rate of adverse events. Results: This study found that malnutrition is a significant problem among patients hospitalized in an orthopedic ward. The prevalence of malnutrition was highest in patients with lumbar spine and pelvis fractures, followed by proximal femur and proximal humerus fractures. Patients at risk for malnutrition had significantly prolonged hospitalization, delayed postoperative mobilization, and an increased incidence of adverse events. 27.8% of the study sample were at risk for malnutrition. The highest prevalence of malnourishment was found in septic surgery at 32%, followed by traumatology at 19.6% and arthroplasty at 15.3%.
A higher prevalence of malnutrition was detected among patients with typical fractures, such as lumbar spine and pelvis (46.7%), proximal femur (34.4%), and proximal humeral (23.7%) fractures. Additionally, patients at risk for malnutrition showed prolonged hospitalization (14.7 ± 11.1 vs. 21.2 ± 11.7 days), delayed postoperative mobilization (2.3 ± 2.9 vs. 4.1 ± 4.9 days), and delayed mobilization after conservative treatment (1.1 ± 2.7 vs. 1.8 ± 1.9 days). A statistically significant correlation of NRS with the individual parameters (Spearman's rank correlation, p < 0.05) was observed. The rate of adverse events in patients at risk for malnutrition was significantly higher than that of patients with a regular nutritional status (37.2% vs. 21.1%, p < 0.001). Conclusions: Our results indicate that the prevalence of malnutrition in surgical patients is significant. The nutritional status of patients with typical fractures is especially at risk. Prolonged hospitalization, delayed postoperative mobilization, and delayed mobilization after conservative treatment are significantly associated with malnutrition. In addition, the incidence of adverse events in patients at risk for malnutrition is significantly higher.
Keywords: malnutrition, nutritional risk screening, surgery, nursing, orthopedic nurse
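The reported association between NRS scores and outcomes uses Spearman's rank correlation, which is simply the Pearson correlation computed on ranks. A self-contained sketch (the NRS/length-of-stay pairs are invented and chosen to be perfectly monotone, so ρ comes out as 1):

```python
def _rank(values):
    """Average 1-based ranks, with tied values sharing the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks."""
    rx, ry = _rank(x), _rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Invented NRS scores vs. hypothetical lengths of stay (days)
nrs = [1, 2, 3, 4, 5]
los_days = [8, 10, 14, 18, 25]
rho = spearman_rho(nrs, los_days)
```

Because ρ depends only on rank order, it tolerates the skewed, ordinal data typical of screening scores and hospital stays better than Pearson's r on the raw values.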
Procedia PDF Downloads 99
938 Solar Power Generation in a Mining Town: A Case Study for Australia
Authors: Ryan Chalk, G. M. Shafiullah
Abstract:
Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy which can be used to supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and produce results of integrating PV. Large-scale PV penetration in the network introduces technical challenges, including voltage deviation, increased harmonic distortion, increased available fault current, and reduced power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current, and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters.
Results show that use of inverters with harmonic filtering reduces the level of harmonic injections to an acceptable level according to Australian standards. Furthermore, configuring inverters to supply both active and reactive power assists in mitigating low power factor problems. Use of FACTS devices such as the SVC and STATCOM also reduces the harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality
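The harmonic-injection criterion above is usually expressed as total harmonic distortion (THD): the RMS of the harmonic magnitudes relative to the fundamental. A minimal sketch (the voltage magnitudes are invented, and the ~5% limit is only a commonly cited figure, to be checked against the applicable Australian standard):

```python
import math

def total_harmonic_distortion(fundamental, harmonics):
    """THD: RMS of harmonic magnitudes divided by the fundamental magnitude."""
    return math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Invented 50 Hz example: 230 V fundamental; 5th/7th/11th harmonics in volts
thd = total_harmonic_distortion(230.0, [6.9, 4.6, 2.3])
compliant = thd < 0.05  # assumed ~5% voltage-THD limit, for illustration only
```

An inverter with harmonic filtering reduces the individual harmonic magnitudes, which lowers THD quadratically since each term enters as its square.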
Procedia PDF Downloads 166
937 The Double Standard: Ethical Issues and Gender Discrimination in Traditional Western Ethics
Authors: Merina Islam
Abstract:
Feminists have identified the traditional Western ethical theories as basically male-centered. Feminists are committed to developing a critique showing how traditional Western ethics, together with traditional philosophy, remained gender-biased throughout, irrespective of its claim to gender neutrality. This exclusion of women's experiences from moral discourse is justified on the ground that women cannot be moral agents, since they are not rational. By way of entailment, we are thus led to the position that the virtues of traditional ethics, so viewed, can be nothing but rational and hence male. The ears of traditional Western ethicists have been attuned to male rather than female ethical voices. Right from Plato, Aristotle, Augustine, Aquinas, Rousseau, Kant, and Hegel, and even philosophers like Freud, Schopenhauer, and Nietzsche, and many others, the dualism between reason and passion, or mind and body, gained prominence. These thinkers, according to feminists, have either intentionally excluded women or else have used a certain male moral experience as the standard for all moral experience, thereby resulting once again in the exclusion of women's experiences. Men are identified with rationality and hence contrasted with women, whose sphere is believed to be that of emotion and feeling. This act of exclusion of women's experience from moral discourse has given birth to a tradition that emphasizes reason over emotion, the universal over the particular, and justice over caring. That patriarchy's use of gender distinctions in the realm of ethics has resulted in gender discrimination is an undeniable fact. Hence women's moral agency is said to have often been denied, not simply by the act of excluding women from moral debate or sheer ignorance of their contributions, but through philosophical claims to the effect that women lack moral reason.
Traditional or mainstream ethics cannot justify its claim to the universality, objectivity, and gender neutrality of the standards from which the legitimacy of its various moral maxims and principles was drawn. Through the association of masculine values with reason (and the feminine with the irrational), the standard prototype of moral virtue was created. The feminist critique of traditional mainstream ethics is based on the charge that, because of its inherent gender bias, ethics has so far been justifying discrimination in the name of gender distinctions. In this paper, an attempt is made to examine the gender-biasedness of traditional ethics, and to show to what extent traditional ethics is male-centered and consequently fails to justify its claims to universality and gender neutrality.
Keywords: ethics, gender, male-centered, traditional
Procedia PDF Downloads 428
936 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion
Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan
Abstract:
In this paper, a review of different mathematical models that can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion is presented. This investigation leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical, chemical, or electrochemical behaviors or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected through a rigorous literature study. The experimental program examines both one-dimensional chloride diffusion, using square RC slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using square RC column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c): 0.4, 0.5, and 0.6, and two cover depths: 25 mm and 50 mm. 12 mm bars are used for column elements and 16 mm bars for slab elements. All the samples are subjected to accelerated chloride corrosion in a chloride bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, it is clear that the selected mathematical model included mechanical properties, chemical and electrochemical properties, the nature of corrosion (whether accelerated or natural), and the amount of porous area that rust products can occupy before exerting expansive pressure on the surrounding concrete. The experimental results have shown that the selected model had ±20% and ±10% accuracy for one-dimensional and two-dimensional chloride diffusion, respectively, compared to the experimental output. Half-cell potential readings are also used to assess corrosion probability, and experimental results have shown that mass loss is proportional to the negative half-cell potential readings obtained.
Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time for chloride-induced corrosion of the reinforcement in concrete. The factors considered for this analysis are w/c, bar diameter, and cover depth. The analysis is accomplished using Minitab statistical software, which showed that cover depth has a more significant effect on the time to crack the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made with the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to predetermine the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability in terms of chloride diffusion.
Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion
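A common ingredient of such prediction models is the corrosion-initiation time from Fick's second law of diffusion: the time for chloride at the bar depth to reach a critical threshold, which is why cover depth (entering as a square) dominates. A sketch under that simplified one-dimensional model (all parameter values are invented for illustration and are not from the study's selected model):

```python
import math

def erf_inv(y, lo=0.0, hi=6.0, tol=1e-12):
    """Inverse error function for y in (0, 1), by bisection on math.erf."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def initiation_time(cover_m, d_m2_per_s, c_surface, c_threshold):
    """Time for chloride at depth `cover_m` to reach `c_threshold`,
    from the erf solution of Fick's second law:
        C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
    solved for t at C(cover, t) = c_threshold."""
    z = erf_inv(1 - c_threshold / c_surface)
    return cover_m ** 2 / (4 * d_m2_per_s * z ** 2)

# Invented values: 50 mm cover, D = 1e-12 m^2/s, surface chloride 0.6% and
# threshold 0.06% by mass (illustrative orders of magnitude only)
t_seconds = initiation_time(cover_m=0.05, d_m2_per_s=1e-12,
                            c_surface=0.6, c_threshold=0.06)
t_years = t_seconds / (365.25 * 24 * 3600)
```

Because the cover depth enters squared, halving it cuts the initiation time by a factor of four, consistent with the statistical finding above that cover depth is the dominant factor.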
Procedia PDF Downloads 219
935 Aerobic Training Combined with Nutritional Guidance as an Effective Strategy for Improving Aerobic Fitness and Reducing BMI in Inactive Adults
Authors: Leif Inge Tjelta, Gerd Lise Nordbotten, Cathrine Nyhus Hagum, Merete Hagen Helland
Abstract:
Overweight and obesity can lead to numerous health problems, and inactive people are more often overweight and obese compared to physically active people. Even a moderate weight loss can improve cardiovascular and endocrine disease risk factors. The aim of the study was to examine to what extent overweight and obese adults starting two weekly intensive running sessions increased their aerobic capacity and reduced their BMI, waist circumference, and body fat after 33 weeks of training. An additional aim was to see if there were differences between participants who, in addition to training, also received lifestyle modification education, including practical cooking (nutritional guidance and training group, NTG = 32), compared to those who were not given any nutritional guidance (training group, TG = 40). 72 participants (49 women), mean age 46.1 (± 10.4), were included. Inclusion criteria: previously untrained and inactive adults of all age groups, BMI ≥ 25, and a desire to become fitter and reduce their BMI. The two weekly supervised training sessions consisted of a 10-minute warm-up followed by 20 to 21 minutes of effective interval running, during which participants' heart rates were between 82 and 92% of heart rate maximum. The sessions were completed with ten minutes of whole-body strength training. Measures of BMI, waist circumference (WC), and 3000 m running time were taken at the start of the project (T1), after 15 weeks (T2), and at the end of the project (T3). Measurements of fat percentage, muscle mass, and visceral fat were performed at T1 and T3. Twelve participants (9 women) from both groups, who all scored around average on the 3000 m pre-test, were chosen to do a VO₂max test at T1 and T3. The NTG were given ten theoretical sessions (80 minutes each) and eight practical cooking sessions (140 minutes each). There was a significant reduction in both groups for WC and BMI from T1 to T2; no further reduction was found from T2 to T3.
Although not significant, the NTG reduced their WC more than the TG. For both groups, the percentage reduction in WC was similar to the reduction in BMI. There was a decrease in fat percentage in both groups from pre-test to post-test, whereas for muscle mass a small but insignificant increase was observed in both groups. There was a decrease in 3000 m running time for both groups from T1 to T2 as well as from T2 to T3, although the difference between T2 and T3 was not statistically significant. The 12 participants who tested VO₂max had an increase of 2.86 (±3.84) ml·kg⁻¹·min⁻¹ in VO₂max and a 3:02 min (±2:01 min) reduction in running time over 3000 m from T1 to T3. There was a strong negative correlation between the two variables. The study shows that two weekly intensive running sessions over 33 weeks can increase aerobic fitness and reduce BMI, WC, and fat percentage in inactive adults. Nutritional guidance in addition to training gives an additional effect.
Keywords: interval training, nutritional guidance, fitness, BMI
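The strong negative correlation reported above between VO₂max gain and 3000 m time reduction can be illustrated with a minimal pure-Python sketch. The numbers below are made up for illustration, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical changes for 12 participants: VO2max gain (ml kg^-1 min^-1)
# and change in 3000 m time (seconds; negative = faster).
d_vo2 = [1.2, 4.8, 2.1, 6.0, 0.5, 3.3, 2.9, 5.1, 1.8, 4.2, 3.7, 0.9]
d_time = [-90, -260, -120, -300, -60, -200, -170, -280, -110, -230, -210, -80]

r = pearson_r(d_vo2, d_time)  # strongly negative: bigger VO2max gain, bigger time drop
```

With real data one would also report a significance test; the sketch only shows the direction and strength of the association.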
Procedia PDF Downloads 142
934 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach
Authors: Jared Beard, Ali Baheri
Abstract:
As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can help find new failure scenarios, making better use of simulations. Despite this, AST fails to find particularly sparse failures and is inclined to find solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used to alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally, using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes.
Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal; thus demonstrating the utility of KWIK learners in an AST framework. The next step is implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and adversary tasked with intercepting the agent as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher-fidelity effectively allowing for more responsive closed loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to number of samples, distinct failure modes found, and relative effect of learning after a number of trials.Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification
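The core KWIK idea referenced above is that a learner only predicts when it is certain and otherwise outputs an explicit "I don't know" symbol, which is what lets a multi-fidelity framework decide when a higher-fidelity query is needed. A minimal sketch for deterministic grid-world transitions (illustrative of the concept, not the authors' implementation; the class and method names are invented here):

```python
class KWIKTransitionLearner:
    """Memorization-based KWIK learner for deterministic transitions:
    it predicts only state-action pairs it has already observed and
    returns None (the KWIK "don't know" symbol) otherwise."""

    def __init__(self):
        self.model = {}  # (state, action) -> next_state

    def predict(self, state, action):
        # None signals "don't know": in a multi-fidelity AST setting this
        # is where a higher-fidelity simulation would be queried.
        return self.model.get((state, action))

    def observe(self, state, action, next_state):
        self.model[(state, action)] = next_state


learner = KWIKTransitionLearner()
unknown = learner.predict((0, 0), "right")      # None: not yet observed
learner.observe((0, 0), "right", (1, 0))
known = learner.predict((0, 0), "right")        # (1, 0): now known
```

The sample-efficiency guarantee of KWIK comes from bounding how many times the learner can answer "don't know" before its model is accurate.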
Procedia PDF Downloads 157
933 Promoting Libraries' Services and Events by Librarians Led Instagram Account: A Case Study on Qatar National Library's Research and Learning Instagram Account
Authors: Maryam Alkhalosi, Ahmad Naddaf, Rana Alani
Abstract:
Qatar National Library has its main accounts on social media, which present the general image of the library and its daily news. This paper presents a case study researching the outcome of having a separate Instagram account led by librarians rather than by the library's Communication Department. The main purpose of the librarian-led account is to promote librarians’ services and events, such as research consultation, reference questions, community engagement programs, and collection marketing, in a way that librarians believe reflects their role in the community. Librarians faced several obstacles in helping users understand librarians' roles. As Instagram is the most popular social media platform in Qatar, it was selected as a focused account to show how librarians can help users and to create a direct channel between librarians and users, which also helps librarians understand users’ needs and interests. This research uses a quantitative approach based on the case study: librarians drew on their experience in the Research and Learning Department to identify best practices that might help promote librarians' services and reach a larger number of users. Through the descriptive method, this research describes the changes observed in the number of community users who interact with the Instagram account and engage in librarians’ events. Statistics in this study are based on three main sources: 1. the internal monthly statistics sheet of events and programs held by the Research and Learning Department; 2. the weekly tracking of the Instagram account statistics; 3. Instagram’s tools such as polls, quizzes, and questions. This study shows the direct effect of a librarian-led Instagram account on the number of community members who participate and engage in librarian-led programs and services, in addition to highlighting librarians' role directly with the community members.
The study also shows best practices on Instagram that help reach a wider community of users. This study is important because in the region there is a lack of studies focusing on librarianship, especially on contemporary problems and their solutions, and there is a lack of understanding of the role of a librarian in the Arab region. The research also highlights how librarians can help the public and researchers alike, all through one popular, easy social media channel. This paper is also a chance to share the details of this experience from scratch, from the phase of setting the policy and guidelines for managing the social media account until the point where the benefits of the experience became a reality. The experience even added many skills to the librarians.
Keywords: librarian’s role, social media, instagram and libraries, promoting libraries’ services
Procedia PDF Downloads 97
932 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children, as well as their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. According to Mayer et al.’s model of trust, the characteristics of speakers, including ability, benevolence, and integrity, can influence children’s trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals’ adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children’s trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, 1-β = 0.85, using G*Power 3.1. The study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (moral vs. immoral) and the fulfilment of promises (kept vs. broken) on children’s trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary-choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving the keeping or breaking of moral/immoral promises, in order to investigate children’s trust judgments. Experiment 2 used single-choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust.
The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5- to 6.5-year-old children were more likely than the 3.5- to 4.5-year-old children to trust promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments toward promisors who kept moral promises than toward those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers’ degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgment regarding moral promises among preschoolers aged 3.5-6.5. The results show that preschoolers can consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.
Keywords: promise, trust, moral judgement, preschoolers
Procedia PDF Downloads 54
931 A Hydrometallurgical Route for the Recovery of Molybdenum from Mo-Co Spent Catalyst
Authors: Bina Gupta, Rashmi Singh, Harshit Mahandra
Abstract:
Molybdenum is a strategic metal that finds applications in petroleum refining, thermocouples, X-ray tubes, and steel alloys owing to its high melting temperature and tensile strength. The growing significance and economic value of molybdenum have increased interest in the development of efficient processes aimed at its recovery from secondary sources. The main secondary sources of Mo are molybdenum catalysts used for the hydrodesulphurisation process in petrochemical refineries. The activity of these catalysts gradually decreases with time during the desulphurisation process as they become contaminated with toxic material, and they are dumped as waste, which leads to environmental issues. In this scenario, recovery of molybdenum from spent catalyst is significant from both an economic and an environmental point of view. Recently, ionic liquids have gained prominence due to their low vapour pressure, high thermal stability, good extraction efficiency, and recycling capacity. The present study reports the recovery of molybdenum from Mo-Co spent leach liquor using Cyphos IL 102 [trihexyl(tetradecyl)phosphonium bromide] as an extractant. The spent catalyst was leached with 3 mol/L HCl, and the leach liquor containing Mo (870 ppm), Co (341 ppm), Al (508 ppm), and Fe (42 ppm) was subjected to the extraction step. The effect of extractant concentration on the leach liquor was investigated, and almost 85% extraction of Mo was achieved with 0.05 mol/L Cyphos IL 102. Stripping studies revealed that 2 mol/L HNO₃ can effectively strip 94% of the extracted Mo from the loaded organic phase. McCabe-Thiele diagrams were constructed to determine the number of stages required for quantitative extraction and stripping of molybdenum and were confirmed by counter-current simulation studies. According to the McCabe-Thiele extraction and stripping isotherms, two stages are required for quantitative extraction and stripping of molybdenum at A/O = 1:1.
Around 95.4% extraction of molybdenum was achieved in two counter-current stages at A/O = 1:1, with negligible extraction of Co and Al. However, iron was co-extracted and was removed from the loaded organic phase by scrubbing with 0.01 mol/L HCl. Quantitative stripping (~99.5%) of molybdenum was achieved with 2.0 mol/L HNO₃ in two stages at O/A = 1:1. Overall, ~95.0% molybdenum with 99% purity was recovered from the Mo-Co spent catalyst. From the strip solution, MoO₃ was obtained by crystallization followed by thermal decomposition. The product obtained after thermal decomposition was characterized by XRD, FE-SEM, and EDX techniques. The XRD peaks of MoO₃ correspond to the molybdite (syn-MoO₃) structure. FE-SEM depicts the rod-like morphology of the synthesized MoO₃, and EDX analysis shows a 1:3 atomic ratio of molybdenum to oxygen. The synthesised MoO₃ can find application in gas sensors, battery electrodes, display devices, smart windows, lubricants, and catalysis.
Keywords: cyphos IL 102, extraction, Mo-Co spent catalyst, recovery
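The stage counts above can be sanity-checked with the Kremser equation for ideal counter-current extraction, which relates the number of stages to the fraction left in the raffinate. This assumes a constant distribution ratio D (a simplification; the authors used graphical McCabe-Thiele construction, not necessarily this formula):

```python
def countercurrent_extraction(D, ratio_OA=1.0, stages=2):
    """Fraction of metal extracted after `stages` ideal counter-current
    stages (Kremser equation), assuming a constant distribution ratio D
    and organic/aqueous phase ratio O/A = ratio_OA."""
    E = D * ratio_OA  # extraction factor
    if abs(E - 1.0) < 1e-12:
        remaining = 1.0 / (stages + 1)
    else:
        remaining = (E - 1.0) / (E ** (stages + 1) - 1.0)
    return 1.0 - remaining


# A single-stage extraction of ~85% at A/O = 1:1 implies D ~ 0.85/0.15.
# Two counter-current stages then predict ~97% extraction, consistent in
# order of magnitude with the ~95.4% reported once non-idealities enter.
D = 0.85 / 0.15
two_stage = countercurrent_extraction(D, 1.0, 2)
```

The same function applied to the stripping step (with the stripping distribution ratio) would reproduce the two-stage quantitative stripping estimate.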
Procedia PDF Downloads 269
930 Using Lysosomal Immunogenic Cell Death to Target Breast Cancer via Xanthine Oxidase/Micro-Antibody Fusion Protein
Authors: Iulianna Taritsa, Kuldeep Neote, Eric Fossel
Abstract:
Lysosome-induced immunogenic cell death (LIICD) is a powerful mechanism for targeting cancer cells that kills circulating malignant cells and primes the host’s immune cells against future recurrence. Current immunotherapies for cancer are limited in preventing recurrence, a gap that can be bridged by training the immune system to recognize cancer neoantigens. Lysosomal leakage can be induced therapeutically to traffic antigens from dying cells to dendritic cells, which can later present those tumorigenic antigens to T cells. Previous research has shown that oxidative agents administered in the tumor microenvironment can initiate LIICD. We generated a fusion protein between an oxidative agent known as xanthine oxidase (XO) and a mini-antibody specific for EGFR/HER2-sensitive breast tumor cells. The anti-EGFR single-domain antibody fragment is uniquely sourced from llama and is functional without the presence of a light chain. These llama micro-antibodies have been shown to penetrate tissues better and to have improved physicochemical stability compared to traditional monoclonal antibodies. We demonstrate that the fusion protein created is stable and can induce early markers of immunogenic cell death in an in vitro human breast cancer cell line (SkBr3). Specifically, we measured overall cell death as well as surface-expressed calreticulin, extracellular ATP release, and HMGB1 production, which are consensus indicators of ICD. Flow cytometry, luminescence assays, and ELISA were used, respectively, to quantify biomarker levels in treated versus untreated cells. We also included a positive control group of SkBr3 cells dosed with doxorubicin (a known inducer of LIICD) and a negative control dosed with cisplatin (a known inducer of cell death, but not of the immunogenic variety). We examined each marker at various time points after the cancer cells were treated with the XO/antibody fusion protein, doxorubicin, and cisplatin.
Upregulated biomarkers after treatment with the fusion protein indicate an immunogenic response. We thus show the potential of this fusion protein to induce an anticancer effect paired with an adaptive immune response against EGFR/HER2+ cells. Our research in human cell lines provides evidence supporting the same therapeutic method for patients and serves as a gateway to developing a new treatment approach against breast cancer.
Keywords: apoptosis, breast cancer, immunogenic cell death, lysosome
Procedia PDF Downloads 199
929 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia
Authors: Yonas Shuke Kitawa
Abstract:
Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type, and malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study is aimed at developing a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating a neighborhood matrix that better represents the spatial correlation, treating the areal units as the vertices of a graph and the neighbor relations as its edges. We used aggregated malaria counts from southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria risk in the area, while enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risks are less sensitive to environmental factors than those of P. falciparum.
Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks for either species were observed in districts in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix
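The neighborhood-matrix idea above can be sketched in a few lines: given a graph over districts (whether from the border-sharing rule or from the proposed optimization), a proper CAR prior uses the precision matrix Q = D - ρW. This is a generic illustration of that standard construction, not the authors' MSTCAR code, and the function name is invented here:

```python
def car_precision(n, edges, rho=0.9):
    """Precision matrix Q = D - rho*W of a proper CAR prior, built from a
    neighborhood graph given as an edge list over areal units 0..n-1.
    W is the binary adjacency matrix; D holds the neighbor counts."""
    W = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        W[i][j] = W[j][i] = 1.0
    Q = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Q[i][i] = sum(W[i])            # D: number of neighbors of unit i
        for j in range(n):
            if i != j:
                Q[i][j] = -rho * W[i][j]
    return Q


# Four districts with an estimated (not necessarily border-sharing) graph:
Q = car_precision(4, [(0, 1), (1, 2), (2, 3), (0, 2)])
```

Swapping the edge list is exactly the step where the graph-based optimization replaces the border-sharing rule while the rest of the model stays unchanged.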
Procedia PDF Downloads 82
928 Inertial Particle Focusing Dynamics in Trapezoid Straight Microchannels: Application to Continuous Particle Filtration
Authors: Reza Moloudi, Steve Oh, Charles Chun Yang, Majid Ebrahimi Warkiani, May Win Naing
Abstract:
Inertial microfluidics has emerged recently as a promising tool for high-throughput manipulation of particles and cells for a wide range of flow cytometric tasks, including cell separation/filtration, cell counting, and mechanical phenotyping. Inertial focusing is profoundly reliant on the cross-sectional shape of the channel, which impacts not only the shear field but also the wall-effect lift force near the wall region. Despite comprehensive experiments and numerical analyses of the lift forces in rectangular and non-rectangular microchannels (half-circular and triangular cross-sections), all of which possess planes of symmetry, less effort has been devoted to the flow field structure of trapezoidal straight microchannels and its effects on inertial focusing. A rectilinear channel with a trapezoidal cross-section, by contrast, breaks down all planes of symmetry. In this study, particle focusing dynamics inside trapezoid straight microchannels was first studied systematically for a broad range of channel Reynolds numbers (20 < Re < 800). The altered axial velocity profile, and consequently the new shear force arrangement, led to a cross-lateral movement of the equilibrium positions toward the longer side wall when the rectangular straight channel was changed to a trapezoid; however, as the channel Reynolds number further increased (Re > 50), the main lateral focusing started to move back toward the middle and the shorter side wall, depending on the particle clogging ratio (K = a/Hmin, where a is the particle size), channel aspect ratio (AR = W/Hmin, where W is the channel width and Hmin the smaller channel height), and the slope of the slanted wall. Increasing the channel aspect ratio (AR) from 2 to 4 and the slope of the slanted wall up to tan(α) ≈ 0.4, where tan(α) = (Hlonger-sidewall - Hshorter-sidewall)/W, shifted the off-center lateral focusing position away from the middle of the channel cross-section by up to ~20 percent of the channel width.
It was found that focusing near the slanted wall was disrupted by the asymmetry: particles mainly focused near the bottom wall or fluctuated between the channel center and the bottom wall, depending on the slanted wall and Re (Re < 100, channel aspect ratio 4:1). Finally, as a proof of principle, a trapezoidal straight microchannel with a bifurcation was designed and utilized for continuous filtration of a broader range of particle clogging ratios (0.3 < K < 1), exiting through the longer-wall outlet with ~99% efficiency (Re < 100), in comparison to rectangular straight microchannels (W > H, 0.3 ≤ K < 0.5).
Keywords: cell/particle sorting, filtration, inertial microfluidics, straight microchannel, trapezoid
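The two dimensionless groups used throughout the abstract can be computed from standard definitions: the channel Reynolds number Re = ρU·Dh/μ, with the hydraulic diameter Dh = 4A/P for the trapezoidal cross-section, and the clogging ratio K = a/Hmin. A sketch using common inertial-microfluidics conventions; the example dimensions are invented, not the paper's design:

```python
import math

def trapezoid_channel_re(rho, mu, q_ul_min, W, H_short, H_long):
    """Channel Reynolds number Re = rho*U*Dh/mu for a trapezoidal
    cross-section (parallel side walls of heights H_short, H_long a
    distance W apart). Flow rate in uL/min; lengths in meters."""
    A = 0.5 * (H_short + H_long) * W              # trapezoid area
    slant = math.hypot(W, H_long - H_short)       # slanted top wall
    P = W + H_short + H_long + slant              # wetted perimeter
    Dh = 4.0 * A / P
    Q = q_ul_min * 1e-9 / 60.0                    # uL/min -> m^3/s
    U = Q / A                                     # mean velocity
    return rho * U * Dh / mu

def clogging_ratio(a, H_min):
    """K = a / Hmin: particle diameter over the smaller channel height."""
    return a / H_min


# Water (rho = 1000 kg/m^3, mu = 1e-3 Pa s) at 100 uL/min in a
# hypothetical 320 um wide, 80/160 um tall trapezoid (AR = 4).
re = trapezoid_channel_re(1000.0, 1e-3, 100.0, 320e-6, 80e-6, 160e-6)
K = clogging_ratio(24e-6, 80e-6)  # 24 um particle -> K = 0.3
```

Such helpers make it easy to check whether an operating point sits in the Re < 100 filtration regime identified above.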
Procedia PDF Downloads 227
927 Reduction in Hospital Acquire Infections after Intervention of Hand Hygiene and Personal Protective Equipment at COVID Unit Indus Hospital Karachi
Authors: Aisha Maroof
Abstract:
Introduction: Coronavirus Disease 2019 (COVID-19) is spreading rapidly around the world, with devastating consequences for patients, health care workers, and health systems. Severe COVID-19 with pneumonia is associated with high rates of admission to the intensive care unit (ICU), and such patients are at high risk of hospital-acquired infections (HAIs) such as central line-associated bloodstream infection (CLABSI), catheter-associated urinary tract infection (CAUTI), and laboratory-confirmed bloodstream infection (LCBSI). The chances of infection transmission increase when healthcare workers’ (HCWs) practices are inappropriate. Poor hand hygiene (HH) and incorrect use of personal protective equipment (PPE) also drive multidrug-resistant organism transmission: multiple gloving instead of HH and incorrect use of PPE can lead to a significant increase in device-related infections. As COVID-19 reaches low- and middle-income countries, its effects could be even more severe, because it will be difficult for them to react aggressively to the pandemic. HAIs are one of the biggest medical concerns, resulting in increased mortality rates. Objective: To assess the effect of an intervention on HCWs’ hand hygiene and PPE compliance in reducing the rate of HAIs in COVID-19 patients. Method: An interventional study was conducted from July to December 2020. CLABSI, CAUTI, and LCBSI data were collected from medical records and by direct observation. A total of 50 nurses and 18 doctors, together with all patients with laboratory-confirmed severe COVID-19 admitted to the hospital, were included in this study. Respiratory tract specimens were obtained after the first 48 h of ICU admission. Practices were observed before and after the intervention, and education was provided based on WHO guidelines. Results: During the six months of the study (July to December), the rates of CLABSI, CAUTI, and LCBSI were reported pre- and post-intervention.
The CLABSI rate decreased from 22.7 to 0, the CAUTI rate decreased from 1.6 to 0, and the LCBSI rate declined from 3.3 to 0 after implementation of the intervention. Conclusion: HAIs are an important cause of morbidity and mortality. Most device-related infections occur due to a lack of correct PPE use and hand hygiene compliance. Hand hygiene and PPE are the most important measures to protect patients; through education, correct PPE use and hand hygiene compliance can be improved, reducing bacterial infections in COVID-19 patients.
Keywords: hospital-acquired infection, healthcare workers, hand hygiene, personal protective equipment
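Device-associated rates of the kind quoted above are conventionally expressed per 1,000 device-days (the standard NHSN-style surveillance denominator). A small sketch of that calculation, with illustrative numbers rather than the study's actual counts and denominators:

```python
def device_infection_rate(infections, device_days):
    """Device-associated infection rate per 1,000 device-days, the usual
    denominator for CLABSI (central-line days) and CAUTI (catheter days)."""
    if device_days <= 0:
        raise ValueError("device_days must be positive")
    return 1000.0 * infections / device_days


# e.g. 5 CLABSIs over 220 central-line days -> ~22.7 per 1,000 line-days,
# of the same order as the pre-intervention figure reported above.
rate = device_infection_rate(5, 220)
```

Comparing such rates before and after an intervention (rather than raw counts) corrects for changes in how many device-days were accrued in each period.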
Procedia PDF Downloads 130
926 Working From Home: On the Relationship Between Place Attachment to Work Place, Extraversion and Segmentation Preference to Burnout
Authors: Diamant Irene, Shklarnik Batya
Abstract:
In addition to its widespread effects on health and economic issues, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the work-from-home trend, complete or partial, as part of social distancing. In fact, these changes accelerated an existing tendency toward work flexibility already underway before the pandemic. Technology and advanced means of communication led to a re-assessment of the “place of work” as a physical space in which work takes place. Today workers can remotely carry out meetings, manage projects, and work in groups, and different research studies point to the fact that this type of work has no adverse effect on productivity. However, from the worker’s perspective, despite numerous advantages associated with work from home, such as convenience, flexibility, and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment, and erosion of home-work boundaries, all risk factors for reduced quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll’s Conservation of Resources theory, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, and we examined psychological characteristics and their connection to the formation of burnout.
Based on the conceptualization of Place Attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by Place Attachment to the workplace would suffer more from burnout when working from home. We also assumed that extrovert individuals, characterized by the need for social interaction at the workplace, and individuals with a segmentation preference, a need for separation between different life domains, would suffer more from burnout, especially among fully remote workers relative to partially remote workers. 194 workers aged 19-53 from different sectors, of whom 111 worked fully from home and 83 worked partially from home, were tested using an online questionnaire distributed through social media. The results of the study supported our assumptions. The repercussions of these findings for future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.
Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19
Procedia PDF Downloads 191
925 Modeling, Topology Optimization and Experimental Validation of Glass-Transition-Based 4D-Printed Polymeric Structures
Authors: Sara A. Pakvis, Giulia Scalet, Stefania Marconi, Ferdinando Auricchio, Matthijs Langelaar
Abstract:
In recent developments in the field of multi-material additive manufacturing, differences in material properties are exploited to create printed shape-memory structures, which are referred to as 4D-printed structures. New printing techniques allow for the deliberate introduction of prestresses in the specimen during manufacturing, and, in combination with the right design, this enables new functionalities. This research focuses on bi-polymer 4D-printed structures, where the transformation process is based on a heat-induced glass transition in one material lowering its Young’s modulus, combined with an initial prestress in the other material. Upon the decrease in stiffness, the prestress is released, which results in the realization of an essentially pre-programmed deformation. As the design of such functional multi-material structures is crucial but far from trivial, a systematic methodology to find the design of 4D-printed structures is developed, where a finite element model is combined with a density-based topology optimization method to describe the material layout. This modeling approach is verified by a convergence analysis and validated by comparing its numerical results to analytical and published data. Specific aspects that are addressed include the interplay between the definition of the prestress and the material interpolation function used in the density-based topology description, the inclusion of a temperature-dependent stiffness relationship to simulate the glass transition effect, and the importance of the consideration of geometric nonlinearity in the finite element modeling. The efficacy of topology optimization to design 4D-printed structures is explored by applying the methodology to a variety of design problems, both in 2D and 3D settings. Bi-layer designs composed of thermoplastic polymers are printed by means of the fused deposition modeling (FDM) technology. 
Acrylonitrile butadiene styrene (ABS) undergoes the glass-transition transformation, while thermoplastic polyurethane (TPU) is prestressed by means of the 3D-printing process itself. Tests inducing shape transformation in the printed samples through heating are performed to calibrate the prestress and to validate the modeling approach by comparing the numerical results to the experimental findings. Using the experimentally obtained prestress values, more complex designs have been generated through topology optimization, and samples have been printed and tested to evaluate their performance. This study demonstrates that by combining topology optimization and 4D-printing concepts, stimuli-responsive structures with specific properties can be designed and realized.
Keywords: 4D-printing, glass transition, shape memory polymer, topology optimization
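The two modeling ingredients named above, a temperature-dependent stiffness across the glass transition and a density-based (SIMP-style) material interpolation, can be combined in a single stiffness function. A sketch of the general idea under simple assumed forms (a sigmoidal E(T) and the common power-law interpolation); the exact expressions used by the authors may differ:

```python
import math

def youngs_modulus(x, T, E_glassy, E_rubbery, Tg, k=0.5,
                   E_min=1e-3, p=3.0):
    """Temperature-dependent, density-interpolated stiffness.
    E(T) drops sigmoidally from E_glassy to E_rubbery around the glass
    transition Tg (steepness k); the design density x in [0, 1] is then
    interpolated SIMP-style: E = E_min + x**p * (E(T) - E_min)."""
    E_T = E_rubbery + (E_glassy - E_rubbery) / (1.0 + math.exp(k * (T - Tg)))
    return E_min + x ** p * (E_T - E_min)
```

In a topology-optimization loop, x is the design variable per element; the penalization exponent p pushes intermediate densities toward 0 or 1, while E(T) supplies the stiffness drop that releases the prestress on heating.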
Procedia PDF Downloads 210
924 Lightweight Sheet Molding Compound Composites by Coating Glass Fiber with Cellulose Nanocrystals
Authors: Amir Asadi, Karim Habib, Robert J. Moon, Kyriaki Kalaitzidou
Abstract:
There has been considerable interest in cellulose nanomaterials (CN) as reinforcement for polymers and polymer composites due to their high specific modulus and strength, low density and toxicity, and accessible hydroxyl side groups that can be readily chemically modified. The focus of this study is making lightweight composites for better fuel efficiency and lower CO₂ emissions in the auto industry, with no compromise on mechanical performance, using a scalable technique that can be easily integrated into sheet molding compound (SMC) manufacturing lines. Lightweighting will be achieved by replacing part of the heavier components, i.e., glass fibers (GF), with a small amount of cellulose nanocrystals (CNC) in short GF/epoxy composites made using SMC. CNC will be introduced as a coating on the GF rovings prior to their use in the SMC line. The employed coating method is similar to the fiber sizing technique commonly used, so it can be easily scaled and integrated into industrial SMC lines. This is an alternative route to most techniques, which involve dispersing CN in the polymer matrix and in which nanomaterial agglomeration limits scale-up to industrial production. We have demonstrated that incorporating CNC as a coating on the GF surface by immersing the GF in CNC aqueous suspensions, a simple and scalable technique, increases the interfacial shear strength (IFSS) by ~69% compared to composites produced with uncoated GF, suggesting enhanced stress transfer across the GF/matrix interface. As a result of the IFSS enhancement, incorporation of 0.17 wt% CNC in the composite results in increases of ~10% in both elastic modulus and tensile strength, and of 40% and 43% in flexural modulus and strength, respectively. We have also determined that dispersing 1.4 and 2 wt% CNC in the epoxy matrix of short GF/epoxy SMC composites by sonication allows removing 10 wt% GF with no penalty on tensile and flexural properties, leading to 7.5% lighter composites.
Although sonication is a scalable technique, it is not as simple and inexpensive as coating the GF by passing them through an aqueous suspension of CNC. In this study, the above findings are integrated to 1) investigate the effect of CNC content on mechanical properties by passing the GF rovings through CNC aqueous suspensions of various concentrations (0-5%) and 2) determine the optimum ratio of added CNC to removed GF to achieve the maximum possible weight reduction without compromising the mechanical performance of the SMC composites. The results of this study are of industrial relevance, providing a path toward producing high-volume lightweight and mechanically enhanced SMC composites using cellulose nanomaterials.
Keywords: cellulose nanocrystals, light weight polymer-matrix composites, mechanical properties, sheet molding compound (SMC)
Procedia PDF Downloads 225
923 Investigating the Influences of Long-Term, as Compared to Short-Term, Phonological Memory on the Word Recognition Abilities of Arabic Readers vs. Arabic Native Speakers: A Word-Recognition Study
Authors: Insiya Bhalloo
Abstract:
It is quite common in the Muslim faith for non-Arabic speakers to be able to convert written Arabic, especially Quranic Arabic, into a phonological code without significant semantic or syntactic knowledge. This is due to prior experience learning to read the Quran (a religious text written in Classical Arabic) from a very young age, such as via enrolment in Quranic Arabic classes. Compared to native speakers of Arabic, these Arabic readers do not have a comprehensive morpho-syntactic knowledge of the Arabic language, nor can they understand or engage in Arabic conversation. The study seeks to investigate whether mere phonological experience (as indicated by the Arabic readers' experience with Arabic phonology and its sound system) is sufficient to cause phonological interference during word recognition of previously heard words, despite the participants' non-native status. Both native speakers of Arabic and non-native speakers of Arabic, i.e., those individuals who learned to read the Quran from a young age, will be recruited. Each experimental session will include two phases: an exposure phase and a test phase. During the exposure phase, participants will be presented with Arabic words (n = 40) on a computer screen. Half of these words will be common words found in the Quran, while the other half will be words commonly found in Modern Standard Arabic (MSA) but either non-existent in the Quran or present there at a significantly lower frequency. During the test phase, participants will then be presented with both familiar (n = 20; i.e., words presented during the exposure phase) and novel Arabic words (n = 20; i.e., words not presented during the exposure phase). Half of these presented words will be common Quranic Arabic words and the other half will be common MSA words that are not Quranic words.
Moreover, half of the Quranic Arabic and MSA words presented will be nouns and half will be verbs, thereby eliminating word-processing effects of lexical category. Participants will then determine whether they saw each word during the exposure phase. This study seeks to investigate whether long-term phonological memory, such as via childhood exposure to Quranic Arabic orthography, has a differential effect on the word-recognition capacities of native Arabic speakers and Arabic readers; we seek to compare the effects of long-term phonological memory with those of short-term phonological exposure (as indicated by the presentation of familiar words from the exposure phase). The researcher's hypothesis is that, despite the lack of lexical knowledge, early experience with converting written Quranic Arabic text into a phonological code will help participants recall the familiar Quranic words that appeared during the exposure phase more accurately than those that were not presented. Moreover, it is anticipated that the non-native Arabic readers will also report more false alarms to the unfamiliar Quranic words, due to early childhood phonological exposure to Quranic Arabic script, thereby causing false phonological facilitatory effects.
Keywords: Modern Standard Arabic, phonological facilitation, phonological memory, Quranic Arabic, word recognition
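Old/new recognition designs of this kind are commonly summarized with the signal-detection sensitivity index d′, computed from hit and false-alarm rates; the abstract does not name its planned analysis, so the following Python sketch (with hypothetical counts) is only an illustration of how such recognition data could be scored.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' for an old/new recognition test,
    with a standard 1/(2N) correction for rates of 0 or 1."""
    n_old = hits + misses
    n_new = false_alarms + correct_rejections
    hr = min(max(hits / n_old, 1 / (2 * n_old)), 1 - 1 / (2 * n_old))
    far = min(max(false_alarms / n_new, 1 / (2 * n_new)), 1 - 1 / (2 * n_new))
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hr) - z(far)

# hypothetical counts for 20 "old" (exposure-phase) and 20 "new" items
print(round(d_prime(16, 4, 5, 15), 3))
```

A larger false-alarm rate to unfamiliar Quranic words, as hypothesized for the non-native readers, would show up here as a lower d′ for that item type.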
Procedia PDF Downloads 358
922 First Step into a Smoke-Free Life: The Effectivity of Peer Education Programme of Midwifery Students
Authors: Rabia Genc, Aysun Eksioglu, Emine Serap Sarican, Sibel Icke
Abstract:
Today the habit of cigarette smoking is among the most important public health concerns because of the health problems it leads to. The group most at risk of taking up tobacco and tobacco products is adolescents and teenagers, and one of the most effective ways to prevent them from starting to smoke is education. This research is an educational intervention study carried out in order to evaluate the effect of peer education on teenagers' knowledge about smoking. The research was carried out between October 15, 2013 and September 9, 2015 at Ege University Ataturk Vocational Health School. The population of the research comprised the students studying at the Ege University Atatürk Vocational Health School, Midwifery Department (N = 390). The peer educator group that would give training on smoking consisted of 10 people, and the peer groups to be trained were divided via simple randomization into an experimental group (n = 185) and a control group (n = 185). A questionnaire, an information evaluation form, and informed consent forms were used as data collection tools. The analysis of the data collected in the study was carried out in the Statistical Package for Social Science (SPSS 15.0). It was found that 62.5% of the students in the peer educator group had smoked at some period of their lives; however, none of them continued to smoke. When asked about their reasons for starting to smoke, 25% said they just wanted to try it, and 25% answered that it was because of their friend groups. When the pre-peer-education and post-peer-education point averages of the peer educator group were evaluated, the results showed a significant difference between the point averages (p < 0.05). When the cigarette use of the experimental and control groups was evaluated, 18.2% of the experimental group and 24.2% of the control group still smoked.
9.1% of the experimental group and 14.8% of the control group stated that they started smoking because of their friend groups. Among the students who smoke, 15.9% of those in the experimental group and 21.9% of those in the control group stated they are thinking of quitting. There was a statistically significant difference between the pre-education and post-education point averages of the experimental group (p ≤ 0.05); for the control group, however, there was no statistically significant difference between the pre-test and post-test averages. Nor were there any statistically significant differences between the pre-test and post-test averages of the experimental and control groups (p > 0.05). The study found that the peer education programme was not effective on the smoking habits of Vocational Health School students. When future studies are planned to evaluate peer education activities, it should be taken into consideration that peer education requires a long term and that students in the educator group should be more enthusiastic and act as leaders in their environment.
Keywords: midwifery, peer, peer education, smoking
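The pre/post point-average comparisons above were run in SPSS; the same paired comparison can be sketched in a few lines of Python. The scores below are hypothetical (the abstract does not publish raw data), so this only illustrates the shape of the analysis, not the study's actual numbers.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# hypothetical knowledge scores (0-100) for 185 students,
# before and after a peer education session
pre = rng.normal(60, 10, 185)
post = pre + rng.normal(8, 6, 185)  # education raises scores on average

# paired (dependent-samples) t-test: pre vs. post for the same students
t, p = stats.ttest_rel(pre, post)
print(f"t = {t:.2f}, p = {p:.4g}")
```

A p-value below 0.05 here would correspond to the significant pre/post difference the study reports for the experimental group.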
Procedia PDF Downloads 223
921 Restricted Boltzmann Machines and Deep Belief Nets for Market Basket Analysis: Statistical Performance and Managerial Implications
Authors: H. Hruschka
Abstract:
This paper presents the first comparison of the performance of the restricted Boltzmann machine and the deep belief net on binary market basket data relative to binary factor analysis and the two best-known topic models, namely latent Dirichlet allocation and the correlated topic model. This comparison shows that the restricted Boltzmann machine and the deep belief net are superior to both binary factor analysis and topic models. Managerial implications that differ between the investigated models are treated as well. The restricted Boltzmann machine is defined as a joint Boltzmann distribution of hidden variables and observed variables (purchases). It comprises one layer of observed variables and one layer of hidden variables. Note that variables of the same layer are not connected. The comparison also includes deep belief nets with three layers. The first layer is a restricted Boltzmann machine based on category purchases. Hidden variables of the first layer are used as input variables by the second-layer restricted Boltzmann machine, which then generates second-layer hidden variables. Finally, in the third layer, hidden variables are related to purchases. A public data set is analyzed which contains one month of real-world point-of-sale transactions in a typical local grocery outlet. It consists of 9,835 market baskets referring to 169 product categories. This data set is randomly split into two halves. One half is used for estimation; the other serves as holdout data. Each model is evaluated by the log likelihood for the holdout data. Performance of the topic models is disappointing, as the holdout log likelihood of the correlated topic model (which is better than latent Dirichlet allocation) is lower by more than 25,000 compared to the best binary factor analysis model. On the other hand, binary factor analysis on its own is clearly surpassed by both the restricted Boltzmann machine and the deep belief net, whose holdout log likelihoods are higher by more than 23,000.
Overall, the deep belief net performs best. We also interpret hidden variables discovered by binary factor analysis, the restricted Boltzmann machine and the deep belief net. Hidden variables characterized by the product categories to which they are related differ strongly between these three models. To derive managerial implications we assess the effect of promoting each category on total basket size, i.e., the number of purchased product categories, due to each category's interdependence with all the other categories. The investigated models lead to very different implications as they disagree about which categories are associated with higher basket size increases due to a promotion. Of course, recommendations based on better performing models should be preferred. The impressive performance advantages of the restricted Boltzmann machine and the deep belief net suggest continuing research by appropriate extensions. To include predictors, especially marketing variables such as price, seems to be an obvious next step. It might also be feasible to take a more detailed perspective by considering purchases of brands instead of purchases of product categories.
Keywords: binary factor analysis, deep belief net, market basket analysis, restricted Boltzmann machine, topic models
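The architecture described above, a layer of observed purchase variables, a layer of hidden variables, and no within-layer connections, can be sketched compactly. The toy implementation below (not the paper's code; the data, layer sizes, and CD-1 training loop are illustrative assumptions) shows a binary RBM trained by contrastive divergence on synthetic baskets; the paper itself evaluates models by holdout log likelihood, whereas this sketch only tracks a crude reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class BinaryRBM:
    """One visible layer (binary category purchases), one hidden layer,
    no connections within a layer, trained by contrastive divergence."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.05):
        """One CD-1 update on a batch of binary basket vectors."""
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hiddens
        pv1 = self.visible_probs(h0)                      # reconstruct visibles
        ph1 = self.hidden_probs(pv1)
        n = len(v0)
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        self.b += lr * (v0 - pv1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

def recon_error(rbm, v):
    """Mean absolute reconstruction error (a crude training check only)."""
    return np.abs(v - rbm.visible_probs(rbm.hidden_probs(v))).mean()

# toy baskets over 6 hypothetical categories with two co-purchase patterns
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 0, 1, 1, 0]] * 50, dtype=float)

rbm = BinaryRBM(n_visible=6, n_hidden=2)
err_before = recon_error(rbm, data)
for _ in range(300):
    rbm.cd1_step(data)
err_after = recon_error(rbm, data)
print(f"reconstruction error: {err_before:.3f} -> {err_after:.3f}")
```

Stacking a second RBM on the first layer's hidden activations, as the paper does for its three-layer deep belief net, would reuse exactly this building block.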
Procedia PDF Downloads 201
920 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities within Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes, with their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from various water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for modelling floods. Nevertheless, this model was built to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is unsuitable for studying water quality. The first goal was therefore to complete and update all stormwater network components in the model. Then, available GIS data were used to calculate catchment properties such as slope, length, and imperviousness.
In order to calibrate and validate the model, data from two temporary pipe flow monitoring stations, collected during the previous summer, were used along with records of two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. Model results were found to be sensitive to the ratio of impervious area. The calculated catchment length was also tested, because it is only an approximate representation of catchment shape, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
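Goodness-of-fit measures like those quoted above can be computed directly from paired observed and simulated hydrographs. The sketch below uses common definitions of the correlation coefficient, peak error, and volume error (the abstract does not state its exact formulas, so these are assumptions), applied to hypothetical flow series.

```python
import numpy as np

def calibration_stats(obs, sim):
    """Common goodness-of-fit measures for comparing simulated and
    observed pipe flows (definitions assumed, not taken from the study)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]                       # correlation coefficient
    peak_err = (sim.max() - obs.max()) / obs.max() * 100  # % error in peak flow
    vol_err = (sim.sum() - obs.sum()) / obs.sum() * 100   # % error in total volume
    return r, peak_err, vol_err

# hypothetical observed and simulated hydrographs (m^3/s)
obs = [0.1, 0.4, 1.2, 2.0, 1.1, 0.5, 0.2]
sim = [0.1, 0.5, 1.1, 2.0, 1.2, 0.4, 0.2]
r, pe, ve = calibration_stats(obs, sim)
print(f"r = {r:.3f}, peak error = {pe:.2f}%, volume error = {ve:.2f}%")
```

In a calibration run against the monitoring-station records, series like `obs` and `sim` would come from the flow meters and the MIKE URBAN output respectively.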
Procedia PDF Downloads 300
919 Quantitative Evaluation of Efficiency of Surface Plasmon Excitation with Grating-Assisted Metallic Nanoantenna
Authors: Almaz R. Gazizov, Sergey S. Kharintsev, Myakzyum Kh. Salakhov
Abstract:
This work deals with background signal suppression in tip-enhanced near-field optical microscopy (TENOM). The background appears because an optical signal is detected not only from the subwavelength area beneath the tip but also from the wider diffraction-limited area of the laser's waist, which may contain other substances. The background can be reduced by using a tapered probe with a grating on its lateral surface, where external illumination causes surface plasmon excitation. Effective light coupling requires a grating whose parameters are perfectly matched to the given incident light. This work is devoted to an analysis of light-grating coupling and a search for grating parameters that enhance the near field beneath the tip apex. The aim of this work is to find the figure of merit of plasmon excitation as a function of the grating period and the location of the grating with respect to the apex. In our treatment, the metallic grating on the lateral surface of the tapered plasmonic probe is illuminated by a plane wave whose electric field is perpendicular to the sample surface. The theoretical model of the efficiency of plasmon excitation and propagation toward the apex is tested by FDTD-based numerical simulation. The electric field of the incident light is enhanced at every slit of the grating due to the lightning-rod effect. Hence, the grating causes amplitude and phase modulation of the incident field in various ways, depending on its geometry and material. The phase-modulating grating on the probe is a sort of metasurface that allows manipulation of the spatial frequencies of the incident field. The spatial-frequency-dependent electric field is found from the angular spectrum decomposition. If one of the components satisfies the phase-matching condition, then one can readily calculate the figure of merit of plasmon excitation, defined as the ratio of the intensities of the surface mode and the incident light.
During propagation toward the apex, the surface wave undergoes losses in the probe material, radiation losses, and mode compression. There is an optimal location of the grating with respect to the apex; its value is found by matching the quadratic law of mode compression against the exponential law of light extinction. Finally, the performed theoretical analysis and numerical simulations of plasmon excitation demonstrate that various surface waves can be effectively excited by using overtones of the grating period or by phase modulation of the incident field. Gratings with such periods are easy to fabricate. A tapered probe with the grating effectively enhances and localizes the incident field at the sample.
Keywords: angular spectrum decomposition, efficiency, grating, surface plasmon, taper nanoantenna
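The angular spectrum decomposition step can be illustrated numerically: a periodic phase modulation imposed by the grating shifts part of the incident field's power to the spatial frequency of the grating order, which is the component that can phase-match a surface plasmon. The 1-D FFT sketch below is only illustrative (the wavelength, period, beam waist, and modulation depth are assumed values, not the study's probe parameters), and its power ratio stands in for, but is not, the paper's surface-mode/incident-intensity figure of merit.

```python
import numpy as np

wavelength = 633e-9           # He-Ne line, assumed
period = 500e-9               # grating period, assumed
n = 4096
x = np.linspace(-10e-6, 10e-6, n, endpoint=False)
dx = x[1] - x[0]

incident = np.exp(-(x / 4e-6) ** 2)                  # Gaussian beam profile
grating_phase = 0.5 * np.sin(2 * np.pi * x / period) # phase modulation by grating
field = incident * np.exp(1j * grating_phase)

# angular spectrum: decompose the modulated field into spatial frequencies
spectrum = np.fft.fftshift(np.fft.fft(field))
kx = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))

# the first grating order adds 2*pi/period to the in-plane wavevector
k_order1 = 2 * np.pi / period
idx = np.argmin(np.abs(kx - k_order1))
figure_of_merit = np.abs(spectrum[idx]) ** 2 / np.abs(spectrum).max() ** 2
print(f"relative power in first grating order: {figure_of_merit:.3e}")
```

Whether that first-order component excites a plasmon then depends on comparing `k_order1` plus the incident in-plane wavevector against the surface plasmon dispersion of the probe material.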
Procedia PDF Downloads 283
918 Reservoir-Triggered Seismicity of Water Level Variation in the Lake Aswan
Authors: Abdel-Monem Sayed Mohamed
Abstract:
Lake Aswan is one of the largest man-made reservoirs in the world. The reservoir began to fill in 1964, and the level rose gradually, with annual irrigation cycles, until it reached a maximum water level of 181.5 m in November 1999, with a capacity of 160 km3. The filling of such a large reservoir changes the stress system, either by increasing the vertical compressional stress through loading and/or by increasing pore pressure, which decreases the effective normal stress. The resulting effect on fault-zone stability depends strongly on the orientation of the pre-existing stress and the geometry of the reservoir/fault system. The main earthquake occurred on November 14, 1981, with magnitude 5.5. This event occurred 17 years after the reservoir began to fill, along the active part of the Kalabsha fault, and was located not far from the High Dam. Numerous small earthquakes followed this earthquake and continue to the present. For this reason, 13 seismograph stations (a radio-telemetry network of short-period seismometers) were installed around the northern part of Lake Aswan. The main purpose of the network is to monitor earthquake activity continuously within the Aswan region. The data described here are obtained from the continuous record of earthquake activity and lake-water level variation over the period from 1982 to 2015. The seismicity is concentrated in the Kalabsha area, at the intersection of the easterly trending Kalabsha fault with the northerly trending faults. The earthquake foci are distributed in two seismic zones in the crust, shallow and deep. Shallow events have focal depths of less than 12 km, while deep events extend from 12 to 28 km. Correlation between the seismicity and the water level variation in the lake strongly suggests placing the micro-earthquakes, particularly those in the shallow seismic zone, in the reservoir-triggered seismicity category.
Water loading is one of several factors acting as an activating medium in triggering earthquakes. The factors common to all cases of induced seismicity appear to be the presence of specific geological conditions, the tectonic setting, and water loading. The role of the water loading is that of a supplementary source of earthquake events: earthquake activity in the area originates tectonically (ML ≥ 4), while the water factor works as an activating medium in triggering small earthquakes (ML ≤ 3). Study of the seismicity induced by water level variation in Lake Aswan is of great importance for the safety of the High Dam body and its economic resources.
Keywords: Aswan lake, Aswan seismic network, seismicity, water level variation
Procedia PDF Downloads 372
917 Ultrasound Assisted Alkaline Potassium Permanganate Pre-Treatment of Spent Coffee Waste
Authors: Rajeev Ravindran, Amit K. Jaiswal
Abstract:
Lignocellulose is the largest reservoir of inexpensive, renewable carbon. It is composed of lignin, cellulose and hemicellulose. Cellulose and hemicellulose are composed of the reducing sugars glucose, xylose and several other monosaccharides, which can be metabolised by microorganisms to produce several value-added products such as biofuels, enzymes, amino acids etc. Enzymatic treatment of lignocellulose leads to the release of monosaccharides such as glucose and xylose. However, factors such as the presence of lignin, crystalline cellulose, acetyl groups, pectin etc. contribute to recalcitrance, restricting the effective enzymatic hydrolysis of cellulose and hemicellulose. To overcome these problems, pre-treatment of lignocellulose is generally carried out, which facilitates better degradation of lignocellulose. A range of pre-treatment strategies is commonly employed, classified by mode of action, viz. physical, chemical, biological and physico-chemical. However, existing pre-treatment strategies result in lower sugar yields and the formation of inhibitory compounds. To overcome these problems, we propose a novel pre-treatment, which utilises the superior oxidising capacity of alkaline potassium permanganate, assisted by ultra-sonication, to break the covalent bonds in spent coffee waste and remove recalcitrant compounds such as lignin. The pre-treatment was conducted for 30 minutes using 2% (w/v) potassium permanganate at room temperature with a solid-to-liquid ratio of 1:10. The pre-treated spent coffee waste (SCW) was subjected to enzymatic hydrolysis using the enzymes cellulase and hemicellulase. Shake flask experiments were conducted with a working volume of 50 mL of buffer containing 1% substrate. The results showed that the novel pre-treatment strategy yielded 7 g/L of reducing sugar after 24 hours, as compared to 3.71 g/L obtained from biomass that had undergone dilute acid hydrolysis.
From the results obtained, it is fairly certain that ultrasonication assists the oxidation of recalcitrant components in lignocellulose by potassium permanganate. Enzyme hydrolysis studies suggest that ultrasound-assisted alkaline potassium permanganate pre-treatment is far superior to treatment by dilute acid. Furthermore, SEM, XRD and FTIR analyses were carried out to assess the effect of the new pre-treatment strategy on the structure and crystallinity of pre-treated spent coffee waste. This novel one-step pre-treatment strategy was implemented under mild conditions and exhibited high efficiency in the enzymatic hydrolysis of spent coffee waste. Further study and scale-up are in progress in order to realise future industrial applications.
Keywords: spent coffee waste, alkaline potassium permanganate, ultra-sonication, physical characterisation
Procedia PDF Downloads 358
916 The Renewed Constitutional Roots of Agricultural Law in Hungary in Line with Sustainability
Authors: Gergely Horvath
Abstract:
The study analyzes the special provisions of the highest level of national agricultural legislation, the Fundamental Law of Hungary (25 April 2011), with descriptive, analytic and comparative methods. The agriculturally relevant articles of the constitution are very important because, in spite of their high level of abstraction, they can determine and serve practice comprehensively and effectively. That is why the objective of the research is to interpret the concrete sentences and phrases connected with agriculture, compared with the approaches of some other relevant constitutions (historical-grammatical interpretation). The major findings of the study focus on searching for the provisions and the approach capable of solving the problems of sustainable food production. The real challenge agricultural law must face in the future is protecting and conserving its background and subjects: the environment, the ecosystem services and all the 'roots' of food production. In effect, agricultural law is the legal aspect of the production of 'our daily bread' from farm to table. However, it must also guarantee safe daily food for our children and for all our descendants. In connection with sustainability, this unique, value-oriented constitution of an agrarian country even deals with questions unusual at this level of legislation, such as GMOs (by banning the production of genetically modified crops). The starting point is that the principle of the public good (principium boni communis) must be the leading notion of the norm, an idea that lies partly outside the law. The public interest is reflected in agricultural law mainly in the concept of public health (in connection with food security) and the security of supply of healthy food. As construed, Article P claims the general protection of our natural resources as a requirement.
The enumeration of the specific natural resources 'which all form part of the common national heritage' also entails the conservation of the foundations of sustainable agriculture. The reference to arable land represents the subfield of law concerned with the protection of land (and soil conservation); that to water resources represents the subfield of water protection; and the references to forests and biological diversity reflect the specialty of nature conservation, which is an essential support for agrobiodiversity. The protected objects constituting the nation's common heritage metonymically merge with their protective regimes, strengthening them and forming constitutional references of law. These regimes also mean the protection of the natural foundations of life for both living and future generations, in the name of intra- and intergenerational equity.
Keywords: agricultural law, constitutional values, natural resources, sustainability
Procedia PDF Downloads 167