Search results for: accuracy ratio
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7880


590 Prenatal Use of Serotonin Reuptake Inhibitors (SRIs) and Congenital Heart Anomalies (CHA): An Exploratory Pharmacogenetics Study

Authors: Aizati N. A. Daud, Jorieke E. H. Bergman, Wilhelmina S. Kerstjens-Frederikse, Pieter Van Der Vlies, Eelko Hak, Rolf M. F. Berger, Henk Groen, Bob Wilffert

Abstract:

Prenatal use of SRIs was previously associated with Congenital Heart Anomalies (CHA). The aim of this study is to explore whether pharmacogenetics plays a role in this teratogenicity using a gene-environment interaction study. A total of 33 case-mother dyads and 2 mothers-only (children deceased) registered in EUROCAT Northern Netherlands were included in a case-only study. Five case-mother dyads and two mothers-only were exposed to SRIs (paroxetine=3, fluoxetine=2, venlafaxine=1, paroxetine and venlafaxine=1) in the first trimester of pregnancy. The remaining 28 case-mother dyads were not exposed to SRIs. Ten genes that encode enzymes or proteins important in determining fetal exposure to SRIs or in their mechanism of action were selected: CYPs (CYP1A2, CYP2C9, CYP2C19, CYP2D6), ABCB1 (placental P-glycoprotein), SLC6A4 (serotonin transporter) and serotonin receptor genes (HTR1A, HTR1B, HTR2A, and HTR3B). All included subjects were genotyped for 58 genetic variations in these ten genes. Logistic regression analyses were performed to determine the interaction odds ratio (OR) between genetic variations and SRI exposure on the risk of CHA. Due to the low phenotype frequencies of CYP450 poor metabolizers among exposed cases, the ORs could not be calculated. For ABCB1, there was no indication of a change in the risk of CHA with any of the ABCB1 SNPs in the children or their mothers. Several genetic variations of the serotonin transporter and receptors (SLC6A4 5-HTTLPR and 5-HTTVNTR, HTR1A rs1364043, HTR1B rs6296 & rs6298, HTR3B rs1176744) were associated with an increased risk of CHA, but the sample size was too limited to reach statistical significance. For SLC6A4 genetic variations, the mean genetic score of the exposed case-mothers tended to be higher than that of the unexposed mothers (2.5 ± 0.8 and 1.88 ± 0.7, respectively; p=0.061).
For SNPs of the serotonin receptors, the mean genetic score for exposed cases (children) tended to be higher than that for unexposed cases (3.4 ± 2.2 and 1.9 ± 1.6, respectively; p=0.065). This study might be among the first to explore the potential gene-environment interaction between pharmacogenetic determinants and SRI use on the risk of CHA. With the small sample size, it was not possible to detect a significant interaction. However, there were indications of a role for serotonin receptor polymorphisms in fetuses exposed to SRIs on the fetal risk of CHA, which warrants further investigation.
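As a rough illustration of the case-only design described above: the interaction OR is estimated from the genotype-exposure association among cases alone (assuming gene-environment independence in the source population). A minimal sketch, with entirely hypothetical counts and a Woolf-type confidence interval:

```python
import math

# Hypothetical 2x2 counts among CASES ONLY (not the study's data):
# rows: SRI-exposed vs unexposed; columns: variant carriers vs non-carriers
a, b = 5, 2    # exposed cases: carriers, non-carriers
c, d = 10, 18  # unexposed cases: carriers, non-carriers

# Case-only interaction odds ratio (valid under gene-environment independence)
or_interaction = (a * d) / (b * c)

# Woolf 95% confidence interval on the log scale
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(or_interaction) - 1.96 * se_log_or)
ci_high = math.exp(math.log(or_interaction) + 1.96 * se_log_or)
```

With counts this small the interval is very wide, mirroring the abstract's point that the sample size was too limited for significance.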

Keywords: gene-environment interaction, heart defects, pharmacogenetics, serotonin reuptake inhibitors, teratogenicity

Procedia PDF Downloads 200
589 Boiler Ash as a Reducer of Formaldehyde Emission in Medium-Density Fiberboard

Authors: Alexsandro Bayestorff da Cunha, Débora Caline de Mello, Camila Alves Corrêa

Abstract:

In the production of fiberboards, an adhesive based on urea-formaldehyde resin is used, which has the advantages of low cost, homogeneity of distribution, solubility in water, high reactivity in an acid medium, and high adhesion to wood. On the other hand, it has the disadvantages of low resistance to humidity and the release of formaldehyde. The objective of the study was to determine the viability of adding industrial boiler ash to the urea-formaldehyde-based adhesive for the production of medium-density fiberboard. The raw material was composed of Pinus spp. fibers, urea-formaldehyde resin, paraffin emulsion, ammonium sulfate, and boiler ash. The experimental plan, consisting of 8 treatments, was completely randomized with a factorial arrangement, with 0%, 1%, 3%, and 5% ash added to the adhesive, with and without the application of a catalyst. In each treatment, 4 panels were produced with a density of 750 kg·m⁻³, dimensions of 40 x 40 x 1.5 cm, 12% urea-formaldehyde resin, and 1% paraffin emulsion, hot-pressed at a temperature of 180 °C and a pressure of 40 kgf/cm² for 10 minutes. The different compositions of the adhesive were characterized in terms of viscosity, pH, gel time, and solids content, and the panels by their physical and mechanical properties, in addition to evaluation using the IMAL DPX300 X-ray densitometer and measurement of formaldehyde emission by the perforator method. The results showed a significant reduction in all adhesive properties with the use of the catalyst, regardless of the treatment, while increasing the ash percentage increased the average values of viscosity, gel time, and solids and reduced the pH for the adhesive with a catalyst; without a catalyst, the behavior was the opposite, with the exception of solids.
For the physical properties, the results for density, compaction ratio, and thickness were equivalent and in accordance with the standard, while the moisture content was significantly reduced with the use of the catalyst but was not influenced by the ash percentage. The density profile for all treatments was characteristic of medium-density fiberboard, with surfaces more compacted and denser than the central layer. Thickness swelling was not influenced by the catalyst or the use of ash, presenting average values within the normalized parameters. For the mechanical properties, a negative influence of the ash on the adhesive was observed in the modulus of rupture from 1% and in the traction test from 3%; however, only this last property, at the percentages of 3% and 5%, fell below the minimum limit of the norm. The use of the catalyst and of ash at percentages of 3% and 5% reduced the formaldehyde emission of the panels; however, only the panels whose adhesive contained the catalyst presented emissions below 8 mg of formaldehyde/100 g of panel. It can therefore be said that up to 1% boiler ash can be added to the adhesive with a catalyst without impairing the technological properties.

Keywords: reconstituted wood panels, formaldehyde emission, technological properties of panels, perforator

Procedia PDF Downloads 51
588 Changes in Heavy Metals Bioavailability in Manure-Derived Digestates and Subsequent Hydrochars to Be Used as Soil Amendments

Authors: Hellen L. De Castro e Silva, Ana A. Robles Aguilar, Erik Meers

Abstract:

Digestates are residual by-products, rich in nutrients and trace elements, which can be used as organic fertilisers on soils. However, because these elements are not digested and dry matter is reduced during the anaerobic digestion process, metal concentrations are higher in digestates than in feedstocks, which might hamper their use as fertilisers according to the threshold values of some countries' policies. Furthermore, there is uncertainty regarding the amount of these elements assimilated by some crops, which might result in their bioaccumulation. Therefore, further processing of the digestate to obtain safe fertilizing products has been recommended. This research aims to analyze the effect of applying the hydrothermal carbonization process, as a thermal treatment to reduce the bioavailability of heavy metals, to mono- and co-digestates derived from pig manure and maize from contaminated land in France. This study examined pig manure collected from a novel stable system (VeDoWs, province of East Flanders, Belgium) that separates the collection of pig urine and feces, resulting in a solid fraction of manure with a high up-concentration of heavy metals and nutrients. Mono-digestion and co-digestion processes were conducted in semi-continuous reactors for 45 days under mesophilic conditions, after which the digestates were dried at 105 °C for 24 hours. Then, hydrothermal carbonization was applied at a 1:10 solid/water ratio, to guarantee controlled experimental conditions, at different temperatures (180, 200, and 220 °C) and residence times (2 h and 4 h). During the process, the pressure was generated autogenously, and the reactor was cooled down after completing the treatments. The solid and liquid phases were separated through vacuum filtration, and the solid phase of each treatment (hydrochar) was dried and ground for chemical characterization.
Different fractions (exchangeable/adsorbed fraction - F1, carbonate-bound fraction - F2, organic matter-bound fraction - F3, and residual fraction - F4) of some heavy metals (Cd, Cr, Ni, and Cr) were determined in the digestates and derived hydrochars using the modified Community Bureau of Reference (BCR) sequential extraction procedure. The main results indicated a difference in heavy metal fractionation between the digestates and their derived hydrochars; however, the hydrothermal carbonization operating conditions did not have remarkable effects on heavy metal partitioning between the hydrochars of the proposed treatments. Based on the estimated potential ecological risk assessment, there was a one-level decrease (considerable to moderate) when comparing the heavy metal partitioning in the digestates and derived hydrochars.
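The "potential ecological risk" grading mentioned above is commonly computed with Hakanson's risk factor, Er = Tr × (C_sample / C_background), and mapped onto qualitative grades. A minimal sketch; the toxic-response factors and grade boundaries follow Hakanson's convention, but all concentrations are hypothetical, not from this study:

```python
# Hakanson toxic-response factors for the metals considered here
TOXIC_RESPONSE = {"Cd": 30, "Cr": 2, "Ni": 5}

def risk_factor(metal, c_sample, c_background):
    """Potential ecological risk factor Er = Tr * (C_sample / C_background)."""
    return TOXIC_RESPONSE[metal] * c_sample / c_background

def risk_grade(er):
    """Map Er to Hakanson's qualitative risk grades."""
    if er < 40:
        return "low"
    if er < 80:
        return "moderate"
    if er < 160:
        return "considerable"
    if er < 320:
        return "high"
    return "very high"

# Hypothetical Cd concentrations (mg/kg) vs a background of 0.3 mg/kg
er_digestate = risk_factor("Cd", 1.2, 0.3)  # digestate
er_hydrochar = risk_factor("Cd", 0.6, 0.3)  # derived hydrochar
```

With these illustrative numbers the grade drops one level (considerable to moderate), mirroring the comparison reported in the abstract.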

Keywords: heavy metals, bioavailability, hydrothermal treatment, bio-based fertilisers, agriculture

Procedia PDF Downloads 88
587 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application

Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal

Abstract:

This study proposed four jet fuel surrogates (S1, S2, S3, and S4) with a careful selection of seven large hydrocarbon fuel components, ranging from C₉-C₁₆, of higher molecular weight and higher boiling point, matching the molecular size distribution of actual jet fuel. The surrogates were composed of seven components: n-propyl cyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄) and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was developed by two approaches. The first was based on a decoupling methodology, combining a skeletal C₄-C₁₆ mechanism for the oxidation of the heavy hydrocarbons with a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can thereby be used in computational fluid dynamics (CFD) simulation. Extensive validation was performed for the individual single components, including ignition delay time, species concentration profiles, and laminar flame speed, against various fundamental experiments under wide operating conditions, and for their blended mixtures. Among all the surrogates, S1 was extensively validated against experimental data from a shock tube, rapid compression machine, jet-stirred reactor, counterflow flame, and premixed laminar flame over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates, S2, S3, and S4, were validated against shock tube ignition delay times only, to capture the ignition characteristics of the target fuels S-8 & GTL, IPK, and RP-3, respectively.
Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed, and the same validation data were used as for the previously developed surrogates, but at high-temperature conditions only. After testing the prediction performance of the mechanisms for the surrogates developed by the decoupling methodology, a comparison was made with the results of the surrogates developed by the HyChem model. All four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model has great potential for the development of oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
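Ignition delay time, the main validation target above, is often extracted from a simulated temperature history as the instant of steepest temperature rise (the max-dT/dt criterion). A minimal sketch on a synthetic sigmoid trace, not output of the surrogate mechanism:

```python
import math

def ignition_delay(times, temps):
    """Return the time of steepest temperature rise (max dT/dt criterion)."""
    rates = [(temps[i + 1] - temps[i]) / (times[i + 1] - times[i])
             for i in range(len(times) - 1)]
    return times[rates.index(max(rates))]

# Synthetic trace: slow induction at ~900 K, sharp rise centred at t = 1.0 ms
times = [i * 1e-6 for i in range(2000)]  # 0-2 ms, 1 microsecond steps
temps = [900 + 1500 / (1 + math.exp(-(t - 1e-3) / 2e-5)) for t in times]

tau = ignition_delay(times, temps)  # expected near 1.0 ms
```

Applied to real reactor simulations, the same criterion gives the ignition delay values compared against shock tube and rapid compression machine data.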

Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism

Procedia PDF Downloads 114
586 A Study of the Effect of Early and Late Meal Time on Anthropometric and Biochemical Parameters in Patients of Type 2 Diabetes

Authors: Smriti Rastogi, Narsingh Verma

Abstract:

Background: A vast body of research exists on the use of oral hypoglycaemic drugs, insulin injections and the like in managing diabetes, but no research has taken into consideration the parameter of time-restricted meal intake and its positive effects in managing diabetes. The utility of this project is immense, as it offers a solution to the woes of diabetics based on circadian rhythm and the normal physiology of the human body. Method: 80 diabetics, enrolled from the Outpatient Department of Endocrinology, KGMU (King George's Medical University), were divided based on consent into an early-dinner TRM (time-restricted meal) group or a control group. Follow-up was done at six months and 12 months for anthropometric measurements (height, weight, waist-hip ratio, neck size), fasting and postprandial blood sugar, HbA1c, serum urea, serum creatinine, and lipid profile. The patients were given a clear understanding of chronomedicine and how it affects their health. A single intervention was made: the timing of dinner was set at or around 7 pm for the TRM group. Result: 65% of the TRM group and 40% of the non-TRM group had normal HbA1c after 12 months. HbA1c in the TRM group (first visit to second follow-up) showed a significant p-value of 0.017. A p-value of <0.0001 was observed on comparing the fasting blood sugar values in the TRM group between the first visit and second follow-up. The postprandial blood sugar values in the TRM group (first visit and second follow-up) showed a p-value of <0.0001 (highly significant). Values of the three parameters were non-significant in the control group. Hip size (first visit to second follow-up) in the TRM group showed a p-value of 0.0344 (significant; difference between means = 2.762 ± 1.261). Detailed results of the above parameters and a few newer ones will be presented at the conference. Conclusion: Time-restricted meal intake in diabetics shows promise and is worth exploring further.
Time-restricted meal intake in type 2 diabetics has a significant effect in controlling and maintaining HbA1c, as the reduction in HbA1c was very significant in the TRM group vs. the control group. Similarly significant results were obtained for the fasting and postprandial blood sugar values in the TRM group compared to the control group. This is one of the first such studies undertaken in Indian diabetics; although the initial data are encouraging, further research is required to corroborate the results.
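The group comparisons reported above reduce to two-sample tests. A minimal sketch of Welch's t statistic with a normal approximation to the two-sided p-value, on hypothetical 12-month HbA1c reductions (illustrative values only, not the trial's data):

```python
import math
from statistics import mean, stdev

# Hypothetical HbA1c reductions (%) over 12 months -- NOT the study's data
trm     = [1.1, 0.9, 1.4, 1.2, 0.8, 1.3, 1.0, 1.2]
control = [0.2, 0.4, 0.1, 0.3, 0.5, 0.2, 0.3, 0.1]

def welch_t(x, y):
    """Welch's t statistic for two samples with unequal variances."""
    vx = stdev(x) ** 2 / len(x)
    vy = stdev(y) ** 2 / len(y)
    return (mean(x) - mean(y)) / math.sqrt(vx + vy)

t = welch_t(trm, control)
# Normal approximation to the two-sided p-value (adequate for a quick check;
# a proper analysis would use the t distribution with Welch's df)
p = math.erfc(abs(t) / math.sqrt(2))
```
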

Keywords: chronomedicine, diabetes, endocrinology, time restricted meal intake

Procedia PDF Downloads 108
585 Effects of Polydispersity on the Glass Transition Dynamics of Aqueous Suspensions of Soft Spherical Colloidal Particles

Authors: Sanjay K. Behera, Debasish Saha, Paramesh Gadige, Ranjini Bandyopadhyay

Abstract:

The zero shear viscosity (η₀) of a suspension of hard sphere colloids characterized by a significant polydispersity (≈10%) increases with increasing volume fraction (ϕ) and shows a dramatic increase at ϕ=ϕg, with the system entering a colloidal glassy state. Fragility, which is a measure of the rapidity of approach of these suspensions towards the glassy state, is sensitive to size polydispersity and the stiffness of the particles. Soft poly(N-isopropylacrylamide) (PNIPAM) particles deform in the presence of neighboring particles at volume fractions above the random close packing volume fraction of undeformed monodisperse spheres. Softness, therefore, enhances the packing efficiency of these particles. In this study, PNIPAM particles of a nearly constant swelling ratio and with polydispersities varying over a wide range (7.4%-48.9%) are synthesized to study the effects of polydispersity on the dynamics of suspensions of soft PNIPAM colloidal particles. The size and polydispersity of these particles are characterized using dynamic light scattering (DLS) and scanning electron microscopy (SEM). As these particles are deformable, their packing in aqueous suspensions is quantified in terms of an effective volume fraction (ϕeff). The zero shear viscosity (η₀) of these colloidal suspensions, estimated from rheometric experiments as a function of ϕeff, increases with increasing ϕeff and shows a dramatic increase at ϕeff = ϕ₀. The η₀ data as a function of ϕeff fit well to the Vogel-Fulcher-Tammann (VFT) equation. It is observed that increasing polydispersity results in increasingly fragile supercooled liquid-like behavior, with the parameter ϕ₀, extracted from the fits to the VFT equation, shifting towards higher ϕeff.
The observed increase in fragility is attributed to the prevalence of dynamical heterogeneities (DHs) in these polydisperse suspensions, while the simultaneous shift in ϕ₀ is ascribed to the decoupling of the dynamics of the smallest and largest particles. Finally, it is observed that the intrinsic nonlinearity of these suspensions, estimated at the third harmonic near ϕ₀ in Fourier transform oscillatory rheological experiments, increases with increase in polydispersity. These results are in agreement with theoretical predictions and simulation results for polydisperse hard sphere colloidal glasses and clearly demonstrate that jammed suspensions of polydisperse colloidal particles can be effectively fluidized with increasing polydispersity. Suspensions of these particles are therefore excellent candidates for detailed experimental studies of the effects of polydispersity on the dynamics of glass formation.
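The VFT form referred to above is commonly written for colloids as η₀ = η∞ exp(Dϕeff/(ϕ₀ − ϕeff)), which diverges as ϕeff approaches ϕ₀; the parameter D is inversely related to fragility (smaller D, more fragile). A minimal sketch with illustrative parameters, not the fitted values from this study:

```python
import math

def vft_viscosity(phi_eff, eta_inf=1.0e-3, D=1.0, phi0=0.60):
    """Colloidal VFT form: eta0 = eta_inf * exp(D * phi_eff / (phi0 - phi_eff)).
    eta0 diverges as phi_eff -> phi0; smaller D means a more fragile glass former."""
    return eta_inf * math.exp(D * phi_eff / (phi0 - phi_eff))

# Illustrative sweep approaching phi0 = 0.60 from below
phis = [0.30, 0.45, 0.55, 0.59]
etas = [vft_viscosity(p) for p in phis]
```

Fitting measured η₀(ϕeff) to this form yields ϕ₀ and D, which is how the fragility trend with polydispersity is quantified.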

Keywords: dynamical heterogeneity, effective volume fraction, fragility, intrinsic nonlinearity

Procedia PDF Downloads 149
584 The Risk of Deaths from Viral Hepatitis among the Female Workers in the Beauty Service Industry

Authors: Byeongju Choi, Sanggil Lee, Kyung-Eun Lee

Abstract:

Introduction: In the Republic of Korea, the number of workers in the beauty industry has been increasing. Because the prevalence of hepatitis B carriers in Korea is higher than in other countries, beauty salon workers can be expected to face a risk of blood-borne infections, including viral hepatitis B and C, from the sharp and contaminated instruments used during procedures. However, health care policies to prevent blood-borne infection among these workers have not been established due to a lack of evidence. Moreover, hair and nail salon workers are mostly employed at small businesses, to which national mandatory systems or policies for workers' health management do not apply. In this study, the contribution of work experience in hair and nail procedures to mortality from viral hepatitis B and C was assessed. Method: We conducted a retrospective review of the job histories and causes of death of female deaths from 2006-2016. 132,744 female deaths who had one or more job experiences during their lifetime were included in this study. Job histories were assessed using the employment insurance database of the Korea Employment Information Service (KEIS), and the causes of death were taken from the death statistics produced by Statistics Korea. The case group (n=666), deaths from viral hepatitis, was defined as deaths with a record involving 'B15-B19' as a cause of death based on the Korean Standard Classification of Diseases (KCD); the deaths from other causes formed the control group (n=132,078). Workers in the beauty service industry were defined as employees who had ever worked in the industry coded as '9611' based on the Korea Standard Industry Classification (KSIC). In addition to job histories, birth year, marital status, and education level were obtained from the death statistics. Multiple logistic regression analysis was used to assess the risk of death from viral hepatitis in the case and control groups.
Result: The number of deaths with job experience at hair and nail salons was 255. After adjusting for the confounders of age, marital status, and education, the odds ratio (OR) for death from viral hepatitis was quite high in the group with work experience in the beauty service industry, at 3.14 (95% confidence interval (CI) 1.00-9.87). Other factors associated with an increased risk of death from viral hepatitis were a low education level (OR=1.34, 95% CI 1.04-1.73) and being a married woman (OR=1.42, 95% CI 1.02-1.97). Conclusion: The risk of death from viral hepatitis was high among workers in the beauty service industry but not statistically significant, which might be attributed to the small number of workers in the beauty service industry. It is likely that the number of workers in the beauty service industry was underestimated due to their temporary job positions. Further studies evaluating the status and incidence of viral infection among these workers, with consideration of vertical transmission, are required.

Keywords: beauty service, viral hepatitis, blood-borne infection, viral infection

Procedia PDF Downloads 112
583 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the causality was not consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse variance-weighted hazard ratio (HR IVW), 2.64 [95% CI, 1.28-5.43]; P = 0.01) but better OS in HPV-positive OPSCC patients (HR IVW, 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both total HNSCC (HR IVW, 4.17 [95% CI, 1.06-16.36]; P = 0.04) and non-HPV-driven HNSCC patients (HR IVW, 7.33 [95% CI, 1.63-32.97]; P = 0.01).
Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW, 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
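The inverse-variance weighted (IVW) estimate used above combines per-SNP Wald ratios (β_outcome/β_exposure) with weights 1/SE². A minimal sketch with hypothetical SNP effects (not GLGC/VOYAGER data):

```python
import math

# Hypothetical instruments: (beta_exposure, beta_outcome, se_outcome)
snps = [
    (0.10, 0.08, 0.03),
    (0.15, 0.10, 0.04),
    (0.08, 0.05, 0.02),
]

# Per-SNP Wald ratio and its first-order standard error
ratios = [bo / bx for bx, bo, _ in snps]
ses = [se / abs(bx) for bx, _, se in snps]

# IVW: weighted mean of the ratios with weights 1/SE^2
weights = [1.0 / s ** 2 for s in ses]
beta_ivw = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# On a log-hazard outcome scale, exponentiating gives an HR per unit exposure
hr_ivw = math.exp(beta_ivw)
```

This is the fixed-effect IVW estimator; real analyses would also report its standard error and check instrument validity (e.g. via MR-Egger or weighted median sensitivity analyses).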

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 83
582 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device AnacondaTM (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The AnacondaTM device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that despite its column stiffness is flexible enough to be used in very tortuous geometries. For the purposes of this study, a FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations.
The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the capability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
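The ring-by-ring comparison against the 5 mm bound described above can be sketched as follows; all coordinates are illustrative in-plane positions (mm), not the study's measurements:

```python
import math

# Corresponding ring centres from the FE model and the bench experiment,
# as (transverse, longitudinal) coordinates in mm -- hypothetical values
fe_rings = [(0.0, 0.0), (0.5, 10.0), (1.2, 20.0), (2.0, 30.0)]
exp_rings = [(0.3, 0.4), (0.9, 10.5), (1.0, 21.0), (2.6, 29.2)]

# Per-ring Euclidean distance between model and experiment
dists = [math.dist(a, b) for a, b in zip(fe_rings, exp_rings)]

mean_dist = sum(dists) / len(dists)
max_dist = max(dists)
within_bound = max_dist <= 5.0  # the clinical 5 mm acceptance limit
```
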

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 176
581 Stable Diffusion, Context-to-Motion Model to Augmenting Dexterity of Prosthetic Limbs

Authors: André Augusto Ceballos Melo

Abstract:

This work designs context-to-motion translations to facilitate the recognition of congruent prosthetic movements, guided by images, verbal prompts, users' nonverbal communication such as facial expressions, gestures and paralinguistics, scene context, and object recognition; the approach can also be applied to other tasks, such as walking, positioning prosthetic limbs as assistive technology controlled through gestures, sound codes, signs, facial and body expressions, and scene context. The context-to-motion model is a machine learning approach that is designed to improve the control and dexterity of prosthetic limbs. It works by using sensory input from the prosthetic limb to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. This can help to improve the performance of the prosthetic limb and make it easier for the user to perform a wide range of tasks. There are several key benefits to using the context-to-motion model for prosthetic limb control. First, it can help to improve the naturalness and smoothness of prosthetic limb movements, which can make them more comfortable and easier to use. Second, it can help to improve the accuracy and precision of prosthetic limb movements, which can be particularly useful for tasks that require fine motor control. Finally, the context-to-motion model can be trained using a variety of different sensory inputs, which makes it adaptable to a wide range of prosthetic limb designs and environments. Stable diffusion is a machine learning method that can be used to improve the control and stability of movements in robotic and prosthetic systems. It works by using sensory feedback to learn about the dynamics of the environment and then using this information to generate smooth, stable movements. One key aspect of stable diffusion is that it is designed to be robust to noise and uncertainty in the sensory feedback.
This means that it can continue to produce stable, smooth movements even when the sensory data is noisy or unreliable. To implement stable diffusion in a robotic or prosthetic system, it is typically necessary to first collect a dataset of examples of the desired movements. This dataset can then be used to train a machine learning model to predict the appropriate control inputs for a given set of sensory observations. Once the model has been trained, it can be used to control the robotic or prosthetic system in real time. The model receives sensory input from the system and uses it to generate control signals that drive the motors or actuators responsible for moving the system. Overall, the use of the context-to-motion model has the potential to significantly improve the dexterity and performance of prosthetic limbs, making them more useful and effective for a wide range of users. Hand gestures and body language influence communication and social interaction, offering a possibility for users to maximize their quality of life, social interaction, and gesture communication.
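The train-then-control loop described above can be reduced to its simplest possible form: fit a mapping from sensory observations to control signals on recorded examples, then apply it to new readings in real time. A deliberately minimal least-squares sketch on synthetic one-dimensional data (real systems would use far richer models and sensor sets):

```python
# Hypothetical training data recorded from example movements
sensor = [0.0, 0.5, 1.0, 1.5, 2.0]    # e.g. a normalised grip-aperture reading
control = [0.1, 1.1, 2.0, 3.1, 3.9]   # e.g. the motor command that produced it

n = len(sensor)
mean_x = sum(sensor) / n
mean_y = sum(control) / n

# Closed-form slope and intercept of the least-squares line
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sensor, control))
         / sum((x - mean_x) ** 2 for x in sensor))
intercept = mean_y - slope * mean_x

def predict_control(x):
    """Real-time step: map a new sensory reading to a control signal."""
    return slope * x + intercept
```
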

Keywords: stable diffusion, neural interface, smart prosthetic, augmenting

Procedia PDF Downloads 87
580 Willingness to Pay for Improvements of MSW Disposal: Views from Online Survey

Authors: Amornchai Challcharoenwattana, Chanathip Pharino

Abstract:

With the amount of MSW rising every day, maximizing material diversion from landfills via recycling is a preferred method over land dumping. Thai MSW is typically 40-60 per cent compostable waste, while the potentially recyclable materials in waste streams are composed of plastics, papers, glasses, and metals. However, the rate of material recovery from MSW in Thailand, excluding composting or biogas generation, is still low: Thailand's recycling rate in 2010 was only 20.5 per cent. The central government as well as local governments in Thailand have tried to curb this problem by charging users part of the MSW management fees. However, the fee is often too low to promote MSW minimization. The objective of this paper is to identify levels of willingness-to-pay (WTP) for MSW recycling in different social structures, with the expected outcome of sustainable MSW management for different town settlements that maximizes MSW recycling pertaining to each town's potential. The method of eliciting WTP was a payment card. The questionnaire was deployed as an online survey during December 2012. Responses were categorized by whether respondents lived in Bangkok, in other municipality areas, or outside municipality areas. The responses were analysed using descriptive statistics and multiple linear regression analysis to identify relationships and factors that could influence high or low WTP. During the survey period, there were 168 filled questionnaires from a total of 689 visits; however, only 96 questionnaires were usable. Among the respondents in the usable questionnaires, 36 lived within the boundary of the Bangkok Metropolitan Administration (BMA), while 45 lived in chartered areas classified as other municipalities but not in the BMA. Most respondents were well-off: 75 reported a positive monthly cash flow (77.32%), 15 reported a neutral monthly cash flow (15.46%), and 7 reported a negative monthly cash flow (7.22%).
For WTP data including WTP of 0 baht with valid responses, ranking from the highest means of WTP to the lowest WTP of respondents by geographical locations for good MSW management were Bangkok (196 baht/month), municipalities (154 baht/month), and non-urbanized towns (111 baht/month). In-depth analysis was conducted to analyse whether there are additional room for further increase of MSW management fees from the current payment that each correspondent is currently paying. The result from multiple-regression analysis suggested that the following factors could impacts the increase or decrease of WTP: incomes, age, and gender. Overall, the outcome of this study suggests that survey respondents are likely to support improvement of MSW treatments that are not solely relying on landfilling technique. Recommendations for further studies are to obtain larger sample sizes in order to improve statistical powers and to provide better accuracy of WTP study.
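The multiple linear regression step described above can be sketched as follows. The mini-dataset and the predictor names (income, age, gender) are hypothetical stand-ins chosen to mirror the factors the study reports, not the survey's actual data.

```python
import numpy as np

# Hypothetical survey rows: monthly income (thousand baht), age (years), gender (1 = male)
income = np.array([20.0, 35.0, 15.0, 50.0, 28.0, 40.0])
age    = np.array([25.0, 40.0, 30.0, 55.0, 35.0, 45.0])
gender = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
wtp    = np.array([120.0, 180.0, 100.0, 240.0, 150.0, 200.0])  # stated WTP, baht/month

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones_like(wtp), income, age, gender])
coef, *_ = np.linalg.lstsq(X, wtp, rcond=None)
predicted = X @ coef
```

The signs and magnitudes of the fitted coefficients are what such an analysis would inspect to judge whether each factor raises or lowers WTP.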

Keywords: MSW, willingness to pay, payment card, waste separation

Procedia PDF Downloads 275
579 Incidence of Breast Cancer and Enterococcus Infection: A Retrospective Analysis

Authors: Matthew Cardeiro, Amalia D. Ardeljan, Lexi Frankel, Dianela Prado Escobar, Catalina Molnar, Omar M. Rashid

Abstract:

Introduction: Enterococci are part of the natural flora of nearly all animals and are ubiquitous in food manufacturing and probiotics; however, their role in the microbiome remains controversial. The gut microbiome has been shown to play an important role in immunology and cancer, and recent data suggest a relationship between gut microbiota and breast cancer: the gut microbiome of patients with breast cancer differs from that of healthy patients. Research on enterococcus infection and its sequelae is limited, and further work is needed to understand the relationship between infection and cancer. Enterococcus may prevent the development of breast cancer (BC) through complex immunologic and microbiotic adaptations following an enterococcus infection. This study investigated the effect of enterococcus infection on the incidence of BC. Methods: A retrospective study (January 2010 - December 2019) was conducted using a Health Insurance Portability and Accountability Act (HIPAA) compliant national human health insurance database. International Classification of Diseases (ICD) 9th and 10th revision codes, Current Procedural Terminology (CPT) codes, and National Drug Codes were used to identify BC diagnoses and enterococcus infections. Patients were matched for age, sex, Charlson Comorbidity Index (CCI), antibiotic treatment, and region of residence. Chi-squared tests, logistic regression, and odds ratios were used to assess significance and estimate relative risk. Results: 671 out of 28,518 (2.35%) patients with a prior enterococcus infection and 1,459 out of 28,518 (5.12%) patients without enterococcus infection subsequently developed BC, a statistically significant difference (p<2.2x10⁻¹⁶). Logistic regression likewise indicated that enterococcus infection was associated with a decreased incidence of BC (RR=0.60, 95% CI [0.57, 0.63]). 
Treatment for enterococcus infection was analyzed and controlled for in both the infected and noninfected populations: 398 out of 11,523 (3.34%) antibiotic-treated patients with a prior enterococcus infection subsequently developed BC, compared with 624 out of 11,523 (5.41%) antibiotic-treated patients with no history of enterococcus infection (control). The result remained statistically significant (p<2.2x10⁻¹⁶), with a relative risk of 0.57 (95% CI [0.54, 0.60]). Conclusion & Discussion: This study shows a statistically significant correlation between enterococcus infection and a decreased incidence of breast cancer. Further exploration is needed to identify and understand not only the role of enterococcus in the microbiome but also the protective mechanism(s) and impact enterococcus infection may have on breast cancer development. Ultimately, further research is needed to understand the complex and intricate relationship between the microbiome, immunology, bacterial infections, and carcinogenesis.
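The relative-risk arithmetic behind such results can be sketched as below. The counts are invented for illustration (they are not the study's matched cohorts), and the log-scale Wald confidence interval is a standard textbook method, not necessarily the exact one the authors used.

```python
import math

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Unadjusted relative risk with a 95% CI computed on the log scale (Wald method)."""
    r1 = exposed_cases / exposed_total      # risk in the exposed group
    r0 = unexposed_cases / unexposed_total  # risk in the unexposed group
    rr = r1 / r0
    # standard error of ln(RR) for a 2x2 table
    se = math.sqrt(1/exposed_cases - 1/exposed_total
                   + 1/unexposed_cases - 1/unexposed_total)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Illustrative counts: 30/1000 cancers after infection vs 50/1000 without
rr, lo, hi = relative_risk(30, 1000, 50, 1000)
```

An RR below 1 with a CI excluding 1, as in the study's reported 0.60 [0.57, 0.63], is what indicates a decreased incidence in the exposed group.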

Keywords: breast cancer, enterococcus, immunology, infection, microbiome

Procedia PDF Downloads 160
578 Mechanical and Material Characterization on the High Nitrogen Supersaturated Tool Steels for Die-Technology

Authors: Tatsuhiko Aizawa, Hiroshi Morita

Abstract:

Tool steels such as SKD11 and SKH51 are used as punch and die substrates for cold stamping, forging, and fine blanking. Heat-treated SKD11 punches with a hardness of 700 HV work well when stamping SPCC and other normal steel plates as well as non-ferrous alloys such as brass sheet, but they suffer severe damage when fine-blanking holes smaller than 1.5 mm in diameter. At high aspect ratios of punch length to diameter, elastoplastic buckling of slender punches occurred on the production line, and the heat-treated punches risked chipping at their edges. To be free from these damages, a blanking punch must have sufficient rigidity and strength at the same time. In the present paper, a small-hole blanking punch with a dual-toughness structure is proposed as a solution to this engineering issue in production. A low-temperature plasma nitriding process was used to form a thick nitrogen-supersaturated layer in the original SKD11 punch. Through plasma nitriding at 673 K for 14.4 ks, a nitrogen-supersaturated layer 50 μm thick and free of nitride precipitates was formed as a high nitrogen steel (HNS) layer surrounding the original SKD11 punch. In this two-zone structured SKD11 punch, the surface hardness increased from 700 HV for the heat-treated SKD11 to 1400 HV. This outer high nitrogen SKD11 (HN-SKD11) layer had a homogeneous nitrogen solute depth profile, with a solute content plateau of 4 mass% down to the border between the HN-SKD11 layer and the original SKD11 matrix. When stamping 1 mm thick brass sheet with this dually toughened SKD11 punch, punch life was extended from 500 K shots to 10000 K shots, yielding a much more stable production line for brass American snaps. 
Furthermore, with the aid of a masking technique, the 50 μm thick punch side surface layer was modified by the same high nitrogen supersaturation process into a stripe structure in which un-nitrided SKD11 and HN-SKD11 layers alternate from the punch head to the punch bottom. This flexible structuring promoted the mechanical integrity, combining total rigidity and toughness, of a punch with an extremely small diameter.

Keywords: high nitrogen supersaturation, semi-dry cold stamping, solid solution hardening, tool steel dies, low temperature nitriding, dual toughness structure, extremely small diameter punch

Procedia PDF Downloads 75
575 Multilevel Factors Affecting Optimal Adherence to Antiretroviral Therapy and Viral Suppression amongst HIV-Infected Prisoners in South Ethiopia: A Prospective Cohort Study

Authors: Terefe Fuge, George Tsourtos, Emma Miller

Abstract:

Objectives: Maintaining optimal adherence and viral suppression in people living with HIV (PLWHA) is essential to ensure both the preventative and therapeutic benefits of antiretroviral therapy (ART). Prisoners bear a particularly high burden of HIV infection and are highly likely to transmit it to others during and after incarceration. However, the levels of adherence and viral suppression, and their associated factors, in incarcerated populations in low-income countries are unknown. This study aimed to determine the prevalence of non-adherence and viral failure, and the factors contributing to them, amongst prisoners in South Ethiopia. Methods: A prospective cohort study was conducted between June 1, 2019 and July 31, 2020 to compare levels of adherence and viral suppression between incarcerated and non-incarcerated PLWHA. The study involved 74 inmates living with HIV (ILWHA) and 296 non-incarcerated PLWHA. Background information, including sociodemographic, socioeconomic, psychosocial, behavioural, and incarceration-related characteristics, was collected using a structured questionnaire. Adherence was determined from participants’ self-reports and pharmacy refill records, and plasma viral load measurements undertaken within the study period were prospectively extracted to determine viral suppression. Various univariate and multivariate regression models were used to analyse the data. Results: Self-reported dose adherence was approximately the same in ILWHA and non-incarcerated PLWHA (81% and 83%, respectively), but ILWHA had a significantly higher medication possession ratio (MPR) (89% vs 75%). The prevalence of viral failure (VF) was slightly higher in ILWHA (6%) than in non-incarcerated PLWHA (4.4%). Overall dose non-adherence (NA) was significantly associated with missing ART appointments, the level of satisfaction with ART services, the patient’s ability to comply with a specified medication schedule, and the type of method used to monitor the schedule. 
In ILWHA specifically, accessing ART services from a hospital rather than a health centre, an inability to always attend clinic appointments, experience of depression, and a lack of social support predicted NA. VF was significantly higher in males, in people aged 31-35 years, and in those who experienced social stigma, regardless of incarceration status. Conclusions: This study revealed that HIV-infected prisoners in South Ethiopia were more likely to be non-adherent to doses, and thus to develop viral failure, than their non-incarcerated counterparts. A multitude of factors was found to be responsible, requiring multilevel intervention strategies focused on the specific needs of prisoners.
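The medication possession ratio reported above is conventionally computed from pharmacy refill records as supplied days over elapsed days. The sketch below shows one common variant (capped at 1.0); the fill dates are hypothetical, and studies differ in how they handle overlapping fills.

```python
from datetime import date

def medication_possession_ratio(fills, period_start, period_end):
    """fills: list of (fill_date, days_supplied) tuples.
    MPR = total days of medication supplied within the period / days in the period,
    capped at 1.0 so stockpiling does not inflate the ratio."""
    period_days = (period_end - period_start).days
    supplied = sum(days for fill_date, days in fills
                   if period_start <= fill_date <= period_end)
    return min(supplied / period_days, 1.0)

# Three 30-day refills over a ~4-month observation window
fills = [(date(2019, 6, 1), 30), (date(2019, 7, 1), 30), (date(2019, 8, 5), 30)]
mpr = medication_possession_ratio(fills, date(2019, 6, 1), date(2019, 9, 28))
```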

Keywords: adherence, antiretroviral therapy, incarceration, South Ethiopia, viral suppression

Procedia PDF Downloads 113
576 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic to the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales with added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than with existing techniques.
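The additive 2x2 aggregation described above can be illustrated with a simplified sketch that tracks only the sums needed for per-window variance. The full method would also accumulate the cross-terms (sums of x·z, y·z, etc.) needed for the plane fit that yields slope; this sketch shows just the single-pass-per-level doubling of window size and the variance it makes available at every scale.

```python
import numpy as np

def variance_pyramid(dem):
    """Aggregate 2x2 blocks level by level, carrying the additive sums that
    let per-window variance be computed at every scale with one pass per level."""
    s  = dem.astype(float)        # per-window sum of elevations
    s2 = dem.astype(float) ** 2   # per-window sum of squared elevations
    n  = np.ones_like(s)          # points per window
    levels = []
    while s.shape[0] >= 2 and s.shape[1] >= 2:
        agg = lambda a: a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]
        s, s2, n = agg(s), agg(s2), agg(n)
        var = s2 / n - (s / n) ** 2   # variance inside each aggregated window
        levels.append((int(n[0, 0]), var))
    return levels

dem = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "DEM"
levels = variance_pyramid(dem)
```

In the actual method, the slope reported for each raster point would be the regression slope from the level at which this variance is minimal.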

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 114
575 Suspended Sediment Concentration and Water Quality Monitoring Along Aswan High Dam Reservoir Using Remote Sensing

Authors: M. Aboalazayem, Essam A. Gouda, Ahmed M. Moussa, Amr E. Flifl

Abstract:

Field data collection is considered one of the most difficult tasks because of the difficulty of accessing large zones such as large lakes, and the cost of obtaining field data is very high. Remote monitoring of lake water quality (WQ) provides an economically feasible approach compared with field data collection. Researchers have shown that lake WQ can be properly monitored via remote sensing (RS) analyses: using satellite images for WQ detection provides a realistic technique for measuring quality parameters across huge areas. Landsat (LS) data provide free access to frequent, repeated satellite images, enabling researchers to undertake large-scale temporal comparisons of parameters related to lake WQ, and satellite measurements have been extensively utilized to develop algorithms for predicting critical water quality parameters (WQPs). The goal of this paper is to use RS to derive WQ indicators in the Aswan High Dam Reservoir (AHDR), Egypt's primary and strategic reservoir of freshwater. This study focuses on using Landsat 8 (L-8) surface reflectance (SR) observations to predict the water-quality characteristics turbidity (TUR), total suspended solids (TSS), and chlorophyll-a (Chl-a). ArcGIS Pro is used to retrieve atmospherically corrected L-8 SR data for the study region. Multiple linear regression analysis was used to derive new correlations between optical water-quality indicators observed in April and the values of various L-8 SR bands, band ratios, and/or combinations. Field measurements taken in May were used to validate the WQPs obtained from the SR data of the L-8 Operational Land Imager (OLI) satellite. 
The findings demonstrate a strong correlation between the WQ indicators and L-8 SR. For TUR, the best validation correlation was obtained with the OLI blue, green, and red SR bands, with a coefficient of determination (R²) of 0.96 and a root mean square error (RMSE) of 3.1 NTU. For TSS, two equations based on band ratios and combinations were strongly correlated and verified; a logarithm of the ratio of blue to green SR was the best performing model, with R² and RMSE of 0.9861 and 1.84 mg/l, respectively. For Chl-a, eight methods for calculating its value within the study area were examined; a mix of blue, red, shortwave infrared 1 (SWIR1), and panchromatic SR yielded the best validation results, with R² and RMSE of 0.98 and 1.4 mg/l, respectively.
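The TSS model form described above, a linear fit against the logarithm of the blue/green band ratio, can be sketched as follows. The reflectance and TSS values are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical surface-reflectance samples (blue, green bands) with matched in-situ TSS (mg/l)
blue  = np.array([0.030, 0.045, 0.060, 0.080, 0.100])
green = np.array([0.050, 0.060, 0.070, 0.085, 0.095])
tss   = np.array([5.0, 8.0, 12.0, 18.0, 25.0])

x = np.log(blue / green)                      # logarithm of the blue/green band ratio
A = np.column_stack([np.ones_like(x), x])     # intercept + predictor
(b0, b1), *_ = np.linalg.lstsq(A, tss, rcond=None)
pred = b0 + b1 * x
rmse = float(np.sqrt(np.mean((pred - tss) ** 2)))
```

Validation against held-out field measurements, as the authors do with the May campaign, is what turns such a fit into a usable retrieval model.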

Keywords: remote sensing, Landsat 8, Lake Nasser, water quality

Procedia PDF Downloads 82
574 A Unified Model for Longshore Sediment Transport Rate Estimation

Authors: Aleksandra Dudkowska, Gabriela Gic-Grusza

Abstract:

Wind-wave-induced sediment transport is an important multidimensional and multiscale dynamic process affecting coastal seabed changes and coastline evolution. Knowledge of the sediment transport rate is needed to solve many environmental and geotechnical problems. Many types of sediment transport model exist, but none is widely accepted, because the process is not fully understood and there is a lack of sufficient measurement data to verify proposed hypotheses. Different model types address longshore sediment transport (LST, discussed in this work) and cross-shore transport, which involve different time and space scales of the processes, and describe bed-load transport (discussed in this work), suspended transport, or total sediment transport. LST models use, among other inputs, (i) the flow velocity near the bottom, which in the case of wave-current interaction in the coastal zone is a separate problem, and (ii) the critical bed shear stress, which strongly depends on the type of sediment and becomes complicated for heterogeneous sediment. Moreover, the LST rate depends strongly on local environmental conditions. To organize existing knowledge, a series of sediment transport model intercomparisons was carried out as part of the project “Development of a predictive model of morphodynamic changes in the coastal zone”. Four classical one-grid-point models were studied and intercompared over a wide range of bottom shear stress conditions, corresponding to wind-wave conditions appropriate for the coastal zone in Polish marine areas. The set of models comprises classical theories that assume a simplified influence of turbulence on sediment transport (Du Boys, Meyer-Peter & Muller, Ribberink, Engelund & Hansen). The estimated instantaneous longshore mass sediment transport values are generally in agreement with earlier studies and measurements conducted in the area of interest. 
However, none of the formulas stands out as particularly suitable for the test location over the whole analyzed flow velocity range. Therefore, based on the models discussed, a new unified formula for longshore sediment transport rate estimation is introduced, which constitutes the main original result of this study. The sediment transport rate is calculated from the bed shear stress and the critical bed shear stress, with the dependence on environmental conditions expressed by a single coefficient (a constant or a function), so the model can be adjusted quite easily to local conditions. The importance of each model parameter for specific velocity ranges is discussed. Moreover, it is shown that the near-bottom flow velocity is the main determinant of longshore bed load in storm conditions; thus, the accuracy of the results depends less on the sediment transport model itself and more on appropriate modeling of the near-bottom velocities.
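The structure of an excess-shear-stress bedload formula of the kind unified here can be sketched as below. The Meyer-Peter & Muller-style coefficient and exponent are the classical textbook values, used only to illustrate the form; the paper's actual unified coefficient is fitted to local conditions and is not reproduced here.

```python
def bedload_rate(tau, tau_crit, coeff=8.0, power=1.5):
    """Dimensionless bedload transport rate of the excess-shear-stress family:
    rate = coeff * (tau - tau_crit)**power above the threshold, zero below it.
    coeff is the single tunable parameter encoding local environmental conditions."""
    excess = tau - tau_crit
    return coeff * excess ** power if excess > 0.0 else 0.0
```

Below the critical shear stress no transport occurs, which is the threshold behaviour all four intercompared classical models share.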

Keywords: bedload transport, longshore sediment transport, sediment transport models, coastal zone

Procedia PDF Downloads 372
573 Nanofluidic Cell for Resolution Improvement of Liquid Transmission Electron Microscopy

Authors: Deybith Venegas-Rojas, Sercan Keskin, Svenja Riekeberg, Sana Azim, Stephanie Manz, R. J. Dwayne Miller, Hoc Khiem Trieu

Abstract:

Liquid transmission electron microscopy (TEM) is a growing area with a broad range of applications, from physics and chemistry to materials engineering and biology, in which previously unseen phenomena can be imaged in situ. A nanofluidic device is used to bring the nanoflow with the sample into the microscope while keeping the liquid encapsulated against the high vacuum. In recent years, Si3N4 windows have been widely used because of their mechanical stability and low imaging contrast. Nevertheless, the pressure difference between the fluid inside and the vacuum in the TEM causes the windows to bulge. This increases the imaged fluid volume, which decreases the signal-to-noise ratio (SNR) and limits the achievable spatial resolution. In the proposed device, the membrane is reinforced with a microstructure capable of withstanding higher pressure differences and almost completely eliminating the bulging. A theoretical study with finite element method (FEM) simulations provides a deep understanding of the membrane's mechanical conditions and proves the effectiveness of this novel concept. Bulging and von Mises stress were studied for different membrane dimensions, geometries, materials, and thicknesses. The device was microfabricated from a thin wafer coated with thin layers of SiO2 and Si3N4. After lithography, these layers were etched (by reactive ion etching and buffered oxide etch (BOE), respectively), the microstructure was then etched (deep reactive ion etching), and finally the back-side SiO2 was etched (BOE), yielding an array of free-standing micro-windows. Additionally, a Pyrex wafer was patterned with windows and inlets/outlets and bonded (anodic bonding) to the Si side to facilitate handling of the thin wafer. Later, a thin spacer was sputtered and patterned with microchannels and trenches to guide the nanoflow with the samples. 
This approach considerably reduces the common window bulging problem, improving the SNR, contrast, and spatial resolution, while substantially increasing the mechanical stability of the windows and allowing a larger viewing area. These developments lead to a wider range of applications of liquid TEM, expanding the spectrum of possible experiments in the field.
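Why subdividing the window suppresses bulging can be estimated with classical thin-plate theory. The sketch below uses the standard small-deflection result for a clamped square plate under uniform pressure, w_max = 0.00126·p·a⁴/D with D = E·t³/(12(1−ν²)); the material constants are only rough illustrative values for Si3N4, and the study's FEM analysis is the authoritative treatment.

```python
def max_bulge_clamped_square(pressure, side, thickness, youngs=250e9, poisson=0.23):
    """Small-deflection estimate of the center deflection of a clamped square
    plate (side length `side`, thickness `thickness`) under uniform pressure.
    Material defaults are rough values for Si3N4; illustrative only."""
    D = youngs * thickness**3 / (12.0 * (1.0 - poisson**2))  # flexural rigidity
    return 0.00126 * pressure * side**4 / D

# a**4 scaling: halving the free window span cuts bulging sixteen-fold
w_full = max_bulge_clamped_square(1.0e5, 50e-6, 50e-9)  # 1 bar across a 50 um window
w_half = max_bulge_clamped_square(1.0e5, 25e-6, 50e-9)  # same load, 25 um sub-window
```

This fourth-power dependence on the unsupported span is what makes a reinforcing microstructure so effective: the supports shrink the free span without thickening the electron-transparent membrane.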

Keywords: liquid cell, liquid transmission electron microscopy, nanofluidics, nanofluidic cell, thin films

Procedia PDF Downloads 238
572 Sintering of YNbO3:Eu3+ Compound: Correlation between Luminescence and Spark Plasma Sintering Effect

Authors: Veronique Jubera, Ka-Young Kim, U-Chan Chung, Amelie Veillere, Jean-Marc Heintz

Abstract:

Emitting materials and all-solid-state lasers are widely used in optical applications and materials science as excitation sources and in instrumental measurements, medical applications, metal shaping, etc. Promising optical efficiencies have recently been recorded for ceramics, which offer a cheaper and faster way to obtain crystallized materials. The choice and optimization of the sintering process is the key to fabricating transparent ceramics: it requires tight control over the preparation of the powder, with an adequate synthesis route and pre-heat-treatment, reproducibility of the sintering cycle, and polishing and post-annealing of the ceramic. Densification is the main factor needed to reach satisfactory transparency, and many technologies are now available. The symmetry of the unit cell plays a crucial role in the diffusion losses of the material; therefore, cubic-symmetry compounds, which have an isotropic refractive index, are preferred. The cubic Y3NbO7 matrix is an interesting host that can accept a high concentration of rare-earth doping elements, and SPS has been demonstrated to be an efficient way to sinter this material. Minimizing diffusion losses requires a fine ceramic microstructure, generally below one hundred nanometers; in this case, grain growth is not an obstacle to transparency. The ceramic's properties are then isotropic, which removes the shaping step of orienting the ceramic that compounds of lower symmetry require. After optimization of the synthesis route, several SPS parameters, such as heating rate, holding, dwell time, and pressure, were adjusted to increase the densification of the Eu3+-doped Y3NbO7 pellets. The luminescence data, coupled with X-ray diffraction analysis and electron diffraction microscopy, highlight the existence of several distorted environments of the doping element in the studied defective fluorite-type host lattice. 
Indeed, the fast and high crystallization rate obtained brings to light a lack of miscibility in the phase diagram, the final composition of the pellet being driven by the ratio between the niobium and yttrium elements. By following the luminescence properties, we demonstrate a direct impact of the SPS process on this material.

Keywords: emission, niobate of rare earth, spark plasma sintering, lack of miscibility

Procedia PDF Downloads 245
571 Tracing the Developmental Repertoire of the Progressive: Evidence from L2 Construction Learning

Authors: Tianqi Wu, Min Wang

Abstract:

Research investigating language acquisition from a constructionist perspective has demonstrated that language is learned as constructions at various linguistic levels, a process related to frequency, semantic prototypicality, and form-meaning contingency. However, previous research on construction learning has tended to focus on clause-level constructions, such as verb argument constructions; few attempts have been made to study morpheme-level constructions such as the progressive, which is regarded as a source of acquisition problems for English learners from diverse L1 backgrounds, especially those whose L1, such as German or Chinese, lacks an equivalent construction. To trace the developmental trajectory of Chinese EFL learners’ use of the progressive with respect to verb frequency, verb-progressive contingency, and verb prototypicality and generality, a learner corpus consisting of three sub-corpora representing three English proficiency levels was extracted from the Chinese Learners of English Corpora (CLEC). As the reference point, a native-speaker corpus was extracted from the Louvain Corpus of Native English Essays. All texts were annotated with the C7 tagset by part-of-speech tagging software. After annotation, all valid progressive hits were retrieved with AntConc 3.4.3, followed by a manual check. 
Frequency-related data showed that from the lowest to the highest proficiency level, (1) the type-token ratio increased steadily from 23.5% to 35.6%, approaching the 36.4% of the native speakers’ corpus and indicating a wider use of verbs in the progressive; (2) the normalized entropy value rose from 0.776 to 0.876, approaching the target score of 0.886 in the native speakers’ corpus and revealing that upper-intermediate learners exhibited a more even distribution and more productive use of verbs in the progressive; and (3) activity verbs (i.e., verbs with prototypical progressive meanings like running and singing) dropped from 59% to 34%, while non-prototypical verbs such as state verbs (e.g., being and living) and achievement verbs (e.g., dying and finishing) were increasingly used in the progressive. Beyond the raw frequency analyses, collostructional analyses were conducted to quantify verb-progressive contingency and to determine which verbs were distinctively associated with the progressive construction. The results were in line with the raw frequency findings: the contingency between the progressive and non-prototypical verbs, represented by light verbs (e.g., going, doing, making, and coming), increased as English proficiency rose. These findings altogether suggest that beginning Chinese EFL learners are less productive in using the progressive construction: they are constrained by a small set of verbs with concrete and typical progressive meanings (e.g., the activity verbs). With increasing English proficiency, their use of the progressive spreads to marginal members such as the light verbs.
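The two frequency measures reported above, the type-token ratio and normalized entropy of the verb distribution, can be sketched as follows; the toy verb list is invented, and the corpus tools would supply the real token counts.

```python
import math
from collections import Counter

def type_token_ratio(tokens):
    """Distinct verb types divided by total verb tokens in the progressive."""
    return len(set(tokens)) / len(tokens)

def normalized_entropy(tokens):
    """Shannon entropy of the verb distribution, divided by its maximum
    (log2 of the type count) so that 1.0 means a perfectly even distribution."""
    counts = Counter(tokens)
    n = len(tokens)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts)) if len(counts) > 1 else 0.0

verbs = ["run", "run", "run", "sing", "live", "die", "go", "go"]
ttr = type_token_ratio(verbs)
ent = normalized_entropy(verbs)
```

A rising TTR and a normalized entropy approaching 1.0, as in the learners' trajectory toward the native-speaker values, both signal a wider and more even spread of verbs across the construction.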

Keywords: construction learning, corpus-based, progressives, prototype

Procedia PDF Downloads 113
570 EQMamba - Method Suggestion for Earthquake Detection and Phase Picking

Authors: Noga Bregman

Abstract:

Accurate and efficient earthquake detection and phase picking are crucial for seismic hazard assessment and emergency response. This study introduces EQMamba, a deep-learning method that combines the strengths of the Earthquake Transformer and the Mamba model for simultaneous earthquake detection and phase picking. EQMamba leverages the computational efficiency of Mamba layers to process longer seismic sequences while maintaining a manageable model size. The proposed architecture integrates convolutional neural networks (CNNs), bidirectional long short-term memory (BiLSTM) networks, and Mamba blocks. The model employs an encoder composed of convolutional layers and max pooling operations, followed by residual CNN blocks for feature extraction. Mamba blocks are applied to the outputs of BiLSTM blocks, efficiently capturing long-range dependencies in seismic data. Separate decoders are used for earthquake detection, P-wave picking, and S-wave picking. We trained and evaluated EQMamba using a subset of the STEAD dataset, a comprehensive collection of labeled seismic waveforms. The model was trained using a weighted combination of binary cross-entropy loss functions for each task, with the Adam optimizer and a scheduled learning rate. Data augmentation techniques were employed to enhance the model's robustness. Performance comparisons were conducted between EQMamba and the EQTransformer over 20 epochs on this modest-sized STEAD subset. Results demonstrate that EQMamba achieves superior performance, with higher F1 scores and faster convergence compared to EQTransformer. EQMamba reached F1 scores of 0.8 by epoch 5 and maintained higher scores throughout training. The model also exhibited more stable validation performance, indicating good generalization capabilities. While both models showed lower accuracy in phase-picking tasks compared to detection, EQMamba's overall performance suggests significant potential for improving seismic data analysis. 
The rapid convergence and superior F1 scores of EQMamba, even on a modest-sized dataset, indicate promising scalability for larger datasets. This study contributes to the field of earthquake engineering by presenting a computationally efficient and accurate method for simultaneous earthquake detection and phase picking. Future work will focus on incorporating Mamba layers into the P and S pickers and further optimizing the architecture for seismic data specifics. The EQMamba method holds the potential for enhancing real-time earthquake monitoring systems and improving our understanding of seismic events.
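The training objective described above, a weighted combination of binary cross-entropy losses for detection, P picking, and S picking, can be sketched per sample as below. The weights are illustrative placeholders, not the values used to train EQMamba, and a real implementation would apply this per time step over batched waveform tensors.

```python
import math

def bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy for a single probability, clamped for stability."""
    p = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

def multitask_loss(targets, preds, weights=(0.05, 0.40, 0.55)):
    """Weighted sum of per-task BCE losses for (detection, P pick, S pick).
    Weights are illustrative, not the paper's."""
    return sum(w * bce(t, p) for w, t, p in zip(weights, targets, preds))

# One sample: event present, P arrival present, no S arrival at this step
loss = multitask_loss(targets=(1, 1, 0), preds=(0.9, 0.8, 0.2))
```

Weighting lets the rarer, harder picking tasks contribute more gradient than the comparatively easy detection task.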

Keywords: earthquake, detection, phase picking, s waves, p waves, transformer, deep learning, seismic waves

Procedia PDF Downloads 14
569 Thermal and Visual Comfort Assessment in Office Buildings in Relation to Space Depth

Authors: Elham Soltani Dehnavi

Abstract:

In today’s compact cities, bringing daylighting and fresh air to buildings is a significant challenge, but it also presents opportunities to reduce energy consumption in buildings by reducing the need for artificial lighting and mechanical systems. Simple adjustments to building form can contribute to their efficiency. This paper examines how the relationship between the width and depth of the rooms in office buildings affects visual and thermal comfort, and consequently energy savings. Based on these evaluations, we can determine the best location for sedentary areas in a room. We can also propose improvements to occupant experience and minimize the difference between the predicted and measured performance in buildings by changing other design parameters, such as natural ventilation strategies, glazing properties, and shading. This study investigates the condition of spatial daylighting and thermal comfort for a range of room configurations using computer simulations, then it suggests the best depth for optimizing both daylighting and thermal comfort, and consequently energy performance in each room type. The Window-to-Wall Ratio (WWR) is 40% with 0.8m window sill and 0.4m window head. Also, there are some fixed parameters chosen according to building codes and standards, and the simulations are done in Seattle, USA. The simulation results are presented as evaluation grids using the thresholds for different metrics such as Daylight Autonomy (DA), spatial Daylight Autonomy (sDA), Annual Sunlight Exposure (ASE), and Daylight Glare Probability (DGP) for visual comfort, and Predicted Mean Vote (PMV), Predicted Percentage of Dissatisfied (PPD), occupied Thermal Comfort Percentage (occTCP), over-heated percent, under-heated percent, and Standard Effective Temperature (SET) for thermal comfort that are extracted from Grasshopper scripts. The simulation tools are Grasshopper plugins such as Ladybug, Honeybee, and EnergyPlus. 
According to the results, some metrics change little along the room depth while others change significantly, so these grids can be overlapped to determine the comfort zone. The overlapped grids contain 8 metrics, and the pixels that meet all 8 thresholds define the comfort zone. With these overlapped maps, we can determine the comfort zones inside rooms and locate sedentary areas there. Other parts of the room can be used for tasks that are not performed permanently, that need lower or higher amounts of daylight, or for which thermal comfort is less critical to user experience. The results can be summarized in a table to be used as a guideline by designers in the early stages of the design process.
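The grid-overlap step can be sketched as a simple boolean intersection. This is an illustrative sketch, not the authors' Grasshopper scripts: the two metrics, thresholds, and grid values below are invented for the example.

```python
def comfort_zone(metric_grids, thresholds):
    """Return a boolean grid that is True only where every metric
    passes its threshold.

    metric_grids: dict of metric name -> 2D grid of values.
    thresholds: dict of metric name -> predicate (True = pixel passes).
    """
    names = list(metric_grids)
    rows = len(metric_grids[names[0]])
    cols = len(metric_grids[names[0]][0])
    return [[all(thresholds[m](metric_grids[m][r][c]) for m in names)
             for c in range(cols)] for r in range(rows)]

# Toy example with two of the eight metrics on a 2x3 evaluation grid.
grids = {
    "sDA": [[0.6, 0.4, 0.7], [0.8, 0.3, 0.9]],    # fraction of occupied hours
    "PPD": [[8.0, 12.0, 9.0], [7.0, 20.0, 5.0]],  # percent dissatisfied
}
rules = {
    "sDA": lambda v: v >= 0.5,   # sDA >= 50% is a common daylighting target
    "PPD": lambda v: v <= 10.0,  # PPD <= 10% per ASHRAE 55
}
print(comfort_zone(grids, rules))
# [[True, False, True], [True, False, True]]
```

In the full model the same intersection is taken over all eight visual and thermal metrics, and the surviving pixels mark where sedentary areas belong.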

Keywords: occupant experience, office buildings, space depth, thermal comfort, visual comfort

Procedia PDF Downloads 167
568 Long-Term Variabilities and Tendencies in the Zonally Averaged TIMED-SABER Ozone and Temperature in the Middle Atmosphere over 10°N-15°N

Authors: Oindrila Nath, S. Sridharan

Abstract:

Long-term (2002-2012) temperature and ozone measurements by the Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) instrument onboard the Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED) satellite, zonally averaged over 10°N-15°N, are used to study their long-term changes and their responses to the solar cycle, the quasi-biennial oscillation (QBO), and the El Niño Southern Oscillation (ENSO). The region is selected to provide more reliable long-term trends and variabilities than were previously possible with lidar measurements over Gadanki (13.5°N, 79.2°E), which are limited to cloud-free nights, whereas SABER temperature and ozone data sets are continuous. Regression analysis of temperature shows a cooling trend of 0.5 K/decade in the stratosphere and of 3 K/decade in the mesosphere. Ozone shows a statistically significant decreasing trend of 1.3 ppmv per decade in the mesosphere, although there is a small positive trend in the stratosphere at 25 km; otherwise, no significant ozone trend is observed in the stratosphere. A negative ozone-QBO response (0.02 ppmv/QBO), a positive ozone-solar cycle response (0.91 ppmv/100 SFU), and a negative ozone-ENSO response (0.51 ppmv/SOI) are found mainly in the mesosphere, whereas a positive ozone response to ENSO (0.23 ppmv/SOI) is pronounced in the stratosphere (20-30 km). The temperature response to the solar cycle is most positive (3.74 K/100 SFU) in the upper mesosphere; its response to ENSO is negative around 80 km and positive around 90-100 km, and its response to the QBO is insignificant at most heights. The composite monthly mean of the ozone volume mixing ratio shows maximum values, around 10 ppmv, during the pre-monsoon and post-monsoon seasons in the middle stratosphere (25-30 km) and in the upper mesosphere (85-95 km). 
The composite monthly mean of temperature shows a semi-annual variation, with larger values (~250-260 K) in equinox months and lower values in solstice months in the upper stratosphere and lower mesosphere (40-55 km), whereas the SAO becomes weaker above 55 km. The semi-annual variation reappears at 80-90 km, with large values in spring equinox and winter months. In the upper mesosphere (90-100 km), low temperatures (~170-190 K) prevail in all months except September, when the temperature is slightly higher. The height profiles of the amplitudes of the semi-annual and annual oscillations in ozone show maximum values of 6 ppmv and 2.5 ppmv, respectively, in the upper mesosphere (80-100 km), whereas the SAO and AO in temperature show maximum values of 5.8 K and 4.6 K in the lower and middle mesosphere, around 60-85 km. The phase profiles of both the SAO and AO show downward progressions. These results are being compared with long-term lidar temperature measurements over Gadanki (13.5°N, 79.2°E), and the results obtained will be presented during the meeting.
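Separating a linear trend from solar, QBO, and ENSO influences is typically done with a multiple linear regression of the kind described above. The sketch below uses synthetic monthly data with an imposed -1.3 ppmv/decade trend; the proxy series and coefficients are invented for illustration, not SABER data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 132                            # 11 years, 2002-2012
t = np.arange(n_months) / 120.0           # time in decades
solar = rng.normal(120, 40, n_months)     # F10.7 solar-flux proxy (SFU)
qbo = rng.normal(0, 10, n_months)         # QBO wind proxy (m/s)
soi = rng.normal(0, 1, n_months)          # ENSO proxy (SOI)

# Synthetic ozone series: -1.3 ppmv/decade trend plus proxy responses.
ozone = (6.0 - 1.3 * t + 0.009 * solar - 0.02 * qbo - 0.5 * soi
         + rng.normal(0, 0.05, n_months))

# Design matrix: intercept, linear trend, and the three proxies.
X = np.column_stack([np.ones(n_months), t, solar, qbo, soi])
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print(f"recovered trend: {coef[1]:.2f} ppmv/decade")  # close to -1.3
```

Each fitted coefficient then gives the response per unit of its regressor (K/100 SFU, ppmv/SOI, etc.), which is how the responses quoted above are expressed.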

Keywords: trends, QBO, solar cycle, ENSO, ozone, temperature

Procedia PDF Downloads 394
567 Poisoning in Morocco: Evolution and Risk Factors

Authors: El Khaddam Safaa, Soulaymani Abdelmajid, Mokhtari Abdelghani, Ouammi Lahcen, Rachida Soulaymani-Beincheikh

Abstract:

Poisonings represent a health problem worldwide and in Morocco, yet the exact dimensions of the phenomenon remain poorly recorded, as reflected in the lack of exhaustive statistical data. The objective of this retrospective study was to analyze a series of poisoning cases declared in the Tadla-Azilal region and collected by the Moroccan Poison Control and Pharmacovigilance Center, in order to establish an epidemiological profile of poisonings, determine the risk factors influencing the vital prognosis of poisoned patients, and follow the evolution of incidence, lethality, and mortality. During the study period, we collected and analyzed 9303 cases of poisoning by various toxic products, with the exception of scorpion stings. These poisonings led to 99 deaths. The epidemiological profile showed that the poisoned were of all ages, with a mean age of 24.62±16.61 years. The sex ratio (woman/man) was 1.36 in favor of women; the difference between the sexes is highly significant (χ2 = 210.5; p<0.001). Most of the poisoned were of urban origin (60.5%) (χ2=210.5; p<0.001). Carbon monoxide was the most frequently incriminated agent (24.15% of cases), followed by pesticides and agricultural products (21.44%) and food (19.95%). Analysis of the risk factors showed that adult patients aged 20 to 74 years had almost twice the risk of death (RR=1.57; 95% CI = 1.03-2.38) compared with the other age brackets, and that males had a higher risk of death than females (RR=1.59; 95% CI = 1.07-2.38). Patients of rural origin had nearly 5 times the risk (RR=4.713; 95% CI = 2.543-8.742). Patients poisoned by mineral products had the highest risk of death (RR=23.19; 95% CI = 2.39-224.1), and poisoning by pesticides carried a risk of about 9 (RR=9.31; 95% CI = 6.10-14.18). 
The incidence was 3.3 cases per 10,000 inhabitants, and the mortality was 0.004 cases per 1,000 inhabitants (that is, 4 cases per 1,000,000 inhabitants). The annual lethality rate was 10.6%. The evolution of these health indicators over the years showed that the reporting rate, measured by the incidence, increased significantly. We also noted an improvement in case management, which led to a decrease in lethality and mortality in recent years. The fight against poisoning is a long-term undertaking that requires considerable work at various levels; the country must make up its accumulated delays on the legal, institutional, and technical fronts. The ideal solution is to develop and implement a national strategy.
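The relative risks above can be computed from a 2×2 exposure-outcome table. The sketch below shows the standard calculation with a Katz log-scale 95% confidence interval; the counts are invented for illustration, not the study's data.

```python
import math

def relative_risk(a, b, c, d):
    """Relative risk and 95% CI from a 2x2 table.
    a: exposed deaths, b: exposed survivors,
    c: unexposed deaths, d: unexposed survivors.
    """
    rr = (a / (a + b)) / (c / (c + d))
    # Standard error of log(RR), Katz method.
    se = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Invented counts: 30/1000 deaths among exposed vs 20/2000 among unexposed.
rr, lo, hi = relative_risk(30, 970, 20, 1980)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 3.00 (95% CI 1.71-5.26)
```

An RR whose CI excludes 1, as in the rural-origin and pesticide results above, indicates a statistically significant excess risk.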

Keywords: epidemiology, poisoning, risk factors, indicators of health, Tadla-Azilal

Procedia PDF Downloads 345
566 Facial Recognition and Landmark Detection in Fitness Assessment and Performance Improvement

Authors: Brittany Richardson, Ying Wang

Abstract:

For physical therapy, exercise prescription, athlete training, and regular fitness training, it is crucial to perform health or fitness assessments periodically. An accurate assessment is propitious for tracking recovery progress, preventing potential injury, and making long-range training plans. Assessments include basic measurements (height, weight, blood pressure, heart rate, body fat, etc.) and advanced evaluations (muscle group strength, stability-mobility, movement evaluation, etc.). In current standard assessment procedures, the accuracy of assessments, especially advanced evaluations, largely depends on the experience of physicians, coaches, and personal trainers, and it is challenging to track clients’ progress. Unlike the traditional assessment, in this paper we present a deep-learning-based face recognition algorithm for accurate, comprehensive, and trackable assessment. Based on the results of our assessment, physicians, coaches, and personal trainers are able to adjust the training targets and methods. The system categorizes the difficulty level of the current activity for the client and, furthermore, makes more comprehensive assessments by tracking muscle groups over time using a designed landmark detection method. The system also includes functions for grading and correcting the clients’ form during exercise. Experienced coaches and personal trainers can tell clients’ limits from their facial expressions and muscle group movements, even during the first several sessions. Similarly, using a convolutional neural network, the system is trained on people’s facial expressions to differentiate challenge levels for clients, and it uses landmark detection to capture subtle changes in muscle group movements. 
It measures the proximal mobility of the hips and thoracic spine, the proximal stability of the scapulothoracic region, and the distal mobility of the glenohumeral joint, as well as their effects on the kinetic chain. The system integrates data from other fitness assistant devices, including but not limited to the Apple Watch and Fitbit, for improved training and testing performance. The system itself does not require historical data for an individual client, but a client’s history can be used to create a more effective exercise plan. To validate the performance of the proposed work, an experimental design is presented. The results show that the proposed work contributes towards improving the quality of exercise planning, execution, progress tracking, and performance.
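The landmark-based tracking of subtle muscle group movements can be sketched as a displacement check between consecutive frames. This is an illustrative sketch only: the landmark indices, coordinates, and threshold below are invented assumptions, not the system's actual detection model.

```python
import math

def region_displacement(frame_a, frame_b, landmark_ids):
    """Mean Euclidean displacement (in pixels) of a set of landmarks
    between two frames."""
    dists = [math.dist(frame_a[i], frame_b[i]) for i in landmark_ids]
    return sum(dists) / len(dists)

# Two frames of (x, y) landmark positions, e.g. around a jaw/brow region.
frame1 = {0: (100.0, 120.0), 1: (140.0, 118.0), 2: (120.0, 160.0)}
frame2 = {0: (103.0, 124.0), 1: (140.5, 118.0), 2: (121.0, 160.0)}

disp = region_displacement(frame1, frame2, [0, 1, 2])
print(f"mean displacement: {disp:.2f} px, active: {disp > 1.0}")
```

Aggregating such per-region displacements over a session is one plausible way to track muscle group involvement over time, as the paper describes.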

Keywords: exercise prescription, facial recognition, landmark detection, fitness assessments

Procedia PDF Downloads 116
565 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network; they generate enormous volumes of real-time, high-speed data that help organizations and companies make intelligent decisions. Integrating this enormous amount of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development, and handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they sleep and wake up periodically or aperiodically depending on traffic loads, and sometimes they become disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, or inaccurate data. Moreover, many IoT applications, such as vehicle tracking and patient tracking, require the IoT devices to be mobile; if the distance of a device from the sink node becomes greater than required, the connection is lost, and other devices join the network to replace the broken-down or departed ones. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces bad-quality data; because of this dynamism, the actual cause of abnormal data is often unknown. If data are of poor quality, decisions based on them are likely to be unsound, so it is highly important to process the data and estimate their quality before using them in IoT applications. In the past, many researchers have tried to estimate data quality and have provided several machine learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and its impact on data quality. 
This research presents a comprehensive review of how the dynamic nature of IoT devices affects data quality, and proposes a data quality model that can deal with this challenge and produce good-quality data, applied here to sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the model: an extensive study was carried out on the relationship between the data of weather sensors and those of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster positions), and consistency. Finally, a statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the Coefficient of Variation (CoV).
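The outlier-removal and consistency steps can be sketched as follows. This is a minimal pure-Python DBSCAN for illustration (in practice a library implementation such as scikit-learn's would be used), and the sensor readings are invented, not the lake/beach data.

```python
import math

def dbscan(points, eps, min_pts):
    """Label each point with a cluster id (0, 1, ...) or -1 for noise."""
    labels = [None] * len(points)
    cluster = -1

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1            # noise (may later join as border point)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = [j for j in nbrs if j != i]
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise point becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # j is a core point: expand the cluster
    return labels

def cov(values):
    """Coefficient of Variation: standard deviation / mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

# (temperature, turbidity) readings; the last point is an injected outlier.
readings = [(20.1, 3.0), (20.3, 3.1), (19.9, 2.9), (20.2, 3.0), (35.0, 9.0)]
labels = dbscan(readings, eps=1.0, min_pts=3)
clean = [r for r, l in zip(readings, labels) if l != -1]
print(labels)                                   # outlier labelled -1
print(round(cov([t for t, _ in clean]), 4))     # consistency of cleaned temps
```

A low CoV on the cleaned cluster indicates a consistent stream, matching the model's use of CoV as its consistency check.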

Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)

Procedia PDF Downloads 121
564 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total distance travelled or the total time spent on tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream data used to optimize a transporter’s tours are free from errors, including the customers’ real constraints, their addresses, and their GPS coordinates. However, in real transport situations, upstream data are often of poor quality because of address geocoding errors and irrelevant addresses received via EDI (Electronic Data Interchange). Geocoders are not exempt from errors and can return incorrect GPS coordinates; likewise, even with a good geocoder, an inaccurate address leads to a bad geocoding. For instance, when a geocoder has trouble geocoding an address, it may return the coordinates of the city center. Another common issue is that the maps used by geocoders are not regularly updated, so new buildings may not appear until the next update. Trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and leads to bad, incoherent tours, because the customer locations used for the optimization are very different from their real positions. Our work is supported by a logistics software editor, Tedies, and a transport company, Upsilon, whose truck route data we use in our experiments. These trucks are equipped with TomTom GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), which we retrieve to extract the real truck routes. 
The aim of this work is to use the driver’s experience and the feedback from real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding a given customer’s address at most once; in other words, the vehicle would be wrong at most once per customer address. Our method significantly improves the quality of the geocoding: on average, we automatically correct 70% of the GPS coordinates of a tour’s addresses. The remaining GPS coordinates are corrected manually, with the user given indications to help with the correction. This study shows the importance of taking the trucks’ feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer’s address and its GPS coordinates plays a major role in tour optimization, and address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers, etc.) to learn about their tours and improve the upcoming ones; we develop a method to do a large part of that automatically.
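The core validation step can be sketched as a distance check between the geocoded point and the truck's recorded stop position. The function names, threshold, and coordinates below are assumptions for illustration, not the authors' implementation.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def correct_geocode(geocoded, truck_stop, threshold_m=200.0):
    """Keep the geocoded point if the truck actually stopped nearby;
    otherwise replace it with the observed stop position and flag it."""
    if haversine_m(geocoded, truck_stop) <= threshold_m:
        return geocoded, False
    return truck_stop, True

# The geocoder returned the city centre; the truck stopped about 2 km away.
geocoded = (47.3220, 5.0415)   # illustrative city-centre coordinates
stop = (47.3400, 5.0415)       # position recorded by the onboard GPS
coord, corrected = correct_geocode(geocoded, stop)
print(coord, corrected)
```

Repeating this check over successive tours is one way to converge on validated coordinates, in line with the "wrong at most once per address" property described above.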

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 659
563 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4

Authors: Ryan A. Black, Stacey A. McCaffrey

Abstract:

Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are most appropriate for their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover models from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and most complex 4-parameter logistic (4-PL) model. 
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be fitted to two sets of data: (1) simulated data for N=500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses to items administered as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses of both the simulated and real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and run the analyses will be available upon request to allow for replication of the results.
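The 1-PL through 4-PL response functions form a nested family, which can be written compactly. The sketch below is in Python rather than SAS PROC IRT, with illustrative parameter values.

```python
import math

def p_correct(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """4-PL probability of a correct/endorsed response.

    theta: latent trait level; a: discrimination; b: difficulty;
    c: lower asymptote (pseudo-guessing); d: upper asymptote.
    Setting c=0 and d=1 gives the 2-PL; additionally fixing a=1
    gives the 1-PL (Rasch) model.
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# 1-PL: when theta equals the item difficulty b, P is exactly 0.5.
print(p_correct(0.0, b=0.0))                            # 0.5
# 3-PL with guessing floor c=0.2: low-ability examinees stay near 0.2.
print(round(p_correct(-4.0, a=1.5, b=0.0, c=0.2), 3))   # 0.202
```

Each added parameter relaxes one constraint of the simpler model, which is exactly the progression in complexity the overview describes.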

Keywords: instrument development, item response theory, latent trait theory, psychometrics

Procedia PDF Downloads 333
562 Healthcare Professionals' Perspectives on Warfarin Therapy at Lao-Luxembourg Heart Centre, Mahosot Hospital, Lao PDR

Authors: Vanlounni Sibounheuang, Wanarat Anusornsangiam, Pattarin Kittiboonyakun, Chanthanom Manithip

Abstract:

Worldwide, warfarin is one of the most commonly used oral anticoagulants. Its margin between therapeutic inhibition of clot formation and bleeding complications is narrow. At Mahosot Hospital, a warfarin clinic had not yet been established. A descriptive study investigating drug-related problems among outpatients using warfarin at the Lao-Luxembourg Heart Centre, Mahosot Hospital, Lao PDR, found that international normalized ratio (INR) values above the normal range were the most frequently identified problem (25.40% of 272 outpatients). This result led to the present study, which conducted qualitative interviews in order to help establish a warfarin clinic at Mahosot Hospital for better outcomes in patients using warfarin. The purpose of this study was to explore the perspectives of healthcare professionals providing services for outpatients using warfarin. Face-to-face, in-depth interviews were undertaken with nine healthcare professionals (doctors=3, nurses=3, pharmacists=3) working at the outpatient clinic, Lao-Luxembourg Heart Centre, Mahosot Hospital, Lao PDR. Interview guides were developed and validated by experts in the field of qualitative research. Each interview lasted approximately 20 minutes. Three major themes emerged: healthcare professionals’ experiences of current practice problems with warfarin therapy, their views of medical problems related to patients using warfarin, and their perspectives on ways of improving the service. All healthcare professionals shared the view that the INR goal is difficult to achieve for individual patients because of important patient barriers, especially a lack of knowledge about how to use warfarin properly and safely, and irregular follow-up due to problems with transportation and financial support. 
Doctors and nurses agreed to have a pharmacist run a routine warfarin clinic and provide counselling to individual patients on the following points: how to take the drug properly and safely, drug-drug and food-drug interactions, common side effects and how to manage them, and lifestyle modifications. From the interviews, important components of establishing a warfarin clinic included financial support, increased human resources, an improved system for keeping patients’ medical records, and short-course training for pharmacists. This study indicated the acceptance by healthcare professionals of the important roles of pharmacists, and the feasibility of setting up a warfarin clinic by working together with the multidisciplinary healthcare team, in order to help improve the health outcomes of patients using warfarin at Mahosot Hospital, Lao PDR.

Keywords: perspectives, healthcare professional, warfarin therapy, Mahosot Hospital

Procedia PDF Downloads 87
561 Development of Perovskite Quantum Dots Light Emitting Diode by Dual-Source Evaporation

Authors: Antoine Dumont, Weiji Hong, Zheng-Hong Lu

Abstract:

Light emitting diodes (LEDs) are steadily becoming the new standard for luminescent display devices because of their energy efficiency, relatively low cost, and the purity of the light they emit. Our research focuses on the optical properties of the lead halide perovskite CsPbBr₃ and its family, which are showing steadily improving performance in LEDs and solar cells. The objective of this work is to investigate CsPbBr₃ as an emitting layer made by physical vapor deposition, instead of the usual solution-processed perovskites, for use in LEDs. Deposition in vacuum eliminates any risk of contaminants as well as the need for chemical ligands in the synthesis of quantum dots. Initial results show the versatility of the dual-source evaporation method, which allowed us to create different phases in bulk form by altering the mole ratio or the deposition rates of CsBr and PbBr₂. The distinct phases Cs₄PbBr₆, CsPbBr₃, and CsPb₂Br₅, confirmed through XPS (X-ray photoelectron spectroscopy) and X-ray diffraction analysis, have different optical properties and morphologies that can be used for specific applications in optoelectronics. We are particularly focused on the blue shift expected from quantum dots (QDs) and the stability of the perovskite in this form. We have already obtained proof of the formation of QDs through our dual-source evaporation method via electron microscope imaging and photoluminescence testing, which we understand is a first in the community. We have also incorporated the QDs into an LED structure to test the electroluminescence and the effect on performance, and have already observed a significant wavelength shift. The goal is to reach 480 nm, shifting from the original 528 nm bulk emission. The hole transport layer (HTL) material onto which the CsPbBr₃ is evaporated is a critical part of this study, as the surface energy interaction dictates the behaviour of the QD growth. A thorough study to determine the optimal HTL is in progress. 
A strong blue shift for a typically green emitting material like CsPbBr₃ would eliminate the necessity of using blue emitting Cl-based perovskite compounds and could prove to be more stable in a QD structure. The final aim is to make a perovskite QD LED with strong blue luminescence, fabricated through a dual-source evaporation technique that could be scalable to industry level, making this device a viable and cost-effective alternative to current commercial LEDs.

Keywords: material physics, perovskite, light emitting diode, quantum dots, high vacuum deposition, thin film processing

Procedia PDF Downloads 149