Search results for: higher order thinking skills
5418 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation
Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar
Abstract:
Plasma actuators are essential for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, features that have numerous applications in hypersonic or supersonic aircraft. These actuators work by forming a low-temperature plasma between a pair of parallel electrodes through the application of a high-voltage AC signal across them, after which molecules in the surrounding air are ionized and accelerated by the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. This paper presents the optimal design and construction of a power supply that generates dielectric barrier discharges for flow separation control in hypersonic flow, together with a simplified circuit topology that emulates the dielectric barrier discharge and allows its frequency response to be studied. The power supply can generate high-voltage pulses of up to 20 kV at repetition frequencies of 20-50 kHz with an input power of 500 W. It has been designed to be short-circuit-proof and can endure variable plasma load conditions. The general scheme is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency in order to generate the high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.
Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source
Procedia PDF Downloads 131
5417 Learning Dynamic Representations of Nodes in Temporally Variant Graphs
Authors: Sandra Mitrovic, Gaurav Singh
Abstract:
In many industries, including telecommunications, churn prediction has been a topic of active research. A lot of attention has been devoted to devising the most informative features, and this area of research has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc and dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in the observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic and time-evolving network. To account for this, we propose an approach that constructs the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an auto-encoder-like method in order to retain reasonably long and informative feature vectors. We test the proposed method on the churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and logistic regression. The observed results show the effectiveness of the proposed approach compared to ad-hoc feature selection based approaches and static node2vec.
Keywords: churn prediction, dynamic networks, node2vec, auto-encoders
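A minimal sketch of the pipeline described above, assuming the networkx, node2vec, and scikit-learn packages; PCA stands in for the auto-encoder-like compression step, and the graph snapshots, node list, and churn labels are hypothetical placeholders, not the authors' dataset.

```python
# Sketch: per-timestamp node2vec embeddings -> concatenation -> compression -> churn classifier.
import numpy as np
import networkx as nx
from node2vec import Node2Vec
from sklearn.decomposition import PCA            # stand-in for the auto-encoder-like compressor
from sklearn.linear_model import LogisticRegression

def embed_snapshot(graph: nx.Graph, dims: int = 64) -> dict:
    """Return a node -> node2vec vector mapping for one graph snapshot."""
    n2v = Node2Vec(graph, dimensions=dims, walk_length=20, num_walks=10, workers=2)
    model = n2v.fit(window=5, min_count=1)
    return {node: model.wv[str(node)] for node in graph.nodes()}

def build_features(graphs: list, nodes: list, dims: int = 64, compressed: int = 32) -> np.ndarray:
    """Concatenate per-timestamp embeddings for each node, then compress."""
    snapshots = [embed_snapshot(g, dims) for g in graphs]
    concat = np.vstack([
        np.concatenate([snap.get(n, np.zeros(dims)) for snap in snapshots])  # zeros if node absent
        for n in nodes
    ])
    return PCA(n_components=compressed).fit_transform(concat)

# Hypothetical usage: train on snapshots covering [t1, ts-1], predict churn at ts+1.
# graphs, nodes, labels = load_cdr_snapshots()   # placeholder loader, not a real API
# X = build_features(graphs, nodes)
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```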
Procedia PDF Downloads 316
5416 Appliance of the Analytic Hierarchy Process Methodology for the Selection of a Small Modular Reactors to Enhance Maritime Traffic Decarbonisation
Authors: Sara Martín, Ying Jie Zheng, César Hueso
Abstract:
International shipping is considered one of the largest sources of pollution in the world, accounting for 812 million tons of CO2 emissions in the year 2018. Current maritime decarbonisation is based on the implementation of new fuel alternatives, such as LNG, biofuels, and methanol, among others, which are less polluting but also less efficient. Despite being a carbon-free and highly developed technology, nuclear propulsion is hardly discussed as an alternative. It is believed that Small Modular Reactors (SMR) could be a promising solution to decarbonise maritime traffic due to their small dimensions and safety capabilities. However, as of today, there are no merchant ships powered by nuclear systems. Therefore, this project aims to understand the challenges of the development of nuclear-fuelled vessels by analysing all SMR designs to choose the most suitable one. To avoid subjectivity, the Analytic Hierarchy Process (AHP) will be used to make the selection. This multiple-criteria evaluation technique analyses complex decisions by pairwise comparison of a number of evaluation criteria that can be applied to each SMR. The 72 state-of-the-art SMRs presented by the International Atomic Energy Agency (IAEA) will be analysed and ranked by a global parameter calculated by applying the AHP methodology. The main target of the work is to find an adequate SMR system to power a ship. Top designs will be described in detail, and conclusions will be drawn from the results. This project has been conceived as an effort to foster the near-term development of zero-emission maritime traffic.
Keywords: international shipping, decarbonization, SMR, AHP, nuclear-fuelled vessels
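As an illustration of the AHP step described above, the sketch below computes a priority vector and Saaty's consistency ratio from a reciprocal pairwise comparison matrix with NumPy; the 3x3 matrix and criterion names are made-up examples, not the study's actual criteria.

```python
# Sketch of the core AHP computation: principal eigenvector -> criterion weights,
# plus the consistency ratio. The example matrix below is illustrative only.
import numpy as np

def ahp_weights(pairwise: np.ndarray):
    """Return (weights, consistency_ratio) for a reciprocal pairwise comparison matrix."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                         # normalize to a priority vector
    n = pairwise.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)             # consistency index
    ri = {2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    cr = ci / ri if ri else 0.0                      # consistency ratio (acceptable if < 0.1)
    return weights, cr

# Illustrative pairwise matrix for three hypothetical criteria (e.g., power output, safety, maturity).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))
```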
Procedia PDF Downloads 127
5415 Multidimensional Approach to Analyse the Environmental Impacts of Mobility
Authors: Andras Gyorfi, Andras Torma, Adrienn Buruzs
Abstract:
Mobility has evolved into a determining field of science. This continuously developing segment affects a variety of areas, such as the public and economic sectors. Besides the changes in mobility, the state of the environment has also changed in recent years. Alternative mobility as a separate category, and the idea of its widespread application, is such a new field that it needs to be studied more deeply. Alternative mobility implies finding new types of propulsion, using innovative kinds of power and energy resources, and revolutionizing the approach to vehicular control. Including new resources and excluding others has such a complex effect that it cannot be unequivocally confirmed by today's scientific achievements. Changes in specific parameters will most likely reduce the environmental impacts; however, the production of new substances or even their subtraction from the system will probably cause an energy deficit as well. The aim of this research is to elaborate the environmental impact matrix of alternative mobility, identify the factors that are yet unknown, analyse them, look for alternative solutions, and consolidate all the above into a coherent system. To this end, we analyse it with the 'system of systems' (SoS) method to model the effects and the dynamics of the system. Part of the research process is to examine its impacts on the environment and to decide whether the newly developed versions of alternative mobility affect the environmental state. As a final result, a complex approach will be used which can supplement the current scientific studies. By using the SoS approach, we create a framework of reference containing elements in which we examine the interactions as well. In such a way, a flexible and modular model can be established which supports the prioritizing of effects and the deeper analysis of the complex system.
Keywords: environment, alternative mobility, complex model, element analysis, multidimensional map
Procedia PDF Downloads 328
5414 Supporting International Student’s Acculturation Through Chatbot Technology: A Proposed Study
Authors: Sylvie Studente
Abstract:
Despite the increase in international students migrating to the UK, the transition from the home environment to a host institution abroad can be overwhelming for many students due to acculturative stressors. These stressors are reported to peak within the first six months of transitioning into study abroad, which has detrimental impacts on Higher Education Institutions. These impacts include increased drop-out rates and overall decreases in academic performance. Research suggests that belongingness can negate acculturative stressors by providing opportunities for students to form necessary social connections. In response to this, universities have focussed on utilising technology to create learning communities, with the most commonly deployed being social media, blogs, and discussion forums. Despite these attempts, the application of technology in supporting international students is still ambiguous. With the reported growing popularity of mobile devices among students and accelerations in learning technology owing to the COVID-19 pandemic, there is recognised potential to address this challenge via the use of chatbot technology. Whilst chatbots were traditionally deployed as conversational agents in business domains, they have since been applied to the field of education. Within this emerging area of research, a gap exists in addressing the educational value of chatbots over and above the traditional service-orientation categorisation. The proposed study seeks to extend current understandings by investigating the challenges faced by international students in studying abroad and exploring the potential of chatbots as a solution to assist students' acculturation. There has been growing interest in the application of chatbot technology to education, accelerated by the shift to online learning during the COVID-19 pandemic. Although interest in educational chatbots has surged, there is a lack of consistency in the research area in terms of guidance on design to support international students in HE. This gap is widened when considering the additional challenge of supporting a multicultural and diverse international student body. Diversification in education is rising due to increases in migration trends for international study. As global opportunities for education increase, so does the need for multiculturally inclusive learning support.
Keywords: chatbots, education, international students, acculturation
Procedia PDF Downloads 48
5413 Application of Value Engineering Approach for Improving the Quality and Productivity of Ready-Mixed Concrete Used in Construction and Hydraulic Projects
Authors: Adel Mohamed El-Baghdady, Walid Sayed Abdulgalil, Ahmad Asran, Ibrahim Nosier
Abstract:
This paper studies the effectiveness of applying value engineering to actual concrete mixtures. The study was conducted in the State of Qatar on a number of strategic construction projects with international engineering specifications for the 2022 World Cup projects. The study examined the concrete mixtures of the Doha Metro project and the development of KAHRAMAA’s (Qatar Electricity and Water Company) Abu Funtas Strategic Desalination Plant, in order to generally improve the quality and productivity of ready-mixed concrete used in construction and hydraulic projects. The application of value engineering to such concrete mixtures resulted in the following: i) improving the quality of concrete mixtures and increasing the durability of buildings in which they are used; ii) reducing the waste of excess concrete mixture materials, optimizing the use of resources, and enhancing sustainability; iii) reducing the use of cement, thus reducing CO₂ emissions, which ensures the protection of the environment and public health; iv) reducing the actual costs of concrete mixtures and, in turn, reducing the costs of construction projects; and v) increasing the market share and competitiveness of concrete producers. This research shows that applying the methodology of value engineering to ready-mixed concrete is an effective way to save around 5% of the total cost of concrete mixtures supplied to construction and hydraulic projects, improve the quality according to the technical requirements and as per the standards and specifications for ready-mixed concrete, improve the environmental impact, and promote sustainability.
Keywords: value management, cost of concrete, performance, optimization, sustainability, environmental impact
Procedia PDF Downloads 357
5412 Effect of Sintering Time and Porosity on Microstructure, Mechanical and Corrosion Properties of Ti6Al15Mo Alloy for Implant Applications
Authors: Jyotsna Gupta, S. Ghosh, S. Aravindan
Abstract:
The requirement for artificial prostheses (such as hip and knee joints) has increased with time. Many researchers are working to develop new implants with improved properties such as excellent biocompatibility with no tissue reactions, corrosion resistance in body fluid, high yield strength, and low elastic modulus. Further, the morphological properties of artificial implants should also match those of human bone so that cell adhesion, proliferation, and transportation of minerals and nutrition through body fluid can be obtained. The present study attempts to make porous Ti6Al15Mo alloys through the powder metallurgy route using the space holder technique. The alloy consists of 6 wt% Al, taken as the α-phase stabilizer, and 15 wt% Mo, taken as the β-phase stabilizer, with a theoretical density of 4.708 g/cm³. Ammonium hydrogen carbonate is used as a space holder in order to generate the porosity. The porosity of these fabricated porous alloys was controlled by adding 0, 50, and 70 vol.% of space holder content. Three phases were found in the microstructure: the α, α₂, and β phases of titanium. Kirkendall pores are observed to decrease with increasing holding time during sintering, while the compressive strength and elastic modulus values increase slightly in parallel. The compressive strength and elastic modulus of the porous Ti-6Al-15Mo alloy (1.17 g/cm³ density) are found to be suitable for cancellous bone. The amounts of ions released from the Ti-6Al-15Mo alloy are far below the permissible limits for the human body.
Keywords: bone implant, powder metallurgy, sintering time, Ti-6Al-15Mo
Procedia PDF Downloads 148
5411 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements
Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating
Abstract:
Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product is located at a fixed position, while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines, or work are temporarily required. This is an important factor when arranging products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. In this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process; instead, a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products, which complies with the dynamic area requirements, will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is extensively explained, and existing research on associated topics is described and evaluated for the possibility of adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.
Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly
Procedia PDF Downloads 235
5410 Generation of ZnO-Au Nanocomposite in Water Using Pulsed Laser Irradiation
Authors: Elmira Solati, Atousa Mehrani, Davoud Dorranian
Abstract:
The generation of ZnO-Au nanocomposites under laser irradiation of a mixture of ZnO and Au colloidal suspensions is experimentally investigated. In this work, ZnO and Au nanoparticles are first prepared by pulsed laser ablation of the corresponding metals in water using the 1064 nm wavelength of an Nd:YAG laser. In a second step, the produced ZnO and Au colloidal suspensions were mixed in different volumetric ratios and irradiated using the second harmonic of an Nd:YAG laser operating at 532 nm wavelength. The changes in the size of the nanostructures and the optical properties of the ZnO-Au nanocomposites are studied as a function of the volumetric ratio of the ZnO and Au colloidal suspensions. The crystalline structure of the ZnO-Au nanocomposites was analyzed by X-ray diffraction (XRD). The optical properties of the samples were examined at room temperature by a UV-Vis-NIR absorption spectrophotometer. Transmission electron microscopy (TEM) was done by placing a drop of the concentrated suspension on a carbon-coated copper grid. To further confirm the morphology of the ZnO-Au nanocomposites, we performed scanning electron microscopy (SEM) analysis. Room-temperature photoluminescence (PL) of the ZnO-Au nanocomposites was measured to characterize their luminescence properties. The ZnO-Au nanocomposites were also characterized by Fourier transform infrared (FTIR) spectroscopy. The X-ray diffraction pattern shows that the ZnO-Au nanocomposites had the polycrystalline structure of Au. The behavior observed in the transmission electron microscope images reveals that the joining of the Au and ZnO nanoparticles involves their adhesion. The plasmon peak in the ZnO-Au nanocomposites was red-shifted and broadened in comparison with pure Au nanoparticles. By using Tauc's equation, the band gap energy of the ZnO-Au nanocomposites is calculated to be 3.15–3.27 eV. The formation of ZnO-Au nanocomposites shifts the FTIR peak of the metal oxide bands to higher wavenumbers. PL spectra of the ZnO-Au nanocomposites show several weak peaks in the ultraviolet region and several relatively strong peaks in the visible region. The SEM image indicates that the morphology of the ZnO-Au nanocomposites produced in water was spherical. The TEM images of the ZnO-Au nanocomposites demonstrate that the adhesion increases with increasing volumetric ratio of the Au colloidal suspension. According to the size distribution graphs, the proportion of smaller ZnO-Au nanocomposites also increases with increasing volumetric ratio of the Au colloidal suspension.
Keywords: Au nanoparticles, pulsed laser ablation, ZnO-Au nanocomposites, ZnO nanoparticles
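For readers unfamiliar with the Tauc analysis mentioned above, the sketch below shows one common way to estimate a direct band gap from UV-Vis data: plot (αhν)² against hν and extrapolate the linear region to zero. The file name, column layout, and fitting window are assumptions for illustration, not the authors' actual data.

```python
# Sketch of Tauc-plot band-gap estimation for a direct allowed transition:
# (alpha * h*nu)^2 = A * (h*nu - Eg). Fit the linear region, extrapolate to zero.
import numpy as np

def tauc_band_gap(wavelength_nm: np.ndarray, absorbance: np.ndarray,
                  fit_window_eV: tuple) -> float:
    """Estimate Eg (eV) from absorbance vs wavelength, assuming alpha is proportional to absorbance."""
    h_nu = 1239.84 / wavelength_nm                 # photon energy in eV
    tauc = (absorbance * h_nu) ** 2                # direct-gap Tauc quantity (arbitrary units)
    lo, hi = fit_window_eV
    mask = (h_nu >= lo) & (h_nu <= hi)             # user-chosen linear region
    slope, intercept = np.polyfit(h_nu[mask], tauc[mask], 1)
    return -intercept / slope                      # x-intercept = band gap estimate

# Hypothetical usage with a two-column CSV (wavelength_nm, absorbance):
# wl, ab = np.loadtxt("uvvis_znoau.csv", delimiter=",", unpack=True)   # placeholder file
# print("Eg ~", round(tauc_band_gap(wl, ab, fit_window_eV=(3.2, 3.5)), 2), "eV")
```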
Procedia PDF Downloads 348
5409 In-silico DFT Study, Molecular Docking, ADMET Predictions, and DMS of Isoxazolidine and Isoxazoline Analogs with Anticancer Properties
Authors: Moulay Driss Mellaoui, Khadija Zaki, Khalid Abbiche, Abdallah Imjjad, Rachid Boutiddar, Abdelouahid Sbai, Aaziz Jmiai, Souad El Issami, Al Mokhtar Lamsabhi, Hanane Zejli
Abstract:
This study presents a comprehensive analysis of six isoxazolidine and isoxazoline derivatives, leveraging a multifaceted approach that combines Density Functional Theory (DFT), AdmetSAR analysis, and molecular docking simulations to explore their electronic, pharmacokinetic, and anticancer properties. Through DFT analysis, using the B3LYP-D3BJ functional and the 6-311++G(d,p) basis set, we optimized molecular geometries, analyzed vibrational frequencies, and mapped Molecular Electrostatic Potentials (MEP), identifying key sites for electrophilic attacks and hydrogen bonding. Frontier Molecular Orbital (FMO) analysis and Density of States (DOS) plots revealed varying stability levels among the compounds, with 1b, 2b, and 3b showing slightly higher stability. Chemical potential assessments indicated differences in binding affinities, suggesting stronger potential interactions for compounds 1b and 2b. AdmetSAR analysis predicted favorable human intestinal absorption (HIA) rates for all compounds, highlighting compound 3b's superior oral effectiveness. Molecular docking and molecular dynamics simulations were conducted on the isoxazolidine and 4-isoxazoline derivatives targeting the EGFR receptor (PDB: 1JU6). The docking simulations confirmed the high affinity of these compounds towards the target protein 1JU6, particularly compound 3b; among the isoxazolidine derivatives, compound 3b exhibited the most favorable binding energy, with a g score of -8.50 kcal/mol. Molecular dynamics simulations over 100 nanoseconds demonstrated the stability and potential of compound 3b as a superior candidate for anticancer applications, further supported by structural analyses including RMSD, RMSF, Rg, and SASA values. This study underscores the promising role of compound 3b in anticancer treatments, providing a solid foundation for future drug development and optimization efforts.
Keywords: isoxazolines, DFT, molecular docking, molecular dynamics, ADMET, drugs
Procedia PDF Downloads 51
5408 Clinical Trial of VEUPLEXᵀᴹ TBI Assay to Help Diagnose Traumatic Brain Injury by Quantifying Glial Fibrillary Acidic Protein and Ubiquitin Carboxy-Terminal Hydrolase L1 in the Serum of Patients Suspected of Mild TBI by Fluorescence Immunoassay
Authors: Moon Jung Kim, Guil Rhim
Abstract:
The clinical sensitivity of the “VEUPLEXᵀᴹ TBI assay”, a clinical trial medical device, in mild traumatic brain injury was 28.6% (95% CI: 19.7%-37.5%), and the clinical specificity was 94.0% (95% CI: 89.3%-98.7%). In addition, when the results analyzed by marker were put together, the sensitivity was higher when interpreting the two tests, UCHL1 and GFAP, together than when interpreting either test alone. When sensitivity and specificity were analyzed based on CT results for the mild traumatic brain injury patient group, the clinical sensitivity for the 2 CT-positive cases was 50.0% (95% CI: 1.3%-98.7%), and the clinical specificity for the 19 CT-negative cases was 68.4% (95% CI: 43.5%-87.4%). Since the low clinical sensitivity for the two CT-positive cases was not statistically significant due to the small number of samples analyzed, it was judged necessary to secure and analyze more samples in the future. Regarding the clinical specificity results for the 19 CT-negative cases, there were a large number of patients who were clinically diagnosed with mild traumatic brain injury but received a CT-negative result, and about 31.6% of them showed abnormal results on the VEUPLEXᵀᴹ TBI assay. Although traumatic brain injury was not detected on CT in these cases, the possibility of an actual mild brain injury could not be ruled out, so it was judged that this could be confirmed through follow-up observation of the patients. In addition, among patients with mild traumatic brain injury, CT examinations were not performed in many cases because the symptoms were very mild, but about 25% or more of these patients showed abnormal results on the VEUPLEXᵀᴹ TBI assay. In fact, in some cases no damage is observed with the naked eye immediately after traumatic brain injury, and none is observed even on CT, yet brain hemorrhage may occur after a certain period of time (delayed cerebral hemorrhage), so patients who showed abnormal results on the VEUPLEXᵀᴹ TBI assay should be followed up for delayed cerebral hemorrhage. In conclusion, it was judged difficult to diagnose mild traumatic brain injury with the VEUPLEXᵀᴹ TBI assay only through clinical findings without CT results, that is, based on the GCS value. Even CT does not detect all mild traumatic brain injuries, so it is difficult to conclude that there is no traumatic brain injury even when there is no evidence of it on CT. In the long term, more patients should be included to evaluate the usefulness of the VEUPLEXᵀᴹ TBI assay in the detection of microscopic traumatic brain injuries without using CT.
Keywords: brain injury, traumatic brain injury, GFAP, UCHL1
Procedia PDF Downloads 114
5407 Effect of Retained Posterior Horn of Medial Meniscus on Functional Outcome of ACL Reconstructed Knees
Authors: Kevin Syam, Devendra K. Chauhan, Mandeep Singh Dhillon
Abstract:
Background: The posterior horn of the medial meniscus (PHMM) is a secondary stabilizer against anterior translation of the tibia. Cadaveric studies have revealed increased strain on the ACL graft and greater instrumented laxity in posterior horn-deficient knees. Clinical studies have shown a higher prevalence of radiological OA after ACL reconstruction combined with menisectomy. However, functional outcomes in ACL-reconstructed knees in the absence of the posterior horn are less discussed, and the specific role of the posterior horn is ill-documented. This study evaluated functional and radiological outcomes in posterior horn-preserved and posterior horn-sacrificed ACL-reconstructed knees. Materials: Of the 457 patients who had ACL reconstruction done over a 6-year period, 77 cases with a minimum follow-up of 18 months were included in the study after strict exclusion criteria (associated lateral meniscus injury, other ligamentous injuries, significant cartilage degeneration, repeat injury, and contralateral knee injuries were excluded). 41 patients with intact menisci were compared with 36 patients with an absent posterior horn of the medial meniscus. Radiological and clinical tests for instability were conducted, and knees were evaluated using the subjective International Knee Documentation Committee (IKDC) score and the Orthopädische Arbeitsgruppe Knie (OAK) score. Results: We found a trend towards significantly better overall outcome (OAK) in cases with intact PHMM at an average follow-up of 43.03 months (p value 0.082). Cases with intact PHMM had significantly better objective stability (p value 0.004). No significant differences were noted in the subjective IKDC score (p value 0.526) and the functional OAK outcome (category D) (p value 0.363). More cases with an absent posterior horn had evidence of radiological OA (p value 0.022) even at mid-term follow-up. Conclusion: Even though the overall OAK and subjective IKDC scores did not show a significant difference between the two subsets, the poorer outcomes in terms of objective stability and radiological OA noted in the absence of the PHMM indicate the importance of preserving this important part of the meniscus.
Keywords: ACL, functional outcome, knee, posterior horn of medial meniscus
Procedia PDF Downloads 359
5406 Prevalence and Risk Factors Associated with Nutrition Related Non-Communicable Diseases in a Cohort of Males in the Central Province of Sri Lanka
Authors: N. W. I. A. Jayawardana, W. A. T. A. Jayalath, W. M. T. Madhujith, U. Ralapanawa, R. S. Jayasekera, S. A. S. B. Alagiyawanna, A. M. K. R. Bandara, N. S. Kalupahana
Abstract:
There is mounting evidence to the effect that dietary and lifestyle changes affect the incidence of non-communicable diseases (NCDs). This study was conducted to investigate the association of diet, physical activity, smoking, alcohol consumption, and duration of sleep with overweight, obesity, hypertension, and diabetes in a cohort of males from the Central Province of Sri Lanka. A total of 2694 individuals aged between 17 and 68 years (mean = 31) were included in the study. Body Mass Index cutoff values for Asians were used to categorize the participants as normal, overweight, and obese. Dietary data were collected using a food frequency questionnaire (FFQ), and data on the level of physical activity, smoking, alcohol consumption, and sleeping hours were obtained using a self-administered validated questionnaire. Systolic and diastolic blood pressure and random blood glucose levels were measured to determine the incidence of hypertension and diabetes. Among the individuals, the prevalence of overweight and obesity was 34% and 16.4%, respectively. Approximately 37% of the participants suffered from hypertension. Overweight and obesity were associated with older age (P<0.0001), frequency of smoking (P=0.0434), alcohol consumption level (P=0.0287), and the quantity of lipid intake (P=0.0081). Consumption of fish (P=0.6983) and salty snacks (P=0.8327), sleeping hours (P=0.6847), and the level of physical activity (P=0.3301) were not significantly associated with the incidence of overweight and obesity. Based on the fitted model, only age was significantly associated with hypertension (P<0.001). Further, age (P<0.0001), sleeping hours (P=0.0953), and consumption of fatty foods (P=0.0930) were significantly associated with diabetes. Age was associated with higher odds of pre-diabetes (OR: 1.089; 95% CI: 1.053-1.127) and diabetes (OR: 1.077; 95% CI: 1.055-1.1), whereas 7-8 hours of sleep per day was associated with lower odds of diabetes (OR: 0.403; 95% CI: 0.184-0.884). The high prevalence of overweight, obesity, and hypertension in working-age males is a threatening sign for this area. As this population ages in the future and urbanization continues, the prevalence of the above risk factors will likely escalate.
Keywords: age, males, non-communicable diseases, obesity
Procedia PDF Downloads 338
5405 Modeling Biomass and Biodiversity across Environmental and Management Gradients in Temperate Grasslands with Deep Learning and Sentinel-1 and -2
Authors: Javier Muro, Anja Linstadter, Florian Manner, Lisa Schwarz, Stephan Wollauer, Paul Magdon, Gohar Ghazaryan, Olena Dubovyk
Abstract:
Monitoring the trade-off between biomass production and biodiversity in grasslands is critical to evaluate the effects of management practices across environmental gradients. New generations of remote sensing sensors and machine learning approaches can model grasslands' characteristics with varying accuracies. However, studies often fail to cover a sufficiently broad range of environmental conditions, and evidence suggests that prediction models might be case-specific. In this study, biomass production and biodiversity indices (species richness and Fisher's α) are modeled in 150 grassland plots for three sites across Germany. These sites represent a North-South gradient and are characterized by distinct soil types, topographic properties, climatic conditions, and management intensities. Predictors used are derived from Sentinel-1 & 2 and a set of topoedaphic variables. The transferability of the models is tested by training and validating at different sites. The performance of feed-forward deep neural networks (DNN) is compared to a random forest algorithm. While biomass predictions across gradients and sites were acceptable (r² = 0.5), predictions of biodiversity indices were poor (r² = 0.14). The DNN showed higher generalization capacity than the random forest when predicting biomass across gradients and sites (relative root mean squared error of 0.5 for the DNN vs. 0.85 for the random forest). The DNN also achieved high performance when using the Sentinel-2 surface reflectance data rather than different combinations of spectral indices, Sentinel-1 data, or topoedaphic variables, simplifying dimensionality. This study demonstrates the necessity of training biomass and biodiversity models using a broad range of environmental conditions and ensuring spatial independence to have realistic and transferable models where plot-level information can be upscaled to the landscape scale.
Keywords: ecosystem services, grassland management, machine learning, remote sensing
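The model comparison described above can be sketched with scikit-learn, using a leave-one-site-out split to mimic the transferability test; the feature matrix, site labels, and biomass targets are hypothetical placeholders, and an MLPRegressor stands in for the feed-forward DNN.

```python
# Sketch: compare a feed-forward network and a random forest for biomass prediction,
# training on two sites and validating on the held-out third (leave-one-site-out).
# X (Sentinel-1/2 + topoedaphic predictors), y (biomass), and site are hypothetical arrays;
# rRMSE = RMSE / mean(observed).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

def leave_one_site_out(X, y, site, model):
    rrmse = []
    for held_out in np.unique(site):
        train, test = site != held_out, site == held_out
        model.fit(X[train], y[train])
        pred = model.predict(X[test])
        rmse = np.sqrt(mean_squared_error(y[test], pred))
        rrmse.append(rmse / y[test].mean())
    return np.mean(rrmse)

models = {
    "DNN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)),
    "RandomForest": RandomForestRegressor(n_estimators=500, random_state=0),
}
# X, y, site = load_grassland_plots()   # placeholder loader, not a real API
# for name, m in models.items():
#     print(name, "mean rRMSE:", round(leave_one_site_out(X, y, site, m), 2))
```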
Procedia PDF Downloads 220
5404 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems
Authors: Malinwo Estone Ayikpa
Abstract:
With the rapid increase of grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals in the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. However, the conventional distribution network was not designed to receive generation other than from the main power supply. The tools generally used to analyze the networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which could be due to the poor distribution of single-phase PV systems in the network. In order to analyze the impact of the connection of small and large numbers of PV systems to the distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks with active and reactive power injections from the allocation of single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. A substantial reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, and the PV systems are modeled via an equivalent circuit. The primal-dual interior point method is used to obtain the optimal operating points for the systems.
Keywords: photovoltaic system, primal-dual interior point method, three-phase optimal power flow, voltage unbalance
Procedia PDF Downloads 334
5403 Development of Database for Risk Assessment Appling to Ballast Water Managements
Authors: Eun-Chan Kim, Jeong-Hwan Oh, Seung-Guk Lee
Abstract:
Billions of tonnes of ballast water, including various aquatic organisms, are being carried around the world by ships. When ballast water is discharged into new environments, some aquatic organisms discharged with it may become invasive and severely disrupt the native ecology. Thus, the International Maritime Organization (IMO) adopted the Ballast Water Management Convention in 2004. Regulation A-4 of the convention states that a government may grant exemptions to any requirements for ballast water management in waters under its jurisdiction, but only when they are granted to a ship or ships on a voyage or voyages between specified ports or locations, or to a ship which operates exclusively between specified ports or locations. In order to grant exemptions, risk assessment should be conducted based on the guidelines for risk assessment developed by the IMO. For the risk assessment, it is essential to collect the relevant information and establish a database system. This paper studies the database system for ballast water risk assessment. The database consists of a shipping database, a ballast water database, a port environment database, and a species database. The shipping database has been established based on data collected from the port management information system of the Korean Government. For the ballast water database, ballast water discharge has only been estimated from the loading/unloading of cargoes, as the convention has not come into effect yet. The port environment database and species database are being established based on reference documents and on existing and newly collected monitoring data. This database system has proved to be a useful system, capable of appropriately supporting risk assessment in all ports of Korea.
Keywords: ballast water, IMO, risk assessment, shipping, environment, species
Procedia PDF Downloads 524
5402 Applying Kinect on the Development of a Customized 3D Mannequin
Authors: Shih-Wen Hsiao, Rong-Qi Chen
Abstract:
In the field of fashion design, the 3D mannequin is a kind of assisting tool that can rapidly realize design concepts. When the concept of the 3D mannequin is applied to computer-aided fashion design, it connects with the development and application of design platforms and systems. Thus, it is critical to develop a 3D mannequin module that corresponds to the needs of fashion design. This research proposes a concrete plan for developing and constructing a 3D mannequin system with Kinect. Ergonomic measurements of the target human features can be attained in real time with the Kinect depth camera, and mesh morphing can then be implemented by transforming the locations of the control points on the model according to those ergonomic data, yielding an exclusive 3D mannequin model. In the proposed methodology, after the scanned points from the Kinect are revised for accuracy and smoothness, a complete human feature is reconstructed by the ICP algorithm together with image processing methods. The target human feature can also be recognized and analyzed to obtain real measurements. Furthermore, the ergonomic measurement data can be applied to shape morphing for the division of the 3D mannequin reconstructed by feature curves. Since a standardized and customer-oriented 3D mannequin can be generated through subdivision, the research can be applied to fashion design or to the presentation and display of 3D virtual clothes. To examine the practicality of the research structure, a 3D mannequin system is constructed with a Java program in this study, and its practicability is verified through experiments.
Keywords: 3D mannequin, kinect scanner, iterative closest point, shape morphing, subdivision
Procedia PDF Downloads 312
5401 Nickel Substituted Cobalt Ferrites via Ceramic Rout Approach: Exploration of Structural, Optical, Dielectric and Electrochemical Behavior for Pseudo-Capacitors
Authors: Talat Zeeshan
Abstract:
Nickel-doped cobalt ferrites (Co₁₋ₓNiₓFe₂O₄) have been synthesized with variation of the Ni dopant (x = 0.0, 0.25, 0.50, 0.75) by a ball milling route at 150 RPM for 3 hours. The impact of nickel on the Co ferrites has been investigated using various characterization approaches such as XRD (X-ray diffraction), SEM (scanning electron microscopy), FTIR (Fourier transform infrared spectroscopy), UV-Vis spectroscopy, an LCR meter, and CV (cyclic voltammetry). The cubic structure of the nanoparticles is confirmed by the XRD data, and the increase in Ni dopant reduces the crystallite size. FTIR spectroscopy has been employed in order to analyze the various functional groups. The agglomerated morphology of the particles has been observed in the SEM images. UV-Vis analysis reveals that the optical energy band gap progressively rises with nickel doping, from 1.50 eV to 2.02 eV. The frequency range of 20 Hz to 20 MHz has been used for the dielectric evaluation, where dielectric parameters such as AC conductivity, tan loss, and dielectric constant are examined. When the frequency of the applied AC field rises, the AC conductivity increases, while the dielectric constant and tan loss constantly decrease. The pseudocapacitive behavior revealed by the CV curves shows that specific capacitance values (Cs) are low at high scan rates, whereas they are high at low scan rates. At the low scan rate of 10 mV s⁻¹, a maximum specific capacitance of 244.4 F g⁻¹ has been attained at x = 0.75. Nickel-doped cobalt ferrite electrodes have remarkable electrochemical characteristics that make them a promising option for pseudocapacitor applications.
Keywords: lattice parameters, crystallite size, pseudo capacitor, band gap, magnetic material, energy band gap
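The specific-capacitance values quoted above are typically obtained by integrating the CV curve; the sketch below shows that standard calculation, Cs = ∫|I| dV / (2 m ν ΔV), with a hypothetical current-voltage data file standing in for the measured voltammogram.

```python
# Sketch: specific capacitance from a cyclic voltammogram,
# Cs = (integral of |I| over the potential sweep) / (2 * m * scan_rate * delta_V).
import numpy as np

def specific_capacitance(potential_V: np.ndarray, current_A: np.ndarray,
                         mass_g: float, scan_rate_V_per_s: float) -> float:
    """Return Cs in F/g for one full CV cycle (anodic + cathodic sweeps)."""
    dV = np.abs(np.diff(potential_V))                           # |dV| handles the reverse sweep
    i_mid = 0.5 * (np.abs(current_A[:-1]) + np.abs(current_A[1:]))
    charge = np.sum(i_mid * dV)                                 # trapezoidal integral of |I| dV
    delta_v = potential_V.max() - potential_V.min()             # potential window
    return charge / (2.0 * mass_g * scan_rate_V_per_s * delta_v)

# Hypothetical usage for an electrode at 10 mV/s (placeholder file and electrode mass):
# V, I = np.loadtxt("cv_x075_10mVs.txt", unpack=True)
# print("Cs ~", round(specific_capacitance(V, I, mass_g=0.002, scan_rate_V_per_s=0.010), 1), "F/g")
```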
Procedia PDF Downloads 24
5400 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection
Authors: Devadrita Dey Sarkar
Abstract:
Despite the many technologic advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. This abstract presents a computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms, which employs features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed for assisting physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as a part of PACS. For example, the package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. In order to assist in the differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.
Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (region of suspicion)
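A minimal sketch of the classification pipeline described above, assuming scikit-learn: FastICA extracts independent components from flattened ROS patches and a small feed-forward network classifies them. The patch array, labels, and loader are hypothetical placeholders, not the study's mammography data.

```python
# Sketch: ICA feature extraction + neural-network classification of regions of suspicion (ROS).
# `patches` is a hypothetical (n_samples, n_pixels) array of flattened ROS patches and
# `labels` a binary vector (1 = suspicious, 0 = normal).
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

pipeline = make_pipeline(
    StandardScaler(),
    FastICA(n_components=20, max_iter=1000, random_state=0),   # ICA-based feature extraction
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
)

# Hypothetical usage:
# patches, labels = load_ros_patches()          # placeholder loader, not a real API
# scores = cross_val_score(pipeline, patches, labels, cv=5, scoring="roc_auc")
# print("AUC:", scores.mean().round(3))
```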
Procedia PDF Downloads 457
5399 Inferring the Ecological Quality of Seagrass Beds from Using Composition and Configuration Indices
Authors: Fabrice Houngnandan, Celia Fery, Thomas Bockel, Julie Deter
Abstract:
Making water cleaner and stopping global biodiversity loss require indices to measure changes and evaluate the achievement of objectives. The endemic and protected seagrass species Posidonia oceanica is a biological indicator used to monitor the ecological quality of marine Mediterranean waters. One ecosystem index (EBQI), two biotic indices (PREI, Bipo), and several landscape indices, which measure the composition and configuration of the P. oceanica seagrass at the population scale, have been developed. While the former are measured at monitoring sites, the landscape indices can be calculated for the entire seabed covered by this ecosystem. The present work aims to investigate the link between these indices and the best scale to use in order to maximize this link. We used data collected between 2014 and 2019 along the French Mediterranean coastline to calculate the EBQI, PREI, and Bipo at 100 sites. From the P. oceanica seagrass distribution map, configuration and composition indices around these different sites were determined for 6 different grid sizes (100 m x 100 m to 1000 m x 1000 m). Correlation analyses were first used to find the grid size presenting the strongest and most significant link between the different types of indices. Finally, several models were compared on the basis of various metrics to identify the one that best explains the nature of the link between these indices. Our results showed a strong and significant link between the biotic indices and the best correlations between biotic and landscape indices within the 600 m x 600 m grid cells. These results show that landscape indices can be used to monitor the health of seagrass beds at a large scale.
Keywords: ecological indicators, decline, conservation, submerged aquatic vegetation
Procedia PDF Downloads 135
5398 An Event-Related Potentials Study on the Processing of English Subjunctive Mood by Chinese ESL Learners
Authors: Yan Huang
Abstract:
The event-related potentials (ERP) technique helps researchers make continuous measurements of the whole process of language comprehension, with excellent temporal resolution at the level of milliseconds. Research on sentence processing has developed from the behavioral level to the neuropsychological level, which has brought about a variety of sentence processing theories and models. However, the applicability of these models to L2 learners is still under debate. Therefore, the present study aims to investigate the neural mechanisms underlying English subjunctive mood processing by Chinese ESL learners. To this end, English subject clauses with subjunctive moods are used as the stimuli, all of which follow the same syntactic structure, “It is + adjective + that … + (should) do + …”. Besides, in order to examine the role that language proficiency plays in L2 processing, this research deals with two groups of Chinese ESL learners (18 males and 22 females, mean age = 21.68), namely a high proficiency group (Group H) and a low proficiency group (Group L). Finally, the behavioral and neurophysiological data analysis reveals the following findings: 1) syntax and semantics interact with each other in the second phase (300-500 ms) of sentence processing, which is partially in line with the Three-phase Sentence Model; 2) language proficiency does affect L2 processing. Specifically, for Group H, it is syntactic processing that plays the dominant role in sentence processing, while for Group L, semantic processing also affects syntactic parsing during the third phase of sentence processing (500-700 ms). Besides, Group H, compared to Group L, demonstrates a richer, more native-like ERP pattern, which further demonstrates the role of language proficiency in L2 processing. Based on the research findings, this paper also provides some insights for L2 pedagogy as well as L2 proficiency assessment.
Keywords: Chinese ESL learners, English subjunctive mood, ERPs, L2 processing
Procedia PDF Downloads 133
5397 Effects of Different Meteorological Variables on Reference Evapotranspiration Modeling: Application of Principal Component Analysis
Authors: Akinola Ikudayisi, Josiah Adeyemo
Abstract:
The correct estimation of reference evapotranspiration (ETₒ) is required for effective irrigation water resources planning and management. However, there are several variables that must be considered while estimating and modeling ETₒ. This study therefore performs a multivariate analysis of the correlated variables involved in the estimation and modeling of ETₒ at the Vaalharts irrigation scheme (VIS) in South Africa using the Principal Component Analysis (PCA) technique. Weather and meteorological data between 1994 and 2014 were obtained from both the South African Weather Service (SAWS) and the Agricultural Research Council (ARC) in South Africa for this study. Average monthly data of minimum and maximum temperature (°C), rainfall (mm), relative humidity (%), and wind speed (m/s) were the inputs to the PCA-based model, while ETₒ is the output. The PCA technique was adopted to extract the most important information from the dataset and also to analyze the relationship between the five variables and ETₒ. This is to determine the most significant variables affecting ETₒ estimation at the VIS. From the model performances, two principal components with a variance of 82.7% were retained after the eigenvector extraction. The results of the two principal components were compared, and the model output shows that minimum temperature, maximum temperature, and wind speed are the most important variables in ETₒ estimation and modeling at the VIS. In other words, ETₒ increases with temperature and wind speed. Other variables, such as rainfall and relative humidity, are less important and cannot be used to provide enough information about ETₒ estimation at the VIS. The outcome of this study has helped to reduce the input variable dimensionality from five to the three most significant variables in ETₒ modelling at the VIS, South Africa.
Keywords: irrigation, principal component analysis, reference evapotranspiration, Vaalharts
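A minimal sketch of the PCA step described above with scikit-learn: standardize the five meteorological inputs, extract the components, and inspect the explained variance and loadings to see which variables dominate. The CSV name and column labels are assumptions for illustration, not the study's actual files.

```python
# Sketch: PCA on the five meteorological predictors of ETo.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

COLS = ["tmin_C", "tmax_C", "rainfall_mm", "rel_humidity_pct", "wind_speed_ms"]  # assumed column names

def pca_summary(df: pd.DataFrame, n_components: int = 2) -> pd.DataFrame:
    """Standardize the five predictors, fit PCA, and return the loading matrix."""
    X = StandardScaler().fit_transform(df[COLS])          # PCA requires standardized inputs
    pca = PCA(n_components=n_components).fit(X)
    print("explained variance ratio:", pca.explained_variance_ratio_.round(3))
    return pd.DataFrame(pca.components_.T, index=COLS,
                        columns=[f"PC{i + 1}" for i in range(n_components)])

# Hypothetical usage with a CSV of average monthly values (placeholder file name):
# data = pd.read_csv("vaalharts_monthly_1994_2014.csv")
# print(pca_summary(data).round(2))   # large |loading| = influential variable
```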
Procedia PDF Downloads 261
5396 Modelling Volatility of Cryptocurrencies: Evidence from GARCH Family of Models with Skewed Error Innovation Distributions
Authors: Timothy Kayode Samson, Adedoyin Isola Lawal
Abstract:
The past five years have shown a sharp increase in public interest in the crypto market, with its market capitalization growing from $100 billion in June 2017 to $2158.42 billion on April 5, 2022. Despite the extreme nature of cryptocurrency volatility, the use of skewed error innovation distributions in modelling the volatility behaviour of these digital currencies has not been given much research attention. Hence, this study models the volatility of the 5 largest cryptocurrencies by market capitalization (Bitcoin, Ethereum, Tether, Binance coin, and USD Coin) using four variants of GARCH models (GJR-GARCH, sGARCH, EGARCH, and APARCH) estimated with three skewed error innovation distributions (the skewed normal, skewed Student-t, and skewed generalized error innovation distributions). Daily closing prices of these currencies were obtained from the Yahoo Finance website. Findings reveal that Binance coin reported higher mean returns compared to the other digital currencies, while the skewness indicates that Binance coin, Tether, and USD Coin increased more than they decreased in value within the period of study. For both Bitcoin and Ethereum, negative skewness was obtained, meaning that within the period of study, the returns of these currencies decreased more than they increased in value. Returns from these cryptocurrencies were found to be stationary but not normally distributed, with evidence of the ARCH effect. The skewness parameters in all best forecasting models were significant (p<.05), justifying the use of skewed error innovation distributions with fatter tails than the normal, Student-t, and generalized error innovation distributions. For Binance coin, EGARCH-sstd outperformed the other volatility models, while for Bitcoin, Ethereum, Tether, and USD Coin, the best forecasting models were EGARCH-sstd, APARCH-sstd, EGARCH-sged, and GJR-GARCH-sstd, respectively. This suggests the superiority of the skewed Student-t distribution and the skewed generalized error distribution over the skewed normal distribution.
Keywords: skewed generalized error distribution, skewed normal distribution, skewed Student-t distribution, APARCH, EGARCH, sGARCH, GJR-GARCH
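One way to reproduce this kind of fit in Python is with the arch package, which supports EGARCH/GJR-type volatility equations and a skewed Student-t error distribution; the sketch below is illustrative only, the ticker and data source (yfinance) are assumptions, and the skewed normal and skewed GED cases would need a different library or a custom likelihood.

```python
# Sketch: EGARCH(1,1) and GJR-GARCH(1,1) with skewed Student-t innovations on daily Bitcoin log returns.
# Assumes the `arch` and `yfinance` packages; data source and date range are illustrative.
import numpy as np
import yfinance as yf
from arch import arch_model

prices = yf.download("BTC-USD", start="2017-06-01", end="2022-04-05")["Close"].squeeze().dropna()
returns = 100 * np.log(prices).diff().dropna()            # percentage log returns

egarch = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1, dist="skewt").fit(disp="off")
print(egarch.summary())                                   # inspect the skewness (lambda) parameter

# GJR-GARCH via the asymmetry term (o=1) of the standard GARCH recursion:
gjr = arch_model(returns, mean="Constant", vol="GARCH", p=1, o=1, q=1, dist="skewt").fit(disp="off")
print("EGARCH BIC:", round(egarch.bic, 1), " GJR-GARCH BIC:", round(gjr.bic, 1))
```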
Procedia PDF Downloads 124
5395 Mechanical Properties of Hybrid Ti6Al4V Part with Wrought Alloy to Powder-Bed Additive Manufactured Interface
Authors: Amnon Shirizly, Ohad Dolev
Abstract:
In recent years, the implementation and use of metal Additive Manufacturing (AM) parts have increased. As a result, the demand for bigger parts rises, along with the desire to reduce their production cost. Generally, in powder-bed Additive Manufacturing technology, the part size is limited by the machine build volume. In order to overcome this limitation, parts can be built in one or more machine operations and then mechanically joined or welded together. An alternative option could be the production of a wrought part with the AM structure built onto it (mainly to reduce costs). In both cases, the mechanical properties of the interface have to be defined and recognized. In the current study, the authors introduce guidelines on how to examine the interface between a wrought alloy and powder-bed AM. The mechanical and metallurgical properties of the Ti6Al4V materials (wrought alloy and powder-bed AM) and their hybrid interface were examined. The mechanical properties were obtained from tensile test bars in the build direction and fracture toughness samples in various orientations. The hybrid specimens were built onto a wrought Ti6Al4V start-plate. The standard fracture toughness (CT25) samples and hybrid tensile specimens were heat treated and milled as a post-process to final dimensions. In this study, the tensile and fracture toughness results, supported by metallurgical observations, will be introduced and discussed. It will be shown that the hybrid approach of building powder-bed AM onto wrought material expands the current limitations of this future manufacturing technology.
Keywords: additive manufacturing, hybrid, fracture-toughness, powder bed
Procedia PDF Downloads 108
5394 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine EPB TBM: A Case Study L3 Guadalajara Metro Line (Mexico)
Authors: Silvia Arrate, Waldo Salud, Eloy París
Abstract:
The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops, and keeping the optimum stock of spare parts during the course of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The rapid evolution of data science in recent years makes it possible to apply it to the analysis of the key and most critical parameters related to the machinery, with the purpose of knowing how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters such as torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed in the excavation, in order to determine their effectiveness in terms of predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of the wear of cutting head elements, compared to purely empirical methods, allow the early detection of possible damage to cutting tools, which is reflected in the optimization of excavation performance and a significant improvement in costs and deadlines.
Keywords: cutting tools, data science, prediction, TBM, wear
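As a reference for the Specific Energy approach mentioned above, the sketch below computes a widely used Teale-style specific energy of excavation from thrust, torque, rotation speed, and penetration rate; the exact definition used on the Guadalajara project may differ, and the operating-point numbers shown are placeholders, not project data.

```python
# Sketch: Teale-style specific energy of excavation for an EPB TBM,
# SE = F / A + (2 * pi * N * T) / (A * ROP)
# where F = total thrust/contact force, A = excavated face area, N = cutterhead speed,
# T = torque, and ROP = rate of penetration (penetration per revolution * N).
import math

def specific_energy(thrust_kN: float, torque_kNm: float, rpm: float,
                    penetration_mm_per_rev: float, diameter_m: float) -> float:
    """Return specific energy in MJ/m^3."""
    area_m2 = math.pi * (diameter_m / 2.0) ** 2
    rop_m_per_min = penetration_mm_per_rev / 1000.0 * rpm             # advance rate
    thrust_term = thrust_kN / area_m2                                  # kN/m^2 = kJ/m^3
    rotary_term = (2.0 * math.pi * rpm * torque_kNm) / (area_m2 * rop_m_per_min)
    return (thrust_term + rotary_term) / 1000.0                        # kJ/m^3 -> MJ/m^3

# Placeholder operating point (illustrative values only):
print(round(specific_energy(thrust_kN=15000, torque_kNm=3000, rpm=1.5,
                            penetration_mm_per_rev=12, diameter_m=6.5), 1), "MJ/m^3")
```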
Procedia PDF Downloads 51
5393 Factors That Contribute to Noise Induced Hearing Loss Amongst Employees at the Platinum Mine in Limpopo Province, South Africa
Authors: Livhuwani Muthelo, R. N. Malema, T. M. Mothiba
Abstract:
Long-term exposure to excessive noise in the mining industry increases the risk of noise-induced hearing loss, with consequences for employees' health, productivity, and overall quality of life. Objective: The objective of this study was to investigate the factors that contribute to noise-induced hearing loss amongst employees at the platinum mine in the Limpopo Province, South Africa. Study method: A qualitative, phenomenological, exploratory, descriptive, contextual design was applied in order to explore and describe the contributory factors. Purposive non-probability sampling was used to select 10 male employees from four mine shafts who were diagnosed with NIHL in the year 2014, and 10 managers who were involved in a Hearing Conservation Programme. The data were collected using semi-structured one-on-one interviews. A qualitative data analysis following Tesch's approach was conducted. Results: The following themes emerged: experiences and challenges faced by employees in the work environment, hearing protective device factors, and management and leadership factors. Hearing loss was caused by partial application of the guidelines, policies, and procedures from the Department of Minerals and Energy. Conclusion: The study results indicate that although there are guidelines, policies, and procedures available, failure in the implementation of one element will affect the development and maintenance of employees' hearing. It is recommended that mine management apply the guidelines, policies, and procedures and promptly repair broken hearing protective devices.
Keywords: employees, factors, noise induced hearing loss, noise exposure
Procedia PDF Downloads 129
5392 The Economic Burden of Breast Cancer on Women in Nigeria: Implication for Socio-Economic Development
Authors: Tolulope Allo, Mofoluwake P. Ajayi, Adenike E. Idowu, Emmanuel O. Amoo, Fadeke Esther Olu-Owolabi
Abstract:
Breast cancer, which was more prevalent in Europe and America in the past, is gradually being mirrored across the world today, with a greater economic burden on low and middle income countries (LMCs). Breast cancer is the most common cancer among women globally, and current studies have shown that a woman with a diagnosis of breast cancer dies every thirteen minutes. The economic cost of breast cancer is overwhelming, particularly for developing economies. While it causes billions of dollars in losses of national income, it pushes millions of people below the poverty line. This study examined the economic burden of breast cancer on Nigerian women, its impact on their standard of living, and its effects on Nigeria's socio-economic development. The study adopts a qualitative research approach using the in-depth interview technique to elicit valuable information from respondents with cancer experience from the southern part of Nigeria. Respondents were women of reproductive age (15-49 years) who have experienced and survived cancer, as well as those currently receiving treatment. Excerpts from the interviews revealed that the cost of treatment is one of the major factors contributing to the late presentation of breast cancer among women, as many of them could not afford to pay for their own treatment. The study also revealed that many women prefer to explore other options, such as herbal treatments and spiritual consultations, which are less expensive and more affordable. The study therefore concludes that breast cancer diagnosis and treatment should be subsidized by the government in order to facilitate easy access and affordability, thereby promoting early detection and reducing the economic burden of treatment on women.
Keywords: breast cancer, development, economic burden, women
Procedia PDF Downloads 3635391 Understanding Relationships between Listening to Music and Pronunciation Learning: An Investigation Based upon Japanese EFL Learners' Self-Evaluation
Authors: Hirokatsu Kawashima
Abstract:
In an attempt to elucidate the relationships between listening to music and pronunciation learning, a classroom-based investigation was conducted with Japanese EFL learners (n=45). The subjects were instructed to listen to English songs they liked on YouTube, paying particular attention to phonologically similar vowel and consonant minimal pair words (e.g., live and leave). This activity, which included taking notes, was carried out regularly in the classroom, and the same kind of task was assigned as homework to reinforce the in-class activity. These activities lasted eight weeks, after which the program was evaluated through learners' self-evaluation on a 9-point scale (1: the lowest and 9: the highest). The main questions for this evaluation included 1) how good the learners had originally been at pronouncing vowel and consonant minimal pair words, 2) how often they had listened to songs suited to pronouncing vowel and consonant minimal pair words, 3) how frequently they had moved their mouths to vowel and consonant minimal pair words in English songs, and 4) how much they thought the program would support and enhance their pronunciation learning of phonologically similar vowel and consonant minimal pair words. It was found, for example, A) that the evaluation of the program was by no means low (Mean: 6.51, SD: 1.23), suggesting that listening to music may support and enhance pronunciation learning, and B) that listening to consonant minimal pair words in English songs and moving the mouth to them were more strongly related to the program's evaluation (r = .69, p = .00 and r = .55, p = .00, respectively) than listening to vowel minimal pair words and moving the mouth to them (r = .45, p = .00 and r = .39, p = .01, respectively).Keywords: minimal pair, music, pronunciation, song
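The abstract does not specify how these relationships were computed; assuming standard descriptive statistics and Pearson correlations over the 9-point self-evaluation ratings, a minimal sketch of such an analysis might look like the following (all data, sample values, and variable names are hypothetical placeholders, not the study's data):

```python
# Hedged sketch: descriptive statistics and Pearson correlations for
# 9-point self-evaluation ratings. Simulated data stand in for the
# actual questionnaire responses, which are not available here.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 45  # number of learners reported in the abstract

# Hypothetical 9-point ratings (1 = lowest, 9 = highest)
program_eval = rng.integers(4, 10, size=n)       # overall evaluation of the program
listen_consonant = rng.integers(1, 10, size=n)   # frequency of listening to consonant minimal pairs
mouth_consonant = rng.integers(1, 10, size=n)    # frequency of mouthing consonant minimal pairs

# Descriptive statistics for the program evaluation (cf. Mean 6.51, SD 1.23)
print(f"Mean = {program_eval.mean():.2f}, SD = {program_eval.std(ddof=1):.2f}")

# Pearson correlations between activity frequency and program evaluation
for label, ratings in [("listening (consonants)", listen_consonant),
                       ("mouthing (consonants)", mouth_consonant)]:
    r, p = pearsonr(ratings, program_eval)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")
```

With the real questionnaire responses in place of the simulated arrays, the same calls would yield the kind of mean, SD, and correlation coefficients reported above.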
Procedia PDF Downloads 3215390 A Review of the Drawbacks of Current Fixed Connection Façade Systems, Non-Structural Standards, and Ways of Integrating Movable Façade Technology into Buildings
Abstract:
Façade panels of various shapes, weights, and connection types usually act as a barrier between the indoor and outdoor environments. They also play a major role in enhancing the aesthetics of building structures. They are attached by different types of connections to the primary structure or, in double-skin façades, to the inner skin. Buildings designed to withstand seismic shocks have been undergoing a critical reappraisal in recent years, with the emphasis shifting from ‘strength’ to ‘performance’. Performance-based design and analysis have found their way into the research, development, and practice of earthquake engineering, particularly after the 1994 Northridge and 1995 Kobe earthquakes. To date, work on the design performance of façades as non-structural elements has focused mainly on evaluating the damage sustained by façade frames with fixed connections rather than movable ones. This paper reviews current design standards for building structures, including the performance of structural and non-structural components under earthquake excitation, in order to survey and evaluate the damage assessment and behaviour of various façade systems during seismic events. The proposed solutions for each façade system are discussed case by case to evaluate their potential for incorporation with newly designed connections. Finally, double-skin façade systems can potentially be combined with movable façade technology, although other glazing systems would require minor to major design changes before being integrated into such a system.Keywords: building performance, earthquake engineering, glazing system, movable façade technology
Procedia PDF Downloads 5515389 Genotypic Identification of Oral Bacteria Using 16S rRNA in Children with and without Early Childhood Caries in Kelantan, Malaysia
Authors: Zuliani Mahmood, Thirumulu Ponnuraj Kannan, Yean Yean Chan, Salahddin A. Al-Hudhairy
Abstract:
Caries is the most common childhood disease; it develops due to disturbances in the physiological equilibrium of dental plaque, resulting in demineralization of tooth structures. Plaque and dentine samples were collected from three different tooth surfaces representing caries progression (intact, over the carious lesion, and dentine) in children with early childhood caries (ECC, n=36). In caries-free (CF) children, plaque samples were collected from sound tooth surfaces at baseline and after one year (n=12). Genomic DNA was extracted from all samples and subjected to 16S rRNA PCR amplification. The amplified products were cloned into the pCR®2.1-TOPO® vector. Five randomly selected positive clones from each surface were sent for sequencing. The bacterial clones were identified using BLAST against the GenBank database. In the ECC group, the frequency of Lactobacillus sp. detected was significantly higher on the dentine surface (p = 0.031) than over the cavitated lesion. The bacterium detected most frequently on intact surfaces was Fusobacterium nucleatum subsp. polymorphum (33.3%), while Streptococcus mutans was detected over the carious lesions and on the dentine surfaces at frequencies of 33.3% and 52.7%, respectively. Fusobacterium nucleatum subsp. polymorphum was also the most frequently detected species in the CF group (41.6%). Follow-up at the end of one year showed that Corynebacterium matruchotii was detected most frequently in those who remained caries free (16.6%), while Porphyromonas catoniae was most frequent in those who developed caries (25%). In conclusion, Streptococcus mutans and Porphyromonas catoniae are strongly associated with caries progression, while Lactobacillus sp. is restricted to deep carious lesions. Fusobacterium nucleatum subsp. polymorphum and Corynebacterium matruchotii may play a role in sustaining the healthy equilibrium of dental plaque. These identified bacteria show promise as potential biomarkers for diagnosis, which could help in the management of dental caries in children.Keywords: early childhood caries, genotypic identification, oral bacteria, 16S rRNA
Procedia PDF Downloads 278
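The abstract identifies clones by BLAST searches of 16S rRNA sequences against GenBank but does not describe the tooling used; as one possible illustration only, the sketch below uses Biopython's online BLAST interface to assign a top GenBank hit to a single sequenced clone. The FASTA file name is a hypothetical placeholder, and this is an assumption about the workflow, not the authors' actual pipeline.

```python
# Hedged sketch: identify one cloned 16S rRNA sequence by BLAST against
# GenBank's 'nt' database using Biopython (requires an internet connection).
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Read one sequenced clone (Sanger read exported as FASTA); file name is hypothetical
clone = SeqIO.read("clone_01.fasta", "fasta")

# Submit a nucleotide BLAST search against the NCBI nucleotide database
result_handle = NCBIWWW.qblast("blastn", "nt", str(clone.seq))
record = NCBIXML.read(result_handle)

# Report the top hit and its percent identity as a proxy for species assignment
if record.alignments:
    top = record.alignments[0]
    hsp = top.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"Top hit: {top.title}")
    print(f"E-value: {hsp.expect:.2e}, identity: {identity:.1f}%")
else:
    print("No significant hits found.")
```

In a workflow of this kind, each of the five clones per surface would be submitted in the same way and the resulting assignments tabulated per tooth surface to obtain detection frequencies such as those reported in the abstract.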