Search results for: neural tube defects
942 Design, Construction and Validation of a Simple, Low-Cost Phi Meter
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
The use of a phi meter allows for definition of the equivalence ratio during a fire test. Previous phi meter designs have used expensive catalysts and had restricted portability due to the large furnace and the requirement for pure oxygen. The new design of the phi meter does not require the use of a catalyst. The furnace design was based on the existing micro-scale combustion calorimetry (MCC) furnace, and the operating conditions were based on the secondary oxidizer furnace used in the steady state tube furnace (SSTF). Preliminary tests were conducted to study the effects of varying furnace temperatures on combustion efficiency. The SSTF was chosen to validate the phi meter measurements as it can both pre-set and independently quantify the equivalence ratio during a test. The phi meter data were in agreement with those obtained on the SSTF. The design was also validated by comparing CO2 yields obtained from the SSTF oxidizer with those obtained by the phi meter. The phi meter designed and constructed in this work was proven to work effectively on a bench scale. It was then used to measure the equivalence ratio in a series of large-scale ISO 9705 tests for numerous fire conditions. The materials used were a range of non-homogeneous materials such as polyurethane. The measurements corresponded accurately to the data collected, showing the novel design can be used from bench- to large-scale tests to measure the equivalence ratio. This cheaper, more portable, safer and easier-to-use phi meter design will enable more widespread use and the ability to quantify the fire conditions of tests, allowing for a better understanding of flammability and smoke toxicity.
Keywords: phi meter, smoke toxicity, fire condition, ISO 9705, novel equipment
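For readers unfamiliar with the quantity being measured, the equivalence ratio can be sketched as a simple ratio calculation. This is the generic definition, not the authors' instrument calibration:

```python
def equivalence_ratio(fuel_ox_actual: float, fuel_ox_stoich: float) -> float:
    """Equivalence ratio phi = (fuel/oxidizer)_actual / (fuel/oxidizer)_stoichiometric.

    phi < 1: fuel-lean (well-ventilated fire); phi = 1: stoichiometric;
    phi > 1: fuel-rich (under-ventilated), associated with higher smoke toxicity.
    """
    return fuel_ox_actual / fuel_ox_stoich

# Twice the stoichiometric fuel-to-oxygen ratio -> under-ventilated conditions
print(equivalence_ratio(0.5, 0.25))  # 2.0
```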
Procedia PDF Downloads 102
941 Machine Learning Techniques for Estimating Ground Motion Parameters
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this study is to evaluate the advantages and disadvantages of various machine learning techniques in forecasting ground-motion intensity measures given source characteristics, source-to-site distance, and local site condition. Intensity measures such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Estimating these variables for future earthquake events is a key step in seismic hazard assessment and potentially subsequent risk assessment of different types of structures. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques, such as Artificial Neural Networks, Random Forests, and Support Vector Machines, as statistical methods in ground motion prediction. The algorithms are adjusted to quantify the event-to-event and site-to-site variability of the ground motions by implementing them as random effects in the proposed models to reduce the aleatory uncertainty. All the algorithms are trained using a selected database of 4,528 ground motions, including 376 seismic events with magnitude 3 to 5.8, recorded over the hypocentral distance range of 4 to 500 km in Oklahoma, Kansas, and Texas since 2005. This database was considered because of the recent increase in the seismicity rate of these states, attributed to petroleum production and wastewater disposal activities, which necessitates further investigation of the ground motion models developed for these states.
The accuracy of the models in predicting intensity measures, the generalization capability of the models for future data, and the usability of the models are discussed in the evaluation process. The results indicate the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data is available, all the alternative algorithms tend to provide more accurate estimates compared to the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data is available.
Keywords: artificial neural network, ground-motion models, machine learning, random forest, support vector machine
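As a rough illustration of the approach described above, a Random Forest can be fitted to magnitude, distance, and site-condition features. The data here are synthetic with made-up coefficients, not the study's 4,528-record database:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
mag = rng.uniform(3.0, 5.8, n)        # moment magnitude
dist = rng.uniform(4.0, 500.0, n)     # hypocentral distance, km
vs30 = rng.uniform(200.0, 800.0, n)   # site-condition proxy, m/s

# Synthetic ln(PGA): magnitude scaling + geometric attenuation + site term + noise
ln_pga = 1.2 * mag - 1.5 * np.log(dist) - 0.002 * vs30 + rng.normal(0.0, 0.3, n)

X = np.column_stack([mag, np.log(dist), vs30])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, ln_pga)

# The fitted model should reproduce attenuation with distance at fixed magnitude/site
near = model.predict([[5.5, np.log(20.0), 400.0]])[0]
far = model.predict([[5.5, np.log(300.0), 400.0]])[0]
print(near > far)  # True: predicted shaking attenuates with distance
```

Note that, unlike the study's models, this sketch has no random-effects terms for event-to-event or site-to-site variability.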
Procedia PDF Downloads 121
940 Simulation of Ammonia-Water Two Phase Flow in Bubble Pump
Authors: Jemai Rabeb, Benhmidene Ali, Hidouri Khaoula, Chaouachi Bechir
Abstract:
The diffusion-absorption refrigeration cycle consists of a generator bubble pump, an absorber, an evaporator and a condenser, and usually operates with ammonia/water/hydrogen or helium as the working fluid. The aim of this paper is to study the stability problem of a bubble pump, since instability can cause a reduction in bubble pump efficiency. To achieve this goal, we simulated the behaviour of two-phase flow in a bubble pump using a drift flux model. The equations of the drift flux model are formulated for the transitional regime, non-adiabatic conditions and thermodynamic equilibrium between the liquid and vapour phases. Solving these equations yields the void fraction, the liquid and vapour velocities, and the pressure and mixing enthalpy. An ammonia-water mixture is used as the working fluid, with an ammonia mass fraction of 0.6 at the inlet. The present simulation is conducted for heating fluxes of 2 kW/m² to 5 kW/m², a bubble pump tube length of 1 m and an inner diameter of 2.5 mm. Simulation results reveal oscillations of the vapour and liquid velocities over time. The oscillations decrease with time and with heat flux. After sufficient time a steady state is established, characterised by constant liquid velocity and void fraction values. The vapour velocity, however, does not show the same behaviour: it continues to increase in the steady state. Pressure drop oscillations are also studied.
Keywords: bubble pump, drift flux model, instability, simulation
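A common closure used in drift flux modelling is the Zuber-Findlay relation linking void fraction to the superficial velocities. The sketch below uses illustrative parameter values, not the paper's calibrated ones:

```python
def void_fraction(j_g: float, j_l: float, c0: float = 1.2, v_gj: float = 0.25) -> float:
    """Zuber-Findlay drift-flux relation: alpha = j_g / (C0*(j_g + j_l) + v_gj).

    j_g, j_l : superficial gas/liquid velocities (m/s)
    c0       : distribution parameter (assumed value)
    v_gj     : drift velocity (m/s, assumed value)
    """
    return j_g / (c0 * (j_g + j_l) + v_gj)

alpha = void_fraction(0.5, 1.0)
print(round(alpha, 3))  # 0.244
```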
Procedia PDF Downloads 260
939 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis
Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara
Abstract:
Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two use the natural logarithm of the CO₂ and H₂S concentrations (mmol/mol), and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known, previously developed gas geothermometers, was statistically evaluated using an external database to avoid bias.
The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy
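The calibration idea, an ANN mapping log gas ratios to bottomhole temperature, can be sketched with a small feed-forward network on synthetic data. The authors used a Levenberg-Marquardt training scheme, which scikit-learn does not provide, so L-BFGS is used here as a stand-in; all ranges and coefficients are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 300
ln_co2_h2 = rng.uniform(2.0, 8.0, n)   # ln(CO2/H2), illustrative range
ln_h2s_h2 = rng.uniform(0.0, 5.0, n)   # ln(H2S/H2), illustrative range

# Synthetic calibration set: BHT falls with the log gas ratios (made-up coefficients)
bht = 330.0 - 15.0 * ln_co2_h2 - 8.0 * ln_h2s_h2 + rng.normal(0.0, 5.0, n)

X = np.column_stack([ln_co2_h2, ln_h2s_h2])
y_mean, y_std = bht.mean(), bht.std()

# One hidden layer of 10 neurons; target standardized for stable optimization
ann = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, (bht - y_mean) / y_std)

bht_ann = ann.predict(X) * y_std + y_mean
rmse = float(np.sqrt(np.mean((bht_ann - bht) ** 2)))
print(round(rmse, 1))  # should sit near the 5 degC noise floor
```

The study additionally searched over roughly 2,080 architectures; this sketch fixes a single architecture for brevity.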
Procedia PDF Downloads 350
938 Optimization of Bifurcation Performance on Pneumatic Branched Networks in Next Generation Soft Robots
Authors: Van-Thanh Ho, Hyoungsoon Lee, Jaiyoung Ryu
Abstract:
Efficient pressure distribution within soft robotic systems, specifically to the pneumatic artificial muscle (PAM) regions, is essential to minimize energy consumption. This optimization involves adjusting reservoir pressure, pipe diameter, and branching network layout to reduce flow speed and pressure drop while enhancing flow efficiency. The outcome of this optimization is a lightweight power source and reduced mechanical impedance, enabling extended wear and movement. To achieve this, a branching network system was created by combining pipe components and intricate cross-sectional area variations, employing the principle of minimal work based on a complete virtual human exosuit. The results indicate that gradually decreasing the cross-sectional area of the branching network reduces velocity and enhances momentum compensation, preventing flow disturbances at separation regions. These optimized designs achieve uniform velocity distribution (uniformity index > 94%) prior to entering the connection pipe, with a pressure drop of less than 5%. The design must also consider the length-to-diameter ratio for fluid dynamic performance and production cost. This approach can be utilized to create a comprehensive PAM system, integrating well-designed tube networks and complex pneumatic models.
Keywords: pneumatic artificial muscles, pipe networks, pressure drop, compressible turbulent flow, uniformity flow, Murray's law
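The minimal-work principle invoked above is commonly expressed through Murray's law, which fixes daughter-branch diameters from the parent diameter; a minimal sketch:

```python
def daughter_diameter(parent_d: float, n_branches: int) -> float:
    """Murray's law for minimum-work branching: d_parent^3 = sum(d_daughter^3).

    For n identical daughter branches, d_daughter = d_parent / n**(1/3).
    """
    return parent_d / n_branches ** (1.0 / 3.0)

d = daughter_diameter(8.0, 2)  # 8 mm parent pipe splitting into two equal branches
print(round(d, 3))  # 6.35
```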
Procedia PDF Downloads 82
937 Analysis of Long-term Results After External Dacryocystorhinostomy Surgery in Patients Suffering from Diabetes Mellitus
Authors: N. Musayeva, N. Rustamova, N. Bagirov, S. Ibadov
Abstract:
Purpose: to analyze the long-term results of external dacryocystorhinostomy (DCR), which remains the preferred primary procedure in the surgical treatment of lacrimal duct obstruction in chronic dacryocystitis. Methodology: long-term results (after 3 years) of external DCR performed on 90 patients (90 eyes) with chronic dacryocystitis from 2018 to 2020 at the Azerbaijan National Center of Ophthalmology named after acad. Zarifa Aliyeva were evaluated. 15 of the patients were men and 75 were women. The average age was 45±3.2 years. Surgical operations were performed under local anesthesia. All patients had suffered from diabetes mellitus for more than 3 years. All patients underwent external DCR, and a silicone drainage tube was implanted. In the postoperative period (after 3 years), lacrimation, purulent discharge, and the condition of the scar at the operation site were assessed. Results: All patients were under observation for more than 18 months. Overall, the effectiveness of the surgical operation was 93.34%. Recurrence of disease was observed in 6 patients: in 3 patients (3.33%), the scar at the site of the operation was rough (not cosmetic), and in 3 patients (3.33%), the surgically formed anastomosis between the lacrimal sac and the nasal bone was obstructed by scar tissue. These patients were reoperated by transcanalicular laser DCR. Conclusion: Despite the long-term (more than a hundred years) use of external DCR, it remains one of the primary techniques in the surgery of chronic dacryocystitis. Due to the high success rate and good long-term results of DCR in the treatment of chronic dacryocystitis in patients suffering from diabetes mellitus, we recommend external DCR for this group of patients.
Keywords: chronic dacryocystitis, diabetes mellitus, external dacryocystorhinostomy, long-term results
Procedia PDF Downloads 64
936 Learning to Translate by Learning to Communicate to an Entailment Classifier
Authors: Szymon Rutkowski, Tomasz Korbak
Abstract:
We present a reinforcement-learning-based method of training neural machine translation models without parallel corpora. The standard encoder-decoder approach to machine translation suffers from two problems we aim to address. First, it needs parallel corpora, which are scarce, especially for low-resource languages. Second, its learning procedure lacks psychological plausibility: learning a foreign language is about learning to communicate useful information, not merely learning to transduce from one language’s 'encoding' to another. We instead pose the problem of learning to translate as learning a policy in a communication game between two agents: the translator and the classifier. The classifier is trained beforehand on a natural language inference task (determining the entailment relation between a premise and a hypothesis) in the target language. The translator produces a sequence of actions that correspond to generating translations of both the hypothesis and the premise, which are then passed to the classifier. The translator is rewarded for the classifier’s performance on determining entailment between the sentences translated into the classifier’s native language. The translator’s performance thus reflects its ability to communicate useful information to the classifier. In effect, we train a machine translation model without the need for parallel corpora altogether. While similar reinforcement learning formulations for zero-shot translation have been proposed before, there are a number of improvements we introduce. While prior research aimed at grounding the translation task in the physical world by evaluating agents on an image captioning task, we found that using a linguistic task is more sample-efficient. Natural language inference (also known as recognizing textual entailment) captures semantic properties of sentence pairs that are poorly correlated with semantic similarity, thus enforcing a basic understanding of the role played by compositionality.
It has been shown that models trained to recognize textual entailment produce high-quality general-purpose sentence embeddings transferrable to other tasks. We use the Stanford Natural Language Inference (SNLI) dataset as well as its analogous datasets for French (XNLI) and Polish (CDSCorpus). Textual entailment corpora can be obtained relatively easily for any language, which makes our approach more extensible to low-resource languages than traditional approaches based on parallel corpora. We evaluated a number of reinforcement learning algorithms (including policy gradients and actor-critic) to solve the problem of optimizing the translator’s policy and found that our attempts yield some promising improvements over previous approaches to reinforcement-learning-based zero-shot machine translation.
Keywords: agent-based language learning, low-resource translation, natural language inference, neural machine translation, reinforcement learning
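The reward scheme described above can be sketched as follows, with a toy stand-in for the pre-trained entailment classifier (in the actual setup this would be a trained NLI model in the target language):

```python
def translation_reward(classifier, premise_tgt: str, hypothesis_tgt: str,
                       gold_label: str) -> float:
    """Reward for the translator policy: 1 if the target-language entailment
    classifier recovers the gold relation from the translated premise and
    hypothesis, else 0. This scalar would drive a policy-gradient update."""
    return 1.0 if classifier(premise_tgt, hypothesis_tgt) == gold_label else 0.0

# Toy classifier: "entailment" iff the hypothesis is a substring of the premise
toy_clf = lambda p, h: "entailment" if h in p else "neutral"
print(translation_reward(toy_clf, "a dog runs in the park", "a dog runs",
                         "entailment"))  # 1.0
```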
Procedia PDF Downloads 127
935 Prioritization Ranking for Managing Moisture Problems in a Building
Authors: Sai Amulya Gollapalli, Dilip A. Patel, Parth Patel K., Lukman E. Mansuri
Abstract:
Accumulation of moisture is one of the most worrisome aspects of a building, yet architects and engineers tend to overlook it during the design and construction stages, and it can cause major failures in buildings. People avoid spending much money on waterproofing, and when the same mistake is repeated, little deep thinking is done. The quality of workmanship and construction is declining due to negligence. It is therefore important to analyse the water maintenance issues occurring in current buildings and provide a database of all the factors causing the defects. In this research, surveys were conducted with two waterproofing consultants, two client engineers, and two project managers. The survey was based on a matrix of the causes of water maintenance issues. Around 100 causes were identified and categorized into six groups: manpower, finance, method, management, environment, and material. In the matrices, the causes on the x-axis were matched with the causes on the y-axis, and a 3-point Likert scale was used to make a pairwise comparison between the causes in each cell. Matrices were evaluated for the main categories and for each category separately. A final ranking was produced from the resulting weights: 'cracks arising from various construction joints' ranked highest with a relative significance of 0.57, and 'usage of the material' ranked lowest with 0.03. Twelve defects due to water leakage were identified, and interviewees were asked to make pairwise comparisons of them as well to understand the priorities. Once the list of causes is obtained, prioritization is done as per the stratification analysis. This will benefit consultants and contractors, who will get a primary idea of which causes to focus on.
Keywords: water leakage, survey, causes, matrices, prioritization
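One simple way to turn such a pairwise comparison matrix into relative significance weights is to normalize row sums; the study's exact aggregation may differ, so treat this as a sketch with toy data:

```python
import numpy as np

def relative_weights(pairwise: np.ndarray) -> np.ndarray:
    """Relative significance of causes from a pairwise comparison matrix:
    row sums normalized to 1 (a simple scoring scheme)."""
    row = pairwise.sum(axis=1)
    return row / row.sum()

# Toy 3-cause matrix on a 3-point Likert scale (1 = less, 3 = more important)
m = np.array([[0, 3, 3],
              [1, 0, 2],
              [1, 2, 0]], dtype=float)
w = relative_weights(m)
print(w.round(2))  # highest relative significance goes to the first cause
```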
Procedia PDF Downloads 96
934 Navigating Neural Pathways to Success with Students on the Autism Spectrum
Authors: Panda Krouse
Abstract:
This work is a marriage of the science of Applied Behavior Analysis and an educator’s look at neuroscience. The focus is integrating what we know about the anatomy of the brain in autism with evidence-based practices in education. It is a bold attempt to present links between neurological research and the application of evidence-based practices in education; in researching this work, no articles making these connections were found. The areas of structural difference in the brain are aligned with evidence-based strategies. A brief literature review identifies how the identified areas affect overt behavior, which is what we, as educators, can see and measure. Obtaining further justification and validation of our educational practices from a second scientific field is significant for continued improvement in intervention for students on the autism spectrum.
Keywords: autism, evidence-based practices, neurological differences, education intervention
Procedia PDF Downloads 62
933 Effect of the Endotracheal Care Nursing Guideline Utilization on the Incidence of Endotracheal Tube Displacement, Oxygen Deficiency after Extubation, Re-intubation, and Nurses' Satisfaction
Authors: Rabeab Khunpukdee, Aranya Sukchoui, Nonluk Somgit, Chitima Bunnaul
Abstract:
Endotracheal tube displacement is a major life-threatening risk among critically ill patients, and a standard nursing protocol is needed to minimize this risk and improve clinical outcomes. This study evaluated the effectiveness of the endotracheal care nursing guideline in terms of the incidence rates of endotracheal tube displacement, oxygen deficiency after extubation, and re-intubation, and nurses’ satisfaction with the utilization of the guideline. An evidence-based nursing practice framework was used to develop the endotracheal care nursing guideline. The content validity of the guideline was reviewed by a panel of 3 experts; the index of item-objective congruence (IOC) of the guideline was 0.93. The guideline was implemented with 130 patients (guideline group) and 19 registered nurses at a medicine ward of Hat Yai Hospital, Thailand. Patient outcomes were evaluated by comparison with those of 155 patients who received routine nursing care (routine care group). Descriptive statistics (frequency, percentage, mean, standard deviation) and the Mann-Whitney U-test were computed using a statistical software package. All outcomes were significantly better in the guideline group than in the routine care group. The guideline group had a lower incidence rate of endotracheal tube displacement (1.54% vs 9.03%, p < 0.05), and none of the guideline group had oxygen deficiency after extubation (0% vs 83.33%). Both of the 2 displaced-tube patients in the guideline group, compared to 6 of the 14 in the routine care group, were re-intubated; the overall re-intubation rate (n = 130 vs 155) was lower in the guideline group (1.54% vs 3.87%). Overall, nurses’ satisfaction with the utilization of the guideline was at a high level (89.50%).
Keywords: endotracheal care, nursing guideline, re-intubation, satisfaction
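The incidence rates quoted above follow directly from the group sizes; a quick check:

```python
def incidence_rate(events: int, n: int) -> float:
    """Incidence rate as a percentage of the group size."""
    return 100.0 * events / n

# Figures from the abstract: 2 of 130 guideline vs 6 of 155 routine re-intubations
print(round(incidence_rate(2, 130), 2), round(incidence_rate(6, 155), 2))  # 1.54 3.87
```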
Procedia PDF Downloads 511
932 Pressure-Controlled Dynamic Equations of the PFC Model: A Mathematical Formulation
Authors: Jatupon Em-Udom, Nirand Pisutha-Arnond
Abstract:
The phase-field-crystal (PFC) approach is a density-functional-type material model with an atomic resolution on a diffusive timescale. Spatially, the model incorporates the periodic nature of crystal lattices and can naturally exhibit elasticity, plasticity and crystal defects such as grain boundaries and dislocations. Temporally, the model operates on a diffusive timescale, which bypasses the need to resolve prohibitively small atomic-vibration time steps. The PFC model has been used to study many material phenomena such as grain growth, elastic and plastic deformations and solid-solid phase transformations. In this study, the pressure-controlled dynamic equations for the PFC model were developed to simulate a single-component system under externally applied pressure; these coupled equations are important for studies of deformable systems such as those under constant pressure. The formulation is based on non-equilibrium thermodynamics and the thermodynamics of crystalline solids. To obtain the equations, the entropy variation around the equilibrium point was derived. Then the resulting driving forces and flux around the equilibrium were obtained and rewritten as conventional thermodynamic quantities. These dynamic equations are different from the recently proposed equations; the equations in this study should provide more rigorous descriptions of the system dynamics under externally applied pressure.
Keywords: driving forces and flux, evolution equation, non-equilibrium thermodynamics, Onsager’s reciprocal relation, phase field crystal model, thermodynamics of single-component solid
Procedia PDF Downloads 303
931 Emotion-Convolutional Neural Network for Perceiving Stress from Audio Signals: A Brain Chemistry Approach
Authors: Anup Anand Deshmukh, Catherine Soladie, Renaud Seguier
Abstract:
Emotion plays a key role in many applications, such as healthcare, where patients’ emotional behavior is gathered. Unlike typical ASR (Automated Speech Recognition) problems, which focus on 'what was said', it is equally important to understand 'how it was said.' Certain emotions are given more importance due to their effectiveness in understanding human feelings. In this paper, we propose an approach that models human stress from audio signals. The research challenge in speech emotion detection is finding the appropriate set of acoustic features corresponding to an emotion. Another difficulty lies in defining the very meaning of emotion and being able to categorize it in a precise manner. Supervised machine learning models, including state-of-the-art deep learning classification methods, rely on the availability of clean and labelled data. One of the problems in affective computing is the limited amount of annotated data; the existing labelled emotion datasets are highly subjective to the perception of the annotator. We address the first issue of feature selection by exploiting the use of traditional MFCC (Mel-Frequency Cepstral Coefficients) features in a Convolutional Neural Network. Our proposed Emo-CNN (Emotion-CNN) architecture treats speech representations in a manner similar to how CNNs treat images in a vision problem. Our experiments show that Emo-CNN consistently and significantly outperforms the popular existing methods over multiple datasets. It achieves 90.2% categorical accuracy on the Emo-DB dataset. We claim that Emo-CNN is robust to speaker variations and environmental distortions. The proposed approach achieves 85.5% speaker-dependent categorical accuracy on the SAVEE (Surrey Audio-Visual Expressed Emotion) dataset, beating the existing CNN-based approach by 10.2%. To tackle the second problem of subjectivity in stress labels, we use Lovheim’s cube, which is a 3-dimensional projection of emotions.
Monoamine neurotransmitters are a type of chemical messenger in the brain that transmit signals when emotions are perceived. The cube aims at explaining the relationship between these neurotransmitters and the positions of emotions in 3D space. The learnt emotion representations from the Emo-CNN are mapped to the cube using three-component PCA (Principal Component Analysis), which is then used to model human stress. This proposed approach not only circumvents the need for labelled stress data but also complies with the psychological theory of emotions given by Lovheim’s cube. We believe that this work is the first step towards creating a connection between Artificial Intelligence and the chemistry of human emotions.
Keywords: deep learning, brain chemistry, emotion perception, Lovheim's cube
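The mapping step, projecting learnt Emo-CNN embeddings onto three principal components for placement in Lovheim's 3-D cube, can be sketched with random stand-in embeddings:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 64))  # stand-in for learnt Emo-CNN features

# Project onto three principal components, mirroring the 3-D cube mapping
coords = PCA(n_components=3).fit_transform(embeddings)
print(coords.shape)  # (200, 3)
```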
Procedia PDF Downloads 153
930 Spatial Cognition and 3-Dimensional Vertical Urban Design Guidelines
Authors: Hee Sun (Sunny) Choi, Gerhard Bruyns, Wang Zhang, Sky Cheng, Saijal Sharma
Abstract:
The main focus of this paper is to propose a comprehensive framework for the cognitive measurement and modelling of the built environment. This will involve exploring and measuring neural mechanisms. The aim is to create a foundation for further studies in this field that are consistent and rigorous. Additionally, this framework will facilitate collaboration with cognitive neuroscientists by establishing a shared conceptual basis. The goal of this research is to develop a human-centric approach for urban design that is scientific and measurable, producing a set of urban design guidelines that incorporate cognitive measurement and modelling. By doing so, the broader intention is to design urban spaces that prioritize human needs and well-being, making them more liveable.
Keywords: vertical urbanism, human-centric design, spatial cognition and psychology, vertical urban design guidelines
Procedia PDF Downloads 81
929 Vascularized Adipose Tissue Engineering by Using Adipose ECM/Fibroin Hydrogel
Authors: Alisan Kayabolen, Dilek Keskin, Ferit Avcu, Andac Aykan, Fatih Zor, Aysen Tezcaner
Abstract:
Adipose tissue engineering is a promising field for the regeneration of soft tissue defects. However, only very thin implants can be used in vivo, since vascularization is still a problem for thick implants. Another problem is finding a biocompatible scaffold with good mechanical properties. In this study, the aim is to develop a thick vascularized adipose tissue that will integrate with the host, and to perform its in vitro and in vivo characterization. For this purpose, a hydrogel of decellularized adipose tissue (DAT) and fibroin was produced, and both endothelial cells and adipocytes differentiated from adipose-derived stem cells were encapsulated in this hydrogel. Mixing DAT with fibroin allowed rapid gel formation by vortexing. It also made it possible to adjust the mechanical strength by changing the fibroin-to-DAT ratio. Based on compression tests, the DAT/fibroin ratio with mechanical properties most similar to adipose tissue was selected for cell culture experiments. In vitro characterizations showed that DAT is not cytotoxic; on the contrary, it has many natural ECM components, which provide biocompatibility and bioactivity. Subcutaneous implantation of the hydrogels resulted in no immunogenic reaction or infection. Moreover, localized empty hydrogels gelled successfully around the host vessel in the required shape. Implantation of cell-encapsulated hydrogels and histological analyses are under study. It is expected that the endothelial cells inside the hydrogel will form a capillary network and will bind to the host vessel passing through the hydrogel.
Keywords: adipose tissue engineering, decellularization, encapsulation, hydrogel, vascularization
Procedia PDF Downloads 526
928 Features Reduction Using Bat Algorithm for Identification and Recognition of Parkinson Disease
Authors: P. Shrivastava, A. Shukla, K. Verma, S. Rungta
Abstract:
Parkinson's disease is a chronic neurological disorder that directly affects human gait. It leads to slowness of movement and causes muscle rigidity and tremors. Gait serves as a primary outcome measure for studies aiming at early recognition of the disease. Using gait techniques, this paper implements an efficient binary bat algorithm for the early detection of Parkinson's disease by selecting the optimal features required for classifying affected patients from others. Data from 166 people, both fit and affected, were collected, and optimal feature selection was done using PSO and the bat algorithm. The reduced dataset was then classified using a neural network. The experiments indicate that the binary bat algorithm outperforms traditional PSO and genetic algorithms and gives a fairly good recognition rate even with the reduced dataset.
Keywords: Parkinson's disease, gait, feature selection, bat algorithm
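A simplified sketch of binary bat feature selection follows; loudness and pulse-rate schedules are omitted for brevity, and the toy objective stands in for classifier accuracy on the gait dataset:

```python
import numpy as np

def binary_bat_select(fitness, n_feats, n_bats=20, n_iter=50, seed=0):
    """Simplified binary bat algorithm: each bat is a 0/1 feature mask.
    Velocities pull bits toward the current best mask and are squashed
    through a sigmoid to give per-bit flip probabilities."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_bats, n_feats)) < 0.5          # random initial masks
    vel = np.zeros((n_bats, n_feats))
    scores = np.array([fitness(p) for p in pos])
    best = pos[scores.argmax()].copy()
    for _ in range(n_iter):
        freq = rng.uniform(0.0, 2.0, (n_bats, 1))      # random pulse frequencies
        vel += (best.astype(float) - pos.astype(float)) * freq
        prob = 1.0 / (1.0 + np.exp(-vel))              # sigmoid transfer function
        pos = rng.random((n_bats, n_feats)) < prob     # stochastic bit update
        scores = np.array([fitness(p) for p in pos])
        if scores.max() > fitness(best):
            best = pos[scores.argmax()].copy()
    return best

# Toy objective: reward masks that match a known 'useful' feature subset
target = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=bool)
fit = lambda mask: float((mask == target).sum())
print(binary_bat_select(fit, 8).tolist())
```

In the actual study, the fitness of a mask would be the recognition rate of the neural network trained on the selected gait features.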
Procedia PDF Downloads 543
927 A Methodology of Using Fuzzy Logics and Data Analytics to Estimate the Life Cycle Indicators of Solar Photovoltaics
Authors: Thor Alexis Sazon, Alexander Guzman-Urbina, Yasuhiro Fukushima
Abstract:
This study outlines how to develop a surrogate life cycle model based on fuzzy logic using three fuzzy inference methods: (1) the conventional Fuzzy Inference System (FIS), (2) the hybrid system of Data Analytics and Fuzzy Inference (DAFIS), which uses data clustering for defining the membership functions, and (3) the Adaptive Neuro-Fuzzy Inference System (ANFIS), a combination of fuzzy inference and artificial neural networks. These methods were demonstrated with a case study in which the Global Warming Potential (GWP) and the Levelized Cost of Energy (LCOE) of solar photovoltaics (PV) were estimated using Solar Irradiation, Module Efficiency, and Performance Ratio as inputs. The effects of using different fuzzy inference types, either Sugeno- or Mamdani-type, and of changing the number of input membership functions on the error between the calibration data and the model-generated outputs were also illustrated. The solution spaces of the three methods were consequently examined with a sensitivity analysis. ANFIS exhibited the lowest error, while DAFIS gave slightly lower errors compared to FIS. Increasing the number of input membership functions helped with error reduction in some cases but, at times, resulted in the opposite. Sugeno-type models gave errors that are slightly lower than those of the Mamdani-type. While ANFIS is superior in terms of error minimization, it could generate solutions that are questionable, e.g., negative GWP values of the solar PV system when the inputs were all at the upper end of their range. This shows that the applicability of ANFIS models highly depends on the range of cases on which they were calibrated. FIS and DAFIS generated more intuitive trends in the sensitivity runs. DAFIS demonstrated an optimal design point wherein increasing the input values does not improve the GWP and LCOE anymore.
In the absence of data that could be used for calibration, conventional FIS presents a knowledge-based model that could be used for prediction. In the PV case study, conventional FIS generated errors only slightly higher than those of DAFIS. The inherent complexity of a life cycle study often hinders its widespread use in industry and policy-making. While the methodology does not guarantee a more accurate result than those generated by the Life Cycle Methodology, it does provide a relatively simple way of generating knowledge- and data-based estimates that can be used during the initial design of a system.
Keywords: solar photovoltaic, fuzzy logic, inference system, artificial neural networks
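A zero-order Sugeno-type inference of the kind compared in this study can be sketched in a few lines of Python. This is a minimal illustration only: the membership-function ranges and rule consequents below are invented for the example and are not the calibrated values from the study.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def sugeno_lcoe(irradiation):
    """Zero-order Sugeno FIS with two rules mapping solar irradiation
    (kWh/m2/day) to an illustrative LCOE estimate ($/kWh)."""
    # Rule firing strengths for 'low' and 'high' irradiation
    w_low = tri(irradiation, 0.0, 2.0, 5.0)
    w_high = tri(irradiation, 3.0, 6.0, 9.0)
    # Rule consequents are constants in a zero-order Sugeno model
    z_low, z_high = 0.20, 0.08
    total = w_low + w_high
    if total == 0:
        return None  # input outside the support of all rules
    return (w_low * z_low + w_high * z_high) / total
```

ANFIS differs from this fixed-rule sketch in that the membership parameters and consequents are tuned against calibration data by a neural-network-style optimizer.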
Procedia PDF Downloads 164
926 Speech Perception by Video Hosting Services Actors: Urban Planning Conflicts
Authors: M. Pilgun
Abstract:
The report presents the results of a study of the specifics of speech perception by actors of video hosting services, using urban planning conflicts as material. To analyze the content, a multimodal approach using neural network technologies is employed. Analysis of word associations and associative networks of the relevant stimuli revealed the evaluative reactions of the actors. Analysis of the data identified key topics that generated negative and positive perceptions among the participants. The calculation of social stress and social well-being indices based on user-generated content made it possible to rank road transport construction projects by the degree of negative and positive perception among actors.
Keywords: social media, speech perception, video hosting, networks
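The abstract does not give the formulas behind its social stress and social well-being indices; one plausible minimal reading is the share of negative and positive reactions in the labeled user-generated content, as sketched below (the function name, labels, and definition are assumptions for illustration, not the study's method):

```python
def perception_indices(reactions):
    """Given a list of reaction labels ('negative', 'positive', 'neutral'),
    return (social_stress, social_well_being) as shares of all reactions."""
    n = len(reactions)
    if n == 0:
        return 0.0, 0.0
    neg = sum(1 for r in reactions if r == 'negative')
    pos = sum(1 for r in reactions if r == 'positive')
    return neg / n, pos / n
```

Ranking construction projects then amounts to sorting them by these two shares.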
Procedia PDF Downloads 147
925 Evaluation of Developmental Toxicity and Teratogenicity of Perfluoroalkyl Compounds Using FETAX
Authors: Hyun-Kyung Lee, Jehyung Oh, Young Eun Jeong, Hyun-Shik Lee
Abstract:
Perfluoroalkyl compounds (PFCs) are environmental toxicants that persistently accumulate in human blood. Their widespread detection and accumulation in the environment raise concerns about whether these chemicals might be developmental toxicants and teratogens in the ecosystem. We evaluated and compared the toxicity of PFCs containing various numbers of carbon atoms (C8–C11) on vertebrate embryogenesis. We assessed the developmental toxicity and teratogenicity of various PFCs. The toxic effects on Xenopus embryos were evaluated using different methods. We measured teratogenic indices (TIs) and investigated the mechanisms underlying developmental toxicity and teratogenicity by measuring the expression of organ-specific biomarkers such as xPTB (liver), Nkx2.5 (heart), and Cyl18 (intestine). All PFCs that we tested were found to be developmental toxicants and teratogens, and their toxic effects strengthened with increasing length of the fluorinated carbon chain. Furthermore, we produced evidence showing that perfluorodecanoic acid (PFDA) and perfluoroundecanoic acid (PFuDA) are more potent developmental toxicants and teratogens in an animal model than the other PFCs we evaluated [perfluorooctanoic acid (PFOA) and perfluorononanoic acid (PFNA)]. In particular, severe defects resulting from PFDA and PFuDA exposure were observed in the liver and heart, respectively, using whole-mount in situ hybridization, real-time PCR, pathologic analysis of the heart, and dissection of the liver. Our studies suggest that most PFCs are developmental toxicants and teratogens; however, compounds with higher numbers of carbons (i.e., PFDA and PFuDA) exert more potent effects.
Keywords: PFC, xenopus, fetax, development
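In FETAX, the teratogenic index is conventionally defined as the 96-h LC50 (mortality) divided by the 96-h EC50 (malformation), with TI > 1.5 commonly taken to indicate teratogenic potential. A sketch with illustrative numbers (not the study's measured values):

```python
def teratogenic_index(lc50, ec50_malformation):
    """FETAX teratogenic index: TI = LC50 / EC50(malformation).
    Both concentrations must be in the same units (e.g. mg/L).
    TI > 1.5 is commonly read as teratogenic potential."""
    return lc50 / ec50_malformation
```

For example, a compound with an LC50 of 10.0 mg/L and a malformation EC50 of 4.0 mg/L has TI = 2.5, above the conventional 1.5 cutoff.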
Procedia PDF Downloads 350
924 The Long-Term Effects of Immediate Implantation, Early Implantation and Delayed Implantation at Aesthetics Area
Authors: Xing Wang, Lin Feng, Xuan Zou, Hongchen Liu
Abstract:
Immediate implantation after tooth extraction is considered the ideal way to retain the alveolar bone, but some scholars believe the aesthetic outcomes of early implantation are more reliable. In this retrospective study, 89 patients were followed for up to 5 years. Assessment indicators included the survival of the implant (peri-implant infection, implant loosening, shedding, crown and occlusal status), aesthetics (gingival color and fullness, papilla height, probing depth, X-ray alveolar crest height, the patient's own aesthetic satisfaction, doctors' aesthetic scores), repair of defects around the implant (changes in peri-implant bone height and thickness, whether autologous bone grafts were used, whether absorbable or nonabsorbable repair materials were used), treatment time, cost, and the use of antibiotics. The results demonstrated no significant difference in the long-term success rates of immediate, early, and delayed implantation (p > 0.05). However, the results indicated that the immediate implantation group could achieve better aesthetic results after two years (p < 0.05), but with an increased risk of complications and failures (p < 0.05). High-risk indicators include gingival recession, labial bone wall damage, thin gingival biotypes, and poor implant position and occlusal restoration. Regardless of which implantation method was selected, the extraction method and bone defect augmentation technique were observed to be significant factors in the aesthetic outcome (p < 0.05).
Keywords: immediate implantation, long-term effects, aesthetics area, dental implants
Procedia PDF Downloads 355
923 Fractal-Wavelet Based Techniques for Improving the Artificial Neural Network Models
Authors: Reza Bazargan lari, Mohammad H. Fattahi
Abstract:
Natural resources management, including water resources, requires reliable estimation of time-variant environmental parameters. Small improvements in the estimation of environmental parameters would have great effects on management decisions. Noise reduction using wavelet techniques is an effective approach for pre-processing practical data sets. Predictability enhancement of the river flow time series was assessed using fractal approaches before and after applying wavelet-based pre-processing. Time series correlation and persistency, the minimum length sufficient for training the predicting model, and the maximum valid length of predictions were also investigated through a fractal assessment.
Keywords: wavelet, de-noising, predictability, time series fractal analysis, valid length, ANN
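Wavelet de-noising of the kind used here for pre-processing can be illustrated with a single-level Haar transform and soft thresholding of the detail coefficients (a pure-Python sketch; the abstract does not specify the mother wavelet, decomposition depth, or thresholding rule, so these are assumptions):

```python
def haar_denoise(signal, threshold):
    """One-level Haar wavelet de-noising with soft thresholding.
    `signal` must have even length; returns the reconstructed series."""
    s2 = 2 ** 0.5
    # Forward Haar transform: pairwise averages (approximation) and
    # differences (detail), both scaled by 1/sqrt(2)
    approx = [(signal[i] + signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s2 for i in range(0, len(signal), 2)]
    # Soft-threshold the noise-dominated detail coefficients
    def soft(d):
        return max(abs(d) - threshold, 0.0) * (1.0 if d > 0 else -1.0)
    detail = [soft(d) for d in detail]
    # Inverse transform
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / s2)
        out.append((a - d) / s2)
    return out
```

Small pairwise jitter below the threshold is flattened, while large-scale structure in the series passes through, which is the intended pre-processing effect before fitting the ANN.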
Procedia PDF Downloads 368
922 Fabrication of High-Aspect Ratio Vertical Silicon Nanowire Electrode Arrays for Brain-Machine Interfaces
Authors: Su Yin Chiam, Zhipeng Ding, Guang Yang, Danny Jian Hang Tng, Peiyi Song, Geok Ing Ng, Ken-Tye Yong, Qing Xin Zhang
Abstract:
Brain-machine interfaces (BMI) are a field rich in exploration opportunities, in which manipulation of neural activity is used to interconnect with a myriad of external devices. This research and intensive development have evolved into various areas, from the medical field, through the gaming and entertainment industry, to safety and security. The technology has been extended to therapy for neurological disorders such as obsessive-compulsive disorder and Parkinson's disease by introducing current pulses to specific regions of the brain. Nonetheless, developing a brain-machine interface system that observes, records, and alters neural signals in real time without delay in response will require a significant amount of effort to overcome the remaining obstacles. To date, the feature size of interface devices and the density of the electrode population remain limitations to achieving seamless performance in BMI. Currently, BMI devices range from 10 to 100 microns in terms of electrode diameter. Henceforth, to accommodate precise monitoring at the single-cell level, smaller and denser nanoscale nanowire electrode arrays are vital. In this paper, we showcase the fabrication of high-aspect-ratio vertical silicon nanowire electrode arrays using microelectromechanical system (MEMS) methods. Nanofabrication of the nanowire electrodes involves deep reactive ion etching, thermal oxide thinning, electron-beam lithography patterning, sputtering of metal targets, and bottom anti-reflection coating (BARC) etch. Metallization of the nanowire electrode tip is a prominent process for optimizing the nanowire's electrical conductivity, and this step remains a challenge during fabrication. Metal electrodes were lithographically defined, and yet these metal contacts outline a size scale that is larger than nanometer-scale building blocks, further limiting potential advantages.
Therefore, we present an integrated contact solution that overcomes this size constraint through a self-aligned nickel silicidation process on the tips of the vertical silicon nanowire electrodes. A 4 x 4 array of vertical silicon nanowire electrodes with a diameter of 290 nm and a height of 3 µm has been successfully fabricated.
Keywords: brain-machine interfaces, microelectromechanical systems (MEMS), nanowire, nickel silicide
Procedia PDF Downloads 433
921 DeClEx-Processing Pipeline for Tumor Classification
Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba
Abstract:
Health issues are significantly increasing, putting a substantial strain on healthcare services. This has accelerated the integration of machine learning in healthcare, particularly following the COVID-19 pandemic. We introduce DeClEx, a pipeline that ensures that data mirrors real-world settings by incorporating Gaussian noise and blur, and that employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. DeClEx integrates denoising and deblurring, classification, and explainability in a single pipeline.
Keywords: machine learning, healthcare, classification, explainability
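The noise-and-blur degradation that makes training data mirror real-world acquisition can be sketched as below (pure Python for illustration; the pipeline's actual noise level, blur kernel, and image format are not specified in the abstract, so those parameters are assumptions):

```python
import random

def degrade(image, noise_std=0.05, blur_radius=1, seed=0):
    """Degrade a grayscale image (list of rows of floats in [0, 1]) with
    additive Gaussian noise followed by a box blur."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    # Additive Gaussian noise, clamped back into [0, 1]
    noisy = [[min(1.0, max(0.0, image[y][x] + rng.gauss(0.0, noise_std)))
              for x in range(w)] for y in range(h)]
    # Box blur: each pixel becomes the mean of its neighborhood
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [noisy[yy][xx]
                    for yy in range(max(0, y - blur_radius), min(h, y + blur_radius + 1))
                    for xx in range(max(0, x - blur_radius), min(w, x + blur_radius + 1))]
            blurred[y][x] = sum(vals) / len(vals)
    return blurred
```

Training the denoising autoencoder then amounts to feeding it `degrade(x)` as input with the clean `x` as target.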
Procedia PDF Downloads 54
920 Application of an Artificial Neural Network to Determine the Risk of Malignant Tumors from the Images Resulting from the Asymmetry of Internal and External Thermograms of the Mammary Glands
Authors: Amdy Moustapha Drame, Ilya V. Germashev, E. A. Markushevskaya
Abstract:
Breast cancer is among the main problems of medicine; a significant number of women around the world die from it every year. The detection of malignant breast tumors is therefore an urgent task. For many years, various technologies have been used to detect these tumors, in particular thermal imaging, in order to determine different stages of breast cancer development. These periodic screening methods are a diagnostic tool for women and may become an alternative to older methods such as mammography. This article proposes a model for the identification of malignant neoplasms of the mammary glands based on the asymmetry of internal and external thermal imaging fields.
Keywords: asymmetry, breast cancer, tumors, deep learning, thermogram, convolutional transformation, classification
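A simple way to quantify the left-right thermal asymmetry such a model relies on is a pixel-wise mirrored difference map. This is only a minimal illustration; the article's actual convolutional transformation of internal and external thermogram fields is more involved:

```python
def asymmetry_map(thermogram):
    """Pixel-wise absolute difference between the left half of a thermogram
    (list of rows of temperatures) and the mirrored right half.
    Large values flag asymmetric hot spots, a known malignancy cue."""
    h, w = len(thermogram), len(thermogram[0])
    half = w // 2
    return [[abs(thermogram[y][x] - thermogram[y][w - 1 - x])
             for x in range(half)] for y in range(h)]
```

The resulting map (or features derived from it) can then be fed to a classifier.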
Procedia PDF Downloads 59
919 Investigating the Viability of Ultra-Low Parameter Count Networks for Real-Time Football Detection
Authors: Tim Farrelly
Abstract:
In recent years, AI-powered object detection systems have opened the doors to innovative new applications and products, especially those operating in the real world or ‘on the edge’, notably in sport. This paper investigates the viability of an ultra-low parameter convolutional neural network specially designed for the detection of footballs on edge devices. The main contribution of this paper is the exploration of integrating new design features (depthwise separable convolutional blocks and squeeze-and-excitation modules) into an ultra-low parameter network and demonstrating subsequent improvements in performance. The results show that tracking the ball in Full HD images with high accuracy is possible in real time.
Keywords: deep learning, object detection, machine vision applications, sport, network design
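The parameter savings from depthwise separable blocks are easy to check: a standard k x k convolution costs k·k·Cin·Cout weights, while a depthwise convolution plus a 1 x 1 pointwise convolution costs k·k·Cin + Cin·Cout (bias terms ignored; the layer sizes in the test are illustrative, not taken from the paper):

```python
def conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weight count of a depthwise k x k convolution (one filter per input
    channel) followed by a 1 x 1 pointwise convolution (bias ignored)."""
    return k * k * c_in + c_in * c_out
```

For a 3 x 3 layer with 64 input and 128 output channels, the separable form needs 8,768 weights instead of 73,728, roughly an 8.4x reduction, which is how ultra-low parameter counts become feasible.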
Procedia PDF Downloads 143
918 Human Brain Organoids-on-a-Chip Systems to Model Neuroinflammation
Authors: Feng Guo
Abstract:
Human brain organoids, 3D brain tissue cultures derived from human pluripotent stem cells, hold promise for modeling neuroinflammation in a variety of neurological diseases. However, challenges remain in generating standardized human brain organoids that can recapitulate key physiological features of the human brain. Here, this study presents a series of organoid-on-a-chip systems to generate better human brain organoids and model neuroinflammation. By employing 3D printing and microfluidic 3D cell culture technologies, the study's systems enable the reliable, scalable, and reproducible generation of human brain organoids. Compared with conventional protocols, this study's method increased neural progenitor proliferation and reduced the heterogeneity of human brain organoids. As a proof-of-concept application, the study applied this method to model substance use disorders.
Keywords: human brain organoids, microfluidics, organ-on-a-chip, neuroinflammation
Procedia PDF Downloads 200
917 High Temperature Oxidation of Additively Manufactured Silicon Carbide/Carbon Fiber Nanocomposites
Authors: Saja M. Nabat Al-Ajrash, Charles Browning, Rose Eckerle, Li Cao, Robyn L. Bradford, Donald Klosterman
Abstract:
An additive manufacturing process and subsequent pyrolysis cycle were used to fabricate SiC matrix/carbon fiber hybrid composites. The matrix was fabricated using a mixture of preceramic polymer and acrylate monomers, while a polyacrylonitrile (PAN) precursor was used to fabricate fibers via electrospinning. The precursor matrix and reinforcing fibers at 0, 2, 5, or 10 wt% were printed using digital light processing, and both were simultaneously pyrolyzed to yield the final ceramic matrix composite structure. After pyrolysis, XRD and SAED analysis confirmed the existence of SiC nanocrystals and a turbostratic carbon structure in the matrix, while the reinforcement phase was shown to have a turbostratic carbon structure similar to commercial carbon fibers. Thermogravimetric analysis (TGA) in air up to 1400 °C was used to evaluate the oxidation resistance of this material. TGA results showed some weight loss due to oxidation of SiC and/or carbon up to about 900 °C, followed by weight gain up to about 1200 °C due to the formation of a protective SiO2 layer. Although increasing carbon fiber content negatively impacted the total mass loss during the first heating cycle, a second run in air revealed negligible weight change. This is explained by the formation of the SiO2 layer, which acts as a protective film that prevents oxygen diffusion. Oxidation of SiC and the formation of a glassy layer have been proven to protect the sample from further oxidation, as well as to heal surface cracks and defects, as revealed by SEM analysis.
Keywords: silicon carbide, carbon fibers, additive manufacturing, composite
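The weight gain between about 900 °C and 1200 °C is consistent with passive oxidation, SiC + 3/2 O2 → SiO2 + CO(g), in which solid mass increases because SiO2 replaces SiC. A back-of-the-envelope sketch (standard molar masses; this is the textbook passive-oxidation stoichiometry, not data from the paper):

```python
M_SI, M_C, M_O = 28.09, 12.01, 16.00  # molar masses, g/mol

def sic_passive_oxidation_gain(mass_sic_g):
    """Solid mass change (g) when a given mass of SiC fully oxidizes
    passively: SiC + 3/2 O2 -> SiO2 + CO(g). The CO leaves as gas, so the
    solid gains mass as SiO2 (60.09 g/mol) replaces SiC (40.10 g/mol)."""
    m_sic = M_SI + M_C       # 40.10 g/mol
    m_sio2 = M_SI + 2 * M_O  # 60.09 g/mol
    return mass_sic_g * (m_sio2 - m_sic) / m_sic
```

One mole (40.10 g) of SiC would gain about 20 g on full conversion; in practice only a thin surface scale forms, which is why the observed gain is small and self-limiting.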
Procedia PDF Downloads 73
916 Modification of Electrical and Switching Characteristics of a Non Punch-Through Insulated Gate Bipolar Transistor by Gamma Irradiation
Authors: Hani Baek, Gwang Min Sun, Chansun Shin, Sung Ho Ahn
Abstract:
Fast neutron irradiation using nuclear reactors is an effective method to improve the switching loss and short-circuit durability of power semiconductors (insulated gate bipolar transistors (IGBTs), insulated gate transistors (IGTs), etc.). However, not only fast neutrons but also thermal neutrons, epithermal neutrons, and gamma rays exist in a nuclear reactor, and the electrical properties of the IGBT may be deteriorated by gamma irradiation. Gamma irradiation damage is known to be caused by the Total Ionizing Dose (TID) effect, Single Event Effects (SEE), and displacement damage. In particular, the TID effect degrades electrical properties such as the leakage current and threshold voltage of a power semiconductor. This work confirms the effect of gamma irradiation on the electrical properties of a 600 V NPT-IGBT. Gamma irradiation forms lattice defects in the gate oxide and at the Si-SiO2 interface of the IGBT. It was confirmed that these lattice defects act as trap centers and affect the threshold voltage, which was negatively shifted with increasing TID. In addition to the change in carrier mobility, the conductivity modulation decreases in the n-drift region, adversely affecting the forward voltage drop. The turn-off delay time of the device before irradiation was 212 ns. Those at 2.5, 10, 30, 70, and 100 kRad(Si) were 225, 258, 311, 328, and 350 ns, respectively. Gamma irradiation thus increased the turn-off delay time of the IGBT by approximately 65%, and the switching characteristics deteriorated.
Keywords: NPT-IGBT, gamma irradiation, switching, turn-off delay time, recombination, trap center
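The quoted ~65% figure follows directly from the measured delays (212 ns unirradiated vs. 350 ns at 100 kRad(Si)); a one-line check:

```python
def percent_increase(before, after):
    """Relative increase of `after` over `before`, in percent."""
    return 100.0 * (after - before) / before
```

`percent_increase(212, 350)` evaluates to about 65.1, matching the abstract's "approximately 65%".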
Procedia PDF Downloads 154
915 Dual-Network Memory Model for Temporal Sequences
Authors: Motonobu Hattori
Abstract:
In neural networks, when new patterns are learned by a network, they radically interfere with previously stored patterns. This drawback is called catastrophic forgetting. We have previously proposed a biologically inspired dual-network memory model that greatly reduces this forgetting for static patterns. In this model, information is first stored in the hippocampal network and thereafter transferred to the neocortical network using pseudo patterns. Because temporal sequence learning is more important than static pattern learning in the real world, in this study we improve our conventional dual-network memory model so that it can deal with temporal sequences without catastrophic forgetting. The computer simulation results show the effectiveness of the proposed dual-network memory model.
Keywords: catastrophic forgetting, dual-network, temporal sequences, hippocampal
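The pseudo-pattern transfer step can be sketched as follows: probe the trained hippocampal network with random inputs and record its responses, yielding surrogate input-output pairs on which the neocortical network can be trained without access to the original data. This is a minimal single-layer, bipolar-unit illustration, not the authors' actual architecture:

```python
import random

def pseudo_pattern_transfer(w_hippo, n_pseudo=50, seed=0):
    """Generate pseudo patterns from a trained 'hippocampal' weight matrix
    (list of rows) by probing it with random bipolar (+1/-1) inputs.
    Returns (input, output) pairs for replay to a 'neocortical' network."""
    rng = random.Random(seed)
    n = len(w_hippo[0])
    pairs = []
    for _ in range(n_pseudo):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        # Output unit fires +1/-1 by the sign of its weighted input sum
        y = [1 if sum(w * xi for w, xi in zip(row, x)) >= 0 else -1
             for row in w_hippo]
        pairs.append((x, y))
    return pairs
```

Interleaving such pseudo pairs with new material during neocortical training is what protects the old memories from being overwritten.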
Procedia PDF Downloads 269
914 User-Awareness from Eye Line Tracing During Specification Writing to Improve Specification Quality
Authors: Yoshinori Wakatake
Abstract:
Many defects found after the release of software packages are caused by the omission of necessary test items from test specifications. Poor test specifications are detected by manual review, which imposes a high human load. The prevention of omissions depends on the end-user awareness of test specification writers. If test specifications were written while envisioning the behavior of end-users, the number of omitted test items would be greatly reduced. This paper draws attention to the point that writers who can achieve this differ from those who cannot, not only in the richness of their descriptions but also in their gaze information. It proposes a method to estimate the degree of user-awareness of writers through the analysis of their gaze information while writing test specifications. We conducted an experiment to obtain the gaze information of writers of test specifications. Test specifications are automatically classified using this gaze information; a Random Forest model is constructed for the classification, and the classification is highly accurate. By examining the explanatory variables that turn out to be important, we identify behavioral features that distinguish test specifications of high quality from others; these are confirmed to be pupil diameter and the number and duration of blinks. The paper also investigates the test specifications automatically classified with gaze information to discuss features of the writing style at each quality level. The proposed method enables us to automatically classify test specifications. It also prevents test item omissions, because it reveals writing features that high-quality test specifications should satisfy.
Keywords: blink, eye tracking, gaze information, pupil diameter, quality improvement, specification document, user-awareness
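The blink-related explanatory variables can be extracted from a raw pupil-diameter trace in a few lines: in typical eye-tracker output, samples where the pupil is not detected (recorded as 0 here) mark a blink. A hedged sketch (the sampling period and encoding are assumptions, not the paper's setup):

```python
def blink_features(pupil_series, sample_ms=10):
    """Extract (blink_count, total_blink_duration_ms) from a series of
    pupil-diameter samples; a sample of 0 means the pupil was not
    detected, i.e. the eye was closed."""
    blinks = 0
    closed_samples = 0
    in_blink = False
    for d in pupil_series:
        if d == 0:
            closed_samples += 1
            if not in_blink:       # a new blink starts
                blinks += 1
                in_blink = True
        else:
            in_blink = False
    return blinks, closed_samples * sample_ms
```

Together with mean pupil diameter over the non-zero samples, these counts form the kind of feature vector a Random Forest classifier would consume.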
Procedia PDF Downloads 64
913 Behavior of Epoxy Insulator with Surface Defect under HVDC Stress
Authors: Qingying Liu, S. Liu, L. Hao, B. Zhang, J. D. Yan
Abstract:
HVDC technology is becoming increasingly popular due to its simpler topology and lower power loss over long transmission distances in comparison with HVAC technology. However, the long-term dielectric behavior of insulators under HVDC stress is completely different from that under HVAC stress, as a result of charge accumulation in a constant electric field. Insulators used in practical systems are never perfect in their structural condition; over time, shallow cracks may develop on their surfaces. The presence of defects can lead to drastic changes in their dielectric behavior and thus increase the probability of surface flashover. In this contribution, experimental investigations have been carried out on the charge accumulation phenomenon on the surface of an epoxy rod insulator placed between two disk-shaped electrodes, at different voltage levels and in different gases (SF6, CO2, and N2). Many results were obtained, such as the two-dimensional electrostatic potential distribution along the insulator surface after removal of the power source following a pre-defined period of application. The probe was carefully calibrated before each test. Results show that the surface charge distribution near the two disk-shaped electrodes is not uniform in the circumferential direction, possibly due to imperfect electrical connections between the conductor embedded in the insulator and the disk-shaped electrodes. The axial length of this non-uniform region is determined experimentally, which provides useful information for shielding design. A charge transport model is also used to explain the formation of the long-term electrostatic potential distribution under a constant applied voltage.
Keywords: HVDC, power systems, dielectric behavior, insulation, charge accumulation
Procedia PDF Downloads 222