Search results for: neural simulated annealing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3620

770 Fluid–Structure Interaction Modeling of Wind Turbines

Authors: Andre F. A. Cyrino

Abstract:

Knowing that technological advance is focused on the efficient extraction of energy from wind, and therefore on the design of wind turbine structures, this work aims to study the fluid-structure interaction of an idealized wind turbine. The blade was studied as a beam attached to a cylindrical hub with its rotation axis pointing into the air flow that passes through the rotor. Using the calculus of variations and the finite difference method, the blade was simulated by a discrete number of nodes and the aerodynamic forces were evaluated. The study presented here was written in Matlab and performs a numeric simulation of a simplified windmill model containing a hub and three blades modeled as Euler-Bernoulli beams under small strains and a constant, uniform wind. The mathematical approach follows Hamilton's extended principle, with the aerodynamic loads applied at the nodes considering the local relative wind speed, angle of attack, and aerodynamic lift and drag coefficients. Due to the wide range of angles of attack at which a wind turbine blade operates, the airfoil used in the model was the NREL SERI S809, which allowed equations for Cl and Cd to be obtained as functions of the angle of attack, based on a NASA study. Three-dimensional flow effects were not taken into account, nor was torsion of the beam, which only bends. The results show the dynamic response of the system in terms of displacement and rotational speed as the turbine reaches its final speed. Although the results were not compared to real windmills or to more complete models, the resulting values were consistent with the size of the system and the wind speed.
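As a rough illustration of how the nodal aerodynamic loads described in this abstract could be evaluated, the sketch below computes quasi-steady lift and drag at one blade node from the local relative wind speed and angle of attack. The Cl/Cd expressions and all numerical values are illustrative placeholders, not the NREL S809 fits used in the paper.

```python
import numpy as np

def nodal_aero_loads(v_wind, omega, r, chord, twist, rho=1.225):
    """Illustrative lift/drag load at one blade node (quasi-steady, 2D).

    v_wind : free-stream wind speed [m/s]
    omega  : rotor speed [rad/s]
    r      : radial position of the node [m]
    chord  : local chord length [m]
    twist  : local twist angle [rad]
    The Cl/Cd fits below are hypothetical placeholders, not the NASA
    S809 polynomials used in the paper.
    """
    v_rel = np.hypot(v_wind, omega * r)      # local relative wind speed
    phi = np.arctan2(v_wind, omega * r)      # inflow angle
    alpha = phi - twist                      # angle of attack
    cl = 2 * np.pi * alpha                   # placeholder thin-airfoil lift slope
    cd = 0.01 + 0.05 * alpha**2              # placeholder drag polar
    q = 0.5 * rho * v_rel**2 * chord         # dynamic pressure per unit span
    lift, drag = q * cl, q * cd
    # Project onto axial (thrust) and tangential (torque) directions
    f_axial = lift * np.cos(phi) + drag * np.sin(phi)
    f_tangential = lift * np.sin(phi) - drag * np.cos(phi)
    return f_axial, f_tangential

print(nodal_aero_loads(v_wind=10.0, omega=2.0, r=5.0, chord=0.6, twist=np.radians(4)))
```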

Keywords: blade aerodynamics, fluid–structure interaction, wind turbine aerodynamics, wind turbine blade

Procedia PDF Downloads 264
769 Preparation and Functional Properties of Synbiotic Yogurt Fermented with Lactobacillus brevis PML1 Derived from a Fermented Cereal-Dairy Product

Authors: Farideh Tabatabei-Yazdi, Fereshteh Falah, Alireza Vasiee

Abstract:

Nowadays, the production of functional foods has become essential. Inulin is one of the most functional hydrocolloid compounds used in such products. In the present study, the production of a synbiotic yogurt containing 1, 2.5, and 5% (w/v) inulin was investigated. The yogurt was fermented with Lactobacillus brevis PML1 derived from Tarkhineh, an Iranian cereal-dairy fermented food. Furthermore, the physicochemical properties, antioxidant activity, sensory attributes, and microbial viability were investigated on the 0th, 7th, and 14th days of storage after fermentation. The viable cells of L. brevis PML1 reached 10⁸ CFU/g, and the product resisted simulated digestive juices. Moreover, the synbiotic yogurt impressively increased the production of antimicrobial compounds and had the most profound antimicrobial effect on S. typhimurium. The physicochemical properties were in the normal range, and the fat content of the synbiotic yogurt was reduced remarkably. The antioxidant capacity of the fermented yogurt was significantly increased (p < 0.05), which was equal to those of DPPH (69.18 ± 1.00%) and BHA (89.16 ± 2.00%). The viability of L. brevis PML1 increased during storage. Sensory analysis showed significant differences in the key parameters between the samples and the control (p < 0.05). The addition of 2.5% inulin not only improved the physical properties but also retained the viability of the probiotic after 14 days of storage, with a viable count of L. brevis above 6 log CFU/g in the yogurt. Therefore, this novel synbiotic product containing L. brevis PML1, which can exert the desired properties, can be used as a suitable carrier for delivery of the probiotic strain, exerting its beneficial health effects.

Keywords: functional food, lactobacillus brevis, synbiotic yogurt, physicochemical properties

Procedia PDF Downloads 88
768 Perceived and Performed E-Health Literacy: Survey and Simulated Performance Test

Authors: Efrat Neter, Esther Brainin, Orna Baron-Epel

Abstract:

Background: Connecting end-users to newly developed ICT technologies and channeling patients to new products requires an assessment of compatibility. The end user's assessment is conveyed in the concept of eHealth literacy. The study examined the association between perceived and performed eHealth literacy (EHL) in an age-heterogeneous sample in Israel. Methods: Participants included 100 Israeli adults (mean age 43, SD 13.9) who were first interviewed by phone and then tested on a computer simulation of health-related Internet tasks. Performed, perceived, and evaluated EHL were assessed. Levels of successful task completion represented performed EHL, and evaluated EHL included observed motivation, confidence, and the amount of help provided. Results: The skills of accessing, understanding, appraising, applying, and generating new information showed a decreasing rate of successful completion as task complexity increased. Generating new information, though highly correlated with all other skills, was the least correlated with them. Perceived and performed EHL were correlated (r = .40, P = .001), while facets of performance (i.e., digital literacy and EHL) were highly correlated (r = .89, P < .001). Participants low and high in performed EHL differed significantly: low performers were older, had attained less education, had used the Internet for less time, and perceived themselves as less healthy. They also encountered more difficulties, required more assistance, were less confident in their conduct, and exhibited less motivation than high performers. Conclusions: The association in this age-heterogeneous sample was larger than in previous age-homogeneous samples. The moderate association between perceived and performed EHL indicates that the two are associated yet distinct, the latter requiring separate assessment. Features of future rapid performed-EHL tools are discussed.

Keywords: eHealth, health literacy, performance, simulation

Procedia PDF Downloads 231
767 DCDNet: Lightweight Document Corner Detection Network Based on Attention Mechanism

Authors: Kun Xu, Yuan Xu, Jia Qiao

Abstract:

Document detection plays an important role in optical character recognition and text analysis. Because traditional detection methods have weak generalization ability, and deep neural networks have complex structures and large numbers of parameters that cannot be readily deployed on mobile devices, this paper proposes a lightweight Document Corner Detection Network (DCDNet). DCDNet is a two-stage architecture. The first stage, with an encoder-decoder structure, adopts depthwise separable convolution to greatly reduce the number of network parameters. After introducing the Feature Attention Union (FAU) module, the second stage enhances the feature information along the spatial and channel dimensions and adaptively adjusts the size of the receptive field to strengthen the feature expression ability of the model. To address the large imbalance in pixel counts between corner and non-corner regions, a Weighted Binary Cross Entropy Loss (WBCE Loss) is proposed, casting corner detection as a classification problem and making the training process more efficient. To make up for the lack of document corner detection datasets, a dataset of 6620 images, named the Document Corner Detection Dataset (DCDD), was created. Experimental results show that the proposed method obtains fast, stable, and accurate detection results on DCDD.
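A minimal sketch of a weighted binary cross-entropy of the kind the abstract describes is given below; the per-image weighting scheme (class weights inversely proportional to class frequency) is an assumption, since the paper's exact WBCE formulation is not spelled out here.

```python
import torch

def weighted_bce_loss(pred, target, eps=1e-7):
    """Sketch of a weighted binary cross-entropy for corner/non-corner imbalance.

    pred   : predicted corner probability map, shape (N, 1, H, W), values in (0, 1)
    target : binary ground-truth corner map, same shape
    The weighting below (rare corner pixels get a large weight, abundant
    background pixels a small one) is an assumed formulation.
    """
    n_pos = target.sum()
    n_neg = target.numel() - n_pos
    w_pos = n_neg / (target.numel() + eps)
    w_neg = n_pos / (target.numel() + eps)
    loss = -(w_pos * target * torch.log(pred + eps)
             + w_neg * (1 - target) * torch.log(1 - pred + eps))
    return loss.mean()

# Example: a 1x1x64x64 prediction map against a sparse corner mask
pred = torch.rand(1, 1, 64, 64)
target = torch.zeros(1, 1, 64, 64)
target[0, 0, 10, 20] = 1.0
print(weighted_bce_loss(pred, target).item())
```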

Keywords: document detection, corner detection, attention mechanism, lightweight

Procedia PDF Downloads 352
766 Time-Domain Analysis Approaches of Soil-Structure Interaction: A Comparative Study

Authors: Abdelrahman Taha, Niloofar Malekghaini, Hamed Ebrahimian, Ramin Motamed

Abstract:

This paper compares the substructure and direct methods for soil-structure interaction (SSI) analysis in the time domain. In the substructure SSI method, the soil domain is replaced by a set of springs and dashpots, also referred to as the impedance function, derived through the study of the behavior of a massless rigid foundation. The impedance function is inherently frequency dependent, i.e., it varies as a function of the frequency content of the structural response. To use the frequency-dependent impedance function for time-domain SSI analysis, the impedance function is approximated at the fundamental frequency of the structure-soil system. To explore the potential limitations of the substructure modeling process, a two-dimensional reinforced concrete frame structure is modeled using substructure and direct methods in this study. The results show discrepancies between the simulated responses of the substructure and the direct approaches. To isolate the effects of higher modal responses, the same study is repeated using a harmonic input motion, in which a similar discrepancy is still observed between the substructure and direct approaches. It is concluded that the main source of discrepancy between the substructure and direct SSI approaches is likely attributed to the way the impedance functions are calculated, i.e., assuming a massless rigid foundation without considering the presence of the superstructure. Hence, a refined impedance function, considering the presence of the superstructure, shall be developed. This refined impedance function is expected to significantly improve the simulation accuracy of the substructure approach for structural systems whose behavior is dominated by the fundamental mode response.

Keywords: direct approach, impedance function, soil-structure interaction, substructure approach

Procedia PDF Downloads 110
765 Performance Comparison of Microcontroller-Based Optimum Controller for Fruit Drying System

Authors: Umar Salisu

Abstract:

This research presents the development of a hot-air tomato drying system. To provide more efficient and continuous temperature control, a microcontroller-based optimal controller was developed. The system is based on a power control principle, achieving smooth power variations depending on a feedback temperature signal from the process. An LM35 temperature sensor and an LM399 differential comparator were used to measure the temperature. The mathematical model of the system was developed, and the optimal controller was designed, simulated, and compared with the transient response of a PID controller. A controlled environment suitable for fruit drying is created within a closed chamber, operating as a three-step process. First, infrared light is used internally to preheat the fruit and speedily remove the water content inside it for fast drying. Second, hot air at a specified temperature is blown into the chamber to keep the humidity below a specified level and to exhaust the humid air from the chamber. Third, the microcontroller disconnects the power to the chamber once the moisture content of the fruit has been reduced to a minimum. Experiments were conducted with 1 kg of fresh tomatoes at three different temperatures (40, 50 and 60 °C) at a constant relative humidity of 30%. The results indicate that the system significantly reduces the drying time without affecting the quality of the fruit. In terms of temperature control, the results show that the response of the optimal controller has zero overshoot, whereas the PID controller response overshoots to about 30% of the set-point. Another performance metric used is the rise time: the optimal controller rose without any delay, while the PID controller lagged for more than 50 s. It can be argued that the optimal controller's performance is preferable to that of the PID controller, since it does not overshoot and it starts in good time.
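For context, the sketch below simulates PID control of a generic first-order thermal plant and extracts the two metrics quoted in the abstract, overshoot and rise time. The plant model and gains are illustrative assumptions, not the identified dryer dynamics or the authors' optimal controller.

```python
import numpy as np

def simulate_pid(kp, ki, kd, setpoint=50.0, tau=120.0, gain=1.0, dt=0.5, t_end=1200.0):
    """Minimal sketch: PID control of a first-order thermal plant.

    The plant (first-order lag with time constant tau) and the gains are
    illustrative assumptions. Returns (time array, temperature array).
    """
    n = int(t_end / dt)
    t = np.arange(n) * dt
    temp = np.zeros(n)
    integral, prev_err = 0.0, setpoint
    for k in range(1, n):
        err = setpoint - temp[k - 1]
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = np.clip(kp * err + ki * integral + kd * deriv, 0.0, 100.0)  # heater power %
        temp[k] = temp[k - 1] + dt / tau * (gain * u - temp[k - 1])
        prev_err = err
    return t, temp

t, temp = simulate_pid(kp=4.0, ki=0.05, kd=2.0)
overshoot = 100.0 * max(temp.max() - 50.0, 0.0) / 50.0
rise_idx = int(np.argmax(temp >= 0.9 * 50.0))
print(f"overshoot = {overshoot:.1f} %, rise time ~ {t[rise_idx]:.0f} s")
```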

Keywords: drying, microcontroller, optimum controller, PID controller

Procedia PDF Downloads 294
764 Construction of Submerged Aquatic Vegetation Index through Global Sensitivity Analysis of Radiative Transfer Model

Authors: Guanhua Zhou, Zhongqi Ma

Abstract:

Submerged aquatic vegetation (SAV) in wetlands can absorb nitrogen and phosphorus effectively to prevent the eutrophication of water. It is feasible to monitor the distribution of SAV through remote sensing, but because the vegetation signal is weakened by the overlying water body, traditional terrestrial vegetation indices are not applicable. This paper aims at constructing a SAV index to enhance the vegetation signal and distinguish SAV from the water body. The methodology is as follows: (1) select the bands sensitive to the vegetation parameters based on a global sensitivity analysis of a SAV canopy radiative transfer model; (2) taking the soil line concept as a reference, analyze the distribution of SAV and water reflectance simulated by the SAV canopy model and a semi-analytical water model in the two-dimensional space built by different sensitive bands; (3) select the band combinations with better separation performance between SAV and water, and use them to build SAV indices in the form of the normalized difference vegetation index (NDVI); (4) analyze the sensitivity of the indices to the water and vegetation parameters, and choose the one more sensitive to vegetation parameters. It is shown that the index formed from the bands with central wavelengths at 705 nm and 842 nm has high sensitivity to leaf chlorophyll content while being less affected by water constituents. The model simulation shows a weak, generally negative correlation of the SAV index with increasing water depth. Moreover, the index enhances the capability to separate SAV from water compared to NDVI. The SAV index is expected to have potential in parameter inversion for wetland remote sensing.
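A minimal sketch of the resulting index, in the NDVI-like form built from the 705 nm and 842 nm bands named in the abstract, is shown below; the sample reflectance values are illustrative, not simulated model output.

```python
import numpy as np

def sav_index(r705, r842):
    """Normalized-difference SAV index from the two sensitive bands.

    r705, r842 : reflectance at central wavelengths 705 nm and 842 nm.
    The NDVI-like form follows the abstract; the example reflectances
    below are illustrative only.
    """
    r705, r842 = np.asarray(r705, float), np.asarray(r842, float)
    return (r842 - r705) / (r842 + r705)

# Illustrative pixels: open water vs. submerged vegetation
print(sav_index(r705=[0.030, 0.045], r842=[0.015, 0.080]))
```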

Keywords: global sensitivity analysis, radiative transfer model, submerged aquatic vegetation, vegetation indices

Procedia PDF Downloads 256
763 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to be able to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the two-parameter Weibull (bi-Weibull) distribution function, whose trend is proportional to the Weibull probability density function. In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trend functions, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse, with simulated data, the computational problems associated with the parameters, an issue of great importance in its application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
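As a small illustration of the maximum-likelihood step, the sketch below fits a two-parameter Weibull to simulated data with SciPy; it covers only the marginal Weibull law, not the full diffusion-process inference developed in the paper, and the parameter values are arbitrary.

```python
from scipy.stats import weibull_min

# Minimal sketch of the inference step: fit a two-parameter Weibull
# (shape, scale) to simulated data by maximum likelihood. The "true"
# parameters below are arbitrary; the paper estimates the diffusion-process
# parameters, which involves the full transition density, not only the
# marginal Weibull law.
sample = weibull_min.rvs(c=1.8, scale=2.5, size=2000, random_state=0)

shape_hat, loc_hat, scale_hat = weibull_min.fit(sample, floc=0)  # fix location at 0
print(f"estimated shape = {shape_hat:.3f}, scale = {scale_hat:.3f}")
```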

Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter weibull density function

Procedia PDF Downloads 303
762 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works exclusively with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data that pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as the primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot encoding schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields similar accuracy but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and the corresponding probability of landslide occurrence. In this way, every informative combination of states can be examined.
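The data-preparation step RA requires, binning a continuous layer and tabulating the landslide probability for every combination of discrete states, might look like the sketch below; the column names, bin edges, and synthetic records are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Sketch: bin a continuous layer (porosity) into discrete classes, then
# tabulate landslide probability for every observed combination of discrete
# IV states. All names and values here are illustrative placeholders.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "soil_class": rng.choice(["A", "B", "C"], size=500),
    "bedrock": rng.choice(["basalt", "sandstone"], size=500),
    "porosity": rng.uniform(0.05, 0.45, size=500),
    "landslide": rng.integers(0, 2, size=500),          # DV: occurrence (0/1)
})
df["porosity_bin"] = pd.cut(df["porosity"], bins=[0.0, 0.15, 0.30, 0.45],
                            labels=["low", "mid", "high"])

# Probability of landslide occurrence for every IV state combination
table = (df.groupby(["soil_class", "bedrock", "porosity_bin"], observed=True)["landslide"]
           .mean()
           .rename("p_landslide"))
print(table.head())
```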

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 62
761 Deriving Framework for Slum Rehabilitation through Environmental Perspective: Case of Mumbai

Authors: Ashwini Bhosale, Yogesh Patil

Abstract:

Urban areas are extremely complicated environmental settings, where the health and well-being of individuals and populations are governed by a large number of bio-physical, socio-economic, and inclusive aspects. Although poverty and slums are prime issues under the UN-HABITAT agenda of environmental sustainability, slums, an inevitable part of the urban environment, have not been accounted for in inclusive city planning. Developing nations, where about 60% of the world's slum population resides, are increasingly under pressure to uplift the urban poor, particularly slum dwellers. On the positive side, the new slum redevelopment projects have succeeded in providing legitimized, more permanent and stable shelter for low-income people, as well as individualized sanitation and water supply. However, they unfortunately follow a "one type fits all" approach and exhibit no response to the climatic design needs of Mumbai. The thesis focuses on the study of environmental perspectives in the context of daylight, natural ventilation, and social aspects in the design process of slum-rehabilitation schemes (SRS), taking Mumbai as the case. It attempts to investigate Indian approaches to SRS and concludes with strategies to be incorporated in SRS to improve the overall SRS environment. The main objectives of this work were to identify and study the spatial configuration and the possibilities for daylight and natural ventilation in slum-rehabilitated buildings. The performance of the proposed method was evaluated by comparison with the daylight luminance simulated by lighting software, namely ECOTECT, and with measurements under real skies, whereas for the ventilation study the FLOW DESIGN software was used.

Keywords: urban environment, slum-rehabilitation, daylight, natural-ventilation, architectural consequences

Procedia PDF Downloads 383
760 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

This work considers a methodology for evaluating the economic benefit of providing a primary frequency control unit using a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power that maintains the frequency drop inside a given threshold under a given contingency is identified and compared using DigSilent's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the PowerFactory analysis is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that the 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
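A minimal sketch of the two control laws compared in the abstract, a basic deadband droop and a hysteresis variant, is given below; the droop gain, deadband, and release threshold are illustrative assumptions, not the values used in the PowerFactory/Simulink study.

```python
import numpy as np

def bess_power(freq_dev, p_max=10.0, deadband=0.05, hysteresis=False, state=False):
    """Sketch of two primary-frequency-control laws for a BESS.

    freq_dev : grid frequency deviation from nominal [Hz]
    p_max    : BESS power rating [MW]
    deadband : half-width of the control deadband [Hz]
    Basic control responds only outside the deadband; hysteresis control,
    once triggered, keeps responding until the deviation returns close to
    zero (the release threshold below is an assumption).
    Returns (power command, updated hysteresis state).
    """
    droop = -p_max / 0.2 * freq_dev          # illustrative droop: full power at 0.2 Hz
    if not hysteresis:
        p = droop if abs(freq_dev) > deadband else 0.0
        return float(np.clip(p, -p_max, p_max)), state
    active = state or abs(freq_dev) > deadband
    if active and abs(freq_dev) < 0.01:      # release threshold (assumed)
        active = False
    p = droop if active else 0.0
    return float(np.clip(p, -p_max, p_max)), active

# A month of frequency deviations would be looped through like this:
state = False
for f_dev in [0.02, 0.08, 0.04, 0.005]:
    p, state = bess_power(f_dev, hysteresis=True, state=state)
    print(f"dev = {f_dev:+.3f} Hz -> P = {p:+.2f} MW")
```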

Keywords: battery energy storage system, electrical network frequency stability, frequency control unit, PowerFactory

Procedia PDF Downloads 127
759 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method

Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga

Abstract:

Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and of the resulting machined product depends on parameters such as tool geometry, material, and cutting conditions. However, the relationships between these parameters and the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using the LS-DYNA software and a Smoothed Particle Hydrodynamic (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulated flow stress versus strain at various temperatures was also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against time-scaling issues. Experimental work was also done to investigate the effects of friction, rake angle, and tool tip radius on the simulation.
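For reference, the Johnson-Cook flow stress named among the constitutive models can be evaluated as below; the parameter set is a commonly quoted one for AISI 1045 but is given only as an illustrative assumption, and the paper's calibrated constants (and its Z-A model) may differ.

```python
import numpy as np

def johnson_cook_stress(strain, strain_rate, temp,
                        A=553.1e6, B=600.8e6, n=0.234, C=0.0134, m=1.0,
                        ref_rate=1.0, t_room=293.0, t_melt=1733.0):
    """Johnson-Cook flow stress [Pa] from plastic strain, strain rate
    and temperature: (A + B*eps^n)(1 + C*ln(rate/ref))(1 - T*^m).

    The constants above are illustrative placeholders, not the paper's
    calibrated values for AISI 1045.
    """
    t_star = (temp - t_room) / (t_melt - t_room)
    return ((A + B * strain**n)
            * (1.0 + C * np.log(strain_rate / ref_rate))
            * (1.0 - np.clip(t_star, 0.0, 1.0)**m))

print(f"{johnson_cook_stress(strain=0.3, strain_rate=1e3, temp=600.0) / 1e6:.0f} MPa")
```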

Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses

Procedia PDF Downloads 255
758 Radiation Protection Assessment of the Emission of a d-t Neutron Generator: Simulations with MCNP Code and Experimental Measurements in Different Operating Conditions

Authors: G. M. Contessa, L. Lepore, G. Gandolfo, C. Poggi, N. Cherubini, R. Remetti, S. Sandri

Abstract:

Practical guidelines are provided in this work for the safe use of a portable d-t Thermo Scientific MP-320 neutron generator producing pulsed 14.1 MeV neutron beams. The neutron generator's emission was tested experimentally and reproduced with the MCNPX Monte Carlo code. Simulations were particularly accurate; even the generator's internal components were reproduced on the basis of ad hoc X-ray radiographic images. Measurement campaigns were conducted under different standard experimental conditions using an LB 6411 neutron detector properly calibrated at three different energies, comparing simulated and experimental data. In order to estimate the dose to the operator versus the operating conditions and the energy spectrum, the most appropriate value of the conversion factor between neutron fluence and ambient dose equivalent was identified, taking into account both the direct and scattered components. The results of the simulations show that, in real situations, when there is no information about the neutron spectrum at the point where the dose has to be evaluated, it is possible - and in any case conservative - to convert the measured count rate using the conversion factor corresponding to 14 MeV. This outcome has general value when using this type of generator, enabling a more accurate design of experimental activities in different setups. The increasingly widespread use of this type of device for industrial and medical applications makes the results of this work of interest in various situations, especially as a support for the definition of appropriate radiation protection procedures and, in general, for risk analysis.

Keywords: instrumentation and monitoring, management of radiological safety, measurement of individual dose, radiation protection of workers

Procedia PDF Downloads 129
757 Development of Method for Recovery of Nickel from Aqueous Solution Using 2-Hydroxy-5-Nonyl- Acetophenone Oxime Impregnated on Activated Charcoal

Authors: A. O. Adebayo, G. A. Idowu, F. Odegbemi

Abstract:

Investigations on the recovery of nickel from aqueous solution using 2-hydroxy-5-nonyl-acetophenone oxime (LIX-84I) impregnated on activated charcoal were carried out. The LIX-84I was impregnated into the pores of dried activated charcoal by the dry method, and optimum conditions for different equilibrium parameters (pH, adsorbent dosage, extractant concentration, agitation time, and temperature) were determined using a simulated nickel solution. Kinetics and adsorption isotherm studies were also carried out. It was observed that the recovery efficiency with LIX-84I impregnated on charcoal depended on the pH of the aqueous solution, as there was little or no recovery at pH below 4. However, as the pH was raised, the percentage recovery increased and peaked at pH 5.0. The recovery was found to increase with temperature up to 60 °C. It was also observed that nickel adsorbed onto the loaded charcoal best at a lower extractant concentration (0.1 M) compared with higher concentrations. Similarly, a moderately low dosage (1 g) of the adsorbent showed better recovery than larger dosages. These optimum conditions were used to recover nickel from the leachate of Ni-MH batteries dissolved in sulphuric acid, and a 99.6% recovery was attained. Adsorption isotherm studies showed that the equilibrium data fitted the Temkin model best, with a negative value of the constant b (-1.017 J/mol) and a high correlation coefficient, R², of 0.9913. Kinetic studies showed that the adsorption process followed a pseudo-second-order model. Thermodynamic parameter values (∆G⁰, ∆H⁰, and ∆S⁰) showed that the adsorption was endothermic and spontaneous. The impregnated charcoal appreciably recovered nickel using a relatively smaller volume of extractant than is required in solvent extraction. Desorption studies showed that the loaded charcoal is reusable three times, and so might be economical for nickel recovery from waste batteries.
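The pseudo-second-order fit mentioned in the kinetic studies is typically obtained from the linearized form t/qt = 1/(k2·qe²) + t/qe, as in the sketch below; the time and uptake values are illustrative, not the measured data.

```python
import numpy as np

# Pseudo-second-order kinetic fit: regressing t/qt on t gives
# slope = 1/qe and intercept = 1/(k2*qe^2). Data below are illustrative.
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)   # contact time [min]
qt = np.array([2.1, 3.4, 4.6, 5.5, 5.9, 6.1, 6.2])        # uptake at time t [mg/g]

slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope                       # equilibrium uptake [mg/g]
k2 = 1.0 / (intercept * qe**2)         # rate constant [g/(mg*min)]
print(f"qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```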

Keywords: charcoal, impregnated, LIX-84I, nickel, recovery

Procedia PDF Downloads 145
756 Health Trajectory Clustering Using Deep Belief Networks

Authors: Farshid Hajati, Federico Girosi, Shima Ghassempour

Abstract:

We present a Deep Belief Network (DBN) method for clustering health trajectories. A Deep Belief Network is a deep architecture that consists of a stack of Restricted Boltzmann Machines (RBMs). In a deep architecture, each layer learns more complex features than the previous layers. The proposed method relies on the DBN for clustering without using the backpropagation learning algorithm. The proposed DBN has better performance than a deep neural network due to the initialization of the connecting weights. We use the Contrastive Divergence (CD) method for training the RBMs, which increases the performance of the network. The performance of the proposed method is evaluated extensively on the Health and Retirement Study (HRS) database. The University of Michigan Health and Retirement Study is a nationally representative longitudinal study that has surveyed more than 27,000 elderly and near-elderly Americans since its inception in 1992. Participants are interviewed every two years, and data are collected on physical and mental health, insurance coverage, financial status, family support systems, labor market status, and retirement planning. The dataset is publicly available, and we use the RAND HRS version L, which is an easy-to-use, cleaned-up version of the data. The size of the sample dataset is 268, and the length of the trajectories is equal to 10. The trajectories do not stop when a patient dies and represent 10 different interviews of living patients. Compared to state-of-the-art benchmarks, the experimental results show the effectiveness and superiority of the proposed method in clustering health trajectories.
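A minimal sketch of CD-1 training for a single RBM layer, the building block of the DBN described above, is given below; in a DBN the hidden activations of each trained layer feed the next layer. The layer sizes and toy data are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden=16, lr=0.05, epochs=50, seed=0):
    """Minimal CD-1 training of one Restricted Boltzmann Machine layer.

    data : binary matrix of shape (n_samples, n_visible). This is an
    illustrative sketch, not the authors' network configuration.
    """
    rng = np.random.default_rng(seed)
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    b_v, b_h = np.zeros(n_vis), np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)                       # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)                     # one Gibbs step
        p_h1 = sigmoid(p_v1 @ W + b_h)                     # negative phase
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_v, b_h

# Toy binary "trajectories" of length 10 (e.g. thresholded health scores)
data = (np.random.default_rng(1).random((268, 10)) > 0.5).astype(float)
W, b_v, b_h = train_rbm(data)
print("learned weight matrix shape:", W.shape)
```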

Keywords: health trajectory, clustering, deep learning, DBN

Procedia PDF Downloads 366
755 The Classification Accuracy of Finance Data through Holder Functions

Authors: Yeliz Karaca, Carlo Cattani

Abstract:

This study focuses on the local Holder exponent as a measure of function regularity for time series related to finance data. The attributes of a finance dataset covering 13 countries (India, China, Japan, Sweden, France, Germany, Italy, Australia, Mexico, United Kingdom, Argentina, Brazil, USA) located on 5 different continents (Asia, Europe, Australia, North America, and South America) have been examined. These countries are the ones most affected by the attributes related to financial development, covering the period from 2012 to 2017. Our study is concerned with the most important attributes that have an impact on the financial development of the countries identified. Our method comprises the following stages: (a) among the multifractal methods and Brownian motion Holder regularity functions (polynomial, exponential), significant and self-similar attributes are identified; (b) the significant and self-similar attributes are applied to Artificial Neural Network (ANN) algorithms (Feed Forward Back Propagation (FFBP) and Cascade Forward Back Propagation (CFBP)); (c) the classification accuracy outcomes are compared with respect to the attributes that affect the countries' financial development. This study has made it possible to reveal, through the application of ANN algorithms, how the most significant attributes are identified within the relevant dataset via the Holder functions (polynomial and exponential).

Keywords: artificial neural networks, finance data, Holder regularity, multifractals

Procedia PDF Downloads 242
754 Data Compression in Ultrasonic Network Communication via Sparse Signal Processing

Authors: Beata Zima, Octavio A. Márquez Reyes, Masoud Mohammadgholiha, Jochen Moll, Luca de Marchi

Abstract:

This document presents an approach to using compressed sensing for signal encoding and information transfer within a guided wave sensor network composed of specially designed frequency steerable acoustic transducers (FSATs). Wave propagation in a damaged plate was simulated using the commercial FEM-based software COMSOL. Guided waves were excited by means of FSATs, characterized by the special shape of their electrodes and modeled using PIC255 piezoelectric material. The special shape of the FSAT allows wave energy to be focused in a certain direction according to the frequency components of its actuation signal, which makes a larger monitored area available. The process begins when an FSAT detects and records a reflection from damage in the structure; this signal is then encoded and prepared for transmission using a combined approach based on Compressed Sensing Matching Pursuit and Quadrature Amplitude Modulation (QAM). After the signal has been encoded into binary characters, the information is transmitted between the nodes of the network. The message reaches the last node, where it is finally decoded and processed to be used for damage detection and localization purposes. The main aim of the investigation is to determine the location of the detected damage using the reconstructed signals. The study demonstrates that the special steering capabilities of FSATs not only facilitate the detection of damage but also permit transmitting the damage information to a chosen area in a specific direction of the investigated structure.
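A minimal sketch of the sparse-coding step is given below: greedy matching pursuit over a DCT dictionary yields the atom indices and coefficients that would subsequently be quantized and QAM-mapped for transmission. The dictionary, atom budget, and test signal are illustrative assumptions; the paper's combined CS/QAM scheme may differ.

```python
import numpy as np
from scipy.fft import dct, idct

def matching_pursuit(signal, n_atoms=10):
    """Greedy matching pursuit on an orthogonal DCT dictionary.

    Returns the indices and coefficients of the selected atoms (the sparse
    code to be quantized and modulated) and the final residual. The DCT
    dictionary and atom budget are illustrative assumptions.
    """
    residual = signal.astype(float).copy()
    idx, coeffs = [], []
    for _ in range(n_atoms):
        c = dct(residual, norm="ortho")        # correlations with all DCT atoms
        k = int(np.argmax(np.abs(c)))
        idx.append(k)
        coeffs.append(c[k])
        atom_coeffs = np.zeros_like(residual)
        atom_coeffs[k] = c[k]
        residual = residual - idct(atom_coeffs, norm="ortho")
    return np.array(idx), np.array(coeffs), residual

# Illustrative guided-wave-like burst
t = np.linspace(0, 1e-3, 1024)
sig = np.sin(2 * np.pi * 50e3 * t) * np.exp(-((t - 3e-4) / 1e-4) ** 2)
idx, coeffs, res = matching_pursuit(sig, n_atoms=10)
print(f"kept {len(idx)} atoms, residual energy ratio = {np.sum(res**2) / np.sum(sig**2):.3f}")
```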

Keywords: data compression, ultrasonic communication, guided waves, FEM analysis

Procedia PDF Downloads 120
753 Effects of a Simulated Power Cut in Automatic Milking Systems on Dairy Cows Heart Activity

Authors: Anja Gräff, Stefan Holzer, Manfred Höld, Jörn Stumpenhausen, Heinz Bernhardt

Abstract:

In view of the increasing quantity of 'green energy' from renewable raw materials and photovoltaic facilities, it is quite conceivable that power supply variations may occur, so that constantly working machines like automatic milking systems (AMS) may break down temporarily. The usage of farm-made energy is steadily increasing in order to keep energy costs as low as possible. As a result, power cuts are likely to happen more frequently. Current work in the framework of the project 'stable 4.0' focuses on possible stress reactions by simulating power cuts of up to four hours on dairy farms. Based on heart activity, the aim was to determine whether stress on dairy cows increases under these circumstances. In order to simulate a power cut, 12 randomly selected cows from 2 herds were not admitted to the AMS for at least two hours on three consecutive days. The heart rates of the cows were measured and the collected data evaluated with the HRV program Kubios version 2.1 on the basis of eight parameters (HR, RMSSD, pNN50, SD1, SD2, LF, HF, and LF/HF). Furthermore, stress reactions were examined closely via video analysis, milk yield, rumination activity, pedometers, and measurements of cortisol metabolites. In conclusion, it turned out that during the test only some animals suffered from minor stress symptoms when they tried to get into the AMS at their regular milking time but could not be milked because the system was manipulated. However, the stress level during a regular 'time-dependent milking rejection' was just as high. The study therefore concludes that the low psychological stress level in the case of a 2-4 hour failure of an AMS does not have any impact on animal welfare and health.
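Two of the time-domain HRV parameters listed above, RMSSD and pNN50, are computed from successive RR intervals as in the sketch below; the sample intervals are illustrative, not measured cow data.

```python
import numpy as np

def rmssd_pnn50(rr_ms):
    """RMSSD and pNN50 from successive RR (beat-to-beat) intervals in ms.

    RMSSD is the root mean square of successive differences; pNN50 is the
    percentage of successive differences exceeding 50 ms. The sample
    intervals below are illustrative placeholders.
    """
    diff = np.diff(np.asarray(rr_ms, dtype=float))
    rmssd = np.sqrt(np.mean(diff ** 2))
    pnn50 = 100.0 * np.mean(np.abs(diff) > 50.0)
    return rmssd, pnn50

print(rmssd_pnn50([820, 835, 790, 810, 870, 845, 800]))
```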

Keywords: dairy cow, heart activity, power cut, stable 4.0

Procedia PDF Downloads 309
752 A System Dynamics Model for Analyzing Customer Satisfaction in Healthcare Systems

Authors: Mahdi Bastan, Ali Mohammad Ahmadvand, Fatemeh Soltani Khamsehpour

Abstract:

The sustainable development of health organizations has nowadays become highly affected by customer satisfaction, due to significant changes in the business environment of the healthcare system and the emergence of the competitiveness paradigm. If we look at hospitals and other health organizations as service providers, the satisfaction of employees as internal customers and of patients as external customers is of significant importance to success in the health business. Furthermore, the satisfaction rate can be considered a perceived quality measure in the performance assessment of healthcare organizations. Several studies have been carried out to identify the factors affecting patients' satisfaction in health organizations. However, from a systemic point of view, the complex causal relations among the many components of the healthcare system are an issue whose understanding and sustainable management require a grasp of the dynamic complexity, an appropriate cognition of the different components, and of the effective relationships among them, ultimately leading to the identification of the generative structure of patients' satisfaction. Hence, this paper applies system dynamics approaches coherently and methodologically to represent the systemic structure of customer satisfaction in a health system, involving the constituent components and the interactions among them. The results of different policies applied to the system are then simulated by developing mathematical models, identifying leverage points, and using the scenario-making technique, and the best solutions for improving customer satisfaction with the services are presented. The presented approach supports taking advantage of decision support systems. Additionally, relying on an understanding of the system's behavioral dynamics, effective policies for improving the health system can be recognized.

Keywords: customer satisfaction, healthcare, scenario, simulation, system dynamics

Procedia PDF Downloads 406
751 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identify Determinants of HIV Testing for People with Age above Fourteen Years in Ethiopia Using Data Mining Techniques: EDHS 2011

Authors: S. Abera, T. Gidey, W. Terefe

Abstract:

Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, care, and support services. Hence, predictive data mining techniques can be of great benefit in analyzing and discovering new patterns from huge datasets such as the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and to identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and to explore association rules between HIV testing and the selected attributes among adult Ethiopians. Decision tree, Naïve Bayes, logistic regression, and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the remaining 12,906 (42%) had been tested. Ethiopians with a higher wealth index, a higher educational level, aged 20 to 29 years, having no stigmatizing attitude towards HIV-positive persons, urban residents, having HIV-related knowledge, exposure to family planning information in the mass media, and knowing a place to get tested for HIV showed increased patterns of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
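As a small illustration of the modeling step, the sketch below trains a decision tree to predict "ever tested for HIV" from a handful of survey-style attributes; the column names and toy records are placeholders, not EDHS 2011 variables or values.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Toy survey-style records; all names and values are placeholders.
df = pd.DataFrame({
    "wealth_index":    [1, 5, 3, 4, 2, 5, 1, 3],
    "education":       [0, 3, 2, 2, 1, 3, 0, 1],
    "age_group":       [1, 2, 2, 3, 1, 2, 4, 3],
    "urban":           [0, 1, 1, 1, 0, 1, 0, 0],
    "knows_test_site": [0, 1, 1, 1, 0, 1, 0, 1],
    "tested":          [0, 1, 1, 1, 0, 1, 0, 0],   # target: ever tested for HIV
})
X, y = df.drop(columns="tested"), df["tested"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```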

Keywords: data mining, HIV, testing, ethiopia

Procedia PDF Downloads 493
750 Hydrological Response of the Glacierised Catchment: Himalayan Perspective

Authors: Sonu Khanal, Mandira Shrestha

Abstract:

Snow and glaciers are the largest dependable reserved sources of water for the river systems originating in the Himalayas, so accurate estimates of the volume of water contained in the snowpack and of the rate of release of water from snow and glaciers are needed for efficient management of the water resources. This research assesses the energy exchanges between the snowpack, the air above, and the soil below according to a mass and energy balance, which makes it more apposite than models using a simple temperature index for computing snow and glacier melt. UEBGrid, a distributed energy-balance model, is used to calculate the melt, which is then routed by Geo-SFM. The model's robustness is maintained by incorporating the albedo generated from Landsat-7 ETM images on a seasonal basis for the year 2002-2003 and a substrate map derived from TM. The substrate file predominantly includes 4 major thematic layers, viz. snow, clean ice, glaciers, and barren land. This approach makes use of CPC RFE-2 and MERRA gridded datasets as the source of precipitation and climatic variables. The subsequent model run for the years 2002-2008 shows that a total annual melt of 17.15 meters is generated from the Marshyangdi Basin, of which 71% is contributed by glaciers, 18% by rain, and the rest by snowmelt. The albedo file is decisive in governing the melt dynamics, as a 30% increase in the generated surface albedo results in a 10% decrease in the simulated discharge. The melt routed with the land cover and soil variables using Geo-SFM shows a Nash-Sutcliffe efficiency of 0.60 against observed discharge for the study period.
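The Nash-Sutcliffe efficiency reported above is computed as one minus the ratio of the squared model error to the variance of the observations, as in the sketch below; the discharge series shown are illustrative placeholders, not the Marshyangdi data.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs, sim = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative observed vs. simulated discharge values [m^3/s]
obs = [120.0, 150.0, 300.0, 520.0, 480.0, 260.0]
sim = [100.0, 170.0, 280.0, 560.0, 430.0, 240.0]
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}")
```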

Keywords: glacier, glacier melt, snowmelt, energy balance

Procedia PDF Downloads 452
749 Numerical Analysis for Soil Compaction and Plastic Points Extension in Pile Drivability

Authors: Omid Tavasoli, Mahmoud Ghazavi

Abstract:

A numerical analysis of the drivability of piles with different geometries is presented. In this paper, a three-dimensional finite difference analysis of plastic point extension and soil compaction under the effect of pile driving is carried out. Four pile configurations are investigated: a cylindrical pile; a fully tapered pile; a T-C pile, consisting of a top tapered segment and a lower cylindrical segment; and a C-T pile, which has a top cylindrical part followed by a tapered part. All piles, driven to a total penetration depth of 16 m, have the same length, equivalent surface areas, and approximately identical material volumes. An idealization of the pile-soil system during pile driving is adopted for this approach. A linear elastic material is assumed to model the vertical pile behavior, while the soil obeys an elasto-plastic constitutive law whose failure is controlled by the Mohr-Coulomb failure criterion. The slip that occurs at the pile-soil contact surfaces along the shaft and the toe during pile driving is simulated with interface elements. All initial and boundary conditions are the same in all analyses. Quiet boundaries are used to prevent wave reflection in the lateral and vertical directions of the soil. The results obtained from the numerical analyses were compared with other available numerical data and laboratory tests, indicating a satisfactory agreement. It is shown that with an increasing taper angle, the permanent pile toe settlement increases and, therefore, the extension of plastic points increases. These are interesting phenomena in pile driving and are on the safe side for driven piles.

Keywords: pile driving, finite difference method, non-uniform piles, pile geometry, pile set, plastic points, soil compaction

Procedia PDF Downloads 481
748 Simulation and Fabrication of Plasmonic Lens for Bacteria Detection

Authors: Sangwoo Oh, Jaewoo Kim, Dongmin Seo, Jaewon Park, Yongha Hwang, Sungkyu Seo

Abstract:

Plasmonics has been regarded as one of the most powerful bio-sensing modalities for evaluating bio-molecular interactions in real time. However, most plasmonic sensing methods are based on labeling with metallic nanoparticles, e.g., gold or silver, as optical modulation markers, which are non-recyclable and expensive. This plasmonic modulation can usually be achieved through various nano structures, e.g., nano-hole arrays. Among those structures, the plasmonic lens has been regarded as a unique plasmonic structure due to its light-focusing characteristics. In this study, we introduce a custom-designed plasmonic lens array for bio-sensing, which was simulated by a finite-difference time-domain (FDTD) approach and fabricated by a top-down approach. In our work, we performed FDTD simulations of various plasmonic lens designs for bacteria sensing, i.e., Salmonella and Hominis. We optimized the design parameters of the plasmonic lens, i.e., radius, shape, and material. The simulation results showed the change in the peak intensity value with the introduction of each bacterium and antigen, i.e., a peak intensity of 1.8711 a.u. with the introduction of an antibody layer of 15 nm thickness. For Salmonella, the peak intensity changed from 1.8711 a.u. to 2.3654 a.u., and for Hominis, the peak intensity changed from 1.8711 a.u. to 3.2355 a.u. This significant shift in intensity due to the interaction between bacteria and antigen showed a promising sensing capability of the plasmonic lens. With batch processing and bulk production of this nano-scale design, the cost of biological sensing can be significantly reduced, holding great promise in the fields of clinical diagnostics and bio-defense.

Keywords: plasmonic lens, FDTD, fabrication, bacteria sensor, salmonella, hominis

Procedia PDF Downloads 268
747 Wind Speed Forecasting Based on Historical Data Using Modern Prediction Methods in Selected Sites of Geba Catchment, Ethiopia

Authors: Halefom Kidane

Abstract:

This study aims to assess the wind resource potential and characterize the urban wind patterns in Hawassa City, Ethiopia. The estimation and characterization of wind resources are crucial for sustainable urban planning, renewable energy development, and climate change mitigation strategies. A secondary data collection method was used to carry out the study. The data collected at 2 meters were analyzed statistically and extrapolated to the standard heights of 10 meters and 30 meters using the power law equation. The standard deviation method was used to calculate the values of the scale and shape factors. From the analysis presented, the maximum and minimum mean daily wind speeds at 2 meters were 1.33 m/s and 0.05 m/s in 2016, 1.67 m/s and 0.14 m/s in 2017, and 1.61 m/s and 0.07 m/s in 2018, respectively. The maximum monthly average wind speed of Hawassa City at 2 meters in 2016 was observed in December, at around 0.78 m/s, while in 2017 the maximum wind speed was recorded in January with a magnitude of 0.80 m/s, and in 2018 June had the maximum speed, 0.76 m/s. On the other hand, October was the month with the minimum mean wind speed in all years, with values of 0.47 m/s in 2016, 0.47 m/s in 2017, and 0.34 m/s in 2018. The annual mean wind speed at a height of 2 meters was 0.61 m/s in 2016, 0.64 m/s in 2017, and 0.57 m/s in 2018. From the extrapolation, the annual mean wind speeds for the years 2016, 2017, and 2018 were 1.17 m/s, 1.22 m/s, and 1.11 m/s at a height of 10 meters, and 3.34 m/s, 3.78 m/s, and 3.01 m/s at a height of 30 meters, respectively. Thus, the site falls mainly into class I wind speeds, even at the extrapolated heights.
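A minimal sketch of the two calculations described above, power-law extrapolation of the 2 m wind speeds and the standard-deviation (empirical) method for the Weibull shape and scale factors, is given below; the shear exponent and the sample speeds are illustrative assumptions, not the study's values.

```python
import numpy as np
from scipy.special import gamma

def extrapolate_wind(v_ref, z_ref=2.0, z_target=10.0, alpha=1.0 / 7.0):
    """Power-law extrapolation v(z) = v_ref * (z / z_ref)**alpha.

    alpha = 1/7 is the common open-terrain exponent and is an assumption;
    the study derives its own exponent for the site.
    """
    return v_ref * (z_target / z_ref) ** alpha

def weibull_std_method(speeds):
    """Shape k and scale c from the standard-deviation (empirical) method:
    k = (sigma / v_mean)^(-1.086), c = v_mean / Gamma(1 + 1/k)."""
    v = np.asarray(speeds, float)
    v_mean, sigma = v.mean(), v.std(ddof=1)
    k = (sigma / v_mean) ** -1.086
    c = v_mean / gamma(1.0 + 1.0 / k)
    return k, c

# Illustrative daily means at 2 m, not the measured Hawassa data
v2m = np.array([0.4, 0.7, 0.5, 0.9, 0.6, 0.8, 0.55])
print("at 10 m:", np.round(extrapolate_wind(v2m, z_target=10.0), 2))
print("Weibull (k, c) at 2 m:", weibull_std_method(v2m))
```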

Keywords: artificial neural networks, forecasting, min-max normalization, wind speed

Procedia PDF Downloads 72
746 Fabrication of Cheap Novel 3d Porous Scaffolds Activated by Nano-Particles and Active Molecules for Bone Regeneration and Drug Delivery Applications

Authors: Mostafa Mabrouk, Basma E. Abdel-Ghany, Mona Moaness, Bothaina M. Abdel-Hady, Hanan H. Beherei

Abstract:

Tissue engineering has become a promising field for bone repair and regenerative medicine, in which cultured cells, scaffolds, and osteogenic inductive signals are used to regenerate tissues. The annual cost of treating bone defects in Egypt has been estimated at many billions, while enormous sums are spent on imported bone grafts for bone injuries, tumors, and other pathologies associated with defective fracture healing. The current study is aimed at developing a more strategic approach in order to speed up recovery after bone damage. This will reduce the risk of fatal surgical complications and improve the quality of life of people affected by such fractures. 3D scaffolds loaded with cheap nanoparticles that possess an osteogenic effect were prepared by nano-electrospinning. The microstructure and morphology of the 3D scaffolds were characterized using scanning electron microscopy (SEM). The physicochemical characterization was carried out using X-ray diffractometry (XRD) and infrared spectroscopy (IR). The physicomechanical properties of the 3D scaffolds were determined with a universal testing machine. The in vitro bioactivity of the 3D scaffolds was assessed in simulated body fluid (SBF). The bone-bonding ability of the novel 3D scaffolds was also evaluated. The obtained nanofibrous scaffolds demonstrated promising microstructural, physicochemical, and physicomechanical features appropriate for enhanced bone regeneration. Therefore, the utilized nanomaterials loaded with the drug are strongly recommended as cheap alternatives to growth factors.

Keywords: bone regeneration, cheap scaffolds, nanomaterials, active molecules

Procedia PDF Downloads 185
745 Fire and Explosion Consequence Modeling Using Fire Dynamic Simulator: A Case Study

Authors: Iftekhar Hassan, Sayedil Morsalin, Easir A Khan

Abstract:

Accidents involving fire have occurred frequently in recent times, and their causes show a great deal of variety, so the required intervention methods and risk assessment strategies are unique in each case. On September 4, 2020, a fire and explosion occurred in a confined space, caused by a methane gas leak from an underground pipeline in the Baitus Salat Jame mosque during the night (Esha) prayer in Narayanganj District, Bangladesh, killing 34 people. In this research, the incident is simulated using the Fire Dynamics Simulator (FDS) software to analyze and understand the nature of the accident and the associated consequences. FDS is an advanced computational fluid dynamics (CFD) system for fire-driven fluid flow that numerically solves a large eddy simulation form of the Navier-Stokes equations to simulate fire and smoke spread and to predict thermal radiation, toxic substance concentrations, and other relevant fire parameters. This study focuses on understanding the nature of the fire and evaluating the consequences of the thermal radiation caused by the vapor cloud explosion. An evacuation model was constructed to visualize the effect of evacuation time and the fractional effective dose (FED) for different types of agents. The results were presented by 3D animation, sliced pictures, and graphical representations to understand the fire hazards caused by thermal radiation or smoke due to the vapor cloud explosion. This study will help to design and develop appropriate response strategies for preventing similar accidents.

Keywords: consequence modeling, fire and explosion, fire dynamics simulation (FDS), thermal radiation

Procedia PDF Downloads 223
744 Alpha: A Groundbreaking Avatar Merging User Dialogue with OpenAI's GPT-3.5 for Enhanced Reflective Thinking

Authors: Jonas Colin

Abstract:

Standing at the vanguard of AI development, Alpha represents an unprecedented synthesis of logical rigor and human abstraction, meticulously crafted to mirror the user's unique persona and personality, a feat previously unattainable in AI development. Alpha, an avant-garde artefact in the realm of artificial intelligence, epitomizes a paradigmatic shift in personalized digital interaction, amalgamating user-specific dialogic patterns with the sophisticated algorithmic prowess of OpenAI's GPT-3.5 to engender a platform for enhanced metacognitive engagement and individualized user experience. Underpinned by a sophisticated algorithmic framework, Alpha integrates vast datasets through a complex interplay of neural network models and symbolic AI, facilitating a dynamic, adaptive learning process. This integration enables the system to construct a detailed user profile, encompassing linguistic preferences, emotional tendencies, and cognitive styles, tailoring interactions to align with individual characteristics and conversational contexts. Furthermore, Alpha incorporates advanced metacognitive elements, enabling real-time reflection and adaptation in communication strategies. This self-reflective capability ensures continuous refinement of its interaction model, positioning Alpha not just as a technological marvel but as a harbinger of a new era in human-computer interaction, where machines engage with us on a deeply personal and cognitive level, transforming our interaction with the digital world.

Keywords: chatbot, GPT 3.5, metacognition, symbiosis

Procedia PDF Downloads 68
743 Development of Fault Diagnosis Technology for Power System Based on Smart Meter

Authors: Chih-Chieh Yang, Chung-Neng Huang

Abstract:

In power systems, improving the fault diagnosis technology for transmission lines has always been a primary goal of power grid operators. In recent years, due to the rise of green energy, the addition of all kinds of distributed power also has an impact on the stability of the power system. Smart meters provide data recording and bidirectional transmission, while the adaptive neuro-fuzzy inference system (ANFIS) offers the learning and estimation characteristics of artificial intelligence. For the transmission network, in order to avoid misjudgment of the fault type and location due to the input of these unstable power sources, this study combines the above advantages of smart meters and ANFIS and proposes a method for identifying fault types and fault locations. In ANFIS training, the bus voltage and current information collected by smart meters is trained through the ANFIS tool in MATLAB to generate fault codes that identify different types of faults and the locations of faults. In addition, due to the uncertainty of distributed generation, a wind power system is added to the transmission network to verify the diagnostic correctness of the study. Simulation results show that the proposed method can correctly identify the fault type and location with greater efficiency and can deal with the interference caused by the addition of unstable power sources.

Keywords: ANFIS, fault diagnosis, power system, smart meter

Procedia PDF Downloads 133
742 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best-performing model.
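A minimal sketch of the final classification stage is given below: structured account features and embeddings of the multimedia posts are concatenated into one feature vector per account and fed to XGBoost. The feature dimensions and random labels are placeholders, not the paper's dataset.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Placeholder features: structured account statistics plus embeddings of posts.
rng = np.random.default_rng(0)
profile_feats = rng.normal(size=(600, 12))    # e.g. follower counts, engagement ratios
text_image_emb = rng.normal(size=(600, 64))   # e.g. NLP/image-model embeddings
X = np.hstack([profile_feats, text_image_emb])
y = rng.integers(0, 2, size=600)              # 1 = micro-influencer, 0 = other (random toy labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred), "f1:", f1_score(y_te, pred))
```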

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 179
741 Optimal Concentration of Fluorescent Nanodiamonds in Aqueous Media for Bioimaging and Thermometry Applications

Authors: Francisco Pedroza-Montero, Jesús Naín Pedroza-Montero, Diego Soto-Puebla, Osiris Alvarez-Bajo, Beatriz Castaneda, Sofía Navarro-Espinoza, Martín Pedroza-Montero

Abstract:

Nanodiamonds have been widely studied for their physical properties, including chemical inertness, biocompatibility, optical transparency from the ultraviolet to the infrared region, high thermal conductivity, and mechanical strength. In this work, we studied how the fluorescence spectrum of nanodiamonds quenches as a function of concentration in aqueous solutions, systematically ranging from 0.1 to 10 mg/mL. Our results demonstrate non-linear fluorescence quenching as the concentration increases for both NV zero-phonon lines; the 5 mg/mL concentration shows the maximum fluorescence emission. Furthermore, this behaviour is theoretically explained as an electronic recombination process that modulates the intensity of the NV centres. Finally, to gain more insight, the FRET methodology is used to determine the fluorescence efficiency in terms of the fluorophores' separation distance. Thus, the concentration level is simulated as follows: a small distance between nanodiamonds is considered a highly concentrated system, whereas a large distance corresponds to a low-concentration one. Although the 5 mg/mL concentration shows the maximum intensity, our main interest is focused on the concentration of 0.5 mg/mL, for which our studies demonstrate optimal human cell viability (99%). In this respect, this concentration has the feature of being as biocompatible as water, giving the possibility of internalizing the nanodiamonds in cells without harming the living medium. To this end, not only can we track nanodiamonds on the surface or inside the cell with excellent precision due to their fluorescence intensity, but we can also perform thermometry tests, transforming a fluorescence contrast image into a temperature contrast image.
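The FRET efficiency underlying the distance argument above follows the standard relation E = 1/(1 + (r/R0)^6), as in the sketch below; the Förster radius used is an illustrative assumption, not a measured parameter for these nanodiamonds.

```python
import numpy as np

def fret_efficiency(r, r0=5.0):
    """Standard FRET transfer efficiency E = 1 / (1 + (r / R0)^6).

    r  : donor-acceptor separation [nm], used here as a proxy for
         concentration (small r = high concentration).
    r0 : Förster radius [nm]; 5 nm is an illustrative assumption.
    """
    r = np.asarray(r, dtype=float)
    return 1.0 / (1.0 + (r / r0) ** 6)

# Shorter separations (higher concentration) give higher transfer efficiency,
# i.e. stronger quenching of the donor fluorescence.
for r in (2.0, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```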

Keywords: nanodiamonds, fluorescence spectroscopy, concentration, bioimaging, thermometry

Procedia PDF Downloads 401