Search results for: drift flow model
14403 Imputing the Minimum Social Value of Public Healthcare: A General Equilibrium Model of Israel
Authors: Erez Yerushalmi, Sani Ziv
Abstract:
The rising demand for healthcare services, without a corresponding rise in public supply, has led to a debate on whether to increase private healthcare provision - especially in hospital services and second-tier healthcare. Proponents of increasing private healthcare highlight gains in efficiency, while opponents highlight the risk to social welfare. None, however, provide a monetary measure of the social value and its impact on the economy. In this paper, we impute a minimum social value of public healthcare that corresponds to indifference between gains in efficiency and losses to social welfare. Our approach resembles contingent valuation methods that introduce a hypothetical market for non-commodities, but differs from them because we use numerical simulation techniques to exploit certain market failure conditions. We develop a general equilibrium model that distinguishes between public-private healthcare services and public-private financing. Furthermore, the social value is modelled as a by-product of healthcare services. The model is then calibrated to our unique health-focused Social Accounting Matrix of Israel, and simulates the introduction of a hypothetical health-labour market - given that it is heavily regulated in the baseline (i.e., the true situation in Israel today). For baseline parameters, we estimate the minimum social value at around 18% of public healthcare financing. The intuition is that the gain in economic welfare from improved efficiency is offset by the loss in social welfare due to a reduction in available social value. We furthermore simulate a deregulated healthcare scenario that internalizes the imputed social value and searches for the optimal weights of public and private healthcare provision.
Keywords: contingent valuation method (CVM), general equilibrium model, hypothetical market, private-public healthcare, social value of public healthcare
Procedia PDF Downloads 146
14402 Development and Investigation of Efficient Substrate Feeding and Dissolved Oxygen Control Algorithms for Scale-Up of Recombinant E. coli Cultivation Process
Authors: Vytautas Galvanauskas, Rimvydas Simutis, Donatas Levisauskas, Vykantas Grincas, Renaldas Urniezius
Abstract:
The paper deals with model-based development and implementation of efficient control strategies for recombinant protein synthesis in fed-batch E. coli cultivation processes. Based on experimental data, a kinetic dynamic model of the cultivation process was developed. This model was used to determine substrate feeding strategies during the cultivation. The proposed feeding strategy consists of two phases - a biomass growth phase and a recombinant protein production phase. In the first phase, a substrate-limited process is recommended in which the specific growth rate of biomass is about 90-95% of its maximum value. This reduces the glucose concentration in the medium, improves process repeatability, and reduces the formation of secondary metabolites and other unwanted by-products. The substrate limitation can be tightened to satisfy the restriction on the maximum oxygen transfer rate in the bioreactor and to guarantee the necessary dissolved carbon dioxide concentration in the culture medium. In the recombinant protein production phase, the level of substrate limitation and the specific growth rate are selected within the range that enables the optimal target protein synthesis rate. To account for complex process dynamics, to efficiently exploit the oxygen transfer capability of the bioreactor, and to maintain the required dissolved oxygen concentration, adaptive algorithms for dissolved oxygen control have been proposed. The developed model-based control strategies are useful in the scale-up of cultivation processes and accelerate the implementation of innovative biotechnological processes in industrial applications.
Keywords: adaptive algorithms, model-based control, recombinant E. coli, scale-up of bioprocesses
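For illustration, the substrate-limited phase described above is often implemented with an exponential feeding law that holds the specific growth rate at a set-point below its maximum. The following minimal sketch assumes invented parameter values (yield, maintenance coefficient, feed concentration); it is not the kinetic model or the constants from the paper.

```python
# Exponential substrate feeding to hold the specific growth rate at a
# set-point (e.g., 90-95% of mu_max), a standard fed-batch law; all
# parameter values below are illustrative, not taken from the paper.
import numpy as np

mu_max = 0.6             # 1/h, maximum specific growth rate (assumed)
mu_set = 0.92 * mu_max   # set-point: 92% of mu_max
Y_xs   = 0.5             # gX/gS, biomass yield on glucose (assumed)
m_s    = 0.04            # gS/gX/h, maintenance coefficient (assumed)
S_f    = 500.0           # g/L, glucose concentration in the feed (assumed)
X0, V0 = 5.0, 2.0        # g/L initial biomass, L initial volume (assumed)

def feed_rate(t_h):
    """Feed rate F(t) [L/h] that sustains mu_set under substrate limitation."""
    return (mu_set / Y_xs + m_s) * X0 * V0 * np.exp(mu_set * t_h) / S_f

for t in np.arange(0.0, 10.0, 2.0):
    print(f"t = {t:4.1f} h, F = {feed_rate(t):.4f} L/h")
```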
Procedia PDF Downloads 257
14401 Multi-Stream Graph Attention Network for Recommendation with Knowledge Graph
Abstract:
In recent years, graph neural networks have been widely used in knowledge graph recommendation. Existing recommendation methods based on graph neural networks extract information from the knowledge graph through entities and relations, which may not be an efficient way of extracting information. In order to better surface useful entity information for the current recommendation task in the knowledge graph, we propose an end-to-end neural network model based on a multi-stream graph attention mechanism (MSGAT), which can effectively integrate the knowledge graph into the recommendation system by evaluating the importance of entities from both the user and item perspectives. Specifically, we use the attention mechanism from the user's perspective to distil the neighborhood node information of the predicted item in the knowledge graph, enhance the user's information on items, and generate the feature representation of the predicted item. Because a user's historically clicked items reflect the user's interest distribution, we propose a multi-stream attention mechanism that, based on the user's preference for entities and relationships and the similarity between the item to be predicted and entities, aggregates the neighborhood entity information of the user's historically clicked items in the knowledge graph and generates the user's feature representation. We evaluate our model on three real recommendation datasets: Movielens-1M (ML-1M), LFM-1B 2015 (LFM-1B), and Amazon-Book (AZ-book). Experimental results show that, compared with state-of-the-art models, our proposed model better captures the entity information in the knowledge graph, which demonstrates the validity and accuracy of the model.
Keywords: graph attention network, knowledge graph, recommendation, information propagation
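As a rough illustration of the core operation such a model builds on, the sketch below computes an attention-weighted aggregation of an item's neighboring entities from a user's perspective in NumPy; the embeddings, dimensions, and single-stream form are assumptions, not the authors' MSGAT architecture.

```python
# Minimal attention-weighted neighbor aggregation in NumPy: score each
# neighboring entity of an item against the user embedding, softmax the
# scores, and aggregate. Dimensions and data are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                  # embedding size (assumed)
user      = rng.normal(size=d)         # user embedding
neighbors = rng.normal(size=(5, d))    # 5 neighboring entities of an item

scores = neighbors @ user              # user-perspective attention scores
weights = np.exp(scores - scores.max())
weights /= weights.sum()               # softmax over neighbors

item_repr = weights @ neighbors        # attention-weighted entity aggregation
print("attention weights:", np.round(weights, 3))
print("aggregated item representation:", np.round(item_repr, 3))
```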
Procedia PDF Downloads 116
14400 An Inspection of Two Layer Model of Agency: An fMRI Study
Authors: Keyvan Kashkouli Nejad, Motoaki Sugiura, Atsushi Sato, Takayuki Nozawa, Hyeonjeong Jeong, Sugiko Hanawa, Yuka Kotozaki, Ryuta Kawashima
Abstract:
The perception of agency/control may be altered in the presence of discrepancies in the environment: when predictions (of possible results) and actual results mismatch, the sense of agency can change. Synofzik et al. proposed a two-layer model of agency: in the first layer, the Feeling of Agency (FoA) is not directly available to awareness; a slight mismatch in the environment/outcome might cause alterations in FoA while the agent still feels in control. If the discrepancy passes a threshold, it becomes available to consciousness and alters the Judgment of Agency (JoA), which is directly available in the person's awareness. Most experiments so far investigate only the subjects' rather conscious JoA, while FoA has been neglected. In this experiment, we target FoA by using subliminal discrepancies that cannot be consciously detected by the subjects. We explore whether we can detect this two-level model in the subjects' behavior and then try to map it onto their brain activity. To do this, in an fMRI study, we incorporated both consciously detectable mismatches between action and result and subliminal discrepancies in the environment. Also, unlike previous experiments, where subjective questions to the participants mainly trigger the rather conscious JoA, we also tried to measure the rather implicit FoA by asking participants to rate their performance. We compared behavioral results and brain activation when there were conscious discrepancies and when there were subliminal discrepancies against trials with no discrepancies and against each other. In line with our expectations, conditions with consciously detectable incongruencies triggered lower JoA ratings than conditions without. Also, conditions with any type of discrepancy had lower FoA ratings than conditions without. Additionally, we found that the TPJ, and the angular gyrus in particular, has a role in the coding of JoA and also FoA.
Keywords: agency, fMRI, TPJ, two-layer model
Procedia PDF Downloads 470
14399 Parameter Tuning of Complex Systems Modeled in Agent Based Modeling and Simulation
Authors: Rabia Korkmaz Tan, Şebnem Bora
Abstract:
The major problem encountered when modeling complex systems with agent-based modeling and simulation techniques is the existence of large parameter spaces. A complex system model cannot be expected to reflect the whole of the real system, but by specifying the most appropriate parameters, the actual system can be represented by the model under certain conditions. A review of studies conducted in recent years shows that there are few studies on the parameter tuning problem in agent-based simulations, and these studies have focused on tuning the parameters of a single model. In this study, a parameter tuning approach is proposed that uses metaheuristic algorithms such as the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Firefly Algorithm (FA). With this hybrid-structured study, the parameter tuning problems of models in different fields were solved. The proposed approach was tested on two different models, and its performance on the different problems was compared. The simulations and the results reveal that the proposed approach outperforms existing parameter tuning studies.
Keywords: parameter tuning, agent based modeling and simulation, metaheuristic algorithms, complex systems
Procedia PDF Downloads 226
14398 Exploring Communities of Practice through Public Health Walks for Nurse Education
Authors: Jacqueline P. Davies
Abstract:
Introduction: Student nurses must develop skills in observation, communication and reflection, as well as public health knowledge, from their first year of training. This paper explains a method developed for students to collect their own findings about public health in urban areas. These areas are rich in the history of old public health that informs the content of many traditional public health walks, but they are also locations where new public health concerns about chronic disease are concentrated. The learning method explained in this paper enables students to collect their own data and write original work as first-year students. Examples of their findings are given. Methodology: In small groups, healthcare students are instructed to walk in neighbourhoods near the hospitals they will soon attend as apprentice nurses. On their walks, they wander slowly, engage in conversations, and enter places open to the public. As they drift, they observe with all five senses in the real three-dimensional world to collect data for their reflective accounts of old and new public health. They are encouraged to stop for refreshments and to taste, as well as look, hear, smell, and touch while on their walk. They reflect as a group and later develop individual reflective accounts in which they write up their deep reflections about what they observed on their walk. In preparation for their walk, they are encouraged to look at studies of quality of life and other neighbourhood statistics, as well as to undertake a risk assessment for their walk. Findings: Reflecting on their walks, students apply theoretical concepts around social determinants of health and health inequalities to develop their understanding of communities in the neighbourhoods visited. They write about the treasured historical architecture made of stone, bronze and marble, which has outlived those who built it, but also about how the streets are used now. The students develop their observations into thematic analyses such as: what we drink, as illustrated by the empty coke can tossed into a now disused drinking fountain; the shift in home-life balance, illustrated by streets where families once lived over the shop, which are now walked by commuters weaving around each other as they talk on their mobile phones; and security on the street, with CCTV cameras placed at regular intervals, signs warning trespassers, and barbed wire, but little evidence of local people watching the street. Conclusion: In evaluations of their first year, students have reported the health walk as one of their best experiences. The innovative approach was commended by the UK governing body of nurse education, and it received a quality award from the nurse education funding body. This approach to education allows students to develop skills in the real world and write original work.
Keywords: education, innovation, nursing, urban
Procedia PDF Downloads 287
14397 Optimal Price Points in Differential Pricing
Authors: Katerina Kormusheva
Abstract:
Pricing plays a pivotal role in the marketing discipline, as it directly influences consumer perceptions, purchase decisions, and the overall market positioning of a product or service. This paper seeks to expand current knowledge in the area of discriminatory and differential pricing, a main area of marketing research. The methodology includes developing a framework and a model for determining how many price points to implement in differential pricing. We focus on choosing the levels of differentiation, derive a functional form of the proposed model framework, and lastly test it empirically with data from a large-scale marketing pricing experiment on services in telecommunications.
Keywords: marketing, differential pricing, price points, optimization
Procedia PDF Downloads 93
14396 Prediction Fluid Properties of Iranian Oil Field with Using of Radial Based Neural Network
Authors: Abdolreza Memari
Abstract:
In this article, a numerical method is used to estimate the viscosity of crude oil. We use this method to measure the crude oil's viscosity in three states: the saturated oil's viscosity, the viscosity above the bubble point, and the viscosity under the saturation pressure. The crude oil's viscosity is then estimated using the KHAN model and the roller ball method. These data, which include the conditions under which the viscosity was measured and the viscosity estimated by the presented method, are used to train a radial basis neural network. This network is a kind of two-layered artificial neural network whose hidden-layer activation function is the Gaussian function, and supervised learning algorithms are used to train it. After training the radial basis neural network, the results of the experimental method and the artificial intelligence approach are compared. Once trained, the network is able to estimate the crude oil's viscosity with acceptable accuracy without using the KHAN model or the experimental conditions, and under any other condition. The results show that the radial basis neural network has a high capability for estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
Keywords: viscosity, Iranian crude oil, radial based, neural network, roller ball method, KHAN model
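A minimal sketch of such a two-layered radial basis network, with Gaussian hidden units and a linear output layer fitted by least squares, is shown below; the synthetic inputs stand in for the measurement conditions, and the training scheme is illustrative rather than the one used in the study.

```python
# Sketch of a two-layer radial basis network: Gaussian hidden units,
# linear output weights fitted by least squares. Inputs stand in for the
# measurement conditions (pressure, temperature, ...); data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 3))   # synthetic measurement conditions
y = 2*X[:, 0] - X[:, 1]**2 + 0.5*X[:, 2] + rng.normal(0, 0.01, 200)  # stand-in viscosity

centers = X[rng.choice(len(X), 20, replace=False)]  # 20 Gaussian centres
sigma = 0.3                                         # spread (assumed)

def hidden(X):
    """Gaussian activations of the hidden layer."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

W, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)   # linear output layer
pred = hidden(X) @ W
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```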
Procedia PDF Downloads 501
14395 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System
Authors: J. K. Adedeji, M. O. Oyekanmi
Abstract:
This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing crimes involving identities, hence the need for the design of a system which incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model used is the OpenCV library, which is based on the use of certain physiological features. The Raspberry Pi 3 module is used to compile the OpenCV library, which extracts and stores the detected faces in the datasets directory through the use of a camera. The model is trained with a 50-epoch run on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is backpropagation, coded in Python, with 200 epoch runs to identify specific resemblance in the exclusive OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing crimes relating to identities.
Keywords: biometric characters, facial recognition, neural network, OpenCV
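The sketch below shows the LBPH training and prediction flow with OpenCV (the cv2.face module ships with opencv-contrib-python); the datasets directory layout and file names are assumptions for illustration.

```python
# Sketch of LBPH face recognition with OpenCV (requires opencv-contrib-python).
# The dataset layout (datasets/<label_id>/*.jpg) is an assumption.
import os
import cv2
import numpy as np

images, labels = [], []
for label in os.listdir("datasets"):               # one sub-folder per person
    for fname in os.listdir(os.path.join("datasets", label)):
        img = cv2.imread(os.path.join("datasets", label, fname),
                         cv2.IMREAD_GRAYSCALE)
        if img is not None:
            images.append(img)
            labels.append(int(label))

recognizer = cv2.face.LBPHFaceRecognizer_create()  # Local Binary Pattern Histogram
recognizer.train(images, np.array(labels))

probe = cv2.imread("probe_face.jpg", cv2.IMREAD_GRAYSCALE)
label, confidence = recognizer.predict(probe)      # lower confidence = closer match
print(f"predicted person {label}, confidence {confidence:.1f}")
```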
Procedia PDF Downloads 256
14394 Classification of Poverty Level Data in Indonesia Using the Naïve Bayes Method
Authors: Anung Style Bukhori, Ani Dijah Rahajoe
Abstract:
Poverty poses a significant challenge in Indonesia, requiring an effective analytical approach to understand and address this issue. In this research, we applied the Naïve Bayes classification method to examine and classify poverty data in Indonesia. The main focus is on classifying data using RapidMiner, a powerful data analysis platform. The analysis process involves data splitting to train and test the classification model. First, we collected and prepared a poverty dataset that includes various factors such as education, employment, and health. The experimental results indicate that the Naïve Bayes classification model can provide accurate predictions regarding the risk of poverty. The use of RapidMiner in the analysis process offers flexibility and efficiency in evaluating the model's performance. The classification produces several values to serve as the standard for classifying poverty data in Indonesia using Naïve Bayes. The accuracy obtained is 40.26%, with a recall of 35.94% for the moderate class, 63.16% for the high class, and 38.03% for the low class. The precision is 58.97% for the moderate class, 17.39% for the high class, and 58.70% for the low class.
Keywords: poverty, classification, naïve bayes, Indonesia
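The same train/test workflow can be sketched in Python with scikit-learn in place of RapidMiner; the features below are synthetic stand-ins for the education, employment, and health indicators.

```python
# Equivalent sketch of the Naive Bayes workflow with scikit-learn
# (the paper used RapidMiner); features and labels here are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                     # education, employment, health
y = rng.choice(["low", "moderate", "high"], 500)  # poverty level labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)

# Per-class precision and recall, as reported in the abstract
print(classification_report(y_te, model.predict(X_te)))
```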
Procedia PDF Downloads 55
14393 Carbonaceous Monolithic Multi-Channel Denuders as a Gas-Particle Partitioning Tool for the Occupational Sampling of Aerosols from Semi-Volatile Organic Compounds
Authors: Vesta Kohlmeier, George C. Dragan, Juergen Orasche, Juergen Schnelle-Kreis, Dietmar Breuer, Ralf Zimmermann
Abstract:
Aerosols from hazardous semi-volatile organic compounds (SVOC) may occur in workplace air and can simultaneously be found in the particle and gas phases. For health risk assessment, it is necessary to collect particles and gases separately. This can be achieved by using a denuder for gas phase collection, combined with a filter and an adsorber for particle collection. The study focused on the suitability of carbonaceous monolithic multi-channel denuders, so-called NovaCarb™-Denuders (MastCarbon International Ltd., Guilford, UK), for achieving gas-particle separation. Particle transmission efficiency experiments were performed with polystyrene latex (PSL) particles (size range 0.51-3 µm), while the time-dependent gas phase collection efficiency was analysed for polar and nonpolar SVOC (mass concentrations 7-10 mg/m³) over 2 h at 5 or 10 l/min. The experimental gas phase collection efficiency was also compared with theoretical predictions. For n-hexadecane (C16), the gas phase collection efficiency was at most 91% for one denuder and at most 98% for two denuders, while for diethylene glycol (DEG), a maximal gas phase collection efficiency of 93% for one denuder and 97% for two denuders was observed. At 5 l/min, higher gas phase collection efficiencies were achieved than at 10 l/min. The deviations between the theoretical and experimental gas phase collection efficiencies were up to 5% for C16 and 23% for DEG. Since the theoretical efficiency depends on the geometric shape and length of the denuder, the flow rate, and the diffusion coefficients of the tested substances, the obtained values define an upper limit which could be reached. Regarding particle transmission through the denuders, the use of one denuder showed transmission efficiencies of around 98% for 1-3 µm particle diameters. The use of three denuders resulted in transmission efficiencies of 93-97% for the same particle sizes. In summary, NovaCarb™-Denuders are well suited for sampling aerosols of polar/nonpolar substances with particle diameters ≤3 µm and flow rates of 5 l/min or lower. These properties and their compact size make them suitable for use in personal aerosol samplers. This work is supported by the German Social Accident Insurance (DGUV), research contract FP371.
Keywords: gas phase collection efficiency, particle transmission, personal aerosol sampler, SVOC
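The theoretical upper limit mentioned above is commonly computed from a diffusion-deposition model such as the Gormley-Kennedy penetration equation for laminar flow; the sketch below treats each monolith channel as a circular tube, and the channel count, length, and diffusion coefficient are illustrative assumptions, not the values used in the study.

```python
# Hedged sketch of the Gormley-Kennedy penetration model for laminar flow in
# a circular channel, often used as the theoretical upper limit for denuder
# gas collection. Geometry and diffusion coefficient below are invented.
import math

def collection_efficiency(D, L, Q):
    """D: diffusion coefficient [m^2/s], L: channel length [m],
    Q: volumetric flow per channel [m^3/s]. Returns 1 - penetration."""
    mu = math.pi * D * L / Q
    P = (0.8191 * math.exp(-3.6568 * mu)
         + 0.0975 * math.exp(-22.305 * mu)
         + 0.0325 * math.exp(-56.961 * mu))
    return 1.0 - P

D = 5e-6              # m^2/s, gas-phase diffusion coefficient (assumed)
L = 0.3               # m, denuder length (assumed)
Q_total = 5 / 60000.0 # 5 l/min converted to m^3/s
n_channels = 100      # number of monolith channels (assumed)
print(f"efficiency: {collection_efficiency(D, L, Q_total / n_channels):.3f}")
```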
Procedia PDF Downloads 176
14392 The Potential Role of Some Nutrients and Drugs in Providing Protection from Neurotoxicity Induced by Aluminium in Rats
Authors: Azza A. Ali, Abeer I. Abd El-Fattah, Shaimaa S. Hussein, Hanan A. Abd El-Samea, Karema Abu-Elfotuh
Abstract:
Background: Aluminium (Al) represents an environmental risk factor. Exposure to high levels of Al causes neurotoxic effects and different diseases. Vinpocetine is widely used to improve cognitive functions; it possesses memory-protective and memory-enhancing properties and has the ability to increase cerebral blood flow and glucose uptake. Cocoa bean represents a rich source of iron as well as a potent antioxidant. It can protect from the impact of free radicals, reduces stress as well as depression, and promotes better memory and concentration. Wheatgrass is primarily used as a concentrated source of nutrients. It contains vitamins, minerals, carbohydrates, and amino acids, and possesses antioxidant and anti-inflammatory activities. Coenzyme Q10 (CoQ10) is an intracellular antioxidant and mitochondrial membrane stabilizer. It is effective in improving cognitive disorders and has been used as an anti-aging agent. Zinc is a structural element of many proteins and a signaling messenger that is released by neural activity at many central excitatory synapses. Objective: To study the role of some nutrients and drugs, namely vinpocetine, cocoa, wheatgrass, CoQ10, and zinc, against neurotoxicity induced by Al in rats, as well as to compare their potency in providing protection. Methods: Seven groups of rats were used; the Al-toxicity model groups received AlCl3 (70 mg/kg, IP) daily for three weeks, while the control group received saline. All Al-toxicity model groups except one (non-treated) were co-administered orally, together with AlCl3, one of the following treatments: vinpocetine (20 mg/kg), cocoa powder (24 mg/kg), wheatgrass (100 mg/kg), CoQ10 (200 mg/kg), or zinc (32 mg/kg). Biochemical changes in the rat brain, such as acetylcholinesterase (ACHE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β), and oxidative parameters (MDA, SOD, TAC), were estimated for all groups, besides histopathological examinations of different brain regions. Results: Neurotoxicity and neurodegeneration in the rat brain after three weeks of Al exposure were indicated by the significant increase in Aβ, ACHE, MDA, TNF-α, IL-1β, and DNA fragmentation, together with the significant decrease in SOD, TAC, and BDNF, and confirmed by the histopathological changes in the brain. On the other hand, co-administration of each of vinpocetine, cocoa, wheatgrass, CoQ10, or zinc together with AlCl3 provided protection against the hazards of neurotoxicity and neurodegeneration induced by Al; their protection was indicated by the decrease in Aβ, ACHE, MDA, TNF-α, IL-1β, and DNA fragmentation, together with the increase in SOD, TAC, and BDNF, and confirmed by the histopathological examinations of different brain regions. Vinpocetine and cocoa showed the most pronounced protection, while zinc provided the least protective effect among the nutrients and drugs used. Conclusion: Different degrees of protection from neurotoxicity and neuronal degeneration induced by Al could be achieved through the co-administration of some nutrients and drugs during exposure. Vinpocetine and cocoa provided more protection than wheatgrass, CoQ10, or zinc, which showed the least protective effects.
Keywords: aluminum, neurotoxicity, vinpocetine, cocoa, wheat grass, coenzyme Q10, zinc, rats
Procedia PDF Downloads 249
14391 Enhancing Email Security: A Multi-Layered Defense Strategy Approach and an AI-Powered Model for Identifying and Mitigating Phishing Attacks
Authors: Anastasios Papathanasiou, George Liontos, Athanasios Katsouras, Vasiliki Liagkou, Euripides Glavas
Abstract:
Email remains a crucial communication tool due to its efficiency, accessibility, and cost-effectiveness, enabling rapid information exchange across global networks. However, the global adoption of email has also made it a prime target for cyber threats, including phishing, malware, and Business Email Compromise (BEC) attacks, which exploit its integral role in personal and professional realms to perpetrate fraud and data breaches. To combat these threats, this research advocates a multi-layered defense strategy incorporating advanced technological tools such as anti-spam and anti-malware software, machine learning algorithms, and authentication protocols. Moreover, we developed an artificial intelligence model specifically designed to analyze email headers and assess their security status. This AI-driven model examines various components of email headers, such as 'From' addresses, 'Received' paths, and the integrity of SPF, DKIM, and DMARC records. Upon analysis, it generates comprehensive reports that indicate whether an email is likely to be malicious or benign. This capability empowers users to identify potentially dangerous emails promptly, enhancing their ability to avoid phishing attacks, malware infections, and other cyber threats.
Keywords: email security, artificial intelligence, header analysis, threat detection, phishing, DMARC, DKIM, SPF, AI model
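A simplified, rule-based sketch of the header checks such a model automates is shown below, using Python's standard email library to read the SPF/DKIM/DMARC verdicts from an Authentication-Results header; the sample message and the pass/fail logic are illustrative only.

```python
# Rule-based sketch of header analysis: parse an email and read the
# SPF/DKIM/DMARC verdicts from its Authentication-Results header.
from email import message_from_string

raw = """From: alice@example.com
To: bob@example.org
Subject: Quarterly report
Authentication-Results: mx.example.org; spf=pass; dkim=fail; dmarc=pass

Hello Bob, please find the report attached.
"""

msg = message_from_string(raw)
auth = msg.get("Authentication-Results", "")

verdicts = {proto: (f"{proto}=pass" in auth) for proto in ("spf", "dkim", "dmarc")}
print("From:", msg["From"])
print("verdicts:", verdicts)
print("flag as suspicious:", not all(verdicts.values()))
```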
Procedia PDF Downloads 58
14390 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution could indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in predicting patients with foot disorders. Significance of the study: This study proposes a neural network learning-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized, and the network was trained on the plantar pressure mapping images. Findings: Of a total of 895 images, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was assessed, and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and doctors to predict the classification of pes planus and to prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared for higher accuracy.
Keywords: foot disorder, machine learning, neural network, pes planus
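A minimal Keras sketch of a CNN of this kind is shown below; the 64x64 single-channel input and the layer sizes are assumptions, not the architecture reported in the study.

```python
# Illustrative Keras CNN for binary pes planus classification from plantar
# pressure maps; input size and layers are assumptions, data are synthetic.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),        # pressure map as a grayscale image
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(pes planus)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Synthetic stand-in for the 895 labelled pressure maps
X = np.random.rand(895, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, 895)
model.fit(X, y, epochs=2, validation_split=0.2, verbose=0)
```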
Procedia PDF Downloads 360
14389 Study on the Transition to Pacemaker of Two Coupled Neurons
Authors: Sun Zhe, Ruggero Micheletto
Abstract:
Research on neural networks is very important for the development of advanced next-generation intelligent devices and for medical treatment. The most important part of neural network research is learning. The process of learning in our brain is essentially a set of adjustment processes of the connection strengths between neurons. It is very difficult to figure out how this mechanism works in a complex network and how the connection strengths influence brain functions. For this reason, we made a model with only two coupled neurons and studied the influence of the connection strength between them. To emulate the activity of realistic neurons, we chose the Izhikevich neuron model. This model can simulate the neuron variables accurately, and its simplicity makes it very suitable for implementation on computers. In this research, the parameter ρ is used to estimate the correlation coefficient between the spike trains of the two coupled neurons. We think the results are very important for figuring out the mechanism linking the synchronization of coupled neurons and synaptic plasticity. The results also show the importance of spike-frequency adaptation in complex systems.
Keywords: neural networks, noise, stochastic processes, coupled neurons, correlation coefficient, synchronization, pacemaker, synaptic plasticity
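A minimal sketch of two coupled Izhikevich neurons and a binned spike-train correlation estimate is given below; the regular-spiking parameters are the standard published values, while the coupling scheme and drive currents are illustrative assumptions.

```python
# Two Izhikevich neurons with a simple directed coupling and a spike-train
# correlation estimate; coupling strength and inputs are assumed values.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0     # regular-spiking parameters
dt, T = 0.5, 2000.0                    # time step and duration [ms]
steps = int(T / dt)
g = 5.0                                # coupling strength (assumed)

v = np.full(2, -65.0)
u = b * v
spikes = np.zeros((2, steps), dtype=bool)

for t in range(steps):
    I = np.array([10.0, 4.5])          # external drive (assumed)
    I[1] += g * spikes[0, t - 1]       # neuron 0 drives neuron 1
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    fired = v >= 30.0                  # spike threshold and reset
    spikes[:, t] = fired
    v[fired], u[fired] = c, u[fired] + d

# correlation coefficient rho between spike trains binned at 10 ms
bins = spikes.reshape(2, -1, 20).sum(axis=2)
rho = np.corrcoef(bins)[0, 1]
print(f"rho = {rho:.3f}")
```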
Procedia PDF Downloads 284
14388 Budget Optimization for Maintenance of Bridges in Egypt
Authors: Hesham Abd Elkhalek, Sherif M. Hafez, Yasser M. El Fahham
Abstract:
Allocating a limited budget to maintain bridge networks and selecting effective maintenance strategies for each bridge represent challenging tasks for maintenance managers and decision makers. In Egypt, bridges are continuously deteriorating, and in many cases maintenance works are performed only in response to user complaints. The objective of this paper is to develop a practical and reliable framework to manage the maintenance, repair, and rehabilitation (MR&R) activities of a bridge network considering performance and budget limits. The model solves an optimization problem that maximizes the average condition of the entire network, given the limited available budget, using a Genetic Algorithm (GA). The framework contains bridge inventory, condition assessment, repair cost calculation, deterioration prediction, and maintenance optimization. The developed model takes into account multiple parameters, including serviceability requirements, budget allocation, element importance for structural safety and serviceability, bridge impact on the network, and traffic. A questionnaire was conducted to complete the research scope. The proposed model is implemented in software, which provides a friendly user interface. The framework provides a multi-year maintenance plan for the entire network for up to five years. A case study of ten bridges is presented to validate and test the proposed model with data collected from transportation authorities in Egypt. Different scenarios are presented. The results are reasonable, feasible, and within an acceptable domain.
Keywords: bridge management systems (BMS), cost optimization, condition assessment, fund allocation, Markov chain
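A toy sketch of the core optimization step is shown below: a GA choosing one maintenance action per bridge to maximize the average network condition under a budget constraint. The costs, condition gains, and GA settings are invented for illustration.

```python
# Toy GA for budget-constrained maintenance planning: one action per bridge,
# fitness = average network condition, budget enforced as a penalty.
import random

random.seed(0)
N_BRIDGES, BUDGET = 10, 100.0
ACTIONS = [(0.0, 0.0), (5.0, 0.5), (15.0, 1.5), (40.0, 3.0)]  # (cost, gain)
base_condition = [random.uniform(3.0, 7.0) for _ in range(N_BRIDGES)]

def fitness(plan):
    cost = sum(ACTIONS[a][0] for a in plan)
    if cost > BUDGET:                        # budget constraint as a penalty
        return -cost
    return sum(bc + ACTIONS[a][1] for bc, a in zip(base_condition, plan)) / N_BRIDGES

pop = [[random.randrange(len(ACTIONS)) for _ in range(N_BRIDGES)] for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # elitist selection
    children = []
    while len(children) < 40:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, N_BRIDGES)  # one-point crossover
        child = p1[:cut] + p2[cut:]
        child[random.randrange(N_BRIDGES)] = random.randrange(len(ACTIONS))  # mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best plan:", best, "avg condition:", round(fitness(best), 2))
```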
Procedia PDF Downloads 291
14387 Classification of Germinatable Mung Bean by Near Infrared Hyperspectral Imaging
Authors: Kaewkarn Phuangsombat, Arthit Phuangsombat, Anupun Terdwongworakul
Abstract:
Hard seeds will not grow and can cause mold during the sprouting process; thus, the hard seeds need to be separated from the normal seeds. Near infrared hyperspectral imaging in the range of 900 to 1700 nm was implemented to develop a model, by partial least squares discriminant analysis, to discriminate the hard seeds from the normal seeds. The orientation of the seeds was also studied to compare the performance of the models. The model based on the hilum-up orientation achieved the best result, giving a coefficient of determination of 0.98 and a root mean square error of prediction of 0.07, with a classification accuracy of 100%.
Keywords: mung bean, near infrared, germinatability, hard seed
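PLS discriminant analysis of this kind can be sketched with scikit-learn by regressing a 0/1 class code on the spectra and thresholding the prediction; the spectra below are synthetic stand-ins for the 900-1700 nm hyperspectral data.

```python
# PLS-DA sketch: PLS regression on spectra against a 0/1 class code,
# thresholded at 0.5. Spectra and labels are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 200))          # 120 seeds x 200 wavelengths
y = rng.integers(0, 2, 120)              # 0 = normal seed, 1 = hard seed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)

y_hat = pls.predict(X_te).ravel()
rmsep = np.sqrt(np.mean((y_hat - y_te) ** 2))          # RMSE of prediction
accuracy = np.mean((y_hat > 0.5).astype(int) == y_te)  # class decision at 0.5
print(f"RMSEP = {rmsep:.3f}, accuracy = {accuracy:.2%}")
```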
Procedia PDF Downloads 305
14386 Groundwater Level Prediction Using hybrid Particle Swarm Optimization-Long-Short Term Memory Model and Performance Evaluation
Authors: Sneha Thakur, Sanjeev Karmakar
Abstract:
This paper proposes a hybrid Particle Swarm Optimization (PSO) - Long Short-Term Memory (LSTM) model for groundwater level prediction. The performance is evaluated using the following parameters: root mean square error (RMSE) and mean absolute error (MAE). Groundwater level forecasting is very effective for planning water harvesting, and proper groundwater level forecasting can overcome the problems of drought and flood to some extent. The objective of this work is to develop a groundwater level forecasting model using a deep learning technique integrated with the PSO optimization technique, applying 29 years of data from Chhattisgarh state, India. Precise groundwater level forecasting is important so that water resource planning and water harvesting can be managed effectively.
Keywords: long short-term memory, particle swarm optimization, prediction, deep learning, groundwater level
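The hybrid idea can be sketched as a PSO loop searching LSTM hyperparameters against a validation-RMSE objective; in the sketch below the objective is a placeholder function where the real workflow would train an LSTM on the groundwater series, and all PSO settings are illustrative.

```python
# Bare-bones PSO over two LSTM hyperparameters (hidden units, learning rate).
# The objective is a stand-in for: build LSTM, train, return validation RMSE.
import numpy as np

rng = np.random.default_rng(3)

def validation_rmse(hidden_units, lr):
    # placeholder objective; a real run would train an LSTM here
    return (hidden_units - 64) ** 2 / 1000 + (np.log10(lr) + 3) ** 2

n, iters = 12, 30
pos = np.column_stack([rng.uniform(8, 256, n), rng.uniform(1e-4, 1e-1, n)])
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([validation_rmse(*p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [8, 1e-4], [256, 1e-1])
    f = np.array([validation_rmse(*p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print("best hidden units ~", int(gbest[0]), ", best learning rate ~", gbest[1])
```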
Procedia PDF Downloads 78
14385 Interaction Between Task Complexity and Collaborative Learning on Virtual Patient Design: The Effects on Students’ Performance, Cognitive Load, and Task Time
Authors: Fatemeh Jannesarvatan, Ghazaal Parastooei, Jimmy Frerejan, Saedeh Mokhtari, Peter Van Rosmalen
Abstract:
Medical and dental education increasingly emphasizes the acquisition, integration, and coordination of complex knowledge, skills, and attitudes that can be applied in practical situations. Instructional design approaches have focused on using real-life tasks in order to facilitate complex learning in both real and simulated environments. The four-component instructional design (4C/ID) model has become a useful guideline for designing instructional materials that improve learning transfer, especially in health professions education. The objective of this study was to apply the 4C/ID model in the creation of virtual patients (VPs) that dental students can use to practice their clinical management and clinical reasoning skills. The study first explored the context and concept of complicating factors and common errors for novices and how they can affect the design of a virtual patient program. The study then selected key dental information and considered the content needs of dental students. The design of the virtual patients was based on the 4C/ID model's fundamental principles, which included: designing learning tasks that reflect real patient scenarios and applying different levels of task complexity to challenge students to apply their knowledge and skills in different contexts; creating varied learning materials that support students during the VP program and are closely integrated with the learning tasks and the students' curricula; providing cognitive feedback at different levels of the program; and providing procedural information whereby students follow a step-by-step process from history taking to writing a comprehensive treatment plan. Four virtual patients were designed using the 4C/ID model's principles, and an experimental design was used to test the effectiveness of the principles in achieving the intended educational outcomes. The 4C/ID model provides an effective framework for designing engaging and successful virtual patients that support the transfer of knowledge and skills for dental students. However, there are some challenges and pitfalls that instructional designers should take into account when developing these educational tools.
Keywords: 4C/ID model, virtual patients, education, dental, instructional design
Procedia PDF Downloads 80
14384 Analysis of Vertical Hall Effect Device Using Current-Mode
Authors: Kim Jin Sup
Abstract:
This paper presents a vertical Hall effect device using current-mode. Among the different geometries that have been studied and simulated using COMSOL Multiphysics, the optimized cross-shaped model displayed the best sensitivity, emerging as the optimum plate with the lowest noise and residual offset. The symmetrical cross-shaped Hall plate is widely used because of its high sensitivity and immunity to alignment tolerances resulting from the fabrication process. The Hall effect device has been designed using a 0.18-μm CMOS technology. The simulation uses a nominal bias current of 12 μA, with an applied magnetic field from 0 mT to 20 mT. Simulation results were achieved in COMSOL and validated against the electrical behavior of an equivalent circuit in Cadence. Simulation results for the one structure over the 13 available samples show, for the best geometry, a current-mode sensitivity of 6.6%/T at 20 mT. Acknowledgment: This work was supported by the Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea government (MSIP) (No. R7117-16-0165, Development of Hall Effect Semiconductor for Smart Car and Device).
Keywords: vertical hall device, current-mode, crossed-shaped model, CMOS technology
Procedia PDF Downloads 292
14383 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran
Authors: Reza Zakerinejad
Abstract:
Gully erosion is a serious problem threatening the sustainability of agricultural areas, rangeland, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression for the loess deposits in a case study in the Golestan Province. In this study, a DEM with 25-meter resolution derived from ASTER data has been used, and Landsat ETM data have been used for land use mapping. The TreeNet model, a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion; for the ROC evaluation of this model, 20% of the data were set as the learning set and 20% as the test set. GIS and satellite image analysis techniques were applied to derive the input information for these stochastic models. The result of this study is a highly accurate map of gully erosion potential.
Keywords: TreeNet model, terrain analysis, Golestan Province, Iran
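TreeNet is a commercial gradient boosting tool; an open-source stand-in for the susceptibility modelling step, with synthetic terrain attributes and an ROC-AUC evaluation on a 20% hold-out, is sketched below.

```python
# Gradient boosting stand-in for the TreeNet run: fit on terrain attributes,
# score with ROC AUC on a 20% hold-out. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# terrain attributes per cell: slope, curvature, flow accumulation, land use
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 1, 1000) > 1).astype(int)  # gully presence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("ROC AUC:", round(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]), 3))
```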
Procedia PDF Downloads 535
14382 Trusting the Big Data Analytics Process from the Perspective of Different Stakeholders
Authors: Sven Gehrke, Johannes Ruhland
Abstract:
Data is the oil of our time; without it, progress would come to a halt [1]. On the other hand, mistrust of data mining is increasing [2]. The paper at hand shows different aspects of the concept of trust and describes the information asymmetry between the typical stakeholders of a data mining project using the CRISP-DM phase model. Based on the identified influencing factors in relation to trust, problematic aspects of the current approach are verified using various interviews with the stakeholders. The results of the interviews confirm the theoretically identified weak points of the phase model with regard to trust and show potential research areas.
Keywords: trust, data mining, CRISP-DM, stakeholder management
Procedia PDF Downloads 94
14381 Assessing the Effects of Community Informatics on Livelihoods Sustainability in Nigeria: a Model for Rural Communities
Authors: Adebayo J. Julius, Oluremi N. Iluyomade
Abstract:
Livelihood in Nigeria is a paradox of poverty amidst plenty. The country is endowed with a good climate for agriculture, naturally growing fruit trees and vegetables, and undomesticated water resources. In spite of all its endowments, Nigeria continues to live in poverty year in, year out. This raises a very important question as to how there can be so much poverty in Nigeria with all its natural endowments. This study focused on a comparative analysis of the utilization of community informatics for sustainable livelihoods through agriculture. The idea projected in this study is that small strategic changes in the modus operandi of social informatics can have a significant impact on the sustainability of livelihoods. This paper carefully explored the theories of community informatics and their efficacy in dealing with sustainability issues. The study identified, described, and evaluated the roles of community informatics in some sectors of the economy; different analytical tools were used to benchmark the influence of social informatics in agriculture against what is obtainable in other agricultural sectors of the economy. It further employed comparative analysis to build a case model for sustainable livelihood in agriculture through community informatics.
Keywords: informatics, model, rural community, livelihoods sustainability, Nigeria
Procedia PDF Downloads 151
14380 Using Teachers' Perceptions of Science Outreach Activities to Design an 'Optimum' Model of Science Outreach
Authors: Victoria Brennan, Andrea Mallaburn, Linda Seton
Abstract:
Science outreach programmes connect school pupils with external agencies to provide activities and experiences that enhance their exposure to science. It can be argued that these programmes not only aim to support teachers with curriculum engagement and promote scientific literacy but also provide pivotal opportunities to spark scientific interest in students. In turn, a further objective of these programmes is to increase awareness of career opportunities within this field. Although outreach work is often described as a fun and satisfying venture, a plethora of researchers express caution about how successful these processes are at increasing engagement in science post-16. When researching the impact of outreach programmes, it is often student feedback on the activities, or enrolment numbers in particular science courses post-16, that are generated and analysed. Although this is informative, the longevity of a programme's impact could be better assessed from teachers' perceptions, the evidence of which is far more limited in the literature. In addition, there are strong suggestions that teachers can have an indirect impact on a student's own self-concept. These themes shape the focus and importance of this ongoing research project, as it presents the rationale that teachers are under-used resources when it comes to the design of science outreach programmes. Therefore, the end result of the research will be the presentation of an 'optimum' model of outreach, which should be of interest to wider stakeholders such as universities or private or government organisations who design science outreach programmes in the hope of recruiting future scientists. During phase one, questionnaires (n=52) and interviews (n=8) generated both quantitative and qualitative data. These have been analysed using the Wilcoxon non-parametric test, to compare teachers' perceptions of science outreach interventions, and thematic analysis for the open-ended questions. Both of these research activities provide an opportunity for a cross-section of teacher opinions of science outreach to be obtained across all educational levels. Therefore, an early draft of the 'optimum' model of science outreach delivery was generated using both the wealth of literature and the primary data. The final (ongoing) phase aims to refine this model using teacher focus groups to provide constructive feedback about the proposed model. The analysis uses principles of modified Grounded Theory to ensure that focus group data further strengthen the model. This research therefore takes a pragmatist approach, focusing on the strengths of the different paradigms encountered to ensure the data collected provide the most suitable information for creating an improved model of sustainable outreach. The results discussed focus on this 'optimum' model and teachers' perceptions of the benefits and drawbacks of engaging with science outreach work. Although the model is still a 'work in progress', it provides insight into how teachers feel outreach delivery can be a sustainable intervention tool in the classroom and what providers of such programmes should consider when designing science outreach activities.
Keywords: educational partnerships, science education, science outreach, teachers
Procedia PDF Downloads 127
14379 Comparison of Existing Predictor and Development of Computational Method for S- Palmitoylation Site Identification in Arabidopsis Thaliana
Authors: Ayesha Sanjana Kawser Parsha
Abstract:
S-acylation is an irreversible bond in which cysteine residues are linked to the fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminal, via a thioester linkage. There are several experimental methods that can be used to identify S-palmitoylation sites; however, since they require a lot of time, computational methods are becoming increasingly necessary. There are not many predictors, however, that can locate S-palmitoylation sites in Arabidopsis thaliana with sufficient accuracy. This research is motivated by the importance of building a better prediction tool. To identify the type of machine learning algorithm that predicts this site more accurately for the experimental dataset, several prediction tools were examined in this research, including GPS-Palm 6.0, pCysMod, GPS-Lipid 1.0, CSS-Palm 4.0, and NBA-Palm. These analyses were conducted by constructing the receiver operating characteristic plot and computing the area under the curve score. An AI-driven deep learning-based prediction tool was developed utilizing this analysis and three types of sequence-based input data: amino acid composition, binary encoding profiles, and autocorrelation features. The model was developed using five layers, two activation functions, and the associated parameters and hyperparameters. The model was built using various combinations of features and, after training and validation, performed best when all the features were present, using the experimental dataset for 8- and 10-fold cross-validations. When testing the model with unseen and new data, such as the GPS-Palm 6.0 plant data and the pCysMod mouse data, the model performed well, with an area under the curve score near 1. It can be demonstrated that this model outperforms the prior tools in predicting S-palmitoylation sites in the experimental dataset by comparing the area under the curve score of the new model's 10-fold cross-validation with the established tools' scores on their respective training sets. The objective of this study is to develop a prediction tool for Arabidopsis thaliana that is more accurate than current tools, as measured by the area under the curve score. Both plant food production and immunological treatment targets can be managed by utilizing this method to forecast S-palmitoylation sites.
Keywords: S-palmitoylation, ROC plot, area under the curve, cross-validation score
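One of the three feature types, amino acid composition over a cysteine-centred window, together with the ROC/AUC evaluation used to compare predictors, can be sketched as follows; the classifier, window length, and labels are illustrative, not the five-layer model from the study.

```python
# Amino acid composition features for cysteine-centred windows, plus the
# ROC AUC evaluation used to compare predictors. Data and classifier are
# illustrative stand-ins for the study's deep model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

AA = "ACDEFGHIKLMNPQRSTVWY"

def aac(window):
    """Amino acid composition of a sequence window around a cysteine."""
    return np.array([window.count(a) / len(window) for a in AA])

rng = np.random.default_rng(11)
windows = ["".join(rng.choice(list(AA), 21)) for _ in range(300)]  # 21-mer windows
y = rng.integers(0, 2, 300)        # 1 = S-palmitoylated site (labels assumed)

X = np.vstack([aac(w) for w in windows])
clf = LogisticRegression(max_iter=1000).fit(X[:200], y[:200])
auc = roc_auc_score(y[200:], clf.predict_proba(X[200:])[:, 1])
print("AUC on held-out windows:", round(auc, 3))
```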
Procedia PDF Downloads 76
14378 Numerical Analysis of NOₓ Emission in Staged Combustion for the Optimization of Once-Through-Steam-Generators
Authors: Adrien Chatel, Ehsan Askari Mahvelati, Laurent Fitschy
Abstract:
Once-Through-Steam-Generators (OTSGs) are commonly used in the oil-sand industry in the heavy fuel oil extraction process. They are composed of three main parts: the burner and the radiant and convective sections. Natural gas is burned through staged diffusive flames stabilized by the burner. The heat generated by the combustion is transferred to the water flowing through the piping system in the radiant and convective sections. The steam produced within the pipes is then directed into the ground to reduce the oil viscosity and allow its pumping. With the rapid development of the oil-sand industry, the number of OTSGs in operation has increased, as well as the associated emissions of environmental pollutants, especially the nitrogen oxides (NOₓ). To limit the environmental degradation, various international environmental agencies have established regulations on pollutant discharge and pushed to reduce NOₓ release. To meet these constraints, OTSG constructors have to rely on more and more advanced tools to study and predict NOₓ emission. With the increase in computational resources, Computational Fluid Dynamics (CFD) has emerged as a flexible tool to analyze the combustion and pollutant formation process. Moreover, to optimize the burner operating conditions with regard to NOₓ emission, field characterization and measurements are usually accomplished. However, these kinds of experimental campaigns are particularly time-consuming and sometimes even impossible for industrial plants with strict operation schedule constraints. Therefore, the application of CFD seems more adequate for providing guidelines on the NOₓ emission and reduction problem. In the present work, two different software packages are employed to simulate the combustion process in an OTSG, namely the commercial software ANSYS Fluent and the open-source software OpenFOAM. RANS (Reynolds-Averaged Navier-Stokes) equations, combined with the Eddy Dissipation Concept to model the combustion and closed by the k-epsilon model, are solved. A mesh sensitivity analysis is performed to assess the independence of the solution from the mesh. In the first part, the results given by the two software packages are compared and confronted with experimental data as a means to assess the numerical modelling. Flame temperatures and chemical composition are used as reference fields to perform this validation. Results show a fair agreement between experimental and numerical data. In the last part, OpenFOAM is employed to simulate several operating conditions, and an Emission Characteristics Map of the combustion system is generated. The sources of high NOₓ production inside the OTSG are pinpointed and correlated with the physics of the flow. CFD is, therefore, a useful tool for providing insight into the NOₓ emission phenomena in OTSGs. Sources of high NOₓ production can be identified, and operating conditions can be adjusted accordingly. With the help of RANS simulations, an Emission Characteristics Map can be produced and then used as a guide for a field tune-up.
Keywords: combustion, computational fluid dynamics, nitrogen oxides emission, once-through-steam-generators
Procedia PDF Downloads 113
14377 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known how finite element codes can help the designer to investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods which are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters, subject to some design constraints, are the design variables. LS-DYNA codes were interfaced with the LS-OPT tool in order to find the optimized solution, through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
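The Kriging surrogate step can be sketched with a Gaussian process regressor fitted on a small DOE of design variables against specific energy absorption (SEA); the design variables and the mock SEA response below are illustrative stand-ins for LS-DYNA results.

```python
# Kriging (Gaussian process) surrogate sketch: fit on a small DOE of design
# variables vs. SEA, then pick the candidate the surrogate predicts best.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
X_doe = rng.uniform([1.0, 10.0], [4.0, 40.0], size=(15, 2))  # thickness, taper angle
sea = 20 - (X_doe[:, 0] - 2.5) ** 2 - 0.01 * (X_doe[:, 1] - 25) ** 2  # mock SEA

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[1.0, 10.0]),
                              normalize_y=True).fit(X_doe, sea)

candidates = rng.uniform([1.0, 10.0], [4.0, 40.0], size=(2000, 2))
pred, std = gp.predict(candidates, return_std=True)
best = candidates[pred.argmax()]           # surrogate-predicted optimum
print("predicted optimum (thickness, angle):", np.round(best, 2))
```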
Procedia PDF Downloads 256
14376 Memory and Narratives Rereading before and after One Week
Authors: Abigail M. Csik, Gabriel A. Radvansky
Abstract:
As people read through event-based narratives, they construct an event model that captures information about the characters, goals, location, time, and causality. Memory for such narratives is represented at different levels, namely the surface form, textbase, and event model levels. Rereading has been shown to decrease surface form memory while, at the same time, increasing textbase and event model memories. More generally, distributed practice has consistently shown memory benefits over massed practice for different types of materials, including texts. However, little research has investigated distributed practice of narratives at different inter-study intervals and the effects on these three levels of memory. Recent work in our lab has indicated that there may be dramatic changes in patterns of forgetting around one week, which may affect the three levels of memory. The present experiment aimed to determine the effects of rereading on the three levels of memory as a factor of whether the texts were reread before versus after one week. Participants (N = 42) read a set of stories, reread them either before or after one week (with an inter-study interval of three days, seven days, or fourteen days), and then took a recognition test, from which the three levels of representation were derived. Signal detection results from this study reveal differential patterns at the three levels as a factor of whether the narratives were reread before or after one week. In particular, an ANOVA revealed that surface form memory was lower (p = .08), while textbase (p = .02) and event model memory (p = .04) were greater, when narratives were reread 14 days later compared to 3 days later. These results have implications for which types of memory benefit from distributed practice at various inter-study intervals.
Keywords: memory, event cognition, distributed practice, consolidation
Procedia PDF Downloads 225
14375 A Prediction Model for Dynamic Responses of Building from Earthquake Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Structural health monitoring systems based on seismic responses have been used to prevent seismic damage. Structural seismic damage to a building is caused by instantaneous stress concentration, which is related to the dynamic characteristics of the earthquake. Meanwhile, seismic response analysis to estimate the dynamic responses of a building demands a significantly high computational cost. To prevent the failure of structural members under the characteristics of an earthquake, and to avoid the significantly high computational cost of seismic response analysis, this paper presents an artificial neural network (ANN)-based prediction model for the dynamic responses of a building over a specific time length. From the measured dynamic responses, the input and output nodes of the ANN are formed by the length of the specific time and adopted for training. In the model, an evolutionary radial basis function neural network (ERBFNN), in which a radial basis function network (RBFN) is integrated with an evolutionary optimization algorithm to find the variables in the RBF, is implemented. The effectiveness of the proposed model is verified through an analytical study applying responses from a dynamic analysis of a multi-degree-of-freedom system as training data in the ERBFNN.
Keywords: structural health monitoring, dynamic response, artificial neural network, radial basis function network, genetic algorithm
Procedia PDF Downloads 304
14374 Optimization-Based Design Improvement of Synchronizer in Transmission System for Efficient Vehicle Performance
Authors: Sanyka Banerjee, Saikat Nandi, P. K. Dan
Abstract:
Synchronizers, as an integral part of the gearbox, are a key element in the automotive transmission system. The performance of the synchronizer affects transmission efficiency and driving comfort. The synchronizing mechanism, as a major component of the transmission system, must be capable of preventing vibration and noise in the gears. Improving gear-shifting efficiency, with the aim of achieving smooth, quick, and energy-efficient power transmission, remains a challenge for the automotive industry. The performance of the synchronizer depends on the features and characteristics of its sub-components, and therefore an analysis of the contribution of such characteristics is necessary. An important exercise involved is to identify all such characteristics or factors associated with the modeling and analysis; for this purpose, the literature was reviewed, rather extensively, to study the mathematical models formulated from them. It has been observed that certain factors are rather common across models; however, there are a few factors which have been specifically selected for individual models, as reported. In order to obtain a more realistic model, an attempt has been made here to identify and assimilate practically all possible factors which may be considered in formulating the model more comprehensively. A simulation study for such analysis, formulated as a block model, has been carried out in a reliable environment like MATLAB. Lower synchronization time is desirable, and hence it has been considered here as the output factor in the simulation modeling for evaluating transmission efficiency. An improved synchronizer model requires optimized values of the sub-component design parameters. A parametric optimization utilizing Taguchi's design-of-experiments-based response data and their analysis has been carried out for this purpose. The effectiveness of the optimized parameters for improved synchronizer performance has been validated by a simulation study of the synchronizer block model, with the improved parameter values as input parameters, for better transmission efficiency and driver comfort.
Keywords: design of experiments, modeling, parametric optimization, simulation, synchronizer
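The Taguchi analysis step can be sketched as a smaller-is-better signal-to-noise calculation over an orthogonal array, followed by factor-level averaging; the L4 array, the factor names, and the response values below are invented for illustration.

```python
# Taguchi smaller-is-better analysis for synchronization time:
# S/N = -10*log10(mean(y^2)) per run, then average S/N per factor level.
import numpy as np

# L4 orthogonal array: 3 two-level factors (e.g., cone angle, friction
# coefficient, gearshift force), levels coded 0/1 (factors assumed)
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# two replicate synchronization times [s] per run (assumed responses)
y = np.array([[0.42, 0.45], [0.38, 0.40], [0.35, 0.33], [0.48, 0.50]])

sn = -10 * np.log10((y ** 2).mean(axis=1))   # smaller-is-better S/N per run

for f in range(L4.shape[1]):
    means = [sn[L4[:, f] == lvl].mean() for lvl in (0, 1)]
    print(f"factor {f}: S/N at level 0 = {means[0]:.2f}, level 1 = {means[1]:.2f}")
```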
Procedia PDF Downloads 311