Search results for: parallel algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3040

880 Shadows and Symbols: The Tri-Level Importance of Memory in Jane Yolen's 'The Devil's Arithmetic' and Soon-To-Be-Published 'Mapping the Bones'

Authors: Kirsten A. Bartels

Abstract:

'Never again' and 'Lest we forget' have long been messages associated with the events of the Shoah. Yet as we attempt to learn from the past, we must find new ways to engage with its memories. The preservation of culture and the value of tradition are critical factors in Jane Yolen's works of Holocaust fiction, The Devil's Arithmetic and Mapping the Bones, emphasized through the importance of remembering. That word, in its multitude of forms (remember, remembering, memories), occurs no fewer than ten times in the first four pages and over one hundred times in the one-hundred-and-sixty-four-page narrative of The Devil's Arithmetic. While Yolen takes a different approach to showcasing the importance of memory in Mapping the Bones, it is of equal import in this work and, arguably, to the future of Holocaust knowledge. The idea of remembering, the desire to remember, and the ability to remember are explored in three divergent ways in The Devil's Arithmetic. First, in the importance of remembering a past which is not the protagonist's own, that is, of understanding history, or acquired memories. Second, in her actual or initial memories, those of her life in modern-day New York. Third, in a reverse mode of forgetting and trying to reacquire that which has been lost: as Hannah is processed in the camp, she forgets everything, and all worlds prior to the camp are lost to her. As numbers replace names, Yolen stresses the importance of self-identity, or owned memories. In addition, the importance of relaying memory, the transition of memory across perspectives, and the idea of reflective telling are explored in Mapping the Bones: the story is told through the lens of one of the twins as the events unfold, and then through the reflective telling from the lens of the other twin. Parallel to the exploration of the intersemiosis of memory is the discussion of literary shadows (foreshadowing, backshadowing, and side-shadowing) and their impact on the reader's experience of Yolen's narrative. For in this type of exploration, one cannot look at the events described in Yolen's work and not also contemplate the figurative shadows cast.

Keywords: holocaust literature, memory, narrative, Yolen

Procedia PDF Downloads 215
879 A Diagnostic Comparative Analysis of Simultaneous Localization and Mapping (SLAM) Models for Indoor and Outdoor Route Planning and Obstacle Avoidance

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In the robotics literature, simultaneous localization and mapping (SLAM) is commonly associated with an a priori-a posteriori problem. The autonomous vehicle needs a neutral map to spontaneously track its local position, i.e., "localization", while at the same time a precise estimation of the environment state is required for effective route planning and obstacle avoidance. On the other hand, environmental noise factors can significantly intensify the inherent uncertainties in using odometry information and measurements obtained from the robot's exteroceptive sensor, which in turn directly affect the overall performance of the corresponding SLAM. Therefore, the current work is primarily dedicated to providing a diagnostic analysis of five SLAM algorithms: FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, and DP-SLAM. A simulated SLAM environment consisting of two sets of landmark locations and robot waypoints was set up based on modified EKF and UKF in MATLAB, using two separate maps for indoor and outdoor route planning subject to natural and artificial obstacles. The simulation results are expected to provide an unbiased platform to compare the estimation performances of the five SLAM models as well as the reliability of each SLAM model for indoor and outdoor applications.
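
As a concrete anchor for the EKF side of the comparison, the sketch below shows a generic textbook EKF predict/update cycle for a planar robot observing a known landmark. It is illustrative only: the noise values and range-bearing measurement model are assumptions for the sketch, not the authors' MATLAB setup.

```python
# Minimal EKF predict/update step for a planar robot observing one landmark.
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate pose [x, y, theta] with a velocity motion model."""
    th = x[2]
    x_pred = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Correct the pose with a range-bearing observation of a known landmark."""
    dx, dy = landmark - x[:2]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0],
                  [dy / q, -dx / q, -1]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing residual
    return x + K @ y, (np.eye(3) - K @ H) @ P

x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-3, np.diag([0.1, 0.01])    # assumed noise levels
x, P = ekf_predict(x, P, v=1.0, w=0.1, dt=0.1, Q=Q)
x, P = ekf_update(x, P, z=np.array([5.0, 0.3]),
                  landmark=np.array([4.0, 3.0]), R=R)
```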

Keywords: route planning, obstacle, estimation performance, FastSLAM, L-SLAM, GraphSLAM, Grid SLAM, DP-SLAM

Procedia PDF Downloads 426
878 Traffic Forecasting for Open Radio Access Networks Virtualized Network Functions in 5G Networks

Authors: Khalid Ali, Manar Jammal

Abstract:

In order to meet the stringent latency and reliability requirements of the upcoming 5G networks, Open Radio Access Networks (O-RAN) have been proposed. The virtualization of O-RAN has allowed it to be treated as a Network Function Virtualization (NFV) architecture, while its components are considered Virtualized Network Functions (VNFs). Hence, intelligent Machine Learning (ML) based solutions can be utilized to apply different resource management and allocation techniques to O-RAN. However, intelligently allocating resources for O-RAN VNFs can prove challenging due to the dynamicity of traffic in mobile networks. Network providers need to dynamically scale the allocated resources in response to the incoming traffic. Elastically allocating resources can provide a higher level of flexibility in the network, in addition to reducing the OPerational EXpenditure (OPEX) and increasing resource utilization. Most existing elastic solutions are reactive in nature, despite the fact that proactive approaches are more agile, since they scale instances ahead of time by predicting the incoming traffic. In this work, we propose and evaluate traffic forecasting models based on ML algorithms that aim to predict future O-RAN traffic from previous traffic data. A detailed analysis of the traffic data was carried out to validate the quality and applicability of the traffic dataset. Accordingly, two ML models were proposed and evaluated based on their prediction capabilities.
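
To make the forecasting setup concrete, here is a minimal sketch of the ARIMA side of such a pipeline (the keywords also list LSTM). The synthetic traffic trace and the (p, d, q) order are assumptions for illustration, not the paper's data or tuning.

```python
# Fit ARIMA on a historical traffic series and predict the next intervals.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(500)
# Hypothetical traffic trace: daily-like periodicity plus noise.
traffic = 100 + 30 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 5, t.size)

train, test = traffic[:480], traffic[480:]
model = ARIMA(train, order=(2, 1, 2)).fit()   # (p, d, q) chosen arbitrarily
forecast = model.forecast(steps=test.size)

mae = np.mean(np.abs(forecast - test))
print(f"MAE over the {test.size}-step horizon: {mae:.2f}")
```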

Keywords: O-RAN, traffic forecasting, NFV, ARIMA, LSTM, elasticity

Procedia PDF Downloads 187
877 Evolving Convolutional Filter Using Genetic Algorithm for Image Classification

Authors: Rujia Chen, Ajit Narayanan

Abstract:

Convolutional neural networks (CNNs), as typically applied in deep learning, use layer-wise backpropagation (BP) to construct filters and kernels for feature extraction. Such filters are 2D or 3D groups of weights for constructing feature maps at subsequent layers of the CNN and are shared across the entire input. BP, as a gradient descent algorithm, has well-known problems of getting stuck at local optima. The use of genetic algorithms (GAs) for evolving weights between layers of standard artificial neural networks (ANNs) is a well-established area of neuroevolution. In particular, the use of crossover techniques when optimizing weights can help to overcome problems of local optima. However, the application of GAs for evolving the weights of filters and kernels in CNNs is not yet an established area of neuroevolution. In this paper, a GA-based filter development algorithm is proposed. The results of the proof-of-concept experiments described in this paper show that the proposed algorithm can find filter weights through evolutionary techniques rather than BP learning. For some simple classification tasks, like geometric shape recognition, the proposed algorithm can achieve 100% accuracy. The results for MNIST classification, while not as good as those achievable through standard BP-based filter learning, show that filter and kernel evolution warrants further investigation as a new subarea of neuroevolution for deep architectures.
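
A toy version of the idea, evolving a single 3x3 filter with truncation selection, one-point crossover, and mutation, is sketched below. The fitness used here (matching the response of a vertical-edge filter on a synthetic image) is a stand-in for the paper's classification accuracy, and the population size and rates are illustrative assumptions.

```python
# Toy GA that evolves a 3x3 convolutional filter instead of learning it by BP.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
img = np.zeros((16, 16)); img[:, 8:] = 1.0          # synthetic shape: an edge
target = convolve2d(img, np.array([[-1, 0, 1]] * 3), mode="valid")

def fitness(f):
    # Negative MSE between this filter's response and the target response.
    return -np.mean((convolve2d(img, f.reshape(3, 3), mode="valid") - target) ** 2)

pop = rng.normal(0, 1, (30, 9))                      # 30 flattened 3x3 filters
for gen in range(200):
    scores = np.array([fitness(f) for f in pop])
    parents = pop[np.argsort(scores)[-10:]]          # truncation selection
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 9)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.1, 9) * (rng.random(9) < 0.2)  # mutation
        kids.append(child)
    pop = np.array(kids)

best = pop[np.argmax([fitness(f) for f in pop])]
print("evolved filter:\n", best.reshape(3, 3).round(2))
```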

Keywords: neuroevolution, convolutional neural network, genetic algorithm, filters, kernels

Procedia PDF Downloads 163
876 Harvesting Energy from Lightning Strikes

Authors: Vaishakh Medikeri

Abstract:

Lightning, a marvelous, spectacular, and awesome phenomenon of nature, is one of the greatest energy sources left unharnessed for ages. A single bolt of lightning contains about 15 billion joules of energy. This huge amount of energy cannot be harnessed completely, only partially. This paper proposes to harness the energy from lightning strikes. Across the globe, lightning occurs at a frequency of 40-50 flashes per second, for a total of about 1.4 billion flashes per year, each carrying an average energy of about 15 billion joules. When a lightning bolt strikes the ground, a tremendous amount of energy is transferred to the earth, propagating in the form of concentric circular energy waves. These waves have a frequency of about 7.83 Hz. Harvesting the lightning bolt directly seems impossible, but harvesting the energy waves produced by the lightning is considerably easier. This can be done using a tricoil energy harnesser, a new device invented by the author. A lightning bolt seeks the path of minimum resistance down to the earth, so a lightning rod about 100 meters high can be erected and attached to the tricoil energy harnesser. The tricoil energy harnesser contains three coils whose centers are collinear, all parallel to the ground. The first coil has one end connected to the lightning rod and the other end grounded. A secondary coil is wound on the first coil with one end grounded and the other end pointing toward the ground, left unconnected and placed slightly above the ground, so that this end of the coil produces more intense currents, hence producing intense energy waves. The first coil produces very high magnetic fields and induces them in the second and third coils. Along with the magnetic fields induced by the first coil, the energy waves, which are currents, also flow through the second and third coils. The second and third coils are connected to a generator, which in turn is connected to a capacitor that stores the electrical energy. The first coil is placed in the middle of the second and third coils. The stored energy can be used for the transmission of electricity. This new technique of harnessing lightning strikes would be most efficient in places with a higher probability of lightning strikes. Since a sufficiently long lightning rod is used, the probability of cloud-to-ground strikes is increased. If the proposed apparatus were implemented, it would be a great source of pure and clean energy.

Keywords: generator, lightning rod, tricoil energy harnesser, harvesting energy

Procedia PDF Downloads 361
875 [Keynote Talk]: Treatment Satisfaction and Safety of Sitagliptin versus Pioglitazone in Patients with Type 2 Diabetes Mellitus Inadequately Controlled on Metformin Monotherapy

Authors: Shahnaz Haque, Anand Shukla, Sunita Singh, Anil Kem

Abstract:

Introduction: Diabetes mellitus is a chronic metabolic disease affecting millions worldwide. Metformin is the most commonly prescribed first-line oral hypoglycemic drug for type 2 diabetes mellitus, but due to progressive worsening of blood glucose control during the natural history of type 2 diabetes, combination therapy usually becomes necessary. Objective: This study was designed to assess treatment satisfaction with sitagliptin versus pioglitazone added to metformin in patients with type 2 diabetes mellitus (T2DM). Methods: We conducted a prospective, open-label, randomized, parallel-group study in SIMS, Hapur, U.P. Eligible patients fulfilling the inclusion criteria were randomized into two groups of 25 patients each, receiving sitagliptin 100 mg or pioglitazone 30 mg added to ongoing metformin (500 mg) therapy for 16 weeks. Follow-up visits were at weeks 4, 12, and 16. Results: After 16 weeks, the addition of sitagliptin 100 mg, compared with pioglitazone 30 mg, to ongoing metformin therapy provided similar glycosylated hemoglobin (HbA1c) lowering efficacy in patients with T2DM with inadequate glycemic control on metformin monotherapy. The change in HbA1c in group 1 was -0.656±0.21% (p<0.0001), whereas in group 2 it was -0.748±0.35% (p<0.0001); hence the decrease in HbA1c from baseline was greater in group 2. Both treatments were well tolerated, with a negligible risk of hypoglycemia. Weight loss was observed with sitagliptin, in contrast to the weight gain seen with pioglitazone. Conclusion: In this study, sitagliptin 100 mg plus metformin and pioglitazone 30 mg plus metformin were both effective and well tolerated, and glycemic control improved in both groups. The addition of pioglitazone caused oedema and weight gain, whereas sitagliptin caused weight loss.

Keywords: sitagliptin, pioglitazone, metformin, type 2 diabetes mellitus

Procedia PDF Downloads 285
874 Combination of Diane-35 and Metformin to Treat Early Endometrial Carcinoma in PCOS Women with Insulin Resistance

Authors: Xin Li, Yan-Rong Guo, Jin-Fang Lin, Yi Feng, Håkan Billig, Ruijin Shao

Abstract:

Background: Young women with polycystic ovary syndrome (PCOS) have a high risk of developing endometrial carcinoma. There is a need for the development of new medical therapies that can reduce the need for surgical intervention and preserve the fertility of these patients. The aim of the study was to describe and discuss cases of women with PCOS and insulin resistance (IR) and early endometrial carcinoma co-treated with Diane-35 and metformin. Methods: Five PCOS-IR women who were scheduled for diagnosis and therapy for early endometrial carcinoma were recruited. The hospital records and endometrial pathology reports were reviewed. All patients were co-treated with Diane-35 and metformin for 6 months to reverse the endometrial carcinoma and preserve their fertility. Before, during, and after treatment, endometrial biopsies and blood samples were obtained, and oral glucose tolerance tests were performed. Endometrial pathology was evaluated. Body weight (BW), body mass index (BMI), follicle-stimulating hormone (FSH), luteinizing hormone (LH), total testosterone (TT), sex hormone-binding globulin (SHBG), free androgen index (FAI), insulin area under the curve (IAUC), and homeostasis model assessment of insulin resistance (HOMA-IR) were determined. Results: Clinical stage 1a, low-grade endometrial carcinoma was confirmed before treatment. After 6 months of co-treatment, all patients showed normal epithelia, with no evidence of atypical hyperplasia or endometrial carcinoma. Co-treatment resulted in significant decreases in BW, BMI, TT, FAI, IAUC, and HOMA-IR, in parallel with a significant increase in SHBG. There were no differences in the FSH and LH levels after co-treatment. Conclusions: Combined treatment with Diane-35 and metformin has the potential to revert endometrial carcinoma into normal endometrial cells in PCOS-IR women. The cellular and molecular mechanisms behind this effect merit further investigation.

Keywords: PCOS, progesterone resistance, insulin resistance, steroid hormone receptors, endometrial carcinoma

Procedia PDF Downloads 387
873 Performance Evaluation of Composite Beam under Uniform Corrosion

Authors: Ririt Aprilin Sumarsono

Abstract:

Composite members (concrete and steel) have been widely adopted for structural use due to their superior performance in resisting load, reducing the total weight of the structure, increasing stiffness, and other advantages. On the other hand, environmental loads such as corrosion (e.g., chloride ingress) create significant time-dependent degradation of steel. The analysis performed in this paper mainly considers uniform corrosion in evaluating the composite beam, without examining pit corrosion as the initial form of corrosion. The corrosion level, in terms of weight loss, is reflected in modified values of the yield stress and modulus of elasticity of the steel, and these two mechanical properties are used to observe the stresses due to corrosion attack. As the corrosion level increases, the effective width of the composite beam in the concrete section becomes wider. The position of the neutral axis of the composite section indicates the composite action of the corroded beam, so the number of shear connectors provided must be reconsidered. Flexural capacity quantification provides the stresses, and the shear capacity calculation derives the connectors needed to overcome the shear problem for a composite beam under corrosion. A simply supported composite beam under uniform corrosion is examined in this paper, with the stresses as the focus of the evaluation. The principal stress at the first stage of composite construction declines as the corrosion level rises, and the same holds for the second-stage stress analysis, where the tension region carried by the steel has lower capacity due to corrosion. The total stress that the steel part of the composite section can bear decreases significantly, particularly in the outermost fiber on the tension side, while the available compression zone shrinks as the corrosion level increases, so the stress on the compression side is reduced as well. In conclusion, an increasing corrosion level degrades the stress capacity on both the compression and tension sides.

Keywords: composite beam, modulus of elasticity, stress analysis, yield strength, uniform corrosion

Procedia PDF Downloads 266
872 Radial Basis Surrogate Model Integrated with Evolutionary Algorithm for Solving Computation-Intensive Black-Box Problems

Authors: Abdulbaset Saad, Adel Younis, Zuomin Dong

Abstract:

For design optimization with high-dimensional expensive problems, an effective and efficient optimization methodology is desired. This work proposes a series of modifications to the Differential Evolution (DE) algorithm for solving computation-intensive black-box problems. The proposed methodology is called Radial Basis Meta-Model Assisted Differential Evolution (RBF-DE), a global optimization algorithm based on meta-modeling techniques. A meta-modeling-assisted DE is proposed to solve computationally expensive optimization problems. The Radial Basis Function (RBF) model is used as a surrogate to approximate the expensive objective function, while DE employs a mechanism to dynamically select the best-performing combination of parameters such as the differential rate, crossover probability, and population size. The proposed algorithm is tested on benchmark functions and real-life practical problems. The test results demonstrate that the proposed algorithm is promising and performs well compared to other optimization algorithms. The proposed algorithm is capable of converging to acceptable and good solutions in terms of accuracy, number of evaluations, and time needed to converge.
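
A minimal surrogate-assisted loop in this spirit is sketched below: fit an RBF model to a handful of expensive samples, let DE search the cheap surrogate, then evaluate the proposed point on the true function and refit. The test function, sample budget, and DE settings are illustrative assumptions, not the paper's configuration.

```python
# Surrogate-assisted optimization: RBF approximation searched by DE.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive(x):                       # stand-in for a costly black box
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

rng = np.random.default_rng(2)
bounds = [(-5, 5)] * 2
X = rng.uniform(-5, 5, (20, 2))         # initial design
y = np.array([expensive(x) for x in X])

for it in range(10):
    # Small smoothing guards against near-duplicate infill samples.
    surrogate = RBFInterpolator(X, y, smoothing=1e-10)
    res = differential_evolution(
        lambda x: float(surrogate(x.reshape(1, -1))[0]),
        bounds, seed=it, maxiter=50)
    X = np.vstack([X, res.x])           # infill: evaluate truth, refit
    y = np.append(y, expensive(res.x))

print("best found:", X[np.argmin(y)], "f =", y.min())
```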

Keywords: differential evolution, engineering design, expensive computations, meta-modeling, radial basis function, optimization

Procedia PDF Downloads 371
871 Comparative Study of Machine Learning Classifiers for Speech Emotion Recognition

Authors: Aishwarya Ravindra Fursule, Shruti Kshirsagar

Abstract:

At the intersection of artificial intelligence and human-centered computing, this paper delves into speech emotion recognition (SER). It presents a comparative analysis of machine learning models such as K-Nearest Neighbors (KNN), logistic regression, support vector machines (SVM), decision trees, ensemble classifiers, and random forests applied to SER. The research employs four datasets: Crema-D, SAVEE, TESS, and RAVDESS. It focuses on extracting salient audio signal features like the Zero Crossing Rate (ZCR), chroma_stft, Mel Frequency Cepstral Coefficients (MFCC), root mean square (RMS) value, and Mel spectrogram. These features are used to train and evaluate the models' ability to recognize eight types of emotion from speech: happy, sad, neutral, angry, calm, disgust, fear, and surprise. Among the models, the random forest algorithm demonstrated superior performance, achieving approximately 79% accuracy, which suggests its suitability for SER within the parameters of this study. The research contributes to SER by showcasing the effectiveness of various machine learning algorithms and feature extraction techniques, and the findings hold promise for the development of more precise emotion recognition systems in the future.
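
The sketch below illustrates this kind of feature-extraction-plus-classifier pipeline: per-clip statistics of ZCR, chroma, MFCC, RMS, and Mel-spectrogram features feed a random forest. The file paths and labels are placeholders, and loading of the Crema-D/SAVEE/TESS/RAVDESS corpora is omitted.

```python
# Per-clip audio features -> fixed-length vector -> random forest classifier.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(path):
    y, sr = librosa.load(path, duration=3.0)
    feats = [
        librosa.feature.zero_crossing_rate(y),
        librosa.feature.chroma_stft(y=y, sr=sr),
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40),
        librosa.feature.rms(y=y),
        librosa.feature.melspectrogram(y=y, sr=sr),
    ]
    # Mean over time frames gives one fixed-length vector per clip.
    return np.concatenate([f.mean(axis=1) for f in feats])

# Hypothetical corpus: (wav_path, emotion_label) pairs gathered elsewhere.
corpus = [("clip_001.wav", "happy"), ("clip_002.wav", "sad")]  # ...
X = np.array([extract_features(p) for p, _ in corpus])
y = [label for _, label in corpus]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```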

Keywords: comparison, ML classifiers, KNN, decision tree, SVM, random forest, logistic regression, ensemble classifiers

Procedia PDF Downloads 24
870 A Reflective Investigation on the Course Design and Coaching Strategy for Creating a Trans-Disciplinary Learning Environment

Authors: Min-Feng Hsieh

Abstract:

Nowadays, we face a highly competitive environment in which the conditions for survival have become more critical than ever before. The challenges we confront can no longer be dealt with within a single system of knowledge. The abilities we urgently need to acquire are those that lead us to cross the boundaries between different disciplines and take us to a neutral ground that gathers and integrates the powers and intelligence that surround us. This paper discusses how a trans-disciplinary design course organized by the College of Design at Chaoyang University responds to this modern challenge. By orchestrating an experimental course format and developing a series of coaching strategies, a trans-disciplinary learning environment has been created and practiced, in which students selected from five different departments (Architecture, Interior Design, Visual Design, Industrial Design, and Landscape and Urban Design) are encouraged to think outside their familiar knowledge pool and to learn with and from each other. In the course of implementing this program, parallel research was conducted by adopting the theory and principles of Action Research, a research methodology that provides the course organizer with emergent, responsive, action-oriented, participative, and critically reflective insights for immediate changes and amendments in order to improve the teaching and learning experience. The conclusion points out how the learning and teaching experience of this trans-disciplinary design studio offers observations that help us reflect upon the constraints and divisions caused by the subject-based curriculum. A series of concepts for course design and teaching strategies developed and implemented in this trans-disciplinary course are introduced as a way to promote learners' self-motivated, collaborative, cross-disciplinary, and student-centered learning skills. The outcome of this experimental course exemplifies an alternative approach that could be adopted in pursuit of a remedy for the problems of current educational practice.

Keywords: course design, coaching strategy, subject-based curriculum, trans-disciplinary

Procedia PDF Downloads 185
869 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone generalized exponential (TPGE) distribution. A real survival dataset is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS code has been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data. Emphasis is placed on the modeling of data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete, or 'censored', data encountered. Analytic approximation and simulation tools are covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independence Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. In this paper, the TPGE model is also used to analyze lifetime data in the Bayesian paradigm. Results are evaluated on the above-mentioned real survival dataset. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
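
Since the paper's code is in R and JAGS, here is only a language-agnostic illustration of the simulation side: a random-walk Metropolis sampler for a right-censored survival model. The exponential likelihood below is a stand-in for brevity; the TPGE density would occupy the same log_pdf/log_sf slots, and the toy data, prior, and tuning values are all assumptions.

```python
# Random-walk Metropolis for a censored-survival posterior (toy example).
import numpy as np

rng = np.random.default_rng(3)
t = np.array([5.0, 8.0, 12.0, 3.0, 20.0])        # observed times (toy data)
event = np.array([1, 1, 0, 1, 0])                # 1 = failure, 0 = censored

def log_post(log_rate):
    rate = np.exp(log_rate)
    log_pdf = np.log(rate) - rate * t            # density for failures
    log_sf = -rate * t                           # survival for censored obs
    loglik = np.sum(event * log_pdf + (1 - event) * log_sf)
    return loglik - 0.5 * log_rate**2 / 10.0     # weak normal prior on log-rate

chain, cur = [], 0.0
for _ in range(5000):
    prop = cur + rng.normal(0, 0.5)              # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(cur):
        cur = prop                               # accept
    chain.append(cur)

rate_samples = np.exp(chain[1000:])              # drop burn-in
print("posterior mean rate:", rate_samples.mean())
```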

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 505
868 Three Tier Indoor Localization System for Digital Forensics

Authors: Dennis L. Owuor, Okuthe P. Kogeda, Johnson I. Agbinya

Abstract:

Mobile localization has attracted a great deal of attention recently due to the introduction of wireless networks. Although several localization algorithms and systems have been implemented and discussed in the literature, very few researchers have exploited the gap that exists between indoor localization, tracking, external storage of location information, and outdoor localization for the purpose of digital forensics during and after a disaster. The contribution of this paper lies in the implementation of a robust system that is capable of locating and tracking mobile device users and storing their location information in the cloud, for both indoor and partially outdoor environments. The system can be used during a disaster to track and locate mobile phone users. The developed system is a mobile application built on Android, Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, and MATLAB for Android mobile users. Using the Waterfall model of software development, we have implemented a three-level system that is able to track and locate mobile devices and store their information in a secure cloud database on an almost real-time basis. The outcome of the study showed that the developed system is efficient with regard to tracking and locating mobile devices. The system is also flexible, i.e., it can be used in any building with few adjustments. Finally, the system is accurate in locating and tracking mobile devices both indoors and outdoors.

Keywords: indoor localization, digital forensics, fingerprinting, tracking and cloud

Procedia PDF Downloads 307
867 Magneto-Transport of Single Molecular Transistor Using Anderson-Holstein-Caldeira-Leggett Model

Authors: Manasa Kalla, Narasimha Raju Chebrolu, Ashok Chatterjee

Abstract:

We have studied the quantum transport properties of a single molecular transistor in the presence of an external magnetic field using the Keldysh Green function technique. We used the Anderson-Holstein-Caldeira-Leggett model to describe the single molecular transistor, which consists of a molecular quantum dot (QD) coupled to two metallic leads and placed on a substrate that acts as a heat bath. The phonons are eliminated by the Lang-Firsov transformation, and the effective Hamiltonian is used to study the effect of an external magnetic field on the spectral density function, tunneling current, differential conductance, and spin polarization. A peak in the spectral function corresponds to a possible excitation. In the absence of a magnetic field, the spin-up and spin-down states are degenerate; this degeneracy is lifted by the magnetic field, leading to the splitting of the central peak of the spectral function. The tunneling current decreases with increasing magnetic field. We have observed that even the differential conductance peak of the zero-magnetic-field curve is split in the presence of electron-phonon interaction. As the magnetic field is increased, each peak splits into two, and each peak indicates the existence of an energy level. Thus the number of energy levels available for transport in the bias window increases with the magnetic field. In the presence of the electron-phonon interaction, the differential conductance is in general reduced and decreases faster with the magnetic field. As the magnetic field strength increases, the spin polarization of the current increases. Our results show that a strongly interacting QD coupled to metallic leads in the presence of an external magnetic field parallel to the plane of the QD acts as a spin filter at zero temperature.
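
The peak splitting can be made explicit with the standard level structure of a Lang-Firsov-transformed Anderson dot in a field; the notation here is generic, not the authors'. With electron-phonon coupling $\lambda$ and phonon frequency $\omega_0$, the spin-resolved dot level takes the form

$$\tilde{\varepsilon}_{d\sigma} \;=\; \varepsilon_d \;-\; \frac{\lambda^{2}}{\omega_0} \;-\; \frac{\sigma}{2}\, g\,\mu_B B, \qquad \sigma = \pm 1,$$

where $-\lambda^{2}/\omega_0$ is the polaron shift produced by the Lang-Firsov transformation. The two spin channels are separated by the Zeeman energy $g\mu_B B$, so the single zero-field spectral peak splits into two as $B$ grows.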

Keywords: Anderson-Holstein model, Caldeira-Leggett model, spin-polarization, quantum dots

Procedia PDF Downloads 157
866 Machine Learning for Targeting of Conditional Cash Transfers: Improving the Effectiveness of Proxy Means Tests to Identify Future School Dropouts and the Poor

Authors: Cristian Crespo

Abstract:

Conditional cash transfers (CCTs) have been targeted towards the poor. Thus, their targeting assessments check whether these schemes have been allocated to low-income households or individuals. However, CCTs have more than one goal and target group. An additional goal of CCTs is to increase school enrolment; hence, students at risk of dropping out of school are also a target group. This paper analyses whether one of the most common targeting mechanisms of CCTs, a proxy means test (PMT), is suitable to identify both the poor and future school dropouts. The PMT is compared with alternative approaches that use the outputs of a predictive model of school dropout. This model was built using machine learning algorithms and rich administrative datasets from Chile. The paper shows that using machine learning outputs in conjunction with the PMT increases targeting effectiveness by identifying more students who are either poor or future dropouts. This joint targeting approach increases effectiveness in different scenarios except when the social valuation of the two target groups differs greatly. In these cases, the most likely optimal approach is to solely adopt the targeting mechanism designed to find the highly valued group.
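
A minimal sketch of the joint rule being compared, assuming placeholder scores, cutoffs, and a generic classifier rather than the Chilean administrative data: a household is selected if its PMT score falls below the poverty cutoff or the model flags the student as a likely dropout.

```python
# Joint targeting: PMT poverty cutoff OR ML-predicted dropout risk.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 8))                    # stand-in student features
dropout = (X[:, 0] + rng.normal(0, 1, 1000)) > 1  # synthetic dropout labels
pmt_score = rng.normal(50, 10, 1000)              # synthetic PMT scores

model = GradientBoostingClassifier().fit(X, dropout)
p_dropout = model.predict_proba(X)[:, 1]

pmt_only = pmt_score < 45                          # classic PMT targeting
joint = pmt_only | (p_dropout > 0.5)               # PMT + ML dropout risk
print("covered by PMT alone:", pmt_only.sum(), "| joint rule:", joint.sum())
```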

Keywords: conditional cash transfers, machine learning, poverty, proxy means tests, school dropout prediction, targeting

Procedia PDF Downloads 182
865 Transforming Health Information from Manual to Digital (Electronic) World: A Reference and Guide

Authors: S. Karthikeyan, Naveen Bindra

Abstract:

Introduction: The aim is to update ourselves on, and understand, the latest electronic formats available to healthcare providers, and how they can be used and developed according to standards. The idea is to relate manual medical record keeping to the maintenance of patients' electronic information in a healthcare setup, and, furthermore, to adopt the right technology for the organization and improve the quality and quantity of the healthcare we provide. Objective: The concept is to explain the terms Electronic Medical Record (EMR), Electronic Health Record (EHR), and Personal Health Record (PHR), and to select the best of the available electronic sources and software before implementation, so as to guide end users and ensure they can use the technology without doubts or difficulties. A further aim is to evaluate the uses of, and barriers to, EMR, EHR, and PHR. Aim and Scope: The target is to enable healthcare providers such as physicians, nurses, and therapists, along with medical bill reimbursement, insurance, and government bodies, to access patient information in an easy and systematic manner without diluting the confidentiality of that information. Method: Health information technology can be implemented with the help of organizations that provide legal guidelines and support to the healthcare provider. The main objective is to select correct, embedded, and affordable database management software capable of generating large-scale data; a parallel need is to know the latest software available on the market. Conclusion: The question here is how to implement an electronic information system with healthcare providers and organizations. Clinicians are the main users of the technology and enable us to 'go paperless', and the technological landscape changes day to day. Fundamentally, the idea is to show how to store data electronically in a safe and secure manner. All three formats exemplify the fact that an electronic format has its own benefits as well as barriers.

Keywords: medical records, digital records, health information, electronic record system

Procedia PDF Downloads 435
864 Evaluation of Total Phenolic Content and Antioxidant Activity in Amaranth Seeds Grown in Latvia

Authors: Alla Mariseva, Ilze Beitane

Abstract:

Daily intake of products rich in antioxidants that scavenge free radicals in cell membranes is an effective way to combat oxidative stress. Recently, increased interest has been noted in the identification and utilization of plants rich in antioxidant compounds, as they may serve as preventive medicine. Amaranth seeds, owing to their polyphenols, anthocyanins, flavonoids, and tocopherols, are characterized by high antioxidant activity. The study aimed to evaluate the total phenolic content and radical scavenging activity of amaranth seeds cultivated in 2020 on two farms in Latvia. One sample of amaranth seeds came from an organic farm, the other from a conventional farm. The total phenolic content of the amaranth seed extracts was measured with the Folin-Ciocalteu spectrophotometric method and expressed as gallic acid equivalents (GAE) per 100 g dry weight (DW) of the samples. The antioxidant activity of the extracts was calculated based on scavenging of the stable 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical, and the radical scavenging capacity (ABTS) was expressed as mM Trolox equivalents (TE) per 100 g dry weight. Three parallel measurements were performed on all samples. There were significant differences between organic and conventional amaranth seeds in terms of total phenolic content and antioxidant activity. Organic amaranth seeds showed a higher total phenolic content than conventional seeds, 65.4±6.0 mg GAE 100 g⁻¹ DW and 43.4±7.8 mg GAE 100 g⁻¹ DW, respectively. Organic amaranth seeds were also characterized by higher DPPH radical scavenging activity (7.9±0.4 mM TE 100 g⁻¹ of dry matter) and ABTS radical scavenging capacity (13.2±1.5 mM TE 100 g⁻¹ of dry matter). The results on the total phenolic content and antioxidant activity of amaranth seeds grown in Latvia confirm that the samples have a high biological value; it would therefore be worthwhile to promote their consumption by including them in various food products, including vegan products, increasing their nutritional value.

Keywords: ABTS, amaranth seeds, antioxidant activity, DPPH, total phenolic content

Procedia PDF Downloads 200
863 The Effect of Nanoscience and Nanotechnology Education on Preservice Science Teachers' Awareness of Nanoscience and Nanotechnology

Authors: Tuba Senel Zor, Oktay Aslan

Abstract:

With current trends in nanoscience and nanotechnology (NST), scientists have paid much attention to education and nanoliteracy in parallel with the developments in these fields. Understanding the advances in NST research requires a population with a high degree of science literacy. All citizens will soon need nanoliteracy in order to navigate some of the important science-based issues they face in their everyday lives. While the fields of NST are advancing rapidly and their societal significance is rising, the general public's awareness of these fields has remained low. Moreover, students at different education levels, as well as teachers, lack the expected level of awareness. This problem may stem from inadequate education and training. To remove this inadequacy, teachers have the greatest duties and responsibilities. Science teachers at all levels especially need to be made aware of these developments and adequately prepared so that they are able to teach about these advances in a developmentally appropriate manner. If teachers develop an understanding and awareness of NST, they can also discuss the topic with their students. Therefore, the awareness and conceptual understanding of both the teachers who will teach science to students and the students who will be introduced to NST should be increased, and the necessary training should be provided. The aim of this study was to examine the effect of NST education on preservice science teachers' awareness of NST. The study was designed with a one-group pre-test/post-test quasi-experimental pattern and was conducted with 32 preservice science teachers attending the Elementary Science Education Program at a large Turkish university in central Anatolia. NST education was given over five weeks, two hours per week. The Nanoscience and Nanotechnology Awareness Questionnaire was used as the data collection tool and was administered as the pre-test and post-test. The collected data were analyzed using the Statistical Package for the Social Sciences (SPSS). The results of the data analysis showed that there was a significant difference (z=6.25, p<.05) in the NST awareness of preservice science teachers after the NST education was implemented. The results indicate that NST education has an important effect on improving preservice science teachers' awareness of NST.

Keywords: awareness level, nanoliteracy, nanoscience and nanotechnology education, preservice science teachers

Procedia PDF Downloads 429
862 Teaching Tools for Web Processing Services

Authors: Rashid Javed, Hardy Lehmkuehler, Franz Josef-Behr

Abstract:

Web Processing Services (WPS) have attracted growing interest in geoinformation research. However, teaching about them is difficult because of the generally complex circumstances of their use, which limit the possibilities for hands-on exercises on Web Processing Services. To support understanding, a Training Tools Collection was set up at the University of Applied Sciences Stuttgart (HFT). It is limited to the scope of geostatistical interpolation of sample point data, where different algorithms can be used, like IDW, nearest neighbor, etc. The Tools Collection aims to support understanding of the scope, definition, and deployment of Web Processing Services. For example, it is necessary to characterize the input of an interpolation by the dataset, the parameters for the algorithm, and the interpolation results (here a grid of interpolated values is assumed). This paper reports on first experiences using a pilot installation, which was intended to find suitable software interfaces for later full implementations and to draw conclusions on potential user interface characteristics. Experiences were made with the Deegree software, one of several service suites (collections). Being strictly programmed in Java, Deegree offers several OGC-compliant service implementations that also promise to be of benefit for the project. The mentioned parameters for a WPS were formalized following the paradigm that any meaningful component will be defined in terms of suitable standards; e.g., the data output can be defined as a GML file. However, the choice of meaningful information pieces and user interactions is not free but is partially determined by the selected WPS processing suite.
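
For readers unfamiliar with the interpolation the service wraps, here is a minimal inverse distance weighting (IDW) sketch producing the assumed grid of interpolated values; the power parameter, sample points, and grid extent are arbitrary choices for illustration.

```python
# Inverse distance weighting of scattered samples onto a regular grid.
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Interpolate values at xy_query from scattered (x, y) samples."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # closer samples weigh more
    return (w * values).sum(axis=1) / w.sum(axis=1)

pts = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
vals = np.array([1.0, 3.0, 7.0])
gx, gy = np.meshgrid(np.linspace(0, 10, 5), np.linspace(0, 8, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
print(idw(pts, vals, grid).reshape(5, 5).round(2))  # interpolated grid
```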

Keywords: deegree, interpolation, IDW, web processing service (WPS)

Procedia PDF Downloads 334
861 Multi-Objective Evolutionary Computation Based Feature Selection Applied to Behaviour Assessment of Children

Authors: F. Jiménez, R. Jódar, M. Martín, G. Sánchez, G. Sciavicco

Abstract:

Attribute or feature selection is one of the basic strategies to improve the performance of data classification tasks and, at the same time, to reduce the complexity of classifiers, and it is a particularly fundamental one when the number of attributes is relatively high. Its application to unsupervised classification is restricted to a limited number of experiments in the literature. Evolutionary computation has already proven itself to be a very effective choice to consistently reduce the number of attributes towards a better classification rate and a simpler semantic interpretation of the inferred classifiers. We present a feature selection wrapper model composed of a multi-objective evolutionary algorithm, the clustering method Expectation-Maximization (EM), and the classifier C4.5, for the unsupervised classification of data extracted from a psychological test named BASC-II (Behavior Assessment System for Children, 2nd ed.), with two objectives: maximizing the likelihood of the clustering model and maximizing the accuracy of the obtained classifier. We present a methodology to integrate feature selection for unsupervised classification, model evaluation, decision making (to choose the most satisfactory model according to an a posteriori process in a multi-objective context), and testing. We compare the performance of the classifiers obtained by the multi-objective evolutionary algorithms ENORA and NSGA-II, and the best solution is then validated by the psychologists who collected the data.
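
The sketch below shows how the wrapper's two objectives can be evaluated for one candidate feature mask, with a Gaussian mixture standing in for EM clustering and sklearn's CART decision tree standing in for C4.5; the data are synthetic, not the BASC-II responses.

```python
# Two-objective evaluation of one feature subset in a wrapper model.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 12))                 # stand-in for test attributes

def evaluate(mask, n_clusters=3):
    """Return (clustering log-likelihood, classifier accuracy) for a mask."""
    Xs = X[:, mask]
    gm = GaussianMixture(n_components=n_clusters, random_state=0).fit(Xs)
    labels = gm.predict(Xs)                    # cluster labels as classes
    acc = cross_val_score(DecisionTreeClassifier(random_state=0),
                          Xs, labels, cv=5).mean()
    return gm.score(Xs), acc                   # both to be maximized

mask = np.zeros(12, dtype=bool); mask[[0, 3, 7]] = True   # one candidate
print(evaluate(mask))   # a multi-objective EA would rank many such masks
```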

Keywords: evolutionary computation, feature selection, classification, clustering

Procedia PDF Downloads 344
860 Using Deep Learning Real-Time Object Detection Convolution Neural Networks for Fast Fruit Recognition in the Tree

Authors: K. Bresilla, L. Manfrini, B. Morandi, A. Boini, G. Perulli, L. C. Grappadelli

Abstract:

Image and video processing for fruit in the tree using hard-coded feature extraction algorithms has shown high accuracy in recent years. While accurate, these approaches, even with high-end hardware, are computationally intensive and too slow for real-time systems. This paper details the use of deep convolutional neural networks (CNNs), specifically the YOLO (You Only Look Once) algorithm with 24+2 convolution layers. Using deep learning techniques eliminated the need to hard-code specific features for specific fruit shapes, colors, and/or other attributes. This CNN was trained on more than 5000 images of apple and pear fruits on a 960-core GPU (Graphical Processing Unit), and the testing set showed an accuracy of 90%. After this, the trained model was transferred to an embedded device (Raspberry Pi gen. 3) with a camera for more portability. Based on the correlation between the number of visible or detected fruits in one frame and the real number of fruits on one tree, a model was created to accommodate this error rate. The speed of processing and detection of the whole platform was higher than 40 frames per second, fast enough for any grasping/harvesting robotic arm or other real-time applications.
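
The count-correction model can be as simple as the regression sketched below, with made-up counts standing in for the orchard data: regress the true per-tree count on the count the detector sees in a frame, then apply the fit to new detections to compensate for occluded fruit.

```python
# Linear correction from detected fruit counts to estimated true counts.
import numpy as np

detected = np.array([12, 18, 25, 30, 41, 47])   # fruits visible to the CNN
actual = np.array([20, 31, 40, 52, 66, 78])     # ground-truth counts per tree

slope, intercept = np.polyfit(detected, actual, 1)   # least-squares fit
estimate = lambda d: slope * d + intercept

print(f"estimated fruits for 35 detections: {estimate(35):.0f}")
```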

Keywords: artificial intelligence, computer vision, deep learning, fruit recognition, harvesting robot, precision agriculture

Procedia PDF Downloads 392
859 Time Series Forecasting (TSF) Using Various Deep Learning Models

Authors: Jimeng Shi, Mahek Jain, Giri Narasimhan

Abstract:

Time Series Forecasting (TSF) is used to predict target variables at a future time point based on learning from previous time points. To keep the problem tractable, learning methods use data from a fixed-length window in the past as an explicit input. In this paper, we study how the performance of predictive models changes as a function of different look-back window sizes and different amounts of time to predict into the future. We also consider the performance of the recent attention-based Transformer models, which have had good success in the image processing and natural language processing domains. In all, we compare four different deep learning methods (RNN, LSTM, GRU, and Transformer) along with a baseline method. The dataset (hourly) we used is the Beijing Air Quality Dataset from the UCI website, which includes a multivariate time series of many factors measured on an hourly basis for a period of 5 years (2010-14). For each model, we also report on the relationship between the performance and the look-back window sizes and the number of predicted time points into the future. Our experiments suggest that Transformer models have the best performance, with the lowest Mean Absolute Errors (MAE = 14.599, 23.273) and Root Mean Square Errors (RMSE = 23.573, 38.131) for most of our single-step and multi-step predictions. The best size for the look-back window to predict 1 hour into the future appears to be one day, while 2 or 4 days perform best to predict 3 hours into the future.
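
A minimal sketch of the window-based setup, assuming a synthetic hourly series and illustrative layer sizes rather than the UCI Beijing data: slice the series into fixed look-back windows and train a small LSTM to predict h steps ahead.

```python
# Build look-back windows and fit a small LSTM forecaster.
import numpy as np
import tensorflow as tf

def make_windows(series, look_back, horizon):
    """Return (X, y): each X row is a look-back window, y is h steps out."""
    X, y = [], []
    for i in range(len(series) - look_back - horizon + 1):
        X.append(series[i:i + look_back])
        y.append(series[i + look_back + horizon - 1])
    return np.array(X)[..., None], np.array(y)

t = np.arange(2000, dtype="float32")
series = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(2000)  # hourly-ish

look_back, horizon = 24, 1            # one day of context, 1 hour ahead
X, y = make_windows(series, look_back, horizon)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(look_back, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")   # MAE matches the paper's metric
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
print("MAE on the last 100 windows:", model.evaluate(X[-100:], y[-100:], verbose=0))
```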

Keywords: air quality prediction, deep learning algorithms, time series forecasting, look-back window

Procedia PDF Downloads 138
858 Design and Emotion: The Value of 1970s French Children’s Books in the Middle East

Authors: Tina Sleiman

Abstract:

In the early 1970s, a graphics revolution - in quantity and quality - marked the youth publications sector in France. The increased interest in youth publications was supported with the emergence of youth libraries and major publishing houses. In parallel, the 'Agence de Cooperation Culturelle et Technique' (currently the International Organization of the Francophonie) was created, and several Arab countries had joined as members. In spite of political turmoil in the Middle East, French schools in Arab countries were still functioning and some even flourishing. This is a testament that French culture was, and still is, a major export to the region. This study focuses on the aesthetic value of the graphic styles that characterize French children’s books from the 1970s, and their personal value to Francophone people who have consumed these artifacts, in the Middle East. The first part of the study looks at the artifact itself: starting from the context of creation and consumption of these books, and continuing to the preservation and remaining collections. The aesthetic value is studied and compared to similar types of visuals of juxtaposed time periods. The second part examines the audience’s response to the visuals in terms of style recognition or identification, along with emotional significance or associations, and the personal value the artifacts might hold to their consumers. The methods of investigation consist of a literature review, a survey of book collections, and a visual questionnaire, supported by personal interviews. As an outcome, visual patterns will be identified: elements from 1970s children’s books reborn in contemporary youth-based publications. Results of the study shall inform us directly on the aesthetic and personal value of illustrated French children’s books in the Middle East, and indirectly on the capacity of youth-targeted design to create a long-term emotional response from its audience.

Keywords: children’s books, French visual culture, graphic style, publication design, revival

Procedia PDF Downloads 145
857 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The bilevel programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader player at the upper level and the follower player at the lower level) wherein each player tries to optimize his or her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision variables are divided between the two players, and one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower) where the constraint region of the upper-level problem is implicitly determined by the lower-level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived, and then, using the extended Charnes-Cooper transformation, each fractional problem can be reduced to a linear problem. We can then find the best and the worst optimal values of the leader objective function by two algorithms.
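
For reference, the classical single-level form of the Charnes-Cooper step that each derived problem passes through is sketched here in generic notation (not the paper's). A linear fractional program

$$\min_{x}\ \frac{c^{\top}x+\alpha}{d^{\top}x+\beta}\quad \text{s.t.}\ Ax\le b,\ x\ge 0,\ d^{\top}x+\beta>0$$

becomes, after substituting $t = 1/(d^{\top}x+\beta)$ and $y = tx$, the linear program

$$\min_{y,\,t}\ c^{\top}y+\alpha t\quad \text{s.t.}\ Ay\le bt,\ d^{\top}y+\beta t=1,\ y\ge 0,\ t>0,$$

from which the fractional optimum is recovered as $x = y/t$.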

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 423
856 Vehicular Speed Detection Camera System Using Video Stream

Authors: C. A. Anser Pasha

Abstract:

In this paper, a new vehicular speed detection camera system is presented that is applicable as an alternative to traditional radars, with the same accuracy or even better. The real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS process can be divided into three successive phases. The first is the object detection phase, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, which rectifies the major drawback of using only adaptive background subtraction. The second is the object tracking phase, which consists of three successive operations: object segmentation, object labeling, and object center extraction. Object tracking takes into consideration the different possible scenarios of the moving object, such as simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving as another enters the scene. The third is the speed calculation phase, in which speed is calculated from the number of frames consumed by the object to pass through the scene.
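
A minimal sketch of the phase-one hybrid detector, assuming a hypothetical input clip and illustrative thresholds: MOG2 adaptive background subtraction is combined with three-frame differencing, and only pixels flagged by both are kept as motion.

```python
# Hybrid motion mask: adaptive background subtraction AND frame differencing.
import cv2

cap = cv2.VideoCapture("traffic.mp4")           # hypothetical input clip
backsub = cv2.createBackgroundSubtractorMOG2()
ok, f0 = cap.read(); ok, f1 = cap.read()

while True:
    ok, f2 = cap.read()
    if not ok:
        break
    g0, g1, g2 = (cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in (f0, f1, f2))
    # Three-frame differencing: motion present in both consecutive diffs.
    d1, d2 = cv2.absdiff(g1, g0), cv2.absdiff(g2, g1)
    _, m1 = cv2.threshold(d1, 25, 255, cv2.THRESH_BINARY)
    _, m2 = cv2.threshold(d2, 25, 255, cv2.THRESH_BINARY)
    frame_diff_mask = cv2.bitwise_and(m1, m2)

    bg_mask = backsub.apply(f2)                  # adaptive background model
    motion = cv2.bitwise_and(frame_diff_mask, bg_mask)

    # 'motion' now feeds segmentation/labeling in the tracking phase.
    f0, f1 = f1, f2

cap.release()
```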

Keywords: radar, image processing, detection, tracking, segmentation

Procedia PDF Downloads 445
855 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation

Procedia PDF Downloads 18
854 Intelligent Recognition of Diabetes Disease via FCM Based Attribute Weighting

Authors: Kemal Polat

Abstract:

In this paper, an attribute weighting method called fuzzy C-means clustering based attribute weighting (FCMAW) has been used for the classification of the diabetes disease dataset. The aims of this study are to reduce the variance within the attributes of the diabetes dataset and to improve the classification accuracy of classifier algorithms by transforming non-linearly separable datasets into linearly separable ones. The Pima Indians diabetes dataset has two classes, comprising normal subjects (500 instances) and diabetes subjects (268 instances). Fuzzy C-means clustering is an improved version of the K-means clustering method and is one of the most used clustering methods in data mining and machine learning applications. In this study, as the first stage, the fuzzy C-means clustering process was used to find the centers of the attributes in the Pima Indians diabetes dataset, and the dataset was then weighted according to the ratios of the means of the attributes to their centers. Secondly, after the weighting process, classifier algorithms including the support vector machine (SVM) and k-NN (k-nearest neighbor) classifiers were used to classify the weighted Pima Indians diabetes dataset. Experimental results show that the proposed attribute weighting method (FCMAW) obtains very promising results in the classification of the Pima Indians diabetes dataset.
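
A sketch of the weighting idea on synthetic data, assuming one fuzzy c-means run per attribute with two clusters and fuzzifier m = 2 (simplifications for illustration): each attribute is rescaled by the ratio of its mean to its learned center, and SVM or k-NN would then train on the weighted matrix.

```python
# FCM-based attribute weighting: weight = mean(attribute) / fuzzy center.
import numpy as np

def fcm_center(x, c=2, m=2.0, iters=100):
    """1-D fuzzy c-means; returns the mean of the learned centers."""
    rng = np.random.default_rng(0)
    u = rng.random((len(x), c)); u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # standard membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers.mean()

rng = np.random.default_rng(6)
X = rng.normal(loc=[120, 70, 30], scale=[30, 10, 8], size=(268, 3))  # toy data

weights = np.array([X[:, j].mean() / fcm_center(X[:, j])
                    for j in range(X.shape[1])])
X_weighted = X * weights          # ratio-of-mean-to-center weighting
print("attribute weights:", weights.round(3))
```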

Keywords: fuzzy C-means clustering, fuzzy C-means clustering based attribute weighting, Pima Indians diabetes, SVM

Procedia PDF Downloads 390
853 The Searching Artificial Intelligence: Neural Evidence on Consumers' Less Aversion to Algorithm-Recommended Search Product

Authors: Zhaohan Xie, Yining Yu, Mingliang Chen

Abstract:

As research has shown a convergent tendency toward aversion to AI recommendation, it is imperative to find a way to promote AI usage and better harness the technology. In the context of e-commerce, this study finds evidence that people show less avoidance of algorithms when the algorithms recommend search products than when they recommend experience products. This is due to people's different attribution of mind to AI versus humans, as suggested by mind perception theory. While people hold the belief that an algorithm possesses sufficient capability to think and calculate, which makes it competent to evaluate search product attributes that can be assessed before actual use, they doubt its capability to sense and feel, which is essential for evaluating experience product attributes that must be assessed after personal experience. The results of the behavioral investigation (Study 1, N=112) validated that consumers show low purchase intention toward experience products recommended by AI. A further consumer neuroscience study (Study 2, N=26) using event-related potentials (ERP) showed that consumers have a higher level of cognitive conflict when faced with an AI-recommended experience product, as reflected by a larger N2 component, while the effect disappears for search products. This research has implications for the effective employment of AI recommenders, and it extends the literature on e-commerce and marketing communication.

Keywords: algorithm recommendation, consumer behavior, e-commerce, event-related potential, experience product, search product

Procedia PDF Downloads 113
850 From Wave-Powered Propulsion to Flight with Membrane Wings: Insights Powered by High-Fidelity Immersed Boundary Method-Based FSI Simulations

Authors: Rajat Mittal, Jung Hee Seo, Jacob Turner, Harshal Raut

Abstract:

The perpetual advancement in computational capabilities, coupled with the continuous evolution of software tools and numerical algorithms, is creating novel avenues for research, exploration, and application at the nexus of computational fluid and structural mechanics. Fish leverage their remarkably flexible bodies and fins to harness energy from vortices, propelling themselves with an elegance and efficiency that captivates engineers. Bats fly with unparalleled agility and speed by using their flexible membrane wings. Wave-assisted propulsion (WAP) systems, utilizing elastically mounted hydrofoils, convert wave energy into thrust. Each of these problems involves a complex and elegant interplay between fluid dynamics and structural mechanics. Historically, investigations into such phenomena were constrained by available tools, but modern computational advancements now facilitate the exploration of these multi-physics challenges with an unprecedented level of fidelity, precision, and realism. In this work, the authors discuss projects that harness the capabilities of high-fidelity sharp-interface immersed boundary methods to address a spectrum of engineering and biological challenges involving fluid-structure interaction.

Keywords: immersed boundary methods, CFD, bioflight, fluid structure interaction

Procedia PDF Downloads 42
851 Comparative Performance of Artificial Bee Colony Based Algorithms for Wind-Thermal Unit Commitment

Authors: P. K. Singhal, R. Naresh, V. Sharma

Abstract:

This paper presents three optimization models, namely the New Binary Artificial Bee Colony (NBABC) algorithm, NBABC with Local Search (NBABC-LS), and NBABC with Genetic Crossover (NBABC-GC), for solving the Wind-Thermal Unit Commitment (WTUC) problem. The uncertain nature of wind power is incorporated using the Weibull probability density function, which is used to calculate the overestimation and underestimation costs associated with wind power fluctuation. The NBABC algorithm utilizes a mechanism based on the dissimilarity measure between binary strings for generating the binary solutions in the WTUC problem. In the NBABC algorithm, an intelligent scout bee phase is proposed that replaces the abandoned solution with the global best solution. The local search operator exploits the neighboring region of the current solutions, whereas the integration of genetic crossover with the NBABC algorithm increases the diversity in the search space and thus avoids the problem of local trapping encountered with the NBABC algorithm. These models are then used to decide the on/off status of the units, whereas the lambda iteration method is used to dispatch the hourly load demand among the committed units. The effectiveness of the proposed models is validated on an IEEE 10-unit thermal system combined with a wind farm over a planning period of 24 hours.
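
The Weibull-based cost term can be sketched with a small Monte Carlo estimate, assuming an illustrative power curve, penalty rates, and distribution parameters rather than the paper's test-system data: sample wind speeds, convert them to available power, and price the expected gap between scheduled and available output.

```python
# Expected over/underestimation cost of wind under a Weibull speed model.
import numpy as np

rng = np.random.default_rng(7)
k, lam = 2.0, 9.0                         # Weibull shape / scale (m/s), assumed
v_in, v_rated, v_out, P_rated = 3.0, 13.0, 25.0, 100.0   # MW farm, assumed

def wind_power(v):
    """Piecewise-linear power curve of the farm."""
    return np.where(v < v_in, 0.0,
           np.where(v < v_rated, P_rated * (v - v_in) / (v_rated - v_in),
           np.where(v < v_out, P_rated, 0.0)))

v = lam * rng.weibull(k, size=100_000)    # Weibull-distributed wind speeds
available = wind_power(v)

scheduled = 60.0                          # MW committed in the UC solution
c_under, c_over = 5.0, 8.0                # penalty rates ($/MWh), assumed
cost = (c_under * np.maximum(available - scheduled, 0).mean()
        + c_over * np.maximum(scheduled - available, 0).mean())
print(f"expected imbalance cost at {scheduled} MW: ${cost:.1f}/h")
```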

Keywords: artificial bee colony algorithm, economic dispatch, unit commitment, wind power

Procedia PDF Downloads 357