Search results for: Desirability Function Approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6708

3288 Automatic Distance Compensation for Robust Voice-based Human-Computer Interaction

Authors: Randy Gomez, Keisuke Nakamura, Kazuhiro Nakadai

Abstract:

Distant-talking voice-based HCI systems suffer from performance degradation due to the mismatch between the acoustic speech (runtime) and the acoustic model (training). The mismatch is caused by the change in the power of the speech signal as observed at the microphones. This change is greatly influenced by the change in distance, affecting speech dynamics inside the room before reaching the microphones. Moreover, as the speech signal is reflected, its acoustical characteristics are also altered by the room properties. In general, power mismatch due to distance is a complex problem. This paper presents a novel approach to dealing with distance-induced mismatch by intelligently sensing instantaneous voice power variation and compensating the model parameters. First, the distant-talking speech signal is processed through microphone array processing, and the corresponding distance information is extracted. Distance-sensitive Gaussian Mixture Models (GMMs), pre-trained to capture both speech power and room properties, are used to predict the optimal distance of the speech source. Consequently, pre-computed statistical priors corresponding to the optimal distance are selected to correct the statistics of the generic model, which was frozen during training. Thus, model combinatorics are post-conditioned to match the power of the instantaneous speech acoustics at runtime. This results in an improved likelihood of predicting the correct speech command at farther distances. We experiment using real data recorded inside two rooms. Experimental evaluation shows that voice recognition performance using our method is more robust to the change in distance compared to the conventional approach. In our experiment, under the most acoustically challenging environment (i.e., Room 2: 2.5 meters), our method achieved a 24.2% improvement in recognition performance against the best-performing conventional method.
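
As a rough illustration of the distance-selection step only (a sketch under assumptions, not the authors' implementation): distance-sensitive GMMs can be trained on frame-level log-power features recorded at known distances, and at runtime the distance whose GMM best explains the observed frames is selected, after which the pre-computed priors for that distance would be applied to the acoustic model. Feature values, distances and model sizes below are placeholders.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: frame-level log-power features recorded at known
# distances (placeholders; the paper's features also capture room properties).
rng = np.random.default_rng(0)
train = {0.5: rng.normal(-2.0, 0.5, (500, 1)),
         1.5: rng.normal(-4.0, 0.6, (500, 1)),
         2.5: rng.normal(-6.0, 0.7, (500, 1))}

# One distance-sensitive GMM per training distance.
gmms = {d: GaussianMixture(n_components=4, random_state=0).fit(feats)
        for d, feats in train.items()}

def select_distance(runtime_frames):
    """Pick the distance whose GMM gives the highest average log-likelihood."""
    return max(gmms, key=lambda d: gmms[d].score(runtime_frames))

# Runtime: the observed frames resemble the 2.5 m condition, so that model is chosen
# and its pre-computed statistical priors would be used to correct the acoustic model.
observed = rng.normal(-5.8, 0.7, (200, 1))
print("estimated source distance:", select_distance(observed), "m")
```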

Keywords: Human Machine Interaction, Human Computer Interaction, Voice Recognition, Acoustic Model Compensation, Acoustic Speech Enhancement.

3287 Purity Monitor Studies in Medium Liquid Argon TPC

Authors: I. Badhrees

Abstract:

This paper describes some of the results obtained through a course of study in the field of particle physics. The study consists of two parts: one concerns the measurement of the cross section of the decay of the Z particle into two electrons, and the other deals with the measurement of the cross section of the multi-photon absorption process using a laser beam in a Liquid Argon Time Projection Chamber.

The first part of the paper presents results based on the analysis of a data sample containing 8120 ee candidates, used to reconstruct the mass of the Z particle for each event, where each event has an ee pair with pT(e) > 20 GeV and η(e) < 2.5. Monte Carlo templates of the reconstructed Z particle were produced as a function of the Z mass scale. The distribution of the reconstructed Z mass in the data was compared to the Monte Carlo templates, and the total cross section was calculated to be 1432 pb.

The second part concerns the Liquid Argon Time Projection Chamber (LAr TPC) and presents the results of the interaction of a UV laser (Nd:YAG with λ = 266 nm) with LAr, through the study of the multi-photon ionization process, as part of the R&D at Bern University. The main result of this study was the cross section of the multi-photon ionization process of LAr, σe = (1.24 ± 0.10stat ± 0.30sys) × 10^-56 cm^4.

Keywords: ATLAS, CERN, KACST, LArTPC, Particle Physics.

3286 A Survey of Various Algorithms for VLSI Physical Design

Authors: Rajine Swetha R, B. Shekar Babu, Sumithra Devi K.A

Abstract:

Electronic systems are at the core of everyday life. They form an integral part of financial networks, mass transit, telephone systems, power plants and personal computers. Electronic systems are increasingly based on complex VLSI (Very Large Scale Integration) integrated circuits. Electronic design automation is concerned with the design and production of VLSI systems. The next important step in creating a VLSI circuit is physical design. The input to physical design is a logical representation of the system under design. The output of this step is the layout of a physical package that optimally or near-optimally realizes the logical representation. Physical design problems are combinatorial in nature and of large problem sizes. Darwin observed that, as variations are introduced into a population with each new generation, the less-fit individuals tend to die out in the competition for basic necessities. This survival-of-the-fittest principle leads to evolution in species. The objective of Genetic Algorithms (GAs) is to find an optimal solution to a problem. Since GAs are heuristic procedures that can function as optimizers, they are not guaranteed to find the optimum, but are able to find acceptable solutions for a wide range of problems. This survey paper presents a study on efficient algorithms for VLSI physical design and observes the common traits of the superior contributions.
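
To make the GA loop discussed above concrete, here is a minimal sketch for a toy two-way netlist partitioning problem, a classic VLSI physical design task; it only illustrates the selection, crossover and mutation steps and is not taken from any of the surveyed algorithms. The netlist, population size and rates are arbitrary placeholders.

```python
import random
random.seed(1)

# Toy netlist: 12 cells, nets listed as sets of cell indices (placeholders).
NUM_CELLS = 12
nets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5, 6}, {6, 7, 8}, {8, 9}, {9, 10, 11}, {1, 11}]

def cut_size(chrom):
    """Number of nets spanning both partitions (chrom[i] = 0 or 1)."""
    return sum(1 for net in nets if len({chrom[c] for c in net}) > 1)

def fitness(chrom):
    # Penalize unbalanced partitions so the GA favors balanced, low-cut solutions.
    imbalance = abs(sum(chrom) - NUM_CELLS // 2)
    return -(cut_size(chrom) + 2 * imbalance)

pop = [[random.randint(0, 1) for _ in range(NUM_CELLS)] for _ in range(30)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                  # truncation selection (elitist)
    children = []
    while len(children) < 20:
        p1, p2 = random.sample(parents, 2)
        cut = random.randrange(1, NUM_CELLS)            # one-point crossover
        child = p1[:cut] + p2[cut:]
        if random.random() < 0.2:                       # mutation: flip one gene
            i = random.randrange(NUM_CELLS)
            child[i] ^= 1
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best partition:", best, "cut size:", cut_size(best))
```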

Keywords: Genetic Algorithms, Physical Design, VLSI.

3285 Integration of Image and Patient Data, Software and International Coding Systems for Use in a Mammography Research Project

Authors: V. Balanica, W. I. D. Rae, M. Caramihai, S. Acho, C. P. Herbst

Abstract:

Mammographic image and data analysis to facilitate modelling or computer-aided diagnostic (CAD) software development is best done using a common database that can handle various mammographic image file formats and relate these to other patient information. This would optimize the use of the data, as both primary reporting and enhanced information extraction of research data could be performed from a single dataset. One desired improvement is the integration of DICOM file header information into the database, as an efficient and reliable source of supplementary patient information intrinsically available in the images. The purpose of this paper was to design a suitable database to link and integrate different types of image files and gather common information that can be further used for research purposes. An interface was developed for accessing, adding, updating, modifying and extracting data from the common database, enhancing the future application of the data in CAD processing. Future developments envisaged include the creation of an advanced search function to select image files based on descriptor combinations. Results can be further used for specific CAD processing and other research. A user-friendly configuration utility for importing the required fields from the DICOM files still needs to be designed.
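
A minimal sketch of the envisaged DICOM header import, assuming the pydicom library and a local SQLite table; the chosen header fields and table schema are illustrative assumptions, not the project's actual design.

```python
import sqlite3
import pydicom  # assumed available for reading DICOM headers

conn = sqlite3.connect("mammo_research.db")
conn.execute("""CREATE TABLE IF NOT EXISTS images (
                   file_path TEXT PRIMARY KEY,
                   patient_id TEXT,
                   study_date TEXT,
                   modality TEXT,
                   laterality TEXT)""")

def import_dicom(path):
    """Read selected header fields from one DICOM file and store them with the file path."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)   # header only, no pixel data
    row = (path,
           str(ds.get("PatientID", "")),
           str(ds.get("StudyDate", "")),
           str(ds.get("Modality", "")),
           str(ds.get("ImageLaterality", "")))
    conn.execute("INSERT OR REPLACE INTO images VALUES (?, ?, ?, ?, ?)", row)
    conn.commit()

# Usage (hypothetical path): import_dicom("case001/mammogram.dcm")
```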

Keywords: Database Integration, Mammogram Classification, Tumour Classification, Computer Aided Diagnosis.

3284 Hepatoprotective Effect of Oleuropein against Cisplatin-Induced Liver Damage in Rat

Authors: Salim Cerig, Fatime Geyikoglu, Murat Bakir, Suat Colak, Merve Sonmez, Kubra Koc

Abstract:

Cisplatin (CIS) is one of the most effective anticancer drugs, but it is also toxic to cells through the activation of oxidative stress. Oleuropein (OLE) plays a key role against oxidative stress in mammalian cells, but the role of this antioxidant in the toxicity of CIS remains unknown. The aim of the present study was to investigate the efficacy of OLE against CIS-induced liver damage in male rats. With this aim, male Sprague Dawley rats were randomly assigned to one of eight groups: a control group; the group treated with 7 mg/kg/day CIS; the groups treated with 50, 100 and 200 mg/kg/day OLE (i.p.); and the groups treated with OLE for three days starting 24 h after CIS injection. After 4 days of injections, serum was collected to assess the blood AST, ALT and LDH values. The liver tissues were removed for histological, biochemical (TAC, TOS and MDA) and genotoxic evaluations. In the CIS-treated group, the whole liver tissue showed significant histological changes. CIS also significantly increased both the incidence of oxidative stress and the induction of 8-hydroxy-deoxyguanosine (8-OH-dG). Moreover, the rats receiving CIS had abnormal results on liver function tests. However, these parameters returned to the normal range after administration of OLE for 3 days. Finally, OLE demonstrated high protective potential and was effective in attenuating CIS-induced liver injury. In this trial, the 200 mg/kg dose of OLE appeared to induce the optimal protective response.

Keywords: Antioxidant response, cisplatin, histology, liver, oleuropein, 8-OHdG.

3283 A Nonlinear Observer of an Electrical Transformer: A Bond Graph Approach

Authors: Gilberto Gonzalez-A, Israel Nuñez

Abstract:

A bond graph model of an electrical transformer including nonlinear saturation is presented. A nonlinear observer for the transformer, based on the multivariable circle criterion in the physical domain, is proposed. In order to show the saturation and hysteresis effects on the electrical transformer, simulation results are obtained. Finally, the paper shows that convergence of the estimates to the true states is achieved.

Keywords: Bond graph, nonlinear observer, electrical transformer, nonlinear saturation.

3282 Variable Regularization Parameter Normalized Least Mean Square Adaptive Filter

Authors: Young-Seok Choi

Abstract:

We present a normalized LMS (NLMS) algorithm with robust regularization. Unlike conventional NLMS with a fixed regularization parameter, the proposed approach dynamically updates the regularization parameter. By exploiting a gradient descent direction, we derive a computationally efficient and robust update scheme for the regularization parameter. In simulations, we demonstrate that the proposed algorithm outperforms conventional NLMS algorithms in terms of convergence rate and misadjustment error.
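
For context, a minimal baseline sketch of NLMS system identification with a fixed regularization parameter eps, which is the quantity the proposed method updates dynamically; the gradient-based update of eps itself is not detailed in the abstract, so only the fixed-eps baseline is shown. The toy system and step size are placeholders.

```python
import numpy as np

def nlms_identify(x, d, num_taps=8, mu=0.5, eps=1e-2):
    """Baseline NLMS system identification with a fixed regularization term eps."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        u = x[n - num_taps + 1:n + 1][::-1]   # current regressor [x[n], ..., x[n-M+1]]
        e = d[n] - w @ u                      # a priori estimation error
        w += mu * e * u / (eps + u @ u)       # normalized LMS update
    return w

# Toy usage: identify an unknown 8-tap FIR system from noisy observations.
rng = np.random.default_rng(0)
h_true = rng.standard_normal(8)
x = rng.standard_normal(5000)
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat = nlms_identify(x, d)
print("max tap error:", np.max(np.abs(w_hat - h_true)))
```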

Keywords: Regularization, normalized LMS, system identification, robustness.

3281 Evolutionary Techniques for Model Order Reduction of Large Scale Linear Systems

Authors: S. Panda, J. S. Yadav, N. P. Patidar, C. Ardil

Abstract:

Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among various modern heuristic optimization techniques. The GA has been popular in academia and industry mainly because of its intuitiveness, ease of implementation, and ability to effectively solve highly non-linear, mixed-integer optimization problems that are typical of complex engineering systems. The PSO technique is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations. In this paper, both PSO and GA optimization are employed for finding stable reduced-order models of single-input single-output large-scale linear systems. Both techniques guarantee stability of the reduced-order model if the original high-order model is stable. The PSO method is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model pertaining to a unit step input. Both methods are illustrated through a numerical example from the literature, and the results are compared with a recently published conventional model reduction technique.
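
A minimal sketch of the ISE-based reduction idea: fit a stable second-order model to the step response of a higher-order plant by minimizing the ISE with a heuristic optimizer. SciPy's differential evolution stands in here for the PSO/GA optimizers used in the paper, and the fourth-order plant is illustrative, not the paper's benchmark example.

```python
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

# Illustrative stable 4th-order plant: (s + 5) / ((s+1)(s+2)(s+3)(s+4)).
G = signal.TransferFunction([1, 5], [1, 10, 35, 50, 24])
t = np.linspace(0.0, 10.0, 1000)
_, y_full = signal.step(G, T=t)

def ise(params):
    """ISE between the full and reduced step responses for a unit step input."""
    b0, a1, a0 = params
    # Reduced model b0 / (s^2 + a1 s + a0); positive a1, a0 keep it stable.
    _, y_red = signal.step(signal.TransferFunction([b0], [1, a1, a0]), T=t)
    return np.trapz((y_full - y_red) ** 2, t)

bounds = [(0.01, 50), (0.01, 50), (0.01, 50)]       # search space (placeholder limits)
res = differential_evolution(ise, bounds, seed=1, maxiter=100, tol=1e-8)
b0, a1, a0 = res.x
print("reduced model: %.3f / (s^2 + %.3f s + %.3f), ISE = %.2e" % (b0, a1, a0, res.fun))
```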

Keywords: Genetic Algorithm, Particle Swarm Optimization, Order Reduction, Stability, Transfer Function, Integral Squared Error.

3280 Comparison of Particle Swarm Optimization and Genetic Algorithm for TCSC-based Controller Design

Authors: Sidhartha Panda, N. P. Padhy

Abstract:

Recently, genetic algorithm (GA) and particle swarm optimization (PSO) techniques have attracted considerable attention among various modern heuristic optimization techniques. Since the two approaches are supposed to find a solution to a given objective function but employ different strategies and computational effort, it is appropriate to compare their performance. This paper presents the application and performance comparison of PSO and GA optimization techniques for Thyristor Controlled Series Compensator (TCSC)-based controller design. The design objective is to enhance power system stability. The design problem of the FACTS-based controller is formulated as an optimization problem, and both the PSO and GA optimization techniques are employed to search for optimal controller parameters. The performance of both optimization techniques in terms of computational time and convergence rate is compared. Further, the optimized controllers are tested on a weakly connected power system subjected to different disturbances, and their performance is compared with the conventional power system stabilizer (CPSS). The eigenvalue analysis and non-linear simulation results are presented and compared to show the effectiveness of both techniques in designing a TCSC-based controller to enhance power system stability.

Keywords: Thyristor Controlled Series Compensator, genetic algorithm, particle swarm optimization, Phillips-Heffron model, power system stability.

3279 Improving Subjective Bias Detection Using Bidirectional Encoder Representations from Transformers and Bidirectional Long Short-Term Memory

Authors: Ebipatei Victoria Tunyan, T. A. Cao, Cheol Young Ock

Abstract:

Detecting subjectively biased statements is a vital task. This is because this kind of bias, when present in text or other forms of information dissemination media such as news, social media, scientific texts, and encyclopedias, can weaken trust in the information and stir conflicts amongst consumers. Subjective bias detection is also critical for many Natural Language Processing (NLP) tasks like sentiment analysis, opinion identification, and bias neutralization. Having a system that can adequately detect subjectivity in text will boost research in the above-mentioned areas significantly. It can also come in handy for platforms like Wikipedia, where the use of neutral language is important. The goal of this work is to identify subjectively biased language in text at the sentence level. With machine learning, we can solve complex AI problems, making it a good fit for the problem of subjective bias detection. A key step in this approach is to train a classifier based on BERT (Bidirectional Encoder Representations from Transformers) as an upstream model. BERT by itself can be used as a classifier; however, in this study, we use BERT as a data preprocessor as well as an embedding generator for a Bi-LSTM (Bidirectional Long Short-Term Memory) network incorporating an attention mechanism. This approach produces a deeper and better classifier. We evaluate the effectiveness of our model using the Wiki Neutrality Corpus (WNC), a benchmark dataset compiled from Wikipedia edits that removed various biased instances from sentences, and we compare our model to existing approaches on it. Experimental analysis indicates improved performance, as our model achieved state-of-the-art accuracy in detecting subjective bias. This study focuses on the English language, but the model can be fine-tuned to accommodate other languages.
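
A rough sketch of the described architecture (frozen BERT as an embedding generator feeding a Bi-LSTM with additive attention and a binary output); it is not the authors' implementation, and the hidden size, pooling and hyperparameters are assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertBiLstmAttention(nn.Module):
    """Sketch: frozen BERT embeddings -> BiLSTM -> additive attention -> binary classifier."""
    def __init__(self, bert_name="bert-base-uncased", hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        for p in self.bert.parameters():          # use BERT purely as an embedding generator
            p.requires_grad = False
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.out = nn.Linear(2 * hidden, 2)        # subjective vs. neutral

    def forward(self, input_ids, attention_mask):
        emb = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        h, _ = self.lstm(emb)                               # (B, T, 2*hidden)
        scores = self.attn(h).squeeze(-1)                   # (B, T)
        scores = scores.masked_fill(attention_mask == 0, -1e9)
        alpha = torch.softmax(scores, dim=-1).unsqueeze(-1)
        context = (alpha * h).sum(dim=1)                    # attention-weighted sentence vector
        return self.out(context)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tok(["The senator bravely defended the bill."], return_tensors="pt", padding=True)
logits = BertBiLstmAttention()(batch["input_ids"], batch["attention_mask"])
```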

Keywords: Subjective bias detection, machine learning, BERT–BiLSTM–Attention, text classification, natural language processing.

3278 Exploring the Role of Private Commercial Banks in Increasing Small and Medium Size Enterprises’ Financial Accessibility in Developing Countries: A Study in Bangladesh

Authors: Khondokar Farid Ahmmed, Robin Bown

Abstract:

It is widely recognized that the formal financing of Small and Medium Size Enterprises (SMEs) by Private Commercial Banks (PCBs) is restricted. Due to changing financial market competition, SMEs are now important customers to PCBs in the member countries of the Asian Development Bank (ADB). Various initiatives to enhance the efficiency of risk assessment by PCBs have failed to increase financing accessibility in the traditional financing system, where information asymmetry is a key constraint. In this circumstance, PCBs need to undertake a holistic approach. A holistic approach refers to methods that attempt to fundamentally change established traditions. To undertake such an approach, this study examines the entire established financing culture between PCBs and SMEs through a new lens, beyond tradition, on the basis of two basic questions: "What is the traditional lending culture between PCBs and SMEs?" and "What could be the potential role of PCBs in developing that culture with a focus on SME financing?" This study considered formal SME financing in Bangladesh by focusing on SMEs applying for their first loan. Bangladesh is a member country of the ADB. The data collection method was semi-structured, and we utilized in-depth, face-to-face interviews with branch managers, higher officials and owner-managers of SME customers of PCBs, as well as higher officials of the SME Foundation and the Bangladesh central bank. The discourse analysis method was used for data analysis, within a frame of thematic discussion fully based on participants' views. The research found that branch managers and loan officers have a high level of power in assessment and financing decision-making. There is a changing attitude in the PCB sector towards requiring flexible collateral assets. Branch managers (loan officers) consider the value of the business prospects of owner-managers as complementary to collateral assets. However, the study found that the assessment process for business prospects is entirely unstructured and linked with socio-cultural settings that do not support PCBs' changing manner in terms of collateral requirements. The study redefined and classified collateral assets to include all financing constructs in a structure. The degree of value of the collateral assets determines the degree of business prospects. This study suggests applying an outside-classroom learning paradigm, such as a "knowledge tour", to enhance the value of these kinds of collateral assets. This is the scope for PCBs to increase SMEs' financing eligibility on a win-win basis. The findings and propositions could be effective in other ADB member countries and for audiences in the field.

Keywords: CCA, financing, information asymmetry, PCA, PCB.

3277 Comparison of Router Intelligent and Cooperative Host Intelligent Algorithms in a Continuous Model of Fixed Telecommunication Networks

Authors: Dávid Csercsik, Sándor Imre

Abstract:

The performance of state-of-the-art worldwide telecommunication networks strongly depends on the efficiency of the applied routing mechanism. Game theoretical approaches to this problem offer new solutions. In this paper, a new continuous network routing model is defined to describe data transfer in fixed telecommunication networks with multiple hosts. The nodes of the network correspond to routers whose latency is assumed to be traffic dependent. We propose that the whole traffic of the network can be decomposed into a finite number of tasks, which belong to various hosts. To describe the different latency sensitivities, utility functions are defined for each task. The model is used to compare router-intelligent and host-intelligent types of routing methods, corresponding to various data transfer protocols. We analyze host-intelligent routing as a transferable utility cooperative game with externalities. The main aim of the paper is to provide a framework in which the efficiency of various routing algorithms can be compared and the transferable utility game arising in the cooperative case can be analyzed.

Keywords: Routing, Telecommunication networks, Performance evaluation, Cooperative game theory, Partition function form games

3276 Assessing the Impact of Quinoa Cultivation Adopted to Produce a Secure Food Crop and Poverty Reduction by Farmers in Rural Pakistan

Authors: Ejaz Ashraf, Raheel Babar, Muhammad Yaseen, Hafiz Khurram Shurjeel, Nosheen Fatima

Abstract:

The main purpose of this study was to assess the adoption level of farmers for quinoa cultivation after they had been taught through a training and visit extension approach. At this point in the 21st century, population structure, climate change, food requirements and eating habits of people are changing rapidly. In this scenario, farmers must play a key role in sustainable crop development and production through the adoption of new crops that may also help overcome the issue of food insecurity as well as reduce poverty in rural areas. Quinoa cultivation in Pakistan is at an early stage, and there is a need to raise awareness among farmers to grow the quinoa crop. In the middle of 2015, a training and visit extension approach was used to raise awareness and convince farmers to grow quinoa in the area. During the training and visit extension program, 80 farmers were randomly selected for training in quinoa cultivation. Later on, these farmers trained 60 more farmers living in their neighborhoods. After six months, a survey was conducted with all 140 farmers to assess the impact of the training and visit program on the adoption level of respondents for the quinoa crop. The survey instrument was developed with the help of a literature review and other experts on the crop. Validity and reliability of the instrument were checked before full data collection. The data were analyzed using SPSS. Multiple regression analysis was used for interpretation of the survey results, which indicated that factors like information/training and changes in agronomic and plant protection practices play a key role in the adoption of quinoa cultivation by respondents. In addition, the model explains more than 50% of the variation in the adoption level of respondents. It is concluded that farmers need timely information and improved knowledge of agronomic and plant protection practices to adopt cultivation of the quinoa crop in the area.

Keywords: Farmers, quinoa, adoption, contact, training and visit.

3275 Investigation of Artificial Neural Networks Performance to Predict Net Heating Value of Crude Oil by Its Properties

Authors: Mousavian, M. Moghimi Mofrad, M. H. Vakili, D. Ashouri, R. Alizadeh

Abstract:

The aim of this research is to use artificial neural network computing technology for estimating the net heating value (NHV) of crude oil from its properties. The approach is based on training a neural network simulator that uses back-propagation as the learning algorithm on a predefined range of analytically generated well test responses. A network with 8 neurons in one hidden layer was selected, and its predictions are in good agreement with experimental data.
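
A minimal sketch of the described topology (one hidden layer of 8 neurons trained by back-propagation) using scikit-learn; the input properties and the synthetic data are placeholders, since the paper's actual feature set and measurements are not given in the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder features: e.g. specific gravity, sulfur content (wt%), water content (wt%).
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(0.80, 0.95, 300),
                     rng.uniform(0.1, 3.0, 300),
                     rng.uniform(0.0, 1.0, 300)])
# Synthetic NHV target (MJ/kg), loosely decreasing with gravity, sulfur and water content.
y = 55 - 12 * X[:, 0] - 0.3 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 300)

# One hidden layer with 8 neurons, trained by back-propagation (as in the abstract).
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print("predicted NHV:", model.predict([[0.87, 1.2, 0.3]])[0], "MJ/kg")
```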

Keywords: Neural Network, Net Heating Value, Crude Oil, Experimental, Modeling.

3274 Neuropalliative Care in Patients with Progressive Neurological Disease in Czech Republic: Study Protocol

Authors: R. Bužgová, R. Kozáková, M. Škutová, M. Bar, P. Ressner, P. Bártová

Abstract:

Introduction: Currently, there has been increasing concern about the provision of palliative care to non-oncological patients in both the professional literature and clinical practice. However, there is not much scientific information on how to provide neurological and palliative care together. The main objective of the project is to create and verify a concept of neuro-palliative and rehabilitative care for patients with selected neurological diseases in an advanced stage of the disease, and also to evaluate the bio-psychosocial and spiritual needs of these patients and their caregivers related to quality of life, using newly created standardized tools. Methodology: Triangulation of research methods (qualitative and quantitative) will be used. A concept of care and assessment tools will be developed by analyzing interviews and focus groups. Qualitative data will be analyzed using grounded theory. The concept of care will be tested in the context of an intervention study. Using quantitative analysis, we will assess the effect of the intervention on the saturation of needs, quality of life, and quality of care. The research sample will be made up of patients with selected neurological diseases (Parkinson's syndrome, motor neuron disease, multiple sclerosis, Huntington's disease), together with patients' family members. Based on the results, educational materials and a certified course for health care professionals will be created. Findings: Based on qualitative data analysis, we will propose a concept of an integrated care model combining neurological, rehabilitative and specialist palliative care for patients with selected neurological diseases in different settings of care and services. Patients' needs related to quality of life will be described by newly created and validated measuring tools before the start of the intervention (application of the neuro-palliative and palliative approach) and then at time intervals. Conclusion: Based on the results, educational materials and a certified course for doctors and health care professionals will be created.

Keywords: Multidisciplinary approach, neuropalliative care, research, quality of life.

3273 Different Teaching Methods for Program Design and Algorithmic Language

Authors: Yue Zhao, Jianping Li

Abstract:

This paper covers the present situation and problems of experimental teaching in the mathematics specialty in recent years, and puts forward and demonstrates experimental teaching methods for differentiated education. From the aspects of content and experimental teaching approach, it uses as an example the course “Experiment for Program Designing & Algorithmic Language” and discusses teaching practice and laboratory course work. In addition, a series of successful methods and measures for experimental teaching are introduced.

Keywords: Differentiated teaching, experimental teaching, program design and algorithmic language, teaching method.

3272 An Archetype to Sustain Knowledge Management Systems through Intranet

Authors: B. T. Sayed, Nafaâ Jabeur, M. Aref

Abstract:

The creation and maintenance of knowledge management systems has been recognized as an important research area. Consequently, a lack of accurate results from knowledge management systems limits the organization's ability to apply its knowledge management processes. This leads to a failure to get the right information to the right people at the right time, followed by deficiencies in decision-making processes. An intranet offers a powerful tool for communication and collaboration, presenting data and information, and the means to create and share knowledge, all in one easily accessible place. This paper proposes an archetype describing how a knowledge management system, with the support of intranet capabilities, could greatly increase the accuracy of capturing, storing and retrieving knowledge-based processes, thereby increasing the efficiency of the system. The system requires a critical mass of usage by the users for the intranet to function as a knowledge management system. This prototype would lead to the design of an application that would impose the creation and maintenance of an effective knowledge management system through the intranet. The aim of this paper is to introduce an effective system to handle the capture, storage and distribution of knowledge management in a form that avoids the failures present in most existing systems. The methodology used in the system requires all the employees in the organization to contribute as much as possible to make the system a success. The system is still at an initial stage, and the authors are in the process of practically implementing the ideas described here to produce satisfactory results.

Keywords: Knowledge Management Systems, Intranet, Methodology.

3271 Calibration Method for an Augmented Reality System

Authors: S. Malek, N. Zenati-Henda, M. Belhocine, S. Benbelkacem

Abstract:

In geometric camera calibration, the objective is to determine a set of camera parameters that describe the mapping between 3D reference coordinates and 2D image coordinates. In this paper, a technique for calibration and tracking based on both a least squares method and a correlation technique is presented, developed as part of an augmented reality system. This approach is fast, and it can be used in a real-time system.
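
A minimal sketch of the least-squares part only: a DLT-style estimate of the pinhole projection matrix from known 3D-2D correspondences, verified on synthetic data. The intrinsics, pose and points are placeholders, and the correlation-based tracking described in the paper is not shown.

```python
import numpy as np

def estimate_projection(X3d, x2d):
    """Homogeneous least squares (DLT): find the 3x4 P minimizing ||A p|| with ||p|| = 1."""
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows)
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 4)            # singular vector of the smallest singular value

# Synthetic ground-truth camera (placeholder intrinsics/pose) and 3D reference points.
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=float)
Rt = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
P_true = K @ Rt
rng = np.random.default_rng(0)
X3d = rng.uniform(-1, 1, (20, 3))
proj = (P_true @ np.c_[X3d, np.ones(20)].T).T
x2d = proj[:, :2] / proj[:, 2:3]

P_est = estimate_projection(X3d, x2d)
P_est /= P_est[-1, -1]                     # fix the arbitrary scale for comparison
print(np.allclose(P_est, P_true / P_true[-1, -1], atol=1e-6))
```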

Keywords: Camera calibration, pinhole model, least squares method, augmented reality, strong calibration.

3270 Investigation of Bubble Growth during Nucleate Boiling Using CFD

Authors: K. Jagannath, Akhilesh Kotian, S. S. Sharma, Achutha Kini U., P. R. Prabhu

Abstract:

The boiling process is characterized by the rapid formation of vapour bubbles at the solid–liquid interface (nucleate boiling) with pre-existing vapour or gas pockets. Computational fluid dynamics (CFD) is an important tool to study bubble dynamics. In the present study, a CFD simulation has been carried out to determine the bubble detachment diameter and its terminal velocity. The volume of fluid method is used to model the bubble and its surroundings by solving a single set of momentum equations and tracking the volume fraction of each of the fluids throughout the domain. In the simulation, the bubble is generated by allowing water vapour to enter a cylinder filled with liquid water through an inlet at the bottom. After the bubble is fully formed, it detaches from the surface and rises, accelerating due to the net balance between the buoyancy force and viscous drag. Finally, when these forces exactly balance each other, it attains a constant terminal velocity. The bubble detachment diameter and the terminal velocity of the bubble are captured by the monitor function provided in FLUENT. The detachment diameter and terminal velocity obtained are compared with established results based on the shape of the bubble, and good agreement is obtained between the simulation results and the established correlations.
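
As a rough analytical cross-check of the kind such comparisons rely on (not the VOF/CFD model itself), the terminal velocity follows from balancing buoyancy against drag for a spherical bubble; the fluid properties, the assumed detachment diameter and the Schiller-Naumann drag correlation below are standard textbook placeholders.

```python
import math

# Saturated water / vapour at 100 C (approximate textbook values).
rho_l, rho_v = 958.0, 0.6      # kg/m^3
mu_l = 2.8e-4                  # Pa.s
g = 9.81
d = 0.5e-3                     # assumed bubble detachment diameter, m

def drag_coeff(re):
    """Schiller-Naumann correlation for a sphere (nominally valid up to Re ~ 1000)."""
    return 24.0 / re * (1.0 + 0.15 * re ** 0.687)

u = 0.1                        # initial guess, m/s
for _ in range(100):           # fixed-point iteration of buoyancy = drag
    re = rho_l * u * d / mu_l
    cd = drag_coeff(re)
    u_new = math.sqrt(4.0 * g * d * (rho_l - rho_v) / (3.0 * cd * rho_l))
    if abs(u_new - u) < 1e-8:
        break
    u = u_new

print(f"terminal velocity ~ {u:.3f} m/s at Re ~ {rho_l * u * d / mu_l:.0f}")
```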

Keywords: Bubble growth, computational fluid dynamics, detachment diameter, terminal velocity.

3269 An Enterprise Intelligent System Development and Solution Framework

Authors: Rajendra M. Sonar

Abstract:

The recent trend has been to use a hybrid approach rather than a single intelligent technique to solve problems. In this paper, we describe and discuss a framework to develop enterprise solutions that are backed by intelligent techniques. The framework not only uses intelligent techniques themselves, but is a complete environment that includes various interfaces and components to develop intelligent solutions. The framework is completely Web-based and uses XML extensively. It can work as a shared platform accessed by multiple developers, users and decision makers.

Keywords: Intelligent System Development Framework, Web-based Intelligent Systems, Retail Banking.

3268 The Particle Swarm Optimization Against the Runge’s Phenomenon: Application to the Generalized Integral Quadrature Method

Authors: A. Zerarka, A. Soukeur, N. Khelil

Abstract:

In the present work, we introduce particle swarm optimization (PSO for short) to avoid Runge's phenomenon, which occurs in many numerical problems. This new approach is tested on some numerical examples, including the generalized integral quadrature method for solving Volterra integral equations.
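
For context only, the following snippet illustrates the phenomenon the paper targets rather than the authors' PSO-based remedy: interpolating the classical Runge function on equispaced nodes makes the maximum error grow with the degree, whereas Chebyshev nodes keep it under control.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)        # Runge's classical example
x_fine = np.linspace(-1, 1, 2001)

for n in (5, 10, 15, 20):
    x_eq = np.linspace(-1, 1, n + 1)                                       # equispaced nodes
    x_ch = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))      # Chebyshev nodes
    err_eq = np.max(np.abs(BarycentricInterpolator(x_eq, f(x_eq))(x_fine) - f(x_fine)))
    err_ch = np.max(np.abs(BarycentricInterpolator(x_ch, f(x_ch))(x_fine) - f(x_fine)))
    print(f"degree {n:2d}: max error on equispaced nodes = {err_eq:9.3f}, Chebyshev = {err_ch:8.5f}")
```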

Keywords: Integral equation, particle swarm optimization, Runge's phenomenon.

3267 Study on Planning of Smart GRID using Landscape Ecology

Authors: Sunglim Lee, Susumu Fujii, Koji Okamura

Abstract:

The smart grid is a new approach to the electric power grid that uses information and communications technology for control. It provides real-time control of the electric power grid, controlling the direction of power flow or the time of the flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid. The number of control devices should be determined in relation to the area one control device covers and the cost associated with the control devices. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way to the power plant, which may cause power loss. To reduce this power loss, the surplus power may be sent to a control device and from there to where the power is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to determine the optimal spacing between the control devices, where the power-sharing area (the area covered by one control device) is kept small to avoid power loss, while at the same time being big enough that no surplus power is wasted. To achieve this goal, a simulation using the landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of the amount of power production to the amount of power consumption in the area. The amount of power consumption is estimated according to the characteristics of the buildings in the area. The power production is calculated from the total area of the roofs shown in the aerial photograph, assuming that solar panels are installed on all the roofs. The mosaic map is colored in three colors, representing producer, consumer, and neither. We started with a mosaic map with a 100 m grid size and grew the grid size until there were no deficit (red) grids. One control device is installed on each grid, so that the grid is the area which the control device covers. As the result of this simulation, we obtained 350 m as the optimal spacing between the control devices that makes effective use of the surplus power for the sample area.
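
A minimal sketch of the grid-growing step under stated assumptions: synthetic 100 m production and consumption rasters stand in for the mosaic derived from the aerial photograph, and the block size is grown until no aggregated block is in deficit.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 100 m rasters (kW) for a 6.4 km x 6.4 km sample area; placeholders for the
# production/consumption mosaic derived from the aerial photograph in the paper.
prod = rng.gamma(2.0, 3.5, size=(64, 64))    # rooftop PV production per 100 m cell
cons = rng.gamma(2.0, 3.0, size=(64, 64))    # building consumption per 100 m cell

def coarsen(a, f):
    """Sum 100 m cells into f x f blocks (block edge = f * 100 m)."""
    n = (a.shape[0] // f) * f
    return a[:n, :n].reshape(n // f, f, n // f, f).sum(axis=(1, 3))

f = 1
while (coarsen(prod, f) < coarsen(cons, f)).any() and f < 64:
    f += 1     # grow the power-sharing area until no block is in deficit ("red")

print("smallest block size with no deficit blocks:", f * 100, "m")
```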

Keywords: Landscape ecology, IT, smart grid, aerial photograph, simulation.

3266 A Comparison of Fuzzy Clustering Algorithms to Cluster Web Messages

Authors: Sara El Manar El Bouanani, Ismail Kassou

Abstract:

Our objective in this paper is to propose an approach capable of clustering web messages. The clustering is carried out by assigning, with a certain probability, texts written by the same web user to the same cluster, based on stylometric features and using fuzzy clustering algorithms. The focus of the present work is on comparing the most popular algorithms in fuzzy clustering theory, namely Fuzzy C-Means, Possibilistic C-Means and Fuzzy Possibilistic C-Means.
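
A minimal plain-NumPy sketch of the Fuzzy C-Means iteration (alternating membership and center updates); the Possibilistic and Fuzzy Possibilistic variants compared in the paper differ in the membership update and are not shown. The "stylometric" feature vectors are synthetic placeholders.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-Means: returns cluster centers and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships of each text sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# Placeholder "stylometric" feature vectors (e.g. average word length, punctuation rate, ...).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, (40, 4)) for loc in (0.0, 1.5, 3.0)])
centers, U = fuzzy_c_means(X, c=3)
print("most likely author cluster of message 0:", U[0].argmax(), "memberships:", U[0].round(2))
```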

Keywords: Authorship detection, fuzzy clustering, profiling, stylometric features.

3265 Practical Design Procedures of 3D Reinforced Concrete Shear Wall-Frame Structure Based on Structural Optimization Method

Authors: H. Nikzad, S. Yoshitomi

Abstract:

This study investigates and develops a structural optimization method, examining the effect of size constraints on the practical solution of a reinforced concrete (RC) building structure with shear walls. The cross-sections of beams and columns and the thickness of the shear wall are considered as design variables. The objective function to be minimized is the total cost of the structure, using a simple and efficient automated MATLAB-based structural optimization methodology. With modification of the mathematical formulations, the result is compared with the optimal solution without size constraints. The most suitable combination of section sizes is selected for the final design application based on linear static analysis. The findings of this study show that defining a higher value for the upper bound of the sectional sizes significantly affects the optimal solution, and that defining size constraints plays a vital role in finding a global and practical solution during the optimization procedure. The results confirm the ability and efficiency of the proposed method in producing optimal solutions for 3D RC shear wall-frame structures.

Keywords: Structural optimization, linear static analysis, ETABS, MATLAB, RC shear wall-frame structures.

3264 Effect of On-Demand Cueing on Freezing of Gait in Parkinson’s Patients

Authors: Rosemarie Velik

Abstract:

Gait disturbance, particularly freezing of gait (FOG), is a phenomenon that is common in Parkinson’s patients and significantly contributes to a loss of function and independence. Walking performance and the number of freezing episodes are known to respond favorably to sensory cues of different modalities. However, a topic that has so far barely been touched is how to resolve freezing episodes via sensory cues once they have appeared. In this study, we analyze the effect of five different sensory cues on the duration of freezing episodes: (1) vibratory alert, (2) auditory alert, (3) vibratory rhythm, (4) auditory rhythm, (5) visual cue in the form of parallel lines projected onto the floor. The motivation for this study is to investigate the possibility of designing a gait-assistive device for Parkinson’s patients. Test subjects were 7 Parkinson’s patients regularly suffering from FOG. The patients had to repeatedly walk a pre-defined course, and cues were always triggered 2 s after freezing onset. The effect was analyzed via experimental measurements and patient interviews. The measurements showed that all 5 sensory cues led to a decrease in the average duration of freezing: baseline (7.9 s), vibratory alert (7.1 s), auditory alert (6.7 s), auditory rhythm (6.4 s), vibratory rhythm (6.3 s), and visual cue (5.3 s). Nevertheless, interestingly, patients subjectively rated the audio alert and vibratory signals as having a significantly better effect in reducing their freezing duration than the visual cue.

Keywords: Auditory cueing, freezing of gait, gait assistance, Parkinson’s disease, vibratory cueing, visual cueing

3263 Jeffrey's Prior for Unknown Sinusoidal Noise Model via Cramer-Rao Lower Bound

Authors: Samuel A. Phillips, Emmanuel A. Ayanlowo, Rasaki O. Olanrewaju, Olayode Fatoki

Abstract:

This paper employs the Jeffrey's prior technique in estimating the periodograms and frequency of a sinusoidal model for unknown noisy time-varying or oscillating events (data) in a Bayesian setting. The non-informative Jeffrey's prior was adopted for the posterior trigonometric function of the sinusoidal model, such that Cramer-Rao Lower Bound (CRLB) inference was used in carving out the minimum variance needed to curb the invariance-structure effect for unknown noisy time-observational and repeated circular patterns. An average monthly oscillating temperature series measured in degrees Celsius (°C) from 1901 to 2014 was subjected to the posterior solution of the unknown noisy events of the sinusoidal model via Markov Chain Monte Carlo (MCMC). It was deduced not only that a two-minute period is required to complete a cycle of changing temperature from one particular degree Celsius to another, but also that the sinusoidal model via the CRLB-Jeffrey's prior for unknown noisy events produced a smaller posterior Maximum A Posteriori (MAP) estimate compared to known noisy events.
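
For reference, the standard textbook forms of the single-sinusoid model, the non-informative Jeffreys-type prior and the classical CRLB on the frequency estimate are written below; the paper's exact parameterization is not given in the abstract, so these are stated as assumptions rather than as the authors' formulation.

```latex
% Single sinusoid in white Gaussian noise, t = 0, 1, ..., N-1
y_t = A\cos(\omega t) + B\sin(\omega t) + \varepsilon_t,
\qquad \varepsilon_t \sim \mathcal{N}(0,\sigma^2)

% Non-informative (Jeffreys-type) prior: flat in the amplitudes, 1/sigma in the noise scale
\pi(A, B, \sigma) \propto \frac{1}{\sigma}

% Classical CRLB on the frequency estimate, with SNR eta = (A^2 + B^2) / (2 sigma^2)
\operatorname{var}(\hat{f}_0) \ \ge\ \frac{12}{(2\pi)^2\,\eta\,N\,(N^2 - 1)},
\qquad \omega = 2\pi f_0
```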

Keywords: Cramer-Rao Lower Bound (CRLB), Jeffrey's prior, Sinusoidal, Maximum A Posteriori (MAP), Markov Chain Monte Carlo (MCMC), Periodograms.

3262 Wet Flue Gas Desulfurization Using a New O-Element Design Which Replaces the Venturi Scrubber

Authors: P. Lestinsky, D. Jecha, V. Brummer, P. Stehlik

Abstract:

Scrubbing by liquid spraying is one of the most effective processes used for the removal of fine particles and soluble gas pollutants (such as SO2, HCl, HF) from flue gas. There are many configurations of scrubbers designed to provide contact between the liquid and the gas stream for effectively capturing particles or soluble gas pollutants, such as spray plates, packed bed towers, jet scrubbers, cyclones, vortex and venturi scrubbers. The primary function of a venturi scrubber is the capture of fine particles as well as HCl, HF or SO2 removal, with the added effect of decreasing the flue gas temperature before input to the absorption column. In this paper, sulfur dioxide (SO2) was captured from flue gas using a new design replacing the venturi scrubber (first stage of wet scrubbing). The flue gas was prepared by the combustion of a carbon disulfide solution in toluene (1:1 vol.) in a flame in the reactor. The flue gas thus prepared, with a temperature of around 150 °C, was processed in the designed laboratory O-element scrubber. Water was used as the absorbent liquid. The efficiency of SO2 removal, the pressure drop and the temperature drop were measured on our experimental device, and the dependence of these variables on the liquid-gas ratio was observed. The average temperature drop was in the range from 150 °C to 40 °C. The pressure drop increased with increasing liquid-gas ratio, but not as much as for common venturi scrubber designs. The efficiency of SO2 removal was up to 70%. The pressure drop of our newly designed wet scrubber is similar to that of commonly used venturi scrubbers; nevertheless, the influence of the amount of liquid on the pressure drop is less significant.

Keywords: Desulphurization, absorption, flue gas, modeling.

3261 Holistic Approach to Teaching Mathematics in Secondary School as a Means of Improving Students’ Comprehension of Study Material

Authors: Natalia Podkhodova, Olga Sheremeteva, Mariia Soldaeva

Abstract:

Creating favourable conditions for students’ comprehension of mathematical content is one of the primary problems in teaching mathematics in secondary school. Comprehension includes the ability to build a working situational model and thus becomes an important means of solving mathematical problems. This paper describes a holistic approach to teaching mathematics designed to address the primary challenges of such teaching, specifically the challenge of students’ comprehension. Essentially, this approach consists of (1) establishing links between the attributes of a notion: the sense, the meaning, and the term; (2) taking into account the components of the student’s subjective experience (value-based emotions, contextual, procedural and communicative) during the educational process; (3) linking together different ways of presenting mathematical information; (4) identifying and leveraging the relationships between real, perceptual and conceptual (scientific) mathematical spaces by applying real-life situational modelling. The article describes approaches to the practical use of these foundational concepts. The primary goal was to identify how the proposed methods and techniques influence understanding of the material used in teaching mathematics. The study included an experiment in which 256 secondary school students took part: 142 in the study group and 114 in the control group. All students in these groups had similar levels of achievement in math and studied math under the same curriculum. In the course of the experiment, comprehension of two topics, “Derivative” and “Trigonometric functions”, was evaluated. Control group participants were taught using traditional methods. Students in the study group were taught using the holistic method: under the teacher’s guidance, they carried out assignments designed to establish linkages between a notion’s characteristics and to convert information from one mode of presentation to another, as well as assignments that required the ability to operate with all modes of presentation. Identification, accounting for and transformation of subjective experience were associated with methods of stimulating the emotional-value component of the studied mathematical content (discussions of lesson titles, assignments aimed at creating study dominants, performing theme-related physical exercise, etc.). The use of techniques that form inter-subject notions based on linkages between real, perceptual and mathematical conceptual spaces proved to be of special interest to the students. Results of the experiment were analysed by presenting students in each of the groups with a final test on each of the studied topics. The test included assignments that required building real situational models. Statistical analysis was used to aggregate the test results. Pearson’s χ² criterion was used to test the statistical significance of the results (pass/fail on the modelling test). A significant difference in results was revealed (p < 0.001), which allowed us to conclude that students in the study group showed better comprehension of mathematical information than those in the control group. The total number of completed assignments of each student was analysed as well, with average results calculated for each group. Statistical significance of the difference in results against the quantitative criterion (number of completed assignments) was determined using Student’s t-test, which showed that students in the study group completed significantly more assignments than those in the control group (p = 0.0001).
The authors thus conclude that the observed increase in the level of comprehension of the study material resulted from the applied methods and techniques.
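
A minimal sketch of the two significance tests described above, with hypothetical counts and scores in place of the study's raw data (only the group sizes, 142 and 114, are taken from the abstract):

```python
import numpy as np
from scipy.stats import chi2_contingency, ttest_ind

# Hypothetical pass/fail counts on the situational-modelling test (rows: study, control).
table = np.array([[110, 32],     # study group:   pass, fail
                  [ 55, 59]])    # control group: pass, fail
chi2, p_chi2, dof, _ = chi2_contingency(table)
print(f"Pearson chi-square = {chi2:.1f}, p = {p_chi2:.4f}")

# Hypothetical per-student counts of completed assignments in each group.
rng = np.random.default_rng(0)
study = rng.normal(8.1, 1.5, 142)
control = rng.normal(7.0, 1.6, 114)
t, p_t = ttest_ind(study, control)
print(f"Student's t = {t:.2f}, p = {p_t:.4f}")
```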

Keywords: Comprehension of mathematical content, holistic approach to teaching mathematics in secondary school, subjective experience, technology of the formation of inter-subject notions.

3260 Numerical Solution of Transient Natural Convection in Vertical Heated Rectangular Channel between Two Vertical Parallel MTR-Type Fuel Plates

Authors: Djalal Hamed

Abstract:

The aim of this paper is to perform, by means of the finite volume method, a numerical solution of the transient natural convection in a narrow rectangular channel between two vertical parallel Material Testing Reactor (MTR)-type fuel plates subjected to a heat flux with a cosine shape, in order to determine the margin of nuclear core power at which the natural convection cooling mode can ensure safe core cooling, where the cladding temperature should not exceed a specific safety limit (90 °C). For this purpose, a computer program is developed to determine the principal parameters related to nuclear core safety, such as the temperature distribution in the fuel plate and in the coolant (light water) as a function of the reactor core power. From the obtained results, we note that the core power should not exceed 400 kW in order to ensure safe passive residual heat removal from the nuclear core by the upward natural convection cooling mode.
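
A greatly simplified steady-state, single-channel energy balance (a sketch under assumed placeholder values for geometry, flow rate and heat transfer coefficient; not the paper's transient finite-volume model) showing how a cosine heat-flux shape translates into coolant and cladding temperature profiles:

```python
import numpy as np

# Placeholder channel data (illustrative, not the MTR core analysed in the paper).
L = 0.6                 # heated length of the fuel plate, m
W = 0.07                # heated width, m
m_dot = 0.02            # natural-convection coolant mass flow rate, kg/s (assumed fixed)
cp = 4180.0             # water specific heat, J/(kg K)
h = 900.0               # convective heat transfer coefficient, W/(m^2 K) (assumed)
T_in = 40.0             # coolant inlet temperature, deg C
q_max = 3.0e4           # peak heat flux of the cosine profile, W/m^2

z = np.linspace(0.0, L, 200)
q = q_max * np.cos(np.pi * (z - L / 2) / L)          # cosine axial heat-flux shape

# Coolant heats up as it integrates the wall heat flux along the channel.
dz = z[1] - z[0]
T_cool = T_in + np.cumsum(q * W * dz) / (m_dot * cp)
# Cladding surface temperature from Newton's law of cooling at each elevation.
T_clad = T_cool + q / h

print(f"coolant outlet temperature: {T_cool[-1]:.1f} C")
print(f"peak cladding temperature:  {T_clad.max():.1f} C (safety limit in the study: 90 C)")
```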

Keywords: Buoyancy force, friction force, friction factor, finite volume method, transient natural convection, thermal hydraulic analysis, vertical heated rectangular channel.

3259 Modelling and Analyzing a Hospital Procedure Using a Petri-Net Approach

Authors: Mourtou Efstratia, Abdel-Badeeh M. Salem, Pavlidis George

Abstract:

Hierarchical high-level Petri nets (HHPNs) with timed extensions are a useful tool to model systems in a variety of application domains, ranging from logistics to complex workflows. This paper addresses an application domain which is receiving more and more attention: the procedure that arranges the final inpatient charge in the payments office, and its management. We shall show that Petri net-based analysis is able to reduce the delays during the procedure, so that inpatient charges can be more reliable and on time.

Keywords: eHealth, Petri-Nets, Hospital Services, Inpatient Charges, Workflow Modeling.
