Search results for: Interval endurance training program
468 A Study for the Effect of Fire Initiated Location on Evacuation Success Rate
Authors: Jin A Ryu, Ga Ye Kim, Hee Sun Kim
Abstract:
As the number of fire accidents gradually rises, many studies have been reported on evacuation. Previous studies have mostly focused on evaluating the safety of evacuation and the risk of fire in particular buildings. However, the effects of various parameters on evacuation have rarely been studied. Therefore, this paper aims at observing evacuation time under the effect of fire initiation location. In this study, evacuation simulations are performed on a 5-floor building located in Seoul, South Korea, using the commercial program Fire Dynamics Simulator with Evacuation (FDS+EVAC). Only the fourth and fifth floors are modeled, with the assumption that fire starts in a room located on the fourth floor. The parameter for the evacuation simulations is the location of fire initiation, used to observe evacuation time and safety. Results show that the closer the fire initiation location is to the exit, the more time is taken to evacuate. The case with the fire initiation location nearest to the exit has the lowest ratio of successful occupants to total occupants. In addition, for safety evaluation, the evacuation time calculated from the computer simulation model is compared with the tolerable evacuation time according to the Japanese code. As a result, all cases are completed within the tolerable evacuation time. This study allows evacuation time to be predicted under various fire conditions and can be used to evaluate the evacuation appropriateness and fire safety of buildings.
Keywords: Evacuation safety, Evacuation simulation, FDS+Evac, Time.
467 Shear Buckling of a Large Pultruded Composite I-Section under Asymmetric Loading
Authors: Jin Y. Park, Jeong Wan Lee
Abstract:
An experimental and analytical study of the shear buckling of a comparatively large polymer composite I-section is presented. It is known that the shear buckling load of a large-span composite beam is difficult to determine experimentally. In order to sensitively detect shear buckling of the tested I-section, twenty strain rosettes and eight displacement sensors were attached to the web and flange surfaces. The tested specimen was a pultruded composite beam made of vinylester resin, E-glass, carbon fibers, and micro-fillers. Various coupon tests were performed before the shear buckling test to obtain fundamental material properties of the I-section. An asymmetric four-point bending loading scheme was utilized for the shear test. The loading scheme resulted in a high shear and almost zero moment condition at the center of the web panel. The shear buckling load was successfully determined after analyzing the test data obtained from the strain rosettes and displacement sensors. An analytical approach was also performed to verify the experimental results and to support the discussed experimental program.
Keywords: Strain sensor, displacement sensor, shear buckling, polymer composite I-section, asymmetric loading.
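As a minimal illustration of how web shear strain can be recovered from strain-rosette readings, the sketch below assumes rectangular (0°/45°/90°) rosettes; the actual rosette layout, channel naming, and data handling used in the study are not specified in the abstract.

```python
def shear_strain_from_rosette(eps_0, eps_45, eps_90):
    """Engineering shear strain gamma_xy from a rectangular (0/45/90 degree) rosette.

    Standard relation: gamma_xy = 2*eps_45 - eps_0 - eps_90.
    """
    return 2.0 * eps_45 - eps_0 - eps_90

# Hypothetical microstrain readings from one web rosette near the panel center.
gamma = shear_strain_from_rosette(eps_0=120e-6, eps_45=480e-6, eps_90=90e-6)
print(f"Shear strain: {gamma:.2e}")
```

Tracking this shear strain against the applied load and looking for a departure from linearity is one common way of picking out a buckling load from rosette data.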
466 Efficient DTW-Based Speech Recognition System for Isolated Words of Arabic Language
Authors: Khalid A. Darabkh, Ala F. Khalifeh, Baraa A. Bathech, Saed W. Sabah
Abstract:
Despite the fact that Arabic is currently one of the most common languages worldwide, there has been relatively little research on Arabic speech recognition compared to other languages such as English and Japanese. Generally, digital speech processing and voice recognition algorithms are of special importance for designing efficient, accurate, and fast automatic speech recognition systems. The speech recognition process carried out in this paper is divided into three stages as follows: firstly, the signal is preprocessed to reduce noise effects and digitized, and the voice activity regions are segmented using a voice activity detection (VAD) algorithm. Secondly, features are extracted from the speech signal using the Mel-frequency cepstral coefficients (MFCC) algorithm. Moreover, delta and acceleration (delta-delta) coefficients have been added to improve the recognition accuracy. Finally, each test word's features are compared to the training database using the dynamic time warping (DTW) algorithm. Using the best settings found for all parameters of the aforementioned techniques, the proposed system achieved a recognition rate of about 98.5%, which outperformed other HMM- and ANN-based approaches available in the literature.
Keywords: Arabic speech recognition, MFCC, DTW, VAD.
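The core matching step can be sketched in a few lines. The code below is a minimal, generic DTW template matcher over MFCC frame sequences, not the authors' implementation; the Euclidean local distance, the absence of path constraints, and the example word names are assumptions.

```python
import numpy as np

def dtw_distance(x, y):
    """Classic dynamic time warping cost between two MFCC sequences.

    x, y: 2-D arrays of shape (frames, coefficients).
    """
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])   # local Euclidean distance
            cost[i, j] = d + min(cost[i - 1, j],       # insertion
                                 cost[i, j - 1],       # deletion
                                 cost[i - 1, j - 1])   # match
    return cost[n, m]

def recognize(test_features, templates):
    """Return the training word whose template is closest to the test utterance."""
    return min(templates, key=lambda word: dtw_distance(test_features, templates[word]))

# Tiny demo with random "MFCC" sequences standing in for real features.
rng = np.random.default_rng(0)
templates = {"wahid": rng.normal(size=(30, 13)), "ithnan": rng.normal(size=(35, 13))}
test = rng.normal(size=(32, 13))
print(recognize(test, templates))
```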
465 Gene Expression Signature for Classification of Metastasis Positive and Negative Oral Cancer in Homosapiens
Authors: A. Shukla, A. Tarsauliya, R. Tiwari, S. Sharma
Abstract:
Classifying cancers into their corresponding cohorts has been a key area of research in bioinformatics, aiming at better prognosis of the disease. The high dimensionality of gene expression data makes this a complex task and requires techniques for identifying significant data in order to reduce the dimensionality and extract significant information. In this paper, we propose a novel approach for the classification of oral cancer into metastasis-positive and metastasis-negative patients. We have used significance analysis of microarrays (SAM) for identifying significant genes, which constitute a gene signature. Three different gene signatures were identified using SAM from three different combinations of training datasets, and their classification accuracy was calculated on the corresponding testing datasets using k-Nearest Neighbour (kNN), Fuzzy C-Means Clustering (FCM), Support Vector Machine (SVM), and Backpropagation Neural Network (BPNN) classifiers. A final gene signature of only 9 genes was obtained from the above 3 individual gene signatures. The 9-gene signature's classification capability was compared using the same classifiers on the same testing datasets. Results obtained from the experiments show that the 9-gene signature classified all samples in the testing dataset accurately, while the individual gene signatures could not classify all samples accurately.
Keywords: Cancer, Gene Signature, SAM, Classification.
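A minimal sketch of the classification step, using scikit-learn's kNN on an expression matrix restricted to the selected signature genes. The matrix, labels, and neighbour count below are hypothetical stand-ins; the study's actual signature genes and datasets are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical expression matrix restricted to the 9 signature genes:
# rows = samples, columns = genes; labels 1 = metastasis positive, 0 = negative.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(40, 9)), rng.integers(0, 2, 40)
X_test, y_test = rng.normal(size=(10, 9)), rng.integers(0, 2, 10)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
accuracy = knn.score(X_test, y_test)   # fraction of correctly classified test samples
print(f"kNN accuracy on the signature genes: {accuracy:.2f}")
```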
464 A Numerical Framework to Investigate Intake Aerodynamics Behavior in Icing Conditions
Authors: Ali Mirmohammadi, Arash Taheri, Meysam Mohammadi-Amin
Abstract:
One of the major parts of a jet engine is the air intake, which provides the proper and required amount of air for the engine to operate. There are several aerodynamic parameters which should be considered in the design, such as distortion, pressure recovery, etc. In this research, the effects of lip ice accretion on pitot intake performance are investigated. For the ice accretion phenomenon, two supervised multilayer neural networks (ANN) are designed, one for ice shape prediction and another for ice roughness estimation, based on experimental data. The Fourier coefficients of the transformed ice shape and parameters including velocity, liquid water content (LWC), median volumetric diameter (MVD), spray time, and temperature are used in the neural network training. Then, the subsonic intake flow field is simulated numerically using the 2D Navier-Stokes equations and a Finite Volume approach with a hybrid mesh that includes structured and unstructured regions. The results are obtained at different angles of attack, and the variations of the intake aerodynamic parameters due to the icing phenomenon are discussed. The results show noticeable effects of the ice accretion phenomenon on intake behavior.
Keywords: Artificial Neural Network, Ice Accretion, Intake Aerodynamics, Design Parameters, Finite Volume Method.
463 Neural Network Tuned Fuzzy Controller for MIMO System
Authors: Seema Chopra, R. Mitra, Vijay Kumar
Abstract:
In this paper, a neural network tuned fuzzy controller is proposed for controlling Multi-Input Multi-Output (MIMO) systems. For convenience of analysis, the structure of the MIMO fuzzy controller is divided into single-input single-output (SISO) controllers, one controlling each degree of freedom. Secondly, according to the characteristics of the system's dynamic coupling, an appropriate coupling fuzzy controller is incorporated to improve the performance. A simulation analysis of a two-level mass-spring MIMO vibration system is carried out, and the results show the effectiveness of the proposed fuzzy controller. Although the performance is improved, the computational time and memory used are comparatively higher, because the controller has four fuzzy reasoning blocks, and this number may increase for other MIMO systems. A fuzzy neural network is therefore designed from a set of input-output training data to reduce the computing burden during implementation. This control strategy can not only simplify the implementation of fuzzy control, but also reduce computational time and memory consumption.
Keywords: Fuzzy Control, Neural Network, MIMO System, Optimization of Membership functions.
462 Using Teager Energy Cepstrum and HMM Distances in Automatic Speech Recognition and Analysis of Unvoiced Speech
Authors: Panikos Heracleous
Abstract:
In this study, the use of a silicon NAM (Non-Audible Murmur) microphone in automatic speech recognition is presented. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (non-audible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise, and they might be used in special systems (speech recognition, speech conversion, etc.) for sound-impaired people. Using a small amount of training data and adaptation approaches, 93.9% word accuracy was achieved for a 20k-word Japanese vocabulary dictation task. Non-audible murmur recognition in noisy environments is also investigated. In this study, further analysis of NAM speech has been made using distance measures between hidden Markov model (HMM) pairs. It has been shown, using a metric distance, that the spectral space of NAM speech is reduced; however, the locations of the different NAM phonemes are similar to the locations of the phonemes of normal speech, and the NAM sounds are well discriminated. Promising results using nonlinear features are also introduced, especially under noisy conditions.
Keywords: Speech recognition, unvoiced speech, nonlinear features, HMM distance measures.
461 Evaluation of A 50MW Two-Axis Tracking Photovoltaic Power Plant for AL-Jagbob, Libya: Energetic, Economic, and Environmental Impact Analysis
Abstract:
This paper investigates the application of a large-scale (LS-PV) two-axis tracking photovoltaic power plant in Al-Jagbob, Libya. The design of a 50MW grid-connected (two-axis tracking) PV power plant in Al-Jagbob, Libya, has been carried out. A hetero-junction with intrinsic thin layer (HIT) type PV module has been selected and modeled. A Microsoft Excel-VBA program has been constructed to compute slope radiation, dew-point, sky temperature, and then cell temperature, maximum power output, and module efficiency for this tracking system. The results for energy production show that the total energy output is 128.5 GWh/year. The average module efficiency is 16.6%. The electricity generation capacity factor (CF) and solar capacity factor (SCF) were found to be 29.3% and 70.4%, respectively. A 50MW two-axis tracking power plant with a total energy output of 128.5 GWh/year would reduce CO2 pollution by 85,581 tonnes each year. The payback time for the proposed LS-PV photovoltaic power plant was found to be 4 years.
Keywords: Large PV power plant, solar energy, environmental impact, Dual-axis tracking system.
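The reported capacity factor follows directly from the rated power and annual energy yield; the short check below reproduces the 29.3% figure. The solar capacity factor additionally depends on site irradiation data not given in the abstract, so it is not recomputed here.

```python
rated_power_mw = 50.0        # plant rating, from the abstract
annual_energy_gwh = 128.5    # simulated annual yield, from the abstract
hours_per_year = 8760

# CF = actual annual energy / energy if the plant ran at rated power all year.
capacity_factor = (annual_energy_gwh * 1000) / (rated_power_mw * hours_per_year)
print(f"Capacity factor: {capacity_factor:.1%}")   # ~29.3%, matching the reported value
```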
460 Adopted Method of Information System Strategy for Knowledge Management System: A Literature Review
Authors: Elin Cahyaningsih, Dana Indra Sensuse, Wahyu Catur Wibowo, Sofiyanti Indriasari
Abstract:
The bureaucracy reform program is driving the Indonesian government to change its management practices in order to enhance organizational performance. Information technology has become one of the strategic areas that organizations try to improve. A knowledge management system is an information system that supports the implementation of knowledge management in government; it is categorized under the people perspective because it depends heavily on human interaction and participation. A strategic plan for developing a knowledge management system can be determined using information system strategy methods. This research was conducted to define the types of information system strategy methods, the stages of activity in each method, and their strengths and weaknesses. Literature review methods were used to identify and classify information system strategy methods, differentiate method types, and categorize their common activities, strengths, and weaknesses. The result of this research is the identification and comparison of six strategic information system methods; the Balanced Scorecard and Risk Analysis are believed to be the most commonly used strategic methods and the ones with the greatest strengths.
Keywords: Knowledge management system, balanced scorecard, five force, risk analysis, gap analysis, value chain analysis, SWOT analysis.
459 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems
Authors: Alexander J. Severinsky
Abstract:
Radiative forces of greenhouse gases (GHG) increase the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. Air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. Then, this estimate is used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The reason for this inquiry is the author's skepticism that current changes can be explained by a "~1 °C" global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore is that of an atmospheric temperature rise. The study utilizes an analysis of air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results coming from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative force of GHGs with a high level of scientific understanding is near 4.7 W/m2 on average over the Earth's entire surface in 2018, as compared to pre-industrial times in the mid-1700s. The additional radiative force of fast feedbacks coming from various forms of water gives approximately an additional ~15 W/m2. In 2018, these radiative forces heated the atmosphere by approximately 5.1 °C, which will create a thermal equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increases in the concentration of the GHGs, primarily of carbon dioxide and methane. These findings on the radiative force of GHGs in 2018 were applied to estimates of effects on major Earth ecosystems. This additional force of nearly 20 W/m2 causes an increase in ice melting by an additional rate of over 90 cm/year, a green leaf temperature increase of nearly 5 °C, and a work energy increase of air of approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than the 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere.
Keywords: GHG radiative forces, GHG air temperature, GHG thermodynamics, GHG historical, GHG experimental, GHG radiative force on ice, GHG radiative force on plants, GHG radiative force in air.
458 Developing a Sustainable Educational Portal for the D-Grid Community
Authors: Viktor Achter, Sebastian Breuers, Marc Seifert, Ulrich Lang, Joachim Götze, Bernd Reuther, Paul Müller
Abstract:
Within the last years, several technologies have been developed to help build e-learning portals. Most of them follow approaches that deliver a vast amount of functionality, suitable for class-like learning. The SuGI project, as part of the D-Grid (funded by the BMBF), aims to deliver a highly scalable and sustainable learning solution providing materials (e.g. learning modules, training systems, webcasts, tutorials, etc.) containing knowledge about Grid computing to the D-Grid community. In this article, the process of developing an e-learning portal focused on the requirements of this special user group is described. Furthermore, it deals with the conceptual and technical design of an e-learning portal addressing the special needs of heterogeneous target groups. The main focus lies on the quality management of the software development process, Web templates for uploading new content, and the rich search and filter functionalities, which are described from a conceptual as well as a technical point of view. Specifically, it points out best practices as well as concepts to provide a sustainable solution for a relatively unknown and highly heterogeneous community.
Keywords: D-Grid, e-learning, e-science, Grid computing, SuGI.
457 Knowledge Management Factors Affecting the Level of Commitment
Authors: Abbas Keramati, Abtin Boostani, Mohammad Jamal Sadeghi
Abstract:
This paper examines the influence of knowledge management factors on organizational commitment for employees in the oil and gas drilling industry of Iran. We determine which knowledge factors have the greatest impact on personnel loyalty and commitment to the organization, using data collected from a survey of over 300 full-time personnel working in three large companies active in the oil and gas drilling industry of Iran. To specify the effect of knowledge factors on the organizational commitment of the personnel in the studied organizations, Principal Component Analysis (PCA) is used. Our findings show that factors such as knowledge and expertise, in-service training, the value of knowledge, and the application of individuals’ knowledge in the organization, grouped as the factor “learning and perception by personnel of the value of knowledge within the organization”, have the greatest impact on organizational commitment. After this factor, the “existence of knowledge and a knowledge-sharing environment in the organization”, “existence of potential knowledge exchange in the organization”, and “organizational knowledge level” factors have the greatest impact on the organizational commitment of personnel, respectively.
Keywords: Knowledge management, organizational commitment, loyalty, drilling industry, principal component analysis.
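A minimal sketch of the PCA step on standardized survey data, using scikit-learn; the response matrix, number of items, and number of retained components below are hypothetical, not the study's actual survey instrument.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical survey matrix: rows = respondents, columns = Likert-scale knowledge items.
rng = np.random.default_rng(1)
responses = rng.integers(1, 6, size=(300, 20)).astype(float)

scaled = StandardScaler().fit_transform(responses)   # standardize items before PCA
pca = PCA(n_components=4)
scores = pca.fit_transform(scaled)

print("Variance explained by each component:", pca.explained_variance_ratio_)
# Items with the largest absolute loadings on a component indicate which
# knowledge-management factor that component represents.
print("Loadings of the first component:", pca.components_[0])
```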
456 Assessment of the Administration and Services of Public Access Computers in Academic Libraries in Kaduna State, Nigeria
Authors: Usman Ahmed Adam, Umar Ibrahim, Ezra S. Gbaje
Abstract:
This study explores the provision of Public Access Computers (PACs) in academic libraries in Kaduna State, Nigeria. The study aimed to determine the computers and other tools available, the services they provide, and the challenges of the practice. Three questions were framed to identify the number of public computers and tools available, their services, and the problems faced in practice. The study used a qualitative research design along with semi-structured interviews and observation as tools for data collection. Descriptive analysis was employed to analyze the data. The sample of the study comprised 52 librarians and IT staff from the seven academic institutions in Kaduna State. The findings revealed that PACs were provided for access to the Internet, digital resources, the library catalogue, and training services. The study further revealed that, despite the limited number of computers, users were not allowed to enjoy many services. The study recommends that libraries in Kaduna State should provide more public computers to cover their user population, and should allow users to use the computers without limitations and restrictions.
Keywords: Academic libraries, computers in the library, digital libraries, public computers.
455 Status Report of the GERDA Phase II Startup
Authors: Valerio D’Andrea
Abstract:
The GERmanium Detector Array (GERDA) experiment, located at the Laboratori Nazionali del Gran Sasso (LNGS) of INFN, searches for the 0νββ decay of 76Ge. Germanium diodes enriched to ~86% in the double beta emitter 76Ge (enrGe) are exposed, being both the source and the detectors of 0νββ decay. Neutrinoless double beta decay is considered a powerful probe to address still-open issues in the neutrino sector of the Standard Model of particle physics and beyond. Since 2013, just after the completion of the first part of its experimental program (Phase I), the GERDA setup has been upgraded to perform its next step in the 0νββ searches (Phase II). Phase II aims to reach a sensitivity to the 0νββ decay half-life larger than 10^26 yr in about 3 years of physics data taking, exposing a detector mass of about 35 kg of enrGe with a background index of about 10^-3 cts/(keV·kg·yr). One of the main new implementations is the liquid argon scintillation light read-out, used to veto events that deposit their energy only partially in the Ge and partially in the surrounding LAr. In this paper, the GERDA Phase II goals, the upgrade work, and a few selected features from the 2015 commissioning and 2016 calibration runs are presented. The main Phase I achievements are also reviewed.
Keywords: GERDA, double beta decay, germanium, LNGS.
454 The e-DELPHI Method to Test the Importance Competence and Skills: Case of the Lifelong Learning Spanish Trainers
Authors: Xhevrie Mamaqi, Jesus Miguel, Pilar Olave
Abstract:
Lifelong learning is a crucial element in the modernization of European education and training systems. The most important actors in the development of lifelong learning are the trainers, whose professional profile requires new competences and skills in the current labour market. The main objective of this paper is to establish an importance ranking of the new competences, capabilities, and skills that lifelong learning trainers in Spain must possess nowadays. A wide study of secondary sources allowed the design of a questionnaire that organizes the trainers' skills and competences. The e-Delphi method is used to carry out a creative, individual, and anonymous evaluation by experts of the importance ranking of the criteria, sub-criteria, and indicators in the e-Delphi questionnaire. Twenty Spanish experts in lifelong learning participated in two rounds of the e-Delphi method. In the first round, the analysis of the experts' evaluations allowed the ranking of the most important criteria, sub-criteria, and indicators to be established and the least valued to be eliminated. The minimum level necessary to reach consensus among the experts was achieved in the second round.
Keywords: Competences and skills, lifelong learning trainers, Spain, e-DELPHI method.
453 Molecular Detection and Characterization of Infectious Bronchitis Virus from Libya
Authors: Abdulwahab Kammon, Tan Sheau Wei, Abdul Rahman Omar, Abdunaser Dayhum, Ibrahim Eldghayes, Monier Sharif
Abstract:
Infectious bronchitis virus (IBV) is a very dynamic and evolving virus, causing major economic losses to the global poultry industry. Recently, the Libyan poultry industry faced a severe outbreak of respiratory distress associated with high mortality and a dramatic drop in egg production. Tracheal and cloacal swabs were analyzed for several poultry viruses. IBV was detected using SYBR Green I real-time PCR based on the nucleocapsid (N) gene. Sequence analysis of the partial N gene indicated high similarity (~94%) to IBV strain 3382/06, which was isolated in Taiwan. Even though the IBV strain 3382/06 is more similar to the Mass-type strain H120, the isolate has been implicated as an intertypic recombinant of three putative parental IBV strains, namely H120, Taiwan strain 1171/92, and China strain CK/CH/LDL/97I. Complete sequencing and antigenicity studies of the Libyan IBV strains are currently underway to determine the evolution of the virus and its importance for vaccine-induced immunity. In this paper, we document for the first time the presence of a possibly variant IBV strain in Libya, which requires a dramatic change in the vaccination program.
Keywords: Libya, Infectious bronchitis, Molecular characterization.
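As a rough illustration of how a percent-identity figure such as the reported ~94% is obtained, the sketch below computes identity over an already-aligned pair of sequences. The fragments shown are placeholders, not the Libyan or 3382/06 N-gene sequences, and a real comparison would rely on a proper alignment tool such as BLAST.

```python
def percent_identity(seq1, seq2):
    """Percent identity over the aligned (equal-length) portion of two sequences."""
    length = min(len(seq1), len(seq2))
    matches = sum(a == b for a, b in zip(seq1[:length], seq2[:length]))
    return 100.0 * matches / length

# Hypothetical partial N-gene fragments, for illustration only.
print(percent_identity("ATGGCAAGCGGTAAAGCA", "ATGGCAAGTGGTAAAGCA"))
```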
452 Computer Aided Diagnosis of Polycystic Kidney Disease Using ANN
Authors: Anjan Babu G, Sumana G, Rajasekhar M
Abstract:
Many inherited diseases and non-hereditary disorders are involved in the development of renal cystic diseases. Polycystic kidney disease (PKD) is a disorder that develops within the kidneys, in which groups of cysts fill with a water-like fluid. PKD is responsible for 5-10% of end-stage renal failure treated by dialysis or transplantation. New experimental models and the application of molecular biology techniques have provided new insights into the pathogenesis of PKD. Researchers are showing keen interest in developing automated systems that apply computer-aided techniques for the diagnosis of diseases. In this paper, a multilayered feed-forward neural network with one hidden layer is constructed, trained, and tested using the backpropagation learning rule for the diagnosis of PKD, based on physical symptoms and urinalysis test results collected from individual patients. The data collected from 50 patients are used to train and test the network. Among these samples, 75% of the data are used for training and the remaining 25% for testing. Further, this trained network is applied to new samples. The output indicates whether the patient is normal or abnormal.
Keywords: Dialysis, Hereditary, Transplantation, Polycystic, Pathogenesis.
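A minimal sketch of this kind of pipeline, using scikit-learn's backpropagation-trained multilayer perceptron as a stand-in for the hand-built network described in the abstract; the feature matrix, labels, and hidden-layer size below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical data: 50 patients x symptom/urinalysis features, binary labels (1 = abnormal).
rng = np.random.default_rng(7)
X, y = rng.normal(size=(50, 8)), rng.integers(0, 2, 50)

# 75% of the samples for training, 25% for testing, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=7)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```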
451 Impact of Liquidity Crunch on Interbank Network
Authors: I. Lucas, N. Schomberg, F-A. Couturier
Abstract:
Most empirical studies have analyzed how liquidity risks faced by individual institutions turn into systemic risk. The recent banking crisis has highlighted the importance of grasping and controlling systemic risk, and the willingness of central banks to ease their monetary policies to save defaulting or illiquid banks. This last point suggests that banks may pay less attention to liquidity risk, which, in turn, can become an important new channel of loss. Financial regulation focuses on the most important and “systemic” banks in the global network. However, to quantify the expected loss associated with liquidity risk, it is worth analyzing the sensitivity to this channel of the various elements of the global bank network. A small bank is not considered potentially systemic; however, the interaction of small banks taken together can become a systemic element. This paper analyzes the impact of the interaction of medium and small banks on a set of banks considered the core of the network. The proposed method uses an agent-based model structure in a two-class environment. In the first class, data from the actual balance sheets of 22 large and systemic banks (such as BNP Paribas or Barclays) are collected. In the second, to model the network as closely as possible to the actual interbank market, 578 fictitious banks smaller than those in the first class are split into two groups of small and medium banks. All banks are active on the European interbank network and have deposit and market activity. A simulation of 12 three-month periods, representing a mid-term time interval of three years, is projected. In each period, there is a set of behavioral descriptions: repayment of matured loans, liquidation of deposits, income from securities, collection of new deposits, new credit demands, and securities sales. The last two actions are part of the refunding process developed in this paper. To strengthen the reliability of the proposed model, random parameter dynamics are managed with stochastic equations, with rate variations generated by the Vasicek model. The central bank is considered the lender of last resort, allowing banks to borrow at the repo rate, and conditions for ejecting banks from the system are introduced.
A liquidity crunch due to an exogenous crisis is simulated in the first class, and the loss impact on the other bank classes is analyzed through aggregate values representing the aggregate of loans and/or the aggregate of borrowing between classes. It is mainly shown that the three groups of the European interbank network do not have the same response, and that intermediate banks are the most sensitive to liquidity risk.
Keywords: Systemic Risk, Financial Contagion, Liquidity Risk, Interbank Market, Network Model.
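A minimal sketch of the rate dynamics mentioned in the abstract: an Euler discretisation of the Vasicek model dr = a(b - r)dt + sigma dW over the 12 quarterly periods of the simulation horizon. The parameter values are illustrative assumptions, not those used in the study.

```python
import numpy as np

def vasicek_path(r0, a, b, sigma, n_steps, dt, seed=0):
    """Euler discretisation of the Vasicek short-rate model dr = a(b - r)dt + sigma dW."""
    rng = np.random.default_rng(seed)
    rates = np.empty(n_steps + 1)
    rates[0] = r0
    for t in range(n_steps):
        shock = sigma * np.sqrt(dt) * rng.standard_normal()
        rates[t + 1] = rates[t] + a * (b - rates[t]) * dt + shock
    return rates

# Twelve three-month periods (three years), matching the simulated horizon.
print(vasicek_path(r0=0.02, a=0.5, b=0.03, sigma=0.01, n_steps=12, dt=0.25))
```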
450 Modeling of Pulping of Sugar Maple Using Advanced Neural Network Learning
Authors: W. D. Wan Rosli, Z. Zainuddin, R. Lanouette, S. Sathasivam
Abstract:
This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify the structure of the model completely. Two different types of neural networks were used for the application to the Pulping of Sugar Maple problem. Three-layer feed-forward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M^-1 Ax = M^-1 b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP), and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
Keywords: Convergence, Modeling, Neural Networks, Preconditioned Conjugate Gradient.
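To make the preconditioning idea concrete, here is a minimal preconditioned conjugate gradient iteration with a Jacobi (diagonal) preconditioner for a symmetric positive-definite system. The paper applies PCG variants (PCGF, PCGP, PCGB) to network training rather than to a single linear solve, so this is only a sketch of the underlying iteration, with an illustrative test matrix.

```python
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient with a Jacobi preconditioner M = diag(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    M_inv = 1.0 / np.diag(A)          # inverse of the diagonal preconditioner
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive-definite test matrix
b = np.array([1.0, 2.0])
print(pcg(A, b))                          # should match np.linalg.solve(A, b)
```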
449 A Kernel Based Rejection Method for Supervised Classification
Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy
Abstract:
In this paper, we are interested in classification problems with a performance constraint on the error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary-labelled classification, a number of SVM-based methods with a rejection option have been proposed over the past few years. All of these methods use two thresholds on the SVM output. However, in previous works, we have shown on synthetic data that using thresholds on the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper, a new method for supervised classification with a rejection option is proposed. It consists of two different classifiers jointly optimized to minimize the rejection probability subject to a given constraint on the error rate. This method uses a new kernel-based linear learning machine that we have recently presented. This learning machine is characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed classification method with rejection option is compared to an SVM-based rejection method proposed in the recent literature. Experiments show the superiority of the proposed method.
Keywords: Rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.
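For context, the two-threshold scheme that the paper argues against can be sketched as follows: an SVM is trained, and samples whose decision score falls between two thresholds are rejected. The data and threshold values below are hypothetical; the paper's own contribution replaces this scheme with two jointly optimized classifiers, which is not shown here.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical two-class data; a real study would use its own features.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

svm = SVC(kernel="rbf").fit(X, y)
scores = svm.decision_function(X)

# Two thresholds on the SVM output define the reject region; -1 marks a rejected sample.
t_low, t_high = -0.3, 0.3
decisions = np.where(scores < t_low, 0, np.where(scores > t_high, 1, -1))
print("Rejection rate:", np.mean(decisions == -1))
```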
448 Gas Lift Optimization Using Smart Gas Lift Valve
Authors: Mohamed A. G. H. Abdalsadig, Amir Nourian, G. G. Nasr, M. Babaie
Abstract:
Gas lift is one of the most common forms of artificial lift, particularly for offshore wells, because of its relative downhole simplicity, flexibility, reliability, ability to operate over a large range of rates, and small footprint at the wellhead. Presently, the petroleum industry is investing in the exploration and development of fields in offshore locations, where oil and gas wells are drilled thousands of feet below the ocean surface in high-pressure and high-temperature conditions. Gas-lifted oil wells can fail through their gas lift valves, which are considered the heart of the gas lift system for controlling the amount of gas inside the tubing string. The gas injection rate through the gas lift valve must be controlled so that it is sufficient to obtain and maintain critical flow; moreover, gas lift valves must be designed not only to allow gas passage and prevent oil passage, but also so that gas injection into the well can be started and stopped when needed. In this paper, a smart gas lift valve is used to investigate the effect of valve port size, depth of injection, and vertical lift performance on well productivity; all these aspects are investigated using the PROSPER simulator coupled with experimental data. The results show that by using the smart gas lift valve, the gas injection rate can be controlled, which leads to improved flow performance.
Keywords: Effect of gas lift valve port size, effect of water cut, vertical flow performance.
447 Importance of Mobile Technology in Successful Adoption and Sustainability of a Chronic Disease Support System
Authors: Reza Ariaeinejad, Norm Archer
Abstract:
Self-management is becoming a new emphasis for healthcare systems around the world. However, there are many different problems with the adoption of new health-related intervention systems. The situation is even more complicated for chronically ill patients with disabilities, illiteracy, or impairment in judgment in addition to their conditions, or with multiple co-morbidities. Providing online decision support to manage patient health and to provide better support for chronically ill patients is a new way of dealing with chronic disease management. In this study, the importance of mobile technology is discussed through an m-Health system that supports self-management interventions including the care provider, family and social support, education and training, decision support, recreation, and ongoing patient motivation to promote adherence and sustainability of the intervention. A theoretical model for adoption and sustainability of system use is proposed, based on the UTAUT2 and IS Continuance of Use models, both of which have been pre-validated through longitudinal studies. The objective of this paper is to show the importance of using mobile technology in the adoption and sustained use of an m-Health system, which will result in commercially sustainable self-management support for chronically ill patients.
Keywords: M-health, e-health, self-management, disease.
446 The Effect of Electric Field Distributions on Grains and Insect for Dielectric Heating Applications
Authors: S. Santalunai, T. Thosdeekoraphat, C. Thongsopa
Abstract:
This paper presents the effect of the electric field distribution through an electric field intensity analysis. For the dielectric heating of grains and insects, rice and rice weevils are used for the analysis. Furthermore, this analysis compares the effect of the electric field distribution in rice and in rice weevils. In this simulation, two copper plates are used to generate the electric field for the dielectric heating system, and the rice material is placed between the plates. The simulation is divided into two cases: in case I, one rice weevil is placed in the rice, and in case II, two rice weevils are placed at different positions in the rice. Moreover, probes are located at various positions on the plate. The power fed to the plate is optimized using the CST EM Studio program, with 1000 W of electrical power at a 39 MHz resonance frequency. The results of the two cases indicate that the highest electric field intensity occurs in the rice and rice weevils at the points nearest the probes. Moreover, more heat is directed to the rice weevils than to the rice. When the temperatures of the rice and rice weevils are calculated and compared, the temperature of the rice weevils exceeds that of the rice by about 41.62 °C. These results can be applied to dielectric heating applications for eliminating insects.
Keywords: Copper plates, Electric field distribution, Dielectric heating.
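The link between field intensity and heating can be made explicit with the standard dielectric heating relation P = 2πf·ε0·ε''·|E|², where ε'' is the material's loss factor. The sketch below uses illustrative values; the actual loss factors of rice and rice weevils at 39 MHz would come from measurement or the literature, and the paper's temperature comparison comes from the CST EM Studio simulation rather than this formula alone.

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def volumetric_heating(freq_hz, loss_factor, e_field_vpm):
    """Dielectric power density P = 2*pi*f*eps0*eps'' * |E|^2, in W/m^3."""
    return 2 * math.pi * freq_hz * EPS0 * loss_factor * e_field_vpm ** 2

def heating_rate(power_density, density, specific_heat):
    """Initial temperature rise rate dT/dt = P / (rho * c_p), in K/s."""
    return power_density / (density * specific_heat)

# Illustrative values only: loss factor, field strength, and material properties are assumed.
p = volumetric_heating(39e6, loss_factor=0.3, e_field_vpm=2e4)
print(p, heating_rate(p, density=1200.0, specific_heat=1800.0))
```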
445 Numerical Analysis of Thermal Conductivity of Non-Charring Material Ablation Carbon-Carbon and Graphite with Considering Chemical Reaction Effects, Mass Transfer and Surface Heat Transfer
Authors: H. Mohammadiun, A. Kianifar, A. Kargar
Abstract:
Nowadays, there is little information concerning heat shield systems, and this information is not completely reliable for use in many cases; for example, precise calculations cannot be done for various materials. In addition, real-scale testing has two disadvantages: high cost and low flexibility, and for each case a new test must be performed. Hence, a numerical modeling program that calculates the surface recession rate and interior temperature distribution is necessary. Also, a numerical solution of the governing equation for non-charring material ablation is presented in order to predict the recession rate and the heat response of non-charring heat shields. The governing equation is nonlinear, and the Newton-Raphson method along with the TDMA algorithm is used to solve this nonlinear equation system. Using the Newton-Raphson method for solving the governing equation is one of the advantages of the solution method, because this method is simple and can be easily generalized to more difficult problems. The obtained results are compared with reliable sources in order to examine the accuracy of the compiled code.
Keywords: Ablation rate, surface recession, interior temperature distribution, non-charring material ablation, Newton-Raphson method.
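For reference, the TDMA (Thomas algorithm) mentioned in the abstract solves a tridiagonal system in O(n) operations; the sketch below is a generic implementation, not the authors' code, and the 3x3 test system is purely illustrative.

```python
def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system.

    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal (c[-1] unused),
    d: right-hand side. Returns the solution vector.
    """
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Test system [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has solution [1,1,1].
print(tdma([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0]))
```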
444 Model Canvas and Process for Educational Game Design in Outcome-Based Education
Authors: Ratima Damkham, Natasha Dejdumrong, Priyakorn Pusawiro
Abstract:
This paper explores a game design solution to help designers create educational games, using a digital educational game model canvas (DEGMC) and a digital educational game form (DEGF) based on an Outcome-Based Education program. DEGMC and DEGF help designers develop an overview of the game while designing and planning it. These tools give designers a clear way to assess players' abilities against learning outcomes and to support their game learning design. Designers can balance educational content and entertainment by using the strategies of the Business Model Canvas, and can design the gameplay and the assessment of players' abilities from the learning outcomes they need by referring to Constructive Alignment. Furthermore, they can use the design plan in this research to write their Game Design Document (GDD). The success of the research was evaluated from the perspectives of four experts in the education and computer fields. In the experiments, the canvas and form helped the game designers model their games according to the learning outcomes and analyze their own game elements. This method can be a path for future research on educational game design.
Keywords: Constructive alignment, constructivist theory, educational game, outcome-based education.
443 Thai Perception on Litecoin Value
Authors: Toby Gibbs, Suwaree Yordchim
Abstract:
This research analyzes factors affecting the success of Litecoin Value within Thailand and develops a guideline for self-reliance for effective business implementation. The sample in this study included 119 people reached through surveys. The results revealed four main factors affecting success, as follows: 1) Future career training should be pursued in applied Litecoin development. 2) Respondents did not grasp the concept of a digital currency or see the benefit of a digital currency. 3) There is a great need to educate the next generation of learners on the benefits of Litecoin within the community. 4) A great majority did not know what Litecoin was. The guideline for self-reliance planning consisted of 4 aspects: 1) Development planning: arranging meet-up groups to conduct further education on Litecoin and share solutions for adoption into everyday usage. Local communities need to develop awareness of the usefulness of Litecoin and share the value of Litecoin among friends and family. 2) Computer Science and Business Management staff should develop skills to expand on the benefits of Litecoin within their departments. 3) Further research should be pursued on how Litecoin Value can improve business and tourism within Thailand. 4) Local communities should focus on developing Litecoin awareness by encouraging street vendors to accept Litecoin as another form of payment for services rendered.
Keywords: Litecoin, Mining, Confirmations.
442 Protein Secondary Structure Prediction Using Parallelized Rule Induction from Coverings
Authors: Leong Lee, Cyriac Kandoth, Jennifer L. Leopold, Ronald L. Frank
Abstract:
Protein 3D structure prediction has always been an important research area in bioinformatics. In particular, the prediction of secondary structure has been a well-studied research topic. Despite the recent breakthrough of combining multiple sequence alignment information and artificial intelligence algorithms to predict protein secondary structure, the Q3 accuracy of various computational prediction algorithms has rarely exceeded 75%. In a previous paper [1], this research team presented a rule-based method called RT-RICO (Relaxed Threshold Rule Induction from Coverings) to predict protein secondary structure. The average Q3 accuracy on the sample datasets using RT-RICO was 80.3%, an improvement over comparable computational methods. Although this demonstrated that RT-RICO might be a promising approach for predicting secondary structure, the algorithm's computational complexity and program running time limited its use. Herein, a parallelized implementation of a slightly modified RT-RICO approach is presented. This new version of the algorithm facilitated the testing of a much larger dataset of 396 protein domains [2]. Parallelized RT-RICO achieved a Q3 score of 74.6%, which is higher than the consensus prediction accuracy of 72.9% that was achieved for the same test dataset by a combination of four secondary structure prediction methods [2].
Keywords: Data mining, protein secondary structure prediction, parallelization.
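For readers unfamiliar with the metric, Q3 is simply the per-residue accuracy over the three states helix (H), strand (E), and coil (C). The sketch below, with hypothetical structure strings, shows the calculation; it is not part of the RT-RICO implementation.

```python
def q3_accuracy(predicted, actual):
    """Q3: percentage of residues whose predicted state (H, E, or C) matches the true state."""
    assert len(predicted) == len(actual)
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)

# Toy example with hypothetical secondary-structure strings.
print(q3_accuracy("HHHEECCCHH", "HHHEECCCCC"))  # 80.0
```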
441 Multipurpose Agricultural Robot Platform: Conceptual Design of Control System Software for Autonomous Driving and Agricultural Operations Using Programmable Logic Controller
Authors: P. Abhishesh, B. S. Ryuh, Y. S. Oh, H. J. Moon, R. Akanksha
Abstract:
This paper discusses the conceptual design and development of control system software using a Programmable Logic Controller (PLC) for autonomous driving and agricultural operations of a Multipurpose Agricultural Robot Platform (MARP). Based on initial conditions given by field analysis and the desired agricultural operations, the structural design of MARP is developed using a modeling and analysis tool. The PLC, being robust and easy to use, has been used to design the autonomous control system of the robot platform for the desired parameters. The robot is capable of performing autonomous driving and three automatic agricultural operations, viz. hilling, mulching, and sowing of seeds, in that order. Input received from various sensors in the field is transmitted to the controller via a ZigBee network to make changes in the control program and obtain the desired field output. The research is conducted to provide assistance to farmers by reducing labor hours for agricultural activities through automation. This study provides a cost-effective alternative to existing systems that rely on machinery attached behind tractors and rigorous manual operations in the field.
Keywords: Agricultural operations, autonomous driving, MARP, PLC.
440 Analysis of Performance of 3T1D Dynamic Random-Access Memory Cell
Authors: Nawang Chhunid, Gagnesh Kumar
Abstract:
On-chip memories consume a significant portion of the overall die space and power in modern microprocessors. On-chip caches depend on Static Random-Access Memory (SRAM) cells, with technology scaling occurring as per Moore's law. Unfortunately, scaling is affecting stability, performance, and leakage power, which will become major problems for future SRAMs in aggressive nanoscale technologies due to increasing device mismatch and variations. The 3T1D Dynamic Random-Access Memory (DRAM) cell is a non-destructive-read DRAM cell with three transistors and a gated diode. In the 3T1D DRAM cell, the gated diode (D1) acts as a storage device and also as an amplifier, which leads to fast read access. Due to its high tolerance to process variation, high density, and low memory cost compared to the 6T SRAM cell, it is widely used by advanced microprocessors for on-chip data and program memory. In the present paper, it is shown that the 3T1D DRAM cell can perform better in terms of read access time than the 6T, 4T, and 3T SRAM cells.
Keywords: DRAM cell, read access time, Tanner EDA tool, write access time, retention time, average power dissipation.
439 The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Authors: Fawaz S. Al-Anzi, Dia AbuZeina
Abstract:
Speech recognition makes an important contribution to promoting new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires several stages before the desired output is obtained. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content conveyed by the input speech signal. In the speech processing field, there are several methods to extract speech features; however, Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for characterizing the different speech segments in order to obtain the language phonemes for the subsequent training and decoding steps. Due to MFCC's good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
Keywords: Speech recognition, acoustic features, Mel Frequency Cepstral Coefficients.
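The MFCC-plus-derivatives front end described here can also be reproduced outside HTK; the sketch below uses the librosa library as a hedged equivalent, with a synthetic tone standing in for a recorded utterance.

```python
import numpy as np
import librosa

# A synthetic one-second 220 Hz tone stands in for a recorded utterance.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 220.0 * t).astype(np.float32)

# 13 MFCCs per frame, a common choice for ASR front ends.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)

# Delta and delta-delta (acceleration) coefficients, often appended to the static MFCCs.
delta = librosa.feature.delta(mfcc)
delta2 = librosa.feature.delta(mfcc, order=2)
print(mfcc.shape, delta.shape, delta2.shape)
```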