Search results for: grammatical error correction
1839 Settlement Prediction in Cape Flats Sands Using Shear Wave Velocity – Penetration Resistance Correlations
Authors: Nanine Fouche
Abstract:
The Cape Flats is a low-lying, sand-covered expanse of approximately 460 square kilometres situated to the southeast of the central business district of Cape Town in the Western Cape of South Africa. The aeolian sands masking this area are often loose and compressible in the upper 1 m to 1.5 m of the surface, and the maximum allowable settlement is generally exceeded in these sands. The settlement of shallow foundations on Cape Flats sands is commonly predicted using the results of in-situ tests such as the SPT or DPSH due to the difficulty of retrieving undisturbed samples for laboratory testing. Varying degrees of accuracy and reliability are associated with these methods. More recently, shear wave velocity (Vs) profiles obtained from seismic testing, such as continuous surface wave (CSW) tests, are being used for settlement prediction. Such predictions have the advantage of considering the non-linear stress-strain behaviour of soil and the degradation of stiffness with increasing strain. CSW tests are rarely executed in the Cape Flats, whereas SPTs are commonly performed. For this reason, and to facilitate better settlement predictions in Cape Flats sand, equations representing shear wave velocity (Vs) as a function of SPT blow count (N60) and vertical effective stress (σv') were generated by statistical regression of site investigation data. To reveal the most appropriate method of overburden correction, analyses were performed with a separate overburden term (Pa/σv') as well as with stress-corrected shear wave velocity and SPT blow counts (correcting Vs and N60 to Vs1 and (N1)60, respectively). Shear wave velocity profiles and SPT blow count data from three sites masked by Cape Flats sands were utilised to generate 80 Vs-SPT N data pairs for analysis. Investigated terrains included sites in the suburbs of Athlone, Muizenberg, and Atlantis, all underlain by windblown deposits comprising fine and medium sand with varying fines contents. Elastic settlement analysis was also undertaken for the Cape Flats sands, using a non-linear stepwise method based on small-strain stiffness estimates obtained from the best Vs-N60 model, and compared to settlement estimates using the general elastic solution with stiffness profiles determined using Stroud's (1989) and Webb's (1969) SPT N60-E transformation models. Stroud's method considers strain level indirectly, whereas Webb's method does not take account of the variation in elastic modulus with strain. The expression of Vs in terms of N60 and Pa/σv' derived from the Atlantis data set revealed the best fit, with R2 = 0.83 and a standard error of 83.5 m/s. The less accurate Vs-SPT N relations associated with the combined data set are presumably the result of the inversion routines used in the analysis of the CSW results, which showcase significant variation in relative density and stiffness with depth. The regression analyses revealed that the inclusion of a separate overburden term in the regression of Vs and N60 produces improved fits, as opposed to the stress-corrected equations, in which the R2 of the regression is notably lower. It is the correction of Vs and N60 to Vs1 and (N1)60 with empirical constants 'n' and 'm' prior to regression that introduces bias with respect to overburden pressure.
When comparing settlement prediction methods, both Stroud's method (considering strain level indirectly) and the small-strain stiffness method predict higher stiffnesses for medium dense and dense profiles than Webb's method, which takes no account of strain level in the determination of soil stiffness. Webb's method appears to be suitable for loose sands only. The Versak software appears to underestimate differences in settlement between square and strip footings of similar width. In conclusion, settlement analysis using small-strain stiffness data from the proposed Vs-N60 model for Cape Flats sands provides a way to take account of the non-linear stress-strain behaviour of the sands when calculating settlement.
Keywords: sands, settlement prediction, continuous surface wave test, small-strain stiffness, shear wave velocity, penetration resistance
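A regression of the form Vs = a·N60^b·(Pa/σv')^c, i.e., with a separate overburden term as in the abstract, can be fitted by ordinary least squares on log-transformed data. The sketch below is illustrative only: the synthetic data and coefficients are assumptions, not the authors' published Cape Flats model.

```python
# Illustrative sketch: fitting Vs = a * N60**b * (Pa/sigma_v)**c by
# log-linear least squares. Data and coefficients are assumed for
# illustration; they are not the authors' published model.
import numpy as np

def fit_vs_model(vs, n60, sigma_v, pa=100.0):
    """Return (a, b, c) for Vs = a * N60**b * (Pa/sigma_v)**c."""
    X = np.column_stack([np.ones_like(n60),
                         np.log(n60),
                         np.log(pa / sigma_v)])
    y = np.log(vs)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]

# Synthetic example data (Vs in m/s, sigma_v in kPa), 80 pairs as in the study
rng = np.random.default_rng(0)
n60 = rng.uniform(5, 40, 80)
sigma_v = rng.uniform(20, 150, 80)
vs = 90 * n60**0.30 * (100 / sigma_v)**-0.20 * rng.lognormal(0, 0.1, 80)

a, b, c = fit_vs_model(vs, n60, sigma_v)
print(f"Vs ~ {a:.1f} * N60^{b:.2f} * (Pa/sigma_v')^{c:.2f}")
```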
Procedia PDF Downloads 175
1838 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required to analyse its energy efficiency. In practice, a certain degree of geometric simplification must be made when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQuest, or EnergyPlus. In fact, a detailed description is not necessary when extremely high accuracy is not demanded. Therefore, this paper analysed the relationship between the error of the simulation results from building energy models and the geometric simplification of the models. Two parameters were selected as the indices to characterize the geometric features of a building in energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization method, an arbitrary column building can be simplified to a typically shaped (cuboid) building for energy modeling. The results of this study indicate that the simplification leads to an error of less than 7% for buildings with a ratio of southward projection length to total bottom perimeter of 0.25-0.35, which covers most situations.
Keywords: building energy model, simulation, geometric simplification, design, regression
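As a hedged illustration of the parameterization idea, the sketch below maps an arbitrary footprint to an equivalent cuboid that preserves the two stated indices (southward projected area and total side surface area) at a fixed building height. The formulas are our reading of the abstract, not the authors' published procedure.

```python
# Hedged sketch: reduce an arbitrary column building to an equivalent cuboid
# preserving the southward projected area and the total side surface area at
# fixed height. This is our reading of the parameterization, not the
# authors' published procedure.
def equivalent_cuboid(south_area, side_area, height):
    width = south_area / height          # south facade width
    perimeter = side_area / height       # footprint perimeter
    depth = perimeter / 2.0 - width      # from perimeter = 2 * (width + depth)
    if depth <= 0:
        raise ValueError("inconsistent areas: perimeter too small")
    return width, depth

w, d = equivalent_cuboid(south_area=300.0, side_area=1400.0, height=20.0)
print(f"equivalent cuboid footprint: {w:.1f} m x {d:.1f} m")
```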
Procedia PDF Downloads 180
1837 On Hyperbolic Gompertz Growth Model (HGGM)
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
We propose a hyperbolic Gompertz growth model (HGGM), developed by introducing a stabilizing parameter called θ through a hyperbolic sine function into the classical Gompertz growth equation. The resulting integral solution, obtained deterministically, was reprogrammed into a statistical model and used in modeling the height and diameter of pines (Pinus caribaea). Its predictive ability was compared with that of the classical Gompertz growth model using goodness-of-fit tests and model selection criteria; the new approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions. The Kolmogorov-Smirnov and Shapiro-Wilk tests were used to check the compliance of the error term with normality assumptions, while the independence of the error term was tested using the runs test. The mean function of top height/Dbh over age predicted the observed values of top height/Dbh more closely under the hyperbolic Gompertz growth model than under its source model (the classical Gompertz growth model), while the results of R2, adjusted R2, MSE, and AIC confirmed the predictive power of the hyperbolic Gompertz growth model over its source model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Gompertz
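The abstract does not give the closed form of the HGGM, so the sketch below fits the classical Gompertz curve H(t) = a·exp(−b·exp(−ct)) alongside a hypothetical sinh-augmented variant, purely to illustrate the comparison by AIC; the sinh term is an assumed placeholder, not the authors' published equation.

```python
# Illustrative sketch: fitting a classical Gompertz curve and a hypothetical
# hyperbolic variant, then comparing AIC. The exact HGGM closed form is not
# given in the abstract; the sinh term below is an assumed placeholder.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def hyperbolic_gompertz(t, a, b, c, theta):
    # assumed form: Gompertz plus a damped stabilizing sinh term in age
    return a * np.exp(-b * np.exp(-c * t)) + theta * np.sinh(c * t) / (1 + t)

t = np.linspace(1, 25, 50)                       # age (years)
rng = np.random.default_rng(1)
height = gompertz(t, 30, 3, 0.15) + rng.normal(0, 0.5, t.size)

for name, f, p0 in [("Gompertz", gompertz, [25, 2, 0.1]),
                    ("HGGM (assumed form)", hyperbolic_gompertz, [25, 2, 0.1, 0.1])]:
    popt, _ = curve_fit(f, t, height, p0=p0, maxfev=10000)
    rss = np.sum((height - f(t, *popt)) ** 2)
    aic = t.size * np.log(rss / t.size) + 2 * len(popt)  # Gaussian-error AIC
    print(f"{name}: AIC = {aic:.1f}")
```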
Procedia PDF Downloads 441
1836 Behavior of Laminated Plates under Mechanical Loading
Authors: Mahmoudi Noureddine
Abstract:
This study applies two-variable refined plate theories of laminated composite plates to the static response of laminated plates. The plate theory accounts for a parabolic distribution of the transverse shear strains and satisfies the zero-traction boundary conditions on the surfaces of the plate without using a shear correction factor. The validity of the present theory is demonstrated by comparison with solutions available in the literature and with the finite element method. Results are presented for the static response of simply supported rectangular plates under uniform sinusoidal mechanical loading.
Keywords: bending, composite, laminate, plates, FEM
Procedia PDF Downloads 406
1835 Modelling Fluoride Pollution of Groundwater Using Artificial Neural Network in the Western Parts of Jharkhand
Authors: Neeta Kumari, Gopal Pathak
Abstract:
The artificial neural network has proven to be an efficient tool for non-parametric modeling of data in various applications where the output is non-linearly associated with the input. It is a preferred tool for many predictive data mining applications because of its power, flexibility, and ease of use. A standard feed-forward network (FFN) is used to predict the groundwater fluoride content. The ANN model is trained using the backpropagation algorithm, with Tansig and Logsig activation functions and varying numbers of neurons. The models are evaluated on the basis of statistical performance criteria such as the root mean squared error (RMSE), regression coefficient (R2), bias (mean error), coefficient of variation (CV), Nash-Sutcliffe efficiency (NSE), and index of agreement (IOA). The results of the study indicate that an artificial neural network (ANN) can be used for groundwater fluoride prediction with sufficiently good accuracy in limited-data situations in hard-rock regions like the western parts of Jharkhand.
Keywords: artificial neural network (ANN), feed-forward network (FFN), backpropagation algorithm, Levenberg-Marquardt algorithm, groundwater fluoride contamination
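The evaluation criteria named above have standard textbook definitions; the sketch below implements those standard formulas, which we assume match the authors' usage, on a synthetic observation/prediction pair.

```python
# Sketch of the standard definitions of the evaluation criteria named in
# the abstract (assumed to match the authors' usage).
import numpy as np

def metrics(obs, pred):
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    err = pred - obs
    rmse = np.sqrt(np.mean(err ** 2))
    bias = np.mean(err)                                    # mean error
    r2 = np.corrcoef(obs, pred)[0, 1] ** 2                 # regression coefficient
    cv = np.std(err) / np.mean(obs)                        # coefficient of variation
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)
    ioa = 1 - np.sum(err ** 2) / np.sum(
        (np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return dict(RMSE=rmse, bias=bias, R2=r2, CV=cv, NSE=nse, IOA=ioa)

print(metrics(obs=[0.8, 1.2, 1.5, 2.1, 0.9],      # fluoride, mg/L (synthetic)
              pred=[0.9, 1.1, 1.6, 1.9, 1.0]))
```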
Procedia PDF Downloads 550
1834 Algorithm Development of Individual Lumped Parameter Modelling for Blood Circulatory System: An Optimization Study
Authors: Bao Li, Aike Qiao, Gaoyang Li, Youjun Liu
Abstract:
Background: The lumped parameter model (LPM) is a common numerical model for hemodynamic calculation. An LPM uses circuit elements to simulate the human blood circulatory system, and physiological indicators and characteristics can be acquired through the model. However, because physiological indicators differ between individuals, the parameters in an LPM should be personalized so that the calculated results are convincing and reflect individual physiological information. This study aimed to develop an automatic and effective optimization method to personalize the parameters in an LPM of the blood circulatory system, which is of great significance to the numerical simulation of individual hemodynamics. Methods: A closed-loop LPM of the human blood circulatory system applicable to most persons was established based on anatomical structures and physiological parameters. The patient-specific physiological data of 5 volunteers were non-invasively collected as personalization objectives for the individual LPMs. In this study, the blood pressure and flow rate of the heart, brain, and limbs were the main concerns. The collected systolic blood pressure, diastolic blood pressure, cardiac output, and heart rate were set as objective data, and the waveforms of carotid artery flow and ankle pressure were set as objective waveforms. A sensitivity analysis of each parameter in the LPM was conducted against the collected data and waveforms to determine the sensitive parameters that have an obvious influence on the objectives. Simulated annealing was adopted to iteratively optimize the sensitive parameters, with the objective function during optimization being the root mean square error between the collected and simulated waveforms and data. Each parameter in the LPM was optimized 500 times. Results: The sensitive parameters in the LPM were optimized according to the collected data of the 5 individuals. Results show a slight error between the collected and simulated data: the average relative root mean square errors of all optimization objectives for the 5 samples were 2.21%, 3.59%, 4.75%, 4.24%, and 3.56%, respectively. Conclusions: The slight errors demonstrate the good effect of the optimization. The individual modeling algorithm developed in this study can effectively achieve the individualization of LPMs for the blood circulatory system. An LPM with individual parameters can output the individual physiological indicators after optimization, which are applicable to the numerical simulation of patient-specific hemodynamics.
Keywords: blood circulatory system, individual physiological indicators, lumped parameter model, optimization algorithm
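The optimization step described, simulated annealing over the sensitive parameters with an RMSE objective, follows the generic annealing recipe. The sketch below shows that loop for a black-box simulate() function standing in for the closed-loop LPM solver; the perturbation size and cooling schedule are assumed, not taken from the paper.

```python
# Generic simulated-annealing loop of the kind described: iteratively
# perturb sensitive parameters to minimize the RMSE between measured and
# simulated data. simulate() is a stand-in for the closed-loop LPM solver;
# perturbation size and cooling schedule are assumed.
import math
import random

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def anneal(simulate, target, params, steps=500, t0=1.0, cooling=0.99):
    best, best_cost = params[:], rmse(simulate(params), target)
    cost, temp = best_cost, t0
    for _ in range(steps):
        cand = [p * (1 + random.uniform(-0.05, 0.05)) for p in params]
        c = rmse(simulate(cand), target)
        # accept improvements always, worse moves with Boltzmann probability
        if c < cost or random.random() < math.exp((cost - c) / temp):
            params, cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
        temp *= cooling
    return best, best_cost

# Toy stand-in model: the "waveform" is a quadratic in two parameters.
simulate = lambda p: [p[0] * t + p[1] * t * t for t in range(10)]
target = simulate([2.0, 0.5])
found, err = anneal(simulate, target, [1.0, 1.0])
print(found, err)
```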
Procedia PDF Downloads 137
1833 Energy Consumption Forecast Procedure for an Industrial Facility
Authors: Tatyana Aleksandrovna Barbasova, Lev Sergeevich Kazarinov, Olga Valerevna Kolesnikova, Aleksandra Aleksandrovna Filimonova
Abstract:
We consider forecasting of energy consumption by separate production areas of a large industrial facility as well as by the facility itself. For the production areas, the forecast is made based on empirical dependencies between specific energy consumption and production output. For the facility itself, the task of minimizing the energy consumption forecasting error is addressed by reconciling the facility's actual energy consumption values, evaluated with the metering device, with the total design energy consumption of the facility's separate production areas. The suggested procedure was tested on actual data on core product output and energy consumption from a group of workshops and power plants of a large iron and steel facility. Test results show that implementation of this procedure gives a mean energy consumption forecasting error for winter 2014 of 0.11% for the group of workshops and 0.137% for the power plants.
Keywords: energy consumption, energy consumption forecasting error, energy efficiency, forecasting accuracy, forecasting
Procedia PDF Downloads 445
1832 Performance of VSAT MC-CDMA System Using LDPC and Turbo Codes over Multipath Channel
Authors: Hassan El Ghazi, Mohammed El Jourmi, Tayeb Sadiki, Esmail Ahouzi
Abstract:
The purpose of this paper is to model and analyze a geostationary satellite communication system based on a VSAT network and a multicarrier CDMA (MC-CDMA) scheme, which combines multicarrier modulation with CDMA concepts. In this study, two channel coding strategies (turbo codes and LDPC codes) are adopted to achieve good performance through iterative decoding. The envisaged system is examined for transmission over a multipath channel using the Ku band in the uplink case, and simulation results are obtained for each case. The performance of the system is given in terms of bit error rate (BER) versus the energy per bit to noise power spectral density ratio (Eb/N0). The results show that the system coded with LDPC codes achieves better error rate performance than the VSAT MC-CDMA system coded with turbo codes.
Keywords: satellite communication, VSAT network, MC-CDMA, LDPC codes, turbo codes, uplink
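BER-versus-Eb/N0 curves like those reported are typically produced by Monte Carlo simulation. The sketch below estimates the BER of uncoded BPSK over AWGN as the simplest baseline of that procedure; the full coded MC-CDMA multipath chain of the paper is far more involved and is not reproduced here.

```python
# Monte Carlo BER estimation for uncoded BPSK over AWGN -- the baseline
# procedure behind BER-vs-Eb/N0 curves. The coded MC-CDMA multipath chain
# of the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(7)
n_bits = 200_000
for ebn0_db in (0, 2, 4, 6, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    bits = rng.integers(0, 2, n_bits)
    symbols = 2 * bits - 1                          # BPSK: 0 -> -1, 1 -> +1
    noise = rng.normal(0, np.sqrt(1 / (2 * ebn0)), n_bits)  # N0/2 variance
    decided = (symbols + noise) > 0
    ber = np.mean(decided != bits)
    print(f"Eb/N0 = {ebn0_db} dB: BER ~ {ber:.2e}")
```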
Procedia PDF Downloads 504
1831 Behavioral and EEG Reactions in Children during Recognition of Emotionally Colored Sentences That Describe the Choice Situation
Authors: Tuiana A. Aiusheeva, Sergey S. Tamozhnikov, Alexander E. Saprygin, Arina A. Antonenko, Valentina V. Stepanova, Natalia N. Tolstykh, Alexander N. Savostyanov
Abstract:
The situation of choice is an important condition for the formation of essential character qualities in a child, such as initiative, responsibility, and diligence. We studied the behavioral and EEG reactions of Russian schoolchildren during recognition of syntactic errors in emotionally colored sentences describing choice situations. Twenty healthy children (mean age 9.0±0.3 years; 12 boys, 8 girls) were examined. Forty sentences were selected for the experiment, half of which contained a syntactic error. The experiment additionally had a hidden condition: 50% of the sentences described the children's own choice and were emotionally colored (positively or negatively), while the other 50% described a forced-choice situation, also with positive or negative coloring. EEG was recorded during execution of the error-recognition task. Reaction time and quality of syntactic error detection were chosen as behavioral measures, and event-related spectral perturbation (ERSP) was applied to characterize the oscillatory brain activity of the children. Two time-frequency intervals appeared in the EEG reactions: (1) 500-800 ms in the 3-7 Hz frequency range (theta synchronization) and (2) 500-1000 ms in the 8-12 Hz range (alpha desynchronization). We found that the behavioral and brain reactions during recognition of positive and negative sentences describing the forced-choice situation did not differ significantly. Theta synchronization and alpha desynchronization were stronger during recognition of sentences describing the children's own choice, especially those with negative coloring; the quality and execution time of the task were also higher for these sentences. The results of our study will be useful for improving teaching methods and for the diagnostics of affective disorders in children.
Keywords: choice situation, electroencephalogram (EEG), emotionally colored sentences, schoolchildren
Procedia PDF Downloads 268
1830 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables: historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours' worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to clean and format all the data. First, a basic spelling and grammar check was applied, along with a Python script for normalized formatting that used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words often incorrectly translated during transcription were recorded and replaced throughout all other documents. Then, to remove banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker), and all paragraphs with fewer than 300 characters were removed. Second, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph of every interview, and every proper noun was put into a data structure corresponding to that interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been cut from about 60 hours' worth of data to 20. The data was further processed through light manual observation: any summaries that fit the criteria of the proposed deliverable were selected, along with their locations within the documents. This narrowed the data down to about 5 hours' worth of processing. The qualitative researchers were then able to find 8 more connections in addition to the previous 4, exceeding the minimum quota of 3 required to satisfy the grant. The curation of this methodology also raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity: if the model has too much knowledge, the user risks leaving out important data (too general); if the tool is too specific, it has not seen enough data to be useful. This methodology proposes a solution to this trade-off. The data is never altered beyond grammatical and spelling checks; instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. The data is also chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can review the raw data instead of highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
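A condensed sketch of the described pipeline follows: drop short paragraphs, collect proper nouns, and BART-summarize the paragraphs that mention them. The spaCy proper-noun step and the facebook/bart-large-cnn checkpoint are our assumptions, since the abstract does not name the specific libraries used.

```python
# Hedged sketch of the described pipeline: drop paragraphs under 300
# characters, collect proper nouns, then BART-summarize paragraphs that
# mention them. spaCy and the facebook/bart-large-cnn checkpoint are our
# assumptions; the abstract does not name specific libraries.
import spacy                         # requires: python -m spacy download en_core_web_sm
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def process(transcript: str):
    paragraphs = [p.strip() for p in transcript.split("\n\n")
                  if len(p.strip()) >= 300]            # remove banter
    proper_nouns = {tok.text for p in paragraphs
                    for tok in nlp(p) if tok.pos_ == "PROPN"}
    summaries = []
    for i, p in enumerate(paragraphs):
        if any(name in p for name in proper_nouns):
            out = summarizer(p, max_length=60, min_length=15, do_sample=False)
            summaries.append((i, out[0]["summary_text"]))
    return summaries   # (location, summary) pairs for manual review
```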
Procedia PDF Downloads 28
1829 Types of Feedback and Their Effectiveness in an EFL Context in Iran
Authors: Adel Ebrahimpourtaher, Saeede Eisaie
Abstract:
This study was an attempt to investigate the types of feedback most frequently provided to students and their effectiveness, based on the students' preferences established through interviews conducted after the treatment. For this purpose, several class sessions of institute students studying general English (pre-intermediate level) were recorded by the teacher for analysis of the feedback. The results of the analysis of the transcriptions indicated that recast is the feedback type most frequently used by the teacher. In addition, the interviews indicated that most of the students prefer recast, as well as elicitation and explicit correction to some extent.
Keywords: EFL, elicitation, explicit, recast, feedback
Procedia PDF Downloads 365
1828 Influence of Scalable Energy-Related Sensor Parameters on Acoustic Localization Accuracy in Wireless Sensor Swarms
Authors: Joyraj Chakraborty, Geoffrey Ottoy, Jean-Pierre Goemaere, Lieven De Strycker
Abstract:
Sensor swarms can be a cost-effective and more user-friendly alternative to location-based service systems in applications such as health care. To increase the lifetime of such swarm networks, the energy consumption should be scaled to the required localization accuracy. In this paper, we investigate parameters for an energy model that couples localization accuracy to energy-related sensor parameters such as signal length, bandwidth, and sample frequency. The goal is to use the model for the localization of undetermined environmental sounds by means of wireless acoustic sensors. We first give an overview of TDOA-based localization together with the primary sources of TDOA error (including reverberation effects and noise). We then show that in localization the signal sample rate can be under the Nyquist frequency, provided that enough frequency components remain present in the undersampled signal, and that the resulting localization error is comparable with that of similar localization systems.
Keywords: sensor swarms, localization, wireless sensor swarms, scalable energy
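The TDOA between two sensors is usually estimated from the peak of the cross-correlation of their signals. The minimal numpy sketch below shows that estimate with plain cross-correlation (the abstract does not specify a GCC weighting, so none is assumed here).

```python
# Minimal TDOA estimate from the cross-correlation peak of two sensor
# signals. Plain correlation is used; the abstract does not specify a
# particular GCC weighting.
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Delay of sig_b relative to sig_a, in seconds."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

fs = 8000                        # sample rate (may sit below Nyquist, per paper)
t = np.arange(0, 0.1, 1 / fs)
src = np.sin(2 * np.pi * 440 * t) * np.exp(-20 * t)   # toy source burst
delay_samples = 12
a = np.pad(src, (0, delay_samples))
b = np.pad(src, (delay_samples, 0))                   # delayed copy of a
print(f"estimated TDOA: {tdoa(a, b, fs) * 1e3:.3f} ms "
      f"(true {delay_samples / fs * 1e3:.3f} ms)")
```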
Procedia PDF Downloads 422
1827 Neural Network Approaches for Sea Surface Height Predictability Using Sea Surface Temperature
Authors: Luther Ollier, Sylvie Thiria, Anastase Charantonis, Carlos E. Mejia, Michel Crépon
Abstract:
Sea surface height anomaly (SLA) is a signature of the sub-mesoscale dynamics of the upper ocean. Sea surface temperature (SST) is driven by these dynamics and can be used to improve the spatial interpolation of SLA fields. In this study, we focused on the temporal evolution of SLA fields and explored the capacity of deep learning (DL) methods to predict short-term SLA fields using SST fields. We used simulated daily SLA and SST data from the Mercator Global Analysis and Forecasting System, with a resolution of (1/12)° in the North Atlantic Ocean (26.5-44.42°N, -64.25 to -41.83°E), covering the period from 1993 to 2019. Using a slightly modified image-to-image convolutional DL architecture, we demonstrated that SST is a relevant variable for controlling the SLA prediction. With a learning process inspired by the teacher-forcing method, we managed to improve the five-day SLA forecast by using the SST fields as additional information. We obtained prediction errors of 12 cm (20 cm) for the SLA evolution at scales smaller than mesoscales and at time scales of 5 days (20 days), respectively. Moreover, the information provided by the SST allows us to limit the SLA error to 16 cm at 20 days when learning the trajectory.
Keywords: deep learning, altimetry, sea surface temperature, forecast
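A minimal image-to-image convolutional network of the general kind described (SST and current SLA fields in, a future SLA field out) is sketched below in PyTorch; the layer layout and channel counts are generic placeholders, not the authors' architecture.

```python
# Minimal image-to-image convolutional network of the general kind
# described (SST + current SLA fields in, future SLA field out). Layer
# sizes and channels are placeholders, not the authors' architecture.
import torch
import torch.nn as nn

class SLAPredictor(nn.Module):
    def __init__(self, in_channels=2):          # e.g. SST + current SLA
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),     # SLA field at t + 5 days
        )

    def forward(self, x):
        return self.net(x)

model = SLAPredictor()
fields = torch.randn(4, 2, 64, 64)              # batch of SST/SLA patches
print(model(fields).shape)                      # -> torch.Size([4, 1, 64, 64])
```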
Procedia PDF Downloads 90
1826 Models Comparison for Solar Radiation
Authors: Djelloul Benatiallah
Abstract:
Due to current high consumption and recent industrial growth, supplies of fossil and natural energy sources like oil, gas, and uranium are declining. Because of pollution and climate change, a swift switch to renewable energy sources is needed, and research on renewable energy is being done to meet energy needs. Solar energy is one of the renewable resources that could currently meet all of the world's energy needs. In most parts of the world, solar energy is a free and unlimited resource that can be used in a variety of ways, including photovoltaic systems for the generation of electricity and thermal systems for the generation of heat, such as the residential sector's production of hot water. In this article, we conduct a comparison: the first step entails identifying two empirical models that enable us to estimate the daily irradiation on a horizontal plane; we then compare them using data obtained from measurements made at the Adrar site over the four distinct seasons. A comparison of the results obtained by simulating the two models shows that model 2 provides a better estimate of the global solar components, with a mean absolute error of less than 7%, a correlation coefficient of more than 0.95, a relative bias error of less than 6% in absolute value, and a relative RMSE of less than 10%.
Keywords: solar radiation, renewable energy, fossil, photovoltaic systems
Procedia PDF Downloads 79
1825 An Improved Prediction Model of Ozone Concentration Time Series Based on Chaotic Approach
Authors: Nor Zila Abd Hamid, Mohd Salmi M. Noorani
Abstract:
This study focuses on the development of prediction models for ozone concentration time series. The prediction model is built based on a chaotic approach. First, the chaotic nature of the time series is detected by means of a phase space plot and the Cao method. Then, the prediction model is built, and the local linear approximation method is used for forecasting purposes. A traditional autoregressive linear prediction model is also built, and an improvement to the local linear approximation method is introduced. The prediction models are applied to hourly ozone time series observed at a benchmark station in Malaysia. Comparison of all models through calculation of the mean absolute error, root mean squared error, and correlation coefficient shows that the model with the improved prediction method is the best. Thus, the chaotic approach is a good approach for developing a prediction model for ozone concentration time series.
Keywords: chaotic approach, phase space, Cao method, local linear approximation method
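The chaotic approach rests on delay-coordinate embedding followed by local linear prediction from nearest neighbours in phase space. The sketch below shows that scheme with a fixed embedding dimension and delay; in the paper these would come from the Cao method, and the data here are a synthetic chaotic benchmark, not the ozone series.

```python
# Sketch of delay-coordinate embedding plus local linear prediction.
# Embedding dimension and delay are fixed here for illustration; in the
# paper they are chosen via the Cao method.
import numpy as np

def embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def local_linear_predict(x, dim=3, tau=1, k=10):
    X = embed(x, dim, tau)
    targets = x[(dim - 1) * tau + 1:]           # next value for each vector
    X_hist, query = X[:-1], X[-1]
    idx = np.argsort(np.linalg.norm(X_hist - query, axis=1))[:k]
    A = np.column_stack([np.ones(k), X_hist[idx]])   # affine local model
    coef, *_ = np.linalg.lstsq(A, targets[idx], rcond=None)
    return coef[0] + query @ coef[1:]

# Toy series: noisy logistic map (a standard chaotic benchmark)
rng = np.random.default_rng(3)
x = np.empty(500); x[0] = 0.4
for i in range(499):
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
x += rng.normal(0, 0.005, 500)
print("one-step forecast:", local_linear_predict(x))
```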
Procedia PDF Downloads 331
1824 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis
Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior
Abstract:
Mathematical models of drying are used to understand the drying process and to determine important parameters for the design and operation of the dryer. Jackfruit is a fruit with high consumption in the Northeast and high perishability, so techniques must be applied to preserve it for longer in order to spread its use to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) to indicate the one that best fits the conditions of the convective drying process, using performance indicators associated with each model: accuracy (Af) and bias (Bf) factors, root mean square error (RMSE), and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it needs four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
Keywords: drying, models, jackfruit, biotechnology
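The Page model expresses the moisture ratio as MR = exp(−k·tⁿ), and the accuracy and bias factors have standard log-ratio definitions (Ross-style, assumed here to match the paper's usage). The sketch below fits the model and computes the indicators on synthetic data; the experimental drying curve from the paper is not reproduced.

```python
# Fit the Page thin-layer drying model MR = exp(-k * t**n) and compute the
# accuracy (Af) and bias (Bf) factors (standard log-ratio definitions,
# assumed to match the paper's usage). Data below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

page = lambda t, k, n: np.exp(-k * t ** n)

t = np.linspace(0.5, 9, 12)                    # drying time, h
mr_obs = page(t, 0.35, 1.1) * (1 + np.random.default_rng(5).normal(0, 0.01, 12))

(k, n), _ = curve_fit(page, t, mr_obs, p0=[0.1, 1.0])
mr_pred = page(t, k, n)

log_ratio = np.log10(mr_pred / mr_obs)
af = 10 ** np.mean(np.abs(log_ratio))          # accuracy factor
bf = 10 ** np.mean(log_ratio)                  # bias factor
rmse = np.sqrt(np.mean((mr_pred - mr_obs) ** 2))
sep = 100 * rmse / np.mean(mr_obs)             # % standard error of prediction
print(f"k={k:.3f}, n={n:.3f}, Af={af:.3f}, Bf={bf:.3f}, "
      f"RMSE={rmse:.4f}, %SEP={sep:.2f}")
```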
Procedia PDF Downloads 379
1823 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use, and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions, and gross domestic product (GDP) for Iran, using time series analysis over the period 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a vector error correction model (VECM) for both short- and long-run causality among the research variables. All variables of this study, except the CO2 emissions, show significant effects on the GDP in the country for the long term. The long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on the GDP, while the consumption of electricity and coal have adverse impacts on the GDP in the long term. In the short run, electricity use enhances the GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations can differ by energy source in the case of Iran over the period 1980-2010. However, there is no significant relationship between the CO2 emissions and the GDP, or between the CO2 emissions and energy use, in either the short term or the long term.
Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis
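The testing sequence described, ADF stationarity tests, Johansen cointegration, then a VECM, maps directly onto statsmodels. A hedged sketch follows; the synthetic random walks stand in for the GDP, energy, and CO2 series, which are not reproduced from the paper.

```python
# Sketch of the described testing sequence with statsmodels: ADF unit-root
# tests, Johansen cointegration, then a VECM. Synthetic random walks stand
# in for the GDP, energy-use, and CO2 series.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(11)
common = np.cumsum(rng.normal(size=200))            # shared stochastic trend
data = np.column_stack([common + rng.normal(size=200) for _ in range(3)])

for i, name in enumerate(["GDP", "energy", "CO2"]):
    stat, pval, *_ = adfuller(data[:, i])
    print(f"ADF {name}: p = {pval:.3f}")            # p > 0.05 -> unit root

joh = coint_johansen(data, det_order=0, k_ar_diff=1)
print("Johansen trace stats:", joh.lr1.round(2))    # compare with joh.cvt

vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print("adjustment coefficients (alpha):\n", vecm.alpha.round(3))
```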
Procedia PDF Downloads 592
1822 Long-Term Results of Surgical Treatment of Atrial Fibrillation in Patients with Coronary Heart Disease: One Center Experience
Authors: Emil Sakharov, Alex Zotov, Ilkin Osmanov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Since 2015, our center has been actively implementing methods of surgical correction of atrial fibrillation, in particular in patients with coronary heart disease. This study presents a comparative analysis of the late postoperative period in patients with coronary artery bypass grafting and atrial fibrillation. Methods: The study included 150 patients with ischemic heart disease and atrial fibrillation treated over the period from 2015 to 2021. Patients were divided into 2 groups. The first group comprised patients with ischemic heart disease and atrial fibrillation who underwent coronary bypass surgery and surgical correction of atrial fibrillation (N=50). The second group comprised patients with ischemic heart disease and atrial fibrillation who underwent only myocardial revascularization (N=100). Patients were comparable in age, gender, and initial severity of their condition. In group 1, 82% of the patients were men, while in group 2 the figure was 75%. In the first group, 36% had persistent atrial fibrillation and 20% had long-standing persistent atrial fibrillation; in the second group, the figures were 10% and 17%, respectively. Results: Average follow-up for groups 1 and 2 was 47 months. There were no complications such as bleeding or stroke in group 1, and only 1 patient in group 1 died from cardiovascular disease. Freedom from atrial fibrillation was achieved in 82% without antiarrhythmic drug (AAD) therapy. In group 2, 8 patients died from cardiovascular disease, and overall freedom from atrial fibrillation was achieved in 35% of patients, of whom 42.8% received additional AAD therapy. Follow-up data are presented in Table 2. Progression of heart failure was observed in 3% in group 1 and 7% in group 2. Combined endpoints (recurrence of AF, stroke, progression of heart failure, myocardial infarction) were reached in 16% in group 1 and 34% in group 2, respectively. Freedom from atrial fibrillation without antiarrhythmic therapy was 82% for group 1 and 35% for group 2, and the first group showed a more pronounced decrease in heart failure rates. Deaths from cardiovascular causes were recorded in 2% for group 1 and 7% for group 2. Conclusion: Surgical treatment of atrial fibrillation helps to reduce adverse complications in the late postoperative period and contributes to the regression of heart failure.
Keywords: atrial fibrillation, coronary artery bypass grafting, ischemic heart disease, heart failure
Procedia PDF Downloads 119
1821 Photo-Fenton Decolorization of Methylene Blue Adsolubilized on Co2+-Embedded Alumina Surface: Comparison of Process Modeling through Response Surface Methodology and Artificial Neural Network
Authors: Prateeksha Mahamallik, Anjali Pal
Abstract:
In the present study, Co(II)-adsolubilized surfactant-modified alumina (SMA) was prepared, and methylene blue (MB) degradation was carried out on the Co-SMA surface by a visible-light photo-Fenton process. The entire reaction proceeded on the solid surface, as the MB was embedded on the Co-SMA surface, and it followed zero-order kinetics. Response surface methodology (RSM) and an artificial neural network (ANN) were used to model the decolorization of MB by the photo-Fenton process as a function of the dose of Co-SMA (10, 20, and 30 g/L), the initial concentration of MB (10, 20, and 30 mg/L), the concentration of H2O2 (174.4, 348.8, and 523.2 mM), and the reaction time (30, 45, and 60 min). The prediction capabilities of the two methodologies (RSM and ANN) were compared on the basis of the correlation coefficient (R2), root mean square error (RMSE), standard error of prediction (SEP), and relative percent deviation (RPD). Owing to its lower RMSE (1.27), SEP (2.06), and RPD (1.17) and higher R2 (0.9966), the ANN proved to be more accurate than RSM in predicting the decolorization efficiency.
Keywords: adsolubilization, artificial neural network, methylene blue, photo-Fenton process, response surface methodology
Procedia PDF Downloads 254
1820 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement
Authors: Pogula Rakesh, T. Kishore Kumar
Abstract:
Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying an optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a recursive least squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble, and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of mean squared error (MSE), signal-to-noise ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be the better optimal noise cancellation technique for speech signals.
Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss
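The standard RLS update equations behind such an adaptive noise canceller are compact enough to sketch directly. Below is a minimal textbook implementation: the filter maps a noise reference onto the noise embedded in the primary (speech + noise) signal, and the error output is the enhanced speech. The filter order, forgetting factor, and toy signals are assumed values, not the paper's setup.

```python
# Minimal textbook RLS adaptive noise canceller. Filter order, forgetting
# factor, and the toy signals are assumed values, not the paper's setup.
import numpy as np

def rls_canceller(primary, reference, order=8, lam=0.999, delta=0.01):
    w = np.zeros(order)
    P = np.eye(order) / delta
    out = np.zeros(len(primary))
    for i in range(order, len(primary)):
        u = reference[i - order + 1:i + 1][::-1]   # regressor, newest first
        k = P @ u / (lam + u @ P @ u)              # gain vector
        e = primary[i] - w @ u                     # a priori error = speech estimate
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
        out[i] = e
    return out

fs = 8000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 300 * t) * (t % 0.25 < 0.15)      # toy "speech"
noise_ref = np.random.default_rng(9).normal(0, 1, fs)
noise = np.convolve(noise_ref, [0.6, -0.3, 0.1])[:fs]         # unknown path
enhanced = rls_canceller(speech + noise, noise_ref)
print("residual noise power:", np.var(enhanced - speech).round(4))
```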
Procedia PDF Downloads 481
1819 Analysis of Human Mental and Behavioral Models for Development of an Electroencephalography-Based Human Performance Management System
Authors: John Gaber, Youssef Ahmed, Hossam A. Gabbar, Jing Ren
Abstract:
Accidents at nuclear power plants (NPPs) occur due to various factors, notable among them poor safety management and poor safety culture. During abnormal situations, the likelihood of human error is many times higher due to the higher cognitive workload, and the most common cause of human error and high cognitive workload is mental fatigue. Electroencephalography (EEG) is a method of recording the electrical signals produced by the human brain. We propose a safety system that monitors brainwaves for signs of mental fatigue using an EEG system. This requires an analysis of the mental model of the NPP operator, of the changes in brainwave power in response to certain stimuli, and of the risk factors for mental fatigue and attention that NPP operators face when performing their tasks. We analyzed these factors and developed an EEG-based monitoring system that aims to alert NPP operators when their levels of mental fatigue and attention hinder their ability to maintain safety.
Keywords: brain imaging, EEG, power plant operator, psychology
Procedia PDF Downloads 101
1818 Punishment in Athenian Forensic Oratory
Authors: Eleni Volonaki
Abstract:
In Athenian forensic speeches, argumentation on the punishment of wrongdoers constitutes a fundamental ideal of exacting justice in court. The present paper explores the variety of approaches to punishment as a means of reformation, revenge, correction, education, example, and a chance to restore justice. As will be shown, all these approaches reflect the social and political ideology of Athenian justice in the classical period, and they enhance the role of the courts and the importance of rhetoric in the process of decision-making. Punishment entails a wide range of penalties, but also of ideological principles related to the Athenian constitution of democracy.
Keywords: punishment, Athenian forensic speeches, justice, Athenian democracy
Procedia PDF Downloads 189
1817 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones
Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar
Abstract:
Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones that provides better agreement with experimental results than existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet-to-inlet count ratio. As in mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model also includes the external forces acting on the particle (i.e., centrifugal and drag forces), and it evaluates the exact length that the particle travels inside the cyclone in order to determine the number of turns inside the cyclone. The derivation of the separation efficiency model using Stokes' law considers the effect of the inlet tangential velocity on the separation performance. In cyclones, the inlet velocity is a very important factor in determining separation performance, and the proposed model therefore provides an accurate estimate of actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 micron to 10 microns. The proposed model is compared with results in the literature; it is shown that the error between the model's efficiency and the efficiency obtained from experimental results for 1-micron particles is 7%. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additional Information: The proposed model determines the separation efficiency accurately and could also be used to optimize the separation efficiency of cyclones at low cost through trial-and-error testing, through dimensional changes to enhance separation, and through increasing the particle centrifugal forces. Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
Keywords: cyclone efficiency, solid-gas separation, mathematical model, models error comparison
Procedia PDF Downloads 392
1816 Imperfect Production Inventory Model with Inspection Errors and Fuzzy Demand and Deterioration Rates
Authors: Chayanika Rout, Debjani Chakraborty, Adrijit Goswami
Abstract:
Our work presents an inventory model that describes imperfect production and imperfect inspection processes for deteriorating items. A cost-minimizing model is studied considering two types of inspection errors: a Type I error of falsely screening out a proportion of non-defects, thereby passing them on for rework, and a Type II error of falsely not screening out a proportion of defects, thus selling those to customers, which incurs a penalty cost. The screened items are reworked; however, no returns are entertained due to the deteriorating nature of the items. In practical situations, certain parameters, such as the demand rate and the deterioration rate of inventory, cannot be accurately determined, and they are therefore assumed to be triangular fuzzy numbers in our model. We calculate the optimal lot size that must be produced in order to minimize the total inventory cost for both the crisp and the fuzzy models. A numerical example is also considered to exemplify the procedure, followed by an analysis of the sensitivity of various parameters on the decision variable and the objective function.
Keywords: deteriorating items, EPQ, imperfect quality, rework, type I and type II inspection errors
Procedia PDF Downloads 182
1815 Improved Acoustic Source Sensing and Localization Based on Robot Locomotion
Authors: V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta
Abstract:
This paper presents a methodology for acoustic source sensing and localization in an unknown environment. The developed methodology includes an acoustic-based sensing and localization system, converging target localization based on recursive direction-of-arrival (DOA) error minimization, and a regressive obstacle avoidance function. Our method is able to augment existing, proven localization techniques and improve results incrementally by utilizing robot locomotion, and it is capable of converging to a position estimate with greater accuracy using fewer measurements. The results also show the DOA error minimization at each iteration, an improvement in the time to reach the destination, and the efficiency of this target localization method in gradually converging to the real target position. Initially, the system is tested using a Kinect mounted on a turntable with DOA markings, which serve as ground truth; our approach is then validated using a FireBird VI (FBVI) mobile robot on which a Kinect is used to obtain bearing information.
Keywords: acoustic source localization, acoustic sensing, recursive direction of arrival, robot locomotion
Procedia PDF Downloads 492
1814 Virtual Chemistry Laboratory as Pre-Lab Experiences: Stimulating Student's Prediction Skill
Authors: Yenni Kurniawati
Abstract:
Prediction skill in chemistry experiments is an important skill for pre-service chemistry students, as it stimulates reflective thinking at each stage of many chemistry experiments, both qualitatively and quantitatively. A virtual chemistry laboratory was designed to give students opportunities and time to practice many kinds of chemistry experiments repeatedly, anywhere and anytime, before they perform a real experiment. The virtual chemistry laboratory content was constructed using the Model of Educational Reconstruction and developed to enhance students' ability to predict experiment results and analyze the causes of error, calculating accuracy and precision while using chemicals carefully. This research showed a change in how students made decisions: they became extremely careful about accuracy but still showed low concern for precision. On average, the approach raised students' level of reflective thinking related to their prediction skill by 1 to 2 stages. Most of them could predict the characteristics of the product of an experiment, and even whether the result was going to be erroneous. In addition, they took the experiments more seriously and were more curious about the experimental results. This study recommends applying the approach to different subject matter to provide more opportunities for students to learn about other kinds of chemistry experiment designs.
Keywords: virtual chemistry laboratory, chemistry experiments, prediction skill, pre-lab experiences
Procedia PDF Downloads 340
1813 Collision Theory Based Sentiment Detection Using Discourse Analysis in Hadoop
Authors: Anuta Mukherjee, Saswati Mukherjee
Abstract:
Data is growing every day. Social networking sites such as Twitter are becoming an integral part of our daily lives and contribute a large share of this growth. Twitter is a rich source for sentiment detection and mining in particular, since people often express honest opinions through tweets. However, although sentiment analysis is a well-researched topic for text, analysis of Twitter data poses additional challenges, since tweets are unstructured, contain abbreviations, and lack strict grammatical correctness. We have employed collision theory to achieve sentiment analysis of Twitter data, and we have incorporated discourse analysis into the collision-theory-based model to detect accurate sentiment from tweets. We have also used the retweet field to assign weights to certain tweets and obtained the overall weightage of a topic provided in the form of a query. Hadoop has been exploited for speed. Our experiments show effective results.
Keywords: sentiment analysis, Twitter, collision theory, discourse analysis
Procedia PDF Downloads 535
1812 Mixed Integer Programming-Based One-Class Classification Method for Process Monitoring
Authors: Younghoon Kim, Seoung Bum Kim
Abstract:
One-class classification plays an important role in detecting outliers and abnormalities among normal observations. In previous research, several attempts were made to extend the scope of application of one-class classification techniques to statistical process control problems. For most previous approaches, such as the support vector data description (SVDD) control chart, the design of the control limits is commonly based on the assumption that the proportion of abnormal observations is approximately equal to an expected Type I error rate in the Phase I process. Because of the limitation of one-class classification techniques based on convex optimization, we cannot make the proportion of abnormal observations exactly equal to the expected Type I error rate: controlling the Type I error rate requires optimizing constraints with integer decision variables, which convex optimization cannot accommodate. This limitation is undesirable, from both theoretical and practical perspectives, for constructing effective control charts. In this work, to address the limitation of previous approaches, we propose a one-class classification algorithm based on mixed integer programming, which can solve problems formulated with both continuous and integer decision variables. The proposed method minimizes the radius of a spherically shaped boundary subject to the constraint that the number of enclosed normal observations equals a constant value specified by the user. By modifying this constant value, users can exactly control the proportion of normal data described by the spherically shaped boundary; thus, the proportion of abnormal observations can be made theoretically equal to the expected Type I error rate in the Phase I process. Moreover, analogously to SVDD, the boundary can be made to describe complex structures by using kernel functions. A new multivariate control chart exploiting the algorithm is proposed. This chart uses a monitoring statistic that characterizes the degree to which a point is abnormal, as obtained through the proposed one-class classification, and its control limit is established by the radius of the boundary. The usefulness of the proposed method was demonstrated through experiments with simulated data and real process data from a thin film transistor-liquid crystal display process.
Keywords: control chart, mixed integer programming, one-class classification, support vector data description
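The core constraint, choosing the smallest sphere that encloses exactly a user-specified number of normal points, reduces in the simplest fixed-centre case to taking an order statistic of the distances. The sketch below illustrates that exact Type I error control; the paper's full MIP additionally optimizes the centre and admits kernels, which this simplification omits.

```python
# Simplified illustration of exact Type I error control: with the centre
# fixed at the mean, the smallest sphere containing exactly n*(1 - alpha)
# points has radius equal to that order statistic of the distances. The
# paper's full MIP also optimizes the centre and admits kernels.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))               # Phase I "normal" observations
alpha = 0.05                                 # target Type I error rate

center = X.mean(axis=0)
d = np.linalg.norm(X - center, axis=1)
n_inside = int(np.floor(len(X) * (1 - alpha)))
radius = np.sort(d)[n_inside - 1]            # control limit of the chart

print("radius:", radius.round(3))
print("empirical Type I rate:", np.mean(d > radius))   # == alpha by design

x_new = rng.normal(size=4) * 2.5             # new observation to monitor
flag = np.linalg.norm(x_new - center) > radius
print("out-of-control:", bool(flag))
```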
Procedia PDF Downloads 174
1811 Changes of First-Person Pronoun Pragmatic Functions in Three Historical Chinese Texts
Authors: Cher Leng Lee
Abstract:
The existence of multiple first-person pronouns (1PPs) in classical Chinese is an issue that has not been resolved, despite linguists approaching it from a grammatical perspective. This paper proposes pragmatics as a viable solution. There is also a lack of research exploring the evolving usage patterns of 1PPs within the historical context of Chinese language use; such research can help us comprehend the changes and developments of these linguistic elements. To fill these research gaps, we use a diachronic pragmatics approach to contrast the functions of Chinese 1PPs in three representative texts from three different historical periods: The Analects (the Spring and Autumn period), The Grand Scribe's Records (Grand Records) (the Qin and Han period), and A New Account of Tales of the World (New Account) (the Wei, Jin, and Southern and Northern period). The 1PPs in these texts are manually identified and classified according to their pragmatic functions in the given contexts, in order to observe their historical changes, understand the factors that contributed to these changes, and provide possible answers as to how wo developed into the only 1PP in today's spoken Mandarin.
Keywords: historical, Chinese, pronouns, pragmatics
Procedia PDF Downloads 54
1810 Numerical Study on Ultimate Capacity of Bi-Modulus Beam-Column
Authors: Zhiming Ye, Dejiang Wang, Huiling Zhao
Abstract:
The development of technology demands higher-level research on the mechanical behavior of materials. Structural members made of bi-modulus materials have different elastic moduli under tension and compression: the stress and strain state of a point affects the elastic modulus and Poisson's ratio at every point in the bi-modulus material body. The uncertainty and nonlinearity of the elastic constitutive relation make the analysis of bi-modulus members a complicated nonlinear problem. In this paper, small-displacement and large-displacement finite element methods for bi-modulus members are proposed, with displacement nonlinearity considered in the elastic constitutive equation. The mechanical behavior of slender bi-modulus beam-columns under different boundary conditions and loading patterns has been simulated by the proposed method, and the factors influencing the ultimate bearing capacity of slender beams and columns have been studied. The results show that as the ratio of tensile modulus to compressive modulus increases, the error of a simulation employing the equal-modulus theory exceeds the engineering permissible error.
Keywords: bi-modulus, ultimate capacity, beam-column, nonlinearity
Procedia PDF Downloads 411