Search results for: total vector error
10513 The Effectivity of Lime Juice on the Cooked Rice's Shelf-Life
Authors: Novriyanti Lubis, Riska Prasetiawati, Nuriani Rahayu
Abstract:
The effectiveness of lime juice in extending the shelf-life of cooked rice was investigated. The aim was to determine the optimal conditions, namely the lime juice concentration used as a preservative and the shelf-life of cooked rice stored in a rice warmer. Effectiveness was assessed by total bacterial colony counts and by physical examination. The lime juice concentrations tested were 0%, 0.46%, 0.93%, 1.40%, and 1.87%. The quality of the cooked rice (colour, smell, and flavour) was observed every 12 hours, and the total colony count every 24 hours. Based on the bacterial colony counts and the physical observations, the optimum concentration for effectively preserving cooked rice was 0.93%.
Keywords: bacteriology, cooked rice, lime juice, preservative
Procedia PDF Downloads 336
10512 On the Relation between λ-Symmetries and μ-Symmetries of Partial Differential Equations
Authors: Teoman Ozer, Ozlem Orhan
Abstract:
This study deals with symmetry group properties and conservation laws of partial differential equations. We give a geometrical interpretation of the notion of μ-prolongations of vector fields and of the related concept of μ-symmetry for partial differential equations. We show that these are useful in providing symmetry reductions of partial differential equations and systems and in constructing invariant solutions.
Keywords: λ-symmetry, μ-symmetry, classification, invariant solution
Procedia PDF Downloads 319
10511 Analysis of Filtering in Stochastic Systems on Continuous-Time Memory Observations in the Presence of Anomalous Noises
Authors: S. Rozhkova, O. Rozhkova, A. Harlova, V. Lasukov
Abstract:
For the optimal unbiased mean-square filter, in the case where anomalous noises act in the observation memory channel, we prove that the filter is insensitive to inaccurate knowledge of the anomalous noise intensity matrix and that it is equivalent to a truncated filter built only from the non-anomalous components of the observation vector.
Keywords: mathematical expectation, filtration, anomalous noise, memory
Procedia PDF Downloads 362
10510 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology
Authors: Ugwu O. C., Mamah R. O., Awudu W. S.
Abstract:
This work aims to enhance signal reception and minimize outage probability in a mobile radio network using adaptive beamforming antenna arrays. An empirical real-time drive measurement was carried out in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station KJA 004. The measurements included Received Signal Strength and Bit Error Rate, recorded to characterise the signal strength of the network at the time of the study. Received Signal Strength and Bit Error Rate were measured with a spectrum-monitoring van, with the help of a ray tracer, at 100-meter intervals up to 700 meters from the transmitting base station. Distance and angular location relative to the reference base station were obtained with the help of the Global Positioning System (GPS). The other equipment used included transmitting-equipment measurement software (TEMS), laptops, and log files recording received signal strength against distance from the base station. The real-time measurements indicated an outage of about 11%, showing that mobile radio networks are prone to signal failure; this can be minimized using an adaptive beamforming antenna array, which gives a significant reduction in Bit Error Rate and thus improved performance of the mobile radio network. In addition to the empirical measurements, enhanced mathematical models were developed and implemented as reference models for accurate prediction. The proposed signal models were based on continuous-time, discrete-space analysis and some further assumptions. The proposed models were validated using MATLAB (version 7.6.3.35) and compared with a conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
Keywords: beamforming algorithm, adaptive beamforming, simulink, reception
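Illustrative sketch (not from the paper): the following minimal least-mean-squares (LMS) adaptive beamformer shows the general weight-adaptation idea behind adaptive beamforming antenna arrays. The array geometry, angles, powers, and step size are all assumed values chosen only for demonstration.

```python
# Minimal LMS adaptive beamforming sketch: an 8-element uniform linear array
# adapts its weights toward a pilot arriving from 20 deg while an interferer
# arrives from -40 deg. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
M, N, d = 8, 2000, 0.5                      # elements, snapshots, spacing (wavelengths)

def steering(theta_deg):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(theta))

a_sig, a_int = steering(20), steering(-40)
pilot = np.exp(1j * 2 * np.pi * 0.01 * np.arange(N))                      # known reference signal
interf = np.sqrt(10) * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
X = np.outer(a_sig, pilot) + np.outer(a_int, interf) + noise              # array snapshots

w = np.zeros(M, dtype=complex)
mu = 1e-3                                   # LMS step size
for n in range(N):
    x = X[:, n]
    e = pilot[n] - np.vdot(w, x)            # error against the reference signal
    w += mu * np.conj(e) * x                # LMS weight update

print("response toward 20 deg :", round(abs(np.vdot(w, steering(20))), 2))
print("response toward -40 deg:", round(abs(np.vdot(w, steering(-40))), 2))  # suppressed
```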
Procedia PDF Downloads 41
10509 Reliability and Validity for Measurement of Body Composition: A Field Method
Authors: Ahmad Hashim, Zarizi Ab Rahman
Abstract:
Field methods for measuring body composition rely on several popular instruments for estimating the percentage of body fat. Among the instruments used are the Body Mass Index, Bio-Impedance Analysis, and the Skinfold Test. All three instruments involve low cost, do not require high technical skill, are mobile, save time, and are suitable for use in large populations. Since all three instruments can estimate the percentage of body fat, it is important to identify which is the most appropriate and the most reliable. Hence, this study was conducted to determine the reliability and convergent validity of the instruments. A total of 40 students, males and females aged between 13 and 14 years, participated in this study. The test-retest (Pearson correlation) reliability of the three instruments was very high, r = .99. Inter-class reliability was also high, with r = .99 for Body Mass Index and Bio-Impedance Analysis and r = .96 for the Skinfold Test. Intra-class reliability coefficients were likewise high: r = .99 for Body Mass Index, r = .97 for Bio-Impedance Analysis, and r = .90 for the Skinfold Test. However, the Standard Error of Measurement indicated that the Body Mass Index is the most appropriate instrument, with a mean value of .000672, compared with the other instruments. The findings show that the Body Mass Index is the most accurate and reliable instrument for estimating body fat percentage in the population studied.
Keywords: reliability, validity, body mass index, bio impedance analysis and skinfold test
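Illustrative sketch (not from the paper): the test-retest reliability and standard error of measurement (SEM) reported above can be computed as follows. The body-fat values are invented for demonstration only.

```python
# Test-retest reliability (Pearson r) and SEM = SD * sqrt(1 - r),
# using made-up body-fat estimates (%).
import numpy as np
from scipy import stats

trial1 = np.array([18.2, 22.5, 15.9, 30.1, 25.4, 19.8, 27.3, 21.0])  # day 1
trial2 = np.array([18.0, 22.9, 16.1, 29.8, 25.0, 20.1, 27.6, 21.3])  # day 2

r, _ = stats.pearsonr(trial1, trial2)      # test-retest reliability
sd = np.std(trial1, ddof=1)                # between-subject SD of trial 1
sem = sd * np.sqrt(1 - r)                  # standard error of measurement
print(f"test-retest r = {r:.3f}, SEM = {sem:.3f} % body fat")
```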
Procedia PDF Downloads 335
10508 Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model
Authors: Dojoon Jung, Kiwoong Moon, Joong Lee
Abstract:
The height of a criminal is one of the important clues that can narrow the scope of a suspect search or exclude a suspect from the search target. Although measuring a person's height from video alone is limited for various reasons, if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched, the height of the person can be measured. However, this non-contact measurement is still difficult because of variables such as the position, posture, and head shape of the person. In this paper, we propose a method of matching the CCTV footage with 3D data of the crime scene and measuring the height of the person using a 3D human body model in the matched data. In the proposed method, the height is measured using the 3D human body model in various scenes of the CCTV footage, and the measurement for the target person is corrected using the measurement error obtained from replayed CCTV footage of reference persons. We tested walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurements using 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, and that this method is effective for measuring a person's height in CCTV footage.
Keywords: human height, CCTV footage, 2D/3D matching, 3D human body model
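Illustrative sketch (not from the paper): the correction step described above amounts to estimating the systematic error of the setup from reference persons of known height and applying it to the target. All heights below are hypothetical.

```python
# Bias correction using reference persons of known height.
import numpy as np

ref_true = np.array([172.0, 168.5, 181.0, 175.5, 165.0])      # cm, known heights
ref_measured = np.array([171.2, 168.0, 180.1, 174.9, 164.6])  # cm, from 3D model fit
bias = np.mean(ref_true - ref_measured)                       # systematic error of the setup

target_measured = 177.3
target_corrected = target_measured + bias
print(f"bias = {bias:.2f} cm, corrected height = {target_corrected:.2f} cm")
```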
Procedia PDF Downloads 248
10507 Evaluation of Antioxidants in Medicinal plant Limoniastrum guyonianum
Authors: Assia Belfar, Mohamed Hadjadj, Messaouda Dakmouche, Zineb Ghiaba
Abstract:
Introduction: This study aims to carry out phytochemical screening, extract the active compounds, and estimate the antioxidant effectiveness of the desert medicinal plant Limoniastrum guyonianum (Zeïta) from southern Algeria. Methods: Total phenolic and total flavonoid contents were determined using the Folin-Ciocalteu and aluminium chloride colorimetric methods, respectively. The total antioxidant capacity was estimated by the DPPH (1,1-diphenyl-2-picrylhydrazyl radical) and reducing power assays. Results: Phytochemical screening of the plant part revealed the presence of phenols, saponins, flavonoids, and tannins, while alkaloids and terpenoids were absent. The acetonic extract of L. guyonianum was extracted successively with ethyl acetate and butanol. Extraction yields varied widely, ranging from 0.9425% to 11.131%. The total phenolic content ranged from 53.33 mg GAE/g DW to 672.79 mg GAE/g DW. The total flavonoid concentrations varied from 5.45 to 21.71 mg/100 g. IC50 values ranged from 0.02 ± 0.0004 to 0.13 ± 0.002 mg/ml. All extracts showed very good ferric-reducing power; the highest was in the butanol fraction (23.91 mM), more effective than BHA, BHT, and vitamin C. Conclusions: This study demonstrated that the acetonic extract of L. guyonianum contains a considerable quantity of phenolic compounds and possesses good antioxidant activity. It can be used as an easily accessible source of natural antioxidants, as a possible food supplement, and in the pharmaceutical industry.
Keywords: limoniastrum guyonianum, phenolic compounds, flavonoid compounds, antioxidant activity
Procedia PDF Downloads 346
10506 Parametric Analysis and Optimal Design of Functionally Graded Plates Using Particle Swarm Optimization Algorithm and a Hybrid Meshless Method
Authors: Foad Nazari, Seyed Mahmood Hosseini, Mohammad Hossein Abolbashari, Mohammad Hassan Abolbashari
Abstract:
The present study is concerned with the optimal design of functionally graded plates using the particle swarm optimization (PSO) algorithm. The meshless local Petrov-Galerkin (MLPG) method is employed to obtain the natural frequencies of the functionally graded (FG) plate. The effects of two parameters, the thickness-to-height ratio and the volume fraction index, on the natural frequencies and total mass of the plate are studied using the MLPG results. The first natural frequency of the plate, for conditions where MLPG data are not available, is then predicted by an artificial neural network (ANN) trained with the back-error propagation (BEP) technique. The ANN results show that the predicted data are in good agreement with the actual data. To maximize the first natural frequency and minimize the mass of the FG plate simultaneously, the weighted-sum optimization approach and the PSO algorithm are used. The proposed optimization process can thus provide designers of FG plates with useful data.
Keywords: optimal design, natural frequency, FG plate, hybrid meshless method, MLPG method, ANN approach, particle swarm optimization
Procedia PDF Downloads 367
10505 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers
Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus
Abstract:
Design optimization of heat exchangers is a very complicated task that has traditionally been carried out through a trial-and-error procedure. To overcome the difficulties of conventional design approaches, especially when a large number of variables, constraints, and objectives are involved, a new method is presented in this study based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted-sum approach, and a novel constraint-handling strategy. Since conventional constraint-handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced that is both user-friendly and effective. A case study from industry is investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find near Pareto-optimal solutions with higher accuracy than the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulties of a trial-and-error process for setting the penalty parameters are avoided in this algorithm.
Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling
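Illustrative sketch (not from the paper): a feasibility-based ranking rule of the kind mentioned above can be expressed as a pairwise comparison between candidates. The exact rule, objectives, and weights used by the authors are not given here; the version below is a generic assumption for demonstration.

```python
# Generic feasibility-based ranking for a weighted-sum multi-objective PSO:
# feasible candidates always beat infeasible ones; among infeasible ones the
# smaller total constraint violation wins; among feasible ones the weighted
# sum of objectives decides.
from dataclasses import dataclass

@dataclass
class Candidate:
    objectives: tuple      # e.g. (pressure drop, heat-transfer area) -- assumed objectives
    violation: float       # total constraint violation, 0.0 if feasible

def better(a: Candidate, b: Candidate, weights=(0.5, 0.5)) -> bool:
    """Return True if candidate a should be ranked above candidate b."""
    fa, fb = a.violation == 0.0, b.violation == 0.0
    if fa and not fb:
        return True                          # feasible beats infeasible
    if fb and not fa:
        return False
    if not fa and not fb:
        return a.violation < b.violation     # both infeasible: lower violation wins
    cost = lambda c: sum(w * f for w, f in zip(weights, c.objectives))
    return cost(a) < cost(b)                 # both feasible: weighted sum decides

print(better(Candidate((1.2, 0.8), 0.0), Candidate((0.5, 0.4), 0.3)))  # True
```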
Procedia PDF Downloads 555
10504 In vitro Evaluation of the Anti-Methanogenic Properties of Australian Native and Some Exotic Plants with a View of Their Potential Role in Management of Ruminant Livestock Emissions
Authors: Philip Vercoe, Ali Hardan
Abstract:
Samples of 29 Australian wild native and exotic plants were tested in an in vitro batch rumen culture system for their methanogenic characteristics and their potential use as feed or antimicrobials to enhance a sustainable ruminant livestock production system. The plants were tested for their in vitro rumen fermentation end-product properties, which include methane production, total gas pressure, concentrations of total volatile fatty acids and ammonia, and the acetate-to-propionate ratio. All of the plants produced less methane than the positive control (oaten chaff) in vitro, with nearly 50% of the plants inhibiting methane by over 50% in comparison to the control. Eremophila granitica had the strongest inhibitory effect on methane production, about 92% compared with oaten chaff. The exotic weed Arctotheca calendula (capeweed) had the highest volatile fatty acid production as well as the highest total gas pressure among all plants and the control, whereas some of the acacia species had the lowest total gas pressure. The majority of the plants produced more ammonia than the oaten chaff control. The species that produced the most ammonia was Codonocarpus cotinifolius, producing over three times as much as the oaten chaff control, while the lowest was Eremophila galeata. There were strong positive correlations between methane production and total gas production and between total gas production and the concentration of VFA produced (R² = 0.74 and R² = 0.84, respectively), while there were weaker positive correlations between methane production and the acetate-to-propionate ratio and between the concentration of VFA produced and methane production (R² = 0.41 and R² = 0.52, respectively).
Keywords: in vitro rumen fermentation, methane, wild Australian native plants, forages
Procedia PDF Downloads 345
10503 A Straightforward Method for Determining Inorganic Selenium Speciations by Graphite Furnace Atomic Absorption Spectroscopy in Water Samples
Authors: Sahar Ehsani, David James, Vernon Hodge
Abstract:
In this experimental study, total selenium in solution was measured with Graphite Furnace Atomic Absorption Spectroscopy (GFAAS); chemical reaction with sodium borohydride was then used to reduce selenite to hydrogen selenide, which was stripped from the solution by purging with nitrogen gas. Since the two main species in oxic waters are usually selenite, Se(IV), and selenate, Se(VI), it was assumed that after Se(IV) was removed, the remaining total selenium was Se(VI). Total selenium measured after stripping therefore gave the Se(VI) concentration, and the difference between total selenium measured before and after stripping gave the Se(IV) concentration. An additional step of reducing Se(VI) to Se(IV) was performed by boiling the stripped solution under acidic conditions and then removing Se(IV) by reaction with sodium borohydride. This additional procedure of removing Se(VI) from the solution is useful in rare cases where the water sample is reducing and contains selenide species. In this study, once Se(IV) and Se(VI) were both removed from the water sample, the remaining total selenium concentration was zero. The method was tested for the determination of Se(IV) and Se(VI) in both purified water and synthetic irrigation water spiked with Se(IV) and Se(VI). Average recovery of spiked samples of diluted synthetic irrigation water was 99% for Se(IV) and 97% for Se(VI). Detection limits of the method were 0.11 µg L⁻¹ and 0.32 µg L⁻¹ for Se(IV) and Se(VI), respectively.
Keywords: analytical method, graphite furnace atomic absorption spectroscopy, selenate, selenite, selenium speciation
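Illustrative sketch (not from the paper): the speciation-by-difference arithmetic described above reduces to a simple subtraction. The GFAAS readings below are hypothetical.

```python
# Speciation by difference: Se(IV) is removed as hydrogen selenide before the
# second measurement, so the remainder is taken as Se(VI) and the difference
# as Se(IV). Concentrations in ug/L are invented for illustration.
def selenium_speciation(total_se, total_after_stripping):
    se_vi = total_after_stripping
    se_iv = total_se - total_after_stripping
    return se_iv, se_vi

se_iv, se_vi = selenium_speciation(total_se=8.4, total_after_stripping=5.1)
print(f"Se(IV) = {se_iv:.1f} ug/L, Se(VI) = {se_vi:.1f} ug/L")
```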
Procedia PDF Downloads 142
10502 Attempt Survivor Families’ Views on Criminalizing Attempted Suicide in Ghana
Authors: Joseph Osafo, Winifred Asare-Doku, Charity Akotia
Abstract:
Decriminalizing suicide is one of the major goals of suicide prevention worldwide. In Ghana, suicide is legally proscribed, there is widespread societal condemnation of the act, and the survivor and their family share the stigma. Evidence and advocacy continue to mount towards pressuring the government, the legal fraternity, and lawmakers to consider decriminalizing the act. However, within this discourse, the views of families of attempt survivors are absent. The purpose of this study was to explore the reactions of relatives of suicide attempters to the criminalization of attempted suicide in the country. A total of 10 relatives of suicide attempters were interviewed using a semi-structured interview guide, and thematic analysis was used to analyze the data. We found divergent views among families on decriminalizing suicide and generated two major themes: out-group bias versus in-group bias. Half of the participants opined that attempted suicide should not be decriminalized, while others advocated help and mental health care for suicide attempt survivors. It was generally observed that although all 10 participants were cognizant that attempted suicide is a crime in Ghana, they preferred that their own relatives be spared prosecution. The findings indicate incongruity, especially where participants want their relatives to avoid a jail term but want the law that criminalizes suicide to remain. The findings are explained using the Fundamental Attribution Error and the concept of kin selection. Implications for public education on decriminalization and advocacy are addressed.
Keywords: decriminalization, families, Ghana suicide, suicide attempt
Procedia PDF Downloads 519
10501 Stability of Total Phenolic Concentration and Antioxidant Capacity of Extracts from Pomegranate Co-Products Subjected to In vitro Digestion
Authors: Olaniyi Fawole, Umezuruike Opara
Abstract:
Co-products obtained from pomegranate juice processing contain high levels of polyphenols with potentially high added value. From a value-addition viewpoint, the aim of this study was to evaluate the stability of polyphenolic concentrations in pomegranate fruit co-products extracted with different solvents and to assess the effect on total antioxidant capacity, using the FRAP, DPPH˙ and ABTS˙+ assays, during simulated in vitro digestion. Pomegranate juice, marc, and peel were extracted in water, 50% ethanol (50%EtOH), and absolute ethanol (100%EtOH) and analysed for total phenolic concentration (TPC), total flavonoid concentration (TFC), and total antioxidant capacity in the DPPH˙, ABTS˙+, and FRAP assays before and after in vitro digestion. TPC and TFC were in the order peel > marc > juice throughout the in vitro digestion, irrespective of the extraction solvent used. However, 50% ethanol extracted 1.1- to 12-fold more polyphenols than water and absolute ethanol, depending on the co-product. TPC and TFC increased significantly in gastric digests. In contrast, after the duodenal phase, polyphenolic concentrations decreased significantly (p < 0.05) compared to those obtained in gastric digests. Undigested samples and gastric digests showed strong positive relationships between polyphenols and the antioxidant activities measured in the DPPH, ABTS, and FRAP assays, with correlation coefficients (r²) ranging between 0.930 and 0.990, whereas the correlations between polyphenols (TPC and TFC) and radical cation scavenging activity (ABTS) were only moderately positive in duodenal digests. The findings also showed that the concentration of pomegranate polyphenols and their antioxidant activity during in vitro gastrointestinal digestion may not reflect the pre-digestion phenolic concentration. Thus, this study highlights the need to provide biologically relevant information on antioxidants by reporting data that reflect their stability and activity after in vitro digestion.
Keywords: by-product, DPPH, polyphenols, value addition
Procedia PDF Downloads 330
10500 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the identification of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors along a required dimension of analysis. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of the digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: linked open data, information integration, digital libraries, data mining
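Illustrative sketch (not from the paper): the naive Bayes step described above maps a risk factor vector to an endangerment group. The feature names, values, and labels below are invented; the survey-derived knowledge base is not reproduced here.

```python
# Naive Bayes classification of institutional risk profiles into
# endangerment groups, with synthetic risk factor values (scaled 0-1).
import numpy as np
from sklearn.naive_bayes import GaussianNB

# rows: institutional risk profiles; columns (assumed): format obsolescence,
# software dependency, documentation quality
X = np.array([[0.9, 0.8, 0.2],
              [0.1, 0.2, 0.9],
              [0.7, 0.6, 0.4],
              [0.2, 0.3, 0.8]])
y = np.array(["high", "low", "high", "low"])   # endangerment group labels

clf = GaussianNB().fit(X, y)
new_profile = np.array([[0.8, 0.7, 0.3]])
print(clf.predict(new_profile), clf.predict_proba(new_profile))
```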
Procedia PDF Downloads 426
10499 Production and Evaluation of Mango Pulp by Using Ohmic Heating Process
Authors: Sobhy M. Mohsen, Mohamed M. El-Nikeety, Tarek G. Mohamed, Michael Murkovic
Abstract:
The present work aimed to study the use of ohmic heating in the processing of mango pulp compared to the conventional method. Mango pulp was processed by ohmic heating under the suitable conditions identified in the study, and its physical, chemical, and microbiological properties were examined. The results showed that processing mango pulp by either ohmic heating or the conventional method caused a decrease in the contents of TSS, total carbohydrates, total acidity, and total sugars (reducing and non-reducing), while ohmic heating gave an increase in phenol content, ascorbic acid, and carotenoids compared to the conventional process. The increase in the electrical conductivity of the mango pulp during ohmic heating was due to the addition of electrolytes (salts) to increase the ion content and enhance the process. The results also indicate that mango pulp processed by ohmic heating contained more phenols, carbohydrates, and vitamin C, and less HMF, compared to that produced by the conventional method. Total pectin and its fractions were slightly reduced by ohmic heating compared to the conventional method. Enzymatic assays showed a reduction in polyphenol oxidase (PPO) and polygalacturonase (PG) activity in mango pulp processed by the conventional method, whereas ohmic heating completely inhibited PPO and PG activities.
Keywords: ohmic heating, mango pulp, phenolics, carotenoids
Procedia PDF Downloads 455
10498 Seismic Behavior of a Jumbo Container Crane in the Low Seismicity Zone Using Time-History Analyses
Authors: Huy Q. Tran, Bac V. Nguyen, Choonghyun Kang, Jungwon Huh
Abstract:
A jumbo container crane is an important part of port structures that needs to be designed properly, even when the port is located in a low-seismicity zone such as Korea. In this paper, 30 artificial ground motions derived from the elastic response spectra of the Korean Building Code (2005) are used for time-history analysis. It is found that uplift might not occur in this analysis when the crane is located in the low-seismicity zone. Therefore, the selection of a pinned or a gap element for the base support has little effect on the determination of the total base shear. Relationships between the total base shear and peak ground acceleration (PGA), and between the portal drift and PGA, are proposed in this study.
Keywords: jumbo container crane, portal drift, time history analysis, total base shear
Procedia PDF Downloads 189
10497 Novel Recombinant Betasatellite Associated with Vein Thickening Symptoms on Okra Plants in Saudi Arabia
Authors: Adel M. Zakri, Mohammed A. Al-Saleh, Judith. K. Brown, Ali M. Idris
Abstract:
Betasatellites are small circular single-stranded DNA molecules found associated with begomoviruses in symptomatic field plants. Their genome size is about half that of the helper begomovirus, ranging between 1.3 and 1.4 kb, and the helper begomoviruses are usually members of the family Geminiviridae. Okra leaves showing vein thickening were collected from okra plants growing in Jazan, Saudi Arabia. Total DNA was extracted from the leaves and used as a template to amplify circular DNA using rolling circle amplification (RCA). Products were digested with PstI to linearize the helper viral genome(s) and the associated DNA satellite(s), yielding 2.8-kb and 1.4-kb fragments, respectively. The linearized fragments were cloned into the pGEM-5Zf(+) vector and subjected to DNA sequencing. The 2.8-kb fragment was identified as the Cotton leaf curl Gezira virus genome, 2780 bp in size, an isolate closely related to strains previously reported from Saudi Arabia. A clone obtained from the 1.4-kb fragment was compared against the GenBank database using BLAST and found to be a betasatellite. The betasatellite genome was 1357 bp in size. It was found to be a recombinant containing one fragment (877 bp) that shared 91% nucleotide identity with Cotton leaf curl Gezira betasatellite [KM279620] and a smaller fragment (133 bp) that shared 86% nucleotide identity with Tomato leaf curl Sudan virus [JX483708]. This satellite is thus a recombinant between a malvaceous-infecting satellite and a solanaceous-infecting begomovirus.
Keywords: begomovirus, betasatellites, cotton leaf curl Gezira virus, okra plants
Procedia PDF Downloads 341
10496 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and is a popular research area. Many different power forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policy and for healthy, reliable grid systems, and effective forecasting of renewable energy load helps decision makers minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) networks. Deep learning allows models with multiple layers to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as wind and solar power. Historical load and weather information are the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out with these data using a deep neural network approach, including the LSTM technique, for the Turkish electricity market. A total of 432 different models were created by varying the layer and cell counts and the dropout rate. The adaptive moment estimation (ADAM) algorithm was used as the gradient-based optimizer for training instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance was compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models were 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models is successful compared to results reported in the literature.
Keywords: deep learning, long short term memory, energy, renewable energy load forecasting
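Illustrative sketch (not from the paper): a compact LSTM forecaster of the kind described above, built with Keras and trained with ADAM. The layer size, dropout rate, window length, and synthetic hourly series are placeholders, not the authors' 432-model grid or their dataset.

```python
# LSTM load forecasting on a synthetic hourly series; evaluated with MSE/MAE.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

def make_windows(series, lookback=24):
    X = np.stack([series[i:i + lookback] for i in range(series.size - lookback)])
    y = series[lookback:]
    return X[..., None], y                    # (samples, timesteps, features)

X, y = make_windows(load)
split = int(0.8 * len(X))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])   # ADAM, as in the study
model.fit(X[:split], y[:split], epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X[split:], y[split:], verbose=0))         # [MSE, MAE]
```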
Procedia PDF Downloads 266
10495 Optimization of Energy Consumption with Various Design Parameters on Office Buildings in Chinese Severe Cold Zone
Authors: Yuang Guo, Dewancker Bart
Abstract:
The primary energy consumption of buildings throughout China was approximately 814 million tons of coal equivalent in 2014, accounting for 19.12% of China's total primary energy consumption. Public buildings also take a bigger share of this total than urban or rural residential buildings. To reduce energy demand, various design parameters were examined, and a series of simulations was performed with EnergyPlus (EP-Launch) using a base-case model established in OpenStudio. The results show that reductions of 16%-23% in total energy demand can be achieved in the severe cold zone of China, and they can also serve as a reference for architectural design in other, similar climate zones.
Keywords: energy consumption, design parameters, indoor thermal comfort, simulation study, severe cold climate zone
Procedia PDF Downloads 156
10494 Subpixel Corner Detection for Monocular Camera Linear Model Research
Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao
Abstract:
Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyse the reliability and application range of the linear model that is often used in camera calibration. Based on the imaging characteristics of monocular cameras, a camera model relating image pixel coordinates to three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by a subpixel corner detection method. Without considering the aberration of the optical system, feature extraction and linearity analysis of the line segment in the template are performed. The experiment is repeated 11 times while varying the measuring distance. Finally, the linearity of the camera is obtained by fitting the 11 groups of data. The measurement results show that the relative error does not exceed 1% and that the repeated-measurement error is no more than 0.1 mm in magnitude. It is also found that the model exhibits some measurement differences across regions and object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance range. These results provide a solid basis for establishing the linear camera model and have potential value for practical engineering measurement.
Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection
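Illustrative sketch (not from the paper): fitting and checking a linear pixel-to-world mapping of the kind analysed above. The sub-pixel corner coordinates and template positions below are hypothetical values at one fixed object distance.

```python
# Linear camera model x = a*u + b fitted by least squares, with a simple
# linearity (relative error) check. All coordinates are made up.
import numpy as np

u_px = np.array([102.31, 202.47, 302.60, 402.81, 502.95, 603.12])  # sub-pixel corners (px)
x_mm = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])               # known positions (mm)

slope, intercept = np.polyfit(u_px, x_mm, 1)
fit = slope * u_px + intercept
residuals = x_mm - fit
rel_error = np.abs(residuals) / np.ptp(x_mm) * 100                 # % of full scale

print(f"scale = {slope:.5f} mm/px, max residual = {np.abs(residuals).max():.4f} mm")
print(f"max relative error = {rel_error.max():.3f} %")             # linearity check
```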
Procedia PDF Downloads 277
10493 The Mirage of Progress? A Longitudinal Study of Japanese Students’ L2 Oral Grammar
Authors: Robert Long, Hiroaki Watanabe
Abstract:
This longitudinal study examines the grammatical errors in Japanese university students’ dialogues with a native speaker over an academic year. The L2 interactions of 15 Japanese speakers were taken from the JUSFC2018 corpus (April/May 2018) and the JUSFC2019 corpus (January/February). The corpora were based on a self-introduction monologue and a three-question dialogue; this study examines the grammatical accuracy found in the dialogues. The research questions focused on whether there was a significant difference in grammatical accuracy between the first interview session in 2018 and the second one the following year, specifically regarding errors in clauses per 100 words, global and local errors, and specific errors related to parts of speech. The investigation also examined which forms showed the least improvement or had worsened. Descriptive statistics showed that error-free clauses per 100 words decreased slightly, while clauses with errors per 100 words increased by one clause. Global errors showed a significant decline, while local errors increased from 97 to 158. For errors related to parts of speech, a t-test confirmed a significant difference between the two speech corpora, with a higher error frequency in the 2019 corpus. These data highlight the difficulty students have in editing their own output.
Keywords: clause analysis, global vs. local errors, grammatical accuracy, L2 output, longitudinal study
Procedia PDF Downloads 132
10492 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life
Authors: Desplanches Maxime
Abstract:
Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression
Procedia PDF Downloads 69
10491 Formation of the Investment Portfolio of Intangible Assets with a Wide Pairwise Comparison Matrix Application
Authors: Gulnara Galeeva
Abstract:
The Analytic Hierarchy Process is widely used in economic and financial studies, including the formation of investment portfolios. In this study, a generalized method is examined for obtaining a vector of priorities when separate pairwise comparisons of expert opinion are presented as a set of several equal evaluations on a ratio scale. The author claims that this method allows an important and topical problem to be solved: excluding vagueness and ambiguity of expert opinion in decision-making theory. The study describes the original wide pairwise comparison matrix and considers its application to forming an efficient investment portfolio of intangible assets for a small business enterprise with limited funding. The proposed method has been successfully tested on the practical example of a functioning dental clinic. The result of the study confirms that the wide pairwise comparison matrix can be used as a simple and reliable method for forming an enterprise's investment policy. Moreover, a comparison between the method based on the wide pairwise comparison matrix and the classical Analytic Hierarchy Process was conducted, and the results of the comparative analysis confirm the correctness of the method based on the wide matrix. The wide pairwise comparison matrix also allows statistical methods of experimental data processing to be used widely for obtaining the vector of priorities. The new method is accessible to non-specialist users, and its application gives about the same accuracy as the classical hierarchy process. Financial directors of small and medium-sized enterprises thus gain an opportunity to solve their companies' investment problems without resorting to the services of specialized analytical agencies.
Keywords: analytic hierarchy process, decision processes, investment portfolio, intangible assets
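Illustrative sketch (not from the paper): the core AHP step of extracting a priority vector from a pairwise comparison matrix. The 4x4 matrix of intangible assets is invented, and the geometric-mean (row) method is used here as a simple stand-in for the eigenvector computation; the authors' wide-matrix construction is not reproduced.

```python
# Priority vector from a pairwise comparison matrix, with a consistency check.
import numpy as np

A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])

w = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # geometric mean of each row
w /= w.sum()                                   # normalised priority vector

lam_max = (A @ w / w).mean()                   # approximate principal eigenvalue
ci = (lam_max - A.shape[0]) / (A.shape[0] - 1)
print("priorities:", np.round(w, 3), " CR =", round(ci / 0.9, 3))  # RI = 0.9 for n = 4
```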
Procedia PDF Downloads 265
10490 Modeling of the Attitude Control Reaction Wheels of a Spacecraft in Software in the Loop Test Bed
Authors: Amr AbdelAzim Ali, G. A. Elsheikh, Moutaz M. Hegazy
Abstract:
Reaction wheels (RWs) are generally used as the main actuators in the attitude control system (ACS) of a spacecraft (SC) for fast orientation and high pointing accuracy. In order to achieve the required accuracy of the RW model, the main characteristics of the RWs that must be analysed during the ACS design phase, namely the technical features, the operating sequence, and the RW control logic, are included in a functional (behavioural) model. A mathematical model is developed that includes the various error sources. The control-torque errors comprise relative and absolute errors and error due to time delay, while the angular-velocity errors arise from differences between average and real speed, resolution error, looseness in the installation of the angular sensor, and synchronization errors. The friction torque in the model includes the different features of the friction phenomena: steady-velocity friction, static friction and break-away torque, and frictional lag. The model response is compared with the experimental torque and frequency-response characteristics of tested RWs. Based on the created RW model, some criteria for the optimization-based control torque allocation problem can be recommended, such as avoiding zero-speed crossings, using a bias angular velocity, or preventing wheels from running at the same angular velocity.
Keywords: friction torque, reaction wheels modeling, software in the loop, spacecraft attitude control
Procedia PDF Downloads 266
10489 Calculation of Lattice Constants and Band Gaps for Generalized Quasicrystals of InGaN Alloy: A First Principle Study
Authors: Rohin Sharma, Sumantu Chaulagain
Abstract:
This paper presents total-energy calculations of the InGaN alloy carried out on a disordered quasirandom structure in a triclinic supercell. This structure replicates the disorder and composition effects in the alloy. First-principles calculations within density functional theory with the local density approximation are employed to accurately determine the total energy of the system. Lattice constants and band gaps associated with the ground states are then estimated for different concentration ratios of the alloy. We provide precise results for the quasirandom structures of the alloy, reporting the lattice constants, total energies, and band-gap energies of the system over a range of seven composition ratios.
Keywords: DFT, ground state, LDA, quasicrystal, triclinic super cell
Procedia PDF Downloads 188
10488 Pre-Analytical Laboratory Performance Evaluation Utilizing Quality Indicators between Private and Government-Owned Hospitals Affiliated to University of Santo Tomas
Authors: A. J. Francisco, K. C. Gallosa, R. J. Gasacao, J. R. Ros, B. J. Viado
Abstract:
The study focuses on the use of quality indicators (QIs) based on the standards of the IFCC that can effectively identify and minimize errors occurring throughout the total testing process (TTP) in order to improve patient safety. The study was conducted through a survey questionnaire given to a random sample of 19 respondents (eight privately-owned and eleven government-owned hospitals), mainly CMTs, MTs, and supervisors from UST-affiliated hospitals. The pre-analytical laboratory errors, which include misidentification errors, transcription errors, sample collection errors, and sample handling and transportation errors, were considered as variables according to the IFCC WG-LEPS. The data gathered were analyzed using the Mann-Whitney U test, percentiles, linear regression, percentages, and frequencies. The laboratory performance of both hospital groups was at a high level, and there was no significant difference in laboratory performance between the two groups. Among the four QIs, sample handling and transportation errors contributed most to the difference between the two groups. The outcomes indicate satisfactory performance in both groups. However, in order to ensure high-quality and efficient laboratory operation, constant vigilance and improvements in the pre-analytical QIs are still needed. Expanding the coverage of the study, including other phases of the TTP, and using parametric tests are recommended.
Keywords: pre-analytical phase, quality indicators, laboratory performance, pre-analytical error
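Illustrative sketch (not from the paper): the Mann-Whitney U comparison mentioned above can be run as follows. The pre-analytical error rates per hospital are invented and do not reflect the study's data.

```python
# Mann-Whitney U test comparing two independent groups of hospitals.
from scipy.stats import mannwhitneyu

private_err = [1.2, 0.8, 1.5, 0.9, 1.1, 0.7, 1.3, 1.0]                    # 8 private hospitals (%)
government_err = [1.4, 1.0, 1.6, 1.2, 0.9, 1.1, 1.5, 1.3, 1.0, 1.2, 1.1]  # 11 government hospitals (%)

stat, p = mannwhitneyu(private_err, government_err, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")   # p > 0.05 -> no significant difference
```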
Procedia PDF Downloads 146
10487 Effects of Raw Bee Propolis and Water or Ethanol Extract of Propolis on Performance, Immune System and Some Blood Parameters on Broiler Breeders
Authors: Hasan Alp Sahin, Ergin Ozturk
Abstract:
The effects of raw bee propolis (RP) and water (WEP) or ethanol (EEP) extracts of propolis on growth performance, selected immune parameters (IgA, IgY, and IgM), and some blood parameters, namely aspartate aminotransferase, alanine aminotransferase, triglyceride, total protein, albumin, calcium, phosphorus, total antioxidant status, and total oxidant status, were determined. The study was conducted between the 15th and 20th weeks of age (6 weeks) and used a total of 48 broiler breeder pullets (Ross 308). The broiler breeders in the control group were fed a diet without propolis, whereas the birds in the RP, WEP, and EEP groups were fed diets containing RP, WEP, and EEP at 1200, 400, and 400 ppm, respectively. All pullets were fed a mash diet with 15% crude protein and 2800 kcal/kg ME. None of the propolis forms had a beneficial effect on any of the studied parameters compared to the control group (P > 0.05). The results indicate that the levels of active compounds supplied by the bee propolis were not sufficient to produce a beneficial effect on the performance, immune, or blood parameters of the broiler breeders.
Keywords: antioxidant, bee product, poultry breeders, growth performance, immune parameters, blood chemistry
Procedia PDF Downloads 263
10486 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment for different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) given source characteristics, source-to-site distance, and local site conditions for future events. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudospectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models with pre-defined equations and coefficients are used in ground motion prediction. However, due to the restrictions of linear regression methods, such models may not capture more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, with Random Forest in particular outperforming the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between structural demand responses (e.g., component deformations, accelerations, internal forces) and the ground-motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding pre-defined damage limit states and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analyses.
Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
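Illustrative sketch (not from the paper): a schematic comparison of the kind described above, where a linear regression and a random forest both predict ln(PGA) from magnitude, distance, and a site term. The synthetic "records" follow a simple assumed attenuation shape and are not real ground-motion data.

```python
# Linear regression vs. random forest as ground-motion models on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 2000
mag = rng.uniform(3.0, 7.5, n)
rjb = rng.uniform(1.0, 200.0, n)              # source-to-site distance, km
vs30 = rng.uniform(200.0, 800.0, n)           # site-condition proxy, m/s
ln_pga = (1.2 * mag - 1.5 * np.log(rjb + 10)
          - 0.4 * np.log(vs30 / 760) + rng.normal(0, 0.5, n))   # assumed attenuation

X = np.column_stack([mag, rjb, vs30])
X_tr, X_te, y_tr, y_te = train_test_split(X, ln_pga, random_state=0)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=300, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "R^2 =", round(r2_score(y_te, model.predict(X_te)), 3))
```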
Procedia PDF Downloads 106
10485 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Iran: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use, and economic output have attracted growing attention among researchers and policy makers over the past decades. Focusing on the empirical aspects of the role of carbon dioxide (CO2) emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, and electricity), CO2 emissions, and gross domestic product (GDP) for Iran using time series analysis for 1980-2010. To investigate the relationships between the variables, this paper employs the Augmented Dickey-Fuller (ADF) test for stationarity, Johansen's maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables for the sample. All the variables in this study show very strong, significant effects on GDP in the long term, and the long-run equilibrium in the VECM suggests that all energy consumption variables have significant impacts on GDP in the long term. The consumption of petroleum products and the direct combustion of crude oil and natural gas decreased GDP, while coal and electricity use enhanced GDP over 1980-2010 in Iran; in the short run, only electricity use enhances GDP, in addition to its long-run effect. All variables of this study except CO2 emissions show significant effects on GDP in the long term, and the long-run equilibrium in the VECM suggests that the consumption of petroleum products and the direct combustion of crude oil and natural gas have positive impacts on GDP, while the consumption of electricity and coal has adverse impacts on GDP in the long term; in the short run, electricity use enhances GDP over the period 1980-2010 in Iran. Overall, the results partly support arguments that there are relationships between energy use and economic output, but the associations differ by energy source in the case of Iran over 1980-2010. However, there is no significant relationship between CO2 emissions and GDP or between CO2 emissions and energy use in either the short or the long term.
Keywords: CO2 emissions, energy consumption, GDP, Iran, time series analysis
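Illustrative sketch (not from the paper): the testing sequence described above, ADF unit-root tests, Johansen cointegration, then a VECM, applied with statsmodels to a small synthetic data set. The Iranian 1980-2010 series are not reproduced; the variables and lag settings below are assumptions for demonstration.

```python
# ADF tests -> Johansen cointegration -> VECM on synthetic annual series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 31                                    # annual observations, 1980-2010
common = np.cumsum(rng.normal(0, 1, n))   # shared stochastic trend
data = pd.DataFrame({
    "gdp": common + rng.normal(0, 0.3, n),
    "oil": 0.8 * common + rng.normal(0, 0.3, n),
    "co2": 0.5 * common + rng.normal(0, 0.3, n),
})

for col in data:                          # step 1: unit-root (ADF) tests
    stat, pval, *_ = adfuller(data[col])
    print(f"ADF {col}: p = {pval:.3f}")

jres = coint_johansen(data, det_order=0, k_ar_diff=1)   # step 2: cointegration rank
print("trace statistics:", np.round(jres.lr1, 2))

vecm = VECM(data, k_ar_diff=1, coint_rank=1).fit()      # step 3: VECM
print("loading coefficients (alpha):\n", vecm.alpha)
```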
Procedia PDF Downloads 592
10484 Performance of High Efficiency Video Codec over Wireless Channels
Authors: Mohd Ayyub Khan, Nadeem Akhtar
Abstract:
Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw video requires very high bandwidth, which makes compression a must before transmission over wireless channels. The High Efficiency Video Codec (HEVC), also called H.265, is the latest state-of-the-art video coding standard, developed through the joint effort of ITU-T and ISO/IEC. HEVC targets high-resolution video, such as 4K and 8K, and can fulfil recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. Compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra- and inter-prediction. As more correlation is removed, the interdependency among coded bits increases; thus, bit errors may have a large effect on the reconstructed video, and sometimes even a single bit error can lead to catastrophic failure. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide redundancy. The channel-coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video with the HEVC decoder. It is observed that as the channel signal-to-noise ratio decreases, the quality of the reconstructed video degrades drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing an optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
Keywords: AWGN, forward error correction, HEVC, video coding, QAM
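Illustrative sketch (not from the paper): a toy version of the transmission chain described above, where random bits (standing in for an HEVC bitstream) are sent with 4-QAM (QPSK) over AWGN, with and without a simple rate-1/3 repetition code used as a placeholder for the FEC scheme. The SNR points and code are assumptions for demonstration only.

```python
# QPSK over AWGN with a repetition-code "FEC" placeholder; reports bit error rates.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 30000)

def qpsk_awgn(tx_bits, snr_db):
    sym = (1 - 2 * tx_bits[0::2]) + 1j * (1 - 2 * tx_bits[1::2])   # map bit pairs to QPSK
    sym /= np.sqrt(2)                                              # unit symbol energy
    n0 = 10 ** (-snr_db / 10)
    noise = np.sqrt(n0 / 2) * (rng.standard_normal(sym.size)
                               + 1j * rng.standard_normal(sym.size))
    rx = sym + noise
    out = np.empty_like(tx_bits)
    out[0::2] = (rx.real < 0).astype(int)                          # hard-decision demapping
    out[1::2] = (rx.imag < 0).astype(int)
    return out

for snr_db in (0, 4, 8):
    raw_ber = np.mean(qpsk_awgn(bits, snr_db) != bits)
    coded_rx = qpsk_awgn(np.repeat(bits, 3), snr_db)               # rate-1/3 repetition code
    decoded = (coded_rx.reshape(-1, 3).sum(axis=1) >= 2).astype(int)  # majority vote
    print(f"SNR {snr_db} dB: raw BER = {raw_ber:.4f}, coded BER = {np.mean(decoded != bits):.4f}")
```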
Procedia PDF Downloads 149