Search results for: standard error

5669 Improving the Accuracy of Stress Intensity Factors Obtained by Scaled Boundary Finite Element Method on Hybrid Quadtree Meshes

Authors: Adrian W. Egger, Savvas P. Triantafyllou, Eleni N. Chatzi

Abstract:

The scaled boundary finite element method (SBFEM) is a semi-analytical numerical method, which introduces a scaling center in each element’s domain, thus transitioning from a Cartesian reference frame to one resembling polar coordinates. Consequently, an analytical solution is achieved in the radial direction, implying that only the boundary need be discretized. The only limitation imposed on the resulting polygonal elements is that they remain star-convex. Further arbitrary p- or h-refinement may be applied locally in a mesh. The polygonal nature of SBFEM elements has been exploited in quadtree meshes to alleviate all issues conventionally associated with hanging nodes. Furthermore, since in 2D this results in only 16 possible cell configurations, these are precomputed in order to accelerate the forward analysis significantly. Any cells that are clipped to accommodate the domain geometry must be computed conventionally. However, since SBFEM permits polygonal elements, significantly coarser meshes at comparable accuracy levels are obtained when compared with conventional quadtree analysis, further increasing the computational efficiency of this scheme. The generalized stress intensity factors (gSIFs) are computed by exploiting the semi-analytical solution in the radial direction. This is initiated by placing the scaling center of the element containing the crack at the crack tip. Taking the analytical limit of this element’s stress field as it approaches the crack tip delivers an expression for the singular stress field. By applying the problem-specific boundary conditions, the geometry correction factor is obtained, and the gSIFs are then evaluated based on their formal definition. Since the SBFEM solution is constructed as a power series, not unlike mode superposition in FEM, the two modes contributing to the singular response of the element can be easily identified in post-processing. Compared to the extended finite element method (XFEM), this approach is highly convenient, since neither enrichment terms nor a priori knowledge of the singularity is required. Computation of the gSIFs by SBFEM permits exceptional accuracy; however, when combined with hybrid quadtrees employing linear elements, this does not always hold. Nevertheless, it has been shown that crack propagation schemes are highly effective even given very coarse discretization, since they rely only on the ratio of mode one to mode two gSIFs. The absolute values of the gSIFs may still be subject to large errors. Hence, we propose a post-processing scheme, which minimizes the error resulting from the approximation space of the cracked element, thus limiting the error in the gSIFs to the discretization error of the quadtree mesh. This is achieved by h- and/or p-refinement of the cracked element, which elevates the number of modes present in the solution. The resulting numerical description of the element is highly accurate, with the main error source now stemming from its boundary displacement solution. Numerical examples show that this post-processing procedure can significantly improve the accuracy of the computed gSIFs with negligible computational cost, even on coarse meshes resulting from hybrid quadtrees.
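
For reference, the formal definition invoked above is the classical near-tip limit from linear elastic fracture mechanics (a textbook statement, not specific to SBFEM), with r the radial distance ahead of the crack tip:

$$K_{I} = \lim_{r \to 0} \sqrt{2\pi r}\,\sigma_{yy}(r, \theta = 0), \qquad K_{II} = \lim_{r \to 0} \sqrt{2\pi r}\,\tau_{xy}(r, \theta = 0)$$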

Keywords: linear elastic fracture mechanics, generalized stress intensity factors, scaled boundary finite element method, hybrid quadtrees

Procedia PDF Downloads 137
5668 Biometrics and Dietary Studies of Citharinus citharus in the Lower Niger River in Kogi State, Nigeria

Authors: Adeyemi, Samuel Olusegun

Abstract:

The biometrics and dietary habits of Citharinus citharus in the lower Niger River area of Kogi State were studied between October and December 2010. A total of 120 fish samples were used for the study. The total length, standard length and weight were taken for each fish sample for the estimation of the length-weight relationship using the formula W = aL^b, transformed to log W = log a + b log L. Stomach contents were analyzed by the frequency of occurrence method. The standard lengths of males, females and combined sexes ranged between 6.8–16.5 cm, 7.3–14.3 cm and 6.8–74.2 cm respectively, with b-values of 3.0963, 3.174 and 3.1382. The condition factor ranged from 2.04–2.80, 1.88–2.86 and 1.88–2.86 respectively. The food and feeding habits show that the fish feeds mainly on sand grains (25.83%), mud (24.16%), plant parts (12.50%), insect parts (2.50%), algae (12.50%) and unidentified items (5.00%). C. citharus in the lower Niger area of Kogi State could be termed an omnivore. The River Niger could be said to be suitable for the growth and survival of the fish species C. citharus.
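
As an illustration of the log-transformed fit described above, here is a minimal sketch; the length-weight pairs are made up for illustration, not the study's data, and the condition factor uses Fulton's K = 100W/L³, which is commonly used in such studies:

```python
import numpy as np

# Hypothetical length (cm) and weight (g) pairs -- illustrative only,
# not the data of this study.
L = np.array([7.1, 8.4, 9.6, 10.8, 12.3, 14.0, 16.2])
W = np.array([8.5, 14.1, 21.0, 30.2, 44.8, 66.5, 103.7])

# Fit log W = log a + b log L by ordinary least squares.
b, log_a = np.polyfit(np.log10(L), np.log10(W), 1)
a = 10 ** log_a
print(f"W = {a:.4f} * L^{b:.4f}")  # b close to 3 indicates isometric growth

# Fulton's condition factor K = 100 * W / L^3 for each fish.
K = 100 * W / L ** 3
print(K.round(2))
```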

Keywords: length-weight, sexes, stomach content, feeding habits, plant materials

Procedia PDF Downloads 506
5667 Analysis of Urban Housing Quality and Conditions within Kano Metropolis

Authors: Abdurraheem A. Yakub

Abstract:

Housing is one of the basic needs of mankind and one of the best indicators of a person’s standard of living. This research set out to analyze housing quality and conditions in Kano. Primary data were collected, first, through personal observation: the researcher carried out an inspection of the study area prior to the interviews and the administration of questionnaires, taking into consideration the type of housing units, construction materials and services available, as well as the environmental condition of the study area. This was followed by interviews conducted through personal contact with the various people related to the study; questions were asked orally and notes were taken to record the responses. Thereafter, a questionnaire, designed to elicit information from households in the study area, was administered, using well-structured questions on the type of facilities provided in the housing unit, the types of houses, the respondents' assessment of the quality of their houses and neighborhoods, and the tenure of the house. The research examined the prevailing housing quality and conditions and the state of the existing facilities and amenities within the environment, and offered recommendations on policies and measures that could help improve the situation.

Keywords: housing provision, housing quality, housing standard, housing condition, housing affordability, housing facilities

Procedia PDF Downloads 327
5666 Design, Synthesis and In-Vitro Antibacterial and Antifungal Activities of Some Novel Spiro[Azetidine-2, 3’-Indole]-2, 4(1’H)-Dione

Authors: Ravi J. Shah

Abstract:

The present study deals with the synthesis of novel spiro[azetidine-2, 3’-indole]-2’, 4(1’H)-dione derivatives from the reactions of 3-(phenylimino)-1,3-dihydro-2H-indol-2-one derivatives with chloroacetyl chloride in the presence of triethylamine (TEA). All the compounds were characterized using IR, 1H NMR, MS and elemental analysis. They were screened for their antibacterial and antifungal activities. Results revealed that compounds (7a), (7b), (7c), (7d) and (7e) showed very good activity, with MIC values of 6.25–12.5 μg/ml against the three evaluated bacterial strains, while the remaining compounds showed good to moderate activity comparable to standard drugs as antibacterial agents. Compounds (7c) and (7h) displayed antifungal activity equipotent to standard drugs. A structure-activity relationship study of the compounds showed that the presence of an electron-withdrawing group at the 5’ and 7’ positions of the indoline ring and at the ortho or para position of the phenyl ring increases both the antibacterial and antifungal activities of the compounds. We expect these findings to encourage chemists and biochemists to further investigate bromine-containing spiro-fused antimicrobial agents.

Keywords: antibacterial activity, antifungal activity, 2-Azetidinone, indoline

Procedia PDF Downloads 483
5665 Knowledge Based Liability for ISPs’ Copyright and Trademark Infringement in the EU E-Commerce Directive: Two Steps Behind the Philosophy of Computing Mind

Authors: Mohammad Sadeghi

Abstract:

The subject matter of this article is whether the current knowledge standard is efficient enough to afford legal integration of the criteria and approaches to ISP knowledge, so as to shield ISPs as well as the rights of copyright holders, trademark owners and other parties in the online information society. The EU recognizes knowledge-based liability for intermediaries in the European Directive on Electronic Commerce, but the implication of all parties’ responsibility for combating infringement has been sacrificed to the dominant attention on liability, due to the lack of an appropriate legal mechanism for allocating each party's responsibility. Moreover, there are legal challenges to the applicability of knowledge-based liability to hosting services and information location tools services. The aim of this contribution is to discuss the advantages and disadvantages of the ECD knowledge standard through case law, with special emphasis on the duty of prevention and the role of constructive knowledge for internet service providers (ISPs), in order to achieve a fair balance between all parties' rights.

Keywords: internet service providers, liability, copyright infringement, hosting, caching, mere conduit service, notice and takedown, E-commerce Directive

Procedia PDF Downloads 515
5664 Category-Based Theory of the Optimum Signal Approximation Clarifying the Importance of Parallel Worlds in the Recognition of Humans and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

We mathematically present the basis of a new type of algorithm that treats a historical cause of continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, firstly, we introduce the detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that minimizes, at the same time, all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the matrix operator-filter bank. Further, feedback is introduced into the above approximation theory, and it is indicated that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge in signal prediction. Secondly, the concept of category in the field of mathematics is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to the set-theoretic consideration of the recognition of humans. Based on this discussion, it is shown naturally why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is presented that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.
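
The keywords point to the pseudo inverse matrix; the following toy sketch (our illustration, not the authors' worst-case derivation) shows the least-squares flavor of such an optimum at one fixed frequency ω, assuming the output is formed as Y = Z S F and taking Z as the Moore-Penrose pseudoinverse of S:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy matrices at one fixed frequency: S maps the input to samples.
S = rng.standard_normal((3, 5))   # sampling-filter bank (wide: loses information)
F = rng.standard_normal((5, 4))   # a batch of input signal vectors

# Least-squares optimal synthesis in this toy setting: Z = pinv(S).
Z = np.linalg.pinv(S)
Y = Z @ (S @ F)                   # reconstructed output signals

E = F - Y                         # error signals E = F - Y
print("worst-case entry of |E|:", np.abs(E).max())
```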

Keywords: signal prediction, pseudo inverse matrix, artificial intelligence, conditional optimization

Procedia PDF Downloads 148
5663 Design and Construction of an Impulse Current Generator for Lightning Strike Experiments

Authors: Kamran Yousefpour, Mojtaba Rostaghi-Chalaki, Jason Warden, Chanyeop Park

Abstract:

There has been a rising trend in using impulse current generators to investigate the lightning strike protection of materials, including aluminum and composites, in structures such as wind turbine blades and aircraft bodies. The focus of this research is to present a new impulse current generator built in the High Voltage Lab at Mississippi State University. The generator is capable of producing components A and D of natural lightning discharges in accordance with the Society of Automotive Engineers (SAE) standard, which is widely used in the aerospace industry. The generator can supply lightning impulse energy up to 400 kJ, with the capability of producing impulse currents with magnitudes greater than 200 kA. The electrical circuit and physical components of the improved impulse current generator are described, and several lightning strike waveforms with different amplitudes are presented for comparison with the standard waveform. The results of this study contribute to the fundamental understanding of the functionality of impulse current generators and present a new impulse current generator developed at the High Voltage Lab of Mississippi State University.
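
For context, standardized lightning test currents are commonly modeled as a double-exponential pulse. The sketch below evaluates such a waveform and its action integral; the constants are values often quoted for an SAE component-A-like pulse (about 200 kA peak) and are used here as illustrative assumptions, not the generator's measured output:

```python
import numpy as np

def double_exponential(t, i0, alpha, beta):
    """Double-exponential impulse current i(t) = i0*(exp(-alpha*t) - exp(-beta*t))."""
    return i0 * (np.exp(-alpha * t) - np.exp(-beta * t))

# Illustrative constants for a component-A-like pulse (A, 1/s, 1/s).
i0, alpha, beta = 218_810.0, 11_354.0, 647_265.0

t = np.linspace(0.0, 500e-6, 100_001)   # 0 to 500 microseconds
i = double_exponential(t, i0, alpha, beta)

print(f"peak current: {i.max() / 1e3:.1f} kA")
# Action integral (specific energy) = integral of i^2 dt, in A^2*s.
print(f"action integral: {np.trapz(i**2, t):.3e} A^2*s")
```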

Keywords: impulse current generator, lightning, society of automotive engineers, capacitor

Procedia PDF Downloads 161
5662 Establishment of Standardized Bill of Material for Korean Urban Rail Transit System

Authors: J. E. Jung, J. M. Yang, J. W. Kim

Abstract:

The railway market across the world has been standardized with the globalization strategy of Europe. The Korean urban railway system, on the other hand, is operated by ten operators, which have established their own standards and independently managed BOMs. When operators manage different BOMs, the lack of system compatibility prevents them from sharing information and hinders work linkage and efficiency. Europe launched a large-scale railway project in 1993, when the European Union went into effect. In particular, the recent standardization efforts of the EU-funded MODTRAIN project are similar to the approach of the urban rail system standardization research that is underway in Korea. This paper looks into the BOMs of Korean urban rail transit operators and suggests a standard BOM for the rail transit system in Korea by reviewing rail vehicle technologies and the MODTRAIN project of Europe. The standard BOM is structured up to the key device level or module level, and it allows vehicle manufacturers and component manufacturers to manage their lower-level BOMs and share them with each other and with operators.

Keywords: BOM, Korean rail, urban rail, standardized

Procedia PDF Downloads 310
5661 Computational Fluid Dynamics Simulation of Gas-Liquid Phase Stirred Tank

Authors: Thiyam Tamphasana Devi, Bimlesh Kumar

Abstract:

A Computational Fluid Dynamics (CFD) technique has been applied to simulate the gas-liquid phase in a double stirred tank with Rushton impellers. The Eulerian-Eulerian model was adopted to simulate the multiphase flow, with the standard Schiller-Naumann correlation for the drag coefficient. Turbulence was modeled using the standard k-ε turbulence model. The present CFD model predicts the flow pattern, local gas hold-up, and local specific area. It also predicts the local kLa (mass transfer rate) for a single impeller. The predictions were compared with experimental and CFD results from the published literature. They slightly overpredict the experimental results; however, they are in reasonable agreement with other simulation results in the published literature.
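
The Schiller-Naumann drag correlation referenced above is commonly written as Cd = 24(1 + 0.15 Re^0.687)/Re for Re ≤ 1000 and Cd = 0.44 beyond; a minimal sketch:

```python
import numpy as np

def schiller_naumann_cd(re):
    """Schiller-Naumann drag coefficient for a sphere/bubble.

    Cd = 24/Re * (1 + 0.15*Re^0.687)  for Re <= 1000
    Cd = 0.44                          for Re >  1000
    """
    re = np.asarray(re, dtype=float)
    return np.where(re <= 1000.0,
                    24.0 / re * (1.0 + 0.15 * re ** 0.687),
                    0.44)

for re in (0.1, 10.0, 100.0, 1000.0, 5000.0):
    print(f"Re = {re:7.1f}  Cd = {float(schiller_naumann_cd(re)):.4f}")
```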

Keywords: Eulerian-Eulerian, gas hold-up, gas-liquid phase, local mass transfer rate, local specific area, Rushton impeller

Procedia PDF Downloads 229
5660 Simulation of Internal Flow Field of Pitot-Tube Jet Pump

Authors: Iqra Noor, Ihtzaz Qamar

Abstract:

The pitot-tube jet pump, a single-stage pump with low flow rate and high head, consists of a radial impeller that feeds water into a rotating cavity. Water then enters a stationary pitot-tube collector (diffuser), which discharges to the outside. By means of ANSYS Fluent 15.0, the internal flow characteristics of the pitot-tube jet pump with a standard pitot and with a curved pitot are studied. Under the design condition, the realizable k-ε turbulence model and the SIMPLEC algorithm are used to calculate the 3D flow field inside both pumps. The simulation results reveal that energy is imparted to the flow by the impeller and that, inside the rotor, forced-vortex-type flow is observed. Total pressure decreases inside the pitot tube, whereas static pressure increases. Changing the pitot tube from the standard to the curved shape results in minimum flow circulation inside the pitot tube and leads to higher pump performance.

Keywords: CFD, flow circulation, high pressure pump, impeller, internal flow, pickup tube pump, rectangular channels, rotating casing, turbulence

Procedia PDF Downloads 154
5659 In Online and Laboratory We Trust: Comparing Trust Game Behavior in Three Environments

Authors: Kaisa M. Herne, Hanna E. Björkstedt

Abstract:

Comparisons of online and laboratory environments are important for assessing whether the environment influences behavioral results. Trust game behavior was examined in three environments: 1) the standard laboratory setting with physically present participants (laboratory); 2) an online environment with an online meeting before playing the trust game (online plus a meeting); and 3) an online environment without a meeting (online without a meeting). In the laboratory, participants were present in a classroom and played the trust game anonymously via computers. Online plus a meeting mimicked the laboratory in that participants could see each other in an online meeting before the sessions started, whereas online without a meeting was a standard online experiment in which participants did not see each other at any stage of the experiment. Participants were recruited through pools of student subjects at two universities. The trust game was identical in all conditions; it was played with the same software, anonymously, and with stranger matching. There were no statistically significant differences between the treatment conditions regarding trust or trustworthiness. The results suggest that conducting trust game experiments online will yield results similar to those of experiments implemented in a laboratory.

Keywords: laboratory vs. online experiment, trust behavior, trust game, trustworthiness behavior

Procedia PDF Downloads 73
5658 Numerical Investigation on Design Method of Timber Structures Exposed to Parametric Fire

Authors: Robert Pečenko, Karin Tomažič, Igor Planinc, Sabina Huč, Tomaž Hozjan

Abstract:

Timber is a favourable structural material due to its high strength-to-weight ratio, recycling possibilities, and green credentials. Despite being a flammable material, it has relatively high fire resistance. Everyday engineering practice around the world is based on an outdated design of timber structures considering standard fire exposure, while modern principles of performance-based design enable the use of advanced non-standard fire curves. In Europe, the standard for the fire design of timber structures, EN 1995-1-2 (Eurocode 5), gives two methods: the reduced material properties method and the reduced cross-section method. In the latter, the fire resistance of structural elements depends on the effective cross-section, that is, the residual cross-section of uncharred timber reduced additionally by the so-called zero strength layer. In the case of standard fire exposure, Eurocode 5 gives a fixed value for the zero strength layer, i.e. 7 mm, while for non-standard parametric fires no additional comments or recommendations on the zero strength layer are given. Designers therefore often apply the adopted 7 mm rule to parametric fire exposure as well. Since the latest scientific evidence suggests that the proposed value of the zero strength layer can be on the unsafe side even for standard fire exposure, its use in the case of a parametric fire is also highly questionable, and more numerical and experimental research in this field is needed. Therefore, the purpose of the presented study is to use advanced calculation methods to investigate the thickness of the zero strength layer and the parametric charring rates used in the effective cross-section method in the case of parametric fire. Parametric studies are carried out on a simple solid timber beam that is exposed to a large number of parametric fire curves. The zero strength layer and charring rates are determined based on numerical simulations performed by a recently developed advanced two-step computational model. The first step comprises a hygro-thermal model, which predicts the temperature, moisture and char depth development and takes into account different initial moisture states of the timber. In the second step, the response of the timber beam simultaneously exposed to mechanical and fire load is determined. The mechanical model is based on Reissner’s kinematically exact beam model and accounts for the membrane, shear and flexural deformations of the beam. Furthermore, materially non-linear and temperature-dependent behaviour is considered. In the two-step model, the char front is, according to Eurocode 5, assumed to have a fixed temperature of around 300°C. Based on the performed study and observations, improved values of the charring rates and a new thickness of the zero strength layer in the case of parametric fires are determined. Thus, the reduced cross-section method is substantially improved to offer practical recommendations for designing the fire resistance of timber structures. Furthermore, correlations between the zero strength layer thickness and the key input parameters of the parametric fire curve (for instance, opening factor, fire load, etc.) are given, representing a guideline for more detailed numerical and also experimental research in the future.
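
To make the reduced cross-section method concrete, the sketch below applies its standard-fire form (the fixed 7 mm zero strength layer questioned above) to a solid beam; the notional charring rate of 0.8 mm/min for solid softwood and k0 = 1.0 (valid for t ≥ 20 min) are Eurocode standard-fire values, used here purely for illustration:

```python
def effective_section(b, h, t, beta_n=0.8, d0=7.0, k0=1.0, sides=3):
    """Reduced cross-section method (EN 1995-1-2) for a beam exposed on
    `sides` sides. b, h in mm, t in minutes, beta_n in mm/min.

    d_ef = beta_n * t + k0 * d0  (notional char depth + zero strength layer)
    """
    d_ef = beta_n * t + k0 * d0
    # Three-sided exposure: char on both vertical faces and the soffit.
    b_fi = b - 2 * d_ef if sides >= 3 else b - d_ef
    h_fi = h - d_ef
    return max(b_fi, 0.0), max(h_fi, 0.0)

# 140 x 300 mm solid softwood beam after 30 min of standard fire exposure.
b_fi, h_fi = effective_section(140.0, 300.0, 30.0)
print(f"effective section: {b_fi:.0f} x {h_fi:.0f} mm")  # 78 x 269 mm
```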

Keywords: advanced numerical modelling, parametric fire exposure, timber structures, zero strength layer

Procedia PDF Downloads 160
5657 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation

Authors: Othman Maklouf, Abdunnaser Tresh

Abstract:

Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such systems. GPS alone is incapable of providing continuous and reliable positioning, because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the implementation of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors is now making it feasible to develop an INS using an inertial measurement unit (IMU). An INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated with a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demands for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, that is, to use a partial IMU (ParIMU) configuration. For land vehicular use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining the velocity of the vehicle. This paper presents a field experiment for a low-cost strapdown ParIMU/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach, we have neglected earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost IMU and because of the relatively small area of the trajectory.
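
A loosely coupled integration of this kind is typically realized with a Kalman filter that propagates the INS state and corrects it with GPS fixes. The sketch below is a deliberately minimal 1-D constant-velocity illustration of that predict/correct cycle, not the authors' filter; all noise values are made-up assumptions:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])              # GPS measures position only
Q = np.diag([0.01, 0.1])                # process noise (INS error growth)
R = np.array([[4.0]])                   # GPS position noise variance (m^2)

x = np.zeros(2)                         # state estimate
P = np.eye(2) * 10.0                    # state covariance

rng = np.random.default_rng(1)
true_pos, true_vel = 0.0, 2.0
for k in range(50):
    true_pos += true_vel * dt
    z = np.array([true_pos + rng.normal(0.0, 2.0)])   # noisy GPS fix

    # Predict with the motion model (errors grow unbounded here).
    x = F @ x
    P = F @ P @ F.T + Q

    # Correct with the GPS measurement (bounds the error).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"estimated position {x[0]:.2f} m, velocity {x[1]:.2f} m/s "
      f"(true 10.00 m, 2.00 m/s)")
```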

Keywords: GPS, IMU, Kalman filter, materials engineering

Procedia PDF Downloads 412
5656 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring the residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all these 7 amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and bad peak shape of the acetic acid peaks were identified, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane phase) of the column and the dissociation of acetic acid in water (if used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues. Most of the methods published for acetic acid quantification by GC-HS use a derivatisation technique to protect the acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit level) and ±30% for the other levels, with a 95% confidence interval (risk profile 5%). The method was developed using a DB-WAXetr column manufactured by Agilent (internal diameter: 530 µm, film thickness: 2.0 µm, length: 30 m). A constant flow of 6.0 mL/min of helium in constant makeup mode was selected as the carrier gas. The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method for isopropyl alcohol is 50 ppm to 200 ppm, for ethanol 50 ppm to 3000 ppm, for methanol 50 ppm to 400 ppm, and for acetic acid 100 ppm to 400 ppm, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for the testing of residual solvents in amino acid drug substances.
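
The total-error decision rule sketched below is a crude illustration of the accuracy-profile idea (relative bias plus a coverage factor times the relative precision, compared against the ±40%/±30% acceptance limits); the replicate values are invented, and a full validation would use a β-expectation tolerance interval with a t-based factor rather than the fixed k = 2 used here:

```python
import numpy as np

def accuracy_profile(measured, nominal, limit_pct, k=2.0):
    """Crude total-error check at one level: relative bias +/- k * relative SD
    must stay within +/- limit_pct. k = 2 approximates a 95% interval."""
    measured = np.asarray(measured, dtype=float)
    bias_pct = (measured.mean() - nominal) / nominal * 100.0
    sd_pct = measured.std(ddof=1) / nominal * 100.0
    lo, hi = bias_pct - k * sd_pct, bias_pct + k * sd_pct
    ok = (-limit_pct <= lo) and (hi <= limit_pct)
    return bias_pct, (lo, hi), ok

# Invented replicate recoveries (ppm) at the 50 ppm quantitation level.
reps = [48.1, 52.3, 49.7, 51.0, 47.9, 50.6]
bias, interval, ok = accuracy_profile(reps, nominal=50.0, limit_pct=40.0)
print(f"bias {bias:+.1f}%, interval ({interval[0]:+.1f}%, {interval[1]:+.1f}%), "
      f"pass={ok}")
```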

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 144
5655 GNSS-Aided Photogrammetry for Digital Mapping

Authors: Muhammad Usman Akram

Abstract:

This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site which is to be used in future planning and development (P&D), or for further examination, exploration, research and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; they are also time-consuming, labor-intensive and offer less precision with limited data. By comparison, advanced techniques save manpower and provide more precise output with a wide variety of multiple data sets. In this experimentation, the aerial photogrammetry technique is used, whereby a UAV flies over an area, captures geocoded images and enables a three-dimensional model (3-D model). The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. Subsequently, the raw data collected by the UAV and DGPS are processed in various digital image processing programs and computer-aided design software. As output we obtain a dense point cloud, a digital elevation model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we get a digital map of the area to be surveyed. In conclusion, we compared the processed data with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
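
Among the flight parameters listed, the ground sampling distance follows directly from the camera geometry via the standard relation GSD = pixel size × flight altitude / focal length; a small sketch with illustrative camera values (not the survey's equipment):

```python
def ground_sampling_distance(pixel_size_um, focal_mm, altitude_m):
    """GSD (cm/pixel) = pixel size * flight altitude / focal length."""
    return (pixel_size_um * 1e-6) * altitude_m / (focal_mm * 1e-3) * 100.0

# Illustrative values: 4.4 um pixels, 8.8 mm lens, 100 m flight altitude.
print(f"GSD = {ground_sampling_distance(4.4, 8.8, 100.0):.1f} cm/pixel")  # 5.0
```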

Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry

Procedia PDF Downloads 13
5654 A Single Loop Repetitive Controller for a Four-Leg Matrix Converter Unit

Authors: Wesam Rohouma

Abstract:

The aim of this paper is to investigate the use of a repetitive controller to regulate the output voltage of a three-phase four-leg matrix converter for an aircraft ground power supply unit. The proposed controller improves the steady-state error and provides good regulation under different loading. Simulation results for a 7.5 kW converter are presented to verify the operation of the proposed controller.

Keywords: matrix converter, power electronics, controller, regulation

Procedia PDF Downloads 1500
5653 Comparison of Two Neural Networks to Model Margarine Age and Predict Shelf-Life Using MATLAB

Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien

Abstract:

The present study was aimed at developing and comparing two neural-network-based predictive models to predict the shelf-life/product age of South African margarine, using free fatty acid (FFA), water droplet size (D3.3), water droplet distribution (e-sigma), moisture content, peroxide value (PV), anisidine value (AnV) and total oxidation (totox) value as input variables to the model. Brick margarine products with ages ranging from fresh, i.e. week 0, to week 47 were sourced. The brick margarine products, which had been stored at 10 and 25 °C, were characterized. JMP and MATLAB models to predict shelf-life/margarine age were developed, and their performances were compared. The key performance indicators used to evaluate the model performances were the correlation coefficient (CC), root mean square error (RMSE), and mean absolute percentage error (MAPE) relative to the actual data. The MATLAB-developed model showed better performance in all three performance indicators: the correlation coefficient of the MATLAB model was 99.86% versus 99.74% for the JMP model, the RMSE was 0.720 compared to 1.005, and the MAPE was 7.4% compared to 8.571%. The MATLAB model was selected as the most accurate, and the number of hidden neurons/nodes was then optimized to develop a single predictive model. The optimized MATLAB model with 10 neurons showed better performance than the models with 1 and 5 hidden neurons. The developed models can be used by margarine manufacturers, food research institutions, researchers, etc., to predict shelf-life/margarine product age, optimize the addition of antioxidants, extend the shelf-life of products and proactively troubleshoot problems related to changes which have an impact on the shelf-life of margarine, without conducting expensive trials.
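
The three performance indicators used for the comparison are straightforward to compute; a minimal sketch (dummy shelf-life values stand in for the margarine data):

```python
import numpy as np

def cc_rmse_mape(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    cc = np.corrcoef(actual, predicted)[0, 1] * 100.0              # correlation, %
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))             # root mean square error
    mape = np.mean(np.abs((actual - predicted) / actual)) * 100.0  # mean absolute % error
    return cc, rmse, mape

# Dummy product ages (weeks): actual vs model prediction -- illustrative only.
actual = [1, 5, 10, 20, 30, 40, 47]
predicted = [1.4, 4.6, 10.8, 19.1, 31.2, 39.5, 46.3]
cc, rmse, mape = cc_rmse_mape(actual, predicted)
print(f"CC = {cc:.2f}%, RMSE = {rmse:.3f}, MAPE = {mape:.3f}%")
```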

Keywords: margarine shelf-life, predictive modelling, neural networks, oil oxidation

Procedia PDF Downloads 188
5652 Vortices Structure in Internal Laminar and Turbulent Flows

Authors: Farid Gaci, Zoubir Nemouchi

Abstract:

A numerical study of laminar and turbulent fluid flows in a 90° bend of square section was carried out. Three-dimensional meshes, based on hexahedral cells, were generated. The QUICK scheme was employed to discretize the convective term in the transport equations, and the SIMPLE algorithm was adopted to treat the velocity-pressure coupling. The flow structure obtained showed interesting features, such as recirculation zones and counter-rotating pairs of vortices. The performance of three different turbulence models was evaluated: the standard k-ω model, the SST k-ω model and the Reynolds Stress Model (RSM). Overall, it was found that the multi-equation model performed better than the two-equation models. In fact, the existence of four pairs of counter-rotating cells in the straight duct upstream of the bend was predicted by the RSM closure, but not by the standard eddy viscosity model or the SST k-ω model. The analysis of the results led to a better understanding of the induced three-dimensional secondary flows and of the behavior of the local pressure coefficient and the friction coefficient.

Keywords: curved duct, counter-rotating cells, secondary flow, laminar, turbulent

Procedia PDF Downloads 333
5651 Density Determination by Dilution for Extra Heavy Oil Residues Obtained Using Molecular Distillation and Supercritical Fluid Extraction as Upgrading and Refining Process

Authors: Oscar Corredor, Alexander Guzman, Adan Leon

Abstract:

Density is a bulk physical property that indicates the quality of a petroleum fraction. It is also a useful property for estimating various physicochemical properties of fractions and petroleum fluids; however, the determination of the density of extra heavy residual (EHR) fractions by standard methodologies (ASTM D70) shows limitations for samples with densities higher than 1.0879 g/cm³. For this reason, a dilution methodology was developed in order to determine the density of those particular fractions. 87 EHR fractions were obtained as products of the fractionation of typical Colombian vacuum distillation residual fractions using molecular distillation (MD) and extraction with solvent n-hexane in supercritical conditions (SFEF) pilot plants. The proposed methodology showed reliable results, as demonstrated by the standard deviations of repeatability and reproducibility of 0.0031 and 0.0061 g/ml, respectively. In the same way, it was possible to determine densities of EHR fractions up to 1.1647 g/cm³, and the °API values obtained were ten times less than the water reference value.
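
The °API values follow from the measured density through the standard relation °API = 141.5/SG − 131.5 (water, SG = 1, sits at 10 °API); a quick check at the highest density reported above:

```python
def api_gravity(density_g_cm3, water_density_g_cm3=1.0):
    """Degrees API from density: API = 141.5 / SG - 131.5."""
    sg = density_g_cm3 / water_density_g_cm3
    return 141.5 / sg - 131.5

# The densest EHR fraction reported in this work (1.1647 g/cm^3).
print(f"{api_gravity(1.1647):.2f} deg API")  # about -10.0, vs 10 for water
```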

Keywords: API, density, vacuum residual, molecular distillation, supercritical fluid extraction

Procedia PDF Downloads 261
5650 Developing a DNN Model for the Production of Biogas From a Hybrid BO-TPE System in an Anaerobic Wastewater Treatment Plant

Authors: Hadjer Sadoune, Liza Lamini, Scherazade Krim, Amel Djouadi, Rachida Rihani

Abstract:

Deep neural networks are highly regarded for their accuracy in predicting intricate fermentation processes. Their ability to learn from large datasets through artificial intelligence makes them particularly effective models. The primary obstacle in improving the performance of these models is carefully choosing suitable hyperparameters, including the neural network architecture (number of hidden layers and hidden units), activation function, optimizer, learning rate, and other relevant factors. This study predicts biogas production from real wastewater treatment plant data using a sophisticated approach: hybrid Bayesian optimization with a tree-structured Parzen estimator (BO-TPE) for an optimised deep neural network (DNN) model. The plant utilizes an Upflow Anaerobic Sludge Blanket (UASB) digester that treats industrial wastewater from soft drinks and breweries. The digester has a working volume of 1574 m³ and a total volume of 1914 m³; its internal diameter and height were 19 and 7.14 m, respectively. The data preprocessing was conducted with meticulous attention to preserving data quality while avoiding data reduction. Three normalization techniques (MinMaxScaler, RobustScaler and StandardScaler) were applied to the pre-processed data and compared with the non-normalized data. The RobustScaler approach showed strong predictive ability for estimating the volume of biogas produced. The highest predicted biogas volume was 2236.105 Nm³/d, with coefficient of determination (R²), mean absolute error (MAE), and root mean square error (RMSE) values of 0.712, 164.610, and 223.429, respectively.
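
A TPE-based hyperparameter search of the kind described can be sketched with the hyperopt library, one widely used TPE implementation; the search space and the dummy objective below are illustrative assumptions standing in for training the DNN and returning validation RMSE, not the authors' setup:

```python
from hyperopt import fmin, tpe, hp, Trials

def objective(params):
    # In the real workflow this would build the DNN with `params`, train it
    # on the scaled biogas data, and return validation RMSE. A smooth dummy
    # surrogate stands in here so the sketch runs on its own.
    lr, units, layers = params["lr"], params["units"], params["layers"]
    return (lr - 0.01) ** 2 * 1e4 + abs(units - 64) / 64 + abs(layers - 2)

space = {
    "lr": hp.loguniform("lr", -7, -2),          # learning rate, approx [1e-3, 0.14]
    "units": hp.quniform("units", 8, 256, 8),   # hidden units per layer
    "layers": hp.quniform("layers", 1, 4, 1),   # number of hidden layers
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)
print("best hyperparameters:", best)
```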

Keywords: anaerobic digestion, biogas production, deep neural network, hybrid bo-tpe, hyperparameters tuning

Procedia PDF Downloads 29
5649 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial

Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs

Abstract:

Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of using a cross-over design over a conventional parallel design is its flexibility, in that every subject becomes its own control, thereby reducing confounding effects. Jones & Kenward discuss in detail more recent developments in the analysis of cross-over trials. We revisit the simple piecewise linear mixed-effects model, proposed by Mwangi et al. (in press), for its first application in the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models, namely (1) the Grizzle model and (2) the Jones & Kenward model, used in the estimation of the treatment effect in the analysis of a randomized cross-over trial. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs = 20) and large (Nobs = 600) sample sizes. Its coverage probabilities were the highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is a better estimator of the treatment effect than its two competing estimators (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data-generating mechanism used in this paper captures two time periods for a simple 2-treatments × 2-periods cross-over design. Its application is extendible to more complex cross-over designs with multiple treatments and periods. In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and thus may be difficult or impossible to fit in some cases.
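
The two performance measures can be illustrated generically: simulate many datasets from a known truth, estimate the effect, and record the squared error and whether the 95% interval covers the truth. In the minimal sketch below, a plain sample mean stands in for the mixed-effects estimators, purely to show the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect, sigma, n_obs, n_sim = 1.5, 2.0, 20, 5000

sq_err, covered = [], []
for _ in range(n_sim):
    y = rng.normal(true_effect, sigma, n_obs)        # simulated effect data
    est, se = y.mean(), y.std(ddof=1) / np.sqrt(n_obs)
    sq_err.append((est - true_effect) ** 2)
    ci_lo, ci_hi = est - 1.96 * se, est + 1.96 * se  # normal-approx 95% CI
    covered.append(ci_lo <= true_effect <= ci_hi)

print(f"MSE = {np.mean(sq_err):.4f}")                    # approx sigma^2/n = 0.2
print(f"coverage probability = {np.mean(covered):.3f}")  # approx 0.95
```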

Keywords: evaluation, Grizzle model, Jones & Kenward model, performance measures, simulation

Procedia PDF Downloads 117
5648 An Experimental Study on the Effect of Brain-Break in the Classroom on Elementary School Students’ Selective Attention

Authors: Hui Liu, Xiaozan Wang, Jiarong Zhong, Ziming Shao

Abstract:

Introduction: Related research shows that students do not concentrate on the teacher’s speaking in the classroom. The d2 attention test is a time-limited test of selective attention and can be used to evaluate individual selective attention. Purpose: To use the d2 attention test tool to measure the difference between the attention levels of the experimental class and the control class before and after Brain-Break, and to explore the effect of Brain-Break in the classroom on students' selective attention. Methods: According to the principle of no difference in pre-test data, two classes in the fourth grade of Shenzhen Longhua Central Primary School were selected. After 20 minutes of class in the third period of the morning and the third period of the afternoon, an approximately 3-minute Brain-Break intervention was performed in the experimental class for 10 weeks. The control class received its normal classes without intervention. Before and after the experiment, the d2 attention test tool was used to test the attention levels of the students in both classes. The paired-sample t-test and independent-sample t-test in SPSS 23.0 were used to test the change in the attention levels of the two classes over the 10 weeks. This article only presents results with significant differences. Results: The independent-sample t-test results showed that, after ten weeks of Brain-Break, the missed errors (E1, t = -2.165, p = 0.042), concentration performance (CP, t = 1.866, p = 0.05), and the degree of omissions (Epercent, t = -2.375, p = 0.029) in the experimental class showed significant differences compared with the control class. The students’ error level decreased and their concentration increased. Conclusions: Adding Brain-Break interventions in the classroom can effectively improve the attention level of fourth-grade primary school students to a certain extent; in particular, it can improve the concentration of attention and decrease the error rate in tasks. The new sport learning model is worth promoting.

Keywords: cultural class, micromotor, attention, D2 test

Procedia PDF Downloads 125
5647 The Affect of Ethnic Minority People: A Prediction by Gender and Marital Status

Authors: A. K. M. Rezaul Karim, Abu Yusuf Mahmud, S. H. Mahmud

Abstract:

The study aimed to investigate whether the affect (experience of feeling or emotion) of ethnic minority people can be predicted by gender and marital status. Toward this end, the positive affect and negative affect of 103 adult indigenous persons were measured. Analysis of the data in multiple regressions demonstrated that both gender and marital status are significantly associated with positive affect (gender: β = .318, p < .001; marital status: β = .201, p < .05), but not with negative affect. The results indicated that indigenous males have 0.32 standard deviations higher positive affect than indigenous females, and that married individuals have 0.20 standard deviations higher positive affect than their unmarried counterparts. These findings advance our understanding that gender and marital status inequalities in the experience of emotion are not specific to mainstream society; rather, they are a generalized picture of all societies. In general, men possess more positive affect than women, and married persons possess more positive affect than unmarried persons.

Keywords: positive affect, negative affect, ethnic minority, gender, marital status

Procedia PDF Downloads 440
5646 A Four Free Element Radiofrequency Coil with High B₁ Homogeneity for Magnetic Resonance Imaging

Authors: Khalid Al-Snaie

Abstract:

In this paper, the design and testing of a symmetrical radiofrequency prototype coil with high B₁ magnetic field homogeneity are presented. The developed coil comprises four tuned coaxial circular loops that can produce a relatively homogeneous radiofrequency field. In comparison with a standard Helmholtz pair, which provides second-order homogeneity, it aims to provide fourth-order homogeneity of the B₁ field while preserving simplicity of implementation. Electrical modeling of the probe, including all couplings, is used to ensure these requirements. Results of comparison tests, in free space and in a spectro-imager, between a standard Helmholtz pair and the presented prototype coil are introduced. In terms of field homogeneity, an improvement of 30% is observed. Moreover, the proposed prototype coil possesses a better quality factor (+25% on average) and a noticeable improvement in sensitivity (+20%). Overall, this work, which includes both theoretical and experimental aspects, aims to contribute to the study and understanding of four-element radiofrequency (RF) systems derived from Helmholtz coils for magnetic resonance imaging (MRI).
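
The homogeneity gain over a Helmholtz pair can be explored numerically from the on-axis field of a circular loop, B(z) = μ₀IR²/(2(R² + (z − z₀)²)^(3/2)). The sketch below compares a Helmholtz pair with a four-loop arrangement; the four-loop spacings and current ratios are illustrative guesses, not the prototype's values:

```python
import numpy as np

MU0 = 4e-7 * np.pi

def loop_bz(z, radius, z0, current=1.0):
    """On-axis field of a circular loop centered at axial position z0."""
    return MU0 * current * radius**2 / (2.0 * (radius**2 + (z - z0)**2) ** 1.5)

z = np.linspace(-0.02, 0.02, 401)   # +/- 2 cm around the center
R = 0.10                            # 10 cm loop radius

# Standard Helmholtz pair: spacing = R, equal currents.
helmholtz = loop_bz(z, R, -R / 2) + loop_bz(z, R, +R / 2)

# Four coaxial loops -- positions and currents are illustrative guesses only.
four = sum(loop_bz(z, R, z0, i) for z0, i in
           [(-0.12, 2.26), (-0.047, 1.0), (0.047, 1.0), (0.12, 2.26)])

for name, b in (("Helmholtz pair", helmholtz), ("four loops", four)):
    print(f"{name}: peak-to-peak inhomogeneity = "
          f"{(b.max() - b.min()) / b.mean() * 100:.4f} %")
```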

Keywords: B₁ homogeneity, MRI, NMR, radiofrequency, RF coil, free element systems

Procedia PDF Downloads 83
5645 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, some basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risk of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. Infants deemed to have non-specific complaints or diagnoses by the emergency clinicians were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, the utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This chart was developed by expert clinicians in the local health department. It categorizes important clinical signs into color-coded zones as a visual cue for the serious implications of certain abnormalities. An infant is regarded as SPOC-positive when fulfilling the criteria of 1 red zone or 2 yellow zones, and the attending clinician is prompted to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to fulfil the SPOC-positive criteria than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, the sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) than in those with mild conditions (15.89%) (p < 0.001). The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%); however, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart has been applied as a useful clinical tool in clinical practice to help identify and manage young sick infants in the ED effectively.
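
The reported predictive values follow from sensitivity, specificity and prevalence via Bayes' rule; a quick check that reproduces the paper's figures:

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(sens=0.565, spec=0.841, prev=0.174)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # approx 42.8% and 90.2%, as reported
```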

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 242
5644 Protocol for Consumer Research in Academia for Community Marketing Campaigns

Authors: Agnes J. Otjen, Sarah Keller

Abstract:

A Montana university has used applied consumer research in experiential learning with non-profit clients for over a decade. Through trial and error, a successful protocol has been established, from problem statement through formative research to integrated marketing campaign execution. In this paper, we describe the protocol and its applications. Analysis was completed to determine the effectiveness of the campaigns and to show how pre- and post-campaign consumer research marks societal change brought about by the media.

Keywords: consumer, research, marketing, communications

Procedia PDF Downloads 126
5643 AI Peer Review Challenge: Standard Model of Physics vs 4D GEM EOS

Authors: David A. Harness

Abstract:

The natural evolution of automated theorem proving (ATP) cognitive systems is to meet AI peer review standards. The ATP process of axiom selection from Mizar to prove a conjecture would be further refined, as in all human and machine learning, by solving the real-world problem of the proposed AI peer review challenge: determine which conjecture forms the higher-confidence-level constructive proof between the Standard Model of Physics SU(n) lattice gauge group operation and the present non-standard 4D GEM EOS SU(n) lattice gauge group spatially extended operation, in which the photon and electron are the first two trace angular momentum invariants of a gravitoelectromagnetic (GEM) energy momentum density tensor wavetrain integration spin-stress pressure-volume equation of state (EOS), initiated via 32 lines of Mathematica code. The resulting gravitoelectromagnetic spectrum ranges from compressive through rarefactive of the central cosmological constant vacuum energy density in units of pascals. Said self-adjoint group operation exclusively operates on the stress energy momentum tensor of the Einstein field equations, introducing quantization directly on the 4D spacetime level, essentially reformulating the Yang-Mills virtual superpositioned particle compounded lattice gauge group quantization of the vacuum into a single hyper-complex multi-valued GEM U(1) × SU(1,3) lattice gauge group Planck spacetime mesh quantization of the vacuum. Thus, the Mizar corpus already contains all of the axioms required for relevant DeepMath premise selection and unambiguous formal natural language parsing in context deep learning.

Keywords: automated theorem proving, constructive quantum field theory, information theory, neural networks

Procedia PDF Downloads 173
5642 Product Line Design with Customization in the Presence of Demand Uncertainty

Authors: Parisa Bagheri Tookanlou

Abstract:

In this paper, we analyze a product line design problem faced by a manufacturing firm where the product line consists of a customized product in addition to a standard product and is offered in a market in which customers are heterogeneous on the aesthetic attributes of the product. The customization level of a product is defined by the fraction of aesthetic attributes of the product that the manufacturer chooses to customize. In contrast to the existing literature on product line design, which predominantly assumes deterministic demand, we consider the presence of demand uncertainty and frame the product line design problem in a single-period (newsvendor) setting. We examine the effect of demand uncertainty on product line decisions. Furthermore, we also examine how product line decisions are influenced by the channel structure. While we use the centralized channel as a benchmark, we consider the decentralized dual channel where the customized product is sold through an online channel owned by the manufacturer and the standard product is sold through a retailer. We introduce a supply contract between the manufacturer and the retailer to improve channel efficiency and coordinate the distribution channel.
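
In a single-period setting of this kind, each product's stocking quantity follows the classic newsvendor critical fractile Q* = F⁻¹((p − c)/(p − s)); a minimal sketch with normal demand. The prices, costs and demand parameters are illustrative assumptions, and the paper's model adds the customization level and channel structure on top of this base case:

```python
from scipy.stats import norm

def newsvendor_quantity(price, cost, salvage, mu, sigma):
    """Optimal order quantity Q* = F^{-1}((p - c)/(p - s)) for normal demand."""
    critical_fractile = (price - cost) / (price - salvage)
    return norm.ppf(critical_fractile, loc=mu, scale=sigma)

# Illustrative: standard product vs customized product (higher margin,
# noisier demand). Values are assumptions, not the paper's parameters.
q_std = newsvendor_quantity(price=10.0, cost=6.0, salvage=2.0, mu=100.0, sigma=20.0)
q_cust = newsvendor_quantity(price=18.0, cost=9.0, salvage=2.0, mu=60.0, sigma=25.0)
print(f"standard Q* = {q_std:.1f}, customized Q* = {q_cust:.1f}")
```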

Keywords: product line design, demand uncertainty, customization level, distribution channel

Procedia PDF Downloads 175
5641 Evidence of Climate Change from Statistical Analysis of Temperature and Rainfall Data of Kaduna State, Nigeria

Authors: Iliya Bitrus Abaje

Abstract:

This study examines the evidence of a climate change scenario in Kaduna State from the analysis of temperature and rainfall data (1976-2015) from three meteorological stations along a geographic transect from the southern part to the northern part of the State. Different statistical methods were used to determine the changes in both the temperature and rainfall series. The linear trend lines revealed a mean increase in average temperature of 0.73 °C over the 40-year study period in the State. The plotted standard deviation of the temperature anomalies generally revealed that, in the last two decades (1996-2005 and 2006-2015), years with temperatures above one standard deviation of the mean (hotter than normal conditions) outnumbered those below (colder than normal conditions). Cramer’s test and Student’s t-test generally revealed an increasing temperature trend in the recent decades. The increase in temperature is evidence that the earth’s atmosphere is getting warmer in recent years. The linear trend line equation of the annual rainfall for the period of study showed a mean increase of 316.25 mm for the State. Findings also revealed that the plotted standard deviation of the rainfall anomalies, and the analyses of 10-year non-overlapping and 30-year overlapping sub-periods, in all three stations generally showed an increasing trend from the beginning of the data to the recent years. This is evidence that the study area is now experiencing wetter conditions in recent years, and hence climate change. The study recommends diversification of the economic base of the populace, with emphasis on moving away from activities that are sensitive to temperature and rainfall extremes. Also, appropriate strategies to ameliorate the scourge of climate change at all levels/sectors should always take into account the recent changes in temperature and rainfall amounts in the area.
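
The trend-line and anomaly analysis described can be reproduced generically; the sketch below fits a linear trend and flags years beyond one standard deviation of the anomalies, using a synthetic series in place of the Kaduna station data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
years = np.arange(1976, 2016)
# Synthetic temperature series with a warming trend -- not the station data.
temps = 25.0 + 0.018 * (years - years[0]) + rng.normal(0.0, 0.3, years.size)

slope, intercept, r, p, se = stats.linregress(years, temps)
print(f"trend = {slope * years.size:.2f} C over {years.size} years (p = {p:.3g})")

anomalies = temps - temps.mean()
hot = years[anomalies > anomalies.std(ddof=1)]    # hotter than normal
cold = years[anomalies < -anomalies.std(ddof=1)]  # colder than normal
print("years above +1 SD:", hot.tolist())
print("years below -1 SD:", cold.tolist())
```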

Keywords: anomalies, linear trend, rainfall, temperature

Procedia PDF Downloads 312
5640 Assessments of Some Environmental Variables on Fisheries at Two Levels: Global and FAO Major Fishing Areas

Authors: Hyelim Park, Juan Martin Zorrilla

Abstract:

Climate change influences ocean ecosystem functioning widely and in various ways. The consequences of climate change for marine ecosystems include an increase in temperature and irregular behavior of some solute concentrations. These changes would affect fisheries catches in several ways. Our aim is to assess quantitative changes in fishery catches over time and express them through four environmental variables: Sea Surface Temperature (SST4) and the concentrations of Chlorophyll (CHL), Particulate Inorganic Carbon (PIC) and Particulate Organic Carbon (POC), at two spatial scales: global and the nineteen FAO Major Fishing Areas divisions. Data collection was based on the FAO FishStatJ 2014 database as well as MODIS Aqua satellite observations from 2002 to 2012. Some data had to be corrected and interpolated using existing methods. As a result, a multivariable regression model for average global fisheries catches contained the temporal mean of SST4, the standard deviation of SST4, the standard deviation of CHL and the standard deviation of PIC. A global vector autoregressive (VAR) model showed that SST4 was a statistical cause of global fishery catch. To accommodate varying fishery conditions and the influence of climate change variables, a model was constructed for each FAO major fishing area. From the management perspective, some limitations of the FAO marine area division should be recognized, which opens the possibility of discussing the subdivision of the areas into smaller units. Furthermore, the contribution changes of fishery species and the possible environmental factors for specific species should be treated at various scale levels.
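
A Granger-causality test of the kind implied by the keywords can be run with statsmodels; the sketch below uses synthetic series in which the "sst" variable drives "catch" with a one-year lag, as illustrative stand-ins for the FishStatJ/MODIS data:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 200
sst = rng.standard_normal(n)
catch = np.zeros(n)
for t in range(1, n):
    # 'catch' responds to last year's 'sst' -- synthetic, for illustration.
    catch[t] = 0.6 * sst[t - 1] + 0.2 * catch[t - 1] + 0.3 * rng.standard_normal()

data = pd.DataFrame({"catch": catch, "sst": sst})
# Tests whether the 2nd column (sst) Granger-causes the 1st (catch).
res = grangercausalitytests(data[["catch", "sst"]], maxlag=2, verbose=False)
for lag, (tests, _) in res.items():
    print(f"lag {lag}: ssr F-test p = {tests['ssr_ftest'][1]:.3g}")
```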

Keywords: fisheries catch, FAO FishStatJ, MODIS Aqua, sea surface temperature (SST), chlorophyll, particulate inorganic carbon (PIC), particulate organic carbon (POC), VAR, Granger causality

Procedia PDF Downloads 479