Search results for: step method
20516 Identification and Optimisation of South Africa's Basic Access Road Network
Authors: Diogo Prosdocimi, Don Ross, Matthew Townshend
Abstract:
Road authorities are mandated within limited budgets to both deliver improved access to basic services and facilitate economic growth. This responsibility is further complicated if maintenance backlogs and funding shortfalls exist, as evident in many countries including South Africa. These conditions require authorities to make difficult prioritisation decisions, with the effect that Road Asset Management Systems with a one-dimensional focus on traffic volumes may overlook the maintenance of low-volume roads that provide isolated communities with vital access to basic services. Given these challenges, this paper overlays the full South African road network with geo-referenced information for population, primary and secondary schools, and healthcare facilities to identify the network of connective roads between communities and basic service centres. This connective network is then rationalised according to the Gross Value Added and number of jobs per mesozone, administrative and functional road classifications, speed limit, and road length, location, and name to estimate the Basic Access Road Network. A two-step floating catchment area (2SFCA) method, capturing a weighted assessment of drive-time to service centres and the ratio of people within a catchment area to teachers and healthcare workers, is subsequently applied to generate a Multivariate Road Index. This Index is used to assign higher maintenance priority to roads within the Basic Access Road Network that provide more people with better access to services. The relatively limited incidence of Basic Access Roads indicates that authorities could maintain the entire estimated network without exhausting the available road budget before practical economic considerations get any purchase. 
Despite this fact, a final case study modelling exercise is performed for the Namakwa District Municipality to demonstrate the extent to which optimal relocation of schools and healthcare facilities could minimise the Basic Access Road Network and thereby release budget for investment in roads that best promote GDP growth.
Keywords: basic access roads, multivariate road index, road prioritisation, two-step floating catchment area method
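The two-step floating catchment area (2SFCA) calculation described above can be sketched as follows. This is a minimal illustration, not the study's implementation: the 30-minute catchment, the drive-time matrix, and the population/provider numbers are all assumed toy values.

```python
# Toy 2SFCA sketch: step 1 computes a provider-to-population ratio at each
# service site; step 2 sums those ratios over all sites reachable from each
# community within the catchment drive time.

def two_step_fca(drive_time, supply, population, t_max=30.0):
    """drive_time[i][j]: minutes from community i to service site j."""
    n_comm, n_site = len(population), len(supply)
    # Step 1: ratio R_j = S_j / (population within the catchment of site j)
    ratios = []
    for j in range(n_site):
        demand = sum(population[i] for i in range(n_comm)
                     if drive_time[i][j] <= t_max)
        ratios.append(supply[j] / demand if demand else 0.0)
    # Step 2: accessibility A_i = sum of R_j over sites within t_max of i
    return [sum(ratios[j] for j in range(n_site)
                if drive_time[i][j] <= t_max)
            for i in range(n_comm)]

# Two communities, two clinics (illustrative data)
times = [[10.0, 40.0],   # community 0 reaches only clinic 0
         [20.0, 15.0]]   # community 1 reaches both clinics
access = two_step_fca(times, supply=[2.0, 3.0], population=[100.0, 300.0])
```

In this toy case the second community scores higher because it falls inside both catchments; a drive-time-decay weight, as used in the paper, would be applied to each term before summing.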
Procedia PDF Downloads 231
20515 Continuous Plug Flow and Discrete Particle Phase Coupling Using Triangular Parcels
Authors: Anders Schou Simonsen, Thomas Condra, Kim Sørensen
Abstract:
Various processes are modelled using a discrete phase, where particles are seeded from a source. Such particles can represent liquid water droplets, which affect the continuous phase by exchanging thermal energy, momentum, species, etc. Discrete phases are typically modelled using parcels, each of which represents a collection of particles sharing properties such as temperature and velocity. When coupling the phases, the exchange rates are integrated over the cell in which the parcel is located, which can cause spikes and fluctuating exchange rates. This paper presents an alternative method of coupling a discrete and a continuous plug flow phase using triangular parcels, which span between nodes that follow the dynamics of single droplets; the triangular parcels are thus propagated via their corner nodes. At each time step, the exchange rates are spatially integrated over the surface of the triangular parcels, which yields a smooth continuous exchange rate to the continuous phase. The results show that the method is more stable, converges slightly faster, and yields smoother exchange rates than the steam tube approach. However, the computational requirements are about five times greater, so the applicability of the alternative method should be limited to processes where the exchange rates are important. The overall balances of the exchanged properties did not change significantly using the new approach.
Keywords: CFD, coupling, discrete phase, parcel
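The spatial integration over a triangular parcel can be sketched with a simple property of linear interpolation: for a field varying linearly between the three corner nodes, the exact surface integral is the triangle area times the mean of the corner values. The geometry and exchange rates below are illustrative assumptions, not data from the paper.

```python
# Sketch of integrating a nodal exchange rate over one triangular parcel.

def triangle_area(p0, p1, p2):
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    # Half the magnitude of the cross product of two edge vectors
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

def parcel_exchange(corners, corner_rates):
    """Integrate a linearly interpolated exchange rate over the parcel."""
    area = triangle_area(*corners)
    return area * sum(corner_rates) / 3.0

# Unit right triangle; rates carried by the three propagated corner nodes
q = parcel_exchange([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], [3.0, 6.0, 9.0])
```

Because the integral varies smoothly as the corner nodes move, the exchange rate handed to the continuous phase avoids the cell-crossing spikes of point-parcel coupling.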
Procedia PDF Downloads 266
20514 Evaluation of Synthesis and Structure Elucidation of Some Benzimidazoles as Antimicrobial Agents
Authors: Ozlem Temiz Arpaci, Meryem Tasci, Hakan Goker
Abstract:
Benzimidazole, a structural isostere of the indole and purine nuclei that can interact with biopolymers, can be regarded as a master key; benzimidazole compounds are therefore important fragments in medicinal chemistry because of their wide range of biological activities, including antimicrobial activity. We planned to synthesize benzimidazole compounds as candidates for new antimicrobial drugs. In this study, we placed heterocyclic rings on the second position and an amidine group on the fifth position of the benzimidazole ring and synthesized the compounds using a multi-step procedure. As the first step, 4-chloro-3-nitrobenzonitrile was reacted with cyclohexylamine in dimethylformamide. Imidate esters (compound 2) were then prepared with absolute ethanol saturated with dry HCl gas. These imidate esters, which were not very stable, were converted to compound 3 by passing ammonia gas through ethanol. Over a Pd/C catalyst, the nitro group was reduced to the amine group (compound 4). Finally, various aldehyde derivatives were reacted as their sodium metabisulfite addition products to give compounds 5-20. Melting points were determined on a Buchi B-540 melting point apparatus in open capillary tubes and are uncorrected. Elemental analyses were done on a Leco CHNS 932 elemental analyzer. 1H-NMR and 13C-NMR spectra were recorded on a Varian Mercury 400 MHz spectrometer using DMSO-d6. Mass spectra were acquired on a Waters Micromass ZQ using the ESI(+) method. The structures were supported by spectral data: the 1H-NMR, 13C-NMR, and mass spectra and the elemental analysis results agree with the proposed structures. Antimicrobial activity studies of the synthesized compounds are under investigation.
Keywords: benzimidazoles, synthesis, structure elucidation, antimicrobial
Procedia PDF Downloads 155
20513 Reliability-Simulation of Composite Tubular Structure under Pressure by Finite Elements Methods
Authors: Abdelkader Hocine, Abdelhakim Maizia
Abstract:
The exponential growth in the use of fiber-reinforced composite materials has prompted researchers to step up their work on the prediction of their reliability. Owing to differences between the properties of the materials used for the composite, the manufacturing processes, the load combinations, and the types of environment, predicting the reliability of composite materials has become a primary task. Using the Tsai-Wu and maximum stress failure criteria, the reliability of multilayer tubular structures under pressure is the subject of this paper, where the failure probability is estimated by the Monte Carlo method.
Keywords: composite, design, monte carlo, tubular structure, reliability
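A Monte Carlo failure-probability estimate of the kind described above can be sketched as follows. To stay compact, this toy uses only the maximum-stress criterion with a thin-wall hoop-stress formula; the strength and pressure distributions, tube radius, and wall thickness are invented for illustration and are not the paper's Tsai-Wu data.

```python
import random

# Monte Carlo sketch: sample random ply strength and internal pressure,
# count samples where the maximum-stress failure criterion is violated.

def failure_probability(n_samples, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(600.0, 40.0)    # MPa, assumed hoop strength
        pressure = rng.gauss(50.0, 6.0)      # MPa, assumed internal pressure
        hoop_stress = pressure * 50.0 / 5.0  # thin-wall p*r/t, r=50mm, t=5mm
        if hoop_stress > strength:           # maximum-stress criterion
            failures += 1
    return failures / n_samples

pf = failure_probability(20000)
```

Swapping the one-line criterion for a full Tsai-Wu quadratic in the ply stresses, with each strength parameter sampled from its own distribution, recovers the approach the paper takes.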
Procedia PDF Downloads 464
20512 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm, applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients into a single value; this is accomplished using three different keys. The decoding/decompression uses the QSS (Quick Sequential Search) Decoding Algorithm presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients, so that another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
Keywords: matrix minimization algorithm, decoding sequential search algorithm, image compression, DCT, DWT
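The encode/search-decode idea can be sketched as follows. The three keys and the quantized coefficient range below are illustrative assumptions; the paper's QSS algorithm prunes this search far more aggressively than the naive triple loop shown here.

```python
# Sketch: three coefficients are combined into one value using three keys,
# and decoding searches candidate triples until the value is reproduced.

KEYS = (1, 10, 100)           # assumed weighting keys
COEFF_RANGE = range(-4, 5)    # assumed coefficient range after quantization

def encode(c1, c2, c3):
    return KEYS[0] * c1 + KEYS[1] * c2 + KEYS[2] * c3

def sequential_decode(value):
    """Sequentially search coefficient triples for an exact match."""
    for c3 in COEFF_RANGE:
        for c2 in COEFF_RANGE:
            for c1 in COEFF_RANGE:
                if encode(c1, c2, c3) == value:
                    return (c1, c2, c3)
    return None

decoded = sequential_decode(encode(3, -2, 1))
```

For the keys to make decoding unambiguous, each key must exceed the largest value the lower-order terms can reach, which is what lets an auxiliary array of all possible decoded triples be precomputed once and reused.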
Procedia PDF Downloads 150
20511 Simulation and Analytical Investigation of Different Combination of Single Phase Power Transformers
Authors: M. Salih Taci, N. Tayebi, I. Bozkır
Abstract:
In this paper, the equivalent circuit of the ideal single-phase power transformer, with appropriate voltage and current measurements, is presented. The calculated voltages and currents for the different connections of a single-phase transformer are compared with the results of the simulation process; as can be seen, the calculated results match the simulated results. The paper covers eight possible transformer connections. Depending on the desired voltage level, both step-down and step-up applications are considered. Modelling and analysis of a system consisting of an equivalent source, a transformer (primary and secondary), and loads are performed to investigate the combinations. The obtained values are simulated in the PSpice environment, and how the currents, voltages, and phase angles are distributed is then explained based on the calculations.
Keywords: transformer, simulation, equivalent model, parallel series combinations
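The hand calculation for the ideal transformer reduces to the turns ratio: secondary voltage scales by N2/N1 and secondary current by N1/N2, so power is conserved. The 240 V, 4:1 step-down figures below are an illustrative example, not values from the paper.

```python
# Ideal single-phase transformer sketch: V2 = V1 / a and I2 = I1 * a,
# where a = N1/N2 is the turns ratio.

def ideal_transformer(v1, i1, n1, n2):
    a = n1 / n2
    return v1 / a, i1 * a   # (secondary voltage, secondary current)

# Step-down example: 240 V primary, 0.5 A primary current, 4:1 turns ratio
v2, i2 = ideal_transformer(240.0, 0.5, 400, 100)
power_in, power_out = 240.0 * 0.5, v2 * i2   # both 120 W for an ideal core
```

Swapping n1 and n2 gives the step-up case; the same relations, applied winding by winding, let the series/parallel combinations in the paper be checked against the PSpice results.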
Procedia PDF Downloads 361
20510 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we present a fusion-based approach and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that text identification can be performed correctly. The second step involves extracting relevant and informative features, and the third step implements the classification decision. The three stages involved are thus data acquisition and preprocessing, feature extraction, and classification. We concentrated on two aspects of obtaining features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the mid pixel of the window is on, the two edge fragments (i.e., connected sequences of pixels) emerging from this mid pixel are traced, and their directions are measured and stored as pairs. A joint probability distribution is then obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features. Our study therefore focuses on handwriting recognition based on feature selection, to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages
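The edge-hinge extraction described above can be sketched in a few lines. This minimal version uses a 3x3 window and 8-quantized directions and records a hinge only when exactly two edge fragments emerge from the centre pixel; the tiny binary image is illustrative, and a real implementation traces fragments over a larger window.

```python
from collections import Counter

# Minimal edge-hinge sketch: for every 'on' pixel of a binary edge image,
# record the pair of 8-quantized directions of its 'on' neighbours and
# accumulate a joint probability distribution over those pairs.

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]  # direction codes 0..7

def edge_hinge_distribution(img):
    counts = Counter()
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not img[y][x]:
                continue
            dirs = [d for d, (dy, dx) in enumerate(NEIGHBOURS)
                    if img[y + dy][x + dx]]
            if len(dirs) == 2:        # a hinge: two emerging edge fragments
                counts[tuple(sorted(dirs))] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()} if total else {}

# A small L-shaped stroke
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0]]
dist = edge_hinge_distribution(img)
```

Flattening this joint distribution into a fixed-length vector gives the feature that is then fed to the Manhattan or minimum-distance classifier.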
Procedia PDF Downloads 494
20509 Development of a Two-Step 'Green' Process for (-) Ambrafuran Production
Authors: Lucia Steenkamp, Chris V. D. Westhuyzen, Kgama Mathiba
Abstract:
Ambergris, and more specifically its oxidation product (–)-ambrafuran, is a scarce, valuable, and sought-after perfumery ingredient. The material is used as a fixative agent to stabilise perfumes in formulations by reducing the evaporation rate of volatile substances. Ambergris is a metabolic product of the sperm whale (Physeter macrocephalus L.), resulting from intestinal irritation. Chemically, (–)-ambrafuran is produced from the natural product sclareol in eight synthetic steps, in the process using harsh and often toxic chemicals. An overall yield of no more than 76% can be achieved in some routes, but generally it is lower. A new 'green' route has been developed in our laboratory in which sclareol, extracted from the Clary sage plant, is converted to (–)-ambrafuran in two steps with an overall yield in excess of 80%. The first step uses a microorganism, Hyphozyma roseoniger, to bioconvert sclareol to an intermediate diol at substrate concentrations of up to 50 g/L. The yield varies between 90% and 67% depending on the substrate concentration used. The diol product is obtained at 95% purity and is used without further purification in the next step, in which it is cyclodehydrated to the final product (–)-ambrafuran using a zeolite, which is not harmful to the environment and is readily recycled. The yield of this step is 96%, and following a single recrystallization, the purity of the product is > 99.5%. A preliminary LC-MS study of the bioconversion identified several intermediates produced in the fermentation broth under oxygen-restricted conditions. Initially, a short-lived ketone is produced in equilibrium with a more stable pyranol, a key intermediate in the process. The latter is oxidised under Norrish type I cleavage conditions to yield an acetate, which is hydrolysed either chemically or by lipase action to afford the primary fermentation product, the intermediate diol.
All the intermediates identified point to the likely action of CYP450(s) as the key enzyme(s) in the mechanism. This invention is an exceptional example of how the power of biocatalysis, combined with a mild, benign chemical step, can be deployed to replace the total chemical synthesis of a specific chiral antipode of a commercially relevant material.
Keywords: ambrafuran, biocatalysis, fragrance, microorganism
Procedia PDF Downloads 226
20508 Durability Analysis of a Knuckle Arm Using VPG System
Authors: Geun-Yeon Kim, S. P. Praveen Kumar, Kwon-Hee Lee
Abstract:
A steering knuckle arm is the component that connects the steering system and the suspension system. Structural performances such as stiffness, strength, and durability are considered in its design process. A former study suggested a lightweight design of a knuckle arm considering the structural performances and using metamodel-based optimization: six shape design variables were defined, and the optimum design was calculated by applying the kriging interpolation method, with the finite element method utilized to predict the structural responses. The suggested knuckle was made of the aluminum alloy Al6082, and its weight was reduced by about 60% in comparison with the base steel knuckle while satisfying the design requirements. We then investigated its manufacturability by performing forging analysis; the forging was modelled as a hot process, and the product was made through two-step forging. As the final step of the development process, the durability is investigated using the flexible dynamic analysis software LS-DYNA and the pre- and post-processor eta/VPG. Generally, a carmaker does not share all of its information with the part manufacturer, so the part manufacturer is limited in predicting durability performance at the full-car level. However, eta/VPG provides libraries of commonly used suspensions, tires, and roads, which makes full-car modelling possible. First, the full car is modelled by referencing the following information: overall length 3,595 mm, overall width 1,595 mm, CVW (curb vehicle weight) 910 kg, front suspension MacPherson strut, rear suspension torsion beam axle, tire 235/65R17. Second, the road is selected as cobblestone; the road condition of cobblestone is almost 10 times more severe than that of a usual paved road. Third, dynamic finite element analysis using LS-DYNA is performed to predict the durability performance of the suggested knuckle arm.
The life of the suggested knuckle arm is calculated as 350,000 km, which satisfies the design requirement set by the part manufacturer. In this study, the overall design process of a knuckle arm is presented, and it can be seen that the developed knuckle arm satisfies the durability requirement at the full-car level. The VPG analysis is performed successfully even though it does not give an exact prediction, since the full-car model is a rough one. Thus, this approach can be used effectively when the details of the full car are not given.
Keywords: knuckle arm, structural optimization, metamodel, forging, durability, VPG (Virtual Proving Ground)
Procedia PDF Downloads 419
20507 Research the Causes of Defects and Injuries of Reinforced Concrete and Stone Construction
Authors: Akaki Qatamidze
Abstract:
Implementation of the project will be a step forward in terms of the reliability and improvement of construction in Georgia and the development of the construction industry. Completion of the project is expected to result in a complete body of knowledge for assessing the technical condition of concrete and stone structures. The method is based on a detailed examination of the structure in order to establish its injuries and to evaluate the possibility of changing the structural scheme to meet new functional and architectural-preservation requirements. The research project on reinforced concrete and stone structures is carried out as a systematic analysis, an approach that is important for optimizing the research process and developing new knowledge in neighbouring areas. In addition, the problem of rationally reconciling physical and mathematical models is addressed: the main pillar is the physical (in-situ) data, and the mathematical calculation models and physical experiments are used only for the specification and verification of the calculation model. To enhance the effectiveness of research into the causes of defects and failures of reinforced concrete and stone construction, to maximize automation, and to reduce the expenditure of resources, a methodological concept based on system analysis is recommended; as a major particularity of modern science and technology, it allows the same work stages and procedures to be identified for a whole family of structures, which makes it possible to exclude subjectivity and to address the problem in the optimal direction. The methodology of the project is discussed; it establishes a major step forward in the construction trades and offers practical assistance to engineers, supervisors, and technical experts in settling construction problems.
Keywords: building, reinforced concrete, expertise, stone structures
Procedia PDF Downloads 336
20506 On the Blocked-off Finite-Volume Radiation Solutions in a Two-Dimensional Enclosure
Authors: Gyo Woo Lee, Man Young Kim
Abstract:
The blocked-off formulations for the analysis of radiative heat transfer are formulated and examined in order to find the solutions in a two-dimensional complex enclosure. The final discretization equations, using the step scheme for the spatial differencing practice, are proposed with an additional source term to incorporate the blocked-off procedure. After introducing the implementation for the inactive region into the general discretization equation, three different problems are examined to assess the performance of the solution methods.
Keywords: radiative heat transfer, Finite Volume Method (FVM), blocked-off solution procedure, body-fitted coordinate
Procedia PDF Downloads 295
20505 The Implementation of a Numerical Technique to Thermal Design of Fluidized Bed Cooler
Authors: Damiaa Saad Khudor
Abstract:
The paper describes an investigation into the thermal design of a fluidized bed cooler and the prediction of the heat transfer rate among the media categories. It is devoted to the thermal design of such equipment and its application in industrial fields, and it outlines the strategy for the fluidization heat transfer mode and its implementation in industry. The method is used to furnish a complete design for a fluidized bed cooler of sodium bicarbonate. The total thermal load distribution between air-solid and water-solid along the cooler is calculated according to thermal equilibrium, and a step-by-step technique is used to accomplish the thermal design, predicting the load and the air, solid, and water temperatures along the trough. The thermal design led to the installation of a heat exchanger consisting of 65 horizontal tubes of 33.4 mm diameter and 4 m length inside the bed trough.
Keywords: fluidization, powder technology, thermal design, heat exchangers
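The step-by-step load split along the trough can be sketched as a marching heat balance. This is a toy under stated assumptions: the inlet/outlet temperatures, mass flow, specific heat, number of steps, and the fixed air/water split are all illustrative, not the paper's design data.

```python
# Step-by-step thermal balance sketch along the cooler trough: the total
# duty from cooling the solid is divided into equal steps, and each step's
# duty is split between the fluidizing air and the cooling-water tubes.

def march_cooler(t_solid_in, t_solid_out, m_solid, cp_solid,
                 air_fraction=0.4, n_steps=4):
    """Return (air duty, water duty) in kW for each step along the trough."""
    q_total = m_solid * cp_solid * (t_solid_in - t_solid_out)  # kW
    q_step = q_total / n_steps
    return [(q_step * air_fraction, q_step * (1.0 - air_fraction))
            for _ in range(n_steps)]

# Sodium bicarbonate cooled from 80 C to 40 C at 2 kg/s, cp ~ 1.05 kJ/kg.K
loads = march_cooler(80.0, 40.0, 2.0, 1.05)
q_air = sum(a for a, _ in loads)
q_water = sum(w for _, w in loads)
```

In the actual design procedure the split per step would follow from the local air-solid and water-solid temperature differences rather than a fixed fraction, but the bookkeeping, duties summing to the total thermal load, is the same.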
Procedia PDF Downloads 513
20504 Investigations into Effect of Neural Network Predictive Control of UPFC for Improving Transient Stability Performance of Multimachine Power System
Authors: Sheela Tiwari, R. Naresh, R. Jha
Abstract:
The paper presents an investigation into the effect of neural network predictive control of a UPFC on the transient stability performance of a multi-machine power system. The proposed controller consists of a neural network model of the test system, which is used to predict the future control inputs using the damped Gauss-Newton method with 'backtracking' as the line search method for step selection. The benchmark two-area, four-machine system that mimics the behavior of large power systems is taken as the test system and is subjected to three-phase short circuit faults at different locations over a wide range of operating conditions. The simulation results clearly establish the robustness of the proposed controller to the fault location, an increase in the critical clearing time for the circuit breakers, and improved damping of the power oscillations as compared to a conventional PI controller.
Keywords: identification, neural networks, predictive control, transient stability, UPFC
Procedia PDF Downloads 371
20503 Comprehensive Validation of High-Performance Liquid Chromatography-Diode Array Detection (HPLC-DAD) for Quantitative Assessment of Caffeic Acid in Phenolic Extracts from Olive Mill Wastewater
Authors: Layla El Gaini, Majdouline Belaqziz, Meriem Outaki, Mariam Minhaj
Abstract:
In this study, we introduce and validate a high-performance liquid chromatography method with diode-array detection (HPLC-DAD) specifically designed for the accurate quantification of caffeic acid in phenolic extracts obtained from olive mill wastewater. Separation of caffeic acid was effectively achieved using an Acclaim Polar Advantage column (5 µm, 250 x 4.6 mm). A meticulous multi-step gradient mobile phase, comprising water acidified with phosphoric acid (pH 2.3) and acetonitrile, was employed to ensure optimal separation. Diode-array detection was conducted within the UV-VIS spectrum, spanning a range of 200-800 nm, which facilitated precise analytical results. The method underwent comprehensive validation, addressing several essential analytical parameters, including specificity, repeatability, linearity, the limits of detection and quantification, and measurement uncertainty. The generated linear standard curves displayed high correlation coefficients, underscoring the method's efficacy and consistency. This validated approach is robust and demonstrates exceptional reliability for the analysis of caffeic acid within the intricate matrices of wastewater, offering significant potential for applications in environmental and analytical chemistry.
Keywords: high-performance liquid chromatography (HPLC-DAD), caffeic acid analysis, olive mill wastewater phenolics, analytical method validation
Procedia PDF Downloads 70
20502 Aluminum Matrix Composites Reinforced by Glassy Carbon-Titanium Spatial Structure
Authors: B. Hekner, J. Myalski, P. Wrzesniowski
Abstract:
This study presents aluminum matrix composites reinforced by glassy carbon (GC) and titanium (Ti). In the first step, the heterophase (GC+Ti), spatial, skeleton-like form of the reinforcement was obtained via our own method: a polyurethane foam with a spatial, open-cell structure was covered with a suspension of Ti particles in phenolic resin and pyrolyzed. In the second step, the prepared heterogeneous foams were infiltrated with aluminum alloy. The manufactured composites are intended for industrial application, especially as a material used in the tribological field. From this point of view, the glassy carbon was applied to stabilise the coefficient of friction at the required value of 0.6 and to reduce wear. Wear can be further limited by the titanium phase, which exhibits high mechanical properties. Moreover, fabrication of a thin titanium layer on the carbon skeleton reduces contact between the aluminum alloy and the carbon and thus the formation of an aluminum carbide phase. The main novelty, however, is the manufacture of the reinforcement in the form of a 3D skeleton foam. This kind of reinforcement offers several important advantages over the classical form of reinforcement (particles): the possibility to control the homogeneity of the reinforcement phase in the composite material; a simple composite manufacturing technique (infiltration); the possibility to apply the reinforcement only in the required places of the material; strict control of the phase composition; and high-quality bonding between the components of the material. This research is funded by NCN under grant UMO-2016/23/N/ST8/00994.
Keywords: metal matrix composites, MMC, glassy carbon, heterophase composites, tribological application
Procedia PDF Downloads 118
20501 A New Computational Package for Using in CFD and Other Problems (Third Edition)
Authors: Mohammad Reza Akhavan Khaleghi
Abstract:
This paper presents changes made to the Reduced Finite Element Method (RFEM) whose result, the author argues, is the most powerful numerical method proposed so far (some forms of the method can approximate the most complex equations as simply as the Laplace equation). The Finite Element Method (FEM) is a powerful numerical method that has been used successfully for the solution of existing problems in various scientific and engineering fields, including CFD. Many algorithms have been formulated based on FEM, but none have been adopted in popular CFD software, where the Finite Volume Method (FVM) holds a full monopoly owing to its better efficiency and adaptability to the physics of the problems in comparison with FEM. It does not seem that FEM could compete with FVM unless it were fundamentally changed. This paper presents those changes, and the result is a method with much better performance across all subjects in comparison with FVM and other computational methods; the method is intended not to compete with the finite volume method but to replace it.
Keywords: reduced finite element method, new computational package, new finite element formulation, new higher-order form, new isogeometric analysis
Procedia PDF Downloads 117
20500 Reinforced Concrete Foundation for Turbine Generators
Authors: Siddhartha Bhattacharya
Abstract:
Steam turbine-generators (STG) and combustion turbine-generators (CTG) are used in almost all modern petrochemical, LNG, and power plant facilities. The reinforced concrete table-top foundations required to support these high-speed rotating heavy machines are among the most critical and challenging structures on any industrial project. The paper illustrates, through a practical example, the step-by-step procedure adopted in designing a table-top foundation supported on piles for a steam turbine generator with an operating speed of 60 Hz. A finite element model of the foundation is generated in ANSYS, with the piles modeled as spring-damper elements (COMBIN14). Basic loads are adopted in the analysis and design of the foundation based on the vendor requirements, industry standards, and the relevant ASCE and ACI code provisions. Static serviceability checks are performed with the help of the Misalignment Tolerance Matrix (MTM) method, in which the percentage of misalignment at a given bearing due to displacement at another bearing is calculated and kept within the criteria stipulated by the vendor, so that the machine rotor can sustain the stresses developed due to this misalignment. Dynamic serviceability checks are performed through modal and forced vibration analysis, where the foundation is checked for resonance and allowable amplitudes as stipulated by the machine manufacturer. The reinforced concrete design of the foundation is performed by calculating the axial force, bending moment, and shear at each of the critical sections; these values are obtained through an area integral of the element stresses at the critical locations. The design is done as per ACI 318-05.
Keywords: steam turbine generator foundation, finite element, static analysis, dynamic analysis
Procedia PDF Downloads 295
20499 Linac Quality Controls Using An Electronic Portal Imaging Device
Authors: Domingo Planes Meseguer, Raffaele Danilo Esposito, Maria Del Pilar Dorado Rodriguez
Abstract:
Monthly quality control checks for a radiation therapy linac may be performed in a simple and efficient way once they have been standardized and protocolized. On the other hand, these checks, despite being imperative, require non-negligible execution time in terms of both machine time and operator time. The amount of disposable material that may be needed, together with the use of commercial software, must also be taken into account. With the aim of optimizing and standardizing the mechanical-geometric checks and the multileaf collimator checks, we decided to implement a protocol that makes use of the Electronic Portal Imaging Device (EPID) available on our linacs. The user is guided step by step by the software during the whole procedure, and the acquired images are automatically analyzed by our programs, all of them written using only free software.
Keywords: quality control checks, linac, radiation oncology, medical physics, free software
Procedia PDF Downloads 199
20498 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant
Authors: Michael Smalenberger
Abstract:
Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT.
We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation
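The BKT update that these parameters feed can be sketched as follows: given the current mastery estimate p(L) and the guess, slip, and learn-rate parameters, the posterior is updated after each observed response. The update equations are the standard BKT forms; the parameter values and response sequence below are illustrative, not fitted values from the study.

```python
# Standard BKT posterior update sketch.

def bkt_update(p_mastery, correct, guess=0.2, slip=0.1, learn=0.3):
    """Update P(mastery) after one observed response."""
    if correct:
        # P(L | correct): mastered-and-no-slip vs unmastered-and-guessed
        obs = (p_mastery * (1 - slip)) / (
            p_mastery * (1 - slip) + (1 - p_mastery) * guess)
    else:
        # P(L | incorrect): mastered-but-slipped vs unmastered-and-no-guess
        obs = (p_mastery * slip) / (
            p_mastery * slip + (1 - p_mastery) * (1 - guess))
    # Chance to learn on this practice opportunity
    return obs + (1 - obs) * learn

p = 0.4                                # prior mastery (illustrative)
for answer in [True, True, False]:     # observed response sequence
    p = bkt_update(p, answer)
```

With scant transaction data, very different (guess, slip, learn) triples can produce similar response likelihoods, which is the identifiability problem the self-assessments are proposed to resolve.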
20497 A Study on the Solutions of the 2-Dimensional and Fourth-Order Partial Differential Equations
Abstract:
In this study, we carry out a comparative study of the reduced differential transform method, the Adomian decomposition method, the variational iteration method and the homotopy analysis method. These methods are used in many fields of engineering. This is achieved by handling a kind of 2-dimensional and fourth-order partial differential equations called the Kuramoto–Sivashinsky equations. Three numerical examples have also been carried out to validate and demonstrate the efficiency of the four methods. Furthermore, it is shown that the reduced differential transform method has an advantage over the other methods: it is very effective and simple and can be applied to the nonlinear problems used in engineering.
Keywords: reduced differential transform method, Adomian decomposition method, variational iteration method, homotopy analysis method
20496 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract:
The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results inapplicable in real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradient (LBP-HOG), with occlusion detection in face images, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight into occlusion detection as a key step to handling expressions in the wild.
Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection
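As a reference for what the LBP building block computes, here is a minimal 8-neighbour LBP for a single 3x3 patch. This is the generic textbook formulation; the paper's LBP-HOG pipeline and occlusion handling are of course more involved.

```python
import numpy as np

def lbp_code(patch):
    """Basic LBP: threshold the 8 neighbours of a 3x3 patch against its
    centre pixel and pack the resulting bits into one byte (0-255)."""
    center = patch[1, 1]
    # neighbours taken clockwise starting at the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(n >= center) << i for i, n in enumerate(neighbours))
```

Sliding this over an image and histogramming the codes per region yields the texture descriptor that the HOG part is then combined with.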
20495 New Hybrid Method to Model Extreme Rainfalls
Authors: Youness Laaroussi, Zine Elabidine Guennoun, Amine Amar
Abstract:
Modeling and forecasting the dynamics of rainfall occurrences constitute a major topic that has been treated at length by statisticians, hydrologists, climatologists and many other groups of scientists. On this issue, we propose in the present paper a new hybrid method that combines Extreme Value and fractal theories. We illustrate the use of our methodology on transformed Emberger Index series constructed from data recorded in Oujda (Morocco). The index is first treated by the Peaks Over Threshold (POT) approach to identify excess observations over an optimal threshold u. In the second step, we consider the resulting excesses as a fractal object embedded in the one-dimensional space of time, and identify the fractal dimension by box counting. We discuss the descriptions of the rainfall data sets under the Generalized Pareto Distribution, as assured by Extreme Value Theory (EVT). We show that, despite the appropriateness of the return periods given by the POT approach, the introduction of the fractal dimension provides accurate interpretation results, which can improve the apprehension of rainfall occurrences.
Keywords: extreme value theory, fractal dimension, peaks over threshold, rainfall occurrences
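The box-counting step can be illustrated on a set of event times scaled to [0, 1]. This is a generic sketch, not the authors' code, and the box sizes below are arbitrary choices.

```python
import numpy as np

def box_counting_dimension(times, box_sizes):
    """Count how many boxes of width eps contain at least one event,
    then fit the slope of log N(eps) against log(1/eps)."""
    times = np.asarray(times, dtype=float)
    counts = [len(np.unique(np.floor(times / eps))) for eps in box_sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)),
                          np.log(counts), 1)
    return slope
```

A set that fills the whole interval yields a dimension near 1, while sparser exceedance times give values between 0 and 1.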
20494 Hybrid Control Strategy for Nine-Level Asymmetrical Cascaded H-Bridge Inverter
Authors: Bachir Belmadani, Rachid Taleb, M’hamed Helaimi
Abstract:
Multilevel inverters are widely used in high-power electronic applications because of their ability to generate very good quality waveforms, their reduced switching frequency, and the low voltage stress across the power devices. This paper presents the hybrid pulse-width modulation (HPWM) strategy for a uniform step asymmetrical cascaded H-bridge nine-level inverter (USACHB9LI). The HPWM approach is compared to the well-known sinusoidal pulse-width modulation (SPWM) strategy. Simulation results demonstrate the better performance and technical advantages of the HPWM controller in feeding a high-power induction motor.
Keywords: uniform step asymmetrical cascaded H-bridge nine-level inverter, hybrid PWM, sinusoidal PWM, high-power induction motor
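For reference, the baseline SPWM comparator can be sketched as a sine reference compared against a unit triangular carrier. This is a generic single-switch sketch; the nine-level hybrid modulator in the paper distributes such comparisons across the asymmetrical H-bridge cells, and the frequencies and modulation index below are illustrative values only.

```python
import numpy as np

def spwm_gate(t, f_ref=50.0, f_car=2000.0, m_a=0.8):
    """Sinusoidal PWM for one switch: the gate is high whenever the
    sine reference exceeds a unit-amplitude triangular carrier."""
    ref = m_a * np.sin(2.0 * np.pi * f_ref * t)
    x = t * f_car
    carrier = 2.0 * np.abs(2.0 * (x - np.floor(x + 0.5))) - 1.0
    return (ref > carrier).astype(int)
```

Averaged over one fundamental period, the gate's duty cycle is 0.5, as expected for a symmetric reference.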
20493 Development and Evaluation of a Psychological Adjustment and Adaptation Status Scale for Breast Cancer Survivors
Authors: Jing Chen, Jun-E Liu, Peng Yue
Abstract:
Objective: The objective of this study was to develop a psychological adjustment and adaptation status scale for breast cancer survivors, and to examine the reliability and validity of the scale. Method: 37 breast cancer survivors were recruited for qualitative research; a five-subject theoretical framework and an item pool of 150 items for the scale were derived from the interview data. In order to evaluate and select items and establish preliminary validity and reliability of the original scale, the suggestions of study group members, experts and breast cancer survivors were taken, and statistical methods were applied step by step in a sample of 457 breast cancer survivors. Results: An original 24-item scale was developed. The five dimensions "domestic affections", "interpersonal relationship", "attitude of life", "health awareness", and "self-control/self-efficacy" explained 58.053% of the total variance. The content validity was assessed by experts; the CVI was 0.92. The construct validity was examined in a sample of 264 breast cancer survivors. The fit indexes of confirmatory factor analysis (CFA) showed good fit of the five-dimension model. The criterion-related validity of the total scale with the PTGI was satisfactory (r=0.564, p<0.001). The internal consistency reliability and test-retest reliability were tested: Cronbach's alpha (0.911) showed good internal consistency, and the intraclass correlation coefficient (ICC=0.925, p<0.001) showed satisfactory test-retest reliability. Conclusions: The scale is brief and easy to understand, and is suitable for breast cancer patients whose physical strength and energy are limited.
Keywords: breast cancer survivors, rehabilitation, psychological adaptation and adjustment, development of scale
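The internal-consistency statistic reported above can be reproduced with a short routine. This is the generic formula for Cronbach's alpha, shown with made-up scores for illustration, not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances / total_variance)
```

Perfectly correlated items give alpha = 1; weakly related items pull it toward 0.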
20492 Influence of Cryo-Grinding on Particle Size Distribution of Proso Millet Bran Fraction
Authors: Maja Benkovic, Dubravka Novotni, Bojana Voucko, Duska Curic, Damir Jezek, Nikolina Cukelj
Abstract:
Cryo-grinding is an ultra-fine grinding method used in the pharmaceutical industry, in the production of herbs and spices, and in the production and handling of cereals, due to its ability to produce powders with small particle sizes that maintain their favorable bioactive profile. The aim of this study was to determine the particle size distributions of the proso millet (Panicum miliaceum) bran fraction ground at cryogenic temperature (using liquid nitrogen (LN₂) cooling, T = -196 °C), in comparison to non-cooled grinding. Proso millet bran is primarily used as an animal feed but has potential in food applications, either as a substrate for the extraction of bioactive compounds or as a raw material in the bakery industry. For both applications, finer particle sizes of the bran could be beneficial. Thus, millet bran was ground for 2, 4, 8 and 12 minutes using a ball mill (CryoMill, Retsch GmbH, Haan, Germany) in three grinding modes: (I) without cooling, (II) at cryo-temperature, and (III) at cryo-temperature with an included 1-minute intermediate cryo-cooling step after every 2 minutes of grinding, which is usually applied when samples require longer grinding times. The sample was placed in a 50 mL stainless steel jar containing one grinding ball (Ø 25 mm). The oscillation frequency in all three modes was 30 Hz. Particle size distributions of the bran were determined by a laser diffraction particle sizing method (Mastersizer 2000) using the Scirocco 2000 dry dispersion unit (Malvern Instruments, Malvern, UK). Three main effects of the grinding set-up were visible from the results. Firstly, grinding time in all three modes had a significant effect on all particle size parameters: d(0.1), d(0.5), d(0.9), D[3,2], D[4,3], span and specific surface area. Longer grinding times resulted in lower values of the above-listed parameters; e.g., the average d(0.5) of the sample (229.57±1.46 µm) dropped to 51.29±1.28 µm after 2 minutes of grinding without LN₂, and further to 43.00±1.33 µm after 4 minutes of grinding without LN₂. The only exception was the sample ground for 12 minutes without cooling, where an increase in particle diameters occurred (d(0.5)=62.85±2.20 µm), probably due to particles adhering to one another and forming larger clusters. Secondly, samples ground with LN₂ cooling exhibited lower diameters in comparison to non-cooled samples. For example, after 8 minutes of non-cooled grinding d(0.5)=46.97±1.05 µm was achieved, while LN₂ cooling enabled the collection of particles with average sizes of d(0.5)=18.57±0.18 µm. Thirdly, the application of the intermediate cryo-cooling step resulted in similar particle diameters (d(0.5)=15.83±0.36 µm, 12 min of grinding) as cryo-milling without this step (d(0.5)=16.33±2.09 µm, 12 min of grinding). This indicates that intermediate cooling is not necessary for the current application, which consequently reduces the consumption of LN₂. These results point out the potentially beneficial effects of milling millet bran at cryo-temperatures. Further research will show whether the lower particle size achieved in comparison to non-cooled grinding could result in increased bioavailability of bioactive compounds, as well as improved protein digestibility and solubility of the dietary fibers of the proso millet bran fraction.
Keywords: ball mill, cryo-milling, particle size distribution, proso millet (Panicum miliaceum) bran
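The particle-size statistics cited above (d(0.1), d(0.5), d(0.9), span, D[4,3], D[3,2]) can be computed from any discrete volume-based size distribution. The sketch below uses illustrative bin values and is not the Mastersizer's internal algorithm.

```python
import numpy as np

def psd_summary(diameters_um, volume_fractions):
    """Percentile diameters from the cumulative volume curve, the span,
    and the moment means (for volume fractions v summing to 1:
    D[4,3] = sum(v*d) and D[3,2] = 1 / sum(v/d))."""
    d = np.asarray(diameters_um, dtype=float)
    v = np.asarray(volume_fractions, dtype=float)
    v = v / v.sum()
    cumulative = np.cumsum(v)
    # note: percentiles below the first cumulative point clamp to the smallest bin
    d10, d50, d90 = np.interp([0.1, 0.5, 0.9], cumulative, d)
    return {"d10": d10, "d50": d50, "d90": d90,
            "span": (d90 - d10) / d50,
            "D43": float(np.sum(v * d)),
            "D32": float(1.0 / np.sum(v / d))}
```

As expected, the surface-weighted mean D[3,2] is always smaller than the volume-weighted mean D[4,3] for a polydisperse sample.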
20491 Vibration Analysis of Stepped Nanoarches with Defects
Authors: Jaan Lellep, Shahid Mubasshar
Abstract:
A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen's non-local theory of elasticity. At the nano level, physical and thermal properties are sensitive to changes in dimensions. The classical theory of elasticity is unable to describe such changes in material properties because, during its development, the consideration of molecular objects was avoided. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state of the body at a given point depends on the stress state at every point of the structure, whereas within the classical theory of elasticity, the stress state depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations with boundary and intermediate conditions. The system of equations is solved by using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose solution exists if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is described with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and presented graphically.
Keywords: crack, nanoarches, natural frequency, step
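The final step, finding the frequencies at which the determinant of the coefficient matrix vanishes, is a scalar root search. As a hedged stand-in for the nanoarch determinant (which is considerably more involved), the sketch below bisects the characteristic function sin(βL) of a classical simply supported beam, whose roots β_n = nπ/L are known in closed form.

```python
import math

def bisect_root(f, a, b, tol=1e-12):
    """Bisection on a bracketed sign change of a characteristic function."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        fm = f(mid)
        if fa * fm <= 0.0:
            b = mid
        else:
            a, fa = mid, fm
    return 0.5 * (a + b)

# first nonzero root of sin(beta * L) for L = 1: beta_1 = pi
beta_1 = bisect_root(lambda beta: math.sin(beta), 1.0, 4.0)
```

The same bracketing-and-bisection pattern applies once the determinant of the full algebraic system is available as a function of frequency.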
20490 A Posteriori Trading-Inspired Model-Free Time Series Segmentation
Authors: Plessen Mogens Graf
Abstract:
Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
Keywords: time series segmentation, model-free, trading-inspired, multivariate data
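The a posteriori optimal trading at the core of the method can be sketched as a small dynamic program over a long/flat position with a linear fee. This is a generic reconstruction of the idea, not the authors' code; channelwise labeling, the reversed-series pass, and the cross-channel consensus are omitted.

```python
def optimal_positions(prices, fee):
    """Dynamic program for a posteriori optimal long/flat trading with a
    linear transaction fee, keeping backpointers so the optimal position
    (0 = cash, 1 = stock) per time step can be recovered; position
    changes mark candidate segment boundaries."""
    n = len(prices)
    cash, hold = 0.0, -prices[0] - fee  # value of each state at t = 0
    back = [(0, 0)]                     # predecessor state of (cash, hold)
    for t in range(1, n):
        new_cash, cash_from = cash, 0
        if hold + prices[t] > new_cash:          # sell at t
            new_cash, cash_from = hold + prices[t], 1
        new_hold, hold_from = hold, 1
        if cash - prices[t] - fee > new_hold:    # buy at t
            new_hold, hold_from = cash - prices[t] - fee, 0
        back.append((cash_from, hold_from))
        cash, hold = new_cash, new_hold
    positions, state = [0] * n, (0 if cash >= hold else 1)
    for t in range(n - 1, -1, -1):
        positions[t] = state
        state = back[t][state]
    return positions
```

A large fee filters out small fluctuations (no trades, hence no boundaries), matching its interpretation above as a noise-filtering hyperparameter.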
20489 Finite Element Molecular Modeling: A Structural Method for Large Deformations
Authors: A. Rezaei, M. Huisman, W. Van Paepegem
Abstract:
Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets, etc. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM) for ease of reference. The SFEMM method improves on the available structural approaches for large deformations, without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond-length, bond-angle, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond-stretch, bond-angle and bond-torsion of carbon atoms. Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds a framework that offers more flexible features than the conventional molecular finite element models, serving structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, and thereby towards concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
Keywords: finite element, large deformation, molecular mechanics, structural method
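The constitutive material model carried by each truss element encodes a bond potential; the simplest example is a harmonic bond-stretch law. The constants below are illustrative placeholders, not values from the paper, and real force fields add angle and torsion terms as described above.

```python
def harmonic_bond_energy(r, r0=0.142, k=652.0):
    """Harmonic bond-stretch potential E = 0.5 * k * (r - r0)^2, the
    simplest constitutive law a truss element can carry for a C-C bond
    (r0 in nm; k in illustrative, internally consistent units)."""
    return 0.5 * k * (r - r0) ** 2
```

The energy vanishes at the equilibrium length and is symmetric about it, which is why the model is independent of initial strains or stresses.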
20488 Kinematical Analysis of Normal Children in Different Age Groups during Gait
Authors: Nawaf Al Khashram, Graham Arnold, Weijie Wang
Abstract:
Background—Gait classification allows clinicians to differentiate gait patterns into clinically important categories that help in clinical decision making. Reliable comparison of gait data between normal subjects and patients requires knowledge of the gait parameters of the specific age group of normal children. However, there is still a lack of gait databases for normal children of different ages. Objectives—The aim of this study is to investigate the kinematics of the lower limb joints during gait for normal children in different age groups. Methods—Fifty-three normal children (34 boys, 19 girls) were recruited for this study. All the children were aged between 5 and 16 years. Three age groups were defined: young child (5-7), child (8-11), and adolescent (12-16). When a participant agreed to take part in the project, their parents signed a consent form. A Vicon® motion capture system was used to collect gait data. Participants were asked to walk at their comfortable speed along a 10-meter walkway. Each participant walked up to 20 trials. Three good trials were analyzed using the Vicon Plug-in-Gait model to obtain gait parameters, e.g., walking speed, cadence and stride length, and joint parameters, e.g., joint angles, forces, moments, etc. Moreover, each gait cycle was divided into 8 phases. The range of motion (ROM) angles of the pelvis, hip, knee, and ankle joints in three planes for both limbs were calculated using an in-house program. Results—The temporal-spatial variables of the three age groups of normal children were compared with each other; it was found that there were significant differences (p < 0.05) between the groups. Step length and walking speed gradually increased from the young child to the adolescent group, while cadence gradually decreased from the young child to the adolescent group. The mean and standard deviation (SD) of the step length of the young child, child and adolescent groups were 0.502 ± 0.067 m, 0.566 ± 0.061 m and 0.672 ± 0.053 m, respectively. The mean and SD of the cadence of the young child, child and adolescent groups were 140.11±15.79 steps/min, 129±11.84 steps/min, and 115.96±6.47 steps/min, respectively. Moreover, it was observed that there were significant differences in the kinematic parameters, both over the whole gait cycle and in each phase. For example, the ROM of the knee angle in the sagittal plane over the whole gait cycle in the young child group (65.03±0.52 deg) is larger than in the child group (63.47±0.47 deg). Conclusion—Our results showed that there are significant differences between the age groups in the gait phases, and thus children's walking performance changes with age. Therefore, it is important for clinicians to consider the age group when analyzing patients with lower limb disorders before any clinical treatment.
Keywords: age group, gait analysis, kinematics, normal children
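The reported temporal-spatial means are mutually consistent under the identity speed = step length x cadence / 60. A quick check, using the group means reported in the abstract as inputs:

```python
def walking_speed(step_length_m, cadence_steps_per_min):
    """Walking speed (m/s) from step length (m) and cadence (steps/min)."""
    return step_length_m * cadence_steps_per_min / 60.0

# group means reported in the abstract
young_child = walking_speed(0.502, 140.11)
adolescent = walking_speed(0.672, 115.96)
```

Despite their lower cadence, adolescents walk faster (about 1.30 m/s versus 1.17 m/s for young children) because the longer step length dominates.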
20487 Comparison of Microbiological Assessment of Non-adhesive Use and the Use of Adhesive on Complete Dentures
Authors: Hyvee Gean Cabuso, Arvin Taruc, Danielle Villanueva, Channela Anais Hipolito, Jia Bianca Alfonso
Abstract:
Introduction: Denture adhesives provide additional retention, support and comfort for patients with loose dentures, as well as for patients who seek to achieve optimal denture adhesion. But due to their growing popularity, arising oral health issues should be considered, including their possible impact on the microbiological condition of the denture. Such changes may further develop into denture-related oral diseases that can affect the day-to-day lives of patients. Purpose: The study aims to assess and compare the microbiological status of dentures without adhesives versus dentures with adhesives applied. The study also intends to identify the presence of specific microorganisms, their colony concentration and their possible effects on the oral microflora, and to educate subjects by introducing an alternative denture cleaning method as well as denture and oral health care. Methodology: Edentulous subjects aged 50-80 years, both physically and medically fit, were selected to participate. Before samples were obtained for the study, the alternative cleaning method was introduced by demonstrating a step-by-step cleaning process. Samples were obtained by swabbing the intaglio surface of the subjects' upper and lower prostheses. These swabs were placed in thioglycollate broth, which served as a transport and enrichment medium, and were then processed through bacterial culture. The colony-forming units (CFUs) were calculated on MacConkey Agar Plates (MAP) and Blood Agar Plates (BAP) in order to identify and assess the microbiological status, including species identification and microbial counting. Result: Upon evaluation and analysis of the collected data, the microbiological assessment of the upper dentures with adhesives showed little to no difference compared to dentures without adhesives. For the lower dentures, however, P=0.005, which is less than α=0.05; the researchers therefore rejected the null hypothesis (H₀) and concluded that there is a significant difference between the mean ranks of the lower dentures without adhesive and those with adhesive, implying a significant decrease in the bacterial count. Conclusion: These findings may indicate that the addition of denture adhesives can contribute to a significant decrease of microbial colonization on the dentures.
Keywords: denture, denture adhesive, denture-related, microbiological assessment
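The colony counts behind such comparisons convert to concentrations via the standard plate-count formula. This is a generic sketch with made-up numbers, not values from the study (which reports mean ranks rather than raw concentrations).

```python
def cfu_per_ml(colonies, dilution, volume_plated_ml):
    """Standard plate count: CFU/mL = colonies / (dilution x volume plated),
    where dilution is the fraction of the original sample remaining
    (e.g. 1e-3 for a 10^-3 serial dilution)."""
    return colonies / (dilution * volume_plated_ml)
```

For example, 150 colonies on a plate inoculated with 0.1 mL of a 10^-3 dilution correspond to roughly 1.5 million CFU/mL in the original swab suspension.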