Search results for: reduced order models
19014 Mapping of Solar Radiation Anomalies Based on Climate Change
Authors: Elison Eduardo Jardim Bierhals, Claudineia Brazil, Francisco Pereira, Elton Rossini
Abstract:
The use of alternative energy sources to meet energy demand reduces environmental damage. To diversify the energy matrix and to minimize global warming, solar energy is gaining ground as an important source of renewable energy, and its potential depends on the climatic conditions of the region. Brazil presents great solar potential for the generation of electric energy, so knowledge of solar radiation and its characteristics is fundamental for the study of energy use. For these reasons, this article aims to verify the climatic variability corresponding to variations in solar radiation anomalies under climate change scenarios. The data used in this research are part of the Coupled Model Intercomparison Project, Phase 5 (CMIP5), which contributed to the preparation of the fifth IPCC assessment report (AR5). The solar radiation data were extracted from the Australian Community Climate and Earth System Simulator (ACCESS) model using the RCP 4.5 and RCP 8.5 scenarios, which represent an intermediate pathway and a pessimistic one, the latter being the most worrisome in all cases. In order to use solar radiation as an energy source in a given location and/or region, it is important first to determine its availability, which justifies the importance of this study. The results for the 75-year period (2026-2100), based on the pessimistic scenario, indicate a drop in solar radiation of approximately 12% in the eastern region of Rio Grande do Sul. The factors driving this pessimistic outlook should be watched closely by the responsible authorities, since they can affect the possibility of producing electricity from solar radiation.
Keywords: climate change, energy, IPCC, solar radiation
Procedia PDF Downloads 196
19013 Allocation of Mobile Units in an Urban Emergency Service System
Authors: Dimitra Alexiou
Abstract:
In an urban area, the placement of emergency service mobile units, such as ambulances and police patrols, must be designed so as to achieve a prompt response to demand locations. In this paper, a given urban network is partitioned into distinct sub-networks such that the vertices in each component are close together and, simultaneously, the population sums of the sub-networks are almost uniform. The objective is to position a mobile emergency unit appropriately in each sub-network in order to reduce the response time to demands. A mathematical model in the framework of graph theory is developed, and a numerical example on a small network is presented to clarify the method.
Keywords: graph partition, emergency service, distances, location
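The abstract does not state the exact objective, but one plausible formulation of a balanced graph-partition problem of this kind (the symbols and the tolerance constraint are editorial assumptions, not the author's model) is: with vertex sets V_1, ..., V_k, populations p(v), and shortest-path distances d(u,v),
\[ \min_{\{V_1,\dots,V_k\}} \; \sum_{i=1}^{k} \sum_{u,v \in V_i} d(u,v) \quad \text{s.t.} \quad \left| \sum_{v \in V_i} p(v) - \sum_{v \in V_j} p(v) \right| \le \varepsilon \ \ \forall\, i,j, \]
i.e., intra-component distances are kept small while population totals are balanced to within a tolerance ε.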
Procedia PDF Downloads 503
19012 Development and Adaptation of a LGBM Machine Learning Model, with a Suitable Concept Drift Detection and Adaptation Technique, for Barcelona Household Electric Load Forecasting During Covid-19 Pandemic Periods (Pre-Pandemic and Strict Lockdown)
Authors: Eric Pla Erra, Mariana Jimenez Martinez
Abstract:
While aggregated loads at a community level tend to be easier to predict, individual household load forecasting presents more challenges, with higher volatility and uncertainty. Furthermore, the drastic changes that our behavior patterns have undergone due to the COVID-19 pandemic have modified our daily electrical consumption curves and, therefore, further complicated the forecasting methods used to predict short-term electric load. Load forecasting is vital for the smooth and optimized planning and operation of our electric grids, but it also plays a crucial role for individual domestic consumers that rely on a HEMS (Home Energy Management System) to optimize their energy usage through self-generation, storage, or smart appliance management. Accurate forecasting leads to higher energy savings and overall energy efficiency of the household when paired with a proper HEMS. In order to study how COVID-19 has affected the accuracy of forecasting methods, an evaluation of the performance of a state-of-the-art LGBM (Light Gradient Boosting Model) will be conducted during the transition between the pre-pandemic and lockdown periods, considering day-ahead electric load forecasting. LGBM improves on standard decision tree models in both speed and memory consumption while still offering high accuracy. Even though LGBM has complex non-linear modelling capabilities, it has proven to be a competitive method under challenging forecasting scenarios such as short series, heterogeneous series, or data patterns with minimal prior knowledge. An adaptation of the LGBM model – called “resilient LGBM” – will also be tested, incorporating a concept drift detection technique for time series analysis, with the purpose of evaluating its capability to improve the model’s accuracy during extreme events such as COVID-19 lockdowns. The results for the LGBM and resilient LGBM will be compared using the standard RMSE (Root Mean Squared Error) as the main performance metric. The models’ performance will be evaluated over a set of real households’ hourly electricity consumption data measured before and during the COVID-19 pandemic. All households are located in the city of Barcelona, Spain, and present different consumption profiles. This study is carried out under the ComMit-20 project, financed by AGAUR (Agència de Gestió d'Ajuts Universitaris), which aims to determine the short- and long-term impacts of the COVID-19 pandemic on building energy consumption, increasing the resilience of electrical systems through the use of tools such as HEMS and artificial intelligence.
Keywords: concept drift, forecasting, home energy management system (HEMS), light gradient boosting model (LGBM)
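The abstract names LGBM and RMSE but gives no implementation; the following is a minimal sketch of a day-ahead LGBM forecast scored with RMSE, assuming an hourly consumption series in a pandas Series with a datetime index. The column names, lag choices, hyperparameters, and train/test split are illustrative assumptions, not the authors' pipeline.

```python
# Minimal day-ahead LGBM forecasting sketch (illustrative; not the authors' pipeline).
import numpy as np
import pandas as pd
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_squared_error

def make_features(load: pd.Series) -> pd.DataFrame:
    """Build simple calendar and lag features for day-ahead forecasting."""
    df = pd.DataFrame({"load": load})
    df["hour"] = load.index.hour
    df["dayofweek"] = load.index.dayofweek
    df["lag_24"] = load.shift(24)    # same hour, previous day
    df["lag_168"] = load.shift(168)  # same hour, previous week
    return df.dropna()

def evaluate(hourly_load: pd.Series, split_date: str) -> float:
    """Train on data before split_date, report RMSE on data from split_date onward."""
    df = make_features(hourly_load)
    train, test = df[df.index < split_date], df[df.index >= split_date]
    features = ["hour", "dayofweek", "lag_24", "lag_168"]
    model = LGBMRegressor(n_estimators=300, learning_rate=0.05)
    model.fit(train[features], train["load"])
    pred = model.predict(test[features])
    return float(np.sqrt(mean_squared_error(test["load"], pred)))

# Comparing pre-pandemic vs. lockdown accuracy then amounts to calling
# evaluate() with different test windows and comparing the resulting RMSEs.
```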
Procedia PDF Downloads 110
19011 Element Distribution and REE Dispersal in Sandstone-Hosted Copper Mineralization within Oligo-Miocene Strata, NE Iran: Insights from Lithostratigraphy and Mineralogy
Authors: Mostafa Feiz, Mohammad Safari, Hossein Hadizadeh
Abstract:
The Chalpo copper area is located in northeastern Iran, within the structural zone of central Iran and the Sabzevar back-arc basin. This sedimentary basin, filled with clastic Oligo-Miocene sediments, is named the Nasr-Chalpo-Sangerd (NCS) basin. The sedimentary layers in this basin originated mainly from Upper Cretaceous ophiolitic rocks and intermediate to mafic post-ophiolitic volcanic rocks, and were deposited above a nonconformity. The mineralized sandstone layers in the Chalpo area include leached zones (with a thickness of 5 to 8 meters) and mineralized lenses with a thickness of 0.5 to 0.7 meters. Ore minerals include primary sulfide minerals, such as chalcocite, chalcopyrite, and pyrite, as well as secondary minerals, such as covellite, digenite, malachite, and azurite, formed in three stages: primary, simultaneous, and supergene. The main factors controlling mineralization in this area are the permeability of the host rocks, the presence of fault zones as conduits for oxidized copper-bearing solutions, and significant amounts of plant fossils, which create a reducing environment for the deposition of the mineralized layers. Mass-change calculations on the copper-bearing layers and the primary sandstone layers indicate that Pb, As, Cd, Te, and Mo are enriched in the mineralized zones, whereas SiO₂, TiO₂, Fe₂O₃, V, Sr, and Ba are depleted. The combination of geological, stratigraphic, and geochemical studies suggests that the source of the copper may have been the underlying red strata, which contained hornblende, plagioclase, biotite, alkali feldspar, and labile minerals. Dehydration and hydrolysis of these minerals during diagenesis caused the leaching of copper and associated elements by circulating fluids, which formed an oxidized hydrothermal solution. Copper and silver in this oxidized solution might have moved upwards through the basin fault zones and been deposited in reducing environments within sandstone layers containing abundant organic matter. Copper in these solutions was probably carried by chloride complexes. The mixing of the oxidized and reduced solutions caused the deposition of Cu and Ag, whereas some elements that are stable in oxidizing environments (e.g., Fe₂O₃, TiO₂, SiO₂, REEs) become unstable under reducing conditions; the copper-bearing sandstones in the study area are therefore depleted in these elements as a result of the leaching process. The results indicate that during the mineralization stage, LREEs and MREEs were depleted, while Cu, Ag, and S were enriched. Based on field evidence, the best model for the precipitation of copper sulfide minerals is the circulation of connate fluids in the red-bed strata, produced by diagenetic processes, encountering reduced facies formed earlier by abundant fossil plant debris in the sandstones.
Keywords: Chalpo, Oligo-Miocene red beds, sandstone-hosted copper mineralization, mass change, LREEs and MREEs
Procedia PDF Downloads 33
19010 Quantitative Seismic Interpretation in the LP3D Concession, Central of the Sirte Basin, Libya
Authors: Tawfig Alghbaili
Abstract:
LP3D Field is located near the center of the Sirt Basin in the Marada Trough, approximately 215 km south of Marsa Al Braga City. The Marada Trough is bounded on the west by a major fault, which forms the edge of the Beda Platform, while on the east, a bounding fault marks the edge of the Zelten Platform. The main reservoir in the LP3D Field is the Upper Paleocene Beda Formation. The Beda Formation is mainly limestone interbedded with shale, and the average reservoir thickness is 117.5 feet. To develop a better understanding of the characterization and distribution of the Beda reservoir, quantitative seismic interpretation was performed and well log data were analyzed. Six reflectors corresponding to the tops of the Beda, Hagfa Shale, Gir, Kheir Shale, Khalifa Shale, and Zelten Formations were picked and mapped. Particular attention was paid to fault interpretation because of the complexity of the faults in the structural area. Different attribute analyses were performed to better understand the lateral extent of the structures and to obtain a clear image of the fault blocks. Time-to-depth conversion was computed using a velocity model generated from check-shot and sonic data. A simplified stratigraphic cross-section was drawn through wells A1, A2, A3, and A4-LP3D, and the distribution and thickness variations of the Beda reservoir across the study area were demonstrated. Petrophysical analysis of the wireline logs was also carried out, and cross-plots of selected petrophysical parameters were generated to evaluate the lithology of the reservoir interval. A structural and stratigraphic framework was built and run to generate fault, facies, and petrophysical models and to calculate the reservoir volumetrics. This study concluded that the depth structure map of the Beda Formation shows the main structure in the study area, which is a north-south faulted anticline. Based on the Beda reservoir models, the base-case volumetrics were calculated, giving a STOIIP of 41 MMSTB and recoverable oil of 10 MMSTB. Seismic attributes confirm the structural trend and build a better understanding of the fault system in the area.
Keywords: LP3D Field, Beda Formation, reservoir models, seismic attributes
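The abstract reports the base-case STOIIP without showing the calculation; for reference, the standard volumetric relation such estimates rest on (generic symbols, not values from the study) is
\[ \mathrm{STOIIP} = \frac{\mathrm{GRV} \times \mathrm{NTG} \times \phi \times (1 - S_w)}{B_o}, \]
where GRV is the gross rock volume, NTG the net-to-gross ratio, φ the porosity, S_w the water saturation, and B_o the oil formation volume factor; a units constant (e.g., 7758 when GRV is in acre-feet and STOIIP in stock-tank barrels) is applied as needed.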
Procedia PDF Downloads 224
19009 Shear Layer Investigation through a High-Load Cascade in Low-Pressure Gas Turbine Conditions
Authors: Mehdi Habibnia Rami, Shidvash Vakilipour, Mohammad H. Sabour, Rouzbeh Riazi, Hossein Hassannia
Abstract:
This paper deals with the steady and unsteady flow behavior of the separation bubble occurring on the rear portion of the suction side of the T106A blade. The first phase was to simulate the steady condition and capture the separation bubble. To accurately predict the separated region, the effects of three different turbulence models and computational grids were investigated separately. The results of the Large Eddy Simulation (LES) model on the finest grid are in acceptably good agreement with the relevant experimental results. The second phase mainly addresses the effect of wake entrance on bubble disappearance in the unsteady situation. In the current simulations, following what was suggested in an experiment, the key issue is to simulate the flow unsteadiness by concentrating on small-scale disturbances instead of simulating a complete oncoming wake. The results of this strategy for applying the wake effects were subsequently compared with two other experimental works and found to be in good agreement. Of the two experiments, one deals with wake-passing unsteady flow, and the other implements experimentally the same approach as the current Computational Fluid Dynamics (CFD) simulation.
Keywords: low-pressure turbine cascade, large-eddy simulation (LES), RANS turbulence models, unsteady flow measurements, flow separation
Procedia PDF Downloads 308
19008 Using Machine Learning to Classify Human Fetal Health and Analyze Feature Importance
Authors: Yash Bingi, Yiqiao Yin
Abstract:
Reduction of child mortality is an ongoing struggle and a commonly used factor in determining progress in the medical field. The number of under-5 deaths is around 5 million worldwide, with many of these deaths being preventable. In light of this issue, cardiotocograms (CTGs) have emerged as a leading tool to determine fetal health. By using ultrasound pulses and reading the responses, CTGs help healthcare professionals assess the overall health of the fetus to determine the risk of child mortality. However, interpreting the results of CTGs is time-consuming and inefficient, especially in underdeveloped areas where an expert obstetrician is hard to come by. Using a support vector machine (SVM) and oversampling, this paper proposes a model that classifies fetal health with an accuracy of 99.59%. To further explain the CTG measurements, an algorithm based on Randomized Input Sampling for Explanation of Black-box Models (RISE) was created, called Feature Alteration for explanation of Black Box models (FAB), and its findings were compared to Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). This allows doctors and medical professionals to classify fetal health with high accuracy and to determine which features were most influential in the process.
Keywords: machine learning, fetal health, gradient boosting, support vector machine, Shapley values, local interpretable model agnostic explanations
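The abstract describes an SVM trained on oversampled CTG data but gives no code; below is a minimal sketch of such a pipeline, assuming a feature matrix X and label vector y from a CTG dataset. The use of SMOTE for oversampling and the specific hyperparameters are assumptions, not the authors' choices.

```python
# Minimal SVM + oversampling sketch for CTG-based fetal health classification
# (illustrative assumptions; not the authors' exact pipeline).
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_fetal_health_svm(X: np.ndarray, y: np.ndarray) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
    # Oversample the minority classes (suspect/pathological) in the training set only.
    X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_bal, y_bal)
    return accuracy_score(y_te, clf.predict(X_te))
```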
Procedia PDF Downloads 146
19007 Use of Hierarchical Temporal Memory Algorithm in Heart Attack Detection
Authors: Tesnim Charrad, Kaouther Nouira, Ahmed Ferchichi
Abstract:
In order to reduce the number of deaths due to heart problems, we propose the use of the Hierarchical Temporal Memory (HTM) algorithm, which is a real-time anomaly detection algorithm. HTM is a cortical learning algorithm, modeled on the neocortex, used for anomaly detection; in other words, it is based on a conceptual theory of how the human brain works. It is powerful for predicting unusual patterns, anomaly detection, and classification. In this paper, HTM has been implemented and tested on ECG datasets in order to detect cardiac anomalies. Experiments showed good performance in terms of specificity, sensitivity, and execution time.
Keywords: cardiac anomalies, ECG, HTM, real-time anomaly detection
Procedia PDF Downloads 236
19006 Analysis of Thermoelectric Coolers as Energy Harvesters for Low Power Embedded Applications
Authors: Yannick Verbelen, Sam De Winne, Niek Blondeel, Ann Peeters, An Braeken, Abdellah Touhafi
Abstract:
The growing popularity of solid-state thermoelectric devices in cooling applications has sparked an increasing diversity of thermoelectric coolers (TECs) on the market, commonly known as “Peltier modules”. They can also be used as generators, converting a temperature difference into electric power, and opportunities are plentiful to use these devices as thermoelectric generators (TEGs) to supply energy to low-power, autonomous embedded electronic applications. Their adoption as energy harvesters in this new domain of usage is hindered by the complex thermoelectric models commonly associated with TEGs. Low-cost TECs for the consumer market lack the parameters required to use these models because they are not intended for this mode of operation, which calls for an alternative method of estimating electric power output under specific operating conditions. The test setup designed in this paper is specifically targeted at benchmarking commercial, off-the-shelf TECs for use as energy harvesters in domestic environments: applications with limited temperature differences and limited available space. The usefulness is demonstrated by testing and comparing single- and multi-stage TECs of different sizes. The effect of a boost converter stage on the thermoelectric end-to-end efficiency is also discussed.
Keywords: thermoelectric cooler, TEC, complementary balanced energy harvesting, step-up converter, DC/DC converter, energy harvesting, thermal harvesting
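For reference, the first-order relations on which such electric power estimations are usually based (standard thermoelectric-generator theory; the parameters below are not values reported in the paper) are
\[ V_{oc} = \alpha\,\Delta T, \qquad P_{max} = \frac{(\alpha\,\Delta T)^2}{4\,R_{int}}, \]
where α is the module's effective Seebeck coefficient, ΔT the temperature difference across it, and R_int its internal electrical resistance; the maximum power is obtained with a matched load R_L = R_int, which is what the boost converter stage mentioned above would ideally approximate.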
Procedia PDF Downloads 267
19005 Spectral Efficiency Improvement in 5G Systems by Polyphase Decomposition
Authors: Wilson Enríquez, Daniel Cardenas
Abstract:
This article proposes a filter bank format combined with the mathematical tool called polyphase decomposition and the discrete Fourier transform (DFT), with the purpose of improving the performance of fifth-generation (5G) communication systems. We started with a review of the literature and a study of filter bank theory and its combination with the DFT, which improves the performance of wireless communications by reducing the computational complexity of these communication systems. With the proposed technique, several experiments were carried out in order to evaluate the structures in 5G systems. Finally, the results are presented in graphical form in terms of bit error rate against the ratio of bit energy to noise power spectral density (BER vs. Eb/No).
Keywords: multi-carrier system (5G), filter bank, polyphase decomposition, FIR equalizer
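The abstract invokes polyphase decomposition without stating it; for reference, the standard M-branch (type-1) decomposition of a prototype filter H(z) that a DFT filter bank builds on is
\[ H(z) = \sum_{k=0}^{M-1} z^{-k} E_k(z^M), \qquad E_k(z) = \sum_{n} h[nM + k]\, z^{-n}. \]
The DFT then combines the M polyphase branches, so an M-channel uniform bank costs roughly one prototype filter plus an M-point FFT rather than M separate filters, which is the complexity reduction referred to in the abstract.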
Procedia PDF Downloads 210
19004 Influence of the Granular Mixture Properties on the Rheological Properties of Concrete: Yield Stress Determination Using Modified Chateau et al. Model
Authors: Rachid Zentar, Mokrane Bala, Pascal Boustingorry
Abstract:
The prediction of the rheological behavior of concrete is at the center of current concerns of the concrete industry for several reasons. The shortage of good-quality standard materials, combined with the variable properties of available materials, makes it necessary to improve existing models so that these variations are taken into account at the concrete design stage. The main reasons for improving the predictive models are, of course, to save time and cost at the design stage and to optimize concrete performance. In this study, we highlight the different properties of the granular mixtures that affect the rheological properties of concrete. Our objective is to identify the intrinsic parameters of the aggregates that make it possible to predict the yield stress of concrete. The work was done using two types of grains: crushed and rolled aggregates. The experimental results have shown that the rheology of concrete is improved by increasing the packing density of the granular mixture using rolled aggregates. The experimental program made it possible to model the yield stress of concrete with a modified Chateau et al. model, through a dimensionless parameter following the Krieger-Dougherty law. The modelling confirms that the yield stress of concrete depends not only on the properties of the cement paste but also on the packing density of the granular skeleton and the shape of the grains.
Keywords: crushed aggregates, intrinsic viscosity, packing density, rolled aggregates, slump, yield stress of concrete
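For reference, the Chateau-Ovarlez-Trung relation from which such modified models usually start expresses the yield stress of a suspension of rigid particles (volume fraction φ, maximum packing fraction φ_m) in a yield-stress paste τ_c(0) as
\[ \frac{\tau_c(\phi)}{\tau_c(0)} = \sqrt{(1-\phi)\left(1-\frac{\phi}{\phi_m}\right)^{-2.5\,\phi_m}}, \]
where the \((1-\phi/\phi_m)^{-2.5\phi_m}\) factor is the Krieger-Dougherty divergence near maximum packing. The exact modified form and fitted parameters used by the authors are not given in the abstract.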
Procedia PDF Downloads 131
19003 An Improved GA to Address Integrated Formulation of Project Scheduling and Material Ordering with Discount Options
Authors: Babak H. Tabrizi, Seyed Farid Ghaderi
Abstract:
Concurrent planning of the resource-constrained project scheduling and material ordering problems has received significant attention in recent decades. Hence, the issue is investigated here with the aim of minimizing total project costs. Furthermore, the presented model considers different discount options in order to approach real-world conditions; the incorporated alternatives consist of all-unit and incremental discount strategies. A modified version of the genetic algorithm is applied in order to solve the model, particularly for larger instances. Finally, the applicability and efficiency of the given model are tested on different numerical instances.
Keywords: genetic algorithm, material ordering, project management, project scheduling
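The abstract does not detail the modified GA, so the following is only a generic genetic-algorithm skeleton of the kind such approaches build on; the chromosome encoding, fitness function (total project cost including discount options), and operators are placeholders, not the authors' problem-specific versions.

```python
# Generic GA skeleton (illustrative only; encoding, fitness, and operators are placeholders).
import random

def genetic_algorithm(init, fitness, crossover, mutate,
                      pop_size=50, generations=200, cx_rate=0.8, mut_rate=0.1):
    """Minimize `fitness` (e.g., total project cost with discount options).

    `init` builds a random chromosome, `crossover(a, b)` returns two children,
    and `mutate(c)` returns a perturbed chromosome.
    """
    population = [init() for _ in range(pop_size)]
    best = min(population, key=fitness)
    for _ in range(generations):
        # Binary tournament selection.
        parents = [min(random.sample(population, 2), key=fitness)
                   for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            c1, c2 = crossover(a, b) if random.random() < cx_rate else (a, b)
            children += [mutate(c1) if random.random() < mut_rate else c1,
                         mutate(c2) if random.random() < mut_rate else c2]
        population = children
        best = min(population + [best], key=fitness)   # elitism on the incumbent
    return best
```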
Procedia PDF Downloads 304
19002 Triggering Supersonic Boundary-Layer Instability by Small-Scale Vortex Shedding
Authors: Guohua Tu, Zhi Fu, Zhiwei Hu, Neil D Sandham, Jianqiang Chen
Abstract:
Tripping of boundary layers from laminar to turbulent flow, which may be necessary in specific practical applications, requires high-amplitude disturbances to be introduced into the boundary layer without large drag penalties. As a possible improvement on fixed trip devices, a technique based on vortex shedding for enhancing supersonic flow transition is demonstrated in the present paper for a Mach 1.5 boundary layer. The compressible Navier-Stokes equations are solved directly using a high-order (fifth-order in space and third-order in time) finite difference method for small-scale cylinders suspended transversely near the wall. For cylinders with a proper diameter and mounting location, asymmetric vortices shed within the boundary layer are capable of tripping laminar-turbulent transition. Full three-dimensional simulations showed that transition was enhanced. A parametric study of the size and mounting location of the cylinder is carried out to identify the most effective setup. It is also found that the vortex shedding can be suppressed by certain factors, such as the wall effect.
Keywords: boundary layer instability, boundary layer transition, vortex shedding, supersonic flows, flow control
Procedia PDF Downloads 371
19001 Effect of Modulation Factors on Tomotherapy Plans and Their Quality Analysis
Authors: Asawari Alok Pawaskar
Abstract:
This study was aimed at investigating the discrepancies observed for helical tomotherapy plans during quality assurance (QA) performed with the IBA matrix detector. A selection of tomotherapy plans that initially failed the matrix-based QA process was chosen for this investigation; these plans failed the fluence analysis as assessed using gamma criteria (3%, 3 mm). Each of these plans was modified (keeping the planning constraints the same), and the beamlets were rebatched and reoptimized. By increasing and decreasing the modulation factor, the fluence in a circumferential plane, as measured with a diode array, was assessed. A subset of these plans was investigated using varied pitch values. The factors examined for each plan were point doses, fluences, leaf opening times, planned leaf sinograms, and uniformity indices. In order to ensure that the treatment constraints remained the same, the dose-volume histograms (DVHs) of all the modulated plans were compared to the original plan. It was observed that a large increase in the modulation factor did not significantly improve DVH uniformity but reduced the gamma analysis pass rate. It also increased the treatment delivery time by slowing down the gantry rotation speed, which in turn increases the ratio of maximum to mean non-zero leaf open time. Increasing and decreasing the pitch value did not substantially change treatment time, but the delivery accuracy was adversely affected. This may be due to many other factors, such as the complexity of the treatment plan and site. Patient sites included in this study were head and neck, breast, and abdomen. The impact of leaf timing inaccuracies on plans was greater with higher modulation factors. Point-dose measurements were seen to be less susceptible to changes in pitch and modulation factors. An initial modulation factor chosen so that the TPS-generated ‘actual’ modulation factor fell within the range of 1.4 to 2.5 resulted in an improved deliverable plan.
Keywords: dose volume histogram, modulation factor, IBA matrix, tomotherapy
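For reference, the (3%, 3 mm) criterion mentioned above is the standard gamma index of Low et al.: a measured point r_m passes when
\[ \gamma(\mathbf{r}_m) = \min_{\mathbf{r}_c} \sqrt{ \frac{|\mathbf{r}_c - \mathbf{r}_m|^2}{\Delta d^2} + \frac{\big(D_c(\mathbf{r}_c) - D_m(\mathbf{r}_m)\big)^2}{\Delta D^2} } \le 1, \]
with Δd = 3 mm the distance-to-agreement criterion and ΔD = 3% the dose-difference criterion; the gamma pass rate is the fraction of measured points with γ ≤ 1.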
Procedia PDF Downloads 183
19000 The Impact of CYP2C9 Gene Polymorphisms on Warfarin Dosing
Authors: Weaam Aldeeban, Majd Aljamali, Lama A. Youssef
Abstract:
Background & Objective: Warfarin is considered a problematic drug due to its narrow therapeutic window and wide inter-individual response variations, which are attributed to demographic, environmental, and genetic factors, particularly single nucleotide polymorphisms (SNPs) in the genes encoding VKORC1 and CYP2C9, which are involved in warfarin's mechanism of action and metabolism, respectively. The CYP2C9*2 (rs1799853) and CYP2C9*3 (rs1057910) alleles are linked to reduced enzyme activity; carriers of either or both alleles are classified as moderate or slow metabolizers and therefore exhibit higher sensitivity to warfarin compared with the wild type (CYP2C9*1*1). Our study aimed to assess the frequency of the *1, *2, and *3 alleles of the CYP2C9 gene in a cohort of Syrian patients receiving a maintenance dose of warfarin for different indications, the impact of genotypes on warfarin dosing, and the frequency of adverse effects (i.e., bleeding). Subjects & Methods: This retrospective cohort study encompassed 94 patients treated with warfarin. Patients’ genotypes were identified by sequencing specific polymerase chain reaction (PCR) products of the gene encoding CYP2C9, and the effects on warfarin therapeutic outcomes were investigated. Results: Sequencing revealed that 43.6% of the study population carries the *2 and/or *3 alleles. The mean weekly maintenance dose of warfarin was 37.42 ± 15.5 mg for patients with the wild-type genotype (CYP2C9*1*1), whereas patients with one or both variants (*2 and/or *3) required a significantly lower dose (28.59 ± 11.58 mg) of warfarin (P = 0.015). A higher percentage (40.7%) of patients with the *2 and/or *3 alleles experienced hemorrhagic events, compared with only 17.9% of patients with the wild type *1*1 (P = 0.04). Conclusions: Our study demonstrates an association between the *2 and *3 genotypes and higher sensitivity to warfarin and a tendency to bleed, which necessitates lowering the dose. These findings emphasize the significance of CYP2C9 genotyping prior to commencing warfarin therapy in order to achieve optimal and faster dose control and to ensure effectiveness and safety.
Keywords: warfarin, CYP2C9, polymorphisms, Syrian, hemorrhage
Procedia PDF Downloads 148
18999 Study into the Interactions of Primary Limbal Epithelial Stem Cells and HTCEPI Using Tissue Engineered Cornea
Authors: Masoud Sakhinia, Sajjad Ahmad
Abstract:
Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimum conditions needed to effectively expand LESCs in vitro whilst preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue-engineered in vitro using limbal and corneal fibroblasts embedded within a 3D collagen matrix. The effects of these two fibroblast types on LESCs and the hTCEpi corneal epithelial cell line were then determined using phase-contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model that could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive, and further quantitative analysis is required to comment on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, whilst also showing a relatively low expression level of P63, a marker of undifferentiated LESCs. Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results in the analysis of the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively illustrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.
Keywords: cornea, limbal stem cells, tissue engineering, PCR
Procedia PDF Downloads 284
18998 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case Study in Lima
Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano
Abstract:
In large urbanized cities of Latin America, motor vehicles have priority over non-motorized vehicles and pedestrians. This creates an important problem that affects people's health and quality of life: the lack of inclusion of pedestrians makes it difficult for them to move smoothly and safely, since the city has been planned for the transit of motor vehicles. Faced with the new trend towards sustainable and economical transport, the city is forced to develop infrastructure in order to incorporate pedestrians and users of non-motorized vehicles into the transport system. The present research studies the influence of non-motorized vehicles on an avenue and the optimization of the traffic light cycle, based on simulation in Synchro software, to improve the flow of non-motorized vehicles. The evaluation is microscopic; for this reason, field data were collected, such as vehicular, pedestrian, and non-motorized vehicle user demand. Speed and travel time values are used to represent the current scenario, which contains the existing problem. These data allow a microsimulation model to be created in Vissim software and later calibrated and validated so that its behavior is similar to reality. The results of this model are compared with the efficiency parameters of the proposed model; these parameters are the queue length, the travel speed, and mainly the travel times of the users at this intersection. The results show a 27% reduction in travel time, that is, an improvement of the proposed model over the current one for this major avenue. The queue length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in the quality of life of users.
Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time
Procedia PDF Downloads 179
18997 Effect of Acid-Basic Treatments of Lignocellulosic Material Forest Wastes Wild Carob on Ethyl Violet Dye Adsorption
Authors: Abdallah Bouguettoucha, Derradji Chebli, Tariq Yahyaoui, Hichem Attout
Abstract:
The effect of acid-basic treatment of a lignocellulosic material (forest waste wild carob) on Ethyl Violet adsorption was investigated. It was found that surface chemistry plays an important role in Ethyl Violet (EV) adsorption. HCl treatment produces more active acidic surface groups, such as carboxylic and lactone groups, resulting in an increase in the adsorption of the EV dye. The adsorption efficiency was higher for the lignocellulosic material treated with HCl than for that treated with KOH: the maximum biosorption capacity at pH 6 was 170 mg/g for the HCl-treated material and 130 mg/g for the KOH-treated material. It was also found that equilibrium is reached in less than 25 min for both treated materials. The adsorption of the basic dye (i.e., Ethyl Violet, or Basic Violet 4) was studied by varying process parameters such as initial concentration, pH, and temperature. The adsorption process is well described by a pseudo-second-order reaction model, showing that boundary layer resistance was not the rate-limiting step, as confirmed by the intraparticle diffusion analysis, since the linear plot of Qt versus t^0.5 did not pass through the origin. In addition, the experimental data were more accurately described by the Sips equation than by the Langmuir and Freundlich isotherms. The values of ΔG° and ΔH° confirmed that the adsorption of EV on acid-basic treated forest waste wild carob was spontaneous and endothermic in nature. The positive values of ΔS° suggested an increase in the randomness at the treated lignocellulosic material-solution interface during the adsorption process.
Keywords: adsorption, isotherm models, thermodynamic parameters, wild carob
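For reference, the models named above have the following standard forms (q_t and q_e are the amounts adsorbed at time t and at equilibrium, C_e the equilibrium concentration; the constants are the usual model parameters, not values fitted in this work):
\[ \frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \quad \text{(pseudo-second-order kinetics)}, \]
\[ q_e = \frac{q_m K_L C_e}{1 + K_L C_e} \ \text{(Langmuir)}, \qquad q_e = K_F C_e^{1/n} \ \text{(Freundlich)}, \qquad q_e = \frac{q_m (K_S C_e)^{m_s}}{1 + (K_S C_e)^{m_s}} \ \text{(Sips)}. \]
The Sips isotherm reduces to the Freundlich form at low concentration and to the Langmuir form when the exponent m_s equals 1, which is why it often fits heterogeneous surfaces better than either limit.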
Procedia PDF Downloads 281
18996 Optimization of the Co-Precipitation of Industrial Waste Metals in a Continuous Reactor System
Authors: Thomas S. Abia II, Citlali Garcia-Saucedo
Abstract:
A continuous copper precipitation treatment (CCPT) system was conceived at Intel Chandler Site to serve as a first-of-kind (FOK) facility-scale waste copper (Cu), nickel (Ni), and manganese (Mn) co-precipitation facility. The process was designed to treat highly variable wastewater discharged from a substrate packaging research factory. The paper discusses metals co-precipitation induced by internal changes for manufacturing facilities that lack the capacity for hardware expansion due to real estate restrictions, aggressive schedules, or budgetary constraints. Herein, operating parameters such as pH and oxidation-reduction potential (ORP) were examined to analyze the ability of the CCPT system to immobilize various waste metals. Additionally, influential factors such as influent concentrations and retention times were investigated to quantify the environmental variability against system performance. A total of 2,027 samples were analyzed and statistically evaluated to measure the performance of CCPT, which was internally retrofitted for Mn abatement to meet environmental regulations. In order to enhance the consistency of the influent, a separate holding tank was cannibalized from another system to collect and slow-feed the segregated Mn wastewater from the factory into CCPT. As a result, the baseline influent Mn decreased from 17.2 ± 18.7 mg/L at pre-pilot to 5.15 ± 8.11 mg/L post-pilot (70.1% reduction). Likewise, the pre-trial and post-trial average influent Cu values to CCPT were 52.0 ± 54.6 mg/L and 33.9 ± 12.7 mg/L, respectively (34.8% reduction). However, the raw Ni content of 0.97 ± 0.39 mg/L at pre-pilot increased to 1.06 ± 0.17 mg/L at post-pilot. The average Mn output declined from 10.9 ± 11.7 mg/L at pre-pilot to 0.44 ± 1.33 mg/L at post-pilot (96.0% reduction) as a result of the pH and ORP operating setpoint changes. In similar fashion, the output Cu quality improved from 1.60 ± 5.38 mg/L to 0.55 ± 1.02 mg/L (65.6% reduction), while the Ni output sustained a 50% enhancement during the pilot study (0.22 ± 0.19 mg/L reduced to 0.11 ± 0.06 mg/L). pH and ORP were shown to be significantly instrumental to the precipitative versatility of the CCPT system.
Keywords: copper, co-precipitation, industrial wastewater treatment, manganese, optimization, pilot study
Procedia PDF Downloads 272
18995 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People
Authors: James Boylan, Denny Meyer, Won Sun Chen
Abstract:
Background: Globally, there is an increasing trend of road traffic deaths, reaching 1.35 million in 2016 in comparison to 1.3 million a decade ago, and overall, road traffic injuries are ranked as the eighth leading cause of death for all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice the rate reported for older drivers aged 25 and above, with a rate of 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. With the Australian government creating the National Telematics Framework, there has been an increase in the government's focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to insert into their vehicles for the duration of the study. The participants' driving behaviour over the course of the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated using a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning, after the first month, and after the second month of the study. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS. Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76, with higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks, a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between the improvements in the DBQ and the trends of reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire
Procedia PDF Downloads 108
18994 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home
Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user's situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As our candidate models, we learn a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that can map the appliance-status data into Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance status are determined stochastically. To compare the performance of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
Keywords: situation-awareness, smart home, IoT, machine learning, classifier
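The abstract names the candidate classifiers but shows no code; a minimal sketch of such a comparison on binary appliance-status features is given below. The feature encoding and cross-validation setup are illustrative assumptions, not the authors' simulation pipeline.

```python
# Minimal comparison of the three candidate classifiers on appliance-status data
# (illustrative; the real study evaluates learning curves on simulated user behaviour).
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def compare_classifiers(X: np.ndarray, y: np.ndarray) -> dict:
    """X: rows of 0/1 appliance on-off states; y: 1 = vacuuming allowed, 0 = not."""
    models = {
        "naive_bayes": BernoulliNB(),
        "decision_tree": DecisionTreeClassifier(max_depth=5),
        "logistic_regression": LogisticRegression(max_iter=1000),
    }
    return {name: cross_val_score(model, X, y, cv=5).mean()
            for name, model in models.items()}
```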
Procedia PDF Downloads 424
18993 Age and Sex Identification among Egyptian Population Using Fingerprint Ridge Density
Authors: Nazih Ramadan, Manal Mohy-Eldine, Amani Hanoon, Alaa Shehab
Abstract:
Background and Aims: The study of fingerprints is widely used in providing a clue regarding identity. Age and gender identification from fingerprints is an important step in forensic anthropology in order to narrow the list of suspects. The aim of this study was to determine finger ridge density and patterns among Egyptians and to estimate age and gender using ridge densities. Materials and Methods: This study was conducted on 177 randomly selected healthy Egyptian subjects (90 males and 87 females). They were divided into three age groups: group (a) from 6 to <12 years, group (b) from 12 to <18 years, and group (c) ≥ 18 years. Bilateral digital prints from every subject were obtained by the inking procedure. The ridge count per 25 mm² was determined, together with an assessment of ridge pattern type. Statistical analysis was done with reference to the different age and sex groups. Results: There was a statistically significant difference in ridge density between the age groups, with younger subjects having significantly higher ridge density than older ones. Females proved to have significantly higher ridge density than males. Also, there was a statistically significant negative correlation between age and ridge density. Ulnar loops were the most frequent pattern among Egyptians, followed by whorls, arches, and radial loops. Finally, different regression models were constructed to estimate age and gender from fingerprint ridge density. Conclusion: Fingerprint ridge density can be used to identify both the age and sex of subjects. Further studies are recommended on different populations, with larger samples, or using different methods of fingerprint recording and finger ridge counting.
Keywords: age, sex identification, Egyptian population, fingerprints, ridge density
Procedia PDF Downloads 375
18992 Hull Detection from Handwritten Digit Image
Authors: Sriraman Kothuri, Komal Teja Mattupalli
Abstract:
In this paper, we propose a novel algorithm for recognizing hulls in handwritten digits. This is an extension of the work on “Digit Recognition Using Freeman Chain Code”. In order to find the hulls in a user-given digit, three steps are followed: pre-processing, boundary extraction, and finally hull detection, applied in a way that attains better results. The detection of hull regions is mainly intended to increase machine learning capability in the detection of characters or digits. This can also be extended to obtain hull regions and their intensities in black holes in space exploration.
Keywords: chain code, machine learning, hull regions, hull recognition system, SASK algorithm
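The paper's three-step pipeline is not spelled out in the abstract, so the following is only a minimal sketch of the first two steps (binarization and Freeman chain-code boundary tracing) on a digit image; the hull-detection step itself and the SASK algorithm are not reproduced here.

```python
# Minimal sketch: binarize a digit image and trace its outer boundary as a
# Freeman chain code. Illustrative only; the paper's hull-detection step and
# the SASK algorithm are not shown.
import numpy as np

# 8-connected Freeman directions: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
MOVES = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def freeman_chain_code(img: np.ndarray, threshold: int = 128) -> list:
    binary = (img < threshold).astype(np.uint8)            # assume dark ink on light paper
    ys, xs = np.nonzero(binary)
    if ys.size == 0:
        return []
    start = (int(ys.min()), int(xs[ys == ys.min()].min())) # top-most, then left-most pixel
    chain, current, backtrack = [], start, 4               # pretend we arrived from the west
    for _ in range(4 * binary.size):                       # safety bound on trace length
        for i in range(8):                                 # Moore-neighbour style search
            d = (backtrack + 1 + i) % 8
            ny, nx = current[0] + MOVES[d][0], current[1] + MOVES[d][1]
            if 0 <= ny < binary.shape[0] and 0 <= nx < binary.shape[1] and binary[ny, nx]:
                chain.append(d)
                current, backtrack = (ny, nx), (d + 4) % 8
                break
        else:
            break                                          # isolated pixel, no boundary
        if current == start:
            break
    return chain
```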
Procedia PDF Downloads 405
18991 Serial Position Curves under Compressively Expanding and Contracting Schedules of Presentation
Authors: Priya Varma, Denis John McKeown
Abstract:
Psychological time, unlike physical time, is believed to be ‘compressive’ in the sense that the mental representations of a series of events may be internally arranged with ever decreasing inter-event spacing (looking back from the most recently encoded event). If this is true, the record within immediate memory of recent events is severely temporally distorted. Although this notion of temporal distortion of the memory record is captured within some theoretical accounts of human forgetting, notably temporal distinctiveness accounts, the way in which the fundamental nature of the distortion underpins memory and forgetting is broadly unrecognised, or at least rarely directly investigated. Our intention here was to manipulate the spacing of items for recall in order to ‘reverse’ this supposed natural compression within the encoding of the items. In Experiment 1, three schedules of presentation (expanding, contracting, and fixed irregular temporal spacing) were created using logarithmic spacing of the words for both free and serial recall conditions. The results of recall of lists of 7 words showed statistically significant benefits of temporal isolation, and, more excitingly, the contracting word series (which we may think of as reversing the natural compression within the mental representation of the word list) showed the best performance. Experiment 2 tested for effects of active verbal rehearsal in the recall task; this reduced but did not remove the benefits of our temporal scheduling manipulation. Finally, a third experiment used the same design but with Chinese characters as memoranda, in a further attempt to subvert possible verbal maintenance of items. One change to the design here was to introduce a probe item following the sequence of items and record response times to this probe. Together, the outcomes of the experiments broadly support the notion of temporal compression within immediate memory.
Keywords: memory, serial position curves, temporal isolation, temporal schedules
Procedia PDF Downloads 222
18990 Impact of Boundary Conditions on the Behavior of Thin-Walled Laminated Column with L-Profile under Uniform Shortening
Authors: Jaroslaw Gawryluk, Andrzej Teter
Abstract:
Simply supported angle columns subjected to uniform shortening are tested. The experimental studies are conducted on a testing machine, additionally using the Aramis system and an acoustic emission system. The laminate samples are subjected to axial uniform shortening. The tested columns are loaded from zero to the maximum load destroying the L-shaped column, which allows the post-buckling behavior of the column to be observed until its collapse. Laboratory tests are performed at a constant crosshead velocity of 1 mm/min. In order to eliminate stress concentrations between the sample and the support, flexible pads are used. The analyzed samples are made of carbon-epoxy laminate using the autoclave method. The laminate layup is [60,0₂,-60₂,60₃,-60₂,0₃,-60₂,0,60₂]T, where the 0° direction is along the length of the profile. The material parameters of the laminate are: Young’s modulus along the fiber direction, 170 GPa; Young’s modulus transverse to the fibers, 7.6 GPa; in-plane shear modulus, 3.52 GPa; in-plane Poisson’s ratio, 0.36. The dimensions of all columns are: length 300 mm, thickness 0.81 mm, flange width 40 mm. Next, two numerical models of the column, with and without flexible pads, are developed using the finite element method in Abaqus software. The L-profile laminate column is modeled using S8R shell elements, and the layup-ply technique is used to define the sequence of the laminate layers. The grips are modeled with R3D4 discrete rigid elements, while the flexible pad consists of C3D20R solid elements. In order to estimate the moment of first laminate layer damage, the following initiation criteria were applied: the maximum stress, Tsai-Hill, Tsai-Wu, Azzi-Tsai-Hill, and Hashin criteria. The best agreement of results was observed for the Hashin criterion. It was found that the use of the pad in the numerical model significantly influences the damage mechanism. The model without pads was characterized by much higher stiffness, as evidenced by greater bifurcation and damage initiation loads for all analyzed criteria, lower shortening, and less deflection at the column center than the model with flexible pads. Acknowledgment: The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).
Keywords: angle column, compression, experiment, FEM
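For reference, the Hashin initiation criterion that gave the best agreement distinguishes fiber and matrix failure modes; in the form commonly implemented for shell laminates, the fiber tension mode (σ₁₁ ≥ 0) and matrix tension mode (σ₂₂ ≥ 0) read
\[ F_f^t = \left(\frac{\sigma_{11}}{X_T}\right)^2 + \alpha\left(\frac{\tau_{12}}{S_L}\right)^2 \ge 1, \qquad F_m^t = \left(\frac{\sigma_{22}}{Y_T}\right)^2 + \left(\frac{\tau_{12}}{S_L}\right)^2 \ge 1, \]
where X_T, Y_T, and S_L are the longitudinal tensile, transverse tensile, and longitudinal shear strengths, and α is a shear-contribution coefficient; the compressive modes have analogous forms. The strength values used by the authors are not given in the abstract.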
Procedia PDF Downloads 210
18989 Enhance Engineering Learning Using Cognitive Simulator
Authors: Lior Davidovitch
Abstract:
Traditional training based on static models and case studies is the backbone of most teaching and training programs in engineering education. However, project management learning is characterized by dynamic models that require new and enhanced learning methods. The results of empirical experiments evaluating the effectiveness and efficiency of using a cognitive simulator as a new training technique are reported. The empirical findings are focused on the impact of keeping and reviewing learning history in a dynamic and interactive simulation environment for engineering education. The cognitive simulator for engineering project management learning had two learning-history-keeping modes, manual (student-controlled) and automatic (simulator-controlled), as well as a version with no history keeping. A group of industrial engineering students performed four simulation runs, divided into three identical simple scenarios and one complicated scenario. The performance of participants running the simulation with the manual history mode was significantly better than that of users running the simulation with the automatic history mode. Moreover, the use of the undo function further enhanced the learning process. The findings indicate an enhancement of engineering students’ learning and decision making when they use the record functionality of the history during their engineering training process. Furthermore, the cognitive simulator, as an educational innovation, improves students' learning and training. The practical implications of using simulators in the field of engineering education are discussed.
Keywords: cognitive simulator, decision making, engineering learning, project management
Procedia PDF Downloads 253
18988 A Comparative Study on the Dimensional Error of 3D CAD Model and SLS RP Model for Reconstruction of Cranial Defect
Authors: L. Siva Rama Krishna, Sriram Venkatesh, M. Sastish Kumar, M. Uma Maheswara Chary
Abstract:
Rapid Prototyping (RP) is a technology that produces models and prototype parts from 3D CAD model data, CT/MRI scan data, and model data created from 3D object digitizing systems. There are several RP processes, such as Stereolithography (SLA), Solid Ground Curing (SGC), Selective Laser Sintering (SLS), Fused Deposition Modelling (FDM), and 3D Printing (3DP); among them, the SLS and FDM processes are used to fabricate patterns of custom cranial implants. RP technology is useful in engineering and biomedical applications. In engineering, it is helpful for product design, tooling, manufacture, etc. Biomedical applications of RP include the design and development of medical devices, instruments, prosthetics, and implants; it is also helpful in planning complex surgical operations. The traditional approach limits the full appreciation of the movements of the various bony structures, and the custom implants it produces therefore make it difficult to measure the anatomy of the parts and to analyse changes in facial appearance accurately. Cranioplasty is the surgical correction of a defect in the cranial bone by implanting a metal or plastic replacement to restore the missing part. This paper presents a comparative study of the dimensional error of the 3D CAD model and the SLS RP model for reconstruction of a cranial defect, by comparing the virtual CAD model with the physical RP model of the defect.
Keywords: rapid prototyping, selective laser sintering, cranial defect, dimensional error
Procedia PDF Downloads 327
18987 Hybrid Thresholding Lifting Dual Tree Complex Wavelet Transform with Wiener Filter for Quality Assurance of Medical Image
Authors: Hilal Naimi, Amelbahahouda Adamou-Mitiche, Lahcene Mitiche
Abstract:
The main problem in the area of medical imaging has been image denoising. The most challenging aspect of image denoising is to preserve data-carrying structures such as surfaces and edges in order to achieve good visual quality. Different algorithms with different denoising performances have been proposed in previous decades. More recently, models based on deep learning have shown great promise to outperform all traditional approaches. However, these techniques are limited by the need for large training sample sizes and high computational costs. This research proposes a denoising approach based on the LDTCWT (Lifting Dual Tree Complex Wavelet Transform) using hybrid thresholding with a Wiener filter to enhance image quality. The LDTCWT is a type of lifting wavelet transform that produces complex coefficients by employing a dual tree of lifting wavelet filters to obtain their real and imaginary parts. This allows the transform to provide approximate shift invariance and directionally selective filters while reducing computation time (properties lacking in the classical wavelet transform). To develop this approach, a hybrid thresholding function is modeled by integrating the Wiener filter into the thresholding function.
Keywords: lifting wavelet transform, image denoising, dual tree complex wavelet transform, wavelet shrinkage, Wiener filter
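The LDTCWT itself is not available in standard libraries, so the sketch below only illustrates the general shape of wavelet-shrinkage denoising combined with a Wiener filter, using an ordinary 2-D DWT from PyWavelets as a stand-in for the lifting dual-tree transform; the authors' hybrid thresholding function is replaced here by plain soft thresholding with the universal threshold.

```python
# Illustrative wavelet-shrinkage + Wiener denoising sketch.
# NOTE: uses an ordinary 2-D DWT as a stand-in for the LDTCWT, and plain soft
# thresholding instead of the paper's hybrid thresholding function.
import numpy as np
import pywt
from scipy.signal import wiener

def denoise(image: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Noise estimate from the finest diagonal subband, then the universal threshold.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(image.size))
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    shrunk = pywt.waverec2(new_coeffs, wavelet)[: image.shape[0], : image.shape[1]]
    return wiener(shrunk, mysize=3)   # final Wiener smoothing pass
```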
Procedia PDF Downloads 167
18986 Effect of Wind Braces to Earthquake Resistance of Steel Structures
Authors: H. Gokdemir
Abstract:
All structures are subject to vertical and lateral loads. Under these loads, structures undergo deformations, and the deformation values of structural elements must not exceed their capacity if structural stability is to be maintained. Lateral loads, in particular, cause critical deformations because of their random directions and magnitudes. Wind load is a lateral load that can act in any direction and with any magnitude. Although wind has nearly no effect on reinforced concrete structures, it must be considered for steel structures, roof systems, and slender structures like minarets. Therefore, every structure must be able to resist wind loads acting parallel and perpendicular to any side. One of the effective methods of resisting lateral loads is assembling crossed steel elements between columns, which is called wind bracing. These cross elements increase the lateral rigidity of a structure and prevent the deformation capacity of the structural system from being exceeded. This means that cross elements are also effective in resisting earthquake loads. In this paper, the effects of wind bracing on the earthquake resistance of structures are studied. Structural models (with and without wind bracing) are generated, and these models are solved under both earthquake and wind loads with different seismic zone parameters. The calculations show that, in low seismic risk zones, wind bracing can easily resist earthquake loads and no additional reinforcement for earthquake loads is necessary. Similarly, in high seismic risk zones, cross elements designed for earthquakes also resist wind loads.
Keywords: wind bracings, earthquake, steel structures, vertical and lateral loads
Procedia PDF Downloads 474
18985 DEM Based Surface Deformation in Jhelum Valley: Insights from River Profile Analysis
Authors: Syed Amer Mahmood, Rao Mansor Ali Khan
Abstract:
This study deals with the remote sensing analysis of tectonic deformation and its implications for understanding the regional uplift conditions in the lower Jhelum and eastern Potwar. Identification and mapping of active structures is an important issue in order to assess seismic hazards and to understand the Quaternary deformation of the region. Digital elevation models (DEMs) provide an opportunity to quantify land surface geometry in terms of elevation and its derivatives. Tectonic movement along faults is often reflected by characteristic geomorphological features such as elevation, stream offsets, slope breaks, and the contributing drainage area. River profile analysis in this region using the SRTM digital elevation model gives information about the tectonic influence on the local drainage network. The steepness and concavity indices have been calculated from the power-law scaling relation under steady-state conditions. An uplift rate map, showing uplift rates in mm/year, is prepared after carefully analysing the local drainage network. The active faults in the region control the local drainage, and the deflection of stream channels is further evidence of recent fault activity. The results show variable relative uplift conditions along the MBT and Riasi faults and represent a wonderful example of the recency of uplift, as well as the influence of active tectonics on the evolution of young orogens.
Keywords: quaternary deformation, SRTM DEM, geomorphometric indices, active tectonics and MBT
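For reference, the power-law scaling relation behind the steepness and concavity indices is the standard slope-area relation for a graded (steady-state) river profile:
\[ S = k_s A^{-\theta}, \]
where S is the local channel slope, A the contributing drainage area, k_s the steepness index, and θ the concavity index. In the detachment-limited stream-power framework at steady state, k_s = (U/K)^{1/n}, i.e., it scales with the ratio of rock uplift rate U to erodibility K, which is why spatial variations in channel steepness are read as relative uplift variations.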
Procedia PDF Downloads 350