Search results for: machine modelling
572 Comparison between the Performances of Different Boring Bars in the Internal Turning of Long Overhangs
Authors: Wallyson Thomas, Zsombor Fulop, Attila Szilagyi
Abstract:
Impact dampers are mainly used in the metal-mechanical industry in operations that generate excessive vibration in the machining system. Internal turning processes become unstable during the machining of deep holes, in which the tool holder is used with long overhangs (high length-to-diameter ratios). Devices coupled with active dampers are expensive and require the use of advanced electronics. On the other hand, passive impact dampers (PID, Particle Impact Dampers) are cheaper alternatives that are easier to adapt to the machine's fixation system, since in this case a cavity filled with particles is simply added to the structure of the tool holder. The cavity dimensions and the diameter of the spheres are pre-determined. Thus, when passive dampers are employed during the machining process, the vibration is transferred from the tip of the tool to the structure of the boring bar, where it is absorbed by the fixation system. This work compares the behaviour of a conventional solid boring bar and a boring bar with a passive impact damper in turning, while using the highest possible L/D (length-to-diameter) ratio of the tool and an Easy Fix fixation system (also called a Split Bushing Holding System). It also aims to optimize the impact absorption parameters, namely the filling percentage of the cavity and the diameter of the spheres. The test specimens were made of hardened material and machined on a Computer Numerical Control (CNC) lathe. The laboratory tests showed that when the cavity of the boring bar is totally filled with minimally spaced spheres of the largest diameter, the gain in absorption allowed obtaining, with an L/D equal to 6, the same surface roughness obtained when using the solid boring bar with an L/D equal to 3.4. The use of the passive particle impact damper therefore resulted in increased static stiffness and reduced deflection of the tool.
Keywords: active damper, fixation system, hardened material, passive damper
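The L/D sensitivity reported above follows directly from cantilever beam theory, in which static tip deflection grows with the cube of the overhang. The sketch below illustrates this scaling; the force, bar diameter and elastic modulus values are illustrative assumptions, not data from the study.

```python
import math

def boring_bar_tip_deflection(force_n, overhang_m, diameter_m, e_pa=210e9):
    """Static tip deflection of a solid boring bar modelled as a cantilever:
    delta = F * L^3 / (3 * E * I), with I = pi * d^4 / 64 for a round bar."""
    inertia = math.pi * diameter_m**4 / 64.0
    return force_n * overhang_m**3 / (3.0 * e_pa * inertia)

# Illustrative comparison: a 25 mm bar at L/D = 3.4 versus L/D = 6.
d = 0.025
for ld in (3.4, 6.0):
    delta = boring_bar_tip_deflection(force_n=100.0, overhang_m=ld * d, diameter_m=d)
    print(f"L/D = {ld}: tip deflection ~ {delta * 1e6:.1f} um")
```

With these numbers, moving from L/D = 3.4 to L/D = 6 increases deflection by a factor of (6/3.4)³ ≈ 5.5, which is why the damper's added absorption matters for long overhangs.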
Procedia PDF Downloads 220
571 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data
Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding
Abstract:
The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets merely to meet functional performance is neither economical nor sustainable in the long term, and it would end up requiring much larger investments by road agencies and extra costs for road users. Performance models have to include both structural and functional predictive capabilities in order to assess the needs, and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. The structural condition is important for the prediction of the remaining life and overall health of a road network, and is also a major influence on the valuation of road pavement. Therefore, the structural deterioration model is a critical input into a pavement management system for predicting pavement rehabilitation needs accurately. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted in this study utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age and equivalent standard axle (ESA). This study developed a simple structural deterioration model which will enable road agencies to incorporate available TSD structural data in pavement management systems for developing network-level pavement investment strategies. Therefore, the available funding can be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provide a simple methodology to use structural data effectively in the investment decision-making process for road agencies managing aging road assets.
Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)
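As a hedged sketch of the regression step described above, the snippet below fits a linear deterioration equation to synthetic placeholder data whose predictors mirror those named in the abstract (rutting, pavement age, seal age, ESA); the D0 response and all coefficients are invented for illustration, not the study's fitted values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic placeholder data standing in for time-series TSD records.
rng = np.random.default_rng(0)
n = 200
rutting = rng.uniform(2, 20, n)        # mm
pavement_age = rng.uniform(0, 40, n)   # years
seal_age = rng.uniform(0, 15, n)       # years
esa = rng.uniform(1e5, 5e6, n)         # equivalent standard axles
X = np.column_stack([rutting, pavement_age, seal_age, np.log10(esa)])

# Hypothetical response: maximum deflection D0 (microns).
d0 = 200 + 8 * rutting + 3 * pavement_age + 5 * seal_age \
     + 40 * np.log10(esa) + rng.normal(0, 25, n)

model = LinearRegression().fit(X, d0)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2:", model.score(X, d0))
```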
Procedia PDF Downloads 151
570 Development of Innovative Nuclear Fuel Pellets Using Additive Manufacturing
Authors: Paul Lemarignier, Olivier Fiquet, Vincent Pateloup
Abstract:
In line with the strong desire of nuclear energy players to have ever more effective products in terms of safety, research programs on E-ATF (Enhanced Accident Tolerant Fuels) that are more resilient, particularly to the loss of coolant, have been launched in all countries with nuclear power plants. Among the multitude of solutions being developed internationally, the French Alternative Energies and Atomic Energy Commission (CEA) and its partners are investigating a promising solution: the realization of CERMET (CERamic-METal) fuel pellets made of a matrix of fissile material, uranium dioxide (UO₂), which has a low thermal conductivity, and a metallic phase with a high thermal conductivity to improve heat evacuation. Work has focused on the development by powder metallurgy of micro-structured CERMETs, characterized by networks of metallic phase embedded in the UO₂ matrix. Other types of macro-structured CERMETs, based on concepts proposed by thermal simulation studies, have been developed with a metallic phase of specific geometry to optimize heat evacuation. This solution could not be produced using traditional processes, so additive manufacturing, which revolutionizes traditional design principles, is used to produce these innovative prototype concepts. At CEA Cadarache, work was first carried out on a non-radioactive surrogate material, alumina, in order to acquire skills and to develop the equipment, in particular the robocasting machine, an additive manufacturing technique selected for its simplicity and for the possibility of optimizing the paste formulations. A manufacturing chain was set up, comprising paste production, 3D printing of pellets, and the associated thermal post-treatment. The work leading to the first macro-structured alumina/molybdenum CERMETs will be presented. This work was carried out with the support of Framatome and EdF.
Keywords: additive manufacturing, alumina, CERMET, molybdenum, nuclear safety
Procedia PDF Downloads 77
569 Innovative Screening Tool Based on Physical Properties of Blood
Authors: Basant Singh Sikarwar, Mukesh Roy, Ayush Goyal, Priya Ranjan
Abstract:
This work combines two bodies of knowledge: the biomedical basis of blood stain formation, and the fluid mechanics community's insight that such stain formation depends heavily on physical properties. Moreover, biomedical research indicates that different patterns in blood stains are robust indicators of the blood donor's health or lack thereof. Based on these valuable insights, an innovative screening tool is proposed which can act as an aid in the diagnosis of diseases such as anemia, hyperlipidaemia, tuberculosis, blood cancer, leukemia, malaria, etc., with enhanced confidence in the proposed analysis. To realize this technique, simple, robust and low-cost micro-fluidic devices, a micro-capillary viscometer and a pendant drop tensiometer, are designed and proposed to be fabricated to measure the viscosity, surface tension and wettability of various blood samples. Once prognosis and diagnosis data have been generated, automated linear and nonlinear classifiers are applied for automated reasoning and presentation of results. A support vector machine (SVM) classifies data in a linear fashion. Discriminant analysis and nonlinear embeddings are coupled with nonlinear manifold detection in the data, and decisions are made accordingly. In this way, physical properties can be used, through linear and non-linear classification techniques, for the screening of various diseases in humans and cattle. Experiments are carried out to validate the physical property measurement devices. This framework can be further developed towards a real-life portable disease screening cum diagnostics tool. Small-scale production of screening cum diagnostic devices is proposed to carry out independent tests.
Keywords: blood, physical properties, diagnostic, nonlinear, classifier, device, surface tension, viscosity, wettability
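A minimal sketch of the linear-classification step described above, assuming hypothetical feature vectors of viscosity, surface tension and contact angle; the class statistics are invented and stand in for measured blood samples, not the study's data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder features: [viscosity (mPa.s), surface tension (mN/m), contact angle (deg)].
rng = np.random.default_rng(1)
healthy = rng.normal([4.0, 55.0, 70.0], [0.5, 2.0, 5.0], (50, 3))
anemic = rng.normal([3.0, 50.0, 60.0], [0.5, 2.0, 5.0], (50, 3))
X = np.vstack([healthy, anemic])
y = np.array([0] * 50 + [1] * 50)  # 0 = healthy, 1 = anemic

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([[3.2, 51.0, 62.0]]))  # classify a new sample
```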
Procedia PDF Downloads 376
568 Development of Earthquake and Typhoon Loss Models for Japan, Specifically Designed for Underwriting and Enterprise Risk Management Cycles
Authors: Nozar Kishi, Babak Kamrani, Filmon Habte
Abstract:
Natural hazards such as earthquakes and tropical storms are very frequent and highly destructive in Japan. Japan experiences, every year on average, more than 10 tropical cyclones that come within damaging reach, and earthquakes of moment magnitude 6 or greater. We have developed stochastic catastrophe models to address the risk associated with the entire suite of damaging events in Japan, for use by insurance, reinsurance, NGOs and governmental institutions. KCC's (Karen Clark and Company) catastrophe models are procedures composed of four modular segments: 1) stochastic event sets that represent the statistics of past events, 2) hazard attenuation functions that model the local intensity, 3) vulnerability functions that address the repair need for local buildings exposed to the hazard, and 4) a financial module addressing policy conditions that estimates the losses incurred as a result. The events module comprises events (faults or tracks) of different intensities with corresponding probabilities, based on the same statistics observed in the historical catalog. The hazard module delivers the hazard intensity (ground motion or wind speed) at the location of each building. The vulnerability module provides a library of damage functions that relate the hazard intensity to repair need as a percentage of the replacement value. The financial module reports the expected loss, given the payoff policies and regulations. We have divided Japan into regions with similar typhoon climatology, and into earthquake micro-zones, within each of which the characteristics of events are similar enough for stochastic modeling. For each region, a set of stochastic events is then developed that results in events with intensities corresponding to annual occurrence probabilities of interest to financial communities, such as 0.01, 0.004, etc. The intensities corresponding to these probabilities (called CEs, Characteristic Events) are selected through a stratified sampling approach based on the primary uncertainty. Region-specific hazard intensity attenuation functions followed by vulnerability models lead to the estimation of repair costs. An extensive economic exposure model addresses all local construction and occupancy types, such as post-and-lintel Shinkabe and Okabe wood construction, as well as steel-confined concrete, SRC (Steel-Reinforced Concrete) and high-rise construction.
Keywords: typhoon, earthquake, Japan, catastrophe modelling, stochastic modeling, stratified sampling, loss model, ERM
Procedia PDF Downloads 269
567 Measuring the Unmeasurable: A Project of High Risk Families Prediction and Management
Authors: Peifang Hsieh
Abstract:
The prevention of child abuse has aroused serious concern in Taiwan because of the disparity between the increasing number of reported child abuse cases, which doubled over the past decade, and the scarcity of social workers. New Taipei City, the most populous city in Taiwan, where over 70% of its 4 million citizens belong to migrant families in which the needs of children can easily be neglected due to insufficient support from relatives and communities, sees urgency for a social support system that preemptively identifies and reaches out to high-risk families of child abuse, so as to offer timely assistance and preventive measures to safeguard the welfare of the children. Big data analysis is the inspiration. As it was clear that high-risk families of child abuse have certain characteristics in common, New Taipei City decided to consolidate detailed background information from the departments of social affairs, education, labor, and health (for example, the status of parents' employment and health, and whether they are imprisoned, fugitives or under substance abuse), and to cross-reference it for accurate and prompt identification of the high-risk families in need. 'The Service Center for High-Risk Families' (SCHF) was established to integrate data cross-departmentally. Utilizing the machine learning 'random forest' method to build a risk prediction model that can detect early those families in which child abuse is very likely to occur, the SCHF marks high-risk families red, yellow, or green to indicate the urgency of intervention, so that the families concerned can be provided with timely services. The accuracy and recall rates of the above model were 80% and 65%. This prediction model can not only improve the child abuse prevention process by helping social workers differentiate the risk level of newly reported cases, which may significantly reduce their major workload, but can also be referenced for future policy-making.
Keywords: child abuse, high-risk families, big data analysis, risk prediction model
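A hedged sketch of the workflow described above: a random forest scores synthetic family records, and a traffic-light banding converts scores to urgency flags. The feature flags, thresholds and data are illustrative assumptions, not the SCHF's actual variables or cut-offs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder family records: [parent_unemployed, parent_health_issue,
# parent_imprisoned, substance_abuse] as 0/1 flags; label = abuse occurrence.
rng = np.random.default_rng(2)
X = rng.integers(0, 2, (500, 4))
y = (X.sum(axis=1) + rng.normal(0, 0.8, 500) > 2).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def risk_band(prob):
    # Hypothetical thresholds for the red / yellow / green urgency flags.
    return "red" if prob >= 0.7 else "yellow" if prob >= 0.4 else "green"

probs = model.predict_proba(X[:5])[:, 1]
print([risk_band(p) for p in probs])
```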
Procedia PDF Downloads 135
566 Uplift Segmentation Approach for Targeting Customers in a Churn Prediction Model
Authors: Shivahari Revathi Venkateswaran
Abstract:
Segmenting customers plays a significant role in churn prediction. It helps the marketing team with proactive and reactive customer retention. For reactive retention, the retention team reaches out to customers who have already shown intent to disconnect by giving them special offers. For proactive retention, the marketing team uses a churn prediction model, which ranks each customer from 1 to 100, with rank 1 indicating the highest risk of churn/disconnection (low ranks have a high propensity to churn). The churn prediction model is built using an XGBoost model. However, with the churn rank alone, the marketing team can only reach out to customers based on their individual ranks. Profiling different groups of customers and framing distinct marketing strategies for targeted groups is not possible with churn ranks. For this, the customers must be grouped into different segments based on their profiles, such as demographics and other non-controllable attributes. This helps the marketing team frame different offer groups for the targeted audience and prevent them from disconnecting (proactive retention). For segmentation, machine learning approaches like k-means clustering will not form unique customer segments in which all customers share the same attributes. This paper presents an alternative approach that finds all combinations of unique segments that can be formed from the user attributes, and then identifies the segments that show uplift (a churn rate higher than the baseline churn rate). For this, search algorithms like fast search and recursive search are used. Further, within each segment, all customers can be targeted using their individual churn ranks from the churn prediction model. Finally, a UI (User Interface) was developed for the marketing team to interactively search for the meaningful segments that are formed, target the right audience for future marketing campaigns, and prevent them from disconnecting.
Keywords: churn prediction modeling, XGBoost model, uplift segments, proactive marketing, search algorithms, retention, k-means clustering
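The sketch below illustrates the uplift-segment idea with a brute-force enumeration over attribute/value combinations; it is a stand-in for the paper's fast and recursive search algorithms, and the tiny customer table is invented for illustration.

```python
from itertools import combinations, product
import pandas as pd

# Placeholder customer table; 'churned' is the observed outcome.
df = pd.DataFrame({
    "region": ["N", "S", "N", "S", "N", "S", "N", "S"],
    "tenure": ["short", "short", "long", "long", "short", "long", "short", "long"],
    "churned": [1, 1, 0, 0, 1, 0, 1, 0],
})
baseline = df["churned"].mean()
attrs = ["region", "tenure"]

# Enumerate every attribute combination and every value combination,
# keeping segments whose churn rate exceeds the baseline (uplift segments).
for r in range(1, len(attrs) + 1):
    for cols in combinations(attrs, r):
        for vals in product(*[df[c].unique() for c in cols]):
            mask = pd.Series(True, index=df.index)
            for c, v in zip(cols, vals):
                mask &= df[c] == v
            seg = df[mask]
            if len(seg) and seg["churned"].mean() > baseline:
                print(dict(zip(cols, vals)), "churn:", seg["churned"].mean())
```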
Procedia PDF Downloads 71
565 Water Quality in Buyuk Menderes Graben, Turkey
Authors: Tugbanur Ozen Balaban, Gultekin Tarcan, Unsal Gemici, Mumtaz Colak, I. Hakki Karamanderesi
Abstract:
Buyuk Menderes Graben is located in Western Anatolia (Turkey). The graben has become the largest industrial and agricultural area, with a total population exceeding 3,000,000. There are two big cities within the study area, from west to east: Aydın and Denizli. The study area is very rich in both cold ground waters and thermal waters. Electricity production using the geothermal potential has become very popular in this area in the last decades. Buyuk Menderes Graben is a tectonically active extensional region undergoing a north-south extensional tectonic regime, which commenced at the latest during the Early-Middle Miocene. The basement of the study area consists of Menderes Massif rocks, made up of high- to low-grade metamorphics, which act as an aquifer for both cold ground waters and thermal waters depending on the location. Neogene terrestrial sediments, mainly composed of alluvial fan deposits, unconformably cover the basement rocks in different facies; they have very low permeability and may locally act as cap rocks for the geothermal systems. The youngest unit is Quaternary alluvium, the shallow regional aquifer, which consists of Holocene alluvial deposits in the study area. All the waters are of meteoric origin and reflect shallow or deep circulation according to their ¹⁸O, ²H and ³H contents. Meteoric waters move to deep zones through the fractured system and rise to the surface along the faults. Water samples (drilling well, spring and surface waters) and local seawater were collected between 2010 and 2012. Geochemical modelling calculated the distribution of aqueous species and exchange processes using the PHREEQCi speciation code. Geochemical analyses show that cold ground water types evolve from Ca-Mg-HCO₃ to Na-Cl-SO₄, and geothermal aquifer waters reflect the Na-Cl-HCO₃ water type in Aydın. Water types of Denizli are Ca-Mg-HCO₃ and Ca-Mg-HCO₃-SO₄. Thermal water types generally reflect Na-HCO₃-SO₄. The B versus Cl ratios increase from east to west with the proportion of seawater introduced into the fresh water aquifers and geothermal reservoirs. Concentrations of some elements (As, B, Fe and Ni) are higher than the tolerance limits of the drinking water standard of Turkey (TS 266) and international drinking water standards (WHO, FAO, etc.).
Keywords: Buyuk Menderes, isotope chemistry, geochemical modelling, water quality
Procedia PDF Downloads 536
564 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice
Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha
Abstract:
Due to increasing water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are widely used to determine the CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed-Forward Back-Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM when predicting the CWSI of the rice crop.
Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability
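The comparison metrics named above are easy to make concrete; the sketch below implements NSE, Willmott's index of agreement d, and RMSE, applied to a handful of made-up observed/predicted CWSI pairs (the numbers are illustrative, not the study's).

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def index_of_agreement(obs, sim):
    """Willmott's index of agreement d."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    denom = np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    return 1 - np.sum((obs - sim) ** 2) / denom

def rmse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

# Hypothetical empirically computed CWSI vs. network predictions.
cwsi_obs = [0.21, 0.35, 0.50, 0.42, 0.60]
cwsi_ann = [0.24, 0.33, 0.52, 0.40, 0.57]
print(nse(cwsi_obs, cwsi_ann), index_of_agreement(cwsi_obs, cwsi_ann), rmse(cwsi_obs, cwsi_ann))
```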
Procedia PDF Downloads 117
563 Impacts of Climate Change and Natural Gas Operations on the Hydrology of Northeastern BC, Canada: Quantifying the Water Budget for Coles Lake
Authors: Sina Abadzadesahraei, Stephen Déry, John Rex
Abstract:
Climate research has repeatedly identified strong associations between anthropogenic emissions of 'greenhouse gases' and observed increases of global mean surface air temperature over the past century. Studies have also demonstrated that the degree of warming varies regionally. Canada is not exempt from this situation, and evidence is mounting that climate change is beginning to cause diverse impacts in both environmental and socio-economic spheres of interest. For example, northeastern British Columbia (BC), whose climate is controlled by a combination of maritime, continental and arctic influences, is warming at a greater rate than the remainder of the province. There are indications that these changing conditions are already leading to shifting patterns in the region's hydrological cycle, and thus in its available water resources. Coincident with these changes, northeastern BC is undergoing rapid development for oil and gas extraction. This depends largely on subsurface hydraulic fracturing ('fracking'), which uses enormous amounts of freshwater. While this industrial activity has made substantial contributions to regional and provincial economies, it is important to ensure that sufficient and sustainable water supplies are available for all those dependent on the resource, including ecological systems. This in turn demands a comprehensive understanding of how water in all its forms interacts with landscapes and the atmosphere, and of the potential impacts of changing climatic conditions on these processes. The aim of this study is therefore to characterize and quantify all components of the water budget in the small watershed of Coles Lake (141.8 km², 100 km north of Fort Nelson, BC), through a combination of field observations and numerical modelling. Baseline information will aid the assessment of the sustainability of current and future plans for freshwater extraction by the oil and gas industry, and will help to maintain the precarious balance between economic and environmental well-being. This project is a clear example of interdisciplinary research, in that it not only examines the hydrology of the region but also investigates how natural gas operations and growth can affect water resources. Therefore, a fruitful collaboration between academia, government and industry has been established to fulfill the objectives of this research in a meaningful manner. This project aims to provide numerous benefits to BC communities. Further, the outcomes and detailed information of this research can be a major asset to researchers examining the effect of climate change on water resources worldwide.
Keywords: northeastern British Columbia, water resources, climate change, oil and gas extraction
Procedia PDF Downloads 264
562 Reactivation of Hydrated Cement and Recycled Concrete Powder by Thermal Treatment for Partial Replacement of Virgin Cement
Authors: Gustave Semugaza, Anne Zora Gierth, Tommy Mielke, Marianela Escobar Castillo, Nat Doru C. Lupascu
Abstract:
The generation of Construction and Demolition Waste (CDW) has increased enormously worldwide due to the growing need for construction, renovation, and demolition of building structures. Several studies have investigated the use of CDW materials in the production of new concrete and reported lower mechanical properties for the resulting concrete. Many other researchers have considered the possibility of using Hydrated Cement Powder (HCP) to replace part of the Ordinary Portland Cement (OPC), but only very few have investigated the use of Recycled Concrete Powder (RCP) from CDW. The partial replacement of OPC in making new concrete aims to decrease the CO₂ emissions associated with OPC production. However, RCP and HCP need treatment to produce new concrete of the required mechanical properties. Thermal treatment has proven to improve HCP properties before use. Previous research has stated that for using HCP in concrete, optimum results are achievable by heating HCP between 400°C and 800°C. The optimum heating temperature depends on the type of cement used to make the Hydrated Cement Specimens (HCS), the crushing and heating method of the HCP, and the curing method of the Rehydrated Cement Specimens (RCS). This research assessed the quality of the recycled materials using techniques such as X-ray Diffraction (XRD), Differential Scanning Calorimetry (DSC) and Thermogravimetry (TG), Scanning Electron Microscopy (SEM), and X-ray Fluorescence (XRF). The recycled materials were thermally pretreated at different temperatures from 200°C to 1000°C. Additionally, the research investigated to what extent the thermally treated recycled cement could partially replace OPC and whether the new concrete produced would achieve the required mechanical properties. The mechanical properties were evaluated on the RCS, obtained by mixing the Dehydrated Cement Powder and Dehydrated Recycled Powder (DCP and DRP) with water (w/c = 0.6 and w/c = 0.45). A compression testing machine was used for compressive strength testing, and a three-point bending test was used to assess the flexural strength.
Keywords: hydrated cement powder, dehydrated cement powder, recycled concrete powder, thermal treatment, reactivation, mechanical performance
Procedia PDF Downloads 153
561 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or the survivability) of the AGCV against an enemy's attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would cause damage to internal components or crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option to calculate the penetration depth. However, it is very important to model the targets and select the input parameters appropriately in order to obtain an accurate penetration depth. This paper performed a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties and target diameter, were tested and selected to minimize the error between the calculated simulation results and the experimental data from papers on the penetration equation. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted to obtain optimized overall performance. As a result of the analysis, the following was found: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than that with both the side and rear surfaces fixed. Using the above findings, the input parameters can be tuned to minimize the error between simulation and experiments. By using the simulation tool ANSYS with delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data from penetration experiments are usually hard to obtain for security reasons, and only published papers provide them, for a limited range of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating between the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early modelling and simulation stage of the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
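The accuracy criterion above is an RMS error between simulated and experimental penetration depths, evaluated across parameter settings. The sketch below shows that bookkeeping for a hypothetical mesh-size sweep; all depth values are invented for illustration, not results from the paper.

```python
import numpy as np

def rms_error(simulated, experimental):
    """RMS error between simulated and experimental penetration depths."""
    s, e = np.asarray(simulated), np.asarray(experimental)
    return float(np.sqrt(np.mean((s - e) ** 2)))

# Hypothetical penetration depths (mm) from simulation runs at three mesh
# sizes, compared against published experimental values.
experimental = np.array([12.1, 15.4, 18.9])
runs = {0.9: [11.0, 14.1, 17.2], 0.7: [11.6, 14.9, 18.1], 0.5: [12.0, 15.2, 18.6]}

for mesh, depths in runs.items():
    print(f"mesh {mesh} mm: RMS error = {rms_error(depths, experimental):.2f} mm")
best = min(runs, key=lambda mesh: rms_error(runs[mesh], experimental))
print("best mesh size:", best)
```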
Procedia PDF Downloads 401
560 Evaluation of the Impact of Telematics Use on Young Drivers' Driving Behaviour: A Naturalistic Driving Study
Authors: WonSun Chen, James Boylan, Erwin Muharemovic, Denny Meyer
Abstract:
In Australia, drivers aged between 18 and 24 have remained at high risk of road fatality over the last decade. Despite the successful implementation of the Graduated Licensing System (GLS) that supports young drivers in the early phases of driving, the road fatality statistics for these drivers remain high. In response to these statistics, studies conducted in Australia prior to the start of the COVID-19 pandemic demonstrated the benefits of using telematics devices for improving driving behaviour. However, the impact of COVID-19 lockdowns on young drivers' driving behaviour has emerged as a global concern. Therefore, this naturalistic study aimed to evaluate and compare the driving behaviour (such as acceleration, braking, speeding, etc.) of young drivers with the adoption of in-vehicle telematics devices. Forty-two drivers aged between 18 and 30 and residing in the Australian state of Victoria participated in this study during the period of May to October 2022. All participants drove with the telematics devices during the first 30-day period. At the start of the second 30-day period, twenty-one participants were randomised to an intervention group in which they were provided with an additional telematics ray device that gave visual feedback to the drivers, especially when they committed aggressive driving behaviour. The remaining twenty-one participants continued their driving journeys without the extra telematics ray device (control group). The collected data enabled the assessment of changes in the driving behaviour of these young drivers using a machine learning approach in Python. Results are expected to show that participants from the intervention group improve their driving behaviour compared to those from the control group. Furthermore, the telematics data enable the assessment and quantification of such improvements in driving behaviour. The findings from this study are anticipated to help guide the development of customised campaigns and interventions to further address the high road fatality rate among young drivers in Australia.
Keywords: driving behaviour, naturalistic study, telematics data, young drivers
Procedia PDF Downloads 124
559 Development and Application of an Intelligent Masonry Modulation in BIM Tools: Literature Review
Authors: Sara A. Ben Lashihar
Abstract:
Heritage building information modelling (HBIM) of historical masonry buildings has expanded lately to meet urgent needs for conservation and structural analysis. Masonry structures are unique features of ancient building architecture worldwide and have special cultural, spiritual, and historical significance. However, there is a research gap regarding the reliability of the HBIM modelling process for these structures. The HBIM modelling process of masonry structures faces significant challenges due to the inherent complexity and uniqueness of their structural systems. Most of these processes are based on tracing point clouds and rarely follow documents, archival records, or direct observation. The results of these techniques are highly abstracted models whose accuracy does not exceed LOD 200. Masonry assemblages, especially curved elements such as arches, vaults, and domes, are generally modelled with standard BIM components or in-place models, and the brick textures are input graphically. Hence, future investigation is necessary to establish a methodology for automatically generating parametric masonry components. Such components would be developed algorithmically according to mathematical and geometric accuracy and the validity of the survey data. The main aim of this paper is to provide a comprehensive review of the state of the art of existing research on HBIM modelling of masonry structural elements, and of the latest approaches to achieving parametric models that have both visual fidelity and high geometric accuracy. The paper reviewed more than 800 articles, proceedings papers, and book chapters focused on the "HBIM and Masonry" keywords from 2017 to 2021. The studies were downloaded from well-known, trusted bibliographic databases such as Web of Science, Scopus, Dimensions, and Lens. As a starting point, a scientometric analysis was carried out using the VOSviewer software. This software extracts the main keywords in these studies to retrieve the relevant works; it also calculates the strength of the relationships between these keywords. Subsequently, an in-depth qualitative review followed for the studies with the highest frequency of occurrence and the strongest links with the topic, according to the VOSviewer results. The qualitative review focused on the latest approaches and the future directions proposed in this research. The findings of this paper can serve as a valuable reference for researchers and BIM specialists seeking to make more accurate and reliable HBIM models of historic masonry buildings.
Keywords: HBIM, masonry, structure, modeling, automatic, approach, parametric
Procedia PDF Downloads 165
558 Formulation Development, Process Optimization and Comparative Study of Poorly Compressible Drugs Ibuprofen, Acetaminophen Using Direct Compression and Top Spray Granulation Technique
Authors: Abhishek Pandey
Abstract:
Ibuprofen and Acetaminophen are widely used as prescription and non-prescription medicines. Ibuprofen is mainly used in the treatment of mild to moderate pain related to headache, migraine and postoperative conditions, and in the management of spondylitis, osteoarthritis and rheumatoid arthritis. Acetaminophen is used as an analgesic and antipyretic drug. Ibuprofen has a high tendency to stick to the punches of a tablet press, while Acetaminophen is not ordinarily compressible into a tablet formulation because Acetaminophen crystals are very hard and brittle and fracture easily when compressed, producing capping and lamination tablet defects; therefore, a wet granulation method is used to make them compressible. The aim of this study was to prepare Ibuprofen and Acetaminophen tablets by direct compression and by a top spray granulation technique. In this investigation, tablets were prepared using directly compressible grade excipients: dibasic calcium phosphate, anhydrous lactose (DCL21) and microcrystalline cellulose (Avicel PH 101). In order to obtain the best, optimized formulation, nine different formulations were generated, among which batches F7, F8 and F9 showed good results within the acceptable limits. Formulation F7 was selected as the optimized product on the basis of the dissolution study. Further, directly compressible granules of both drugs were prepared using a top spray granulation technique in a fluidized bed processor and compressed. In order to obtain the best product, process optimization was carried out by performing four trials in which various parameters like inlet air temperature, spray rate, peristaltic pump rpm, % LOD, granule properties, blending time and hardness were optimized. Batch T3 was identified as the optimized batch on the basis of physical and chemical evaluation. Finally, the formulations prepared by both techniques were compared.
Keywords: direct compression, top spray granulation, process optimization, blending time
Procedia PDF Downloads 363
557 Arabic Light Word Analyser: Roles with Deep Learning Approach
Authors: Mohammed Abu Shquier
Abstract:
This paper introduces a word segmentation method using a novel BP-LSTM-CRF architecture for processing semantic output training. The objective of web morphological analysis tools is to link a formal morpho-syntactic description to a lemma, along with morpho-syntactic information, a vocalized form, a vocalized analysis with morpho-syntactic information, and a list of paradigms. A key objective is to continuously enhance the proposed system through an inductive learning approach that considers semantic influences. The system is currently under construction and development based on data-driven learning. To evaluate the tool, an experiment on homograph analysis was conducted. The tool also encompasses the assumption of deep binary segmentation hypotheses, the arbitrary choice of trigram or n-gram continuation probabilities, language limitations, and morphology for both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), which together provide the justification for updating this system. Most Arabic word analysis systems are based on the phonotactic morpho-syntactic analysis of a word transmitted using lexical rules, which are mainly used in MENA language technology tools, without taking into account contextual or semantic morphological implications. Therefore, it is necessary to have an automatic analysis tool that takes into account the word sense and not only the morpho-syntactic category. Moreover, such systems are also based on statistical/stochastic models. These stochastic models, such as HMMs, have shown their effectiveness in different NLP applications: part-of-speech tagging, machine translation, speech recognition, etc. As an extension, we focus on language modeling using Recurrent Neural Networks (RNN); given that morphological analysis coverage is very low for dialectal Arabic, it is important to investigate deeply how dialect data influence the accuracy of these approaches by developing dialectal morphological processing tools, showing that handling dialectal variability can help improve analysis.
Keywords: NLP, DL, ML, analyser, MSA, RNN, CNN
Procedia PDF Downloads 42
556 Modeling of the Dynamic Characteristics of a Spindle with Experimental Validation
Authors: Jhe-Hao Huang, Kun-Da Wu, Wei-Cheng Shih, Jui-Pin Hung
Abstract:
This study presents an investigation of the dynamic characteristics of a spindle tool system using experimental and finite element modeling approaches. As is well known, machining stability is greatly determined by the dynamic characteristics of the spindle tool system. Therefore, understanding the factors affecting the dynamic behavior of a spindle tooling system is a prerequisite for controlling the final machining performance of a machine tool system. For this purpose, a physical spindle unit was employed to assess the dynamic characteristics by vibration tests. Then, a three-dimensional finite element model of a high-speed spindle system integrated with a tool holder was created to simulate the dynamic behavior. To model the angular contact bearings, a series of spring elements were introduced between the inner and outer rings. The spring constant can be represented by the contact stiffness of the rolling bearing based on Hertz theory. The interface characteristics between the spindle nose and the tool holder taper can be quantified from the comparison of the measurements and predictions. According to the results obtained from experiments and finite element predictions, the vibration behavior of the spindle is dominated by the bending deformation of the spindle shaft in different modes, which is further determined by the stiffness of the bearings in the spindle housing. Also, the spindle unit with the tool holder shows a different dynamic behavior from that of the spindle without the tool holder. This indicates that the interface property between the tool holder and spindle nose plays a dominant role in the dynamic characteristics of the spindle tool system. Overall, the dynamic behaviors of the spindle with and without the tool holder can be successfully investigated through the finite element model proposed in this study. The prediction accuracy is determined by the modeling of the rolling interface of the ball bearings in the spindle and the interface characteristics between the tool holder and spindle nose. Besides, identification of the interface characteristics of a ball bearing and the spindle tool holder is important for the refinement of the spindle tooling system to achieve optimum machining performance.
Keywords: contact stiffness, dynamic characteristics, spindle, tool holder interface
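To make the bearing-as-spring idea above concrete, here is a toy two-degree-of-freedom lumped model: the spindle shaft is supported by Hertzian bearing springs and coupled to the tool holder through an interface stiffness. All masses and stiffnesses are invented illustration values, not the paper's identified parameters.

```python
import numpy as np

# Toy lumped model: spindle shaft mass m supported by two angular-contact
# bearings of Hertzian contact stiffness (in parallel), plus a tool-holder
# interface stiffness k_i coupling a tool-holder mass m_t.
m, m_t = 8.0, 1.5            # kg (assumed)
k_b, k_i = 2 * 2.5e8, 5e7    # N/m: two bearings in parallel; interface

# Two-DOF stiffness and mass matrices, DOFs = [shaft, tool holder].
K = np.array([[k_b + k_i, -k_i],
              [-k_i, k_i]])
M = np.diag([m, m_t])

# Generalized eigenvalue problem K x = w^2 M x gives the natural frequencies.
w2 = np.linalg.eigvals(np.linalg.inv(M) @ K)
freqs_hz = np.sqrt(np.sort(w2.real)) / (2 * np.pi)
print("natural frequencies ~", freqs_hz, "Hz")
```

Softening k_i in this toy model shifts the modes noticeably, which mirrors the paper's finding that the tool holder interface dominates the assembled system's dynamics.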
Procedia PDF Downloads 298
555 Optimum Turbomachine Preliminary Selection for Power Regeneration in Vapor Compression Cool Production Plants
Authors: Sayyed Benyamin Alavi, Giovanni Cerri, Leila Chennaoui, Ambra Giovannelli, Stefano Mazzoni
Abstract:
Sustainability concerns over primary energy consumption and the emission of pollutants (including CO₂) call for methodologies that lower the power absorbed per unit of a given product. Cool production plants based on vapour compression are widely used for many applications: air conditioning, food conservation, domestic refrigerators and freezers, special industrial processes, etc. In the field of cool production, the amount of yearly consumed primary energy is enormous; thus, saving some percentage of it has a large worldwide impact on energy consumption and the related energy sustainability. Among the various techniques for reducing the power required by a Vapour Compression Cool Production Plant (VCCPP), the technique based on power regeneration by means of an Internal Direct Cycle (IDC) is considered in this paper. Power produced by the IDC reduces the power needed per unit of cooling power produced by the VCCPP. The paper contains the basic concepts that lead to the development of IDCs and the proposed options for using the IDC power. Among the various turbomachine options, Best Economically Available Technologies (BEATs) based on vehicle engine turbochargers have been explored for this application. According to the BEAT database and similarity rules, the best turbomachine selection leads to the minimum nominal power required by the VCCPP main compressor. Results obtained by installing the prototype in an ad hoc designed test bench will be discussed and compared with the expected performance. Forecasts for upgrading VCCPPs in various applications will be given and discussed. Savings of 4-6% are expected for air conditioning cooling plants and 15-22% for cryogenic plants.
Keywords: refrigeration plant, vapour pressure amplifier, compressor, expander, turbine, turbomachinery selection, power saving
Procedia PDF Downloads 426
554 Hydrogeophysical Investigations and Mapping of Ingress Channels along the Blesbokspruit Stream in the East Rand Basin of the Witwatersrand, South Africa
Authors: Melvin Sethobya, Sithule Xanga, Sechaba Lenong, Lunga Nolakana, Gbenga Adesola
Abstract:
Mining has been the cornerstone of the South African economy for the last century. Most of the gold mining in South Africa was conducted within the Witwatersrand basin, which contributed to the rapid growth of the city of Johannesburg and propelled the city to become the business and wealth capital of the country. But with the gradual depletion of resources, the stoppage of underground water extraction from mines, and other factors threatening the survival of mining operations over a lengthy period, most of the mines were abandoned and left to pollute the local waterways and groundwater with toxins and heavy metal residues, and increased acid mine drainage ensued. The Department of Mineral Resources and Energy commissioned a project whose aim is to monitor, maintain, and mitigate the adverse environmental impacts of polluted mine water flowing into local streams, affecting local ecosystems and livelihoods downstream. As part of the mitigation efforts, the diagnosis and monitoring of sites polluted by groundwater or surface water has become important. Geophysical surveys, in particular resistivity and magnetic surveys, were selected as some of the most suitable techniques for investigating local ingress points along one of the major streams cutting through the Witwatersrand basin, namely the Blesbokspruit, found in the eastern part of the basin. The aim of the surveys was to provide information that could assist in determining possible water loss/ingress from the Blesbokspruit stream. Modelling of the geophysical survey results offered in-depth insight into the interaction and pathways of polluted water through the mapping of possible ingress channels near the Blesbokspruit. The resistivity-depth profile of the surveyed site exhibits a three-layered model: a low-resistivity overburden (10 to 200 Ω·m), underlain by a moderately resistive weathered layer (>300 Ω·m), which sits on a more resistive crystalline bedrock (>500 Ω·m). Two potential ingress channel locations were mapped across the two traverses at the site. The magnetic survey conducted at the site mapped a major NE-SW trending regional lineament with a strong magnetic signature, modelled to depths beyond 100 m, with the potential to act as a conduit for the dispersion of stream water away from the stream, as it shares a similar orientation with the potential ingress channels mapped using the resistivity method.
Keywords: electrical resistivity, magnetic survey, Blesbokspruit, ingress
Procedia PDF Downloads 63
553 A Personality-Based Behavioral Analysis on eSports
Authors: Halkiopoulos Constantinos, Gkintoni Evgenia, Koutsopoulou Ioanna, Antonopoulou Hera
Abstract:
E-sports and e-gaming have emerged in recent years as internet use has become universal, and e-gamers are the new reality in our homes. The excessive involvement of young adults with e-sports has already been revealed, and the adverse consequences have been reported in studies over the past few years, but the issue has not yet been fully studied. The present research was conducted in Greece and studies the psychological profile of video game players, providing information on the personality traits, habits and emotional status that affect online gamers' behaviors, in order to help professionals and policy makers address the problem. Three standardized self-report questionnaires were administered to participants, who were young male and female adults aged 19-26. The Profile of Mood States (POMS) scale was used to evaluate people's perceptions of their everyday mood; the personality features that trace back to people's habits and anticipated reactions were measured by the Eysenck Personality Questionnaire (EPQ); and the Trait Emotional Intelligence Questionnaire (TEIQue) was used to measure which cognitive parameters (gamers' beliefs) and emotional parameters (gamers' emotional abilities) mainly affected and predicted gamers' behaviors, leisure-time activities and gaming behaviors. Data mining techniques were used to analyze the data, applying machine learning algorithms included in the software package R. The research findings attempt to establish the influence of personality traits, emotional status and emotional intelligence on, and their correlation with, e-sports and gamers' behaviors, and to help policy makers and stakeholders take action, shape social policy and prevent the adverse consequences for young adults. The need for further research, prevention and treatment strategies is also addressed.
Keywords: e-sports, e-gamers, personality traits, POMS, emotional intelligence, data mining, R
Procedia PDF Downloads 231
552 Applicability of Linearized Model of Synchronous Generator for Power System Stability Analysis
Authors: J. Ritonja, B. Grcar
Abstract:
For synchronous generator simulation and analysis, and for power system stabilizer design and synthesis, a mathematical model of the synchronous generator is needed. The model has to describe the dynamics of the oscillations accurately, while at the same time being transparent enough for analysis and sufficiently simple for control system design. To study the oscillations of the synchronous generator against the rest of the power system, a model of the synchronous machine connected to an infinite bus through a transmission line having resistance and inductance is needed. In this paper, the linearized reduced-order dynamic model of the synchronous generator connected to the infinite bus is presented and analyzed in detail. This model accurately describes the dynamics of the synchronous generator only in a small vicinity of an equilibrium state. As operation departs from the selected equilibrium point, the accuracy of this model decreases considerably. In this paper, the equation descriptions and the parameter determination for the linearized reduced-order mathematical model of the synchronous generator are explained and summarized, and they represent a useful starting point for work in the areas of synchronous generator dynamic behaviour analysis and synchronous generator control system design and synthesis. The main contribution of this paper is the detailed analysis of the accuracy of the linearized reduced-order dynamic model over the entire operating range of the synchronous generator. The borders of the areas where the linearized reduced-order mathematical model accurately describes the synchronous generator's dynamics are determined with systematic numerical analysis. A thorough eigenvalue analysis of the linearized models over the entire operating range is performed. In the paper, the parameters of the linearized reduced-order dynamic model of a laboratory salient-pole synchronous generator were determined and used for the analysis. The theoretical conclusions were confirmed by the agreement of experimental and simulation results.
Keywords: eigenvalue analysis, mathematical model, power system stability, synchronous generator
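As a minimal illustration of the eigenvalue analysis described above, the sketch below builds the classic two-state linearized swing-equation model of a generator on an infinite bus and checks small-signal stability from the eigenvalues of the state matrix. The inertia, damping and synchronizing-torque values are illustrative assumptions, not the laboratory machine's parameters, and the two-state model is a simpler stand-in for the paper's reduced-order model.

```python
import numpy as np

# Small-signal swing-equation model of a generator on an infinite bus:
#   d(delta)/dt = w0 * d_omega
#   d(omega)/dt = (-Ks * d_delta - D * d_omega) / (2 * H)
w0 = 2 * np.pi * 50       # synchronous speed, rad/s (50 Hz system assumed)
H, D, Ks = 3.5, 2.0, 1.2  # inertia constant (s), damping, synchronizing coefficient (pu)

A = np.array([[0.0, w0],
              [-Ks / (2 * H), -D / (2 * H)]])
eig = np.linalg.eigvals(A)
print("eigenvalues:", eig)
print("stable:", np.all(eig.real < 0))  # negative real parts => damped oscillation
```

Repeating this computation over a grid of operating points (which changes Ks) is exactly the kind of systematic eigenvalue sweep the paper uses to map where the linearized model remains valid.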
Procedia PDF Downloads 245
551 Effect of Impact Angle on Erosive Abrasive Wear of Ductile and Brittle Materials
Authors: Ergin Kosa, Ali Göksenli
Abstract:
Erosion and abrasion are wear mechanisms that reduce the lifetime of machine elements such as valves, pumps and pipe systems. Both wear mechanisms can act at the same time, causing a 'synergy' effect which leads to rapid damage of the surface. Several parameters affect the erosive-abrasive wear rate. In this study, the effect of the particle impact angle on the wear rate and wear mechanism of ductile and brittle materials was investigated. A new slurry pot was designed for the experimental investigation. Silica sand was used as the abrasive particle, with particle sizes ranging between 200 and 500 µm. All tests were carried out in a sand-water mixture of 20% concentration for four hours. The impact velocity of the particles was 4.76 m/s. Steel St 37, with a Brinell Hardness Number (BHN) of 245, was used as the ductile material, and quenched St 37, with 510 BHN, as the brittle material. After the wear tests, the morphology of the eroded surfaces was investigated using optical microscopy and a Scanning Electron Microscope for a better understanding of the wear mechanisms acting at different impact angles. The results indicated that the wear rate of the ductile material was higher than that of the brittle material. Maximum wear was observed for the ductile material at a particle impact angle of 30°. On the contrary, the wear rate of the brittle material increased with impact angle and reached its maximum value at 45°. A high number of craters was detected on the ductile material surface; plastic deformation zones, which are typical failure modes for ductile materials, were also detected. Craters formed by the particles were deeper than those on the worn surface of the brittle material, where the number of craters decreased. Microcracks around the craters, which are typical failure modes of brittle materials, were detected; deformation wear was the dominant wear mechanism on the brittle material. In conclusion, the wear rate cannot be directly related to the impact angle of the hard particles alone, due to the different responses of ductile and brittle materials.
Keywords: erosive wear, particle impact angle, silica sand, wear rate, ductile-brittle material
Procedia PDF Downloads 401
550 Rain Gauges Network Optimization in Southern Peninsular Malaysia
Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno
Abstract:
Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. One study showed that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the level of accuracy preset by rainfall data users. The well-known geostatistical (variance-reduction) method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. The rain gauge network structure does not depend on station density alone; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November-February) for the period 1975-2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the proposed method yielded a network of 64 rain gauges with the minimum estimated variance, so 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's performance in providing quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations, which were optimally relocated to new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations to new locations to determine the optimal positions. The relocation of the stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
Keywords: geostatistics, simulated annealing, semivariogram, optimization
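The sketch below pairs the exponential semivariogram (the best-performing model in this study) with a toy simulated-annealing loop that selects 64 of 84 candidate gauge sites. Maximizing the summed inter-station semivariance is used here as a crude stand-in for the study's kriging variance-reduction objective, and all coordinates and annealing settings are invented.

```python
import math
import random

def exponential_semivariogram(h, nugget=0.0, sill=1.0, rng=50_000.0):
    """Exponential semivariogram model as a function of lag distance h (m)."""
    return nugget + (sill - nugget) * (1 - math.exp(-3 * h / rng))

random.seed(0)
# 84 candidate gauge sites with made-up coordinates in a 100 km square.
sites = [(random.uniform(0, 100_000), random.uniform(0, 100_000)) for _ in range(84)]

def spread(subset):
    # Summed pairwise semivariance: larger = stations better spread out.
    return sum(exponential_semivariogram(math.dist(a, b))
               for i, a in enumerate(subset) for b in subset[i + 1:])

current = random.sample(sites, 64)
temp = 1.0
for step in range(500):
    cand = current.copy()
    cand[random.randrange(64)] = random.choice(sites)  # swap one station
    delta = spread(cand) - spread(current)
    if delta > 0 or random.random() < math.exp(delta / temp):
        current = cand                                  # accept move
    temp *= 0.995                                       # cooling schedule
print("final spread score:", round(spread(current), 1))
```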
Procedia PDF Downloads 302
549 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development
Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo
Abstract:
Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single-vessel and multi-vessel operations. Towing long oil containment booms, several hundreds of meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but the related vessel operations in particular are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea states or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season only. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses in which maritime navigational bridge simulators were used to train the oil spill response authorities. The simulators were equipped with an oil spill functionality module. The courses were targeted at the coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training. In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies, complementing traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback that the spill modelling software provides on the oil spill behaviour in reaction to response measures.
Keywords: maritime training, oil spill response, simulation, vessel manoeuvring
Procedia PDF Downloads 172548 One-Class Classification Approach Using Fukunaga-Koontz Transform and Selective Multiple Kernel Learning
Authors: Abdullah Bal
Abstract:
This paper presents a one-class classification (OCC) technique based on the Fukunaga-Koontz Transform (FKT) for binary classification problems. The FKT is originally a powerful tool for feature selection and ordering in two-class problems. To utilize the standard FKT for the data domain description problem (i.e., one-class classification), a set of non-class samples lying outside the boundary of the positive (target) class, which is formed from the limited training data, has been constructed synthetically. A tunnel-like decision boundary around the upper and lower borders of the target class samples has been designed using the statistical properties of the training feature vectors. To capture higher-order statistics of the data and increase discrimination ability, the proposed method, termed one-class FKT (OC-FKT), has been extended to a nonlinear version via kernel machines, referred to as OC-KFKT for short. Multiple kernel learning (MKL) is a family of machine learning methods that tries to find an optimal combination of a set of sub-kernels to achieve better results. However, the discriminative ability of some of the base kernels may be low, and an OC-KFKT designed with such kernels yields unsatisfactory classification performance. To address this problem, the quality of the sub-kernels should be evaluated, and the weak kernels must be discarded before the final decision-making process. Stimulated by ensemble learning (EL), MKL/OC-FKT and selective MKL/OC-FKT frameworks have been designed to weight and then select the sub-classifiers using the discriminability and diversity measured by eigenvalue ratios, assessed according to their regions in the FKT subspaces. Comparative experiments, performed on various low- and high-dimensional data against state-of-the-art algorithms, confirm the effectiveness of the proposed techniques, especially under small sample size (SSS) conditions. Keywords: ensemble methods, fukunaga-koontz transform, kernel-based methods, multiple kernel learning, one-class classification
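A minimal sketch of the FKT core that OC-FKT builds on, under stated assumptions: the synthetic "target" and "non-class" samples below merely stand in for the paper's target class and its synthetically constructed tunnel of non-class samples, and the energy-ratio score is an illustrative reading of the eigenvalue-based discrimination, not the authors' classifier.

```python
# Illustrative sketch: synthetic stand-ins for the target class and the
# synthesized non-class samples; the score is illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
# Target class varies mostly along the first two axes, non-class along the rest.
target = rng.normal(0.0, 1.0, (200, 5)) * np.array([3.0, 3.0, 0.3, 0.3, 0.3])
nonclass = rng.normal(0.0, 1.0, (200, 5)) * np.array([0.3, 0.3, 3.0, 3.0, 3.0])

def fkt(X1, X2):
    """Fukunaga-Koontz Transform: a shared basis in which the two class
    covariances have complementary eigenvalues (lam and 1 - lam)."""
    S1, S2 = np.cov(X1, rowvar=False), np.cov(X2, rowvar=False)
    d, V = np.linalg.eigh(S1 + S2)
    W = V @ np.diag(d ** -0.5)          # whitens S1 + S2 to the identity
    lam, U = np.linalg.eigh(W.T @ S1 @ W)
    return W @ U, lam                   # large lam -> class-1-dominant axes

basis, lam = fkt(target, nonclass)
dominant = lam > 0.5                    # directions where the target dominates

def score(x):
    """Fraction of the sample's energy lying in target-dominant directions."""
    z = basis.T @ x
    return np.sum(z[dominant] ** 2) / np.sum(z ** 2)

print(f"target sample: {score(target[0]):.2f}, non-class: {score(nonclass[0]):.2f}")
```

Because the whitened covariances sum to the identity, an eigenvalue ratio near 1 or 0 marks a highly discriminative direction, which is the property the MKL frameworks above use to weight and select sub-kernels.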
Procedia PDF Downloads 21547 The Relationship between Representational Conflicts, Generalization, and Encoding Requirements in an Instance Memory Network
Authors: Mathew Wakefield, Matthew Mitchell, Lisa Wise, Christopher McCarthy
Abstract:
The properties of memory representations in artificial neural networks have cognitive implications. Distributed representations that encode instances as a pattern of activity across layers of nodes afford memory compression and enforce the selection of a single point in instance space. These encoding schemes also appear to distort the representational space and to trade off the ability to validate that input information lies within the bounds of past experience. In contrast, a localist representation, which encodes some meaningful information in individual nodes of a network layer, affords less memory compression while retaining the integrity of the representational space. This allows the validity of an input to be determined. The validity (or familiarity) of the input, along with the capacity of localist representations for multiple instance selections, affords a memory sampling approach that dynamically balances the bias-variance trade-off. When the input is familiar, bias may be kept high by referring only to the most similar instances in memory. When the input is less familiar, variance can be increased by referring to more instances that capture a broader range of features. Using this approach in a localist instance memory network, an experiment demonstrates a relationship between representational conflict, generalization performance, and memorization demand. Relatively small sampling ranges produce the best performance on a classic machine learning dataset of visual objects. Combining memory validity with conflict detection produces a reliable confidence judgement that can separate responses with high and low error rates. Confidence can also be used to signal the need for supervisory input. Using this judgement, the need for supervised learning as well as memory encoding can be substantially reduced, with only a trivial detriment to classification performance. Keywords: artificial neural networks, representation, memory, conflict monitoring, confidence
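The familiarity-driven sampling idea can be pictured with the short sketch below: the memory answers with few neighbours when the input is familiar (higher bias) and widens its sample when it is not (higher variance). The familiarity measure, the mapping from familiarity to sample size, and the confidence formula are illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch of familiarity-driven instance sampling; the
# familiarity, k-mapping, and confidence formulas are assumptions.
import numpy as np

class InstanceMemory:
    def __init__(self, X, y):
        self.X, self.y = np.asarray(X, float), np.asarray(y)

    def query(self, x, k_min=1, k_max=15):
        d = np.linalg.norm(self.X - x, axis=1)
        # Familiarity: closeness to stored experience, scaled by the
        # typical distance between the query and the memory contents.
        familiarity = np.exp(-d.min() / np.median(d))
        # Unfamiliar input -> sample more instances (more variance).
        k = int(round(k_min + (1.0 - familiarity) * (k_max - k_min)))
        votes = self.y[np.argsort(d)[:k]]
        label = np.bincount(votes).argmax()
        # Conflict: disagreement among the sampled instances.
        conflict = 1.0 - (votes == label).mean()
        confidence = familiarity * (1.0 - conflict)
        return label, confidence

mem = InstanceMemory(np.random.rand(100, 4), np.random.randint(0, 3, 100))
print(mem.query(np.random.rand(4)))
```

Low confidence from this kind of judgement is what would signal the need for supervisory input in the scheme the abstract describes.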
Procedia PDF Downloads 127546 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising demand for renewable energy from organic matter has caused the spectrum of AD substrates to expand to a wider variety of organic materials, such as agricultural residues and farm manure, of which around 140 billion metric tons are generated annually worldwide. The problem, however, is that agricultural wastes are composed of heterogeneous materials that are difficult to degrade, particularly lignin, which makes up about 0–40% of the total lignocellulose content. This study aimed to evaluate the impact of varying lignin concentrations on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches over a retention time of 56 days, with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the bioadditive concentration most conducive to overall process improvement and yield increase. The batch experiments were set up in 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating under mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as feedstock. Biogas measurements were carried out using the displacement method and corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz model was fitted to the resulting data by non-linear regression to estimate the gas production potential, production rates, and the duration of lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process, and the higher the lignin concentration, the greater the inhibition. The modelling also showed that the rates of gas production were influenced by the lignin concentration added to the system: the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective gas production rate in mL/gVS.day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the rate at 300 mg was 0.1 mL/gVS.day higher than at 200 mg. The impact of the yeast-based bioadditive on the production rate was most significant at 400 mg and 500 mg, where the rate improved by 0.1 and 0.2 mL/gVS.day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; further study of its application to the AD of agricultural residues with high lignin content will therefore be the next step in this research. Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
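The modified Gompertz model used for the non-linear regression is commonly written as B(t) = P · exp{−exp[Rm · e/P · (λ − t) + 1]}, where B(t) is the cumulative gas yield, P the gas production potential, Rm the maximum production rate, and λ the lag phase. Below is a minimal fitting sketch on synthetic data; the parameter values and sampling days are illustrative, not the study's measurements.

```python
# Fit the modified Gompertz equation to synthetic cumulative biogas data.
# Parameter values are illustrative, not measurements from this study.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    """B(t) = P * exp(-exp(Rm * e / P * (lam - t) + 1))."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

t = np.arange(0.0, 57.0, 7.0)            # weekly sampling over a 56-day run
rng = np.random.default_rng(2)
obs = gompertz(t, 120.0, 2.2, 4.0) + rng.normal(0.0, 2.0, t.size)

(P, Rm, lam), _ = curve_fit(gompertz, t, obs, p0=[100.0, 1.0, 1.0], maxfev=10000)
print(f"P = {P:.1f} mL/gVS, Rm = {Rm:.2f} mL/gVS.day, lag = {lam:.1f} days")
```

Comparing the fitted Rm and λ across lignin doses is what quantifies the degree of inhibition, as in the per-concentration rates reported in the abstract.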
Procedia PDF Downloads 91545 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to achieve high uptime is the goal of predictive maintenance; the major challenge faced by businesses across industries is the significant cost associated with delays in service delivery due to system downtime. Most of these businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has created the need to monitor system activities and enhance system-to-system or component-to-component interactions, resulting in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexities inherent in such datasets, like the imbalanced classification problem, it becomes extremely difficult to build models with high precision. Data-driven predictive modelling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large volumes of data generated by industrial processes inherently come with varying degrees of complexity, which poses a challenge for analytics. In particular, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy during model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is developed by exploiting aircraft historical data. The approach is based on a hybrid ensemble method that improves the prediction of the minority class during learning; the impact of the approach on the multiclass imbalance problem is also investigated. The feasibility and effectiveness of the approach are validated, in terms of its performance, using real-world aircraft operation and maintenance datasets spanning over 7 years. The approach shows better performance than other, similar approaches, and its strength in handling multiclass imbalanced datasets is likewise validated, with results showing good performance compared with other baseline classifiers. Keywords: prognostics, data-driven, imbalance classification, deep learning
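As an illustration of the ensemble idea for imbalanced learning, the sketch below uses an EasyEnsemble-style scheme: each base learner sees every minority-class record plus an equal-sized random undersample of the majority class, and predicted probabilities are averaged. This is a stand-in for the paper's hybrid method, with synthetic data in place of the aircraft operation and maintenance records.

```python
# Illustrative EasyEnsemble-style resampling, not the paper's hybrid method;
# synthetic data stand in for aircraft maintenance records.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# ~3% minority class mimics rare component-replacement events.
X, y = make_classification(n_samples=5000, weights=[0.97], random_state=0)
rng = np.random.default_rng(0)
minority, majority = np.where(y == 1)[0], np.where(y == 0)[0]

models = []
for _ in range(10):
    # Balanced view: all minority records + equal-size majority undersample.
    sub = rng.choice(majority, size=minority.size, replace=False)
    idx = np.concatenate([minority, sub])
    models.append(DecisionTreeClassifier(max_depth=5).fit(X[idx], y[idx]))

# Average the per-model probability of the minority ("replace") class.
proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
recall = ((proba > 0.5) & (y == 1)).sum() / (y == 1).sum()
print(f"minority-class recall: {recall:.2f}")
```

Because each base learner trains on a balanced view of the data, the ensemble avoids the majority-class bias that a single classifier fitted to the raw data would exhibit.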
Procedia PDF Downloads 174544 The Importance of Developing Pedagogical Agency Capacities in Initial Teacher Formation: A Critical Approach to Advance in Social Justice
Authors: Priscilla Echeverria
Abstract:
This paper addresses initial teacher formation as a formative space in which pedagogy students develop the capacity for pedagogical agency needed to contribute to social justice, considering ethical, political, and epistemic dimensions. The paper first discusses the concepts of agency, pedagogical interaction, and social justice from a critical perspective; it then offers preliminary results on the pedagogical agency capacity of novice teachers, drawn from the analysis of critical incidents as a research methodology. The study is motivated by the concern that, in response to the current neoliberal scenario, many initial teacher formation (ITF) programs have reduced the meaning of education to instruction, and pedagogy to methodology, favouring the formation of a technical professional over a reflective or critical one. From this concern, the study proposes that the restitution of the subject is an urgent task in teacher formation, so it is essential to enable future teachers in their capacity for action and to advance in eliminating institutionalized oppression insofar as it affects that capacity. Given that oppression takes place in human interaction, I propose that initial teacher formation should develop sensitivity and educate the gaze to identify oppression and take action against it, both in pedagogical interactions, which configure political, ethical, and epistemic subjectivities, and in the hidden and official curriculum. All this proceeds from the premise that modelling democratic and dialogical interactions is basic to any program that seeks to contribute to a more just and empowered society. The contribution of this study lies in opening a discussion in an area about which we know little: the impact of the type of interactions offered by university teaching in ITF on the capacity of future teachers to be pedagogical agents. For this reason, the study seeks to gather evidence of the results of this formation by analysing the pedagogical agency capacity of novice teachers, or, in other words, how capable graduates of secondary pedagogy programs are, in their first pedagogical experiences, of acting and making decisions that place the formative purposes they can autonomously define before the technical or bureaucratic demands imposed by the curriculum or the official culture. This discussion is part of my doctoral research, "The importance of developing the capacity for ethical-political-epistemic agency in novice teachers during initial teacher formation to contribute to social justice", which I am currently developing in the Educational Research program of the University of Lancaster, United Kingdom, as a Conicyt fellow of the 2019 cohort. Keywords: initial teacher formation, pedagogical agency, pedagogical interaction, social justice, hidden curriculum
Procedia PDF Downloads 97543 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole
Authors: Basavaraj R. Endigeri, S. G. Sarganachari
Abstract:
Steel is widely used in machine parts, structural equipment, and many other applications. In many steel structural elements, holes of different shapes and orientations are made to satisfy design requirements. The presence of holes in steel elements creates stress concentrations, which ultimately reduce the mechanical strength of the structure. It is therefore of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. A literature survey shows that, to date, there is no analytical solution for reducing the stress concentration in a steel plate by providing auxiliary holes at definite locations and radii; numerical methods can, however, be used to determine the optimal locations and radii of auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analysed, and the stress concentration effect is represented graphically. The effect on the stress concentration of introducing auxiliary holes at optimal locations and radii is likewise represented graphically. The finite element analysis package ANSYS 11.0 is used to analyse the steel plate, with the analysis carried out using the PLANE42 element. Further, the ANSYS optimization module is used to determine the locations and radii of the auxiliary holes that minimize the stress concentration. All results are presented graphically for different ratios of hole diameter to plate width, in the form of graphs for determining the locations and diameters of the optimal auxiliary holes, together with a graph of the stress concentration factor versus the ratio of central hole diameter to plate width. The finite element results indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole. Keywords: finite element method, optimization, stress concentration factor, auxiliary holes
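For context, the severity of the central-hole effect that the FEM study mitigates can be benchmarked against the classical Inglis result for an elliptical hole in an infinite plate under remote uniaxial tension σ, with semi-axis a normal to the load and semi-axis b along it. This is textbook background, not the paper's derivation:

```latex
\[
  K_t \;=\; \frac{\sigma_{\max}}{\sigma} \;=\; 1 + \frac{2a}{b},
  \qquad \text{so a circular hole } (a = b) \text{ gives } K_t = 3 .
\]
```

Auxiliary holes reduce the effective K_t by smoothing the stress trajectories that crowd the edge of the main hole, which is the effect the ANSYS optimization exploits when placing them on either side of the central hole.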
Procedia PDF Downloads 453