Search results for: computational modelling
Paper Count: 3613

823 A Comparative Study on the Effects of Different Clustering Layouts and Geometry of Urban Street Canyons on Urban Heat Island in Residential Neighborhoods of Kolkata

Authors: Shreya Banerjee, Roshmi Sen, Subrata Chattopadhyay

Abstract:

Urbanization during the second half of the last century has created many serious environment-related issues leading to global warming and climate change. India is no exception, as the country is also facing the problems of global warming and urban heat islands (UHI) in all of its major metropolises. This paper discusses the effect of different housing cluster layouts, site geometry, and geometry of urban street canyons on the urban heat island profile. The study is carried out using the three-dimensional microclimatic computational fluid dynamics model ENVI-met version 3.1. Simulations are carried out for a typical summer day, 21 June 2015, in four different residential neighborhoods in the city of Kolkata, which predominantly belongs to the warm-humid monsoon climate. The results show the changing pattern of the urban heat island profile with respect to different clustering layouts, geometry, and morphology of urban street canyons. The comparison between the four neighborhoods shows that the microclimatic variables are strongly dependent on the neighborhood layout pattern and geometry. The inferences obtained from this study can inform the formulation of neighborhood design by-laws that attenuate the urban heat island effect.

Keywords: urban heat island, neighborhood morphology, site microclimate, ENVI-met, numerical analysis

Procedia PDF Downloads 364
822 Spanish Language Violence Corpus: An Analysis of Offensive Language in Twitter

Authors: Beatriz Botella-Gil, Patricio Martínez-Barco, Lea Canales

Abstract:

The Internet and ICTs are an integral and omnipresent element of our daily lives. Technologies have changed the way we see the world and relate to it. The number of companies in the ICT sector increases every year, and more and more work takes place online, from sending e-mails to the way companies promote themselves. In social life, ICTs have gained momentum. Social networks are useful for keeping in contact with family or friends who live far away. This change in how we manage our relationships through electronic devices and social media has been experienced differently depending on the age of the person. According to currently available data, people are increasingly connected to social media and other forms of online communication. It is therefore no surprise that violent content has also made its way to digital media. One important reason for this is the anonymity provided by social media, which gives the aggressor a sense of impunity. Moreover, it is not uncommon to find derogatory comments attacking a person’s physical appearance, hobbies, or beliefs. This is why it is necessary to develop artificial intelligence tools that allow us to keep track of violent comments related to violent events so that this type of violent online behavior can be deterred. The objective of our research is to create a guide for detecting and recording violent messages. Our annotation guide begins with a study of the problem of violent messages. First, we consider the characteristics that a message should contain for it to be categorized as violent. Second, we consider the possibility of establishing different levels of aggressiveness. To build the corpus, we chose the social network Twitter because of the ease of obtaining its freely available messages. We chose two recent, highly visible violent cases that occurred in Spain. Both received a high degree of social media coverage and user comments. Our corpus has a total of 633 messages, manually tagged according to the characteristics we considered important, such as the verbs used, the presence of exclamations or insults, and the presence of negations. We consider it necessary to create wordlists of terms present in violent messages as indicators of violence, such as lists of negative verbs, insults, and negative phrases. As a final step, we will use machine learning systems to check the data obtained and the effectiveness of our guide.

Keywords: human language technologies, language modelling, offensive language detection, violent online content

Procedia PDF Downloads 122
821 A Simple Approach to Reliability Assessment of Structures via Anomaly Detection

Authors: Rims Janeliukstis, Deniss Mironovs, Andrejs Kovalovs

Abstract:

Operational Modal Analysis (OMA) is widely applied as a Structural Health Monitoring method for structural damage identification and assessment by tracking the changes of the identified modal parameters over time. Unfortunately, modal parameters also depend on external factors such as temperature and loads. Any structural condition assessment using modal parameters should take those external factors into consideration; otherwise, there is a high chance of false positives. A method of structural reliability assessment based on an anomaly detection technique called the Mahalanobis Squared Distance (MSD) is proposed. It requires a set of reference conditions to learn the healthy state of a structure, to which all future parameters are compared. In this study, structural modal parameters (natural frequency and mode shape), as well as ambient temperature and loads acting on the structure, are used as features. Numerical tests were performed on a finite element model of a carbon fibre reinforced polymer composite beam with delamination damage at various locations and of various severities. The advantages of the demonstrated approach include relatively few computational steps, the ability to distinguish between healthy and damaged conditions, and the ability to discriminate between different damage severities. It is anticipated to be promising in the reliability assessment of mass-produced structural parts.
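
As a rough illustration of the anomaly detection step described above, the sketch below learns a Mahalanobis squared distance baseline from a set of healthy reference observations and flags new observations whose distance exceeds a percentile threshold. The feature values and their ordering (natural frequency, mode-shape metric, temperature, load) and the 99th-percentile threshold rule are illustrative assumptions, not values from the study.

```python
import numpy as np

def msd(features, mean, cov_inv):
    """Mahalanobis squared distance of each feature vector to the reference mean."""
    d = features - mean
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Reference (healthy) observations: natural frequency, mode-shape metric,
# temperature and load per measurement -- columns and values are illustrative.
rng = np.random.default_rng(0)
healthy = rng.normal([12.5, 1.0, 20.0, 5.0], [0.1, 0.02, 3.0, 0.5], size=(200, 4))

mean = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

# Threshold taken from the healthy set itself (99th percentile) -- an assumption.
threshold = np.percentile(msd(healthy, mean, cov_inv), 99)

# New observations: one similar to the reference, one with a shifted natural frequency.
new = np.array([[12.48, 1.01, 22.0, 5.1],
                [11.90, 0.95, 22.0, 5.1]])
flags = msd(new, mean, cov_inv) > threshold
print(flags)  # e.g. [False  True] -> second observation flagged as anomalous
```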

Keywords: operational modal analysis, reliability assessment, anomaly detection, damage, mahalanobis squared distance

Procedia PDF Downloads 108
820 Variable vs. Fixed Window Width Code Correlation Reference Waveform Receivers for Multipath Mitigation in Global Navigation Satellite Systems with Binary Offset Carrier and Multiplexed Binary Offset Carrier Signals

Authors: Fahad Alhussein, Huaping Liu

Abstract:

This paper compares the multipath mitigation performance of code correlation reference waveform receivers with variable and fixed window width, for binary offset carrier (BOC) and multiplexed binary offset carrier (MBOC) signals typically used in global navigation satellite systems. In the variable window width method, the width is iteratively reduced until the distortion on the discriminator caused by multipath is eliminated. This distortion is measured as the Euclidean distance between the actual discriminator (obtained with the incoming signal) and the local discriminator (generated with a local copy of the signal). The variable window width has shown better performance compared to the fixed window width. In particular, the former yields zero error for all delays for the BOC and MBOC signals considered, while the latter gives rather large nonzero errors for small delays in all cases. Due to its computational simplicity, the variable window width method is perfectly suitable for implementation in low-cost receivers.

Keywords: correlation reference waveform receivers, binary offset carrier, multiplexed binary offset carrier, global navigation satellite systems

Procedia PDF Downloads 127
819 Process Performance and Nitrogen Removal Kinetics in Anammox Hybrid Reactor

Authors: Swati Tomar, Sunil Kumar Gupta

Abstract:

Anammox is a promising and cost-effective alternative to conventional treatment systems that facilitates direct oxidation of ammonium nitrogen under anaerobic conditions, with nitrite as an electron acceptor, without the addition of any external carbon sources. The present study investigates the process kinetics of a laboratory-scale anammox hybrid reactor (AHR), which combines the dual advantages of attached and suspended growth. The performance and behaviour of the AHR were studied under varying hydraulic retention times (HRTs) and nitrogen loading rates (NLRs). The experimental unit consisted of four 5 L anammox hybrid reactors inoculated with a mixed seed culture containing anoxic and activated sludge. Pseudo steady state (PSS) ammonium and nitrite removal efficiencies of 90.6% and 95.6%, respectively, were achieved during the acclimation phase. After establishment of PSS, the performance of the AHR was monitored at seven different HRTs of 3.0, 2.5, 2.0, 1.5, 1.0, 0.5 and 0.25 d, with the NLR increasing from 0.4 to 4.8 kg N/m3d. The results showed that with the increase in NLR and decrease in HRT (3.0 to 0.25 d), the AHR registered an appreciable decline in nitrogen removal efficiency from 92.9% to 67.4%. The HRT of 2.0 d was considered optimal to achieve a substantial nitrogen removal of 89%, because on further decrease in HRT below 1.5 d, a remarkable decline in nitrogen removal efficiency was observed. Analysis of the data indicated that the attached growth system contributes an additional 15.4% ammonium removal and reduced the sludge washout rate (an additional 29% reduction). This enhanced performance may be attributed to a 25% increase in sludge retention time due to the attached growth media. Three kinetic models, namely, the first order, Monod and Modified Stover-Kincannon models, were applied to assess the substrate removal kinetics of nitrogen removal in the AHR. Validation of the models was carried out by comparing the experimental data with the values predicted by the respective models. For substrate removal kinetics, model validation revealed that the Modified Stover-Kincannon model is the most precise (R2=0.943) and can be suitably applied to predict the kinetics of nitrogen removal in the AHR. The Lawrence and McCarty model described the kinetics of bacterial growth. The predicted values of the yield coefficient and decay constant were in line with the experimentally observed values.
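
For readers unfamiliar with the Modified Stover-Kincannon model, the sketch below fits its usual linearised form, V/(Q(S0 - S)) = (KB/Umax) V/(Q S0) + 1/Umax, to loading data by linear regression and reports Umax, KB and R². The influent and effluent concentrations and flow rates are illustrative stand-ins, not the study's measurements.

```python
import numpy as np

# Illustrative data (not the paper's measurements): influent N (mg/L),
# effluent N (mg/L) and flow Q (L/d) at several HRTs, reactor volume V (L).
V = 5.0
S0 = np.array([1200.0, 1200.0, 1200.0, 1200.0, 1200.0])
S  = np.array([90.0, 120.0, 160.0, 260.0, 400.0])
Q  = np.array([1.67, 2.0, 2.5, 3.33, 5.0])

# Linearised Modified Stover-Kincannon model:
#   V/(Q*(S0-S)) = (KB/Umax) * V/(Q*S0) + 1/Umax
x = V / (Q * S0)
y = V / (Q * (S0 - S))
slope, intercept = np.polyfit(x, y, 1)
Umax = 1.0 / intercept          # maximum substrate utilisation rate (mg/L d)
KB = slope * Umax               # saturation constant (mg/L d)

r = np.corrcoef(x, y)[0, 1]
print(f"Umax = {Umax:.1f}, KB = {KB:.1f}, R^2 = {r**2:.3f}")
```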

Keywords: anammox, kinetics, modelling, nitrogen removal, sludge wash out rate, AHR

Procedia PDF Downloads 307
818 Influence of Kinematic, Physical and Mechanical Structure Parameters on Aeroelastic GTU Shaft Vibrations in Magnetic Bearings

Authors: Evgeniia V. Mekhonoshina, Vladimir Ya. Modorskii, Vasilii Yu. Petrov

Abstract:

At present, vibrations of gas transmittal unit (GTU) rotors evade reliable forecasting. This paper describes elastic oscillation modes in resilient supports and rotor impellers modeled during computational experiments that account for the interaction in the system of gas-dynamic flow and compressor rotor. The aeroelastic approach was verified on a model problem of the interaction between a supersonic jet in a shock tube and a deformed plate. The ANSYS 15.0 engineering analysis system was used as the numerical simulation tool in this paper. The finite volume method was used for the gas dynamics, and the finite element method was used for assessment of the stress-strain state (SSS) components. The rotation speed and the material’s elasticity modulus were varied during the calculations, and the SSS components and gas-dynamic parameters in the dynamic system of gas-dynamic flow and compressor rotor were evaluated. The analysis of the time dependence demonstrated that the gas-dynamic parameters near the rotor blades oscillate at 200 Hz, while the SSS parameters at the upper blade edge oscillate at four times that frequency, i.e., at the blade frequency. It was found that accounting for aeroelasticity may correct the vibration amplitudes at the test points at the magnetic bearings by up to 50%, and the phases by about -π/4.

Keywords: centrifugal compressor, aeroelasticity, interdisciplinary calculation, oscillation phase displacement, vibration, nonstationarity

Procedia PDF Downloads 255
817 Algae Biofertilizers Promote Sustainable Food Production and Nutrient Efficiency: An Integrated Empirical-Modeling Study

Authors: Zeenat Rupawalla, Nicole Robinson, Susanne Schmidt, Sijie Li, Selina Carruthers, Elodie Buisset, John Roles, Ben Hankamer, Juliane Wolf

Abstract:

Agriculture has radically changed the global biogeochemical cycle of nitrogen (N). Fossil-fuel-enabled synthetic N-fertiliser is a foundation of modern agriculture, but crops use only about half of what is applied to soil. To address N-pollution from cropping and the large carbon and energy footprint of N-fertiliser synthesis, new technologies delivering enhanced energy efficiency, decarbonisation, and a circular nutrient economy are needed. We characterised algae fertiliser (AF) as an alternative to synthetic N-fertiliser (SF) using empirical and modelling approaches. We cultivated microalgae in nutrient solution and modelled up-scaled production in nutrient-rich wastewater. Over four weeks, AF released 63.5% of its N as ammonium and nitrate, and 25% of its phosphorus (P) as phosphate, to the growth substrate, while SF released 100% of its N and 20% of its P. To maximise crop N-use and minimise N-leaching, we explored AF and SF dose-response curves with spinach under glasshouse conditions. AF-grown spinach produced 36% less biomass than SF-grown plants due to AF’s slower and linear N-release, while SF resulted in 5-times higher N-leaching loss than AF. Optimised blends of AF and SF boosted crop yield and minimised N-loss due to greater synchrony of N-release and crop uptake. Additional benefits of AF included greener leaves, lower leaf nitrate concentration, and higher microbial diversity and water holding capacity in the growth substrate. Life-cycle analysis showed that replacing the most effective SF dosage with AF lowered the carbon footprint of fertiliser production from 2.02 g CO₂ (C-producing) to -4.62 g CO₂ (C-sequestering), with a further 12% reduction when AF is produced on wastewater. Embodied energy was lowest for AF-SF blends and could be reduced by 32% when cultivating algae on wastewater. We conclude that (i) microalgae offer a sustainable alternative to synthetic N-fertiliser in spinach production and potentially other crop systems, and (ii) microalgae biofertilisers support the circular nutrient economy and several sustainable development goals.

Keywords: bioeconomy, decarbonisation, energy footprint, microalgae

Procedia PDF Downloads 134
816 Discovering New Organic Materials through Computational Methods

Authors: Lucas Viani, Benedetta Mennucci, Soo Young Park, Johannes Gierschner

Abstract:

Organic semiconductors have attracted the attention of the scientific community in the past decades due to their unique physicochemical properties, allowing new designs and alternative device fabrication methods. To date, organic electronic devices have been largely based on conjugated polymers, mainly because of their easy processability. In recent years, due to the moderate energy transport (ET) and charge transport (CT) efficiencies and the ill-defined nature of polymeric systems, the focus has been shifting to small conjugated molecules with a well-defined chemical structure, easier control of intermolecular packing, and enhanced CT and ET properties. This has led to the synthesis of new small molecules, followed by the growth of their crystalline structure and ultimately by device preparation. This workflow is commonly followed without a clear knowledge of the ET and CT properties of the macroscopic system, which may lead to financial and time losses, since not all materials will deliver the properties and efficiencies demanded by current standards. In this work, we present a theoretical workflow designed to predict the key ET properties of these new materials prior to synthesis, thus speeding up the discovery of new promising materials. It is based on quantum mechanical, hybrid, and classical methodologies, starting from a single-molecule structure, proceeding to the prediction of its packing structure, and finishing with the prediction of properties of interest such as static and averaged excitonic couplings and the exciton diffusion length.

Keywords: organic semiconductor, organic crystals, energy transport, excitonic couplings

Procedia PDF Downloads 252
815 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms that could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), a large amount of additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome because they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner’s knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. Thus, next steps will consider the incorporation of additional data points, knowledge estimation models to model the knowledge mastery of a learner more accurately, and analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be used to make changes to a course, for example to the course content (certain sections of learning material may turn out not to help many learners master the intended learning outcomes) or to the course design (such as the type and duration of feedback).
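
As a hedged illustration of how such rules could be expressed over xAPI-style records, the sketch below flags two of the guessing behaviours named above on simplified statement dictionaries. Real xAPI statements carry a richer actor/verb/object structure, and the field names and thresholds used here are assumptions for illustration only.

```python
from collections import Counter

# Simplified xAPI-like statements; fields and thresholds are illustrative assumptions.
statements = [
    {"learner": "u1", "question": "q1", "choice": "A", "result": False},
    {"learner": "u1", "question": "q1", "choice": "B", "result": False},
    {"learner": "u1", "question": "q1", "choice": "C", "result": True},
    {"learner": "u1", "question": "q2", "choice": "A", "result": False},
]

def tried_every_option(stmts, n_options=4):
    """Flag learner/question pairs where the learner cycles through the options."""
    per_question = Counter((s["learner"], s["question"]) for s in stmts)
    return {lq for lq, attempts in per_question.items() if attempts >= n_options - 1}

def same_option_everywhere(stmts):
    """Flag learners who pick one single option for every question of a test."""
    choices = {}
    for s in stmts:
        choices.setdefault(s["learner"], set()).add(s["choice"])
    return {learner for learner, picked in choices.items() if len(picked) == 1}

print(tried_every_option(statements))      # e.g. {('u1', 'q1')}
print(same_option_everywhere(statements))  # empty set for this toy data
```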

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 116
814 Using Rainfall Simulators to Design and Assess the Post-Mining Erosional Stability

Authors: Ashraf M. Khalifa, Hwat Bing So, Greg Maddocks

Abstract:

Changes to the mining environmental approvals process in Queensland have been rolled out under the MERFP Act (2018). This includes requirements for a Progressive Rehabilitation and Closure Plan (PRC Plan). Key considerations of the landform design report within the PRC Plan must include: (i) identification of materials available for landform rehabilitation, including their ability to achieve the required landform design outcomes, (ii) erosion assessments to determine landform heights, gradients, profiles, and material placement, (iii) slope profile design considering the interactions between soil erodibility, rainfall erosivity, landform height, gradient, and vegetation cover to identify acceptable erosion rates over a long-term average, and (iv) an analysis of future stability based on the factors described above, e.g., erosion and/or landform evolution modelling. ACARP funded an extensive and thorough erosion assessment program using rainfall simulators from 1998 to 2010. The ACARP program included laboratory assessment of 35 soil and spoil samples from 16 coal mines and samples from a gold mine in Queensland using a 3 x 0.8 m laboratory rainfall simulator. The reliability of the laboratory rainfall simulator was verified through field measurements using larger 20 x 5 m flumes and catchment-scale measurements at three sites (three different catchments, with an average area of 2.5 ha each). Soil cover systems are a primary component of a constructed mine landform. The primary functions of a soil cover system are to sustain vegetation and limit the infiltration of water and oxygen into underlying reactive mine waste. If the external surface of the landform erodes, the functions of the cover system cannot be maintained, and the cover system will most likely fail. Assessing a constructed landform’s potential ‘long-term’ erosion stability requires defensible erosion rate thresholds below which rehabilitation landform designs are considered acceptably erosion-resistant or ‘stable’. The process used to quantify erosion rates using rainfall simulators (flumes) to measure rill and inter-rill erosion on bulk samples under laboratory conditions or on in-situ material under field conditions will be explained.

Keywords: open-cut, mining, erosion, rainfall simulator

Procedia PDF Downloads 97
813 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space

Authors: Nanjiang Chen

Abstract:

In the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.

Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation

Procedia PDF Downloads 36
812 Numerical Simulation of the Effect of Single and Dual Synthetic Jet on Stall Phenomenon On NACA (National Advisory Committee for Aeronautics) GA(W)-2 Airfoil

Authors: Abbasali Abouei Mehrizi, Hamid Hassanzadeh Afrouzi

Abstract:

Reducing the drag force increases the efficiency and improves the performance of an aircraft. Flow control methods delay flow separation, thereby reducing the reversed flow in the separation region, enhancing the lift force while decreasing the drag force, and thus improving aircraft efficiency. Flow control methods can be divided into active and passive types. The synthetic jet actuator (SJA) used in this study on the NACA GA(W)-2 airfoil is an active flow control method for preventing stall on the airfoil. In this research, the airfoil at different angles of attack, with and without jets, has been simulated and compared using OpenFOAM. Also, after determining the proper SJA position on the airfoil suction surface, the simultaneous effect of two SJAs has been discussed. The SJA was found to have the best effect at 12% chord (C), close to the airfoil’s leading edge (LE). At 12% chord, the SJA decreases the drag significantly while increasing lift; the average lift increase was higher than in the other configurations and was equal to 10.4%. The highest drag reduction was about 5% at SJA = 0.25C. Then, due to the positive effects of the SJA in the 12% and 25% chord regions, these regions were considered for applying dual jets at two post-stall angles of attack, i.e., 16° and 22°.

Keywords: active and passive flow control methods, computational fluid dynamics, flow separation, synthetic jet

Procedia PDF Downloads 78
811 Automatic Fluid-Structure Interaction Modeling and Analysis of Butterfly Valve Using Python Script

Authors: N. Guru Prasath, Sangjin Ma, Chang-Wan Kim

Abstract:

A butterfly valve is a quarter-turn valve which is used to control the flow of a fluid through a section of pipe. Generally, butterfly valves are used in a wide range of applications such as water distribution, sewage, and oil and gas plants. In particular, butterfly valves with larger diameters find immense application in hydro power plants to control the fluid flow. Given the cost and size constraints of running a laboratory setup, large-diameter valves are mostly studied by computational methods, which are the best and least expensive solution. For the fluid and structural analyses, CFD and FEM software are used to perform large-scale valve analyses, respectively. To perform such analyses of a butterfly valve, the CAD model has to be recreated and meshed in conventional software for each set of valve dimensions, which is a time-consuming process. To overcome that issue, a Python script was created to produce the complete pre-processing setup automatically in the Salome software. Specifying the dimensions of the model directly in the Python script makes the running time comparatively lower and provides an easier way to perform the analysis of the valve. Hence, in this paper, an attempt was made to study the fluid-structure interaction (FSI) of butterfly valves by varying the valve angles and dimensions using a Python script in the pre-processing software, and results are produced.
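
The paper's actual Salome script is not reproduced here; the sketch below only illustrates the kind of parametric driver such an automation could use, sweeping valve diameter and disc angle, writing one pre-processing script per case, and launching Salome in batch mode if it is installed. The script template, placeholder content, parameter values and helper function are hypothetical, and the exact batch-mode flags depend on the installed Salome version.

```python
import itertools
import shutil
import subprocess
from pathlib import Path

# Hypothetical parametric driver in the spirit of the paper's automation.
# The real per-case script would contain the GEOM/SMESH commands building the valve.
SCRIPT_TEMPLATE = """# Auto-generated Salome pre-processing case
DIAMETER_MM = {diameter}
DISC_ANGLE_DEG = {angle}
# ... GEOM geometry construction and SMESH meshing calls would go here ...
"""

def write_case(diameter_mm: float, disc_angle_deg: float, out_dir: Path) -> Path:
    """Write a case-specific Salome script with the valve parameters substituted."""
    out_dir.mkdir(parents=True, exist_ok=True)
    case = out_dir / f"valve_D{diameter_mm:.0f}_A{disc_angle_deg:.0f}.py"
    case.write_text(SCRIPT_TEMPLATE.format(diameter=diameter_mm, angle=disc_angle_deg))
    return case

diameters = [300.0, 500.0, 800.0]     # mm, illustrative values
angles = [10.0, 30.0, 60.0, 90.0]     # disc opening angles in degrees

for d, a in itertools.product(diameters, angles):
    case = write_case(d, a, Path("cases"))
    salome = shutil.which("salome")
    if salome:
        # "-t" runs Salome without the GUI; flags may differ between versions.
        subprocess.run([salome, "-t", str(case)], check=False)
    else:
        print(f"generated {case} (Salome not found on PATH, skipping batch run)")
```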

Keywords: butterfly valve, flow coefficient, automatic CFD analysis, FSI analysis

Procedia PDF Downloads 236
810 Structural Performance Evaluation of Electronic Road Sign Panels Reflecting Damage Scenarios

Authors: Junwon Seo, Bipin Adhikari, Euiseok Jeong

Abstract:

This paper is intended to evaluate the structural performance of welded electronic road signs under various damage scenarios (DSs) using a finite element (FE) model calibrated with full-scale ultimate load testing results. The tested electronic road sign specimen was built with a back skin made of 5052 aluminum and two channels and a frame made of 6061 aluminum, where the back skin was connected to the frame by welding. The tested specimen was 1.52 m long, 1.43 m wide, and 0.28 m deep. An actuator applied vertical loads at the center of the back skin of the specimen, resulting in a displacement of 158.7 mm and an ultimate load of 153.46 kN. Using these testing data, the generation and calibration of an FE model of the tested specimen were executed in ABAQUS, indicating that the difference in the ultimate load between the calibrated model simulation and the full-scale testing was only 3.32%. Then, six different DSs were simulated by diminishing the areas of the welded connection in the calibrated model. It was found that the corners at the back skin-frame joint were prone to connection failure for all the DSs, and that failure of the back skin-frame connection initiated mainly from the distant edges.

Keywords: computational analysis, damage scenarios, electronic road signs, finite element, welded connections

Procedia PDF Downloads 89
809 Iris Feature Extraction and Recognition Based on Two-Dimensional Gabor Wavelet Transform

Authors: Bamidele Samson Alobalorun, Ifedotun Roseline Idowu

Abstract:

Biometric technologies use human body parts for unique and reliable identification based on physiological traits. The iris recognition system is a biometric-based method of identification. The human iris has discriminating characteristics which make the method efficient. In order to achieve this efficiency, the distinct features of the human iris need to be extracted in order to generate accurate authentication of persons. In this study, an approach to iris recognition using a 2D Gabor filter for feature extraction is applied to iris templates. The 2D Gabor filter produced the patterns that were used for training and were equally passed to the Hamming distance matching technique for recognition. A comparison of results is presented using two iris image subjects with different matching indices of 1, 2, 3, 4 and 5 filters, based on the CASIA iris image database. By comparing the results for the two subjects, the computational time of the developed models was measured in terms of the training and average testing time of the Hamming distance classifier, and the best recognition accuracy found was 96.11%. Iris localization and segmentation were carried out using Daugman’s integro-differential operator, and the normalization was confined to Daugman’s rubber sheet model.
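
To make the Gabor-plus-Hamming-distance pipeline concrete, the sketch below builds a complex 2D Gabor kernel, binarises the phase of the filtered response into an iris code, and compares codes with the normalised Hamming distance. The kernel parameters, image sizes, and random "iris" strips are illustrative assumptions, not the filter settings or CASIA data used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=15, wavelength=8.0, theta=0.0, sigma=4.0):
    """Complex 2D Gabor kernel: Gaussian envelope times a complex sinusoid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
    return envelope * carrier

def iris_code(normalized_iris, kernel):
    """Binarise the phase of the Gabor response into a 2-bit-per-pixel code."""
    response = convolve2d(normalized_iris, kernel, mode="same", boundary="symm")
    return np.concatenate([(response.real > 0).ravel(), (response.imag > 0).ravel()])

def hamming_distance(code_a, code_b):
    return np.mean(code_a != code_b)

# Illustrative "normalized" iris strips (Daugman rubber-sheet output would be used here).
rng = np.random.default_rng(1)
iris_a = rng.random((64, 256))
iris_b = iris_a + 0.05 * rng.random((64, 256))   # same eye, slightly noisier capture
iris_c = rng.random((64, 256))                   # different eye

k = gabor_kernel()
print(hamming_distance(iris_code(iris_a, k), iris_code(iris_b, k)))  # small distance
print(hamming_distance(iris_code(iris_a, k), iris_code(iris_c, k)))  # close to 0.5
```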

Keywords: Daugman rubber sheet, feature extraction, Hamming distance, iris recognition system, 2D Gabor wavelet transform

Procedia PDF Downloads 61
808 A Finite Element Based Predictive Stone Lofting Simulation Methodology for Automotive Vehicles

Authors: Gaurav Bisht, Rahul Rathnakumar, Ravikumar Duggirala

Abstract:

Predictive simulations are one of the key focus areas in safety-critical industries such as aerospace and high-performance automotive engineering. The stone-chipping study is one such effort taken up by the industry to predict and evaluate the damage caused by gravel impact on vehicles. This paper describes a finite element based method that can simulate the ejection of gravel chips from a vehicle tire. The FE simulations were used to obtain the initial ejection velocity of the stones for various driving conditions using a computational contact mechanics approach. To verify the accuracy of the tire model, several parametric studies were conducted. The FE simulations resulted in stone loft velocities ranging from 0–8 m/s, regardless of tire speed. The stress on the tire at the instant of initial contact with the stone increased linearly with vehicle speed. Mesh convergence studies indicated that a highly resolved tire mesh tends to result in better momentum transfer between the tire and the stone. A fine tire mesh also showed a linearly increasing relationship between the tire forward speed and the stone lofting speed, which was not observed with coarser meshes. However, it also highlighted a potential challenge, in that the ejection velocity vector of the stone seemed to be sensitive to the mesh, owing to the FE-based contact mechanical formulation of the problem.

Keywords: abaqus, contact mechanics, foreign object debris, stone chipping

Procedia PDF Downloads 259
807 The Effect of Innovation Capability and Activity, and Wider Sector Condition on the Performance of Malaysian Public Sector Innovation Policy

Authors: Razul Ikmal Ramli

Abstract:

Successful implementation of innovation is a key success factor for a great organization. Innovation ensures competitive advantage as well as the long-run sustainability of an organization. In the public sector context, the role of innovation is crucial for resolving the dynamic challenges of public services, such as operating under economic uncertainty with limited resources, increasing operating expenditure, and growing expectations among citizens for high-quality, swift and reliable public services. Acknowledging the prospect of innovation as a tool for achieving a high-performance public sector, the Malaysian New Economic Model launched in 2011 intensified the government's commitment to foster innovation in the public sector. Since 2011, various initiatives have been implemented; however, little is known about the performance of public sector innovation in Malaysia. Hence, applying the national innovation system theory as a pillar, the research objectives were focused on measuring the levels of innovation capability, wider public sector condition for innovation, innovation activity, and innovation performance, as well as examining the relationship between the four constructs, with innovation performance as the dependent variable. For that purpose, 1,000 sets of self-administered survey questionnaires were distributed to heads of units and divisions of 22 federal ministries and central agencies in the administrative, security, social and economic sectors. Based on 456 returned questionnaires, the descriptive analysis found that innovation capability, wider sector condition, innovation activity and innovation performance were rated by respondents at a moderately high level. Based on Structural Equation Modelling, innovation performance was found to be influenced by innovation capability, wider sector condition for innovation, and innovation activity. In addition, the analysis found innovation activity to be the most important construct influencing innovation performance. The study concluded that the innovation policy implemented in the public sector of Malaysia sparked the motivation to innovate and resulted in various forms of innovation. However, the overall achievements were not as high as expected. Thus, the study suggests the formulation of a dedicated policy to strengthen the innovation capability, wider public sector condition for innovation, and innovation activity of the Malaysian public sector. Furthermore, strategic intervention needs to be focused on innovation activity, as this construct plays an important role in determining innovation performance. The success of public sector innovation implementation will not only benefit citizens but will also spearhead the competitiveness and sustainability of the country.

Keywords: public sector, innovation, performance, innovation policy

Procedia PDF Downloads 278
806 Assessment of the Effect of Building Materials on Energy Demand of Buildings in Jos: An Experimental and Numerical Approach

Authors: Zwalnan Selfa Johnson, Caleb Nanchen Nimyel, Gideon Duvuna Ayuba

Abstract:

Air conditioning accounts for a significant share of the overall energy consumed in residential buildings. Solar thermal gains account for a significant component of the air conditioning load in buildings. This study compares the solar thermal gain and air conditioning load of a proposed building design with those of a typical conventional building under the climatic conditions of Jos, Nigeria, using a combined experimental and computational method based on TRNSYS software. According to the findings of this study, the proposed building design's annual average solar thermal gains are lower than the reference building's average solar heat gains. The study case building's decreased solar heat gain is mostly attributable to the lower temperature of the building zones, which results from the greater building volume and lower fenestration ratio (the ratio of the external opening area to the area of the external walls). This result shows that the proposed building design adjusts to the local climate better than the standard conventional construction in Jos in maintaining a suitable temperature within the building. This finding means that the air-conditioning electrical energy consumption per volume of the proposed building design will be lower than that of a conventional building design.

Keywords: solar heat gain, building zone, cooling energy, air conditioning, zone temperature

Procedia PDF Downloads 85
805 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers’ preferences with a small data set and to design prediction tools for these enterprises.
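
The comparison of in-sample and out-of-sample error between a Gaussian-kernel SVM and logistic regression can be sketched with scikit-learn as below. The synthetic feature matrix stands in for the customers' demographic, behavioural and social data, which are not public, and the sample sizes and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Stand-in for the customer data set: binary genre-preference label from 20 features.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

svm = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_train, y_train)
logreg = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("Gaussian-kernel SVM", svm), ("Logistic regression", logreg)]:
    e_in = 1.0 - model.score(X_train, y_train)   # in-sample error
    e_out = 1.0 - model.score(X_test, y_test)    # out-of-sample error
    print(f"{name}: E_in = {e_in:.3f}, E_out = {e_out:.3f}")

# A large gap between E_in and E_out signals overfitting; shrinking the effective
# model capacity (e.g. lowering C, or using fewer features) narrows it.
```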

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 254
804 CO₂ Capture by Membrane Applied to Steel Production Process

Authors: Alexandra-Veronica Luca, Letitia Petrescu

Abstract:

Steel production is a major contributor to global warming. An average of 1.83 tons of CO₂ is emitted for every ton of steel produced, resulting in over 3.3 Mt of CO₂ emissions each year. The present paper focuses on the investigation and comparison of two O₂ separation methods and two CO₂ capture technologies applicable to the iron and steel industry. The O₂ used in steel production comes from an Air Separation Unit (ASU) using distillation or from air separation using membranes. The CO₂ capture technologies are represented by a two-stage membrane separation process and gas-liquid absorption using methyl di-ethanol amine (MDEA). Process modelling and simulation tools, as well as environmental tools, are used in the present study. The production capacity of the steel mill is 4,000,000 tonnes/year. In order to compare the two CO₂ capture technologies in terms of efficiency, performance, and sustainability, the following cases have been investigated: Case 1: steel production using O₂ from the ASU and no CO₂ capture; Case 2: steel production using O₂ from the ASU and gas-liquid absorption for CO₂ capture; Case 3: steel production using O₂ from the ASU and membranes for CO₂ capture; Case 4: steel production using O₂ from the membrane separation method and gas-liquid absorption for CO₂ capture; and Case 5: steel production using membranes for both air separation and CO₂ capture. The O₂ separation rate obtained with the distillation technology was about 96%, and about 33% with the membrane technology. Similarly, the O₂ purity resulting from the conventional process (i.e., distillation) is higher compared to the O₂ purity obtained with the membrane unit (99.50% vs. 73.66%). The air flow rate required for membrane separation is about three times higher than the air flow rate for cryogenic distillation (549,096.93 kg/h vs. 189,743.82 kg/h). A CO₂ capture rate of 93.97% was obtained in the membrane case, while the CO₂ capture rate for gas-liquid absorption was 89.97%. A quantity of 6,626.49 kg/h of CO₂ with a purity of 95.45% is separated from the total 23,352.83 kg/h of flue gas in the membrane process, while with absorption, 6,173.94 kg/h of CO₂ with a purity of 98.79% is obtained from 21,902.04 kg/h of flue gas and 156,041.80 kg/h of MDEA is recycled. The simulation results, obtained using the ChemCAD process simulator software, lead to the conclusion that membrane-based technology can be a suitable alternative for CO₂ removal in steel production. An environmental evaluation using the Life Cycle Assessment (LCA) methodology was also performed. Considering the electricity consumption and the performance and environmental indicators, Case 3 can be considered the most effective. The environmental evaluation, performed using GaBi software, shows that membrane technology can lead to lower environmental emissions if membrane production is based on benzene derived from toluene hydrodealkylation and chlorine and sodium hydroxide are produced using mixed technologies.

Keywords: CO₂ capture, gas-liquid absorption, Life Cycle Assessment, membrane separation, steel production

Procedia PDF Downloads 287
803 Impacts on the Modification of a Two-Blade Mobile on the Agitation of Newtonian Fluids

Authors: Abderrahim Sidi Mohammed Nekrouf, Sarra Youcefi

Abstract:

Fluid mixing plays a crucial role in numerous industries as it has a significant impact on the final product quality and performance. In certain cases, the circulation of viscous fluids presents challenges, leading to the formation of stagnant zones. To overcome this issue, stirring devices are employed for fluid mixing. This study focuses on a numerical analysis aimed at understanding the behavior of Newtonian fluids when agitated by a two-blade agitator in a cylindrical vessel. We investigate the influence of the agitator shape on fluid motion. Bi-blade agitators of this type are commonly used in the food, cosmetic, and chemical industries to agitate both viscous and non-viscous liquids. Numerical simulations were conducted using Computational Fluid Dynamics (CFD) software to obtain velocity profiles, streamlines, velocity contours, and the associated power number. The obtained results were compared with experimental data available in the literature, validating the accuracy of our numerical approach. The results clearly demonstrate that modifying the agitator shape has a significant impact on fluid motion. This modification generates an axial flow that enhances the efficiency of the fluid flow. The various velocity results convincingly reveal that the fluid is more uniformly agitated with this modification, resulting in improved circulation and a substantial reduction in stagnant zones.

Keywords: Newtonian fluids, numerical modeling, two-blade agitator, CFD

Procedia PDF Downloads 70
802 A Posteriori Trading-Inspired Model-Free Time Series Segmentation

Authors: Plessen Mogens Graf

Abstract:

Within the context of multivariate time series segmentation, this paper proposes a method inspired by a posteriori optimal trading. After a normalization step, the time series are treated channelwise as surrogate stock prices that can be traded optimally a posteriori in a virtual portfolio holding either stock or cash. Linear transaction costs are interpreted as hyperparameters for noise filtering. Trading signals, as well as trading signals obtained on the reversed time series, are used for unsupervised channelwise labeling, before a consensus over all channels is reached that determines the final segmentation time instants. The method is model-free in that no model prescriptions for the segments are made. Benefits of the proposed approach include simplicity, computational efficiency, and adaptability to a wide range of different shapes of time series. Performance is demonstrated on synthetic and real-world data, including a large-scale dataset comprising a multivariate time series of dimension 1000 and length 2709. The proposed method is compared to a popular model-based bottom-up approach fitting piecewise affine models and to a recent model-based top-down approach fitting Gaussian models, and is found to be consistently faster while producing more intuitive results in the sense of segmenting time series at peaks and valleys.
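
A minimal sketch of the idea is given below: each normalized channel is "traded" optimally a posteriori under a linear transaction cost with a small dynamic program, the resulting long/flat positions serve as channelwise labels, and a simplified majority-vote consensus marks the segmentation instants. The consensus rule, the omission of the reversed-time pass, and all numerical values are assumptions made for illustration rather than the paper's exact scheme.

```python
import numpy as np

def optimal_positions(prices, cost):
    """A posteriori optimal long/flat positions for one channel under linear transaction costs."""
    n = len(prices)
    cash = np.empty(n)   # best terminal wealth so far if flat at time t
    hold = np.empty(n)   # best terminal wealth so far if holding at time t
    cash[0], hold[0] = 0.0, -prices[0] - cost
    for t in range(1, n):
        cash[t] = max(cash[t - 1], hold[t - 1] + prices[t] - cost)
        hold[t] = max(hold[t - 1], cash[t - 1] - prices[t] - cost)
    # Backtrack the optimal state sequence (1 = holding, 0 = flat).
    pos = np.empty(n, dtype=int)
    state = 0 if cash[-1] >= hold[-1] else 1
    for t in range(n - 1, -1, -1):
        pos[t] = state
        if t == 0:
            break
        if state == 0:
            state = 0 if cash[t] == cash[t - 1] else 1
        else:
            state = 1 if hold[t] == hold[t - 1] else 0
    return pos

def segmentation_instants(series, cost=0.25):
    """Channelwise labelling via optimal trading, then a majority-vote consensus over channels."""
    z = (series - series.mean(axis=0)) / series.std(axis=0)       # normalisation step
    labels = np.stack([optimal_positions(col, cost) for col in z.T])
    consensus = (labels.mean(axis=0) >= 0.5).astype(int)          # simplified consensus rule
    return np.flatnonzero(np.diff(consensus)) + 1

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 0.2, (100, 3)), rng.normal(3.0, 0.2, (80, 3))])
print(segmentation_instants(data))   # indices where the consensus label switches
```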

Keywords: time series segmentation, model-free, trading-inspired, multivariate data

Procedia PDF Downloads 130
801 Modeling the Impact of Aquaculture in Wetland Ecosystems Using an Integrated Ecosystem Approach: Case Study of Setiu Wetlands, Malaysia

Authors: Roseliza Mat Alipiah, David Raffaelli, J. C. R. Smart

Abstract:

This research takes a new approach in that it integrates information from both the environmental and social sciences to inform effective management of the wetlands. A three-stage research framework was developed for modelling the drivers and pressures imposed on the wetlands and their impacts on the ecosystem and the local communities. Firstly, a Bayesian Belief Network (BBN) was used to predict the probability of anthropogenic activities affecting the delivery of different key wetland ecosystem services under different management scenarios. Secondly, Choice Experiments (CEs) were used to quantify the relative preferences which a key wetland stakeholder group (aquaculturists) held for the delivery of different levels of these key ecosystem services. Thirdly, a Multi-Criteria Decision Analysis (MCDA) was applied to produce an ordinal ranking of the alternative management scenarios, accounting for their impacts upon ecosystem service delivery as perceived through the preferences of the aquaculturists. This integrated ecosystem management approach was applied to a wetland ecosystem in Setiu, Terengganu, Malaysia, which currently supports a significant level of aquaculture activity. This research has produced clear guidelines to inform policy makers considering alternative wetland management scenarios: Intensive Aquaculture, Conservation or Ecotourism, in addition to the Status Quo. The findings of this research are as follows. The BBN revealed that current aquaculture activity is likely to have significant impacts on water column nutrient enrichment but trivial impacts on caged fish biomass, especially under the Intensive Aquaculture scenario. Secondly, the best-fitting CE models identified several stakeholder sub-groups among the aquaculturists, each with distinct sets of preferences for the delivery of key ecosystem services. Thirdly, the MCDA identified Conservation as the most desirable scenario overall, based on the ordinal ranking in the eyes of most of the stakeholder sub-groups. The Ecotourism and Status Quo scenarios were the next most preferred, and Intensive Aquaculture was the least desirable scenario. The methodologies developed through this research provide an opportunity for improving the planning and decision-making processes that aim to deliver sustainable management of wetland ecosystems in Malaysia.

Keywords: Bayesian belief network (BBN), choice experiments (CE), multi-criteria decision analysis (MCDA), aquaculture

Procedia PDF Downloads 288
801 Antidiabetic and ADMET Pharmacokinetic Properties of Grewia lasiocarpa E. Mey. Ex Harv. Stem Bark Extracts: An in Vitro and in Silico Study

Authors: Akwu N. A., Naidoo Y., Salau V. F., Olofinsan K. A.

Abstract:

Grewia lasiocarpa E. Mey. ex Harv. (Malvaceae) is a Southern African medicinal plant indigenously used with other plants for birthing problems. The anti-diabetic properties of the hexane, chloroform, and methanol extracts of Grewia lasiocarpa stem bark were assessed using an in vitro α-glucosidase enzyme inhibition assay. The predictive in silico drug-likeness and toxicity properties of the phytocompounds were assessed using the pKCSM, ADMETlab, and SwissADME computer-aided online tools. The highest α-glucosidase percentage inhibition was observed in the hexane extract (86.76%, IC50 = 0.24 mg/mL), followed by chloroform (63.08%, IC50 = 4.87 mg/mL) and methanol (53.22%, IC50 = 9.41 mg/mL), while acarbose, the standard anti-diabetic drug, gave 84.54% (IC50 = 1.96 mg/mL). The α-glucosidase assay revealed that the hexane extract exhibited the strongest carbohydrate-inhibiting capacity and is a better inhibitor than the standard reference drug acarbose. The computational studies also affirm the results observed in the in vitro α-glucosidase assay. Thus, the extracts of G. lasiocarpa may be considered a potential plant source of compounds for treating type 2 diabetes mellitus. This is the first study on the anti-diabetic properties of Grewia lasiocarpa hexane, chloroform, and methanol extracts using in vitro and in silico models.

Keywords: grewia lasiocarpa, α-glucosidase inhibition, anti-diabetes, ADMET

Procedia PDF Downloads 100
799 Investigating Effects of Vehicle Speed and Road PSDs on Response of a 35-Ton Heavy Commercial Vehicle (HCV) Using Mathematical Modelling

Authors: Amal G. Kurian

Abstract:

The use of mathematical modeling has seen a considerable boost in recent times with the development of many advanced algorithms and modeling capabilities. The advantages this method has over other methods are that it stays much closer to standard physics theories and thus represents a better theoretical model, it takes less solving time, and it allows various parameters to be changed for optimization, which is a big advantage, especially in the automotive industry. This thesis work focuses on a thorough investigation of the effects of vehicle speed and road roughness on the ride and structural dynamic responses of a heavy commercial vehicle. Since commercial vehicles are kept in operation continuously for long periods of time, it is important to study the effects of various physical conditions on the vehicle and its user. For this purpose, various experimental as well as simulation methodologies are adopted, ranging from experimental transfer path analysis to various road scenario simulations. To effectively investigate and eliminate several causes of unwanted responses, an efficient and robust technique is needed. Carrying forward this motivation, the present work focuses on the development of a mathematical model of a 4-axle heavy commercial vehicle (HCV) capable of calculating the responses of the vehicle for different road PSD inputs and vehicle speeds. Outputs from the model include response transfer functions, response PSDs, and the wheel forces experienced. A MATLAB code will be developed to implement the objectives in a robust and flexible manner, which can be exploited further in a study of responses due to various suspension parameters, loading conditions, and vehicle dimensions. The thesis work resulted in quantifying the effect of various physical conditions on the ride comfort of the vehicle. An increase in discomfort is seen with increasing velocity; the road profile also has a considerable effect on driver comfort. Details of the dominant modes at each frequency are analysed and reported in the work. The reduction in ride height, i.e., the deflection of the tire and suspension with loading, along with the load on each axle, is analysed, and it is seen that the front axle supports a greater portion of the vehicle weight, while more of the payload weight comes on the fourth and third axles. The deflection of the vehicle is seen to be well inside acceptable limits.
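
The transfer-function-times-road-PSD step described above can be illustrated with a single degree-of-freedom quarter-car simplification (the thesis model has far more degrees of freedom). The mass, stiffness, damping, frequency range and ISO 8608-style roughness coefficient below are illustrative assumptions, not the vehicle's actual parameters.

```python
import numpy as np

# Simplified single-DOF quarter-car sketch: sprung mass m on suspension stiffness k
# and damping c, excited by the road displacement. Values are illustrative only.
m, k, c = 4500.0, 4.0e5, 3.0e4          # kg, N/m, N s/m

def transmissibility(f):
    """|H(f)| from road displacement to sprung-mass displacement."""
    w = 2 * np.pi * f
    return np.abs((k + 1j * w * c) / (k - m * w**2 + 1j * w * c))

def road_psd(f, v, Gd_n0=64e-6, n0=0.1):
    """ISO 8608-style road displacement PSD (m^2/Hz) at vehicle speed v (m/s)."""
    return Gd_n0 * n0**2 * v / f**2

f = np.linspace(0.5, 30.0, 2000)         # Hz
df = f[1] - f[0]
for v_kmh in (40.0, 60.0, 80.0):
    v = v_kmh / 3.6
    S_resp = transmissibility(f)**2 * road_psd(f, v)   # response displacement PSD
    rms = np.sqrt(np.sum(S_resp) * df)                 # RMS from the area under the PSD
    print(f"{v_kmh:.0f} km/h: sprung-mass displacement RMS ~ {rms * 1000:.1f} mm")
```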

Keywords: mathematical modeling, HCV, suspension, ride analysis

Procedia PDF Downloads 248
798 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with greater complexity and accuracy. However, this precision and complexity mean that applications need more computational power in order to be executed faster. In this sense, multicore environments play an important role in improving and optimizing the execution time of these applications, since they allow the inclusion of more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, and data dependencies in the model. These issues become more important when we wish to improve the application’s performance and scalability. Hence, this paper describes an optimization method developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool developed by the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are done in an automatic and transparent manner with the aim of improving the performance metrics of our tool. Finally, experimental evaluations show the effectiveness of our new optimized version, in which we have achieved a considerable improvement in execution time: the time has been reduced by around 96% for the best case tested, between the original serial version and the automatic parallel version.

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 364
797 Off-Policy Q-learning Technique for Intrusion Response in Network Security

Authors: Zheni S. Stefanova, Kandethody M. Ramachandran

Abstract:

With our increasing dependency on computer devices, we face the necessity of adequate, efficient and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: 1) detecting an attack by analyzing the incoming traffic and inspecting the network (intrusion detection), and 2) producing a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that will detect a breach in the system in time, and it is also challenging to make it provide an automatic response, with an acceptable delay, at every single stage of the monitoring process. We cannot afford to adopt security measures that demand high computational power, and we are not able to accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism that is based on artificial intelligence, and more precisely, reinforcement learning techniques (RLT). The RLT helps us to create a decision agent, which controls the process of interacting with the undetermined environment. The goal is to find an optimal policy, which will represent the intrusion response; the reinforcement learning problem is therefore solved using a Q-learning approach. Our agent produces an optimal immediate response in the process of evaluating the network traffic. This Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning and strategic artificial intelligence response mechanism for IDS.
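
A toy sketch of the off-policy Q-learning update underlying such an agent is shown below: states are coarse traffic alert levels, actions are candidate responses, and the tabular update bootstraps on the greedy value of the next state. The state and action sets, reward table, environment dynamics and hyperparameters are all stand-in assumptions, not the paper's IDS formulation.

```python
import numpy as np

# Toy response agent: states are traffic alert levels, actions are candidate responses.
states = ["normal", "suspicious", "attack"]
actions = ["allow", "rate_limit", "block"]
rewards = np.array([[ 1.0, -0.5, -2.0],     # normal: blocking legitimate traffic is costly
                    [-0.5,  1.0, -0.2],     # suspicious: rate limiting is preferred
                    [-3.0, -1.0,  2.0]])    # attack: blocking is the right response

rng = np.random.default_rng(0)
Q = np.zeros((len(states), len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1           # learning rate, discount, exploration rate

s = 0
for step in range(20000):
    # Epsilon-greedy behaviour policy (exploration vs. exploitation).
    a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
    r = rewards[s, a] + rng.normal(0, 0.1)          # noisy observed reward
    s_next = rng.integers(len(states))              # next traffic state (stand-in dynamics)
    # Off-policy Q-learning update: bootstrap on the greedy value of the next state.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])
    s = s_next

policy = {states[i]: actions[int(np.argmax(Q[i]))] for i in range(len(states))}
print(policy)   # expected: attack -> block, suspicious -> rate_limit, normal -> allow
```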

Keywords: cyber security, intrusion prevention, optimal policy, Q-learning

Procedia PDF Downloads 231
796 Recycling Waste Product for Metal Removal from Water

Authors: Saidur R. Chowdhury, Mamme K. Addai, Ernest K. Yanful

Abstract:

The research was performed to assess the potential of nickel smelter slag, an industrial waste, as an adsorbent for the removal of metals from aqueous solution. An investigation was carried out for arsenic (As), copper (Cu), lead (Pb) and cadmium (Cd) adsorption from aqueous solution. The smelter slag was obtained from Ni ore at the Vale Inco Ni smelter in Sudbury, Ontario, Canada. Batch experimental studies were conducted to evaluate the removal efficiencies of the smelter slag. The slag was characterized by surface analytical techniques and contained different iron oxides and iron-silicate-bearing compounds. In this study, the effects of pH, contact time, particle size, competition by other ions, slag dose and distribution coefficient were evaluated to determine the optimum adsorption conditions of the slag as an adsorbent for As, Cu, Pb and Cd. The results showed 95-99% removal of As, Cu and Pb, and almost 50-60% removal of Cd, when the batch experiments were conducted at an initial metal concentration of 5-10 mg/L, a slag dose of 10 g/L, a contact time of 10 hours, a shaking speed of 170 rpm and 25 °C. The maximum removal of As, Cu and Pb was achieved at pH 5, while the maximum removal of Cd was found above pH 7. A column experiment was also conducted to evaluate adsorption depth and service time for metal removal. This study also determined the adsorption capacity, adsorption rate and mass transfer rate. The maximum adsorption capacity was found to be 3.84 mg/g for As, 4 mg/g for Pb, and 3.86 mg/g for Cu. The adsorption capacity of the nickel slag for the four test metals was in the decreasing order Pb > Cu > As > Cd. Modelling of the experimental data with Visual MINTEQ revealed saturation indices of < 0 in all cases, suggesting that the metals at these pHs were undersaturated and thus in their aqueous forms. This confirms the absence of precipitation in the removal of these metals at these pHs. The experimental results also showed that Fe and Ni leaching from the slag during the adsorption process was very minimal, ranging from 0.01 to 0.022 mg/L, indicating the slag's potential as an adsorbent in the treatment industry. The study also revealed that the waste product (Ni smelter slag) can be reused about five times before disposal in a landfill or use as a stabilization material. It also highlighted recycled slags as a potential reactive adsorbent in the field of remediation engineering and explored the benefits of using renewable waste products for the water treatment industry.
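
Batch uptake and maximum adsorption capacity figures like those quoted above are commonly derived from equilibrium data; the sketch below computes per-gram uptake and percentage removal from initial and equilibrium concentrations and fits a Langmuir isotherm for a maximum capacity. The concentrations, volume, dose, and the choice of the Langmuir model are illustrative assumptions, not the study's reported procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, q_max, K_L):
    """Langmuir isotherm: equilibrium uptake as a function of equilibrium concentration."""
    return q_max * K_L * Ce / (1 + K_L * Ce)

# Illustrative batch data: initial/equilibrium concentrations (mg/L), volume (L), slag dose (g).
C0 = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
Ce = np.array([0.05, 0.2, 0.9, 4.0, 15.0])
V, m = 0.1, 1.0

qe = (C0 - Ce) * V / m                 # uptake per gram of slag (mg/g)
removal = 100 * (C0 - Ce) / C0         # percentage removal

(q_max, K_L), _ = curve_fit(langmuir, Ce, qe, p0=[3.0, 1.0])
print(f"removal %: {np.round(removal, 1)}")
print(f"Langmuir q_max = {q_max:.2f} mg/g, K_L = {K_L:.2f} L/mg")
```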

Keywords: adsorption, industrial waste, recycling, slag, treatment

Procedia PDF Downloads 140
795 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on the issue of distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticultural crops, seasonal production, and the paucity of post-harvest produce management links. Distress sale, from a farmer's perspective, may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually characterized by unfavorable conditions for the seller (farmer). Small and marginal farmers, often involved in subsistence farming, stand to lose substantially if they receive prices lower than expected (expectations typically framed in relation to the cost of production). Distress sale maximizes the price uncertainty of produce, leading to substantial income loss; and with increasing input costs of farming, the high variability in harvest price severely affects farmers' profit margins, thereby affecting their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to a set of factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, asking for information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale would then be modelled as a function of farm, household and institutional characteristics. A Heckman two-stage model would be applied to find the probability of a farmer falling into distress sale as well as to ascertain how the extent of distress sale varies with the presence or absence of various factors. The findings of the study would recommend suitable interventions and the promotion of strategies that would help farmers better manage price uncertainties, avoid distress sale and increase profit margins, with direct implications for poverty.
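
A hedged sketch of the Heckman two-stage estimation mentioned above is given below: a probit selection equation for whether a distress sale occurs, followed by an OLS outcome equation on the distress-sale subsample augmented with the inverse Mills ratio. The covariates, coefficients and simulated data are purely illustrative; the study's actual survey variables and specification may differ.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1000

# Illustrative covariates (stand-ins for the survey variables): constant,
# cold-storage access (0/1), distance to market (km), farm size (ha).
X = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),
                     rng.normal(20, 8, n),
                     rng.normal(1.5, 0.6, n)])

# Simulated data-generating process with correlated errors between the selection
# equation (distress sale occurs) and the outcome (price discount below cost).
u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n)
select = (0.3 - 0.8 * X[:, 1] + 0.05 * X[:, 2] + u[:, 0] > 0)        # distress sale yes/no
discount = 10 + 0.4 * X[:, 2] - 3.0 * X[:, 3] + 2.0 * u[:, 1]        # % below production cost
discount[~select] = np.nan                                           # observed only if distress sale

# Stage 1: probit for the probability of a distress sale.
probit = sm.Probit(select.astype(float), X).fit(disp=False)
xb = probit.fittedvalues
imr = norm.pdf(xb) / norm.cdf(xb)                                    # inverse Mills ratio

# Stage 2: OLS on the distress-sale subsample, augmented with the inverse Mills ratio.
mask = select
X2 = np.column_stack([X[mask][:, [0, 2, 3]], imr[mask]])
ols = sm.OLS(discount[mask], X2).fit()
print(ols.params)    # last coefficient (on the IMR) captures the selection correction
```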

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 234
794 Power Allocation Algorithm for Orthogonal Frequency Division Multiplexing Based Cognitive Radio Networks

Authors: Bircan Demiral

Abstract:

Cognitive radio (CR) is a promising technology that addresses the spectrum scarcity problem for future wireless communications. Orthogonal Frequency Division Multiplexing (OFDM) technology provides more power band ratios for cognitive radio networks (CRNs). While CR is a solution to spectrum scarcity, it also brings up the capacity problem. In this paper, a novel power allocation algorithm that aims at maximizing the sum capacity in OFDM-based cognitive radio networks is proposed. The proposed allocation algorithm is based on the previously developed water-filling algorithm. To reduce the computational complexity of the water-filling algorithm, the proposed algorithm allocates the total power on a per-subcarrier basis. The power allocated to the subcarriers increases the sum capacity. To observe this increase, a MATLAB program was used, and the proposed power allocation was compared with the average power allocation, water-filling, and general power allocation algorithms. The water-filling algorithm performed worse than the proposed algorithm, while it performed better than the other two algorithms. The proposed algorithm is better than the other algorithms in terms of capacity increase. In addition, the effect of a change in the number of subcarriers on capacity was discussed; simulation results show that increasing the number of subcarriers increases the capacity.
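
For reference, the classical water-filling baseline that the proposed algorithm builds on can be sketched as below: power is poured onto subcarriers in order of channel quality until a common water level exhausts the power budget, and the resulting sum capacity is compared with an equal (average) allocation. The channel gains, noise level and power budget are illustrative, and the sketch omits the CRN-specific interference constraints considered in the paper.

```python
import numpy as np

def water_filling(gains, total_power, noise=1.0):
    """Classical water-filling: allocate power to subcarriers by channel quality."""
    inv_snr = noise / gains                       # per-subcarrier noise-to-gain ratio
    order = np.argsort(inv_snr)
    inv_sorted = inv_snr[order]
    power = np.zeros_like(gains)
    for k in range(len(gains), 0, -1):            # try using the k best subcarriers
        level = (total_power + inv_sorted[:k].sum()) / k          # candidate water level
        if level > inv_sorted[k - 1]:             # feasible: all k subcarriers get power > 0
            power[order[:k]] = level - inv_sorted[:k]
            break
    return power

def sum_capacity(power, gains, noise=1.0):
    return np.sum(np.log2(1.0 + power * gains / noise))

rng = np.random.default_rng(0)
gains = rng.exponential(1.0, size=16)             # per-subcarrier channel power gains
P = 10.0

p_wf = water_filling(gains, P)
p_eq = np.full_like(gains, P / len(gains))        # equal (average) power allocation
print(f"water-filling capacity: {sum_capacity(p_wf, gains):.2f} bit/s/Hz")
print(f"equal allocation      : {sum_capacity(p_eq, gains):.2f} bit/s/Hz")
```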

Keywords: cognitive radio network, OFDM, power allocation, water filling

Procedia PDF Downloads 132