Search results for: analytical network design model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30352

27952 Cooperative Sensing for Wireless Sensor Networks

Authors: Julien Romieux, Fabio Verdicchio

Abstract:

Wireless Sensor Networks (WSNs), which sense environmental data with battery-powered nodes, require multi-hop communication. This power-demanding task adds an extra workload that is unfairly distributed across the network. As a result, nodes run out of battery at different times, which requires an impractical individual node maintenance scheme. We therefore investigate a new Cooperative Sensing approach that extends the WSN operational life and allows a more practical network maintenance scheme (where all nodes deplete their batteries at almost the same time). We propose a novel cooperative algorithm that derives a piecewise representation of the sensed signal while controlling approximation accuracy. Simulations show that our algorithm increases WSN operational life and spreads the communication workload evenly. The results convey a counterintuitive conclusion: distributing the workload fairly amongst nodes may not decrease the overall network power consumption, yet it still extends the WSN operational life. This is achieved because our cooperative approach decreases the workload of the most burdened cluster in the network.
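
As a rough illustration of the kind of piecewise signal representation with a controlled approximation error described above, the following sketch greedily grows piecewise-constant segments; the greedy rule, the error bound, and the synthetic sensor trace are assumptions, not the authors' algorithm.

```python
# Minimal sketch: greedy piecewise-constant approximation of a sensed signal
# with a user-set accuracy bound. Illustrative only; not the authors' algorithm.
import numpy as np

def piecewise_segments(signal, max_error):
    """Grow segments greedily; close a segment when the approximation error
    (max deviation from the segment mean) would exceed max_error."""
    segments = []                     # list of (start_index, end_index, mean_value)
    start = 0
    for end in range(1, len(signal) + 1):
        chunk = signal[start:end]
        if np.max(np.abs(chunk - chunk.mean())) > max_error:
            prev = signal[start:end - 1]          # close the previous segment
            segments.append((start, end - 1, float(prev.mean())))
            start = end - 1
    segments.append((start, len(signal), float(signal[start:].mean())))
    return segments

t = np.linspace(0, 10, 500)
reading = 20 + 2 * np.sin(t) + 0.05 * np.random.randn(t.size)   # synthetic sensor data
print(len(piecewise_segments(reading, max_error=0.5)), "segments instead of 500 samples")
```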

Keywords: cooperative signal processing, signal representation and approximation, power management, wireless sensor networks

Procedia PDF Downloads 390
27951 Design and Manufacture of a Hybrid Gearbox Reducer System

Authors: Ahmed Mozamel, Kemal Yildizli

Abstract:

Due to mechanical energy losses and the drive to minimize these losses and increase machine efficiency, the need for contactless gearing systems has risen. In this work, one stage of a mechanical planetary gear transmission system is integrated with one stage of a magnetic planetary gear system and designed as a two-stage hybrid gearbox system. The internal energy of the permanent magnets, in the form of a magnetic field, is used to create meshing between contactless magnetic rotors in order to provide the system with self-protection against overloading and to decrease the mechanical loss of the transmission system by eliminating friction losses. Classical methods, such as the analytical and tabular methods and the theory of elasticity, are used to calculate the planetary gear design parameters. The finite element method (ANSYS Maxwell) is used to predict the behavior of the magnetic gearing system. The concentric magnetic gearing system has been modeled and analyzed using the 2D finite element method (ANSYS Maxwell). In addition, the design and manufacturing processes of the prototype components of the gearbox system (a planetary gear, a concentric magnetic gear, shafts, and the bearing selection) are investigated. The output force, output moment, output power, and efficiency of the hybrid gearbox system are experimentally evaluated. The viability of applying a magnetic force to transmit mechanical power through a non-contact gearing system is presented. The experimental test results show that the system is capable of operating continuously within the speed range of 400 rpm to 3000 rpm with a reduction ratio of 2:1 and a maximum efficiency of 91%.

Keywords: hybrid gearbox, mechanical gearboxes, magnetic gears, magnetic torque

Procedia PDF Downloads 152
27950 3-Dimensional Contamination Conceptual Site Model: A Case Study Illustrating the Multiple Applications of Developing and Maintaining a 3D Contamination Model during an Active Remediation Project on a Former Urban Gasworks Site

Authors: Duncan Fraser

Abstract:

A 3-Dimensional (3D) conceptual site model was developed using the Leapfrog Works® platform, utilising a comprehensive historical dataset for a large former gasworks site in Fitzroy, Melbourne. The gasworks had been constructed across two fractured geological units with varying hydraulic conductivities. A Newer Volcanic (basaltic) outcrop covered approximately half of the site, overlying a fractured Melbourne Formation (siltstone) bedrock outcropping over the remaining portion. During the investigative phase of works, a dense non-aqueous phase liquid (DNAPL) plume (coal tar) was identified within both geological units in the subsurface, originating from multiple sources, including gasholders, tar wells, condensers, and leaking pipework. The first stage of model development was undertaken to determine the horizontal and vertical extents of the coal tar in the subsurface and assess the potential causal links between contamination sources, plume location, and site geology. Concentrations of key contaminants of interest (COIs) were also interpolated within Leapfrog to refine the distribution of contaminated soils. The model was subsequently used to develop a robust soil remediation strategy and achieve endorsement from an Environmental Auditor. A change in project scope, following the removal and validation of the three former gasholders, necessitated the additional excavation of a significant volume of residual contaminated rock to allow for the future construction of two-story underground basements. To assess the financial liabilities associated with the offsite disposal or thermal treatment of material, the 3D model was updated with three years of additional analytical data from the active remediation phase of works. Chemical concentrations and the residual tar plume within the rock fractures were modelled to pre-classify the in-situ material and enhance separation strategies, preventing unnecessary treatment of material and reducing costs.

Keywords: 3D model, contaminated land, Leapfrog, remediation

Procedia PDF Downloads 133
27949 A Network-Theoretical Perspective on Music Analysis

Authors: Alberto Alcalá-Alvarez, Pablo Padilla-Longoria

Abstract:

The present paper describes a framework for constructing mathematical networks that encode relevant musical information from a music score for structural analysis. These graphs encompass statistical information about musical elements such as notes, chords, rhythms, and intervals, and the relations among them, and so become helpful in visualizing and understanding important stylistic features of a music fragment. In order to build such networks, musical data is parsed out of a digital symbolic music file. This data undergoes different analytical procedures from graph theory, such as measuring the centrality of nodes, community detection, and entropy calculation. The resulting networks reflect important structural characteristics of the fragment in question: predominant elements, connectivity between them, and the complexity of the information contained in it. Music pieces in different styles are analyzed, and the results are contrasted with the outcome of traditional analysis in order to show the consistency and potential utility of this method for music analysis.
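
A minimal sketch of the graph construction and measures mentioned above (centrality, community detection, entropy), assuming a toy note-transition network built with networkx; the melody and weighting scheme are invented for illustration.

```python
# Sketch: a note-transition network for a toy melody, analysed with the graph
# measures named in the abstract. Melody and edge weighting are assumptions.
import math
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

melody = ["C", "E", "G", "E", "C", "D", "E", "G", "C", "G", "E", "C"]

G = nx.DiGraph()
for a, b in zip(melody, melody[1:]):            # consecutive notes become weighted edges
    w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
    G.add_edge(a, b, weight=w)

centrality = nx.degree_centrality(G)            # predominant elements
communities = greedy_modularity_communities(G.to_undirected())

# Shannon entropy of the note distribution as a simple complexity measure
counts = {n: melody.count(n) for n in set(melody)}
total = sum(counts.values())
entropy = -sum(c / total * math.log2(c / total) for c in counts.values())

print(centrality, list(communities), round(entropy, 3))
```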

Keywords: computational musicology, mathematical music modelling, music analysis, style classification

Procedia PDF Downloads 102
27948 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending

Authors: Yang Zheng, Wei Sun

Abstract:

This paper describes a new approach which can be used to interpret the experimental creep deformation data obtained from miniaturized thin plate bending specimen tests in terms of the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. Firstly, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates for a Norton's power law under plane stress (b → 0) and plane strain (b → ∞) conditions were obtained, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since analytical solutions are not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain the solutions. Based on the FE results obtained for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane stress and plane strain conditions, approximate numerical solutions for the deformation rate are obtained by curve fitting. Using these solutions, a reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed by using a set of “theoretical” experimental data.
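
The curve-fitting step described above can be pictured with a short sketch; the blending form between the plane-stress and plane-strain limits and the stand-in FE data points are assumptions, not the paper's expressions.

```python
# Sketch: fit an interpolation between the plane-stress and plane-strain
# load-line deformation-rate solutions as a function of aspect ratio b/l.
# The "FE" data points and the exponential blending form are assumptions.
import numpy as np
from scipy.optimize import curve_fit

rate_plane_stress, rate_plane_strain = 1.0, 0.35   # normalised limit solutions (b->0, b->inf)

def blended_rate(b_over_l, k):
    """Smooth transition from the plane-stress to the plane-strain limit."""
    w = np.exp(-k * b_over_l)
    return w * rate_plane_stress + (1 - w) * rate_plane_strain

b_over_l = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
fe_rates = np.array([0.93, 0.82, 0.68, 0.52, 0.42, 0.37])   # stand-ins for FE results

(k_fit,), _ = curve_fit(blended_rate, b_over_l, fe_rates, p0=[1.0])
print("fitted k =", round(k_fit, 3),
      "predicted rate at b/l = 0.75:", round(float(blended_rate(0.75, k_fit)), 3))
```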

Keywords: bending, creep, thin plate, materials engineering

Procedia PDF Downloads 474
27947 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach

Authors: Kristina Pflug, Markus Busch

Abstract:

Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be excellent, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time. In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analyses for systematically varied process conditions are easily feasible. The developed multi-scale modelling approach thus makes it possible to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.

Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology

Procedia PDF Downloads 124
27946 Sentiment Analysis of Fake Health News Using Naive Bayes Classification Models

Authors: Danielle Shackley, Yetunde Folajimi

Abstract:

As more people turn to the internet seeking health-related information, there is a greater risk of finding false, inaccurate, or dangerous information. Sentiment analysis is a natural language processing technique that assigns polarity scores to text, ranging from negative through neutral to positive. In this research, we evaluate the weight of a sentiment analysis feature added to fake health news classification models. The dataset consists of existing, reliably labeled health article headlines, supplemented with health information about COVID-19 collected from social media sources. We started with data preprocessing and tested various vectorization methods, such as Count and TF-IDF vectorization. We implemented three Naive Bayes classifier models: Bernoulli, Multinomial, and Complement. To test the weight of the sentiment analysis feature on the dataset, we created benchmark Naive Bayes classification models without sentiment analysis and then reproduced the same models with the feature added. We evaluated the models using precision and accuracy scores. The initial Bernoulli model performed with 90% precision and 75.2% accuracy, while the model supplemented with sentiment labels performed with 90.4% precision and the accuracy stayed constant at 75.2%. Our results show that the addition of sentiment analysis did not improve model precision by a wide margin; while there was no evidence of improvement in accuracy, the Complement model showed a 1.9% improvement in precision. Future expansion of this work could include replicating the experiment and substituting a deep learning neural network model for Naive Bayes.
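
A minimal sketch of the described setup, appending a sentiment score to TF-IDF features ahead of a Complement Naive Bayes classifier; the toy headlines, labels, and the lexicon-based scorer are placeholders for the real dataset and sentiment tool used in the study.

```python
# Sketch: TF-IDF text features plus one sentiment column for a Complement
# Naive Bayes fake-health-news classifier. Data and scorer are stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import ComplementNB
from scipy.sparse import hstack, csr_matrix

headlines = ["Miracle cure eliminates COVID-19 overnight",
             "CDC updates guidance on booster vaccinations",
             "Doctors warn against unproven herbal treatment",
             "New study confirms vaccine safety in large trial"]
labels = [1, 0, 0, 0]                        # 1 = fake, 0 = reliable (toy labels)

def sentiment_feature(text):
    """Placeholder lexicon-based polarity, shifted to [0, 1] so Naive Bayes
    (which needs non-negative features) accepts it."""
    positive = {"cure", "miracle", "safety", "confirms"}
    negative = {"warn", "unproven", "eliminates"}
    words = text.lower().split()
    polarity = (sum(w in positive for w in words) - sum(w in negative for w in words)) / len(words)
    return 0.5 + 0.5 * polarity

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(headlines)                        # TF-IDF vectorization
X_sent = csr_matrix([[sentiment_feature(h)] for h in headlines])
X_with_sentiment = hstack([X_text, X_sent])                    # text features + sentiment column

model = ComplementNB().fit(X_with_sentiment, labels)
print(model.predict(X_with_sentiment))
```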

Keywords: sentiment analysis, Naive Bayes model, natural language processing, topic analysis, fake health news classification model

Procedia PDF Downloads 97
27945 Design and Development of an Algorithm for Prioritizing Test Cases Using a Neural Network as Classifier

Authors: Amit Verma, Simranjeet Kaur, Sandeep Kaur

Abstract:

Test Case Prioritization (TCP) has gained widespread acceptance as it often results in good quality software free from defects. Due to the increase in the rate of faults in software, traditional techniques for prioritization result in increased cost and time. The main challenge in TCP is the difficulty of manually validating the priorities of different test cases due to the large size of test suites, and little emphasis has been placed on automating the TCP process. The objective of this paper is to detect the priorities of different test cases using an artificial neural network, which helps to predict the correct priorities with the help of the backpropagation algorithm. In our proposed work, one such method is implemented in which priorities are assigned to different test cases based on their frequency. After the priorities are assigned, the ANN predicts whether the correct priority has been assigned to each test case and generates an interrupt when a wrong priority is assigned. Classifiers are used to classify test cases of different priorities. The proposed algorithm is very effective as it reduces complexity with robust efficiency and automates the process of prioritizing test cases.
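
A minimal sketch of the described idea, using TF-IDF features and a small feed-forward network (trained by backpropagation) to predict priority classes and flag mismatches; the test-case descriptions and priority labels are invented for illustration.

```python
# Sketch: predict test-case priority classes from TF-IDF features with a small
# feed-forward neural network, flagging cases whose predicted priority differs
# from the assigned one. All data below is illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

test_cases = ["login with valid credentials",
              "login with invalid password",
              "export report to pdf",
              "update user profile picture",
              "checkout cart with expired card",
              "checkout cart with valid card"]
priorities = ["high", "high", "low", "low", "high", "medium"]   # assigned by usage frequency

X = TfidfVectorizer().fit_transform(test_cases)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, priorities)

for case, want, got in zip(test_cases, priorities, clf.predict(X)):
    flag = "" if want == got else "  <-- flag for review (priority mismatch)"
    print(f"{case}: {got}{flag}")
```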

Keywords: test case prioritization, classification, artificial neural networks, TF-IDF

Procedia PDF Downloads 397
27944 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors like mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples; learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torch vision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
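
A minimal sketch of the reconstruction-error detection principle, assuming a small fully connected autoencoder in PyTorch; the layer sizes, training loop, and thresholding rule are illustrative choices, and the paper's multimodal spectral/RGB front-end is not reproduced.

```python
# Sketch: flag inputs whose autoencoder reconstruction MSE is far above the
# benign-data error level. Architecture and threshold rule are assumptions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=3 * 32 * 32, latent=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(),
                                     nn.Linear(512, latent), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(latent, 512), nn.ReLU(),
                                     nn.Linear(512, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

benign = torch.rand(256, 3 * 32 * 32)            # stand-in for flattened benign images
model, loss_fn = AutoEncoder(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(50):                              # short training loop on benign data only
    opt.zero_grad()
    loss = loss_fn(model(benign), benign)
    loss.backward()
    opt.step()

with torch.no_grad():
    errors = ((model(benign) - benign) ** 2).mean(dim=1)
    threshold = errors.mean() + 3 * errors.std() # flag anything far above benign errors
    suspect = torch.rand(4, 3 * 32 * 32)         # stand-in for possibly adversarial inputs
    flags = ((model(suspect) - suspect) ** 2).mean(dim=1) > threshold
    print(flags)
```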

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 112
27943 Network Mobility Support in Content-Centric Internet

Authors: Zhiwei Yan, Jong-Hyouk Lee, Yong-Jin Park, Xiaodong Lee

Abstract:

In this paper, we analyze NEtwork MObility (NEMO) support problems in Content-Centric Networking (CCN) and propose CCN-NEMO, which can well support the deployment of the content-centric paradigm in the large-scale mobile Internet. CCN-NEMO extends the signaling messages of the basic CCN protocol to support mobility discovery and fast triggering of Interest re-issuing during network mobility. In addition, the Mobile Router (MR) is extended to optimize content searching and relaying in the local subnet. These features can be employed by the nested NEMO to maximize the advantages of content retrieval with CCN. Based on this analysis, we compare the handover latency performance of the basic CCN and our proposed CCN-NEMO. The results show that our scheme can facilitate content retrieval in the NEMO scenario with improved performance.

Keywords: NEMO, CCN, mobility, handover latency

Procedia PDF Downloads 470
27942 An Exact Algorithm for Location–Transportation Problems in Humanitarian Relief

Authors: Chansiri Singhtaun

Abstract:

This paper proposes a mathematical model and examines the performance of an exact algorithm for a location–transportation problem in humanitarian relief. The model determines the number and location of distribution centers in a relief network, the amount of relief supplies to be stocked at each distribution center, and the vehicles used to take the supplies to meet the needs of disaster victims, under capacity restrictions and transportation and budgetary constraints. Computational experiments are conducted on generated problems of various sizes. A branch-and-bound algorithm is applied to these problems. The results show that this algorithm can solve problem sizes of up to three candidate locations with five demand points, and one candidate location with up to twenty demand points, without premature termination.
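
A minimal sketch of a location-transportation model of this kind, written with the PuLP modelling library (assumed available); the candidate sites, demands, capacities, and costs are invented toy data, and the solver's own branch-and-bound is used rather than a hand-written one.

```python
# Sketch: small facility-location / transportation MILP for relief supplies,
# with opening, capacity, demand-coverage and budget constraints.
import pulp

sites = ["S1", "S2", "S3"]                  # candidate distribution centres
demand_pts = ["D1", "D2", "D3", "D4", "D5"]
open_cost = {"S1": 100, "S2": 120, "S3": 90}
capacity = {"S1": 60, "S2": 80, "S3": 50}
demand = {"D1": 20, "D2": 25, "D3": 15, "D4": 30, "D5": 10}
ship_cost = {(s, d): 2 + (i + j) % 4 for i, s in enumerate(sites) for j, d in enumerate(demand_pts)}
budget = 300

prob = pulp.LpProblem("relief_location_transport", pulp.LpMinimize)
y = pulp.LpVariable.dicts("open", sites, cat="Binary")
x = pulp.LpVariable.dicts("ship", [(s, d) for s in sites for d in demand_pts], lowBound=0)

prob += pulp.lpSum(open_cost[s] * y[s] for s in sites) + \
        pulp.lpSum(ship_cost[s, d] * x[s, d] for s in sites for d in demand_pts)

for d in demand_pts:                                    # every demand point is served
    prob += pulp.lpSum(x[s, d] for s in sites) >= demand[d]
for s in sites:                                         # capacity only if centre is opened
    prob += pulp.lpSum(x[s, d] for d in demand_pts) <= capacity[s] * y[s]
prob += pulp.lpSum(open_cost[s] * y[s] for s in sites) <= budget   # budgetary constraint

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([s for s in sites if y[s].value() > 0.5], pulp.value(prob.objective))
```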

Keywords: disaster response, facility location, humanitarian relief, transportation

Procedia PDF Downloads 451
27941 Optimization of a Hybrid Off-Grid Energy Station

Authors: Yehya Abdellatif, Iyad M. Muslih, Azzah Alkhalailah, Abdallah Muslih

Abstract:

Hybrid Optimization Model for Electric Renewables (HOMER) software was utilized to find the optimum design of a hybrid off-grid system by choosing the optimal solution based on the cost analysis of energy for different capacity shortage percentages. A complete study of the site conditions and load profile was carried out to optimize the design and implementation of a hybrid off-grid power station. In addition, the solution takes into consideration the effect of ambient temperature on power generation efficiency and the economic aspects of selection based on real market prices. From the analysis of the HOMER model results, the optimum hybrid power station was suggested based on wind speed and solar conditions. The objective of the optimization is to minimize the Net Present Cost (NPC) and the Cost of Energy (COE) with zero and 10 percent capacity shortage.
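
For orientation, the sketch below shows the standard relations HOMER-style tools use to rank designs, computing NPC and COE from an annualized cost via the capital recovery factor; all numbers are illustrative.

```python
# Sketch: Net Present Cost (NPC) and Cost of Energy (COE) from the total
# annualized cost via the capital recovery factor. Numbers are illustrative.
def crf(i, n):
    """Capital recovery factor for real interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

annualized_cost = 18_500.0      # USD/yr: capital + replacement + O&M + fuel, annualized
energy_served = 42_000.0        # kWh/yr delivered to the load
interest, lifetime = 0.06, 25   # real discount rate and project lifetime (years)

npc = annualized_cost / crf(interest, lifetime)
coe = annualized_cost / energy_served

print(f"NPC = {npc:,.0f} USD, COE = {coe:.3f} USD/kWh")
```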

Keywords: energy modeling, HOMER, off-grid system, optimization

Procedia PDF Downloads 563
27940 Determination of the Walkability Comfort for Urban Green Space Using Geographical Information System

Authors: Muge Unal, Cengiz Uslu, Mehmet Faruk Altunkasa

Abstract:

Walkability relates to the ability of places to connect people with varied destinations within a reasonable amount of time and effort, and to offer visual interest in journeys throughout the network. The quality of the physical environment and the arrangement of walkways and sidewalks therefore appear crucial in influencing pedestrian route choice. In addition, proximity, connectivity, and accessibility are significant factors for walkability in terms of equal opportunity to use public spaces. As a result, there are two important points for walkability: firstly, the place should have a well-planned, accessible street network, and secondly, it should meet the pedestrian's need for comfort. In this respect, this study aims to examine both the physical and bioclimatic comfort levels of the current condition of pedestrian routes with reference to the design criteria of a street used to access urban green spaces. These aspects have been identified as the main indicators for walkable streets: continuity, materials, slope, bioclimatic condition, walkway width, greenery, and surface. Additionally, the aim was to identify the factors that need to be considered in future guidelines and policies for planning and design in urban spaces, especially streets. The city of Adana, a province of Turkey located in south-central Anatolia, was chosen as the study area. The workflow of this study can be summarized in four stages: (1) environmental and physical data were collected from the literature and used in a weighted criteria method to determine the importance level of these data; (2) environmental characteristics of pedestrian routes gained from survey studies were evaluated to rank these criteria against the collected information; (3) each pedestrian route was then given a score reflecting how comfortably it provides access to the park; (4) finally, the comfortable routes to the park were mapped using GIS. It is hoped that this study will provide an insight into future development planning and design to create a friendlier and more comfortable street environment for users.
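
A minimal sketch of the weighted-criteria scoring behind stage (3), with invented indicator values and weights; in the study the weights come from the weighted criteria method and the resulting scores are mapped in GIS.

```python
# Sketch: weighted-criteria comfort scores for pedestrian routes from
# normalised indicator values. Indicators, values and weights are invented.
routes = {
    "route_A": {"continuity": 0.9, "surface": 0.8, "slope": 0.7, "greenery": 0.6, "bioclimate": 0.5},
    "route_B": {"continuity": 0.6, "surface": 0.9, "slope": 0.9, "greenery": 0.4, "bioclimate": 0.7},
    "route_C": {"continuity": 0.8, "surface": 0.5, "slope": 0.6, "greenery": 0.9, "bioclimate": 0.8},
}
weights = {"continuity": 0.30, "surface": 0.20, "slope": 0.20, "greenery": 0.15, "bioclimate": 0.15}

scores = {name: sum(weights[k] * v for k, v in vals.items()) for name, vals in routes.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: comfort score = {score:.2f}")
```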

Keywords: comfort level, geographical information system (GIS), walkability, weighted criteria method

Procedia PDF Downloads 311
27939 Subcontractor Development Practices and Processes: A Conceptual Model for LEED Projects

Authors: Andrea N. Ofori-Boadu

Abstract:

The purpose is to develop a conceptual model of subcontractor development practices and processes that strengthen the integration of subcontractors into construction supply chain systems for improved subcontractor performance on Leadership in Energy and Environmental Design (LEED) certified building projects. The construction management of a LEED project has the important objective of meeting sustainability certification requirements. This is in addition to the typical project management objectives of cost, time, quality, and safety for traditional projects, and it therefore increases the complexity of LEED projects. Considering that construction management organizations rely heavily on subcontractors, poor performance on complex projects such as LEED projects has been largely attributed to the unsatisfactory preparation of subcontractors. Furthermore, the extensive use of unique and non-repetitive short-term contracts limits the full integration of subcontractors into construction supply chains and hinders long-term cooperation and benefits that could enhance performance on construction projects. Improved subcontractor development practices are needed to better prepare and manage subcontractors so that complex objectives can be met or exceeded. While supplier development and supply chain theories and practices in the manufacturing sector have been extensively investigated to address similar challenges, comparable investigations in the construction sector are less evident. Consequently, the objective of this research is to investigate effective subcontractor development practices and processes to guide construction management organizations in developing a strong network of high-performing subcontractors. Drawing from foundational supply chain and supplier development theories in the manufacturing sector, a mixed interpretivist and empirical methodology is utilized to assess the body of knowledge within the literature for conceptual model development. A self-reporting survey with five-point Likert scale items and open-ended questions is administered to 30 construction professionals to estimate their perceptions of the effectiveness of 37 practices, classified into five subcontractor development categories. Data analysis includes descriptive statistics, weighted means, and t-tests that guide the effectiveness ranking of practices and categories. The results inform the proposed three-phase LEED subcontractor development program model, which focuses on preparation, development and implementation, and monitoring. Highly ranked LEED subcontractor pre-qualification, commitment, incentive, evaluation, and feedback practices are perceived as more effective when compared to practices requiring more direct involvement and linkages between subcontractors and construction management organizations. This is attributed to unfamiliarity, conflicting interests, lack of trust, and resource-sharing challenges. With strategic modifications, the recommended practices can be extended to other non-LEED complex projects. Additional research is needed to guide the development of subcontractor development programs that strengthen direct involvement between construction management organizations and their networks of high-performing subcontractors. Insights from the present research strengthen the theoretical foundations for future research towards more integrated construction supply chains. In the long term, this would lead to increased performance, profits, and client satisfaction.

Keywords: construction management, general contractor, supply chain, sustainable construction

Procedia PDF Downloads 110
27938 A Study on Improvement of the Electromagnetic Vibration of a Polygon Mirror Scanner Motor

Authors: Yongmin You

Abstract:

Electric machines for office automation devices, such as printers and scanners, are required to have low noise and vibration. Much research on the low noise and vibration of polygon mirror scanner motors has also progressed. The noise and vibration of a polygon mirror scanner motor can be classified as aerodynamic, structural, and electromagnetic. Electromagnetic noise and vibration can be caused by high cogging torque and a non-sinusoidal back EMF. To improve the cogging torque and back EMF characteristics, we apply an unequal air-gap. To analyze the characteristics of a polygon mirror scanner motor, a two-dimensional finite element method is used. To minimize the cogging torque of the polygon mirror motor, Kriging based on Latin hypercube sampling (LHS) is utilized. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 23.4% while maintaining the back EMF and average torque. To verify the optimal design results, an experiment was performed. We measured the vibration of the motors at 23,600 rpm, the rated speed. The radial and axial gravitational accelerations of the optimal model declined by more than a factor of seven and a factor of three, respectively. From these results, the shape-optimized unequal air-gap polygon mirror scanner motor demonstrates improved torque ripple and electromagnetic vibration characteristics.
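
A minimal sketch of the Kriging-on-LHS surrogate step, assuming SciPy's Latin hypercube sampler and scikit-learn's Gaussian process regressor; the two design variables and the analytic "cogging torque" stand in for the real unequal air-gap parameters and finite element evaluations.

```python
# Sketch: Kriging (Gaussian-process) surrogate built on a Latin hypercube
# sample of design variables, then used to pick a low-cogging-torque design.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cogging_torque(x):                      # placeholder for a 2-D FE evaluation
    g1, g2 = x[:, 0], x[:, 1]               # e.g. two air-gap lengths in mm
    return 0.3 + (g1 - 0.8) ** 2 + 0.5 * (g2 - 1.2) ** 2 + 0.1 * np.sin(5 * g1)

bounds_lo, bounds_hi = [0.5, 0.5], [1.5, 1.5]
sampler = qmc.LatinHypercube(d=2, seed=0)
X = qmc.scale(sampler.random(n=30), bounds_lo, bounds_hi)    # LHS design points
y = cogging_torque(X)                                         # "FE" responses

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

grid = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(n=2000), bounds_lo, bounds_hi)
pred = gp.predict(grid)
best = grid[np.argmin(pred)]
print("surrogate-optimal air gaps:", best, "predicted torque:", pred.min())
```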

Keywords: polygon mirror scanner motor, optimal design, finite element method, vibration

Procedia PDF Downloads 342
27937 Role of Web Graphics and Interface in Creating Visitor Trust

Authors: Pramika J. Muthya

Abstract:

This paper investigates the impact of web graphics and interface design on building visitor trust in websites. A quantitative survey approach was used to examine how aesthetic and usability elements of website design influence user perceptions of trustworthiness. 133 participants aged 18-25 who live in urban Bangalore and engage in online transactions were recruited via convenience sampling. Data was collected through an online survey measuring trust levels based on website design, using validated constructs like the Visual Aesthetic of Websites Inventory (VisAWI). Statistical analysis, including ordinal regression, was conducted to analyze the results. The findings show a statistically significant relationship between web graphics and interface design and the level of trust visitors place in a website. The goodness-of-fit statistics and highly significant model fitting information provide strong evidence for rejecting the null hypothesis of no relationship. Well-designed visual aesthetics like simplicity, diversity, colorfulness, and craftsmanship are key drivers of perceived credibility. Intuitive navigation and usability also increase trust. The results emphasize the strategic importance for companies to invest in appealing graphic design, consistent with existing theoretical frameworks. There are also implications for taking a user-centric approach to web design and acknowledging the reciprocal link between pre-existing user trust and perception of visuals. While generalizable, limitations include possible sampling and self-report biases. Further research can build on these findings to deepen understanding of nuanced cultural and temporal factors influencing online trust. Overall, this study makes a significant contribution by providing empirical evidence that reinforces the crucial impact of thoughtful graphic design in fostering lasting user trust in websites.

Keywords: web graphics, interface design, visitor trust, website design, aesthetics, user experience, online trust, visual design, graphic design, user perceptions, user expectations

Procedia PDF Downloads 51
27936 Battery Energy Storage System Economic Benefits Assessment for Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

A methodology is presented for evaluating the economic benefit of providing a primary frequency control unit using a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power that maintains the frequency drop within a given threshold under a given contingency is identified and compared using DIgSILENT's PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the analysis in PowerFactory is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and hysteresis BESSs. It emerges that the 15-minute operating band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. Varying the deadband width leads to an identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
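
A minimal sketch contrasting a basic deadband response with a hysteresis one on a synthetic frequency-deviation trace and comparing the resulting energy throughput; the trace, droop gain, and deadband values are assumptions.

```python
# Sketch: basic vs. hysteresis deadband control of a BESS on a synthetic
# frequency-deviation signal, comparing delivered energy throughput.
import numpy as np

rng = np.random.default_rng(0)
dt = 1.0                                               # s
f_dev = np.clip(np.cumsum(rng.normal(0, 0.002, 3600)), -0.2, 0.2)   # Hz

def throughput(f_dev, deadband, hysteresis=False, gain=100.0):
    """Energy throughput (arbitrary units) of a droop response active outside
    the deadband; with hysteresis the response stays active until the
    deviation returns inside a narrower band."""
    active, energy = False, 0.0
    for f in f_dev:
        if hysteresis:
            if abs(f) > deadband:
                active = True
            elif abs(f) < 0.5 * deadband:
                active = False
        else:
            active = abs(f) > deadband
        if active:
            energy += abs(gain * f) * dt / 3600.0
    return energy

print("basic     :", round(throughput(f_dev, 0.05), 2))
print("hysteresis:", round(throughput(f_dev, 0.05, hysteresis=True), 2))
```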

Keywords: battery energy storage system, electrical network frequency stability, frequency control unit, PowerFactory

Procedia PDF Downloads 129
27935 Prediction of Structural Response of Reinforced Concrete Buildings Using Artificial Intelligence

Authors: Juan Bojórquez, Henry E. Reyes, Edén Bojórquez, Alfredo Reyes-Salazar

Abstract:

This paper addresses the use of Artificial Intelligence to obtain the structural reliability of reinforced concrete buildings. For this purpose, artificial neural networks (ANN) are developed to predict seismic demand hazard curves. In order to have enough input-output data to train the ANN, a set of reinforced concrete buildings (low, mid, and high rise) is designed, and a probabilistic seismic hazard analysis is then performed to obtain the seismic demand hazard curves. The results are used as input-output data to train the ANN in a feedforward backpropagation model. The values of the seismic demand hazard curves predicted by the ANN are then compared with those obtained by conventional methods. Finally, it is concluded that the computational time is significantly lower and that the predictions obtained from the ANN are accurate in comparison to the values obtained from the conventional methods.
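
A minimal sketch of the idea, training a feed-forward network (scikit-learn's backpropagation-based MLPRegressor) to map building descriptors to hazard-curve ordinates; the input features and the synthetic data generator are placeholders for the real design set.

```python
# Sketch: feed-forward network mapping building descriptors to an exceedance
# rate (one hazard-curve ordinate). Features and data generator are invented.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# features: [number of storeys, fundamental period (s), design base-shear coefficient]
X = np.column_stack([rng.integers(3, 25, 200),
                     rng.uniform(0.3, 2.5, 200),
                     rng.uniform(0.05, 0.2, 200)])
# target: exceedance rate of a drift threshold (toy analytic stand-in)
y = 1e-3 * np.exp(-0.8 * X[:, 1]) * (X[:, 0] / 10) / (X[:, 2] * 10)

ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
ann.fit(X[:150], np.log(y[:150]))                 # train on log rates for stability
pred = np.exp(ann.predict(X[150:]))
print("mean relative error:", float(np.mean(np.abs(pred - y[150:]) / y[150:])))
```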

Keywords: structural reliability, seismic design, machine learning, artificial neural network, probabilistic seismic hazard analysis, seismic demand hazard curves

Procedia PDF Downloads 196
27934 Reconfigurable Intelligent Surfaces (RIS)-Assisted Integrated LEO Satellite and UAV for Non-Terrestrial Networks Using a Deep Reinforcement Learning Approach

Authors: Tesfaw Belayneh Abebe

Abstract:

We investigate how to enhance throughput in a non-terrestrial network (NTN) that integrates low Earth orbit (LEO) satellites and unmanned aerial vehicles (UAVs) with the assistance of reconfigurable intelligent surfaces (RIS). We propose a method to jointly optimize the associations with the LEO satellite, the 3D trajectory of the UAV, and the phase shifts of the RIS to maximize communication throughput for RIS-assisted integrated LEO satellite and UAV-enabled wireless communications. The problem is challenging due to the time-varying position of the LEO satellite, the high mobility of UAVs, the enormous number of possible control actions, and the large number of RIS elements. Utilizing a multi-agent double deep Q-network (MADDQN), our approach dynamically adjusts LEO satellite association, UAV positioning, and RIS phase shifts. Simulation results demonstrate that our method significantly outperforms baseline strategies in maximizing throughput. Thanks to the integrated network and the RIS, the proposed scheme achieves up to 65.66x higher peak throughput and 25.09x higher worst-case throughput.

Keywords: low Earth orbit (LEO) satellites, unmanned aerial vehicles (UAVs), non-terrestrial networks (NTN), reconfigurable intelligent surfaces (RIS), multi-agent double deep Q-network (MADDQN)

Procedia PDF Downloads 48
27933 Numerical and Sensitivity Analysis of a Model of Newcastle Disease Dynamics

Authors: Nurudeen Oluwasola Lasisi

Abstract:

Newcastle disease is a highly contagious disease of birds caused by a paramyxovirus. In this paper, we present novel quarantine-adjusted incidence and linear incidence Newcastle disease model equations. We consider the dynamics of transmission and control of Newcastle disease. The existence and uniqueness of the solutions were obtained. The existence of disease-free points was shown, and the model threshold parameter was examined using the next-generation operator method. A sensitivity analysis was carried out in order to identify the most sensitive parameters of disease transmission. This revealed that as the parameters β, ω, and Λ increase while other parameters are kept constant, the effective reproduction number R_ev increases, meaning these parameters increase the endemicity of the infection. Conversely, when the parameters μ, ε, γ, δ_1, and α increase while other parameters are kept constant, the effective reproduction number R_ev decreases, meaning these parameters decrease the endemicity of the infection, as they have negative indices. Analytical results were numerically verified by the Differential Transformation Method (DTM), and quantitative views of the model equations were showcased. We established that as the contact rate (β) increases, the effective reproduction number R_ev increases; as the effectiveness of drug usage increases, R_ev decreases; and as the number of quarantined individuals decreases, R_ev decreases. The simulation results showed that the number of infected individuals increases as the number of susceptible individuals approaches zero, and that the number of vaccinated individuals increases as the number of infected individuals decreases, with a simultaneous increase in the number of recovered individuals.
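
A generic quarantine-adjusted compartmental sketch of the kind the abstract describes, integrated numerically with SciPy; the compartments, equations, and parameter values below are illustrative assumptions, not the authors' model.

```python
# Sketch: a generic susceptible-vaccinated-infected-quarantined-recovered
# model with quarantine, integrated with solve_ivp. Purely illustrative.
from scipy.integrate import solve_ivp

Lam, beta, omega = 50.0, 0.0004, 0.1    # recruitment, contact rate, vaccine waning
mu, eps, gamma, delta1, alpha = 0.02, 0.3, 0.15, 0.1, 0.25  # exit, quarantine, recovery, death, vaccination

def rhs(t, y):
    S, V, I, Q, R = y
    new_inf = beta * S * I
    dS = Lam - new_inf - (mu + alpha) * S + omega * V
    dV = alpha * S - (mu + omega) * V
    dI = new_inf - (mu + eps + gamma + delta1) * I
    dQ = eps * I - (mu + gamma + delta1) * Q
    dR = gamma * (I + Q) - mu * R
    return [dS, dV, dI, dQ, dR]

sol = solve_ivp(rhs, (0, 365), [2000, 0, 10, 0, 0], dense_output=True)
S, V, I, Q, R = sol.y
print("peak infected birds:", round(I.max(), 1), "final recovered:", round(R[-1], 1))
```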

Keywords: disease-free equilibrium, effective reproduction number, endemicity, Newcastle disease model, numerical analysis, sensitivity analysis

Procedia PDF Downloads 45
27932 Geometallurgy of Niobium Deposits: An Integrated Multi-Disciplined Approach

Authors: Mohamed Nasraoui

Abstract:

Spatial ore distribution, ore heterogeneity, and their links with the geological processes involved in niobium concentration are all factors for consideration when bridging field observations to an extraction scheme. Indeed, mineralogical changes of the Nb-hosting phases and their textural relationships with hydrothermal or secondary minerals exert a key control over mineral processing. This study, based both on field work and ore characterization, presents data from several Nb deposits related to carbonatite complexes. The results, obtained by a wide range of analytical techniques including XRD, XRF, ICP-MS, SEM, microprobe, Spectro-CL, FTIR-DTA, and Mössbauer spectroscopy, demonstrate how geometallurgical assessment, at all stages of mine development, can greatly assist in the design of a suitable extraction flowsheet and in data reconciliation.

Keywords: carbonatites, Nb-geometallurgy, Nb-mineralogy, mineral processing

Procedia PDF Downloads 165
27931 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently. However, the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life cycle cost of a product, particularly in the nuclear or aeronautical field, where maintenance expenses represent more than 60% of life cycle costs. The design of large-scale systems starts with the product architecture, a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the different approaches used in Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in proposing dynamic maintenance for multi-component industrial systems. The term "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data). The maintenance is characterized by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products. Large-scale refers to complex multi-component industrial systems with long life cycles, such as trains, aircraft, etc. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering the life cycle cost and the reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for a design solution. This level is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and the optimal product architecture.
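
A toy genetic-algorithm sketch over discrete design choices (reliability, maintainability, redundancy, monitoring levels), minimizing an invented life-cycle-cost proxy subject to an availability target; the encoding, fitness function, and parameters are illustrative, and the paper's two-level scheme with its MFOP-based maintenance level is not reproduced.

```python
# Sketch: single-level GA over component design levels with an availability
# constraint handled as a penalty. Everything here is an illustrative stand-in.
import random

random.seed(0)
LEVELS = [0, 1, 2]                      # candidate level for each decision variable
N_COMPONENTS, N_VARS = 5, 4             # 5 components x 4 design variables each

def fitness(genome):
    cost = sum(10 + 4 * g for g in genome)                      # higher levels cost more
    availability = 1.0
    for c in range(N_COMPONENTS):
        r, m, red, mon = genome[c * N_VARS:(c + 1) * N_VARS]
        availability *= min(0.999, 0.90 + 0.02 * (r + m) + 0.01 * (red + mon))
    penalty = 0 if availability >= 0.95 else 1e4 * (0.95 - availability)
    return cost + penalty                                        # to be minimized

def random_genome():
    return [random.choice(LEVELS) for _ in range(N_COMPONENTS * N_VARS)]

pop = [random_genome() for _ in range(40)]
for _ in range(100):                                             # generations
    pop.sort(key=fitness)
    parents = pop[:10]                                           # truncation selection
    children = []
    while len(children) < 30:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]                                # one-point crossover
        if random.random() < 0.2:                                # mutation
            child[random.randrange(len(child))] = random.choice(LEVELS)
        children.append(child)
    pop = parents + children

best = min(pop, key=fitness)
print("best design levels:", best, "fitness:", round(fitness(best), 2))
```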

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 118
27930 Encoding the Design of the Memorial Park and the Family Network as the Icon of 9/11 in Amy Waldman's The Submission

Authors: Masami Usui

Abstract:

After 9/11, the American literary scene was confronted with new perspectives that enabled both writers and readers to recognize the hidden aspects of their political, economic, legal, social, and cultural phenomena. An argument arose over new and challenging multicultural aspects after 9/11, and this argument is presented through a tension of space related to 9/11. In Amy Waldman's The Submission (2011), designing both the memorial park and the family network has a significant meaning in establishing the progress of understanding from multiple perspectives. The most intriguing and controversial topic of racism is reflected in The Submission, where one young architect's blind entry to the competition for the memorial of Ground Zero is nominated, yet he is confronted with strong objections and hostility as soon as he turns out to be a Muslim named Mohammad Khan. This ‘Khan’ issue, immediately enlarged into a controversial social issue on American soil, causes repeated acts of hostility toward Muslim women by ignorant citizens all over America. His idea for the park is to design a new concept of tracing the cultural background of the open space. Against his will, his name is identified as the ‘ingredient’ of the networking of the resistant community with his supporters; on the other hand, post-9/11 hysteria and victimization are presented in such family associations as the Angry Family Members and the Grieving Family Members. These rapidly expanding networks, whether political or not, constructed by the internet, embody contemporary societal connection and representation. The contemporary quest for the significance of human relationships is recognized as a quest for global peace. Designing both the memorial park and the communication networks strengthens the process of facing shared conflicts and healing the survivors' trauma. The tension between the idea and networking of the Garden for the memorial site and the collapse of Ground Zero signifies the double mission of the site: to establish a space to ease the wounded and to remember the catastrophe. Reading the design of these icons of 9/11 in The Submission means decoding the myth of globalization and its representations in this century.

Keywords: American literature, cultural studies, globalization, literature of catastrophe

Procedia PDF Downloads 533
27929 Landscape Factors Eliciting the Sense of Relaxation in Urban Green Space

Authors: Kaowen Grace Chang

Abstract:

Urban green spaces play an important role in promoting wellbeing through the sense of relaxation for urban residents. Among many design factors, which are the principal ones that could effectively influence people's sense of relaxation? And what is the relationship between the sense of relaxation and those factors? There is still little evidence addressing these questions. Therefore, the purpose of this study, based on individual responses to environmental information, is to investigate the landscape factors that relate to wellbeing through the sense of relaxation in mixed-use urban environments. We conducted an experimental design and model construction utilizing choice-based conjoint analysis to test the factors of plant arrangement pattern, plant trimming condition, the distance to visible automobiles, the number of landmark objects, and the depth of view. Through a balanced fractional orthogonal design, the goal is to understand the relationship between the sense of relaxation and the different designs. As a result, the three factors of plant trimming condition, distance to visible automobiles, and depth of viewshed significantly affect the sense of relaxation. Stronger maintenance and trimming, a greater distance to visible automobiles, and a deeper viewshed that allows users to see further scenes could significantly promote users' sense of relaxation in urban green spaces.

Keywords: urban green space, landscape planning and design, sense of relaxation, choice model

Procedia PDF Downloads 148
27928 Designing User Interfaces for Just in Time Enterprise Solution

Authors: Romi Dey

Abstract:

Introduction: One of the most important criteria for technology to sustain and grow is an elaborate and intuitive design methodology and design thinking. Designing enterprise applications that cater to Just in Time technology is one of the most challenging and detailed processes any user experience designer will come across. Description: When the basic principles of design are tailored to these technologies, an immense challenge arises, and this leads to a set of redefined and revised design principles that can be applied to designing any Just in Time manufacturing solution. Findings: The thorough process of understanding the end users, including their existing pain points faced in the real world, their responsibilities and expectations, their core needs, and, last but not least, their demands, informs the nurturing of the design methodologies for Just in Time solutions. With respect to the business aspect, design and design principles play a strong role in any form of innovation. Conclusion: Innovation and knowledge of the latest technologies are the keywords in the manufacturing industry. It becomes crucial for the product development team to be precise in its understanding of the technology and to be sure of end users' expectations.

Keywords: design thinking, enterprise application, Just in Time, user experience design

Procedia PDF Downloads 170
27927 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm

Authors: Frodouard Minani

Abstract:

Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, and disaster-hit areas. A Wireless Sensor Network consists of a Base Station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion under different environmental conditions. The key parameter that plays a major role in designing a protocol for Wireless Sensor Networks is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is an important issue in the design of applications and protocols for Wireless Sensor Networks. Clustering sensor nodes is an effective topology control approach for helping to achieve this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol which is used to lower energy consumption and to improve the lifetime of Wireless Sensor Networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the Wireless Sensor Network by choosing the farthest cluster head (CH) instead of the closest CH and forming clusters by considering the following parameter metrics: node density, residual energy, and the distance between clusters (inter-cluster distance). In this paper, comparisons between the proposed protocol and comparative protocols in different scenarios have been made, and the simulation results showed that the proposed protocol performs well over the other comparative protocols in various scenarios.
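
A minimal sketch of a cluster head election that scores candidates by residual energy, local density, and inter-cluster distance as described above; the node coordinates, energies, and weights are invented.

```python
# Sketch: elect cluster heads by a weighted score of residual energy, local
# node density and distance to already-elected cluster heads. Data is invented.
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(0, 100, (50, 2))          # node positions in a 100 m x 100 m field
energy = rng.uniform(0.2, 1.0, 50)          # residual energy (J)

def density(i, radius=20.0):
    return np.sum(np.linalg.norm(pos - pos[i], axis=1) < radius) - 1

def ch_score(i, current_chs, w=(0.5, 0.3, 0.2)):
    d_cluster = min((np.linalg.norm(pos[i] - pos[j]) for j in current_chs), default=100.0)
    return w[0] * energy[i] + w[1] * density(i) / 10.0 + w[2] * d_cluster / 100.0

cluster_heads = []
for _ in range(5):                          # elect 5 cluster heads per round
    scores = [ch_score(i, cluster_heads) for i in range(len(pos))]
    for j in cluster_heads:
        scores[j] = -np.inf                 # a node cannot be elected twice
    cluster_heads.append(int(np.argmax(scores)))

print("elected cluster heads:", cluster_heads)
```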

Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks

Procedia PDF Downloads 144
27926 Model Based Design and Development of Horticultural Produce Crate from Bamboo

Authors: Sisay Wondmagegn Molla, Mulugeta Admasu Delele, Tadelle Nigusu Mekonen

Abstract:

It is common to observe quality deterioration and mechanical injury of horticultural products as a result of suboptimal design and handling of packaging systems. Society still uses old and primitive ways of handling horticultural products, developed through trial and error. This method is known to have many limitations in terms of quality, environmental pollution, labor, and cost. Ethiopia stands first in bamboo resources in Africa, holding 67% of the African and 7% of the world's bamboo resources. The purpose of this project was to design and develop bamboo-based ventilated horticultural produce crates using validated computational fluid dynamics (CFD). The model was used to predict the airflow and temperature distribution inside the loaded crate. The study included sizing, collection of the thermo-physical properties, and the design and development of a CFD model of the bamboo-based ventilated horticultural crate. The designed crate (40×30×25 cm) had a capacity of about 18 kg, and cold air at a temperature of 13 °C was used for cooling the fruit. Airflow in the loaded crate is far from uniform. There is relatively high-velocity flow at the top, near the inlet, and near the outlet sections, and relatively low airflow near the center of the loaded crate. The predicted velocity variation within the bulk of the produce was relatively large; it was in the range of 0.04-7 m/s. The vented produce package contributed the highest cooling airflow resistance. Similar to the airflow, the cooling characteristics of the product were not uniform. There was a difference in the cooling rate of the produce in the airflow direction and from the top to the bottom section of the loaded crate. The products located near the inlet side and the top of the bulk showed a faster cooling rate than the rest of the bulk. The results showed that the produce volume-average temperature was 17.9 °C after a cooling period of 3 h, a reduction of 12.05 °C. The results demonstrate the potential of the CFD modeling approach for developing bamboo-based horticultural produce crates in terms of airflow and heat transfer characteristics.
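
As a back-of-the-envelope companion to the CFD result, the sketch below uses a lumped Newton-cooling model with an assumed effective time constant chosen to roughly reproduce the reported 17.9 °C after 3 h; it is not a substitute for the CFD model.

```python
# Sketch: lumped (Newton) cooling of produce from ~30 °C in 13 °C air.
# tau is an assumed effective time constant, tuned to roughly match the
# reported volume-average temperature after 3 h; not a CFD result.
import numpy as np

T_air, T0 = 13.0, 29.95         # °C: cooling air and initial produce temperature
tau = 2.4                       # h: assumed effective cooling time constant

t = np.linspace(0, 3, 7)        # hours
T = T_air + (T0 - T_air) * np.exp(-t / tau)
for ti, Ti in zip(t, T):
    print(f"t = {ti:.1f} h  ->  T = {Ti:.1f} °C")
print("drop after 3 h:", round(T0 - T[-1], 2), "°C")
```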

Keywords: bamboo, modeling, cooling, horticultural, packaging

Procedia PDF Downloads 25
27925 Object Oriented Software Engineering Approach to Industrial Information System Design and Implementation

Authors: Issa Hussein Manita

Abstract:

This paper presents an example of industrial information system design and implementation (IIDC), covering the most common software engineering design steps as applied to the different design stages. We go through the life cycle of software system development: we start with a study of the system requirements and end with testing and delivering the system, passing through system design and coding, program integration, and system integration. The most modern software design tools available are used in the design; these include, but are not limited to, the Unified Modeling Language (UML), system modeling, SQL server-side applications, and use case analysis, design, and testing as applied to information processing systems. The system is designed to perform tasks specified by the client with real data. At the end of the implementation of the system, a default or user-defined acceptance policy is used to provide an overall score as an indication of the system performance. To test the reliability of the designed system, it is tested in different environments and under different workloads, such as a multi-user environment.

Keywords: software engineering, design, system requirement, integration, unified modeling language

Procedia PDF Downloads 570
27924 Efficient Model Order Reduction of Descriptor Systems Using Iterative Rational Krylov Algorithm

Authors: Muhammad Anwar, Ameen Ullah, Intakhab Alam Qadri

Abstract:

This study presents a technique utilizing the Iterative Rational Krylov Algorithm (IRKA) to reduce the order of large-scale descriptor systems. Descriptor systems, which incorporate differential and algebraic components, pose unique challenges in Model Order Reduction (MOR). The proposed method partitions the descriptor system into polynomial and strictly proper parts to minimize approximation errors, applying IRKA exclusively to the strictly proper component. This approach circumvents the unbounded errors that arise when IRKA is applied directly to the entire system. A comparative analysis demonstrates the high accuracy of the reduced model and a significant reduction in computational burden. The reduced model enables more efficient simulations and streamlined controller designs. The study highlights the effectiveness of IRKA-based MOR in optimizing the performance of complex systems across various engineering applications. The proposed methodology offers a promising solution for reducing the complexity of large-scale descriptor systems while maintaining their essential characteristics and facilitating their analysis, simulation, and control design.
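
A minimal sketch of the basic IRKA fixed-point iteration for a standard (E = I) SISO system, i.e. only the "apply IRKA to the strictly proper part" step; the descriptor splitting is not shown, and the symmetric test system is chosen so all quantities stay real.

```python
# Sketch: IRKA for a standard SISO state-space model. The symmetric,
# diffusion-like A with c = b^T keeps the projection one-sided and real.
import numpy as np

n, r = 100, 6
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # stable, symmetric
b = np.zeros((n, 1)); b[0, 0] = 1.0
c = b.T                                                  # symmetric realisation

shifts = np.logspace(-1, 1, r)                           # initial interpolation points
for _ in range(50):
    V = np.column_stack([np.linalg.solve(s * np.eye(n) - A, b).ravel() for s in shifts])
    V, _ = np.linalg.qr(V)                               # orthonormal Krylov basis
    Ar = V.T @ A @ V                                     # Galerkin projection (W = V here)
    new_shifts = np.sort(-np.linalg.eigvals(Ar).real)    # mirror the reduced poles
    converged = np.max(np.abs(new_shifts - np.sort(shifts))) < 1e-8
    shifts = new_shifts
    if converged:
        break

br, cr = V.T @ b, c @ V
s = 1j * 0.5                                             # compare transfer functions at one point
H = (c @ np.linalg.solve(s * np.eye(n) - A, b)).item()
Hr = (cr @ np.linalg.solve(s * np.eye(r) - Ar, br)).item()
print("relative error at s = 0.5j:", abs(H - Hr) / abs(H))
```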

Keywords: model order reduction, descriptor systems, iterative rational Krylov algorithm, interpolatory model reduction, computational efficiency, projection methods, H₂-optimal model reduction

Procedia PDF Downloads 31
27923 A Feasibility and Implementation Model of Small-Scale Hydropower Development for Rural Electrification in South Africa: Design Chart Development

Authors: Gideon J. Bonthuys, Marco van Dijk, Jay N. Bhagwan

Abstract:

Small-scale hydropower used to play a very important role in the provision of energy to urban and rural areas of South Africa. The national electricity grid, however, expanded and offered cheap, coal-generated electricity, and a large number of hydropower systems were decommissioned. Unfortunately, large numbers of households and communities will not be connected to the national electricity grid for the foreseeable future, due to the high cost of transmission and distribution systems to remote communities, the relatively low electricity demand within rural communities, and the allocation of current expenditure to upgrading and constructing new coal-fired power stations. This necessitates the development of feasible alternative power generation technologies. A feasibility and implementation model was developed to assist in designing and financially evaluating small-scale hydropower (SSHP) plants. Several sites were identified using the model. SSHP plants were designed for the selected sites, and the designs were priced using pricing models (civil, mechanical, and electrical aspects). Following feasibility studies of the designed and priced SSHP plants, a feasibility analysis was done and a design chart developed for future similar potential SSHP plant projects. The methodology followed in conducting the feasibility analysis for other potential sites consisted of developing cost and income/saving formulae, net present value (NPV) formulae, Capital Cost Comparison Ratio (CCCR) formulae, and levelised cost formulae for SSHP projects for the different types of plant installations. It included setting up a model for the development of a design chart for an SSHP, calculating the NPV, CCCR, and levelised cost for the different scenarios within the model by varying different parameters within the developed formulae, setting up the design chart for the different scenarios within the model, and analyzing and interpreting the results. From the interpretation of the developed design charts for feasible SSHP, it can be seen that turbine and distribution line costs are the major influences on the cost and feasibility of SSHP. High-head, short-transmission-line, and islanded mini-grid SSHP installations are the most feasible, and the levelised cost of SSHP is high for low-power-generation sites. The main conclusion from the study is that the levelised cost of SSHP projects indicates that the cost of SSHP for low energy generation is high compared to the levelised cost of grid-connected electricity supply; however, the remoteness of sites for rural electrification and the cost of infrastructure to connect remote rural communities to the local or national electricity grid result in a low CCCR and render SSHP for rural electrification feasible on this basis.
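
A minimal sketch of the NPV and levelised-cost comparisons used to judge SSHP feasibility; the discount rate, tariff, energy yield, and costs are placeholders, not values from the study.

```python
# Sketch: NPV and levelised cost of energy for a small hydropower project.
# All cash-flow inputs are invented placeholders.
def npv(rate, cashflows):
    """Net present value of yearly cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

capital = 1_200_000.0           # ZAR: turbine, civil works, distribution line
om_per_year = 35_000.0          # ZAR/yr operation and maintenance
energy_per_year = 350_000.0     # kWh/yr generated
tariff = 1.8                    # ZAR/kWh avoided grid/diesel cost
rate, life = 0.08, 20

cashflows = [-capital] + [energy_per_year * tariff - om_per_year] * life
project_npv = npv(rate, cashflows)

# Levelised cost: discounted lifetime cost divided by discounted lifetime energy
disc_cost = capital + sum(om_per_year / (1 + rate) ** t for t in range(1, life + 1))
disc_energy = sum(energy_per_year / (1 + rate) ** t for t in range(1, life + 1))
levelised_cost = disc_cost / disc_energy

print(f"NPV = {project_npv:,.0f} ZAR, levelised cost = {levelised_cost:.2f} ZAR/kWh")
```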

Keywords: cost, feasibility, rural electrification, small-scale hydropower

Procedia PDF Downloads 224