Search results for: e-content producing algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4962

2892 Analysis of Determinate and Indeterminate Structures: Applications of Non-Economic Structure

Authors: Toral Khalpada, Kanhai Joshi

Abstract:

Most structures built in India are statically indeterminate. The purpose of this study is to compare the behaviour of determinate and indeterminate structures and to investigate the usefulness of the type that proves less economical. In the testing procedure, different types of loads were applied to the beams of both determinate and indeterminate structures, modelled in the structural analysis software STAAD and also inspected practically on the construction site, in order to identify the more efficient structural form and to assess the usefulness of the less beneficial one. The redundant (indeterminate) structures were found to be more reasonable: both the software and the site tests showed that the maximum stresses in statically indeterminate structures are generally lower than those in comparable determinate structures, and their higher stiffness results in smaller deformations. Indeterminate structures are therefore more economical and better suited for construction. On the other hand, statically determinate structures have the benefit of not developing stresses due to temperature changes. Our study therefore indicates that the indeterminate structure is more beneficial overall, but the determinate structure also has its uses; for example, it can be used for two-hinged arch bridges, where two supports are sufficient and an expensive indeterminate structure is not needed. Further investigation is needed to devise more applications of the determinate structure.
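The abstract's central finding can be illustrated with textbook formulas for a uniformly loaded beam, comparing the determinate (simply supported) and indeterminate (fixed-ended) cases; the numbers below are hypothetical and are not taken from the study's STAAD models:

```python
def simply_supported_udl(w, L, E, I):
    """Determinate beam: max moment w*L^2/8, midspan deflection 5*w*L^4/(384*E*I)."""
    return w * L**2 / 8, 5 * w * L**4 / (384 * E * I)

def fixed_ended_udl(w, L, E, I):
    """Indeterminate (fixed-ended) beam: support moment w*L^2/12,
    midspan deflection w*L^4/(384*E*I)."""
    return w * L**2 / 12, w * L**4 / (384 * E * I)

# Hypothetical case: 10 kN/m over a 6 m steel beam (E = 210 GPa, I = 8e-5 m^4)
w, L, E, I = 10e3, 6.0, 210e9, 8e-5
m_det, d_det = simply_supported_udl(w, L, E, I)
m_ind, d_ind = fixed_ended_udl(w, L, E, I)
# The indeterminate beam carries a lower peak moment and deflects one fifth as much
```

For this load case the fixed-ended beam's peak moment drops from wL²/8 to wL²/12 and its deflection to one fifth of the simply supported value, which is the stiffness advantage the abstract reports.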

Keywords: construction, determinate structure, indeterminate structure, stress

Procedia PDF Downloads 232
2891 Impact Location From Instrumented Mouthguard Kinematic Data In Rugby

Authors: Jazim Sohail, Filipe Teixeira-Dias

Abstract:

Mild traumatic brain injury (mTBI) within non-helmeted contact sports is a growing concern due to the serious risk of potential injury. Extensive research is being conducted into head kinematics in non-helmeted contact sports, utilizing instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. This does not, however, allow the location of the impact on the head, or its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler's equations and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic data sets for impacts from varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently done by cross-referencing data timestamps to video footage. The machine learning technique focuses on eliminating this preprocessing step by establishing trends within time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions from an entire time-series signal. The kinematic signals from the mouthguards are converted to the frequency domain before a clustering algorithm groups similar signals within a time series that may span the length of a game. Impacts are clustered within predetermined location bins.
The same Hybrid III dummy finite element model is used to create impacts that closely replicate on-field impacts in order to create synthetic time-series datasets consisting of impacts in varying locations. These time-series data sets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method to establish the accurate impact location of signals that have already been labeled as true impacts and filtered out of the entire time series. The machine learning technique, by contrast, can be applied to long time-series signal data, but resolves the impact location only to predetermined regions on the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by the sensors, saving time for data scientists working with instrumented mouthguard kinematic data, since validating true impacts against video footage would no longer be required.
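The frequency-domain clustering step described above can be sketched as follows; the synthetic signals, the naive DFT, and the minimal two-cluster k-means are illustrative stand-ins for the study's actual pipeline:

```python
import cmath
import math

def dft_mag(signal):
    """Magnitude spectrum of a real signal (naive DFT, first N/2 bins)."""
    N = len(signal)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * n / N)
                    for n, x in enumerate(signal))) for k in range(N // 2)]

def dist2(p, q):
    """Squared Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def two_means(points, iters=10):
    """Minimal 2-means clustering; centres seeded from the first and last point."""
    centres = [list(points[0]), list(points[-1])]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist2(p, centres[0]) <= dist2(p, centres[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centres[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Two synthetic "impact" families with different dominant frequencies,
# standing in for impacts from two different head regions
signals = [[math.sin(2 * math.pi * f * n / 32) for n in range(32)]
           for f in (2, 2, 2, 7, 7, 7)]
labels = two_means([dft_mag(s) for s in signals])
```

In the spectra the two families separate cleanly, so the clustering assigns the first three signals to one label and the last three to the other.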

Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI

Procedia PDF Downloads 218
2890 Climate Change and the Role of Foreign-Invested Enterprises

Authors: Xuemei Jiang, Kunfu Zhu, Shouyang Wang

Abstract:

In this paper, we select China as a case and employ a time series of unique input-output tables distinguishing firm ownership and processing exports to evaluate the role of foreign-invested enterprises (FIEs) in China's rapid growth of carbon dioxide emissions. The results suggest that FIEs contributed 11.55% of the growth of China's economic output between 1992 and 2010, but accounted for only 9.65% of the growth of carbon dioxide emissions. In relative terms, until 2010 FIEs still emitted much less than Chinese-owned enterprises (COEs) when producing the same amount of output, although COEs experienced much faster technology upgrading. In an ideal scenario in which final demands remain unchanged and COEs completely mirror the advanced technologies of FIEs, more than 2,000 Mt of carbon dioxide emissions would have been avoided in China in 2010. From a policy perspective, widespread FIEs are a very effective and efficient channel for encouraging technology transfer from developed to developing countries.
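The input-output accounting underlying this kind of analysis can be illustrated with a toy two-sector Leontief model; all coefficients below are hypothetical and only show the mechanics of solving x = (I - A)^-1 f and totalling emissions as e·x:

```python
def total_output_2x2(A, f):
    """Solve x = (I - A)^{-1} f for a 2-sector input-output table
    using the closed-form 2x2 inverse."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c
    return [(d * f[0] - b * f[1]) / det, (-c * f[0] + a * f[1]) / det]

# Hypothetical 2-sector economy: sector 0 = COEs, sector 1 = FIEs
A = [[0.2, 0.1],   # intermediate input coefficients (column j buys from row i)
     [0.3, 0.2]]
f = [100.0, 50.0]  # final demand per sector
x = total_output_2x2(A, f)

e = [1.5, 0.9]     # emission intensity (t CO2 per unit output); FIEs cleaner
emissions = sum(ei * xi for ei, xi in zip(e, x))
```

Total output exceeds final demand because each sector also produces the intermediate inputs the other requires; replacing the COE intensity e[0] with the FIE value would quantify the counterfactual saving analogous to the paper's ideal scenario.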

Keywords: carbon dioxide emissions, foreign-invested enterprises, technology transfer, input–output analysis, China

Procedia PDF Downloads 399
2889 Revolutionizing Financial Forecasts: Enhancing Predictions with Graph Convolutional Networks (GCN) - Long Short-Term Memory (LSTM) Fusion

Authors: Ali Kazemi

Abstract:

In volatile and interconnected international financial markets, accurately predicting market trends holds substantial value for traders and financial institutions. Traditional machine learning techniques have made significant strides in forecasting market movements; however, the complex and networked nature of financial data calls for more sophisticated approaches. This study presents a method for financial market prediction that leverages the synergistic potential of Graph Convolutional Networks (GCNs) and Long Short-Term Memory (LSTM) networks. Our proposed algorithm is designed to forecast the trends of stock market indices and cryptocurrency prices, utilizing a comprehensive dataset spanning from January 1, 2015, to December 31, 2023. This era, marked by considerable volatility and transformation in financial markets, provides a solid basis for training and testing our predictive model. Our algorithm integrates diverse data to construct a dynamic financial graph that accurately reflects market intricacies. We collect daily opening, closing, high, and low prices for key stock market indices (e.g., S&P 500, NASDAQ) and major cryptocurrencies (e.g., Bitcoin, Ethereum), ensuring a holistic view of market trends. Daily trading volumes are also incorporated to capture market activity and liquidity, providing critical insight into buying and selling dynamics. Furthermore, recognizing the profound influence of the macroeconomic environment on financial markets, we integrate key macroeconomic indicators, such as interest rates, inflation rates, GDP growth, and unemployment rates, into our model. Our GCN algorithm is adept at learning the relational patterns among individual financial instruments represented as nodes in a comprehensive market graph.
Edges in this graph encapsulate relationships based on co-movement patterns and sentiment correlations, enabling the model to grasp the complex network of influences governing market movements. Complementing this, our LSTM network is trained on sequences of the spatial-temporal representation learned by the GCN, enriched with historical price and volume data. This lets the LSTM capture and predict temporal market trends accurately. In a comprehensive evaluation of our GCN-LSTM algorithm across the stock market and cryptocurrency datasets, the model demonstrated superior predictive accuracy and profitability compared with conventional and alternative machine learning benchmarks. Specifically, the model achieved a Mean Absolute Error (MAE) of 0.85%, indicating high precision in predicting day-to-day price movements. The Root Mean Square Error (RMSE) was recorded at 1.2%, underscoring the model's effectiveness in minimizing large prediction errors, which is vital in volatile markets. Furthermore, when assessing the model's performance on directional market movements, it achieved an accuracy of 78%, significantly outperforming the benchmark models, which averaged 65%. This high degree of accuracy is instrumental for strategies that depend on predicting the direction of price moves. This study showcases the efficacy of combining graph-based and sequential deep learning in financial market prediction and highlights the value of a comprehensive, data-driven evaluation framework. Our findings promise to improve investment strategies and risk management practices, offering investors and financial analysts a powerful tool to navigate the complexities of modern financial markets.
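The reported evaluation metrics (MAE, RMSE, directional accuracy) can be computed as sketched below; the toy price series is hypothetical and only illustrates the metric definitions, not the model itself:

```python
import math

def mae(actual, pred):
    """Mean absolute error between two equal-length series."""
    return sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)

def rmse(actual, pred):
    """Root mean square error; penalises large deviations more than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual))

def directional_accuracy(actual, pred):
    """Share of days where the predicted move has the same sign as the realised one."""
    moves_a = [b - a for a, b in zip(actual, actual[1:])]
    moves_p = [b - a for a, b in zip(pred, pred[1:])]
    hits = sum(1 for ma, mp in zip(moves_a, moves_p) if ma * mp > 0)
    return hits / len(moves_a)

# Toy daily closing prices (indexed to 100 on day one), for illustration only
actual = [100.0, 101.2, 100.5, 102.0, 103.1]
pred   = [100.0, 100.9, 100.7, 101.6, 103.0]
```

RMSE is always at least as large as MAE, so reporting both (as the abstract does) indicates how much of the error comes from occasional large misses rather than a uniform bias.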

Keywords: financial market prediction, graph convolutional networks (GCNs), long short-term memory (LSTM), cryptocurrency forecasting

Procedia PDF Downloads 68
2888 Optimization of Structures with Mixed Integer Non-linear Programming (MINLP)

Authors: Stojan Kravanja, Andrej Ivanič, Tomaž Žula

Abstract:

This contribution focuses on structural optimization in civil engineering using mixed-integer non-linear programming (MINLP). MINLP is a versatile method that can handle both continuous and discrete optimization variables simultaneously. Continuous variables are used to optimize parameters such as dimensions, stresses, masses, or costs, while discrete variables represent binary decisions determining the presence or absence of structural elements within a structure, as well as the choice of discrete materials and standard sections. The optimization process is divided into three main steps. First, a mechanical superstructure with a variety of different topology, material, and dimensional alternatives is generated. Next, a MINLP model is formulated to encapsulate the optimization problem. Finally, an optimal solution is sought in the direction of the defined objective function while respecting the structural constraints. The economic or mass objective function, comprising the material and labor costs of a structure, is subjected to the constraints known from structural analysis. These constraints include equations for the calculation of internal forces and deflections, as well as equations for the dimensioning of structural components (in accordance with the Eurocode standards). Given the complex, non-convex and highly non-linear nature of optimization problems in civil engineering, the Modified Outer-Approximation/Equality-Relaxation (OA/ER) algorithm is applied. This algorithm alternately solves non-linear programming (NLP) subproblems and mixed-integer linear programming (MILP) main problems, and in this way gradually refines the solution space towards the optimal solution.
The NLP corresponds to the continuous optimization of parameters (with fixed topology, discrete materials, and standard dimensions, all determined in the previous MILP), while the MILP involves a global approximation to the superstructure of alternatives, in which a new topology, materials, and standard dimensions are determined. The optimization of a convex problem is stopped when the MILP solution becomes better than the best NLP solution; otherwise, it is terminated when the NLP solution can no longer be improved. While the OA/ER algorithm, like all other algorithms, does not guarantee global optimality in the presence of non-convex functions, various modifications, including convexity tests, are implemented in OA/ER to mitigate these difficulties. The effectiveness of the proposed MINLP approach is demonstrated by its application to various structural optimization tasks, such as mass optimization of steel buildings, cost optimization of timber halls, composite floor systems, etc. Special optimization models have been developed for the optimization of these structures. The MINLP optimizations, facilitated by the user-friendly software package MIPSYN, provide insights into mass- or cost-optimal solutions, optimal structural topologies, and optimal material and standard cross-section choices, confirming MINLP as a valuable method for the optimization of structures in civil engineering.
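The decomposition idea, continuous subproblems solved for each discrete alternative proposed by a master problem, can be conveyed with a toy sketch. Here the MILP master is replaced by plain enumeration over hypothetical standard sections, and the NLP by a one-dimensional ternary search, so this illustrates only the alternation between discrete and continuous decisions, not the OA/ER algorithm itself:

```python
def continuous_subproblem(cost, lo, hi, tol=1e-6):
    """1-D 'NLP': minimise a unimodal cost over [lo, hi] by ternary search."""
    while hi - lo > tol:
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) < cost(m2):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Discrete alternatives: hypothetical standard sections with relative costs.
# Continuous variable: a plate thickness t. All numbers are invented.
sections = {"IPE200": 1.0, "IPE240": 1.4, "IPE300": 2.1}
best = None
for name, c_sec in sections.items():
    # total cost = section cost + material cost 2t + a stiffness penalty 3/(c*t)
    cost = lambda t, c=c_sec: c + 2.0 * t + 3.0 / (c * t)
    t_opt = continuous_subproblem(cost, 0.1, 5.0)
    total = cost(t_opt)
    if best is None or total < best[2]:
        best = (name, t_opt, total)
```

Each discrete choice fixes the "topology" of the problem and leaves a smooth continuous optimization, exactly the division of labour between the MILP and NLP phases described above.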

Keywords: MINLP, mixed-integer non-linear programming, optimization, structures

Procedia PDF Downloads 48
2887 Design, Analysis and Obstacle Avoidance Control of an Electric Wheelchair with Sit-Sleep-Seat Elevation Functions

Authors: Waleed Ahmed, Huang Xiaohua, Wilayat Ali

Abstract:

Wheelchair users are generally exposed to physical and psychological health problems, e.g., pressure sores and pain in the hip joint, associated with seating posture or being inactive in a wheelchair for a long time. A reclining wheelchair with back, thigh, and leg adjustment helps in daily life activities and health preservation. The seat elevating function of an electric wheelchair allows a user with lower limb amputation to reach different heights. An electric wheelchair is expected to ease the lives of elderly and disabled people by giving them mobility support and decreasing the percentage of accidents caused by users' narrow field of view or joystick operation errors. Thus, this paper proposes the design, analysis, and obstacle avoidance control of an electric wheelchair with sit-sleep-seat elevation functions. A 3D model of the wheelchair was designed in SolidWorks and later used for multi-body dynamics (MBD) analysis and to verify the driving control system. The control system uses a fuzzy algorithm to avoid obstacles, taking as inputs the distance measured by an ultrasonic sensor and the user-specified direction from the joystick's operation. The proposed fuzzy driving control system governs the direction and velocity of the wheelchair. The wheelchair model has been examined and proven in MSC Adams (Automated Dynamic Analysis of Mechanical Systems). The designed fuzzy control algorithm is implemented in the Gazebo robotic 3D simulator using the Robot Operating System (ROS) middleware. The proposed wheelchair design enhances mobility and quality of life by improving the user's functional capabilities. Simulation results verify the non-accidental behavior of the electric wheelchair.
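A minimal sketch of the kind of fuzzy mapping described, from obstacle distance to a velocity command, is shown below; the membership functions, rule base, and numeric ranges are invented for illustration and are not the paper's tuned controller:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_speed(distance_m):
    """Map obstacle distance to a speed command in [0, 1] of max speed,
    via three rules (near->stop, medium->slow, far->fast) and
    weighted-average defuzzification over speed singletons."""
    near = max(0.0, min(1.0, (1.5 - distance_m) / 1.5))  # full at 0 m, gone at 1.5 m
    medium = tri(distance_m, 0.5, 1.5, 2.5)
    far = max(0.0, min(1.0, (distance_m - 1.5) / 1.5))   # ramps in, full past 3 m
    weights = [near, medium, far]
    speeds = [0.0, 0.4, 1.0]  # rule consequents as speed singletons
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, speeds)) / total if total else 0.0
```

The controller stops near obstacles, runs at full speed in open space, and blends smoothly in between; in the paper a second fuzzy block would similarly shape the direction command from the joystick input.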

Keywords: fuzzy logic control, joystick, multi body dynamics, obstacle avoidance, scissor mechanism, sensor

Procedia PDF Downloads 129
2886 Characterization of an Isopropanol-Butanol Clostridium

Authors: Chen Zhang, Fengxue Xin, Jianzhong He

Abstract:

A unique Clostridium beijerinckii strain, BGS1, was obtained from grassland samples; it is capable of producing 8.43 g/L butanol and 3.21 g/L isopropanol from 60 g/L glucose, while generating 4.68 g/L volatile fatty acids (VFAs) from 30 g/L xylan. The concentration of isopropanol produced by culture BGS1 is ~15% higher than that of previously reported wild-type Clostridium beijerinckii under similar conditions. Compared to traditional Acetone-Butanol-Ethanol (ABE) fermentation species, culture BGS1 generates only a negligible amount of ethanol and acetone, but produces butanol and isopropanol as biosolvent end-products, which are pure alcohols and more economical than the ABE mixture. More importantly, culture BGS1 can consume acetone to produce isopropanol, e.g., 1.84 g/L isopropanol from 0.81 g/L acetone in 60 g/L glucose medium containing 6.15 g/L acetone. Analysis of the BGS1 draft genome, annotated by the RAST server, demonstrates that the absence of ethanol production is caused by the lack of a pyruvate decarboxylase gene, which is related to ethanol production. In addition, an alcohol dehydrogenase gene (adhe) was found in BGS1, which could be the gene responsible for isopropanol generation. This is the first report of Isopropanol-Butanol (IB) fermentation by a wild-type Clostridium strain and its application to isopropanol and butanol production.

Keywords: acetone conversion, butanol, clostridium, isopropanol

Procedia PDF Downloads 294
2885 3D Simulation of the Twin-Aperture IRON Superconducting Quadrupole for Charm-Tau Factory

Authors: K. K. Riabchenko, T. V. Rybitskaya, A. A. Starostenko

Abstract:

The Super Charm-Tau Factory is a double-ring e+e− collider to be operated in the center-of-mass energy range from 2 to 6 GeV, with a peak luminosity of about 10³⁵ cm⁻²s⁻¹ (Crab Waist collision scheme) and with longitudinally polarized electrons at the IP (interaction point). One of the important elements of the cτ-factory is the superconducting twin-aperture quadrupole of the final focus, and it was decided to build a full-scale prototype of this quadrupole. The main objectives of our study included: 1) 3D modeling of the quadrupole in the Opera program, 2) optimization of the geometry of the quadrupole lens, and 3) study of the influence of the magnetic properties and geometry of the quadrupole on the integral harmonics. In addition, the mechanisms producing unwanted harmonics were studied. In the course of this work, a 3D model of a twin-aperture iron superconducting quadrupole lens was created. A three-dimensional simulation of the magnetic field was performed, and the geometrical parameters of the lens were selected. The calculations helped to find sources of possible errors and methods for correcting unwanted harmonics, and they show that there are no obstacles to the production of a prototype lens.
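Harmonics of a magnet's field are conventionally obtained as Fourier coefficients of the field sampled on a reference circle in the aperture; the sketch below, using an ideal quadrupole plus an artificial sextupole error term, illustrates the idea (it is not the Opera workflow used in the study):

```python
import math

def field_harmonics(samples, n_max=6):
    """Fourier amplitudes of B(theta) sampled uniformly on a reference circle;
    the n-th amplitude measures the 2n-pole content of the field."""
    N = len(samples)
    coeffs = []
    for n in range(1, n_max + 1):
        an = 2.0 / N * sum(b * math.cos(n * 2 * math.pi * k / N)
                           for k, b in enumerate(samples))
        bn = 2.0 / N * sum(b * math.sin(n * 2 * math.pi * k / N)
                           for k, b in enumerate(samples))
        coeffs.append(math.hypot(an, bn))
    return coeffs

# Ideal quadrupole field (n = 2) plus a 1% unwanted sextupole (n = 3) term
N = 64
theta = [2 * math.pi * k / N for k in range(N)]
B = [math.cos(2 * t) + 0.01 * math.cos(3 * t) for t in theta]
h = field_harmonics(B)  # h[0] is n=1 (dipole), h[1] is n=2 (quadrupole), ...
```

The quadrupole amplitude comes out at 1.0, the injected sextupole error at 0.01, and all other harmonics at essentially zero, which is the kind of decomposition used when judging the lens geometry.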

Keywords: super cτ-factory, final focus, twin aperture quadrupole lens, integral harmonics

Procedia PDF Downloads 127
2884 A Benchmark System for Testing Medium Voltage Direct Current (MVDC-CB) Robustness Utilizing Real Time Digital Simulation and Hardware-In-Loop Theory

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

The integration of green energy resources is a major focus, and the role of Medium Voltage Direct Current (MVDC) systems is expanding rapidly. However, the protection of MVDC systems against DC faults is a challenge with consequences for reliable and safe grid operation. This challenge reveals the need for MVDC circuit breakers (MVDC CBs), which are still in the infancy of their development. There is therefore a lack of MVDC CB standards, including thresholds for acceptable power losses and operation speed. To establish a baseline for comparison purposes, a benchmark system for testing future MVDC CBs is vital. The literature typically gives only the timing sequence of each switch, and the emphasis is on topology, without an in-depth study of the DCCB control algorithm, as circuit breaker control systems are not yet systematic. A digital testing benchmark is designed for proof-of-concept simulation studies using software models. It can validate studies based on real-time digital simulators and Transient Network Analyzer (TNA) models. The proposed experimental setup uses data acquisition from accurate sensors installed on the tested MVDC CB; through general-purpose input/outputs (GPIO) from the microcontroller and PC, prototype studies in laboratory-based models utilizing Hardware-in-the-Loop (HIL) equipment connected to real-time digital simulators are achieved. The improved control algorithm of the circuit breaker can reduce the peak fault current and avoid arc re-ignition, helping the coordination of DCCBs in relay protection. Moreover, several research gaps are identified regarding case studies and evaluation approaches.

Keywords: DC circuit breaker, hardware-in-the-loop, real time digital simulation, testing benchmark

Procedia PDF Downloads 81
2883 An Evidence-Based Laboratory Medicine (EBLM) Test to Help Doctors in the Assessment of the Pancreatic Endocrine Function

Authors: Sergio J. Calleja, Adria Roca, José D. Santotoribio

Abstract:

Pancreatic endocrine diseases include pathologies such as insulin resistance (IR), prediabetes, and type 2 diabetes mellitus (DM2). Some of them are highly prevalent in the U.S.: 40% of U.S. adults have IR, 38% have prediabetes, and 12% have DM2, as reported by the National Center for Biotechnology Information (NCBI). Building upon this imperative, the objective of the present study was to develop a non-invasive test for the assessment of the patient's pancreatic endocrine function and to evaluate its accuracy in detecting various pancreatic endocrine diseases, such as IR, prediabetes, and DM2. This approach to a routine blood and urine test is based on serum and urine biomarkers. It combines several independent public algorithms, such as the Adult Treatment Panel III (ATP-III) criteria, the triglycerides and glucose (TyG) index, the homeostasis model assessment of insulin resistance (HOMA-IR), HOMA-2, and the quantitative insulin-sensitivity check index (QUICKI). Additionally, it incorporates essential measurements such as creatinine clearance, estimated glomerular filtration rate (eGFR), urine albumin-to-creatinine ratio (ACR), and urinalysis, which help to build a full picture of the patient's pancreatic endocrine disease. To evaluate the estimated accuracy of this test, an iterative process was performed by a machine learning (ML) algorithm with a training set of 9,391 patients. The sensitivity achieved was 97.98% and the specificity was 99.13%. The area under the receiver operating characteristic (AUROC) curve, the positive predictive value (PPV), and the negative predictive value (NPV) were 92.48%, 99.12%, and 98.00%, respectively. The algorithm was validated with a randomized controlled trial (RCT) with a target sample size (n) of 314 patients.
However, 50 patients were initially excluded from the study because they had ongoing clinically diagnosed pathologies, symptoms, or signs, so n dropped to 264 patients. Then, 110 patients were excluded because they did not show up at the clinical facility for any of the follow-up visits (a critical point to improve for the upcoming RCT, since the cost per patient is very high and almost a third of the patients already tested were lost), so the new n was 154 patients. After that, 2 patients were excluded because some of their laboratory parameters and/or clinical information were wrong or incorrect. Thus, a final n of 152 patients was reached. In this validation set, the results obtained were 100.00% sensitivity, 100.00% specificity, 100.00% AUROC, 100.00% PPV, and 100.00% NPV. These results suggest that this approach to a routine blood and urine test holds promise for providing timely and accurate diagnoses of pancreatic endocrine diseases, particularly among individuals aged 40 and above. Given the current epidemiological state of these diseases, the findings underscore the significance of early detection. Furthermore, they advocate for further exploration, prompting the intention to conduct a clinical trial involving 26,000 participants (from March 2025 to December 2026).
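Several of the public indices the test combines have commonly published forms; a sketch of three of them is below, with the caveat that unit conventions and cut-offs vary in the literature and the study's exact combination is not specified:

```python
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR with fasting glucose in mg/dL and fasting insulin in uU/mL."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI = 1 / (log10(insulin) + log10(glucose))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def tyg_index(triglycerides_mg_dl, glucose_mg_dl):
    """TyG = ln(fasting TG * fasting glucose / 2), both in mg/dL."""
    return math.log(triglycerides_mg_dl * glucose_mg_dl / 2.0)

# Hypothetical fasting panel: glucose 100 mg/dL, insulin 10 uU/mL, TG 150 mg/dL
g, ins, tg = 100.0, 10.0, 150.0
scores = (homa_ir(g, ins), quicki(g, ins), tyg_index(tg, g))
```

Because each index uses different inputs and scales, combining them (as the study does, alongside renal and urine markers) gives complementary views of the same underlying insulin-resistance signal.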

Keywords: algorithm, diabetes, laboratory medicine, non-invasive

Procedia PDF Downloads 36
2882 The Use of Stochastic Gradient Boosting Method for Multi-Model Combination of Rainfall-Runoff Models

Authors: Phanida Phukoetphim, Asaad Y. Shamseldin

Abstract:

In this study, the novel Stochastic Gradient Boosting (SGB) combination method is applied to produce daily river flows from four different rainfall-runoff models of the Ohinemuri catchment, New Zealand. The selected rainfall-runoff models are two empirical black-box models, the linear perturbation model and the linear varying gain factor model, and two conceptual models, the soil moisture accounting and routing model and the Nedbør-Afstrømnings model (NAM). The simple average combination method and the weighted average combination method were used as benchmarks for comparing the results of the novel SGB combination method. The models and combination results are evaluated using statistical and graphical criteria. The overall results of this study show that the use of a combination technique can certainly improve the simulated river flows of the four selected models for the Ohinemuri catchment. The results also indicate that the novel SGB combination method is capable of accurate prediction when used to combine the simulated river flows in New Zealand.
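The two benchmark combiners mentioned above, the simple average and the weighted average, can be sketched as follows (SGB itself requires a tree-boosting implementation and is not reproduced here); the flow values are hypothetical:

```python
def simple_average(preds):
    """Equal-weight combination of several models' flow simulations."""
    return [sum(col) / len(col) for col in zip(*preds)]

def weighted_average(preds, observed, step=0.01):
    """Weight for a two-model combination (w and 1-w), chosen by grid
    search to minimise squared error against observed flows."""
    best_w, best_err = 0.0, float("inf")
    w = 0.0
    while w <= 1.0:
        comb = [w * a + (1 - w) * b for a, b in zip(*preds)]
        err = sum((c - o) ** 2 for c, o in zip(comb, observed))
        if err < best_err:
            best_w, best_err = w, err
        w += step
    return best_w

# Hypothetical daily flows: two models with opposite biases
obs = [5.0, 7.0, 6.0, 9.0]
m1  = [4.5, 7.5, 5.5, 9.5]
m2  = [6.0, 6.0, 7.0, 8.0]
w = weighted_average([m1, m2], obs)
```

Because the two toy models err in opposite directions, the fitted weight lands where their biases largely cancel; SGB generalises this by letting the weights depend non-linearly on the model outputs themselves.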

Keywords: multi-model combination, rainfall-runoff modeling, stochastic gradient boosting, bioinformatics

Procedia PDF Downloads 340
2881 A Trends Analysis of Yacht Simulator

Authors: Jae-Neung Lee, Keun-Chang Kwak

Abstract:

This paper describes an analysis of international trends in yacht simulators and also gives background on yachts. Examples of techniques used in yacht simulators include image processing for counting the total number of vehicles, edge/target detection, detection and evasion algorithms, image matching using SIFT (scale-invariant feature transform), and the application of median filtering and thresholding.
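Two of the listed image-processing steps, median filtering and thresholding, can be sketched on a single image row; the pixel values are invented for illustration:

```python
def median_filter_1d(pixels, k=3):
    """k-neighbourhood median filter over one image row (edges kept as-is);
    removes isolated salt noise without blurring step edges."""
    half = k // 2
    out = list(pixels)
    for i in range(half, len(pixels) - half):
        out[i] = sorted(pixels[i - half:i + half + 1])[half]
    return out

def threshold(pixels, t):
    """Binarise a row: 1 where intensity exceeds t, else 0."""
    return [1 if p > t else 0 for p in pixels]

row = [10, 12, 250, 11, 13, 200, 210, 205, 12]  # single bright noise spike at index 2
clean = median_filter_1d(row)
mask = threshold(clean, 128)
```

The median filter suppresses the lone spike at index 2 while preserving the genuine bright region, so the subsequent threshold segments only the real feature.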

Keywords: yacht simulator, simulator, trends analysis, SIFT

Procedia PDF Downloads 434
2880 A Framework of Dynamic Rule Selection Method for Dynamic Flexible Job Shop Problem by Reinforcement Learning Method

Authors: Rui Wu

Abstract:

In the volatile modern manufacturing environment, new orders arrive randomly at any time, so pre-emptive scheduling methods are infeasible. This calls for a real-time scheduling method that can produce a reasonably good schedule quickly. The dynamic Flexible Job Shop problem is an NP-hard scheduling problem that hybridizes the dynamic Job Shop problem with the Parallel Machine problem. A Flexible Job Shop contains different work centres; each work centre contains parallel machines that can process certain operations. Many algorithms, such as genetic algorithms and simulated annealing, have been proposed to solve static Flexible Job Shop problems. However, the time efficiency of these methods is low, and they are not feasible for a dynamic scheduling problem. Therefore, a dynamic rule selection scheduling system based on reinforcement learning is proposed in this research, in which the dynamic Flexible Job Shop problem is divided into several parallel machine problems to decrease its complexity. Firstly, features of the jobs, machines, work centres, and flexible job shop are selected to describe the status of the problem at each decision point in each work centre. Secondly, a reinforcement learning framework using a double-layer deep Q-learning network is applied to select suitable composite dispatching rules based on the status of each work centre. Then, based on the selected composite dispatching rule, an available operation is selected from the waiting buffer and assigned to an available machine in each work centre. Finally, the proposed algorithm is compared with well-known dispatching rules on the objectives of mean tardiness, mean flow time, mean waiting time, and mean percentage of waiting time in the real-time Flexible Job Shop problem. The results of the simulations show that the proposed framework has reasonable performance and time efficiency.
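The rule-selection idea, learning which dispatching rule to apply given the shop status, can be conveyed with a tabular Q-learning toy on a single machine; the state encoding, reward, and job generator below are drastic simplifications of the paper's double-layer deep Q-network:

```python
import random
from collections import defaultdict

RULES = ("SPT", "EDD")  # shortest processing time / earliest due date

def pick(queue, rule):
    """Select the next job (proc_time, due_date) according to the rule."""
    key = (lambda j: j[0]) if rule == "SPT" else (lambda j: j[1])
    return min(queue, key=key)

def run_episode(q_table, eps, alpha=0.2, gamma=0.9, seed=None):
    """One scheduling episode; returns total tardiness and updates Q in place."""
    rng = random.Random(seed)
    queue = [(rng.randint(1, 9), rng.randint(5, 30)) for _ in range(8)]
    now, total_tardy = 0, 0
    while queue:
        state = min(len(queue), 5)                       # bucketed queue length
        if rng.random() < eps:
            a = rng.randrange(2)                         # explore
        else:
            a = max((0, 1), key=lambda i: q_table[(state, i)])  # exploit
        job = pick(queue, RULES[a])
        queue.remove(job)
        now += job[0]
        reward = -max(0, now - job[1])                   # negative tardiness
        total_tardy -= reward
        nstate = min(len(queue), 5)
        best_next = max(q_table[(nstate, 0)], q_table[(nstate, 1)]) if queue else 0.0
        q_table[(state, a)] += alpha * (reward + gamma * best_next - q_table[(state, a)])
    return total_tardy

q = defaultdict(float)
for ep in range(300):
    run_episode(q, eps=max(0.05, 1.0 - ep / 200), seed=ep)  # decaying exploration
```

The learned table maps a shop-status feature (here just queue length) to the rule expected to minimise tardiness; the paper's framework does the same per work centre, with richer features and composite rules.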

Keywords: dynamic scheduling problem, flexible job shop, dispatching rules, deep reinforcement learning

Procedia PDF Downloads 110
2879 Improving Lane Detection for Autonomous Vehicles Using Deep Transfer Learning

Authors: Richard O’Riordan, Saritha Unnikrishnan

Abstract:

Autonomous Vehicles (AVs) are incorporating an increasing number of ADAS features, including automated lane-keeping systems. In recent years, many research papers on lane detection algorithms have been published, varying from computer vision techniques to deep learning methods. The transition from the lower levels of autonomy defined in the SAE framework to higher autonomy levels requires increasingly complex models and algorithms that must be highly reliable in their operation and functionality; at high levels of autonomy, these algorithms have no room for error. The current research details existing computer vision and deep learning algorithms, their methodologies, and individual results; it also details the challenges faced by the algorithms and the resources needed to operate them, along with shortcomings experienced during lane detection in certain weather and lighting conditions. This paper will explore these shortcomings and attempt to implement a lane detection algorithm that could be used to achieve improvements in AV lane detection systems. The paper uses a pre-trained LaneNet model to detect lane or non-lane pixels using binary segmentation as the base detection method, first on the existing BDD100K dataset and then on a custom dataset generated locally. The selected roads will be modern, well-laid roads with up-to-date infrastructure and lane markings, while the second road network will be an older road with infrastructure and lane markings reflecting the network's age. The performance of the proposed method will be evaluated on the custom dataset and compared to its performance on the BDD100K dataset. In summary, this paper will use transfer learning to provide a fast and robust lane detection algorithm that can handle various road conditions and provide accurate lane detection.
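Binary lane segmentation of the kind LaneNet produces is typically scored with intersection-over-union between predicted and ground-truth lane masks; a minimal sketch, with hypothetical flattened 0/1 masks, is:

```python
def iou(pred, truth):
    """Intersection-over-union for binary lane masks given as flat 0/1 lists."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Toy 8-pixel masks: 1 = lane pixel, 0 = background
truth = [0, 1, 1, 1, 0, 0, 1, 1]
pred  = [0, 1, 1, 0, 0, 1, 1, 1]
score = iou(pred, truth)
```

Comparing this score between the BDD100K images and the locally generated dataset would quantify how well the transferred model copes with the older road markings.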

Keywords: ADAS, autonomous vehicles, deep learning, LaneNet, lane detection

Procedia PDF Downloads 107
2878 Overview of Sludge Produced From the Treatment Plant of the Bahr El-Baqar Drain and Reusing It With Cement in Outdoor Paving

Authors: Khaled M. Naguib, Ahmed M. Noureldin

Abstract:

This paper aims to achieve several goals, such as determining the quantities produced, main properties, and characteristics of the sludge produced by the Bahr El-Baqar drain treatment plant. This projection was made through laboratory analysis and modelling of samples of the sludge, drawing on many previous studies. A second goal is to check the feasibility and perform a risk analysis to identify the best alternatives for reuse in producing secondary products that add value to the sludge, as well as the alternatives that add no value. All recovery methods are relatively expensive and challenging to implement in this mega plant, so the recommendation from this study is to use the sludge as a coagulant to reduce some compounds, or in secondary products. The study utilized sludge-cement replacement percentages of 10%, 20%, 30%, 40% and 50%. The produced tiles were tested for water absorption and breaking (bending) strength. The study showed that all produced tiles exhibited a water absorption ratio of around 10%. The study concluded that the produced tiles, except for the 50% sludge-cement replacement, comply with the breaking strength requirement of 2.8 MPa for tiles for external use.

Keywords: cement, tiles, water treatment sludge, breaking strength, absorption, heavy metals, risk analysis

Procedia PDF Downloads 114
2877 Thermal and Mechanical Properties of Powder Injection Molded Alumina Nano-Powder

Authors: Mostafa Rezaee Saraji, Ali Keshavarz Panahi

Abstract:

In this work, the processing steps for producing alumina parts using the powder injection molding (PIM) technique and nano-powder were investigated, and the thermal conductivity and flexural strength of the samples were determined as functions of sintering temperature and holding time. In the first step, a feedstock with 58 vol. % alumina nano-powder with an average particle size of 100 nm was prepared using the Extrumixing method to obtain appropriate homogeneity. This feedstock was injection molded into a two-cavity mold of rectangular shape. After the injection molding step, thermal and solvent debinding methods were used to debind the molded samples, which were then sintered at different sintering temperatures and holding times. The results showed that the flexural strength and thermal conductivity of the samples increased with sintering temperature and holding time; at a sintering temperature of 1600 ºC and a holding time of 5 h, the flexural strength and thermal conductivity of the sintered samples reached maximum values of 488 MPa and 40.8 W/mK, respectively.

Keywords: alumina nano-powder, thermal conductivity, flexural strength, powder injection molding

Procedia PDF Downloads 330
2876 Insulation and Architectural Design to Have Sustainable Buildings in Iran

Authors: Ali Bayati, Jamileh Azarnoush

Abstract:

Nowadays, as the population increases all around the world, the consumption of fossil fuels has risen dramatically, and many believe that most atmospheric pollution comes from their use. The process of natural resources entering cities presents one of the great challenges of resource-consumption management. Today, everyone is concerned about the consumption of fossil fuels, and reducing civil energy consumption in megacities plays a key role in solving serious problems such as air pollution, greenhouse gas production, global warming, and damage to the ozone layer. In the construction industry, we should use materials that require the least energy to make and transport, as well as materials that need the least energy and expense to recycle. In this respect, the kind of material used, the way it is processed, the use of regional materials, and adaptation to the environment are critical; moreover, insulation should be used and maintained over the long term. Accordingly, this article investigates new ways to reduce environmental pollution and save more energy by using materials that are not harmful to the environment, fully insulated building materials, sustainable and diversified buildings, suitable urban design, and more efficient use of solar energy in order to reduce energy consumption.

Keywords: building design, construction masonry, insulation, sustainable construction

Procedia PDF Downloads 542
2875 Effect of Genetically Modified (GM) Plants in Animal Nutrition

Authors: Abdullah Özbilgin, Oguzhan Kahraman, Mustafa Selçuk Alataş

Abstract:

Plant breeders have made and will continue to make important contributions toward meeting the need for more and better feed and food. The use of new techniques to modify the genetic makeup of plants to improve their properties has led to a new generation of crops, grains, and their by-products for feed. The land area devoted to the cultivation of genetically modified (GM) plants has increased in recent years: in 2012, such plants were grown on over 170 million hectares globally, in 28 different countries, and they are at present used by 17.3 million farmers worldwide. The majority of GM plants are used as feed material for food-producing farm animals. Despite the facts that GM plants have been used as feed for years and that a number of feeding studies have proved their safety for animals, they still give rise to emotional public discussion.

Keywords: crops, genetically modified plants (GM), plant, safety

Procedia PDF Downloads 566
2874 Investigation of Optimized Mechanical Properties on Friction Stir Welded Al6063 Alloy

Authors: Lingaraju Dumpala, Narasa Raju Gosangi

Abstract:

Friction Stir Welding (FSW) is a relatively new, environmentally friendly, versatile, and widely used joining technique for soft materials such as aluminum. FSW has attracted considerable attention as a solid-state joining method that avoids many common problems of fusion welding and provides an improved, faster way of producing aluminum joints. FSW can be used for various aerospace, defense, automotive, and transportation applications. It is necessary to understand friction stir welded joints and their characteristics in order to use this new joining technique in critical applications. This study investigated the mechanical properties of friction stir welded aluminum 6063 alloys. FSW was carried out based on a design of experiments using an L16 mixed-level array, considering tool rotational speed, tool feed rate, and tool tilt angle as process parameters. The optimization of the process parameters was carried out by Taguchi-based regression analysis, and the significance of the process parameters was analyzed using ANOVA. It is observed that the considered process parameters highly influence the mechanical properties of Al6063.
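Taguchi analysis ranks parameter levels by a signal-to-noise (S/N) ratio; for strength responses, the larger-the-better form is standard. The sketch below assumes that form and uses made-up tensile-strength replicates for two rotational-speed levels; the actual L16 runs and responses are not given in the abstract.

```python
import math

def sn_larger_better(values):
    """Taguchi larger-the-better signal-to-noise ratio, in dB."""
    return -10 * math.log10(sum(1 / v ** 2 for v in values) / len(values))

# Hypothetical tensile-strength replicates (MPa) at two tool rotational
# speeds -- illustrative stand-ins for the study's L16 responses.
runs = {
    "1000 rpm": [182.0, 185.0, 181.0],
    "1400 rpm": [196.0, 198.0, 195.0],
}
sn = {level: round(sn_larger_better(vals), 2) for level, vals in runs.items()}
best = max(sn, key=sn.get)
print(sn, "-> preferred level:", best)
```

The same per-level S/N averaging, repeated for each factor of the L16 array, yields the ranking that Taguchi-based optimization reports.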

Keywords: FSW, aluminum alloy, mechanical properties, optimization, Taguchi, ANOVA

Procedia PDF Downloads 134
2873 Scientific Interpretation of “Fertilizing Winds” Mentioned in Verse 15:22 of Al-Quran

Authors: Md. Mamunur Rashid

Abstract:

Allah (SWT) bestowed us with the Divine blessing of providing the wonderful source of water, as stated in verse 15:22 of Al-Quran. The Arabic “Ar-Riaaha Lawaaqiha (ٱلرِّيَـٰحَ لَوَٰقِحَ)” of this verse is translated as “fertilizing winds.” The “fertilizing winds” literally refer to winds having these roles: to fertilize something, similar to the “zygotes” in humans and animals (the formation of clouds in the sky, in this case); to produce fertilizers for plants, crops, etc.; and to pollinate plants. In this paper, these roles of the “fertilizing winds” have been validated by presenting the modern knowledge of science in this regard. Existing interpretations mostly focus on the “formation of clouds in the sky,” while a few of them mention the pollination of trees. However, the production of fertilizers has not been considered in this regard by any translator or interpreter. It has been observed that the winds contain the necessary components for forming clouds, for producing fertilizers, and for pollinating plants. The science of meteorology gives us a clear understanding of the formation of clouds. Moreover, we know that lightning bolts break the nitrogen molecules of the winds and the water molecules of vapor to form fertilizers. Pollination is a common role of winds in plant fertilization. All the scientific phenomena presented here give us better interpretations of the “fertilizing winds.”

Keywords: Al-Quran, fertilizing winds, meteorology, scientific

Procedia PDF Downloads 122
2872 Influence of Bio-Based Admixture on Compressive Strength of Concrete for Columns

Authors: K. Raza, S. Gul, M. Ali

Abstract:

Concrete is a fundamental building material, extensively utilized by the construction industry. Loss of strength is an immense issue for the sustainability of concrete structures; concrete mostly loses its strength due to cracks produced in it by shrinkage or by the hydration process. This study aims to enhance the strength and service life of concrete structures by incorporating a bio-based admixture in the concrete. With the injection of a bio-based admixture (BBA), the concrete will self-heal cracks by producing calcium carbonate. Minimization of cracks will compact the microstructure of the concrete, due to which its strength will increase. In this study, Bacillus subtilis will be used as the bio-based admixture (BBA), with calcium lactate up to 1.5% used as its food source in the concrete. Two formulations, containing 0% and 5% of Bacillus subtilis by weight of cement, will be used for casting the concrete specimens. A direct mixing method will be adopted for incorporating the bio-based admixture into the concrete. A compressive strength test will be carried out after 28 days of curing, and scanning electron microscopy (SEM) and X-ray diffraction analysis (XRD) will be performed to examine the micro-structure of the concrete. Results will be drawn by comparing the test results of the 0% and 5% formulations. Based on the satisfactory increase in the compressive strength of the concrete, the use of the bio-based admixture (BBA) in concrete for columns will be recommended.

Keywords: bio-based admixture, Bacillus subtilis, calcium lactate, compressive strength

Procedia PDF Downloads 228
2871 Optimization of Bio-Based Lightweight Mortars Containing Wood Waste

Authors: Valeria Corinaldesi, Nicola Generosi, Daniele Berdini

Abstract:

In this study, wood waste from processing by-products was used to replace natural sand in producing bio-based lightweight mortars. Manufacturers of wood products and furniture usually generate sawdust and pieces of side-cuts, produced by cutting, drilling, and milling operations. Three different percentages of substitution of quartz sand were tried: 2.5%, 5%, and 10% by volume. The wood by-products were pre-soaked in a calcium hydroxide aqueous solution in order to mineralize the wood and avoid undesirable effects on the bio-based building materials. The bio-based mortars were characterized by means of compression and bending tests, free drying shrinkage tests, water vapour permeability tests, water capillary absorption tests, and, finally, thermal conductivity measurements. The results obtained showed that a maximum dosage of 5% wood by-products should be used in order to avoid an excessive loss of mechanical strength in the bio-based mortar. On the other hand, by adding a proper dosage of water-reducing admixture, adequate mechanical performance can be achieved even with a 10% wood waste addition.

Keywords: bio-based mortar, energy efficiency, lightweight mortar, thermal insulation, wood waste

Procedia PDF Downloads 13
2870 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are determined chemically by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are gaining a foothold in the meat market, promising more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted into grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed by applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on the corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of the total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of the intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement visual procedure yielded good fat segmentation, making this simple visual approach for quantifying the different fat fractions in dry-cured ham slices sufficiently simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As a future perspective, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. The system will therefore be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
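The two quantitative steps described above — expressing fat as a percentage of the slice area and regressing image-based estimates against Soxhlet values — can be sketched as follows. The 4x4 mask and the five paired measurements are illustrative stand-ins, not the study's data.

```python
import numpy as np

def fat_fraction(mask):
    """Fat pixels (1) as a percentage of the total slice area."""
    return 100.0 * mask.sum() / mask.size

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Toy segmented slice: 1 = fat pixel after edge-enhanced thresholding
mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1]])
print(fat_fraction(mask))  # → 25.0 (4 of 16 pixels)

# Hypothetical image-based vs. Soxhlet fat contents (%), for illustration only
image_pct   = np.array([8.0, 10.5, 12.0, 15.0, 18.5])
soxhlet_pct = np.array([7.5, 10.0, 12.8, 14.6, 19.0])
print(round(r_squared(image_pct, soxhlet_pct), 3))
```

The same R² computation applied per fat fraction is what the reported 0.73/0.82/0.73 values summarize.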

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 178
2869 Investigating the Behavior of Underground Structures in the Event of an Earthquake

Authors: Davoud Beheshtizadeh, Farzin Malekpour

Abstract:

The progress of technology and the production of new machinery have brought a big change to excavation operations and the construction of underground structures. Limitations of space, along with economic, political, and military considerations, have drawn the attention of most developed and developing countries to the construction of these structures for mining, military, and development objectives. Underground highways, tunnels, subways, oil reservoirs, fuel reservoirs, nuclear waste burial reservoirs, and underground stores are increasingly being developed and used in these countries. The existence and habitability of cities depend on these underground installations, or in other words, on these vital arteries. Stoppage of the flow of water, gas leakage and explosion, collapse of sewage paths, and similar failures resulting from an earthquake are among the factors that can severely harm the environment and increase casualties. The loss of the sewage network and the complete stoppage of the flow of water in Bam (Iran) is a good example of this kind. In this paper, we investigate the effect of wave orientation on structures and their deformation, as well as the effect of faulting on underground structures; we then study the resistance of reinforced concrete against earthquakes, simulate two different samples, analyze the results, and point out the importance of paying attention to underground installations.

Keywords: underground structures, earthquake, underground installations, axial deformations

Procedia PDF Downloads 195
2868 Intercultural Communication in the Teaching of English as a Foreign Language in Malawi

Authors: Peter Mayeso Jiyajiya

Abstract:

This paper discusses how the teaching of English as a foreign language in Malawi can enhance intercultural communication competence in a multicultural society. It argues that incorporating intercultural communication into the teaching of English as a foreign language would improve cultural awareness in communication in multicultural Malawi. The teaching of English in Malawi is geared towards producing students who can communicate in the global world. This entails the use of proper pedagogical approaches and instructional materials that prepare students for intercultural awareness. In view of this, language teachers were interviewed in order to determine their instructional approaches to intercultural communication. Instructional materials were further evaluated to assess how interculturality is incorporated. The study found that teachers face perceptual and technical challenges that hinder them from exercising creativity to incorporate interculturality in their lessons. This is compounded by a lack of clear direction on cultural elements in the teaching materials. The paper therefore suggests a holistic approach to the teaching of the English language in Malawian schools, in which the diversity of culture in classrooms must be considered an opportunity for addressing students’ cultural needs that may be lacking in the instructional materials.

Keywords: cultural awareness, grammar, foreign language, intercultural communication, language teaching

Procedia PDF Downloads 344
2867 Design Standardization in Aramco: Strategic Analysis

Authors: Mujahid S. Alharbi

Abstract:

The construction of process plants in oil and gas-producing countries, such as Saudi Arabia, necessitates substantial investment in design and building. Each new plant, while unique, includes common building types, suggesting an opportunity for design standardization. This study investigates the adoption of standardized Issue For Construction (IFC) packages for non-process buildings in Saudi Aramco. A SWOT analysis presents the strengths, weaknesses, opportunities, and threats of this approach. The approach's benefits are illustrated using the Hawiyah Unayzah Gas Reservoir Storage Program (HUGRSP) as a case study. Standardization not only offers significant cost savings and operational efficiencies but also expedites project timelines, reduces the potential for change orders, and fosters local economic growth by allocating building tasks to local contractors. Standardization also improves project management by easing interface constraints between different contractors and promoting adaptability to future industry changes. This research underscores the standardization of non-process buildings as a powerful strategy for cost optimization, efficiency enhancement, and local economic development in process plant construction within the oil and gas sector.

Keywords: building, construction, management, project, standardization

Procedia PDF Downloads 66
2866 Improving Security by Using Secure Servers Communicating via Internet with Standalone Secure Software

Authors: Carlos Gonzalez

Abstract:

This paper describes the use of the Internet as a means to enhance the security of software that is to be distributed/sold to users potentially all over the world. By placing some of the features of the secure software on a secure server, we increase the security of that software. The communication between the protected software and the secure server is done by a double lock algorithm. This paper also includes an analysis of intruders and describes possible responses to detected threats.
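The abstract does not specify the double lock algorithm. One classical commutative-encryption construction that matches the two-padlock intuition is Shamir's three-pass protocol, sketched here with toy parameters; a real deployment would need a large safe prime and a vetted cryptographic implementation, and the authors' actual scheme may differ.

```python
# Shamir three-pass sketch: each side applies and later removes its own
# "lock" (modular exponentiation), so the secret never travels unprotected.
# Toy modulus only -- NOT secure parameters.
P = 2_147_483_647  # prime modulus (2**31 - 1) shared by software and server

def keypair(e, p=P):
    """Locking exponent e (coprime to p-1) and its unlocking inverse d."""
    return e, pow(e, -1, p - 1)

def lock(m, e, p=P):
    return pow(m, e, p)

e_a, d_a = keypair(65537)      # protected software's lock/unlock pair
e_b, d_b = keypair(257)        # secure server's lock/unlock pair

m = 123456                     # secret feature token to exchange
step1 = lock(m, e_a)           # software locks the token and sends it
step2 = lock(step1, e_b)       # server adds its own lock, sends it back
step3 = lock(step2, d_a)       # software removes its lock (locks commute)
recovered = lock(step3, d_b)   # server removes its lock
print(recovered == m)          # True: token recovered, never sent in the clear
```

The commutativity of modular exponentiation is what lets each lock be removed in either order, which is the essence of any "double lock" exchange.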

Keywords: internet, secure software, threats, cryptography process

Procedia PDF Downloads 336
2865 A Robust Optimization Model for Multi-Objective Closed-Loop Supply Chain

Authors: Mohammad Y. Badiee, Saeed Golestani, Mir Saman Pishvaee

Abstract:

In recent years, consumers and governments have increasingly been pushing companies to design their activities so as to reduce negative environmental impacts, for example by producing renewable products or adopting threat-free disposal policies. It is therefore important to focus more accurately on optimizing the various aspects of the total supply chain. Modeling a supply chain can be a challenging process due to the large number of factors that need to be considered in the model. The use of multi-objective optimization can help overcome these problems, since more information is used when designing the model. Uncertainty is inevitable in the real world. Considering uncertainty in the parameters, in addition to using multiple objectives, gives more flexibility to the decision-making process, since the process can take into account many more constraints and requirements. In this paper, we demonstrate a stochastic, scenario-based robust model to cope with uncertainty in a closed-loop multi-objective supply chain. By applying the proposed model to a real-world case, the power of the proposed model in handling data uncertainty is shown.
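One common reading of a scenario-based robust model is minimax: evaluate each candidate design under every scenario and keep the one whose worst-case weighted objective is best. The sketch below illustrates that idea with hypothetical designs, scenarios, objective weights, and objective values — none of them from the paper.

```python
# Minimal minimax sketch of scenario-based robustness for a two-objective
# (cost, emissions) closed-loop network design. All numbers are illustrative.
W_COST, W_EMIS = 0.7, 0.3          # assumed objective weights

# design -> scenario -> (cost, emissions); hypothetical evaluations
designs = {
    "centralized": {"low": (100, 40), "base": (110, 42), "high": (150, 60)},
    "distributed": {"low": (115, 35), "base": (118, 36), "high": (125, 38)},
}

def worst_case(design):
    """Worst weighted objective value of a design over all scenarios."""
    return max(W_COST * c + W_EMIS * e for c, e in designs[design].values())

robust_choice = min(designs, key=worst_case)
print(robust_choice)  # the design with the best worst-case performance
```

Note how the cheap-on-average "centralized" design loses: its high-demand scenario is so costly that the steadier "distributed" design is the robust pick, which is exactly the trade-off robust formulations are built to capture.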

Keywords: supply chain management, closed-loop supply chain, multi-objective optimization, goal programming, uncertainty, robust optimization

Procedia PDF Downloads 419
2864 Identification of Vehicle Dynamic Parameters by Using Optimized Exciting Trajectory on 3- DOF Parallel Manipulator

Authors: Di Yao, Gunther Prokop, Kay Buttner

Abstract:

Dynamic parameters, including the center of gravity, mass, and moments of inertia of a vehicle, play an essential role in vehicle simulation, collision testing, and the real-time control of vehicle active systems. To identify these important vehicle dynamic parameters, a systematic parameter identification procedure is studied in this work. In the first step of the procedure, a conceptual parallel manipulator (virtual test rig) possessing three rotational degrees of freedom is proposed. To realize the kinematic characteristics of the conceptual parallel manipulator, a kinematic analysis consisting of inverse kinematics and singularity architecture is carried out. Based on Euler's rotation equations for rigid body dynamics, the dynamic model of the parallel manipulator and the derivation of the measurement matrix for parameter identification are presented subsequently. In order to reduce the sensitivity of the parameter identification to measurement noise and other unexpected disturbances, a parameter optimization process searching for the optimal exciting trajectory of the parallel manipulator is then conducted. For this purpose, 321-Euler-angles defined by a parameterized finite Fourier series are used to describe the general exciting trajectory of the parallel manipulator. To minimize the condition number of the measurement matrix and thus achieve better parameter identification accuracy, the unknown coefficients of the parameterized finite Fourier series are estimated by employing an iterative algorithm based on MATLAB®. Meanwhile, the iterative algorithm ensures that the parallel manipulator remains in an achievable working status during the execution of the optimal exciting trajectory. It is shown that the proposed procedure and methods can effectively identify the vehicle dynamic parameters and could be an important application of parallel manipulators in the fields of parameter identification and test rig development.
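The trajectory parameterization and optimization criterion described above can be sketched directly: each Euler angle is a finite Fourier series, and candidate coefficient sets are scored by the condition number of the resulting measurement matrix. The fundamental frequency, coefficients, and toy regressor matrix below are illustrative assumptions, not the paper's values.

```python
import numpy as np

OMEGA = 2 * np.pi * 0.1          # fundamental frequency, rad/s (assumed)

def euler_angle(t, a, b):
    """One Euler angle as a finite Fourier series with coefficients a_k, b_k."""
    k = np.arange(1, len(a) + 1)
    return np.sum(a * np.sin(np.outer(t, k) * OMEGA) +
                  b * np.cos(np.outer(t, k) * OMEGA), axis=1)

def condition_number(coeffs, t):
    """cond() of a toy regressor matrix built from three angle trajectories."""
    cols = [euler_angle(t, a, b) for a, b in coeffs]
    return np.linalg.cond(np.column_stack(cols))

t = np.linspace(0, 10, 200)
# Three distinct angle trajectories vs. a degenerate set with two identical ones
good = [(np.array([1.0, 0.2]), np.array([0.0, 0.1])),
        (np.array([0.0, 1.0]), np.array([0.5, 0.0])),
        (np.array([0.3, 0.0]), np.array([0.0, 1.0]))]
bad = [good[0], good[0], good[1]]
print(condition_number(good, t) < condition_number(bad, t))  # True
```

Minimizing this condition number over the Fourier coefficients, under the rig's workspace constraints, is the optimization the abstract describes.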

Keywords: parameter identification, parallel manipulator, singularity architecture, dynamic modelling, exciting trajectory

Procedia PDF Downloads 269
2863 A Hybrid-Evolutionary Optimizer for Modeling the Process of Obtaining Bricks

Authors: Marius Gavrilescu, Sabina-Adriana Floria, Florin Leon, Silvia Curteanu, Costel Anton

Abstract:

Natural sciences provide a wide range of experimental data whose related problems require study and modeling beyond the capabilities of conventional methodologies. Such problems have solution spaces whose complexity and high dimensionality require correspondingly complex regression methods for proper characterization. In this context, we propose an optimization method consisting of a hybrid dual-optimizer setup: a global optimizer based on a modified variant of the popular Imperialist Competitive Algorithm (ICA), and a local optimizer based on a gradient descent approach. The ICA is modified so that intermediate solution populations are more quickly and efficiently pruned of low-fitness individuals by appropriately altering the assimilation, revolution, and competition phases, which, combined with an initialization strategy based on low-discrepancy sampling, allows for a more effective exploration of the corresponding solution space. Subsequently, gradient-based optimization is used locally to seek the optimal solution in the neighborhoods of the solutions found through the modified ICA. We use this combined approach to find the optimal configuration and weights of a fully connected neural network, resulting in regression models used to characterize the process of obtaining bricks from silicon-based materials. Installations in the raw ceramics industry, i.e., brick plants, are characterized by significant energy consumption and large quantities of emissions. Thus, the purpose of our approach is to determine by simulation the working conditions, including the manufacturing mix recipe with the addition of different materials, that minimize the emissions of CO and CH4. Our approach determines regression models which perform significantly better than those found using the traditional ICA for the aforementioned problem, resulting in better convergence and a substantially lower error.
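The hybrid global/local scheme can be illustrated on a toy convex objective: a heavily simplified ICA-style population search (assimilation plus occasional revolution) followed by gradient descent from the best candidate. Everything below — the objective, population size, and step sizes — is an illustrative stand-in for the paper's neural-network training problem.

```python
import random

def f(p):                       # toy objective standing in for emission error
    x, y = p
    return (x - 3) ** 2 + (y + 1) ** 2

def grad(p):                    # analytic gradient of the toy objective
    x, y = p
    return (2 * (x - 3), 2 * (y + 1))

random.seed(1)

# --- global phase: heavily simplified ICA-style search -------------------
pop = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(40)]
for _ in range(60):
    pop.sort(key=f)
    imperialists, colonies = pop[:4], pop[4:]
    new_cols = []
    for c in colonies:
        imp = random.choice(imperialists)            # assimilation step
        beta = random.uniform(0.5, 2.0)
        moved = tuple(ci + beta * (ii - ci) for ci, ii in zip(c, imp))
        if random.random() < 0.1:                    # "revolution": restart
            moved = (random.uniform(-10, 10), random.uniform(-10, 10))
        new_cols.append(moved)
    pop = imperialists + new_cols
best = min(pop, key=f)

# --- local phase: gradient descent around the ICA solution ---------------
for _ in range(200):
    g = grad(best)
    best = (best[0] - 0.1 * g[0], best[1] - 0.1 * g[1])

print(round(f(best), 6))  # → 0.0: converged to the optimum near (3, -1)
```

On the real problem, the local phase refines network weights in the neighborhood found by the modified ICA rather than coordinates of a 2-D point, but the division of labor is the same.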

Keywords: optimization, biologically inspired algorithm, regression models, bricks, emissions

Procedia PDF Downloads 84