Search results for: vector optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4286


1796 An Algorithm to Depreciate the Energy Utilization Using a Bio-Inspired Method in Wireless Sensor Network

Authors: Navdeep Singh Randhawa, Shally Sharma

Abstract:

Wireless Sensor Networks (WSNs) are an autonomous technology that is emerging rapidly. This technology faces a number of challenges, and energy management is one of them, with a huge impact on the network lifetime. To conserve energy, different types of routing protocols have been developed, but classical routing protocols are no longer adequate for complicated environments. Hence, intelligent routing algorithms inspired by natural systems are a turning point for Wireless Sensor Networks. These nature-based algorithms handle the challenges of WSNs efficiently, as they are capable of finding local and global best optimization solutions in complex environments. The main aim of this paper is therefore to develop a routing algorithm based on a swarm intelligence technique to enhance the performance of the Wireless Sensor Network.

Keywords: wireless sensor network, routing, swarm intelligence, MPRSO

Procedia PDF Downloads 352
1795 Stakeholder Voices in Digital Evolution: Challenges Faced by SMEs in Automotive Supply Chain

Authors: Mohammed Sharaf, Alireza Shokri, Adrian Small, Toby Bridges

Abstract:

This paper investigates digital transformation challenges in SMEs within the automotive supply chain. A case study approach and participant observation revealed significant data management and process optimization barriers, corroborated by a conceptual model. Stakeholder feedback, visualized through a pie chart, emphasized data management and process efficiency as primary concerns. Recommended strategies include implementing advanced data systems, process simplification, and enhancing digital skills. Despite the single-case study limitation, the findings offer actionable insights for SMEs to leverage Industry 4.0 technologies effectively. This research contributes to the strategic roadmap necessary for SMEs to achieve competitive digital transformation.

Keywords: automotive supply chain, digital transformation, industry 4.0

Procedia PDF Downloads 35
1794 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical

Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani

Abstract:

Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists, and to achieve this aim the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites which are unnecessary; by eliminating these sites the monitoring network can be optimized and operated more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems, on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area, over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the significance level α = 0.05 the given sampling sites do not form a homogeneous group. Because sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between sampling sites belonging to the same or different groups on scatterplots. Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12, and in the case of Lake Balaton, 5 out of 10 could be discarded. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be reduced to approximately half of the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the location of the greatest differences. These results can help researchers decide where to place new sampling sites. CCDA proved to be a useful tool for optimizing monitoring networks for different types of water bodies, and based on the results obtained, the monitoring networks can be operated more economically.
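
The core of CCDA can be sketched as a permutation-style test: a preconceived grouping of sampling sites is scored by linear discriminant analysis and compared with the scores of many random groupings, following the 95% rule described above. The minimal Python sketch below illustrates that comparison; the synthetic data, group labels, and helper names are illustrative assumptions, not the authors' code or data.

```python
# Minimal sketch of the CCDA core idea: compare LDA classification accuracy of a
# preconceived grouping of sampling sites against accuracies of random groupings.
# Synthetic data and group labels are illustrative assumptions, not the study's data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Water-quality observations (rows) with a few parameters (columns) per sampling site.
n_sites, n_obs_per_site, n_params = 6, 40, 4
site_of_obs = np.repeat(np.arange(n_sites), n_obs_per_site)
X = rng.normal(size=(n_sites * n_obs_per_site, n_params))
X += site_of_obs[:, None] * 0.3          # make the sites slightly different

# Preconceived grouping of the sites, e.g. from hierarchical cluster analysis.
grouping = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
y = np.array([grouping[s] for s in site_of_obs])

def lda_accuracy(X, y):
    """Ratio of correctly classified observations for a given grouping."""
    return LinearDiscriminantAnalysis().fit(X, y).score(X, y)

observed = lda_accuracy(X, y)

# Accuracies of random regroupings of the same sites into the same group sizes.
random_scores = []
for _ in range(200):
    perm = rng.permutation(n_sites)
    y_rand = np.array([grouping[perm[s]] for s in site_of_obs])
    random_scores.append(lda_accuracy(X, y_rand))

threshold = np.quantile(random_scores, 0.95)
print(f"observed accuracy = {observed:.3f}, 95% random quantile = {threshold:.3f}")
# Following the abstract: if the observed ratio exceeds 95% of the random ones,
# the sites are judged not to form a homogeneous group (alpha = 0.05).
print("homogeneity rejected" if observed > threshold else "homogeneity not rejected")
```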

Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality

Procedia PDF Downloads 349
1793 Optimal Planning and Design of Hybrid Energy System for Taxila University

Authors: Habib Ur Rahman Habib

Abstract:

Renewable energy resources are being recognized as suitable options in hybrid energy planning for on-grid and microgrid applications. In this paper, the operation, planning, and optimal design of an on-grid hybrid system based on distributed energy resources are investigated. The aim is to minimize the cost of the overall energy system while keeping environmental emissions in view and minimizing the penetration of conventional energy resources. Seven different grid-connected case studies, including diesel-only, diesel-renewable, and renewable-only configurations, are designed for economic analysis, operational planning, and emission assessment. A sensitivity analysis is performed to investigate the impact of different parameters on the performance of the energy resources.

Keywords: data management, renewable energy, distributed energy, smart grid, micro-grid, modeling, energy planning, design optimization

Procedia PDF Downloads 460
1792 Optimization of Energy Consumption with Various Design Parameters on Office Buildings in Chinese Severe Cold Zone

Authors: Yuang Guo, Dewancker Bart

Abstract:

The primary energy consumption of buildings throughout China was approximately 814 million tons of coal equivalent in 2014, which accounts for 19.12% of China's total primary energy consumption. Public buildings also take a bigger share of this total than urban and rural residential buildings. To reduce energy demand, various design parameters were chosen, and a series of simulations was performed with EnergyPlus (EP-Launch) using a base-case model established in OpenStudio. The results show that total energy demand reductions of 16%-23% can be achieved in the severe cold zone of China, and they can also provide a reference for the architectural design of other, similar climate zones.

Keywords: energy consumption, design parameters, indoor thermal comfort, simulation study, severe cold climate zone

Procedia PDF Downloads 156
1791 Simulation of a Fluid Catalytic Cracking Process

Authors: Sungho Kim, Dae Shik Kim, Jong Min Lee

Abstract:

The fluid catalytic cracking (FCC) process is one of the most important processes in the modern refining industry and is the focus of this paper. Because the FCC process is difficult to model well, owing to its nonlinearities and the various interactions between its process variables, rigorous process modeling of the whole FCC plant is required for control and plant-wide optimization. In this study, a process design for the FCC plant, including the riser reactor, main fractionator, and gas processing unit, was developed. The reactor model is based on a four-lump kinetic scheme. The main fractionator, gas processing unit, and other process units are designed to reproduce real plant data using the process flowsheet simulator Aspen Plus. The custom reactor model was integrated with the process flowsheet simulator to develop an integrated process model.
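
For orientation, a generic four-lump kinetic scheme (gas oil cracking to gasoline, light gas, and coke, with gasoline cracking further) can be written as a small set of ODEs and integrated along the riser residence time. The sketch below uses SciPy; the rate constants and reaction orders are illustrative assumptions, not the parameters of the paper's reactor model.

```python
# Sketch of a generic four-lump FCC kinetic scheme (gas oil -> gasoline, light gas, coke;
# gasoline -> light gas, coke), integrated over the riser residence time with SciPy.
# Rate constants and reaction orders are illustrative assumptions, not the paper's values.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2, k3 = 0.20, 0.04, 0.02   # gas oil -> gasoline, light gas, coke (2nd order)
k4, k5 = 0.010, 0.005           # gasoline -> light gas, coke (1st order)

def four_lump(t, y):
    gas_oil, gasoline, light_gas, coke = y
    crack = gas_oil ** 2
    return [
        -(k1 + k2 + k3) * crack,
        k1 * crack - (k4 + k5) * gasoline,
        k2 * crack + k4 * gasoline,
        k3 * crack + k5 * gasoline,
    ]

# Mass fractions: pure gas oil feed at the riser inlet.
y0 = [1.0, 0.0, 0.0, 0.0]
sol = solve_ivp(four_lump, t_span=(0.0, 10.0), y0=y0)

for name, value in zip(["gas oil", "gasoline", "light gas", "coke"], sol.y[:, -1]):
    print(f"{name:9s} mass fraction at riser outlet: {value:.3f}")
```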

Keywords: fluid catalytic cracking, simulation, plant data, process design

Procedia PDF Downloads 457
1790 Real-Time Nonintrusive Heart Rate Measurement: Comparative Case Study of LED Sensorics' Accuracy and Benefits in Heart Monitoring

Authors: Goran Begović

Abstract:

In recent years, many researchers have focused on non-intrusive methods for measuring human biosignals. These methods provide solutions for everyday use, whether for health monitoring or for fine-tuning a workout routine. One of the biggest issues with these solutions is that sensor accuracy is highly variable due to many factors, such as ambient light and skin color diversity. That is why we wanted to explore different outcomes under such circumstances in order to find the optimal algorithm(s) for extracting heart rate (HR) information. Optimizing such algorithms can enable wider, cheaper, and safer home health monitoring, reducing how often medical professionals must be visited to observe heart irregularities. In this study, we explored the accuracy of infrared (IR), red, and green LED sensors in a controlled environment and compared the results with a medically accurate ECG monitoring device.
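
A common way to extract heart rate from an LED (photoplethysmography) signal is band-pass filtering followed by peak detection, with HR taken from the mean inter-beat interval. The sketch below shows that pipeline on a synthetic signal in place of real sensor data; the filter band and peak-spacing parameters are illustrative assumptions, not the algorithms evaluated in the study.

```python
# Minimal HR-extraction sketch for an LED/PPG signal: band-pass filter, detect beats,
# convert the mean inter-beat interval to beats per minute.
# The synthetic signal and the filter/peak parameters are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                   # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)                 # 30 s recording
true_hr = 72.0                               # synthetic "ground truth" in bpm
ppg = np.sin(2 * np.pi * (true_hr / 60.0) * t)              # pulsatile component
ppg += 0.5 * np.sin(2 * np.pi * 0.2 * t)                    # baseline wander
ppg += 0.2 * np.random.default_rng(1).normal(size=t.size)   # sensor noise

# Band-pass around plausible heart rates (0.7-3.5 Hz, roughly 42-210 bpm).
b, a = butter(3, [0.7, 3.5], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, ppg)

# Peaks must be at least 0.3 s apart (max ~200 bpm).
peaks, _ = find_peaks(filtered, distance=int(0.3 * fs), height=0)
ibi = np.diff(peaks) / fs                    # inter-beat intervals in seconds
hr_bpm = 60.0 / ibi.mean()
print(f"estimated HR: {hr_bpm:.1f} bpm (synthetic truth: {true_hr} bpm)")
```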

Keywords: data science, ECG, heart rate, holter monitor, LED sensors

Procedia PDF Downloads 127
1789 Numerical Analysis of a Mechanism for the Morphology in the Extrados of an Airfoil

Authors: E. R. Jimenez Barron, M. Castillo Morales, D. F. Ramírez Morales

Abstract:

The study of morphology (shape change) in wings leads to the optimization of an aircraft's aerodynamic characteristics: developing and implementing a change in the structure and shape of an airfoil, in this case the extrados, helps increase the aerodynamic performance of an aircraft at different operating velocities, according to the required mission profile. This work continues a previous study on morphing in which the 'initial' profile is the NACA 4415 and the 'objective' profile is the FUSION. The objective of this work is the dimensioning of the elements of the mechanism used to achieve the required changes. We reviewed the different materials used in the aeronautics industry, as well as new materials in this area that could contribute to the good performance of the mechanism without negatively affecting the aerodynamics. These results allow the performance of a wing with a variable extrados to be evaluated with respect to the defined morphology.

Keywords: numerical analysis, mechanisms, morphing airfoil, morphing wings

Procedia PDF Downloads 237
1788 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on delineating Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
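
XGBoost handles missing feature values natively by learning a default split direction at each tree node, which is what makes it attractive for the roughly 28% missing entries reported here. The sketch below shows the general setup (multiclass objective, NaNs left in place, 10-fold cross-validation) on synthetic data; the feature matrix, labels, and hyperparameters are illustrative, not the ADNI data or the authors' configuration.

```python
# Sketch of multiclass classification with missing values left as NaN for XGBoost,
# evaluated with 10-fold cross-validation. Synthetic data stands in for the ADNI features.
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 1000, 20, 4     # classes: e.g. AD, CN, EMCI, LMCI

X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X[:, :5] += y[:, None] * 0.5                       # make some features informative

# Knock out ~28% of the entries; XGBoost routes NaNs along learned default directions.
mask = rng.random(X.shape) < 0.28
X[mask] = np.nan

clf = XGBClassifier(
    objective="multi:softprob",
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```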

Keywords: eXtreme gradient boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 188
1787 Mathematical Modeling for the Break-Even Point Problem in a Non-homogeneous System

Authors: Filipe Cardoso de Oliveira, Lino Marcos da Silva, Ademar Nogueira do Nascimento, Cristiano Hora de Oliveira Fontes

Abstract:

This article presents a mathematical formulation for the production break-even point problem in a non-homogeneous system. The optimization problem aims to obtain the best product mix in a non-homogeneous industrial plant, at the lowest cost, until the break-even point is reached. The problem constraints represent real limitations of a generic non-homogeneous industrial plant producing n different products. The proposed model is able to solve the break-even point problem simultaneously for all products, unlike existing approaches, which solve it sequentially, considering each product in isolation and providing a sub-optimal solution. The results indicate that the product mix found through the proposed model has economic advantages over the traditional approach.
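
The integer-linear-programming flavor of such a formulation can be illustrated with a tiny product-mix model: choose integer quantities for all products simultaneously so that total contribution just covers the fixed costs at minimum variable cost, subject to shared capacity. The sketch below uses PuLP with made-up prices, costs, and capacities; it is an illustration of the idea, not the authors' model.

```python
# Tiny illustrative break-even product-mix ILP: all products are decided simultaneously,
# total contribution must cover fixed costs, shared machine capacity is limited.
# Prices, costs, and capacities are made-up numbers, not the paper's data.
import pulp

products = ["A", "B", "C"]
price    = {"A": 12.0, "B": 20.0, "C": 9.0}    # unit selling price
var_cost = {"A": 7.0,  "B": 13.0, "C": 5.0}    # unit variable cost
hours    = {"A": 0.5,  "B": 1.2,  "C": 0.3}    # machine hours per unit
fixed_cost = 5000.0                            # plant fixed costs to be covered
capacity   = 900.0                             # available machine hours

model = pulp.LpProblem("break_even_mix", pulp.LpMinimize)
q = pulp.LpVariable.dicts("qty", products, lowBound=0, cat="Integer")

# Objective: reach break-even with the lowest total variable cost.
model += pulp.lpSum(var_cost[p] * q[p] for p in products)

# Break-even: total contribution margin covers the fixed costs.
model += pulp.lpSum((price[p] - var_cost[p]) * q[p] for p in products) >= fixed_cost
# Shared capacity constraint of the non-homogeneous plant.
model += pulp.lpSum(hours[p] * q[p] for p in products) <= capacity

model.solve(pulp.PULP_CBC_CMD(msg=False))
print("status:", pulp.LpStatus[model.status])
for p in products:
    print(f"  product {p}: {int(q[p].value())} units")
print("variable cost at break-even:", pulp.value(model.objective))
```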

Keywords: branch and bound, break-even point, non-homogeneous production system, integer linear programming, management accounting

Procedia PDF Downloads 211
1786 A Numerical Study of Seismic Effects on Slope Stability Using Node-Based Smooth Finite Element Method

Authors: H. C. Nguyen

Abstract:

This contribution considers seismic effects on the stability of slopes and of footings resting on a slope. The seismic force is treated simply as a static inertial force through the value of an acceleration factor. Plastic deformation over the whole domain is approximated using the node-based smoothed finite element method (NS-FEM). The failure mechanism and safety factor were then explored using a numerical procedure based on the upper-bound approach, in which the optimization problem was formulated as a second-order cone program (SOCP). The results confirm that the upper-bound procedure using NS-FEM and SOCP gives stable and rapidly converging estimates of seismic stability factors.

Keywords: upper bound analysis, safety factor, slope stability, footing resting on slope

Procedia PDF Downloads 117
1785 A High-Level Co-Evolutionary Hybrid Algorithm for the Multi-Objective Job Shop Scheduling Problem

Authors: Aydin Teymourifar, Gurkan Ozturk

Abstract:

In this paper, a hybrid distributed algorithm is proposed for the multi-objective job shop scheduling problem. Several new approaches are used in the design of the distributed algorithm. The co-evolutionary structure of the algorithm and the competition between the different communicating hybrid algorithms, which are executed simultaneously, lead to an efficient search. Distributing the algorithms across several machines, at both the iteration and solution levels, increases computational speed. The proposed algorithm is able to find the Pareto solutions of large problems in a shorter time than other algorithms in the literature. The Apache Spark and Hadoop platforms have been used to distribute the algorithm. The suggested algorithm and its implementations have been compared with the results of successful algorithms in the literature, and the results demonstrate the efficiency and high speed of the algorithm.

Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, multi-objective optimization

Procedia PDF Downloads 363
1784 TNF Receptor-Associated Factor 6 (TRAF6) Mediating the Angiotensin-Induced Non-Canonical TGFβ Pathway Activation and Differentiation of c-kit+ Cardiac Stem Cells

Authors: Qing Cao, Fei Wang, Yu-Qiang Wang, Li-Ya Huang, Tian-Tian Sang, Shu-Yan Chen

Abstract:

Aims: TNF Receptor-Associated Factor 6 (TRAF6) acts as a multifunctional regulator of the Transforming Growth Factor (TGF)-β signaling pathway and mediates Smad-independent JNK and p38 activation via TGF-β. This study was performed to test the hypothesis that TGF-β/TRAF6 is essential for angiotensin-II (Ang II)-induced differentiation of rat c-kit+ Cardiac Stem Cells (CSCs). Methods and Results: c-kit+ CSCs were isolated from neonatal Sprague Dawley (SD) rats, and their c-kit status was confirmed with immunofluorescence staining. A TGF-β type I receptor inhibitor (SB431542) or small interfering RNA (siRNA)-mediated knockdown of TRAF6 was used to investigate the role of TRAF6 in TGF-β signaling. Rescue of TRAF6 siRNA-transfected cells with a 3'UTR-deleted, siRNA-insensitive construct was conducted to rule out off-target effects of the siRNA. A TRAF6 dominant-negative (TRAF6DN) vector was constructed and used to infect c-kit+ CSCs, and western blotting was used to assess the expression of TRAF6, JNK, p38, cardiac-specific proteins, and Wnt signaling proteins. Physical interactions between TRAF6 and TGF-β receptors were studied by coimmunoprecipitation. Cardiac differentiation was suppressed in the absence of TRAF6. Forced expression of TRAF6 enhanced the expression of TGF-β-activated kinase 1 (TAK1) and inhibited Wnt signaling. Furthermore, TRAF6 increased the expression of the cardiac-specific proteins cTnT and Cx-43 but inhibited the expression of Wnt3a. Conclusions: Our data suggest that TRAF6 plays an important role in Ang II-induced differentiation of c-kit+ CSCs via the non-canonical signaling pathway.

Keywords: cardiac stem cells, differentiation, TGF-β, TRAF6, ubiquitination, Wnt

Procedia PDF Downloads 401
1783 Effective Scheduling of Hybrid Reconfigurable Microgrids Considering High Penetration of Renewable Sources

Authors: Abdollah Kavousi Fard

Abstract:

This paper addresses the optimal scheduling of hybrid reconfigurable microgrids considering hybrid electric vehicle charging demands. A stochastic framework based on the unscented transform is proposed to model the high uncertainties of renewable energy sources, including wind turbines and photovoltaic panels, as well as the hybrid electric vehicles' charging demand. To reach the optimal schedule, network reconfiguration is employed as an effective tool for changing the power supply path and avoiding possible congestion. The simulation results are analyzed and discussed for three different scenarios: coordinated, uncoordinated, and smart charging of hybrid electric vehicles. A typical grid-connected microgrid is employed to show the satisfactory performance of the proposed method.
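
The unscented transform propagates a small, deterministically chosen set of sigma points through a nonlinear function and recombines them into a mean and variance, which is how a framework like the one above can capture wind, PV, and charging-demand uncertainty without Monte Carlo sampling. Below is a minimal one-dimensional sketch with a made-up nonlinear power-output function; the scaling parameters and the example function are illustrative assumptions.

```python
# Minimal 1-D unscented transform sketch: propagate sigma points of an uncertain input
# (e.g. wind speed) through a nonlinear mapping (e.g. a turbine power curve) and recover
# the output mean and variance. Scaling parameters and the power curve are illustrative.
import numpy as np

def unscented_transform_1d(mean, var, func, alpha=1.0, beta=2.0, kappa=0.0):
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = np.sqrt((n + lam) * var)

    sigma_points = np.array([mean, mean + spread, mean - spread])
    wm = np.full(3, 1.0 / (2 * (n + lam)))       # weights for the mean
    wc = wm.copy()                               # weights for the variance
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1 - alpha ** 2 + beta)

    y = func(sigma_points)                       # propagate through the nonlinearity
    y_mean = np.dot(wm, y)
    y_var = np.dot(wc, (y - y_mean) ** 2)
    return y_mean, y_var

# Illustrative "power curve": cubic in wind speed, capped at a rated power.
def power_curve(v):
    return np.minimum(0.5 * v ** 3, 200.0)

mean_speed, var_speed = 7.0, 1.5 ** 2            # uncertain forecast (m/s)
p_mean, p_var = unscented_transform_1d(mean_speed, var_speed, power_curve)
print(f"expected power ~ {p_mean:.1f} kW, std ~ {np.sqrt(p_var):.1f} kW")
```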

Keywords: microgrid, renewable energy sources, reconfiguration, optimization

Procedia PDF Downloads 272
1782 Numerical Modeling for Water Engineering and Obstacle Theory

Authors: Mounir Adal, Baalal Azeddine, Afifi Moulay Larbi

Abstract:

Numerical analysis is a branch of mathematics devoted to the development of iterative matrix calculation techniques. Our objective is to optimize these operations so that systems of equations of order n can be solved with savings in time and energy on the computers that analyze large amounts of data by solving matrix equations. Furthermore, this discipline produces approximate results with a characterized margin of error (convergence and error rates). Thus, the results obtained from numerical analysis techniques run in software such as MATLAB or Simulink offer a preliminary diagnosis of the situation of the environment or of the target space, and from them we can provide the technical procedures needed for engineering or scientific studies that water engineers can exploit.
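
A concrete example of the iterative matrix techniques referred to above is the Jacobi method, which repeatedly refines an approximate solution of Ax = b and stops once the update falls below a tolerance. A short sketch is given below; MATLAB and Simulink are named in the abstract, but a Python illustration is used here for self-containment.

```python
# Jacobi iteration: an example of the iterative matrix techniques discussed above.
# Solves A x = b approximately, stopping when the update is smaller than a tolerance.
import numpy as np

def jacobi(A, b, tol=1e-8, max_iter=500):
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    D = np.diag(A)                       # diagonal of A
    R = A - np.diagflat(D)               # off-diagonal part
    for iteration in range(max_iter):
        x_new = (b - R @ x) / D          # one Jacobi sweep
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new, iteration + 1
        x = x_new
    return x, max_iter

# Diagonally dominant test system (guarantees convergence of the Jacobi method).
A = [[10.0, -1.0, 2.0],
     [-1.0, 11.0, -1.0],
     [2.0, -1.0, 10.0]]
b = [6.0, 25.0, -11.0]

x, n_iter = jacobi(A, b)
print(f"solution {x} found in {n_iter} iterations")
print("residual norm:", np.linalg.norm(np.array(A) @ x - np.array(b)))
```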

Keywords: numerical analysis methods, obstacles solving, engineering, simulation, numerical modeling, iteration, computer, MATLAB, water, underground, velocity

Procedia PDF Downloads 463
1781 Motion Estimator Architecture with Optimized Number of Processing Elements for High Efficiency Video Coding

Authors: Seongsoo Lee

Abstract:

Motion estimation accounts for the heaviest computation in HEVC (high efficiency video coding). Many fast algorithms, such as TZS (test zone search), have been proposed to reduce this computation. Still, the huge computational load of motion estimation is a critical issue in the implementation of an HEVC video codec. In this paper, a motion estimator architecture with an optimized number of PEs (processing elements) is presented, exploiting early termination. It also reduces hardware size by exploiting parallel processing. The presented motion estimator architecture has 8 PEs, and it can efficiently perform TZS with very high utilization of the PEs.
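
The early termination exploited by the architecture can be illustrated in software: while accumulating the SAD of a candidate block, the partial sum is compared against the best SAD found so far and the candidate is abandoned as soon as it exceeds it. The toy full-search sketch below shows that idea; the block size, search range, and synthetic frames are illustrative assumptions, not the paper's TZS schedule or hardware.

```python
# Toy block-matching sketch illustrating SAD-based early termination: a candidate is
# abandoned as soon as its partial SAD exceeds the best SAD found so far.
# Block size, search range, and the synthetic frames are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, B, R = 64, 64, 8, 4            # frame size, block size, search range

ref = rng.integers(0, 256, size=(H, W)).astype(np.int64)
cur = np.roll(ref, shift=(2, -1), axis=(0, 1))   # current frame = shifted reference

def sad_early_term(block, cand, best_so_far):
    """Row-wise SAD with early termination against the current best."""
    total = 0
    for row in range(block.shape[0]):
        total += int(np.abs(block[row] - cand[row]).sum())
        if total >= best_so_far:                 # no point finishing this candidate
            return None
    return total

def motion_search(cur, ref, by, bx):
    block = cur[by:by + B, bx:bx + B]
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-R, R + 1):
        for dx in range(-R, R + 1):
            y, x = by + dy, bx + dx
            if not (0 <= y <= H - B and 0 <= x <= W - B):
                continue
            sad = sad_early_term(block, ref[y:y + B, x:x + B], best_sad)
            if sad is not None and sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

mv, sad = motion_search(cur, ref, by=16, bx=16)
print(f"best motion vector {mv}, SAD {sad}")     # expect (-2, 1) for this toy shift
```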

Keywords: motion estimation, test zone search, high efficiency video coding, processing element, optimization

Procedia PDF Downloads 363
1780 Code Embedding for Software Vulnerability Discovery Based on Semantic Information

Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson

Abstract:

Deep learning methods are seeing increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graph representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have recently received some use in this task; however, for very large graphs representing very large code snippets, learning becomes prohibitively expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select the features which are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code-graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.

Keywords: code representation, deep learning, source code semantics, vulnerability discovery

Procedia PDF Downloads 159
1779 Identification of Spam Keywords Using Hierarchical Category in C2C E-Commerce

Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park

Abstract:

Consumer-to-Consumer (C2C) e-commerce has been growing at a very high speed in recent years. Since identical or nearly identical products compete with one another through keyword search in C2C e-commerce, some sellers describe their products with spam keywords that are popular but not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services such as eBay and Taobao, but there has been little research on solving it. As a solution, this paper proposes a method to classify whether a product's keywords are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is commonly observed in the specifications of products that are the same as, or of the same kind as, the given product. This is because a product's hierarchical category is, in general, determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined separately for each layer. Hence, reliability degrees from the different layers of a hierarchical category become features for keywords, and they are used together with features from the specifications alone for classification of the keywords. A Support Vector Machine is adopted as the base classifier using these features, since it is powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated on a gold-standard dataset from Yi-han-wang, a Chinese C2C e-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified by means of the hierarchical category in C2C e-commerce.
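
The classification step can be sketched directly: each keyword is described by reliability features (how often it appears in the specifications of products of the same category, computed at each layer of the hierarchy) plus specification-only features, and an SVM separates spam from legitimate keywords. The toy example below, with made-up keywords, features, and labels, shows the shape of that pipeline; it is not the authors' dataset or feature set.

```python
# Toy sketch: classify product keywords as spam/legitimate with an SVM, using
# layer-wise reliability features (keyword frequency in same-category product specs)
# plus a specification-only feature. Keywords, categories, and labels are made up.
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# (keyword, frequency in leaf-category specs, frequency in parent-category specs,
#  appears in this product's own specification?, is_spam label)
samples = [
    ("usb cable", 0.90, 0.80, 1, 0),
    ("charger",   0.70, 0.65, 1, 0),
    ("iphone",    0.05, 0.10, 0, 1),   # popular but unrelated keyword
    ("free gift", 0.02, 0.05, 0, 1),
    ("hdmi",      0.60, 0.50, 1, 0),
    ("adapter",   0.75, 0.70, 1, 0),
    ("samsung",   0.04, 0.12, 0, 1),
    ("discount",  0.03, 0.06, 0, 1),
    ("braided",   0.40, 0.30, 1, 0),
    ("wholesale", 0.06, 0.08, 0, 1),
]
X = [[leaf, parent, in_spec] for _, leaf, parent, in_spec, _ in samples]
y = [label for *_, label in samples]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("F1 on held-out keywords:", f1_score(y_te, clf.predict(X_te)))
```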

Keywords: spam keyword, e-commerce, keyword features, spam filtering

Procedia PDF Downloads 294
1778 A Comparative Study on a Tilt-Integral-Derivative Controller with Proportional-Integral-Derivative Controller for a Pacemaker

Authors: Aysan Esgandanian, Sabalan Daneshvar

Abstract:

This study compares a proportional-integral-derivative (PID) controller with a tilt-integral-derivative (TID) controller for cardiac pacemaker systems, which can automatically control the heart rate to accurately track a desired preset profile. The controller offers good adaptation of the heart to the physiological needs of the patient. The parameters of both controllers are tuned by the particle swarm optimization (PSO) algorithm, which uses the integral of time square error (ITSE) as the fitness function to be minimized. Simulations are performed on the developed model of the human cardiovascular system, and the results demonstrate that the TID controller gives better control performance than the PID controller. All simulations were performed in MATLAB.
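
The tuning loop described above can be sketched in a few dozen lines: each PSO particle encodes controller gains, the closed loop is simulated, and the ITSE is the fitness to be minimized. The sketch below tunes a plain PID on a simple first-order plant standing in for the cardiovascular model; the plant, parameter bounds, and PSO settings are illustrative assumptions, and MATLAB (used in the paper) is replaced by Python for self-containment.

```python
# Sketch of PSO-based PID tuning with the integral of time-squared error (ITSE) as fitness.
# A first-order plant stands in for the cardiovascular model; plant, bounds, and PSO
# settings are illustrative assumptions, and Python replaces the MATLAB of the paper.
import numpy as np

rng = np.random.default_rng(0)
dt, t_end, setpoint = 0.01, 10.0, 1.0
time = np.arange(0.0, t_end, dt)

def itse_of_gains(gains):
    """Simulate a PID loop around a first-order plant and return the ITSE."""
    kp, ki, kd = gains
    x, integ, prev_err, itse = 0.0, 0.0, setpoint, 0.0
    for t in time:
        err = setpoint - x
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        x += dt * (-x + u) / 0.5                  # plant: tau * dx/dt = -x + u, tau = 0.5
        if abs(x) > 1e6:                          # unstable gains: penalize and stop early
            return np.inf
        itse += t * err ** 2 * dt
        prev_err = err
    return itse

# Plain PSO over (Kp, Ki, Kd) within illustrative bounds.
n_particles, n_iters, dim = 20, 40, 3
lo, hi = np.array([0.0, 0.0, 0.0]), np.array([10.0, 10.0, 1.0])
pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([itse_of_gains(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([itse_of_gains(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best gains (Kp, Ki, Kd):", np.round(gbest, 3), " ITSE:", round(min(pbest_val), 5))
```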

Keywords: integral of time square error, pacemaker systems, proportional-integral-derivative controller, PSO algorithm, tilt-integral-derivative controller

Procedia PDF Downloads 463
1777 Logistics Hub Location and Scheduling Model for Urban Last-Mile Deliveries

Authors: Anastasios Charisis, Evangelos Kaisar, Steven Spana, Lili Du

Abstract:

Logistics plays a vital role in the prosperity of today's cities, but current urban logistics practices are proving problematic, causing negative effects such as traffic congestion and environmental impacts. This paper proposes an alternative urban logistics system in which hubs inside cities are leased for designated time intervals and handcarts are used for last-mile deliveries. A mathematical model is developed for selecting the locations of the hubs and allocating customers, while also scheduling the optimal times during the day for leasing the hubs. The proposed model is compared to current delivery methods that require door-to-door truck deliveries, and it is shown that truck travel distances decrease by more than 60%. In addition, the analysis shows that under certain conditions the approach can be economically competitive and successfully applied to real problems.
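
At its core, such a model pairs a facility-location decision (which candidate hubs to lease) with an assignment of customers to open hubs, trading leasing cost against handcart travel distance. A brute-force enumeration of that core, small enough to solve exactly with the standard library, is sketched below; the coordinates, costs, and the single-period simplification are illustrative assumptions, not the paper's full time-scheduled formulation.

```python
# Brute-force sketch of the core hub-location/customer-allocation trade-off:
# choose which candidate hubs to lease and assign each customer to its nearest open hub,
# minimizing leasing cost plus handcart travel distance. Data and the single-period
# simplification are illustrative; the paper's model also schedules leasing times.
from itertools import combinations
from math import dist, inf

hubs = {"H1": (1.0, 1.0), "H2": (4.0, 2.0), "H3": (2.5, 4.5)}   # candidate hub sites
lease_cost = {"H1": 60.0, "H2": 45.0, "H3": 50.0}               # cost of leasing a hub
customers = [(0.5, 1.5), (1.2, 0.8), (3.8, 2.4), (4.5, 1.5), (2.0, 4.0), (3.0, 4.8)]
cost_per_km = 10.0                                              # handcart delivery cost

best_cost, best_plan = inf, None
for k in range(1, len(hubs) + 1):
    for open_hubs in combinations(hubs, k):
        # Assign every customer to the nearest open hub.
        travel = sum(min(dist(c, hubs[h]) for h in open_hubs) for c in customers)
        total = sum(lease_cost[h] for h in open_hubs) + cost_per_km * travel
        if total < best_cost:
            best_cost, best_plan = total, open_hubs

print("hubs to lease:", best_plan)
print("total cost (lease + handcart travel):", round(best_cost, 2))
```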

Keywords: hub location, last-mile, logistics, optimization

Procedia PDF Downloads 194
1776 Field Saturation Flow Measurement Using Dynamic Passenger Car Unit under Mixed Traffic Condition

Authors: Ramesh Chandra Majhi

Abstract:

Saturation flow is a very important input variable for the design of signalized intersections. Saturation flow measurement is well established for homogeneous traffic. However, saturation flow measurement and modeling are challenging tasks in heterogeneous traffic, which is characterized by multiple vehicle types and non-lane-based movement. The present study proposes a field procedure for saturation flow measurement and examines the effect of typical mixed-traffic behavior at the signal as far as non-lane-based traffic movement is concerned. Data collected during peak and off-peak hours from five intersections with varying approach widths are used to validate the saturation flow model. The insights from the study can be used for modeling saturation flow and delay at signalized intersections under heterogeneous traffic conditions.
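
In passenger car unit (PCU) terms, the basic field arithmetic is to convert the vehicles discharged during the saturated portion of the green into PCUs and divide by that saturated time. A small worked sketch of this arithmetic is given below; the vehicle counts and PCU factors are illustrative numbers, not the study's calibrated dynamic PCU values.

```python
# Worked sketch: saturation flow from classified counts discharged during the saturated
# green period, converted with passenger car unit (PCU) factors. The counts and PCU
# factors below are illustrative, not the study's calibrated dynamic PCU values.
saturated_green_s = 30.0                       # saturated portion of the green (seconds)

# Vehicles discharged during that period, by class.
counts = {"car": 12, "motorcycle": 18, "auto_rickshaw": 5, "bus": 1, "truck": 1}
pcu = {"car": 1.0, "motorcycle": 0.3, "auto_rickshaw": 0.8, "bus": 2.2, "truck": 2.5}

discharged_pcu = sum(counts[v] * pcu[v] for v in counts)
saturation_flow = discharged_pcu * 3600.0 / saturated_green_s   # PCU per hour of green

print(f"discharged volume: {discharged_pcu:.1f} PCU in {saturated_green_s:.0f} s")
print(f"saturation flow  : {saturation_flow:.0f} PCU/h of green per approach")
```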

Keywords: optimization, passenger car unit, saturation flow, signalized intersection

Procedia PDF Downloads 327
1775 Model Based Development of a Processing Map for Friction Stir Welding of AA7075

Authors: Elizabeth Hoyos, Hernán Alvarez, Diana Lopez, Yesid Montoya

Abstract:

The main goal of this research is to model FSW from a different, mechanical-engineering perspective, in particular looking for a way to establish process windows by prioritizing the assessment of joint soundness, with the added advantage of lower computational time. This paper presents the use of a previously developed model applied to specific aspects of the soundness evaluation of AA7075 FSW welds. The EMSO software (Environment for Modeling, Simulation, and Optimization) was used for simulation, and an adapted CNC machine was used for the actual welding. This model-based approach showed good agreement with the experimental data, from which it is possible to set an operating window for the commercial aluminum alloy AA7075, all at low computational cost and using simple quality indicators that can be used by those not specialized in process modeling.

Keywords: aluminum AA7075, friction stir welding, phenomenological based semiphysical model, processing map

Procedia PDF Downloads 261
1774 Rail Corridors between Minimal Use of Train and Unsystematic Tightening of Population: A Methodological Essay

Authors: A. Benaiche

Abstract:

In the current situation, the automobile has become the main means of locomotion. It allows long distances to be traveled and thus encourages urban sprawl. To counteract this trend, the train is often proposed as an alternative to the car. At the same time, favoring urban development around public transport nodes such as railway stations is one of the main issues in coordinating urban planning and transportation, and it is the keystone of implementing sustainable urban development. In this context, this paper focuses on the dynamics of spatial structuring around the railway. Specifically, it studies the demographic dynamics in the rail corridors of Nantes, Angers, and Le Mans (western France), based on the area of influence of the railway stations. The methodology therefore concentrates on the demographic weight and gains of these corridors, the index of urban intensity, and mobility behavior (commuting trips, school trips, and modal practices). The perimeter used to define the rail corridors includes the communes of the urban area that have a railway station and the communes whose access time to the railway station is less than fifteen minutes by car (the time specified by the Regional Transport Scheme for Travelers). The main tools used are statistical data from the population census, detailed tables, and databases on mobility flows. The study reveals that the population is not concentrated along the rail corridors and that train use is minimal despite the presence of a nearby railway station. These results lead to guidelines for making the train a real vector of mobility across the rail corridors.

Keywords: coordination between urban planning and transportation, rail corridors, railway stations, travels

Procedia PDF Downloads 244
1773 Optimization of Waqf Land through Sukuk Al-Intifa’ to Build MSMEs in Indonesia

Authors: Khadijah Hasim, Achmad Fauzan Firdaus, Choirunnisa

Abstract:

Waqf land, which was previously an idle asset, can have a building constructed on it that serves as a means for people to conduct business. The nadzir (waqf manager) leases out the waqf land it manages, with the agreed rental fee payable in the form of the building rather than in cash. Once the building stands, the developer leases it to interested companies. Given the magnitude of the initial funds needed, the company then issues sukuk al-intifa' on the trading floor. With this sukuk issuance, the company has sufficient capital to begin operations and to pay its rental fee obligations to the developer each year. In the trade area of the completed building, micro, small, and medium enterprises (MSMEs) will be established. It is expected that, through sukuk al-intifa', waqf land that was previously unproductive due to lack of capital can become highly beneficial and help revive Indonesian MSMEs.

Keywords: Sukuk Al-Intifa, MSMEs, waqf land, underlying asset

Procedia PDF Downloads 471
1772 Studies on Optimization of Batch Biosorption of Cr (VI) and Cu (II) from Wastewater Using Bacillus subtilis

Authors: Narasimhulu Korrapati

Abstract:

The objective of the present study is to optimize the process parameters for the batch biosorption of Cr(VI) and Cu(II) ions by Bacillus subtilis using Response Surface Methodology (RSM). Batch biosorption studies were conducted at the optimum pH, temperature, biomass concentration, and contact time for the removal of Cr(VI) and Cu(II) ions using Bacillus subtilis. The studies show that the maximum biosorption of Cr(VI) and Cu(II) by Bacillus subtilis occurred at the optimum conditions of a contact time of 30 minutes, pH 4.0, a biomass concentration of 2.0 mg/mL, and a temperature of 32°C. The percent biosorption of the selected heavy metal ions predicted by the Design-Expert software is in agreement with the experimental results. The percent biosorption of Cr(VI) and Cu(II) in the batch studies is 80% and 78.4%, respectively.
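
Response Surface Methodology fits a second-order polynomial in the process variables (pH, temperature, biomass concentration, contact time) to the measured percent biosorption and then locates the optimum on that fitted surface. The sketch below shows that workflow on synthetic data with scikit-learn; the data points are invented for illustration and are not the study's design matrix.

```python
# RSM-style sketch: fit a second-order (quadratic) response surface for percent biosorption
# as a function of pH, temperature, biomass concentration, and contact time, then pick the
# best point on a coarse grid. The data below are synthetic, not the study's design matrix.
import numpy as np
from itertools import product
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic "experiments": columns are pH, temperature (C), biomass (mg/mL), time (min).
X = rng.uniform([2.0, 25.0, 0.5, 10.0], [6.0, 40.0, 3.0, 60.0], size=(30, 4))

def true_response(x):   # hidden surface peaking near pH 4, 32 C, 2 mg/mL, 30 min
    ph, temp, biomass, time = x.T
    return (80 - 6 * (ph - 4.0) ** 2 - 0.2 * (temp - 32.0) ** 2
            - 8 * (biomass - 2.0) ** 2 - 0.02 * (time - 30.0) ** 2)

y = true_response(X) + rng.normal(0, 1.0, size=len(X))   # measured % biosorption + noise

# Second-order polynomial model, as in RSM.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(X, y)

# Locate the optimum of the fitted surface on a coarse grid of factor levels.
grid = np.array(list(product(np.linspace(2, 6, 9), np.linspace(25, 40, 7),
                             np.linspace(0.5, 3, 6), np.linspace(10, 60, 6))))
pred = model.predict(grid)
best = grid[np.argmax(pred)]
print("predicted optimum (pH, temp, biomass, time):", np.round(best, 2))
print("predicted % biosorption at optimum:", round(float(pred.max()), 1))
```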

Keywords: heavy metal ions, response surface methodology, biosorption, wastewater

Procedia PDF Downloads 274
1771 Optimization of the Energy Management for a Solar System of an Agricultural Greenhouse

Authors: Nora Arbaoui, Rachid Tadili, Ilham Ihoume

Abstract:

To improve the climatic conditions and increase production in a greenhouse during the winter season under a Mediterranean climate, this thesis project proposes the design of an integrated, autonomous solar system for heating, cooling, and conservation of production in an agricultural greenhouse. To study the effectiveness of this system, experiments are conducted in two similar agricultural greenhouses oriented north-south. The first greenhouse is equipped with an active solar system integrated into the double glazing of the greenhouse's roof, while the second has no system and serves as a control greenhouse for comparing thermal and agronomic performance. The solar system allowed an average increase in the indoor temperature of the experimental greenhouse of 6°C compared to the outdoor environment and 4°C compared to the control greenhouse. This improvement in temperature has a favorable effect on the plants' climate and, in turn, positively affects their development, quality, and production.

Keywords: solar system, agricultural greenhouse, heating, cooling, storage, drying

Procedia PDF Downloads 100
1770 Formulation Development and Characterization of Oligonucleotide Containing Chitosan Nanoparticles

Authors: Gyati Shilakari Asthana, Abhay Asthana

Abstract:

Purpose: The therapeutic potential of oligonucleotides (ODNs) is primarily dependent upon their safe and efficient delivery to specific cells, overcoming degradation and maximizing cellular uptake in vivo. The present study focuses on designing low-molecular-weight (LMW) chitosan nanoconstructs to meet the requirements of safe and effective delivery of ODNs. LMW chitosan is a biodegradable, water-soluble, biocompatible polymer and is useful as a non-viral vector for gene delivery due to its better stability in water. Methods: LMW chitosan ODN nanoparticles (CHODN NPs) were formulated by a self-assembly method using various N/P ratios of CH to ODN (molar ratio of the amine groups of CH to the phosphate moieties of ODN: 0.5:1, 1:1, 3:1, 5:1, and 7:1). The developed CHODN NPs were evaluated by gel retardation assay and with respect to particle size, zeta potential, cytotoxicity, and transfection efficiency. Results: Complete complexation of CH/ODN was achieved at a charge ratio of 0.5:1 or above, and the CHODN NPs displayed resistance against DNase I. On increasing the N/P ratio of CH/ODN, the particle size of the NPs decreased whereas the zeta potential increased. No significant toxicity was observed at any CH concentration. Transfection efficiency increased as the N/P ratio rose from 1:1 to 3:1, whereas it decreased with further increases in the N/P ratio up to 7:1. Maximum transfection of CHODN NPs in both cell lines (Raw 267.4 cells and HeLa cells) was achieved at an N/P ratio of 3:1. The results suggest that the transfection efficiency of CHODN NPs is dependent on the N/P ratio. Conclusion: The present study indicates that LMW chitosan nanoparticulate carriers would be an acceptable choice for improving transfection efficiency in vitro as well as for in vivo delivery of oligonucleotides.

Keywords: LMW-chitosan, chitosan nanoparticles, biocompatibility, cytotoxicity study, transfection efficiency, oligonucleotide

Procedia PDF Downloads 493
1769 A Heuristic Approach for the General Flowshop Scheduling Problem to Minimize the Makespan

Authors: Mohsen Ziaee

Abstract:

Almost all existing research on flowshop scheduling problems focuses on permutation schedules, and there is insufficient study dedicated to the general flowshop scheduling problem in the literature, since modeling and solving the general flowshop scheduling problem is more difficult than the permutation case, especially for large problem instances. This paper considers the general flowshop scheduling problem with the makespan objective (F//Cmax). We first find the optimal solution of the problem by solving a mixed integer linear programming model. An efficient heuristic method is then presented to solve the problem, and an ant colony optimization algorithm is also proposed. In order to evaluate the performance of the methods, computational experiments are designed and performed. Numerical results show that the heuristic algorithm produces reasonable solutions with low computational effort and even achieves optimal solutions in some cases.
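
As a baseline for such heuristics, the makespan of a given job order in a flowshop can be computed with the standard completion-time recursion C[j][m] = max(C[j-1][m], C[j][m-1]) + p[j][m]. The sketch below evaluates that recursion for the natural job order and a simple shortest-total-processing-time order; the processing times are random examples, and the paper's general (non-permutation) flowshop would additionally allow different job orders on different machines.

```python
# Makespan of a permutation flowshop schedule via the standard completion-time recursion
#   C[j][m] = max(C[j-1][m], C[j][m-1]) + p[job_j][m].
# Processing times are random examples; the paper's general flowshop also allows the job
# order to differ between machines, which this baseline does not model.
import random

random.seed(0)
n_jobs, n_machines = 6, 3
p = [[random.randint(1, 9) for _ in range(n_machines)] for _ in range(n_jobs)]

def makespan(order, p):
    n_machines = len(p[0])
    completion = [0] * n_machines              # completion time of the last job per machine
    for job in order:
        for m in range(n_machines):
            ready = completion[m] if m == 0 else max(completion[m], completion[m - 1])
            completion[m] = ready + p[job][m]
    return completion[-1]

natural_order = list(range(n_jobs))
spt_order = sorted(range(n_jobs), key=lambda j: sum(p[j]))   # simple SPT-style ordering

print("processing times:", p)
print("makespan, natural order:", makespan(natural_order, p))
print("makespan, SPT order    :", makespan(spt_order, p))
```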

Keywords: scheduling, general flow shop scheduling problem, makespan, heuristic

Procedia PDF Downloads 207
1768 Discrete Breeding Swarm for Cost Minimization of Parallel Job Shop Scheduling Problem

Authors: Tarek Aboueldahab, Hanan Farag

Abstract:

The parallel job shop scheduling problem (JSP) is a multi-objective, multi-constraint NP-hard optimization problem. Traditional artificial intelligence techniques have been widely used; however, they can become trapped in a local minimum without reaching the optimal solution, so we propose a hybrid artificial intelligence (AI) model in which Discrete Breeding Swarm (DBS) is added to traditional AI techniques to avoid this trapping. The model is applied to cost minimization of the Car Sequencing and Operator Allocation (CSOA) problem. Practical experiments show that our model outperforms other techniques in cost minimization.

Keywords: parallel job shop scheduling problem, artificial intelligence, discrete breeding swarm, car sequencing and operator allocation, cost minimization

Procedia PDF Downloads 188
1767 Biogas Separation, Alcohol Amine Solutions

Authors: Jingxiao Liang, David Rooneyman

Abstract:

Biogas, which is a valuable renewable energy source, can be produced by the anaerobic fermentation of agricultural waste, manure, municipal waste, plant material, sewage, green waste, or food waste. It is composed of methane (CH4) and carbon dioxide (CO2) but also contains significant quantities of undesirable compounds such as hydrogen sulfide (H2S), ammonia (NH3), and siloxanes. Typical raw biogas contains 25–45% CO2, and the requirements for biogas quality depend on its further application; before biogas can be used more efficiently, the CO2 should be removed. One of the existing options for biogas separation is based on chemical absorbents, in particular mono-, di-, and tri-alcohol amine solutions. Such amine solutions have been applied as highly efficient CO2-capturing agents. The benchmark in this experiment is N-methyldiethanolamine (MDEA) with piperazine (PZ) as an activator; from the CO2 absorption isotherm curve, optimization conditions such as the activator percentage and temperature are collected. This experiment produces new alcohol amines, using glycidol as one of the reactants, which could have the same CO2-absorbing ability as activated MDEA; the results are quite satisfactory.

Keywords: biogas, CO2, MDEA, separation

Procedia PDF Downloads 634