Search results for: transmission optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5126

3536 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon

Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison

Abstract:

Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations that lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on preventing the deposition of wax and asphaltene precipitates on the inner surface of pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely, ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component in this inhibitor so as to maximize the viscosity reduction of the crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to build an optimization model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model indicated that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.
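
To make the RSM stage concrete, the following is a minimal sketch (not the paper's code or data) of fitting a Scheffé quadratic mixture model to composition-response measurements and maximizing it over the EVA/MCH/TOL simplex; all response values below are hypothetical placeholders.

```python
# Sketch of an RSM mixture-design step; the data points are hypothetical.
import numpy as np
from scipy.optimize import minimize

# fractions of EVA, MCH, TOL (sum to 1) and measured viscosity reduction (%)
X = np.array([[0.50, 0.25, 0.25],
              [0.60, 0.20, 0.20],
              [0.40, 0.30, 0.30],
              [0.70, 0.15, 0.15],
              [0.55, 0.25, 0.20],
              [0.45, 0.20, 0.35]])
y = np.array([67.0, 62.0, 58.0, 55.0, 63.0, 57.0])  # hypothetical responses

def scheffe_terms(x):
    e, m, t = x
    # quadratic mixture model: linear blending + pairwise interaction terms
    return np.array([e, m, t, e*m, e*t, m*t])

A = np.vstack([scheffe_terms(x) for x in X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares fit

# maximize predicted reduction subject to the mixture (simplex) constraint
res = minimize(lambda x: -scheffe_terms(x) @ beta,
               x0=[1/3, 1/3, 1/3],
               bounds=[(0, 1)] * 3,
               constraints={'type': 'eq', 'fun': lambda x: x.sum() - 1})
print("optimal composition (EVA, MCH, TOL):", res.x.round(3))
print("predicted viscosity reduction: %.1f%%" % (-res.fun))
```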

Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax

Procedia PDF Downloads 415
3535 The AI Method and System for Analyzing Wound Status in Wound Care Nursing

Authors: Ho-Hsin Lee, Yue-Min Jiang, Shu-Hui Tsai, Jian-Ren Chen, Mei-Yu Xu, Wen-Tien Wu

Abstract:

This project presents an AI-based method and system for wound status analysis. The system uses a three-in-one sensor device to analyze wound status, combining color, temperature, and a 3D sensor to provide wound information up to 2 mm below the surface, such as redness, heat, and blood circulation information. The system has a 90% accuracy rate, requiring at most one manual correction in 70% of cases, with a one-second delay. The system also provides an offline application that allows manual correction of the wound bed range: using color-based guidance, it estimates wound bed size with 96% accuracy and needs at most one manual correction in 96% of cases, again with a one-second delay. Additionally, AI-assisted wound bed range selection handles 100% of cases without manual intervention at an accuracy rate of 76%, while AI-based wound tissue type classification achieves an 85.3% accuracy rate across five categories. The AI system also includes similar-case search and expert recommendation capabilities. For AI-assisted wound range selection, the system uses WiFi 6 technology, increasing data transmission speeds by 22 times. The project aims to save up to 64% of the time required for manual wound record keeping and to reduce the estimated time to assess wound status by 96%, with an 80% accuracy rate. Overall, the proposed AI method and system integrate multiple sensors to provide accurate wound information and offer offline and online AI-assisted wound bed size estimation and wound tissue type classification. The system decreases delay time to one second, reduces the number of manual corrections required, saves time on wound record keeping, and increases data transmission speed, all of which have the potential to significantly improve the efficiency and accuracy of wound care and management.

Keywords: wound status analysis, AI-based system, multi-sensor integration, color-based guidance

Procedia PDF Downloads 115
3534 Preparation and Sealing of Polymer Microchannels Using EB Lithography and Laser Welding

Authors: Ian Jones, Jonathan Griffiths

Abstract:

Laser welding offers the potential for making very precise joints in plastic products, both in terms of the joint location and the amount of heating applied. These methods have allowed the production of complex products such as microfluidic devices, where channel and structure resolution below 100 µm is regularly required. However, to date, the dimension of welds made using lasers has been limited by the achievable focal spot size of the laser source. Theoretically, the minimum spot size possible from a laser is comparable to the wavelength of the radiation emitted. Practically, with reasonable focal length optics, the spot size achievable is a few factors larger than this, and the melt zone in a plastic weld is larger again. The narrowest welds feasible to date have therefore been 10-20 µm wide using a near-infrared laser source. The aim of this work was to prepare laser absorber tracks and channels less than 10 µm wide in PMMA thermoplastic using EB lithography, followed by sealing of the channels using laser welding, to produce welds with widths of the order of 1 µm, below the resolution limit of the near-infrared laser used. Welded joints with a width of 1 µm have been achieved, as well as channels with a width of 5 µm. The procedure was based on the principle of transmission laser welding using a thin coating of infrared-absorbent material at the joint interface. The coating was patterned using electron-beam lithography to obtain the required resolution in a reproducible manner, and that resolution was retained after the transmission laser welding process. The joint strength was verified using larger-scale samples. The results demonstrate that plastic products could be made with a high density of structure with resolution below 1 µm, and that welding can be applied without excessively heating regions beyond the weld lines. This may be applied to smaller-scale sensor and analysis chips, micro bio- and chemical reactors, and to microelectronic packaging.

Keywords: microchannels, polymer, EB lithography, laser welding

Procedia PDF Downloads 402
3533 Acoustic Echo Cancellation Using Different Adaptive Algorithms

Authors: Hamid Sharif, Nazish Saleem Abbas, Muhammad Haris Jamil

Abstract:

An adaptive filter is a filter that self-adjusts its transfer function according to an optimization algorithm driven by an error signal. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. Adaptive filtering constitutes one of the core technologies in digital signal processing and finds numerous application areas in science as well as in industry, including adaptive noise cancellation and echo cancellation. Acoustic echo is a common occurrence in today's telecommunication systems; the signal interference it causes is distracting to both users and reduces the quality of the communication. In this paper, we review different adaptive filtering techniques for reducing this unwanted echo and examine the behavior of the Least Mean Square (LMS), Normalized Least Mean Square (NLMS), Variable Step-Size Least Mean Square (VSLMS), Variable Step-Size Normalized Least Mean Square (VSNLMS), New Varying Step Size LMS (NVSSLMS) and Recursive Least Square (RLS) algorithms, with the aim of increasing communication quality.
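
As an illustration of one of the filters reviewed, here is a minimal NLMS echo canceller operating on synthetic signals; the step size, filter length, and echo path below are assumed for the demonstration and are not taken from the paper.

```python
# Sketch of NLMS-based acoustic echo cancellation: the adaptive filter
# estimates the echo path so the echo can be subtracted from the mic signal.
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 32                      # samples, adaptive-filter length
far_end = rng.standard_normal(N)     # far-end (loudspeaker) signal
echo_path = rng.standard_normal(L) * np.exp(-0.3 * np.arange(L))
mic = np.convolve(far_end, echo_path)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(L)                      # adaptive filter weights
mu, eps = 0.5, 1e-6                  # step size, regularization (assumed tuning)
err = np.zeros(N)
for n in range(L, N):
    x = far_end[n - L + 1:n + 1][::-1]   # most recent L far-end samples
    e = mic[n] - w @ x                   # error = mic minus estimated echo
    w += mu * e * x / (x @ x + eps)      # NLMS update (normalized step)
    err[n] = e

# echo return loss enhancement: how much echo power was removed
erle = 10 * np.log10(np.mean(mic[L:]**2) / np.mean(err[L:]**2))
print("ERLE: %.1f dB" % erle)
```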

Keywords: adaptive acoustic, echo cancellation, LMS algorithm, adaptive filter, normalized least mean square (NLMS), variable step-size least mean square (VSLMS)

Procedia PDF Downloads 80
3532 Engine with Dual Helical Crankshaft System Operating at an Overdrive Gear Ratio

Authors: Anierudh Vishwanathan

Abstract:

This paper suggests a new design of the crankshaft system that would allow a low-revving engine to be used for applications that normally require a high-revving engine operating at the same power, by converting the extra or unnecessary torque obtained from the low-revving engine into angular velocity of the crankshaft. This improves the fuel economy of the vehicle, because low-revving engines run more effectively on lean air-fuel mixtures and suffer less wear due to less rubbing of the piston rings against the cylinder walls. If the crankshaft with the proposed design is used in a low-revving engine, it will give the same torque and speed as a high-revving engine operating at the same power, but with better fuel economy; hence the new engine will give the benefits of both a low-revving and a high-revving engine. The proposed crankshaft design is achieved by changing the design of the crankweb in such a way that it functions both as a counterweight and as a helical gear that can transfer power to a secondary gear shaft incorporated in the crankshaft system. The crankshaft and the secondary gear shaft operate at an overdrive ratio, making the crankshaft a two-shaft system instead of a single-shaft system. The newly designed crankshaft is mounted on bearings instead of being connected to the flywheel of the engine; it transmits power to the secondary shaft, which rotates the flywheel, and the rotary motion is then transmitted to the transmission system as usual. In this design, the concept of power transmission is thus incorporated into the crankshaft system itself. The crankshaft and the secondary shafts have been designed such that at any instant of time only half the number of crankwebs are meshed with the secondary shaft. For example, during one revolution of the crankshaft, if for the first half revolution the first, second, seventh and eighth crankwebs mesh with the secondary shaft, then for the next half revolution the third, fourth, fifth and sixth crankwebs mesh with it. This paper also analyses the proposed crankshaft design for safety against fatigue failure: finite element analysis of the crankshaft has been done and the resultant stresses have been calculated.

Keywords: low revving, high revving, secondary shaft, partial meshing

Procedia PDF Downloads 269
3531 Parametric Influence and Optimization of Wire-EDM on Oil Hardened Non-Shrinking Steel

Authors: Nixon Kuruvila, H. V. Ravindra

Abstract:

Wire-cut Electro Discharge Machining (WEDM) is a special form of the conventional EDM process in which the electrode is a continuously moving conductive wire. The present study aims at determining the parametric influence and optimum process parameters of Wire-EDM using Taguchi's technique and a genetic algorithm. The variation of the performance parameters with the machining parameters was mathematically modeled by regression analysis. The objective functions are Dimensional Accuracy (DA) and Material Removal Rate (MRR). Experiments were designed as per Taguchi's L16 Orthogonal Array (OA), wherein pulse-on duration, pulse-off duration, current, bed speed and flushing rate were considered as the important input parameters. The matrix experiments were conducted on Oil Hardened Non-Shrinking Steel (OHNS) with a thickness of 40 mm. The results of the study reveal that, among the machining parameters, it is preferable to use a lower pulse-off duration for achieving overall good performance. Regarding MRR, OHNS is best eroded with a medium pulse-off duration and a higher flush rate. Finally, a validation exercise was performed with the optimum levels of the process parameters. The results confirm the efficiency of the approach employed for the optimization of process parameters in this study.
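
The second (data optimization) stage can be sketched as follows: a regression-style response surface for MRR, of the kind fitted from the L16 experiments, is searched with a simple genetic algorithm. The model coefficients, parameter bounds, and GA settings below are hypothetical placeholders, and the flushing rate factor is omitted for brevity.

```python
# Sketch of GA search over a fitted regression model of MRR (hypothetical).
import numpy as np

rng = np.random.default_rng(1)
# bounds: pulse-on (us), pulse-off (us), current (A), bed speed (mm/min)
lo = np.array([105.0, 30.0, 1.0, 10.0])
hi = np.array([135.0, 60.0, 4.0, 40.0])

def mrr_model(p):
    # hypothetical second-order regression surface for MRR (mm^3/min)
    t_on, t_off, I, v = p
    return 0.02*t_on - 0.015*t_off + 1.2*I + 0.05*v - 0.08*I**2 - 1e-4*t_on*t_off

pop = lo + rng.random((40, 4)) * (hi - lo)         # initial population
for gen in range(100):
    fit = np.array([mrr_model(p) for p in pop])
    parents = pop[np.argsort(fit)[::-1][:20]]      # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(4) < 0.5, a, b)       # uniform crossover
        child += rng.normal(0, 0.02, 4) * (hi - lo)       # Gaussian mutation
        kids.append(np.clip(child, lo, hi))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([mrr_model(p) for p in pop])]
print("best parameters (t_on, t_off, I, v):", best.round(2))
```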

Keywords: dimensional accuracy (DA), regression analysis (RA), Taguchi method (TM), volumetric material removal rate (VMRR)

Procedia PDF Downloads 409
3530 Exploring the Connectedness of Ad Hoc Mesh Networks in Rural Areas

Authors: Ibrahim Obeidat

Abstract:

Reaching a fully connected network of mobile nodes in rural areas has received great attention among network researchers. This attention arises from the complexity and high cost of setting up the needed infrastructure for these networks, in addition to the low transmission range these nodes have. Terranet technology, as an example, employs an ad-hoc mesh network where each node has a transmission range not exceeding one kilometer; this means that two nodes can communicate with each other only if they are within one kilometer of each other, otherwise a third party must play the role of relay. In Terranet, as an idea to reduce network setup cost, every node in the network is considered a router responsible for forwarding data between other nodes, which results in a decentralized collaborative environment. Most research on Terranet addresses how to encourage mobile nodes to become more cooperative by leaving their devices in the 'ON' state as long as possible while accepting the role of relay (router). This research addresses the question of what percentage of nodes in an ad-hoc mesh network within rural areas should play the role of relay at every time slot, in relation to the actual area coverage of the nodes, in order for the network to reach full connectivity. To the best of our knowledge, no research to date has discussed this issue. The work is done through an implementation that builds an adjacency matrix as an indicator of the connectivity between network members. This matrix is continually updated until each value in it gives the number of hops that must be followed to reach from one node to another. After repeating the algorithm on different area sizes, different coverage percentages for each size, and different relay percentages several times, the extracted results show that for area coverage less than 5% we need 40% of the nodes to be relays, whereas 10% is enough for areas with node coverage greater than 5%.
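
A minimal sketch of the connectivity check described, with illustrative parameters only: nodes are scattered uniformly, direct links exist within the 1 km range, and the adjacency matrix is repeatedly expanded until its entries give hop counts.

```python
# Sketch of adjacency-matrix expansion into hop counts for a mesh network.
import numpy as np

rng = np.random.default_rng(2)
n, side, r = 200, 10.0, 1.0            # nodes, area side (km), radio range (km)
pts = rng.random((n, 2)) * side
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
adj = (d <= r) & (d > 0)               # who can hear whom directly

# BFS-style expansion: hops[i, j] = hops from i to j (0 = self/unreachable)
hops = np.where(adj, 1, 0)
reach = adj | np.eye(n, dtype=bool)
k = 1
while True:
    new = ((reach.astype(int) @ adj.astype(int)) > 0) & ~reach
    if not new.any():
        break
    k += 1
    hops[new] = k                      # newly reachable nodes need k hops
    reach |= new

print("fully connected:", bool(reach.all()), "| max hop count:", hops.max())
```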

Keywords: ad-hoc mesh networks, network connectivity, mobile ad-hoc networks, Terranet, adjacency matrix, simulator, wireless sensor networks, peer to peer networks, vehicular Ad hoc networks, relay

Procedia PDF Downloads 282
3529 Resource Leveling Optimization in Construction Projects of High Voltage Substations Using Nature-Inspired Intelligent Evolutionary Algorithms

Authors: Dimitrios Ntardas, Alexandros Tzanetos, Georgios Dounias

Abstract:

High Voltage Substations (HVS) are the intermediate step between the production of power and successfully transmitting it to clients, making them one of the most important checkpoints in power grids. Nowadays, as renewable resources and consequently distributed generation are growing fast, the construction of HVS is of high importance both in terms of quality and completion time, so that new energy producers can quickly and safely integrate into power grids. The resources needed, such as machines and workers, should be carefully allocated so that the construction of an HVS is completed on time, at the lowest possible cost (e.g., avoiding additional costs, not originally taken into consideration, caused by project delays), and at the highest quality. In addition, there are milestones and several checkpoints to be precisely achieved during construction to ensure cost and timeline control and to ensure that the percentage of governmental funding will be granted. The management of such a demanding project is an NP-hard problem consisting of prerequisite constraints and resource limits for each task of the project. In this work, a hybrid meta-heuristic method is implemented to solve this problem. Meta-heuristics have been proven to be quite useful when dealing with high-dimensional constrained optimization problems, and hybridizing them boosts their performance.
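
To make the scheduling problem concrete, here is a minimal sketch of a serial schedule-generation baseline for tasks with precedence constraints and a crew limit, the kind of greedy solution a meta-heuristic would improve by reordering tasks; all task data are illustrative.

```python
# Sketch of a serial schedule-generation scheme under a resource limit.
tasks = {            # name: (duration_days, crews_needed, prerequisites)
    "foundation":  (10, 2, []),
    "steelwork":   (8,  3, ["foundation"]),
    "transformer": (5,  2, ["foundation"]),
    "busbars":     (4,  1, ["steelwork", "transformer"]),
}
CREW_LIMIT = 4

finish, usage = {}, {}           # finish day per task, crews in use per day
for name in tasks:               # dict order is a valid topological order here
    dur, crews, pre = tasks[name]
    t = max((finish[p] for p in pre), default=0)
    # push the start until the crew limit holds over the whole duration
    while any(usage.get(day, 0) + crews > CREW_LIMIT
              for day in range(t, t + dur)):
        t += 1
    for day in range(t, t + dur):
        usage[day] = usage.get(day, 0) + crews
    finish[name] = t + dur
    print(f"{name}: start day {t}, finish day {finish[name]}")
print("makespan:", max(finish.values()), "days")
```

Note that this greedy baseline is generally suboptimal, which is exactly why a meta-heuristic that searches over task orderings can shorten the makespan.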

Keywords: hybrid meta-heuristic methods, substation construction, resource allocation, time-cost efficiency

Procedia PDF Downloads 152
3528 Modeling and Minimizing the Effects of Ferroresonance for Medium Voltage Transformers

Authors: Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee, Arian Amirnia, Atena Taheri, Mohammadreza Arabi, Mahmud Fotuhi-Firuzabad

Abstract:

Ferroresonance causes overvoltage in medium voltage transformers and isolators used in electrical networks. Ferroresonance effects are nonlinear and occur between the network capacitance and the nonlinear inductance of the voltage transformer during saturation. This phenomenon is unwanted for transformers since it causes overheating, introduces high dynamic forces in the primary coils, and raises the voltage in the primary coils of the voltage transformer; furthermore, it results in electrical and thermal failure of the transformer. The expansion of distribution lines, the design of transformers in smaller sizes, and the increase of harmonics in distribution networks all result in an increase of ferroresonance. There is limited literature available on mitigating the effects of ferroresonance; therefore, optimizing against its effects in voltage transformers is of great importance. In this study, comprehensive modeling of a medium voltage block-type voltage transformer is performed. In addition, a new model is proposed to improve the performance of voltage transformers during the occurrence of ferroresonance by damping the oscillations. Transformer design optimization is also presented in this study to show further improvements in the performance of the voltage transformer. The proposed model is experimentally tested and verified on a medium voltage transformer in the laboratory, and simulation results show a large reduction of the effects of ferroresonance.

Keywords: optimization, voltage transformer, ferroresonance, modeling, damper

Procedia PDF Downloads 101
3527 Rotorcraft Performance and Environmental Impact Evaluation by Multidisciplinary Modelling

Authors: Pierre-Marie Basset, Gabriel Reboul, Binh DangVu, Sébastien Mercier

Abstract:

Rotorcraft provide invaluable services thanks to their Vertical Take-Off and Landing (VTOL), hover and low-speed capabilities. Yet their use is still often limited by their cost and environmental impact, especially noise and energy consumption. One of the main obstacles to the expanded use of rotorcraft for urban missions is the environmental impact, and the first concern for the population is noise. In order to develop the transversal competency to assess the rotorcraft environmental footprint, a collaboration has been launched between six research departments within ONERA. The progress in terms of models and methods is capitalized in the numerical workshop C.R.E.A.T.I.O.N. ("Concepts of Rotorcraft Enhanced Assessment Through Integrated Optimization Network"). A typical mission for which the environmental impact issue is of great relevance has been defined. The first milestone is to perform the pre-sizing of a reference helicopter for this mission. In a second milestone, an alternative rotorcraft concept has been defined: a tandem rotorcraft with optional propulsion. The key design trends are given for the pre-sizing of this rotorcraft, aiming at a significant reduction of the global environmental impact while still giving flight performance and safety equivalent to the reference helicopter. The models and methods have been improved to capture, earlier and more globally, the relative variations in environmental impact when changing the rotorcraft architecture, the pre-design variables and the operating parameters.

Keywords: environmental impact, flight performance, helicopter, multi objectives multidisciplinary optimization, rotorcraft

Procedia PDF Downloads 270
3526 The Characteristics of Porcine Immune Synapse via Flow Cytometry and Transmission Electron Microscope

Authors: Ann Ying-An Chen, Yi-Lun Tsai, Hso-Chi Chaung

Abstract:

An understanding of pathogens and the immune system has played a most important role in agricultural research for the development of vaccinations. The immunological synapse, a cell-to-cell interaction, plays a crucial role in triggering the body's immune system, such as in the activation between antigen-presenting cells (APCs) and different T-cell subsets. If these interactions are regulated appropriately, the host has the ability to defend itself against a wide spectrum of infectious pathogens. The aim of this study is to establish and characterize a porcine immune synapse system by co-culturing T cells and APCs. In this study, blood samples were collected from specific-pathogen-free piglets, and peripheral blood mononuclear cells (PBMC) were separated using Ficoll-Paque. The PBMC were then stained with CD4 (FITC) and CD25 (PE) antibodies. Different subsets of T cells, sorted by a fluorescence-activated cell sorting flow cytometer, were co-cultured for 24 hrs with alveolar macrophages, after which the profiles of cytokine secretion and the mRNA transcription levels of Toll-like receptors were examined. Results showed that the three stages of the immune synapse were clearly visible and identifiable under both transmission and scanning electron microscopes (TEM and SEM). Significant differences in Toll-like receptor expression were observed within the co-cultured cell system. The TLR7 mRNA expression in CD4+CD25- cells was lower than in CD4+CD25+ and CD4-CD25+ cells. Interestingly, the IL-10 production levels in CD4+CD25- cells (7.732 pg/mL) were significantly higher than those of CD4+CD25+ (2.636 pg/mL) and CD4-CD25+ (2.48 pg/mL) cells. These findings demonstrate that a clear understanding of the porcine immune synapse system can contribute greatly to further investigations of the mechanism of T-cell activation, which can benefit the discovery of potential adjuvant candidates or effective antigen epitopes in the development of vaccinations with high efficacy.

Keywords: antigen-presenting cells, immune synapse, pig, T subsets, toll-like receptor

Procedia PDF Downloads 127
3525 Biomechanical Perspectives on the Urinary Bladder: Insights from the Hydrostatic Skeleton Concept

Authors: Igor Vishnevskyi

Abstract:

Introduction: The urinary bladder undergoes repeated strain during its working cycle, suggesting the presence of an efficient support system, force transmission, and mechanical amplification. The concept of a "hydrostatic skeleton" (HS) could contribute to our understanding of the functional relationships among bladder constituents. Methods: A multidisciplinary literature review was conducted to identify key features of the HS and to gather evidence supporting its applicability in urinary bladder biomechanics. The collected evidence was synthesized to propose a framework for understanding the potential hydrostatic properties of the urinary bladder based on existing knowledge and HS principles. Results: Our analysis revealed similarities in biomechanical features between living fluid-filled structures and the urinary bladder. These similarities include the geodesic arrangement of fibres, the role of enclosed fluid (urine) in force transmission, prestress as a determinant of stiffness, and the ability to maintain shape integrity during various activities. From a biomechanical perspective, urine may be considered an essential component of the bladder. The hydrostatic skeleton, with its autonomy and flexibility, may provide insights for researchers involved in bladder engineering. Discussion: The concept of a hydrostatic skeleton offers a holistic perspective for understanding bladder function by considering multiple mechanical factors as a single structure with emergent properties. Incorporating viewpoints from various fields on HS can help identify how this concept applies to live fluid-filled structures or organs and reveal its broader relevance to biological systems, both natural and artificial. Conclusion: The hydrostatic skeleton (HS) design principle can be applied to the urinary bladder. Understanding the bladder as a structure with HS can be instrumental in biomechanical modelling and engineering. Further research is required to fully elucidate the cellular and molecular mechanisms underlying HS in the bladder.

Keywords: hydrostatic skeleton, urinary bladder morphology, shape integrity, prestress, biomechanical modelling

Procedia PDF Downloads 78
3524 Using Real Truck Tours Feedback for Address Geocoding Correction

Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle

Abstract:

When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data used to carry out the optimization of a transporter's tours is free from errors, such as the customers' real constraints, the customers' addresses and their GPS coordinates. However, in real transporter situations, upstream data is often of bad quality because of address geocoding errors and the irrelevance of the addresses received via EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return inaccurate GPS coordinates, and even with a good geocoder, an inaccurate address can lead to a bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious geocoding issue is that the maps used by the geocoders are not regularly updated, so new buildings may not exist on the maps until the next update. Trying to optimize tours with inaccurate customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is therefore of little use and leads to bad and incoherent solution tours, because the locations used for the optimization are very different from the customers' real positions. Our work is supported by a logistics software vendor, Tedies, and a transport company, Upsilon; we work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TOMTOM GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.), which we retrieve to extract the real truck routes. The aim of this work is to use the experience of the driver and the feedback of the real truck tours to validate the GPS coordinates of well-geocoded addresses and to correct the badly geocoded ones. Thereby, when a vehicle makes its tour, it should have trouble finding a given customer's address at most once; in other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: we are able to automatically correct an average of 70% of the GPS coordinates of a tour's addresses. The remaining GPS coordinates are corrected manually, with the system giving the user indications to help with the correction. This study shows the importance of taking the trucks' feedback into account to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually exploited by transporters (by asking drivers, calling customers…) to learn about their tours and bring corrections to the upcoming tours; we develop a method to automate a large part of that process.
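
The core of the feedback loop can be sketched as follows (illustrative coordinates and threshold, not the authors' code): each geocoded position is compared against the truck's recorded stop position and replaced when the two disagree by more than a tolerance.

```python
# Sketch of geocode validation/correction from observed truck stops.
import math

def haversine_km(p, q):
    # great-circle distance between two (lat, lon) points in km
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * math.asin(math.sqrt(a))

geocoded = {"customer_42": (47.322, 5.041)}        # from the geocoder
observed_stop = {"customer_42": (47.301, 5.102)}   # from the truck's GPS log
THRESHOLD_KM = 0.5                                 # tolerated geocoding error

corrected = {}
for cust, coord in geocoded.items():
    stop = observed_stop.get(cust)
    if stop and haversine_km(coord, stop) > THRESHOLD_KM:
        corrected[cust] = stop      # trust the driver's actual stop position
    else:
        corrected[cust] = coord     # geocode validated by the tour feedback
print(corrected)
```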

Keywords: driver experience feedback, geocoding correction, real truck tours

Procedia PDF Downloads 674
3523 Finite Element Analysis of Connecting Rod

Authors: Mohammed Mohsin Ali H., Mohamed Haneef

Abstract:

The connecting rod transmits the piston load to the crank, causing the latter to turn and thus converting the reciprocating motion of the piston into the rotary motion of the crankshaft. Connecting rods are subjected to forces generated by mass and fuel combustion. This study investigates and compares the fatigue behavior of forged steel, powder-forged and ASTM A514 cold-quenched steel connecting rods. The objective is to suggest a new material with reduced weight and cost and increased fatigue life. This has entailed performing a detailed load analysis. Therefore, this study has dealt with two subjects: first, dynamic load and stress analysis of the connecting rod, and second, optimization for material, weight and cost. In the first part of the study, the loads acting on the connecting rod as a function of time were obtained. Based on the observations of the dynamic FEA, the static FEA, and the load analysis results, the load for the optimization study was selected. It is the conclusion of this study that the connecting rod can be designed and optimized under a load range comprising a tensile load and a compressive load. The tensile load corresponds to a 360° crank angle at the maximum engine speed, while the compressive load corresponds to the peak gas pressure. Furthermore, the existing connecting rod can be replaced with a new connecting rod made of ASTM A514 cold-quenched steel that is 12% lighter and 28% cheaper.

Keywords: connecting rod, ASTM A514 cold-quenched material, static analysis, fatigue analysis, stress life approach

Procedia PDF Downloads 300
3522 Optimization of Culture Conditions of Paecilomyces tenuipes, Entomopathogenic Fungi Inoculated into the Silkworm Larva, Bombyx mori

Authors: Sunghee Nam

Abstract:

Paecilomyces tenuipes is an entomopathogenic fungus of the Cordyceps group that is isolated from dead silkworms and cicadas. Fungi on cicadas were described in old Chinese medicinal books, and from ancient times vegetable wasps and plant worms were widely known to contain active substances and have been studied for pharmacological use. Among the many fungi belonging to the genus Cordyceps, Cordyceps sinensis has been demonstrated to yield natural products possessing various biological activities and many bioactive components. It is commonly used to replenish the kidney, soothe the lung, and treat fatigue. Due to their commercial and economic importance, the demand for Cordyceps has increased rapidly. However, the supply of Cordyceps specimens could not meet the increasing demand because of the sole dependence on field collection and habitat destruction: it is difficult to obtain many insect hosts in nature, and the edibility of the host insect needs to be verified from a pharmacological standpoint. Recently, this setback was overcome when P. tenuipes was cultivated on a large scale using the silkworm as host. Pharmacological effects of P. tenuipes cultured on silkworm, such as strengthening immune function, anti-fatigue and anti-tumor activity, and regulation of the liver, have been proved, and such products are widely commercialized. In this study, we attempted to establish a method for the stable growth of P. tenuipes on silkworm hosts and the optimal conditions for synnemata formation. To determine optimum culturing conditions, temperature and light conditions were varied. The length and number of synnemata were highest at 25 °C and 100-300 lux illumination. On average, the synnemata of wild P. tenuipes measure 70 mm in length and number 20; those of the cultured strain were relatively shorter and more numerous. The number of synnemata may have increased as a result of inoculating the host with highly concentrated conidia, while the length may have decreased due to limited nutrition per individual. It is notable that changes in light illumination cause morphological variations in the synnemata. However, regulation of light and temperature alone could not produce stromata with perithecia, asci, and ascospores.

Keywords: optimization of culture conditions, Paecilomyces tenuipes, entomopathogenic fungi, silkworm larva, Bombyx mori

Procedia PDF Downloads 253
3521 Detecting Geographically Dispersed Overlay Communities Using Community Networks

Authors: Madhushi Bandara, Dharshana Kasthurirathna, Danaja Maldeniya, Mahendra Piraveenan

Abstract:

Community detection is an extremely useful technique for understanding the structure and function of a social network. The Louvain algorithm, which is based on the Newman-Girvan modularity optimization technique, is extensively used as a computationally efficient method to extract the communities in social networks. It has been suggested that nodes in close geographical proximity have a higher tendency to form communities. Variants of the Newman-Girvan modularity measure, such as dist-modularity, try to normalize the effect of geographical proximity to extract geographically dispersed communities, at the expense of losing the information about the geographically proximate communities. In this work, we propose a method to extract geographically dispersed communities while preserving the information about the geographically proximate communities, by analyzing the 'community network', in which the centroids of communities are considered as network nodes. We suggest that the inter-community link strengths, normalized over the community sizes, may be used to identify and extract the 'overlay communities'. The overlay communities would have relatively higher link strengths despite being relatively far apart in their spatial distribution. We apply this method to the Gowalla online social network, which contains the geographical signatures of its users, and identify the overlay communities within it.
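
A minimal sketch of the proposed pipeline on a synthetic graph (a stand-in for Gowalla): Louvain communities are detected, a community network is formed, and inter-community link strengths normalized by community sizes are ranked to flag candidate overlay links. The louvain_communities function requires networkx 2.8 or later.

```python
# Sketch of community-network construction with size-normalized link strengths.
import networkx as nx
from networkx.algorithms.community import louvain_communities

G = nx.random_geometric_graph(200, 0.15, seed=7)   # synthetic stand-in graph
comms = louvain_communities(G, seed=7)

# count edges between each pair of communities
membership = {v: i for i, c in enumerate(comms) for v in c}
links = {}
for u, v in G.edges():
    cu, cv = membership[u], membership[v]
    if cu != cv:
        key = tuple(sorted((cu, cv)))
        links[key] = links.get(key, 0) + 1

# normalize link strength over the community sizes, as the abstract suggests
strength = {(a, b): w / (len(comms[a]) * len(comms[b]))
            for (a, b), w in links.items()}
overlay = sorted(strength.items(), key=lambda kv: -kv[1])[:5]
print("strongest size-normalized inter-community links:", overlay)
```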

Keywords: social networks, community detection, modularity optimization, geographically dispersed communities

Procedia PDF Downloads 235
3520 Bayesian Analysis of Topp-Leone Generalized Exponential Distribution

Authors: Najrullah Khan, Athar Ali Khan

Abstract:

The Topp-Leone distribution was introduced by Topp and Leone in 1955. In this paper, an attempt has been made to fit the Topp-Leone Generalized Exponential (TPGE) distribution. A real survival data set is used for illustration. Implementation is done using R and JAGS, and appropriate illustrations are made. R and JAGS codes have been provided to implement the censoring mechanism using both optimization and simulation tools. The main aim of this paper is to describe and illustrate the Bayesian modelling approach to the analysis of survival data, with emphasis on the modeling of the data and the interpretation of the results. Crucial to this is an understanding of the nature of the incomplete or 'censored' data encountered. Analytic approximation and simulation tools are both covered here, but most of the emphasis is on Markov chain Monte Carlo methods, including the independent Metropolis algorithm, which is currently the most popular technique. For analytic approximation, among the various optimization algorithms, the trust region method is found to be the best. The TPGE model is also used to analyze the lifetime data in the Bayesian paradigm, with results evaluated on the above-mentioned real survival data set. The analytic approximation and simulation methods are implemented using several software packages. It is clear from our findings that simulation tools provide better results than those obtained by asymptotic approximation.
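
To illustrate the independent Metropolis algorithm in the survival setting, the sketch below samples the posterior of a simple exponential model with right-censored data; the TPGE density itself is not reproduced here, and the prior and proposal tuning are assumed for the demonstration.

```python
# Sketch of independent Metropolis sampling for censored survival data.
import numpy as np

rng = np.random.default_rng(3)
t = rng.exponential(scale=10.0, size=50)        # true lifetimes (rate 0.1)
c = np.minimum(t, 15.0)                         # censor at t = 15
delta = (t <= 15.0).astype(float)               # 1 = event observed, 0 = censored

def log_post(lam):
    if lam <= 0:
        return -np.inf
    # exponential likelihood with censoring + weak Gamma(1, 0.01) prior
    loglik = delta.sum() * np.log(lam) - lam * c.sum()
    return loglik - 0.01 * lam

# independent Metropolis: proposals drawn from a fixed distribution
prop_shape, prop_rate = 2.0, 20.0               # Gamma proposal (assumed tuning)
def log_prop(lam):
    return (prop_shape - 1) * np.log(lam) - prop_rate * lam

lam, chain = 0.1, []
for _ in range(20000):
    cand = rng.gamma(prop_shape, 1 / prop_rate)
    log_alpha = (log_post(cand) - log_post(lam)) + (log_prop(lam) - log_prop(cand))
    if np.log(rng.random()) < log_alpha:
        lam = cand
    chain.append(lam)

chain = np.array(chain[5000:])                  # drop burn-in
print("posterior mean rate: %.4f (true 0.1)" % chain.mean())
```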

Keywords: Bayesian Inference, JAGS, Laplace Approximation, LaplacesDemon, posterior, R Software, simulation

Procedia PDF Downloads 535
3519 Optimization of Two Quality Characteristics in Injection Molding Processes via Taguchi Methodology

Authors: Joseph C. Chen, Venkata Karthik Jakka

Abstract:

The main objective of this research is to optimize the tensile strength and dimensional accuracy in injection molding processes using Taguchi parameter design. An L16 orthogonal array (OA) is used in the Taguchi experimental design, with five control factors at four levels each and vibration as a non-controllable factor. A total of 32 experiments were designed to obtain the optimal parameter settings for the process. The optimal parameters identified for shrinkage are: shot volume, 1.7 cubic inches (A4); mold temperature, 130 ºF (B1); hold pressure, 3200 psi (C4); injection speed, 0.61 in³/sec (D2); and hold time, 14 seconds (E2). The optimal parameters identified for tensile strength are: shot volume, 1.7 cubic inches (A4); mold temperature, 160 ºF (B4); hold pressure, 3100 psi (C3); injection speed, 0.69 in³/sec (D4); and hold time, 14 seconds (E2). The Taguchi-based optimization framework was systematically and successfully implemented to obtain an adjusted optimal setting in this research. The mean shrinkage of the confirmation runs is 0.0031%, and the tensile strength value was found to be 3148.1 psi. Both outcomes are far better than the baseline, and defects have been further reduced in the injection molding processes.
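
The S/N-ratio analysis behind such level selections can be sketched as follows, using smaller-the-better for shrinkage and larger-the-better for tensile strength; the demo uses a reduced two-factor design with hypothetical responses rather than the study's L16 data.

```python
# Sketch of Taguchi S/N ratios and main-effects level selection.
import numpy as np
from itertools import product

runs = np.array(list(product(range(4), repeat=2)))        # two 4-level factors
shrinkage = 0.004 - 0.0008*runs[:, 0] + 0.0003*runs[:, 1] # hypothetical data
tensile = 2900 + 60*runs[:, 0] + 25*runs[:, 1]            # hypothetical data

sn_small = -10 * np.log10(shrinkage ** 2)    # smaller-the-better (shrinkage)
sn_large = -10 * np.log10(1 / tensile ** 2)  # larger-the-better (strength)

for name, sn in (("shrinkage", sn_small), ("tensile", sn_large)):
    # for each factor, pick the level whose mean S/N is highest
    best = [int(np.argmax([sn[runs[:, f] == lv].mean() for lv in range(4)]))
            for f in range(2)]
    print(name, "-> best levels (A, B):", best)
```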

Keywords: injection molding processes, Taguchi parameter design, tensile strength, high-density polyethylene (HDPE)

Procedia PDF Downloads 196
3518 Condition Optimization for Trypsin and Chymotrypsin Activities in Economic Animals

Authors: Mallika Supa-Aksorn, Buaream Maneewan, Jiraporn Rojtinnakorn

Abstract:

In animals, trypsin and chymotrypsin are the two proteases that play the most important role in protein digestion and are involved in growth rate. In many animals, these two enzymes are used as feed-related growth indicators. Assaying an enzyme at its optimal conditions is essential for accurate activity determination, yet such reports for trypsin and chymotrypsin are scarce. Therefore, in this study, the optimization of pH and temperature for trypsin (T) and chymotrypsin (C) was investigated in economic species, i.e., Nile tilapia (Oreochromis niloticus), sand goby (Oxyeleotris marmoratus), giant freshwater prawn (Macrobrachium rosenbergii) and native chicken (Gallus gallus). Each enzyme of each species was assayed for its specific activity over a pH range of 2-12 and a temperature range of 30-80 °C. It was revealed that, for Nile tilapia, T had optimal conditions at pH 9 and 50-80 °C, whereas C had optimal conditions at pH 8 and 60 °C. For sand goby, T had optimal conditions at pH 7 and 50 °C, while C had optimal conditions at pH 11 and 70-75 °C. For juvenile freshwater prawn, T had optimal conditions at pH 10-11 and 60-65 °C, and C at pH 8 and 70 °C. For starter native chicken, T had optimal conditions at pH 7 and 70 °C, whereas C had optimal conditions at pH 8 and 60 °C. This information on optimal conditions will be highly valuable for the accurate measurement of T and C activities, which benefits growth and feed analysis.

Keywords: trypsin, chymotrypsin, Oreochromis niloticus, Oxyeleotris marmoratus, Macrobrachium rosenbergii, Gallus gallus

Procedia PDF Downloads 259
3517 Optimization of Hate Speech and Abusive Language Detection on Indonesian-language Twitter using Genetic Algorithms

Authors: Rikson Gultom

Abstract:

Hate speech and abusive language on social media are difficult to detect; usually they are detected only after becoming viral in cyberspace, when it is too late for prevention. An early detection system with fairly good accuracy is needed to reduce the societal conflicts caused by social media postings that attack individuals, groups, and governments in Indonesia. The purpose of this study is to find an early detection model for the Twitter social medium using machine learning, selecting the method with the highest accuracy among those studied. In this study, the Support Vector Machine (SVM), Naïve Bayes (NB), and Random Forest Decision Tree (RFDT) methods were compared with the Support Vector Machine with genetic algorithm (SVM-GA), Naïve Bayes with genetic algorithm (NB-GA), and Random Forest Decision Tree with genetic algorithm (RFDT-GA). The study produced a comparison table for the accuracy of the hate speech and abusive language detection models, presented as a graph of the accuracy of the six algorithms developed on the Indonesian-language Twitter dataset, and identified the best model as the one with the highest accuracy.
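
A minimal sketch of the GA-wrapped SVM idea (synthetic features standing in for the labeled tweets; GA settings assumed): the genome encodes log-scaled hyperparameters, and cross-validated accuracy serves as the fitness.

```python
# Sketch of a genetic algorithm tuning SVM hyperparameters (C, gamma).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=20, random_state=4)

def fitness(genome):
    C, gamma = 10.0 ** genome          # genome holds log10(C), log10(gamma)
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

pop = rng.uniform([-2, -4], [3, 0], size=(12, 2))  # log10 bounds for C, gamma
for gen in range(10):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-6:]]         # keep the best half
    kids = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        child = np.where(rng.random(2) < 0.5, a, b) + rng.normal(0, 0.2, 2)
        kids.append(np.clip(child, [-2, -4], [3, 0]))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(g) for g in pop])]
print("best C=%.3g, gamma=%.3g" % tuple(10.0 ** best))
```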

Keywords: abusive language, hate speech, machine learning, optimization, social media

Procedia PDF Downloads 128
3516 Solar Building Design Using GaAs PV Cells for Optimum Energy Consumption

Authors: Hadis Pouyafar, D. Matin Alaghmandan

Abstract:

Gallium arsenide (GaAs) solar cells are widely used in applications like spacecraft and satellites because they have a high absorption coefficient and efficiency and can withstand high-energy particles such as electrons and protons. With the energy crisis, there is a growing need for efficient and cost-effective solar cells. GaAs cells, with their 46% efficiency compared to silicon cells' 23%, can be utilized in buildings to achieve nearly zero emissions, converting more of the incident solar irradiation into electricity. The III-V semiconductors used in these cells offer higher performance than the other technologies available. However, despite these advantages, Si cells dominate the market due to their lower prices. In our study, we took a software-driven approach from the start to gather all the information needed to design an optimal building that harnesses the full potential of solar energy. For modeling and optimization we utilized the Grasshopper plugin, and to assess radiation, weather data, solar energy levels and other factors, we relied on the Ladybug and Honeybee plugins. Our modeling results point to a promising future for GaAs cells. We have shown that silicon solar cells may not always be the right choice for meeting electricity demands, particularly when higher power output is required. Therefore, with respect to power consumption and the surface area available for photovoltaic (PV) installation, it may be necessary to consider more efficient solar cell options, like GaAs solar cells. By considering the building requirements and utilizing GaAs technology, we were able to optimize the PV surface area.
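
A back-of-the-envelope sketch of the surface-area argument, using the efficiency figures quoted above with illustrative demand and irradiation numbers:

```python
# Sketch of PV area sizing at the two cell efficiencies from the abstract.
daily_demand_kwh = 120.0          # hypothetical building demand per day
peak_sun_hours = 5.0              # site-dependent equivalent full-sun hours
irradiance_kw_m2 = 1.0            # standard test condition irradiance

for name, eff in (("GaAs", 0.46), ("Si", 0.23)):
    area = daily_demand_kwh / (peak_sun_hours * irradiance_kw_m2 * eff)
    print(f"{name}: {area:.0f} m^2 of PV surface required")
```

Halving the required surface area for the same demand is exactly the lever that matters when the available roof or facade area, rather than cost, is the binding constraint.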

Keywords: gallium arsenide (GaAs), optimization, sustainable building, GaAs solar cells

Procedia PDF Downloads 94
3515 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground

Authors: Bhim Kumar Dahal

Abstract:

Transportation network development in developing countries is proceeding at a rapid pace. The majority of these networks consist of railways and expressways, which pass through diverse topography, landforms and geological conditions despite the avoidance principle applied during route selection. The construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper focuses on the various advanced ground improvement techniques used to improve soft soil, on modelling approaches, and on the predictability of embankment behavior during construction. The ground improvement techniques can be broadly classified into three groups, i.e., the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in the modelling of embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and sometimes the predictions deviate by more than 60% from the monitored values even at the same level of sophistication. This deviation is found to be mainly due to the selection of the constitutive model, the assumptions made at different stages, deviations in the selection of model parameters, and simplifications in the physical modelling of the ground conditions. The deviation can be reduced by using optimization processes, optimization tools and sensitivity analyses of the model parameters, which will guide the selection of appropriate model parameters.

Keywords: cement, improvement, physical properties, strength

Procedia PDF Downloads 174
3514 IEEE 802.15.4e Based Scheduling Mechanisms and Systems for Industrial Internet of Things

Authors: Ho-Ting Wu, Kai-Wei Ke, Bo-Yu Huang, Liang-Lin Yan, Chun-Ting Lin

Abstract:

With advances in technology, the wireless sensor network (WSN) has become one of the most promising candidates for implementing the wireless industrial internet of things (IIoT) architecture. However, legacy IEEE 802.15.4 based WSN technology, such as the Zigbee system, cannot meet the stringent QoS requirements of low-power, real-time, and highly reliable transmission imposed by the IIoT environment. Recently, the IEEE developed the IEEE 802.15.4e Time Slotted Channel Hopping (TSCH) access mode to serve this purpose, and the IETF 6TiSCH working group has proposed standards to integrate IEEE 802.15.4e smoothly with the IPv6 protocol to form a complete protocol stack for the IIoT. In this work, we develop key network technologies for an IEEE 802.15.4e based wireless IIoT architecture, focusing on practical design and system implementation, and realize an OpenWSN-based wireless IIoT system. The system architecture is divided into three main parts: the web server, the network manager, and the sensor nodes. The web server provides the user interface, allowing the user to view the status of sensor nodes and instruct them to follow commands via a user-friendly browser. The network manager is responsible for the establishment, maintenance, and management of scheduling and topology information: it executes the centralized scheduling algorithm, sends the schedule table to each node, and manages the sensing tasks of each device. The sensor nodes complete the assigned tasks and send the sensed data. Furthermore, to prevent scheduling errors due to packet loss, a schedule inspection mechanism is implemented to verify the correctness of the schedule table. In addition, when the network topology changes, the system generates a new schedule table based on the changed topology to ensure proper operation. To enhance system performance, we further propose dynamic bandwidth allocation and distributed scheduling mechanisms. The distributed scheduling mechanism enables each sensor node to build, maintain and manage the dedicated link bandwidth with its parent and child nodes, based on locally observed information, by exchanging Add/Delete commands via two processes. The first, the schedule initialization process, allows each sensor node pair to identify the available idle slots and allocate the basic dedicated transmission bandwidth. The second, the schedule adjustment process, enables each sensor node pair to adjust its allocated bandwidth dynamically according to the measured traffic loading. Such technology can satisfy the dynamic bandwidth requirements of frequently changing environments. Last but not least, we propose a packet retransmission scheme to enhance the performance of the centralized scheduling algorithm when the packet delivery rate (PDR) is low: a multi-frame retransmission mechanism that allows every network node to resend each packet at least a predefined number of times, with the multi-frame architecture built according to the number of layers in the network topology. Simulation results reveal that this retransmission scheme provides sufficiently high transmission reliability while maintaining low packet transmission latency. Therefore, the QoS requirements of the IIoT can be achieved.
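
A minimal sketch of the schedule adjustment process on a single link (slotframe size and the traffic trace are illustrative, not the paper's parameters): slots are added when the measured traffic exceeds the allocation and deleted when it falls below.

```python
# Sketch of Add/Delete slot adjustment for one node pair in a TSCH slotframe.
SLOTFRAME = 101                       # slots per slotframe (assumed)

class LinkSchedule:
    def __init__(self):
        self.slots = set()            # slot offsets dedicated to this link

    def required_slots(self, pkts_per_frame):
        return max(1, pkts_per_frame)           # one packet per dedicated slot

    def adjust(self, pkts_per_frame, busy_slots):
        need = self.required_slots(pkts_per_frame)
        while len(self.slots) < need:           # Add: claim a free slot
            free = set(range(SLOTFRAME)) - self.slots - busy_slots
            if not free:
                break                           # no idle slot left this frame
            self.slots.add(min(free))
        while len(self.slots) > need:           # Delete: release surplus slots
            self.slots.remove(max(self.slots))

link = LinkSchedule()
busy = {0, 1, 2}                                # slots used by other links
for load in [1, 3, 5, 2]:                       # measured packets per slotframe
    link.adjust(load, busy)
    print(f"load={load} -> slots={sorted(link.slots)}")
```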

Keywords: IEEE 802.15.4e, industrial internet of things (IIOT), scheduling mechanisms, wireless sensor networks (WSN)

Procedia PDF Downloads 160
3513 Predictive Maintenance of Industrial Shredders: Efficient Operation through Real-Time Monitoring Using Statistical Machine Learning

Authors: Federico Pittino, Thomas Arnold

Abstract:

The shredding of waste materials is a key step in the recycling process towards the circular economy. Industrial shredders for waste processing operate in very harsh conditions, leading to the need for frequent maintenance of critical components. Maintenance optimization is also particularly important for increasing the machine's efficiency, thereby reducing operational costs. In this work, a monitoring system has been developed and deployed on an industrial shredder located at a waste recycling plant in Austria. The machine has been monitored for one year, and methods for predictive maintenance have been developed for two key components: the cutting knives and the drive belt. The large amount of collected data is leveraged by statistical machine learning techniques, which do not require very detailed knowledge of the machine or its live operating conditions. The results show that, despite the wide range of operating conditions, a reliable estimate of the optimal time for maintenance can be derived. Moreover, the trade-off between the cost of maintenance and the increase in power consumption due to the wear state of the monitored components is investigated. This work proves the benefits of a real-time monitoring system for the efficient operation of industrial shredders.
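
The maintenance trade-off mentioned above can be sketched as a simple cost-rate minimization; all wear and cost figures below are illustrative assumptions, not measurements from the monitored shredder.

```python
# Sketch of the maintenance-vs-energy trade-off as a cost-rate minimization.
import numpy as np

maintenance_cost = 1500.0           # EUR per knife change (assumed)
energy_price = 0.15                 # EUR per kWh (assumed)
base_power_kwh_day = 2000.0
wear_penalty = 0.004                # extra energy fraction per day of wear

days = np.arange(1, 201)            # candidate maintenance intervals
# average daily energy over an interval of T days with linearly growing wear
energy = base_power_kwh_day * (1 + wear_penalty * (days - 1) / 2)
cost_per_day = maintenance_cost / days + energy * energy_price

best = days[np.argmin(cost_per_day)]
print(f"cost-optimal maintenance interval: {best} days "
      f"({cost_per_day.min():.0f} EUR/day)")
```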

Keywords: predictive maintenance, circular economy, industrial shredder, cost optimization, statistical machine learning

Procedia PDF Downloads 125
3512 Procedure Model for Data-Driven Decision Support Regarding the Integration of Renewable Energies into Industrial Energy Management

Authors: M. Graus, K. Westhoff, X. Xu

Abstract:

Climate change is driving change in all aspects of society. While the expansion of renewable energies proceeds, industry has not been convinced by general studies about the potential of demand-side management to reinforce smart grid considerations in its operational business. In this article, a procedure model for case-specific, data-driven decision support in industrial energy management, based on a holistic data analytics approach, is presented. The model is executed on the example of a strategic decision problem: integrating renewable energies into industrial energy management. This question arises from considerations of changing the electricity contract model from a standard rate to volatile energy prices corresponding to the energy spot market, which is increasingly affected by renewable energies. The procedure model corresponds to a data analytics process consisting of a data model, analysis, simulation and optimization steps. This procedure will help to quantify the potential of sustainable production concepts based on the data from a factory. The model is validated with data from a printer, in analogy to a simple production machine. The overall goal is to establish smart grid principles for industry via the transformation from knowledge-driven to data-driven decisions within manufacturing companies.
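
A minimal sketch of the contract-model comparison that motivates the decision problem, with illustrative prices and load profile:

```python
# Sketch of flat-rate vs. spot-price cost comparison for a daily load profile.
import numpy as np

hours = np.arange(24)
load_kwh = 40 + 25 * np.exp(-((hours - 13) ** 2) / 18)     # day-shift load
spot_eur_kwh = 0.09 + 0.05 * np.sin((hours - 6) / 24 * 2 * np.pi)
flat_eur_kwh = 0.12

flat_cost = (load_kwh * flat_eur_kwh).sum()
spot_cost = (load_kwh * spot_eur_kwh).sum()
print(f"flat: {flat_cost:.2f} EUR/day, spot: {spot_cost:.2f} EUR/day")

# demand-side management lever: shift 20% of the peak load to the cheapest hour
shift = 0.2 * load_kwh[13]
shifted = load_kwh.copy()
shifted[13] -= shift
shifted[np.argmin(spot_eur_kwh)] += shift
print(f"spot after load shifting: {(shifted * spot_eur_kwh).sum():.2f} EUR/day")
```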

Keywords: data analytics, green production, industrial energy management, optimization, renewable energies, simulation

Procedia PDF Downloads 435
3511 Interval Bilevel Linear Fractional Programming

Authors: F. Hamidi, N. Amiri, H. Mishmast Nehi

Abstract:

The Bilevel Programming (BP) model has been presented for a decision-making process that consists of two decision makers in a hierarchical structure. In fact, BP is a model for a static two-person game (the leader player in the upper level and the follower player in the lower level) wherein each player tries to optimize his/her personal objective function under dependent constraints; this game is sequential and non-cooperative. The decision variables are divided between the two players, and one's choice affects the other's benefit and choices. In other words, BP consists of two nested optimization problems with two objective functions (upper and lower), where the constraint region of the upper level problem is implicitly determined by the lower level problem. In real cases, the coefficients of an optimization problem may not be precise, i.e., they may be intervals. In this paper, we develop an algorithm for solving interval bilevel linear fractional programming problems, that is to say, bilevel problems in which both objective functions are linear fractional, the coefficients are intervals, and the common constraint region is a polyhedron. From the original problem, the best and the worst bilevel linear fractional problems are derived; then, using the extended Charnes and Cooper transformation, each fractional problem can be reduced to a linear problem. We can then find the best and the worst optimal values of the leader objective function by two algorithms.
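
The Charnes-Cooper step can be sketched numerically as follows, with illustrative coefficients (the denominator is positive on the feasible region, as the transformation requires): max (c·x + α)/(d·x + β) over {Ax ≤ b, x ≥ 0} becomes a linear program in (y, t) with y = t·x and t = 1/(d·x + β).

```python
# Sketch of the Charnes-Cooper transformation of a linear fractional program.
import numpy as np
from scipy.optimize import linprog

c, alpha = np.array([3.0, 1.0]), 2.0     # fractional objective numerator
d, beta = np.array([1.0, 2.0]), 4.0      # denominator (positive on the region)
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([4.0, 6.0])

# variables z = (y1, y2, t); maximize c.y + alpha*t -> minimize the negative
obj = -np.concatenate([c, [alpha]])
A_ub = np.hstack([A, -b[:, None]])               # A y - b t <= 0
A_eq = np.array([np.concatenate([d, [beta]])])   # d.y + beta t = 1
res = linprog(obj, A_ub=A_ub, b_ub=np.zeros(2),
              A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * 3)

t = res.x[2]
x = res.x[:2] / t                        # recover the original variables
print("optimal x:", x.round(4),
      "objective:", (c @ x + alpha) / (d @ x + beta))
```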

Keywords: best and worst optimal solutions, bilevel programming, fractional, interval coefficients

Procedia PDF Downloads 446
3510 Significant Reduction in Specific CO₂ Emission through Process Optimization at G Blast Furnace, Tata Steel Jamshedpur

Authors: Shoumodip Roy, Ankit Singhania, M. K. G. Choudhury, Santanu Mallick, M. K. Agarwal, R. V. Ramna, Uttam Singh

Abstract:

One of the key corporate goals of Tata Steel is to demonstrate environmental leadership, and decreasing specific CO₂ emission is one of the key steps towards it. At any blast furnace, specific CO₂ emission is directly proportional to fuel intake. To reduce the fuel intake at G Blast Furnace, an initial benchmarking exercise was carried out against international and domestic blast furnaces to determine the potential for improvement. The gap identified during the exercise revealed that the benchmark blast furnaces operated with raw material quality superior to that at G Blast Furnace. However, since the raw materials for G Blast Furnace are sourced from captive mines, improvement of the raw material quality was out of scope. Therefore, trials were conducted with different operating regimes to identify the key process parameters which, on optimization, could significantly reduce the fuel intake at G Blast Furnace. The key process parameters identified from the trials were the stoichiometric oxygen ratio, the melting capacity ratio and the burden distribution inside the furnace. These parameters were optimized to bridge the gap in fuel intake at G Blast Furnace, thereby reducing specific CO₂ emission to benchmark levels. This paradigm shift lowered the fuel intake by 70 kg per ton of liquid iron produced, reducing the specific CO₂ emission by 15 percent.

Keywords: benchmark, blast furnace, CO₂ emission, fuel rate

Procedia PDF Downloads 280
3509 Electricity Sector's Status in Lebanon and Portfolio Optimization for the Future Electricity Generation Scenarios

Authors: Nour Wehbe

Abstract:

The Lebanese electricity sector is at the heart of a deep crisis. Electricity in Lebanon is supplied by Électricité du Liban (EdL), which has suffered from technical and financial deficiencies for decades and has proved insufficient, as demand still exceeds supply. As a result, backup generation is widespread throughout Lebanon. The sector consumes massive government resources and, on top of that, consumers pay massive additional amounts to satisfy their electrical needs. While developed countries have been investing in renewable energy for the past two decades, the Lebanese government realizes the importance of adopting such energy sourcing strategies for the upgrade of the electricity sector, especially since a detailed review of the energy potential in Lebanon has revealed a great potential of solar and wind energy resources, a considerable potential of biomass resources, and an important hydraulic potential. The diversification of the national electricity generation mix has therefore risen considerably on Lebanon's energy planning agenda. This paper presents a review of the energy status of Lebanon and a detailed review of the EdL structure, with the existing problems and recommended solutions. In addition, scenarios reflecting the implementation of policy projects are presented, and conclusions are drawn on the usefulness of the proposed evaluation methodology and the effectiveness of the adopted new energy policy for the electricity sector in Lebanon.

Keywords: Électricité du Liban (EdL), portfolio optimization, electricity generation mix, mean-variance approach

Procedia PDF Downloads 248
3508 Model Updating Based on Modal Parameters Using Hybrid Pattern Search Technique

Authors: N. Guo, C. Xu, Z. C. Yang

Abstract:

In order to ensure the high reliability of an aircraft, accurate structural dynamics analysis has become an indispensable part of aircraft structural design. Therefore, a structural finite element model that can be used to accurately calculate the structural dynamics and their transfer relations is the prerequisite for structural dynamic design. A dynamic finite element model updating method is presented to correct the uncertain parameters of the finite element model of a structure using measured modal parameters. The coordinate modal assurance criterion is used to evaluate the correlation level at each coordinate between the experimental and the analytical mode shapes. The weighted sum of the natural frequency residual and the coordinate modal assurance criterion residual is then used as the objective function. Moreover, the hybrid pattern search (HPS) optimization technique, which synthesizes the advantages of the pattern search (PS) optimization technique and the genetic algorithm (GA), is introduced to solve the dynamic FE model updating problem. A numerical simulation and a model updating experiment for the GARTEUR aircraft model are performed to validate the feasibility and effectiveness of the present method. The updated results show that the proposed method can successfully modify the incorrect parameters with good robustness.
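
A minimal sketch of such an objective and a plain pattern-search loop on a toy 3-DOF system (the paper's hybrid method additionally incorporates a GA; the weights and the toy model here are assumed):

```python
# Sketch of model updating: weighted frequency + MAC residual, compass search.
import numpy as np

def modes(k):
    # toy 3-DOF spring-mass chain; k holds the uncertain stiffness parameters
    K = np.array([[k[0] + k[1], -k[1], 0],
                  [-k[1], k[1] + k[2], -k[2]],
                  [0, -k[2], k[2]]])
    w2, phi = np.linalg.eigh(K)          # unit masses assumed
    return np.sqrt(w2), phi

k_true = np.array([2.0, 1.5, 1.0])
f_meas, phi_meas = modes(k_true)         # stands in for measured modal data

def mac(a, b):
    # modal assurance criterion between two mode-shape vectors
    return (a @ b) ** 2 / ((a @ a) * (b @ b))

def objective(k, w_f=1.0, w_m=1.0):
    f, phi = modes(k)
    res_f = np.sum(((f - f_meas) / f_meas) ** 2)
    res_m = np.sum([1 - mac(phi[:, i], phi_meas[:, i]) for i in range(3)])
    return w_f * res_f + w_m * res_m

# crude pattern-search stand-in: coordinate steps with shrinking step size
k = np.array([1.0, 1.0, 1.0])
step = 0.5
while step > 1e-4:
    improved = False
    for i in range(3):
        for s in (+step, -step):
            trial = k.copy()
            trial[i] += s
            if trial[i] > 0 and objective(trial) < objective(k):
                k, improved = trial, True
    if not improved:
        step /= 2
print("updated stiffness parameters:", k.round(4))
```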

Keywords: model updating, modal parameter, coordinate modal assurance criterion, hybrid genetic/pattern search

Procedia PDF Downloads 161
3507 Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization

Authors: Soheila Sadeghi

Abstract:

Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
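
A minimal sketch of PSO-trained ANN weights on synthetic stand-in data (the swarm settings, network size, and features are assumed, not the study's configuration): each particle's position encodes the network weights, and the swarm minimizes the prediction MSE.

```python
# Sketch of global-best PSO optimizing the weights of a small regression ANN.
import numpy as np

rng = np.random.default_rng(5)
X = rng.random((120, 3))              # e.g. estimate, resources, progress
y = 2*X[:, 0] + 0.5*X[:, 1] - X[:, 2] + 0.05*rng.standard_normal(120)

H = 5                                 # hidden units
DIM = 3*H + H + H + 1                 # W1, b1, W2, b2 flattened

def predict(w, X):
    W1 = w[:3*H].reshape(3, H); b1 = w[3*H:4*H]
    W2 = w[4*H:5*H];            b2 = w[5*H]
    h = np.tanh(X @ W1 + b1)          # single hidden layer
    return h @ W2 + b2

def mse(w):
    return np.mean((predict(w, X) - y) ** 2)

n_part = 30
pos = rng.normal(0, 0.5, (n_part, DIM))
vel = np.zeros((n_part, DIM))
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for it in range(300):
    r1, r2 = rng.random((2, n_part, DIM))
    vel = 0.7*vel + 1.5*r1*(pbest - pos) + 1.5*r2*(gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("final RMSE: %.4f" % np.sqrt(pbest_val.min()))
```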

Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction

Procedia PDF Downloads 59