Search results for: logistics network optimization
4681 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation
Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton
Abstract:
Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional functions of assisting drivers. This paper focuses on one of the applications of vehicle-to-infrastructure communication namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built into VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the interface of External Driver Model. A control algorithm is developed for recommending an optimal speed that is continuously updated in every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping or to minimise stopping times at a red phase. This study is performed with all connected vehicles at 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under 150 vehicles per hour and 200 per hour traffic volume conditions to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 vehicles per hour per lane and 200 vehicles per hour per lane, respectively.Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication
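As an illustration of the advisory-speed rule described in this abstract, a minimal sketch is shown below; the signal timing, speed limits and parameter names are illustrative assumptions rather than values from the study.

```python
# Minimal sketch of a GLOSA advisory-speed rule: given the distance to the next
# signal and the time until its next green window, recommend a speed (within
# legal limits) that lets the vehicle arrive while the signal is green.
def glosa_advisory_speed(distance_m, time_to_green_s, green_duration_s,
                         v_min=5.0, v_max=13.9):
    """Return an advisory speed in m/s, or None if no feasible speed exists."""
    # Earliest and latest arrival times that fall inside the green window.
    earliest, latest = time_to_green_s, time_to_green_s + green_duration_s
    if latest <= 0:
        return None  # green window already over; wait for the next cycle
    # Speeds needed to arrive exactly at the start / end of the green window.
    v_for_earliest = distance_m / earliest if earliest > 0 else v_max
    v_for_latest = distance_m / latest
    # Feasible speed band clipped to the allowed range; prefer the fastest speed.
    v_high = min(v_for_earliest, v_max)
    v_low = max(v_for_latest, v_min)
    return v_high if v_high >= v_low else None

# Example: a signal 200 m ahead turns green in 10 s and stays green for 25 s.
print(glosa_advisory_speed(200.0, 10.0, 25.0))  # ~13.9 m/s (50 km/h cap)
```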
Procedia PDF Downloads 171
4680 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, by using membrane technology and protein modification, a modified cheese was developed and its properties were compared with a control sample. In order to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals, a combination of ultrafiltration, nanofiltration and reverse osmosis technologies was employed. For protein modification, a two-stage chemical and enzymatic reaction was employed before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were compared with those of the control. Results showed that the modified protein enhanced the functional properties of the final cheese significantly (p-value < 0.05), even though the protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the more developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower than in the control sample. These results were supported by the results of the textural analysis. A non-linear viscoelastic study showed a triangular stress waveform for the cheese containing the modified protein, while the control sample showed a rectangular stress waveform, which suggests a better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were determined over 120 days. It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while the lactose of the control one was 4.5%. The control sample showed a shelf life of 8 weeks, while the shelf life of the modified cheese was 18 weeks in the refrigerator. During 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations over time followed the semicircular shape model (R² = 0.92, R²adj = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control one; this result could be explained by the shortage of energy sources for the microorganisms in the modified cheese. The lactose content of the modified sample was less than 0.2 ± 0.05% at the end of fermentation, while it was 3.7 ± 0.68% in the control sample. Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese
Procedia PDF Downloads 74
4679 Evaluation of Tumor Microenvironment Using Molecular Imaging
Authors: Fakhrosadat Sajjadian, Ramin Ghasemi Shayan
Abstract:
The tumor microenvironment plays a fundamental part in tumor initiation, progression, metastasis, and treatment resistance. It differs from normal tissue in terms of its extracellular matrix, its vascular and lymphatic networks, as well as its physiological conditions. The clinical application of molecular cancer imaging is often hindered by the high commercialization costs of targeted imaging agents as well as the limited clinical applications and small market size of some agents. Since many cancer types share similar characteristics of the tumor microenvironment, the ability to target these biomarkers has the potential to provide clinically translatable molecular imaging technologies for many types of cancer and broad clinical applications. Significant progress has been made in targeting the tumor microenvironment for molecular cancer imaging. In this review, we summarize the principles and strategies of recent advances in molecular imaging of the tumor microenvironment, using different imaging modalities for the early detection and diagnosis of cancer. To conclude, the tumor microenvironment (TME) surrounding tumor cells is a highly dynamic and heterogeneous composition of immune cells, fibroblasts, precursor cells, endothelial cells, signaling molecules and extracellular matrix (ECM) components. Keywords: molecular, imaging, TME, medicine
Procedia PDF Downloads 45
4678 Isolation, Characterization and Optimization of Alkalophilic and Thermotolerant Lipase from Bacillus subtilis Strain
Authors: Indu Bhushan Sharma, Rashmi Saraswat
Abstract:
The thermotolerant, solvent-stable and alkalophilic lipase-producing bacterial strain was isolated from a water sample from the foothills of Trikuta Mountain in Kakryal (Reasi district) in Jammu and Kashmir, India. The lipase-producing microorganisms were screened using tributyrin agar plates. The selected microbe was optimized for maximum lipase production by varying the carbon and nitrogen sources, incubation period and inoculum size. The selected strain was identified as Bacillus subtilis strain kakrayal_1 (BSK_1) using 16S rRNA sequence analysis. The effects of pH, temperature, metal ions, detergents and organic solvents on lipase activity were studied. The lipase was found to be stable over a pH range of 6.0 to 9.0 and exhibited maximum activity at pH 8. Lipolytic activity was highest at 37°C, and the enzyme remained active at 60°C for 24 hours; hence, it was established as thermotolerant. Production of lipase was significantly induced by vegetable oil, and the best nitrogen source was found to be peptone. The isolated Bacillus lipase was stimulated by pre-treatment with Mn2+, Ca2+, K+, Zn2+, and Fe2+. The lipase was stable in detergents such as Triton X-100, Tween 20 and Tween 80. Ethyl acetate (100%) enhanced lipase activity, whereas the activity remained stable in hexane. The optimization resulted in a 4-fold increase in lipase production. Bacillus lipases are ‘generally recognized as safe’ (GRAS) and are industrially interesting. The inducible alkaline, thermotolerant lipase proved to be stable in detergents and organic solvents and could be further researched as a potential biocatalyst for industrial applications such as biotransformation, detergent formulation, bioremediation and organic synthesis. Keywords: bacillus, lipase, thermotolerant, alkalophilic
Procedia PDF Downloads 255
4677 Optimizing Heavy-Duty Green Hydrogen Refueling Stations: A Techno-Economic Analysis of Turbo-Expander Integration
Authors: Christelle Rabbat, Carole Vouebou, Sary Awad, Alan Jean-Marie
Abstract:
Hydrogen has been proven to be a viable alternative to standard fuels as it is easy to produce and only generates water vapour and zero carbon emissions. However, despite the hydrogen benefits, the widespread adoption of hydrogen fuel cell vehicles and internal combustion engine vehicles is impeded by several challenges. The lack of refueling infrastructures remains one of the main hindering factors due to the high costs associated with their design, construction, and operation. Besides, the lack of hydrogen vehicles on the road diminishes the economic viability of investing in refueling infrastructure. Simultaneously, the absence of accessible refueling stations discourages consumers from adopting hydrogen vehicles, perpetuating a cycle of limited market uptake. To address these challenges, the implementation of adequate policies incentivizing the use of hydrogen vehicles and the reduction of the investment and operation costs of hydrogen refueling stations (HRS) are essential to put both investors and customers at ease. Even though the transition to hydrogen cars has been rather slow, public transportation companies have shown a keen interest in this highly promising fuel. Besides, their hydrogen demand is easier to predict and regulate than personal vehicles. Due to the reduced complexity of designing a suitable hydrogen supply chain for public vehicles, this sub-sector could be a great starting point to facilitate the adoption of hydrogen vehicles. Consequently, this study will focus on designing a chain of on-site green HRS for the public transportation network in Nantes Metropole leveraging the latest relevant technological advances aiming to reduce the costs while ensuring reliability, safety, and ease of access. To reduce the cost of HRS and encourage their widespread adoption, a network of 7 H35-T40 HRS has been designed, replacing the conventional J-T valves with turbo-expanders. Each station in the network has a daily capacity of 1,920 kg. Thus, the HRS network can produce up to 12.5 tH2 per day. The detailed cost analysis has revealed a CAPEX per station of 16.6 M euros leading to a network CAPEX of 116.2 M euros. The proposed station siting prioritized Nantes metropole’s 5 bus depots and included 2 city-centre locations. Thanks to the turbo-expander technology, the cooling capacity of the proposed HRS is 19% lower than that of a conventional station equipped with J-T valves, resulting in significant CAPEX savings estimated at 708,560 € per station, thus nearly 5 million euros for the whole HRS network. Besides, the turbo-expander power generation ranges from 7.7 to 112 kW. Thus, the power produced can be used within the station or sold as electricity to the main grid, which would, in turn, maximize the station’s profit. Despite the substantial initial investment required, the environmental benefits, cost savings, and energy efficiencies realized through the transition to hydrogen fuel cell buses and the deployment of HRS equipped with turbo-expanders offer considerable advantages for both TAN and Nantes Metropole. These initiatives underscore their enduring commitment to fostering green mobility and combatting climate change in the long term.Keywords: green hydrogen, refueling stations, turbo-expander, heavy-duty vehicles
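The network-level cost figures quoted above can be cross-checked with a few lines of arithmetic; this sketch only reuses the per-station values stated in the abstract.

```python
# Cross-check of the network-level figures quoted in the abstract.
stations = 7
capex_per_station_eur = 16.6e6       # CAPEX of one H35-T40 station
savings_per_station_eur = 708_560    # turbo-expander CAPEX saving per station

print(f"Network CAPEX  : {stations * capex_per_station_eur / 1e6:.1f} M EUR")   # ~116.2
print(f"Network savings: {stations * savings_per_station_eur / 1e6:.2f} M EUR") # ~4.96 ("nearly 5 million")
```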
Procedia PDF Downloads 56
4676 Detection of Safety Goggles on Humans in Industrial Environment Using Faster-Region Based on Convolutional Neural Network with Rotated Bounding Box
Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar
Abstract:
To successfully deliver our products in the market, the employees need to be in a safe environment, especially in an industrial and manufacturing environment. The consequences of delinquency in wearing safety glasses while working in industrial plants could be high risk to employees, hence the need to develop a real-time automatic detection system which detects the persons (violators) not wearing safety glasses. In this study a convolutional neural network (CNN) algorithm called faster region based CNN (Faster RCNN) with rotated bounding box has been used for detecting safety glasses on persons; the algorithm has an advantage of detecting safety glasses with different orientation angles on the persons. The proposed method of rotational bounding boxes with a convolutional neural network first detects a person from the images, and then the method detects whether the person is wearing safety glasses or not. The video data is captured at the entrance of restricted zones of the industrial environment (manufacturing plant), which is further converted into images at 2 frames per second. In the first step, the CNN with pre-trained weights on COCO dataset is used for person detection where the detections are cropped as images. Then the safety goggles are labelled on the cropped images using the image labelling tool called roLabelImg, which is used to annotate the ground truth values of rotated objects more accurately, and the annotations obtained are further modified to depict four coordinates of the rectangular bounding box. Next, the faster RCNN with rotated bounding box is used to detect safety goggles, which is then compared with traditional bounding box faster RCNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for detection of rotatory objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.Keywords: CNN, deep learning, faster RCNN, roLabelImg rotated bounding box, safety goggle detection
Procedia PDF Downloads 130
4675 Early Detection of Breast Cancer in Digital Mammograms Based on Image Processing and Artificial Intelligence
Authors: Sehreen Moorat, Mussarat Lakho
Abstract:
This paper proposes an artificial intelligence method using digital mammogram data for the detection of breast cancer. Many researchers have developed techniques for the early detection of breast cancer; early diagnosis helps to save many lives. Detection of breast cancer through mammography is an effective method that detects the cancer before it can be felt and increases the survival rate. In this paper, we have proposed an image processing technique for enhancing the image to detect the graphical table data and markings. Texture features based on the Gray-Level Co-Occurrence Matrix and intensity-based features are extracted from the selected region. For classification purposes, a neural-network-based supervised classifier system has been used which can discriminate between benign and malignant cases. A total of 68 digital mammograms have been used to train the classifier. The obtained results show that automated detection of breast cancer is beneficial for early diagnosis and increases the survival rates of breast cancer patients. The proposed system will help radiologists in the better interpretation of breast cancer. Keywords: medical imaging, cancer, processing, neural network
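A minimal sketch of the texture-feature and neural-network pipeline described above is given below, using recent scikit-image and scikit-learn APIs; the feature set, network size and the random stand-in data are illustrative assumptions, not the study's actual data or configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(roi):
    """Texture features from a grey-level co-occurrence matrix of an 8-bit ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    texture = [graycoprops(glcm, p).mean() for p in props]
    intensity = [roi.mean(), roi.std()]            # simple intensity-based features
    return np.array(texture + intensity)

# Illustrative training on random "ROIs"; in practice these would be the
# segmented regions of the 68 mammograms with benign/malignant labels.
rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(68, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 2, size=68)               # 0 = benign, 1 = malignant

X = np.vstack([glcm_features(r) for r in rois])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```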
Procedia PDF Downloads 259
4674 Participation in the Decision Making and Job Satisfaction in Greek Fish Farms
Authors: S. Anastasiou, C. Nathanailides
Abstract:
There is considerable evidence to suggest that employees' participation in the decision-making process of an organisation has a positive effect on job satisfaction and work performance. The purpose of the present work was to examine the HRM practices, demographics and the level of job satisfaction of employees in Greek aquaculture fish farms. A survey of employees (n = 86) in 6 Greek aquaculture firms was carried out. The results indicate that HRM practices such as personnel recruitment and communication between departments did not vary between different firms. The most frequent method of recruitment was through the professional or personal network of the managers. The preferred method of HRM communication was through the line managers and through group meetings. The level of job satisfaction increased with work experience and with participation in the decision-making process. A high percentage of the employees (81.3 ± 8.39%) felt that they frequently participated in the decision-making process. The aquaculture employees exhibited a high level of job satisfaction (88.1 ± 6.95). The level of job satisfaction was related to participation in the decision-making process (-0.633, P < 0.05) but was not related to age or gender. In terms of working conditions, employees were mostly satisfied with the work itself and their colleagues, and mostly dissatisfied with working hours, salary issues and low prospects of pay rises. Keywords: aquaculture, human resources, job satisfaction
Procedia PDF Downloads 467
4673 Real-Time Inventory Management and Operational Efficiency in Manufacturing
Authors: Tom Wanyama
Abstract:
We have developed a weight-based parts inventory monitoring system utilizing the Industrial Internet of Things (IIoT) to enhance operational efficiencies in manufacturing. The system addresses various challenges, including eliminating downtimes caused by stock-outs, preventing human errors in parts delivery and product assembly, and minimizing motion waste by reducing unnecessary worker movements. The system incorporates custom QR codes for simplified inventory tracking and retrieval processes. The generated data serves a dual purpose by enabling real-time optimization of parts flow within manufacturing facilities and facilitating retroactive optimization of stock levels for informed decision-making in inventory management. The pilot implementation at SEPT Learning Factory successfully eradicated data entry errors, optimized parts delivery, and minimized workstation downtimes, resulting in a remarkable increase of over 10% in overall equipment efficiency across all workstations. Leveraging the IIoT features, the system seamlessly integrates information into the process control system, contributing to the enhancement of product quality. This approach underscores the importance of effective tracking of parts inventory in manufacturing to achieve transparency, improved inventory control, and overall profitability. In the broader context, our inventory monitoring system aligns with the evolving focus on optimizing supply chains and maintaining well-managed warehouses to ensure maximum efficiency in the manufacturing industry.Keywords: industrial Internet of things, industrial systems integration, inventory monitoring, inventory control in manufacturing
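A minimal sketch of the weight-based stock-level logic behind such a system is shown below; the part names, unit weights, thresholds and tare weight are illustrative assumptions.

```python
# Sketch: convert a load-cell reading into a part count and flag low stock so a
# replenishment request can be raised before a workstation runs out.
from dataclasses import dataclass

@dataclass
class Bin:
    part_id: str          # matched to the custom QR code on the bin
    unit_weight_g: float
    reorder_point: int

def parts_remaining(bin_: Bin, gross_weight_g: float, tare_g: float) -> int:
    return max(0, round((gross_weight_g - tare_g) / bin_.unit_weight_g))

def check_bin(bin_: Bin, gross_weight_g: float, tare_g: float = 250.0) -> int:
    count = parts_remaining(bin_, gross_weight_g, tare_g)
    if count <= bin_.reorder_point:
        print(f"REPLENISH {bin_.part_id}: {count} parts left")
    return count

bolt_bin = Bin(part_id="M6-BOLT", unit_weight_g=8.4, reorder_point=20)
check_bin(bolt_bin, gross_weight_g=355.0)   # ~12 parts -> replenishment alert
```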
Procedia PDF Downloads 35
4672 Cerebrovascular Modeling: A Vessel Network Approach for Fluid Distribution
Authors: Karla E. Sanchez-Cazares, Kim H. Parker, Jennifer H. Tweedy
Abstract:
The purpose of this work is to develop a simple compartmental model of cerebral fluid balance including blood and cerebrospinal fluid (CSF). At the first level, the cerebral arteries and veins are modelled as bifurcating trees with constant scaling factors between generations, connected through a homogeneous microcirculation. The arteries and veins are assumed to be non-rigid, and the cross-sectional area, resistance and mean pressure in each generation are determined as functions of the blood volume flow rate. From the mean pressure and further assumptions about the variation of wall permeability, the transmural fluid flux can be calculated. The results suggest the next level of modelling, where the cerebral vasculature is divided into compartments: the large arteries, the small arteries, the capillaries and the veins, with effective compliances and permeabilities derived from the detailed vascular model. These vascular compartments are then linked to other compartments describing the different CSF spaces, the cerebral ventricles and the subarachnoid space. This compartmental model is used to calculate the distribution of fluid in the cranium. Known volumes and flows for normal conditions are used to determine reasonable parameters for the model, which can then be used to help understand pathological behaviour and suggest clinical interventions. Keywords: cerebrovascular, compartmental model, CSF model, vascular network
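A minimal sketch of the first modelling level described above, a symmetric bifurcating tree with constant scaling factors per generation and Poiseuille resistance summed over generations, is given below; the radii, lengths, scaling factors and flow value are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def tree_resistance(r0_m, l0_m, generations, radius_scale=0.79, length_scale=0.83,
                    mu=3.5e-3):
    """Total Poiseuille resistance of a symmetric bifurcating arterial tree.

    Each generation g has 2**g identical parallel vessels whose radius and
    length shrink by constant factors relative to the parent generation.
    """
    total = 0.0
    for g in range(generations):
        r = r0_m * radius_scale ** g
        l = l0_m * length_scale ** g
        r_single = 8.0 * mu * l / (np.pi * r ** 4)   # one vessel, Poiseuille flow
        total += r_single / 2 ** g                    # 2**g vessels in parallel
    return total  # Pa*s/m^3

R = tree_resistance(r0_m=2e-3, l0_m=4e-2, generations=10)
Q = 750e-6 / 60           # ~750 ml/min total cerebral blood flow, in m^3/s
print(f"pressure drop across the arterial tree: {R * Q / 133.3:.1f} mmHg")
```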
Procedia PDF Downloads 275
4671 Modeling of Power Network by ATP-Draw for Lightning Stroke Studies
Authors: John Morales, Armando Guzman
Abstract:
Protection relay algorithms play a crucial role in electric power system stability, and it is clear that lightning strokes produce the major percentage of faults and outages of Transmission Lines (TLs) and Distribution Feeders (DFs). In this context, it is imperative to develop novel protection relay algorithms. However, in order to achieve this aim, Electric Power System (EPS) networks have to be simulated as realistically as possible, especially the lightning phenomena and the EPS elements that affect their behavior, such as direct and indirect lightning, insulator strings, overhead lines, soil ionization and others. To date, researchers have proposed new protection relay algorithms considering common faults that are not produced by lightning strokes, omitting these phenomena, which are imperative for the behavior of transmission line protection relays. Based on the above, this paper presents the possibilities of using the Alternative Transients Program ATP-Draw for the modeling and simulation of lightning stroke studies, especially for protection relays, developed through the Transient Analysis of Control Systems (TACS) and the MODELS language of ATP-Draw. Keywords: back-flashover, faults, flashover, lightning stroke, modeling of lightning, outages, protection relays
Procedia PDF Downloads 316
4670 Large Core Silica Few-Mode Optical Fibers with Reduced Differential Mode Delay and Enhanced Mode Effective Area over 'C'-Band
Authors: Anton V. Bourdine, Vladimir A. Burdin, Oleg R. Delmukhametov
Abstract:
This work presents a fast and simple method for the design of large core silica optical fibers with differential mode delay (DMD) management. Some results are reported concerned with refractive index profile optimization for 42 µm core 16-LP-mode optical fiber for next-generation optical networks. Here special refractive index profile form provides total DMD reducing over all mode staff under desired enhanced mode effective area. Method for the simulation of 'real manufactured' few-mode optical fiber (FMF) core geometry differing from the desired optimized structure by core non-symmetrical ellipticity and refractive index profile deviation including local fluctuations is proposed. Results of the following analysis of optimized FMF with inserted geometry distortions performed by earlier on developed modification of rigorous mixed finite-element method showed strong DMD degradation that requires additional higher-order mode management. In addition, this work also presents a method for design mode division multiplexer channel precision spatial positioning scheme at FMF core end that provides one of the potentiality solutions of described DMD degradation problem concerned with 'distorted' core geometry due to features of optical fiber manufacturing techniques.Keywords: differential mode delay, few-mode optical fibers, nonlinear Shannon limit, optical fiber non-circularity, ‘real manufactured’ optical fiber core geometry simulation, refractive index profile optimization
Procedia PDF Downloads 157
4669 The Benefits of End-To-End Integrated Planning from the Mine to Client Supply for Minimizing Penalties
Authors: G. Martino, F. Silva, E. Marchal
Abstract:
The control over delivered iron ore blend characteristics is one of the most important aspects of the mining business. The iron ore price is a function of its composition, which is the outcome of the beneficiation process. Therefore, end-to-end integrated planning of mine operations can reduce the risk of penalties on the iron ore price. In a standard iron mining company, the production chain is composed of mining, ore beneficiation, and client supply. When mine planning and client supply decisions are made in an uncoordinated way, the beneficiation plant struggles to deliver the best blend possible. Technological improvements in several fields have allowed bridging the gap between departments and boosting integrated decision-making processes. Clusterization and classification algorithms applied to historical production data generate reasonable predictions of the quality and volume of iron ore produced for each pile of run-of-mine (ROM) ore processed. Mathematical modeling can use those deterministic relations to propose iron ore blends that better fit specifications within a delivery schedule. Additionally, a model capable of representing the whole production chain can clearly compare the overall impact of different decisions in the process. This study shows how flexibilization combined with a planning optimization model linking the mine and the ore beneficiation processes can reduce the risk of out-of-specification deliveries. The model capabilities are illustrated on a hypothetical iron ore mine with a magnetic separation process. Finally, this study shows ways of reducing costs or increasing profit by optimizing process indicators across the production chain and integrating the different planning stages with the sales decisions. Keywords: clusterization and classification algorithms, integrated planning, mathematical modeling, optimization, penalty minimization
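The blend-proposal step described above can be illustrated with a small linear program: choose how much of each ROM pile to feed so that the blended ore meets a grade specification at minimum cost. The pile grades, costs and specification below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative ROM piles: predicted Fe grade (%), silica (%), and handling cost ($/t).
fe = np.array([62.0, 58.5, 65.0, 60.0])
sio2 = np.array([4.0, 6.5, 3.0, 5.5])
cost = np.array([8.0, 5.0, 12.0, 6.0])

demand_t = 100_000            # tonnes to deliver
min_fe, max_sio2 = 61.0, 5.0  # blend specification

# Variables x_i = tonnes taken from pile i.  Constraints are written so that the
# blended grade meets the spec: sum((min_fe - fe_i) * x_i) <= 0, and similarly for silica.
A_ub = np.vstack([min_fe - fe, sio2 - max_sio2])
b_ub = np.zeros(2)
A_eq = np.ones((1, len(fe)))
b_eq = [demand_t]
bounds = [(0, 60_000)] * len(fe)   # availability of each pile

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds,
              method="highs")
print("tonnes per pile:", res.x.round(1), "  total cost:", round(res.fun, 0))
```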
Procedia PDF Downloads 123
4668 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed; it is impractical to train different networks for all possible sizes. When its size does not match the size of the input of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in some common ways. This will result in the loss of some information important to the geolocalization task, thus affecting the performance of the image geolocalization method. For example, excessive down-sampling can lead to blurred building contour, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocation method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. Firstly, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel net) is used to approximate the best receptive field, thus keeping the geometric shapes as the original image. And SENet (squeeze and extraction net) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image, to obtain the final resized images. (2) The proposed image geolocalization method embeds the above image resizer as a fronting layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, the triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of geometric elements extracted by the first convolutional layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably to recently mainstream geolocalization methods.Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
Procedia PDF Downloads 214
4667 The Optimization of the Parameters for Eco-Friendly Leaching of Precious Metals from Waste Catalyst
Authors: Silindile Gumede, Amir Hossein Mohammadi, Mbuyu Germain Ntunka
Abstract:
Goal 12 of the 17 Sustainable Development Goals (SDGs) encourages sustainable consumption and production patterns. This necessitates achieving the environmentally safe management of chemicals and all wastes throughout their life cycle and the proper disposal of pollutants and toxic waste. Fluid catalytic cracking (FCC) catalysts are widely used in the refinery to convert heavy feedstocks to lighter ones. During the refining processes, the catalysts are deactivated and discarded as hazardous toxic solid waste. Spent catalysts (SC) contain high-cost metal, and the recovery of metals from SCs is a tactical plan for supplying part of the demand for these substances and minimizing the environmental impacts. Leaching followed by solvent extraction, has been found to be the most efficient method to recover valuable metals with high purity from spent catalysts. However, the use of inorganic acids during the leaching process causes a secondary environmental issue. Therefore, it is necessary to explore other alternative efficient leaching agents that are economical and environmentally friendly. In this study, the waste catalyst was collected from a domestic refinery and was characterised using XRD, ICP, XRF, and SEM. Response surface methodology (RSM) and Box Behnken design were used to model and optimize the influence of some parameters affecting the acidic leaching process. The parameters selected in this investigation were the acid concentration, temperature, and leaching time. From the characterisation results, it was found that the spent catalyst consists of high concentrations of Vanadium (V) and Nickel (Ni); hence this study focuses on the leaching of Ni and V using a biodegradable acid to eliminate the formation of the secondary pollution.Keywords: eco-friendly leaching, optimization, metal recovery, leaching
Procedia PDF Downloads 68
4666 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting
Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi
Abstract:
An ad hoc network consists of wireless mobile nodes which connect to each other without any fixed infrastructure. The best way to form a hierarchical structure in such a network is clustering, and various clustering methods can form more stable clusters according to the nodes' mobility. In this research, we propose an algorithm which allocates a weight to each node based on factors such as link stability and power reduction rate. In the second phase, according to the weights allocated in the previous phase, the cellular learning automaton picks out the nodes which are candidates for being cluster heads. In the third phase, the learning automaton selects the cluster head nodes and the member nodes and forms the clusters. Thus, the automaton learns from its environment and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm, we have used OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease network overhead by reducing the update rate. Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power
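A minimal sketch of the first-phase additive node weighting, together with a greedy stand-in for the cluster-head selection, is shown below; the weighting coefficients and the toy topology are illustrative assumptions, and the learning-automata phases themselves are not reproduced.

```python
# Sketch of the additive node weighting used to shortlist cluster-head candidates:
# nodes with more stable links and slower battery drain receive higher weights.
def node_weight(link_stability, power_drain_rate, w_stability=0.7, w_power=0.3):
    """link_stability in [0, 1]; power_drain_rate in [0, 1] (fraction per hour)."""
    return w_stability * link_stability + w_power * (1.0 - power_drain_rate)

nodes = {
    "n1": {"stability": 0.9, "drain": 0.05, "neighbours": {"n2", "n3"}},
    "n2": {"stability": 0.6, "drain": 0.20, "neighbours": {"n1", "n3", "n4"}},
    "n3": {"stability": 0.8, "drain": 0.10, "neighbours": {"n1", "n2"}},
    "n4": {"stability": 0.4, "drain": 0.30, "neighbours": {"n2"}},
}

weights = {n: node_weight(v["stability"], v["drain"]) for n, v in nodes.items()}

# Greedy stand-in for the automaton's selection: repeatedly promote the heaviest
# uncovered node to cluster head and absorb its neighbours as members.
heads, covered = [], set()
for n in sorted(weights, key=weights.get, reverse=True):
    if n not in covered:
        heads.append(n)
        covered |= {n} | nodes[n]["neighbours"]

print("weights:", {k: round(v, 2) for k, v in weights.items()})
print("cluster heads:", heads)
```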
Procedia PDF Downloads 411
4665 Strengthening Farmer-to-farmer Knowledge Sharing Network: A Pathway to Improved Extension Service Delivery
Authors: Farouk Shehu Abdulwahab
Abstract:
The concept of farmer-farmer knowledge sharing was introduced to bridge the extension worker-farmer ratio gap in developing countries. However, the idea was poorly accepted, especially in typical agrarian communities. Therefore, the study explores the concept of a farmer-to-farmer knowledge-sharing network to enhance extension service delivery. The study collected data from 80 farmers randomly selected through a series of multiple stages. The Data was analysed using a 5-point Likert scale and descriptive statistics. The Likert scale results revealed that 62.5% of the farmers are satisfied with farmer-to-farmer knowledge-sharing networks. Moreover, descriptive statistics show that lack of capacity building and low level of education are the most significant problems affecting farmer-farmer sharing networks. The major implication of these findings is that the concept of farmer-farmer knowledge-sharing networks can work better for farmers in developing countries as it was perceived by them as a reliable alternative for information sharing. Therefore, the study recommends introducing incentives into the concept of farmer-farmer knowledge-sharing networks and enhancing the capabilities of farmers who are opinion leaders in the farmer-farmer concept of knowledge-sharing to make it more sustainable.Keywords: agricultural productivity, extension, farmer-to-farmer, livelihood, technology transfer
Procedia PDF Downloads 64
4664 Intelligent Rainwater Reuse System for Irrigation
Authors: Maria M. S. Pires, Andre F. X. Gloria, Pedro J. A. Sebastiao
Abstract:
Technological advances in the area of the Internet of Things have been creating more and more solutions for agriculture. These solutions are quite important, as they lead to saving the most precious resource, water, a need that is a concern worldwide. This paper proposes the creation of an Internet of Things system based on a network of sensors and interconnected actuators that automatically monitors the quality of the rainwater stored inside a tank so that it can be used for irrigation. The main objective is to promote sustainability by reusing rainwater for irrigation instead of water that would otherwise be available for other functions, such as other production processes or domestic tasks. A mobile application was developed for Android so that the user can control and monitor the system in real time. In the application, it is possible to visualize the data describing the quality of the water stored in the tank, as well as to act on the implemented actuators, for example to start or stop the irrigation system and to drain the water in case of poor water quality. The implemented system represents a simple solution with a high level of efficiency, as shown by the tests and results obtained in the available test environment. Keywords: internet of things, irrigation system, wireless sensor and actuator network, ESP32, sustainability, water reuse, water efficiency
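A minimal sketch of the monitoring and actuation logic such a system could run is shown below; the quality thresholds, sensor names and stand-in readings are illustrative assumptions (on the ESP32 itself this logic would typically be written in MicroPython or C).

```python
import random, time

# Illustrative water-quality thresholds; real limits depend on the crop and on
# local irrigation-water standards.
PH_RANGE = (6.0, 8.5)
MAX_TURBIDITY_NTU = 25.0

def read_sensors():
    """Stand-in for the tank's pH / turbidity / level sensors."""
    return {"ph": random.uniform(5.5, 9.0),
            "turbidity": random.uniform(0.0, 40.0),
            "level_pct": random.uniform(0.0, 100.0)}

def control_step(reading):
    ok = PH_RANGE[0] <= reading["ph"] <= PH_RANGE[1] and \
         reading["turbidity"] <= MAX_TURBIDITY_NTU
    if not ok:
        return "open_drain_valve"       # discard poor-quality water
    if reading["level_pct"] > 20.0:
        return "start_irrigation_pump"  # reuse stored rainwater
    return "idle"                       # tank too low; fall back to the usual supply

for _ in range(3):
    r = read_sensors()
    print({k: round(v, 1) for k, v in r.items()}, "->", control_step(r))
    time.sleep(0.1)
```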
Procedia PDF Downloads 149
4663 Analysis of Residents’ Travel Characteristics and Policy Improving Strategies
Authors: Zhenzhen Xu, Chunfu Shao, Shengyou Wang, Chunjiao Dong
Abstract:
To improve the satisfaction of residents' travel, this paper analyzes the characteristics and influencing factors of urban residents' travel behavior. First, a Multinomial Logit (MNL) model is built to analyze the characteristics of residents' travel behavior, reveal the influence of individual attributes, family attributes and travel characteristics on the choice of travel mode, and identify the significant factors; suggestions for policy improvement are then put forward. Finally, Support Vector Machine (SVM) and Multi-Layer Perceptron (MLP) models are introduced to evaluate the policy effect. This paper selects Futian Street in Futian District, Shenzhen City, for investigation and research. The results show that gender, age, education, income, number of cars owned, travel purpose, departure time, journey time, travel distance and trip frequency all have a significant influence on residents' choice of travel mode. Based on the above results, two policy improvement suggestions are put forward, aimed at reducing public transportation and non-motorized vehicle travel times, and the policy effect is evaluated. Before the evaluation, the prediction performance of the MNL, SVM and MLP models was assessed; after parameter optimization, the prediction accuracy of the three models was 72.80%, 71.42%, and 76.42%, respectively. The MLP model, with the highest prediction accuracy, was selected to evaluate the effect of the policy improvements. The results showed that after the implementation of the policy, the proportion of public transportation in plan 1 and plan 2 increased by 14.04% and 9.86%, respectively, while the proportion of private cars decreased by 3.47% and 2.54%, respectively. The proportion of car trips decreased noticeably, while the proportion of public transport trips increased. It can be considered that the measures have a positive effect on promoting green trips and improving the satisfaction of urban residents, and they can provide a reference for relevant departments when formulating transportation policies. Keywords: neural network, travel characteristics analysis, transportation choice, travel sharing rate, traffic resource allocation
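A minimal sketch of the model-comparison step, an MNL (multinomial logistic regression), an SVM and an MLP trained on the same features, is shown below using scikit-learn; the synthetic data and hyper-parameters are illustrative assumptions, not the survey data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the survey: columns such as age, income, cars owned,
# journey time, travel distance; target = travel mode (0 walk/bike, 1 transit, 2 car).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=1000) > 0).astype(int) \
    + (X[:, 1] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "MNL": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf", C=1.0),
    "MLP": MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
```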
Procedia PDF Downloads 138
4662 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Objective: Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI, and investigating rsfMRI connectivity may aid in the detection of abnormal activity. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Methods: 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as independent component analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components that were obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients compared with healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. Results: The two-sample t-test analysis yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, whereas the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-opercular gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001 for the analysis. Conclusion: This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy. Keywords: ICA, RSN, refractory epilepsy, rsfMRI
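A minimal sketch of the two analysis ingredients named above, a group ICA (here, plain FastICA on temporally concatenated data) and a two-sample t-test between groups, is given below; the data shapes and the component-loading measure are illustrative assumptions, and a real rsfMRI pipeline would add preprocessing, back-reconstruction and multiple-comparison correction.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients, n_controls, n_timepoints, n_voxels = 26, 19, 150, 500

# Stand-in rsfMRI data: a (time x voxel) matrix per subject.
patients = rng.normal(size=(n_patients, n_timepoints, n_voxels))
controls = rng.normal(size=(n_controls, n_timepoints, n_voxels))

# "Group ICA": temporally concatenate all subjects and extract spatial components.
stacked = np.vstack([*patients, *controls])
ica = FastICA(n_components=10, random_state=0, max_iter=500)
ica.fit(stacked)
spatial_maps = ica.components_          # (components x voxels), inspected/sorted next

def component_amplitude(subject_ts, component):
    """Per-subject expression of a spatial component (std of its time course)."""
    timecourse = subject_ts @ component / np.linalg.norm(component)
    return timecourse.std()

# Group comparison on the first component: per-subject loading, then a t-test.
pat = [component_amplitude(s, spatial_maps[0]) for s in patients]
con = [component_amplitude(s, spatial_maps[0]) for s in controls]
t, p = ttest_ind(pat, con)
print(f"component 0: t = {t:.2f}, p = {p:.3f}")
```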
Procedia PDF Downloads 76
4661 Enhancing Plant Throughput in Mineral Processing Through Multimodal Artificial Intelligence
Authors: Muhammad Bilal Shaikh
Abstract:
Mineral processing plants play a pivotal role in extracting valuable minerals from raw ores, contributing significantly to various industries. However, the optimization of plant throughput remains a complex challenge, necessitating innovative approaches for increased efficiency and productivity. This research paper investigates the application of Multimodal Artificial Intelligence (MAI) techniques to address this challenge, aiming to improve overall plant throughput in mineral processing operations. The integration of multimodal AI leverages a combination of diverse data sources, including sensor data, images, and textual information, to provide a holistic understanding of the complex processes involved in mineral extraction. The paper explores the synergies between various AI modalities, such as machine learning, computer vision, and natural language processing, to create a comprehensive and adaptive system for optimizing mineral processing plants. The primary focus of the research is on developing advanced predictive models that can accurately forecast various parameters affecting plant throughput. Utilizing historical process data, machine learning algorithms are trained to identify patterns, correlations, and dependencies within the intricate network of mineral processing operations. This enables real-time decision-making and process optimization, ultimately leading to enhanced plant throughput. Incorporating computer vision into the multimodal AI framework allows for the analysis of visual data from sensors and cameras positioned throughout the plant. This visual input aids in monitoring equipment conditions, identifying anomalies, and optimizing the flow of raw materials. The combination of machine learning and computer vision enables the creation of predictive maintenance strategies, reducing downtime and improving the overall reliability of mineral processing plants. Furthermore, the integration of natural language processing facilitates the extraction of valuable insights from unstructured textual data, such as maintenance logs, research papers, and operator reports. By understanding and analyzing this textual information, the multimodal AI system can identify trends, potential bottlenecks, and areas for improvement in plant operations. This comprehensive approach enables a more nuanced understanding of the factors influencing throughput and allows for targeted interventions. The research also explores the challenges associated with implementing multimodal AI in mineral processing plants, including data integration, model interpretability, and scalability. Addressing these challenges is crucial for the successful deployment of AI solutions in real-world industrial settings. To validate the effectiveness of the proposed multimodal AI framework, the research conducts case studies in collaboration with mineral processing plants. The results demonstrate tangible improvements in plant throughput, efficiency, and cost-effectiveness. The paper concludes with insights into the broader implications of implementing multimodal AI in mineral processing and its potential to revolutionize the industry by providing a robust, adaptive, and data-driven approach to optimizing plant operations. In summary, this research contributes to the evolving field of mineral processing by showcasing the transformative potential of multimodal artificial intelligence in enhancing plant throughput. 
The proposed framework offers a holistic solution that integrates machine learning, computer vision, and natural language processing to address the intricacies of mineral extraction processes, paving the way for a more efficient and sustainable future in the mineral processing industry.Keywords: multimodal AI, computer vision, NLP, mineral processing, mining
Procedia PDF Downloads 68
4660 Advanced Simulation and Enhancement for Distributed and Energy Efficient Scheduling for IEEE802.11s Wireless Enhanced Distributed Channel Access Networks
Authors: Fisayo G. Ojo, Shamala K. Subramaniam, Zuriati Ahmad Zukarnain
Abstract:
As technology advances and wireless applications become dependable sources of service, their physical layers are being embedded into ever smaller devices, and the problems of energy efficiency and consumption grow accordingly. This paper reviews work done in recent years on wireless applications and distributed computing; we find that applications are becoming dependable and increasingly share resource allocation with other applications in distributed computing. Applications embedded in distributed systems suffer from problems of power stability and efficiency. In this review, we also show that discrete event simulation has so far been left untouched and has not been adopted in distributed systems as a technique for simulating and scheduling the events that take place in the development of distributed computing applications. We shed more light on techniques and results proposed by other researchers to highlight their unsatisfactory results, and to show that more work still has to be done on the issues of energy efficiency in wireless applications and congestion in distributed computing. Keywords: discrete event simulation (DES), distributed computing, energy efficiency (EE), internet of things (IOT), quality of service (QOS), user equipment (UE), wireless mesh network (WMN), wireless sensor network (wsn), worldwide interoperability for microwave access x (WiMAX)
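Since the review argues that discrete event simulation is under-used for scheduling in such systems, a minimal sketch of a DES core is shown below; the event types, arrival rate and service time are illustrative assumptions that very loosely model access to a single shared channel.

```python
import heapq, random

def simulate(n_packets=200, mean_interarrival=1.0, mean_service=0.6, seed=1):
    """Minimal discrete-event simulation of a single shared channel (FIFO)."""
    random.seed(seed)
    events, t = [], 0.0
    for _ in range(n_packets):                       # schedule all packet arrivals
        t += random.expovariate(1.0 / mean_interarrival)
        heapq.heappush(events, (t, "arrival"))
    channel_free_at, delays, in_system, peak = 0.0, [], 0, 0
    while events:
        now, kind = heapq.heappop(events)            # advance to the next event
        if kind == "arrival":
            in_system += 1
            peak = max(peak, in_system)
            start = max(now, channel_free_at)        # wait while the channel is busy
            channel_free_at = start + random.expovariate(1.0 / mean_service)
            heapq.heappush(events, (channel_free_at, "departure"))
            delays.append(channel_free_at - now)     # queueing + transmission delay
        else:                                        # departure frees one slot
            in_system -= 1
    return sum(delays) / len(delays), peak

mean_delay, peak_backlog = simulate()
print(f"mean packet delay: {mean_delay:.2f} time units, peak backlog: {peak_backlog}")
```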
Procedia PDF Downloads 192
4659 Manufacturing Anomaly Detection Using a Combination of Gated Recurrent Unit Network and Random Forest Algorithm
Authors: Atinkut Atinafu Yilma, Eyob Messele Sefene
Abstract:
Anomaly detection is one of the essential mechanisms to control and reduce production loss, especially in today's smart manufacturing. Quick anomaly detection aids in reducing the cost of production by minimizing the possibility of producing defective products. However, developing an anomaly detection model that can rapidly detect a production change is challenging. This paper proposes a Gated Recurrent Unit (GRU) network combined with a Random Forest (RF) to quickly detect anomalies in the production process in real time. The GRU is used as a feature detector, and the RF as a classifier using the input features from the GRU. The model was tested using various synthetic and real-world datasets against benchmark methods. The results show that the proposed GRU-RF outperforms the benchmark methods, with the shortest time taken to detect anomalies in the production process. Based on the investigation from the study, this proposed model can eliminate or reduce unnecessary production costs and bring a competitive advantage to manufacturing industries. Keywords: anomaly detection, multivariate time series data, smart manufacturing, gated recurrent unit network, random forest
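A minimal sketch of the GRU-as-feature-detector plus random-forest-classifier arrangement described above is given below, using PyTorch and scikit-learn; the window length, hidden size and synthetic data are illustrative assumptions, and the GRU is left untrained here for brevity, whereas the paper trains it.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Synthetic multivariate time-series windows: (batch, window length, sensors).
X = rng.normal(size=(300, 50, 4)).astype(np.float32)
y = (X[:, -10:, 0].mean(axis=1) > 0.3).astype(int)   # stand-in anomaly label

class GRUEncoder(nn.Module):
    """GRU feature detector: the final hidden state summarises each window."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
    def forward(self, x):
        _, h_n = self.gru(x)          # h_n: (num_layers, batch, hidden)
        return h_n[-1]                # (batch, hidden)

encoder = GRUEncoder()
with torch.no_grad():
    features = encoder(torch.from_numpy(X)).numpy()

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, y)
print("training accuracy:", clf.score(features, y))
```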
Procedia PDF Downloads 118
4658 Computational Model for Predicting Effective siRNA Sequences Using Whole Stacking Energy (ΔG) for Gene Silencing
Authors: Reena Murali, David Peter S.
Abstract:
Small interfering RNA (siRNA) alters the regulatory role of mRNA during gene expression through translational inhibition. Recent studies show that up-regulation of mRNA can cause serious diseases such as cancer, so designing effective siRNAs with good knockdown effects plays an important role in gene silencing. Various siRNA design tools have been developed previously. In this work, we analyze the existing well-scoring second-generation siRNA prediction tools and try to optimize the efficiency of siRNA prediction by designing a computational model using an Artificial Neural Network and the whole stacking energy (ΔG), which may help in gene silencing and drug design for cancer therapy. Our model is trained and tested against a large data set of siRNA sequences. Validation of our results is done by finding the correlation coefficient of predicted versus experimentally observed inhibition efficacy of siRNA. We achieved a correlation coefficient of 0.727 with our previous computational model and could improve the correlation coefficient up to 0.753 when the threshold of the whole stacking energy is greater than or equal to -32.5 kcal/mol. Keywords: artificial neural network, double stranded RNA, RNA interference, short interfering RNA
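A minimal sketch of combining sequence-derived features with a whole-stacking-energy (ΔG) cut-off in a neural-network efficacy predictor is given below; the features, synthetic data and the exact way the -32.5 kcal/mol threshold is applied are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 400

# Stand-in features per siRNA: GC content, end stabilities, position scores, etc.
X = rng.normal(size=(n, 6))
dG = rng.uniform(-45.0, -20.0, size=n)           # whole stacking energy, kcal/mol
efficacy = 0.5 * X[:, 0] - 0.02 * dG + rng.normal(scale=0.3, size=n)

# Keep only candidates whose whole stacking energy meets the threshold.
mask = dG >= -32.5
X_sel = np.column_stack([X[mask], dG[mask]])
y_sel = efficacy[mask]

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
model.fit(X_sel, y_sel)
r, _ = pearsonr(model.predict(X_sel), y_sel)     # analogue of the reported correlation
print(f"correlation (predicted vs observed efficacy): {r:.3f}")
```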
Procedia PDF Downloads 526
4657 Other-Generated Disclosure: A Challenge to Privacy on Social Network Sites
Authors: Tharntip Tawnie Chutikulrungsee, Oliver Kisalay Burmeister, Maumita Bhattacharya, Dragana Calic
Abstract:
Sharing on social network sites (SNSs) has rapidly emerged as a new social norm and has become a global phenomenon. Billions of users reveal not only their own information (self disclosure) but also information about others (other-generated disclosure), resulting in a risk and a serious threat to either personal or informational privacy. Self-disclosure (SD) has been extensively researched in the literature, particularly regarding control of individual and existing privacy management. However, far too little attention has been paid to other-generated disclosure (OGD), especially by insiders. OGD has a strong influence on self-presentation, self-image, and electronic word of mouth (eWOM). Moreover, OGD is more credible and less likely manipulated than SD, but lacks privacy control and legal protection to some extent. This article examines OGD in depth, ranging from motivation to both online and offline impacts, based upon lived experiences from both ‘the disclosed’ and ‘the discloser’. Using purposive sampling, this phenomenological study involves an online survey and in-depth interviews. The findings report the influence of peer disclosure as well as users’ strategies to mitigate privacy issues. This article also calls attention to the challenge of OGD privacy and inadequacies in the law related to privacy protection in the digital domain.Keywords: facebook, online privacy, other-generated disclosure, social networks sites (SNSs)
Procedia PDF Downloads 251
4656 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri
Abstract:
Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer from multiple node failures when they are exposed to harsh environments such as military zones or disaster locations, and they lose connectivity by getting partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power and a longer transmission range, which enforces the use of a minimum number of them. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is restated as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points where RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work. Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks
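A compact sketch of a GA for this placement problem is given below: a chromosome holds a set of candidate relay positions with on/off flags, and fitness rewards reconnecting segments with as few relays as possible. The segment positions, radio range and GA parameters are illustrative assumptions for a toy instance, not the paper's formulation.

```python
import random, math

random.seed(1)
SEGMENTS = [(0, 0), (12, 0), (6, 10)]     # representative node of each disjoint segment
RANGE = 5.0                               # radio range
GRID = [(x, y) for x in range(0, 13, 2) for y in range(0, 11, 2)]
N_RELAYS = 6                              # upper bound on relays per chromosome

def reachable_segments(relays):
    """Number of segments in the same connected component as segment 0."""
    pts = SEGMENTS + relays
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        for j in range(len(pts)):
            if j not in seen and math.dist(pts[i], pts[j]) <= RANGE:
                seen.add(j); stack.append(j)
    return sum(1 for i in range(len(SEGMENTS)) if i in seen)

def fitness(chrom):
    used = [p for p, on in zip(chrom[0], chrom[1]) if on]
    # Reward connecting segments first, then penalise each relay used.
    return 100 * reachable_segments(used) - len(used)

def random_chrom():
    return (random.sample(GRID, N_RELAYS), [random.random() < 0.7 for _ in range(N_RELAYS)])

pop = [random_chrom() for _ in range(60)]
for _ in range(80):                       # evolve: elitism, crossover, mutation
    pop.sort(key=fitness, reverse=True)
    nxt = pop[:10]
    while len(nxt) < 60:
        a, b = random.sample(pop[:30], 2)
        cut = random.randrange(1, N_RELAYS)
        child = (a[0][:cut] + b[0][cut:], a[1][:cut] + b[1][cut:])
        if random.random() < 0.3:         # mutation: move one relay and toggle it
            i = random.randrange(N_RELAYS)
            child[0][i] = random.choice(GRID)
            child[1][i] = not child[1][i]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
used = [p for p, on in zip(best[0], best[1]) if on]
print("relays used:", used)
print("all segments connected:", reachable_segments(used) == len(SEGMENTS))
```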
Procedia PDF Downloads 241
4655 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction
Authors: Omer Cahana, Ofer Levi, Maya Herman
Abstract:
Magnetic Resonance Imaging (MRI) is a lengthy medical scan, the length stemming from a long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse in a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While the rapid advances in Deep Learning (DL) have brought tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, which is the largest and most diverse benchmark for MRI reconstruction. Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning
Procedia PDF Downloads 91
4654 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty
Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos
Abstract:
Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. The cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and development threat index from NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. This research leverages state-of-the-art techniques in the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data from the Jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN used in practice for biodiversity conservation. Our method may better help guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning
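A minimal sketch of one step of a cellular-automata land-use simulation of the kind described above is given below; the neighbourhood rule, threat index and probabilities are illustrative assumptions, not the actual model driven by the NASA SEDAC data.

```python
import numpy as np

rng = np.random.default_rng(0)
size = 50
developed = np.zeros((size, size), dtype=bool)
developed[24:26, 24:26] = True                     # seed settlement
threat = rng.random((size, size))                  # stand-in development-threat index

def ca_step(developed, threat, base_p=0.02, neighbour_boost=0.15):
    """One annual step: a cell develops with a probability that grows with the
    number of already-developed neighbours and the local threat index."""
    padded = np.pad(developed, 1).astype(int)
    neighbours = sum(np.roll(np.roll(padded, dx, 0), dy, 1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))[1:-1, 1:-1]
    p = np.clip(base_p * threat + neighbour_boost * neighbours * threat, 0, 1)
    return developed | (rng.random(developed.shape) < p)

for year in range(10):
    developed = ca_step(developed, threat)
print("developed cells after 10 steps:", int(developed.sum()))
```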
Procedia PDF Downloads 208
4653 Performance Based Road Asset Evaluation
Authors: Kidus Dawit Gedamu
Abstract:
Addis Ababa City Road Authority is responsible for managing and setting performance evaluation of the city’s road network using the International Roughness Index (IRI). This helps the authority to conduct pavement condition assessments of asphalt roads each year to determine the health status or Level of service (LOS) of the roadway network and plan program improvements such as maintenance, resurfacing and rehabilitation. For a lower IRI limit economical and acceptable maintenance strategy may be selected among a number of maintenance alternatives. The Highway Development and Management (HDM-4) tool can do such measures to help decide which option is the best by evaluating the economic and structural conditions. This paper specifically addresses flexible pavement, including two principal arterial streets under the administration of the Addis Ababa City Roads Authority. The roads include the road from Megenagna Interchange to Ayat Square and from Ayat Square to Tafo RA. First, it was assessed the procedures followed by the city's road authority to develop the appropriate road maintenance strategies. Questionnaire surveys and interviews are used to collect information from the city's road maintenance departments. Second, the project analysis was performed for functional and economic comparison of different maintenance alternatives using HDM-4.Keywords: appropriate maintenance strategy, cost stream, road deterioration, maintenance alternative
Procedia PDF Downloads 61
4652 Strengthening by Assessment: A Case Study of Rail Bridges
Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas
Abstract:
The United Kingdom has one of the oldest railway networks in the world dating back to 1825 when the world’s first passenger railway was opened. The network has some 40,000 bridges of various construction types using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom, consequently the cost effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration and to extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years and a structured approach to assessments is adopted with three main types of progressively more detailed assessments used. These assessment types include Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations) and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment dictated to some extent by the relevant standards which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover ‘latent strength’ and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the ‘strengthening by assessment’ achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric and support) to better understand buckling modes and the structural behaviour of historic construction details that are not specifically covered by assessment codes are outlined. Metallic bridges which are susceptible to loss of section size through corrosion have largest scope for improvement by the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 Assessment methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating by means of a Level 1 Assessment due to the inadequate restraint provided by U-Frame action are examined and the increase in assessed capacity given by the Level 2 Assessment is outlined.Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening
Procedia PDF Downloads 309