Search results for: optimal transport
1375 Risk Assessment of Trace Element Pollution in Gymea Bay, NSW, Australia
Authors: Yasir M. Alyazichi, Brian G. Jones, Errol McLean, Hamd N. Altalyan, Ali K. M. Al-Nasrawi
Abstract:
The main purpose of this study is to assess the sediment quality and potential ecological risk in marine sediments in Gymea Bay, located in south Sydney, Australia. A total of 32 surface sediment samples were collected from the bay. Current track trajectories and velocities have also been measured in the bay. The resulting trace element concentrations were compared with the adverse biological effect values of the Effect Range Low (ERL) and Effect Range Median (ERM) classifications. The results indicate that the average values of chromium, arsenic, copper, zinc, and lead in surface sediments all reveal low pollution levels and are below ERL and ERM values. The highest concentrations of trace elements were found close to discharge points and in the inner bay, and were linked with high percentages of clay minerals, pyrite and organic matter, which can play a significant role in trapping and accumulating these elements. The lowest concentrations of trace elements were found on the shoreline of the bay, which contained high percentages of sand fractions. It is postulated that the fine particles and trace elements are disturbed by currents and tides, then transported and deposited in deeper areas. The current track velocities recorded in Gymea Bay were capable of transporting fine particles and trace element pollution within the bay. As a result, hydrodynamic measurements were able to provide useful information and to help explain the distribution of sedimentary particles and geochemical properties. This may lead to knowledge transfer to other bay systems, including those in remote areas. These activities can be conducted at a low cost and are therefore also transferable to developing countries. The advent of portable instruments to measure trace elements in the field has also contributed to the development of these lower-cost and easily applied methodologies available for use in remote locations and low-cost economies.
Keywords: current track velocities, Gymea Bay, surface sediments, trace elements
Procedia PDF Downloads 245
1374 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements
Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating
Abstract:
Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other hand, varying areas for storage, machines or working areas are temporarily required. This is an important factor when arranging products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. With this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process, and a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products, which complies with the dynamic area requirements, will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects assuming dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for the possibility of adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas, considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, some test series results are presented, showing the possible increase in area utilization.
Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly
Procedia PDF Downloads 233
1373 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means to perform transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4×4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4×4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials, and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR) in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients have later been encoded and processed to generate chirps at a target rate of about two chirps per 4×4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
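As a rough illustration of the block-approximation idea, the sketch below truncates the SVD of a single 4×4 block; note that for a 4×4 matrix only a rank-1 truncation (4+4+1 = 9 numbers) actually stores fewer coefficients than the 16 raw pixels. This is a minimal stand-in written for this listing, not the authors' implementation, and the block values are synthetic.

```python
import numpy as np

def low_rank_block(block: np.ndarray, k: int = 1):
    """Approximate a pixel block by keeping the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    n_coeffs = k * (block.shape[0] + block.shape[1] + 1)  # numbers stored per block
    return approx, n_coeffs

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (4, 4)).astype(float)  # synthetic 4x4 gray-level block
approx, n_coeffs = low_rank_block(block, k=1)

mse = np.mean((block - approx) ** 2)
psnr = 10 * np.log10(255.0**2 / mse) if mse > 0 else float("inf")
print(f"{n_coeffs} coefficients instead of 16, PSNR = {psnr:.1f} dB")
```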
Procedia PDF Downloads 412
1372 A BIM-Based Approach to Assess COVID-19 Risk Management Regarding Indoor Air Ventilation and Pedestrian Dynamics
Authors: T. Delval, C. Sauvage, Q. Jullien, R. Viano, T. Diallo, B. Collignan, G. Picinbono
Abstract:
In the context of the international spread of COVID-19, the Centre Scientifique et Technique du Bâtiment (CSTB) has led joint research with the French local government authority of the Hauts-de-Seine department to analyse the risk in school spaces according to their configuration, ventilation system and spatial segmentation strategy. This paper describes the main results of this joint research. A multidisciplinary team involving experts in the indoor air quality/ventilation, pedestrian movement and IT domains was established to develop a COVID risk analysis tool based on a Building Information Model. The work started with a specific analysis of two pilot schools in order to provide the local administration with specifications to minimize the spread of the virus. Different recommendations were published to optimize/validate the use of ventilation systems and the strategy of student occupancy and student flow segmentation within the building. This COVID expertise has been digitized in order to enable a quick risk analysis of the entire building that can be used by the public administration through an easy user interface implemented in free BIM management software. One of the most interesting results is the ability to dynamically compare different ventilation system scenarios and space occupation strategies inside the BIM model. This concurrent engineering approach provides users with the optimal solution according to both ventilation and pedestrian flow expertise.
Keywords: BIM, knowledge management, expert system, risk management, indoor ventilation, pedestrian movement, integrated design
Procedia PDF Downloads 108
1371 Adsorption of Chlorinated Pesticides in Drinking Water by Carbon Nanotubes
Authors: Hacer Sule Gonul, Vedat Uyak
Abstract:
Intensive use of pesticides in agricultural activity causes these compounds to mix into water sources via surface flow. Especially after the 1970s, a number of limitations were imposed on the use of chlorinated pesticides, which have a carcinogenic risk potential, and regulatory limits were established. The discharge of these chlorinated pesticides into water resources, their transport through the water and land environment, and their accumulation in the human body through the food chain raise serious health concerns. Carbon nanotubes (CNTs) have attracted considerable attention because of their excellent mechanical, electrical, and environmental characteristics. Due to their highly hydrophobic surfaces, CNT particles play a critical role in the removal of water contaminants such as natural organic matter, pesticides and phenolic compounds from water sources. The health concerns associated with chlorinated pesticides require the removal of such contaminants from the aquatic environment. Although the use of aldrin and atrazine has been restricted in our country, the illegal entry and widespread use of such chemicals in agricultural areas increase the concentrations of these chemicals in the water supply. In this study, the chlorinated pesticides aldrin and atrazine were removed from drinking water with a carbon nanotube adsorption method. Within this study, two different types of CNT were used: single-wall (SWCNT) and multi-wall (MWCNT) carbon nanotubes. Within the scope of the work, adsorption isotherms were determined, and the parameters affecting the adsorption of chlorinated pesticides in water were considered: pH, contact time, CNT type, CNT dose and initial pesticide concentration. As a result, under neutral pH conditions, the adsorption capacities obtained with MWCNT for atrazine and aldrin were determined as 2.24 µg/mg and 3.84 µg/mg, respectively. On the other hand, the adsorption capacities determined with SWCNT for aldrin and atrazine were 3.91 µg/mg and 3.92 µg/mg, respectively. Overall, SWCNT particles provided superior removal performance for each pesticide type.
Keywords: pesticide, drinking water, carbon nanotube, adsorption
Procedia PDF Downloads 171
1370 A New Multi-Target, Multi-Agent Search and Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization
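To make the flavour of such a formulation concrete, here is a deliberately tiny single-agent sketch on a 3×3 grid, solved with the open-source PuLP/CBC toolchain rather than CPLEX; the grid size, horizon, detection probabilities and start cell are all assumptions made for illustration, and the objective simply rewards each cell once rather than modelling the paper's full cumulative detection probability.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, PULP_CBC_CMD

W, T = 3, 4  # 3x3 grid, 4 time steps (hypothetical sizes)
cells = [(r, c) for r in range(W) for c in range(W)]
p = {cell: 0.05 + 0.1 * (cell[0] + cell[1]) for cell in cells}  # assumed detection rewards

def neighbors(cell):
    r, c = cell  # staying put or moving to a 4-connected neighbour
    return [(r + dr, c + dc) for dr, dc in [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= r + dr < W and 0 <= c + dc < W]

prob = LpProblem("sar_path", LpMaximize)
x = LpVariable.dicts("x", [(t, cell) for t in range(T) for cell in cells], cat="Binary")
y = LpVariable.dicts("y", cells, cat="Binary")  # 1 if the cell is visited at least once

prob += lpSum(p[cell] * y[cell] for cell in cells)      # reward counted once per cell
for t in range(T):
    prob += lpSum(x[(t, cell)] for cell in cells) == 1  # exactly one position per step
prob += x[(0, (0, 0))] == 1                             # fixed start cell
for t in range(T - 1):
    for cell in cells:                                  # movement restricted to neighbours
        prob += x[(t + 1, cell)] <= lpSum(x[(t, n)] for n in neighbors(cell))
for cell in cells:
    prob += y[cell] <= lpSum(x[(t, cell)] for t in range(T))

prob.solve(PULP_CBC_CMD(msg=0))
path = [cell for t in range(T) for cell in cells if x[(t, cell)].value() > 0.5]
print(path)
```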
Procedia PDF Downloads 371
1369 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict glass dissolution model parameter behavior, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
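A minimal sketch of the kind of ensemble regression the abstract mentions, using scikit-learn's RandomForestRegressor on synthetic data; the feature layout (oxide fractions plus temperature) and the toy Arrhenius-like response are assumptions for illustration only, not the authors' dataset or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n = 500
comp = rng.dirichlet(np.ones(6), size=n)        # synthetic oxide mass fractions
temp = rng.uniform(900.0, 1300.0, (n, 1))       # melt temperature (assumed range)
X = np.hstack([comp, temp])

# Toy Arrhenius-like response standing in for measured log-viscosity
y = 2000.0 / temp[:, 0] + 3.0 * comp[:, 0] - 2.0 * comp[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```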
Procedia PDF Downloads 116
1368 Audit of Post-Caesarean Section Analgesia
Authors: Rachel Ashwell, Sally Millett
Abstract:
Introduction: Adequate post-operative pain relief is a key priority in the delivery of caesarean sections. This improves patient experience, reduces morbidity and enables optimal mother-infant interaction. Recommendations outlined in the NICE guidelines for caesarean section (CS) include offering peri-operative intrathecal/epidural diamorphine and post-operative opioid analgesics; offering non-steroidal anti-inflammatory drugs (NSAIDs) unless contraindicated and taking hourly observations for 12 hours following intrathecal diamorphine. Method: This audit assessed the provision of post-CS analgesia in 29 women over a two-week period. Indicators used were the use of intrathecal/epidural opioids, use of post-operative opioids and NSAIDs, frequency of observations and patient satisfaction with pain management on post-operative days 1 and 2. Results: All women received intrathecal/epidural diamorphine, 97% were prescribed post-operative opioids and all were prescribed NSAIDs unless contraindicated. Hourly observations were not maintained for 12 hours following intrathecal diamorphine. 97% of women were satisfied with their pain management on post-operative day 1 whereas only 75% were satisfied on day 2. Discussion: This service meets the proposed standards for the provision of post-operative analgesia, achieving high levels of patient satisfaction 1 day after CS. However, patient satisfaction levels are significantly lower on post-operative day 2, which may be due to reduced frequency of observations. The lack of an official audit standard for patient satisfaction on postoperative day 2 may result in reduced incentive to prioritise pain management at this stage.
Keywords: Caesarean section, analgesia, postoperative care, patient satisfaction
Procedia PDF Downloads 387
1367 Advanced Particle Characterisation of Suspended Sediment in the Danube River Using Automated Imaging and Laser Diffraction
Authors: Flóra Pomázi, Sándor Baranya, Zoltán Szalai
Abstract:
Harmonized monitoring of suspended sediment transport along a river as large as the Danube, the world's most international river, is a rather challenging task. The traditional monitoring method in Hungary is obsolete, but indirect measurement devices and techniques such as optical backscatter sensors (OBS), laser diffraction or acoustic backscatter sensors (ABS) could provide a fast and efficient alternative to direct methods. However, these methods are strongly sensitive to the particle characteristics (i.e., particle shape, particle size and mineral composition). The current method does not provide sufficient information about particle size distribution, mineral analysis is rarely done, and the shape of the suspended sediment particles has not been examined yet. The aims of the study are (1) to determine the particle characteristics of suspended sediment in the Danube River using advanced particle characterisation methods, namely laser diffraction and automated imaging, and (2) to perform a sensitivity analysis of the indirect methods in order to determine the impact of suspended particle characteristics. The particle size distribution is determined by laser diffraction. The particle shape and mineral composition analysis is done by the Morphologi G3ID image analyser. The investigated indirect measurement devices are the LISST-Portable|XR, the LISST-ABS (Sequoia Inc.) and the Rio Grande 1200 kHz ADCP (Teledyne Marine). The major findings of this study are (1) the statistical shape of the suspended sediment particles, the first research in this context; (2) an updated particle size distribution that can be compared to historical information so that morphological changes can be tracked; (3) the actual mineral composition of the suspended sediment in the Danube River; and (4) the increased reliability of the tested indirect methods, based on the results of the sensitivity analysis and the previous findings.
Keywords: advanced particle characterisation, automated imaging, indirect methods, laser diffraction, mineral composition, suspended sediment
Procedia PDF Downloads 146
1366 Does Pakistan Stock Exchange Offer Diversification Benefits to Regional and International Investors: A Time-Frequency (Wavelets) Analysis
Authors: Syed Jawad Hussain Shahzad, Muhammad Zakaria, Mobeen Ur Rehman, Saniya Khaild
Abstract:
This study examines the co-movement between the Pakistan, Indian, S&P 500 and Nikkei 225 stock markets using weekly data from 1998 to 2013. The time-frequency relationship between the selected stock markets is analyzed using measures of the continuous wavelet power spectrum, the cross-wavelet transform and cross (squared) wavelet coherency. The empirical evidence suggests strong dependence between the Pakistani and Indian stock markets. The co-movement of the Pakistani index with the developed U.S. and Japanese markets varies over time and frequency, with the long-run relationship dominant. The results of the cross-wavelet and wavelet coherence analysis indicate moderate covariance and correlation between the stock indexes, and the markets are in phase (i.e., cyclical in nature) over varying durations. The Pakistani stock market lagged the Indian stock market during the entire period, corresponding to the 8-32 and then 64-256 week scales. Similar findings are evident for the S&P 500 and Nikkei 225 indexes; however, the relationship occurs during the later part of the study period. All three wavelet indicators suggest strong evidence of higher co-movement during the 2008-09 global financial crisis. The empirical analysis reveals strong evidence that portfolio diversification benefits vary across frequencies and time. This analysis is unique and has several practical implications for regional and international investors when assigning optimal weights to different assets in portfolio formulation.
Keywords: co-movement, Pakistan stock exchange, S&P 500, Nikkei 225, wavelet analysis
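For readers who want to reproduce the mechanics, the sketch below computes a cross-wavelet transform and a crudely smoothed squared coherence for two synthetic weekly return series using PyWavelets; the series, the complex Morlet wavelet choice ('cmor1.5-1.0'), the scale range and the smoothing window are all illustrative assumptions, not the paper's exact estimator (unsmoothed coherence is identically one, hence the smoothing step).

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)
n = 832                                   # ~16 years of weekly observations
common = rng.normal(0, 1, n)
x = common + 0.5 * rng.normal(0, 1, n)    # stand-in for Pakistani index returns
y = common + 0.5 * rng.normal(0, 1, n)    # stand-in for Indian index returns

scales = np.arange(2, 257)                # spans the 8-32 and 64-256 week bands
Wx, _ = pywt.cwt(x, scales, "cmor1.5-1.0")
Wy, _ = pywt.cwt(y, scales, "cmor1.5-1.0")

Wxy = Wx * np.conj(Wy)                    # cross-wavelet transform
phase = np.angle(Wxy)                     # lead/lag structure per scale and time

def smooth(a):                            # crude time/scale smoother
    return uniform_filter(a, size=(5, 13), mode="nearest")

coherence = (smooth(Wxy.real) ** 2 + smooth(Wxy.imag) ** 2) / (
    smooth(np.abs(Wx) ** 2) * smooth(np.abs(Wy) ** 2))
print(coherence.shape, float(coherence.mean()))
```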
Procedia PDF Downloads 358
1365 Introduction to Multi-Agent Deep Deterministic Policy Gradient
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents
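MADDPG itself needs actor and critic networks per agent; as a far simpler stand-in that still shows the reinforcement-learning loop (state, action, reward, value update) behind such schedulers, here is a tabular single-agent Q-learning toy that assigns jobs to resources to balance load. The state encoding, reward, job costs and hyperparameters are all assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n_resources, n_episodes = 4, 2000
loads = np.zeros(n_resources)

# State: index of the currently most-loaded resource; action: resource for the next job.
Q = np.zeros((n_resources, n_resources))
alpha, gamma, eps = 0.1, 0.9, 0.1  # assumed learning rate, discount, exploration

for _ in range(n_episodes):
    loads[:] = 0.0
    for _job in range(20):
        s = int(np.argmax(loads))
        a = int(rng.integers(n_resources)) if rng.random() < eps else int(np.argmax(Q[s]))
        loads[a] += rng.uniform(0.5, 1.5)     # assumed job cost
        reward = -loads.std()                 # flatter load profile => higher reward
        s2 = int(np.argmax(loads))
        Q[s, a] += alpha * (reward + gamma * Q[s2].max() - Q[s, a])

print(np.round(Q, 2))  # learned preference for spreading jobs across resources
```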
Procedia PDF Downloads 24
1364 Task Scheduling and Resource Allocation in Cloud Based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Scheduling of tasks and the optimal allocation of resources in the cloud are based on the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used in this field and are characterized by high processing power and storage requirements. In order to increase their efficiency, it is necessary to plan the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of work planning and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served method. Resources are prioritized using the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are considered; these are the best choice for the mentioned algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, return time and execution time of input tasks in the proposed method compared to similar (basic) methods.
Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow
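A minimal sketch of the AHP machinery the abstract relies on: a pairwise comparison matrix is reduced to criterion weights, raw virtual machine measurements are rescaled with linear max-min normalization, and a weighted sum ranks the machines. The comparison judgments and VM figures below are invented for illustration, not taken from the paper.

```python
import numpy as np

# Pairwise comparison of three criteria: memory size, CPU speed, bandwidth (assumed judgments)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Classic AHP priority vector: normalize each column, then average across rows
weights = (A / A.sum(axis=0)).mean(axis=1)

# Linear max-min normalization of raw resource measurements before weighting
resources = np.array([[16, 2.4, 100],   # [memory GB, GHz, Mbps] per VM (assumed values)
                      [32, 3.0, 250],
                      [8,  3.6, 500]], dtype=float)
norm = (resources - resources.min(axis=0)) / (resources.max(axis=0) - resources.min(axis=0))

scores = norm @ weights                  # composite score used to rank the VMs
print(weights.round(3), scores.round(3))
```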
Procedia PDF Downloads 145
1363 Sustainability of Photovoltaic Recycling Planning
Authors: Jun-Ki Choi
Abstract:
The usage of valuable resources and the potential for waste generation at the end of the life cycle of photovoltaic (PV) technologies necessitate proactive planning for a PV recycling infrastructure. To ensure the sustainability of PV at large scales of deployment, it is vital to develop and institute low-cost recycling technologies and infrastructure for the emerging PV industry in parallel with the rapid commercialization of these new technologies. There are various issues involved in the economics of PV recycling, and this research examines them at the macro and micro levels, developing a holistic interpretation of the economic viability of PV recycling systems. This study developed mathematical models to analyze the profitability of recycling technologies and to guide tactical decisions for allocating the optimal locations of PV take-back centers (PVTBC), necessary for the collection of end-of-life products. The economic decision is usually based on the marginal capital cost of each PVTBC, the cost of reverse logistics, the distance traveled, and the amount of PV waste collected from various locations. Results illustrated that reverse logistics costs comprise a major portion of the cost of a PVTBC; PV recycling centers can be constructed in optimally selected locations to minimize the total reverse logistics cost for transporting PV waste from the various collection facilities to the recycling center. At the micro process level, automated recycling processes should be developed to handle the large amount of growing PV waste economically. The market prices of the reclaimed materials are important factors in deciding the profitability of the recycling process, and this illustrates the importance of recovering the glass and expensive metals from PV modules.
Keywords: photovoltaic, recycling, mathematical models, sustainability
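As a toy version of the siting decision described above, the sketch below enumerates candidate PVTBC locations and picks the one minimizing the total reverse-logistics cost (waste tonnage × distance × unit cost); the coordinates, tonnages and unit cost are invented, and a real model would add marginal capital costs and allow multiple centers.

```python
import numpy as np

# Collection facilities: (x, y) coordinates and annual PV waste in tonnes (assumed data)
sites = np.array([[0, 0], [10, 2], [4, 8], [7, 7], [2, 5]], dtype=float)
waste = np.array([120, 80, 200, 150, 60], dtype=float)
cost_per_tkm = 0.4  # assumed reverse-logistics cost per tonne-km

candidates = sites  # restrict candidate PVTBC locations to the existing facilities
dists = np.linalg.norm(sites[:, None, :] - candidates[None, :, :], axis=2)
logistics_cost = cost_per_tkm * (waste[:, None] * dists).sum(axis=0)

best = int(np.argmin(logistics_cost))
print(candidates[best], round(float(logistics_cost[best]), 1))
```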
Procedia PDF Downloads 255
1362 Sustainable Membranes Based on 2D Materials for H₂ Separation and Purification
Authors: Juan A. G. Carrio, Prasad Talluri, Sergio G. Echeverrigaray, Antonio H. Castro Neto
Abstract:
Hydrogen, as a fuel and environmentally friendly energy carrier, is part of the transition towards low-carbon systems. The extensive deployment of hydrogen production, purification and transport infrastructures still represents significant challenges. Independent of the production process, hydrogen is generally mixed with light hydrocarbons and other undesirable gases that need to be removed to obtain H₂ with the purity required for end applications. In this context, membranes are one of the simplest, most attractive, sustainable, and performant technologies enabling hydrogen separation and purification. They demonstrate high separation efficiencies and low energy consumption in operation, which is a significant leap compared to current energy-intensive options. The unique characteristics of 2D laminates have given rise to a diversity of research on their potential applications in separation systems. Specifically, it is already known in the scientific literature that graphene oxide-based membranes present the highest reported selectivity of H₂ over other gases. This work explores the potential of a new type of 2D materials-based membrane in separating H₂ from CO₂ and CH₄. We have developed nanostructured composites based on 2D materials that have been applied in the fabrication of membranes to maximise H₂ selectivity and permeability for different gas mixtures by adjusting the membranes' characteristics. Our proprietary technology does not depend on specific porous substrates, which allows its integration in diverse separation modules with different geometries and configurations, looking to address the technical performance required for industrial applications and economic viability. The tuning and precise control of the processing parameters allowed us to control the thicknesses of the membranes below 100 nanometres to provide high permeabilities. Our results for the selectivity of the new nanostructured 2D materials-based membranes are in the range of the performance reported in the available literature on 2D materials (such as graphene oxide) applied to hydrogen purification, which validates their use as one of the most promising next-generation hydrogen separation and purification solutions.
Keywords: membranes, 2D materials, hydrogen purification, nanocomposites
Procedia PDF Downloads 134
1361 Study on the Process of Detumbling Space Target by Laser
Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming
Abstract:
The active removal of space debris and asteroid defense are important issues in human space activities. Both need a detumbling process, for almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, as it is a contactless technique and can work at a safe distance. In existing research, a laser rotational control strategy based on the estimation of the instantaneous angular velocity of the target has been presented. However, the calculation of the control torque produced by the laser, which is very important in the detumbling operation, is not accurate enough, because the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shape and the size of the spot. In this paper, based on a triangulation reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation by the laser, and we verify its accuracy by theoretical formula calculation and an impulse measurement experiment. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1×10⁵ kJ laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in every particular case.
Keywords: detumbling, laser ablation drive, space target, space debris removal
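To illustrate the facet-summation idea on a triangulated surface, the sketch below accumulates the ablation recoil impulse and the resulting angular impulse about the centre of mass for a few toy facets; the momentum coupling coefficient, fluence, geometry and the simple cosine-projection shadowing rule are all assumed values for illustration, not the paper's validated model.

```python
import numpy as np

# Toy triangulated surface: per-facet centroid (m), outward unit normal, and area (m^2)
centroids = np.array([[0.50, 0.00, 0.20],
                      [0.40, 0.10, -0.10],
                      [0.60, -0.20, 0.00]])
normals = np.array([[1.0, 0.0, 0.0],
                    [0.7, 0.7, 0.0],
                    [0.9, -0.3, 0.3]])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
areas = np.array([1.0e-3, 0.8e-3, 1.2e-3])

beam_dir = np.array([-1.0, 0.0, 0.0])  # laser propagation direction
fluence = 5.0e3                        # pulse fluence at the target, J/m^2 (assumed)
cm_coupling = 5.0e-5                   # momentum coupling coefficient, N*s/J (assumed)
com = np.zeros(3)                      # centre of mass of the target

# Facets facing away from the beam are shadowed and contribute nothing
cos_inc = np.clip(normals @ (-beam_dir), 0.0, None)
impulse_mag = cm_coupling * fluence * areas * cos_inc  # |impulse| per facet, N*s
forces = -impulse_mag[:, None] * normals               # recoil pushes into the surface

angular_impulse = np.cross(centroids - com, forces).sum(axis=0)
print(angular_impulse)  # change in angular momentum per pulse, N*m*s
```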
Procedia PDF Downloads 85
1360 The Effect of Micro/Nano Structure of Poly (ε-caprolactone) (PCL) Film Using a Two-Step Process (Casting/Plasma) on Cellular Responses
Authors: JaeYoon Lee, Gi-Hoon Yang, JongHan Ha, MyungGu Yeo, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim
Abstract:
One of the important factors in tissue engineering is the design of optimal biomedical scaffolds, which can be governed by topographical surface characteristics such as size, shape, and direction. Of these properties, we focused on the effects of nano- to micro-sized hierarchical surfaces. To fabricate a hierarchical surface structure on poly(ε-caprolactone) (PCL) film, we employed a micro-casting technique, pressing the mold, and a nano-etching technique using a modified plasma process. The micro-sized topography of the PCL film was controlled by the sizes of the microstructures on a lotus leaf. The nano-sized topography and hydrophilicity of the PCL film were controlled by the modified plasma process. After the plasma treatment, the hydrophobic property of the PCL film changed significantly into a hydrophilic one, and the nano-sized structure was well developed. The surface properties of the modified PCL film were investigated in terms of initial cell morphology, attachment, and proliferation using osteoblast-like cells (MG63). In particular, initial cell attachment, proliferation and osteogenic differentiation on the hierarchical structure were enhanced dramatically compared to those on the smooth surface. We believe that these results reflect a synergistic effect between the hierarchical structure and the reactive functional groups introduced by the plasma process. Based on the results presented here, we propose a new biomimetic surface model that may be useful for effectively regenerating hard tissues.
Keywords: hierarchical surface, lotus leaf, nano-etching, plasma treatment
Procedia PDF Downloads 377
1359 Designing Web Application to Simulate Agricultural Management for Smart Farmer: Land Development Department’s Integrated Management Farm
Authors: Panasbodee Thachaopas, Duangdorm Gamnerdsap, Waraporn Inthip, Arissara Pungpa
Abstract:
LDD’s IM Farm, or the Land Development Department’s Integrated Management Farm, is an agricultural simulation application developed by the Land Development Department that relies on actual data in a simulation game to grow 12 cash crops: rice, corn, cassava, sugarcane, soybean, rubber tree, oil palm, pineapple, longan, rambutan, durian, and mangosteen. After launching the simulation game, players can select preferred areas for cropping from a base map or an orthophoto map at a scale of 1:4,000. Farm management is simulated from field preparation to harvesting. The system uses soil group and present land use databases to let players know which crops are suitable to grow in each soil group, and it integrates LDD’s data with data from other agencies: soil types, soil properties, soil problems, climate, cultivation costs, fertilizer use, fertilizer prices, socio-economic data, plant diseases, weeds, pests, the interest rate for loans from the Bank for Agriculture and Agricultural Cooperatives (BAAC), labor costs, and market prices. These data affect the cost and yield of each crop differently. After completion, the player knows the yield, income and expenses, and profit/loss, and can change to other crops that are more suitable to the soil groups for optimal yields and profits.
Keywords: agricultural simulation, smart farmer, web application, factors of agricultural production
Procedia PDF Downloads 198
1358 Urban Corridor Management Strategy Based on Intelligent Transportation System
Authors: Sourabh Jain, Sukhvir Singh Jain, Gaurav V. Jain
Abstract:
Intelligent Transportation System (ITS) is the application of technology for developing a user-friendly transportation system for urban areas in developing countries. The goal of urban corridor management using ITS in road transport is to achieve improvements in mobility, safety, and the productivity of the transportation system within the available facilities through the integrated application of advanced monitoring, communications, computer, display, and control process technologies, both in the vehicle and on the road. This paper reviews past studies of several ITS deployments in urban corridors in India and abroad, and describes the current scenario and the methodology considered for the planning, design, and operation of traffic management systems. It also presents the effort made to interpret and evaluate the performance of the 27.4 km long study corridor, which has eight intersections and four flyovers and consists of a divided road network of six and eight lanes. Two categories of data were collected in February 2016: traffic data (traffic volume, spot speed, delay) and road characteristics data (number of lanes, lane width, bus stops, mid-block sections, intersections, flyovers). The instruments used for collecting the data were a video camera, a radar gun, mobile GPS, and a stopwatch. The performance interpretations from the analysis included identification of peak and off-peak hours, congestion and level of service (LOS) at mid-block sections, and delay, followed by plotting speed contours and recommending urban corridor management strategies. The analysis indicates that ITS-based urban corridor management strategies will be useful to reduce congestion, fuel consumption and pollution, so as to provide comfort and efficiency to users. The paper presents urban corridor management strategies based on sensors incorporated both in vehicles and on the roads.
Keywords: congestion, ITS strategies, mobility, safety
Procedia PDF Downloads 443
1357 Hemispheric Locus and Gender Predict the Delay between the Moment of Stroke and Hospitalization
Authors: D. Anderlini, G. Wallis
Abstract:
Background: The number of people experiencing stroke is steadily increasing due to changes in diet and lifestyle, longer life expectancy resulting in an older population, and higher survival rates as a consequence of improvements in acute-phase care. This study considers what risk factors might contribute to delayed entry to hospital for treatment. Methods: We analyzed data from 2472 patients admitted to the Stroke Unit of the Royal Brisbane Women's Hospital, Australia, between 2002 and 2011. Results: Previous studies have reported that factors which can contribute to delay include the patient's age, the time of day, physical location, visiting the GP instead of going to the emergency department, means of transport, severity of symptoms and type of stroke. Contrary to the findings of other studies, we found a strong correlation between side of lesion and delay in admission: patients with right hemisphere lesions had an average delay of 3.78 days, while patients with left hemisphere lesions had an average delay of 1.49 days. Damage to the right hemisphere generally results in motor impairment of the non-dominant hand and no speech impediment. In contrast, left hemisphere lesions can result in deficits to dominant hand function and in aphasia, which will be noticed even if their impact on performance is relatively minor. A finding which goes against many previous studies is that women get to the hospital much sooner than men, with an average delay of 0.92 days in women vs. 3.36 days in men. Conclusion: Acute surgical-pharmacological therapies are most effective if applied immediately after stroke; hence delays to admission can be crucial to the degree of recovery. The tendency of patients to overlook symptoms of right hemisphere lesions should be the target of information campaigns both for the general public and for GPs. Why do men go to hospital so late? We don't know yet! Nevertheless, an awareness plan specifically directed at the male population should be on the agenda of health departments.
Keywords: gender, admission delay, stroke location, bioinformatics, biomedicine
Procedia PDF Downloads 230
1356 hsa-miR-1204 and hsa-miR-639 Prominent Role in Tamoxifen's Molecular Mechanisms on the EMT Phenomenon in Breast Cancer Patients
Authors: Mahsa Taghavi
Abstract:
In the treatment of breast cancer, tamoxifen is a regularly prescribed medication. We studied the effect of tamoxifen on the EMT pathways of breast cancer patients to see whether it had any effect on the cancer cells' resistance to tamoxifen and to look for specific miRNAs associated with EMT. In this work, we used continuous and integrated bioinformatics analysis to choose the optimal GEO datasets. Once we had sorted the gene expression profiles, we looked at the signaling mechanisms, gene ontology, and protein interactions of each gene. Finally, we used the GEPIA database to confirm the candidate genes and then investigated critical miRNAs related to them. The two gene expression profiles were categorized into two distinct groups: the first group was examined using the expression profile of genes that were downregulated in the EMT pathway, and the second group represented the polar opposite of the first. A total of 253 genes from the first group and 302 genes from the second group were found to be common. Several genes in the first category were linked to cell death, focal adhesion, and cellular aging, while genes in the second group were linked to distinct cell cycle stages. Finally, proteins such as MYLK, SOCS3, and STAT5B from the first group and BIRC5, PLK1, and RAPGAP1 from the second group were selected as potential candidates linked to tamoxifen's influence on the EMT pathway. hsa-miR-1204 and hsa-miR-639 have a very close relationship with the candidate genes according to their node degrees and betweenness index. With this, the action of tamoxifen on the EMT pathway is better understood. It is important to learn more about how tamoxifen's target genes and proteins work so that we can better understand the drug.
Keywords: tamoxifen, breast cancer, bioinformatics analysis, EMT, miRNAs
Procedia PDF Downloads 129
1355 Analyzing Impacts of Road Network on Vegetation Using Geographic Information System and Remote Sensing Techniques
Authors: Elizabeth Malebogo Mosepele
Abstract:
Road transport has become increasingly common in the world; people rely on road networks for transportation purposes on a daily basis. However, the environmental impact of roads on surrounding landscapes extends their potential effects even further. This study investigates the impact of the road network on natural vegetation. The study provides baseline knowledge regarding roadside vegetation and will be helpful in the future for the conservation of biodiversity along road verges and for their improvement. The general hypothesis of this study is that the amount and condition of roadside vegetation can be explained by road network conditions. Remote sensing techniques were used to analyze vegetation condition. A Landsat 8 OLI image was used to assess vegetation cover condition. An NDVI image was generated and used as a base from which land cover classes were extracted, comprising four categories, viz. healthy vegetation, degraded vegetation, bare surface, and water. The classification of the image was achieved using the supervised classification technique. Road networks were digitized from Google Earth. For observed data, transect-based quadrats of 50×50 m were surveyed next to road segments for vegetation assessment. Vegetation condition was related to the road network, with a multinomial logistic regression confirming a significant relationship between vegetation condition and the road network. The null hypothesis formulated was that 'there is no variation in vegetation condition as we move away from the road.' Analysis of vegetation condition revealed degraded vegetation in close proximity to road segments and healthy vegetation as the distance away from the road increases. The chi-squared value was compared with the critical value of 3.84 at the 0.05 significance level to determine the significance of the relationship. Given that the chi-squared value was 395.5004, the null hypothesis was rejected; there is significant variation in vegetation condition as the distance away from the road increases. The conclusion is that the road network plays an important role in the condition of vegetation.
Keywords: chi-squared, geographic information system, multinomial logistic regression, remote sensing, roadside vegetation
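As a compact illustration of the two quantitative steps in this workflow, the sketch below computes NDVI from Landsat 8 OLI band 5 (NIR) and band 4 (red) reflectances and runs a chi-squared test of vegetation condition against distance from the road; the reflectance values and quadrat counts are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# NDVI from Landsat 8 OLI: band 5 = near infrared, band 4 = red (toy reflectances)
nir = np.array([[0.35, 0.30], [0.12, 0.28]])
red = np.array([[0.08, 0.10], [0.10, 0.09]])
ndvi = (nir - red) / (nir + red + 1e-9)

# Counts of quadrats by vegetation condition at two distance bands from the road (assumed)
table = np.array([[30, 10],   # degraded: near road, far from road
                  [12, 40]])  # healthy:  near road, far from road
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(ndvi.round(2), chi2 > 3.84, round(p, 4))  # compare with critical value at alpha=0.05, dof=1
```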
Procedia PDF Downloads 432
1354 Cryptocurrency as a Payment Method in the Tourism Industry: A Comparison of Volatility, Correlation and Portfolio Performance
Authors: Shu-Han Hsu, Jiho Yoon, Chwen Sheu
Abstract:
With the rapid growth of blockchain technology and cryptocurrency, various industries, including tourism, have added cryptocurrency as a payment method for their transactions. More and more tourism companies accept payments in digital currency for flights, hotel reservations, transportation, and more. For travellers and tourists, using cryptocurrency as a payment method has become a way to circumvent costs and prevent risks. Understanding volatility dynamics and interdependencies between standard currencies and cryptocurrencies is important for appropriate financial risk management, assisting policy-makers and investors in making more informed decisions. The purpose of this paper has been to understand and explain the risk spillover effects between six major cryptocurrencies and the top ten most traded standard currencies. The data are daily closing prices of cryptocurrencies and currency exchange rates from 7 August 2015 to 10 December 2019, giving 1,133 observations. The diagonal BEKK model was used to analyze the co-volatility spillover effects between cryptocurrency returns and exchange rate returns, which measure how shocks to returns in different assets affect each other's subsequent volatility. The empirical results show that there are co-volatility spillover effects between cryptocurrency returns and the GBP/USD, CNY/USD and MXN/USD exchange rate returns. Therefore, these currencies (British Pound, Chinese Yuan and Mexican Peso) and cryptocurrencies (Bitcoin, Ethereum, Ripple, Tether, Litecoin and Stellar) are suitable for constructing a financial portfolio from an optimal risk management perspective and also for dynamic hedging purposes.
Keywords: blockchain, co-volatility effects, cryptocurrencies, diagonal BEKK model, exchange rates, risk spillovers
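For reference, a standard statement of the diagonal BEKK recursion used in such co-volatility analyses (a textbook formulation, not reproduced from the paper) is:

H_t = C'C + A' ε_{t-1} ε'_{t-1} A + B' H_{t-1} B, with A = diag(a_1, ..., a_n) and B = diag(b_1, ..., b_n),

so that each pairwise element evolves as h_{ij,t} = ω_{ij} + a_i a_j ε_{i,t-1} ε_{j,t-1} + b_i b_j h_{ij,t-1}. The co-volatility spillover of the pair (i, j) is then ∂h_{ij,t} / ∂(ε_{i,t-1} ε_{j,t-1}) = a_i a_j, which is what a statistically significant spillover estimate between a cryptocurrency and an exchange rate refers to.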
Procedia PDF Downloads 143
1353 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score
Authors: Jianfeng Hu
Abstract:
Personal authentication based on electroencephalography (EEG) signals is an important field in biometric technology. More and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, namely sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distances from each data point in a cluster to every other point within the same cluster and to all data points in the closest cluster are determined. Silhouettes thus provide a measure of how well a data point was classified when it was assigned to a cluster and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance of each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there is no significant difference in authentication performance among feature sets (except the PE feature). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
Keywords: personal authentication, k-means clustering, electroencephalogram, EEG, silhouettes
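A minimal sketch of the clustering-quality step: entropy feature vectors (here synthetic stand-ins for SE/FE/AE/PE values) are clustered with k-means and scored with scikit-learn's silhouette_score across candidate cluster counts; the data and the range of k are assumptions for illustration, not the study's recordings.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(4)
# One row per EEG epoch, columns = four entropy features; three synthetic "subjects"
X = np.vstack([rng.normal(loc, 0.15, (40, 4)) for loc in (0.3, 0.6, 0.9)])

best_k, best_s = None, -1.0
for k in range(2, 7):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    s = silhouette_score(X, labels)  # mean silhouette over all points, in [-1, 1]
    if s > best_s:
        best_k, best_s = k, s
print(best_k, round(best_s, 3))
```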
Procedia PDF Downloads 285
1352 Optimization of Smart Beta Allocation by Momentum Exposure
Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires
Abstract:
Smart Beta strategies aim to be an asset management revolution with respect to classical cap-weighted indices. Indeed, these strategies allow better control of portfolio risk factors and an optimized asset allocation by taking into account specific risks or the wish to generate alpha by outperforming indices called 'Beta'. Among many strategies used independently, this paper focuses on four of them: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints like momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses inside time intervals corresponding to specific identifiable market phases, in order to define adapted strategies depending on pre-specified situations. Results are presented as performance curves from different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it will be preferred. It is mainly shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available specific market data, provide very interesting optimal results with higher return performance and lower risk. Such combinations have not been fully exploited yet and justify the present approach, aimed at identifying the relevant elements characterizing them.
Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations
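As one concrete anchor for the strategies compared above, the sketch below computes the closed-form unconstrained minimum variance weights w = Σ⁻¹1 / (1'Σ⁻¹1) from a sample covariance matrix and contrasts the resulting volatility with the equal-weighted portfolio; the return series are synthetic, and real implementations typically add long-only and turnover constraints.

```python
import numpy as np

rng = np.random.default_rng(5)
returns = rng.normal(0.0005, 0.01, (1000, 5))  # synthetic daily returns for 5 assets
cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])

w_minvar = np.linalg.solve(cov, ones)
w_minvar /= w_minvar.sum()           # closed-form unconstrained minimum-variance weights

w_equal = ones / ones.size           # equal-weighted benchmark
vol = lambda w: float(np.sqrt(w @ cov @ w))
print(w_minvar.round(3), round(vol(w_minvar), 5), round(vol(w_equal), 5))
```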
Procedia PDF Downloads 340
1351 Comparative Efficacy of Gas Phase Sanitizers for Inactivating Salmonella, Escherichia coli O157:H7 and Listeria monocytogenes on Intact Lettuce Heads
Authors: Kayla Murray, Andrew Green, Gopi Paliyath, Keith Warriner
Abstract:
Introduction: It is now acknowledged that control of human pathogens associated with fresh produce requires an integrated approach of several interventions, as opposed to relying on post-harvest washes to remove field-acquired contamination. To this end, current research is directed towards identifying such interventions that can be applied at different points in leafy green processing. Purpose: In the following, the efficacy of different gas phase treatments for decontaminating whole lettuce heads during pre-processing storage was evaluated. Methods: Whole Cos lettuce heads were spot-inoculated with L. monocytogenes, E. coli O157:H7 or Salmonella spp. The inoculated lettuce heads were then placed in a treatment chamber and exposed to ozone, chlorine dioxide or hydroxyl radicals for different time periods under a range of relative humidities. Survivors of the treatments were enumerated, and sensory analysis was performed on the treated lettuce. Results: Ozone gas reduced L. monocytogenes by 2 log10 after ten minutes of exposure, with Salmonella and E. coli O157:H7 decreased by 0.66 and 0.56 log cfu, respectively. Chlorine dioxide gas treatment reduced L. monocytogenes and Salmonella on lettuce heads by 4 log cfu but only supported a 0.8 log cfu reduction in E. coli O157:H7 numbers. In comparison, hydroxyl radicals supported a 2.9-4.8 log cfu reduction of model human pathogens inoculated onto lettuce heads, but required extended exposure times and relative humidity < 0.8. Significance: Of the gas phase sanitizers tested, chlorine dioxide and hydroxyl radicals are the most effective. The latter process holds the most promise based on ease of delivery, worker safety and preservation of lettuce sensory characteristics. Although the exposure time for hydroxyl radicals was relatively long (24 h), this should not be considered a limitation, given the intervention is applied in store rooms or in transport containers during transit.
Keywords: gas phase sanitizers, iceberg lettuce heads, leafy green processing
Procedia PDF Downloads 408
1350 Long-Term Modal Changes in International Traffic - Modelling Exercise
Authors: Tomasz Komornicki
Abstract:
The primary aim of the presentation is to model border traffic and, at the same time, to explain which economic variables the intensity of border traffic depended on in the long term. For this purpose, long series of traffic data on the Polish borders were used. Models were estimated for three variants of the explained variables: a) total arrivals and departures (total movement of Poles and foreigners), b) arrivals and departures of Poles, and c) arrivals and departures of foreigners. Each of the explained variables in the models appeared as the logarithm of the number of persons. Data from 1994-2017 were used for modelling (for internal Schengen borders, the years 1994-2007). Information on the number of people arriving in and leaving Poland was collected for a total of 303 border crossings. On the basis of the analyses carried out, it was found that the main factors determining border traffic are differences in the level of economic development (GDP), the condition of the economy (level of unemployment), and the degree of border permeability. Differences in the prices of goods (fuels, tobacco, and alcohol products) and services (mainly basic ones, e.g., hairdressing services) are also statistically significant for border traffic. Such a relationship exists mainly on the eastern border (border traffic determined largely by differences in the prices of goods) and on the border with Germany (in the first analysed period, border traffic was determined mainly by the prices of goods; later, after Poland's accession to the EU and the Schengen area, also by the prices of services). The models also confirmed differences in the set of factors shaping the volume and structure of border traffic on the Polish borders resulting from general geopolitical conditions, with the year 2007 being an important caesura, after which the classical population mobility factors became visible. The results obtained were additionally related to changes in traffic that occurred as a result of the COVID-19 pandemic and the Russian aggression against Ukraine.
Keywords: border, modal structure, transport, Ukraine
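To make the modelling setup concrete, long-term relationships of this kind are typically estimated in a log-linear form such as the following (an assumed illustrative specification, not the authors' exact equations):

ln T_{b,t} = β₀ + β₁ ln(GDP_{PL,t} / GDP_{N,t}) + β₂ ln(U_{PL,t} / U_{N,t}) + β₃ ln(P_{PL,t} / P_{N,t}) + β₄ Perm_{b,t} + ε_{b,t}

where T_{b,t} is the number of persons crossing border section b in year t, the ratios compare Polish (PL) and neighbouring-country (N) GDP, unemployment (U) and prices of goods and services (P), and Perm_{b,t} measures border permeability (e.g., crossings per 100 km of border).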
Procedia PDF Downloads 115
1349 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring
Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover
Abstract:
Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate and identify confined nuclear tests as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device to continuously measure the concentration of these fission products, the SPALAX process. During its atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. The cell is sandwiched between two low-background NaI(Tl) detectors (70×70×40 mm³ crystals). The expected Minimal Detectable Concentration (MDC) for each radioxenon is on the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We would like to present this system from its simulation to the laboratory tests.
Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels
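To show what beta/gamma coincidence filtering amounts to computationally, the sketch below pairs each beta timestamp from the silicon pixels with the nearest gamma timestamp from the NaI(Tl) detectors and keeps pairs inside a coincidence window; the event streams and the 500 ns window are synthetic assumptions, not the instrument's actual settings.

```python
import numpy as np

rng = np.random.default_rng(6)
# Timestamped events in ns over 1 s of acquisition (synthetic)
beta_t = np.sort(rng.uniform(0, 1e9, 5000))   # silicon pixel (beta) events
gamma_t = np.sort(rng.uniform(0, 1e9, 8000))  # NaI(Tl) (gamma) events

window_ns = 500  # assumed coincidence window
idx = np.searchsorted(gamma_t, beta_t)
left = np.abs(gamma_t[np.clip(idx - 1, 0, len(gamma_t) - 1)] - beta_t)
right = np.abs(gamma_t[np.clip(idx, 0, len(gamma_t) - 1)] - beta_t)
coincident = np.minimum(left, right) <= window_ns  # nearest gamma within the window
print(int(coincident.sum()), "beta/gamma coincidences")
```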
Procedia PDF Downloads 126
1348 IoT and Deep Learning approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards
Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia
Abstract:
Aquaponics offers a simple, conclusive solution to the world's food and environmental crises. This approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants without soil). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision-making and online monitoring and control of the system. Identifying the different growth stages of Swiss chard plants and predicting their harvest time are important in aquaponic yield management. This paper presents a comparative analysis of a standard aquaponic system and a vermiponic system (aquaponics with worms), grown in a controlled environment, by implementing IoT and deep-learning-based growth stage segregation and harvest time prediction of Swiss chards before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. The paper concludes with a comparison of results, including the best models for growth stage segregation and harvest time prediction in the aquaponic and vermiponic testbeds with and without freshwater replenishment.
Keywords: aquaponics, deep learning, internet of things, vermiponics
Procedia PDF Downloads 72
1347 A Constructivist Grounded Theory Study on the Impact of Automation on People and Gardening
Authors: Hamilton V. Niculescu
Abstract:
Following a three-year study conducted with eighteen Irish people involved in growing vegetables in various community gardens around Dublin, Republic of Ireland, it was revealed that the addition of automated features aimed at improving agricultural practices was regarded as potentially beneficial and as a great tool for closely monitoring climate conditions inside the greenhouses. The participants were provided with a free custom-built mobile app through which they could remotely monitor and control features such as irrigation, air ventilation, and windows to ensure optimal growing conditions for vegetables growing inside purpose-built greenhouses. While initial interest was generally high, within weeks the participants' level of interaction with the enclosures slowly declined. By employing a constructivist grounded theory methodology, following focus group discussions, in-depth semi-structured interviews, and observations, it was revealed that participants' trust in newer technologies, and in renewables in particular, was low. There are various reasons for this, but because the participants in this study were mainly working-class people, it can be argued that lack of education and knowledge are the main barriers acting against the adoption of innovations. Consequently, most participants eventually decided to "set and forget" the systems in automatic working mode, indicating that the immediate effect of introducing people to assisting technologies also introduced some unintended consequences into their lifestyle. It is argued that this occurrence also indicates that people initially "read" newer technologies and only adopt those features that they find useful and less intrusive with regard to their current lifestyle.
Keywords: automation, communication, greenhouse, sustainable
Procedia PDF Downloads 119
1346 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types at a given site for improved subsurface imaging results. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales; the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. Presently, high-performance, integrative software that enables real-time integration of multi-process geophysical methods is needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capabilities and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined datasets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for the successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, subsurface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157