Search results for: field data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30611

27821 Routing Protocol in Ship Dynamic Positioning Based on WSN Clustering Data Fusion System

Authors: Zhou Mo, Dennis Chow

Abstract:

In the dynamic positioning system (DPS) for vessels, reliable information transmission between nodes relies largely on wireless protocols. Starting from cluster-based routing protocols for wireless sensor networks, a data fusion technology based on a sleep scheduling mechanism and the remaining energy at the network layer is proposed. It applies the sleep scheduling mechanism to the routing protocol and considers the remaining energy and location information of each node when selecting cluster heads. The problem of an uneven distribution of nodes among clusters is solved by an equilibrium mechanism. At the same time, a classified forwarding mechanism and a redelivery policy are adopted to avoid congestion when transmitting large amounts of data, reduce the delay in data delivery and enhance real-time response. In this paper, a simulation test of the improved routing protocol is conducted; the results show that it reduces the energy consumption of the nodes and increases the efficiency of data delivery.
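A minimal sketch of the kind of cluster-head selection described above, weighting residual energy against distance to the base station and skipping nodes that the sleep schedule has put to sleep; the scoring formula, field names and network layout are illustrative assumptions, not the protocol's actual rules.

```python
import random, math

def select_cluster_heads(nodes, n_heads, base_station):
    """Score-based cluster-head selection: prefer nodes with more residual
    energy and a shorter distance to the base station; nodes flagged as
    sleeping by the scheduling mechanism are skipped."""
    def score(n):
        d = math.dist((n["x"], n["y"]), base_station)
        return n["energy"] / (1.0 + d)          # simple weighting, an assumption
    candidates = [n for n in nodes if not n["sleeping"]]
    return sorted(candidates, key=score, reverse=True)[:n_heads]

# Example: 20 random nodes, pick 4 cluster heads.
nodes = [{"id": i, "x": random.uniform(0, 100), "y": random.uniform(0, 100),
          "energy": random.uniform(0.2, 1.0), "sleeping": random.random() < 0.3}
         for i in range(20)]
heads = select_cluster_heads(nodes, n_heads=4, base_station=(50.0, 120.0))
print([h["id"] for h in heads])
```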

Keywords: DPS for vessel, wireless sensor network, data fusion, routing protocols

Procedia PDF Downloads 521
27820 Development of DNDC Modelling Method for Evaluation of Carbon Dioxide Emission from Arable Soils in European Russia

Authors: Olga Sukhoveeva

Abstract:

Carbon dioxide (CO2) is the main component of the carbon biogeochemical cycle and one of the most important greenhouse gases (GHG). Agriculture, particularly arable soils, is one of the largest sources of GHG emission to the atmosphere, including CO2. Models may be used to estimate GHG emission from agriculture if they can be adapted to the conditions of different countries. The only model officially used at the national level for this purpose, in the United Kingdom and China, is DNDC (DeNitrification-DeComposition). In our research, the DNDC model is proposed for estimating GHG emission from arable soils in Russia. The aim of our research was to create a method for applying DNDC to the evaluation of CO2 emission in Russia based on official statistical information. The target territory was the European part of Russia, where many field experiments are located. In the first step of the research, a database on climate, soil and cropping characteristics for the target region was created from governmental, statistical and literature sources. The All-Russia Research Institute of Hydrometeorological Information – World Data Centre provides open daily data on average meteorological and climatic conditions; spatial average values of maximum and minimum air temperature and precipitation must be calculated over the region. Spatial average values of soil characteristics (soil texture, bulk density, pH, soil organic carbon content) can be determined on the basis of the Unified State Register of Soil Resources of Russia. Cropping technologies are published by agricultural research institutes and departments. We propose to define cropping system parameters (annual information on crop yields, amounts and types of fertilizers and manure) on the basis of Federal State Statistics Service data. The carbon content of plant biomass may be calculated via formulas developed and published by the Ministry of Natural Resources and Environment of the Russian Federation. In the second step, CO2 emission from soil in this region was calculated with DNDC. The modelling results were compared with empirical and literature data, and good agreement was obtained: the modelled values were equivalent to the measured ones. It was revealed that the DNDC model may be used to evaluate and forecast CO2 emission from arable soils in Russia based on official statistical information. It can also be used to design programmes for decreasing GHG emission from arable soils to the atmosphere. Financial Support: fundamental scientific research theme 0148-2014-0005 No 01201352499 ‘Solution of fundamental problems of analysis and forecast of Earth climatic system condition’ for 2014-2020; fundamental research program of the Presidium of RAS No 51 ‘Climate change: causes, risks, consequences, problems of adaptation and regulation’ for 2018-2020.
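As an illustration of the first step, a short sketch of how station records could be reduced to the daily regional averages that a DNDC climate file needs; the table layout and column names are assumptions, not the actual WDC data format.

```python
import pandas as pd

# Hypothetical station records: one row per station per day with Tmax, Tmin (deg C)
# and precipitation (mm); values and column names are placeholders.
obs = pd.DataFrame({
    "date": pd.to_datetime(["2015-01-01", "2015-01-01", "2015-01-02", "2015-01-02"]),
    "station": ["A", "B", "A", "B"],
    "tmax": [-5.1, -4.3, -6.0, -5.5],
    "tmin": [-12.0, -11.2, -13.4, -12.8],
    "prec": [0.0, 0.4, 1.2, 0.9],
})

# Regional (spatial) averages per day, one row per simulated day.
regional = obs.groupby("date")[["tmax", "tmin", "prec"]].mean()
print(regional)
```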

Keywords: arable soils, carbon dioxide emission, DNDC model, European Russia

Procedia PDF Downloads 186
27819 Advanced Data Visualization Techniques for Effective Decision-making in Oil and Gas Exploration and Production

Authors: Deepak Singh, Rail Kuliev

Abstract:

This research article explores the significance of advanced data visualization techniques in enhancing decision-making processes within the oil and gas exploration and production domain. With the oil and gas industry facing numerous challenges, effective interpretation and analysis of vast and diverse datasets are crucial for optimizing exploration strategies, production operations, and risk assessment. The article highlights the importance of data visualization in managing big data, aiding the decision-making process, and facilitating communication with stakeholders. Various advanced data visualization techniques, including 3D visualization, augmented reality (AR), virtual reality (VR), interactive dashboards, and geospatial visualization, are discussed in detail, showcasing their applications and benefits in the oil and gas sector. The article presents case studies demonstrating the successful use of these techniques in optimizing well placement, real-time operations monitoring, and virtual reality training. Additionally, the article addresses the challenges of data integration and scalability, emphasizing the need for future developments in AI-driven visualization. In conclusion, this research emphasizes the immense potential of advanced data visualization in revolutionizing decision-making processes, fostering data-driven strategies, and promoting sustainable growth and improved operational efficiency within the oil and gas exploration and production industry.

Keywords: augmented reality (AR), virtual reality (VR), interactive dashboards, real-time operations monitoring

Procedia PDF Downloads 79
27818 Transport of Reactive Carbo-Iron Composite Particles for in situ Groundwater Remediation Investigated at Laboratory and Field Scale

Authors: Sascha E. Oswald, Jan Busch

Abstract:

The in-situ dechlorination of chlorinated-solvent contamination in groundwater via nanoscale zero-valent iron (nZVI) is potentially an efficient and prompt remediation method. A key requirement is that the nZVI has to be introduced into the subsurface in a way that brings substantial quantities of the contaminants into direct contact with the nZVI in the aquifer. It could thus be a more flexible and precise alternative to permeable reactive barrier techniques using granular iron. However, nZVI is often limited by fast agglomeration and sedimentation in colloidal suspensions, even more so in aquifer sediments, which is a handicap for treating source zones or contaminant plumes. Colloid-supported nZVI shows promising characteristics for overcoming these limitations, and Carbo-Iron Colloids is a newly developed composite material aiming to do so. The nZVI is built onto finely ground activated carbon of about a micrometer in diameter, which acts as a carrier for it. The Carbo-Iron Colloids are usually suspended with a polyanionic stabilizer, and carboxymethyl cellulose is one with good properties for that purpose. We have investigated the transport behavior of Carbo-Iron Colloids (CIC) on different scales and for different conditions to assess its mobility in aquifer sediments as a key property for making its application feasible. The transport properties were tested in one-dimensional laboratory columns, a two-dimensional model aquifer and an injection experiment in the field. Those experiments were accompanied by non-invasive tomographic investigations of the transport and filtration processes of CIC suspensions. The laboratory experiments showed that a larger part of the CIC can travel at least on the scale of meters under favorable but realistic conditions, partly even similar to a dissolved tracer. For less favorable conditions this distance can be much smaller, and in all cases a particular fraction of the injected CIC is retained, mainly shortly after entering the porous medium. For the field experiment, a horizontal flow field was established between two wells 5 meters apart in a confined, shallow aquifer at a contaminated site in the North German lowlands. First a tracer test was performed and a basic model was set up to define the design of the CIC injection experiment. Then CIC suspension was introduced into the aquifer at the injection well while the second well was pumped, and samples were taken there to observe the breakthrough of CIC, based on direct visual inspection and on total particle and iron concentrations of water samples analyzed later in the laboratory. It could be concluded that at least 12% of the injected CIC amount reached the extraction well in due course, some of it traveling distances larger than 10 meters in the non-uniform dipole flow field. This demonstrated that these CIC particles have substantial mobility for reaching larger volumes of a contaminated aquifer and for interacting there, through their reactivity, with dissolved contaminants in the pore space. They therefore seem well suited for groundwater remediation by in-situ formation of reactive barriers for chlorinated-solvent plumes or even for source removal.
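A back-of-the-envelope sketch of how the recovered fraction at the extraction well could be estimated from a breakthrough curve, by integrating concentration times pumping rate over time; all numbers below are placeholders, not the measured field data.

```python
import numpy as np

# Hypothetical breakthrough data at the extraction well: sampling times (h),
# CIC concentration (mg/L) and pumping rate (L/h).
t = np.array([0, 12, 24, 48, 96, 168], dtype=float)   # h
c = np.array([0.0, 2.0, 8.0, 12.0, 6.0, 1.0])         # mg/L
q = 250.0                                             # L/h, assumed constant
m_injected = 5.0e6                                    # mg of CIC injected (assumption)

m_recovered = np.trapz(c * q, t)                      # mg, integral of C(t)*Q over time
print(f"recovered fraction: {m_recovered / m_injected:.1%}")
```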

Keywords: carbo-iron colloids, chlorinated solvents, in-situ remediation, particle transport, plume treatment

Procedia PDF Downloads 243
27817 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors

Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin

Abstract:

IoT devices are the basic building blocks of an IoT network and generate enormous volumes of real-time, high-speed data that help organizations and companies take intelligent decisions. Integrating this enormous amount of data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they go to sleep or wake up periodically and aperiodically, depending on the traffic load, in order to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than required, the connection is lost, and other devices join the network to replace the broken-down or departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data; because of this dynamic nature, the actual reason for abnormal data is often unknown. If data are of poor quality, decisions are likely to be unsound. It is therefore highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic and statistical methods for analysing stored data in the data processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how it impacts data quality. This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce good-quality data. The data quality model is built for sensors monitoring water quality, using DBSCAN clustering and weather sensors. An extensive study has been carried out on the relationship between the data of weather sensors and of sensors monitoring the water quality of lakes and beaches, and a detailed theoretical analysis is presented of the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster position), and consistency. Finally, a statistical analysis is performed on the clusters formed by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
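A compact sketch of the flavour of checks the model describes: DBSCAN-based outlier detection plus completeness and CoV-based consistency on a joint weather/water-quality table; the column names, thresholds and synthetic data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

# Hypothetical joint stream of weather and water-quality readings.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "air_temp": rng.normal(20, 3, 500),
    "water_temp": rng.normal(17, 2, 500),
    "turbidity": rng.normal(5, 1, 500),
})
df.loc[10:20, "turbidity"] = np.nan     # simulate missing values
df.iloc[::50] *= 3                      # simulate gross outliers

completeness = 1.0 - df.isna().mean()   # completeness per stream

X = StandardScaler().fit_transform(df.fillna(df.mean()))
labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(X)

outlier_rate = np.mean(labels == -1)    # DBSCAN noise points
clean = df[labels != -1]
cov = clean.std() / clean.mean()        # consistency via coefficient of variation
print(completeness, outlier_rate, cov, sep="\n")
```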

Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)

Procedia PDF Downloads 131
27816 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons

Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung

Abstract:

The tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring the single-event energy depositions of radiation in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness, RBE, for radiation therapy or the radiation weighting factor, WR, for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with the TEPC walls, which contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e. 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressure. A gas pressure of 422 torr was applied to simulate a 1 μm diameter biological site. The calibration of the mini TEPC was performed using two marker points in the lineal energy spectrum, i.e. the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/μm) peaks due to neutron-capture products, medium lineal energy (10–100 keV/μm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/μm) peaks from reactor photons. For the Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150 and B. The low lineal energy peaks were small compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energies.
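A small sketch of the edge-calibration idea: the pulse-height axis is scaled so that the measured proton-edge channel maps onto a reference lineal energy, after which spectrum means can be computed. The edge value, channel number and spectrum below are stated assumptions, not the THOR measurements.

```python
import numpy as np

# Hypothetical channel axis and counts of a measured pulse-height spectrum.
channels = np.arange(1, 1025)
counts = np.random.poisson(5, size=channels.size)   # placeholder data

# Assumed reference lineal energy of the proton-edge marker (keV/um) and the
# channel where it was located; real values depend on the site size and gas.
Y_PROTON_EDGE = 136.0    # assumption
CH_PROTON_EDGE = 700     # assumption

k = Y_PROTON_EDGE / CH_PROTON_EDGE     # keV/um per channel
y = k * channels                       # calibrated lineal energy axis

# Frequency- and dose-mean lineal energies from the calibrated spectrum f(y).
f = counts / np.trapz(counts, y)
y_F = np.trapz(y * f, y)
y_D = np.trapz(y**2 * f, y) / y_F
print(f"y_F = {y_F:.1f} keV/um, y_D = {y_D:.1f} keV/um")
```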

Keywords: TEPC, lineal energy, microdosimetry, radiation quality

Procedia PDF Downloads 464
27815 Machine That Provides Mineral Fertilizer Equal to the Soil on the Slopes

Authors: Huseyn Nuraddin Qurbanov

Abstract:

The reliable food supply of the population of the republic is one of the main directions of the state's economic policy, and grain growing, which is the basis of agriculture, is important in this area. In the cultivation of cereals on slopes, the application of equal amounts of mineral fertilizers under the soil before sowing is a very important technological process. The low level of technical equipment in this area prevents producers from supplying the country with cereals of the necessary quality. Experience with modern technical means has shown that there is currently a need to apply fertilizer under the soil on slopes in equal amounts while fully meeting agro-technical requirements. No fundamental changes have been made to the industrial machines that place fertilizer under the soil, and on slopes fertilizer is still applied unevenly. This leads to the loss of new seedlings of crops planted in the fall, owing to frost intolerance during the winter, and to reduced productivity. In specific climatic conditions there is an optimal fertilization rate for each agricultural product, and placing fertilizers in the soil is one of the conditions that increase their efficiency in the field. The development of a new technical solution for uniform fertilizing and ploughing on slopes, with improved technological and design parameters that take into account the physical and mechanical properties of fertilizers, is therefore very important. Taking the above issues into account, a combined plough was developed in our laboratory. The combined plough carries out the pre-sowing technological operation in cereal cultivation, providing a smooth, equal placement of mineral fertilizers under the soil on slopes. Mathematical models of a smooth spreader that evenly distributes fertilizers in the field have been developed. Diagrams and graphs of the distribution over the eight sections of the smooth spreader were constructed for different inclination angles of the slopes, and the percentage of uniform distribution in the field and the productivity were determined by practical and theoretical analysis.
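One simple way to summarize such distribution measurements is the coefficient of variation across the spreader sections at each slope angle, as in the sketch below; the section masses are placeholder numbers, not the experimental data.

```python
import numpy as np

# Hypothetical fertilizer mass (g) caught in the eight sections of the smooth
# spreader at two slope angles.
sections = {
    "0 deg":  np.array([126, 124, 125, 127, 125, 126, 124, 125]),
    "12 deg": np.array([131, 128, 124, 122, 121, 123, 126, 130]),
}
for angle, m in sections.items():
    cv = m.std(ddof=1) / m.mean() * 100      # coefficient of variation, %
    print(f"{angle}: CV = {cv:.1f}% -> distribution uniformity = {100 - cv:.1f}%")
```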

Keywords: combined plough, mineral fertilizer, equal sowing, fertilizer norm, grain-crops, sowing fertilizer

Procedia PDF Downloads 133
27814 Sea-Land Segmentation Method Based on the Transformer with Enhanced Edge Supervision

Authors: Lianzhong Zhang, Chao Huang

Abstract:

Sea-land segmentation is a basic step in many tasks such as sea surface monitoring and ship detection. Existing sea-land segmentation algorithms have poor segmentation accuracy, and their parameter adjustment is cumbersome, making it difficult to meet actual needs. In addition, current sea-land segmentation adopts traditional deep learning models based on convolutional neural networks (CNN). The transformer architecture has achieved great success in the field of natural images, but its application to radar images is less studied. Therefore, this paper proposes a sea-land segmentation method based on the transformer architecture with strengthened edge supervision. It uses a self-attention mechanism with a gating strategy to better learn the relative position bias, and an additional edge supervision branch is introduced. In the decoder stage, the feature information of the two branches interacts, thereby improving the edge precision of the sea-land segmentation. Experimental results on a Gaofen-3 satellite image dataset show that the proposed method can effectively improve the accuracy of sea-land segmentation, especially at the sea-land edges. The mean IoU (Intersection over Union), edge precision, overall precision and F1 score reach 96.36%, 84.54%, 99.74% and 98.05% respectively, which are superior to those of mainstream segmentation models and of high practical value.
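The metrics reported above (IoU, precision, F1) can be computed directly from binary masks, as in this sketch; the toy masks merely stand in for SAR predictions and ground truth.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """pred, gt: boolean arrays (True = land). Returns IoU, precision, F1."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, precision, f1

# Toy masks: the right half of the scene is land, prediction slightly over-segments.
gt = np.zeros((64, 64), bool); gt[:, 32:] = True
pred = gt.copy(); pred[:, 30:32] = True
print(segmentation_metrics(pred, gt))
```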

Keywords: SAR, sea-land segmentation, deep learning, transformer

Procedia PDF Downloads 171
27813 New Security Approach of Confidential Resources in Hybrid Clouds

Authors: Haythem Yahyaoui, Samir Moalla, Mounir Bouden, Skander ghorbel

Abstract:

Nowadays, cloud environments are becoming a necessity for companies. This technology provides the opportunity to access data anywhere and at any time, as well as optimized and secured access to resources, and gives more security to the data stored in the platform. However, some companies do not trust cloud providers: from their point of view, providers can access and modify confidential data such as bank accounts. Many works have been done in this context; they conclude that the encryption methods applied by providers ensure confidentiality, yet they overlook the fact that cloud providers can decrypt those confidential resources. The best solution here is to apply some modifications to the data before sending them to the cloud, with the objective of making them unreadable to the provider. This work aims at enhancing the quality of service of providers and improving the trust of customers.
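A minimal sketch of the general idea of transforming data on the customer side before upload, here using symmetric encryption with a key that never leaves the customer; the library choice and the hypothetical upload_to_cloud call are assumptions, not the scheme proposed in the paper.

```python
from cryptography.fernet import Fernet

# The key stays with the customer; only ciphertext is sent to the cloud.
key = Fernet.generate_key()
f = Fernet(key)

record = b"IBAN: DE00 1234 5678 9000 0000 00"   # confidential resource
ciphertext = f.encrypt(record)                  # what the provider stores

# upload_to_cloud(ciphertext)  # hypothetical call; the provider never sees the key

assert f.decrypt(ciphertext) == record          # only the key holder can read it back
```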

Keywords: cloud, confidentiality, cryptography, security issues, trust issues

Procedia PDF Downloads 372
27812 Scaling Siamese Neural Network for Cross-Domain Few Shot Learning in Medical Imaging

Authors: Jinan Fiaidhi, Sabah Mohammed

Abstract:

Cross-domain learning in the medical field is a research challenge, as many conditions, such as those in oncology imaging, involve different imaging modalities. Moreover, in most medical learning applications the training sample size is relatively small. Although few-shot learning (FSL) using a Siamese neural network can be trained on a small sample with remarkable accuracy, FSL fails to be effective across multiple domains because its convolution weights are set for task-specific applications. In this paper, we address this problem by enabling FSL to shift across domains: we design a two-layer FSL network that learns individually from each domain and produces a shared feature map, with extra modulation used at the second layer to recognize important targets from mixed domains. Our initial experiments based on mixed medical datasets such as Medical-MNIST reveal promising results. We aim to continue this research and perform full-scale analytics to test our cross-domain FSL learning.
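For readers unfamiliar with metric-based few-shot classification, the sketch below shows the bare mechanics: queries are assigned to the nearest class prototype in an embedding space, with a placeholder modulation standing in for the domain-specific adjustment described above. The embedding, the modulation form and the toy data are assumptions, not the proposed two-layer network.

```python
import numpy as np

def embed(x, gamma=1.0, beta=0.0):
    """Placeholder embedding; gamma/beta stand in for domain-specific modulation
    applied on top of a shared feature map."""
    return gamma * x.reshape(len(x), -1) + beta

def classify(query, support, support_labels, gamma=1.0, beta=0.0):
    """Nearest-prototype (metric-based) few-shot classification."""
    zq = embed(query, gamma, beta)
    zs = embed(support, gamma, beta)
    protos = {c: zs[support_labels == c].mean(axis=0) for c in np.unique(support_labels)}
    classes = sorted(protos)
    d = np.stack([np.linalg.norm(zq - protos[c], axis=1) for c in classes], axis=1)
    return np.array(classes)[d.argmin(axis=1)]

# Toy 5-way 1-shot episode with 8x8 "images".
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 8, 8)); labels = np.arange(5)
query = support + rng.normal(scale=0.1, size=support.shape)
print(classify(query, support, labels))
```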

Keywords: Siamese neural network, few-shot learning, meta-learning, metric-based learning, thick data transformation and analytics

Procedia PDF Downloads 51
27811 Monitoring Trends of Science and Technology Policies in South Korea

Authors: Jeonghwan Jeon

Abstract:

As science and technology (S&T) advance rapidly, national governments attempt to reflect these changes in S&T policy in order to promote public R&D activities and economic development. Because of these rapid advances, it becomes important to monitor the trends of S&T policies when formulating new policy and identifying promising S&T fields. This paper therefore traces national S&T policies over the past decade to monitor changes in the major S&T fields in the case of South Korea. As one of the organizations responsible for S&T policy in South Korea, the National Science and Technology Council (NSTC) was established to coordinate inter-ministerial policies and programs and to determine the national and public S&T policy of South Korea. The items of national S&T policy determined by the NSTC are therefore useful for understanding the needs of major S&T fields and for adapting to the rapid change of S&T. To this end, we first gathered data on 512 items on the S&T agenda from 1999 to 2013. Based on these items, the trend of S&T policies is monitored and the major S&T fields are derived. Differences in policy purposes between S&T fields are also identified, providing guidelines for policy making such as budget allocation or investment promotion.
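A toy sketch of one way such agenda items could be reduced to field-level trends, by counting field keywords per year; the item titles and keyword list are invented placeholders, not the NSTC data.

```python
from collections import Counter

# Hypothetical NSTC agenda items as (year, title); real items would be loaded
# from the compiled 1999-2013 database.
items = [
    (1999, "Basic plan for biotechnology promotion"),
    (2005, "Nano technology development strategy"),
    (2011, "Green energy R&D investment plan"),
    (2013, "Big data and software policy agenda"),
]
fields = ["bio", "nano", "energy", "data", "software", "space"]

trend = {}
for year, title in items:
    hits = [f for f in fields if f in title.lower()]
    trend.setdefault(year, Counter()).update(hits)

for year in sorted(trend):
    print(year, dict(trend[year]))
```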

Keywords: science and technology policy, trends, S&T field, monitoring

Procedia PDF Downloads 315
27810 Verification and Application of Finite Element Model Developed for Flood Routing in Rivers

Authors: A. L. Qureshi, A. A. Mahessar, A. Baloch

Abstract:

Flood wave propagation in river channel flow can be described by the nonlinear equations of motion for unsteady flow. However, it is difficult to find analytical solutions of these complex nonlinear equations; hence, the numerical model should be verified against field data and other numerical predictions. This paper presents the verification of a finite element model developed for unsteady flow in open channels. The results of the proposed model show good agreement with both the Preissmann scheme and the HEC-RAS model for discharge hydrographs over a 29 km river reach at both sites (15 km from the upstream end and at the downstream end), and an agreeable comparison with the Preissmann scheme for the flow depth (stage) hydrographs. The proposed model has also been applied to forecast daily discharges 400 km downstream of the Sukkur Barrage, where it reproduces the observed daily discharges accurately. Hence, this model may be used for predicting flood hazards and issuing flood warnings in advance.
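For reference, the nonlinear equations of motion mentioned above are usually written as the one-dimensional Saint-Venant equations; the form below uses standard notation (A flow area, Q discharge, h flow depth, q_l lateral inflow, g gravity, S_0 bed slope, S_f friction slope) and is quoted from common open-channel hydraulics practice rather than from the paper itself.

```latex
% Continuity and momentum (Saint-Venant) equations for 1-D unsteady open-channel flow
\frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q_l
\qquad
\frac{\partial Q}{\partial t}
  + \frac{\partial}{\partial x}\!\left(\frac{Q^{2}}{A}\right)
  + gA\,\frac{\partial h}{\partial x}
  - gA\left(S_0 - S_f\right) = 0
```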

Keywords: finite element method, Preissmann scheme, HEC-RAS, flood forecasting, Indus river

Procedia PDF Downloads 497
27809 Estimation of Chronic Kidney Disease Using Artificial Neural Network

Authors: Ilker Ali Ozkan

Abstract:

In this study, an artificial neural network model has been developed to estimate chronic kidney failure, which is a common disease. The patients' age, their blood and biochemical values, and various chronic diseases make up the 24 input variables used for the estimation process. The input data were preprocessed because they contain both missing values and nominal values. The 147 patient records obtained after preprocessing were divided into 70% training and 30% testing data. As a result of the study, the artificial neural network model with 25 neurons in the hidden layer was found to have the lowest error value. Chronic kidney failure could be estimated with an accuracy of 99.3% using this artificial neural network model. The developed artificial neural network was thus found to be successful for estimating chronic kidney failure from clinical data.
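A minimal sketch of this kind of setup with scikit-learn: a 70/30 split and a single-hidden-layer network with 25 neurons. The random placeholder matrix merely stands in for the 147 preprocessed patient records, which are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data with the same shape as the study: 147 records, 24 inputs.
rng = np.random.default_rng(42)
X = rng.normal(size=(147, 24))
y = rng.integers(0, 2, size=147)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000, random_state=42))
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```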

Keywords: estimation, artificial neural network, chronic kidney failure disease, disease diagnosis

Procedia PDF Downloads 438
27808 Positive Shock: The PhD Journey of International Students at UK Universities: A Qualitative Interpretative Phenomenological Study

Authors: Dounyazad Sour

Abstract:

This research examines international doctoral students' reflections on their journey and experiences of studying for a PhD in the UK. Since the early 1990s, the number of international students in the UK has increased. The significant contribution of these international students to the cultural and academic diversity of UK universities' doctoral programmes is widely acknowledged, and the substantial fees these students bring to British Higher Education institutions are also much appreciated. The rationale for undertaking this study grew from personal experience of studying in the UK. Through membership in different groups, both online and, when regulations permitted it, face-to-face social groups, it quickly became apparent that among all students there were both shared and individual experiences of struggles and triumphs. This insight led to the decision to investigate these matters in greater detail. This in-depth qualitative interpretative study, inspired by a phenomenological approach, offers fresh insights into the academic, social and cultural experiences of international PhD students in the UK. Data collection was carried out in the UK over a period of three months, deploying focus groups, individual semi-structured interviews, and images selected by participants to represent their feelings about their experiences. The ten participants attend different UK universities, study a range of disciplines and have diverse backgrounds. Interviews and discussions took place in the participants' preferred languages: Arabic, English and French. The analysis shows that the participants had experienced two types of shock: negative and positive. Negative shocks, which have received considerable attention in the field of international students' experiences, relate to unexpected incidents in the participants' interactions with others, whether people from different backgrounds or people from the same background. These impacted their experience negatively through feelings of anxiety, stress, low self-esteem and xenophobia, hindering factors that make it harder for international students to adapt to the new environment. Positive shocks, which have remained largely under-researched in the field of international students' experiences, refer to all the positive occurrences that participants experienced, for instance a shop assistant saying "do you need any help, honey?", which brought a sense of belonging, of feeling at home, of safety and of satisfaction to the respondents and made their experiences less challenging. This investigation offers insights into international PhD students' experiences and sheds new light on the shocks that can act as facilitators rather than inhibitors.

Keywords: international students, PhD journey, phenomenological approach, positive shock

Procedia PDF Downloads 185
27807 Research on the Development and Space Optimization of Rental-Type Public Housing in Hangzhou

Authors: Xuran Zhang, Huiru Chen

Abstract:

In recent years, China has made great efforts to cultivate and develop the housing rental market, especially rental-type public housing, which has attracted attention from all sectors of society. This paper takes Hangzhou rental-type public housing as the research object and divides its development into three stages according to the different supply modes of rental-type public housing. Through data collection and field research, the paper summarizes the spatial characteristics of rental-type public housing from five perspectives: spatial planning, spatial layout, spatial integration, spatial organization and spatial configuration. On this basis, the paper proposes optimizations of the spatial layout. The study concludes that the spatial layout of rental-type public housing should be coordinated with the development of urban planning; when planning and constructing, it is necessary to select more mixed construction modes, to centralize appropriately, and to improve the surrounding transportation and service facilities. It is hoped that the recommendations in this paper will provide a reference for the further development of rental-type public housing in Hangzhou.

Keywords: Hangzhou, rental-type public housing, spatial distribution, spatial optimization

Procedia PDF Downloads 319
27806 Impact of Map Generalization in Spatial Analysis

Authors: Lin Li, P. G. R. N. I. Pussella

Abstract:

When representing spatial data and their attributes on different types of maps, the scale plays a key role in the process of map generalization. The process consists of two main operators, selection and omission. Once data are selected, they undergo several geometric change processes such as elimination, simplification, smoothing, exaggeration, displacement, aggregation and size reduction. As a result of these operations at different levels of the data, the geometry of spatial features, such as length, sinuosity, orientation, perimeter and area, is altered. This is worst in the preparation of small-scale maps, since the cartographer does not have enough space to represent all the features on the map. In practice, when GIS users want to analyze a set of spatial data, they retrieve a dataset and carry out the analysis without considering very important characteristics such as the scale, the purpose of the map and the degree of generalization. Furthermore, GIS users use and compare different maps with different degrees of generalization. Sometimes GIS users go beyond the scale of the source map using the zoom-in facility and violate the basic cartographic rule that a larger-scale map should not be created from a smaller-scale map. The main objective of this study is to discuss the effect of map generalization on GIS analysis. Three digital maps at different scales (1:10,000, 1:50,000 and 1:250,000), prepared by the Survey Department of Sri Lanka, the national mapping agency of Sri Lanka, were used. Features common to the three maps were selected, and an overlay analysis was repeated with different combinations of the data; road, river and land use data sets were used for the study. A simple model to find the best place for a wildlife park was used to identify the effects. The results show remarkable effects of the different degrees of generalization: different locations with different geometries were obtained as outputs of the analysis. The study suggests that there should be sound methods to overcome this effect; as a solution, it is recommended to bring all the data sets to a common scale before performing the analysis.
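The effect the study measures can be reproduced in miniature with a geometry library: simplifying a line with increasing tolerance, as a stand-in for decreasing map scale, changes its length and vertex count. The coordinates and tolerances below are invented for illustration.

```python
from shapely.geometry import LineString

# A wiggly road centreline (coordinates in metres, made up for illustration).
road = LineString([(0, 0), (10, 8), (20, 3), (30, 11), (40, 2), (50, 9), (60, 0)])

for tol in (0.0, 2.0, 5.0, 10.0):   # larger tolerance ~ stronger generalization
    simplified = road.simplify(tol, preserve_topology=True)
    print(f"tolerance {tol:>4}: length = {simplified.length:.1f} m, "
          f"vertices = {len(simplified.coords)}")
```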

Keywords: generalization, GIS, scales, spatial analysis

Procedia PDF Downloads 326
27805 Identity Verification Based on Multimodal Machine Learning on Red Green Blue (RGB) Red Green Blue-Depth (RGB-D) Voice Data

Authors: LuoJiaoyang, Yu Hongyang

Abstract:

In this paper, we experimented with a new approach to multimodal identification using RGB, RGB-D and voice data. The multimodal combination of RGB and voice data has been applied to tasks such as emotion recognition with good results and stability, and the same holds for identity recognition tasks. We believe that data from different modalities can enhance the effect of the model through mutual reinforcement. We extend the dual-modality setting to three modalities and try to improve the effectiveness of the network by increasing the number of modalities. We also implemented the single-modal identification systems separately, tested the data of these different modalities under clean and noisy conditions, and compared their performance with the multimodal model. In designing the multimodal model, we tried a variety of fusion strategies and finally chose the fusion method with the best performance. The experimental results show that the performance of the multimodal system is better than that of any single modality, especially in dealing with noise, and the multimodal system achieves an average improvement of 5%.
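One common fusion strategy of the kind compared in such experiments is score-level (late) fusion, sketched below as a weighted average of per-modality class posteriors; the equal weights and toy scores are assumptions, not the fusion method the paper finally selected.

```python
import numpy as np

def late_fusion(prob_rgb, prob_rgbd, prob_voice, weights=(1/3, 1/3, 1/3)):
    """Score-level fusion: weighted average of per-modality class probabilities."""
    stacked = np.stack([prob_rgb, prob_rgbd, prob_voice])   # (3, n_classes)
    fused = np.average(stacked, axis=0, weights=weights)
    return fused.argmax(), fused

# Toy posteriors over 4 enrolled identities from three single-modal models.
identity, scores = late_fusion(
    np.array([0.60, 0.20, 0.10, 0.10]),
    np.array([0.30, 0.40, 0.20, 0.10]),
    np.array([0.55, 0.15, 0.20, 0.10]))
print(identity, scores.round(2))
```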

Keywords: multimodal, three modalities, RGB-D, identity verification

Procedia PDF Downloads 68
27804 Response of Okra (Abelmoschus Esculentus (L). Moench) to Soil Amendments and Weeding Regime

Authors: Olusegun Raphael Adeyemi, Samuel Oluwaseun Osunleti, Abiddin Adekunle Bashiruddin

Abstract:

Field trials were conducted in 2020 and 2021 at the Teaching and Research Farm of the Federal University of Agriculture, Abeokuta, Ogun State, Nigeria, to evaluate the effect of biochar application under different weeding regimes on the growth and yield of okra. Treatments were laid out in a split-plot arrangement in a randomized complete block design with three replications. Main-plot treatments were three levels of biochar, namely 0 t/ha, 10 t/ha and 20 t/ha, while sub-plot treatments consisted of four weeding regimes (weeding at 3, 6 and 9 WAS; weeding at 3 and 6 WAS; weeding at 3 WAS; and a weedy check as control). Data collected on the growth and yield of okra and on weed parameters were subjected to analysis of variance, and treatment means were separated using the least significant difference at p < 0.05. Results showed that biochar applied at 20 t/ha increased okra yield by 47.5% compared to the control. Weeding at 3, 6 and 9 WAS gave the highest okra yield, while uncontrolled weed infestation throughout crop growth resulted in an 87.3% yield reduction. It is concluded that weed suppression and the growth and yield of okra can be enhanced by the application of biochar at 20 t/ha combined with weeding at 3, 6 and 9 WAS, which is hence recommended.

Keywords: biochar, okra, weeding, weed competition

Procedia PDF Downloads 56
27803 Non-Linear Causality Inference Using BAMLSS and Bi-CAM in Finance

Authors: Flora Babongo, Valerie Chavez

Abstract:

Inferring causality from observational data is one of the fundamental subjects, especially in quantitative finance. So far, most papers analyze additive noise models with either linearity, nonlinearity or Gaussian noise. We fill in the gap by providing a nonlinear, non-Gaussian causal multiplicative noise model that aims to distinguish the cause from the effect using a two-step method based on Bayesian additive models for location, scale and shape (BAMLSS) and on causal additive models (CAM). We have tested our method on simulated and real data and reached an accuracy of 0.86 on average. As real data, we considered the causality between financial indices such as the S&P 500, Nasdaq, CAC 40 and Nikkei, and companies' log-returns. Our results can be useful for inferring causality when the data are heteroskedastic or non-injective.
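To give a flavour of bivariate cause-effect discovery, the sketch below uses a generic residual-dependence heuristic: fit a flexible regression in each direction and prefer the direction whose residuals look less dependent on the input. This is only a crude stand-in under an additive-noise assumption; it does not implement the BAMLSS/Bi-CAM procedure, and the multiplicative-noise setting of the paper needs the location-scale-shape modelling omitted here.

```python
import numpy as np
from scipy.stats import spearmanr

def direction_score(x, y, deg=4):
    """Fit y ~ poly(x) and score the dependence between |residual| and x;
    a smaller score suggests the more plausible causal direction."""
    resid = y - np.polyval(np.polyfit(x, y, deg), x)
    rho, _ = spearmanr(np.abs(resid), x)
    return abs(rho)

# Synthetic additive-noise data with a nonlinear cause-effect relation x -> y.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 2000)
y = x + 0.3 * x**3 + rng.normal(scale=0.5, size=x.size)

print("x->y score:", direction_score(x, y))
print("y->x score:", direction_score(y, x))
```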

Keywords: causal inference, DAGs, BAMLSS, financial index

Procedia PDF Downloads 146
27802 Managing Incomplete PSA Observations in Prostate Cancer Data: Key Strategies and Best Practices for Handling Loss to Follow-Up and Missing Data

Authors: Madiha Liaqat, Rehan Ahmed Khan, Shahid Kamal

Abstract:

Multiple imputation with delta adjustment is a versatile and transparent technique for addressing univariate missing data in the presence of various missing mechanisms. This approach allows for the exploration of sensitivity to the missing-at-random (MAR) assumption. In this review, we outline the delta-adjustment procedure and illustrate its application for assessing the sensitivity to deviations from the MAR assumption. By examining diverse missingness scenarios and conducting sensitivity analyses, we gain valuable insights into the implications of missing data on our analyses, enhancing the reliability of our study's conclusions. In our study, we focused on assessing logPSA, a continuous biomarker in incomplete prostate cancer data, to examine the robustness of conclusions against plausible departures from the MAR assumption. We introduced several approaches for conducting sensitivity analyses, illustrating their application within the pattern mixture model (PMM) under the delta adjustment framework. This proposed approach effectively handles missing data, particularly loss to follow-up.
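A schematic sketch of the delta-adjustment idea for sensitivity analysis: impute under MAR, then shift the imputed values by a range of delta offsets and watch how the pooled estimate moves. The imputation here is a simple bootstrap draw from the observed values, a deliberate simplification of the multiple-imputation/pattern-mixture machinery; the data, delta grid and number of imputations are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical logPSA values with roughly 30% missing due to loss to follow-up.
logpsa = pd.Series(rng.normal(1.0, 0.6, 300))
logpsa[rng.random(300) < 0.3] = np.nan

def delta_adjusted_mean(y, delta, n_imputations=20):
    """Impute by drawing from observed values (MAR-style), shift imputations
    by delta (delta = 0 corresponds to MAR), and pool the resulting means."""
    estimates = []
    for _ in range(n_imputations):
        filled = y.copy()
        miss = filled.isna()
        draws = rng.choice(y.dropna().values, size=miss.sum(), replace=True)
        filled[miss] = draws + delta
        estimates.append(filled.mean())
    return np.mean(estimates)

for delta in (-0.4, -0.2, 0.0, 0.2, 0.4):
    print(f"delta = {delta:+.1f}: pooled mean logPSA = "
          f"{delta_adjusted_mean(logpsa, delta):.3f}")
```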

Keywords: loss to follow-up, incomplete response, multiple imputation, sensitivity analysis, prostate cancer

Procedia PDF Downloads 80
27801 General Architecture for Automation of Machine Learning Practices

Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain

Abstract:

Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised. This often entails collecting a sizable dataset and cleaning it to remove or correct any inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired. This often entails actions like scaling or normalising the data, handling outliers, selecting appropriate features, reducing dimensionality, etc. This pre-processed data is then used to train a model on some machine learning algorithm. After the model has been trained, it needs to be assessed by determining metrics like accuracy, precision, and recall, utilising a test dataset. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. Thus, various machine learning algorithms can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example, for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of selected features, a different algorithm can be used. As a result, in order to get the optimum outcome, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large set of combinations of pre-processing steps and algorithms into an automated workflow, which simplifies the task of carrying out all the possibilities.
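The core idea, enumerating an operator pool and evaluating every pre-processing/model combination, can be sketched with scikit-learn pipelines as below; the specific operators, dataset and scoring are illustrative choices, not the architecture's prescribed pool.

```python
from itertools import product
from sklearn.datasets import load_breast_cancer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Operator pool: every pre-processing choice can be paired with every model.
imputers = {"mean": SimpleImputer(strategy="mean"),
            "median": SimpleImputer(strategy="median")}
scalers = {"standard": StandardScaler(), "minmax": MinMaxScaler()}
models = {"logreg": LogisticRegression(max_iter=5000),
          "tree": DecisionTreeClassifier(random_state=0)}

results = {}
for (ik, imp), (sk, sc), (mk, mdl) in product(imputers.items(),
                                              scalers.items(),
                                              models.items()):
    pipe = Pipeline([("impute", imp), ("scale", sc), ("model", mdl)])
    results[(ik, sk, mk)] = cross_val_score(pipe, X, y, cv=5).mean()

best = max(results, key=results.get)        # best combination found by the scheduler
print(best, round(results[best], 4))
```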

Keywords: machine learning, automation, AUTOML, architecture, operator pool, configuration, scheduler

Procedia PDF Downloads 52
27800 Groundwater Potential Mapping using Frequency Ratio and Shannon’s Entropy Models in Lesser Himalaya Zone, Nepal

Authors: Yagya Murti Aryal, Bipin Adhikari, Pradeep Gyawali

Abstract:

The Lesser Himalaya zone of Nepal consists of thrusting and folding belts, which play an important role in the sustainable management of groundwater in the Himalayan regions. The study area is located in the Dolakha and Ramechhap Districts of Bagmati Province, Nepal. Geologically, these districts are situated in the Lesser Himalayas and partly encompass the Higher Himalayan rock sequence, which includes low-grade to high-grade metamorphic rocks. Following the Gorkha Earthquake in 2015, numerous springs dried up, and many others are currently experiencing depletion due to the distortion of the natural groundwater flow. The primary objective of this study is to identify potential groundwater areas and determine suitable sites for artificial groundwater recharge. Two distinct statistical approaches were used to develop models: the Frequency Ratio (FR) and Shannon Entropy (SE) methods. The study utilized both primary and secondary datasets and incorporated significant conditioning and controlling factors derived from fieldwork and literature reviews. Field data collection involved a spring inventory, soil analysis, lithology assessment, and a hydro-geomorphological study. Additionally, slope, aspect, drainage density, and lineament density were extracted from a Digital Elevation Model (DEM) using GIS and transformed into thematic layers. For training and validation, the 114 springs were divided in a 70/30 ratio, together with an equal number of non-spring pixels. After assigning weights to each class based on the two proposed models, a groundwater potential map was generated using GIS, classifying the area into five levels: very low, low, moderate, high, and very high. The models' outcomes reveal that over 41% of the area falls into the low and very low potential categories, while only 30% of the area demonstrates a high probability of groundwater potential. To evaluate model performance, accuracy was assessed using the Area Under the Curve (AUC). The success-rate AUC values for the FR and SE methods were 78.73% and 77.09%, respectively, and the prediction-rate AUC values were 76.31% and 74.08%. The results indicate that the FR model exhibits greater prediction capability than the SE model in this case study.
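The frequency ratio weight of a factor class is the share of springs falling in the class divided by the share of area the class occupies; the sketch below computes it for hypothetical slope classes, with all counts invented for illustration.

```python
import pandas as pd

# Hypothetical class statistics for one conditioning factor (slope classes):
# pixel counts per class and spring occurrences per class.
df = pd.DataFrame({
    "class": ["0-15 deg", "15-30 deg", "30-45 deg", ">45 deg"],
    "pixels": [120_000, 260_000, 180_000, 40_000],
    "springs": [14, 42, 21, 3],
})

# Frequency ratio = (% of springs in class) / (% of area in class).
df["pct_springs"] = df["springs"] / df["springs"].sum()
df["pct_area"] = df["pixels"] / df["pixels"].sum()
df["FR"] = df["pct_springs"] / df["pct_area"]
print(df[["class", "FR"]])
```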

Keywords: groundwater potential mapping, frequency ratio, Shannon’s Entropy, Lesser Himalaya Zone, sustainable groundwater management

Procedia PDF Downloads 76
27799 Lipid-Coated Magnetic Nanoparticles for Frequency Triggered Drug Delivery

Authors: Yogita Patil-Sen

Abstract:

Superparamagnetic iron oxide nanoparticles (SPIONs) have become increasingly important materials for the separation of specific biomolecules, as drug delivery vehicles, as contrast agents for MRI and for magnetic hyperthermia in cancer therapy. Hyperthermia is emerging as an alternative cancer treatment to conventional radio- and chemotherapy, which have harmful side effects. When subjected to an alternating magnetic field, the magnetic energy of SPIONs is converted into thermal energy due to the movement of the particles. The ability of SPIONs to generate heat and potentially kill cancerous cells, which are more susceptible than normal cells to temperatures higher than 41 °C, forms the basis of the hyperthermia treatment. The amount of heat generated depends on the magnetic properties of the SPIONs, which in turn are affected by properties such as their size and shape. One of the main problems associated with SPIONs is particle aggregation, which limits their employability in in vivo drug delivery applications and hyperthermia cancer treatments. Coating the iron oxide core with thermally responsive lipid-based nanostructures tends to overcome the issue of aggregation, improves biocompatibility and can enhance drug loading efficiency. Herein we report the suitability of SPIONs and silica-coated core-shell SPIONs, further coated with various lipids, for drug delivery and magnetic hyperthermia applications. The synthesis of the nanoparticles was carried out using established methods reported in the literature with some modifications. The nanoparticles were characterised using Infrared spectroscopy (IR), X-ray Diffraction (XRD), Scanning Electron Microscopy (SEM), Transmission Electron Microscopy (TEM) and a Vibrating Sample Magnetometer (VSM). The heating ability of the nanoparticles was tested under an alternating magnetic field, and their efficacy as drug carriers was also investigated. The loading of an anticancer drug, doxorubicin, at 18 °C was measured for up to 48 hours using a UV-visible spectrophotometer. The drug release profile was obtained under thermal incubation at 37 °C and compared with that under the influence of an alternating magnetic field. The results suggest that the nanoparticles exhibit superparamagnetic behaviour, although coating reduces the magnetic properties of the particles. Both the uncoated and coated particles show good heating ability; again, coating decreases the heating behaviour of the particles. However, the coated particles show higher drug loading efficiency than the uncoated particles, and the drug release is much more controlled under the alternating magnetic field. Thus, the results demonstrate that lipid-coated SPIONs have potential as drug delivery vehicles for magnetic hyperthermia-based cancer therapy.

Keywords: drug delivery, hyperthermia, lipids, superparamagnetic iron oxide nanoparticles (SPIONS)

Procedia PDF Downloads 229
27798 Collaboration of Game Based Learning with Models Roaming the Stairs Using the Tajribi Method on the Eye PAI Lessons at the Ummul Mukminin Islamic Boarding School, Makassar South Sulawesi

Authors: Ratna Wulandari, Shahidin

Abstract:

This article examines how the Game-Based Learning model with the Roaming The Stairs game, combined with the tajribi method, can make PAI (Islamic religious education) lessons active and interactive. This research uses a qualitative approach with a case-study design. Data were collected using interviews, observation and documentation. Data analysis was carried out through the stages of data reduction, data display, and verification and conclusion drawing, and data validity was tested using the triangulation method. The results of the research show that (1) children in grades 9A, 9B and 9C like learning PAI using the Roaming The Stairs game, (2) children in grades 9A, 9B and 9C are active and can work in groups to solve problems in the Roaming The Stairs game, and (3) the class atmosphere becomes fun with this learning method, namely learning while playing.

Keywords: game based learning, Roaming The Stairs, Tajribi PAI

Procedia PDF Downloads 12
27797 Effects of Magnetization Patterns on Characteristics of Permanent Magnet Linear Synchronous Generator for Wave Energy Converter Applications

Authors: Sung-Won Seo, Jang-Young Choi

Abstract:

The rare-earth magnets used in synchronous generators offer many advantages, including high efficiency and greatly reduced size and weight. The permanent magnet linear synchronous generator (PMLSG) allows direct drive without the need for a mechanical device, so the PMLSG is well suited to translational applications such as wave energy converters and free-piston energy converters. This manuscript compares the effects of different magnetization patterns on the characteristics of double-sided PMLSGs with slotless stator structures. The Halbach array has a higher air-gap flux density than the vertical array, and the advantages of its performance and efficiency are widely known. To verify the advantage of the Halbach array, we apply a finite element method (FEM) and an analytical method. In general, FEM and analytical methods are used in electromagnetic analysis to determine model characteristics, and FEM is preferred for magnetic field analysis; however, FEM is often slow and inflexible, whereas the analytical method requires little time and produces an accurate analysis of the magnetic field. The air-gap flux density and the back-EMF were therefore obtained by FEM, and the results from the analytical method correspond well with the FEM results. The Halbach-array model shows lower copper loss than the vertical-array model because of its higher output power density, while the vertical-array model shows lower core loss than the Halbach-array model because of its lower air-gap flux density; the current density in the vertical model is therefore higher for identical power output. The completed manuscript will include the magnetic field characteristics and structural features of both models, compare the various results, and present a specific comparative analysis to determine the best model for application in a wave energy conversion system.

Keywords: wave energy converter, permanent magnet linear synchronous generator, finite element method, analytical method

Procedia PDF Downloads 296
27796 Exploring 21st Century Ecolinguistics: Navigating Hybrid Identities in a Changing World

Authors: Dace Aleksandraviča

Abstract:

The paper presents a theoretical exploration of the emerging field of 21st-century ecolinguistics, which examines the multi-faceted relationship between language, ecology, and identity in our rapidly changing global landscape. In an era characterized by unprecedented linguistic and cultural hybridity, understanding the interplay between language and environment is paramount. This paper delves into the concept of hybrid identities, examining how individuals negotiate their linguistic and cultural affiliations within diverse ecological contexts based on relevant prior contributions in the field. Drawing upon interdisciplinary perspectives from linguistics, environmental studies, and cultural studies, the research investigates the ways in which language shapes and is shaped by environmental realities. The abstract underscores the importance of ecolinguistic approaches in fostering environmental stewardship and promoting sustainable practices. By acknowledging the intrinsic link between language, culture, and ecology, it becomes possible to cultivate a deeper appreciation for linguistic diversity and empower individuals to navigate their hybrid identities in a rapidly changing world. In line with that, the paper hopes to contribute to the growing body of literature on ecolinguistics and offer insights into how language can serve as a tool for both environmental conservation and cultural revitalization.

Keywords: ecolinguistics, hybrid identities, language, globalization

Procedia PDF Downloads 43
27795 Wedding Organizer Strategy in the Era Covid-19 Pandemic In Surabaya, Indonesia

Authors: Rifky Cahya Putra

Abstract:

During the corona pandemic, some affected countries have faced difficult conditions. As a result, many traders and companies find it hard to operate in this pandemic era, so human activities in several fields must adopt a new lifestyle, known as the new normal. The transition from one mode of activity to another certainly requires a high degree of adaptation, so almost all sectors experience the impact of this phase, one of which is the wedding organizer business. This research aims to find out what strategies are used so that the company can keep running during the pandemic. Data were collected through interviews with the owner of the wedding organizer and his team. Qualitative descriptive data analysis used an interactive model consisting of three main steps, namely data reduction, data presentation, and conclusion drawing. From the interviews, it is concluded that there are three strategies: social media, sponsorship, and promotion.

Keywords: strategy, wedding organizer, pandemic, indonesia

Procedia PDF Downloads 130
27794 Development of Multi-Leaf Collimator-Based Isocenter Verification Tool Using Electrical Portal Imaging Device for Stereotactic Radiosurgery

Authors: Panatda Intanin, Sangutid Thongsawad, Chirapha Tannanonta, Todsaporn Fuangrod

Abstract:

Stereotactic radiosurgery (SRS) is a high-precision delivery technique that requires comprehensive quality assurance (QA) tests prior to treatment delivery. The isocenter of the delivery beam plays a critical role in treatment accuracy. The uncertainty of the isocenter is traditionally assessed using a circular cone, a Winston-Lutz (WL) phantom and film; this technique is considered time-consuming and highly dependent on the observer. In this work, the development of a multileaf collimator (MLC)-based isocenter verification tool using an electronic portal imaging device (EPID) is proposed and evaluated. In the conventional WL test, a 5 mm diameter ball bearing was aligned to the mechanical isocenter and a 10 mm diameter circular cone fixed to the gantry head defined the radiation field. The conventional setup was compared with the proposed setup, in which the MLC (10 x 10 mm) defines the radiation field instead of the cone; this represents a more realistic delivery field than a circular cone. Images were acquired with the EPID and with radiographic film in both experiments, at gantry angles of 0°, 90°, 180° and 270°. A software tool was developed in-house in MATLAB/Simulink to determine the centroid of the radiation field and the shadow of the WL phantom automatically, which gives higher accuracy than manual measurement. The deviations between the centroids in the cone-based and MLC-based WL tests were quantified. Comparing film and EPID images, the deviation over all gantry angles was 0.26 ± 0.19 mm for the cone-based test and 0.43 ± 0.30 mm for the MLC-based test. The absolute deviation between the cone-based and MLC-based WL tests was 0.59 ± 0.28 mm on EPID images and 0.14 ± 0.13 mm on film images. Therefore, MLC-based isocenter verification using the EPID provides a highly sensitive tool for SRS QA.
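The centroid-and-deviation step can be illustrated in a few lines; the sketch below is written in Python rather than the in-house MATLAB/Simulink tool, and the thresholds, pixel size and synthetic image are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import center_of_mass

def wl_deviation(epid_image, field_thr, shadow_thr, pixel_mm=0.336):
    """Offset (mm) between the centroid of the MLC-defined radiation field and
    the centroid of the ball-bearing shadow on an EPID frame."""
    field_mask = epid_image > field_thr                 # whole irradiated field
    bb_mask = field_mask & (epid_image < shadow_thr)    # attenuated BB shadow
    cy_f, cx_f = center_of_mass(field_mask)
    cy_b, cx_b = center_of_mass(bb_mask)
    return float(np.hypot(cx_f - cx_b, cy_f - cy_b)) * pixel_mm

# Synthetic EPID frame: a 10 x 10 mm field with a slightly off-centre BB shadow.
img = np.zeros((256, 256))
img[113:143, 113:143] = 1.0                             # open field
yy, xx = np.mgrid[:256, :256]
img[(yy - 128) ** 2 + (xx - 130) ** 2 < 7 ** 2] = 0.2   # BB shadow
print(f"isocenter deviation: {wl_deviation(img, 0.05, 0.5):.2f} mm")
```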

Keywords: isocenter verification, quality assurance, EPID, SRS

Procedia PDF Downloads 146
27793 Support Provided by Midwives to Women during Labour in a Public Hospital, Limpopo Province, South Africa: A Participant Observation Study

Authors: Sonto Maputle

Abstract:

Background: Support during labour increases women's chances of having positive childbirth experiences as well as positive childbirth outcomes. The purpose of this study was to determine the support provided by midwives to women during labour at a public hospital in Limpopo Province. The study was conducted at a tertiary hospital in Limpopo Province. Methods: A qualitative, participant-observation approach was used. The population consisted of all women admitted to deliver their babies and the midwives who provided midwifery care in the obstetric unit of one tertiary public hospital in Limpopo Province. Non-probability, purposive and convenience sampling was used to sample 24 women and 12 midwives. Data were collected through participant observation, which included unstructured conversations guided by an observational guide, field notes recording verbatim the events and conversations that occurred when women interacted with midwives, and a Visual Analogue Scale to complement the observations. Data were analysed qualitatively and presented in tables and bar graphs. Results: Five themes emerged regarding the support provided by midwives during labour, namely: communication between women and midwives, informational support, emotional support activities, interpretation of the experienced labour pain, and supportive care activities during labour. Conclusion: Communication occurred mainly while the midwife was rendering midwifery care and was very limited as a means of empowerment. Information sharing focused on assistive actions rather than on activities that would promote the mothers' participation. The emotional support activities indicated a lack of respect and a disregard for cultural preferences, which contributed to women's inability to exercise choices in decision-making. The study recommended the implementation of the Batho Pele principles in order to provide woman-centred care during labour.

Keywords: communication between women and midwives, labour pains, informational and emotional support, physical comforting measures

Procedia PDF Downloads 149
27792 Response of Okra (Abelmoschus Esculentus (L). Moench) to Soil Amendments and Weeding Regime

Authors: Olusegun Raphael Adeyemi, Samuel Oluwaseun Osunleti, Abiddin Adekunle Bashiruddin

Abstract:

Field trials were conducted in 2020 and 2021 at the Teaching and Research Farm of the Federal University of Agriculture, Abeokuta, Ogun State, Nigeria, to evaluate the effect of biochar application under different weeding regimes on the growth and yield of okra. Treatments were laid out in a split-plot arrangement in a randomized complete block design with three replications. Main-plot treatments were three levels of biochar, namely 0 t/ha, 10 t/ha and 20 t/ha, while sub-plot treatments consisted of four weeding regimes (weeding at 3, 6 and 9 WAS; weeding at 3 and 6 WAS; weeding at 3 WAS; and a weedy check as control). Data collected on the growth and yield of okra and on weed parameters were subjected to analysis of variance, and treatment means were separated using the least significant difference at p < 0.05. Results showed that biochar applied at 20 t/ha increased okra yield by 47.5% compared to the control. Weeding at 3, 6 and 9 WAS gave the highest okra yield, while uncontrolled weed infestation throughout crop growth resulted in an 87.3% yield reduction. It is concluded that weed suppression and the growth and yield of okra can be enhanced by the application of biochar at 20 t/ha combined with weeding at 3, 6 and 9 WAS, which is hence recommended.

Keywords: biochar, okra, weeding, weed competition, yield

Procedia PDF Downloads 54