Search results for: data mining techniques

27562 Application of Nanofibers in Heavy Metal (HM) Filtration

Authors: Abhijeet Kumar, Palaniswamy N. K.

Abstract:

Heavy metal contamination in water sources endangers both the environment and human health. Various water filtration techniques have been employed to date for the purification and removal of hazardous metals from water. Among the existing methods, nanofibers have emerged as a viable alternative for effective heavy metal removal in recent years because of their unique qualities, such as large surface area, interconnected porous structure, and customizable surface chemistry. Among the numerous manufacturing techniques, solution blow spinning has gained popularity as a versatile process for producing nanofibers with customized properties. This paper seeks to offer a complete overview of the use of nanofibers for heavy metal filtration, particularly those produced using solution blow spinning. The review discusses recent advances in nanofiber materials, production processes, and heavy metal removal performance. Furthermore, the field's difficulties and future opportunities are examined in order to direct future research and development activities.

Keywords: heavy metals, nanofiber composite, filter membranes, adsorption, impaction

Procedia PDF Downloads 68
27561 Modeling of Geotechnical Data Using GIS and Matlab for Eastern Ahmedabad City, Gujarat

Authors: Rahul Patel, S. P. Dave, M. V. Shah

Abstract:

Ahmedabad is a rapidly growing city in western India that is experiencing significant urbanization and industrialization. With projections indicating that it will become a metropolitan city in the near future, various construction activities are taking place, making soil testing a crucial requirement before construction can commence. To meet this need, construction companies and contractors must conduct soil testing periodically. This study focuses on the process of creating a digitally formatted spatial database that integrates geotechnical data with a Geographic Information System (GIS). Building a comprehensive geotechnical geo-database involves three essential steps. First, borehole data is collected from reputable sources. Second, the accuracy and redundancy of the data are verified. Finally, the geotechnical information is standardized and organized for integration into the database. Once the geo-database is complete, it is integrated with GIS, allowing users to visualize, analyze, and interpret geotechnical information spatially. Using a Topographic to Raster interpolation process in GIS, estimated values are assigned to all locations based on sampled geotechnical data values. The study area was contoured for SPT N-values, soil classification, Φ-values, and bearing capacity (T/m²). Various interpolation techniques were cross-validated to ensure information accuracy. The GIS map generated by this study enables the calculation of SPT N-values, Φ-values, and bearing capacities for different footing widths at various depths. This approach highlights the potential of GIS to provide an efficient solution to complex problems that would otherwise be tedious to address through other means. Not only does GIS offer greater accuracy, but it also generates valuable information that can be used as input for correlation analysis. Furthermore, the system serves as a decision support tool for geotechnical engineers, who can use the data to make informed decisions during construction activities, for instance to optimize foundation designs and improve site selection. In conclusion, the rapid growth experienced by Ahmedabad requires extensive construction activity and, therefore, soil testing. This study created a comprehensive geotechnical database integrated with GIS by collecting borehole data from reputable sources, verifying its accuracy and redundancy, and organizing the information for integration. The resulting GIS map offers greater accuracy, generates valuable input for correlation analysis, and serves as a decision support tool that allows geotechnical engineers to make informed decisions during construction activities.
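
A minimal sketch of the interpolation step described above, written in Python rather than the ArcGIS workflow the study actually used: sampled borehole SPT N-values are interpolated onto a regular grid and contoured. The borehole coordinates and N-values below are synthetic placeholders, not data from the study.

```python
# Minimal sketch (not the ArcGIS interpolation used in the study): interpolate
# sampled SPT N-values from borehole locations onto a grid and contour them.
# Coordinates and values are synthetic placeholders.
import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

# Hypothetical borehole records: (easting, northing) in metres and SPT N-value.
boreholes = np.array([
    [500.0, 250.0], [820.0, 400.0], [300.0, 700.0],
    [950.0, 800.0], [620.0, 120.0], [150.0, 450.0],
])
spt_n = np.array([12, 25, 18, 30, 9, 15], dtype=float)

# Regular grid covering the study area.
xi = np.linspace(0, 1000, 200)
yi = np.linspace(0, 1000, 200)
grid_x, grid_y = np.meshgrid(xi, yi)

# Interpolate: cubic inside the convex hull, nearest-neighbour to fill gaps.
grid_n = griddata(boreholes, spt_n, (grid_x, grid_y), method="cubic")
fill = griddata(boreholes, spt_n, (grid_x, grid_y), method="nearest")
grid_n = np.where(np.isnan(grid_n), fill, grid_n)

# Plan-view contour map analogous to the SPT N-value maps described above.
plt.contourf(grid_x, grid_y, grid_n, levels=10, cmap="viridis")
plt.colorbar(label="SPT N-value")
plt.scatter(boreholes[:, 0], boreholes[:, 1], c="red", marker="^", label="boreholes")
plt.legend()
plt.title("Interpolated SPT N-value (synthetic example)")
plt.show()
```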

Keywords: ArcGIS, borehole data, geographic information system (GIS), geo-database, interpolation, SPT N-value, soil classification, φ-value, bearing capacity

Procedia PDF Downloads 68
27560 Increasing the Apparent Time Resolution of Tc-99m Diethylenetriamine Pentaacetic Acid Galactosyl Human Serum Albumin Dynamic SPECT by Use of a 180-Degree Interpolation Method

Authors: Yasuyuki Takahashi, Maya Yamashita, Kyoko Saito

Abstract:

In general, dynamic SPECT data acquisition needs a few minutes for one rotation. Thus, the time-activity curve (TAC) derived from dynamic SPECT is relatively coarse. In order to effectively shorten the interval between data points, we adopted a 180-degree interpolation method, which is already used for the reconstruction of X-ray CT data. In this study, we applied this 180-degree interpolation method to SPECT and investigated its effectiveness. To briefly describe the 180-degree interpolation method: the 180-degree data in the second half of one rotation are combined with the 180-degree data in the first half of the next rotation to generate a 360-degree data set appropriate for the time halfway between the first and second rotations. In both a phantom and a patient study, the data points from the interpolated images fell in good agreement with the data points tracking the accumulation of 99mTc activity over time for the appropriate regions of interest. We conclude that data derived from interpolated images improve the apparent time resolution of dynamic SPECT.
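
A rough sketch of the interpolation idea, assuming projection data stored as a (rotation, view, bin) array; the shapes and values are synthetic and only illustrate how the half-rotations are paired.

```python
# Minimal sketch of the 180-degree interpolation described above: the second
# half of rotation k is combined with the first half of rotation k + 1 to form
# an extra 360-degree projection set valid roughly halfway between the two
# rotations.  Shapes and data here are synthetic.
import numpy as np

n_rot, n_views, n_bins = 10, 60, 128                  # rotations, views/rotation, bins
projections = np.random.rand(n_rot, n_views, n_bins)  # placeholder dynamic SPECT data

half = n_views // 2
interpolated = []
for k in range(n_rot - 1):
    second_half_k = projections[k, half:, :]       # 180..360 degrees of rotation k
    first_half_k1 = projections[k + 1, :half, :]   # 0..180 degrees of rotation k + 1
    # Reorder so the views run 0..360 degrees for the intermediate time point.
    interpolated.append(np.concatenate([first_half_k1, second_half_k], axis=0))

interpolated = np.stack(interpolated)              # (n_rot - 1) extra data sets

# Interleaving original and interpolated sets roughly doubles the number of
# time frames available for the time-activity curve (TAC).
print(projections.shape, interpolated.shape)
```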

Keywords: dynamic SPECT, time resolution, 180-degree interpolation method, 99mTc-GSA

Procedia PDF Downloads 493
27559 The Design Inspired by Phra Maha Chedi of King Rama I-IV at Wat Phra Chetuphon Vimolmangklaram Rajwaramahaviharn

Authors: Taechit Cheuypoung

Abstract:

The research focuses on creating pattern designs inspired by the pagodas, the Phra Maha Chedi of King Rama I-IV, located in the temple Wat Phra Chetuphon Vimolmangklararm Rajwaramahaviharn. Different aspects of the temple were studied, including its history, architecture and significance, as well as the techniques used to decorate the Phra Maha Chedi of King Rama I-IV. Moreover, the composition of the arts and the form of the pattern designs were studied, all of which led to four applied Thai patterns. The four patterns combine traditional Thai design with an international scheme while maintaining the distinctiveness of the glazed mosaic tiles of each Phra Maha Chedi. The patterns consist of rounded and notched petal flowers, leaves and vines, various square shapes, and original colors updated for modernity. These elements are then grouped and combined with new techniques, resulting in pattern designs with modern aspects that simultaneously reflect the charm and aesthetic of Thai craftsmanship, which are eternally embedded in the designs.

Keywords: Chedi, Pagoda, pattern, Wat

Procedia PDF Downloads 387
27558 Trajectory Planning Algorithms for Autonomous Agricultural Vehicles

Authors: Caner Koc, Dilara Gerdan Koc, Mustafa Vatandas

Abstract:

The most important components of autonomous agricultural robot design are fundamental capabilities such as a working understanding of coordinates, correct construction of the desired route, and sensing of environmental elements. Agricultural robots employ a variety of sensors, hardware, and software to realize these capabilities. These enable the fully automated driving system of an autonomous vehicle to simulate how a human-driven vehicle would respond to changing environmental conditions. To calculate the vehicle's motion trajectory using data from the sensors, this automation system typically consists of a sophisticated software architecture based on object detection and driving decisions. In this study, the software architecture of an autonomous agricultural vehicle is examined and compared with trajectory planning techniques.

Keywords: agriculture 5.0, computational intelligence, motion planning, trajectory planning

Procedia PDF Downloads 78
27557 Wavelet Coefficients Based on Orthogonal Matching Pursuit (OMP) Based Filtering for Remotely Sensed Images

Authors: Ramandeep Kaur, Kamaljit Kaur

Abstract:

In recent years, remote sensing technology has been growing rapidly. Image enhancement is one of the most commonly used image processing operations, and noise reduction plays a very important role in digital image processing; various techniques have been put forward to reduce the noise of remote sensing images. Noise reduction using wavelet coefficients based on Orthogonal Matching Pursuit (OMP) affects the edges less than available methods, but it is not as well established as dedicated edge-preservation techniques. In this paper, we therefore propose a new technique, minimum patch based noise reduction OMP, which reduces the noise in an image and uses an edge-preservation patch to preserve the edges of the image, presenting superior results compared with the existing OMP technique. Experimental results show that the proposed minimum patch approach outperforms existing techniques.
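
For orientation, a generic sketch of the OMP baseline that the proposed minimum patch variant builds on: image patches are sparse-coded with OMP over an overcomplete DCT dictionary and reconstructed. This is not the authors' exact method; the image, dictionary size and sparsity level are illustrative.

```python
# Generic sketch of OMP-based patch denoising (the baseline improved on above),
# not the authors' minimum patch method.  Patches are sparse-coded with OMP
# over an overcomplete DCT dictionary and then reassembled.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

def dct_dictionary(patch_size=8, n_atoms=16):
    """Overcomplete 2-D DCT dictionary of shape (patch_size**2, n_atoms**2)."""
    base = np.zeros((patch_size, n_atoms))
    for k in range(n_atoms):
        base[:, k] = np.cos(np.arange(patch_size) * k * np.pi / n_atoms)
        if k > 0:
            base[:, k] -= base[:, k].mean()
    d = np.kron(base, base)                       # separable 2-D atoms
    return d / np.linalg.norm(d, axis=0)

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # synthetic remote-sensing tile
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

patch_size = 8
D = dct_dictionary(patch_size)
patches = extract_patches_2d(noisy, (patch_size, patch_size))
flat = patches.reshape(len(patches), -1)
means = flat.mean(axis=1, keepdims=True)

# Sparse-code all (mean-removed) patches at once with a fixed sparsity level.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4, fit_intercept=False)
omp.fit(D, (flat - means).T)
denoised_flat = (D @ omp.coef_.T).T + means
denoised = reconstruct_from_patches_2d(denoised_flat.reshape(patches.shape), noisy.shape)
print("residual RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```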

Keywords: image denoising, minimum patch, OMP, WCOMP

Procedia PDF Downloads 389
27556 DeClEx: Processing Pipeline for Tumor Classification

Authors: Gaurav Shinde, Sai Charan Gongiguntla, Prajwal Shirur, Ahmed Hambaba

Abstract:

Health issues are increasing significantly, putting a substantial strain on healthcare services. This has accelerated the integration of machine learning in healthcare, particularly following the COVID-19 pandemic. We introduce DeClEx, a pipeline that ensures that data mirror real-world settings by incorporating Gaussian noise and blur, and that employs autoencoders to learn intermediate feature representations. Subsequently, our convolutional neural network, paired with spatial attention, provides accuracy comparable to state-of-the-art pre-trained models while achieving a threefold improvement in training speed. Furthermore, we provide interpretable results using explainable AI techniques. Denoising and deblurring, classification, and explainability are thus integrated in a single pipeline called DeClEx.
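
A sketch in PyTorch of two ingredients named in the abstract, Gaussian noise/blur augmentation and a convolutional spatial-attention block; this is an illustrative stand-in assuming single-channel scans, not the authors' DeClEx architecture or hyperparameters.

```python
# Sketch (PyTorch) of Gaussian noise/blur augmentation plus a spatial-attention
# block, as described above.  Illustrative only; not the DeClEx architecture.
import torch
import torch.nn as nn
from torchvision import transforms

# Make data "mirror real-world settings": blur plus additive Gaussian noise.
augment = transforms.Compose([
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.Lambda(lambda x: x + 0.05 * torch.randn_like(x)),
])

class SpatialAttention(nn.Module):
    """Weights each spatial location using channel-pooled descriptors."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_pool = x.mean(dim=1, keepdim=True)
        max_pool = x.amax(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * attn

class SmallClassifier(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            SpatialAttention(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

x = augment(torch.rand(4, 1, 128, 128))      # batch of synthetic scans
logits = SmallClassifier()(x)
print(logits.shape)                          # torch.Size([4, 2])
```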

Keywords: machine learning, healthcare, classification, explainability

Procedia PDF Downloads 55
27555 AI-Driven Solutions for Optimizing Master Data Management

Authors: Srinivas Vangari

Abstract:

In the era of big data, ensuring the accuracy, consistency, and reliability of critical data assets is crucial for data-driven enterprises. Master Data Management (MDM) plays a crucial role in this endeavor. This paper investigates the role of Artificial Intelligence (AI) in enhancing MDM, focusing on how AI-driven solutions can automate and optimize various stages of the master data lifecycle. By integrating AI (Quantitative and Qualitative Analysis) into processes such as data creation, maintenance, enrichment, and usage, organizations can achieve significant improvements in data quality and operational efficiency. Quantitative analysis is employed to measure the impact of AI on key metrics, including data accuracy, processing speed, and error reduction. For instance, our study demonstrates an 18% improvement in data accuracy and a 75% reduction in duplicate records across multiple systems post-AI implementation. Furthermore, AI’s predictive maintenance capabilities reduced data obsolescence by 22%, as indicated by statistical analyses of data usage patterns over a 12-month period. Complementing this, a qualitative analysis delves into the specific AI-driven strategies that enhance MDM practices, such as automating data entry and validation, which resulted in a 28% decrease in manual errors. Insights from case studies highlight how AI-driven data cleansing processes reduced inconsistencies by 25% and how AI-powered enrichment strategies improved data relevance by 24%, thus boosting decision-making accuracy. The findings demonstrate that AI significantly enhances data quality and integrity, leading to improved enterprise performance through cost reduction, increased compliance, and more accurate, real-time decision-making. These insights underscore the value of AI as a critical tool in modern data management strategies, offering a competitive edge to organizations that leverage its capabilities.
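
To make one of the tasks above concrete, a small illustration of duplicate record detection using simple fuzzy string matching; the records, field weights and similarity threshold are hypothetical, and a production MDM system would combine richer features and learned models.

```python
# Illustrative sketch of one MDM task mentioned above -- duplicate record
# detection -- using simple fuzzy string matching.  Records, weights and the
# threshold are hypothetical.
from difflib import SequenceMatcher
from itertools import combinations

customers = [
    {"id": 1, "name": "Acme Industries Ltd", "city": "Austin"},
    {"id": 2, "name": "ACME Industries Limited", "city": "Austin"},
    {"id": 3, "name": "Beta Logistics", "city": "Dallas"},
]

def similarity(a, b):
    """Weighted similarity over the name and city fields (0..1)."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    city_sim = SequenceMatcher(None, a["city"].lower(), b["city"].lower()).ratio()
    return 0.7 * name_sim + 0.3 * city_sim

THRESHOLD = 0.85
duplicates = [(a["id"], b["id"]) for a, b in combinations(customers, 2)
              if similarity(a, b) >= THRESHOLD]
print("candidate duplicate pairs:", duplicates)   # expected: [(1, 2)]
```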

Keywords: artificial intelligence, master data management, data governance, data quality

Procedia PDF Downloads 18
27554 Reinforced Concrete Bridge Deck Condition Assessment Methods Using Ground Penetrating Radar and Infrared Thermography

Authors: Nicole M. Martino

Abstract:

Reinforced concrete bridge deck condition assessments primarily use visual inspection methods, where an inspector looks for and records locations of cracks, potholes, efflorescence and other signs of probable deterioration. Sounding is another technique used to diagnose the condition of a bridge deck, however this method listens for damage within the subsurface as the surface is struck with a hammer or chain. Even though extensive procedures are in place for using these inspection techniques, neither one provides the inspector with a comprehensive understanding of the internal condition of a bridge deck – the location where damage originates from.  In order to make accurate estimates of repair locations and quantities, in addition to allocating the necessary funding, a total understanding of the deck’s deteriorated state is key. The research presented in this paper collected infrared thermography and ground penetrating radar data from reinforced concrete bridge decks without an asphalt overlay. These decks were of various ages and their condition varied from brand new, to in need of replacement. The goals of this work were to first verify that these nondestructive evaluation methods could identify similar areas of healthy and damaged concrete, and then to see if combining the results of both methods would provide a higher confidence than if the condition assessment was completed using only one method. The results from each method were presented as plan view color contour plots. The results from one of the decks assessed as a part of this research, including these plan view plots, are presented in this paper. Furthermore, in order to answer the interest of transportation agencies throughout the United States, this research developed a step-by-step guide which demonstrates how to collect and assess a bridge deck using these nondestructive evaluation methods. This guide addresses setup procedures on the deck during the day of data collection, system setups and settings for different bridge decks, data post-processing for each method, and data visualization and quantification.
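
A small sketch of the plan-view presentation described above: a GPR attenuation map and an IR surface-temperature map are contoured over the same deck grid, and cells where both methods indicate probable damage are flagged. The grids, values and damage thresholds are synthetic placeholders, not the decks measured in this work.

```python
# Sketch of plan-view contour plots for GPR and IR data plus a simple
# "both methods agree" mask.  Grids and thresholds are synthetic.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 30, 120)          # deck length (m)
y = np.linspace(0, 12, 60)           # deck width (m)
X, Y = np.meshgrid(x, y)

rng = np.random.default_rng(1)
gpr_atten = 5 + 10 * np.exp(-((X - 20) ** 2 + (Y - 6) ** 2) / 8) + rng.normal(0, 0.5, X.shape)
ir_temp = 25 + 3 * np.exp(-((X - 19) ** 2 + (Y - 5) ** 2) / 10) + rng.normal(0, 0.2, X.shape)

fig, axes = plt.subplots(1, 3, figsize=(15, 3))
for ax, data, title in [(axes[0], gpr_atten, "GPR attenuation (dB)"),
                        (axes[1], ir_temp, "IR surface temperature (C)")]:
    c = ax.contourf(X, Y, data, levels=12)
    fig.colorbar(c, ax=ax)
    ax.set_title(title)

# Agreement map: both methods above their (assumed) damage thresholds.
both = (gpr_atten > 10) & (ir_temp > 26.5)
axes[2].contourf(X, Y, both.astype(float), levels=[-0.5, 0.5, 1.5], colors=["white", "red"])
axes[2].set_title("Both methods flag damage")
plt.tight_layout()
plt.show()
```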

Keywords: bridge deck deterioration, ground penetrating radar, infrared thermography, NDT of bridge decks

Procedia PDF Downloads 154
27553 Optimization of Gold Mining Parameters by Cyanidation

Authors: Della Saddam Housseyn

Abstract:

Gold, the quintessential noble metal, is one of the most popular metals today, given its ever-increasing price on the international market. The Amesmessa gold deposit is one of the gold-producing deposits. The first step in our work was to analyze the ore (considered a rich ore). Mineralogical and chemical analysis showed that the ore consists mainly of quartz, in addition to other phases such as Al2O3, Fe2O3, CaO, and dolomite. The second step consists of leaching tests carried out in rolling bottles. These tests were carried out on 14 samples to determine the maximum recovery rate and the optimum reagent consumption (NaCN and CaO). Tests carried out at a pulp density of 50% solids, a cyanide concentration of 500 ppm, and a particle size of less than 0.6 mm at alkaline pH gave a recovery rate of 94.37%.

Keywords: cyanide, XRD, XRF, gold, leaching, rate of recovery, AAS

Procedia PDF Downloads 181
27552 Genetic Data of Deceased People: Solving the Gordian Knot

Authors: Inigo de Miguel Beriain

Abstract:

Genetic data of deceased persons are of great interest for both biomedical research and clinical use. This is due to several reasons. On the one hand, many of our diseases have a genetic component; on the other hand, we share genes with a good part of our biological family. Therefore, it would be possible to improve our response considerably to these pathologies if we could use these data. Unfortunately, at the present moment, the status of data on the deceased is far from being satisfactorily resolved by the EU data protection regulation. Indeed, the General Data Protection Regulation has explicitly excluded these data from the category of personal data. This decision has given rise to a fragmented legal framework on this issue. Consequently, each EU member state offers very different solutions. For instance, Denmark considers the data as personal data of the deceased person for a set period of time while some others, such as Spain, do not consider this data as such, but have introduced some specifically focused regulations on this type of data and their access by relatives. This is an extremely dysfunctional scenario from multiple angles, not least of which is scientific cooperation at the EU level. This contribution attempts to outline a solution to this dilemma through an alternative proposal. Its main hypothesis is that, in reality, health data are, in a sense, a rara avis within data in general because they do not refer to one person but to several. Hence, it is possible to think that all of them can be considered data subjects (although not all of them can exercise the corresponding rights in the same way). When the person from whom the data were obtained dies, the data remain as personal data of his or her biological relatives. Hence, the general regime provided for in the GDPR may apply to them. As these are personal data, we could go back to thinking in terms of a general prohibition of data processing, with the exceptions provided for in Article 9.2 and on the legal bases included in Article 6. This may be complicated in practice, given that, since we are dealing with data that refer to several data subjects, it may be complex to refer to some of these bases, such as consent. Furthermore, there are theoretical arguments that may oppose this hypothesis. In this contribution, it is shown, however, that none of these objections is of sufficient substance to delegitimize the argument exposed. Therefore, the conclusion of this contribution is that we can indeed build a general framework on the processing of personal data of deceased persons in the context of the GDPR. This would constitute a considerable improvement over the current regulatory framework, although it is true that some clarifications will be necessary for its practical application.

Keywords: collective data conceptual issues, data from deceased people, genetic data protection issues, GDPR and deceased people

Procedia PDF Downloads 154
27551 Assessment of Memetic and Genetic Algorithm for a Flexible Integrated Logistics Network

Authors: E. Behmanesh, J. Pannek

Abstract:

The distribution-allocation problem is known as one of the most comprehensive strategic decisions. In real-world cases, it is impossible to solve a distribution-allocation problem in traditional ways within acceptable time. Hence, researchers develop efficient non-traditional techniques for the long-term operation of the whole supply chain. These techniques provide near-optimal solutions, particularly for large-scale test problems. This paper presents an integrated supply chain model that is flexible in the delivery path. As the solution methodology, we apply a memetic algorithm with a novel population representation. To illustrate the performance of the proposed memetic algorithm, the LINGO optimization software serves as a comparison basis for small-size problems. For the large-size cases encountered in the real world, a genetic algorithm is considered as a second metaheuristic to compare the results and show the efficiency of the memetic algorithm.
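
A toy memetic algorithm for a small customer-to-depot allocation, illustrating the structure referred to above (genetic operators plus a local-search refinement step); the cost matrix, depot opening cost and parameters are synthetic and do not represent the paper's network model.

```python
# Toy memetic algorithm (GA crossover/mutation plus local search) for a small
# customer-to-depot allocation.  Costs and parameters are synthetic.
import random

random.seed(0)
N_CUSTOMERS, N_DEPOTS = 12, 3
OPEN_COST = 25        # fixed cost for every depot that serves at least one customer
cost = [[random.randint(1, 20) for _ in range(N_DEPOTS)] for _ in range(N_CUSTOMERS)]

def total_cost(assign):
    return sum(cost[c][d] for c, d in enumerate(assign)) + OPEN_COST * len(set(assign))

def local_search(assign):
    """Memetic step: first-improvement reassignment of single customers."""
    assign = list(assign)
    improved = True
    while improved:
        improved = False
        for c in range(N_CUSTOMERS):
            for d in range(N_DEPOTS):
                trial = assign[:c] + [d] + assign[c + 1:]
                if total_cost(trial) < total_cost(assign):
                    assign, improved = trial, True
    return assign

def crossover(p1, p2):
    cut = random.randrange(1, N_CUSTOMERS)
    return p1[:cut] + p2[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(N_DEPOTS) if random.random() < rate else a for a in assign]

population = [[random.randrange(N_DEPOTS) for _ in range(N_CUSTOMERS)] for _ in range(20)]
for _ in range(40):
    population.sort(key=total_cost)
    parents = population[:10]
    # Local refinement of offspring is what distinguishes a memetic algorithm
    # from a plain genetic algorithm.
    offspring = [local_search(mutate(crossover(random.choice(parents), random.choice(parents))))
                 for _ in range(10)]
    population = parents + offspring

print("best allocation cost:", total_cost(min(population, key=total_cost)))
```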

Keywords: integrated logistics network, flexible path, memetic algorithm, genetic algorithm

Procedia PDF Downloads 374
27550 Lexicon-Based Sentiment Analysis for Stock Movement Prediction

Authors: Zane Turner, Kevin Labille, Susan Gauch

Abstract:

Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We present a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
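
A minimal sketch of lexicon-based scoring as described above: each token is looked up in a word-to-score lexicon and the chunk score is the average. The lexicon entries below are illustrative only; the paper builds its finance-specific lexicon from conference call logs.

```python
# Minimal lexicon-based sentiment scoring.  The lexicon entries are
# illustrative, not the paper's finance-specific lexicon.
finance_lexicon = {
    "growth": 0.8, "beat": 0.9, "strong": 0.7,
    "decline": -0.8, "miss": -0.9, "headwinds": -0.6,
}

def sentiment_score(text):
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    scores = [finance_lexicon[t] for t in tokens if t in finance_lexicon]
    return sum(scores) / len(scores) if scores else 0.0

call_snippet = "Revenue growth was strong this quarter despite sector headwinds."
score = sentiment_score(call_snippet)
print(score, "-> predicted move:", "up" if score > 0 else "down or flat")
```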

Keywords: computational finance, sentiment analysis, sentiment lexicon, stock movement prediction

Procedia PDF Downloads 127
27549 Lexicon-Based Sentiment Analysis for Stock Movement Prediction

Authors: Zane Turner, Kevin Labille, Susan Gauch

Abstract:

Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We introduce a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.

Keywords: computational finance, sentiment analysis, sentiment lexicon, stock movement prediction

Procedia PDF Downloads 170
27548 Modeling the Time Dependent Biodistribution of 177Lu Labeled Somatostatin Analogues for Targeted Radiotherapy of Neuroendocrine Tumors Using Compartmental Analysis

Authors: Mahdieh Jajroudi

Abstract:

The main purpose of this study was to develop a pharmacokinetic model for the neuroendocrine tumor therapy agent 177Lu-DOTATATE in nude mice bearing AR42J rat pancreatic tumors, in order to investigate and evaluate the behavior of the complex. Compartmental analysis permits the mathematical separation of tissues and organs so that the activity concentration in each fraction of interest can be determined. Biodistribution studies are onerous and troublesome to perform in humans, but such data can be obtained readily in rodents. A physiologically based pharmacokinetic model for scaling up activity concentration in particular organs versus time was developed. The mathematical model employs physiological parameters including organ volumes, blood flow rates, and vascular permeabilities; the compartments (organs) are connected anatomically. This allows the use of scale-up techniques to forecast the distribution of the new complex in each human organ. The concentration of the radiopharmaceutical in various organs was measured at different times, and the temporal behavior of the biodistribution of the 177Lu labeled somatostatin analogue was modeled and plotted as a function of time. Conclusion: The variation of pharmaceutical concentration in all organs is characterized by a summation of six to nine exponential terms, which approximates our experimental data with a precision better than 1%.
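
To illustrate the sum-of-exponentials form that the compartmental solution takes, a short sketch fitting a two-term example to a synthetic time-activity curve with SciPy; the study itself reports six to nine terms per organ, and the parameter values below are made up.

```python
# Sketch of fitting a sum of exponential terms to a time-activity curve.
# Two terms and synthetic data for readability; the study uses 6-9 terms.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0, 48, 25)                       # hours post injection
true = biexp(t, 60.0, 0.9, 40.0, 0.05)           # synthetic "measured" activity
rng = np.random.default_rng(3)
measured = true + rng.normal(0, 1.0, t.size)

p0 = [50, 1.0, 30, 0.1]                          # initial guesses for (a1, k1, a2, k2)
params, cov = curve_fit(biexp, t, measured, p0=p0, maxfev=10000)
residual = np.max(np.abs(biexp(t, *params) - measured) / np.maximum(measured, 1e-9))
print("fitted parameters:", np.round(params, 3))
print("max relative residual:", residual)
```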

Keywords: biodistribution modeling, compartmental analysis, 177Lu labeled somatostatin analogues, neuroendocrine tumors

Procedia PDF Downloads 368
27547 Synthesis of Novel Organic Dyes Based on Indigo for Dye-Sensitized Solar Cells

Authors: M. Hosseinnejad, K. Gharanjig, S. Moradian

Abstract:

A novel metal-free organic dye based on indigo was prepared and used as a sensitizer in dye-sensitized solar cells. The synthesized dye, together with its corresponding intermediates, was purified and characterized by analytical techniques. These techniques confirmed the structures of the dye and its intermediates, and the yields of all stages of dye preparation were calculated to be above 85%. Fluorometric analyses show fluorescence of the dye in the green region of the visible spectrum. Oxidation potential measurements for the dye ensured an energetically permissible and thermodynamically favourable charge transfer throughout the continuous cycle of photo-electric conversion. Finally, dye-sensitized solar cells were fabricated in order to determine the photovoltaic behaviour and conversion efficiency of the dye. These evaluations demonstrate a moderate conversion efficiency of 2.33% for such a simply structured synthesized dye. Such conversion efficiencies demonstrate the potential for future use of such dye structures in dye-sensitized solar cells with respect to low material costs, ease of molecular tailoring, high reaction yields, high performance, and ease of recyclability.

Keywords: conversion efficiency, dye-sensitized solar cells, indigo, photonic material

Procedia PDF Downloads 369
27546 Significance of Architectural Conservation Today, For a Better Tomorrow

Authors: Sneha Manjunath

Abstract:

Architecture is a continuous process of evolution that keeps changing and evolving through time, giving rise to various design solutions based on the purpose of the change and the present function of the space. This evolution in design has been happening for a long time, and hence a variety of climate-responsive, context-responsive and human-responsive developments in habitats are witnessed. India has been one of the hot spots for the conservation of heritage and architecture. Buildings ranging from the Indus Valley civilization to modern contemporary dwellings have all evolved in one way or another. Various historical sites such as Hampi in Karnataka, the Taj Mahal in Agra and various temples in Southern India are identified and preserved under the Archaeological Survey of India. The main objective of such preservation is to help protect and preserve heritage and keep it intact for the future. The study of such heritage-rich buildings and building techniques helps us understand the psychology, lifestyle and socio-cultural impacts they had on the complete urban fabric that developed in a region. It also gives an insight into the occupation, economic status and religious beliefs that gave rise to a pattern in urban form that was more inclusive and appropriate to the needs of the users. Today's generation draws various inspirations from history with respect to space planning, building services such as lighting, ventilation and sanitation systems, and elevation treatments. It is important to know and understand the importance of certain urban planning techniques used to develop ancient towns or cities in radial, square, or checkered patterns depending on the needs of the administrative set-up of the respective town or city. It is believed that every element of design undergoes evolution, and it is important for a designer to know, respect and develop the same for the future so as to acknowledge and conserve every aspect of heritage that remains a backbone of urban form generation even today. Hampi in the Karnataka state of India is a very good example of how monuments and dwellings from the 14th century still stand strong. Temples in North India, such as the Kedarnath Temple, survived heavy floods because of their building techniques. Such building materials and construction techniques are to be revived and reused for a better perspective towards space planning in urbanized cities.

Keywords: architecture, urban form, heritage, town planning

Procedia PDF Downloads 115
27545 Steps towards the Development of National Health Data Standards in Developing Countries

Authors: Abdullah I. Alkraiji, Thomas W. Jackson, Ian Murray

Abstract:

The proliferation of health data standards today is somewhat overlapping and conflicting, resulting in market confusion and leading to increasing proprietary interests. The government's role and support in standardization for health data are thought to be crucial in order to establish credible standards for the next decade, to maximize interoperability across the health sector, and to decrease the risks associated with the implementation of non-standard systems. The normative literature has missed the exploration of the different steps required to be undertaken by governments towards the development of national health data standards. Based on the lessons learned from a qualitative study investigating the issues affecting the adoption of health data standards in the major tertiary hospitals in Saudi Arabia, and on the opinions and feedback of different experts in the areas of data exchange, standards and medical informatics in Saudi Arabia and the UK, a list of steps required for the development of national health data standards was constructed. The main steps are the existence of a national formal reference for health data standards, an agreed national strategic direction for medical data exchange, a national medical information management plan and a national accreditation body, and, most importantly, change management at the national and organizational levels. The outcome of this study can be used by academics and practitioners to plan the development of health data standards, in particular in developing countries.

Keywords: interoperability, medical data exchange, health data standards, case study, Saudi Arabia

Procedia PDF Downloads 338
27544 Developing New Algorithm and Its Application on Optimal Control of Pumps in Water Distribution Network

Authors: R. Rajabpour, N. Talebbeydokhti, M. H. Ahmadi

Abstract:

In recent years, new techniques for solving complex problems in engineering have been proposed. One of these techniques is the JPSO algorithm. With innovative changes in the nature of the jump mechanism of JPSO, it is possible to construct a graph-based solution with a new algorithm called G-JPSO. In this paper, the new algorithm was evaluated for solving the Fletcher-Powell optimal control problem and the optimal control of pumps in a water distribution network. Optimal control of pumps comprises the optimal operating timetable (on and off status) for each of the pumps over the desired time interval. The maximum number of on/off switches for each pump is imposed on the objective function as an additional constraint. To determine the optimal operation of the pumps, a model-based optimization-simulation algorithm was developed based on the G-JPSO and JPSO algorithms. The results of the proposed algorithm compared well with those of the ant colony, genetic and JPSO algorithms, which shows the robustness of the proposed algorithm in finding near-optimum solutions at reasonable computational cost.
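
A simplified binary particle swarm sketch of the pump-scheduling formulation described above (on/off status per pump per interval, with a penalty on the number of switches). It is not the G-JPSO algorithm itself; the tariff profile, demand curve and limits are synthetic.

```python
# Simplified binary PSO for an on/off pump schedule with a switch-count
# penalty.  Not G-JPSO; costs and limits are synthetic.
import numpy as np

rng = np.random.default_rng(7)
N_PUMPS, N_HOURS = 3, 24
tariff = 1.0 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, N_HOURS))   # energy price profile
demand = 2 + (np.arange(N_HOURS) % 12 > 6)                        # pumps needed per hour
MAX_SWITCHES = 6

def cost(schedule):                        # schedule: (N_PUMPS, N_HOURS) binary
    energy = (schedule.sum(axis=0) * tariff).sum()
    shortfall = np.maximum(demand - schedule.sum(axis=0), 0).sum()
    switches = np.abs(np.diff(schedule, axis=1)).sum(axis=1)
    penalty = 100 * shortfall + 50 * np.maximum(switches - MAX_SWITCHES, 0).sum()
    return energy + penalty

n_particles, dim = 30, N_PUMPS * N_HOURS
pos = rng.integers(0, 2, (n_particles, dim)).astype(float)
vel = rng.normal(0, 1, (n_particles, dim))
pbest = pos.copy()
pbest_cost = np.array([cost(p.reshape(N_PUMPS, N_HOURS)) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # Binary update: switch each bit on with probability sigmoid(velocity).
    pos = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(float)
    costs = np.array([cost(p.reshape(N_PUMPS, N_HOURS)) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best schedule cost:", pbest_cost.min())
```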

Keywords: G-JPSO, operation, optimization, pumping station, water distribution networks

Procedia PDF Downloads 401
27543 A Proposal for U-City (Smart City) Service Method Using Real-Time Digital Map

Authors: SangWon Han, MuWook Pyeon, Sujung Moon, DaeKyo Seo

Abstract:

Recently, technologies based on three-dimensional (3D) space information are being developed, and quality of life is improving as a result. Research on real-time digital maps (RDM) is now being conducted to provide 3D space information. RDM is a service that creates and supplies 3D space information in real time based on location/shape detection. Research topics on RDM include the construction of 3D space information with matching image data, complementing the weaknesses of image acquisition using multi-source data, and data collection methods using big data. Using RDM will be effective for space analysis using 3D space information in a U-City and for other space information utilization technologies.

Keywords: RDM, multi-source data, big data, U-City

Procedia PDF Downloads 433
27542 The Negative Effects of Controlled Motivation on Mathematics Achievement

Authors: John E. Boberg, Steven J. Bourgeois

Abstract:

The decline in student engagement and motivation through the middle years is well documented and clearly associated with a decline in mathematics achievement that persists through high school. To combat this trend and, very often, to meet high-stakes accountability standards, a growing number of parents, teachers, and schools have implemented various methods to incentivize learning. However, according to Self-Determination Theory, forms of incentivized learning such as public praise, tangible rewards, or threats of punishment tend to undermine intrinsic motivation and learning. By focusing on external forms of motivation that thwart autonomy in children, adults also potentially threaten relatedness measures such as trust and emotional engagement. Furthermore, these controlling motivational techniques tend to promote shallow forms of cognitive engagement at the expense of more effective deep processing strategies. Therefore, any short-term gains in apparent engagement or test scores are overshadowed by long-term diminished motivation, resulting in inauthentic approaches to learning and lower achievement. The current study focuses on the relationships between student trust, engagement, and motivation during these crucial years as students transition from elementary to middle school. In order to test the effects of controlled motivational techniques on achievement in mathematics, this quantitative study was conducted on a convenience sample of 22 elementary and middle schools from a single public charter school district in the south-central United States. The study employed multi-source data from students (N = 1,054), parents (N = 7,166), and teachers (N = 356), along with student achievement data and contextual campus variables. Cross-sectional questionnaires were used to measure the students’ self-regulated learning, emotional and cognitive engagement, and trust in teachers. Parents responded to a single item on incentivizing the academic performance of their child, and teachers responded to a series of questions about their acceptance of various incentive strategies. Structural equation modeling (SEM) was used to evaluate model fit and analyze the direct and indirect effects of the predictor variables on achievement. Although a student’s trust in teacher positively predicted both emotional and cognitive engagement, none of these three predictors accounted for any variance in achievement in mathematics. The parents’ use of incentives, on the other hand, predicted a student’s perception of his or her controlled motivation, and these two variables had significant negative effects on achievement. While controlled motivation had the greatest effects on achievement, parental incentives demonstrated both direct and indirect effects on achievement through the students’ self-reported controlled motivation. Comparing upper elementary student data with middle-school student data revealed that controlling forms of motivation may be taking their toll on student trust and engagement over time. While parental incentives positively predicted both cognitive and emotional engagement in the younger sub-group, such forms of controlling motivation negatively predicted both trust in teachers and emotional engagement in the middle-school sub-group. These findings support the claims, posited by Self-Determination Theory, about the dangers of incentivizing learning. Short-term gains belie the underlying damage to motivational processes that lead to decreased intrinsic motivation and achievement. Such practices also appear to thwart basic human needs such as relatedness.
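
For readers unfamiliar with the direct/indirect decomposition mentioned above, a minimal product-of-coefficients sketch using two OLS regressions on synthetic data (parental incentives, controlled motivation, achievement). The study itself fits a full SEM on multi-source survey data; the coefficients and sample here are made up.

```python
# Minimal mediation-style sketch of direct and indirect effects with OLS.
# Synthetic data only; the study uses a full SEM.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1054
incentives = rng.normal(0, 1, n)                      # parent-reported incentivizing
controlled = 0.45 * incentives + rng.normal(0, 1, n)  # student controlled motivation
achievement = -0.30 * controlled - 0.10 * incentives + rng.normal(0, 1, n)

# Path a: incentives -> controlled motivation
a_model = sm.OLS(controlled, sm.add_constant(incentives)).fit()
# Paths b (controlled -> achievement) and c' (direct effect of incentives)
X = sm.add_constant(np.column_stack([controlled, incentives]))
b_model = sm.OLS(achievement, X).fit()

a = a_model.params[1]
b, c_direct = b_model.params[1], b_model.params[2]
print("direct effect  :", round(c_direct, 3))
print("indirect effect:", round(a * b, 3))            # product-of-coefficients estimate
```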

Keywords: controlled motivation, student engagement, incentivized learning, mathematics achievement, self-determination theory, student trust

Procedia PDF Downloads 219
27541 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas

Authors: Sahithi Yarlagadda

Abstract:

The design of an antenna is constrained by mathematical and geometrical parameters. Although there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried that cannot be accommodated by predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and permutation to them. In the conventional approach, the quality function is unaltered across iterations. However, the antenna parameters and geometries are too broad to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries; the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain, directivity, etc. are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that arise during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations and combinations and to log data; it is also used to apply machine learning algorithms and to plot the data for designing the algorithm. The number of combinations is too large to be tested manually, so the HFSS API is used to call HFSS functions from MATLAB itself, and the MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of the wide range of antennas available. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters such as the slot line characteristic impedance, stripline impedance, slot line width, flare aperture size and dielectric properties, and K-means and a Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data are logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and the machine learning approach for automated antenna optimization for the Vivaldi antenna.
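
A schematic of the evolutionary loop described above, written in Python for illustration (the paper drives HFSS from MATLAB). The simulate_antenna() function is a hypothetical stand-in for an HFSS run, and the parameter names and bounds (slot-line width, flare aperture, flare rate) are illustrative only.

```python
# Schematic evolutionary (genetic) optimization loop for antenna parameters.
# simulate_antenna() is a placeholder for an electromagnetic simulation call;
# bounds and fitness are illustrative assumptions, not the paper's values.
import random

random.seed(1)
BOUNDS = {"slot_width_mm": (0.2, 1.5), "flare_aperture_mm": (20, 60), "flare_rate": (0.02, 0.12)}

def simulate_antenna(params):
    """Placeholder fitness: would be replaced by a simulation returning gain/bandwidth."""
    return -(params["slot_width_mm"] - 0.8) ** 2 \
           - 0.001 * (params["flare_aperture_mm"] - 45) ** 2 \
           - 10 * (params["flare_rate"] - 0.06) ** 2

def random_candidate():
    return {k: random.uniform(*b) for k, b in BOUNDS.items()}

def crossover(p1, p2):
    return {k: random.choice([p1[k], p2[k]]) for k in BOUNDS}

def mutate(p, rate=0.2):
    return {k: min(max(v + random.gauss(0, 0.05 * (BOUNDS[k][1] - BOUNDS[k][0])), BOUNDS[k][0]),
                   BOUNDS[k][1]) if random.random() < rate else v
            for k, v in p.items()}

population = [random_candidate() for _ in range(20)]
for generation in range(30):
    population.sort(key=simulate_antenna, reverse=True)
    parents = population[:8]
    population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                            for _ in range(12)]

best = max(population, key=simulate_antenna)
print("best candidate:", {k: round(v, 3) for k, v in best.items()})
```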

Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm

Procedia PDF Downloads 110
27540 Impulsive Synchronization of Periodically Forced Complex Duffing's Oscillators

Authors: Shaban Aly, Ali Al-Qahtani, Houari B. Khenous

Abstract:

Synchronization is an important phenomenon commonly observed in nature. A system of periodically forced complex Duffing's oscillators was introduced and shown to display chaotic behavior and possess strange attractors. Such complex oscillators appear in many problems of physics and engineering, for example, nonlinear optics, deep-water wave theory, plasma physics and bimolecular dynamics. In this paper, we study the remarkable phenomenon of chaotic synchronization in these oscillator systems, using impulsive synchronization techniques. We derive analytical expressions for the impulsive control functions and show that the dynamics of error evolution is globally stable by constructing appropriate Lyapunov functions. This means that, for a relatively large set of initial conditions, the differences between the drive and response systems vanish exponentially and synchronization is achieved. Numerical results are obtained to test the validity of the analytical expressions and illustrate the efficiency of these techniques for inducing chaos synchronization in our nonlinear oscillators.
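
A numerical sketch of impulsive synchronization for a pair of periodically forced Duffing oscillators, using real-valued states for brevity (the paper treats the complex case). Between impulses both systems evolve freely; at each impulse instant the response state is pulled toward the drive state. Parameter values, impulse strength and spacing are illustrative.

```python
# Impulsive synchronization of a drive/response pair of forced Duffing
# oscillators (real-valued sketch; parameters are illustrative).
import numpy as np
from scipy.integrate import solve_ivp

delta, alpha, beta, gamma, omega = 0.3, -1.0, 1.0, 0.5, 1.2

def duffing(t, s):
    x, v = s
    return [v, -delta * v - alpha * x - beta * x ** 3 + gamma * np.cos(omega * t)]

drive = np.array([1.0, 0.0])
response = np.array([-0.5, 0.8])           # different initial condition
kappa, dt_impulse, t = 0.6, 0.5, 0.0       # impulse strength and spacing
errors = []

for _ in range(200):
    sol_d = solve_ivp(duffing, (t, t + dt_impulse), drive, rtol=1e-8)
    sol_r = solve_ivp(duffing, (t, t + dt_impulse), response, rtol=1e-8)
    drive, response = sol_d.y[:, -1], sol_r.y[:, -1]
    response = response + kappa * (drive - response)   # impulsive correction
    t += dt_impulse
    errors.append(np.linalg.norm(drive - response))

print("synchronization error after impulses:", errors[0], "->", errors[-1])
```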

Keywords: complex nonlinear oscillators, impulsive synchronization, chaotic systems, global exponential synchronization

Procedia PDF Downloads 447
27539 Software Defect Analysis: Eclipse Dataset

Authors: Amrane Meriem, Oukid Salyha

Abstract:

The presence of defects or bugs in software can lead to costly setbacks, operational inefficiencies, and compromised user experiences. Machine learning (ML) techniques have been integrated to predict and preemptively address software defects. ML represents a proactive strategy aimed at identifying potential anomalies, errors, or vulnerabilities within code before they manifest as operational issues. By analyzing historical data, such as code changes, feature implementations, and defect occurrences, development teams can anticipate and mitigate these issues, thus enhancing software quality, reducing maintenance costs, and ensuring smoother user interactions. In this work, we used a recommendation system to improve the performance of ML models in terms of predicting code severity and estimating effort.
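
A minimal sketch of the ML side described above: a classifier is trained on per-file code-metric features, the kind found in the Eclipse defect dataset, to flag defect-prone files. The features and labels below are synthetic stand-ins for the real dataset.

```python
# Minimal defect-prediction sketch on synthetic code-metric features
# (stand-ins for the Eclipse defect dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 2000
# Hypothetical per-file metrics: lines changed, prior defects, complexity, churn.
X = np.column_stack([
    rng.poisson(40, n), rng.poisson(1, n), rng.gamma(2, 5, n), rng.poisson(8, n),
]).astype(float)
# Synthetic label: defect probability rises with prior defects and complexity.
p = 1 / (1 + np.exp(-(0.6 * X[:, 1] + 0.05 * X[:, 2] - 2)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```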

Keywords: software engineering, machine learning, bugs detection, effort estimation

Procedia PDF Downloads 86
27538 Identifying Model to Predict Deterioration of Water Mains Using Robust Analysis

Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee

Abstract:

In South Korea, it is difficult to obtain data for statistical pipe assessment. To address this issue, we examine how the various statistical models presented in earlier work behave when data are mixed with noise and whether they are applicable in South Korea. Three major types of models are studied; where data are presented in the original papers, we add noise to the data and observe how the model response changes. Moreover, we generate data from the models in those papers and analyse the effect of noise. From this, we can determine the robustness and applicability in Korea of each model.
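
A minimal sketch of the noise-injection procedure described above: lifetimes are generated from an assumed deterioration model, perturbed with noise at several levels, refitted, and the recovered parameters compared. The Weibull model and noise levels are illustrative, not the specific models examined in the paper.

```python
# Noise-robustness sketch: generate pipe lifetimes from an assumed Weibull
# deterioration model, add noise, refit, compare parameters.  Illustrative only.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(5)
shape_true, scale_true = 2.2, 40.0                  # "true" deterioration parameters
lifetimes = weibull_min.rvs(shape_true, scale=scale_true, size=500, random_state=5)

def fit(data):
    c, loc, scale = weibull_min.fit(data, floc=0)   # fix location at zero
    return c, scale

for noise_sd in [0.0, 2.0, 5.0]:
    noisy = np.clip(lifetimes + rng.normal(0, noise_sd, lifetimes.size), 0.1, None)
    c, scale = fit(noisy)
    print(f"noise sd={noise_sd:>4}: shape={c:.2f}, scale={scale:.1f}")
```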

Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences

Procedia PDF Downloads 743
27537 Automated Testing to Detect Instance Data Loss in Android Applications

Authors: Anusha Konduru, Zhiyong Shan, Preethi Santhanam, Vinod Namboodiri, Rajiv Bagai

Abstract:

Mobile applications are increasing significantly in number, each addressing the requirements of many users. However, quick development and enhancement result in many underlying defects. Android apps create and handle a large variety of 'instance' data that has to persist across runs, such as the current navigation route, workout results, antivirus settings, or game state. Due to the nature of Android, an app can be paused, sent into the background, or killed at any time. If the instance data is not saved and restored between runs, then in addition to data loss, partially saved or corrupted data can crash the app upon resume or restart. However, it is difficult for the programmer to test this issue manually for all activities. This results in data loss: the data entered by the user is not saved when there is an interruption. This issue can degrade the user experience because the user needs to re-enter the information each time there is an interruption. Automated testing to detect such data loss is important to improve the user experience. This research proposes DroidDL, a data loss detector for Android, which detects instance data loss in a given Android application. We tested 395 applications and found 12 applications with data loss issues. The approach proved highly accurate and reliable in finding apps with this defect and can be used by Android developers to avoid such errors.

Keywords: Android, automated testing, activity, data loss

Procedia PDF Downloads 237
27536 Big Data: Appearance and Disappearance

Authors: James Moir

Abstract:

The mainstay of Big Data is prediction, in that it allows practitioners, researchers, and policy analysts to predict trends based upon the analysis of large and varied sources of data. These can range from changing social and political opinions to patterns in crime and consumer behaviour. Big Data has therefore shifted the criterion of success in science from causal explanations to predictive modelling and simulation. Nineteenth-century science sought to capture phenomena and show their appearance through causal mechanisms, while twentieth-century science attempted to save the appearances and relinquish causal explanations. Now twenty-first-century science, in the form of Big Data, is concerned with the prediction of appearances and nothing more. However, this pulls social science back in the direction of a more rule- or law-governed reality model of science and away from a consideration of the internal nature of rules in relation to various practices. In effect, Big Data offers us no more than a world of surface appearance, and in doing so it makes any context-specific conceptual sensitivity disappear.

Keywords: big data, appearance, disappearance, surface, epistemology

Procedia PDF Downloads 421
27535 From Data Processing to Experimental Design and Back Again: A Parameter Identification Problem Based on FRAP Images

Authors: Stepan Papacek, Jiri Jablonsky, Radek Kana, Ctirad Matonoha, Stefan Kindermann

Abstract:

FRAP (Fluorescence Recovery After Photobleaching) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, the data processing part is still under development. In this paper, we formulate and solve the problem of data selection, which enhances the processing of FRAP images. We introduce the concept of the irrelevant data set, i.e., the data which contribute almost nothing to reducing the confidence interval of the estimated parameters and could thus be neglected. Based on sensitivity analysis, we both solve the problem of optimal data space selection and find specific conditions for optimizing an important experimental design factor, e.g., the radius of the bleach spot. Finally, a theorem stating that the integrated data approach is less precise than the full data case is proven; i.e., we claim that the data set represented by the FRAP recovery curve leads to a larger confidence interval compared to the spatio-temporal (full) data.
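
To illustrate the irrelevant-data idea, a short sketch that fits a simple exponential FRAP recovery curve, then refits using only the time points whose sensitivity to the recovery rate is non-negligible, and compares the resulting confidence intervals. The recovery model, noise level and sensitivity threshold are illustrative assumptions, not the paper's reaction-diffusion model.

```python
# Data-selection sketch: compare the confidence interval on the recovery rate
# when fitting all time points versus only the sensitivity-relevant ones.
# Model, noise and threshold are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, f0, f_inf, k):
    return f_inf - (f_inf - f0) * np.exp(-k * t)

t = np.linspace(0, 60, 120)                         # seconds after bleach
rng = np.random.default_rng(2)
data = recovery(t, 0.2, 0.9, 0.15) + rng.normal(0, 0.02, t.size)

def ci_halfwidth(tt, yy):
    p, cov = curve_fit(recovery, tt, yy, p0=[0.1, 1.0, 0.1])
    return p[2], 1.96 * np.sqrt(cov[2, 2])          # estimate and ~95% CI on k

k_full, ci_full = ci_halfwidth(t, data)

# Sensitivity of the model to k; early and late plateau points contribute little.
p0 = [0.2, 0.9, 0.15]
sens = np.abs(recovery(t, p0[0], p0[1], p0[2] + 1e-4) - recovery(t, *p0)) / 1e-4
keep = sens > 0.05 * sens.max()
k_sub, ci_sub = ci_halfwidth(t[keep], data[keep])

print(f"full data : k = {k_full:.3f} +/- {ci_full:.3f} ({t.size} points)")
print(f"selected  : k = {k_sub:.3f} +/- {ci_sub:.3f} ({keep.sum()} points)")
```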

Keywords: FRAP, inverse problem, parameter identification, sensitivity analysis, optimal experimental design

Procedia PDF Downloads 278
27534 Software-Defined Radio Based Channel Measurement System of Wideband HF Communication System in Low-Latitude Region

Authors: P. H. Mukti, I. Kurniawati, F. Oktaviansyah, A. D. Adhitya, N. Rachmadani, R. Corputty, G. Hendrantoro, T. Fukusako

Abstract:

HF communication systems are an attractive field for many researchers since they can reach distant areas at low cost. This long-distance communication can be achieved by exploiting the ionosphere as a transmission medium for the HF radio wave. However, due to the dynamic nature of the ionosphere, the channel characteristics of HF communication have to be investigated in order to achieve better performance. Many techniques to characterize the HF channel are available in the literature. However, none of those techniques describes the HF channel characteristics in low-latitude regions, especially equatorial areas. Since the ionosphere around the equatorial region exhibits the equatorial spread-F (ESF) phenomenon, it is important to characterize the wideband HF channel in low-latitude regions. On the other hand, the appearance of software-defined radio has attracted the interest of many researchers. Accordingly, in this paper an SDR-based channel measurement system is proposed for characterizing the HF channel in low-latitude regions.
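
A sketch of the correlation-based sounding that SDR channel-measurement systems commonly perform: a known PN sequence is transmitted and the received baseband samples are cross-correlated with it to estimate the channel impulse response. The sounding waveform, the two-path channel and the noise level are assumptions for illustration, not the authors' measured equatorial HF data.

```python
# Correlation-based channel sounding sketch: estimate a multipath impulse
# response by cross-correlating the received signal with a known PN sequence.
# Channel taps and noise are synthetic.
import numpy as np

rng = np.random.default_rng(9)
pn = rng.choice([-1.0, 1.0], size=1023)                  # transmitted PN sequence

# Hypothetical two-path ionospheric channel: direct mode + delayed weaker mode.
channel = np.zeros(50, dtype=complex)
channel[3] = 1.0
channel[27] = 0.4 * np.exp(1j * 0.8)

received = np.convolve(pn, channel)[: pn.size]           # channel filtering
received += rng.normal(0, 0.1, pn.size) + 1j * rng.normal(0, 0.1, pn.size)

# Cross-correlation with the known sequence recovers the impulse response.
h_est = np.array([np.vdot(pn[: pn.size - d], received[d:]) / (pn.size - d)
                  for d in range(50)])
print("strongest estimated taps (delay, |h|):",
      sorted(enumerate(np.abs(h_est)), key=lambda kv: -kv[1])[:2])
```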

Keywords: channel characteristic, HF communication system, LabVIEW, software-defined radio, universal software radio peripheral

Procedia PDF Downloads 488
27533 Advances in Design Decision Support Tools for Early-stage Energy-Efficient Architectural Design: A Review

Authors: Maryam Mohammadi, Mohammadjavad Mahdavinejad, Mojtaba Ansari

Abstract:

The main driving force for the increasing movement towards the design of High-Performance Buildings (HPB) is building codes and rating systems that address the various components of the building and their impact on the environment and energy conservation through methods such as prescriptive or simulation-based approaches. The methods and tools developed to meet these needs, which are often based on building performance simulation tools (BPST), have limitations in terms of compatibility with the integrated design process (IDP) and HPB design, as well as use by architects in the early stages of design (when the most important decisions are made). To overcome these limitations, efforts have been made in recent years to develop design decision support systems, which are often based on artificial intelligence. Numerous needs and steps for designing and developing a Decision Support System (DSS) that complies with the early stages of energy-efficient architectural design (consisting of combinations of different methods in an integrated package) have been listed in the literature. While various review studies have been conducted on each of these techniques (such as optimization, sensitivity and uncertainty analysis, etc.) and their integration for specific targets, this article is a critical and holistic review of the research that leads to the development of applicable systems or the introduction of a comprehensive framework for developing models that comply with the IDP. Information resources such as Science Direct and Google Scholar are searched using specific keywords, and the results are divided into two main categories: simulation-based DSSs and meta-simulation-based DSSs. The strengths and limitations of different models are highlighted, two general conceptual models are introduced for each category, and the degree of compliance of these models with the IDP framework is discussed. The research shows a movement towards Multi-Level of Development (MOD) models that are well combined with the early stages of integrated design (the schematic design and design development stages), that are heuristic, hybrid and meta-simulation-based, and that rely on big real-world data (such as Building Energy Management System data or web data). Obtaining, using and combining these data with simulation data to create models that handle higher uncertainty, are more dynamic and are more sensitive to context and culture, as well as models that can generate economical, energy-efficient design scenarios using local data (to be more harmonized with circular economy principles), are important research areas in this field. The results of this study provide a roadmap for researchers and developers of these tools.

Keywords: integrated design process, design decision support system, meta-simulation based, early stage, big data, energy efficiency

Procedia PDF Downloads 162