Search results for: algorithm techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9867

7617 Secret Sharing in Visual Cryptography Using NVSS and Data Hiding Techniques

Authors: Misha Alexander, S. B. Waykar

Abstract:

Visual cryptography is a special unbreakable encryption technique that transforms a secret image into shares of random noisy pixels. These shares are transmitted over the network, and their noisy texture attracts hackers. To address this issue, the Natural Visual Secret Sharing (NVSS) scheme was introduced; it uses natural shares, in either digital or printed form, to generate the noisy secret share. This scheme greatly reduces the transmission risk but causes distortion in the retrieved secret image through variation in the settings and properties of the digital devices used to capture the natural image during the encryption/decryption phases. This paper proposes a new NVSS scheme that extracts the secret key from multiple randomly selected, unaltered natural images. To further improve the security of the shares, data hiding techniques such as steganography and alpha-channel watermarking are proposed.
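
To illustrate the core idea of deriving a noise-like share from an unaltered natural image (a simplified stand-in for the paper's scheme, not its exact algorithm), consider this minimal Python/NumPy sketch, where XOR plays the role of the share-generation and recovery operation:

```python
import numpy as np

def make_noisy_share(secret, natural, seed=42):
    """Derive a noise-like share by XOR-ing the secret image with
    pixels sampled from an unaltered natural image (illustrative only)."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, natural.size, size=secret.size)
    key = natural.reshape(-1)[idx].reshape(secret.shape)
    return secret ^ key  # result looks like random noise

def recover_secret(share, natural, seed=42):
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, natural.size, size=share.size)
    key = natural.reshape(-1)[idx].reshape(share.shape)
    return share ^ key  # XOR is its own inverse

secret = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
natural = np.random.randint(0, 256, (128, 128), dtype=np.uint8)  # stands in for a photo
share = make_noisy_share(secret, natural)
assert np.array_equal(recover_secret(share, natural), secret)
```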

Keywords: decryption, encryption, natural visual secret sharing, natural images, noisy share, pixel swapping

Procedia PDF Downloads 404
7616 Achieving Success in NPD Projects

Authors: Ankush Agrawal, Nadia Bhuiyan

Abstract:

The new product development (NPD) literature emphasizes the importance of introducing new products to the market for continuing business success. New products are responsible for employment, economic growth, technological progress, and high standards of living. Therefore, the study of NPD and the processes through which new products emerge is important. The goal of our research is to propose a framework of critical success factors, metrics, and tools and techniques for implementing metrics for each stage of the new product development (NPD) process. An extensive literature review was undertaken to investigate decades of studies on NPD success and how it can be achieved. These studies were scanned for factors common to firms that enjoyed success with new products on the market. The paper summarizes NPD success factors, suggests metrics that should be used to measure these factors, and proposes tools and techniques to make use of these metrics. This was done for each stage of the NPD process, and the results were brought together in a framework that the authors propose should be followed for complex NPD projects. While many studies have been conducted on critical success factors for NPD, those studies tend to be fragmented and to focus on only one or a few phases of the NPD process.

Keywords: new product development, performance, critical success factors, framework

Procedia PDF Downloads 399
7615 Special Features of Phacoemulsification Technique for Dense Cataracts

Authors: Shilkin A.G., Goncharov D.V., Rotanov D.A., Voitecha M.A., Kulyagina Y.I., Mochalova U.E.

Abstract:

Context: Phacoemulsification is a surgical technique used to remove cataracts, but it has a higher number of complications when dense cataracts are present. The risk factors include thin posterior capsule, dense nucleus fragments, and prolonged exposure to high-power ultrasound. To minimize these complications, various methods are used. Research aim: The aim of this study is to develop and implement optimal methods of ultrasound phacoemulsification for dense cataracts in order to minimize postoperative complications. Methodology: The study involved 36 eyes of dogs with dense cataracts over a period of 5 years. The surgeries were performed using a LEICA 844 surgical microscope and an Oertli Faros phacoemulsifier. The surgical techniques included the optimal technique for breaking the nucleus, bimanual surgery, and the use of Akahoshi prechoppers. Findings: The complications observed during the surgery included rupture of the posterior capsule and the need for anterior vitrectomy. Complications in the postoperative period included corneal edema and uveitis. Theoretical importance: This study contributes to the field by providing insights into the special features of phacoemulsification for dense cataracts. It highlights the importance of using specific techniques and settings to minimize complications. Data collection and analysis procedures: The data for the study were collected from surgeries performed on dogs with dense cataracts. The complications were documented and analyzed. Question addressed: The study addressed the question of how to minimize complications during phacoemulsification surgery for dense cataracts. Conclusion: By following the optimal techniques, settings, and using prechoppers, the surgery for dense cataracts can be made safer and faster, minimizing the risks and complications.

Keywords: dense cataracts, phacoemulsification, phacoemulsification of cataracts in elderly dogs, complications of phacoemulsification

Procedia PDF Downloads 62
7614 Recent Developments in the Application of Deep Learning to Stock Market Prediction

Authors: Shraddha Jain Sharma, Ratnalata Gupta

Abstract:

Predicting stock movements in the financial market is both difficult and rewarding. Analysts and academics are increasingly using advanced approaches such as machine learning techniques to anticipate stock price patterns, thanks to the expanding capacity of computing and the recent advent of graphics processing units and tensor processing units. Stock market prediction is a type of time series prediction that is incredibly difficult, since stock prices are influenced by a variety of financial, socioeconomic, and political factors. Furthermore, even minor mistakes in stock market price forecasts can result in significant losses for companies that employ the findings of stock market price prediction for financial analysis and investment. Soft computing techniques are increasingly being employed for stock market prediction due to their better accuracy compared with traditional statistical methodologies. The proposed research looks at the need for soft computing techniques in stock market prediction, the numerous soft computing approaches that are important to the field, past work in the area with its prominent features, and the significant open problems that the area involves. For constructing a predictive model, the major focus is on neural networks and fuzzy logic. The stock market is extremely unpredictable, and it is unquestionably tough to predict correctly based on certain characteristics. This study provides a complete overview of the numerous strategies investigated for high-accuracy prediction, with a focus on the most important characteristics.
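
As a minimal illustration of the sliding-window neural formulation that such surveys cover (the synthetic data and model choices below are assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic closing prices stand in for real market data.
prices = np.cumsum(np.random.randn(500)) + 100

def make_windows(series, lags=5):
    """Turn a price series into (lagged features, next-day target) pairs."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = series[lags:]
    return X, y

X, y = make_windows(prices)
# No shuffling: keep the chronological order of a time series.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))
```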

Keywords: stock market prediction, artificial intelligence, artificial neural networks, fuzzy logic, accuracy, deep learning, machine learning, stock price, trading volume

Procedia PDF Downloads 90
7613 Employing Visual Culture to Enhance Initial Adult Maltese Language Acquisition

Authors: Jacqueline Żammit

Abstract:

Recent research indicates that the utilization of right-brain strategies holds significant implications for the acquisition of language skills. Nevertheless, the utilization of visual culture as a means to stimulate these strategies and amplify language retention among adults engaging in second language (L2) learning remains a relatively unexplored area. This investigation delves into the impact of visual culture on activating right-brain processes during the initial stages of language acquisition, particularly in the context of teaching Maltese as a second language (ML2) to adult learners. By employing a qualitative research approach, this study convenes a focus group comprising twenty-seven educators to delve into a range of visual culture techniques integrated within language instruction. The collected data is subjected to thematic analysis using NVivo software. The findings underscore a variety of impactful visual culture techniques, encompassing activities such as drawing, sketching, interactive matching games, orthographic mapping, memory palace strategies, wordless picture books, picture-centered learning methodologies, infographics, Face Memory Game, Spot the Difference, Word Search Puzzles, the Hidden Object Game, educational videos, the Shadow Matching technique, Find the Differences exercises, and color-coded methodologies. These identified techniques hold potential for application within ML2 classes for adult learners. Consequently, this study not only provides insights into optimizing language learning through specific visual culture strategies but also furnishes practical recommendations for enhancing language competencies and skills.

Keywords: visual culture, right-brain strategies, second language acquisition, Maltese as a second language, visual aids, language-based activities

Procedia PDF Downloads 61
7612 ViraPart: A Text Refinement Framework for Automatic Speech Recognition and Natural Language Processing Tasks in Persian

Authors: Narges Farokhshad, Milad Molazadeh, Saman Jamalabbasi, Hamed Babaei Giglou, Saeed Bibak

Abstract:

Persian is an inflectional subject-object-verb language, which makes it a more ambiguous language to process automatically. However, applying techniques such as Zero-Width Non-Joiner (ZWNJ) recognition, punctuation restoration, and Persian Ezafe construction leads to more understandable and precise text. Most previous work on Persian addresses these techniques individually; we believe that, for text refinement in Persian, all of these tasks are necessary. In this work, we propose the ViraPart framework, which uses an embedded ParsBERT at its core for text clarification. First, the BERT variant for Persian is applied, followed by a classifier layer for each classification task. Next, the model outputs are combined to produce clear text. The proposed models for ZWNJ recognition, punctuation restoration, and Persian Ezafe construction achieve averaged macro F1 scores of 96.90%, 92.13%, and 98.50%, respectively. Experimental results show that our proposed approach is very effective for text refinement in the Persian language.
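
The general pattern, a Persian BERT encoder with a token-classification head per task, can be sketched with the Hugging Face transformers API; the checkpoint and label set below are illustrative assumptions, not ViraPart's released artifacts:

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed public ParsBERT checkpoint; ViraPart's fine-tuned weights and
# label inventory may differ.
NAME = "HooshvareLab/bert-base-parsbert-uncased"
LABELS = ["O", "ZWNJ", "COMMA", "PERIOD", "EZAFE"]  # hypothetical tag set

tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForTokenClassification.from_pretrained(NAME, num_labels=len(LABELS))

text = "کتاب های من"  # raw Persian where a ZWNJ belongs between the noun and plural suffix
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits          # shape: (1, seq_len, num_labels)
pred = logits.argmax(-1).squeeze(0)       # one tag per subword token
tags = [LABELS[i] for i in pred.tolist()]
# Each predicted tag then drives a rewrite rule: insert a ZWNJ, restore
# punctuation, or mark Ezafe on the corresponding token.
```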

Keywords: Persian Ezafe, punctuation, ZWNJ, NLP, ParsBERT, transformers

Procedia PDF Downloads 218
7611 Using Machine Learning Techniques for Autism Spectrum Disorder Analysis and Detection in Children

Authors: Norah Mohammed Alshahrani, Abdulaziz Almaleh

Abstract:

Autism Spectrum Disorder (ASD) is a condition related to brain development that affects how a person recognises and communicates with others, resulting in difficulties with social interaction and communication, and its prevalence is constantly growing. Early recognition of ASD allows children to lead safe and healthy lives and helps doctors make accurate diagnoses and manage the condition. Therefore, it is crucial to develop a method that achieves good results with high accuracy for the measurement of ASD in children. In this paper, ASD datasets of toddlers and children have been analyzed. We employed the following machine learning techniques to explore ASD: Random Forest (RF), Decision Tree (DT), Naïve Bayes (NB), and Support Vector Machine (SVM). Feature selection was then used to retain fewer attributes from the ASD datasets while preserving model performance. As a result, we found that the best performance was provided by the Support Vector Machine (SVM), achieving an accuracy of 0.98 on the toddler dataset and 0.99 on the children dataset.
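
A minimal sketch of the SVM-with-feature-selection pipeline on synthetic stand-in data (the actual ASD screening datasets and tuning are not reproduced here):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for an ASD screening dataset.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Feature selection keeps the most discriminative attributes,
# then an SVM classifies the screened cases.
clf = make_pipeline(SelectKBest(f_classif, k=10), StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```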

Keywords: autism spectrum disorder, machine learning, feature selection, support vector machine

Procedia PDF Downloads 152
7610 Condition Assessment of Reinforced Concrete Bridge Deck Using Ground Penetrating Radar

Authors: Azin Shakibabarough, Mojtaba Valinejadshoubi, Ashutosh Bagchi

Abstract:

Catastrophic bridge failure happens due to lack of inspection, deficient design, and extreme events like flooding or earthquakes. A Bridge Management System (BMS) is utilized to reduce the risk of such failures through proper design and frequent inspection. Visual inspection cannot detect subsurface defects, so Non-Destructive Evaluation (NDE) techniques are used to remove this limitation as far as possible. Among all NDE techniques, Ground Penetrating Radar (GPR) has proved to be a highly effective tool for detecting internal defects in a reinforced concrete bridge deck, and it is used for detecting rebar location and rebar corrosion. A GPR profile is composed of a series of hyperbolas, in which a sharp hyperbola denotes sound rebar, while a blurred hyperbola or signal attenuation indicates corroded rebar. Interpretation of GPR images is implemented through numerical analysis or visualization. Researchers have recently found that interpretation through visualization is more precise than interpretation through numerical analysis, but visualization is a time-consuming and highly subjective process; automating the interpretation of GPR images through visualization can solve these problems. After interpretation of all scans of a bridge, condition assessment is conducted based on the generated corrosion map. However, such a condition assessment is not objective and precise; condition assessment based on structural integrity and strength parameters can make it more objective and precise. The main purpose of this study is to present an automated interpretation method for a reinforced concrete bridge deck through a visualization technique. In the end, a combined analysis of the structural condition of a bridge is implemented.
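
One common amplitude-based step toward such a corrosion map can be sketched as follows; the thresholds and the random amplitude grid are assumptions for illustration, not the study's calibration:

```python
import numpy as np

# Hypothetical grid of rebar reflection amplitudes (dB) picked from GPR
# B-scans over a bridge deck; real surveys extract these per hyperbola.
amplitudes_db = np.random.uniform(-18, 0, size=(20, 40))

# Normalize to the strongest (healthiest) reflection, then bin the
# attenuation into condition classes for a corrosion map.
relative_db = amplitudes_db - amplitudes_db.max()
bins = [-12.0, -8.0, -4.0]                 # assumed attenuation thresholds (dB)
labels = ["severe", "moderate", "light", "sound"]
corrosion_map = np.digitize(relative_db, bins)
for i, name in enumerate(labels):
    share = (corrosion_map == i).mean() * 100
    print(f"{name:>8}: {share:5.1f}% of deck area")
```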

Keywords: bridge condition assessment, ground penetrating radar, GPR, NDE techniques, visualization

Procedia PDF Downloads 149
7609 Standardization of Miniature Neutron Research Reactor and Occupational Safety Analysis

Authors: Raymond Limen Njinga

Abstract:

The comparator factors (Fc) for miniature research reactors are of great importance in the field of nuclear physics, as they provide an accurate basis for the evaluation of elements in all forms of samples via the k0-NAA technique. The Fc was initially simulated theoretically; thereafter, a series of experiments was performed to validate the results. The experimental values were obtained using an Au(0.1%)-Al alloy monitor foil and a neutron flux setting of 5.00E+11 cm⁻²·s⁻¹. In the inner irradiation position, an average experimental value of 7.120E+05 was observed against the theoretical value of 7.330E+05, a deviation of 2.86% from the theoretical value. In the outer irradiation position, an experimental value of 1.170E+06 was recorded against the theoretical value of 1.210E+06, a deviation of 3.31% from the theoretical value. The equivalent dose rates at 5 m from a neutron flux of 5.00E+11 cm⁻²·s⁻¹, for neutron energies of 1 keV, 10 keV, 100 keV, 500 keV, 1 MeV, 5 MeV, and 10 MeV, were calculated to be 0.01 Sv/h, 0.01 Sv/h, 0.03 Sv/h, 0.15 Sv/h, 0.21 Sv/h, and 0.25 Sv/h, respectively; the total dose over a period of one hour was obtained as 0.66 Sv.

Keywords: neutron flux, comparator factor, NAA techniques, neutron energy, equivalent dose

Procedia PDF Downloads 183
7608 Evolving Credit Scoring Models Using Genetic Programming and Language Integrated Query Expression Trees

Authors: Alexandru-Ion Marinescu

Abstract:

There exists a plethora of methods in the scientific literature that tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. “GOOD” or “BAD”) stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence in financial institution databases, with the majority of records classified as “GOOD” clients (clients that respect the loan return calendar) alongside a small percentage of “BAD” clients. But it is the “BAD” clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this whole context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, enabling us to construct executable pieces of code at runtime. As the name implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches, such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, among which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
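
The paper's implementation uses C# LINQ expression trees, which are not reproduced here; as a language-neutral illustration of the same idea, operator nodes over variable and constant leaves evaluated against a client record, a Python sketch might look like this (division is omitted in the toy to avoid zero-division):

```python
import operator
import random

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]
VARS = ["age", "loan_duration", "income"]   # stand-ins for applicant properties

def random_tree(depth):
    """Grow a random expression tree: operators inside, variables/constants at leaves."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARS + [round(random.uniform(-2, 2), 2)])
    op, sym = random.choice(OPS)
    return (op, sym, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(node, client):
    """Recursively evaluate the tree against one applicant's properties."""
    if isinstance(node, tuple):
        op, _, left, right = node
        return op(evaluate(left, client), evaluate(right, client))
    return client[node] if isinstance(node, str) else node

def show(node):
    """Render the tree as the tractable formula an analyst would read."""
    if isinstance(node, tuple):
        return f"({show(node[2])} {node[1]} {show(node[3])})"
    return str(node)

random.seed(1)
tree = random_tree(depth=3)
client = {"age": 35, "loan_duration": 24, "income": 3.1}
score = evaluate(tree, client)
print(show(tree), "->", "GOOD" if score > 0 else "BAD")
```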

Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution

Procedia PDF Downloads 117
7607 A Step-Magnitude Haptic Feedback Device and Platform for a Better Way to Review Kinesthetic Vibrotactile 3D Design in Professional Training

Authors: Biki Sarmah, Priyanko Raj Mudiar

Abstract:

In the modern world of remotely interactive virtual-reality-based learning and teaching, including professional skill-building and training practices as well as data acquisition and robotic systems, the application of field-programmable neurostimulator aids and first-hand interactive sensitisation techniques in 3D holographic audio-visual platforms has been a coveted goal of many scholars, professionals, scientists, and students. Kinaesthetic vibrotactile haptic perception, together with actuated step-magnitude contact profiloscopy, can be integrated into augmented-reality-based learning platforms and professional training through carefully coordinated image telemetry combined with remote data mining and control. A real-time, computer-aided (PLC-SCADA) algorithm based on field calibration must be designed for this purpose. Most importantly, in order to realise and interact with 3D holographic models displayed on a remote screen via remote laser image telemetry and control, all spatio-physical parameters (cardinal alignment, gyroscopic compensation, surface profile, and thermal composition) must be handled by zero-order type-1 actuators or transducers, because these provide zero hysteresis, zero backlash, and low dead time, along with linear, controllable, intrinsically observable, and smooth performance that requires the least error compensation while ensuring the best possible ergonomic comfort for users.

Keywords: haptic feedback, kinaesthetic vibrotactile 3D design, medical simulation training, piezo diaphragm based actuator

Procedia PDF Downloads 166
7606 Safety of Built Infrastructure: Single Degree of Freedom Approach to Blast Resistant RC Wall Panels

Authors: Muizz Sanni-Anibire

Abstract:

The 21st century has witnessed growing concern for the protection of built facilities against natural and man-made disasters. Studies of earthquake-, fire-, and explosion-resistant buildings now dominate the arena. To protect people and facilities from the effects of explosions, reinforced concrete walls have been designed to be blast resistant, and understanding the performance of these walls is a key step in ensuring the safety of built facilities. Blast walls are mostly designed using simple techniques such as the single-degree-of-freedom (SDOF) method, despite the increasing use of multi-degree-of-freedom techniques such as the finite element method. This study is the first stage of continuing research into the safety and reliability of blast walls. It presents the SDOF approach applied to the analysis of a concrete wall panel under three representative bomb scenarios: a motorcycle bomb (50 kg), a car bomb (400 kg), and a van bomb (1,500 kg of TNT explosive).
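
A minimal SDOF sketch of the governing idea, m·ü + k·u = F(t) under an idealized triangular blast pulse, with all parameter values invented for illustration rather than taken from the study:

```python
import numpy as np

# Minimal SDOF sketch of a wall panel under a triangular blast pulse.
m = 500.0    # equivalent mass (kg), assumed
k = 2.0e6    # equivalent stiffness (N/m), assumed
F0 = 8.0e4   # peak reflected blast force (N), assumed
td = 0.02    # positive phase duration (s), assumed

def blast_force(t):
    """Idealized triangular pulse: peak at t = 0, decaying linearly to zero at td."""
    return F0 * (1 - t / td) if t < td else 0.0

dt, T = 1e-5, 0.2
n = int(T / dt)
u = np.zeros(n)   # displacement history
v = 0.0
for i in range(1, n):
    a = (blast_force(i * dt) - k * u[i - 1]) / m   # undamped equation of motion
    v += a * dt
    u[i] = u[i - 1] + v * dt                       # semi-implicit Euler step

print(f"Peak displacement: {u.max() * 1000:.2f} mm")
```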

Keywords: blast wall, safety, protection, explosion

Procedia PDF Downloads 263
7605 Delineation of Green Infrastructure Buffer Areas with Simulated Annealing: Consideration of Ecosystem Services Trade-Offs in the Objective Function

Authors: Andres Manuel Garcia Lamparte, Rocio Losada Iglesias, Marcos Boullón Magan, David Miranda Barros

Abstract:

The biodiversity strategy of the European Union for 2030 mentions climate change as one of the key factors in biodiversity loss and considers green infrastructure one of the solutions to this problem. In this line, the European Commission has developed a green infrastructure strategy which commits member states to consider green infrastructure in their territorial planning. This green infrastructure is aimed at granting the provision of a wide number of ecosystem services to support biodiversity and human well-being by countering the effects of climate change. Yet there are not many tools available to delimit green infrastructure. The available ones consider the potential of the territory to provide ecosystem services. However, these methods usually aggregate several maps of ecosystem service potential without considering possible trade-offs, which can lead to excluding areas with a high potential for providing ecosystem services that have many trade-offs with other ecosystem services. In order to tackle this problem, a methodology is proposed to consider ecosystem service trade-offs in the objective function of a simulated annealing algorithm aimed at delimiting green infrastructure multifunctional buffer areas. To this end, the provision potential maps of the regulating ecosystem services considered to delimit the multifunctional buffer areas are clustered in groups, so that ecosystem services that create trade-offs with each other are not placed in the same group. The normalized provision potential maps of the ecosystem services in each group are added to obtain a potential map per group, which is normalized again. Then the potential maps for each group are combined into a raster map that shows the highest provision potential value in each cell. The combined map is then used in the objective function of the simulated annealing algorithm. The algorithm is run both using the proposed methodology and considering the ecosystem services individually. The results are analyzed with spatial statistics and landscape metrics to check the number of ecosystem services that the delimited areas produce, as well as their regularity and compactness. It has been observed that the proposed methodology increases the number of ecosystem services produced by the delimited areas, improving their multifunctionality and increasing their effectiveness in preventing climate change impacts.
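
A minimal sketch of the simulated annealing core, with a toy objective standing in for the combined provision-potential raster and the compactness goals described above:

```python
import math
import random
import numpy as np

rng = np.random.default_rng(0)
potential = rng.random((30, 30))    # stand-in for the combined provision-potential raster

def objective(mask):
    """Reward covered potential; bonus for adjacent selected cells (compactness)."""
    covered = (potential * mask).sum()
    compact = (mask[:, 1:] & mask[:, :-1]).sum() + (mask[1:, :] & mask[:-1, :]).sum()
    return covered + 0.1 * compact

mask = rng.random((30, 30)) < 0.2   # initial buffer-area selection
current = objective(mask)
best_mask, best_score = mask.copy(), current
T = 1.0
for step in range(20000):
    i, j = rng.integers(0, 30, size=2)
    mask[i, j] = ~mask[i, j]                 # flip one cell in/out of the buffer
    cand = objective(mask)
    if cand >= current or random.random() < math.exp((cand - current) / T):
        current = cand                       # accept: always if better, sometimes if worse
        if cand > best_score:
            best_mask, best_score = mask.copy(), cand
    else:
        mask[i, j] = ~mask[i, j]             # reject: undo the flip
    T *= 0.9997                              # geometric cooling schedule
print("Selected cells:", int(best_mask.sum()), "score:", round(float(best_score), 2))
```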

Keywords: ecosystem services trade-offs, green infrastructure delineation, multifunctional buffer areas, climate change

Procedia PDF Downloads 174
7604 Effects of Different Processing Methods on Composition, Physicochemical and Morphological Properties of MR263 Rice Flour

Authors: R. Asmeda, A. Noorlaila, M. H. Norziah

Abstract:

This research work was conducted to investigate the effects of different grinding techniques used during the milling of rice grains on the physicochemical characteristics of the rice flour produced. Dry grinding, semi-wet grinding, and wet grinding were employed to produce the rice flour. The results indicated that the different grinding methods significantly (p ≤ 0.05) affected the physicochemical and functional properties of the starch, except for the carbohydrate content, x-ray diffraction pattern, and breakdown viscosity. The dry grinding technique caused the highest percentage of starch damage compared to semi-wet and wet grinding. Protein, fat, and ash contents were highest in rice flour obtained by dry grinding. It was found that wet grinding produced flour with the smallest average particle size (8.52 µm), resulting in the highest process yield (73.14%). Pasting profiles revealed that dry grinding produced rice flour with the significantly lowest pasting temperature and the highest setback viscosity.

Keywords: average particle size, grinding techniques, physicochemical characteristics, rice flour

Procedia PDF Downloads 191
7603 3D Point Cloud Model Color Adjustment by Combining Terrestrial Laser Scanner and Close Range Photogrammetry Datasets

Authors: M. Pepe, S. Ackermann, L. Fregonese, C. Achille

Abstract:

3D models obtained with advanced survey techniques such as close-range photogrammetry and laser scanning are nowadays particularly appreciated in the Cultural Heritage and Archaeology fields. In order to produce high-quality models representing archaeological evidence and anthropological artifacts, the appearance of the model (i.e., color), beyond its geometric accuracy, is not a negligible aspect. The integration of close-range photogrammetry survey techniques with laser scanning is still a topic of study and research. Combining point cloud data sets of the same object generated with both technologies, or with the same technology but registered at different moments and/or under different natural light conditions, can produce a final point cloud with accentuated color dissimilarities. In this paper, a methodology to make the different data sets uniform, improve the chromatic quality, and highlight further details by balancing the point color is presented.
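
One simple balancing strategy (an illustration of the problem, not necessarily the authors' method) is per-channel histogram matching between the two clouds:

```python
import numpy as np

def match_channel(source, reference):
    """Histogram-match one color channel of a point cloud to a reference cloud."""
    s_sorted = np.sort(source)
    quantiles = np.searchsorted(s_sorted, source) / len(source)
    return np.quantile(reference, quantiles)

# Two clouds of the same object under different illumination (synthetic stand-ins).
cloud_a = np.random.normal(120, 30, (5000, 3)).clip(0, 255)   # per-point RGB
cloud_b = np.random.normal(150, 20, (5000, 3)).clip(0, 255)

balanced = np.column_stack([
    match_channel(cloud_a[:, c], cloud_b[:, c]) for c in range(3)
])
print("Mean RGB before:", cloud_a.mean(0).round(1), "after:", balanced.mean(0).round(1))
```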

Keywords: color models, cultural heritage, laser scanner, photogrammetry

Procedia PDF Downloads 280
7602 Comparative DNA Binding of Iron and Manganese Complexes by Spectroscopic and ITC Techniques and Antibacterial Activity

Authors: Maryam Nejat Dehkordi, Per Lincoln, Hassan Momtaz

Abstract:

The interaction with DNA of Schiff base complexes of iron and manganese (iron [N,N′-bis(5-(triphenylphosphonium methyl)salicylidene)-1,2-ethanediamine] chloride, [Fe Salen]Cl, and manganese [N,N′-bis(5-(triphenylphosphonium methyl)salicylidene)-1,2-ethanediamine] acetate) was investigated by spectroscopic and isothermal titration calorimetry (ITC) techniques. The absorbance spectra of the complexes showed hyper- and hypochromism in the presence of DNA, which is an indication of the interaction of the complexes with DNA. Linear dichroism (LD) measurements confirmed the bending of DNA in the presence of the complexes. Furthermore, isothermal titration calorimetry experiments confirmed that the complexes bind to DNA through both electrostatic and hydrophobic interactions, and the ITC profile exhibits the existence of two binding phases for the complex. The antibacterial activities of the ligand and complexes were tested in vitro against gram-positive and gram-negative bacteria.

Keywords: Schiff base complexes, ct-DNA, linear dichroism (LD), isothermal titration calorimetry (ITC), antibacterial activity

Procedia PDF Downloads 471
7601 Creating a Brand Value Assessment Model for Choosing a Cosmetic Brand in Tehran by Combining DEMATEL Techniques and Multi-Stage ANFIS

Authors: Hamed Saremi, Suzan Taghavy, Seyed Mohammad Hanif Sanjari, Mostafa Kahali

Abstract:

One of the challenges for manufacturing and service companies in providing a product or service is making the brand recognizable to consumers in target markets. Many companies run most of their processes at similar capacity, yet the constant threat of damaging internal and external factors can prevent a brand's rise, and more companies are finding themselves at the stage of bankruptcy. This paper attempts to identify and analyze effective indicators of brand equity and presents an intelligent assessment model to prevent possible damage. In this study, the indicators of brand equity were identified on the basis of a literature study and, according to expert opinions, the set of indicators was structured using the DEMATEL technique. A multi-stage Adaptive Neuro-Fuzzy Inference System (ANFIS) was then used to design a multi-stage intelligent system for the assessment of brand equity.

Keywords: brand, cosmetic product, ANFIS, DEMATEL

Procedia PDF Downloads 417
7600 Automatic Lead Qualification with Opinion Mining in Customer Relationship Management Projects

Authors: Victor Radich, Tania Basso, Regina Moraes

Abstract:

Lead qualification is one of the main procedures in Customer Relationship Management (CRM) projects. Its main goal is to identify potential consumers who have the ideal characteristics to establish a profitable and long-term relationship with a certain organization. Social networks can be an important source of data for identifying and qualifying leads since interest in specific products or services can be identified from the users’ expressed feelings of (dis)satisfaction. In this context, this work proposes the use of machine learning techniques and sentiment analysis as an extra step in the lead qualification process in order to improve it. In addition to machine learning models, sentiment analysis or opinion mining can be used to understand the evaluation that the user makes of a particular service, product, or brand. The results obtained so far have shown that it is possible to extract data from social networks and combine the techniques for a more complete classification.
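
A minimal sketch of the sentiment-classification step on toy posts (real pipelines would train on labeled social-network data and richer features):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled posts stand in for social-network data (illustrative only).
posts = [
    "I love this camera, best purchase ever",
    "terrible support, never buying again",
    "thinking about upgrading my phone soon",
    "this laptop is awful and slow",
    "great service, highly recommend",
    "worst experience with this brand",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive interest, 0 = negative sentiment

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A lead score can combine this positive-sentiment probability with other signals.
new_post = "really happy with the demo, might order one"
print("Positive-sentiment probability:", model.predict_proba([new_post])[0, 1].round(2))
```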

Keywords: lead qualification, sentiment analysis, opinion mining, machine learning, CRM, lead scoring

Procedia PDF Downloads 85
7599 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and broadly varied data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purposes of developing their business. Therefore, the recently decentralized data management environment relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of these techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions for managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 432
7598 Random Forest Classification for Population Segmentation

Authors: Regina Chua

Abstract:

To reduce the costs of re-fielding a large survey, a Random Forest classifier was applied to measure the accuracy of classifying individuals into their assigned segments with the fewest possible questions. Given a long survey, one needed to determine the most predictive ten or fewer questions that would accurately assign new individuals to custom segments. Furthermore, the solution needed to be quick in its classification and usable in non-Python environments. In this paper, a supervised Random Forest classifier was modeled on a dataset with 7,000 individuals, 60 questions, and 254 features. The Random Forest consists of an ensemble of individual decision trees whose combined votes yield a predicted segment with robust precision and recall scores compared to a single tree. A random 70-30 stratified split was used for training the algorithm, and accuracy trade-offs at different depths for each segment were identified. Ultimately, the Random Forest classifier performed at 87% accuracy at a depth of 10 with 20 instead of 254 features and 10 instead of 60 questions. With acceptable accuracy in prioritizing feature selection, new tools were developed for non-Python environments: a worksheet with a formulaic version of the algorithm and an embedded function to predict the segment of an individual in real time. Random Forest was determined to be an optimal classification model by its feature selection, performance, processing speed, and flexible application in other environments.
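
A minimal reconstruction of the approach on synthetic data with assumed parameters (the survey data itself is not reproduced):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in mirroring the scale: 7,000 respondents, 254 features.
X, y = make_classification(n_samples=7000, n_features=254, n_informative=25,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

full = RandomForestClassifier(max_depth=10, random_state=0).fit(X_tr, y_tr)
top20 = np.argsort(full.feature_importances_)[-20:]    # keep the 20 most important features

small = RandomForestClassifier(max_depth=10, random_state=0).fit(X_tr[:, top20], y_tr)
print("All features:", round(full.score(X_te, y_te), 3),
      "| top-20 only:", round(small.score(X_te[:, top20], y_te), 3))
```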

Keywords: machine learning, supervised learning, data science, random forest, classification, prediction, predictive modeling

Procedia PDF Downloads 94
7597 Examples from Traditional Earthquake-Resistant Architecture

Authors: Amira Zatir, Abderahmane Mokhtari, Amina Foufa, Sara Zatir

Abstract:

Numerous historic monuments, buildings, and housing environments exist in several regions of the world, built in traditional ways, that have survived earthquakes even in zones where the seismic risk is particularly high. These constructions, stemming from vernacular architecture, make it possible, through their resistance to earthquakes over time, to identify various "local" earthquake-resistant techniques. Through the examples and experiences presented, it can be observed that in traditional building, two major and largely opposite principles govern earthquake-resistant construction. The first is very high flexibility, embodied by very light constructions such as Japanese, Turkish, and even Chinese wooden buildings. The second is very high rigidity, corresponding to more or less heavy and massive masonry constructions, particularly in stone, found notably in the Mediterranean Basin and in the historic sanctuary of Machu Picchu. To these are added sensible and well-thought-out construction techniques, including the use of humble materials such as earth and adobe. Ancient communities were able to face seismic risks thanks to their know-how, reflected in their intelligently designed constructions, testifying to a local seismic culture.

Keywords: earthquake, architecture, traditional, construction, resistance

Procedia PDF Downloads 420
7596 Application of Nanofibers in Heavy Metal (HM) Filtration

Authors: Abhijeet Kumar, Palaniswamy N. K.

Abstract:

Heavy metal contamination in water sources endangers both the environment and human health. Various water filtration techniques have been employed to date for the purification and removal of hazardous metals from water. Among the existing methods, nanofibers have emerged as a viable alternative for effective heavy metal removal in recent years because of their unique qualities, such as large surface area, interconnected porous structure, and customizable surface chemistry. Among the numerous manufacturing techniques, solution blow spinning has gained popularity as a versatile process for producing nanofibers with customized properties. This paper seeks to offer a complete overview of the use of nanofibers for heavy metal filtration, particularly those produced using solution blow spinning. The review discusses current advances in nanofiber materials, production processes, and heavy metal removal performance. Furthermore, the field's difficulties and future opportunities are examined in order to direct future research and development activities.

Keywords: heavy metals, nanofiber composite, filter membranes, adsorption, impaction

Procedia PDF Downloads 68
7595 The Design Inspired by Phra Maha Chedi of King Rama I-IV at Wat Phra Chetuphon Vimolmangklaram Rajwaramahaviharn

Authors: Taechit Cheuypoung

Abstract:

The research focuses on creating pattern designs inspired by the pagodas, Phra Maha Chedi of King Rama I-IV, located in the temple Wat Phra Chetuphon Vimolmangklararm Rajwaramahaviharn. Different aspects of the temple were studied, including its history, architecture, and significance, and the techniques used to decorate the pagodas, Phra Maha Chedi of King Rama I-IV, together with the composition of arts and the forms of pattern design, all of which led to the outcome of four applied Thai patterns. The four patterns combine traditional Thai design with an international scheme while maintaining the distinctiveness of the glazed mosaic tiles of each Phra Maha Chedi. The patterns consist of rounded and notched petal flowers, leaves and vines, various square shapes, and the original colors, updated for modernity. These elements are then grouped and combined with new techniques, resulting in pattern designs with modern aspects that simultaneously reflect the charm and aesthetic of Thai craftsmanship, which are eternally embedded in the designs.

Keywords: Chedi, Pagoda, pattern, Wat

Procedia PDF Downloads 387
7594 Open-Loop Vector Control of Induction Motor with Space Vector Pulse Width Modulation Technique

Authors: Karchung, S. Ruangsinchaiwanich

Abstract:

This paper presents an open-loop vector control method for an induction motor with the space vector pulse width modulation (SVPWM) technique. Normally, closed-loop speed control is preferred and is believed to be more accurate. However, it requires a position sensor to track the rotor position, which is not desirable for certain workspace applications. This paper exhibits the performance of a three-phase induction motor with the simplest control algorithm, without the use of a position sensor or an estimation block to estimate rotor position for sensorless control. The motor stator currents are measured and transformed to the synchronously rotating (d-q-axis) frame by means of the Clarke and Park transformations. The actual control happens in this frame, where the measured currents are compared with the reference currents. The error signal is fed to a conventional PI controller, and the corrected d-q voltage is generated. The controller outputs are transformed back to three-phase voltages and fed to the SVPWM block, which generates the PWM signal for the voltage source inverter. The open-loop vector control model, along with the SVPWM algorithm, is modeled in MATLAB/Simulink software and is experimentally validated on a TMS320F28335 DSP board.
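
The Clarke and Park transformations at the heart of this scheme can be sketched as follows (amplitude-invariant form; a balanced three-phase current set maps to a constant d-q pair):

```python
import numpy as np

def clarke(ia, ib, ic):
    """Clarke transform: three phase currents -> stationary alpha-beta frame
    (amplitude-invariant form)."""
    i_alpha = (2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = (2 / 3) * (np.sqrt(3) / 2) * (ib - ic)
    return i_alpha, i_beta

def park(i_alpha, i_beta, theta):
    """Park transform: alpha-beta -> synchronously rotating d-q frame."""
    i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
    i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
    return i_d, i_q

# Balanced three-phase currents sampled at one instant, rotation angle theta.
theta = np.deg2rad(30)
ia = np.cos(theta)
ib = np.cos(theta - 2 * np.pi / 3)
ic = np.cos(theta + 2 * np.pi / 3)
print(park(*clarke(ia, ib, ic), theta))  # ~ (1.0, 0.0): a pure d-axis current
```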

Keywords: electric drive, induction motor, open-loop vector control, space vector pulse width modulation technique

Procedia PDF Downloads 147
7593 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the time at which each selected appliance changes its state. In order to fit with the capabilities of practical existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical software package that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison with the many supervised techniques used for such cases. We extract a power interval within which the operation of the selected appliance falls, along with a time vector for the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical, and statistical features. Those signatures are then used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics: accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as detection techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
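
A minimal sketch of the DTW comparison used to match extracted signatures against appliance models; the power profiles below are invented stand-ins for real 1/60 Hz data:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two power signatures."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy 1/60 Hz power profiles (watts): a fridge-like cycle, a time-shifted copy,
# and a kettle-like spike. Real signatures come from the event-detection stage.
fridge = np.array([0, 120, 125, 122, 118, 0, 0], dtype=float)
shifted = np.array([0, 0, 118, 124, 121, 119, 0], dtype=float)
kettle = np.array([0, 2000, 2050, 0, 0, 0, 0], dtype=float)
print("fridge vs shifted:", dtw_distance(fridge, shifted))   # small: same appliance
print("fridge vs kettle :", dtw_distance(fridge, kettle))    # large: different
```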

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 78
7592 Study on Acoustic Source Detection Performance Improvement of Microphone Array Installed on Drones Using Blind Source Separation

Authors: Youngsun Moon, Yeong-Ju Go, Jong-Soo Choi

Abstract:

Most drones that currently carry out surveillance/reconnaissance missions are equipped primarily with optical equipment, but a microphone array is also needed to estimate the location of an acoustic source, as it can provide additional information in the absence of optical equipment. The purpose of this study is to estimate the Direction of Arrival (DOA), based on Time Difference of Arrival (TDOA) estimation, of an acoustic source from a drone. The problem is that it is impossible to measure a clean target acoustic source because of the drone's own noise. The way to overcome this problem is to separate the drone noise from the target acoustic source using Blind Source Separation (BSS) based on Independent Component Analysis (ICA). ICA can be performed under the assumptions that the drone noise and the target acoustic source are independent and that each signal is non-Gaussian. To maximize the non-Gaussianity of each signal, we use negentropy and kurtosis, grounded in probability theory. As a result, we can improve the TDOA and DOA estimation of the target source in a noisy environment. We simulated the performance of the DOA algorithm with the BSS algorithm applied, and demonstrated the simulation through an experiment in an anechoic wind tunnel.
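
A minimal two-microphone illustration of the separation step using FastICA (which maximizes non-Gaussianity); the signals and mixing matrix are invented for the sketch:

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 8000)
target = np.sign(np.sin(2 * np.pi * 5 * t))        # stand-in target acoustic source
drone = 0.8 * np.random.standard_normal(len(t))    # broadband rotor/motor noise

# Two microphones observe different linear mixtures of the two sources.
A = np.array([[1.0, 0.6], [0.4, 1.0]])             # assumed mixing matrix
X = np.c_[target, drone] @ A.T

ica = FastICA(n_components=2, random_state=0)      # maximizes non-Gaussianity
S_est = ica.fit_transform(X)                       # estimated independent sources
corrs = [abs(np.corrcoef(S_est[:, k], target)[0, 1]) for k in range(2)]
print("Best correlation with the target source:", round(max(corrs), 3))
```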

Keywords: aeroacoustics, acoustic source detection, time difference of arrival, direction of arrival, blind source separation, independent component analysis, drone

Procedia PDF Downloads 162
7591 Wavelet Coefficients Based on Orthogonal Matching Pursuit (OMP) Based Filtering for Remotely Sensed Images

Authors: Ramandeep Kaur, Kamaljit Kaur

Abstract:

In recent years, remote sensing technology has been growing rapidly. Image enhancement is one of the most commonly used image processing operations, and noise reduction plays a very important role in digital image processing; various techniques have been put forward to reduce the noise of remotely sensed images. Noise reduction using wavelet coefficients based on Orthogonal Matching Pursuit (OMP) affects the edges less than other available methods, but it is not as well established as dedicated edge-preservation techniques. In this paper, we therefore provide a new technique, minimum-patch-based noise reduction OMP, which reduces the noise in an image while using an edge-preservation patch that preserves the edges of the image, and which presents superior results to the existing OMP technique. Experimental results show that the proposed minimum-patch approach outperforms existing techniques.
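
The sparse-coding view behind OMP denoising can be sketched as follows; the random dictionary and noise level are assumptions, and the paper's wavelet and edge-preserving patch stages are not reproduced:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# A noisy signal patch is approximated by a few atoms of a dictionary;
# the residual is treated as noise.
n_atoms, patch_len = 100, 64
D = rng.standard_normal((patch_len, n_atoms))
D /= np.linalg.norm(D, axis=0)                 # unit-norm dictionary atoms

coef_true = np.zeros(n_atoms)
coef_true[rng.choice(n_atoms, 5, replace=False)] = rng.standard_normal(5)
clean = D @ coef_true
noisy = clean + 0.05 * rng.standard_normal(patch_len)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False)
omp.fit(D, noisy)
denoised = D @ omp.coef_
print("noise level:", round(np.linalg.norm(noisy - clean), 3))
print("after OMP  :", round(np.linalg.norm(denoised - clean), 3))
```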

Keywords: image denoising, minimum patch, OMP, WCOMP

Procedia PDF Downloads 389
7590 Obtaining High-Dimensional Configuration Space for Robotic Systems Operating in a Common Environment

Authors: U. Yerlikaya, R. T. Balkan

Abstract:

In this research, a method is developed to obtain a high-dimensional configuration space for path planning problems. In typical cases, path planning problems are solved directly in the 3-dimensional (3D) workspace. However, this method is inefficient in handling robots with various geometric and mechanical restrictions. To overcome these difficulties, path planning may be formalized and solved in a new space called the configuration space. The number of dimensions of the configuration space equals the number of degrees of freedom of the system of interest. The method can be applied in two ways. In the first way, the point clouds of all the bodies of the system and their interactions are used. The second way is performed by using the clearance function of simulation software, with which the minimum distances between the surfaces of bodies are measured simultaneously. A double-turret system is considered within the scope of this study, and its 4-D configuration space is obtained in these two ways. As a result, the difference between the two methods is around 1%, depending on the density of the point cloud, and the disparity steadily decreases as the point cloud density increases. At the end of the study, in order to verify the obtained 4-D configuration space, a 4-D path planning problem was realized as 2-D + 2-D, and a sample path planning was carried out using the A* algorithm. The accuracy of the configuration space was then proved using the obtained paths on the simulation model of the double-turret system.
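
For reference, the verification step's A* search can be sketched on a small 2-D occupancy grid (a toy stand-in; the paper decomposes its 4-D planning as 2-D + 2-D):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2-D occupancy grid (1 = obstacle) with a Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came, seen = {}, set()
    while open_set:
        f, g, cur, parent = heapq.heappop(open_set)
        if cur in seen:
            continue
        seen.add(cur)
        came[cur] = parent
        if cur == goal:                       # reconstruct the path
            path = []
            while cur:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```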

Keywords: A* algorithm, autonomous turrets, high-dimensional C-space, manifold C-space, point clouds

Procedia PDF Downloads 139
7589 A Variable Neighborhood Search with Tabu Conditions for the Roaming Salesman Problem

Authors: Masoud Shahmanzari

Abstract:

The aim of this paper is to present a Variable Neighborhood Search (VNS) with Tabu Search (TS) conditions for the Roaming Salesman Problem (RSP). The RSP is a special case of the well-known traveling salesman problem (TSP) in which a set of cities with time-dependent rewards and a set of campaign days are given. Each city can be visited on any day, and a subset of cities can be visited multiple times. The goal is to determine an optimal campaign schedule consisting of daily open/closed tours that visit some cities and maximize the total net benefit, while respecting daily maximum tour duration constraints and the need to return to the campaign base frequently. This problem arises in several real-life applications, particularly in election logistics, where depots are not fixed. We formulate the problem as a mixed-integer linear program (MILP), in which we capture as many real-world aspects of the RSP as possible. We also present a hybrid metaheuristic algorithm based on a VNS with TS conditions. The initial feasible solution is constructed via a new matheuristic approach based on the decomposition of the original problem. Next, this solution is improved in terms of the collected rewards using the proposed local search procedure. We consider a set of 81 cities in Turkey and a campaign of 30 days as our largest instance. Computational results on real-world instances show that the developed algorithm can find near-optimal solutions effectively.
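
A generic VNS skeleton with a simple tabu condition, run on a toy instance with time-dependent-style rewards (the paper's RSP-specific neighborhoods and matheuristic construction are not reproduced):

```python
import random

def vns(initial, neighborhoods, objective, max_iter=200, tabu_size=50):
    """Generic VNS skeleton with a tabu list that blocks revisiting candidates."""
    cur = initial
    best, best_val = cur, objective(cur)
    tabu = []
    for _ in range(max_iter):
        k = 0
        while k < len(neighborhoods):
            cand = neighborhoods[k](cur)          # shake in neighborhood k
            key = tuple(cand)
            if key in tabu:                       # tabu condition: skip revisits
                k += 1
                continue
            tabu.append(key)
            if len(tabu) > tabu_size:
                tabu.pop(0)
            if objective(cand) > objective(cur):  # improvement: move, restart at k=0
                cur, k = cand, 0
            else:
                k += 1
        if objective(cur) > best_val:
            best, best_val = cur, objective(cur)
    return best, best_val

def swap_two(tour):
    t = tour[:]
    i, j = random.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def reverse_segment(tour):
    t = tour[:]
    i, j = sorted(random.sample(range(len(t)), 2))
    t[i:j + 1] = reversed(t[i:j + 1])
    return t

# Toy objective: visiting high-reward cities earlier pays more, a stand-in
# for the RSP's time-dependent rewards.
random.seed(0)
rewards = [5, 9, 2, 7, 4]
objective = lambda tour: sum(rewards[c] / (i + 1) for i, c in enumerate(tour))
best, val = vns(list(range(5)), [swap_two, reverse_segment], objective)
print(best, round(val, 2))
```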

Keywords: optimization, routing, election logistics, heuristics

Procedia PDF Downloads 93
7588 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays under the travel time limitation. This work is pioneering in that, while the existing literature evaluates travel time reliability via a single optimization path, the proposed QoS captures the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc has a new travel time weight, which takes the value 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v; the newly generated nodes u and v are perfect nodes, and the new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In summary, a transportation network is analyzed with an extended flow network model in which each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently, and computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
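
The node-splitting transformation can be sketched with networkx: each time-weighted intermediate node becomes an in/out pair joined by an arc carrying the travel time (the tiny network and all weights below are invented for illustration):

```python
import networkx as nx

# Toy transportation network: split each intermediate node into in/out halves
# so its per-unit travel time becomes an arc weight.
G = nx.DiGraph()
node_time = {"a": 2, "b": 3}                 # per-unit travel time at nodes
for n, t in node_time.items():
    G.add_edge(f"{n}_in", f"{n}_out", capacity=10, weight=t)

G.add_edge("s", "a_in", capacity=8, weight=0)
G.add_edge("s", "b_in", capacity=6, weight=0)
G.add_edge("a_out", "t", capacity=7, weight=0)
G.add_edge("b_out", "t", capacity=9, weight=0)

flow = nx.max_flow_min_cost(G, "s", "t")     # min total travel time at max flow
value = sum(flow["s"][v] for v in G.successors("s"))
cost = nx.cost_of_flow(G, flow)
print("max flow:", value, "| total travel time:", cost)
```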

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 221