Search results for: word processing
2986 Kannada Handwritten Character Recognition by Edge Hinge and Edge Distribution Techniques Using Manhattan and Minimum Distance Classifiers
Authors: C. V. Aravinda, H. N. Prakash
Abstract:
In this paper, we present fusion techniques and the state of the art pertaining to South Indian language (SIL) character recognition systems. In the first step, the text is preprocessed and normalized so that text identification can be performed correctly. The second step extracts relevant and informative features. The third step implements the classification decision. The three stages involved are thus data acquisition and preprocessing, feature extraction, and classification. Here we concentrate on two techniques to obtain features: feature extraction and feature selection. The edge-hinge distribution is a feature that characterizes the changes in direction of a script stroke in handwritten text. It is extracted by means of a window that is slid over an edge-detected binary handwriting image. Whenever the central pixel of the window is 'on', the two edge fragments (i.e., connected sequences of pixels) emerging from this central pixel are traced, and their directions are measured and stored as pairs. A joint probability distribution is obtained from a large sample of such pairs. Despite continuous effort, handwriting identification remains a challenging issue because different approaches use different varieties of features with differing effectiveness. Therefore, our study focuses on handwriting recognition based on feature selection to simplify the feature extraction task, optimize classification system complexity, reduce running time, and improve classification accuracy.
Keywords: word segmentation and recognition, character recognition, optical character recognition, handwritten character recognition, South Indian languages
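To make the edge-hinge extraction concrete, the sketch below accumulates the joint probability distribution of direction pairs from a binary edge image. It is a simplified stand-in, not the authors' implementation: it samples a diamond-shaped window perimeter at a fixed fragment length instead of tracing connected pixel sequences, and the window size and number of direction bins are illustrative assumptions.

```python
import numpy as np

def edge_hinge_distribution(edge_img, frag_len=4, n_dirs=16):
    """Visit every 'on' pixel, quantize the directions of the edge fragments
    emerging from it, and accumulate each direction pair into a joint
    histogram, normalised to a probability distribution."""
    h, w = edge_img.shape
    hist = np.zeros((n_dirs, n_dirs))
    for y, x in zip(*np.nonzero(edge_img)):
        dirs = []
        for dy in range(-frag_len, frag_len + 1):
            for dx in range(-frag_len, frag_len + 1):
                # diamond perimeter stands in for tracing connected fragments
                if abs(dy) + abs(dx) == frag_len:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and edge_img[yy, xx]:
                        ang = np.arctan2(dy, dx) % (2 * np.pi)
                        dirs.append(int(ang * n_dirs / (2 * np.pi)) % n_dirs)
        for i in range(len(dirs)):
            for j in range(i + 1, len(dirs)):  # every pair of emerging fragments
                hist[dirs[i], dirs[j]] += 1
    return hist / hist.sum() if hist.sum() else hist
```

The normalised histogram is the edge-hinge feature vector that the Manhattan and minimum-distance classifiers then compare between writers.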
Procedia PDF Downloads 494
2985 Comparison of Tribological and Mechanical Properties of White Metal Produced by Laser Cladding and Conventional Methods
Authors: Jae-Il Jeong, Hoon-Jae Park, Jung-Woo Cho, Yang-Gon Kim, Jin-Young Park, Joo-Young Oh, Si-Geun Choi, Seock-Sam Kim, Young Tae Cho, Chan Gyu Kim, Jong-Hyoung Kim
Abstract:
Bearing components are strongly required to exhibit low vibration and wear in order to achieve high durability and a long lifetime. In industry, bearing durability is improved by treating the bearing surface using centrifugal casting or gravity casting. However, these manufacturing methods suffer from long processing times, high defect rates, and harmful health effects. To solve these problems, laser cladding deposition offers fast processing and good adhesion. Therefore, the optimum conditions of white metal laser deposition should be studied to minimize bearing contact axis wear using laser cladding techniques. In this study, we deposit a soft white metal layer on SCM440, which is mainly used for shafts and bolts. In the laser deposition process, the laser power, powder feed rate, and laser head speed are controlled to find the optimal conditions. We also measure hardness using a micro Vickers tester and perform FE-SEM (Field Emission Scanning Electron Microscopy) and EDS (Energy Dispersive Spectroscopy) analyses to study the mechanical properties and surface characteristics as these parameters change. Furthermore, this paper suggests the optimum conditions of laser cladding deposition for application in industrial fields. This work was supported by the Industrial Innovation Project of the Korea Evaluation Institute of Industrial Technology (KEIT), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (Research no. 10051653).
Keywords: laser deposition, bearing, white metal, mechanical properties
Procedia PDF Downloads 264
2984 Establishment of Precision System for Underground Facilities Based on 3D Absolute Positioning Technology
Authors: Yonggu Jang, Jisong Ryu, Woosik Lee
Abstract:
The study aims to address the limitations of existing underground facility exploration equipment in terms of exploration depth range, relative depth measurement, data processing time, and human-centered interpretation of ground penetrating radar (GPR) images. It proposes a precision underground facility exploration system based on 3D absolute positioning technology that can accurately survey up to a depth of 5 m and measure the 3D absolute location of underground facilities. Software and hardware technologies were developed to build this system. The software technologies include absolute positioning, ground-surface location synchronization for the GPR exploration equipment, AI interpretation of GPR exploration images, and composite data processing based on integrated underground space maps; together they build a precise 3D DEM, synchronize the GPR sensor's 3D ground-surface coordinates, automatically detect underground facility information in GPR exploration images, and improve accuracy through comparative analysis of the three-dimensional location information. The hardware comprises a vehicle-type exploration system and a cart-type exploration system. Data were collected using the developed system, the GPR exploration images were analyzed with AI technology, and the three-dimensional location information of the surveyed facilities was compared against the integrated underground space map. The findings and technological advancements are essential for underground safety management in Korea: the proposed system contributes significantly to establishing precise location information for underground facilities and improves the accuracy and efficiency of exploration.
Keywords: 3D absolute positioning, AI interpretation of GPR exploration images, complex data processing, integrated underground space maps, precision exploration system for underground facilities
Procedia PDF Downloads 62
2983 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g., TA-lib, pandas-ta). The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
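The outlier-handling idea described above — characterizing the parameter space of the dynamic training set and discarding prediction points that fall outside it — can be sketched generically. This is an illustrative nearest-neighbour dissimilarity check, not FreqAI's actual implementation; the neighbour count and quantile cutoff are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def drop_outliers(train_X, pred_X, n_neighbors=10, quantile=0.99):
    """Keep only prediction points whose mean distance to the training
    feature space stays within what the training data itself exhibits."""
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(train_X)
    train_dist, _ = nn.kneighbors(train_X)       # distances inside the space
    cutoff = np.quantile(train_dist.mean(axis=1), quantile)
    pred_dist, _ = nn.kneighbors(pred_X)
    keep = pred_dist.mean(axis=1) <= cutoff      # reject points far outside
    return pred_X[keep], keep
```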
Procedia PDF Downloads 89
2982 Building Cardiovascular Fitness through Plyometric Training
Authors: Theresa N. Uzor
Abstract:
The term 'cardiovascular fitness' is of much interest to the people of Nigeria, especially at a time when some heart diseases run in families. Cardiovascular fitness is the ability of the heart and lungs to supply oxygen-rich blood to the working muscle tissues. This type of fitness is a health-related component of physical fitness that is brought about by sustained physical activity such as plyometric training. Plyometrics is a form of advanced fitness training that uses fast muscular contractions to improve power and speed, applied in sports performance by coaches and athletes. Plyometric training involves a rapid stretching of a muscle (eccentric phase) immediately followed by a concentric or shortening action of the same muscle and connective tissue. The most basic example of true plyometric training is running, and it can be safe for a wide variety of populations. This paper focuses on building cardiovascular fitness through plyometric training; its central concern is cardiovascular fitness and plyometric training, together with the factors of cardiovascular fitness. Plyometric training at any age provides multiple benefits beyond weight control and weight loss, decreasing the risk of cardiovascular disease, stroke, high blood pressure, diabetes, and other conditions. Participation in plyometric training also increases an individual's metabolism, burning more calories even at rest and reducing weight. Guidelines are recommended for planning a plyometric training programme so as to minimise the chance of injury. With plyometric training, fortunes in Nigeria can change for good, especially now that cardiovascular diseases have been increasing within the society; great savings could thus be made.
Keywords: aerobic, cardiovascular, concentric, stretch-shortening cycle, plyometric
Procedia PDF Downloads 139
2981 Intelligent Process and Model Applied for E-Learning Systems
Authors: Mafawez Alharbi, Mahdi Jemmali
Abstract:
E-learning is a developing area, especially in education. E-learning can provide several benefits to learners. An intelligent system that collects the components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and meeting users' needs according to their preferences. The proposed system builds knowledge from successive evaluations made by the user. In addition, it can learn from the user's habits. Finally, we show a walk-through demonstrating how the intelligent process works.
Keywords: artificial intelligence, architecture, e-learning, software engineering, processing
Procedia PDF Downloads 191
2980 An Analysis of Miguel Syjuco’s Ilustrado: The Reconstructed Oriental Image
Authors: Christine Ivy A. Nogot
Abstract:
A colony of Spain for more than three centuries, the Philippines has a deep-rooted structure of Western ideologies and colonialism. The late 19th century, the period of Enlightenment, had a significant impact on Philippine history when a group of middle-class Filipino men were sent to Europe to study. They were called Ilustrados, from the Spanish word for erudite: the enlightened, well-educated, intellectual scholars. Their writings provided intellectual grounds for the awakening of national consciousness that eventually prompted national movements and revolutions, and they helped to establish a postcolonial society. In the modern era, Miguel Syjuco, a Filipino expatriate, wrote a novel titled Ilustrado. It represents the liberal mind of the diasporic author in contemporary discourse and provides a critical examination of the ilustrado in transition through the character of Miguel, who is also an expatriate writer. Using Syjuco’s award-winning novel as the primary text and anchored in Said’s concept of Orientalism, this paper examines how features of the Eastern world are depicted in literary discourse. It treats Said’s concept of Orientalism as a hegemonic discursive structure and shows how Western superiority influences Eastern culture in literary discourse. It also draws on Gramsci’s theory of cultural hegemony to explore Said’s argument that Western powers conquer the Orient through culture and ideology. This paper presents how dominant ideologies and the social context redefine the ilustrado in the contemporary era.
Keywords: cultural hegemony, ilustrado, orientalism, postcolonial
Procedia PDF Downloads 76
2979 Integrating Natural Language Processing (NLP) and Machine Learning in Lung Cancer Diagnosis
Authors: Mehrnaz Mostafavi
Abstract:
The assessment and categorization of incidental lung nodules present a considerable challenge in healthcare, often necessitating resource-intensive multiple computed tomography (CT) scans for growth confirmation. This research addresses this issue by introducing a distinct computational approach leveraging radiomics and deep-learning methods. However, understanding local services is essential before implementing these advancements. With diverse tracking methods in place, there is a need for efficient and accurate identification approaches, especially in the context of managing lung nodules alongside pre-existing cancer scenarios. This study explores the integration of text-based algorithms in medical data curation, indicating their efficacy in conjunction with machine learning and deep-learning models for identifying lung nodules. Combining medical images with text data has demonstrated superior data retrieval compared to using each modality independently. While deep learning and text analysis show potential in detecting previously missed nodules, challenges persist, such as increased false positives. The presented research introduces a Structured-Query-Language (SQL) algorithm designed for identifying pulmonary nodules in a tertiary cancer center, externally validated at another hospital. Leveraging natural language processing (NLP) and machine learning, the algorithm categorizes lung nodule reports based on sentence features, aiming to facilitate research and assess clinical pathways. The hypothesis posits that the algorithm can accurately identify lung nodule CT scans and predict concerning nodule features using machine-learning classifiers. Through a retrospective observational study spanning a decade, CT scan reports were collected, and an algorithm was developed to extract and classify data. Results underscore the complexity of lung nodule cohorts in cancer centers, emphasizing the importance of careful evaluation before assuming a metastatic origin. The SQL and NLP algorithms demonstrated high accuracy in identifying lung nodule sentences, indicating potential for local service evaluation and research dataset creation. Machine-learning models exhibited strong accuracy in predicting concerning changes in lung nodule scan reports. While limitations include variability in disease group attribution, the potential for correlation rather than causality in clinical findings, and the need for further external validation, the algorithm's accuracy and potential to support clinical decision-making and healthcare automation represent a significant stride in lung nodule management and research.
Keywords: lung cancer diagnosis, structured-query-language (SQL), natural language processing (NLP), machine learning, CT scans
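A minimal sketch of the sentence-level classification step described above — categorizing report sentences by their features — might look as follows. The sentences, labels, and model choice here are hypothetical illustrations, not the study's validated SQL/NLP algorithm.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical CT-report sentences (label 1 = mentions a pulmonary nodule)
sentences = [
    "A 6 mm pulmonary nodule is noted in the right upper lobe.",
    "No focal consolidation, effusion or pneumothorax.",
    "Stable 4 mm nodule in the left lower lobe.",
    "The heart size is within normal limits.",
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["There is a new 8 mm nodule in the lingula."]))
```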
Procedia PDF Downloads 101
2978 Multiscale Process Modeling Analysis for the Prediction of Composite Strength Allowables
Authors: Marianna Maiaru, Gregory M. Odegard
Abstract:
During the processing of high-performance thermoset polymer matrix composites, chemical reactions occur during elevated pressure and temperature cycles, causing the constituent monomers to crosslink and form a molecular network that can gradually sustain stress. As the crosslinking process progresses, the material naturally experiences a gradual shrinkage due to the increase in covalent bonds in the network. Once the cured composite completes the cure cycle and is brought to room temperature, the thermal expansion mismatch of the fibers and matrix causes additional residual stresses to form. These compounded residual stresses can compromise the reliability of the composite material and affect the composite strength. Composite process modeling is greatly complicated by the multiscale nature of the composite architecture. At the molecular level, the degree of cure controls the local shrinkage and thermal-mechanical properties of the thermoset. At the microscopic level, the local fiber architecture and packing affect the magnitudes and locations of residual stress concentrations. At the macroscopic level, the layup sequence controls the nature of crack initiation and propagation due to residual stresses. The goal of this research is to use molecular dynamics (MD) and finite element analysis (FEA) to predict the residual stresses in composite laminates and the corresponding effect on composite failure. MD is used to predict the polymer shrinkage and thermomechanical properties as a function of degree of cure. This information is used as input into FEA to predict the residual stresses on the microscopic level resulting from the complete cure process. Virtual testing is subsequently conducted to predict strength allowables. Experimental characterization is used to validate the modeling.
Keywords: molecular dynamics, finite element analysis, processing modeling, multiscale modeling
Procedia PDF Downloads 92
2977 Compensatory Articulation of Pressure Consonants in Telugu Cleft Palate Speech: A Spectrographic Analysis
Authors: Indira Kothalanka
Abstract:
For individuals born with a cleft palate (CP), there is no separation between the nasal cavity and the oral cavity, so they cannot build up enough air pressure in the mouth for speech. Therefore, it is common for them to have speech problems. Common cleft-type speech errors include abnormal articulation (compensatory or obligatory) and abnormal resonance (hyper-, hypo- and mixed nasality). These are generally resolved after palate repair. However, in some individuals, articulation problems persist even after palate repair. Such individuals develop variant articulations in an attempt to compensate for the inability to produce the target phonemes. A spectrographic analysis is used to investigate the compensatory articulatory behaviours of pressure consonants in the speech of 10 Telugu-speaking individuals aged between 7-17 years with a history of cleft palate. Telugu is a Dravidian language spoken in the Andhra Pradesh and Telangana states of India; it has the third largest number of native speakers in India and is the most spoken Dravidian language. The speech of the informants is analysed using a single-word list, sentences, a passage, and conversation. Spectrographic analysis is carried out using PRAAT, a speech analysis software. The place and manner of articulation of consonant sounds are studied through spectrograms with the help of various acoustic cues. The types of compensatory articulation identified are glottal stops, palatal stops, uvular stops, velar stops, and nasal fricatives, which are non-native in Telugu.
Keywords: cleft palate, compensatory articulation, spectrographic analysis, PRAAT
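For readers who want to script such measurements rather than use PRAAT's graphical interface, the sketch below uses the praat-parselmouth Python bindings to PRAAT's analysis engine. The file name is a placeholder, and this scripted route is an assumption rather than the procedure the study reports.

```python
import parselmouth  # praat-parselmouth: Python bindings to the PRAAT engine

snd = parselmouth.Sound("consonant_token.wav")         # hypothetical recording
spectrogram = snd.to_spectrogram(window_length=0.005)  # wide-band spectrogram
formants = snd.to_formant_burg()                       # formant tracks

t = snd.duration / 2  # sample the acoustic cues mid-token
print("F1 (Hz):", formants.get_value_at_time(1, t))
print("F2 (Hz):", formants.get_value_at_time(2, t))
```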
Procedia PDF Downloads 443
2976 Smart Online Library Catalog System with Query Expansion for the University of the Cordilleras
Authors: Vincent Ballola, Raymund Dilan, Thelma Palaoag
Abstract:
The Smart Online Library Catalog System with Query Expansion seeks to address the low usage of the library that followed the emergence of the Internet. Library users are not accustomed to catalog systems that require a query to contain the exact words, without any mistakes, for decent results to appear. The graphical user interface of the current system also has a rather steep learning curve for users to adapt to. With a simple graphical user interface inspired by Google, users can search quickly just by inputting their query and hitting the search button. Because of the query expansion techniques incorporated into the new system, such as stemming, thesaurus search, and weighted search, users can obtain more relevant results from their queries. The system adds the root words of the user's query to the query itself, which is then cross-referenced against a thesaurus database to find synonyms that are also added to the query. The results are then arranged by the number of times each word has been searched. Online queries are also added to the results for additional references. Users showed notable increases in efficiency and usability due to the familiar interface and the query expansion techniques incorporated in the system. The simple yet familiar design led to a better user experience. Users also said that they would be more inclined to use the library because of the new system. The incorporation of query expansion techniques gives users a notable increase in results, which in turn gives them a wider range of resources found in the library. Used books mean more knowledge imparted to the users.
Keywords: query expansion, catalog system, stemming, weighted search, usability, thesaurus search
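The three expansion techniques — stemming, thesaurus lookup, and frequency-weighted ranking — compose naturally; a minimal sketch is shown below. The thesaurus dictionary and search-count store are toy stand-ins for the system's databases, not its actual schema.

```python
from nltk.stem import PorterStemmer

THESAURUS = {"book": ["volume", "publication"]}  # stand-in for the thesaurus DB
SEARCH_COUNTS = {"book": 40, "volume": 12}       # past search frequencies

def expand_query(query):
    stemmer = PorterStemmer()
    terms = set(query.lower().split())
    terms |= {stemmer.stem(t) for t in terms}    # 1) add root words
    for t in list(terms):                        # 2) add thesaurus synonyms
        terms |= set(THESAURUS.get(t, []))
    # 3) weighted search: rank terms by how often each has been searched
    return sorted(terms, key=lambda t: SEARCH_COUNTS.get(t, 0), reverse=True)

print(expand_query("Books"))  # -> ['book', 'volume', 'books', 'publication']
```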
Procedia PDF Downloads 388
2975 Bactericidal Efficacy of Quaternary Ammonium Compound on Carriers with Food Additive Grade Calcium Hydroxide against Salmonella Infantis and Escherichia coli
Authors: M. Shahin Alam, Satoru Takahashi, Mariko Itoh, Miyuki Komura, Mayuko Suzuki, Natthanan Sangsriratanakul, Kazuaki Takehara
Abstract:
Cleaning and disinfection are key components of routine biosecurity in livestock farming and the food processing industry. The use of suitable disinfectants at the proper concentration is an important factor in a successful biosecurity program. Disinfectants have optimum bactericidal and virucidal efficacies at temperatures above 20°C, but very few studies on the application and effectiveness of disinfectants at low temperatures have been done. In the present study, the bactericidal efficacies of food additive grade calcium hydroxide (FdCa(OH)₂), a quaternary ammonium compound (QAC) and their mixture were investigated under different conditions, including time, organic materials (fetal bovine serum: FBS) and temperature, in either suspension or carrier tests. Salmonella Infantis and Escherichia coli, which are the most prevalent gram-negative bacteria in commercial poultry housing and the food processing industry, were used in this study. Initially, we evaluated these disinfectants at two different temperatures (4°C and room temperature (RT) (25°C ± 2°C)) and 7 contact times (0, 5 and 30 sec; 1, 3, 20 and 30 min), with suspension tests either in the presence or absence of 5% FBS. Secondly, we investigated the bactericidal efficacies of these disinfectants by carrier tests (rubber, stainless steel and plastic) at the same temperatures and 4 contact times (30 sec; 1, 3, and 5 min). Then, we compared the bactericidal efficacies of each disinfectant with their mixture, as follows. When QAC was diluted with redistilled water (dW2) at 1:500 (QACx500) to obtain a final didecyl-dimethylammonium chloride (DDAC) concentration of 200 ppm, it could inactivate Salmonella Infantis within 5 sec at RT, either with or without 5% FBS, in the suspension test; however, at 4°C it required 30 min in the presence of 5% FBS. FdCa(OH)₂ solution alone could inactivate the bacteria within 1 min both at RT and 4°C, even with 5% FBS. When FdCa(OH)₂ powder was added at a final concentration of 0.2% to QACx500 (Mix500), the mixture could inactivate the bacteria within 30 sec and 5 sec, respectively, with or without 5% FBS at 4°C. The findings from the suspension test indicated that low temperature inhibited the bactericidal efficacy of QAC, whereas Mix500 was effective regardless of short contact time and low temperature, even with 5% FBS. In the carrier test, a single disinfectant required slightly more time to inactivate bacteria on rubber and plastic surfaces than on stainless steel. However, Mix500 could inactivate S. Infantis on rubber, stainless steel and plastic surfaces within 30 sec and 1 min, respectively, at RT and 4°C; for E. coli, it required only 30 sec at both temperatures. Thus, synergistic effects were observed on different carriers at both temperatures. For a successful enhancement of biosecurity during winter, disinfectants should be selected that have short contact times with optimum efficacy against the target pathogen. The present study's findings help farmers make proper strategies for the application of disinfectants in livestock farming and the food processing industry.
Keywords: carrier, food additive grade calcium hydroxide (FdCa(OH)₂), quaternary ammonium compound, synergistic effects
Procedia PDF Downloads 294
2974 A Study in the Formation of a Term: Sahaba
Authors: Abdul Rahman Chamseddine
Abstract:
The Companions of the Prophet Muhammad, the Sahaba, are regarded as the first link between him and later believers who did not know him or learn from him directly. This makes the Sahaba a link in the chain between God and the ummah (community). Apart from their role in spreading the Prophet’s teachings, they came to be regarded as role models, representing the Islamic ideal of life as prescribed by the Prophet himself. According to Hadith, the Prophet had promised some Sahaba unqualified admission to paradise. It is commonly agreed that the Sahaba have the following attributes in common: God is well pleased with them; they will surely go to paradise; they are perfectly trustworthy; and they are the authorities from whom Muslims can learn all matters related to their religion. No other generation of Muslims has received the attention received by the Companions of the Prophet. In spite of the importance of the Sahaba in Islam, we still know comparatively little about them. There are at least two reasons for this. First, there is the overall scarcity of information surviving from the early period. At the death of the Prophet, it is said, there were more than 100,000 Companions. As we shall see, this is a complex issue, involving the definition of the term Sahaba. However, only a few Companions of the Prophet are known to us. Ibn Hajar al-‘Asqalani, who wrote in the fifteenth century A.D., was only able to collect facts about 11,000 of them (including those whose status as Sahaba was disputed). Ibn Sa‘d, Ibn ‘Abd al-Barr and Ibn al-Athir, all of whom lived earlier than Ibn Hajar, included in their respective works fewer lives of Sahaba than he did. If we consider Ibn Hajar’s Isaba as the most complete biographical account of the Sahaba that remains available, we have information, presumably, on approximately one tenth of them. The remaining nine tenths are apparently lost from the historical record. Second, discussion of the Sahaba tends to focus on those considered the most important among them, such as ‘Uthman, ‘Ali and Mu‘awiya, while others, who together number in the thousands, are less well known. This paper studies the origins of the term Sahaba, which became exclusive to the Companions of the Prophet rather than a synonym of the word 'companions' in general.
Keywords: companions, Hadith, Islamic history, Muhammad, Sahaba, transmission
Procedia PDF Downloads 416
2973 The Effect of Racism in the Media to Deal With Migration
Authors: Rasha Ali Dheyab, Edurad Vlad
Abstract:
Migration is associated with other important global issues, including development, poverty, and human rights. Migrants are often the most dynamic members of society; historically, migration has supported economic development, the rise of nations, and the enrichment of cultures. It also presents significant challenges. The word 'racism' is not just about beliefs or statements; it also encompasses the power to impose those beliefs or worldviews as hegemonic and to make them a basis for the denial of rights or equality. For this reason, racism is embedded in power relations of different types. Racism is not only an awareness of distinctions between groups; it also plays extremely practical roles in maintaining, first, inequitable social power arrangements and, second, racist behavioral manifestations such as verbal rejection, avoidance, discrimination, physical attack, and elimination. The focus here is on aspects of racism in the media's treatment of the migration phenomenon. The reproduction and promotion of racism by certain areas of the media is not a simple and straightforward process, and it is important to see how the media serve in the reproduction of racism. This article examines attitudes to migration as they have appeared in British periodicals over the last few years. It has become obvious that the role of the media in the reproduction of racism is inextricably linked to the general characteristics of racism and white domination in society, particularly the structural and ideological structuring of that kind of group power. This highlights the press's function as a business, social, and cultural institution; the press has to be examined in relation to economic and political institutions as well.
Keywords: British periodicals, culture studies, migration, racism
Procedia PDF Downloads 213
2972 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic model continues the traditional "one object, one right" theory, while the process model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. The design of the data property rights system has a hierarchical character aimed at decoupling raw data from data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for the different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with its mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of the rights holders with the prosperity and long-term development of society's public learning and the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although it is not the only path, the granting of data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the "bundle of rights" theory, it establishes specific three-level data rights, and it analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN, and Imerman v Tchenquiz. The paper concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, big data
Procedia PDF Downloads 39
2971 Building Atmospheric Moisture Diagnostics: Environmental Monitoring and Data Collection
Authors: Paula Lopez-Arce, Hector Altamirano, Dimitrios Rovas, James Berry, Bryan Hindle, Steven Hodgson
Abstract:
Efficient mould remediation and accurate diagnosis of the moisture that leads to condensation and mould growth in dwellings remain largely untapped. A number of factors contribute to the rising trend of excessive moisture in homes, mainly linked with modern living, increased levels of occupation and rising fuel costs, as well as with making homes more energy efficient. Environmental monitoring by means of data collection through logger sensors and survey forms has been performed in a range of buildings from different UK regions. Air and surface temperature and relative humidity values of residential areas affected by condensation and/or mould issues were recorded. Additional measurements were taken through different trials changing the type, location, and position of the loggers. In some instances, IR thermal images and ventilation rates were also acquired. Results have been interpreted together with key environmental parameters by processing and connecting data from loggers and survey questionnaires, both in buildings with and without moisture issues. Monitoring exercises carried out during winter and spring show the importance of developing and following accurate protocols for guidance, to obtain consistent, repeatable and comparable results and to improve the performance of environmental monitoring. A model and a protocol are being developed to build a diagnostic tool with the goal of performing simple but precise residential atmospheric moisture diagnostics that distinguish the cause of condensation and mould generation, i.e., a ventilation, insulation or heating system issue. This research shows the relevance of monitoring and processing environmental data to assign moisture risk levels and determine the origin of condensation or mould when dealing with excess atmospheric moisture in a building.
Keywords: environmental monitoring, atmospheric moisture, protocols, mould
Procedia PDF Downloads 139
2970 AI-Based Techniques for Online Social Media Network Sentiment Analysis: A Methodical Review
Authors: A. M. John-Otumu, M. M. Rahman, O. C. Nwokonkwo, M. C. Onuoha
Abstract:
Online social media networks have long served as a primary arena for group conversations, gossip, and text-based information sharing and distribution. The use of natural language processing techniques for text classification and unbiased decision-making is no longer far-fetched, yet proper classification of this textual information in a given context has remained difficult. As a result, we conducted a systematic review of previous literature on sentiment classification and AI-based techniques, in order to gain a better understanding of how to design and develop a robust, more accurate sentiment classifier that can correctly distinguish social media text of a given context between hate speech and inverted compliments with a high level of accuracy. We evaluated over 250 articles from digital sources like ScienceDirect, ACM, Google Scholar, and IEEE Xplore and whittled the number of studies down to 31. Findings revealed that deep learning approaches such as CNN, RNN, BERT, and LSTM outperformed various machine learning techniques in terms of accuracy. A large dataset is also necessary for developing a robust sentiment classifier and can be obtained from sources like Twitter, movie reviews, Kaggle, SST, and SemEval Task 4. Hybrid deep learning techniques like CNN+LSTM, CNN+GRU, and CNN+BERT outperformed both single deep learning techniques and machine learning techniques. The Python programming language outperformed Java for sentiment analyzer development due to its simplicity and AI library support. Based on some of the important findings from this study, we make recommendations for future research.
Keywords: artificial intelligence, natural language processing, sentiment analysis, social network, text
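As a concrete illustration of the hybrid architectures the review found strongest, a minimal CNN+LSTM sentiment classifier can be assembled in a few lines of Keras; the vocabulary size, sequence length, and layer widths below are placeholder assumptions, not values from any reviewed study.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20000, output_dim=128),  # token ids -> vectors
    tf.keras.layers.Conv1D(64, 5, activation="relu"),            # local n-gram features
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),                                    # long-range context
    tf.keras.layers.Dense(1, activation="sigmoid"),              # hate speech vs. not
])
model.build(input_shape=(None, 100))  # sequences padded to length 100
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```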
Procedia PDF Downloads 115
2969 Association of Transforming Growth Factor-β1 Gene 1800469 C > T and 1982073 C > T Polymorphism with Type 2 Diabetic Foot Ulcer Patient in Cipto Mangunkusumo National Hospital Jakarta
Authors: Dedy Pratama, Akhmadu Muradi, Hilman Ibrahim, Patrianef Darwis, Alexander Jayadi Utama, Raden Suhartono, D. Suryandari, Luluk Yunaini, Tom Ch Adriani
Abstract:
Objective: Diabetic foot ulcer (DFU) is one of the complications of type 2 diabetes mellitus (T2DM) that can lead to disability and death. Inadequate vascularization affects the healing process of DFU. Therefore, we investigated TGF-β1 polymorphism in relation to the occurrence of DFU in T2DM. Methods: We designed a case-control study of the TGF-β1 gene 1800469 C > T and 1982073 C > T polymorphisms in T2DM at Cipto Mangunkusumo National Hospital (RSCM), Jakarta, from June to December 2016. We used PCR techniques and compared the results between a group of T2DM patients with DFU as cases and patients without DFU as controls. Results: There were 203 patients: 102 with DFU and 101 controls without DFU; 49.8% were male and 50.2% female, with a mean age of about 56 years. For TGF-β1 1800469 C > T, the wild-type CC genotype was found in 44.8%, the mutant heterozygote CT in 10.8%, and the mutant homozygote TT in 11.3%. For TGF-β1 1982073 C > T, the wild-type CC genotype was found in 32.5%, the mutant heterozygote in 38.9%, and the mutant homozygote in 25.1%. Conclusion: The allele distribution for TGF-β1 1800469 C > T is C 75% and T 25%, and for TGF-β1 1982073 C > T it is C 53.8% and T 46.2%. In other words, TGF-β1 polymorphism plays a role in the occurrence and healing process of DFU in T2DM patients.
Keywords: diabetic foot ulcers, diabetes mellitus, polymorphism, TGF-β1
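The reported allele distributions follow from the genotype frequencies by simple allele counting (each heterozygote carries one copy of each allele); a quick check using the percentages quoted above:

```python
# Genotype proportions for TGF-B1 1800469 C > T as reported in the abstract
cc, ct, tt = 0.448, 0.108, 0.113
total = cc + ct + tt            # proportion of patients genotyped at this site
f_c = (cc + ct / 2) / total     # each CT genotype contributes one C allele
f_t = (tt + ct / 2) / total
print(f"C: {f_c:.0%}, T: {f_t:.0%}")  # -> C: 75%, T: 25%, matching the abstract
```

Running the same arithmetic on the 1982073 C > T genotypes (32.5%, 38.9%, 25.1%) reproduces the reported C 53.8% / T 46.2% split.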
Procedia PDF Downloads 288
2968 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception
Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu
Abstract:
Opinion mining (OM) is one of the natural language processing (NLP) problems concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable control over companies’ futures, it is worth understanding social media and taking action accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM or sentiment analysis (SA) has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, which has been commonly used in computer vision (CV) problems, started to shape NLP approaches and language models (LM) lately. This gave a sudden rise to the usage of pretrained language models (PTM), which contain language representations obtained by training on large datasets using self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream-task dataset to produce efficient models for various NLP tasks such as OM, NER (named-entity recognition), question answering (QA), and so forth. In this study, traditional and modern NLP approaches were evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). MUSE is a multilingual model that supports 16 languages, including Turkish, and is based on convolutional neural networks. BERT, a monolingual model in our case, is based on transformer neural networks; it uses masked language modeling and next-sentence prediction tasks that allow bidirectional training of the transformers. During the training phase, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed that their contribution to model performance was insignificant even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11-percentage-point improvement over SVM for OM. The BERT model achieved around 94% prediction accuracy, while the MUSE model achieved around 88% and SVM around 83%. The MUSE multilingual model shows better results than SVM but still performs worse than the monolingual BERT model.
Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish
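The SVM baseline described above is straightforward to reproduce in outline; the sketch below is a toy illustration with made-up Turkish comments standing in for the 76,000-comment corpus, which is private.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the labelled Turkish comments (not the real corpus)
texts = ["harika bir ürün", "berbat hizmet", "fena değil", "çok kötü deneyim"]
labels = ["positive", "negative", "neutral", "negative"]

svm = make_pipeline(TfidfVectorizer(ngram_range=(1, 3)), LinearSVC())
svm.fit(texts, labels)
print(svm.predict(["hizmet harika"]))
```

Fine-tuning BERT replaces the vectorizer-plus-SVM pair with a pretrained encoder and a classification head trained end-to-end on the same labelled data.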
Procedia PDF Downloads 146
2967 Validation of Escherichia coli O157:H7 Inactivation on Apple-Carrot Juice Treated with Manothermosonication by Kinetic Models
Authors: Ozan Kahraman, Hao Feng
Abstract:
Several models, such as the Weibull, modified Gompertz, biphasic linear, and log-logistic models, have been proposed to describe non-linear inactivation kinetics and used to fit non-linear inactivation data for several microorganisms inactivated by heat, high-pressure processing, or pulsed electric fields. Most ultrasonic inactivation studies, by contrast, have employed first-order kinetic parameters (D-values and z-values) to describe the reduction in microbial survival counts under non-thermal processing methods such as ultrasound. This study was conducted to analyze E. coli O157:H7 inactivation data by manothermosonication (MTS) using five microbial survival models (first-order, Weibull, modified Gompertz, biphasic linear, and log-logistic). The residual sum of squares and the total sum of squares were used as criteria to evaluate the models. The statistical indices of the kinetic models were used to fit inactivation data for E. coli O157:H7 treated by MTS at three temperatures (40, 50, and 60°C) and three pressures (100, 200, and 300 kPa). Based on the statistical indices and visual observations, the Weibull and biphasic models fit the MTS data best, as shown by high R² values. The non-linear kinetic models, including the modified Gompertz, first-order, and log-logistic models, did not provide a better fit to the MTS data than the Weibull and biphasic models. The data in this study did not follow first-order kinetics, possibly because cells sensitive to ultrasound were inactivated first, producing a fast initial inactivation period, while those resistant to ultrasound were killed more slowly. The Weibull and biphasic models were found to be more flexible for describing the survival curves of E. coli O157:H7 treated by MTS in apple-carrot juice.
Keywords: Weibull, biphasic, MTS, kinetic models, E. coli O157:H7
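As an illustration of the fitting procedure, the Weibull survival model in its common form log10(N/N0) = -(t/δ)^p can be fitted by non-linear least squares, with R² computed from the residual and total sums of squares exactly as the evaluation criteria above describe. The data points below are invented for demonstration, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull(t, delta, p):
    # Weibull survival model: log10(N/N0) = -(t / delta) ** p
    return -(t / delta) ** p

# Illustrative (invented) survival data: time in minutes, log10 reductions
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
log_s = np.array([-0.4, -0.9, -1.6, -2.6, -4.1])

(delta, p), _ = curve_fit(weibull, t, log_s, p0=[1.0, 1.0])
rss = np.sum((log_s - weibull(t, delta, p)) ** 2)  # residual sum of squares
tss = np.sum((log_s - log_s.mean()) ** 2)          # total sum of squares
print(f"delta = {delta:.2f} min, p = {p:.2f}, R^2 = {1 - rss / tss:.3f}")
```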
Procedia PDF Downloads 366
2966 Digi-Buddy: A Smart Cane with Artificial Intelligence and Real-Time Assistance
Authors: Amaladhithyan Krishnamoorthy, Ruvaitha Banu
Abstract:
Vision is considered the most important human sense, without which leading a normal life can often be difficult. There are many existing smart canes for the visually impaired with obstacle detection using an ultrasonic transducer to help them navigate. Though the basic smart cane increases the safety of its users, it does not help fill the void of visual loss. This paper introduces the concept of Digi-Buddy, an evolved smart cane for the visually impaired. The cane consists of several modules. Apart from the basic obstacle detection features, Digi-Buddy assists the user by capturing video/images with a wide-angled camera and streaming them to the server, which then detects the objects using a deep convolutional neural network. In addition to determining what a particular image/object is, the distance of the object is assessed by the ultrasonic transducer. A sound generation application, modelled with the help of natural language processing, is used to convert the processed images/objects into audio. The detected object is signified by its name, which is transmitted to the user through Bluetooth earphones. The object detection is extended to facial recognition, which maps the faces of the people the user meets against a database of face images and alerts the user about the person. Another crucial function is an automatic intimation alarm, which is triggered when the user is in an emergency. If the user recovers within a set time, a button provisioned in the cane stops the alarm; otherwise, an automatic intimation about the whereabouts of the user is sent to friends and family using GPS. Beyond the safety and security provided by existing smart canes, the proposed concept, to be implemented as a prototype, helps the visually impaired visualize their surroundings through audio in a more amicable way.
Keywords: artificial intelligence, facial recognition, natural language processing, internet of things
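The ultrasonic ranging step is simple round-trip arithmetic: the echo time covers the path to the obstacle and back, so it is halved before converting to distance. The sketch below assumes an HC-SR04-style sensor; the actual hardware in Digi-Buddy is not specified in the abstract.

```python
def obstacle_distance_cm(echo_time_s, speed_of_sound_m_s=343.0):
    """Convert an ultrasonic echo time to obstacle distance in centimetres.
    The echo covers the round trip, so halve it."""
    return speed_of_sound_m_s * 100 * echo_time_s / 2

print(obstacle_distance_cm(0.0058))  # a 5.8 ms echo is roughly 1 m (~99.5 cm)
```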
Procedia PDF Downloads 355
2965 Audio-Visual Co-Data Processing Pipeline
Authors: Rita Chattopadhyay, Vivek Anand Thoutam
Abstract:
Speech is the most acceptable means of communication, through which we can quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands to computers; in the same way, it is easier to listen to audio played from a device than to extract output from computers or devices. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an 'audio-visual co-data processing pipeline'. This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of the modules mentioned above, but OpenVINO Model Zoo models are used here because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times that delimit the required interval of the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, essential frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these extracted frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. This project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model; based on user preference, one can introduce a new speech command format by including examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded using this pipeline so that one can give speech commands and the output is played from the device.
Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech
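The end-to-end flow can be summarised as a thin orchestration layer over the four models. In the sketch below, every stage function is a hypothetical stub standing in for the corresponding model call (QuartzNet ASR, GPT-3 summarisation, YOLO detection, text-to-speech); none of these stubs reflect the OpenVINO APIs themselves.

```python
def speech_to_text(audio_path):      # QuartzNet ASR stand-in
    return "find the dog between second 5 and second 20"

def summarize(text):                 # GPT-3 stand-in: extract labels + interval
    return {"labels": ["dog"], "start": 5.0, "end": 20.0}

def extract_frames(video_path, start, end):  # frame sampler stand-in
    return [{"t": 6.0, "objects": ["dog"]}, {"t": 12.0, "objects": ["cat"]}]

def detect_objects(frame):           # YOLO stand-in
    return frame["objects"]

def text_to_speech(text):            # TTS stand-in
    return f"<audio: {text}>"

def run_pipeline(audio_path, video_path):
    spec = summarize(speech_to_text(audio_path))
    frames = extract_frames(video_path, spec["start"], spec["end"])
    hits = [f["t"] for f in frames if set(detect_objects(f)) & set(spec["labels"])]
    return text_to_speech(f"Target objects found at frames {hits}")

print(run_pipeline("command.wav", "video.mp4"))
```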
Procedia PDF Downloads 80
2964 Friction Stir Processing of the AA7075T7352 Aluminum Alloy: Microstructures, Mechanical Properties and Texture Characteristics
Authors: Roopchand Tandon, Zaheer Khan Yusufzai, R. Manna, R. K. Mandal
Abstract:
The present work describes the microstructures, mechanical properties, and texture characteristics of friction stir processed AA7075T7352 aluminum alloy. Phases were analyzed with the help of X-ray diffraction (XRD) and transmission electron microscopy (TEM), along with differential scanning calorimetry (DSC). Depth-wise microstructures and dislocation characteristics of the nugget zone of the friction stir processed specimens were studied using bright-field (BF) and weak-beam dark-field (WBDF) TEM micrographs; variations in the microstructures as well as in the dislocation characteristics were the noteworthy features found. XRD analysis displays changes in the chemistry as well as the size of the phases in the nugget and heat-affected zones (HAZ), whereas the base metal (BM) microstructures remain unaffected. High-density dislocations were noticed in the nugget region of the processed specimen, along with the formation of dislocation contours and tangles. The ɳ’ and ɳ phases, along with the GP zones, were completely dissolved and trapped by the dislocations. These observations correlate with the improved mechanical as well as stress corrosion cracking (SCC) performance. Bulk texture and residual stress measurements were done with a Panalytical Empyrean MRD system using Co-Kα radiation. The nugget zone (NZ) displays compressive residual stress compared to the thermo-mechanically affected (TMAZ) and heat-affected zones (HAZ). Typical f.c.c. deformation texture components (e.g., Copper, Brass, and Goss) were seen. This is attributed to the enhanced hardening as well as the other improved mechanical performance of the alloy. Mechanical characterization was done using tensile tests and an Anton Paar instrumented microhardness tester. An enhancement in the yield strength from 89 MPa to 170 MPa is reported; the highest hardness value was reported in the nugget zone of the processed specimens.
Keywords: aluminum alloy, mechanical characterization, texture characteristics, friction stir processing
Procedia PDF Downloads 107
2963 Production of Oral Vowels by Chinese Learners of Portuguese: Problems and Didactic Implications
Authors: Adelina Castelo
Abstract:
The increasing number of learners of Portuguese as a foreign language in China justifies the need to define the phonetic profile of these learners and to design didactic materials adjusted to their specific pronunciation problems. Different aspects of this topic have been studied, but the production of oral vowels still needs to be investigated. This study aims (i) to identify the problems Chinese learners of Portuguese experience in the pronunciation of oral vowels and (ii) to discuss the didactic implications drawn from those problems. The participants were eight native speakers of Mandarin Chinese who had been learning Portuguese in college for almost a year. They named pictured objects, and their oral productions were recorded and phonetically transcribed. The selection of the objects to name took into account several linguistic variables (e.g., stress pattern, syllable structure, presence of the Portuguese oral vowels in different word positions according to stress location). The results are analysed in two ways: the impact of the linguistic variables on the success rate of vowel production, and the replacement strategies used in non-target productions. Both analyses show that the Chinese learners of Portuguese (i) have significantly more difficulties with the mid vowels as well as the high central vowel and (ii) do not master the vowel height feature. These findings contribute to defining the phonetic profile of these learners in terms of oral vowel production. Moreover, they have important didactic implications for pronunciation teaching to these specific learners. Those implications are discussed and exemplified.
Keywords: Chinese learners, learners’ phonetic profile, linguistic variables, Portuguese as foreign language, production data, pronunciation teaching, oral vowels
Procedia PDF Downloads 2232962 Detecting Hate Speech And Cyberbullying Using Natural Language Processing
Authors: Nádia Pereira, Paula Ferreira, Sofia Francisco, Sofia Oliveira, Sidclay Souza, Paula Paulino, Ana Margarida Veiga Simão
Abstract:
Social media has progressed into a platform for hate speech among its users, and thus there is an increasing need to develop automatic detection classifiers of offense and conflicts to help decrease the prevalence of such incidents. Online communication can be used to intentionally harm someone, which is why such classifiers could be essential in social networks. A possible application of these classifiers is the automatic detection of cyberbullying. Even though identifying the aggressive language used in online interactions could be important to build cyberbullying datasets, there are other criteria that must be considered. Being able to capture language that is indicative of the intent to harm others in a specific context of online interaction is fundamental. Offense and hate speech may be the foundation of online conflicts, which have become common in social media and are an emergent research focus in machine learning and natural language processing. This study presents two Portuguese-language offense-related datasets, which serve as examples for future research and extend the study of the topic. The first is similar to other offense detection datasets and is entitled the Aggressiveness dataset. The second is a novelty because it uses the history of the interaction between users, and is entitled the Conflicts/Attacks dataset. Both datasets were developed in different phases. Firstly, we performed a content analysis of verbal aggression witnessed by adolescents in situations of cyberbullying. Secondly, we computed frequency analyses from the previous phase to gather lexical and linguistic cues used to identify potentially aggressive conflicts and attacks posted on Twitter. Thirdly, thorough annotation of real tweets was performed by independent postgraduate educational psychologists with experience in cyberbullying research. Lastly, we benchmarked these datasets with machine learning classifiers.Keywords: aggression, classifiers, cyberbullying, datasets, hate speech, machine learning
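As a hedged illustration of the benchmarking step, the sketch below trains a TF-IDF plus logistic regression baseline on a toy labeled set; the abstract does not specify which classifiers the authors used, and the example texts and labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# invented Portuguese toy examples: 1 = aggressive, 0 = neutral
texts = ["voce e um idiota", "bom dia a todos",
         "ninguem gosta de ti", "o jogo foi otimo ontem"] * 10
labels = [1, 0, 1, 0] * 10

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, stratify=labels, random_state=42)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word uni- and bigrams as lexical cues
    LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```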
Procedia PDF Downloads 2282961 An Approximation of Daily Rainfall by Using a Pixel Value Data Approach
Authors: Sarisa Pinkham, Kanyarat Bussaban
Abstract:
The research aims to approximate the amount of daily rainfall by using a pixel value data approach. The daily rainfall maps from the Thailand Meteorological Department for the period from January to December 2013 were the data used in this study. The results showed that this approach can approximate the amount of daily rainfall with RMSE = 3.343.Keywords: daily rainfall, image processing, approximation, pixel value data
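The abstract does not detail how pixel values are mapped to rainfall amounts; one plausible reading, sketched below, matches each map pixel to its nearest colour in the map legend and reads off the corresponding rainfall bin before computing the RMSE against gauge observations. All legend colours and values here are assumptions:

```python
import numpy as np

# assumed map legend: each RGB colour stands for a rainfall bin in mm
legend_rgb = np.array([[200, 255, 255],   # light cyan
                       [0, 150, 255],     # blue
                       [0, 200, 0],       # green
                       [255, 200, 0],     # orange
                       [255, 0, 0]])      # red
legend_mm = np.array([0.5, 5.0, 15.0, 35.0, 70.0])

def pixel_to_rain(pixels):
    """Match (n, 3) RGB pixels to the nearest legend colour and return
    the corresponding rainfall estimates in mm."""
    d = np.linalg.norm(pixels[:, None, :].astype(float)
                       - legend_rgb[None, :, :], axis=2)
    return legend_mm[d.argmin(axis=1)]

# compare estimates at gauge locations with observed daily rainfall
station_pixels = np.array([[0, 150, 255], [0, 200, 0], [200, 255, 255]])
observed_mm = np.array([6.1, 12.4, 0.0])
estimated_mm = pixel_to_rain(station_pixels)
rmse = np.sqrt(np.mean((estimated_mm - observed_mm) ** 2))
print(f"RMSE = {rmse:.3f} mm")
```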
Procedia PDF Downloads 3872960 Classification of EEG Signals Based on Dynamic Connectivity Analysis
Authors: Zoran Šverko, Saša Vlahinić, Nino Stojković, Ivan Markovinović
Abstract:
In this article, the classification of target letters is performed using data from the EEG P300 Speller paradigm. Neural networks trained with the results of dynamic connectivity analysis between different brain regions are used for classification. Dynamic connectivity analysis is based on an adaptive window size and the imaginary part of the complex Pearson correlation coefficient. Brain dynamics are analysed using the relative intersection of confidence intervals for the imaginary component of the complex Pearson correlation coefficient method (RICI-imCPCC). The RICI-imCPCC method overcomes the shortcomings of currently used dynamic connectivity analysis methods, such as the low reliability and low temporal precision for short connectivity intervals encountered in constant sliding window analysis with a wide window size, and the high susceptibility to noise encountered in constant sliding window analysis with a narrow window size. It overcomes these shortcomings by dynamically adjusting the window size using the RICI rule, extracting information about brain connections for each time sample. Seventy percent of the extracted brain connectivity information is used for training and thirty percent for validation. Classification of the target word is also performed using the same analysis method. As far as we know, this research shows for the first time that dynamic connectivity can be used as a parameter for classifying EEG signals.Keywords: dynamic connectivity analysis, EEG, neural networks, Pearson correlation coefficients
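A minimal sketch of the imaginary part of the complex Pearson correlation coefficient on which the RICI-imCPCC measure is built, assuming analytic signals obtained via the Hilbert transform; the adaptive RICI window-growing rule itself is omitted and a fixed sliding window is used instead:

```python
import numpy as np
from scipy.signal import hilbert

def imcpcc(x, y):
    """Imaginary part of the complex Pearson correlation coefficient
    between two equal-length real signals."""
    ax, ay = hilbert(x), hilbert(y)  # analytic (complex) signals
    ax = ax - ax.mean()
    ay = ay - ay.mean()
    r = np.sum(ax * np.conj(ay)) / (
        np.sqrt(np.sum(np.abs(ax) ** 2)) * np.sqrt(np.sum(np.abs(ay) ** 2)))
    return r.imag

# toy example: two noisy 8 Hz sinusoids with a 90-degree phase lag,
# for which the imaginary component should be close to +/-1
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.randn(t.size)
y = np.cos(2 * np.pi * 8 * t) + 0.1 * np.random.randn(t.size)

# fixed sliding window; the paper instead grows the window until the
# RICI rule signals a stable estimate, a step omitted here
window = 64
values = [imcpcc(x[i:i + window], y[i:i + window])
          for i in range(0, t.size - window + 1, window // 2)]
print(np.round(values, 3))
```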
Procedia PDF Downloads 2142959 The Impact of Legislation on Waste and Losses in the Food Processing Sector in the UK/EU
Authors: David Lloyd, David Owen, Martin Jardine
Abstract:
Introduction: European weight regulations for food products require a full understanding of the regulatory guidelines to assure compliance. It is suggested that the complexity of the regulation leads to practices that result in the overfilling of food packages by processors. Purpose: To establish current practices among food processors and the financial, sustainability, and societal impacts of ineffective production practices on the food supply chain. Methods: An analysis of food packing controls with 10 companies across varying food categories, and quantitative research with a further 15 food processors on their confidence in the weight control analysis of finished packs within their organisation. Results: A process floor analysis of manufacturing operations focusing on 10 products found package overfill ranging from 4.8% to 20.2%. Standard deviation figures for all products showed potential for reducing the average pack weight while still retaining the legal status of the product. In 20% of cases an automatic weight analysis machine was in situ; however, packs were still significantly overweight. Collateral impacts included the effect of overfill on raw material purchasing and added food miles, often on a global basis, with one raw material alone incurring 10,000 extra food miles due to poor weight control at the processing unit. A case study of a meat and a bakery product will be discussed, illustrating the impact of poor controls resulting from complex legislation; the case studies will highlight extra energy costs in production and the effect of the extra weight on fuel usage. If successful, a risk assessment model used primarily for food safety but adapted to identify waste/sustainability risks will be discussed within the presentation.Keywords: legislation, overfill, profile, waste
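To make the overfill arithmetic concrete, the following back-of-envelope sketch applies the EU average-weight principle (packs must on average contain at least the nominal quantity) to an invented sample of pack weights; the figures are illustrative, not the paper's data:

```python
import statistics

# nominal quantity on the label and an invented sample of filled packs
nominal_g = 500.0
pack_weights_g = [531, 528, 540, 524, 536, 529, 533, 527]

mean_w = statistics.mean(pack_weights_g)
sd_w = statistics.stdev(pack_weights_g)
overfill_pct = 100 * (mean_w - nominal_g) / nominal_g
print(f"mean fill {mean_w:.1f} g -> overfill {overfill_pct:.1f}%")

# a lower-variance process could target nominal + 2*sd and still keep
# almost every pack at or above the nominal weight
target_w = nominal_g + 2 * sd_w
saving_pct = 100 * (mean_w - target_w) / nominal_g
print(f"possible giveaway reduction: {saving_pct:.1f}% of nominal")
```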
Procedia PDF Downloads 4062958 A Second Order Genetic Algorithm for Traveling Salesman Problem
Authors: T. Toathom, M. Munlin, P. Sugunnasil
Abstract:
The traveling salesman problem (TSP) is one of the best-known problems in optimization, and there has been much research on it. One of the most widely used tools for this problem is the genetic algorithm (GA). The chromosome of a GA for the TSP is normally encoded by the order in which cities are visited. However, this traditional encoding scheme has two limitations: the large solution space and the inability to encapsulate some information. The number of solutions grows exponentially with the number of cities. Moreover, the traditional encoding scheme fails to recognize correct relations that are merely misplaced, which implies that the traditional method focuses only on exact solutions. In this work, we relax the exactness requirement of the GA for the TSP. The proposed work exploits the relations between cities in order to reduce the solution space of the chromosome encoding. In this paper, a second order GA is proposed to solve the TSP, where 'second order' refers to how a solution is encoded into a chromosome. Chromosomes are divided into two types: high order and low order. A high order chromosome focuses on the relations between cities, such as 'city A should be visited before city B'. A low order chromosome, on the other hand, is derived from a high order chromosome; in other words, it is encoded by the traditional encoding scheme. The genetic operations, mutation and crossover, are performed on the high order chromosome. The high order chromosome is then mapped to a group of low order chromosomes whose characteristics satisfy it. From this mapped set, the champion chromosome is selected based on fitness value and later used as a representative of the high order chromosome. The experiment is performed on city data from TSPLIB.Keywords: genetic algorithm, traveling salesman problem, initial population, chromosomes encoding
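A minimal sketch of the second order encoding as described above, assuming a high order chromosome is a set of precedence relations ('city a before city b') that is decoded by rejection sampling into low order tours, from which a champion is kept; the operators and parameters here are simplified guesses at the paper's method, not its actual implementation:

```python
import math
import random

# toy coordinates; the paper uses TSPLIB instances instead
CITIES = {0: (0, 0), 1: (3, 1), 2: (1, 4), 3: (5, 2), 4: (2, 6)}

def tour_length(tour):
    return sum(math.dist(CITIES[tour[i]], CITIES[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def satisfies(tour, relations):
    pos = {city: i for i, city in enumerate(tour)}
    return all(pos[a] < pos[b] for a, b in relations)

def decode(relations, samples=50):
    """Map a high order chromosome (precedence set) to low order tours
    by rejection sampling and keep the shortest one as the champion."""
    best = None
    for _ in range(samples):
        tour = list(CITIES)
        random.shuffle(tour)
        if satisfies(tour, relations) and (
                best is None or tour_length(tour) < tour_length(best)):
            best = tour
    return best  # None if no sampled tour satisfies the relations

def mutate(relations):
    """Add one precedence relation (a simplified, hypothetical operator)."""
    rels = set(relations)
    a, b = random.sample(list(CITIES), 2)
    rels.discard((b, a))  # drop the direct contradiction, if present
    rels.add((a, b))
    return frozenset(rels)

population = [frozenset({tuple(random.sample(list(CITIES), 2))})
              for _ in range(10)]
best = None
for _ in range(20):
    scored = [(tour_length(champ), rels, champ)
              for rels in population
              for champ in [decode(rels)] if champ is not None]
    if not scored:  # every chromosome became contradictory; stop early
        break
    scored.sort(key=lambda s: s[0])
    if best is None or scored[0][0] < best[0]:
        best = (scored[0][0], scored[0][2])
    survivors = [rels for _, rels, _ in scored[:5]]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(5)]

print(f"best tour {best[1]} with length {best[0]:.2f}")
```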
Procedia PDF Downloads 2702957 Greening the Blue: Enzymatic Degradation of Commercially Important Biopolymer Dextran Using Dextranase from Bacillus Licheniformis KIBGE-IB25
Authors: Rashida Rahmat Zohra, Afsheen Aman, Shah Ali Ul Qader
Abstract:
The commercially important biopolymer dextran is enzymatically degraded into lower molecular weight fractions of vast industrial potential. Various organisms are associated with dextranase production, among which fungal, yeast, and bacterial sources are used for commercial production. Dextranases are used to remove contaminating dextran in the sugar processing industry and in oral care products for efficient removal of dental plaque. Among the hydrolytic products of dextran, isomaltooligosaccharides have a prebiotic effect in humans and reduce the cariogenic effect of sucrose in the oral cavity. Dextran derivatives produced by hydrolysis of the high molecular weight polymer are also conjugated with other chemical and metallic compounds for use in the pharmaceutical, fine chemical, cosmetics, and food industries. Owing to the vast applications of dextran and dextranases, the current study focused on the purification and analysis of the kinetic parameters of dextranase from a newly isolated strain of Bacillus licheniformis KIBGE-IB25. Dextranase was purified 35.75-fold with a specific activity of 1405 U/mg and a molecular weight of 158 kDa. Analysis of the kinetic parameters revealed that dextranase performs optimum cleavage of low molecular weight dextran (5000 Da, 0.5%) at 35ºC in 15 min at pH 4.5, with Km and Vmax values of 0.3738 mg/ml and 182.0 µmol/min, respectively. Thermal stability profiling showed that dextranase retained 80% of its activity for up to 6 hours at 30-35ºC and remained 90% active at pH 4.5. In short, the dextranase reported here performs rapid cleavage of substrate under mild operational conditions, which makes it an ideal candidate for dextran removal in the sugar processing industry and for commercial production of low molecular weight oligosaccharides.Keywords: Bacillus licheniformis, dextranase, gel permeation chromatography, enzyme purification, enzyme kinetics
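Since the reported constants follow standard Michaelis-Menten kinetics, the rate at any substrate concentration, including the 0.5% (5 mg/ml) assay condition above, can be checked directly; this is plain arithmetic on the reported Km and Vmax:

```python
Km = 0.3738    # mg/ml, reported above
Vmax = 182.0   # umol/min, reported above

def rate(s_mg_per_ml):
    """Michaelis-Menten rate: v = Vmax * S / (Km + S)."""
    return Vmax * s_mg_per_ml / (Km + s_mg_per_ml)

for s in (0.1, 0.3738, 1.0, 5.0):
    print(f"S = {s:6.4f} mg/ml -> v = {rate(s):6.1f} umol/min")
# sanity check: at S = Km the rate equals Vmax / 2 (91.0 umol/min)
```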
Procedia PDF Downloads 440